Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2022-02-09

We've added 126 non-merge commits during the last 16 day(s) which contain
a total of 201 files changed, 4049 insertions(+), 2215 deletions(-).

The main changes are:

1) Add custom BPF allocator for JITs that pack multiple programs into a huge
page to reduce iTLB pressure, from Song Liu.

2) Add __user tagging support in vmlinux BTF and utilize it from BPF
verifier when generating loads, from Yonghong Song.

3) Add per-socket fast path check guarding from cgroup/BPF overhead when
used by only some sockets, from Pavel Begunkov.

4) Continued libbpf deprecation work of APIs/features and removal of their
usage from samples, selftests, libbpf & bpftool, from Andrii Nakryiko
and various others.

5) Improve BPF instruction set documentation by adding byte swap
instructions and cleaning up load/store section, from Christoph Hellwig.

6) Switch BPF preload infra to light skeleton and remove libbpf dependency
from it, from Alexei Starovoitov.

7) Fix architecture-agnostic macros in libbpf for accessing syscall
arguments from BPF progs for non-x86 architectures,
from Ilya Leoshkevich.

9) Rework port members in struct bpf_sk_lookup and struct bpf_sock to be
   16-bit fields with anonymous zero padding, from Jakub Sitnicki.

9) Add new bpf_copy_from_user_task() helper to read memory from a different
task than current. Add ability to create sleepable BPF iterator progs,
from Kenny Yu.

10) Implement XSK batching for ice's zero-copy driver used by AF_XDP and
utilize TX batching API from XSK buffer pool, from Maciej Fijalkowski.

11) Generate temporary netns names for BPF selftests to avoid naming
collisions, from Hangbin Liu.

12) Implement bpf_core_types_are_compat() with limited recursion for
in-kernel usage, from Matteo Croce.

13) Simplify pahole version detection and finally enable CONFIG_DEBUG_INFO_DWARF5
to be selected with CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.

14) Misc minor fixes to libbpf and selftests from various folks.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (126 commits)
selftests/bpf: Cover 4-byte load from remote_port in bpf_sk_lookup
bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide
libbpf: Fix compilation warning due to mismatched printf format
selftests/bpf: Test BPF_KPROBE_SYSCALL macro
libbpf: Add BPF_KPROBE_SYSCALL macro
libbpf: Fix accessing the first syscall argument on s390
libbpf: Fix accessing the first syscall argument on arm64
libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL
selftests/bpf: Skip test_bpf_syscall_macro's syscall_arg1 on arm64 and s390
libbpf: Fix accessing syscall arguments on riscv
libbpf: Fix riscv register names
libbpf: Fix accessing syscall arguments on powerpc
selftests/bpf: Use PT_REGS_SYSCALL_REGS in bpf_syscall_macro
libbpf: Add PT_REGS_SYSCALL_REGS macro
selftests/bpf: Fix an endianness issue in bpf_syscall_macro test
bpf: Fix bpf_prog_pack build HPAGE_PMD_SIZE
bpf: Fix leftover header->pages in sparc and powerpc code.
libbpf: Fix signedness bug in btf_dump_array_data()
selftests/bpf: Do not export subtest as standalone test
bpf, x86_64: Fail gracefully on bpf_jit_binary_pack_finalize failures
...
====================

Link: https://lore.kernel.org/r/20220209210050.8425-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4043 -2209
+13
Documentation/bpf/btf.rst
··· 503 503 * ``info.vlen``: 0 504 504 * ``type``: the type with ``btf_type_tag`` attribute 505 505 506 + Currently, ``BTF_KIND_TYPE_TAG`` is only emitted for pointer types. 507 + It has the following btf type chain: 508 + :: 509 + 510 + ptr -> [type_tag]* 511 + -> [const | volatile | restrict | typedef]* 512 + -> base_type 513 + 514 + Basically, a pointer type points to zero or more 515 + type_tag, then zero or more const/volatile/restrict/typedef 516 + and finally the base type. The base type is one of 517 + int, ptr, array, struct, union, enum, func_proto and float types. 518 + 506 519 3. BTF Kernel API 507 520 ================= 508 521
+151 -64
Documentation/bpf/instruction-set.rst
··· 22 22 Instruction encoding 23 23 ==================== 24 24 25 - eBPF uses 64-bit instructions with the following encoding: 25 + eBPF has two instruction encodings: 26 + 27 + * the basic instruction encoding, which uses 64 bits to encode an instruction 28 + * the wide instruction encoding, which appends a second 64-bit immediate value 29 + (imm64) after the basic instruction for a total of 128 bits. 30 + 31 + The basic instruction encoding looks as follows: 26 32 27 33 ============= ======= =============== ==================== ============ 28 34 32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB) ··· 88 82 otherwise identical operations. 89 83 The code field encodes the operation as below: 90 84 91 - ======== ===== ========================== 85 + ======== ===== ================================================= 92 86 code value description 93 - ======== ===== ========================== 87 + ======== ===== ================================================= 94 88 BPF_ADD 0x00 dst += src 95 89 BPF_SUB 0x10 dst -= src 96 90 BPF_MUL 0x20 dst \*= src ··· 104 98 BPF_XOR 0xa0 dst ^= src 105 99 BPF_MOV 0xb0 dst = src 106 100 BPF_ARSH 0xc0 sign extending shift right 107 - BPF_END 0xd0 endianness conversion 108 - ======== ===== ========================== 101 + BPF_END 0xd0 byte swap operations (see separate section below) 102 + ======== ===== ================================================= 109 103 110 104 BPF_ADD | BPF_X | BPF_ALU means:: 111 105 ··· 122 116 BPF_XOR | BPF_K | BPF_ALU64 means:: 123 117 124 118 src_reg = src_reg ^ imm32 119 + 120 + 121 + Byte swap instructions 122 + ---------------------- 123 + 124 + The byte swap instructions use an instruction class of ``BPF_ALU`` and a 4-bit 125 + code field of ``BPF_END``. 126 + 127 + The byte swap instructions operate on the destination register 128 + only and do not use a separate source register or immediate value. 
129 + 130 + The 1-bit source operand field in the opcode is used to select what byte 131 + order the operation converts from or to: 132 + 133 + ========= ===== ================================================= 134 + source value description 135 + ========= ===== ================================================= 136 + BPF_TO_LE 0x00 convert between host byte order and little endian 137 + BPF_TO_BE 0x08 convert between host byte order and big endian 138 + ========= ===== ================================================= 139 + 140 + The imm field encodes the width of the swap operations. The following widths 141 + are supported: 16, 32 and 64. 142 + 143 + Examples: 144 + 145 + ``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16 means:: 146 + 147 + dst_reg = htole16(dst_reg) 148 + 149 + ``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 64 means:: 150 + 151 + dst_reg = htobe64(dst_reg) 152 + 153 + ``BPF_FROM_LE`` and ``BPF_FROM_BE`` exist as aliases for ``BPF_TO_LE`` and 154 + ``BPF_TO_BE`` respectively. 
125 155 126 156 127 157 Jump instructions ··· 218 176 ============= ===== ==================================== 219 177 mode modifier value description 220 178 ============= ===== ==================================== 221 - BPF_IMM 0x00 used for 64-bit mov 222 - BPF_ABS 0x20 legacy BPF packet access 223 - BPF_IND 0x40 legacy BPF packet access 224 - BPF_MEM 0x60 all normal load and store operations 179 + BPF_IMM 0x00 64-bit immediate instructions 180 + BPF_ABS 0x20 legacy BPF packet access (absolute) 181 + BPF_IND 0x40 legacy BPF packet access (indirect) 182 + BPF_MEM 0x60 regular load and store operations 225 183 BPF_ATOMIC 0xc0 atomic operations 226 184 ============= ===== ==================================== 227 185 228 - BPF_MEM | <size> | BPF_STX means:: 186 + 187 + Regular load and store operations 188 + --------------------------------- 189 + 190 + The ``BPF_MEM`` mode modifier is used to encode regular load and store 191 + instructions that transfer data between a register and memory. 192 + 193 + ``BPF_MEM | <size> | BPF_STX`` means:: 229 194 230 195 *(size *) (dst_reg + off) = src_reg 231 196 232 - BPF_MEM | <size> | BPF_ST means:: 197 + ``BPF_MEM | <size> | BPF_ST`` means:: 233 198 234 199 *(size *) (dst_reg + off) = imm32 235 200 236 - BPF_MEM | <size> | BPF_LDX means:: 201 + ``BPF_MEM | <size> | BPF_LDX`` means:: 237 202 238 203 dst_reg = *(size *) (src_reg + off) 239 204 240 - Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. 205 + Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW``. 241 206 242 207 Atomic operations 243 208 ----------------- 244 209 245 - eBPF includes atomic operations, which use the immediate field for extra 246 - encoding:: 210 + Atomic operations are operations that operate on memory and cannot be 211 + interrupted or corrupted by other accesses to the same memory region, 212 + whether by other eBPF programs or by means outside of this specification. 
247 213 248 - .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg 249 - .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg 214 + All atomic operations supported by eBPF are encoded as store operations 215 + that use the ``BPF_ATOMIC`` mode modifier as follows: 250 216 251 - The basic atomic operations supported are:: 217 + * ``BPF_ATOMIC | BPF_W | BPF_STX`` for 32-bit operations 218 + * ``BPF_ATOMIC | BPF_DW | BPF_STX`` for 64-bit operations 219 + * 8-bit and 16-bit wide atomic operations are not supported. 252 220 253 - BPF_ADD 254 - BPF_AND 255 - BPF_OR 256 - BPF_XOR 221 + The imm field is used to encode the actual atomic operation. 222 + Simple atomic operations use a subset of the values defined to encode 223 + arithmetic operations in the imm field to encode the atomic operation: 257 224 258 - Each having equivalent semantics with the ``BPF_ADD`` example, that is: the 259 - memory location addresed by ``dst_reg + off`` is atomically modified, with 260 - ``src_reg`` as the other operand. If the ``BPF_FETCH`` flag is set in the 261 - immediate, then these operations also overwrite ``src_reg`` with the 262 - value that was in memory before it was modified. 225 + ======== ===== =========== 226 + imm value description 227 + ======== ===== =========== 228 + BPF_ADD 0x00 atomic add 229 + BPF_OR 0x40 atomic or 230 + BPF_AND 0x50 atomic and 231 + BPF_XOR 0xa0 atomic xor 232 + ======== ===== =========== 263 233 264 - The more special operations are:: 265 234 266 - BPF_XCHG 235 + ``BPF_ATOMIC | BPF_W | BPF_STX`` with imm = BPF_ADD means:: 267 236 268 - This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg + 269 - off``. 
:: 237 + *(u32 *)(dst_reg + off16) += src_reg 270 238 271 - BPF_CMPXCHG 239 + ``BPF_ATOMIC | BPF_DW | BPF_STX`` with imm = BPF_ADD means:: 272 240 273 - This atomically compares the value addressed by ``dst_reg + off`` with 274 - ``R0``. If they match it is replaced with ``src_reg``. In either case, the 275 - value that was there before is zero-extended and loaded back to ``R0``. 241 + *(u64 *)(dst_reg + off16) += src_reg 276 242 277 - Note that 1 and 2 byte atomic operations are not supported. 243 + ``BPF_XADD`` is a deprecated name for ``BPF_ATOMIC | BPF_ADD``. 244 + 245 + In addition to the simple atomic operations, there is also a modifier and 246 + two complex atomic operations: 247 + 248 + =========== ================ =========================== 249 + imm value description 250 + =========== ================ =========================== 251 + BPF_FETCH 0x01 modifier: return old value 252 + BPF_XCHG 0xe0 | BPF_FETCH atomic exchange 253 + BPF_CMPXCHG 0xf0 | BPF_FETCH atomic compare and exchange 254 + =========== ================ =========================== 255 + 256 + The ``BPF_FETCH`` modifier is optional for simple atomic operations, and 257 + always set for the complex atomic operations. If the ``BPF_FETCH`` flag 258 + is set, then the operation also overwrites ``src_reg`` with the value that 259 + was in memory before it was modified. 260 + 261 + The ``BPF_XCHG`` operation atomically exchanges ``src_reg`` with the value 262 + addressed by ``dst_reg + off``. 263 + 264 + The ``BPF_CMPXCHG`` operation atomically compares the value addressed by 265 + ``dst_reg + off`` with ``R0``. If they match, the value addressed by 266 + ``dst_reg + off`` is replaced with ``src_reg``. In either case, the 267 + value that was at ``dst_reg + off`` before the operation is zero-extended 268 + and loaded back to ``R0``. 278 269 279 270 Clang can generate atomic instructions by default when ``-mcpu=v3`` is 280 271 enabled. 
If a lower version for ``-mcpu`` is set, the only atomic instruction ··· 315 240 the atomics features, while keeping a lower ``-mcpu`` version, you can use 316 241 ``-Xclang -target-feature -Xclang +alu32``. 317 242 318 - You may encounter ``BPF_XADD`` - this is a legacy name for ``BPF_ATOMIC``, 319 - referring to the exclusive-add operation encoded when the immediate field is 320 - zero. 243 + 64-bit immediate instructions 244 + ----------------------------- 321 245 322 - 16-byte instructions 323 - -------------------- 246 + Instructions with the ``BPF_IMM`` mode modifier use the wide instruction 247 + encoding for an extra imm64 value. 324 248 325 - eBPF has one 16-byte instruction: ``BPF_LD | BPF_DW | BPF_IMM`` which consists 326 - of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single 327 - instruction that loads 64-bit immediate value into a dst_reg. 249 + There is currently only one such instruction. 328 250 329 - Packet access instructions 330 - -------------------------- 251 + ``BPF_LD | BPF_DW | BPF_IMM`` means:: 331 252 332 - eBPF has two non-generic instructions: (BPF_ABS | <size> | BPF_LD) and 333 - (BPF_IND | <size> | BPF_LD) which are used to access packet data. 253 + dst_reg = imm64 334 254 335 - They had to be carried over from classic BPF to have strong performance of 336 - socket filters running in eBPF interpreter. These instructions can only 337 - be used when interpreter context is a pointer to ``struct sk_buff`` and 338 - have seven implicit operands. Register R6 is an implicit input that must 339 - contain pointer to sk_buff. Register R0 is an implicit output which contains 340 - the data fetched from the packet. Registers R1-R5 are scratch registers 341 - and must not be used to store the data across BPF_ABS | BPF_LD or 342 - BPF_IND | BPF_LD instructions. 343 255 344 - These instructions have implicit program exit condition as well. 
When 345 - eBPF program is trying to access the data beyond the packet boundary, 346 - the interpreter will abort the execution of the program. JIT compilers 347 - therefore must preserve this property. src_reg and imm32 fields are 348 - explicit inputs to these instructions. 256 + Legacy BPF Packet access instructions 257 + ------------------------------------- 349 258 350 - For example, BPF_IND | BPF_W | BPF_LD means:: 259 + eBPF has special instructions for access to packet data that have been 260 + carried over from classic BPF to retain the performance of legacy socket 261 + filters running in the eBPF interpreter. 262 + 263 + The instructions come in two forms: ``BPF_ABS | <size> | BPF_LD`` and 264 + ``BPF_IND | <size> | BPF_LD``. 265 + 266 + These instructions are used to access packet data and can only be used when 267 + the program context is a pointer to a networking packet. ``BPF_ABS`` 268 + accesses packet data at an absolute offset specified by the immediate data 269 + and ``BPF_IND`` accesses packet data at an offset that includes the value of 270 + a register in addition to the immediate data. 271 + 272 + These instructions have seven implicit operands: 273 + 274 + * Register R6 is an implicit input that must contain a pointer to a 275 + struct sk_buff. 276 + * Register R0 is an implicit output which contains the data fetched from 277 + the packet. 278 + * Registers R1-R5 are scratch registers that are clobbered after a call to 279 + ``BPF_ABS | BPF_LD`` or ``BPF_IND | BPF_LD`` instructions. 280 + 281 + These instructions have an implicit program exit condition as well. When an 282 + eBPF program is trying to access the data beyond the packet boundary, the 283 + program execution will be aborted. 
284 + 285 + ``BPF_ABS | BPF_W | BPF_LD`` means:: 286 + 287 + R0 = ntohl(*(u32 *) (((struct sk_buff *) R6)->data + imm32)) 288 + 289 + ``BPF_IND | BPF_W | BPF_LD`` means:: 351 290 352 291 R0 = ntohl(*(u32 *) (((struct sk_buff *) R6)->data + src_reg + imm32)) 353 - 354 - and R1 - R5 are clobbered.
+2
MAINTAINERS
··· 3523 3523 F: net/sched/cls_bpf.c 3524 3524 F: samples/bpf/ 3525 3525 F: scripts/bpf_doc.py 3526 + F: scripts/pahole-flags.sh 3527 + F: scripts/pahole-version.sh 3526 3528 F: tools/bpf/ 3527 3529 F: tools/lib/bpf/ 3528 3530 F: tools/testing/selftests/bpf/
+5
arch/arm64/net/bpf_jit_comp.c
··· 1143 1143 return prog; 1144 1144 } 1145 1145 1146 + bool bpf_jit_supports_kfunc_call(void) 1147 + { 1148 + return true; 1149 + } 1150 + 1146 1151 u64 bpf_jit_alloc_exec_limit(void) 1147 1152 { 1148 1153 return VMALLOC_END - VMALLOC_START;
+1 -1
arch/powerpc/net/bpf_jit_comp.c
··· 264 264 fp->jited = 1; 265 265 fp->jited_len = proglen + FUNCTION_DESCR_SIZE; 266 266 267 - bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE)); 267 + bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + bpf_hdr->size); 268 268 if (!fp->is_func || extra_pass) { 269 269 bpf_jit_binary_lock_ro(bpf_hdr); 270 270 bpf_prog_fill_jited_linfo(fp, addrs);
+1 -1
arch/sparc/net/bpf_jit_comp_64.c
··· 1599 1599 if (bpf_jit_enable > 1) 1600 1600 bpf_jit_dump(prog->len, image_size, pass, ctx.image); 1601 1601 1602 - bpf_flush_icache(header, (u8 *)header + (header->pages * PAGE_SIZE)); 1602 + bpf_flush_icache(header, (u8 *)header + header->size); 1603 1603 1604 1604 if (!prog->is_func || extra_pass) { 1605 1605 bpf_jit_binary_lock_ro(header);
+1
arch/x86/Kconfig
··· 158 158 select HAVE_ALIGNED_STRUCT_PAGE if SLUB 159 159 select HAVE_ARCH_AUDITSYSCALL 160 160 select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE 161 + select HAVE_ARCH_HUGE_VMALLOC if HAVE_ARCH_HUGE_VMAP 161 162 select HAVE_ARCH_JUMP_LABEL 162 163 select HAVE_ARCH_JUMP_LABEL_RELATIVE 163 164 select HAVE_ARCH_KASAN if X86_64
+1
arch/x86/include/asm/text-patching.h
··· 44 44 extern void *text_poke(void *addr, const void *opcode, size_t len); 45 45 extern void text_poke_sync(void); 46 46 extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len); 47 + extern void *text_poke_copy(void *addr, const void *opcode, size_t len); 47 48 extern int poke_int3_handler(struct pt_regs *regs); 48 49 extern void text_poke_bp(void *addr, const void *opcode, size_t len, const void *emulate); 49 50
+34
arch/x86/kernel/alternative.c
··· 1102 1102 return __text_poke(addr, opcode, len); 1103 1103 } 1104 1104 1105 + /** 1106 + * text_poke_copy - Copy instructions into (an unused part of) RX memory 1107 + * @addr: address to modify 1108 + * @opcode: source of the copy 1109 + * @len: length to copy, could be more than 2x PAGE_SIZE 1110 + * 1111 + * Not safe against concurrent execution; useful for JITs to dump 1112 + * new code blocks into unused regions of RX memory. Can be used in 1113 + * conjunction with synchronize_rcu_tasks() to wait for existing 1114 + * execution to quiesce after having made sure no existing functions 1115 + * pointers are live. 1116 + */ 1117 + void *text_poke_copy(void *addr, const void *opcode, size_t len) 1118 + { 1119 + unsigned long start = (unsigned long)addr; 1120 + size_t patched = 0; 1121 + 1122 + if (WARN_ON_ONCE(core_kernel_text(start))) 1123 + return NULL; 1124 + 1125 + mutex_lock(&text_mutex); 1126 + while (patched < len) { 1127 + unsigned long ptr = start + patched; 1128 + size_t s; 1129 + 1130 + s = min_t(size_t, PAGE_SIZE * 2 - offset_in_page(ptr), len - patched); 1131 + 1132 + __text_poke((void *)ptr, opcode + patched, s); 1133 + patched += s; 1134 + } 1135 + mutex_unlock(&text_mutex); 1136 + return addr; 1137 + } 1138 + 1105 1139 static void do_sync_core(void *info) 1106 1140 { 1107 1141 sync_core();
+42 -28
arch/x86/net/bpf_jit_comp.c
··· 330 330 } 331 331 332 332 static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, 333 - void *old_addr, void *new_addr, 334 - const bool text_live) 333 + void *old_addr, void *new_addr) 335 334 { 336 335 const u8 *nop_insn = x86_nops[5]; 337 336 u8 old_insn[X86_PATCH_SIZE]; ··· 364 365 goto out; 365 366 ret = 1; 366 367 if (memcmp(ip, new_insn, X86_PATCH_SIZE)) { 367 - if (text_live) 368 - text_poke_bp(ip, new_insn, X86_PATCH_SIZE, NULL); 369 - else 370 - memcpy(ip, new_insn, X86_PATCH_SIZE); 368 + text_poke_bp(ip, new_insn, X86_PATCH_SIZE, NULL); 371 369 ret = 0; 372 370 } 373 371 out: ··· 380 384 /* BPF poking in modules is not supported */ 381 385 return -EINVAL; 382 386 383 - return __bpf_arch_text_poke(ip, t, old_addr, new_addr, true); 387 + return __bpf_arch_text_poke(ip, t, old_addr, new_addr); 384 388 } 385 389 386 390 #define EMIT_LFENCE() EMIT3(0x0F, 0xAE, 0xE8) ··· 554 558 mutex_lock(&array->aux->poke_mutex); 555 559 target = array->ptrs[poke->tail_call.key]; 556 560 if (target) { 557 - /* Plain memcpy is used when image is not live yet 558 - * and still not locked as read-only. Once poke 559 - * location is active (poke->tailcall_target_stable), 560 - * any parallel bpf_arch_text_poke() might occur 561 - * still on the read-write image until we finally 562 - * locked it as read-only. Both modifications on 563 - * the given image are under text_mutex to avoid 564 - * interference. 
565 - */ 566 561 ret = __bpf_arch_text_poke(poke->tailcall_target, 567 562 BPF_MOD_JUMP, NULL, 568 563 (u8 *)target->bpf_func + 569 - poke->adj_off, false); 564 + poke->adj_off); 570 565 BUG_ON(ret < 0); 571 566 ret = __bpf_arch_text_poke(poke->tailcall_bypass, 572 567 BPF_MOD_JUMP, 573 568 (u8 *)poke->tailcall_target + 574 - X86_PATCH_SIZE, NULL, false); 569 + X86_PATCH_SIZE, NULL); 575 570 BUG_ON(ret < 0); 576 571 } 577 572 WRITE_ONCE(poke->tailcall_target_stable, true); ··· 774 787 /* emit opcode */ 775 788 switch (atomic_op) { 776 789 case BPF_ADD: 777 - case BPF_SUB: 778 790 case BPF_AND: 779 791 case BPF_OR: 780 792 case BPF_XOR: ··· 853 867 854 868 #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp))) 855 869 856 - static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, 870 + static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image, 857 871 int oldproglen, struct jit_context *ctx, bool jmp_padding) 858 872 { 859 873 bool tail_call_reachable = bpf_prog->aux->tail_call_reachable; ··· 880 894 push_callee_regs(&prog, callee_regs_used); 881 895 882 896 ilen = prog - temp; 883 - if (image) 884 - memcpy(image + proglen, temp, ilen); 897 + if (rw_image) 898 + memcpy(rw_image + proglen, temp, ilen); 885 899 proglen += ilen; 886 900 addrs[0] = proglen; 887 901 prog = temp; ··· 1310 1324 pr_err("extable->insn doesn't fit into 32-bit\n"); 1311 1325 return -EFAULT; 1312 1326 } 1327 + /* switch ex to rw buffer for writes */ 1328 + ex = (void *)rw_image + ((void *)ex - (void *)image); 1329 + 1313 1330 ex->insn = delta; 1314 1331 1315 1332 ex->data = EX_TYPE_BPF; ··· 1695 1706 pr_err("bpf_jit: fatal error\n"); 1696 1707 return -EFAULT; 1697 1708 } 1698 - memcpy(image + proglen, temp, ilen); 1709 + memcpy(rw_image + proglen, temp, ilen); 1699 1710 } 1700 1711 proglen += ilen; 1701 1712 addrs[i] = proglen; ··· 2236 2247 } 2237 2248 2238 2249 struct x64_jit_data { 2250 + struct bpf_binary_header *rw_header; 2239 2251 struct 
bpf_binary_header *header; 2240 2252 int *addrs; 2241 2253 u8 *image; ··· 2249 2259 2250 2260 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) 2251 2261 { 2262 + struct bpf_binary_header *rw_header = NULL; 2252 2263 struct bpf_binary_header *header = NULL; 2253 2264 struct bpf_prog *tmp, *orig_prog = prog; 2254 2265 struct x64_jit_data *jit_data; ··· 2258 2267 bool tmp_blinded = false; 2259 2268 bool extra_pass = false; 2260 2269 bool padding = false; 2270 + u8 *rw_image = NULL; 2261 2271 u8 *image = NULL; 2262 2272 int *addrs; 2263 2273 int pass; ··· 2294 2302 oldproglen = jit_data->proglen; 2295 2303 image = jit_data->image; 2296 2304 header = jit_data->header; 2305 + rw_header = jit_data->rw_header; 2306 + rw_image = (void *)rw_header + ((void *)image - (void *)header); 2297 2307 extra_pass = true; 2298 2308 padding = true; 2299 2309 goto skip_init_addrs; ··· 2326 2332 for (pass = 0; pass < MAX_PASSES || image; pass++) { 2327 2333 if (!padding && pass >= PADDING_PASSES) 2328 2334 padding = true; 2329 - proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding); 2335 + proglen = do_jit(prog, addrs, image, rw_image, oldproglen, &ctx, padding); 2330 2336 if (proglen <= 0) { 2331 2337 out_image: 2332 2338 image = NULL; 2333 2339 if (header) 2334 - bpf_jit_binary_free(header); 2340 + bpf_jit_binary_pack_free(header, rw_header); 2335 2341 prog = orig_prog; 2336 2342 goto out_addrs; 2337 2343 } ··· 2355 2361 sizeof(struct exception_table_entry); 2356 2362 2357 2363 /* allocate module memory for x86 insns and extable */ 2358 - header = bpf_jit_binary_alloc(roundup(proglen, align) + extable_size, 2359 - &image, align, jit_fill_hole); 2364 + header = bpf_jit_binary_pack_alloc(roundup(proglen, align) + extable_size, 2365 + &image, align, &rw_header, &rw_image, 2366 + jit_fill_hole); 2360 2367 if (!header) { 2361 2368 prog = orig_prog; 2362 2369 goto out_addrs; ··· 2373 2378 2374 2379 if (image) { 2375 2380 if (!prog->is_func || extra_pass) { 2381 + /* 2382 
+ * bpf_jit_binary_pack_finalize fails in two scenarios: 2383 + * 1) header is not pointing to proper module memory; 2384 + * 2) the arch doesn't support bpf_arch_text_copy(). 2385 + * 2386 + * Both cases are serious bugs and justify WARN_ON. 2387 + */ 2388 + if (WARN_ON(bpf_jit_binary_pack_finalize(prog, header, rw_header))) { 2389 + prog = orig_prog; 2390 + goto out_addrs; 2391 + } 2392 + 2376 2393 bpf_tail_call_direct_fixup(prog); 2377 - bpf_jit_binary_lock_ro(header); 2378 2394 } else { 2379 2395 jit_data->addrs = addrs; 2380 2396 jit_data->ctx = ctx; 2381 2397 jit_data->proglen = proglen; 2382 2398 jit_data->image = image; 2383 2399 jit_data->header = header; 2400 + jit_data->rw_header = rw_header; 2384 2401 } 2385 2402 prog->bpf_func = (void *)image; 2386 2403 prog->jited = 1; ··· 2419 2412 bool bpf_jit_supports_kfunc_call(void) 2420 2413 { 2421 2414 return true; 2415 + } 2416 + 2417 + void *bpf_arch_text_copy(void *dst, void *src, size_t len) 2418 + { 2419 + if (text_poke_copy(dst, src, len) == NULL) 2420 + return ERR_PTR(-EINVAL); 2421 + return dst; 2422 2422 }
-11
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 830 830 i40e_clean_tx_ring(tx_ring); 831 831 kfree(tx_ring->tx_bi); 832 832 tx_ring->tx_bi = NULL; 833 - kfree(tx_ring->xsk_descs); 834 - tx_ring->xsk_descs = NULL; 835 833 836 834 if (tx_ring->desc) { 837 835 dma_free_coherent(tx_ring->dev, tx_ring->size, ··· 1429 1431 if (!tx_ring->tx_bi) 1430 1432 goto err; 1431 1433 1432 - if (ring_is_xdp(tx_ring)) { 1433 - tx_ring->xsk_descs = kcalloc(I40E_MAX_NUM_DESCRIPTORS, sizeof(*tx_ring->xsk_descs), 1434 - GFP_KERNEL); 1435 - if (!tx_ring->xsk_descs) 1436 - goto err; 1437 - } 1438 - 1439 1434 u64_stats_init(&tx_ring->syncp); 1440 1435 1441 1436 /* round up to nearest 4K */ ··· 1452 1461 return 0; 1453 1462 1454 1463 err: 1455 - kfree(tx_ring->xsk_descs); 1456 - tx_ring->xsk_descs = NULL; 1457 1464 kfree(tx_ring->tx_bi); 1458 1465 tx_ring->tx_bi = NULL; 1459 1466 return -ENOMEM;
-1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
··· 392 392 u16 rx_offset; 393 393 struct xdp_rxq_info xdp_rxq; 394 394 struct xsk_buff_pool *xsk_pool; 395 - struct xdp_desc *xsk_descs; /* For storing descriptors in the AF_XDP ZC path */ 396 395 } ____cacheline_internodealigned_in_smp; 397 396 398 397 static inline bool ring_uses_build_skb(struct i40e_ring *ring)
+2 -2
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 471 471 **/ 472 472 static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget) 473 473 { 474 - struct xdp_desc *descs = xdp_ring->xsk_descs; 474 + struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs; 475 475 u32 nb_pkts, nb_processed = 0; 476 476 unsigned int total_bytes = 0; 477 477 478 - nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, descs, budget); 478 + nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget); 479 479 if (!nb_pkts) 480 480 return true; 481 481
+2
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 2803 2803 /* clone ring and setup updated count */ 2804 2804 xdp_rings[i] = *vsi->xdp_rings[i]; 2805 2805 xdp_rings[i].count = new_tx_cnt; 2806 + xdp_rings[i].next_dd = ICE_RING_QUARTER(&xdp_rings[i]) - 1; 2807 + xdp_rings[i].next_rs = ICE_RING_QUARTER(&xdp_rings[i]) - 1; 2806 2808 xdp_rings[i].desc = NULL; 2807 2809 xdp_rings[i].tx_buf = NULL; 2808 2810 err = ice_setup_tx_ring(&xdp_rings[i]);
+2 -2
drivers/net/ethernet/intel/ice/ice_main.c
··· 2495 2495 xdp_ring->reg_idx = vsi->txq_map[xdp_q_idx]; 2496 2496 xdp_ring->vsi = vsi; 2497 2497 xdp_ring->netdev = NULL; 2498 - xdp_ring->next_dd = ICE_TX_THRESH - 1; 2499 - xdp_ring->next_rs = ICE_TX_THRESH - 1; 2500 2498 xdp_ring->dev = dev; 2501 2499 xdp_ring->count = vsi->num_tx_desc; 2500 + xdp_ring->next_dd = ICE_RING_QUARTER(xdp_ring) - 1; 2501 + xdp_ring->next_rs = ICE_RING_QUARTER(xdp_ring) - 1; 2502 2502 WRITE_ONCE(vsi->xdp_rings[i], xdp_ring); 2503 2503 if (ice_setup_tx_ring(xdp_ring)) 2504 2504 goto free_xdp_rings;
+4 -2
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 173 173 174 174 tx_ring->next_to_use = 0; 175 175 tx_ring->next_to_clean = 0; 176 + tx_ring->next_dd = ICE_RING_QUARTER(tx_ring) - 1; 177 + tx_ring->next_rs = ICE_RING_QUARTER(tx_ring) - 1; 176 178 177 179 if (!tx_ring->netdev) 178 180 return; ··· 1469 1467 bool wd; 1470 1468 1471 1469 if (tx_ring->xsk_pool) 1472 - wd = ice_clean_tx_irq_zc(tx_ring, budget); 1470 + wd = ice_xmit_zc(tx_ring, ICE_DESC_UNUSED(tx_ring), budget); 1473 1471 else if (ice_ring_is_xdp(tx_ring)) 1474 1472 wd = true; 1475 1473 else ··· 1522 1520 /* Exit the polling mode, but don't re-enable interrupts if stack might 1523 1521 * poll us due to busy-polling 1524 1522 */ 1525 - if (likely(napi_complete_done(napi, work_done))) { 1523 + if (napi_complete_done(napi, work_done)) { 1526 1524 ice_net_dim(q_vector); 1527 1525 ice_enable_interrupt(q_vector); 1528 1526 } else {
+6 -4
drivers/net/ethernet/intel/ice/ice_txrx.h
··· 13 13 #define ICE_MAX_CHAINED_RX_BUFS 5 14 14 #define ICE_MAX_BUF_TXD 8 15 15 #define ICE_MIN_TX_LEN 17 16 - #define ICE_TX_THRESH 32 17 16 18 17 /* The size limit for a transmit buffer in a descriptor is (16K - 1). 19 18 * In order to align with the read requests we will align the value to ··· 109 110 #define ICE_DESC_UNUSED(R) \ 110 111 (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ 111 112 (R)->next_to_clean - (R)->next_to_use - 1) 113 + 114 + #define ICE_RING_QUARTER(R) ((R)->count >> 2) 112 115 113 116 #define ICE_TX_FLAGS_TSO BIT(0) 114 117 #define ICE_TX_FLAGS_HW_VLAN BIT(1) ··· 322 321 u16 count; /* Number of descriptors */ 323 322 u16 q_index; /* Queue number of ring */ 324 323 /* stats structs */ 324 + struct ice_txq_stats tx_stats; 325 + /* CL3 - 3rd cacheline starts here */ 325 326 struct ice_q_stats stats; 326 327 struct u64_stats_sync syncp; 327 - struct ice_txq_stats tx_stats; 328 - 329 - /* CL3 - 3rd cacheline starts here */ 330 328 struct rcu_head rcu; /* to avoid race on free */ 331 329 DECLARE_BITMAP(xps_state, ICE_TX_NBITS); /* XPS Config State */ 332 330 struct ice_channel *ch; 333 331 struct ice_ptp_tx *tx_tstamps; 334 332 spinlock_t tx_lock; 335 333 u32 txq_teid; /* Added Tx queue TEID */ 334 + /* CL4 - 4th cacheline starts here */ 335 + u16 xdp_tx_active; 336 336 #define ICE_TX_FLAGS_RING_XDP BIT(0) 337 337 u8 flags; 338 338 u8 dcb_tc; /* Traffic class of ring */
+9 -6
drivers/net/ethernet/intel/ice/ice_txrx_lib.c
··· 222 222 static void ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring) 223 223 { 224 224 unsigned int total_bytes = 0, total_pkts = 0; 225 + u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 225 226 u16 ntc = xdp_ring->next_to_clean; 226 227 struct ice_tx_desc *next_dd_desc; 227 228 u16 next_dd = xdp_ring->next_dd; ··· 234 233 cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) 235 234 return; 236 235 237 - for (i = 0; i < ICE_TX_THRESH; i++) { 236 + for (i = 0; i < tx_thresh; i++) { 238 237 tx_buf = &xdp_ring->tx_buf[ntc]; 239 238 240 239 total_bytes += tx_buf->bytecount; ··· 255 254 } 256 255 257 256 next_dd_desc->cmd_type_offset_bsz = 0; 258 - xdp_ring->next_dd = xdp_ring->next_dd + ICE_TX_THRESH; 257 + xdp_ring->next_dd = xdp_ring->next_dd + tx_thresh; 259 258 if (xdp_ring->next_dd > xdp_ring->count) 260 - xdp_ring->next_dd = ICE_TX_THRESH - 1; 259 + xdp_ring->next_dd = tx_thresh - 1; 261 260 xdp_ring->next_to_clean = ntc; 262 261 ice_update_tx_ring_stats(xdp_ring, total_pkts, total_bytes); 263 262 } ··· 270 269 */ 271 270 int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring) 272 271 { 272 + u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 273 273 u16 i = xdp_ring->next_to_use; 274 274 struct ice_tx_desc *tx_desc; 275 275 struct ice_tx_buf *tx_buf; 276 276 dma_addr_t dma; 277 277 278 - if (ICE_DESC_UNUSED(xdp_ring) < ICE_TX_THRESH) 278 + if (ICE_DESC_UNUSED(xdp_ring) < tx_thresh) 279 279 ice_clean_xdp_irq(xdp_ring); 280 280 281 281 if (!unlikely(ICE_DESC_UNUSED(xdp_ring))) { ··· 302 300 tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, 0, 303 301 size, 0); 304 302 303 + xdp_ring->xdp_tx_active++; 305 304 i++; 306 305 if (i == xdp_ring->count) { 307 306 i = 0; 308 307 tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 309 308 tx_desc->cmd_type_offset_bsz |= 310 309 cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 311 - xdp_ring->next_rs = ICE_TX_THRESH - 1; 310 + xdp_ring->next_rs = tx_thresh - 1; 312 311 } 313 312 xdp_ring->next_to_use = 
i; 314 313 ··· 317 314 tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 318 315 tx_desc->cmd_type_offset_bsz |= 319 316 cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 320 - xdp_ring->next_rs += ICE_TX_THRESH; 317 + xdp_ring->next_rs += tx_thresh; 321 318 } 322 319 323 320 return ICE_XDP_TX;
+269 -115
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 327 327 bool if_running, pool_present = !!pool; 328 328 int ret = 0, pool_failure = 0; 329 329 330 + if (!is_power_of_2(vsi->rx_rings[qid]->count) || 331 + !is_power_of_2(vsi->tx_rings[qid]->count)) { 332 + netdev_err(vsi->netdev, "Please align ring sizes to power of 2\n"); 333 + pool_failure = -EINVAL; 334 + goto failure; 335 + } 336 + 330 337 if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi); 331 338 332 339 if (if_running) { ··· 356 349 netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret); 357 350 } 358 351 352 + failure: 359 353 if (pool_failure) { 360 354 netdev_err(vsi->netdev, "Could not %sable buffer pool, error = %d\n", 361 355 pool_present ? "en" : "dis", pool_failure); ··· 367 359 } 368 360 369 361 /** 370 - * ice_alloc_rx_bufs_zc - allocate a number of Rx buffers 371 - * @rx_ring: Rx ring 362 + * ice_fill_rx_descs - pick buffers from XSK buffer pool and use it 363 + * @pool: XSK Buffer pool to pull the buffers from 364 + * @xdp: SW ring of xdp_buff that will hold the buffers 365 + * @rx_desc: Pointer to Rx descriptors that will be filled 372 366 * @count: The number of buffers to allocate 373 367 * 374 368 * This function allocates a number of Rx buffers from the fill ring 375 369 * or the internal recycle mechanism and places them on the Rx ring. 376 370 * 377 - * Returns true if all allocations were successful, false if any fail. 371 + * Note that ring wrap should be handled by caller of this function. 
372 + * 373 + * Returns the amount of allocated Rx descriptors 378 374 */ 379 - bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) 375 + static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp, 376 + union ice_32b_rx_flex_desc *rx_desc, u16 count) 380 377 { 381 - union ice_32b_rx_flex_desc *rx_desc; 382 - u16 ntu = rx_ring->next_to_use; 383 - struct xdp_buff **xdp; 384 - u32 nb_buffs, i; 385 378 dma_addr_t dma; 379 + u16 buffs; 380 + int i; 386 381 387 - rx_desc = ICE_RX_DESC(rx_ring, ntu); 388 - xdp = ice_xdp_buf(rx_ring, ntu); 389 - 390 - nb_buffs = min_t(u16, count, rx_ring->count - ntu); 391 - nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs); 392 - if (!nb_buffs) 393 - return false; 394 - 395 - i = nb_buffs; 396 - while (i--) { 382 + buffs = xsk_buff_alloc_batch(pool, xdp, count); 383 + for (i = 0; i < buffs; i++) { 397 384 dma = xsk_buff_xdp_get_dma(*xdp); 398 385 rx_desc->read.pkt_addr = cpu_to_le64(dma); 399 386 rx_desc->wb.status_error0 = 0; ··· 397 394 xdp++; 398 395 } 399 396 397 + return buffs; 398 + } 399 + 400 + /** 401 + * __ice_alloc_rx_bufs_zc - allocate a number of Rx buffers 402 + * @rx_ring: Rx ring 403 + * @count: The number of buffers to allocate 404 + * 405 + * Place the @count of descriptors onto Rx ring. Handle the ring wrap 406 + * for case where space from next_to_use up to the end of ring is less 407 + * than @count. Finally do a tail bump. 408 + * 409 + * Returns true if all allocations were successful, false if any fail. 
410 + */ 411 + static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) 412 + { 413 + union ice_32b_rx_flex_desc *rx_desc; 414 + u32 nb_buffs_extra = 0, nb_buffs; 415 + u16 ntu = rx_ring->next_to_use; 416 + u16 total_count = count; 417 + struct xdp_buff **xdp; 418 + 419 + rx_desc = ICE_RX_DESC(rx_ring, ntu); 420 + xdp = ice_xdp_buf(rx_ring, ntu); 421 + 422 + if (ntu + count >= rx_ring->count) { 423 + nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, 424 + rx_desc, 425 + rx_ring->count - ntu); 426 + rx_desc = ICE_RX_DESC(rx_ring, 0); 427 + xdp = ice_xdp_buf(rx_ring, 0); 428 + ntu = 0; 429 + count -= nb_buffs_extra; 430 + ice_release_rx_desc(rx_ring, 0); 431 + } 432 + 433 + nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count); 434 + 400 435 ntu += nb_buffs; 401 436 if (ntu == rx_ring->count) 402 437 ntu = 0; 403 438 404 - ice_release_rx_desc(rx_ring, ntu); 439 + if (rx_ring->next_to_use != ntu) 440 + ice_release_rx_desc(rx_ring, ntu); 405 441 406 - return count == nb_buffs; 442 + return total_count == (nb_buffs_extra + nb_buffs); 443 + } 444 + 445 + /** 446 + * ice_alloc_rx_bufs_zc - allocate a number of Rx buffers 447 + * @rx_ring: Rx ring 448 + * @count: The number of buffers to allocate 449 + * 450 + * Wrapper for internal allocation routine; figure out how many tail 451 + * bumps should take place based on the given threshold 452 + * 453 + * Returns true if all calls to internal alloc routine succeeded 454 + */ 455 + bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) 456 + { 457 + u16 rx_thresh = ICE_RING_QUARTER(rx_ring); 458 + u16 batched, leftover, i, tail_bumps; 459 + 460 + batched = ALIGN_DOWN(count, rx_thresh); 461 + tail_bumps = batched / rx_thresh; 462 + leftover = count & (rx_thresh - 1); 463 + 464 + for (i = 0; i < tail_bumps; i++) 465 + if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh)) 466 + return false; 467 + return __ice_alloc_rx_bufs_zc(rx_ring, leftover); 407 468 } 408 469 409 470 /** ··· 
683 616 } 684 617 685 618 /** 686 - * ice_xmit_zc - Completes AF_XDP entries, and cleans XDP entries 687 - * @xdp_ring: XDP Tx ring 688 - * @budget: max number of frames to xmit 689 - * 690 - * Returns true if cleanup/transmission is done. 691 - */ 692 - static bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, int budget) 693 - { 694 - struct ice_tx_desc *tx_desc = NULL; 695 - bool work_done = true; 696 - struct xdp_desc desc; 697 - dma_addr_t dma; 698 - 699 - while (likely(budget-- > 0)) { 700 - struct ice_tx_buf *tx_buf; 701 - 702 - if (unlikely(!ICE_DESC_UNUSED(xdp_ring))) { 703 - xdp_ring->tx_stats.tx_busy++; 704 - work_done = false; 705 - break; 706 - } 707 - 708 - tx_buf = &xdp_ring->tx_buf[xdp_ring->next_to_use]; 709 - 710 - if (!xsk_tx_peek_desc(xdp_ring->xsk_pool, &desc)) 711 - break; 712 - 713 - dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc.addr); 714 - xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, 715 - desc.len); 716 - 717 - tx_buf->bytecount = desc.len; 718 - 719 - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use); 720 - tx_desc->buf_addr = cpu_to_le64(dma); 721 - tx_desc->cmd_type_offset_bsz = 722 - ice_build_ctob(ICE_TXD_LAST_DESC_CMD, 0, desc.len, 0); 723 - 724 - xdp_ring->next_to_use++; 725 - if (xdp_ring->next_to_use == xdp_ring->count) 726 - xdp_ring->next_to_use = 0; 727 - } 728 - 729 - if (tx_desc) { 730 - ice_xdp_ring_update_tail(xdp_ring); 731 - xsk_tx_release(xdp_ring->xsk_pool); 732 - } 733 - 734 - return budget > 0 && work_done; 735 - } 736 - 737 - /** 738 619 * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer 739 620 * @xdp_ring: XDP Tx ring 740 621 * @tx_buf: Tx buffer to clean ··· 691 676 ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf) 692 677 { 693 678 xdp_return_frame((struct xdp_frame *)tx_buf->raw_buf); 679 + xdp_ring->xdp_tx_active--; 694 680 dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), 695 681 dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); 696 682 
dma_unmap_len_set(tx_buf, len, 0); 697 683 } 698 684 699 685 /** 700 - * ice_clean_tx_irq_zc - Completes AF_XDP entries, and cleans XDP entries 701 - * @xdp_ring: XDP Tx ring 702 - * @budget: NAPI budget 686 + * ice_clean_xdp_irq_zc - Reclaim resources after transmit completes on XDP ring 687 + * @xdp_ring: XDP ring to clean 688 + * @napi_budget: amount of descriptors that NAPI allows us to clean 703 689 * 704 - * Returns true if cleanup/tranmission is done. 690 + * Returns count of cleaned descriptors 705 691 */ 706 - bool ice_clean_tx_irq_zc(struct ice_tx_ring *xdp_ring, int budget) 692 + static u16 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring, int napi_budget) 707 693 { 708 - int total_packets = 0, total_bytes = 0; 709 - s16 ntc = xdp_ring->next_to_clean; 710 - struct ice_tx_desc *tx_desc; 711 - struct ice_tx_buf *tx_buf; 712 - u32 xsk_frames = 0; 713 - bool xmit_done; 714 - 715 - tx_desc = ICE_TX_DESC(xdp_ring, ntc); 716 - tx_buf = &xdp_ring->tx_buf[ntc]; 717 - ntc -= xdp_ring->count; 694 + u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 695 + int budget = napi_budget / tx_thresh; 696 + u16 next_dd = xdp_ring->next_dd; 697 + u16 ntc, cleared_dds = 0; 718 698 719 699 do { 720 - if (!(tx_desc->cmd_type_offset_bsz & 721 - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) 700 + struct ice_tx_desc *next_dd_desc; 701 + u16 desc_cnt = xdp_ring->count; 702 + struct ice_tx_buf *tx_buf; 703 + u32 xsk_frames; 704 + u16 i; 705 + 706 + next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd); 707 + if (!(next_dd_desc->cmd_type_offset_bsz & 708 + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) 722 709 break; 723 710 724 - total_bytes += tx_buf->bytecount; 725 - total_packets++; 726 - 727 - if (tx_buf->raw_buf) { 728 - ice_clean_xdp_tx_buf(xdp_ring, tx_buf); 729 - tx_buf->raw_buf = NULL; 730 - } else { 731 - xsk_frames++; 711 + cleared_dds++; 712 + xsk_frames = 0; 713 + if (likely(!xdp_ring->xdp_tx_active)) { 714 + xsk_frames = tx_thresh; 715 + goto skip; 732 716 } 733 717 734 - 
tx_desc->cmd_type_offset_bsz = 0; 735 - tx_buf++; 736 - tx_desc++; 737 - ntc++; 718 + ntc = xdp_ring->next_to_clean; 738 719 739 - if (unlikely(!ntc)) { 740 - ntc -= xdp_ring->count; 741 - tx_buf = xdp_ring->tx_buf; 742 - tx_desc = ICE_TX_DESC(xdp_ring, 0); 720 + for (i = 0; i < tx_thresh; i++) { 721 + tx_buf = &xdp_ring->tx_buf[ntc]; 722 + 723 + if (tx_buf->raw_buf) { 724 + ice_clean_xdp_tx_buf(xdp_ring, tx_buf); 725 + tx_buf->raw_buf = NULL; 726 + } else { 727 + xsk_frames++; 728 + } 729 + 730 + ntc++; 731 + if (ntc >= xdp_ring->count) 732 + ntc = 0; 743 733 } 734 + skip: 735 + xdp_ring->next_to_clean += tx_thresh; 736 + if (xdp_ring->next_to_clean >= desc_cnt) 737 + xdp_ring->next_to_clean -= desc_cnt; 738 + if (xsk_frames) 739 + xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); 740 + next_dd_desc->cmd_type_offset_bsz = 0; 741 + next_dd = next_dd + tx_thresh; 742 + if (next_dd >= desc_cnt) 743 + next_dd = tx_thresh - 1; 744 + } while (budget--); 744 745 745 - prefetch(tx_desc); 746 + xdp_ring->next_dd = next_dd; 746 747 747 - } while (likely(--budget)); 748 + return cleared_dds * tx_thresh; 749 + } 748 750 749 - ntc += xdp_ring->count; 750 - xdp_ring->next_to_clean = ntc; 751 + /** 752 + * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor 753 + * @xdp_ring: XDP ring to produce the HW Tx descriptor on 754 + * @desc: AF_XDP descriptor to pull the DMA address and length from 755 + * @total_bytes: bytes accumulator that will be used for stats update 756 + */ 757 + static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc, 758 + unsigned int *total_bytes) 759 + { 760 + struct ice_tx_desc *tx_desc; 761 + dma_addr_t dma; 751 762 752 - if (xsk_frames) 753 - xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); 763 + dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc->addr); 764 + xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, desc->len); 765 + 766 + tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use++); 767 + 
tx_desc->buf_addr = cpu_to_le64(dma); 768 + tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, 769 + 0, desc->len, 0); 770 + 771 + *total_bytes += desc->len; 772 + } 773 + 774 + /** 775 + * ice_xmit_pkt_batch - produce a batch of HW Tx descriptors out of AF_XDP descriptors 776 + * @xdp_ring: XDP ring to produce the HW Tx descriptors on 777 + * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from 778 + * @total_bytes: bytes accumulator that will be used for stats update 779 + */ 780 + static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs, 781 + unsigned int *total_bytes) 782 + { 783 + u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 784 + u16 ntu = xdp_ring->next_to_use; 785 + struct ice_tx_desc *tx_desc; 786 + u32 i; 787 + 788 + loop_unrolled_for(i = 0; i < PKTS_PER_BATCH; i++) { 789 + dma_addr_t dma; 790 + 791 + dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, descs[i].addr); 792 + xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, descs[i].len); 793 + 794 + tx_desc = ICE_TX_DESC(xdp_ring, ntu++); 795 + tx_desc->buf_addr = cpu_to_le64(dma); 796 + tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, 797 + 0, descs[i].len, 0); 798 + 799 + *total_bytes += descs[i].len; 800 + } 801 + 802 + xdp_ring->next_to_use = ntu; 803 + 804 + if (xdp_ring->next_to_use > xdp_ring->next_rs) { 805 + tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 806 + tx_desc->cmd_type_offset_bsz |= 807 + cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 808 + xdp_ring->next_rs += tx_thresh; 809 + } 810 + } 811 + 812 + /** 813 + * ice_fill_tx_hw_ring - produce the number of Tx descriptors onto ring 814 + * @xdp_ring: XDP ring to produce the HW Tx descriptors on 815 + * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from 816 + * @nb_pkts: count of packets to be send 817 + * @total_bytes: bytes accumulator that will be used for stats update 818 + */ 819 + static void ice_fill_tx_hw_ring(struct 
ice_tx_ring *xdp_ring, struct xdp_desc *descs, 820 + u32 nb_pkts, unsigned int *total_bytes) 821 + { 822 + u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 823 + u32 batched, leftover, i; 824 + 825 + batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH); 826 + leftover = nb_pkts & (PKTS_PER_BATCH - 1); 827 + for (i = 0; i < batched; i += PKTS_PER_BATCH) 828 + ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes); 829 + for (; i < batched + leftover; i++) 830 + ice_xmit_pkt(xdp_ring, &descs[i], total_bytes); 831 + 832 + if (xdp_ring->next_to_use > xdp_ring->next_rs) { 833 + struct ice_tx_desc *tx_desc; 834 + 835 + tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 836 + tx_desc->cmd_type_offset_bsz |= 837 + cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 838 + xdp_ring->next_rs += tx_thresh; 839 + } 840 + } 841 + 842 + /** 843 + * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring 844 + * @xdp_ring: XDP ring to produce the HW Tx descriptors on 845 + * @budget: number of free descriptors on HW Tx ring that can be used 846 + * @napi_budget: amount of descriptors that NAPI allows us to clean 847 + * 848 + * Returns true if there is no more work that needs to be done, false otherwise 849 + */ 850 + bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget) 851 + { 852 + struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs; 853 + u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 854 + u32 nb_pkts, nb_processed = 0; 855 + unsigned int total_bytes = 0; 856 + 857 + if (budget < tx_thresh) 858 + budget += ice_clean_xdp_irq_zc(xdp_ring, napi_budget); 859 + 860 + nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget); 861 + if (!nb_pkts) 862 + return true; 863 + 864 + if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) { 865 + struct ice_tx_desc *tx_desc; 866 + 867 + nb_processed = xdp_ring->count - xdp_ring->next_to_use; 868 + ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes); 869 + tx_desc = ICE_TX_DESC(xdp_ring, 
xdp_ring->next_rs); 870 + tx_desc->cmd_type_offset_bsz |= 871 + cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 872 + xdp_ring->next_rs = tx_thresh - 1; 873 + xdp_ring->next_to_use = 0; 874 + } 875 + 876 + ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed, 877 + &total_bytes); 878 + 879 + ice_xdp_ring_update_tail(xdp_ring); 880 + ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes); 754 881 755 882 if (xsk_uses_need_wakeup(xdp_ring->xsk_pool)) 756 883 xsk_set_tx_need_wakeup(xdp_ring->xsk_pool); 757 884 758 - ice_update_tx_ring_stats(xdp_ring, total_packets, total_bytes); 759 - xmit_done = ice_xmit_zc(xdp_ring, ICE_DFLT_IRQ_WORK); 760 - 761 - return budget > 0 && xmit_done; 885 + return nb_pkts < budget; 762 886 } 763 887 764 888 /**
+19 -8
drivers/net/ethernet/intel/ice/ice_xsk.h
··· 6 6 #include "ice_txrx.h" 7 7 #include "ice.h" 8 8 9 + #define PKTS_PER_BATCH 8 10 + 11 + #ifdef __clang__ 12 + #define loop_unrolled_for _Pragma("clang loop unroll_count(8)") for 13 + #elif __GNUC__ >= 4 14 + #define loop_unrolled_for _Pragma("GCC unroll 8") for 15 + #else 16 + #define loop_unrolled_for for 17 + #endif 18 + 9 19 struct ice_vsi; 10 20 11 21 #ifdef CONFIG_XDP_SOCKETS 12 22 int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, 13 23 u16 qid); 14 24 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget); 15 - bool ice_clean_tx_irq_zc(struct ice_tx_ring *xdp_ring, int budget); 16 25 int ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags); 17 26 bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count); 18 27 bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi); 19 28 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring); 20 29 void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring); 30 + bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget); 21 31 #else 32 + static inline bool 33 + ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring, 34 + u32 __always_unused budget, 35 + int __always_unused napi_budget) 36 + { 37 + return false; 38 + } 39 + 22 40 static inline int 23 41 ice_xsk_pool_setup(struct ice_vsi __always_unused *vsi, 24 42 struct xsk_buff_pool __always_unused *pool, ··· 50 32 int __always_unused budget) 51 33 { 52 34 return 0; 53 - } 54 - 55 - static inline bool 56 - ice_clean_tx_irq_zc(struct ice_tx_ring __always_unused *xdp_ring, 57 - int __always_unused budget) 58 - { 59 - return false; 60 35 } 61 36 62 37 static inline bool
+20 -4
include/linux/bpf-cgroup.h
··· 8 8 #include <linux/jump_label.h> 9 9 #include <linux/percpu.h> 10 10 #include <linux/rbtree.h> 11 + #include <net/sock.h> 11 12 #include <uapi/linux/bpf.h> 12 13 13 14 struct sock; ··· 166 165 int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key, 167 166 void *value, u64 flags); 168 167 168 + /* Opportunistic check to see whether we have any BPF program attached*/ 169 + static inline bool cgroup_bpf_sock_enabled(struct sock *sk, 170 + enum cgroup_bpf_attach_type type) 171 + { 172 + struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data); 173 + struct bpf_prog_array *array; 174 + 175 + array = rcu_access_pointer(cgrp->bpf.effective[type]); 176 + return array != &bpf_empty_prog_array.hdr; 177 + } 178 + 169 179 /* Wrappers for __cgroup_bpf_run_filter_skb() guarded by cgroup_bpf_enabled. */ 170 180 #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb) \ 171 181 ({ \ 172 182 int __ret = 0; \ 173 - if (cgroup_bpf_enabled(CGROUP_INET_INGRESS)) \ 183 + if (cgroup_bpf_enabled(CGROUP_INET_INGRESS) && \ 184 + cgroup_bpf_sock_enabled(sk, CGROUP_INET_INGRESS)) \ 174 185 __ret = __cgroup_bpf_run_filter_skb(sk, skb, \ 175 186 CGROUP_INET_INGRESS); \ 176 187 \ ··· 194 181 int __ret = 0; \ 195 182 if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk && sk == skb->sk) { \ 196 183 typeof(sk) __sk = sk_to_full_sk(sk); \ 197 - if (sk_fullsock(__sk)) \ 184 + if (sk_fullsock(__sk) && \ 185 + cgroup_bpf_sock_enabled(__sk, CGROUP_INET_EGRESS)) \ 198 186 __ret = __cgroup_bpf_run_filter_skb(__sk, skb, \ 199 187 CGROUP_INET_EGRESS); \ 200 188 } \ ··· 361 347 kernel_optval) \ 362 348 ({ \ 363 349 int __ret = 0; \ 364 - if (cgroup_bpf_enabled(CGROUP_SETSOCKOPT)) \ 350 + if (cgroup_bpf_enabled(CGROUP_SETSOCKOPT) && \ 351 + cgroup_bpf_sock_enabled(sock, CGROUP_SETSOCKOPT)) \ 365 352 __ret = __cgroup_bpf_run_filter_setsockopt(sock, level, \ 366 353 optname, optval, \ 367 354 optlen, \ ··· 382 367 max_optlen, retval) \ 383 368 ({ \ 384 369 int __ret = retval; \ 385 - if 
(cgroup_bpf_enabled(CGROUP_GETSOCKOPT)) \ 370 + if (cgroup_bpf_enabled(CGROUP_GETSOCKOPT) && \ 371 + cgroup_bpf_sock_enabled(sock, CGROUP_GETSOCKOPT)) \ 386 372 if (!(sock)->sk_prot->bpf_bypass_getsockopt || \ 387 373 !INDIRECT_CALL_INET_1((sock)->sk_prot->bpf_bypass_getsockopt, \ 388 374 tcp_bpf_bypass_getsockopt, \
+25 -15
include/linux/bpf.h
··· 332 332 */ 333 333 MEM_ALLOC = BIT(2 + BPF_BASE_TYPE_BITS), 334 334 335 - __BPF_TYPE_LAST_FLAG = MEM_ALLOC, 335 + /* MEM is in user address space. */ 336 + MEM_USER = BIT(3 + BPF_BASE_TYPE_BITS), 337 + 338 + __BPF_TYPE_LAST_FLAG = MEM_USER, 336 339 }; 337 340 338 341 /* Max number of base types. */ ··· 591 588 const struct btf *btf, 592 589 const struct btf_type *t, int off, int size, 593 590 enum bpf_access_type atype, 594 - u32 *next_btf_id); 591 + u32 *next_btf_id, enum bpf_type_flag *flag); 595 592 }; 596 593 597 594 struct bpf_prog_offload_ops { ··· 846 843 void bpf_image_ksym_del(struct bpf_ksym *ksym); 847 844 void bpf_ksym_add(struct bpf_ksym *ksym); 848 845 void bpf_ksym_del(struct bpf_ksym *ksym); 849 - int bpf_jit_charge_modmem(u32 pages); 850 - void bpf_jit_uncharge_modmem(u32 pages); 846 + int bpf_jit_charge_modmem(u32 size); 847 + void bpf_jit_uncharge_modmem(u32 size); 851 848 bool bpf_prog_has_trampoline(const struct bpf_prog *prog); 852 849 #else 853 850 static inline int bpf_trampoline_link_prog(struct bpf_prog *prog, ··· 953 950 bool sleepable; 954 951 bool tail_call_reachable; 955 952 bool xdp_has_frags; 953 + bool use_bpf_prog_pack; 956 954 struct hlist_node tramp_hlist; 957 955 /* BTF_KIND_FUNC_PROTO for valid attach_btf_id */ 958 956 const struct btf_type *attach_func_proto; ··· 1236 1232 struct rcu_head rcu; 1237 1233 struct bpf_prog_array_item items[]; 1238 1234 }; 1235 + 1236 + struct bpf_empty_prog_array { 1237 + struct bpf_prog_array hdr; 1238 + struct bpf_prog *null_prog; 1239 + }; 1240 + 1241 + /* to avoid allocating empty bpf_prog_array for cgroups that 1242 + * don't have bpf program attached use one global 'bpf_empty_prog_array' 1243 + * It will not be modified the caller of bpf_prog_array_alloc() 1244 + * (since caller requested prog_cnt == 0) 1245 + * that pointer should be 'freed' by bpf_prog_array_free() 1246 + */ 1247 + extern struct bpf_empty_prog_array bpf_empty_prog_array; 1239 1248 1240 1249 struct bpf_prog_array 
*bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags); 1241 1250 void bpf_prog_array_free(struct bpf_prog_array *progs); ··· 1784 1767 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf, 1785 1768 const struct btf_type *t, int off, int size, 1786 1769 enum bpf_access_type atype, 1787 - u32 *next_btf_id); 1770 + u32 *next_btf_id, enum bpf_type_flag *flag); 1788 1771 bool btf_struct_ids_match(struct bpf_verifier_log *log, 1789 1772 const struct btf *btf, u32 id, int off, 1790 1773 const struct btf *need_btf, u32 need_type_id); ··· 1892 1875 return -EOPNOTSUPP; 1893 1876 } 1894 1877 1895 - static inline bool dev_map_can_have_prog(struct bpf_map *map) 1896 - { 1897 - return false; 1898 - } 1899 - 1900 1878 static inline void __dev_flush(void) 1901 1879 { 1902 1880 } ··· 1953 1941 struct sk_buff *skb) 1954 1942 { 1955 1943 return -EOPNOTSUPP; 1956 - } 1957 - 1958 - static inline bool cpu_map_prog_allowed(struct bpf_map *map) 1959 - { 1960 - return false; 1961 1944 } 1962 1945 1963 1946 static inline struct bpf_prog *bpf_prog_get_type_path(const char *name, ··· 2250 2243 extern const struct bpf_func_proto bpf_find_vma_proto; 2251 2244 extern const struct bpf_func_proto bpf_loop_proto; 2252 2245 extern const struct bpf_func_proto bpf_strncmp_proto; 2246 + extern const struct bpf_func_proto bpf_copy_from_user_task_proto; 2253 2247 2254 2248 const struct bpf_func_proto *tracing_prog_func_proto( 2255 2249 enum bpf_func_id func_id, const struct bpf_prog *prog); ··· 2362 2354 2363 2355 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, 2364 2356 void *addr1, void *addr2); 2357 + 2358 + void *bpf_arch_text_copy(void *dst, void *src, size_t len); 2365 2359 2366 2360 struct btf_id_set; 2367 2361 bool btf_id_set_contains(const struct btf_id_set *set, u32 id);
+10
include/linux/btf.h
··· 238 238 return BTF_INFO_KIND(t->info) == BTF_KIND_VAR; 239 239 } 240 240 241 + static inline bool btf_type_is_type_tag(const struct btf_type *t) 242 + { 243 + return BTF_INFO_KIND(t->info) == BTF_KIND_TYPE_TAG; 244 + } 245 + 241 246 /* union is only a special case of struct: 242 247 * all its offsetof(member) == 0 243 248 */ ··· 325 320 const struct btf_type *t) 326 321 { 327 322 return (const struct btf_var_secinfo *)(t + 1); 323 + } 324 + 325 + static inline struct btf_param *btf_params(const struct btf_type *t) 326 + { 327 + return (struct btf_param *)(t + 1); 328 328 } 329 329 330 330 #ifdef CONFIG_BPF_SYSCALL
+3
include/linux/compiler_types.h
··· 31 31 # define __kernel 32 32 # ifdef STRUCTLEAK_PLUGIN 33 33 # define __user __attribute__((user)) 34 + # elif defined(CONFIG_DEBUG_INFO_BTF) && defined(CONFIG_PAHOLE_HAS_BTF_TAG) && \ 35 + __has_attribute(btf_type_tag) 36 + # define __user __attribute__((btf_type_tag("user"))) 34 37 # else 35 38 # define __user 36 39 # endif
+15 -12
include/linux/filter.h
··· 548 548 #define BPF_IMAGE_ALIGNMENT 8 549 549 550 550 struct bpf_binary_header { 551 - u32 pages; 551 + u32 size; 552 552 u8 image[] __aligned(BPF_IMAGE_ALIGNMENT); 553 553 }; 554 554 ··· 886 886 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr) 887 887 { 888 888 set_vm_flush_reset_perms(hdr); 889 - set_memory_ro((unsigned long)hdr, hdr->pages); 890 - set_memory_x((unsigned long)hdr, hdr->pages); 891 - } 892 - 893 - static inline struct bpf_binary_header * 894 - bpf_jit_binary_hdr(const struct bpf_prog *fp) 895 - { 896 - unsigned long real_start = (unsigned long)fp->bpf_func; 897 - unsigned long addr = real_start & PAGE_MASK; 898 - 899 - return (void *)addr; 889 + set_memory_ro((unsigned long)hdr, hdr->size >> PAGE_SHIFT); 890 + set_memory_x((unsigned long)hdr, hdr->size >> PAGE_SHIFT); 900 891 } 901 892 902 893 int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap); ··· 1058 1067 void *bpf_jit_alloc_exec(unsigned long size); 1059 1068 void bpf_jit_free_exec(void *addr); 1060 1069 void bpf_jit_free(struct bpf_prog *fp); 1070 + 1071 + struct bpf_binary_header * 1072 + bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **ro_image, 1073 + unsigned int alignment, 1074 + struct bpf_binary_header **rw_hdr, 1075 + u8 **rw_image, 1076 + bpf_jit_fill_hole_t bpf_fill_ill_insns); 1077 + int bpf_jit_binary_pack_finalize(struct bpf_prog *prog, 1078 + struct bpf_binary_header *ro_header, 1079 + struct bpf_binary_header *rw_header); 1080 + void bpf_jit_binary_pack_free(struct bpf_binary_header *ro_header, 1081 + struct bpf_binary_header *rw_header); 1061 1082 1062 1083 int bpf_jit_add_poke_descriptor(struct bpf_prog *prog, 1063 1084 struct bpf_jit_poke_descriptor *poke);
-5
include/linux/skmsg.h
··· 170 170 #define sk_msg_iter_next(msg, which) \ 171 171 sk_msg_iter_var_next(msg->sg.which) 172 172 173 - static inline void sk_msg_clear_meta(struct sk_msg *msg) 174 - { 175 - memset(&msg->sg, 0, offsetofend(struct sk_msg_sg, copy)); 176 - } 177 - 178 173 static inline void sk_msg_init(struct sk_msg *msg) 179 174 { 180 175 BUILD_BUG_ON(ARRAY_SIZE(msg->sg.data) - 1 != NR_MSG_FRAG_IDS);
+2 -3
include/net/xdp_sock_drv.h
··· 13 13 14 14 void xsk_tx_completed(struct xsk_buff_pool *pool, u32 nb_entries); 15 15 bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc); 16 - u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *desc, u32 max); 16 + u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max); 17 17 void xsk_tx_release(struct xsk_buff_pool *pool); 18 18 struct xsk_buff_pool *xsk_get_pool_from_qid(struct net_device *dev, 19 19 u16 queue_id); ··· 142 142 return false; 143 143 } 144 144 145 - static inline u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *desc, 146 - u32 max) 145 + static inline u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max) 147 146 { 148 147 return 0; 149 148 }
+1
include/net/xsk_buff_pool.h
··· 60 60 */ 61 61 dma_addr_t *dma_pages; 62 62 struct xdp_buff_xsk *heads; 63 + struct xdp_desc *tx_descs; 63 64 u64 chunk_mask; 64 65 u64 addrs_cnt; 65 66 u32 free_list_cnt;
+15 -2
include/uapi/linux/bpf.h
··· 5076 5076 * associated to *xdp_md*, at *offset*. 5077 5077 * Return 5078 5078 * 0 on success, or a negative error in case of failure. 5079 + * 5080 + * long bpf_copy_from_user_task(void *dst, u32 size, const void *user_ptr, struct task_struct *tsk, u64 flags) 5081 + * Description 5082 + * Read *size* bytes from user space address *user_ptr* in *tsk*'s 5083 + * address space, and stores the data in *dst*. *flags* is not 5084 + * used yet and is provided for future extensibility. This helper 5085 + * can only be used by sleepable programs. 5086 + * Return 5087 + * 0 on success, or a negative error in case of failure. On error 5088 + * *dst* buffer is zeroed out. 5079 5089 */ 5080 5090 #define __BPF_FUNC_MAPPER(FN) \ 5081 5091 FN(unspec), \ ··· 5279 5269 FN(xdp_get_buff_len), \ 5280 5270 FN(xdp_load_bytes), \ 5281 5271 FN(xdp_store_bytes), \ 5272 + FN(copy_from_user_task), \ 5282 5273 /* */ 5283 5274 5284 5275 /* integer value in 'imm' field of BPF_CALL instruction selects which helper ··· 5574 5563 __u32 src_ip4; 5575 5564 __u32 src_ip6[4]; 5576 5565 __u32 src_port; /* host byte order */ 5577 - __u32 dst_port; /* network byte order */ 5566 + __be16 dst_port; /* network byte order */ 5567 + __u16 :16; /* zero padding */ 5578 5568 __u32 dst_ip4; 5579 5569 __u32 dst_ip6[4]; 5580 5570 __u32 state; ··· 6453 6441 __u32 protocol; /* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */ 6454 6442 __u32 remote_ip4; /* Network byte order */ 6455 6443 __u32 remote_ip6[4]; /* Network byte order */ 6456 - __u32 remote_port; /* Network byte order */ 6444 + __be16 remote_port; /* Network byte order */ 6445 + __u16 :16; /* Zero padding */ 6457 6446 __u32 local_ip4; /* Network byte order */ 6458 6447 __u32 local_ip6[4]; /* Network byte order */ 6459 6448 __u32 local_port; /* Host byte order */
+4
init/Kconfig
··· 86 86 config CC_HAS_NO_PROFILE_FN_ATTR 87 87 def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror) 88 88 89 + config PAHOLE_VERSION 90 + int 91 + default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE)) 92 + 89 93 config CONSTRUCTORS 90 94 bool 91 95
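`CONFIG_PAHOLE_VERSION` records pahole's version as a plain integer so Kconfig and Makefiles can compare it numerically (e.g. `>= 119`). A hedged sketch of that encoding, assuming the usual `major.two-digit-minor` scheme that `scripts/pahole-version.sh` flattens (`parse_pahole_version` is an illustrative helper, not the actual script):

```c
#include <stdio.h>

/* Turn "v1.23" (or "1.23") into 123, mimicking the numeric encoding used
 * by scripts/pahole-version.sh. Returns 0 on parse failure. */
static int parse_pahole_version(const char *s)
{
	int major, minor;

	if (sscanf(s, "v%d.%d", &major, &minor) != 2 &&
	    sscanf(s, "%d.%d", &major, &minor) != 2)
		return 0;
	return major * 100 + minor;
}
```

Having the value in Kconfig (rather than recomputing it in every Makefile) lets BTF-related options depend on a minimum pahole version directly.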
+15 -5
kernel/bpf/bpf_iter.c
··· 5 5 #include <linux/anon_inodes.h> 6 6 #include <linux/filter.h> 7 7 #include <linux/bpf.h> 8 + #include <linux/rcupdate_trace.h> 8 9 9 10 struct bpf_iter_target_info { 10 11 struct list_head list; ··· 685 684 { 686 685 int ret; 687 686 688 - rcu_read_lock(); 689 - migrate_disable(); 690 - ret = bpf_prog_run(prog, ctx); 691 - migrate_enable(); 692 - rcu_read_unlock(); 687 + if (prog->aux->sleepable) { 688 + rcu_read_lock_trace(); 689 + migrate_disable(); 690 + might_fault(); 691 + ret = bpf_prog_run(prog, ctx); 692 + migrate_enable(); 693 + rcu_read_unlock_trace(); 694 + } else { 695 + rcu_read_lock(); 696 + migrate_disable(); 697 + ret = bpf_prog_run(prog, ctx); 698 + migrate_enable(); 699 + rcu_read_unlock(); 700 + } 693 701 694 702 /* bpf program can only return 0 or 1: 695 703 * 0 : okay
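The run path above now picks an RCU flavor by program type: sleepable iterator programs, which may fault while copying user memory, run under the tasks-trace RCU reader lock, while non-sleepable ones keep classic `rcu_read_lock()`. A stubbed sketch of just that dispatch (the lock functions below are counters standing in for the kernel primitives, and the return value only reports which flavor protected the run):

```c
#include <stdbool.h>

static int trace_lock_cnt, classic_lock_cnt;

static void rcu_read_lock_trace_stub(void)   { trace_lock_cnt++; }
static void rcu_read_unlock_trace_stub(void) { trace_lock_cnt--; }
static void rcu_read_lock_stub(void)         { classic_lock_cnt++; }
static void rcu_read_unlock_stub(void)       { classic_lock_cnt--; }

/* Returns 1 if the "program" ran under trace RCU, 0 under classic RCU. */
static int run_prog(bool sleepable)
{
	int flavor;

	if (sleepable) {
		rcu_read_lock_trace_stub();
		flavor = trace_lock_cnt > 0 ? 1 : -1;  /* prog runs here */
		rcu_read_unlock_trace_stub();
	} else {
		rcu_read_lock_stub();
		flavor = classic_lock_cnt > 0 ? 0 : -1;  /* prog runs here */
		rcu_read_unlock_stub();
	}
	return flavor;
}
```

In the kernel version, the sleepable branch additionally calls `might_fault()` to document (and, with lock debugging, assert) that the program may sleep.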
+166 -17
kernel/bpf/btf.c
··· 419 419 static int btf_resolve(struct btf_verifier_env *env, 420 420 const struct btf_type *t, u32 type_id); 421 421 422 + static int btf_func_check(struct btf_verifier_env *env, 423 + const struct btf_type *t); 424 + 422 425 static bool btf_type_is_modifier(const struct btf_type *t) 423 426 { 424 427 /* Some of them is not strictly a C modifier ··· 598 595 btf_type_is_struct(t) || 599 596 btf_type_is_array(t) || 600 597 btf_type_is_var(t) || 598 + btf_type_is_func(t) || 601 599 btf_type_is_decl_tag(t) || 602 600 btf_type_is_datasec(t); 603 601 } ··· 3575 3571 return 0; 3576 3572 } 3577 3573 3574 + static int btf_func_resolve(struct btf_verifier_env *env, 3575 + const struct resolve_vertex *v) 3576 + { 3577 + const struct btf_type *t = v->t; 3578 + u32 next_type_id = t->type; 3579 + int err; 3580 + 3581 + err = btf_func_check(env, t); 3582 + if (err) 3583 + return err; 3584 + 3585 + env_stack_pop_resolved(env, next_type_id, 0); 3586 + return 0; 3587 + } 3588 + 3578 3589 static struct btf_kind_operations func_ops = { 3579 3590 .check_meta = btf_func_check_meta, 3580 - .resolve = btf_df_resolve, 3591 + .resolve = btf_func_resolve, 3581 3592 .check_member = btf_df_check_member, 3582 3593 .check_kflag_member = btf_df_check_kflag_member, 3583 3594 .log_details = btf_ref_type_log, ··· 4213 4194 return !btf_resolved_type_id(btf, type_id) && 4214 4195 !btf_resolved_type_size(btf, type_id); 4215 4196 4216 - if (btf_type_is_decl_tag(t)) 4197 + if (btf_type_is_decl_tag(t) || btf_type_is_func(t)) 4217 4198 return btf_resolved_type_id(btf, type_id) && 4218 4199 !btf_resolved_type_size(btf, type_id); 4219 4200 ··· 4300 4281 4301 4282 if (btf_type_is_func_proto(t)) { 4302 4283 err = btf_func_proto_check(env, t); 4303 - if (err) 4304 - return err; 4305 - } 4306 - 4307 - if (btf_type_is_func(t)) { 4308 - err = btf_func_check(env, t); 4309 4284 if (err) 4310 4285 return err; 4311 4286 } ··· 4899 4886 const char *tname = prog->aux->attach_func_name; 4900 4887 struct 
bpf_verifier_log *log = info->log; 4901 4888 const struct btf_param *args; 4889 + const char *tag_value; 4902 4890 u32 nr_args, arg; 4903 4891 int i, ret; 4904 4892 ··· 5052 5038 info->btf = btf; 5053 5039 info->btf_id = t->type; 5054 5040 t = btf_type_by_id(btf, t->type); 5041 + 5042 + if (btf_type_is_type_tag(t)) { 5043 + tag_value = __btf_name_by_offset(btf, t->name_off); 5044 + if (strcmp(tag_value, "user") == 0) 5045 + info->reg_type |= MEM_USER; 5046 + } 5047 + 5055 5048 /* skip modifiers */ 5056 5049 while (btf_type_is_modifier(t)) { 5057 5050 info->btf_id = t->type; ··· 5085 5064 5086 5065 static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf, 5087 5066 const struct btf_type *t, int off, int size, 5088 - u32 *next_btf_id) 5067 + u32 *next_btf_id, enum bpf_type_flag *flag) 5089 5068 { 5090 5069 u32 i, moff, mtrue_end, msize = 0, total_nelems = 0; 5091 5070 const struct btf_type *mtype, *elem_type = NULL; 5092 5071 const struct btf_member *member; 5093 - const char *tname, *mname; 5072 + const char *tname, *mname, *tag_value; 5094 5073 u32 vlen, elem_id, mid; 5095 5074 5096 5075 again: ··· 5274 5253 } 5275 5254 5276 5255 if (btf_type_is_ptr(mtype)) { 5277 - const struct btf_type *stype; 5256 + const struct btf_type *stype, *t; 5257 + enum bpf_type_flag tmp_flag = 0; 5278 5258 u32 id; 5279 5259 5280 5260 if (msize != size || off != moff) { ··· 5284 5262 mname, moff, tname, off, size); 5285 5263 return -EACCES; 5286 5264 } 5265 + 5266 + /* check __user tag */ 5267 + t = btf_type_by_id(btf, mtype->type); 5268 + if (btf_type_is_type_tag(t)) { 5269 + tag_value = __btf_name_by_offset(btf, t->name_off); 5270 + if (strcmp(tag_value, "user") == 0) 5271 + tmp_flag = MEM_USER; 5272 + } 5273 + 5287 5274 stype = btf_type_skip_modifiers(btf, mtype->type, &id); 5288 5275 if (btf_type_is_struct(stype)) { 5289 5276 *next_btf_id = id; 5277 + *flag = tmp_flag; 5290 5278 return WALK_PTR; 5291 5279 } 5292 5280 } ··· 5323 5291 int btf_struct_access(struct 
bpf_verifier_log *log, const struct btf *btf, 5324 5292 const struct btf_type *t, int off, int size, 5325 5293 enum bpf_access_type atype __maybe_unused, 5326 - u32 *next_btf_id) 5294 + u32 *next_btf_id, enum bpf_type_flag *flag) 5327 5295 { 5296 + enum bpf_type_flag tmp_flag = 0; 5328 5297 int err; 5329 5298 u32 id; 5330 5299 5331 5300 do { 5332 - err = btf_struct_walk(log, btf, t, off, size, &id); 5301 + err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag); 5333 5302 5334 5303 switch (err) { 5335 5304 case WALK_PTR: ··· 5338 5305 * we're done. 5339 5306 */ 5340 5307 *next_btf_id = id; 5308 + *flag = tmp_flag; 5341 5309 return PTR_TO_BTF_ID; 5342 5310 case WALK_SCALAR: 5343 5311 return SCALAR_VALUE; ··· 5383 5349 const struct btf *need_btf, u32 need_type_id) 5384 5350 { 5385 5351 const struct btf_type *type; 5352 + enum bpf_type_flag flag; 5386 5353 int err; 5387 5354 5388 5355 /* Are we already done? */ ··· 5394 5359 type = btf_type_by_id(btf, id); 5395 5360 if (!type) 5396 5361 return false; 5397 - err = btf_struct_walk(log, btf, type, off, 1, &id); 5362 + err = btf_struct_walk(log, btf, type, off, 1, &id, &flag); 5398 5363 if (err != WALK_STRUCT) 5399 5364 return false; 5400 5365 ··· 6775 6740 int ret; 6776 6741 6777 6742 btf = btf_get_module_btf(kset->owner); 6778 - if (IS_ERR_OR_NULL(btf)) 6779 - return btf ? 
PTR_ERR(btf) : -ENOENT; 6743 + if (!btf) { 6744 + if (!kset->owner && IS_ENABLED(CONFIG_DEBUG_INFO_BTF)) { 6745 + pr_err("missing vmlinux BTF, cannot register kfuncs\n"); 6746 + return -ENOENT; 6747 + } 6748 + if (kset->owner && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES)) { 6749 + pr_err("missing module BTF, cannot register kfuncs\n"); 6750 + return -ENOENT; 6751 + } 6752 + return 0; 6753 + } 6754 + if (IS_ERR(btf)) 6755 + return PTR_ERR(btf); 6780 6756 6781 6757 hook = bpf_prog_type_to_kfunc_hook(prog_type); 6782 6758 ret = btf_populate_kfunc_set(btf, hook, kset); ··· 6798 6752 } 6799 6753 EXPORT_SYMBOL_GPL(register_btf_kfunc_id_set); 6800 6754 6755 + #define MAX_TYPES_ARE_COMPAT_DEPTH 2 6756 + 6757 + static 6758 + int __bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id, 6759 + const struct btf *targ_btf, __u32 targ_id, 6760 + int level) 6761 + { 6762 + const struct btf_type *local_type, *targ_type; 6763 + int depth = 32; /* max recursion depth */ 6764 + 6765 + /* caller made sure that names match (ignoring flavor suffix) */ 6766 + local_type = btf_type_by_id(local_btf, local_id); 6767 + targ_type = btf_type_by_id(targ_btf, targ_id); 6768 + if (btf_kind(local_type) != btf_kind(targ_type)) 6769 + return 0; 6770 + 6771 + recur: 6772 + depth--; 6773 + if (depth < 0) 6774 + return -EINVAL; 6775 + 6776 + local_type = btf_type_skip_modifiers(local_btf, local_id, &local_id); 6777 + targ_type = btf_type_skip_modifiers(targ_btf, targ_id, &targ_id); 6778 + if (!local_type || !targ_type) 6779 + return -EINVAL; 6780 + 6781 + if (btf_kind(local_type) != btf_kind(targ_type)) 6782 + return 0; 6783 + 6784 + switch (btf_kind(local_type)) { 6785 + case BTF_KIND_UNKN: 6786 + case BTF_KIND_STRUCT: 6787 + case BTF_KIND_UNION: 6788 + case BTF_KIND_ENUM: 6789 + case BTF_KIND_FWD: 6790 + return 1; 6791 + case BTF_KIND_INT: 6792 + /* just reject deprecated bitfield-like integers; all other 6793 + * integers are by default compatible between each other 6794 + */ 6795 + 
return btf_int_offset(local_type) == 0 && btf_int_offset(targ_type) == 0; 6796 + case BTF_KIND_PTR: 6797 + local_id = local_type->type; 6798 + targ_id = targ_type->type; 6799 + goto recur; 6800 + case BTF_KIND_ARRAY: 6801 + local_id = btf_array(local_type)->type; 6802 + targ_id = btf_array(targ_type)->type; 6803 + goto recur; 6804 + case BTF_KIND_FUNC_PROTO: { 6805 + struct btf_param *local_p = btf_params(local_type); 6806 + struct btf_param *targ_p = btf_params(targ_type); 6807 + __u16 local_vlen = btf_vlen(local_type); 6808 + __u16 targ_vlen = btf_vlen(targ_type); 6809 + int i, err; 6810 + 6811 + if (local_vlen != targ_vlen) 6812 + return 0; 6813 + 6814 + for (i = 0; i < local_vlen; i++, local_p++, targ_p++) { 6815 + if (level <= 0) 6816 + return -EINVAL; 6817 + 6818 + btf_type_skip_modifiers(local_btf, local_p->type, &local_id); 6819 + btf_type_skip_modifiers(targ_btf, targ_p->type, &targ_id); 6820 + err = __bpf_core_types_are_compat(local_btf, local_id, 6821 + targ_btf, targ_id, 6822 + level - 1); 6823 + if (err <= 0) 6824 + return err; 6825 + } 6826 + 6827 + /* tail recurse for return type check */ 6828 + btf_type_skip_modifiers(local_btf, local_type->type, &local_id); 6829 + btf_type_skip_modifiers(targ_btf, targ_type->type, &targ_id); 6830 + goto recur; 6831 + } 6832 + default: 6833 + return 0; 6834 + } 6835 + } 6836 + 6837 + /* Check local and target types for compatibility. This check is used for 6838 + * type-based CO-RE relocations and follow slightly different rules than 6839 + * field-based relocations. This function assumes that root types were already 6840 + * checked for name match. Beyond that initial root-level name check, names 6841 + * are completely ignored. 
Compatibility rules are as follows: 6842 + * - any two STRUCTs/UNIONs/FWDs/ENUMs/INTs are considered compatible, but 6843 + * kind should match for local and target types (i.e., STRUCT is not 6844 + * compatible with UNION); 6845 + * - for ENUMs, the size is ignored; 6846 + * - for INT, size and signedness are ignored; 6847 + * - for ARRAY, dimensionality is ignored, element types are checked for 6848 + * compatibility recursively; 6849 + * - CONST/VOLATILE/RESTRICT modifiers are ignored; 6850 + * - TYPEDEFs/PTRs are compatible if types they pointing to are compatible; 6851 + * - FUNC_PROTOs are compatible if they have compatible signature: same 6852 + * number of input args and compatible return and argument types. 6853 + * These rules are not set in stone and probably will be adjusted as we get 6854 + * more experience with using BPF CO-RE relocations. 6855 + */ 6801 6856 int bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id, 6802 6857 const struct btf *targ_btf, __u32 targ_id) 6803 6858 { 6804 - return -EOPNOTSUPP; 6859 + return __bpf_core_types_are_compat(local_btf, local_id, 6860 + targ_btf, targ_id, 6861 + MAX_TYPES_ARE_COMPAT_DEPTH); 6805 6862 } 6806 6863 6807 6864 static bool bpf_core_is_flavor_sep(const char *s)
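The in-kernel `bpf_core_types_are_compat()` mirrors libbpf's check but bounds the work in two ways: pointer and array members tail-"recurse" via `goto recur` under a fixed depth counter, while actually descending into FUNC_PROTO arguments consumes one `level`, starting from `MAX_TYPES_ARE_COMPAT_DEPTH` (2). A toy model of that bounding (the type graph and kinds below are invented for illustration, not BTF):

```c
enum kind { KIND_INT, KIND_PTR, KIND_FUNC_PROTO };

struct ttype {
	enum kind kind;
	int ref;   /* pointee / argument type index, -1 if none */
};

/* Toy type graph: 4 is a proto taking a proto taking a proto of int. */
static const struct ttype g[] = {
	[0] = { KIND_INT, -1 },
	[1] = { KIND_PTR, 0 },
	[2] = { KIND_FUNC_PROTO, 0 },
	[3] = { KIND_FUNC_PROTO, 2 },
	[4] = { KIND_FUNC_PROTO, 3 },
};

/* Compare types a and b in the same graph; level bounds how many
 * FUNC_PROTO levels may be entered, as in the kernel helper.
 * Returns 1 compatible, 0 incompatible, -1 depth exceeded. */
static int types_compat(const struct ttype *t, int a, int b, int level)
{
	int depth = 32;   /* guards the goto-based tail recursion */

recur:
	if (depth-- < 0)
		return -1;
	if (t[a].kind != t[b].kind)
		return 0;

	switch (t[a].kind) {
	case KIND_INT:
		return 1;
	case KIND_PTR:
		a = t[a].ref;
		b = t[b].ref;
		goto recur;
	case KIND_FUNC_PROTO:
		if (level <= 0)
			return -1;
		return types_compat(t, t[a].ref, t[b].ref, level - 1);
	}
	return 0;
}
```

With `level = 2`, a proto of a proto still checks out, but a third nesting level bails with an error rather than recursing unboundedly on kernel stack — the same trade-off the real helper makes.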
-30
kernel/bpf/cgroup.c
··· 1384 1384 } 1385 1385 1386 1386 #ifdef CONFIG_NET 1387 - static bool __cgroup_bpf_prog_array_is_empty(struct cgroup *cgrp, 1388 - enum cgroup_bpf_attach_type attach_type) 1389 - { 1390 - struct bpf_prog_array *prog_array; 1391 - bool empty; 1392 - 1393 - rcu_read_lock(); 1394 - prog_array = rcu_dereference(cgrp->bpf.effective[attach_type]); 1395 - empty = bpf_prog_array_is_empty(prog_array); 1396 - rcu_read_unlock(); 1397 - 1398 - return empty; 1399 - } 1400 - 1401 1387 static int sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen, 1402 1388 struct bpf_sockopt_buf *buf) 1403 1389 { ··· 1442 1456 }; 1443 1457 int ret, max_optlen; 1444 1458 1445 - /* Opportunistic check to see whether we have any BPF program 1446 - * attached to the hook so we don't waste time allocating 1447 - * memory and locking the socket. 1448 - */ 1449 - if (__cgroup_bpf_prog_array_is_empty(cgrp, CGROUP_SETSOCKOPT)) 1450 - return 0; 1451 - 1452 1459 /* Allocate a bit more than the initial user buffer for 1453 1460 * BPF program. The canonical use case is overriding 1454 1461 * TCP_CONGESTION(nv) to TCP_CONGESTION(cubic). 1455 1462 */ 1456 1463 max_optlen = max_t(int, 16, *optlen); 1457 - 1458 1464 max_optlen = sockopt_alloc_buf(&ctx, max_optlen, &buf); 1459 1465 if (max_optlen < 0) 1460 1466 return max_optlen; ··· 1528 1550 }; 1529 1551 int ret; 1530 1552 1531 - /* Opportunistic check to see whether we have any BPF program 1532 - * attached to the hook so we don't waste time allocating 1533 - * memory and locking the socket. 1534 - */ 1535 - if (__cgroup_bpf_prog_array_is_empty(cgrp, CGROUP_GETSOCKOPT)) 1536 - return retval; 1537 - 1538 1553 ctx.optlen = max_optlen; 1539 - 1540 1554 max_optlen = sockopt_alloc_buf(&ctx, max_optlen, &buf); 1541 1555 if (max_optlen < 0) 1542 1556 return max_optlen;
+259 -30
kernel/bpf/core.c
··· 537 537 static void 538 538 bpf_prog_ksym_set_addr(struct bpf_prog *prog) 539 539 { 540 - const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(prog); 541 - unsigned long addr = (unsigned long)hdr; 542 - 543 540 WARN_ON_ONCE(!bpf_prog_ebpf_jited(prog)); 544 541 545 542 prog->aux->ksym.start = (unsigned long) prog->bpf_func; 546 - prog->aux->ksym.end = addr + hdr->pages * PAGE_SIZE; 543 + prog->aux->ksym.end = prog->aux->ksym.start + prog->jited_len; 547 544 } 548 545 549 546 static void ··· 805 808 return slot; 806 809 } 807 810 811 + /* 812 + * BPF program pack allocator. 813 + * 814 + * Most BPF programs are pretty small. Allocating a hole page for each 815 + * program is sometime a waste. Many small bpf program also adds pressure 816 + * to instruction TLB. To solve this issue, we introduce a BPF program pack 817 + * allocator. The prog_pack allocator uses HPAGE_PMD_SIZE page (2MB on x86) 818 + * to host BPF programs. 819 + */ 820 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 821 + #define BPF_PROG_PACK_SIZE HPAGE_PMD_SIZE 822 + #else 823 + #define BPF_PROG_PACK_SIZE PAGE_SIZE 824 + #endif 825 + #define BPF_PROG_CHUNK_SHIFT 6 826 + #define BPF_PROG_CHUNK_SIZE (1 << BPF_PROG_CHUNK_SHIFT) 827 + #define BPF_PROG_CHUNK_MASK (~(BPF_PROG_CHUNK_SIZE - 1)) 828 + #define BPF_PROG_CHUNK_COUNT (BPF_PROG_PACK_SIZE / BPF_PROG_CHUNK_SIZE) 829 + 830 + struct bpf_prog_pack { 831 + struct list_head list; 832 + void *ptr; 833 + unsigned long bitmap[BITS_TO_LONGS(BPF_PROG_CHUNK_COUNT)]; 834 + }; 835 + 836 + #define BPF_PROG_MAX_PACK_PROG_SIZE BPF_PROG_PACK_SIZE 837 + #define BPF_PROG_SIZE_TO_NBITS(size) (round_up(size, BPF_PROG_CHUNK_SIZE) / BPF_PROG_CHUNK_SIZE) 838 + 839 + static DEFINE_MUTEX(pack_mutex); 840 + static LIST_HEAD(pack_list); 841 + 842 + static struct bpf_prog_pack *alloc_new_pack(void) 843 + { 844 + struct bpf_prog_pack *pack; 845 + 846 + pack = kzalloc(sizeof(*pack), GFP_KERNEL); 847 + if (!pack) 848 + return NULL; 849 + pack->ptr = module_alloc(BPF_PROG_PACK_SIZE); 
850 + if (!pack->ptr) { 851 + kfree(pack); 852 + return NULL; 853 + } 854 + bitmap_zero(pack->bitmap, BPF_PROG_PACK_SIZE / BPF_PROG_CHUNK_SIZE); 855 + list_add_tail(&pack->list, &pack_list); 856 + 857 + set_vm_flush_reset_perms(pack->ptr); 858 + set_memory_ro((unsigned long)pack->ptr, BPF_PROG_PACK_SIZE / PAGE_SIZE); 859 + set_memory_x((unsigned long)pack->ptr, BPF_PROG_PACK_SIZE / PAGE_SIZE); 860 + return pack; 861 + } 862 + 863 + static void *bpf_prog_pack_alloc(u32 size) 864 + { 865 + unsigned int nbits = BPF_PROG_SIZE_TO_NBITS(size); 866 + struct bpf_prog_pack *pack; 867 + unsigned long pos; 868 + void *ptr = NULL; 869 + 870 + if (size > BPF_PROG_MAX_PACK_PROG_SIZE) { 871 + size = round_up(size, PAGE_SIZE); 872 + ptr = module_alloc(size); 873 + if (ptr) { 874 + set_vm_flush_reset_perms(ptr); 875 + set_memory_ro((unsigned long)ptr, size / PAGE_SIZE); 876 + set_memory_x((unsigned long)ptr, size / PAGE_SIZE); 877 + } 878 + return ptr; 879 + } 880 + mutex_lock(&pack_mutex); 881 + list_for_each_entry(pack, &pack_list, list) { 882 + pos = bitmap_find_next_zero_area(pack->bitmap, BPF_PROG_CHUNK_COUNT, 0, 883 + nbits, 0); 884 + if (pos < BPF_PROG_CHUNK_COUNT) 885 + goto found_free_area; 886 + } 887 + 888 + pack = alloc_new_pack(); 889 + if (!pack) 890 + goto out; 891 + 892 + pos = 0; 893 + 894 + found_free_area: 895 + bitmap_set(pack->bitmap, pos, nbits); 896 + ptr = (void *)(pack->ptr) + (pos << BPF_PROG_CHUNK_SHIFT); 897 + 898 + out: 899 + mutex_unlock(&pack_mutex); 900 + return ptr; 901 + } 902 + 903 + static void bpf_prog_pack_free(struct bpf_binary_header *hdr) 904 + { 905 + struct bpf_prog_pack *pack = NULL, *tmp; 906 + unsigned int nbits; 907 + unsigned long pos; 908 + void *pack_ptr; 909 + 910 + if (hdr->size > BPF_PROG_MAX_PACK_PROG_SIZE) { 911 + module_memfree(hdr); 912 + return; 913 + } 914 + 915 + pack_ptr = (void *)((unsigned long)hdr & ~(BPF_PROG_PACK_SIZE - 1)); 916 + mutex_lock(&pack_mutex); 917 + 918 + list_for_each_entry(tmp, &pack_list, list) { 919 + 
if (tmp->ptr == pack_ptr) { 920 + pack = tmp; 921 + break; 922 + } 923 + } 924 + 925 + if (WARN_ONCE(!pack, "bpf_prog_pack bug\n")) 926 + goto out; 927 + 928 + nbits = BPF_PROG_SIZE_TO_NBITS(hdr->size); 929 + pos = ((unsigned long)hdr - (unsigned long)pack_ptr) >> BPF_PROG_CHUNK_SHIFT; 930 + 931 + bitmap_clear(pack->bitmap, pos, nbits); 932 + if (bitmap_find_next_zero_area(pack->bitmap, BPF_PROG_CHUNK_COUNT, 0, 933 + BPF_PROG_CHUNK_COUNT, 0) == 0) { 934 + list_del(&pack->list); 935 + module_memfree(pack->ptr); 936 + kfree(pack); 937 + } 938 + out: 939 + mutex_unlock(&pack_mutex); 940 + } 941 + 808 942 static atomic_long_t bpf_jit_current; 809 943 810 944 /* Can be overridden by an arch's JIT compiler if it has a custom, ··· 961 833 } 962 834 pure_initcall(bpf_jit_charge_init); 963 835 964 - int bpf_jit_charge_modmem(u32 pages) 836 + int bpf_jit_charge_modmem(u32 size) 965 837 { 966 - if (atomic_long_add_return(pages, &bpf_jit_current) > 967 - (bpf_jit_limit >> PAGE_SHIFT)) { 838 + if (atomic_long_add_return(size, &bpf_jit_current) > bpf_jit_limit) { 968 839 if (!bpf_capable()) { 969 - atomic_long_sub(pages, &bpf_jit_current); 840 + atomic_long_sub(size, &bpf_jit_current); 970 841 return -EPERM; 971 842 } 972 843 } ··· 973 846 return 0; 974 847 } 975 848 976 - void bpf_jit_uncharge_modmem(u32 pages) 849 + void bpf_jit_uncharge_modmem(u32 size) 977 850 { 978 - atomic_long_sub(pages, &bpf_jit_current); 851 + atomic_long_sub(size, &bpf_jit_current); 979 852 } 980 853 981 854 void *__weak bpf_jit_alloc_exec(unsigned long size) ··· 994 867 bpf_jit_fill_hole_t bpf_fill_ill_insns) 995 868 { 996 869 struct bpf_binary_header *hdr; 997 - u32 size, hole, start, pages; 870 + u32 size, hole, start; 998 871 999 872 WARN_ON_ONCE(!is_power_of_2(alignment) || 1000 873 alignment > BPF_IMAGE_ALIGNMENT); ··· 1004 877 * random section of illegal instructions. 
1005 878 */ 1006 879 size = round_up(proglen + sizeof(*hdr) + 128, PAGE_SIZE); 1007 - pages = size / PAGE_SIZE; 1008 880 1009 - if (bpf_jit_charge_modmem(pages)) 881 + if (bpf_jit_charge_modmem(size)) 1010 882 return NULL; 1011 883 hdr = bpf_jit_alloc_exec(size); 1012 884 if (!hdr) { 1013 - bpf_jit_uncharge_modmem(pages); 885 + bpf_jit_uncharge_modmem(size); 1014 886 return NULL; 1015 887 } 1016 888 1017 889 /* Fill space with illegal/arch-dep instructions. */ 1018 890 bpf_fill_ill_insns(hdr, size); 1019 891 1020 - hdr->pages = pages; 892 + hdr->size = size; 1021 893 hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)), 1022 894 PAGE_SIZE - sizeof(*hdr)); 1023 895 start = (get_random_int() % hole) & ~(alignment - 1); ··· 1029 903 1030 904 void bpf_jit_binary_free(struct bpf_binary_header *hdr) 1031 905 { 1032 - u32 pages = hdr->pages; 906 + u32 size = hdr->size; 1033 907 1034 908 bpf_jit_free_exec(hdr); 1035 - bpf_jit_uncharge_modmem(pages); 909 + bpf_jit_uncharge_modmem(size); 910 + } 911 + 912 + /* Allocate jit binary from bpf_prog_pack allocator. 913 + * Since the allocated memory is RO+X, the JIT engine cannot write directly 914 + * to the memory. To solve this problem, a RW buffer is also allocated at 915 + * as the same time. The JIT engine should calculate offsets based on the 916 + * RO memory address, but write JITed program to the RW buffer. Once the 917 + * JIT engine finishes, it calls bpf_jit_binary_pack_finalize, which copies 918 + * the JITed program to the RO memory. 
919 + */ 920 + struct bpf_binary_header * 921 + bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr, 922 + unsigned int alignment, 923 + struct bpf_binary_header **rw_header, 924 + u8 **rw_image, 925 + bpf_jit_fill_hole_t bpf_fill_ill_insns) 926 + { 927 + struct bpf_binary_header *ro_header; 928 + u32 size, hole, start; 929 + 930 + WARN_ON_ONCE(!is_power_of_2(alignment) || 931 + alignment > BPF_IMAGE_ALIGNMENT); 932 + 933 + /* add 16 bytes for a random section of illegal instructions */ 934 + size = round_up(proglen + sizeof(*ro_header) + 16, BPF_PROG_CHUNK_SIZE); 935 + 936 + if (bpf_jit_charge_modmem(size)) 937 + return NULL; 938 + ro_header = bpf_prog_pack_alloc(size); 939 + if (!ro_header) { 940 + bpf_jit_uncharge_modmem(size); 941 + return NULL; 942 + } 943 + 944 + *rw_header = kvmalloc(size, GFP_KERNEL); 945 + if (!*rw_header) { 946 + bpf_prog_pack_free(ro_header); 947 + bpf_jit_uncharge_modmem(size); 948 + return NULL; 949 + } 950 + 951 + /* Fill space with illegal/arch-dep instructions. */ 952 + bpf_fill_ill_insns(*rw_header, size); 953 + (*rw_header)->size = size; 954 + 955 + hole = min_t(unsigned int, size - (proglen + sizeof(*ro_header)), 956 + BPF_PROG_CHUNK_SIZE - sizeof(*ro_header)); 957 + start = (get_random_int() % hole) & ~(alignment - 1); 958 + 959 + *image_ptr = &ro_header->image[start]; 960 + *rw_image = &(*rw_header)->image[start]; 961 + 962 + return ro_header; 963 + } 964 + 965 + /* Copy JITed text from rw_header to its final location, the ro_header. 
*/ 966 + int bpf_jit_binary_pack_finalize(struct bpf_prog *prog, 967 + struct bpf_binary_header *ro_header, 968 + struct bpf_binary_header *rw_header) 969 + { 970 + void *ptr; 971 + 972 + ptr = bpf_arch_text_copy(ro_header, rw_header, rw_header->size); 973 + 974 + kvfree(rw_header); 975 + 976 + if (IS_ERR(ptr)) { 977 + bpf_prog_pack_free(ro_header); 978 + return PTR_ERR(ptr); 979 + } 980 + prog->aux->use_bpf_prog_pack = true; 981 + return 0; 982 + } 983 + 984 + /* bpf_jit_binary_pack_free is called in two different scenarios: 985 + * 1) when the program is freed after; 986 + * 2) when the JIT engine fails (before bpf_jit_binary_pack_finalize). 987 + * For case 2), we need to free both the RO memory and the RW buffer. 988 + * Also, ro_header->size in 2) is not properly set yet, so rw_header->size 989 + * is used for uncharge. 990 + */ 991 + void bpf_jit_binary_pack_free(struct bpf_binary_header *ro_header, 992 + struct bpf_binary_header *rw_header) 993 + { 994 + u32 size = rw_header ? rw_header->size : ro_header->size; 995 + 996 + bpf_prog_pack_free(ro_header); 997 + kvfree(rw_header); 998 + bpf_jit_uncharge_modmem(size); 999 + } 1000 + 1001 + static inline struct bpf_binary_header * 1002 + bpf_jit_binary_hdr(const struct bpf_prog *fp) 1003 + { 1004 + unsigned long real_start = (unsigned long)fp->bpf_func; 1005 + unsigned long addr; 1006 + 1007 + if (fp->aux->use_bpf_prog_pack) 1008 + addr = real_start & BPF_PROG_CHUNK_MASK; 1009 + else 1010 + addr = real_start & PAGE_MASK; 1011 + 1012 + return (void *)addr; 1036 1013 } 1037 1014 1038 1015 /* This symbol is only overridden by archs that have different ··· 1147 918 if (fp->jited) { 1148 919 struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp); 1149 920 1150 - bpf_jit_binary_free(hdr); 921 + if (fp->aux->use_bpf_prog_pack) 922 + bpf_jit_binary_pack_free(hdr, NULL /* rw_buffer */); 923 + else 924 + bpf_jit_binary_free(hdr); 1151 925 1152 926 WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(fp)); 1153 927 } ··· 2200 1968 }, 
2201 1969 }; 2202 1970 2203 - /* to avoid allocating empty bpf_prog_array for cgroups that 2204 - * don't have bpf program attached use one global 'empty_prog_array' 2205 - * It will not be modified the caller of bpf_prog_array_alloc() 2206 - * (since caller requested prog_cnt == 0) 2207 - * that pointer should be 'freed' by bpf_prog_array_free() 2208 - */ 2209 - static struct { 2210 - struct bpf_prog_array hdr; 2211 - struct bpf_prog *null_prog; 2212 - } empty_prog_array = { 1971 + struct bpf_empty_prog_array bpf_empty_prog_array = { 2213 1972 .null_prog = NULL, 2214 1973 }; 1974 + EXPORT_SYMBOL(bpf_empty_prog_array); 2215 1975 2216 1976 struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags) 2217 1977 { ··· 2213 1989 (prog_cnt + 1), 2214 1990 flags); 2215 1991 2216 - return &empty_prog_array.hdr; 1992 + return &bpf_empty_prog_array.hdr; 2217 1993 } 2218 1994 2219 1995 void bpf_prog_array_free(struct bpf_prog_array *progs) 2220 1996 { 2221 - if (!progs || progs == &empty_prog_array.hdr) 1997 + if (!progs || progs == &bpf_empty_prog_array.hdr) 2222 1998 return; 2223 1999 kfree_rcu(progs, rcu); 2224 2000 } ··· 2675 2451 void *addr1, void *addr2) 2676 2452 { 2677 2453 return -ENOTSUPP; 2454 + } 2455 + 2456 + void * __weak bpf_arch_text_copy(void *dst, void *src, size_t len) 2457 + { 2458 + return ERR_PTR(-ENOTSUPP); 2678 2459 } 2679 2460 2680 2461 DEFINE_STATIC_KEY_FALSE(bpf_stats_enabled_key);
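The prog pack allocator above carves a huge page into 64-byte chunks tracked by a bitmap: allocation rounds the program size up to whole chunks and finds a free run, free clears the run, and the whole pack is released once its bitmap is empty again. A userspace sketch of just the chunk bookkeeping (a plain bool array stands in for the kernel's bitmap helpers, and the pack is shrunk to 4 KiB for the demo):

```c
#include <stdbool.h>
#include <stddef.h>

#define CHUNK_SHIFT 6                 /* 64-byte chunks, as BPF_PROG_CHUNK_SHIFT */
#define CHUNK_SIZE  (1u << CHUNK_SHIFT)
#define NCHUNKS     64                /* toy pack: 4 KiB instead of 2 MiB */

static bool used[NCHUNKS];

static size_t size_to_nchunks(size_t size)
{
	return (size + CHUNK_SIZE - 1) / CHUNK_SIZE;
}

/* Return the first chunk index of a free run big enough for size, or -1. */
static long pack_alloc(size_t size)
{
	size_t n = size_to_nchunks(size);

	for (size_t pos = 0; pos + n <= NCHUNKS; pos++) {
		size_t i;

		for (i = 0; i < n && !used[pos + i]; i++)
			;
		if (i == n) {
			for (i = 0; i < n; i++)
				used[pos + i] = true;
			return (long)pos;
		}
	}
	return -1;
}

static void pack_free(long pos, size_t size)
{
	size_t n = size_to_nchunks(size);

	for (size_t i = 0; i < n; i++)
		used[pos + i] = false;
}

/* Demo: a 100-byte prog takes chunks 0-1, a 64-byte one chunk 2;
 * freeing the first makes chunk 0 reusable. */
static long demo(void)
{
	long a = pack_alloc(100);
	long b = pack_alloc(64);
	long c;

	pack_free(a, 100);
	c = pack_alloc(64);
	return a * 100 + b * 10 + c;
}
```

Packing many small JITed programs into one 2 MB mapping this way is what reduces iTLB pressure: programs share one huge-page translation instead of each pinning its own 4 KiB page.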
+34
kernel/bpf/helpers.c
··· 16 16 #include <linux/pid_namespace.h> 17 17 #include <linux/proc_ns.h> 18 18 #include <linux/security.h> 19 + #include <linux/btf_ids.h> 19 20 20 21 #include "../../lib/kstrtox.h" 21 22 ··· 670 669 .arg1_type = ARG_PTR_TO_UNINIT_MEM, 671 670 .arg2_type = ARG_CONST_SIZE_OR_ZERO, 672 671 .arg3_type = ARG_ANYTHING, 672 + }; 673 + 674 + BPF_CALL_5(bpf_copy_from_user_task, void *, dst, u32, size, 675 + const void __user *, user_ptr, struct task_struct *, tsk, u64, flags) 676 + { 677 + int ret; 678 + 679 + /* flags is not used yet */ 680 + if (unlikely(flags)) 681 + return -EINVAL; 682 + 683 + if (unlikely(!size)) 684 + return 0; 685 + 686 + ret = access_process_vm(tsk, (unsigned long)user_ptr, dst, size, 0); 687 + if (ret == size) 688 + return 0; 689 + 690 + memset(dst, 0, size); 691 + /* Return -EFAULT for partial read */ 692 + return ret < 0 ? ret : -EFAULT; 693 + } 694 + 695 + const struct bpf_func_proto bpf_copy_from_user_task_proto = { 696 + .func = bpf_copy_from_user_task, 697 + .gpl_only = true, 698 + .ret_type = RET_INTEGER, 699 + .arg1_type = ARG_PTR_TO_UNINIT_MEM, 700 + .arg2_type = ARG_CONST_SIZE_OR_ZERO, 701 + .arg3_type = ARG_ANYTHING, 702 + .arg4_type = ARG_PTR_TO_BTF_ID, 703 + .arg4_btf_id = &btf_tracing_ids[BTF_TRACING_TYPE_TASK], 704 + .arg5_type = ARG_ANYTHING 673 705 }; 674 706 675 707 BPF_CALL_2(bpf_per_cpu_ptr, const void *, ptr, u32, cpu)
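The new helper's contract is easy to state: `flags` is reserved and must be zero, a zero `size` is a successful no-op, and anything short of a full read zeroes the destination and returns a negative error (`-EFAULT` for a partial read). A sketch of just that contract, with the cross-task read (`access_process_vm()` in the kernel) stubbed out by a callback:

```c
#include <errno.h>
#include <string.h>

/* read_fn stands in for access_process_vm(): returns bytes copied or <0. */
typedef long (*read_fn)(void *dst, unsigned long size);

static long copy_from_task(void *dst, unsigned long size, unsigned long flags,
			   read_fn read)
{
	long ret;

	if (flags)                /* reserved for future use, must be zero */
		return -EINVAL;
	if (!size)
		return 0;

	ret = read(dst, size);
	if (ret == (long)size)
		return 0;

	memset(dst, 0, size);     /* never expose a partial read */
	return ret < 0 ? ret : -EFAULT;
}

static long read_half(void *dst, unsigned long size)
{
	memset(dst, 0xab, size / 2);   /* simulate a partial copy */
	return (long)(size / 2);
}

static long read_all(void *dst, unsigned long size)
{
	memset(dst, 0xab, size);
	return (long)size;
}

static long demo_partial(void)
{
	char buf[8];
	long ret = copy_from_task(buf, sizeof(buf), 0, read_half);

	return (ret == -EFAULT && buf[0] == 0) ? 1 : 0;  /* zeroed on error */
}

static long demo_full(void)
{
	char buf[8];

	return copy_from_task(buf, sizeof(buf), 0, read_all);
}
```

Zeroing `dst` on failure matters because the BPF program's buffer may otherwise leak whatever stale bytes the partial copy left behind.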
+2 -26
kernel/bpf/preload/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 3 LIBBPF_SRCS = $(srctree)/tools/lib/bpf/ 4 - LIBBPF_OUT = $(abspath $(obj))/libbpf 5 - LIBBPF_A = $(LIBBPF_OUT)/libbpf.a 6 - LIBBPF_DESTDIR = $(LIBBPF_OUT) 7 - LIBBPF_INCLUDE = $(LIBBPF_DESTDIR)/include 8 - 9 - # Although not in use by libbpf's Makefile, set $(O) so that the "dummy" test 10 - # in tools/scripts/Makefile.include always succeeds when building the kernel 11 - # with $(O) pointing to a relative path, as in "make O=build bindeb-pkg". 12 - $(LIBBPF_A): | $(LIBBPF_OUT) 13 - $(Q)$(MAKE) -C $(LIBBPF_SRCS) O=$(LIBBPF_OUT)/ OUTPUT=$(LIBBPF_OUT)/ \ 14 - DESTDIR=$(LIBBPF_DESTDIR) prefix= \ 15 - $(LIBBPF_OUT)/libbpf.a install_headers 16 - 17 - libbpf_hdrs: $(LIBBPF_A) 18 - 19 - .PHONY: libbpf_hdrs 20 - 21 - $(LIBBPF_OUT): 22 - $(call msg,MKDIR,$@) 23 - $(Q)mkdir -p $@ 4 + LIBBPF_INCLUDE = $(LIBBPF_SRCS)/.. 24 5 25 6 userccflags += -I $(srctree)/tools/include/ -I $(srctree)/tools/include/uapi \ 26 7 -I $(LIBBPF_INCLUDE) -Wno-unused-result 27 8 28 9 userprogs := bpf_preload_umd 29 10 30 - clean-files := libbpf/ 31 - 32 - $(obj)/iterators/iterators.o: | libbpf_hdrs 33 - 34 11 bpf_preload_umd-objs := iterators/iterators.o 35 - bpf_preload_umd-userldlibs := $(LIBBPF_A) -lelf -lz 36 12 37 - $(obj)/bpf_preload_umd: $(LIBBPF_A) 13 + $(obj)/bpf_preload_umd: 38 14 39 15 $(obj)/bpf_preload_umd_blob.o: $(obj)/bpf_preload_umd 40 16
+3 -3
kernel/bpf/preload/iterators/Makefile
··· 35 35 36 36 .PHONY: all clean 37 37 38 - all: iterators.skel.h 38 + all: iterators.lskel.h 39 39 40 40 clean: 41 41 $(call msg,CLEAN) 42 42 $(Q)rm -rf $(OUTPUT) iterators 43 43 44 - iterators.skel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL) 44 + iterators.lskel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL) 45 45 $(call msg,GEN-SKEL,$@) 46 - $(Q)$(BPFTOOL) gen skeleton $< > $@ 46 + $(Q)$(BPFTOOL) gen skeleton -L $< > $@ 47 47 48 48 49 49 $(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT)
+21 -7
kernel/bpf/preload/iterators/iterators.c
··· 10 10 #include <bpf/libbpf.h> 11 11 #include <bpf/bpf.h> 12 12 #include <sys/mount.h> 13 - #include "iterators.skel.h" 13 + #include "iterators.lskel.h" 14 14 #include "bpf_preload_common.h" 15 15 16 16 int to_kernel = -1; 17 17 int from_kernel = 0; 18 18 19 - static int send_link_to_kernel(struct bpf_link *link, const char *link_name) 19 + static int __bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len) 20 + { 21 + union bpf_attr attr; 22 + int err; 23 + 24 + memset(&attr, 0, sizeof(attr)); 25 + attr.info.bpf_fd = bpf_fd; 26 + attr.info.info_len = *info_len; 27 + attr.info.info = (long) info; 28 + 29 + err = skel_sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr)); 30 + if (!err) 31 + *info_len = attr.info.info_len; 32 + return err; 33 + } 34 + 35 + static int send_link_to_kernel(int link_fd, const char *link_name) 20 36 { 21 37 struct bpf_preload_info obj = {}; 22 38 struct bpf_link_info info = {}; 23 39 __u32 info_len = sizeof(info); 24 40 int err; 25 41 26 42 err = __bpf_obj_get_info_by_fd(link_fd, &info, &info_len); 27 43 if (err) 28 44 return err; 29 45 obj.link_id = info.id; ··· 53 37 54 38 int main(int argc, char **argv) 55 39 { 56 - struct rlimit rlim = { RLIM_INFINITY, RLIM_INFINITY }; 57 40 struct iterators_bpf *skel; 58 41 int err, magic; 59 42 int debug_fd; ··· 70 55 printf("bad start magic %d\n", magic); 71 56 return 1; 72 57 } 73 - setrlimit(RLIMIT_MEMLOCK, &rlim); 74 58 /* libbpf opens BPF object and loads it into the kernel */ 75 59 skel = iterators_bpf__open_and_load(); 76 60 if (!skel) { ··· 86 72 goto cleanup; 87 73 88 74 /* send two bpf_link IDs with names to the kernel */ 89 75 err = send_link_to_kernel(skel->links.dump_bpf_map_fd, "maps.debug"); 90 76 if (err) 91 77 goto cleanup; 92 78 err = send_link_to_kernel(skel->links.dump_bpf_prog_fd, "progs.debug"); 93 79 if (err) 94 80 goto cleanup; 95 81
+428
kernel/bpf/preload/iterators/iterators.lskel.h
··· 1 + /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ 2 + /* THIS FILE IS AUTOGENERATED! */ 3 + #ifndef __ITERATORS_BPF_SKEL_H__ 4 + #define __ITERATORS_BPF_SKEL_H__ 5 + 6 + #include <stdlib.h> 7 + #include <bpf/bpf.h> 8 + #include <bpf/skel_internal.h> 9 + 10 + struct iterators_bpf { 11 + struct bpf_loader_ctx ctx; 12 + struct { 13 + struct bpf_map_desc rodata; 14 + } maps; 15 + struct { 16 + struct bpf_prog_desc dump_bpf_map; 17 + struct bpf_prog_desc dump_bpf_prog; 18 + } progs; 19 + struct { 20 + int dump_bpf_map_fd; 21 + int dump_bpf_prog_fd; 22 + } links; 23 + struct iterators_bpf__rodata { 24 + } *rodata; 25 + }; 26 + 27 + static inline int 28 + iterators_bpf__dump_bpf_map__attach(struct iterators_bpf *skel) 29 + { 30 + int prog_fd = skel->progs.dump_bpf_map.prog_fd; 31 + int fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER); 32 + 33 + if (fd > 0) 34 + skel->links.dump_bpf_map_fd = fd; 35 + return fd; 36 + } 37 + 38 + static inline int 39 + iterators_bpf__dump_bpf_prog__attach(struct iterators_bpf *skel) 40 + { 41 + int prog_fd = skel->progs.dump_bpf_prog.prog_fd; 42 + int fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER); 43 + 44 + if (fd > 0) 45 + skel->links.dump_bpf_prog_fd = fd; 46 + return fd; 47 + } 48 + 49 + static inline int 50 + iterators_bpf__attach(struct iterators_bpf *skel) 51 + { 52 + int ret = 0; 53 + 54 + ret = ret < 0 ? ret : iterators_bpf__dump_bpf_map__attach(skel); 55 + ret = ret < 0 ? ret : iterators_bpf__dump_bpf_prog__attach(skel); 56 + return ret < 0 ? 
ret : 0; 57 + } 58 + 59 + static inline void 60 + iterators_bpf__detach(struct iterators_bpf *skel) 61 + { 62 + skel_closenz(skel->links.dump_bpf_map_fd); 63 + skel_closenz(skel->links.dump_bpf_prog_fd); 64 + } 65 + static void 66 + iterators_bpf__destroy(struct iterators_bpf *skel) 67 + { 68 + if (!skel) 69 + return; 70 + iterators_bpf__detach(skel); 71 + skel_closenz(skel->progs.dump_bpf_map.prog_fd); 72 + skel_closenz(skel->progs.dump_bpf_prog.prog_fd); 73 + munmap(skel->rodata, 4096); 74 + skel_closenz(skel->maps.rodata.map_fd); 75 + free(skel); 76 + } 77 + static inline struct iterators_bpf * 78 + iterators_bpf__open(void) 79 + { 80 + struct iterators_bpf *skel; 81 + 82 + skel = calloc(sizeof(*skel), 1); 83 + if (!skel) 84 + goto cleanup; 85 + skel->ctx.sz = (void *)&skel->links - (void *)skel; 86 + skel->rodata = 87 + mmap(NULL, 4096, PROT_READ | PROT_WRITE, 88 + MAP_SHARED | MAP_ANONYMOUS, -1, 0); 89 + if (skel->rodata == (void *) -1) 90 + goto cleanup; 91 + memcpy(skel->rodata, (void *)"\ 92 + \x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ 93 + \x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\ 94 + \x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\ 95 + \x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\ 96 + \x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x0a\0", 98); 97 + skel->maps.rodata.initial_value = (__u64)(long)skel->rodata; 98 + return skel; 99 + cleanup: 100 + iterators_bpf__destroy(skel); 101 + return NULL; 102 + } 103 + 104 + static inline int 105 + iterators_bpf__load(struct iterators_bpf *skel) 106 + { 107 + struct bpf_load_and_run_opts opts = {}; 108 + int err; 109 + 110 + opts.ctx = (struct bpf_loader_ctx *)skel; 111 + opts.data_sz = 6056; 112 + opts.data = (void *)"\ 113 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 114 + 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 115 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 116 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 117 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 118 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 119 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 120 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 121 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 122 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 123 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 124 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 125 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 126 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 127 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 128 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 129 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 130 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 131 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 132 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 133 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 134 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 135 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 136 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 137 + 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 138 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 139 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 140 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 141 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 142 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 143 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 144 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 145 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x9f\xeb\x01\0\ 146 + \x18\0\0\0\0\0\0\0\x1c\x04\0\0\x1c\x04\0\0\xf9\x04\0\0\0\0\0\0\0\0\0\x02\x02\0\ 147 + \0\0\x01\0\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\0\x04\ 148 + \0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\0\0\0\0\0\ 149 + \0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\0\0\0\x20\ 150 + \0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xa3\0\0\0\x03\0\0\x04\x18\0\0\0\xb1\0\ 151 + \0\0\x09\0\0\0\0\0\0\0\xb5\0\0\0\x0b\0\0\0\x40\0\0\0\xc0\0\0\0\x0b\0\0\0\x80\0\ 152 + \0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xc8\0\0\0\0\0\0\x07\0\0\0\0\xd1\0\0\0\0\0\0\ 153 + \x08\x0c\0\0\0\xd7\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\x94\x01\0\0\x03\0\0\x04\ 154 + \x18\0\0\0\x9c\x01\0\0\x0e\0\0\0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xa4\ 155 + \x01\0\0\x0e\0\0\0\xa0\0\0\0\xb0\x01\0\0\0\0\0\x08\x0f\0\0\0\xb6\x01\0\0\0\0\0\ 156 + \x01\x04\0\0\0\x20\0\0\0\xc3\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\ 157 + \0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xc8\x01\0\0\0\0\0\x01\x04\0\0\0\ 158 + \x20\0\0\0\0\0\0\0\0\0\0\x02\x14\0\0\0\x2c\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\ 159 + \0\0\x03\0\0\0\0\0\0\0\x3f\x02\0\0\x15\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x18\0\ 160 + 
\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x13\0\0\0\x44\x02\0\0\x01\0\0\x0c\ 161 + \x16\0\0\0\x90\x02\0\0\x01\0\0\x04\x08\0\0\0\x99\x02\0\0\x19\0\0\0\0\0\0\0\0\0\ 162 + \0\0\0\0\0\x02\x1a\0\0\0\xea\x02\0\0\x06\0\0\x04\x38\0\0\0\x9c\x01\0\0\x0e\0\0\ 163 + \0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xf7\x02\0\0\x1b\0\0\0\xc0\0\0\0\x08\ 164 + \x03\0\0\x15\0\0\0\0\x01\0\0\x11\x03\0\0\x1d\0\0\0\x40\x01\0\0\x1b\x03\0\0\x1e\ 165 + \0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x1c\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\0\0\0\0\ 166 + \0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\x65\x03\0\0\x02\0\0\x04\ 167 + \x08\0\0\0\x73\x03\0\0\x0e\0\0\0\0\0\0\0\x7c\x03\0\0\x0e\0\0\0\x20\0\0\0\x1b\ 168 + \x03\0\0\x03\0\0\x04\x18\0\0\0\x86\x03\0\0\x1b\0\0\0\0\0\0\0\x8e\x03\0\0\x21\0\ 169 + \0\0\x40\0\0\0\x94\x03\0\0\x23\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x22\0\0\0\0\0\ 170 + \0\0\0\0\0\x02\x24\0\0\0\x98\x03\0\0\x01\0\0\x04\x04\0\0\0\xa3\x03\0\0\x0e\0\0\ 171 + \0\0\0\0\0\x0c\x04\0\0\x01\0\0\x04\x04\0\0\0\x15\x04\0\0\x0e\0\0\0\0\0\0\0\0\0\ 172 + \0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\0\x8b\x04\0\0\0\0\0\x0e\x25\ 173 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\0\x9f\x04\ 174 + \0\0\0\0\0\x0e\x27\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\ 175 + \x20\0\0\0\xb5\x04\0\0\0\0\0\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\ 176 + \x1c\0\0\0\x12\0\0\0\x11\0\0\0\xca\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\ 177 + \0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\xe1\x04\0\0\0\0\0\x0e\x2d\0\0\ 178 + \0\x01\0\0\0\xe9\x04\0\0\x04\0\0\x0f\x62\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\0\x28\ 179 + \0\0\0\x23\0\0\0\x0e\0\0\0\x2a\0\0\0\x31\0\0\0\x20\0\0\0\x2c\0\0\0\x51\0\0\0\ 180 + \x11\0\0\0\xf1\x04\0\0\x01\0\0\x0f\x04\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\0\0\0\x62\ 181 + \x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\x74\ 182 + \x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ 183 + 
\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x30\ 184 + \x3a\x30\0\x2f\x77\x2f\x6e\x65\x74\x2d\x6e\x65\x78\x74\x2f\x6b\x65\x72\x6e\x65\ 185 + \x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\ 186 + \x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\ 187 + \x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\ 188 + \x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\ 189 + \x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\ 190 + \x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\ 191 + \x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x75\x6e\x73\x69\x67\x6e\ 192 + \x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x30\x3a\x31\0\x09\x73\x74\ 193 + \x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\ 194 + \x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\ 195 + \x29\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\ 196 + \x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x30\ 197 + \x3a\x32\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\ 198 + \x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\ 199 + \x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\ 200 + \x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\ 201 + \x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\ 202 + \0\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\ 203 + \x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\ 204 + \x52\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\ 205 + \x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\ 206 + 
\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\ 207 + \x2d\x3e\x69\x64\x2c\x20\x6d\x61\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\ 208 + \x70\x2d\x3e\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x29\x3b\0\x7d\0\x62\ 209 + \x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x70\x72\ 210 + \x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x69\x74\x65\ 211 + \x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\ 212 + \x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\ 213 + \x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\x20\x28\x21\x70\x72\x6f\x67\x29\0\ 214 + \x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\x78\0\x09\x61\x75\x78\x20\x3d\x20\ 215 + \x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\ 216 + \x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\ 217 + \x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\ 218 + \x74\x61\x63\x68\x65\x64\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\ 219 + \x5f\x61\x75\x78\0\x61\x74\x74\x61\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\ 220 + \x65\0\x64\x73\x74\x5f\x70\x72\x6f\x67\0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\ 221 + \x62\x74\x66\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\ 222 + \x73\x65\x71\x2c\x20\x22\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\ 223 + \x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\ 224 + \x30\x3a\x35\0\x09\x69\x66\x20\x28\x21\x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\ 225 + \x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x69\x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\ 226 + \x70\x65\x5f\x69\x64\0\x30\0\x73\x74\x72\x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\ 227 + \0\x68\x64\x72\0\x62\x74\x66\x5f\x68\x65\x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\ 228 + \x65\x6e\0\x09\x74\x79\x70\x65\x73\x20\x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\ 229 + 
\x65\x73\x3b\0\x09\x62\x70\x66\x5f\x70\x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\ 230 + \x6b\x65\x72\x6e\x65\x6c\x28\x26\x74\x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\ 231 + \x29\x2c\x20\x74\x79\x70\x65\x73\x20\x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\ 232 + \x09\x73\x74\x72\x20\x3d\x20\x62\x74\x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\ 233 + \x3b\0\x62\x74\x66\x5f\x74\x79\x70\x65\0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\ 234 + \x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\ 235 + \x5f\x52\x45\x41\x44\x28\x74\x2c\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\ 236 + \x30\x3a\x32\x3a\x30\0\x09\x69\x66\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\ 237 + \x3e\x3d\x20\x62\x74\x66\x2d\x3e\x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\ 238 + \x29\0\x09\x72\x65\x74\x75\x72\x6e\x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\ 239 + \x5f\x6f\x66\x66\x3b\0\x30\x3a\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\ 240 + \x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\ 241 + \x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\ 242 + \x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ 243 + \x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\ 244 + \x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\0\0\ 245 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x2d\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\ 246 + \0\x04\0\0\0\x62\0\0\0\x01\0\0\0\x80\x04\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\ 247 + \x61\x74\x6f\x72\x2e\x72\x6f\x64\x61\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x2f\0\0\ 248 + \0\0\0\0\0\0\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\ 249 + \x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\ 250 + \x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\ 251 + \x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ 252 + 
\x61\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\ 253 + \x25\x73\x20\x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 254 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\ 255 + \x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1b\0\0\ 256 + \0\0\0\x79\x11\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\ 257 + \0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\ 258 + \0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x23\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\ 259 + \0\x61\x71\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\ 260 + \0\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\ 261 + \x7b\x1a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\ 262 + \x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x23\0\0\0\xb7\x03\0\0\x0e\0\0\0\ 263 + \xb7\x05\0\0\x18\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ 264 + \0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x3c\x01\0\x01\0\0\0\x42\0\0\ 265 + \0\x7b\0\0\0\x24\x3c\x01\0\x02\0\0\0\x42\0\0\0\xee\0\0\0\x1d\x44\x01\0\x03\0\0\ 266 + \0\x42\0\0\0\x0f\x01\0\0\x06\x4c\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\x40\ 267 + \x01\0\x05\0\0\0\x42\0\0\0\x1a\x01\0\0\x1d\x40\x01\0\x06\0\0\0\x42\0\0\0\x43\ 268 + \x01\0\0\x06\x58\x01\0\x08\0\0\0\x42\0\0\0\x56\x01\0\0\x03\x5c\x01\0\x0f\0\0\0\ 269 + \x42\0\0\0\xdc\x01\0\0\x02\x64\x01\0\x1f\0\0\0\x42\0\0\0\x2a\x02\0\0\x01\x6c\ 270 + \x01\0\0\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\ 271 + \0\x10\0\0\0\x02\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\ 272 + \x28\0\0\0\x08\0\0\0\x3f\x01\0\0\0\0\0\0\x78\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\ 273 + \x88\0\0\0\x0d\0\0\0\xea\0\0\0\0\0\0\0\xa8\0\0\0\x0d\0\0\0\x3f\x01\0\0\0\0\0\0\ 274 + \x1a\0\0\0\x21\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 275 + 
\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\ 276 + \0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\ 277 + \0\0\0\0\0\x0a\0\0\0\x01\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 278 + \0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x6d\ 279 + \x61\x70\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\ 280 + \0\0\0\0\x79\x12\x08\0\0\0\0\0\x15\x02\x3c\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x79\ 281 + \x27\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\ 282 + \0\x07\x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\ 283 + \x31\0\0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x7b\ 284 + \x6a\xc8\xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\xff\0\0\0\0\xb7\x03\0\0\ 285 + \x04\0\0\0\xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\x71\x28\0\0\0\0\0\x79\ 286 + \x78\x30\0\0\0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\ 287 + \0\x61\x11\x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\0\x03\0\0\0\x0f\x13\0\ 288 + \0\0\0\0\0\x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf8\xff\xff\xff\ 289 + \xb7\x02\0\0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\0\0\0\0\x79\xa3\xf8\xff\ 290 + \0\0\0\0\x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf4\xff\xff\xff\ 291 + \xb7\x02\0\0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\x04\0\0\0\x61\xa1\xf4\ 292 + \xff\0\0\0\0\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\x0f\x16\0\0\0\0\0\0\ 293 + \xbf\x69\0\0\0\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\0\0\0\0\x7b\x1a\xe0\ 294 + \xff\0\0\0\0\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x31\0\0\0\0\0\0\x7b\ 295 + \x1a\xe8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\x79\xa1\ 296 + \xc8\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x51\0\0\0\xb7\x03\0\0\x11\0\0\0\ 297 + \xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ 298 + 
\0\0\0\0\x17\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x80\x01\0\x01\0\0\0\x42\0\0\ 299 + \0\x7b\0\0\0\x24\x80\x01\0\x02\0\0\0\x42\0\0\0\x60\x02\0\0\x1f\x88\x01\0\x03\0\ 300 + \0\0\x42\0\0\0\x84\x02\0\0\x06\x94\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\ 301 + \x84\x01\0\x05\0\0\0\x42\0\0\0\x9d\x02\0\0\x0e\xa0\x01\0\x06\0\0\0\x42\0\0\0\ 302 + \x1a\x01\0\0\x1d\x84\x01\0\x07\0\0\0\x42\0\0\0\x43\x01\0\0\x06\xa4\x01\0\x09\0\ 303 + \0\0\x42\0\0\0\xaf\x02\0\0\x03\xa8\x01\0\x11\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\ 304 + \xb0\x01\0\x18\0\0\0\x42\0\0\0\x5a\x03\0\0\x06\x04\x01\0\x1b\0\0\0\x42\0\0\0\0\ 305 + \0\0\0\0\0\0\0\x1c\0\0\0\x42\0\0\0\xab\x03\0\0\x0f\x10\x01\0\x1d\0\0\0\x42\0\0\ 306 + \0\xc0\x03\0\0\x2d\x14\x01\0\x1f\0\0\0\x42\0\0\0\xf7\x03\0\0\x0d\x0c\x01\0\x21\ 307 + \0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x22\0\0\0\x42\0\0\0\xc0\x03\0\0\x02\x14\x01\0\ 308 + \x25\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x28\0\0\0\x42\0\0\0\0\0\0\0\0\0\ 309 + \0\0\x29\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x1e\x04\ 310 + \0\0\x0d\x18\x01\0\x2d\0\0\0\x42\0\0\0\x4c\x04\0\0\x1b\x1c\x01\0\x2e\0\0\0\x42\ 311 + \0\0\0\x4c\x04\0\0\x06\x1c\x01\0\x2f\0\0\0\x42\0\0\0\x6f\x04\0\0\x0d\x24\x01\0\ 312 + \x31\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\xb0\x01\0\x40\0\0\0\x42\0\0\0\x2a\x02\0\0\ 313 + \x01\xc0\x01\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\ 314 + \0\0\0\0\0\x10\0\0\0\x14\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x14\0\0\0\x3e\0\0\0\ 315 + \0\0\0\0\x28\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x30\0\0\0\x08\0\0\0\x3f\x01\0\0\ 316 + \0\0\0\0\x88\0\0\0\x1a\0\0\0\x3e\0\0\0\0\0\0\0\x98\0\0\0\x1a\0\0\0\xea\0\0\0\0\ 317 + \0\0\0\xb0\0\0\0\x1a\0\0\0\x52\x03\0\0\0\0\0\0\xb8\0\0\0\x1a\0\0\0\x56\x03\0\0\ 318 + \0\0\0\0\xc8\0\0\0\x1f\0\0\0\x84\x03\0\0\0\0\0\0\xe0\0\0\0\x20\0\0\0\xea\0\0\0\ 319 + \0\0\0\0\xf8\0\0\0\x20\0\0\0\x3e\0\0\0\0\0\0\0\x20\x01\0\0\x24\0\0\0\x3e\0\0\0\ 320 + \0\0\0\0\x58\x01\0\0\x1a\0\0\0\xea\0\0\0\0\0\0\0\x68\x01\0\0\x20\0\0\0\x46\x04\ 321 + 
\0\0\0\0\0\0\x90\x01\0\0\x1a\0\0\0\x3f\x01\0\0\0\0\0\0\xa0\x01\0\0\x1a\0\0\0\ 322 + \x87\x04\0\0\0\0\0\0\xa8\x01\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x42\0\0\ 323 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 324 + \0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\ 325 + \0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x1a\0\ 326 + \0\0\x01\0\0\0\0\0\0\0\x13\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\ 327 + \0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\ 328 + \0\0\0\0"; 329 + opts.insns_sz = 2184; 330 + opts.insns = (void *)"\ 331 + \xbf\x16\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\x78\xff\xff\xff\xb7\x02\0\ 332 + \0\x88\0\0\0\xb7\x03\0\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x05\0\x14\0\0\0\0\0\x61\ 333 + \xa1\x78\xff\0\0\0\0\xd5\x01\x01\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x7c\xff\ 334 + \0\0\0\0\xd5\x01\x01\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x80\xff\0\0\0\0\xd5\ 335 + \x01\x01\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa1\x84\xff\0\0\0\0\xd5\x01\x01\0\0\ 336 + \0\0\0\x85\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x01\0\0\0\0\ 337 + \0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xbf\x70\0\0\ 338 + \0\0\0\0\x95\0\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 339 + \x48\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\ 340 + \0\0\x44\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\ 341 + \0\0\0\0\x38\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\ 342 + \x18\x61\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\ 343 + \0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\ 344 + \xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xd4\xff\0\0\0\0\x63\x7a\x78\xff\0\0\0\0\ 345 + \x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x0e\0\0\x63\x01\0\0\0\ 346 + 
\0\0\0\x61\x60\x20\0\0\0\0\0\x15\0\x03\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 347 + \x5c\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\ 348 + \0\x50\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\ 349 + \xc5\x07\xc3\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x63\x71\0\0\0\0\0\ 350 + \0\x79\x63\x18\0\0\0\0\0\x15\x03\x04\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x98\ 351 + \x0e\0\0\xb7\x02\0\0\x62\0\0\0\x85\0\0\0\x94\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\ 352 + \0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\x63\x01\0\ 353 + \0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 354 + \x10\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x98\x0e\0\0\x18\ 355 + \x61\0\0\0\0\0\0\0\0\0\0\x18\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\0\0\ 356 + \x18\x62\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\xa6\0\ 357 + \0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xa3\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\0\ 358 + \0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\x63\x01\0\0\ 359 + \0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\xb7\x03\ 360 + \0\0\x04\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x96\xff\0\0\0\0\ 361 + \x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x11\0\ 362 + \0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\x18\x61\0\0\0\0\ 363 + \0\0\0\0\0\0\x70\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x40\ 364 + \x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\ 365 + \0\0\0\0\0\0\0\0\0\x48\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc8\x11\0\0\x7b\x01\ 366 + \0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\ 367 + \0\xe8\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x18\x61\ 368 + \0\0\0\0\0\0\0\0\0\0\xe0\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\ 369 + 
\x61\0\0\0\0\0\0\0\0\0\0\x80\x11\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\ 370 + \x18\x61\0\0\0\0\0\0\0\0\0\0\x84\x11\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\ 371 + \0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\xa0\x78\ 372 + \xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x11\0\0\x63\x01\0\0\0\0\0\0\x18\ 373 + \x61\0\0\0\0\0\0\0\0\0\0\xf8\x11\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\x0c\0\0\ 374 + \0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x60\xff\ 375 + \0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x68\x11\0\0\x63\x70\x6c\0\0\0\0\0\x77\x07\ 376 + \0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\0\0\0\0\ 377 + \0\0\0\0\x68\x11\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\ 378 + \0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd8\x11\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\ 379 + \0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x4e\xff\0\0\0\0\x63\ 380 + \x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\x18\x61\0\0\0\0\0\ 381 + \0\0\0\0\0\x10\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x18\x12\ 382 + \0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\ 383 + \0\0\0\0\0\0\0\x28\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x50\x17\0\0\x7b\x01\0\0\ 384 + \0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 385 + \x60\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd0\x15\0\0\x18\ 386 + \x61\0\0\0\0\0\0\0\0\0\0\x80\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ 387 + \0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x17\0\0\x7b\x01\0\0\0\0\0\0\x61\ 388 + \x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x17\0\0\x63\x01\0\0\0\0\0\0\ 389 + \x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x1c\x17\0\0\x63\x01\0\0\0\0\ 390 + \0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\x7b\x01\0\0\ 391 + \0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x48\x17\0\0\x63\ 392 + 
\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x17\0\0\xb7\x02\0\0\x12\0\0\0\ 393 + \xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\ 394 + \0\0\xc5\x07\x17\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\x63\x70\x6c\ 395 + \0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\0\0\0\ 396 + \x18\x62\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\ 397 + \0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x70\x17\0\0\x61\x01\0\0\0\0\ 398 + \0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x05\ 399 + \xff\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\xd5\x01\x02\0\0\0\ 400 + \0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\0\0\0\0\x63\x06\ 401 + \x28\0\0\0\0\0\x61\xa0\x84\xff\0\0\0\0\x63\x06\x2c\0\0\0\0\0\x18\x61\0\0\0\0\0\ 402 + \0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x18\0\0\0\0\0\xb7\0\0\0\0\0\0\0\ 403 + \x95\0\0\0\0\0\0\0"; 404 + err = bpf_load_and_run(&opts); 405 + if (err < 0) 406 + return err; 407 + skel->rodata = 408 + mmap(skel->rodata, 4096, PROT_READ, MAP_SHARED | MAP_FIXED, 409 + skel->maps.rodata.map_fd, 0); 410 + return 0; 411 + } 412 + 413 + static inline struct iterators_bpf * 414 + iterators_bpf__open_and_load(void) 415 + { 416 + struct iterators_bpf *skel; 417 + 418 + skel = iterators_bpf__open(); 419 + if (!skel) 420 + return NULL; 421 + if (iterators_bpf__load(skel)) { 422 + iterators_bpf__destroy(skel); 423 + return NULL; 424 + } 425 + return skel; 426 + } 427 + 428 + #endif /* __ITERATORS_BPF_SKEL_H__ */
-412
kernel/bpf/preload/iterators/iterators.skel.h
··· 1 - /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ 2 - 3 - /* THIS FILE IS AUTOGENERATED! */ 4 - #ifndef __ITERATORS_BPF_SKEL_H__ 5 - #define __ITERATORS_BPF_SKEL_H__ 6 - 7 - #include <stdlib.h> 8 - #include <bpf/libbpf.h> 9 - 10 - struct iterators_bpf { 11 - struct bpf_object_skeleton *skeleton; 12 - struct bpf_object *obj; 13 - struct { 14 - struct bpf_map *rodata; 15 - } maps; 16 - struct { 17 - struct bpf_program *dump_bpf_map; 18 - struct bpf_program *dump_bpf_prog; 19 - } progs; 20 - struct { 21 - struct bpf_link *dump_bpf_map; 22 - struct bpf_link *dump_bpf_prog; 23 - } links; 24 - struct iterators_bpf__rodata { 25 - char dump_bpf_map____fmt[35]; 26 - char dump_bpf_map____fmt_1[14]; 27 - char dump_bpf_prog____fmt[32]; 28 - char dump_bpf_prog____fmt_2[17]; 29 - } *rodata; 30 - }; 31 - 32 - static void 33 - iterators_bpf__destroy(struct iterators_bpf *obj) 34 - { 35 - if (!obj) 36 - return; 37 - if (obj->skeleton) 38 - bpf_object__destroy_skeleton(obj->skeleton); 39 - free(obj); 40 - } 41 - 42 - static inline int 43 - iterators_bpf__create_skeleton(struct iterators_bpf *obj); 44 - 45 - static inline struct iterators_bpf * 46 - iterators_bpf__open_opts(const struct bpf_object_open_opts *opts) 47 - { 48 - struct iterators_bpf *obj; 49 - 50 - obj = (struct iterators_bpf *)calloc(1, sizeof(*obj)); 51 - if (!obj) 52 - return NULL; 53 - if (iterators_bpf__create_skeleton(obj)) 54 - goto err; 55 - if (bpf_object__open_skeleton(obj->skeleton, opts)) 56 - goto err; 57 - 58 - return obj; 59 - err: 60 - iterators_bpf__destroy(obj); 61 - return NULL; 62 - } 63 - 64 - static inline struct iterators_bpf * 65 - iterators_bpf__open(void) 66 - { 67 - return iterators_bpf__open_opts(NULL); 68 - } 69 - 70 - static inline int 71 - iterators_bpf__load(struct iterators_bpf *obj) 72 - { 73 - return bpf_object__load_skeleton(obj->skeleton); 74 - } 75 - 76 - static inline struct iterators_bpf * 77 - iterators_bpf__open_and_load(void) 78 - { 79 - struct iterators_bpf 
*obj; 80 - 81 - obj = iterators_bpf__open(); 82 - if (!obj) 83 - return NULL; 84 - if (iterators_bpf__load(obj)) { 85 - iterators_bpf__destroy(obj); 86 - return NULL; 87 - } 88 - return obj; 89 - } 90 - 91 - static inline int 92 - iterators_bpf__attach(struct iterators_bpf *obj) 93 - { 94 - return bpf_object__attach_skeleton(obj->skeleton); 95 - } 96 - 97 - static inline void 98 - iterators_bpf__detach(struct iterators_bpf *obj) 99 - { 100 - return bpf_object__detach_skeleton(obj->skeleton); 101 - } 102 - 103 - static inline int 104 - iterators_bpf__create_skeleton(struct iterators_bpf *obj) 105 - { 106 - struct bpf_object_skeleton *s; 107 - 108 - s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s)); 109 - if (!s) 110 - return -1; 111 - obj->skeleton = s; 112 - 113 - s->sz = sizeof(*s); 114 - s->name = "iterators_bpf"; 115 - s->obj = &obj->obj; 116 - 117 - /* maps */ 118 - s->map_cnt = 1; 119 - s->map_skel_sz = sizeof(*s->maps); 120 - s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz); 121 - if (!s->maps) 122 - goto err; 123 - 124 - s->maps[0].name = "iterator.rodata"; 125 - s->maps[0].map = &obj->maps.rodata; 126 - s->maps[0].mmaped = (void **)&obj->rodata; 127 - 128 - /* programs */ 129 - s->prog_cnt = 2; 130 - s->prog_skel_sz = sizeof(*s->progs); 131 - s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz); 132 - if (!s->progs) 133 - goto err; 134 - 135 - s->progs[0].name = "dump_bpf_map"; 136 - s->progs[0].prog = &obj->progs.dump_bpf_map; 137 - s->progs[0].link = &obj->links.dump_bpf_map; 138 - 139 - s->progs[1].name = "dump_bpf_prog"; 140 - s->progs[1].prog = &obj->progs.dump_bpf_prog; 141 - s->progs[1].link = &obj->links.dump_bpf_prog; 142 - 143 - s->data_sz = 7176; 144 - s->data = (void *)"\ 145 - \x7f\x45\x4c\x46\x02\x01\x01\0\0\0\0\0\0\0\0\0\x01\0\xf7\0\x01\0\0\0\0\0\0\0\0\ 146 - \0\0\0\0\0\0\0\0\0\0\0\x48\x18\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\x40\0\x0f\0\ 147 - 
\x0e\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\ 148 - \x1a\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\ 149 - \x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\0\0\0\0\0\0\0\0\0\ 150 - \0\0\0\xb7\x03\0\0\x23\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x71\0\ 151 - \0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\0\0\0\0\0\ 152 - \x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\x7b\x1a\xf8\ 153 - \xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\ 154 - \0\x18\x02\0\0\x23\0\0\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x0e\0\0\0\xb7\x05\0\0\x18\ 155 - \0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x79\x12\0\0\0\0\ 156 - \0\0\x79\x26\0\0\0\0\0\0\x79\x11\x08\0\0\0\0\0\x15\x01\x3b\0\0\0\0\0\x79\x17\0\ 157 - \0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\ 158 - \x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\x31\0\0\0\0\0\0\0\0\0\ 159 - \0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x7b\x6a\xc8\ 160 - \xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\xff\0\0\0\0\xb7\x03\0\0\x04\0\0\0\ 161 - \xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\x71\x28\0\0\0\0\0\x79\x78\x30\0\0\ 162 - \0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\0\x61\x11\ 163 - \x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\0\x03\0\0\0\x0f\x13\0\0\0\0\0\0\ 164 - \x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf8\xff\xff\xff\xb7\x02\0\ 165 - \0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\0\0\0\0\x79\xa3\xf8\xff\0\0\0\0\ 166 - \x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf4\xff\xff\xff\xb7\x02\0\ 167 - \0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\x04\0\0\0\x61\xa1\xf4\xff\0\0\0\0\ 168 - \x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\x0f\x16\0\0\0\0\0\0\xbf\x69\0\0\0\ 169 - \0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\0\0\0\0\x7b\x1a\xe0\xff\0\0\0\0\ 170 - 
\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x31\0\0\0\0\0\0\x7b\x1a\xe8\xff\ 171 - \0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\x79\xa1\xc8\xff\0\0\0\ 172 - \0\x18\x02\0\0\x51\0\0\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x11\0\0\0\xb7\x05\0\0\x20\ 173 - \0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x20\x20\x69\x64\ 174 - \x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\ 175 - \x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\ 176 - \x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\ 177 - \x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\ 178 - \x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x0a\0\x47\x50\x4c\0\x9f\ 179 - \xeb\x01\0\x18\0\0\0\0\0\0\0\x1c\x04\0\0\x1c\x04\0\0\x09\x05\0\0\0\0\0\0\0\0\0\ 180 - \x02\x02\0\0\0\x01\0\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\ 181 - \0\0\0\x04\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\ 182 - \0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\ 183 - \0\0\0\x20\0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xaf\0\0\0\x03\0\0\x04\x18\0\ 184 - \0\0\xbd\0\0\0\x09\0\0\0\0\0\0\0\xc1\0\0\0\x0b\0\0\0\x40\0\0\0\xcc\0\0\0\x0b\0\ 185 - \0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xd4\0\0\0\0\0\0\x07\0\0\0\0\xdd\0\0\ 186 - \0\0\0\0\x08\x0c\0\0\0\xe3\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\xa4\x01\0\0\x03\ 187 - \0\0\x04\x18\0\0\0\xac\x01\0\0\x0e\0\0\0\0\0\0\0\xaf\x01\0\0\x11\0\0\0\x20\0\0\ 188 - \0\xb4\x01\0\0\x0e\0\0\0\xa0\0\0\0\xc0\x01\0\0\0\0\0\x08\x0f\0\0\0\xc6\x01\0\0\ 189 - \0\0\0\x01\x04\0\0\0\x20\0\0\0\xd3\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\ 190 - \0\0\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xd8\x01\0\0\0\0\0\x01\x04\ 191 - \0\0\0\x20\0\0\0\0\0\0\0\0\0\0\x02\x14\0\0\0\x3c\x02\0\0\x02\0\0\x04\x10\0\0\0\ 192 - \x13\0\0\0\x03\0\0\0\0\0\0\0\x4f\x02\0\0\x15\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\ 193 - 
\x18\0\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x13\0\0\0\x54\x02\0\0\x01\0\ 194 - \0\x0c\x16\0\0\0\xa0\x02\0\0\x01\0\0\x04\x08\0\0\0\xa9\x02\0\0\x19\0\0\0\0\0\0\ 195 - \0\0\0\0\0\0\0\0\x02\x1a\0\0\0\xfa\x02\0\0\x06\0\0\x04\x38\0\0\0\xac\x01\0\0\ 196 - \x0e\0\0\0\0\0\0\0\xaf\x01\0\0\x11\0\0\0\x20\0\0\0\x07\x03\0\0\x1b\0\0\0\xc0\0\ 197 - \0\0\x18\x03\0\0\x15\0\0\0\0\x01\0\0\x21\x03\0\0\x1d\0\0\0\x40\x01\0\0\x2b\x03\ 198 - \0\0\x1e\0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x1c\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\ 199 - \0\0\0\0\0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\x75\x03\0\0\x02\0\ 200 - \0\x04\x08\0\0\0\x83\x03\0\0\x0e\0\0\0\0\0\0\0\x8c\x03\0\0\x0e\0\0\0\x20\0\0\0\ 201 - \x2b\x03\0\0\x03\0\0\x04\x18\0\0\0\x96\x03\0\0\x1b\0\0\0\0\0\0\0\x9e\x03\0\0\ 202 - \x21\0\0\0\x40\0\0\0\xa4\x03\0\0\x23\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x22\0\0\ 203 - \0\0\0\0\0\0\0\0\x02\x24\0\0\0\xa8\x03\0\0\x01\0\0\x04\x04\0\0\0\xb3\x03\0\0\ 204 - \x0e\0\0\0\0\0\0\0\x1c\x04\0\0\x01\0\0\x04\x04\0\0\0\x25\x04\0\0\x0e\0\0\0\0\0\ 205 - \0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\0\x9b\x04\0\0\0\0\0\ 206 - \x0e\x25\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\0\ 207 - \xaf\x04\0\0\0\0\0\x0e\x27\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\ 208 - \x12\0\0\0\x20\0\0\0\xc5\x04\0\0\0\0\0\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\ 209 - \0\0\0\0\x1c\0\0\0\x12\0\0\0\x11\0\0\0\xda\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\ 210 - \0\0\0\0\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\xf1\x04\0\0\0\0\0\x0e\ 211 - \x2d\0\0\0\x01\0\0\0\xf9\x04\0\0\x04\0\0\x0f\0\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\ 212 - \0\x28\0\0\0\x23\0\0\0\x0e\0\0\0\x2a\0\0\0\x31\0\0\0\x20\0\0\0\x2c\0\0\0\x51\0\ 213 - \0\0\x11\0\0\0\x01\x05\0\0\x01\0\0\x0f\0\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\0\0\0\ 214 - \x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\ 215 - \x74\x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\ 216 - 
\x70\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\ 217 - \x30\x3a\x30\0\x2f\x68\x6f\x6d\x65\x2f\x61\x6c\x72\x75\x61\x2f\x62\x75\x69\x6c\ 218 - \x64\x2f\x6c\x69\x6e\x75\x78\x2f\x6b\x65\x72\x6e\x65\x6c\x2f\x62\x70\x66\x2f\ 219 - \x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2f\x69\ 220 - \x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\x63\0\x09\x73\x74\x72\x75\ 221 - \x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\x73\x65\x71\x20\x3d\x20\ 222 - \x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x3b\0\x62\x70\x66\x5f\ 223 - \x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\x73\x65\x73\x73\x69\x6f\ 224 - \x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\x65\x71\x5f\x66\x69\x6c\ 225 - \x65\0\x5f\x5f\x75\x36\x34\0\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\x20\x75\x6e\ 226 - \x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x30\x3a\x31\0\x09\x73\x74\x72\x75\ 227 - \x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\x20\x63\ 228 - \x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\x29\0\ 229 - \x30\x3a\x32\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\ 230 - \x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\ 231 - \x3b\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\x29\ 232 - \0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\ 233 - \x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\ 234 - \x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x5c\ 235 - \x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\0\ 236 - \x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\ 237 - \x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\ 238 - \x52\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\ 239 - 
\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\ 240 - \x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\ 241 - \x2d\x3e\x69\x64\x2c\x20\x6d\x61\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\ 242 - \x70\x2d\x3e\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x29\x3b\0\x7d\0\x62\ 243 - \x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x70\x72\ 244 - \x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x69\x74\x65\ 245 - \x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\ 246 - \x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\ 247 - \x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\x20\x28\x21\x70\x72\x6f\x67\x29\0\ 248 - \x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\x78\0\x09\x61\x75\x78\x20\x3d\x20\ 249 - \x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\ 250 - \x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\ 251 - \x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\ 252 - \x74\x61\x63\x68\x65\x64\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\ 253 - \x5f\x61\x75\x78\0\x61\x74\x74\x61\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\ 254 - \x65\0\x64\x73\x74\x5f\x70\x72\x6f\x67\0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\ 255 - \x62\x74\x66\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\ 256 - \x73\x65\x71\x2c\x20\x22\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\ 257 - \x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\ 258 - \x30\x3a\x35\0\x09\x69\x66\x20\x28\x21\x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\ 259 - \x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x69\x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\ 260 - \x70\x65\x5f\x69\x64\0\x30\0\x73\x74\x72\x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\ 261 - \0\x68\x64\x72\0\x62\x74\x66\x5f\x68\x65\x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\ 262 - 
\x65\x6e\0\x09\x74\x79\x70\x65\x73\x20\x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\ 263 - \x65\x73\x3b\0\x09\x62\x70\x66\x5f\x70\x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\ 264 - \x6b\x65\x72\x6e\x65\x6c\x28\x26\x74\x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\ 265 - \x29\x2c\x20\x74\x79\x70\x65\x73\x20\x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\ 266 - \x09\x73\x74\x72\x20\x3d\x20\x62\x74\x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\ 267 - \x3b\0\x62\x74\x66\x5f\x74\x79\x70\x65\0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\ 268 - \x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\ 269 - \x5f\x52\x45\x41\x44\x28\x74\x2c\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\ 270 - \x30\x3a\x32\x3a\x30\0\x09\x69\x66\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\ 271 - \x3e\x3d\x20\x62\x74\x66\x2d\x3e\x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\ 272 - \x29\0\x09\x72\x65\x74\x75\x72\x6e\x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\ 273 - \x5f\x6f\x66\x66\x3b\0\x30\x3a\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\ 274 - \x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\ 275 - \x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\ 276 - \x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ 277 - \x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\ 278 - \x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\x9f\ 279 - \xeb\x01\0\x20\0\0\0\0\0\0\0\x24\0\0\0\x24\0\0\0\x44\x02\0\0\x68\x02\0\0\xa4\ 280 - \x01\0\0\x08\0\0\0\x31\0\0\0\x01\0\0\0\0\0\0\0\x07\0\0\0\x62\x02\0\0\x01\0\0\0\ 281 - \0\0\0\0\x17\0\0\0\x10\0\0\0\x31\0\0\0\x09\0\0\0\0\0\0\0\x42\0\0\0\x87\0\0\0\ 282 - \x1e\x40\x01\0\x08\0\0\0\x42\0\0\0\x87\0\0\0\x24\x40\x01\0\x10\0\0\0\x42\0\0\0\ 283 - \xfe\0\0\0\x1d\x48\x01\0\x18\0\0\0\x42\0\0\0\x1f\x01\0\0\x06\x50\x01\0\x20\0\0\ 284 - \0\x42\0\0\0\x2e\x01\0\0\x1d\x44\x01\0\x28\0\0\0\x42\0\0\0\x53\x01\0\0\x06\x5c\ 285 - 
\x01\0\x38\0\0\0\x42\0\0\0\x66\x01\0\0\x03\x60\x01\0\x70\0\0\0\x42\0\0\0\xec\ 286 - \x01\0\0\x02\x68\x01\0\xf0\0\0\0\x42\0\0\0\x3a\x02\0\0\x01\x70\x01\0\x62\x02\0\ 287 - \0\x1a\0\0\0\0\0\0\0\x42\0\0\0\x87\0\0\0\x1e\x84\x01\0\x08\0\0\0\x42\0\0\0\x87\ 288 - \0\0\0\x24\x84\x01\0\x10\0\0\0\x42\0\0\0\x70\x02\0\0\x1f\x8c\x01\0\x18\0\0\0\ 289 - \x42\0\0\0\x94\x02\0\0\x06\x98\x01\0\x20\0\0\0\x42\0\0\0\xad\x02\0\0\x0e\xa4\ 290 - \x01\0\x28\0\0\0\x42\0\0\0\x2e\x01\0\0\x1d\x88\x01\0\x30\0\0\0\x42\0\0\0\x53\ 291 - \x01\0\0\x06\xa8\x01\0\x40\0\0\0\x42\0\0\0\xbf\x02\0\0\x03\xac\x01\0\x80\0\0\0\ 292 - \x42\0\0\0\x2f\x03\0\0\x02\xb4\x01\0\xb8\0\0\0\x42\0\0\0\x6a\x03\0\0\x06\x08\ 293 - \x01\0\xd0\0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\xd8\0\0\0\x42\0\0\0\xbb\x03\0\0\x0f\ 294 - \x14\x01\0\xe0\0\0\0\x42\0\0\0\xd0\x03\0\0\x2d\x18\x01\0\xf0\0\0\0\x42\0\0\0\ 295 - \x07\x04\0\0\x0d\x10\x01\0\0\x01\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x08\x01\0\0\x42\ 296 - \0\0\0\xd0\x03\0\0\x02\x18\x01\0\x20\x01\0\0\x42\0\0\0\x2e\x04\0\0\x0d\x1c\x01\ 297 - \0\x38\x01\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x40\x01\0\0\x42\0\0\0\x2e\x04\0\0\x0d\ 298 - \x1c\x01\0\x58\x01\0\0\x42\0\0\0\x2e\x04\0\0\x0d\x1c\x01\0\x60\x01\0\0\x42\0\0\ 299 - \0\x5c\x04\0\0\x1b\x20\x01\0\x68\x01\0\0\x42\0\0\0\x5c\x04\0\0\x06\x20\x01\0\ 300 - \x70\x01\0\0\x42\0\0\0\x7f\x04\0\0\x0d\x28\x01\0\x78\x01\0\0\x42\0\0\0\0\0\0\0\ 301 - \0\0\0\0\x80\x01\0\0\x42\0\0\0\x2f\x03\0\0\x02\xb4\x01\0\xf8\x01\0\0\x42\0\0\0\ 302 - \x3a\x02\0\0\x01\xc4\x01\0\x10\0\0\0\x31\0\0\0\x07\0\0\0\0\0\0\0\x02\0\0\0\x3e\ 303 - \0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x02\0\0\0\xfa\0\ 304 - \0\0\0\0\0\0\x20\0\0\0\x08\0\0\0\x2a\x01\0\0\0\0\0\0\x70\0\0\0\x0d\0\0\0\x3e\0\ 305 - \0\0\0\0\0\0\x80\0\0\0\x0d\0\0\0\xfa\0\0\0\0\0\0\0\xa0\0\0\0\x0d\0\0\0\x2a\x01\ 306 - \0\0\0\0\0\0\x62\x02\0\0\x12\0\0\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\ 307 - \0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x14\0\0\0\xfa\0\0\0\0\0\0\0\x20\0\0\0\ 308 - 
\x18\0\0\0\x3e\0\0\0\0\0\0\0\x28\0\0\0\x08\0\0\0\x2a\x01\0\0\0\0\0\0\x80\0\0\0\ 309 - \x1a\0\0\0\x3e\0\0\0\0\0\0\0\x90\0\0\0\x1a\0\0\0\xfa\0\0\0\0\0\0\0\xa8\0\0\0\ 310 - \x1a\0\0\0\x62\x03\0\0\0\0\0\0\xb0\0\0\0\x1a\0\0\0\x66\x03\0\0\0\0\0\0\xc0\0\0\ 311 - \0\x1f\0\0\0\x94\x03\0\0\0\0\0\0\xd8\0\0\0\x20\0\0\0\xfa\0\0\0\0\0\0\0\xf0\0\0\ 312 - \0\x20\0\0\0\x3e\0\0\0\0\0\0\0\x18\x01\0\0\x24\0\0\0\x3e\0\0\0\0\0\0\0\x50\x01\ 313 - \0\0\x1a\0\0\0\xfa\0\0\0\0\0\0\0\x60\x01\0\0\x20\0\0\0\x56\x04\0\0\0\0\0\0\x88\ 314 - \x01\0\0\x1a\0\0\0\x2a\x01\0\0\0\0\0\0\x98\x01\0\0\x1a\0\0\0\x97\x04\0\0\0\0\0\ 315 - \0\xa0\x01\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 316 - \0\0\0\0\0\0\0\x91\0\0\0\x04\0\xf1\xff\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xe6\0\0\ 317 - \0\0\0\x02\0\x70\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xd8\0\0\0\0\0\x02\0\xf0\0\0\0\0\ 318 - \0\0\0\0\0\0\0\0\0\0\0\xdf\0\0\0\0\0\x03\0\x78\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 319 - \xd1\0\0\0\0\0\x03\0\x80\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xca\0\0\0\0\0\x03\0\ 320 - \xf8\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x14\0\0\0\x01\0\x04\0\0\0\0\0\0\0\0\0\x23\ 321 - \0\0\0\0\0\0\0\x04\x01\0\0\x01\0\x04\0\x23\0\0\0\0\0\0\0\x0e\0\0\0\0\0\0\0\x28\ 322 - \0\0\0\x01\0\x04\0\x31\0\0\0\0\0\0\0\x20\0\0\0\0\0\0\0\xed\0\0\0\x01\0\x04\0\ 323 - \x51\0\0\0\0\0\0\0\x11\0\0\0\0\0\0\0\0\0\0\0\x03\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\ 324 - \0\0\0\0\0\0\0\0\0\x03\0\x03\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\ 325 - \x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xc2\0\0\0\x11\0\x05\0\0\0\0\0\0\0\0\0\ 326 - \x04\0\0\0\0\0\0\0\x3d\0\0\0\x12\0\x02\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\x5b\ 327 - \0\0\0\x12\0\x03\0\0\0\0\0\0\0\0\0\x08\x02\0\0\0\0\0\0\x48\0\0\0\0\0\0\0\x01\0\ 328 - \0\0\x0d\0\0\0\xc8\0\0\0\0\0\0\0\x01\0\0\0\x0d\0\0\0\x50\0\0\0\0\0\0\0\x01\0\0\ 329 - \0\x0d\0\0\0\xd0\x01\0\0\0\0\0\0\x01\0\0\0\x0d\0\0\0\xf0\x03\0\0\0\0\0\0\x0a\0\ 330 - \0\0\x0d\0\0\0\xfc\x03\0\0\0\0\0\0\x0a\0\0\0\x0d\0\0\0\x08\x04\0\0\0\0\0\0\x0a\ 331 - 
\0\0\0\x0d\0\0\0\x14\x04\0\0\0\0\0\0\x0a\0\0\0\x0d\0\0\0\x2c\x04\0\0\0\0\0\0\0\ 332 - \0\0\0\x0e\0\0\0\x2c\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x3c\0\0\0\0\0\0\0\0\0\0\0\ 333 - \x0c\0\0\0\x50\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x60\0\0\0\0\0\0\0\0\0\0\0\x0b\0\ 334 - \0\0\x70\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\ 335 - \x90\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xa0\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xb0\0\ 336 - \0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xc0\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xd0\0\0\0\0\ 337 - \0\0\0\0\0\0\0\x0b\0\0\0\xe8\0\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xf8\0\0\0\0\0\0\0\ 338 - \0\0\0\0\x0c\0\0\0\x08\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x18\x01\0\0\0\0\0\0\0\ 339 - \0\0\0\x0c\0\0\0\x28\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x38\x01\0\0\0\0\0\0\0\0\ 340 - \0\0\x0c\0\0\0\x48\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x58\x01\0\0\0\0\0\0\0\0\0\ 341 - \0\x0c\0\0\0\x68\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x78\x01\0\0\0\0\0\0\0\0\0\0\ 342 - \x0c\0\0\0\x88\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x98\x01\0\0\0\0\0\0\0\0\0\0\ 343 - \x0c\0\0\0\xa8\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xb8\x01\0\0\0\0\0\0\0\0\0\0\ 344 - \x0c\0\0\0\xc8\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xd8\x01\0\0\0\0\0\0\0\0\0\0\ 345 - \x0c\0\0\0\xe8\x01\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xf8\x01\0\0\0\0\0\0\0\0\0\0\ 346 - \x0c\0\0\0\x08\x02\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x18\x02\0\0\0\0\0\0\0\0\0\0\ 347 - \x0c\0\0\0\x28\x02\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x38\x02\0\0\0\0\0\0\0\0\0\0\ 348 - \x0c\0\0\0\x48\x02\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x58\x02\0\0\0\0\0\0\0\0\0\0\ 349 - \x0c\0\0\0\x68\x02\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x78\x02\0\0\0\0\0\0\0\0\0\0\ 350 - \x0c\0\0\0\x94\x02\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xa4\x02\0\0\0\0\0\0\0\0\0\0\ 351 - \x0b\0\0\0\xb4\x02\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xc4\x02\0\0\0\0\0\0\0\0\0\0\ 352 - \x0b\0\0\0\xd4\x02\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\xe4\x02\0\0\0\0\0\0\0\0\0\0\ 353 - \x0b\0\0\0\xf4\x02\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x0c\x03\0\0\0\0\0\0\0\0\0\0\ 354 - 
\x0c\0\0\0\x1c\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x2c\x03\0\0\0\0\0\0\0\0\0\0\ 355 - \x0c\0\0\0\x3c\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x4c\x03\0\0\0\0\0\0\0\0\0\0\ 356 - \x0c\0\0\0\x5c\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x6c\x03\0\0\0\0\0\0\0\0\0\0\ 357 - \x0c\0\0\0\x7c\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x8c\x03\0\0\0\0\0\0\0\0\0\0\ 358 - \x0c\0\0\0\x9c\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xac\x03\0\0\0\0\0\0\0\0\0\0\ 359 - \x0c\0\0\0\xbc\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xcc\x03\0\0\0\0\0\0\0\0\0\0\ 360 - \x0c\0\0\0\xdc\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\xec\x03\0\0\0\0\0\0\0\0\0\0\ 361 - \x0c\0\0\0\xfc\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x0c\x04\0\0\0\0\0\0\0\0\0\0\ 362 - \x0c\0\0\0\x1c\x04\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x4d\x4e\x40\x41\x42\x43\x4c\0\ 363 - \x2e\x74\x65\x78\x74\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\x2e\x65\x78\x74\0\x64\ 364 - \x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\ 365 - \x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\ 366 - \x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x2e\x72\x65\x6c\x69\x74\x65\ 367 - \x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\ 368 - \x72\x6f\x67\0\x2e\x72\x65\x6c\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\ 369 - \x67\0\x2e\x6c\x6c\x76\x6d\x5f\x61\x64\x64\x72\x73\x69\x67\0\x6c\x69\x63\x65\ 370 - \x6e\x73\x65\0\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\x63\0\ 371 - \x2e\x73\x74\x72\x74\x61\x62\0\x2e\x73\x79\x6d\x74\x61\x62\0\x2e\x72\x6f\x64\ 372 - \x61\x74\x61\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\0\x4c\x49\x43\x45\x4e\x53\x45\0\ 373 - \x4c\x42\x42\x31\x5f\x37\0\x4c\x42\x42\x31\x5f\x36\0\x4c\x42\x42\x30\x5f\x34\0\ 374 - \x4c\x42\x42\x31\x5f\x33\0\x4c\x42\x42\x30\x5f\x33\0\x64\x75\x6d\x70\x5f\x62\ 375 - \x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x64\x75\x6d\ 376 - \x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\0\0\ 377 - 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 378 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x01\0\0\ 379 - \0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 380 - \0\0\0\0\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x4e\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\ 381 - \0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\ 382 - \0\0\0\0\0\0\0\0\0\0\0\0\0\x6d\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 383 - \0\x40\x01\0\0\0\0\0\0\x08\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\ 384 - \0\0\0\0\0\0\0\xb1\0\0\0\x01\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x48\x03\0\ 385 - \0\0\0\0\0\x62\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 386 - \x89\0\0\0\x01\0\0\0\x03\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xaa\x03\0\0\0\0\0\0\x04\ 387 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xbd\0\0\0\x01\ 388 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xae\x03\0\0\0\0\0\0\x3d\x09\0\0\0\0\0\0\ 389 - \0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x01\0\0\0\0\0\0\0\ 390 - \0\0\0\0\0\0\0\0\0\0\0\0\xeb\x0c\0\0\0\0\0\0\x2c\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\ 391 - \0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xa9\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 392 - \0\0\0\0\0\x18\x11\0\0\0\0\0\0\x98\x01\0\0\0\0\0\0\x0e\0\0\0\x0e\0\0\0\x08\0\0\ 393 - \0\0\0\0\0\x18\0\0\0\0\0\0\0\x4a\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 394 - \0\xb0\x12\0\0\0\0\0\0\x20\0\0\0\0\0\0\0\x08\0\0\0\x02\0\0\0\x08\0\0\0\0\0\0\0\ 395 - \x10\0\0\0\0\0\0\0\x69\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xd0\x12\ 396 - \0\0\0\0\0\0\x20\0\0\0\0\0\0\0\x08\0\0\0\x03\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\ 397 - \0\0\0\0\xb9\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xf0\x12\0\0\0\0\0\ 398 - \0\x50\0\0\0\0\0\0\0\x08\0\0\0\x06\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\ 399 - \x07\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x40\x13\0\0\0\0\0\0\xe0\ 400 - 
-\x03\0\0\0\0\0\0\x08\0\0\0\x07\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x7b\0\
-\0\0\x03\x4c\xff\x6f\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\0\0\0\0\x07\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xa1\0\0\0\x03\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x27\x17\0\0\0\0\0\0\x1a\x01\0\0\0\0\0\0\
-\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0";
-
-	return 0;
-err:
-	bpf_object__destroy_skeleton(s);
-	return -1;
-}
-
-#endif /* __ITERATORS_BPF_SKEL_H__ */
+3 -3
kernel/bpf/trampoline.c
···
 	im = container_of(work, struct bpf_tramp_image, work);
 	bpf_image_ksym_del(&im->ksym);
 	bpf_jit_free_exec(im->image);
-	bpf_jit_uncharge_modmem(1);
+	bpf_jit_uncharge_modmem(PAGE_SIZE);
 	percpu_ref_exit(&im->pcref);
 	kfree_rcu(im, rcu);
 }
···
 	if (!im)
 		goto out;

-	err = bpf_jit_charge_modmem(1);
+	err = bpf_jit_charge_modmem(PAGE_SIZE);
 	if (err)
 		goto out_free_im;
···
 out_free_image:
 	bpf_jit_free_exec(im->image);
 out_uncharge:
-	bpf_jit_uncharge_modmem(1);
+	bpf_jit_uncharge_modmem(PAGE_SIZE);
 out_free_im:
 	kfree(im);
 out:
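With the pack-allocator series, bpf_jit_charge_modmem() switches from counting pages to counting bytes, which is why the trampoline now passes PAGE_SIZE instead of 1. A minimal user-space sketch of such byte-based accounting, with hypothetical names and an arbitrary 1 MiB limit (the kernel uses an atomic counter and a tunable bpf_jit_limit):

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's modmem accounting state. */
static long modmem_bytes;                  /* charged so far            */
static const long modmem_limit = 1 << 20;  /* bpf_jit_limit analogue    */

/* Charge 'size' bytes; fail with -1 (kernel: an errno) when over limit. */
static int jit_charge_modmem(long size)
{
	if (modmem_bytes + size > modmem_limit)
		return -1;
	modmem_bytes += size;
	return 0;
}

static void jit_uncharge_modmem(long size)
{
	modmem_bytes -= size;
}
```

Charging in bytes lets a later huge-page pack allocator account sub-page JIT images precisely instead of rounding every image up to a page.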
+25 -11
kernel/bpf/verifier.c
···
 static const char *reg_type_str(struct bpf_verifier_env *env,
 				enum bpf_reg_type type)
 {
-	char postfix[16] = {0}, prefix[16] = {0};
+	char postfix[16] = {0}, prefix[32] = {0};
 	static const char * const str[] = {
 		[NOT_INIT]		= "?",
 		[SCALAR_VALUE]		= "inv",
···
 	}

 	if (type & MEM_RDONLY)
-		strncpy(prefix, "rdonly_", 16);
+		strncpy(prefix, "rdonly_", 32);
 	if (type & MEM_ALLOC)
-		strncpy(prefix, "alloc_", 16);
+		strncpy(prefix, "alloc_", 32);
+	if (type & MEM_USER)
+		strncpy(prefix, "user_", 32);

 	snprintf(env->type_str_buf, TYPE_STR_BUF_LEN, "%s%s%s",
 		 prefix, str[base_type(type)], postfix);
···
 static void mark_btf_ld_reg(struct bpf_verifier_env *env,
 			    struct bpf_reg_state *regs, u32 regno,
 			    enum bpf_reg_type reg_type,
-			    struct btf *btf, u32 btf_id)
+			    struct btf *btf, u32 btf_id,
+			    enum bpf_type_flag flag)
 {
 	if (reg_type == SCALAR_VALUE) {
 		mark_reg_unknown(env, regs, regno);
 		return;
 	}
 	mark_reg_known_zero(env, regs, regno);
-	regs[regno].type = PTR_TO_BTF_ID;
+	regs[regno].type = PTR_TO_BTF_ID | flag;
 	regs[regno].btf = btf;
 	regs[regno].btf_id = btf_id;
 }
···
 	struct bpf_reg_state *reg = regs + regno;
 	const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
 	const char *tname = btf_name_by_offset(reg->btf, t->name_off);
+	enum bpf_type_flag flag = 0;
 	u32 btf_id;
 	int ret;
···
 		return -EACCES;
 	}

+	if (reg->type & MEM_USER) {
+		verbose(env,
+			"R%d is ptr_%s access user memory: off=%d\n",
+			regno, tname, off);
+		return -EACCES;
+	}
+
 	if (env->ops->btf_struct_access) {
 		ret = env->ops->btf_struct_access(&env->log, reg->btf, t,
-						  off, size, atype, &btf_id);
+						  off, size, atype, &btf_id, &flag);
 	} else {
 		if (atype != BPF_READ) {
 			verbose(env, "only read is supported\n");
···
 		}

 		ret = btf_struct_access(&env->log, reg->btf, t, off, size,
-					atype, &btf_id);
+					atype, &btf_id, &flag);
 	}

 	if (ret < 0)
 		return ret;

 	if (atype == BPF_READ && value_regno >= 0)
-		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id);
+		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);

 	return 0;
 }
···
 {
 	struct bpf_reg_state *reg = regs + regno;
 	struct bpf_map *map = reg->map_ptr;
+	enum bpf_type_flag flag = 0;
 	const struct btf_type *t;
 	const char *tname;
 	u32 btf_id;
···
 		return -EACCES;
 	}

-	ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &btf_id);
+	ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &btf_id, &flag);
 	if (ret < 0)
 		return ret;

 	if (value_regno >= 0)
-		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, btf_id);
+		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, btf_id, flag);

 	return 0;
 }
···
 	if (err < 0)
 		return err;

-	err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf, &btf_id);
+	err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf,
+			       &btf_id);
 	if (err)
 		verbose_linfo(env, insn_idx, "; ");
 	if (!err && t == BPF_READ && value_regno >= 0) {
···

 	prog->jited = 1;
 	prog->bpf_func = func[0]->bpf_func;
+	prog->jited_len = func[0]->jited_len;
 	prog->aux->func = func;
 	prog->aux->func_cnt = env->subprog_cnt;
 	bpf_prog_jit_attempt_done(prog);
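The verifier changes above thread a bpf_type_flag out-parameter through btf_struct_access() so a BTF pointer load can come back tagged MEM_USER, and mark_btf_ld_reg() then ORs that flag into the register type. A rough sketch of how such composed types decompose, with illustrative flag values rather than the kernel's actual bit layout:

```c
#include <assert.h>

/* Illustrative encoding: low bits hold the base type, high bits the
 * modifier flags (the kernel reserves BPF_BASE_TYPE_BITS for the base). */
enum base { NOT_INIT, SCALAR_VALUE, PTR_TO_BTF_ID };

#define TYPE_FLAG_SHIFT 8
#define MEM_RDONLY (1u << (TYPE_FLAG_SHIFT + 0))
#define MEM_USER   (1u << (TYPE_FLAG_SHIFT + 1))
#define TYPE_FLAG_MASK (~0u << TYPE_FLAG_SHIFT)

/* Strip the flags to recover the base register type. */
static unsigned int base_type(unsigned int type)
{
	return type & ~TYPE_FLAG_MASK;
}

/* Keep only the modifier flags. */
static unsigned int type_flag(unsigned int type)
{
	return type & TYPE_FLAG_MASK;
}
```

A register typed `PTR_TO_BTF_ID | MEM_USER` still compares equal to PTR_TO_BTF_ID through base_type(), while the MEM_USER bit lets check_ptr_to_btf_access() reject direct dereferences of user memory.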
+2
kernel/trace/bpf_trace.c
···
 		return &bpf_get_task_stack_proto;
 	case BPF_FUNC_copy_from_user:
 		return prog->aux->sleepable ? &bpf_copy_from_user_proto : NULL;
+	case BPF_FUNC_copy_from_user_task:
+		return prog->aux->sleepable ? &bpf_copy_from_user_task_proto : NULL;
 	case BPF_FUNC_snprintf_btf:
 		return &bpf_snprintf_btf_proto;
 	case BPF_FUNC_per_cpu_ptr:
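bpf_copy_from_user_task() reads another task's memory and may fault and sleep, so like bpf_copy_from_user() its proto is only handed out to sleepable programs. A toy model of that gating pattern, with the types reduced to the bare minimum:

```c
#include <assert.h>
#include <stddef.h>

struct func_proto { const char *name; };

static const struct func_proto copy_from_user_proto      = { "bpf_copy_from_user" };
static const struct func_proto copy_from_user_task_proto = { "bpf_copy_from_user_task" };

enum func_id { FUNC_copy_from_user, FUNC_copy_from_user_task };

/* Return the helper proto, or NULL when the program cannot sleep:
 * the verifier then rejects the call outright. */
static const struct func_proto *get_proto(enum func_id id, int sleepable)
{
	switch (id) {
	case FUNC_copy_from_user:
		return sleepable ? &copy_from_user_proto : NULL;
	case FUNC_copy_from_user_task:
		return sleepable ? &copy_from_user_task_proto : NULL;
	}
	return NULL;
}
```

This is the same shape as the switch in bpf_tracing_func_proto() above: availability of a helper is decided per program, not globally.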
+10 -2
lib/Kconfig.debug
···
 config DEBUG_INFO_DWARF5
 	bool "Generate DWARF Version 5 debuginfo"
 	depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
-	depends on !DEBUG_INFO_BTF
+	depends on !DEBUG_INFO_BTF || PAHOLE_VERSION >= 121
 	help
 	  Generate DWARF v5 debug info. Requires binutils 2.35.2, gcc 5.0+ (gcc
 	  5.0+ accepts the -gdwarf-5 flag but only had partial support for some
···
 	  DWARF type info into equivalent deduplicated BTF type info.

 config PAHOLE_HAS_SPLIT_BTF
-	def_bool $(success, test `$(PAHOLE) --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/'` -ge "119")
+	def_bool PAHOLE_VERSION >= 119
+
+config PAHOLE_HAS_BTF_TAG
+	def_bool PAHOLE_VERSION >= 123
+	depends on CC_IS_CLANG
+	help
+	  Decide whether pahole emits btf_tag attributes (btf_type_tag and
+	  btf_decl_tag) or not. Currently only clang compiler implements
+	  these attributes, so make the config depend on CC_IS_CLANG.

 config DEBUG_INFO_BTF_MODULES
 	def_bool y
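The ad-hoc sed pipeline is replaced by a central PAHOLE_VERSION symbol (presumably computed by a helper script elsewhere in this series) that the `>=` comparisons reuse. The normalization itself turns a "vMAJ.MIN" string into a comparable integer; a sketch of that transform, assuming pahole's usual two-digit minor versions:

```c
#include <assert.h>
#include <stdio.h>

/* Normalize a pahole "vMAJ.MIN" version string into the integer form the
 * Kconfig comparisons use: "v1.19" -> 119, "v1.23" -> 123. Assumes a
 * two-digit minor, as pahole releases have used for years. */
static int pahole_version_num(const char *s)
{
	unsigned int maj, min;

	if (sscanf(s, "v%u.%u", &maj, &min) != 2)
		return 0; /* unparsable: treat as "too old" */
	return (int)(maj * 100 + min);
}
```

With one integer symbol, checks like PAHOLE_HAS_SPLIT_BTF (>= 119), the DWARF5+BTF dependency (>= 121), and the new PAHOLE_HAS_BTF_TAG (>= 123) all read uniformly.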
+4 -2
net/bpf/bpf_dummy_struct_ops.c
···
 				    const struct btf *btf,
 				    const struct btf_type *t, int off,
 				    int size, enum bpf_access_type atype,
-				    u32 *next_btf_id)
+				    u32 *next_btf_id,
+				    enum bpf_type_flag *flag)
 {
 	const struct btf_type *state;
 	s32 type_id;
···
 		return -EACCES;
 	}

-	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
+	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
+				flag);
 	if (err < 0)
 		return err;
+12 -6
net/bpf/test_run.c
···
 		goto out;

 	if (sinfo) {
-		int i, offset = len, data_len;
+		int i, offset = len;
+		u32 data_len;

 		for (i = 0; i < sinfo->nr_frags; i++) {
 			skb_frag_t *frag = &sinfo->frags[i];
···
 				break;
 			}

-			data_len = min_t(int, copy_size - offset,
+			data_len = min_t(u32, copy_size - offset,
 					 skb_frag_size(frag));

 			if (copy_to_user(data_out + offset,
···
 	while (size < kattr->test.data_size_in) {
 		struct page *page;
 		skb_frag_t *frag;
-		int data_len;
+		u32 data_len;
+
+		if (sinfo->nr_frags == MAX_SKB_FRAGS) {
+			ret = -ENOMEM;
+			goto out;
+		}

 		page = alloc_page(GFP_KERNEL);
 		if (!page) {
···
 		frag = &sinfo->frags[sinfo->nr_frags++];
 		__skb_frag_set_page(frag, page);

-		data_len = min_t(int, kattr->test.data_size_in - size,
+		data_len = min_t(u32, kattr->test.data_size_in - size,
 				 PAGE_SIZE);
 		skb_frag_size_set(frag, data_len);
···
 	if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
 		goto out;

-	if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
+	if (user_ctx->local_port > U16_MAX) {
 		ret = -ERANGE;
 		goto out;
 	}
···
 	ctx.family = (u16)user_ctx->family;
 	ctx.protocol = (u16)user_ctx->protocol;
 	ctx.dport = (u16)user_ctx->local_port;
-	ctx.sport = (__force __be16)user_ctx->remote_port;
+	ctx.sport = user_ctx->remote_port;

 	switch (ctx.family) {
 	case AF_INET:
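The int-to-u32 data_len changes matter because min_t() compares in the cast-to type: cast to int, an unsigned length above INT_MAX wraps negative and wins the comparison. A small demonstration, relying on the usual two's-complement behavior of the int cast (strictly implementation-defined in C):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified version of the kernel's min_t(): compare as 'type'. */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Clamp a length to a cap, comparing as signed int: a length above
 * INT_MAX wraps negative and incorrectly "wins" the min. */
static int clamp_as_int(uint32_t len, uint32_t cap)
{
	return min_t(int, len, cap);
}

/* Comparing as u32 keeps the arithmetic unsigned and correct. */
static uint32_t clamp_as_u32(uint32_t len, uint32_t cap)
{
	return min_t(uint32_t, len, cap);
}
```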
+11 -2
net/core/filter.c
···
 			      struct bpf_insn_access_aux *info)
 {
 	const int size_default = sizeof(__u32);
+	int field_size;

 	if (off < 0 || off >= sizeof(struct bpf_sock))
 		return false;
···
 	case offsetof(struct bpf_sock, family):
 	case offsetof(struct bpf_sock, type):
 	case offsetof(struct bpf_sock, protocol):
-	case offsetof(struct bpf_sock, dst_port):
 	case offsetof(struct bpf_sock, src_port):
 	case offsetof(struct bpf_sock, rx_queue_mapping):
 	case bpf_ctx_range(struct bpf_sock, src_ip4):
···
 	case bpf_ctx_range_till(struct bpf_sock, dst_ip6[0], dst_ip6[3]):
 		bpf_ctx_record_field_size(info, size_default);
 		return bpf_ctx_narrow_access_ok(off, size, size_default);
+	case bpf_ctx_range(struct bpf_sock, dst_port):
+		field_size = size == size_default ?
+			size_default : sizeof_field(struct bpf_sock, dst_port);
+		bpf_ctx_record_field_size(info, field_size);
+		return bpf_ctx_narrow_access_ok(off, size, field_size);
+	case offsetofend(struct bpf_sock, dst_port) ...
+	     offsetof(struct bpf_sock, dst_ip4) - 1:
+		return false;
 	}

 	return size == size_default;
···
 	case bpf_ctx_range(struct bpf_sk_lookup, local_ip4):
 	case bpf_ctx_range_till(struct bpf_sk_lookup, remote_ip6[0], remote_ip6[3]):
 	case bpf_ctx_range_till(struct bpf_sk_lookup, local_ip6[0], local_ip6[3]):
-	case bpf_ctx_range(struct bpf_sk_lookup, remote_port):
+	case offsetof(struct bpf_sk_lookup, remote_port) ...
+	     offsetof(struct bpf_sk_lookup, local_ip4) - 1:
 	case bpf_ctx_range(struct bpf_sk_lookup, local_port):
 	case bpf_ctx_range(struct bpf_sk_lookup, ingress_ifindex):
 		bpf_ctx_record_field_size(info, sizeof(__u32));
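dst_port becomes a true 2-byte field followed by explicit zero padding, so the ctx-access check records a 2-byte field size for narrow loads and rejects touches of the padding. The narrowing rule itself is simple: the load size must be a power of two no larger than the recorded field. A sketch of roughly that predicate (mirroring the spirit of bpf_ctx_narrow_access_ok(), minus the offset bookkeeping):

```c
#include <assert.h>
#include <stdint.h>

/* A narrow ctx load is allowed when its size is a power of two and
 * does not exceed the field size recorded for that offset range. */
static int narrow_access_ok(uint32_t size, uint32_t field_size)
{
	return size != 0 && size <= field_size &&
	       (size & (size - 1)) == 0;
}
```

For the 2-byte dst_port this admits 1- and 2-byte loads; a 4-byte load is only valid through the full-size (size_default) path, which now reads the field plus its defined zero padding.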
+4 -2
net/ipv4/bpf_tcp_ca.c
···
 				  const struct btf *btf,
 				  const struct btf_type *t, int off,
 				  int size, enum bpf_access_type atype,
-				  u32 *next_btf_id)
+				  u32 *next_btf_id,
+				  enum bpf_type_flag *flag)
 {
 	size_t end;

 	if (atype == BPF_READ)
-		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
+		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
+					 flag);

 	if (t != tcp_sock_type) {
 		bpf_log(log, "only read is supported\n");
+6 -7
net/xdp/xsk.c
···
 }
 EXPORT_SYMBOL(xsk_tx_peek_desc);

-static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool, struct xdp_desc *descs,
-					u32 max_entries)
+static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool, u32 max_entries)
 {
+	struct xdp_desc *descs = pool->tx_descs;
 	u32 nb_pkts = 0;

 	while (nb_pkts < max_entries && xsk_tx_peek_desc(pool, &descs[nb_pkts]))
···
 	return nb_pkts;
 }

-u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *descs,
-				   u32 max_entries)
+u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
 {
 	struct xdp_sock *xs;
 	u32 nb_pkts;
···
 	if (!list_is_singular(&pool->xsk_tx_list)) {
 		/* Fallback to the non-batched version */
 		rcu_read_unlock();
-		return xsk_tx_peek_release_fallback(pool, descs, max_entries);
+		return xsk_tx_peek_release_fallback(pool, max_entries);
 	}

 	xs = list_first_or_null_rcu(&pool->xsk_tx_list, struct xdp_sock, tx_list);
···
 		goto out;
 	}

-	nb_pkts = xskq_cons_peek_desc_batch(xs->tx, descs, pool, max_entries);
+	nb_pkts = xskq_cons_peek_desc_batch(xs->tx, pool, max_entries);
 	if (!nb_pkts) {
 		xs->tx->queue_empty_descs++;
 		goto out;
···
 	 * packets. This avoids having to implement any buffering in
 	 * the Tx path.
 	 */
-	nb_pkts = xskq_prod_reserve_addr_batch(pool->cq, descs, nb_pkts);
+	nb_pkts = xskq_prod_reserve_addr_batch(pool->cq, pool->tx_descs, nb_pkts);
 	if (!nb_pkts)
 		goto out;
+7
net/xdp/xsk_buff_pool.c
··· 37 37 if (!pool) 38 38 return; 39 39 40 + kvfree(pool->tx_descs); 40 41 kvfree(pool->heads); 41 42 kvfree(pool); 42 43 } ··· 58 57 pool->heads = kvcalloc(umem->chunks, sizeof(*pool->heads), GFP_KERNEL); 59 58 if (!pool->heads) 60 59 goto out; 60 + 61 + if (xs->tx) { 62 + pool->tx_descs = kcalloc(xs->tx->nentries, sizeof(*pool->tx_descs), GFP_KERNEL); 63 + if (!pool->tx_descs) 64 + goto out; 65 + } 61 66 62 67 pool->chunk_mask = ~((u64)umem->chunk_size - 1); 63 68 pool->addrs_cnt = umem->size;
+6 -13
net/xdp/xsk_queue.h
··· 205 205 return false; 206 206 } 207 207 208 - static inline u32 xskq_cons_read_desc_batch(struct xsk_queue *q, 209 - struct xdp_desc *descs, 210 - struct xsk_buff_pool *pool, u32 max) 208 + static inline u32 xskq_cons_read_desc_batch(struct xsk_queue *q, struct xsk_buff_pool *pool, 209 + u32 max) 211 210 { 212 211 u32 cached_cons = q->cached_cons, nb_entries = 0; 212 + struct xdp_desc *descs = pool->tx_descs; 213 213 214 214 while (cached_cons != q->cached_prod && nb_entries < max) { 215 215 struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring; ··· 282 282 return xskq_cons_read_desc(q, desc, pool); 283 283 } 284 284 285 - static inline u32 xskq_cons_peek_desc_batch(struct xsk_queue *q, struct xdp_desc *descs, 286 - struct xsk_buff_pool *pool, u32 max) 285 + static inline u32 xskq_cons_peek_desc_batch(struct xsk_queue *q, struct xsk_buff_pool *pool, 286 + u32 max) 287 287 { 288 288 u32 entries = xskq_cons_nb_entries(q, max); 289 289 290 - return xskq_cons_read_desc_batch(q, descs, pool, entries); 290 + return xskq_cons_read_desc_batch(q, pool, entries); 291 291 } 292 292 293 293 /* To improve performance in the xskq_cons_release functions, only update local state here. ··· 302 302 static inline void xskq_cons_release_n(struct xsk_queue *q, u32 cnt) 303 303 { 304 304 q->cached_cons += cnt; 305 - } 306 - 307 - static inline bool xskq_cons_is_full(struct xsk_queue *q) 308 - { 309 - /* No barriers needed since data is not accessed */ 310 - return READ_ONCE(q->ring->producer) - READ_ONCE(q->ring->consumer) == 311 - q->nentries; 312 305 } 313 306 314 307 static inline u32 xskq_cons_present_entries(struct xsk_queue *q)
+1 -1
samples/bpf/map_perf_test_user.c
··· 413 413 for (i = 0; i < NR_TESTS; i++) { 414 414 if (!strcmp(test_map_names[i], name) && 415 415 (check_test_flags(i))) { 416 - bpf_map__resize(map, num_map_entries); 416 + bpf_map__set_max_entries(map, num_map_entries); 417 417 continue; 418 418 } 419 419 }
+12 -6
samples/bpf/xdp1_user.c
··· 79 79 80 80 int main(int argc, char **argv) 81 81 { 82 - struct bpf_prog_load_attr prog_load_attr = { 83 - .prog_type = BPF_PROG_TYPE_XDP, 84 - }; 85 82 struct bpf_prog_info info = {}; 86 83 __u32 info_len = sizeof(info); 87 84 const char *optstr = "FSN"; 88 85 int prog_fd, map_fd, opt; 86 + struct bpf_program *prog; 89 87 struct bpf_object *obj; 90 88 struct bpf_map *map; 91 89 char filename[256]; ··· 121 123 } 122 124 123 125 snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 124 - prog_load_attr.file = filename; 125 - 126 - if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) 126 + obj = bpf_object__open_file(filename, NULL); 127 + if (libbpf_get_error(obj)) 127 128 return 1; 129 + 130 + prog = bpf_object__next_program(obj, NULL); 131 + bpf_program__set_type(prog, BPF_PROG_TYPE_XDP); 132 + 133 + err = bpf_object__load(obj); 134 + if (err) 135 + return 1; 136 + 137 + prog_fd = bpf_program__fd(prog); 128 138 129 139 map = bpf_object__next_map(obj, NULL); 130 140 if (!map) {
+12 -5
samples/bpf/xdp_adjust_tail_user.c
··· 82 82 83 83 int main(int argc, char **argv) 84 84 { 85 - struct bpf_prog_load_attr prog_load_attr = { 86 - .prog_type = BPF_PROG_TYPE_XDP, 87 - }; 88 85 unsigned char opt_flags[256] = {}; 89 86 const char *optstr = "i:T:P:SNFh"; 90 87 struct bpf_prog_info info = {}; 91 88 __u32 info_len = sizeof(info); 92 89 unsigned int kill_after_s = 0; 93 90 int i, prog_fd, map_fd, opt; 91 + struct bpf_program *prog; 94 92 struct bpf_object *obj; 95 93 __u32 max_pckt_size = 0; 96 94 __u32 key = 0; ··· 146 148 } 147 149 148 150 snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 149 - prog_load_attr.file = filename; 150 151 151 - if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) 152 + obj = bpf_object__open_file(filename, NULL); 153 + if (libbpf_get_error(obj)) 152 154 return 1; 155 + 156 + prog = bpf_object__next_program(obj, NULL); 157 + bpf_program__set_type(prog, BPF_PROG_TYPE_XDP); 158 + 159 + err = bpf_object__load(obj); 160 + if (err) 161 + return 1; 162 + 163 + prog_fd = bpf_program__fd(prog); 153 164 154 165 /* static global var 'max_pcktsz' is accessible from .data section */ 155 166 if (max_pckt_size) {
+9 -6
samples/bpf/xdp_fwd_user.c
··· 75 75 76 76 int main(int argc, char **argv) 77 77 { 78 - struct bpf_prog_load_attr prog_load_attr = { 79 - .prog_type = BPF_PROG_TYPE_XDP, 80 - }; 81 78 const char *prog_name = "xdp_fwd"; 82 79 struct bpf_program *prog = NULL; 83 80 struct bpf_program *pos; 84 81 const char *sec_name; 85 - int prog_fd, map_fd = -1; 82 + int prog_fd = -1, map_fd = -1; 86 83 char filename[PATH_MAX]; 87 84 struct bpf_object *obj; 88 85 int opt, i, idx, err; ··· 116 119 117 120 if (attach) { 118 121 snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 119 - prog_load_attr.file = filename; 120 122 121 123 if (access(filename, O_RDONLY) < 0) { 122 124 printf("error accessing file %s: %s\n", ··· 123 127 return 1; 124 128 } 125 129 126 - err = bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd); 130 + obj = bpf_object__open_file(filename, NULL); 131 + if (libbpf_get_error(obj)) 132 + return 1; 133 + 134 + prog = bpf_object__next_program(obj, NULL); 135 + bpf_program__set_type(prog, BPF_PROG_TYPE_XDP); 136 + 137 + err = bpf_object__load(obj); 127 138 if (err) { 128 139 printf("Does kernel support devmap lookup?\n"); 129 140 /* If not, the error message will be:
+4 -4
samples/bpf/xdp_redirect_cpu.bpf.c
··· 491 491 return bpf_redirect_map(&cpu_map, cpu_dest, 0); 492 492 } 493 493 494 - SEC("xdp_cpumap/redirect") 494 + SEC("xdp/cpumap") 495 495 int xdp_redirect_cpu_devmap(struct xdp_md *ctx) 496 496 { 497 497 void *data_end = (void *)(long)ctx->data_end; ··· 507 507 return bpf_redirect_map(&tx_port, 0, 0); 508 508 } 509 509 510 - SEC("xdp_cpumap/pass") 510 + SEC("xdp/cpumap") 511 511 int xdp_redirect_cpu_pass(struct xdp_md *ctx) 512 512 { 513 513 return XDP_PASS; 514 514 } 515 515 516 - SEC("xdp_cpumap/drop") 516 + SEC("xdp/cpumap") 517 517 int xdp_redirect_cpu_drop(struct xdp_md *ctx) 518 518 { 519 519 return XDP_DROP; 520 520 } 521 521 522 - SEC("xdp_devmap/egress") 522 + SEC("xdp/devmap") 523 523 int xdp_redirect_egress_prog(struct xdp_md *ctx) 524 524 { 525 525 void *data_end = (void *)(long)ctx->data_end;
+1 -1
samples/bpf/xdp_redirect_cpu_user.c
··· 70 70 71 71 printf(" Programs to be used for -p/--progname:\n"); 72 72 bpf_object__for_each_program(pos, obj) { 73 - if (bpf_program__is_xdp(pos)) { 73 + if (bpf_program__type(pos) == BPF_PROG_TYPE_XDP) { 74 74 if (!strncmp(bpf_program__name(pos), "xdp_prognum", 75 75 sizeof("xdp_prognum") - 1)) 76 76 printf(" %s\n", bpf_program__name(pos));
+1 -1
samples/bpf/xdp_redirect_map.bpf.c
··· 68 68 return xdp_redirect_map(ctx, &tx_port_native); 69 69 } 70 70 71 - SEC("xdp_devmap/egress") 71 + SEC("xdp/devmap") 72 72 int xdp_redirect_map_egress(struct xdp_md *ctx) 73 73 { 74 74 void *data_end = (void *)(long)ctx->data_end;
+1 -1
samples/bpf/xdp_redirect_map_multi.bpf.c
··· 53 53 return xdp_redirect_map(ctx, &forward_map_native); 54 54 } 55 55 56 - SEC("xdp_devmap/egress") 56 + SEC("xdp/devmap") 57 57 int xdp_devmap_prog(struct xdp_md *ctx) 58 58 { 59 59 void *data_end = (void *)(long)ctx->data_end;
+10 -7
samples/bpf/xdp_router_ipv4_user.c
··· 640 640 641 641 int main(int ac, char **argv) 642 642 { 643 - struct bpf_prog_load_attr prog_load_attr = { 644 - .prog_type = BPF_PROG_TYPE_XDP, 645 - }; 646 643 struct bpf_prog_info info = {}; 647 644 __u32 info_len = sizeof(info); 648 645 const char *optstr = "SF"; 646 + struct bpf_program *prog; 649 647 struct bpf_object *obj; 650 648 char filename[256]; 651 649 char **ifname_list; ··· 651 653 int err, i = 1; 652 654 653 655 snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 654 - prog_load_attr.file = filename; 655 656 656 657 total_ifindex = ac - 1; 657 658 ifname_list = (argv + 1); ··· 681 684 return 1; 682 685 } 683 686 684 - if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) 687 + obj = bpf_object__open_file(filename, NULL); 688 + if (libbpf_get_error(obj)) 685 689 return 1; 686 690 691 + prog = bpf_object__next_program(obj, NULL); 692 + bpf_program__set_type(prog, BPF_PROG_TYPE_XDP); 693 + 687 694 printf("\n******************loading bpf file*********************\n"); 688 - if (!prog_fd) { 689 - printf("bpf_prog_load_xattr: %s\n", strerror(errno)); 695 + err = bpf_object__load(obj); 696 + if (err) { 697 + printf("bpf_object__load(): %s\n", strerror(errno)); 690 698 return 1; 691 699 } 700 + prog_fd = bpf_program__fd(prog); 692 701 693 702 lpm_map_fd = bpf_object__find_map_fd_by_name(obj, "lpm_map"); 694 703 rxcnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rxcnt");
+11 -5
samples/bpf/xdp_rxq_info_user.c
··· 450 450 int main(int argc, char **argv) 451 451 { 452 452 __u32 cfg_options= NO_TOUCH ; /* Default: Don't touch packet memory */ 453 - struct bpf_prog_load_attr prog_load_attr = { 454 - .prog_type = BPF_PROG_TYPE_XDP, 455 - }; 456 453 struct bpf_prog_info info = {}; 457 454 __u32 info_len = sizeof(info); 458 455 int prog_fd, map_fd, opt, err; 459 456 bool use_separators = true; 460 457 struct config cfg = { 0 }; 458 + struct bpf_program *prog; 461 459 struct bpf_object *obj; 462 460 struct bpf_map *map; 463 461 char filename[256]; ··· 469 471 char *action_str = NULL; 470 472 471 473 snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 472 - prog_load_attr.file = filename; 473 474 474 - if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) 475 + obj = bpf_object__open_file(filename, NULL); 476 + if (libbpf_get_error(obj)) 475 477 return EXIT_FAIL; 478 + 479 + prog = bpf_object__next_program(obj, NULL); 480 + bpf_program__set_type(prog, BPF_PROG_TYPE_XDP); 481 + 482 + err = bpf_object__load(obj); 483 + if (err) 484 + return EXIT_FAIL; 485 + prog_fd = bpf_program__fd(prog); 476 486 477 487 map = bpf_object__find_map_by_name(obj, "config_map"); 478 488 stats_global_map = bpf_object__find_map_by_name(obj, "stats_global_map");
+1 -1
samples/bpf/xdp_sample_user.c
··· 1218 1218 default: 1219 1219 return -EINVAL; 1220 1220 } 1221 - if (bpf_map__resize(sample_map[i], sample_map_count[i]) < 0) 1221 + if (bpf_map__set_max_entries(sample_map[i], sample_map_count[i]) < 0) 1222 1222 return -errno; 1223 1223 } 1224 1224 sample_map[MAP_DEVMAP_XMIT_MULTI] = maps[MAP_DEVMAP_XMIT_MULTI];
+1 -1
samples/bpf/xdp_sample_user.h
··· 61 61 62 62 #define __attach_tp(name) \ 63 63 ({ \ 64 - if (!bpf_program__is_tracing(skel->progs.name)) \ 64 + if (bpf_program__type(skel->progs.name) != BPF_PROG_TYPE_TRACING)\ 65 65 return -EINVAL; \ 66 66 skel->links.name = bpf_program__attach(skel->progs.name); \ 67 67 if (!skel->links.name) \
+10 -7
samples/bpf/xdp_tx_iptunnel_user.c
··· 152 152 153 153 int main(int argc, char **argv) 154 154 { 155 - struct bpf_prog_load_attr prog_load_attr = { 156 - .prog_type = BPF_PROG_TYPE_XDP, 157 - }; 158 155 int min_port = 0, max_port = 0, vip2tnl_map_fd; 159 156 const char *optstr = "i:a:p:s:d:m:T:P:FSNh"; 160 157 unsigned char opt_flags[256] = {}; ··· 159 162 __u32 info_len = sizeof(info); 160 163 unsigned int kill_after_s = 0; 161 164 struct iptnl_info tnl = {}; 165 + struct bpf_program *prog; 162 166 struct bpf_object *obj; 163 167 struct vip vip = {}; 164 168 char filename[256]; ··· 257 259 } 258 260 259 261 snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 260 - prog_load_attr.file = filename; 261 262 262 - if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) 263 + obj = bpf_object__open_file(filename, NULL); 264 + if (libbpf_get_error(obj)) 263 265 return 1; 264 266 265 - if (!prog_fd) { 266 - printf("bpf_prog_load_xattr: %s\n", strerror(errno)); 267 + prog = bpf_object__next_program(obj, NULL); 268 + bpf_program__set_type(prog, BPF_PROG_TYPE_XDP); 269 + 270 + err = bpf_object__load(obj); 271 + if (err) { 272 + printf("bpf_object__load(): %s\n", strerror(errno)); 267 273 return 1; 268 274 } 275 + prog_fd = bpf_program__fd(prog); 269 276 270 277 rxcnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rxcnt"); 271 278 vip2tnl_map_fd = bpf_object__find_map_fd_by_name(obj, "vip2tnl");
+1 -1
scripts/pahole-flags.sh
··· 7 7 exit 0 8 8 fi 9 9 10 - pahole_ver=$(${PAHOLE} --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/') 10 + pahole_ver=$($(dirname $0)/pahole-version.sh ${PAHOLE}) 11 11 12 12 if [ "${pahole_ver}" -ge "118" ] && [ "${pahole_ver}" -le "121" ]; then 13 13 # pahole 1.18 through 1.21 can't handle zero-sized per-CPU vars
+13
scripts/pahole-version.sh
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Usage: $ ./pahole-version.sh pahole 5 + # 6 + # Prints pahole's version in a 3-digit form, such as 119 for v1.19. 7 + 8 + if [ ! -x "$(command -v "$@")" ]; then 9 + echo 0 10 + exit 1 11 + fi 12 + 13 + "$@" --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/'
+1 -1
tools/bpf/bpftool/common.c
··· 310 310 { 311 311 const char *prog_name = prog_info->name; 312 312 const struct btf_type *func_type; 313 - const struct bpf_func_info finfo; 313 + const struct bpf_func_info finfo = {}; 314 314 struct bpf_prog_info info = {}; 315 315 __u32 info_len = sizeof(info); 316 316 struct btf *prog_btf = NULL;
+17 -12
tools/bpf/bpftool/feature.c
··· 487 487 size_t maxlen; 488 488 bool res; 489 489 490 - if (ifindex) 491 - /* Only test offload-able program types */ 492 - switch (prog_type) { 493 - case BPF_PROG_TYPE_SCHED_CLS: 494 - case BPF_PROG_TYPE_XDP: 495 - break; 496 - default: 497 - return; 498 - } 490 + if (ifindex) { 491 + p_info("BPF offload feature probing is not supported"); 492 + return; 493 + } 499 494 500 - res = bpf_probe_prog_type(prog_type, ifindex); 495 + res = libbpf_probe_bpf_prog_type(prog_type, NULL); 501 496 #ifdef USE_LIBCAP 502 497 /* Probe may succeed even if program load fails, for unprivileged users 503 498 * check that we did not fail because of insufficient permissions ··· 530 535 size_t maxlen; 531 536 bool res; 532 537 533 - res = bpf_probe_map_type(map_type, ifindex); 538 + if (ifindex) { 539 + p_info("BPF offload feature probing is not supported"); 540 + return; 541 + } 542 + 543 + res = libbpf_probe_bpf_map_type(map_type, NULL); 534 544 535 545 /* Probe result depends on the success of map creation, no additional 536 546 * check required for unprivileged users ··· 567 567 bool res = false; 568 568 569 569 if (supported_type) { 570 - res = bpf_probe_helper(id, prog_type, ifindex); 570 + if (ifindex) { 571 + p_info("BPF offload feature probing is not supported"); 572 + return; 573 + } 574 + 575 + res = libbpf_probe_bpf_helper(prog_type, id, NULL); 571 576 #ifdef USE_LIBCAP 572 577 /* Probe may succeed even if program load fails, for 573 578 * unprivileged users check that we did not fail because of
+6 -3
tools/bpf/bpftool/gen.c
··· 378 378 int prog_fd = skel->progs.%2$s.prog_fd; \n\ 379 379 ", obj_name, bpf_program__name(prog)); 380 380 381 - switch (bpf_program__get_type(prog)) { 381 + switch (bpf_program__type(prog)) { 382 382 case BPF_PROG_TYPE_RAW_TRACEPOINT: 383 383 tp_name = strchr(bpf_program__section_name(prog), '/') + 1; 384 - printf("\tint fd = bpf_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name); 384 + printf("\tint fd = skel_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name); 385 385 break; 386 386 case BPF_PROG_TYPE_TRACING: 387 - printf("\tint fd = bpf_raw_tracepoint_open(NULL, prog_fd);\n"); 387 + if (bpf_program__expected_attach_type(prog) == BPF_TRACE_ITER) 388 + printf("\tint fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER);\n"); 389 + else 390 + printf("\tint fd = skel_raw_tracepoint_open(NULL, prog_fd);\n"); 388 391 break; 389 392 default: 390 393 printf("\tint fd = ((void)prog_fd, 0); /* auto-attach not supported */\n");
+1 -4
tools/bpf/bpftool/main.c
··· 478 478 } 479 479 480 480 if (!legacy_libbpf) { 481 - enum libbpf_strict_mode mode; 482 - 483 481 /* Allow legacy map definitions for skeleton generation. 484 482 * It will still be rejected if users use LIBBPF_STRICT_ALL 485 483 * mode for loading generated skeleton. 486 484 */ 487 - mode = (__LIBBPF_STRICT_LAST - 1) & ~LIBBPF_STRICT_MAP_DEFINITIONS; 488 - ret = libbpf_set_strict_mode(mode); 485 + ret = libbpf_set_strict_mode(LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS); 489 486 if (ret) 490 487 p_err("failed to enable libbpf strict mode: %d", ret); 491 488 }
+6 -7
tools/bpf/bpftool/prog.c
··· 1272 1272 { 1273 1273 char *data_fname_in = NULL, *data_fname_out = NULL; 1274 1274 char *ctx_fname_in = NULL, *ctx_fname_out = NULL; 1275 - struct bpf_prog_test_run_attr test_attr = {0}; 1276 1275 const unsigned int default_size = SZ_32K; 1277 1276 void *data_in = NULL, *data_out = NULL; 1278 1277 void *ctx_in = NULL, *ctx_out = NULL; 1279 1278 unsigned int repeat = 1; 1280 1279 int fd, err; 1280 + LIBBPF_OPTS(bpf_test_run_opts, test_attr); 1281 1281 1282 1282 if (!REQ_ARGS(4)) 1283 1283 return -1; ··· 1395 1395 goto free_ctx_in; 1396 1396 } 1397 1397 1398 - test_attr.prog_fd = fd; 1399 1398 test_attr.repeat = repeat; 1400 1399 test_attr.data_in = data_in; 1401 1400 test_attr.data_out = data_out; 1402 1401 test_attr.ctx_in = ctx_in; 1403 1402 test_attr.ctx_out = ctx_out; 1404 1403 1405 - err = bpf_prog_test_run_xattr(&test_attr); 1404 + err = bpf_prog_test_run_opts(fd, &test_attr); 1406 1405 if (err) { 1407 1406 p_err("failed to run program: %s", strerror(errno)); 1408 1407 goto free_ctx_out; ··· 2282 2283 profile_obj->rodata->num_metric = num_metric; 2283 2284 2284 2285 /* adjust map sizes */ 2285 - bpf_map__resize(profile_obj->maps.events, num_metric * num_cpu); 2286 - bpf_map__resize(profile_obj->maps.fentry_readings, num_metric); 2287 - bpf_map__resize(profile_obj->maps.accum_readings, num_metric); 2288 - bpf_map__resize(profile_obj->maps.counts, 1); 2286 + bpf_map__set_max_entries(profile_obj->maps.events, num_metric * num_cpu); 2287 + bpf_map__set_max_entries(profile_obj->maps.fentry_readings, num_metric); 2288 + bpf_map__set_max_entries(profile_obj->maps.accum_readings, num_metric); 2289 + bpf_map__set_max_entries(profile_obj->maps.counts, 1); 2289 2290 2290 2291 /* change target name */ 2291 2292 profile_tgt_name = profile_target_name(profile_tgt_fd);
+15 -2
tools/include/uapi/linux/bpf.h
··· 5076 5076 * associated to *xdp_md*, at *offset*. 5077 5077 * Return 5078 5078 * 0 on success, or a negative error in case of failure. 5079 + * 5080 + * long bpf_copy_from_user_task(void *dst, u32 size, const void *user_ptr, struct task_struct *tsk, u64 flags) 5081 + * Description 5082 + * Read *size* bytes from user space address *user_ptr* in *tsk*'s 5083 + * address space, and stores the data in *dst*. *flags* is not 5084 + * used yet and is provided for future extensibility. This helper 5085 + * can only be used by sleepable programs. 5086 + * Return 5087 + * 0 on success, or a negative error in case of failure. On error 5088 + * *dst* buffer is zeroed out. 5079 5089 */ 5080 5090 #define __BPF_FUNC_MAPPER(FN) \ 5081 5091 FN(unspec), \ ··· 5279 5269 FN(xdp_get_buff_len), \ 5280 5270 FN(xdp_load_bytes), \ 5281 5271 FN(xdp_store_bytes), \ 5272 + FN(copy_from_user_task), \ 5282 5273 /* */ 5283 5274 5284 5275 /* integer value in 'imm' field of BPF_CALL instruction selects which helper ··· 5574 5563 __u32 src_ip4; 5575 5564 __u32 src_ip6[4]; 5576 5565 __u32 src_port; /* host byte order */ 5577 - __u32 dst_port; /* network byte order */ 5566 + __be16 dst_port; /* network byte order */ 5567 + __u16 :16; /* zero padding */ 5578 5568 __u32 dst_ip4; 5579 5569 __u32 dst_ip6[4]; 5580 5570 __u32 state; ··· 6453 6441 __u32 protocol; /* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */ 6454 6442 __u32 remote_ip4; /* Network byte order */ 6455 6443 __u32 remote_ip6[4]; /* Network byte order */ 6456 - __u32 remote_port; /* Network byte order */ 6444 + __be16 remote_port; /* Network byte order */ 6445 + __u16 :16; /* Zero padding */ 6457 6446 __u32 local_ip4; /* Network byte order */ 6458 6447 __u32 local_ip6[4]; /* Network byte order */ 6459 6448 __u32 local_port; /* Host byte order */
+2 -2
tools/lib/bpf/Makefile
··· 131 131 sort -u | wc -l) 132 132 VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \ 133 133 sed 's/\[.*\]//' | \ 134 - awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \ 134 + awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}' | \ 135 135 grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l) 136 136 137 137 CMD_TARGETS = $(LIB_TARGET) $(PC_FILE) ··· 194 194 sort -u > $(OUTPUT)libbpf_global_syms.tmp; \ 195 195 readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \ 196 196 sed 's/\[.*\]//' | \ 197 - awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \ 197 + awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}'| \ 198 198 grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \ 199 199 sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \ 200 200 diff -u $(OUTPUT)libbpf_global_syms.tmp \
+3 -1
tools/lib/bpf/bpf.h
··· 453 453 * out: length of cxt_out */ 454 454 }; 455 455 456 + LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_test_run_opts() instead") 456 457 LIBBPF_API int bpf_prog_test_run_xattr(struct bpf_prog_test_run_attr *test_attr); 457 458 458 459 /* 459 460 * bpf_prog_test_run does not check that data_out is large enough. Consider 460 - * using bpf_prog_test_run_xattr instead. 461 + * using bpf_prog_test_run_opts instead. 461 462 */ 463 + LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_prog_test_run_opts() instead") 462 464 LIBBPF_API int bpf_prog_test_run(int prog_fd, int repeat, void *data, 463 465 __u32 size, void *data_out, __u32 *size_out, 464 466 __u32 *retval, __u32 *duration);
+101 -2
tools/lib/bpf/bpf_tracing.h
··· 76 76 #define __PT_RC_REG ax 77 77 #define __PT_SP_REG sp 78 78 #define __PT_IP_REG ip 79 + /* syscall uses r10 for PARM4 */ 80 + #define PT_REGS_PARM4_SYSCALL(x) ((x)->r10) 81 + #define PT_REGS_PARM4_CORE_SYSCALL(x) BPF_CORE_READ(x, r10) 79 82 80 83 #else 81 84 ··· 108 105 #define __PT_RC_REG rax 109 106 #define __PT_SP_REG rsp 110 107 #define __PT_IP_REG rip 108 + /* syscall uses r10 for PARM4 */ 109 + #define PT_REGS_PARM4_SYSCALL(x) ((x)->r10) 110 + #define PT_REGS_PARM4_CORE_SYSCALL(x) BPF_CORE_READ(x, r10) 111 111 112 112 #endif /* __i386__ */ 113 113 114 114 #endif /* __KERNEL__ || __VMLINUX_H__ */ 115 115 116 116 #elif defined(bpf_target_s390) 117 + 118 + struct pt_regs___s390 { 119 + unsigned long orig_gpr2; 120 + }; 117 121 118 122 /* s390 provides user_pt_regs instead of struct pt_regs to userspace */ 119 123 #define __PT_REGS_CAST(x) ((const user_pt_regs *)(x)) ··· 134 124 #define __PT_RC_REG gprs[2] 135 125 #define __PT_SP_REG gprs[15] 136 126 #define __PT_IP_REG psw.addr 127 + #define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; }) 128 + #define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___s390 *)(x), orig_gpr2) 137 129 138 130 #elif defined(bpf_target_arm) 139 131 ··· 152 140 153 141 #elif defined(bpf_target_arm64) 154 142 143 + struct pt_regs___arm64 { 144 + unsigned long orig_x0; 145 + }; 146 + 155 147 /* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */ 156 148 #define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x)) 157 149 #define __PT_PARM1_REG regs[0] ··· 168 152 #define __PT_RC_REG regs[0] 169 153 #define __PT_SP_REG sp 170 154 #define __PT_IP_REG pc 155 + #define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; }) 156 + #define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___arm64 *)(x), orig_x0) 171 157 172 158 #elif defined(bpf_target_mips) 173 159 ··· 196 178 
#define __PT_RC_REG gpr[3] 197 179 #define __PT_SP_REG sp 198 180 #define __PT_IP_REG nip 181 + /* powerpc does not select ARCH_HAS_SYSCALL_WRAPPER. */ 182 + #define PT_REGS_SYSCALL_REGS(ctx) ctx 199 183 200 184 #elif defined(bpf_target_sparc) 201 185 ··· 226 206 #define __PT_PARM4_REG a3 227 207 #define __PT_PARM5_REG a4 228 208 #define __PT_RET_REG ra 229 - #define __PT_FP_REG fp 209 + #define __PT_FP_REG s0 230 210 #define __PT_RC_REG a5 231 211 #define __PT_SP_REG sp 232 - #define __PT_IP_REG epc 212 + #define __PT_IP_REG pc 213 + /* riscv does not select ARCH_HAS_SYSCALL_WRAPPER. */ 214 + #define PT_REGS_SYSCALL_REGS(ctx) ctx 233 215 234 216 #endif 235 217 ··· 285 263 286 264 #endif 287 265 266 + #ifndef PT_REGS_PARM1_SYSCALL 267 + #define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1(x) 268 + #endif 269 + #define PT_REGS_PARM2_SYSCALL(x) PT_REGS_PARM2(x) 270 + #define PT_REGS_PARM3_SYSCALL(x) PT_REGS_PARM3(x) 271 + #ifndef PT_REGS_PARM4_SYSCALL 272 + #define PT_REGS_PARM4_SYSCALL(x) PT_REGS_PARM4(x) 273 + #endif 274 + #define PT_REGS_PARM5_SYSCALL(x) PT_REGS_PARM5(x) 275 + 276 + #ifndef PT_REGS_PARM1_CORE_SYSCALL 277 + #define PT_REGS_PARM1_CORE_SYSCALL(x) PT_REGS_PARM1_CORE(x) 278 + #endif 279 + #define PT_REGS_PARM2_CORE_SYSCALL(x) PT_REGS_PARM2_CORE(x) 280 + #define PT_REGS_PARM3_CORE_SYSCALL(x) PT_REGS_PARM3_CORE(x) 281 + #ifndef PT_REGS_PARM4_CORE_SYSCALL 282 + #define PT_REGS_PARM4_CORE_SYSCALL(x) PT_REGS_PARM4_CORE(x) 283 + #endif 284 + #define PT_REGS_PARM5_CORE_SYSCALL(x) PT_REGS_PARM5_CORE(x) 285 + 288 286 #else /* defined(bpf_target_defined) */ 289 287 290 288 #define PT_REGS_PARM1(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) ··· 332 290 #define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 333 291 #define BPF_KRETPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 334 292 293 + #define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 294 + #define PT_REGS_PARM2_SYSCALL(x) ({ 
_Pragma(__BPF_TARGET_MISSING); 0l; }) 295 + #define PT_REGS_PARM3_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 296 + #define PT_REGS_PARM4_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 297 + #define PT_REGS_PARM5_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 298 + 299 + #define PT_REGS_PARM1_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 300 + #define PT_REGS_PARM2_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 301 + #define PT_REGS_PARM3_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 302 + #define PT_REGS_PARM4_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 303 + #define PT_REGS_PARM5_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 304 + 335 305 #endif /* defined(bpf_target_defined) */ 306 + 307 + /* 308 + * When invoked from a syscall handler kprobe, returns a pointer to a 309 + * struct pt_regs containing syscall arguments and suitable for passing to 310 + * PT_REGS_PARMn_SYSCALL() and PT_REGS_PARMn_CORE_SYSCALL(). 311 + */ 312 + #ifndef PT_REGS_SYSCALL_REGS 313 + /* By default, assume that the arch selects ARCH_HAS_SYSCALL_WRAPPER. */ 314 + #define PT_REGS_SYSCALL_REGS(ctx) ((struct pt_regs *)PT_REGS_PARM1(ctx)) 315 + #endif 336 316 337 317 #ifndef ___bpf_concat 338 318 #define ___bpf_concat(a, b) a ## b ··· 469 405 _Pragma("GCC diagnostic pop") \ 470 406 } \ 471 407 static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args) 408 + 409 + #define ___bpf_syscall_args0() ctx 410 + #define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs) 411 + #define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs) 412 + #define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs) 413 + #define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs) 414 + #define ___bpf_syscall_args5(x, args...) 
___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs) 415 + #define ___bpf_syscall_args(args...) ___bpf_apply(___bpf_syscall_args, ___bpf_narg(args))(args) 416 + 417 + /* 418 + * BPF_KPROBE_SYSCALL is a variant of BPF_KPROBE, which is intended for 419 + * tracing syscall functions, like __x64_sys_close. It hides the underlying 420 + * platform-specific low-level way of getting syscall input arguments from 421 + * struct pt_regs, and provides a familiar typed and named function arguments 422 + * syntax and semantics of accessing syscall input parameters. 423 + * 424 + * Original struct pt_regs* context is preserved as 'ctx' argument. This might 425 + * be necessary when using BPF helpers like bpf_perf_event_output(). 426 + * 427 + * This macro relies on BPF CO-RE support. 428 + */ 429 + #define BPF_KPROBE_SYSCALL(name, args...) \ 430 + name(struct pt_regs *ctx); \ 431 + static __attribute__((always_inline)) typeof(name(0)) \ 432 + ____##name(struct pt_regs *ctx, ##args); \ 433 + typeof(name(0)) name(struct pt_regs *ctx) \ 434 + { \ 435 + struct pt_regs *regs = PT_REGS_SYSCALL_REGS(ctx); \ 436 + _Pragma("GCC diagnostic push") \ 437 + _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 438 + return ____##name(___bpf_syscall_args(args)); \ 439 + _Pragma("GCC diagnostic pop") \ 440 + } \ 441 + static __attribute__((always_inline)) typeof(name(0)) \ 442 + ____##name(struct pt_regs *ctx, ##args) 472 443 473 444 #endif
+6 -6
tools/lib/bpf/btf.h
··· 147 147 LIBBPF_API int btf__align_of(const struct btf *btf, __u32 id); 148 148 LIBBPF_API int btf__fd(const struct btf *btf); 149 149 LIBBPF_API void btf__set_fd(struct btf *btf, int fd); 150 - LIBBPF_DEPRECATED_SINCE(0, 7, "use btf__raw_data() instead") 151 - LIBBPF_API const void *btf__get_raw_data(const struct btf *btf, __u32 *size); 152 150 LIBBPF_API const void *btf__raw_data(const struct btf *btf, __u32 *size); 153 151 LIBBPF_API const char *btf__name_by_offset(const struct btf *btf, __u32 offset); 154 152 LIBBPF_API const char *btf__str_by_offset(const struct btf *btf, __u32 offset); 153 + LIBBPF_DEPRECATED_SINCE(0, 7, "this API is not necessary when BTF-defined maps are used") 155 154 LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name, 156 155 __u32 expected_key_size, 157 156 __u32 expected_value_size, ··· 158 159 159 160 LIBBPF_API struct btf_ext *btf_ext__new(const __u8 *data, __u32 size); 160 161 LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext); 161 - LIBBPF_API const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext, 162 - __u32 *size); 162 + LIBBPF_API const void *btf_ext__raw_data(const struct btf_ext *btf_ext, __u32 *size); 163 163 LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions") 164 164 int btf_ext__reloc_func_info(const struct btf *btf, 165 165 const struct btf_ext *btf_ext, ··· 169 171 const struct btf_ext *btf_ext, 170 172 const char *sec_name, __u32 insns_cnt, 171 173 void **line_info, __u32 *cnt); 172 - LIBBPF_API __u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext); 173 - LIBBPF_API __u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext); 174 + LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info is deprecated; write custom func_info parsing to fetch rec_size") 175 + __u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext); 176 + 
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info is deprecated; write custom line_info parsing to fetch rec_size") 177 + __u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext); 174 178 175 179 LIBBPF_API int btf__find_str(struct btf *btf, const char *s); 176 180 LIBBPF_API int btf__add_str(struct btf *btf, const char *s);
+4 -2
tools/lib/bpf/btf_dump.c
··· 1861 1861 { 1862 1862 const struct btf_array *array = btf_array(t); 1863 1863 const struct btf_type *elem_type; 1864 - __u32 i, elem_size = 0, elem_type_id; 1864 + __u32 i, elem_type_id; 1865 + __s64 elem_size; 1865 1866 bool is_array_member; 1866 1867 1867 1868 elem_type_id = array->type; 1868 1869 elem_type = skip_mods_and_typedefs(d->btf, elem_type_id, NULL); 1869 1870 elem_size = btf__resolve_size(d->btf, elem_type_id); 1870 1871 if (elem_size <= 0) { 1871 - pr_warn("unexpected elem size %d for array type [%u]\n", elem_size, id); 1872 + pr_warn("unexpected elem size %zd for array type [%u]\n", 1873 + (ssize_t)elem_size, id); 1872 1874 return -EINVAL; 1873 1875 } 1874 1876
+29 -22
tools/lib/bpf/libbpf.c
··· 156 156 157 157 int libbpf_set_strict_mode(enum libbpf_strict_mode mode) 158 158 { 159 - /* __LIBBPF_STRICT_LAST is the last power-of-2 value used + 1, so to 160 - * get all possible values we compensate last +1, and then (2*x - 1) 161 - * to get the bit mask 162 - */ 163 - if (mode != LIBBPF_STRICT_ALL 164 - && (mode & ~((__LIBBPF_STRICT_LAST - 1) * 2 - 1))) 165 - return errno = EINVAL, -EINVAL; 166 - 167 159 libbpf_mode = mode; 168 160 return 0; 169 161 } ··· 229 237 SEC_SLOPPY_PFX = 16, 230 238 /* BPF program support non-linear XDP buffer */ 231 239 SEC_XDP_FRAGS = 32, 240 + /* deprecated sec definitions not supposed to be used */ 241 + SEC_DEPRECATED = 64, 232 242 }; 233 243 234 244 struct bpf_sec_def { ··· 4194 4200 4195 4201 if (!bpf_map__is_internal(map)) { 4196 4202 pr_warn("Use of BPF_ANNOTATE_KV_PAIR is deprecated, use BTF-defined maps in .maps section instead\n"); 4203 + #pragma GCC diagnostic push 4204 + #pragma GCC diagnostic ignored "-Wdeprecated-declarations" 4197 4205 ret = btf__get_map_kv_tids(obj->btf, map->name, def->key_size, 4198 4206 def->value_size, &key_type_id, 4199 4207 &value_type_id); 4208 + #pragma GCC diagnostic pop 4200 4209 } else { 4201 4210 /* 4202 4211 * LLVM annotates global data differently in BTF, that is, ··· 6572 6575 if (prog->type == BPF_PROG_TYPE_XDP && (def & SEC_XDP_FRAGS)) 6573 6576 opts->prog_flags |= BPF_F_XDP_HAS_FRAGS; 6574 6577 6578 + if (def & SEC_DEPRECATED) 6579 + pr_warn("SEC(\"%s\") is deprecated, please see https://github.com/libbpf/libbpf/wiki/Libbpf-1.0-migration-guide#bpf-program-sec-annotation-deprecations for details\n", 6580 + prog->sec_name); 6581 + 6575 6582 if ((prog->type == BPF_PROG_TYPE_TRACING || 6576 6583 prog->type == BPF_PROG_TYPE_LSM || 6577 6584 prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) { ··· 7897 7896 return 0; 7898 7897 } 7899 7898 7900 - const char *bpf_map__get_pin_path(const struct bpf_map *map) 7901 - { 7902 - return map->pin_path; 7903 - } 7899 + 
__alias(bpf_map__pin_path) 7900 + const char *bpf_map__get_pin_path(const struct bpf_map *map); 7904 7901 7905 7902 const char *bpf_map__pin_path(const struct bpf_map *map) 7906 7903 { ··· 8463 8464 return fd; 8464 8465 } 8465 8466 8466 - enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog) 8467 + __alias(bpf_program__type) 8468 + enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog); 8469 + 8470 + enum bpf_prog_type bpf_program__type(const struct bpf_program *prog) 8467 8471 { 8468 8472 return prog->type; 8469 8473 } ··· 8510 8508 BPF_PROG_TYPE_FNS(extension, BPF_PROG_TYPE_EXT); 8511 8509 BPF_PROG_TYPE_FNS(sk_lookup, BPF_PROG_TYPE_SK_LOOKUP); 8512 8510 8513 - enum bpf_attach_type 8514 - bpf_program__get_expected_attach_type(const struct bpf_program *prog) 8511 + __alias(bpf_program__expected_attach_type) 8512 + enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog); 8513 + 8514 + enum bpf_attach_type bpf_program__expected_attach_type(const struct bpf_program *prog) 8515 8515 { 8516 8516 return prog->expected_attach_type; 8517 8517 } ··· 8597 8593 SEC_DEF("kretprobe/", KPROBE, 0, SEC_NONE, attach_kprobe), 8598 8594 SEC_DEF("uretprobe/", KPROBE, 0, SEC_NONE), 8599 8595 SEC_DEF("tc", SCHED_CLS, 0, SEC_NONE), 8600 - SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE | SEC_SLOPPY_PFX), 8596 + SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE | SEC_SLOPPY_PFX | SEC_DEPRECATED), 8601 8597 SEC_DEF("action", SCHED_ACT, 0, SEC_NONE | SEC_SLOPPY_PFX), 8602 8598 SEC_DEF("tracepoint/", TRACEPOINT, 0, SEC_NONE, attach_tp), 8603 8599 SEC_DEF("tp/", TRACEPOINT, 0, SEC_NONE, attach_tp), ··· 8616 8612 SEC_DEF("lsm/", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm), 8617 8613 SEC_DEF("lsm.s/", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm), 8618 8614 SEC_DEF("iter/", TRACING, BPF_TRACE_ITER, SEC_ATTACH_BTF, attach_iter), 8615 + SEC_DEF("iter.s/", TRACING, BPF_TRACE_ITER, SEC_ATTACH_BTF | SEC_SLEEPABLE, 
attach_iter), 8619 8616 SEC_DEF("syscall", SYSCALL, 0, SEC_SLEEPABLE), 8620 8617 SEC_DEF("xdp.frags/devmap", XDP, BPF_XDP_DEVMAP, SEC_XDP_FRAGS), 8621 - SEC_DEF("xdp_devmap/", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE), 8618 + SEC_DEF("xdp/devmap", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE), 8619 + SEC_DEF("xdp_devmap/", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE | SEC_DEPRECATED), 8622 8620 SEC_DEF("xdp.frags/cpumap", XDP, BPF_XDP_CPUMAP, SEC_XDP_FRAGS), 8623 - SEC_DEF("xdp_cpumap/", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE), 8621 + SEC_DEF("xdp/cpumap", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE), 8622 + SEC_DEF("xdp_cpumap/", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE | SEC_DEPRECATED), 8624 8623 SEC_DEF("xdp.frags", XDP, BPF_XDP, SEC_XDP_FRAGS), 8625 8624 SEC_DEF("xdp", XDP, BPF_XDP, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 8626 8625 SEC_DEF("perf_event", PERF_EVENT, 0, SEC_NONE | SEC_SLOPPY_PFX), ··· 9466 9459 open_attr.file = attr->file; 9467 9460 open_attr.prog_type = attr->prog_type; 9468 9461 9469 - obj = bpf_object__open_xattr(&open_attr); 9462 + obj = __bpf_object__open_xattr(&open_attr, 0); 9470 9463 err = libbpf_get_error(obj); 9471 9464 if (err) 9472 9465 return libbpf_err(-ENOENT); ··· 9483 9476 bpf_program__set_expected_attach_type(prog, 9484 9477 attach_type); 9485 9478 } 9486 - if (bpf_program__get_type(prog) == BPF_PROG_TYPE_UNSPEC) { 9479 + if (bpf_program__type(prog) == BPF_PROG_TYPE_UNSPEC) { 9487 9480 /* 9488 9481 * we haven't guessed from section name and user 9489 9482 * didn't provide a fallback type, too bad... 
··· 9500 9493 } 9501 9494 9502 9495 bpf_object__for_each_map(map, obj) { 9503 - if (!bpf_map__is_offload_neutral(map)) 9496 + if (map->def.type != BPF_MAP_TYPE_PERF_EVENT_ARRAY) 9504 9497 map->map_ifindex = attr->ifindex; 9505 9498 } 9506 9499 ··· 10534 10527 return libbpf_err_ptr(-ENOMEM); 10535 10528 link->detach = &bpf_link__detach_fd; 10536 10529 10537 - attach_type = bpf_program__get_expected_attach_type(prog); 10530 + attach_type = bpf_program__expected_attach_type(prog); 10538 10531 link_fd = bpf_link_create(prog_fd, target_fd, attach_type, &opts); 10539 10532 if (link_fd < 0) { 10540 10533 link_fd = -errno;
+37 -4
tools/lib/bpf/libbpf.h
··· 180 180 const struct bpf_object_open_opts *opts); 181 181 182 182 /* deprecated bpf_object__open variants */ 183 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_object__open_mem() instead") 183 184 LIBBPF_API struct bpf_object * 184 185 bpf_object__open_buffer(const void *obj_buf, size_t obj_buf_sz, 185 186 const char *name); 187 + LIBBPF_DEPRECATED_SINCE(0, 7, "use bpf_object__open_file() instead") 186 188 LIBBPF_API struct bpf_object * 187 189 bpf_object__open_xattr(struct bpf_object_open_attr *attr); 188 190 ··· 246 244 (pos) = (tmp), (tmp) = bpf_object__next(tmp)) 247 245 248 246 typedef void (*bpf_object_clear_priv_t)(struct bpf_object *, void *); 247 + LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated") 249 248 LIBBPF_API int bpf_object__set_priv(struct bpf_object *obj, void *priv, 250 249 bpf_object_clear_priv_t clear_priv); 250 + LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated") 251 251 LIBBPF_API void *bpf_object__priv(const struct bpf_object *prog); 252 252 253 253 LIBBPF_API int ··· 281 277 282 278 typedef void (*bpf_program_clear_priv_t)(struct bpf_program *, void *); 283 279 280 + LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated") 284 281 LIBBPF_API int bpf_program__set_priv(struct bpf_program *prog, void *priv, 285 282 bpf_program_clear_priv_t clear_priv); 286 - 283 + LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated") 287 284 LIBBPF_API void *bpf_program__priv(const struct bpf_program *prog); 288 285 LIBBPF_API void bpf_program__set_ifindex(struct bpf_program *prog, 289 286 __u32 ifindex); ··· 596 591 /* 597 592 * Adjust type of BPF program. Default is kprobe. 
598 593 */ 594 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 599 595 LIBBPF_API int bpf_program__set_socket_filter(struct bpf_program *prog); 596 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 600 597 LIBBPF_API int bpf_program__set_tracepoint(struct bpf_program *prog); 598 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 601 599 LIBBPF_API int bpf_program__set_raw_tracepoint(struct bpf_program *prog); 600 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 602 601 LIBBPF_API int bpf_program__set_kprobe(struct bpf_program *prog); 602 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 603 603 LIBBPF_API int bpf_program__set_lsm(struct bpf_program *prog); 604 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 604 605 LIBBPF_API int bpf_program__set_sched_cls(struct bpf_program *prog); 606 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 605 607 LIBBPF_API int bpf_program__set_sched_act(struct bpf_program *prog); 608 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 606 609 LIBBPF_API int bpf_program__set_xdp(struct bpf_program *prog); 610 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 607 611 LIBBPF_API int bpf_program__set_perf_event(struct bpf_program *prog); 612 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 608 613 LIBBPF_API int bpf_program__set_tracing(struct bpf_program *prog); 614 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 609 615 LIBBPF_API int bpf_program__set_struct_ops(struct bpf_program *prog); 616 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 610 617 LIBBPF_API int bpf_program__set_extension(struct bpf_program *prog); 618 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__set_type() instead") 611 619 LIBBPF_API int bpf_program__set_sk_lookup(struct bpf_program *prog); 612 620 613 - 
LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog); 621 + LIBBPF_API enum bpf_prog_type bpf_program__type(const struct bpf_program *prog); 614 622 LIBBPF_API void bpf_program__set_type(struct bpf_program *prog, 615 623 enum bpf_prog_type type); 616 624 617 625 LIBBPF_API enum bpf_attach_type 618 - bpf_program__get_expected_attach_type(const struct bpf_program *prog); 626 + bpf_program__expected_attach_type(const struct bpf_program *prog); 619 627 LIBBPF_API void 620 628 bpf_program__set_expected_attach_type(struct bpf_program *prog, 621 629 enum bpf_attach_type type); ··· 649 631 bpf_program__set_attach_target(struct bpf_program *prog, int attach_prog_fd, 650 632 const char *attach_func_name); 651 633 634 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 652 635 LIBBPF_API bool bpf_program__is_socket_filter(const struct bpf_program *prog); 636 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 653 637 LIBBPF_API bool bpf_program__is_tracepoint(const struct bpf_program *prog); 638 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 654 639 LIBBPF_API bool bpf_program__is_raw_tracepoint(const struct bpf_program *prog); 640 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 655 641 LIBBPF_API bool bpf_program__is_kprobe(const struct bpf_program *prog); 642 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 656 643 LIBBPF_API bool bpf_program__is_lsm(const struct bpf_program *prog); 644 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 657 645 LIBBPF_API bool bpf_program__is_sched_cls(const struct bpf_program *prog); 646 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 658 647 LIBBPF_API bool bpf_program__is_sched_act(const struct bpf_program *prog); 648 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 659 649 LIBBPF_API bool bpf_program__is_xdp(const struct bpf_program *prog); 650 + 
LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 660 651 LIBBPF_API bool bpf_program__is_perf_event(const struct bpf_program *prog); 652 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 661 653 LIBBPF_API bool bpf_program__is_tracing(const struct bpf_program *prog); 654 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 662 655 LIBBPF_API bool bpf_program__is_struct_ops(const struct bpf_program *prog); 656 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 663 657 LIBBPF_API bool bpf_program__is_extension(const struct bpf_program *prog); 658 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_program__type() instead") 664 659 LIBBPF_API bool bpf_program__is_sk_lookup(const struct bpf_program *prog); 665 660 666 661 /* ··· 747 716 /* get/set map size (max_entries) */ 748 717 LIBBPF_API __u32 bpf_map__max_entries(const struct bpf_map *map); 749 718 LIBBPF_API int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries); 719 + LIBBPF_DEPRECATED_SINCE(0, 8, "use bpf_map__set_max_entries() instead") 750 720 LIBBPF_API int bpf_map__resize(struct bpf_map *map, __u32 max_entries); 751 721 /* get/set map flags */ 752 722 LIBBPF_API __u32 bpf_map__map_flags(const struct bpf_map *map); ··· 772 740 LIBBPF_API int bpf_map__set_map_extra(struct bpf_map *map, __u64 map_extra); 773 741 774 742 typedef void (*bpf_map_clear_priv_t)(struct bpf_map *, void *); 743 + LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated") 775 744 LIBBPF_API int bpf_map__set_priv(struct bpf_map *map, void *priv, 776 745 bpf_map_clear_priv_t clear_priv); 746 + LIBBPF_DEPRECATED_SINCE(0, 7, "storage via set_priv/priv is deprecated") 777 747 LIBBPF_API void *bpf_map__priv(const struct bpf_map *map); 778 748 LIBBPF_API int bpf_map__set_initial_value(struct bpf_map *map, 779 749 const void *data, size_t size); ··· 792 758 */ 793 759 LIBBPF_API bool bpf_map__is_internal(const struct bpf_map *map); 794 760 LIBBPF_API int 
bpf_map__set_pin_path(struct bpf_map *map, const char *path); 795 - LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map); 796 761 LIBBPF_API const char *bpf_map__pin_path(const struct bpf_map *map); 797 762 LIBBPF_API bool bpf_map__is_pinned(const struct bpf_map *map); 798 763 LIBBPF_API int bpf_map__pin(struct bpf_map *map, const char *path);
+2
tools/lib/bpf/libbpf.map
··· 424 424 LIBBPF_0.7.0 { 425 425 global: 426 426 bpf_btf_load; 427 + bpf_program__expected_attach_type; 427 428 bpf_program__log_buf; 428 429 bpf_program__log_level; 429 430 bpf_program__set_log_buf; 430 431 bpf_program__set_log_level; 432 + bpf_program__type; 431 433 bpf_xdp_attach; 432 434 bpf_xdp_detach; 433 435 bpf_xdp_query;
+3
tools/lib/bpf/libbpf_internal.h
··· 92 92 # define offsetofend(TYPE, FIELD) \ 93 93 (offsetof(TYPE, FIELD) + sizeof(((TYPE *)0)->FIELD)) 94 94 #endif 95 + #ifndef __alias 96 + #define __alias(symbol) __attribute__((alias(#symbol))) 97 + #endif 95 98 96 99 /* Check whether a string `str` has prefix `pfx`, regardless if `pfx` is 97 100 * a string literal known at compilation time or char * pointer known only at
+17
tools/lib/bpf/libbpf_legacy.h
··· 86 86 87 87 #define DECLARE_LIBBPF_OPTS LIBBPF_OPTS 88 88 89 + /* "Discouraged" APIs which don't follow consistent libbpf naming patterns. 90 + * They are normally trivial aliases or wrappers for proper APIs and are 91 + * left to minimize unnecessary disruption for users of libbpf. But they 92 + * shouldn't be used going forward. 93 + */ 94 + 95 + struct bpf_program; 96 + struct bpf_map; 97 + struct btf; 98 + struct btf_ext; 99 + 100 + LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog); 101 + LIBBPF_API enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog); 102 + LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map); 103 + LIBBPF_API const void *btf__get_raw_data(const struct btf *btf, __u32 *size); 104 + LIBBPF_API const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext, __u32 *size); 105 + 89 106 #ifdef __cplusplus 90 107 } /* extern "C" */ 91 108 #endif
+68 -2
tools/lib/bpf/skel_internal.h
··· 70 70 return -EINVAL; 71 71 } 72 72 73 + #ifndef offsetofend 74 + #define offsetofend(TYPE, MEMBER) \ 75 + (offsetof(TYPE, MEMBER) + sizeof((((TYPE *)0)->MEMBER))) 76 + #endif 77 + 78 + static inline int skel_map_create(enum bpf_map_type map_type, 79 + const char *map_name, 80 + __u32 key_size, 81 + __u32 value_size, 82 + __u32 max_entries) 83 + { 84 + const size_t attr_sz = offsetofend(union bpf_attr, map_extra); 85 + union bpf_attr attr; 86 + 87 + memset(&attr, 0, attr_sz); 88 + 89 + attr.map_type = map_type; 90 + strncpy(attr.map_name, map_name, sizeof(attr.map_name)); 91 + attr.key_size = key_size; 92 + attr.value_size = value_size; 93 + attr.max_entries = max_entries; 94 + 95 + return skel_sys_bpf(BPF_MAP_CREATE, &attr, attr_sz); 96 + } 97 + 98 + static inline int skel_map_update_elem(int fd, const void *key, 99 + const void *value, __u64 flags) 100 + { 101 + const size_t attr_sz = offsetofend(union bpf_attr, flags); 102 + union bpf_attr attr; 103 + 104 + memset(&attr, 0, attr_sz); 105 + attr.map_fd = fd; 106 + attr.key = (long) key; 107 + attr.value = (long) value; 108 + attr.flags = flags; 109 + 110 + return skel_sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, attr_sz); 111 + } 112 + 113 + static inline int skel_raw_tracepoint_open(const char *name, int prog_fd) 114 + { 115 + const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint.prog_fd); 116 + union bpf_attr attr; 117 + 118 + memset(&attr, 0, attr_sz); 119 + attr.raw_tracepoint.name = (long) name; 120 + attr.raw_tracepoint.prog_fd = prog_fd; 121 + 122 + return skel_sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, attr_sz); 123 + } 124 + 125 + static inline int skel_link_create(int prog_fd, int target_fd, 126 + enum bpf_attach_type attach_type) 127 + { 128 + const size_t attr_sz = offsetofend(union bpf_attr, link_create.iter_info_len); 129 + union bpf_attr attr; 130 + 131 + memset(&attr, 0, attr_sz); 132 + attr.link_create.prog_fd = prog_fd; 133 + attr.link_create.target_fd = target_fd; 134 + 
attr.link_create.attach_type = attach_type; 135 + 136 + return skel_sys_bpf(BPF_LINK_CREATE, &attr, attr_sz); 137 + } 138 + 73 139 static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts) 74 140 { 75 141 int map_fd = -1, prog_fd = -1, key = 0, err; 76 142 union bpf_attr attr; 77 143 78 - map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1, NULL); 144 + map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1); 79 145 if (map_fd < 0) { 80 146 opts->errstr = "failed to create loader map"; 81 147 err = -errno; 82 148 goto out; 83 149 } 84 150 85 - err = bpf_map_update_elem(map_fd, &key, opts->data, 0); 151 + err = skel_map_update_elem(map_fd, &key, opts->data, 0); 86 152 if (err < 0) { 87 153 opts->errstr = "failed to update loader map"; 88 154 err = -errno;
+1 -1
tools/perf/tests/llvm.c
··· 13 13 { 14 14 struct bpf_object *obj; 15 15 16 - obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, NULL); 16 + obj = bpf_object__open_mem(obj_buf, obj_buf_sz, NULL); 17 17 if (libbpf_get_error(obj)) 18 18 return TEST_FAIL; 19 19 bpf_object__close(obj);
+6 -4
tools/perf/util/bpf-loader.c
··· 54 54 struct bpf_object * 55 55 bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz, const char *name) 56 56 { 57 + LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = name); 57 58 struct bpf_object *obj; 58 59 59 60 if (!libbpf_initialized) { ··· 62 61 libbpf_initialized = true; 63 62 } 64 63 65 - obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, name); 64 + obj = bpf_object__open_mem(obj_buf, obj_buf_sz, &opts); 66 65 if (IS_ERR_OR_NULL(obj)) { 67 66 pr_debug("bpf: failed to load buffer\n"); 68 67 return ERR_PTR(-EINVAL); ··· 73 72 74 73 struct bpf_object *bpf__prepare_load(const char *filename, bool source) 75 74 { 75 + LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = filename); 76 76 struct bpf_object *obj; 77 77 78 78 if (!libbpf_initialized) { ··· 96 94 return ERR_PTR(-BPF_LOADER_ERRNO__COMPILE); 97 95 } else 98 96 pr_debug("bpf: successful builtin compilation\n"); 99 - obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, filename); 97 + obj = bpf_object__open_mem(obj_buf, obj_buf_sz, &opts); 100 98 101 99 if (!IS_ERR_OR_NULL(obj) && llvm_param.dump_obj) 102 100 llvm__dump_obj(filename, obj_buf, obj_buf_sz); ··· 656 654 } 657 655 658 656 if (priv->is_tp) { 659 - bpf_program__set_tracepoint(prog); 657 + bpf_program__set_type(prog, BPF_PROG_TYPE_TRACEPOINT); 660 658 continue; 661 659 } 662 660 663 - bpf_program__set_kprobe(prog); 661 + bpf_program__set_type(prog, BPF_PROG_TYPE_KPROBE); 664 662 pev = &priv->pev; 665 663 666 664 err = convert_perf_probe_events(pev, 1);
+1 -1
tools/testing/selftests/bpf/Makefile
··· 330 330 331 331 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ 332 332 test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \ 333 - map_ptr_kern.c core_kern.c 333 + map_ptr_kern.c core_kern.c core_kern_overflow.c 334 334 # Generate both light skeleton and libbpf skeleton for these 335 335 LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test_subprog.c 336 336 SKEL_BLACKLIST += $$(LSKELS)
+2
tools/testing/selftests/bpf/README.rst
··· 206 206 207 207 The btf_tag selftest requires LLVM support to recognize the btf_decl_tag and 208 208 btf_type_tag attributes. They are introduced in `Clang 14` [0_, 1_]. 209 + The subtests ``btf_type_tag_user_{mod1, mod2, vmlinux}`` also require 210 + pahole version ``1.23``. 209 211 210 212 Without them, the btf_tag selftest will be skipped and you will observe:
+1 -1
tools/testing/selftests/bpf/benchs/bench_ringbufs.c
··· 151 151 /* record data + header take 16 bytes */ 152 152 skel->rodata->wakeup_data_size = args.sample_rate * 16; 153 153 154 - bpf_map__resize(skel->maps.ringbuf, args.ringbuf_sz); 154 + bpf_map__set_max_entries(skel->maps.ringbuf, args.ringbuf_sz); 155 155 156 156 if (ringbuf_bench__load(skel)) { 157 157 fprintf(stderr, "failed to load skeleton\n");
+2 -4
tools/testing/selftests/bpf/benchs/bench_trigger.c
··· 154 154 static void usetup(bool use_retprobe, bool use_nop) 155 155 { 156 156 size_t uprobe_offset; 157 - ssize_t base_addr; 158 157 struct bpf_link *link; 159 158 160 159 setup_libbpf(); ··· 164 165 exit(1); 165 166 } 166 167 167 - base_addr = get_base_addr(); 168 168 if (use_nop) 169 - uprobe_offset = get_uprobe_offset(&uprobe_target_with_nop, base_addr); 169 + uprobe_offset = get_uprobe_offset(&uprobe_target_with_nop); 170 170 else 171 - uprobe_offset = get_uprobe_offset(&uprobe_target_without_nop, base_addr); 171 + uprobe_offset = get_uprobe_offset(&uprobe_target_without_nop); 172 172 173 173 link = bpf_program__attach_uprobe(ctx.skel->progs.bench_trigger_uprobe, 174 174 use_retprobe,
+25
tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
··· 13 13 #define CREATE_TRACE_POINTS 14 14 #include "bpf_testmod-events.h" 15 15 16 + typedef int (*func_proto_typedef)(long); 17 + typedef int (*func_proto_typedef_nested1)(func_proto_typedef); 18 + typedef int (*func_proto_typedef_nested2)(func_proto_typedef_nested1); 19 + 16 20 DEFINE_PER_CPU(int, bpf_testmod_ksym_percpu) = 123; 17 21 18 22 noinline void 19 23 bpf_testmod_test_mod_kfunc(int i) 20 24 { 21 25 *(int *)this_cpu_ptr(&bpf_testmod_ksym_percpu) = i; 26 + } 27 + 28 + struct bpf_testmod_btf_type_tag_1 { 29 + int a; 30 + }; 31 + 32 + struct bpf_testmod_btf_type_tag_2 { 33 + struct bpf_testmod_btf_type_tag_1 __user *p; 34 + }; 35 + 36 + noinline int 37 + bpf_testmod_test_btf_type_tag_user_1(struct bpf_testmod_btf_type_tag_1 __user *arg) { 38 + BTF_TYPE_EMIT(func_proto_typedef); 39 + BTF_TYPE_EMIT(func_proto_typedef_nested1); 40 + BTF_TYPE_EMIT(func_proto_typedef_nested2); 41 + return arg->a; 42 + } 43 + 44 + noinline int 45 + bpf_testmod_test_btf_type_tag_user_2(struct bpf_testmod_btf_type_tag_2 *arg) { 46 + return arg->p->a; 22 47 } 23 48 24 49 noinline int bpf_testmod_loop_test(int n)
+35 -37
tools/testing/selftests/bpf/prog_tests/atomics.c
··· 7 7 static void test_add(struct atomics_lskel *skel) 8 8 { 9 9 int err, prog_fd; 10 - __u32 duration = 0, retval; 11 10 int link_fd; 11 + LIBBPF_OPTS(bpf_test_run_opts, topts); 12 12 13 13 link_fd = atomics_lskel__add__attach(skel); 14 14 if (!ASSERT_GT(link_fd, 0, "attach(add)")) 15 15 return; 16 16 17 17 prog_fd = skel->progs.add.prog_fd; 18 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 19 - NULL, NULL, &retval, &duration); 20 - if (CHECK(err || retval, "test_run add", 21 - "err %d errno %d retval %d duration %d\n", err, errno, retval, duration)) 18 + err = bpf_prog_test_run_opts(prog_fd, &topts); 19 + if (!ASSERT_OK(err, "test_run_opts err")) 20 + goto cleanup; 21 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 22 22 goto cleanup; 23 23 24 24 ASSERT_EQ(skel->data->add64_value, 3, "add64_value"); ··· 39 39 static void test_sub(struct atomics_lskel *skel) 40 40 { 41 41 int err, prog_fd; 42 - __u32 duration = 0, retval; 43 42 int link_fd; 43 + LIBBPF_OPTS(bpf_test_run_opts, topts); 44 44 45 45 link_fd = atomics_lskel__sub__attach(skel); 46 46 if (!ASSERT_GT(link_fd, 0, "attach(sub)")) 47 47 return; 48 48 49 49 prog_fd = skel->progs.sub.prog_fd; 50 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 51 - NULL, NULL, &retval, &duration); 52 - if (CHECK(err || retval, "test_run sub", 53 - "err %d errno %d retval %d duration %d\n", 54 - err, errno, retval, duration)) 50 + err = bpf_prog_test_run_opts(prog_fd, &topts); 51 + if (!ASSERT_OK(err, "test_run_opts err")) 52 + goto cleanup; 53 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 55 54 goto cleanup; 56 55 57 56 ASSERT_EQ(skel->data->sub64_value, -1, "sub64_value"); ··· 71 72 static void test_and(struct atomics_lskel *skel) 72 73 { 73 74 int err, prog_fd; 74 - __u32 duration = 0, retval; 75 75 int link_fd; 76 + LIBBPF_OPTS(bpf_test_run_opts, topts); 76 77 77 78 link_fd = atomics_lskel__and__attach(skel); 78 79 if (!ASSERT_GT(link_fd, 0, "attach(and)")) 79 80 return; 80 81 81 82 prog_fd = 
skel->progs.and.prog_fd; 82 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 83 - NULL, NULL, &retval, &duration); 84 - if (CHECK(err || retval, "test_run and", 85 - "err %d errno %d retval %d duration %d\n", err, errno, retval, duration)) 83 + err = bpf_prog_test_run_opts(prog_fd, &topts); 84 + if (!ASSERT_OK(err, "test_run_opts err")) 85 + goto cleanup; 86 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 86 87 goto cleanup; 87 88 88 89 ASSERT_EQ(skel->data->and64_value, 0x010ull << 32, "and64_value"); ··· 99 100 static void test_or(struct atomics_lskel *skel) 100 101 { 101 102 int err, prog_fd; 102 - __u32 duration = 0, retval; 103 103 int link_fd; 104 + LIBBPF_OPTS(bpf_test_run_opts, topts); 104 105 105 106 link_fd = atomics_lskel__or__attach(skel); 106 107 if (!ASSERT_GT(link_fd, 0, "attach(or)")) 107 108 return; 108 109 109 110 prog_fd = skel->progs.or.prog_fd; 110 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 111 - NULL, NULL, &retval, &duration); 112 - if (CHECK(err || retval, "test_run or", 113 - "err %d errno %d retval %d duration %d\n", 114 - err, errno, retval, duration)) 111 + err = bpf_prog_test_run_opts(prog_fd, &topts); 112 + if (!ASSERT_OK(err, "test_run_opts err")) 113 + goto cleanup; 114 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 115 115 goto cleanup; 116 116 117 117 ASSERT_EQ(skel->data->or64_value, 0x111ull << 32, "or64_value"); ··· 127 129 static void test_xor(struct atomics_lskel *skel) 128 130 { 129 131 int err, prog_fd; 130 - __u32 duration = 0, retval; 131 132 int link_fd; 133 + LIBBPF_OPTS(bpf_test_run_opts, topts); 132 134 133 135 link_fd = atomics_lskel__xor__attach(skel); 134 136 if (!ASSERT_GT(link_fd, 0, "attach(xor)")) 135 137 return; 136 138 137 139 prog_fd = skel->progs.xor.prog_fd; 138 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 139 - NULL, NULL, &retval, &duration); 140 - if (CHECK(err || retval, "test_run xor", 141 - "err %d errno %d retval %d duration %d\n", err, errno, retval, duration)) 140 + err = 
bpf_prog_test_run_opts(prog_fd, &topts); 141 + if (!ASSERT_OK(err, "test_run_opts err")) 142 + goto cleanup; 143 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 142 144 goto cleanup; 143 145 144 146 ASSERT_EQ(skel->data->xor64_value, 0x101ull << 32, "xor64_value"); ··· 155 157 static void test_cmpxchg(struct atomics_lskel *skel) 156 158 { 157 159 int err, prog_fd; 158 - __u32 duration = 0, retval; 159 160 int link_fd; 161 + LIBBPF_OPTS(bpf_test_run_opts, topts); 160 162 161 163 link_fd = atomics_lskel__cmpxchg__attach(skel); 162 164 if (!ASSERT_GT(link_fd, 0, "attach(cmpxchg)")) 163 165 return; 164 166 165 167 prog_fd = skel->progs.cmpxchg.prog_fd; 166 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 167 - NULL, NULL, &retval, &duration); 168 - if (CHECK(err || retval, "test_run cmpxchg", 169 - "err %d errno %d retval %d duration %d\n", err, errno, retval, duration)) 168 + err = bpf_prog_test_run_opts(prog_fd, &topts); 169 + if (!ASSERT_OK(err, "test_run_opts err")) 170 + goto cleanup; 171 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 170 172 goto cleanup; 171 173 172 174 ASSERT_EQ(skel->data->cmpxchg64_value, 2, "cmpxchg64_value"); ··· 184 186 static void test_xchg(struct atomics_lskel *skel) 185 187 { 186 188 int err, prog_fd; 187 - __u32 duration = 0, retval; 188 189 int link_fd; 190 + LIBBPF_OPTS(bpf_test_run_opts, topts); 189 191 190 192 link_fd = atomics_lskel__xchg__attach(skel); 191 193 if (!ASSERT_GT(link_fd, 0, "attach(xchg)")) 192 194 return; 193 195 194 196 prog_fd = skel->progs.xchg.prog_fd; 195 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 196 - NULL, NULL, &retval, &duration); 197 - if (CHECK(err || retval, "test_run xchg", 198 - "err %d errno %d retval %d duration %d\n", err, errno, retval, duration)) 197 + err = bpf_prog_test_run_opts(prog_fd, &topts); 198 + if (!ASSERT_OK(err, "test_run_opts err")) 199 + goto cleanup; 200 + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) 199 201 goto cleanup; 200 202 201 203 
ASSERT_EQ(skel->data->xchg64_value, 2, "xchg64_value");
+8 -10
tools/testing/selftests/bpf/prog_tests/attach_probe.c
··· 5 5 /* this is how USDT semaphore is actually defined, except volatile modifier */ 6 6 volatile unsigned short uprobe_ref_ctr __attribute__((unused)) __attribute((section(".probes"))); 7 7 8 - /* attach point */ 9 - static void method(void) { 10 - return ; 8 + /* uprobe attach point */ 9 + static void trigger_func(void) 10 + { 11 + asm volatile (""); 11 12 } 12 13 13 14 void test_attach_probe(void) ··· 18 17 struct bpf_link *kprobe_link, *kretprobe_link; 19 18 struct bpf_link *uprobe_link, *uretprobe_link; 20 19 struct test_attach_probe* skel; 21 - size_t uprobe_offset; 22 - ssize_t base_addr, ref_ctr_offset; 20 + ssize_t uprobe_offset, ref_ctr_offset; 23 21 bool legacy; 24 22 25 23 /* Check if new-style kprobe/uprobe API is supported. ··· 34 34 */ 35 35 legacy = access("/sys/bus/event_source/devices/kprobe/type", F_OK) != 0; 36 36 37 - base_addr = get_base_addr(); 38 - if (CHECK(base_addr < 0, "get_base_addr", 39 - "failed to find base addr: %zd", base_addr)) 37 + uprobe_offset = get_uprobe_offset(&trigger_func); 38 + if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset")) 40 39 return; 41 - uprobe_offset = get_uprobe_offset(&method, base_addr); 42 40 43 41 ref_ctr_offset = get_rel_offset((uintptr_t)&uprobe_ref_ctr); 44 42 if (!ASSERT_GE(ref_ctr_offset, 0, "ref_ctr_offset")) ··· 101 103 goto cleanup; 102 104 103 105 /* trigger & validate uprobe & uretprobe */ 104 - method(); 106 + trigger_func(); 105 107 106 108 if (CHECK(skel->bss->uprobe_res != 3, "check_uprobe_res", 107 109 "wrong uprobe res: %d\n", skel->bss->uprobe_res))
+11 -5
tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
··· 8 8 #include <test_progs.h> 9 9 #include "test_bpf_cookie.skel.h" 10 10 11 + /* uprobe attach point */ 12 + static void trigger_func(void) 13 + { 14 + asm volatile (""); 15 + } 16 + 11 17 static void kprobe_subtest(struct test_bpf_cookie *skel) 12 18 { 13 19 DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts); ··· 68 62 DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts); 69 63 struct bpf_link *link1 = NULL, *link2 = NULL; 70 64 struct bpf_link *retlink1 = NULL, *retlink2 = NULL; 71 - size_t uprobe_offset; 72 - ssize_t base_addr; 65 + ssize_t uprobe_offset; 73 66 74 - base_addr = get_base_addr(); 75 - uprobe_offset = get_uprobe_offset(&get_base_addr, base_addr); 67 + uprobe_offset = get_uprobe_offset(&trigger_func); 68 + if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset")) 69 + goto cleanup; 76 70 77 71 /* attach two uprobes */ 78 72 opts.bpf_cookie = 0x100; ··· 105 99 goto cleanup; 106 100 107 101 /* trigger uprobe && uretprobe */ 108 - get_base_addr(); 102 + trigger_func(); 109 103 110 104 ASSERT_EQ(skel->bss->uprobe_res, 0x100 | 0x200, "uprobe_res"); 111 105 ASSERT_EQ(skel->bss->uretprobe_res, 0x1000 | 0x2000, "uretprobe_res");
+20
tools/testing/selftests/bpf/prog_tests/bpf_iter.c
··· 138 138 bpf_iter_task__destroy(skel); 139 139 } 140 140 141 + static void test_task_sleepable(void) 142 + { 143 + struct bpf_iter_task *skel; 144 + 145 + skel = bpf_iter_task__open_and_load(); 146 + if (!ASSERT_OK_PTR(skel, "bpf_iter_task__open_and_load")) 147 + return; 148 + 149 + do_dummy_read(skel->progs.dump_task_sleepable); 150 + 151 + ASSERT_GT(skel->bss->num_expected_failure_copy_from_user_task, 0, 152 + "num_expected_failure_copy_from_user_task"); 153 + ASSERT_GT(skel->bss->num_success_copy_from_user_task, 0, 154 + "num_success_copy_from_user_task"); 155 + 156 + bpf_iter_task__destroy(skel); 157 + } 158 + 141 159 static void test_task_stack(void) 142 160 { 143 161 struct bpf_iter_task_stack *skel; ··· 1270 1252 test_bpf_map(); 1271 1253 if (test__start_subtest("task")) 1272 1254 test_task(); 1255 + if (test__start_subtest("task_sleepable")) 1256 + test_task_sleepable(); 1273 1257 if (test__start_subtest("task_stack")) 1274 1258 test_task_stack(); 1275 1259 if (test__start_subtest("task_file"))
+7 -3
tools/testing/selftests/bpf/prog_tests/bpf_nf.c
··· 11 11 void test_bpf_nf_ct(int mode) 12 12 { 13 13 struct test_bpf_nf *skel; 14 - int prog_fd, err, retval; 14 + int prog_fd, err; 15 + LIBBPF_OPTS(bpf_test_run_opts, topts, 16 + .data_in = &pkt_v4, 17 + .data_size_in = sizeof(pkt_v4), 18 + .repeat = 1, 19 + ); 15 20 16 21 skel = test_bpf_nf__open_and_load(); 17 22 if (!ASSERT_OK_PTR(skel, "test_bpf_nf__open_and_load")) ··· 27 22 else 28 23 prog_fd = bpf_program__fd(skel->progs.nf_skb_ct_test); 29 24 30 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL, 31 - (__u32 *)&retval, NULL); 25 + err = bpf_prog_test_run_opts(prog_fd, &topts); 32 26 if (!ASSERT_OK(err, "bpf_prog_test_run")) 33 27 goto end; 34 28
+20 -1
tools/testing/selftests/bpf/prog_tests/btf.c
··· 3939 3939 .err_str = "Invalid component_idx", 3940 3940 }, 3941 3941 { 3942 + .descr = "decl_tag test #15, func, invalid func proto", 3943 + .raw_types = { 3944 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 3945 + BTF_DECL_TAG_ENC(NAME_TBD, 3, 0), /* [2] */ 3946 + BTF_FUNC_ENC(NAME_TBD, 8), /* [3] */ 3947 + BTF_END_RAW, 3948 + }, 3949 + BTF_STR_SEC("\0tag\0func"), 3950 + .map_type = BPF_MAP_TYPE_ARRAY, 3951 + .map_name = "tag_type_check_btf", 3952 + .key_size = sizeof(int), 3953 + .value_size = 4, 3954 + .key_type_id = 1, 3955 + .value_type_id = 1, 3956 + .max_entries = 1, 3957 + .btf_load_err = true, 3958 + .err_str = "Invalid type_id", 3959 + }, 3960 + { 3942 3961 .descr = "type_tag test #1", 3943 3962 .raw_types = { 3944 3963 BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ ··· 4580 4561 btf_ext__free(btf_ext); 4581 4562 4582 4563 /* temporary disable LIBBPF_STRICT_MAP_DEFINITIONS to test legacy maps */ 4583 - libbpf_set_strict_mode((__LIBBPF_STRICT_LAST - 1) & ~LIBBPF_STRICT_MAP_DEFINITIONS); 4564 + libbpf_set_strict_mode(LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS); 4584 4565 obj = bpf_object__open(test->file); 4585 4566 err = libbpf_get_error(obj); 4586 4567 if (CHECK(err, "obj: %d", err))
+97 -4
tools/testing/selftests/bpf/prog_tests/btf_tag.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright (c) 2021 Facebook */ 3 3 #include <test_progs.h> 4 - #include "btf_decl_tag.skel.h" 4 + #include <bpf/btf.h> 5 + #include "test_btf_decl_tag.skel.h" 5 6 6 7 /* struct btf_type_tag_test is referenced in btf_type_tag.skel.h */ 7 8 struct btf_type_tag_test { 8 9 int **p; 9 10 }; 10 11 #include "btf_type_tag.skel.h" 12 + #include "btf_type_tag_user.skel.h" 11 13 12 14 static void test_btf_decl_tag(void) 13 15 { 14 - struct btf_decl_tag *skel; 16 + struct test_btf_decl_tag *skel; 15 17 16 - skel = btf_decl_tag__open_and_load(); 18 + skel = test_btf_decl_tag__open_and_load(); 17 19 if (!ASSERT_OK_PTR(skel, "btf_decl_tag")) 18 20 return; 19 21 ··· 24 22 test__skip(); 25 23 } 26 24 27 - btf_decl_tag__destroy(skel); 25 + test_btf_decl_tag__destroy(skel); 28 26 } 29 27 30 28 static void test_btf_type_tag(void) ··· 43 41 btf_type_tag__destroy(skel); 44 42 } 45 43 44 + static void test_btf_type_tag_mod_user(bool load_test_user1) 45 + { 46 + const char *module_name = "bpf_testmod"; 47 + struct btf *vmlinux_btf, *module_btf; 48 + struct btf_type_tag_user *skel; 49 + __s32 type_id; 50 + int err; 51 + 52 + if (!env.has_testmod) { 53 + test__skip(); 54 + return; 55 + } 56 + 57 + /* skip the test if the module does not have __user tags */ 58 + vmlinux_btf = btf__load_vmlinux_btf(); 59 + if (!ASSERT_OK_PTR(vmlinux_btf, "could not load vmlinux BTF")) 60 + return; 61 + 62 + module_btf = btf__load_module_btf(module_name, vmlinux_btf); 63 + if (!ASSERT_OK_PTR(module_btf, "could not load module BTF")) 64 + goto free_vmlinux_btf; 65 + 66 + type_id = btf__find_by_name_kind(module_btf, "user", BTF_KIND_TYPE_TAG); 67 + if (type_id <= 0) { 68 + printf("%s:SKIP: btf_type_tag attribute not in %s", __func__, module_name); 69 + test__skip(); 70 + goto free_module_btf; 71 + } 72 + 73 + skel = btf_type_tag_user__open(); 74 + if (!ASSERT_OK_PTR(skel, "btf_type_tag_user")) 75 + goto free_module_btf; 76 + 77 + bpf_program__set_autoload(skel->progs.test_sys_getsockname, false); 78 + if (load_test_user1) 79 + bpf_program__set_autoload(skel->progs.test_user2, false); 80 + else 81 + bpf_program__set_autoload(skel->progs.test_user1, false); 82 + 83 + err = btf_type_tag_user__load(skel); 84 + ASSERT_ERR(err, "btf_type_tag_user"); 85 + 86 + btf_type_tag_user__destroy(skel); 87 + 88 + free_module_btf: 89 + btf__free(module_btf); 90 + free_vmlinux_btf: 91 + btf__free(vmlinux_btf); 92 + } 93 + 94 + static void test_btf_type_tag_vmlinux_user(void) 95 + { 96 + struct btf_type_tag_user *skel; 97 + struct btf *vmlinux_btf; 98 + __s32 type_id; 99 + int err; 100 + 101 + /* skip the test if the vmlinux does not have __user tags */ 102 + vmlinux_btf = btf__load_vmlinux_btf(); 103 + if (!ASSERT_OK_PTR(vmlinux_btf, "could not load vmlinux BTF")) 104 + return; 105 + 106 + type_id = btf__find_by_name_kind(vmlinux_btf, "user", BTF_KIND_TYPE_TAG); 107 + if (type_id <= 0) { 108 + printf("%s:SKIP: btf_type_tag attribute not in vmlinux btf", __func__); 109 + test__skip(); 110 + goto free_vmlinux_btf; 111 + } 112 + 113 + skel = btf_type_tag_user__open(); 114 + if (!ASSERT_OK_PTR(skel, "btf_type_tag_user")) 115 + goto free_vmlinux_btf; 116 + 117 + bpf_program__set_autoload(skel->progs.test_user2, false); 118 + bpf_program__set_autoload(skel->progs.test_user1, false); 119 + 120 + err = btf_type_tag_user__load(skel); 121 + ASSERT_ERR(err, "btf_type_tag_user"); 122 + 123 + btf_type_tag_user__destroy(skel); 124 + 125 + free_vmlinux_btf: 126 + btf__free(vmlinux_btf); 127 + } 128 + 46 129 void test_btf_tag(void) 47 130 { 48 131 if (test__start_subtest("btf_decl_tag")) 49 132 test_btf_decl_tag(); 50 133 if (test__start_subtest("btf_type_tag")) 51 134 test_btf_type_tag(); 135 + if (test__start_subtest("btf_type_tag_user_mod1")) 136 + test_btf_type_tag_mod_user(true); 137 + if (test__start_subtest("btf_type_tag_user_mod2")) 138 + test_btf_type_tag_mod_user(false); 139 + if (test__start_subtest("btf_type_tag_sys_user_vmlinux")) 140 + test_btf_type_tag_vmlinux_user(); 52 141 }
+13 -27
tools/testing/selftests/bpf/prog_tests/check_mtu.c
··· 79 79 struct bpf_program *prog, 80 80 __u32 mtu_expect) 81 81 { 82 - const char *prog_name = bpf_program__name(prog); 83 82 int retval_expect = XDP_PASS; 84 83 __u32 mtu_result = 0; 85 84 char buf[256] = {}; 86 - int err; 87 - struct bpf_prog_test_run_attr tattr = { 85 + int err, prog_fd = bpf_program__fd(prog); 86 + LIBBPF_OPTS(bpf_test_run_opts, topts, 88 87 .repeat = 1, 89 88 .data_in = &pkt_v4, 90 89 .data_size_in = sizeof(pkt_v4), 91 90 .data_out = buf, 92 91 .data_size_out = sizeof(buf), 93 - .prog_fd = bpf_program__fd(prog), 94 - }; 92 + ); 95 93 96 - err = bpf_prog_test_run_xattr(&tattr); 97 - CHECK_ATTR(err != 0, "bpf_prog_test_run", 98 - "prog_name:%s (err %d errno %d retval %d)\n", 99 - prog_name, err, errno, tattr.retval); 100 - 101 - CHECK(tattr.retval != retval_expect, "retval", 102 - "progname:%s unexpected retval=%d expected=%d\n", 103 - prog_name, tattr.retval, retval_expect); 94 + err = bpf_prog_test_run_opts(prog_fd, &topts); 95 + ASSERT_OK(err, "test_run"); 96 + ASSERT_EQ(topts.retval, retval_expect, "retval"); 104 97 105 98 /* Extract MTU that BPF-prog got */ 106 99 mtu_result = skel->bss->global_bpf_mtu_xdp; ··· 132 139 struct bpf_program *prog, 133 140 __u32 mtu_expect) 134 141 { 135 - const char *prog_name = bpf_program__name(prog); 136 142 int retval_expect = BPF_OK; 137 143 __u32 mtu_result = 0; 138 144 char buf[256] = {}; 139 - int err; 140 - struct bpf_prog_test_run_attr tattr = { 141 - .repeat = 1, 145 + int err, prog_fd = bpf_program__fd(prog); 146 + LIBBPF_OPTS(bpf_test_run_opts, topts, 142 147 .data_in = &pkt_v4, 143 148 .data_size_in = sizeof(pkt_v4), 144 149 .data_out = buf, 145 150 .data_size_out = sizeof(buf), 146 - .prog_fd = bpf_program__fd(prog), 147 - }; 151 + .repeat = 1, 152 + ); 148 153 149 - err = bpf_prog_test_run_xattr(&tattr); 150 - CHECK_ATTR(err != 0, "bpf_prog_test_run", 151 - "prog_name:%s (err %d errno %d retval %d)\n", 152 - prog_name, err, errno, tattr.retval); 153 - 154 - CHECK(tattr.retval != retval_expect, "retval", 155 - "progname:%s unexpected retval=%d expected=%d\n", 156 - prog_name, tattr.retval, retval_expect); 154 + err = bpf_prog_test_run_opts(prog_fd, &topts); 155 + ASSERT_OK(err, "test_run"); 156 + ASSERT_EQ(topts.retval, retval_expect, "retval"); 157 157 158 158 /* Extract MTU that BPF-prog got */ 159 159 mtu_result = skel->bss->global_bpf_mtu_tc;
+5 -5
tools/testing/selftests/bpf/prog_tests/cls_redirect.c
··· 161 161 } 162 162 } 163 163 164 - static bool was_decapsulated(struct bpf_prog_test_run_attr *tattr) 164 + static bool was_decapsulated(struct bpf_test_run_opts *tattr) 165 165 { 166 166 return tattr->data_size_out < tattr->data_size_in; 167 167 } ··· 367 367 368 368 static void test_cls_redirect_common(struct bpf_program *prog) 369 369 { 370 - struct bpf_prog_test_run_attr tattr = {}; 370 + LIBBPF_OPTS(bpf_test_run_opts, tattr); 371 371 int families[] = { AF_INET, AF_INET6 }; 372 372 struct sockaddr_storage ss; 373 373 struct sockaddr *addr; 374 374 socklen_t slen; 375 - int i, j, err; 375 + int i, j, err, prog_fd; 376 376 int servers[__NR_KIND][ARRAY_SIZE(families)] = {}; 377 377 int conns[__NR_KIND][ARRAY_SIZE(families)] = {}; 378 378 struct tuple tuples[__NR_KIND][ARRAY_SIZE(families)]; ··· 394 394 goto cleanup; 395 395 } 396 396 397 - tattr.prog_fd = bpf_program__fd(prog); 397 + prog_fd = bpf_program__fd(prog); 398 398 for (i = 0; i < ARRAY_SIZE(tests); i++) { 399 399 struct test_cfg *test = &tests[i]; 400 400 ··· 415 415 if (CHECK_FAIL(!tattr.data_size_in)) 416 416 continue; 417 417 418 - err = bpf_prog_test_run_xattr(&tattr); 418 + err = bpf_prog_test_run_opts(prog_fd, &tattr); 419 419 if (CHECK_FAIL(err)) 420 420 continue; 421 421
+15 -1
tools/testing/selftests/bpf/prog_tests/core_kern.c
··· 7 7 void test_core_kern_lskel(void) 8 8 { 9 9 struct core_kern_lskel *skel; 10 + int link_fd; 10 11 11 12 skel = core_kern_lskel__open_and_load(); 12 - ASSERT_OK_PTR(skel, "open_and_load"); 13 + if (!ASSERT_OK_PTR(skel, "open_and_load")) 14 + return; 15 + 16 + link_fd = core_kern_lskel__core_relo_proto__attach(skel); 17 + if (!ASSERT_GT(link_fd, 0, "attach(core_relo_proto)")) 18 + goto cleanup; 19 + 20 + /* trigger tracepoints */ 21 + usleep(1); 22 + ASSERT_TRUE(skel->bss->proto_out[0], "bpf_core_type_exists"); 23 + ASSERT_FALSE(skel->bss->proto_out[1], "!bpf_core_type_exists"); 24 + ASSERT_TRUE(skel->bss->proto_out[2], "bpf_core_type_exists. nested"); 25 + 26 + cleanup: 13 27 core_kern_lskel__destroy(skel); 14 28 }
+13
tools/testing/selftests/bpf/prog_tests/core_kern_overflow.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "test_progs.h" 4 + #include "core_kern_overflow.lskel.h" 5 + 6 + void test_core_kern_overflow_lskel(void) 7 + { 8 + struct core_kern_overflow_lskel *skel; 9 + 10 + skel = core_kern_overflow_lskel__open_and_load(); 11 + if (!ASSERT_NULL(skel, "open_and_load")) 12 + core_kern_overflow_lskel__destroy(skel); 13 + }
+12 -15
tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
··· 26 26 static void test_dummy_init_ret_value(void) 27 27 { 28 28 __u64 args[1] = {0}; 29 - struct bpf_prog_test_run_attr attr = { 30 - .ctx_size_in = sizeof(args), 29 + LIBBPF_OPTS(bpf_test_run_opts, attr, 31 30 .ctx_in = args, 32 - }; 31 + .ctx_size_in = sizeof(args), 32 + ); 33 33 struct dummy_st_ops *skel; 34 34 int fd, err; 35 35 ··· 38 38 return; 39 39 40 40 fd = bpf_program__fd(skel->progs.test_1); 41 - attr.prog_fd = fd; 42 - err = bpf_prog_test_run_xattr(&attr); 41 + err = bpf_prog_test_run_opts(fd, &attr); 43 42 ASSERT_OK(err, "test_run"); 44 43 ASSERT_EQ(attr.retval, 0xf2f3f4f5, "test_ret"); 45 44 ··· 52 53 .val = exp_retval, 53 54 }; 54 55 __u64 args[1] = {(unsigned long)&in_state}; 55 - struct bpf_prog_test_run_attr attr = { 56 - .ctx_size_in = sizeof(args), 56 + LIBBPF_OPTS(bpf_test_run_opts, attr, 57 57 .ctx_in = args, 58 - }; 58 + .ctx_size_in = sizeof(args), 59 + ); 59 60 struct dummy_st_ops *skel; 60 61 int fd, err; 61 62 ··· 64 65 return; 65 66 66 67 fd = bpf_program__fd(skel->progs.test_1); 67 - attr.prog_fd = fd; 68 - err = bpf_prog_test_run_xattr(&attr); 68 + err = bpf_prog_test_run_opts(fd, &attr); 69 69 ASSERT_OK(err, "test_run"); 70 70 ASSERT_EQ(in_state.val, 0x5a, "test_ptr_ret"); 71 71 ASSERT_EQ(attr.retval, exp_retval, "test_ret"); ··· 75 77 static void test_dummy_multiple_args(void) 76 78 { 77 79 __u64 args[5] = {0, -100, 0x8a5f, 'c', 0x1234567887654321ULL}; 78 - struct bpf_prog_test_run_attr attr = { 79 - .ctx_size_in = sizeof(args), 80 + LIBBPF_OPTS(bpf_test_run_opts, attr, 80 81 .ctx_in = args, 81 - }; 82 + .ctx_size_in = sizeof(args), 83 + ); 82 84 struct dummy_st_ops *skel; 83 85 int fd, err; 84 86 size_t i; ··· 89 91 return; 90 92 91 93 fd = bpf_program__fd(skel->progs.test_2); 92 - attr.prog_fd = fd; 93 - err = bpf_prog_test_run_xattr(&attr); 94 + err = bpf_prog_test_run_opts(fd, &attr); 94 95 ASSERT_OK(err, "test_run"); 95 96 for (i = 0; i < ARRAY_SIZE(args); i++) { 96 97 snprintf(name, sizeof(name), "arg %zu", i);
+10 -14
tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
··· 9 9 struct fentry_test_lskel *fentry_skel = NULL; 10 10 struct fexit_test_lskel *fexit_skel = NULL; 11 11 __u64 *fentry_res, *fexit_res; 12 - __u32 duration = 0, retval; 13 12 int err, prog_fd, i; 13 + LIBBPF_OPTS(bpf_test_run_opts, topts); 14 14 15 15 fentry_skel = fentry_test_lskel__open_and_load(); 16 - if (CHECK(!fentry_skel, "fentry_skel_load", "fentry skeleton failed\n")) 16 + if (!ASSERT_OK_PTR(fentry_skel, "fentry_skel_load")) 17 17 goto close_prog; 18 18 fexit_skel = fexit_test_lskel__open_and_load(); 19 - if (CHECK(!fexit_skel, "fexit_skel_load", "fexit skeleton failed\n")) 19 + if (!ASSERT_OK_PTR(fexit_skel, "fexit_skel_load")) 20 20 goto close_prog; 21 21 22 22 err = fentry_test_lskel__attach(fentry_skel); 23 - if (CHECK(err, "fentry_attach", "fentry attach failed: %d\n", err)) 23 + if (!ASSERT_OK(err, "fentry_attach")) 24 24 goto close_prog; 25 25 err = fexit_test_lskel__attach(fexit_skel); 26 - if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err)) 26 + if (!ASSERT_OK(err, "fexit_attach")) 27 27 goto close_prog; 28 28 29 29 prog_fd = fexit_skel->progs.test1.prog_fd; 30 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 31 - NULL, NULL, &retval, &duration); 32 - CHECK(err || retval, "ipv6", 33 - "err %d errno %d retval %d duration %d\n", 34 - err, errno, retval, duration); 30 + err = bpf_prog_test_run_opts(prog_fd, &topts); 31 + ASSERT_OK(err, "ipv6 test_run"); 32 + ASSERT_OK(topts.retval, "ipv6 test retval"); 35 33 36 34 fentry_res = (__u64 *)fentry_skel->bss; 37 35 fexit_res = (__u64 *)fexit_skel->bss; 38 36 printf("%lld\n", fentry_skel->bss->test1_result); 39 37 for (i = 0; i < 8; i++) { 40 - CHECK(fentry_res[i] != 1, "result", 41 - "fentry_test%d failed err %lld\n", i + 1, fentry_res[i]); 42 - CHECK(fexit_res[i] != 1, "result", 43 - "fexit_test%d failed err %lld\n", i + 1, fexit_res[i]); 38 + ASSERT_EQ(fentry_res[i], 1, "fentry result"); 39 + ASSERT_EQ(fexit_res[i], 1, "fexit result"); 44 40 } 45 41 46 42 close_prog:
+3 -4
tools/testing/selftests/bpf/prog_tests/fentry_test.c
··· 6 6 static int fentry_test(struct fentry_test_lskel *fentry_skel) 7 7 { 8 8 int err, prog_fd, i; 9 - __u32 duration = 0, retval; 10 9 int link_fd; 11 10 __u64 *result; 11 + LIBBPF_OPTS(bpf_test_run_opts, topts); 12 12 13 13 err = fentry_test_lskel__attach(fentry_skel); 14 14 if (!ASSERT_OK(err, "fentry_attach")) ··· 20 20 return -1; 21 21 22 22 prog_fd = fentry_skel->progs.test1.prog_fd; 23 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 24 - NULL, NULL, &retval, &duration); 23 + err = bpf_prog_test_run_opts(prog_fd, &topts); 25 24 ASSERT_OK(err, "test_run"); 26 - ASSERT_EQ(retval, 0, "test_run"); 25 + ASSERT_EQ(topts.retval, 0, "test_run"); 27 26 28 27 result = (__u64 *)fentry_skel->bss; 29 28 for (i = 0; i < sizeof(*fentry_skel->bss) / sizeof(__u64); i++) {
+20 -14
tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
··· 58 58 test_cb cb) 59 59 { 60 60 struct bpf_object *obj = NULL, *tgt_obj; 61 - __u32 retval, tgt_prog_id, info_len; 61 + __u32 tgt_prog_id, info_len; 62 62 struct bpf_prog_info prog_info = {}; 63 63 struct bpf_program **prog = NULL, *p; 64 64 struct bpf_link **link = NULL; 65 65 int err, tgt_fd, i; 66 66 struct btf *btf; 67 + LIBBPF_OPTS(bpf_test_run_opts, topts, 68 + .data_in = &pkt_v6, 69 + .data_size_in = sizeof(pkt_v6), 70 + .repeat = 1, 71 + ); 67 72 68 73 err = bpf_prog_test_load(target_obj_file, BPF_PROG_TYPE_UNSPEC, 69 74 &tgt_obj, &tgt_fd); ··· 137 132 &link_info, &info_len); 138 133 ASSERT_OK(err, "link_fd_get_info"); 139 134 ASSERT_EQ(link_info.tracing.attach_type, 140 - bpf_program__get_expected_attach_type(prog[i]), 135 + bpf_program__expected_attach_type(prog[i]), 141 136 "link_attach_type"); 142 137 ASSERT_EQ(link_info.tracing.target_obj_id, tgt_prog_id, "link_tgt_obj_id"); 143 138 ASSERT_EQ(link_info.tracing.target_btf_id, btf_id, "link_tgt_btf_id"); ··· 152 147 if (!run_prog) 153 148 goto close_prog; 154 149 155 - err = bpf_prog_test_run(tgt_fd, 1, &pkt_v6, sizeof(pkt_v6), 156 - NULL, NULL, &retval, NULL); 150 + err = bpf_prog_test_run_opts(tgt_fd, &topts); 157 151 ASSERT_OK(err, "prog_run"); 158 - ASSERT_EQ(retval, 0, "prog_run_ret"); 152 + ASSERT_EQ(topts.retval, 0, "prog_run_ret"); 159 153 160 154 if (check_data_map(obj, prog_cnt, false)) 161 155 goto close_prog; ··· 229 225 const char *tgt_obj_file = "./test_pkt_access.o"; 230 226 struct bpf_program *prog = NULL; 231 227 struct bpf_object *tgt_obj; 232 - __u32 duration = 0, retval; 233 228 struct bpf_link *link; 234 229 int err = 0, tgt_fd; 230 + LIBBPF_OPTS(bpf_test_run_opts, topts, 231 + .data_in = &pkt_v6, 232 + .data_size_in = sizeof(pkt_v6), 233 + .repeat = 1, 234 + ); 235 235 236 236 prog = bpf_object__find_program_by_name(obj, prog_name); 237 - if (CHECK(!prog, "find_prog", "prog %s not found\n", prog_name)) 237 + if (!ASSERT_OK_PTR(prog, "find_prog")) 238 238 return -ENOENT; 239 239 240 240 err = bpf_prog_test_load(tgt_obj_file, BPF_PROG_TYPE_UNSPEC, 241 241 &tgt_obj, &tgt_fd); 242 - if (CHECK(err, "second_prog_load", "file %s err %d errno %d\n", 243 - tgt_obj_file, err, errno)) 242 + if (!ASSERT_OK(err, "second_prog_load")) 244 243 return err; 245 244 246 245 link = bpf_program__attach_freplace(prog, tgt_fd, tgt_name); 247 246 if (!ASSERT_OK_PTR(link, "second_link")) 248 247 goto out; 249 248 250 - err = bpf_prog_test_run(tgt_fd, 1, &pkt_v6, sizeof(pkt_v6), 251 - NULL, NULL, &retval, &duration); 252 - if (CHECK(err || retval, "ipv6", 253 - "err %d errno %d retval %d duration %d\n", 254 - err, errno, retval, duration)) 249 + err = bpf_prog_test_run_opts(tgt_fd, &topts); 250 + if (!ASSERT_OK(err, "ipv6 test_run")) 251 + goto out; 252 + if (!ASSERT_OK(topts.retval, "ipv6 retval")) 255 253 goto out; 256 254 257 255 err = check_data_map(obj, 1, true);
+11 -11
tools/testing/selftests/bpf/prog_tests/fexit_stress.c
··· 10 10 char test_skb[128] = {}; 11 11 int fexit_fd[CNT] = {}; 12 12 int link_fd[CNT] = {}; 13 - __u32 duration = 0; 14 13 char error[4096]; 15 - __u32 prog_ret; 16 14 int err, i, filter_fd; 17 15 18 16 const struct bpf_insn trace_program[] = { ··· 34 36 .log_size = sizeof(error), 35 37 ); 36 38 39 + LIBBPF_OPTS(bpf_test_run_opts, topts, 40 + .data_in = test_skb, 41 + .data_size_in = sizeof(test_skb), 42 + .repeat = 1, 43 + ); 44 + 37 45 err = libbpf_find_vmlinux_btf_id("bpf_fentry_test1", 38 46 trace_opts.expected_attach_type); 39 - if (CHECK(err <= 0, "find_vmlinux_btf_id", "failed: %d\n", err)) 47 + if (!ASSERT_GT(err, 0, "find_vmlinux_btf_id")) 40 48 goto out; 41 49 trace_opts.attach_btf_id = err; 42 50 ··· 51 47 trace_program, 52 48 sizeof(trace_program) / sizeof(struct bpf_insn), 53 49 &trace_opts); 54 - if (CHECK(fexit_fd[i] < 0, "fexit loaded", 55 - "failed: %d errno %d\n", fexit_fd[i], errno)) 50 + if (!ASSERT_GE(fexit_fd[i], 0, "fexit load")) 56 51 goto out; 57 52 link_fd[i] = bpf_raw_tracepoint_open(NULL, fexit_fd[i]); 58 - if (CHECK(link_fd[i] < 0, "fexit attach failed", 59 - "prog %d failed: %d err %d\n", i, link_fd[i], errno)) 53 + if (!ASSERT_GE(link_fd[i], 0, "fexit attach")) 60 54 goto out; 61 55 } 62 56 63 57 filter_fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", 64 58 skb_program, sizeof(skb_program) / sizeof(struct bpf_insn), 65 59 &skb_opts); 66 - if (CHECK(filter_fd < 0, "test_program_loaded", "failed: %d errno %d\n", 67 - filter_fd, errno)) 60 + if (!ASSERT_GE(filter_fd, 0, "test_program_loaded")) 68 61 goto out; 69 62 70 - err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0, 71 - 0, &prog_ret, 0); 63 + err = bpf_prog_test_run_opts(filter_fd, &topts); 72 64 close(filter_fd); 73 65 CHECK_FAIL(err); 74 66 out:
+3 -4
tools/testing/selftests/bpf/prog_tests/fexit_test.c
··· 6 6 static int fexit_test(struct fexit_test_lskel *fexit_skel) 7 7 { 8 8 int err, prog_fd, i; 9 - __u32 duration = 0, retval; 10 9 int link_fd; 11 10 __u64 *result; 11 + LIBBPF_OPTS(bpf_test_run_opts, topts); 12 12 13 13 err = fexit_test_lskel__attach(fexit_skel); 14 14 if (!ASSERT_OK(err, "fexit_attach")) ··· 20 20 return -1; 21 21 22 22 prog_fd = fexit_skel->progs.test1.prog_fd; 23 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 24 - NULL, NULL, &retval, &duration); 23 + err = bpf_prog_test_run_opts(prog_fd, &topts); 25 24 ASSERT_OK(err, "test_run"); 26 - ASSERT_EQ(retval, 0, "test_run"); 25 + ASSERT_EQ(topts.retval, 0, "test_run"); 27 26 28 27 result = (__u64 *)fexit_skel->bss; 29 28 for (i = 0; i < sizeof(*fexit_skel->bss) / sizeof(__u64); i++) {
+14 -17
tools/testing/selftests/bpf/prog_tests/flow_dissector.c
··· 13 13 #endif 14 14 15 15 #define CHECK_FLOW_KEYS(desc, got, expected) \ 16 - CHECK_ATTR(memcmp(&got, &expected, sizeof(got)) != 0, \ 16 + _CHECK(memcmp(&got, &expected, sizeof(got)) != 0, \ 17 17 desc, \ 18 + topts.duration, \ 18 19 "nhoff=%u/%u " \ 19 20 "thoff=%u/%u " \ 20 21 "addr_proto=0x%x/0x%x " \ ··· 488 487 /* Keep in sync with 'flags' from eth_get_headlen. */ 489 488 __u32 eth_get_headlen_flags = 490 489 BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG; 491 - struct bpf_prog_test_run_attr tattr = {}; 490 + LIBBPF_OPTS(bpf_test_run_opts, topts); 492 491 struct bpf_flow_keys flow_keys = {}; 493 492 __u32 key = (__u32)(tests[i].keys.sport) << 16 | 494 493 tests[i].keys.dport; ··· 504 503 CHECK(err < 0, "tx_tap", "err %d errno %d\n", err, errno); 505 504 506 505 err = bpf_map_lookup_elem(keys_fd, &key, &flow_keys); 507 - CHECK_ATTR(err, tests[i].name, "bpf_map_lookup_elem %d\n", err); 506 + ASSERT_OK(err, "bpf_map_lookup_elem"); 508 507 509 - CHECK_ATTR(err, tests[i].name, "skb-less err %d\n", err); 510 508 CHECK_FLOW_KEYS(tests[i].name, flow_keys, tests[i].keys); 511 509 512 510 err = bpf_map_delete_elem(keys_fd, &key); 513 - CHECK_ATTR(err, tests[i].name, "bpf_map_delete_elem %d\n", err); 511 + ASSERT_OK(err, "bpf_map_delete_elem"); 514 512 } 515 513 } ··· 573 573 574 574 for (i = 0; i < ARRAY_SIZE(tests); i++) { 575 575 struct bpf_flow_keys flow_keys; 576 - struct bpf_prog_test_run_attr tattr = { 577 - .prog_fd = prog_fd, 576 + LIBBPF_OPTS(bpf_test_run_opts, topts, 578 577 .data_in = &tests[i].pkt, 579 578 .data_size_in = sizeof(tests[i].pkt), 580 579 .data_out = &flow_keys, 581 - }; 580 + ); 582 581 static struct bpf_flow_keys ctx = {}; 583 582 584 583 if (tests[i].flags) { 585 - tattr.ctx_in = &ctx; 586 - tattr.ctx_size_in = sizeof(ctx); 584 + topts.ctx_in = &ctx; 585 + topts.ctx_size_in = sizeof(ctx); 587 586 ctx.flags = tests[i].flags; 588 587 } 589 588 590 - err = bpf_prog_test_run_xattr(&tattr); 591 - CHECK_ATTR(tattr.data_size_out != sizeof(flow_keys) || 592 - err || tattr.retval != 1, 593 - tests[i].name, 594 - "err %d errno %d retval %d duration %d size %u/%zu\n", 595 - err, errno, tattr.retval, tattr.duration, 596 - tattr.data_size_out, sizeof(flow_keys)); 589 + err = bpf_prog_test_run_opts(prog_fd, &topts); 590 + ASSERT_OK(err, "test_run"); 591 + ASSERT_EQ(topts.retval, 1, "test_run retval"); 592 + ASSERT_EQ(topts.data_size_out, sizeof(flow_keys), 593 + "test_run data_size_out"); 597 594 CHECK_FLOW_KEYS(tests[i].name, flow_keys, tests[i].keys); 598 595
+13 -11
tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
··· 5 5 void serial_test_flow_dissector_load_bytes(void) 6 6 { 7 7 struct bpf_flow_keys flow_keys; 8 - __u32 duration = 0, retval, size; 9 8 struct bpf_insn prog[] = { 10 9 // BPF_REG_1 - 1st argument: context 11 10 // BPF_REG_2 - 2nd argument: offset, start at first byte ··· 26 27 BPF_EXIT_INSN(), 27 28 }; 28 29 int fd, err; 30 + LIBBPF_OPTS(bpf_test_run_opts, topts, 31 + .data_in = &pkt_v4, 32 + .data_size_in = sizeof(pkt_v4), 33 + .data_out = &flow_keys, 34 + .data_size_out = sizeof(flow_keys), 35 + .repeat = 1, 36 + ); 29 37 30 38 /* make sure bpf_skb_load_bytes is not allowed from skb-less context 31 39 */ 32 40 fd = bpf_test_load_program(BPF_PROG_TYPE_FLOW_DISSECTOR, prog, 33 41 ARRAY_SIZE(prog), "GPL", 0, NULL, 0); 34 - CHECK(fd < 0, 35 - "flow_dissector-bpf_skb_load_bytes-load", 36 - "fd %d errno %d\n", 37 - fd, errno); 42 + ASSERT_GE(fd, 0, "bpf_test_load_program good fd"); 38 43 39 - err = bpf_prog_test_run(fd, 1, &pkt_v4, sizeof(pkt_v4), 40 - &flow_keys, &size, &retval, &duration); 41 - CHECK(size != sizeof(flow_keys) || err || retval != 1, 42 - "flow_dissector-bpf_skb_load_bytes", 43 - "err %d errno %d retval %d duration %d size %u/%zu\n", 44 - err, errno, retval, duration, size, sizeof(flow_keys)); 44 + err = bpf_prog_test_run_opts(fd, &topts); 45 + ASSERT_OK(err, "test_run"); 46 + ASSERT_EQ(topts.data_size_out, sizeof(flow_keys), 47 + "test_run data_size_out"); 48 + ASSERT_EQ(topts.retval, 1, "test_run retval"); 45 49 46 50 if (fd >= -1) 47 51 close(fd);
+20 -12
tools/testing/selftests/bpf/prog_tests/for_each.c
··· 12 12 int i, err, hashmap_fd, max_entries, percpu_map_fd; 13 13 struct for_each_hash_map_elem *skel; 14 14 __u64 *percpu_valbuf = NULL; 15 - __u32 key, num_cpus, retval; 15 + __u32 key, num_cpus; 16 16 __u64 val; 17 + LIBBPF_OPTS(bpf_test_run_opts, topts, 18 + .data_in = &pkt_v4, 19 + .data_size_in = sizeof(pkt_v4), 20 + .repeat = 1, 21 + ); 17 22 18 23 skel = for_each_hash_map_elem__open_and_load(); 19 24 if (!ASSERT_OK_PTR(skel, "for_each_hash_map_elem__open_and_load")) ··· 47 42 if (!ASSERT_OK(err, "percpu_map_update")) 48 43 goto out; 49 44 50 - err = bpf_prog_test_run(bpf_program__fd(skel->progs.test_pkt_access), 51 - 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL, 52 - &retval, &duration); 53 - if (CHECK(err || retval, "ipv4", "err %d errno %d retval %d\n", 54 - err, errno, retval)) 45 + err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts); 46 + duration = topts.duration; 47 + if (CHECK(err || topts.retval, "ipv4", "err %d errno %d retval %d\n", 48 + err, errno, topts.retval)) 55 49 goto out; 56 50 57 51 ASSERT_EQ(skel->bss->hashmap_output, 4, "hashmap_output"); ··· 73 69 74 70 static void test_array_map(void) 75 71 { 76 - __u32 key, num_cpus, max_entries, retval; 72 + __u32 key, num_cpus, max_entries; 77 73 int i, arraymap_fd, percpu_map_fd, err; 78 74 struct for_each_array_map_elem *skel; 79 75 __u64 *percpu_valbuf = NULL; 80 76 __u64 val, expected_total; 77 + LIBBPF_OPTS(bpf_test_run_opts, topts, 78 + .data_in = &pkt_v4, 79 + .data_size_in = sizeof(pkt_v4), 80 + .repeat = 1, 81 + ); 81 82 82 83 skel = for_each_array_map_elem__open_and_load(); 83 84 if (!ASSERT_OK_PTR(skel, "for_each_array_map_elem__open_and_load")) ··· 115 106 if (!ASSERT_OK(err, "percpu_map_update")) 116 107 goto out; 117 108 118 - err = bpf_prog_test_run(bpf_program__fd(skel->progs.test_pkt_access), 119 - 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL, 120 - &retval, &duration); 121 - if (CHECK(err || retval, "ipv4", "err %d errno %d retval %d\n", 122 - err, errno, retval)) 109 + err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts); 110 + duration = topts.duration; 111 + if (CHECK(err || topts.retval, "ipv4", "err %d errno %d retval %d\n", 112 + err, errno, topts.retval)) 123 113 goto out; 124 114 125 115 ASSERT_EQ(skel->bss->arraymap_output, expected_total, "array_output");
+5 -7
tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
··· 5 5 void test_get_func_args_test(void) 6 6 { 7 7 struct get_func_args_test *skel = NULL; 8 - __u32 duration = 0, retval; 9 8 int err, prog_fd; 9 + LIBBPF_OPTS(bpf_test_run_opts, topts); 10 10 11 11 skel = get_func_args_test__open_and_load(); 12 12 if (!ASSERT_OK_PTR(skel, "get_func_args_test__open_and_load")) ··· 20 20 * fentry/fexit programs. 21 21 */ 22 22 prog_fd = bpf_program__fd(skel->progs.test1); 23 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 24 - NULL, NULL, &retval, &duration); 23 + err = bpf_prog_test_run_opts(prog_fd, &topts); 25 24 ASSERT_OK(err, "test_run"); 26 - ASSERT_EQ(retval, 0, "test_run"); 25 + ASSERT_EQ(topts.retval, 0, "test_run"); 27 26 28 27 /* This runs bpf_modify_return_test function and triggers 29 28 * fmod_ret_test and fexit_test programs. 30 29 */ 31 30 prog_fd = bpf_program__fd(skel->progs.fmod_ret_test); 32 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 33 - NULL, NULL, &retval, &duration); 31 + err = bpf_prog_test_run_opts(prog_fd, &topts); 34 32 ASSERT_OK(err, "test_run"); 35 - ASSERT_EQ(retval, 1234, "test_run"); 33 + ASSERT_EQ(topts.retval, 1234, "test_run"); 36 34 37 35 ASSERT_EQ(skel->bss->test1_result, 1, "test1_result"); 38 36 ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+4 -6
tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
··· 5 5 void test_get_func_ip_test(void) 6 6 { 7 7 struct get_func_ip_test *skel = NULL; 8 - __u32 duration = 0, retval; 9 8 int err, prog_fd; 9 + LIBBPF_OPTS(bpf_test_run_opts, topts); 10 10 11 11 skel = get_func_ip_test__open(); 12 12 if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open")) ··· 29 29 goto cleanup; 30 30 31 31 prog_fd = bpf_program__fd(skel->progs.test1); 32 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 33 - NULL, NULL, &retval, &duration); 32 + err = bpf_prog_test_run_opts(prog_fd, &topts); 34 33 ASSERT_OK(err, "test_run"); 35 - ASSERT_EQ(retval, 0, "test_run"); 34 + ASSERT_EQ(topts.retval, 0, "test_run"); 36 35 37 36 prog_fd = bpf_program__fd(skel->progs.test5); 38 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 39 - NULL, NULL, &retval, &duration); 37 + err = bpf_prog_test_run_opts(prog_fd, &topts); 40 38 41 39 ASSERT_OK(err, "test_run"); 42 40
+1 -1
tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
··· 27 27 return; 28 28 29 29 /* override program type */ 30 - bpf_program__set_perf_event(skel->progs.oncpu); 30 + bpf_program__set_type(skel->progs.oncpu, BPF_PROG_TYPE_PERF_EVENT); 31 31 32 32 err = test_stacktrace_build_id__load(skel); 33 33 if (CHECK(err, "skel_load", "skeleton load failed: %d\n", err))
+13 -11
tools/testing/selftests/bpf/prog_tests/global_data.c
··· 132 132 void test_global_data(void) 133 133 { 134 134 const char *file = "./test_global_data.o"; 135 - __u32 duration = 0, retval; 136 135 struct bpf_object *obj; 137 136 int err, prog_fd; 137 + LIBBPF_OPTS(bpf_test_run_opts, topts, 138 + .data_in = &pkt_v4, 139 + .data_size_in = sizeof(pkt_v4), 140 + .repeat = 1, 141 + ); 138 142 139 143 err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd); 140 - if (CHECK(err, "load program", "error %d loading %s\n", err, file)) 144 + if (!ASSERT_OK(err, "load program")) 141 145 return; 142 146 143 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), 144 - NULL, NULL, &retval, &duration); 145 - CHECK(err || retval, "pass global data run", 146 - "err %d errno %d retval %d duration %d\n", 147 - err, errno, retval, duration); 147 + err = bpf_prog_test_run_opts(prog_fd, &topts); 148 + ASSERT_OK(err, "pass global data run err"); 149 + ASSERT_OK(topts.retval, "pass global data run retval"); 148 150 149 - test_global_data_number(obj, duration); 150 - test_global_data_string(obj, duration); 151 - test_global_data_struct(obj, duration); 152 - test_global_data_rdonly(obj, duration); 151 + test_global_data_number(obj, topts.duration); 152 + test_global_data_string(obj, topts.duration); 153 + test_global_data_struct(obj, topts.duration); 154 + test_global_data_rdonly(obj, topts.duration); 153 155 154 156 bpf_object__close(obj); 155 157 }
+8 -6
tools/testing/selftests/bpf/prog_tests/global_func_args.c
···
 void test_global_func_args(void)
 {
 	const char *file = "./test_global_func_args.o";
-	__u32 retval;
 	struct bpf_object *obj;
 	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load(file, BPF_PROG_TYPE_CGROUP_SKB, &obj, &prog_fd);
 	if (CHECK(err, "load program", "error %d loading %s\n", err, file))
 		return;
 
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, &retval, &duration);
-	CHECK(err || retval, "pass global func args run",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, retval, duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_OK(topts.retval, "test_run retval");
 
 	test_global_func_args0(obj);
 
+7 -9
tools/testing/selftests/bpf/prog_tests/kfree_skb.c
···
 void serial_test_kfree_skb(void)
 {
 	struct __sk_buff skb = {};
-	struct bpf_prog_test_run_attr tattr = {
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
 		.data_in = &pkt_v6,
 		.data_size_in = sizeof(pkt_v6),
 		.ctx_in = &skb,
 		.ctx_size_in = sizeof(skb),
-	};
+	);
 	struct kfree_skb *skel = NULL;
 	struct bpf_link *link;
 	struct bpf_object *obj;
 	struct perf_buffer *pb = NULL;
-	int err;
+	int err, prog_fd;
 	bool passed = false;
 	__u32 duration = 0;
 	const int zero = 0;
 	bool test_ok[2];
 
 	err = bpf_prog_test_load("./test_pkt_access.o", BPF_PROG_TYPE_SCHED_CLS,
-				 &obj, &tattr.prog_fd);
+				 &obj, &prog_fd);
 	if (CHECK(err, "prog_load sched cls", "err %d errno %d\n", err, errno))
 		return;
 
···
 		goto close_prog;
 
 	memcpy(skb.cb, &cb, sizeof(cb));
-	err = bpf_prog_test_run_xattr(&tattr);
-	duration = tattr.duration;
-	CHECK(err || tattr.retval, "ipv6",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, tattr.retval, duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "ipv6 test_run");
+	ASSERT_OK(topts.retval, "ipv6 test_run retval");
 
 	/* read perf buffer */
 	err = perf_buffer__poll(pb, 100);
+28 -18
tools/testing/selftests/bpf/prog_tests/kfunc_call.c
···
 static void test_main(void)
 {
 	struct kfunc_call_test_lskel *skel;
-	int prog_fd, retval, err;
+	int prog_fd, err;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	skel = kfunc_call_test_lskel__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "skel"))
 		return;
 
 	prog_fd = skel->progs.kfunc_call_test1.prog_fd;
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "bpf_prog_test_run(test1)");
-	ASSERT_EQ(retval, 12, "test1-retval");
+	ASSERT_EQ(topts.retval, 12, "test1-retval");
 
 	prog_fd = skel->progs.kfunc_call_test2.prog_fd;
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "bpf_prog_test_run(test2)");
-	ASSERT_EQ(retval, 3, "test2-retval");
+	ASSERT_EQ(topts.retval, 3, "test2-retval");
 
 	prog_fd = skel->progs.kfunc_call_test_ref_btf_id.prog_fd;
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "bpf_prog_test_run(test_ref_btf_id)");
-	ASSERT_EQ(retval, 0, "test_ref_btf_id-retval");
+	ASSERT_EQ(topts.retval, 0, "test_ref_btf_id-retval");
 
 	kfunc_call_test_lskel__destroy(skel);
 }
···
 static void test_subprog(void)
 {
 	struct kfunc_call_test_subprog *skel;
-	int prog_fd, retval, err;
+	int prog_fd, err;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	skel = kfunc_call_test_subprog__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "skel"))
 		return;
 
 	prog_fd = bpf_program__fd(skel->progs.kfunc_call_test1);
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "bpf_prog_test_run(test1)");
-	ASSERT_EQ(retval, 10, "test1-retval");
+	ASSERT_EQ(topts.retval, 10, "test1-retval");
 	ASSERT_NEQ(skel->data->active_res, -1, "active_res");
 	ASSERT_EQ(skel->data->sk_state_res, BPF_TCP_CLOSE, "sk_state_res");
 
···
 static void test_subprog_lskel(void)
 {
 	struct kfunc_call_test_subprog_lskel *skel;
-	int prog_fd, retval, err;
+	int prog_fd, err;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	skel = kfunc_call_test_subprog_lskel__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "skel"))
 		return;
 
 	prog_fd = skel->progs.kfunc_call_test1.prog_fd;
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "bpf_prog_test_run(test1)");
-	ASSERT_EQ(retval, 10, "test1-retval");
+	ASSERT_EQ(topts.retval, 10, "test1-retval");
 	ASSERT_NEQ(skel->data->active_res, -1, "active_res");
 	ASSERT_EQ(skel->data->sk_state_res, BPF_TCP_CLOSE, "sk_state_res");
 
+17 -10
tools/testing/selftests/bpf/prog_tests/ksyms_module.c
···
 #include "test_ksyms_module.lskel.h"
 #include "test_ksyms_module.skel.h"
 
-void test_ksyms_module_lskel(void)
+static void test_ksyms_module_lskel(void)
 {
 	struct test_ksyms_module_lskel *skel;
-	int retval;
 	int err;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	if (!env.has_testmod) {
 		test__skip();
···
 	skel = test_ksyms_module_lskel__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "test_ksyms_module_lskel__open_and_load"))
 		return;
-	err = bpf_prog_test_run(skel->progs.load.prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(skel->progs.load.prog_fd, &topts);
 	if (!ASSERT_OK(err, "bpf_prog_test_run"))
 		goto cleanup;
-	ASSERT_EQ(retval, 0, "retval");
+	ASSERT_EQ(topts.retval, 0, "retval");
 	ASSERT_EQ(skel->bss->out_bpf_testmod_ksym, 42, "bpf_testmod_ksym");
 cleanup:
 	test_ksyms_module_lskel__destroy(skel);
 }
 
-void test_ksyms_module_libbpf(void)
+static void test_ksyms_module_libbpf(void)
 {
 	struct test_ksyms_module *skel;
-	int retval, err;
+	int err;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	if (!env.has_testmod) {
 		test__skip();
···
 	skel = test_ksyms_module__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "test_ksyms_module__open"))
 		return;
-	err = bpf_prog_test_run(bpf_program__fd(skel->progs.load), 1, &pkt_v4,
-				sizeof(pkt_v4), NULL, NULL, (__u32 *)&retval, NULL);
+	err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.load), &topts);
 	if (!ASSERT_OK(err, "bpf_prog_test_run"))
 		goto cleanup;
-	ASSERT_EQ(retval, 0, "retval");
+	ASSERT_EQ(topts.retval, 0, "retval");
 	ASSERT_EQ(skel->bss->out_bpf_testmod_ksym, 42, "bpf_testmod_ksym");
 cleanup:
 	test_ksyms_module__destroy(skel);
+22 -13
tools/testing/selftests/bpf/prog_tests/l4lb_all.c
···
 		__u8 flags;
 	} real_def = {.dst = MAGIC_VAL};
 	__u32 ch_key = 11, real_num = 3;
-	__u32 duration, retval, size;
 	int err, i, prog_fd, map_fd;
 	__u64 bytes = 0, pkts = 0;
 	struct bpf_object *obj;
 	char buf[128];
 	u32 *magic = (u32 *)buf;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_out = buf,
+		.data_size_out = sizeof(buf),
+		.repeat = NUM_ITER,
+	);
 
 	err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
 	if (CHECK_FAIL(err))
···
 		goto out;
 	bpf_map_update_elem(map_fd, &real_num, &real_def, 0);
 
-	err = bpf_prog_test_run(prog_fd, NUM_ITER, &pkt_v4, sizeof(pkt_v4),
-				buf, &size, &retval, &duration);
-	CHECK(err || retval != 7/*TC_ACT_REDIRECT*/ || size != 54 ||
-	      *magic != MAGIC_VAL, "ipv4",
-	      "err %d errno %d retval %d size %d magic %x\n",
-	      err, errno, retval, size, *magic);
+	topts.data_in = &pkt_v4;
+	topts.data_size_in = sizeof(pkt_v4);
 
-	err = bpf_prog_test_run(prog_fd, NUM_ITER, &pkt_v6, sizeof(pkt_v6),
-				buf, &size, &retval, &duration);
-	CHECK(err || retval != 7/*TC_ACT_REDIRECT*/ || size != 74 ||
-	      *magic != MAGIC_VAL, "ipv6",
-	      "err %d errno %d retval %d size %d magic %x\n",
-	      err, errno, retval, size, *magic);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 7 /*TC_ACT_REDIRECT*/, "ipv4 test_run retval");
+	ASSERT_EQ(topts.data_size_out, 54, "ipv4 test_run data_size_out");
+	ASSERT_EQ(*magic, MAGIC_VAL, "ipv4 magic");
+
+	topts.data_in = &pkt_v6;
+	topts.data_size_in = sizeof(pkt_v6);
+	topts.data_size_out = sizeof(buf); /* reset out size */
+
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 7 /*TC_ACT_REDIRECT*/, "ipv6 test_run retval");
+	ASSERT_EQ(topts.data_size_out, 74, "ipv6 test_run data_size_out");
+	ASSERT_EQ(*magic, MAGIC_VAL, "ipv6 magic");
 
 	map_fd = bpf_find_map(__func__, obj, "stats");
 	if (map_fd < 0)
+1 -1
tools/testing/selftests/bpf/prog_tests/log_buf.c
···
 	const void *raw_btf_data;
 	__u32 raw_btf_size;
 	struct btf *btf;
-	char *log_buf;
+	char *log_buf = NULL;
 	int fd = -1;
 
 	btf = btf__new_empty();
+9 -6
tools/testing/selftests/bpf/prog_tests/map_lock.c
···
 
 static void *spin_lock_thread(void *arg)
 {
-	__u32 duration, retval;
 	int err, prog_fd = *(u32 *) arg;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 10000,
+	);
 
-	err = bpf_prog_test_run(prog_fd, 10000, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, &retval, &duration);
-	CHECK(err || retval, "",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, retval, duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run_opts err");
+	ASSERT_OK(topts.retval, "test_run_opts retval");
+
 	pthread_exit(arg);
 }
 
+10 -6
tools/testing/selftests/bpf/prog_tests/map_ptr.c
···
 void test_map_ptr(void)
 {
 	struct map_ptr_kern_lskel *skel;
-	__u32 duration = 0, retval;
 	char buf[128];
 	int err;
 	int page_size = getpagesize();
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.data_out = buf,
+		.data_size_out = sizeof(buf),
+		.repeat = 1,
+	);
 
 	skel = map_ptr_kern_lskel__open();
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
···
 
 	skel->bss->page_size = page_size;
 
-	err = bpf_prog_test_run(skel->progs.cg_skb.prog_fd, 1, &pkt_v4,
-				sizeof(pkt_v4), buf, NULL, &retval, NULL);
+	err = bpf_prog_test_run_opts(skel->progs.cg_skb.prog_fd, &topts);
 
-	if (CHECK(err, "test_run", "err=%d errno=%d\n", err, errno))
+	if (!ASSERT_OK(err, "test_run"))
 		goto cleanup;
 
-	if (CHECK(!retval, "retval", "retval=%d map_type=%u line=%u\n", retval,
-		  skel->bss->g_map_type, skel->bss->g_line))
+	if (!ASSERT_NEQ(topts.retval, 0, "test_run retval"))
 		goto cleanup;
 
 cleanup:
+12 -21
tools/testing/selftests/bpf/prog_tests/modify_return.c
···
 {
 	struct modify_return *skel = NULL;
 	int err, prog_fd;
-	__u32 duration = 0, retval;
 	__u16 side_effect;
 	__s16 ret;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
 
 	skel = modify_return__open_and_load();
-	if (CHECK(!skel, "skel_load", "modify_return skeleton failed\n"))
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
 		goto cleanup;
 
 	err = modify_return__attach(skel);
-	if (CHECK(err, "modify_return", "attach failed: %d\n", err))
+	if (!ASSERT_OK(err, "modify_return__attach failed"))
 		goto cleanup;
 
 	skel->bss->input_retval = input_retval;
 	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
-	err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, 0,
-				&retval, &duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
 
-	CHECK(err, "test_run", "err %d errno %d\n", err, errno);
+	side_effect = UPPER(topts.retval);
+	ret = LOWER(topts.retval);
 
-	side_effect = UPPER(retval);
-	ret = LOWER(retval);
-
-	CHECK(ret != want_ret, "test_run",
-	      "unexpected ret: %d, expected: %d\n", ret, want_ret);
-	CHECK(side_effect != want_side_effect, "modify_return",
-	      "unexpected side_effect: %d\n", side_effect);
-
-	CHECK(skel->bss->fentry_result != 1, "modify_return",
-	      "fentry failed\n");
-	CHECK(skel->bss->fexit_result != 1, "modify_return",
-	      "fexit failed\n");
-	CHECK(skel->bss->fmod_ret_result != 1, "modify_return",
-	      "fmod_ret failed\n");
+	ASSERT_EQ(ret, want_ret, "test_run ret");
+	ASSERT_EQ(side_effect, want_side_effect, "modify_return side_effect");
+	ASSERT_EQ(skel->bss->fentry_result, 1, "modify_return fentry_result");
+	ASSERT_EQ(skel->bss->fexit_result, 1, "modify_return fexit_result");
+	ASSERT_EQ(skel->bss->fmod_ret_result, 1, "modify_return fmod_ret_result");
 
 cleanup:
 	modify_return__destroy(skel);
···
 		 0 /* want_side_effect */,
 		 -EINVAL /* want_ret */);
 }
-
+15 -11
tools/testing/selftests/bpf/prog_tests/pkt_access.c
···
 {
 	const char *file = "./test_pkt_access.o";
 	struct bpf_object *obj;
-	__u32 duration, retval;
 	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 100000,
+	);
 
 	err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
 	if (CHECK_FAIL(err))
 		return;
 
-	err = bpf_prog_test_run(prog_fd, 100000, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, &retval, &duration);
-	CHECK(err || retval, "ipv4",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, retval, duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "ipv4 test_run_opts err");
+	ASSERT_OK(topts.retval, "ipv4 test_run_opts retval");
 
-	err = bpf_prog_test_run(prog_fd, 100000, &pkt_v6, sizeof(pkt_v6),
-				NULL, NULL, &retval, &duration);
-	CHECK(err || retval, "ipv6",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, retval, duration);
+	topts.data_in = &pkt_v6;
+	topts.data_size_in = sizeof(pkt_v6);
+	topts.data_size_out = 0; /* reset from last call */
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "ipv6 test_run_opts err");
+	ASSERT_OK(topts.retval, "ipv6 test_run_opts retval");
+
 	bpf_object__close(obj);
 }
+8 -6
tools/testing/selftests/bpf/prog_tests/pkt_md_access.c
···
 {
 	const char *file = "./test_pkt_md_access.o";
 	struct bpf_object *obj;
-	__u32 duration, retval;
 	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 10,
+	);
 
 	err = bpf_prog_test_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
 	if (CHECK_FAIL(err))
 		return;
 
-	err = bpf_prog_test_run(prog_fd, 10, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, &retval, &duration);
-	CHECK(err || retval, "",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, retval, duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run_opts err");
+	ASSERT_OK(topts.retval, "test_run_opts retval");
 
 	bpf_object__close(obj);
 }
+77
tools/testing/selftests/bpf/prog_tests/prog_run_opts.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "test_pkt_access.skel.h"
+
+static const __u32 duration;
+
+static void check_run_cnt(int prog_fd, __u64 run_cnt)
+{
+	struct bpf_prog_info info = {};
+	__u32 info_len = sizeof(info);
+	int err;
+
+	err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
+	if (CHECK(err, "get_prog_info", "failed to get bpf_prog_info for fd %d\n", prog_fd))
+		return;
+
+	CHECK(run_cnt != info.run_cnt, "run_cnt",
+	      "incorrect number of repetitions, want %llu have %llu\n", run_cnt, info.run_cnt);
+}
+
+void test_prog_run_opts(void)
+{
+	struct test_pkt_access *skel;
+	int err, stats_fd = -1, prog_fd;
+	char buf[10] = {};
+	__u64 run_cnt = 0;
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.repeat = 1,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.data_out = buf,
+		.data_size_out = 5,
+	);
+
+	stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
+	if (!ASSERT_GE(stats_fd, 0, "enable_stats good fd"))
+		return;
+
+	skel = test_pkt_access__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open_and_load"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test_pkt_access);
+
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_EQ(errno, ENOSPC, "test_run errno");
+	ASSERT_ERR(err, "test_run");
+	ASSERT_OK(topts.retval, "test_run retval");
+
+	ASSERT_EQ(topts.data_size_out, sizeof(pkt_v4), "test_run data_size_out");
+	ASSERT_EQ(buf[5], 0, "overflow, BPF_PROG_TEST_RUN ignored size hint");
+
+	run_cnt += topts.repeat;
+	check_run_cnt(prog_fd, run_cnt);
+
+	topts.data_out = NULL;
+	topts.data_size_out = 0;
+	topts.repeat = 2;
+	errno = 0;
+
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(errno, "run_no_output errno");
+	ASSERT_OK(err, "run_no_output err");
+	ASSERT_OK(topts.retval, "run_no_output retval");
+
+	run_cnt += topts.repeat;
+	check_run_cnt(prog_fd, run_cnt);
+
+cleanup:
+	if (skel)
+		test_pkt_access__destroy(skel);
+	if (stats_fd >= 0)
+		close(stats_fd);
+}
-83
tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
···
-// SPDX-License-Identifier: GPL-2.0
-#include <test_progs.h>
-#include <network_helpers.h>
-
-#include "test_pkt_access.skel.h"
-
-static const __u32 duration;
-
-static void check_run_cnt(int prog_fd, __u64 run_cnt)
-{
-	struct bpf_prog_info info = {};
-	__u32 info_len = sizeof(info);
-	int err;
-
-	err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
-	if (CHECK(err, "get_prog_info", "failed to get bpf_prog_info for fd %d\n", prog_fd))
-		return;
-
-	CHECK(run_cnt != info.run_cnt, "run_cnt",
-	      "incorrect number of repetitions, want %llu have %llu\n", run_cnt, info.run_cnt);
-}
-
-void test_prog_run_xattr(void)
-{
-	struct test_pkt_access *skel;
-	int err, stats_fd = -1;
-	char buf[10] = {};
-	__u64 run_cnt = 0;
-
-	struct bpf_prog_test_run_attr tattr = {
-		.repeat = 1,
-		.data_in = &pkt_v4,
-		.data_size_in = sizeof(pkt_v4),
-		.data_out = buf,
-		.data_size_out = 5,
-	};
-
-	stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
-	if (CHECK_ATTR(stats_fd < 0, "enable_stats", "failed %d\n", errno))
-		return;
-
-	skel = test_pkt_access__open_and_load();
-	if (CHECK_ATTR(!skel, "open_and_load", "failed\n"))
-		goto cleanup;
-
-	tattr.prog_fd = bpf_program__fd(skel->progs.test_pkt_access);
-
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err >= 0 || errno != ENOSPC || tattr.retval, "run",
-		   "err %d errno %d retval %d\n", err, errno, tattr.retval);
-
-	CHECK_ATTR(tattr.data_size_out != sizeof(pkt_v4), "data_size_out",
-		   "incorrect output size, want %zu have %u\n",
-		   sizeof(pkt_v4), tattr.data_size_out);
-
-	CHECK_ATTR(buf[5] != 0, "overflow",
-		   "BPF_PROG_TEST_RUN ignored size hint\n");
-
-	run_cnt += tattr.repeat;
-	check_run_cnt(tattr.prog_fd, run_cnt);
-
-	tattr.data_out = NULL;
-	tattr.data_size_out = 0;
-	tattr.repeat = 2;
-	errno = 0;
-
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err || errno || tattr.retval, "run_no_output",
-		   "err %d errno %d retval %d\n", err, errno, tattr.retval);
-
-	tattr.data_size_out = 1;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err != -EINVAL, "run_wrong_size_out", "err %d\n", err);
-
-	run_cnt += tattr.repeat;
-	check_run_cnt(tattr.prog_fd, run_cnt);
-
-cleanup:
-	if (skel)
-		test_pkt_access__destroy(skel);
-	if (stats_fd >= 0)
-		close(stats_fd);
-}
+26 -20
tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
···
 static void test_queue_stack_map_by_type(int type)
 {
 	const int MAP_SIZE = 32;
-	__u32 vals[MAP_SIZE], duration, retval, size, val;
+	__u32 vals[MAP_SIZE], val;
 	int i, err, prog_fd, map_in_fd, map_out_fd;
 	char file[32], buf[128];
 	struct bpf_object *obj;
 	struct iphdr iph;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.data_out = buf,
+		.data_size_out = sizeof(buf),
+		.repeat = 1,
+	);
 
 	/* Fill test values to be used */
 	for (i = 0; i < MAP_SIZE; i++)
···
 		pkt_v4.iph.saddr = vals[MAP_SIZE - 1 - i] * 5;
 	}
 
-		err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-					buf, &size, &retval, &duration);
-		if (err || retval || size != sizeof(pkt_v4))
+		topts.data_size_out = sizeof(buf);
+		err = bpf_prog_test_run_opts(prog_fd, &topts);
+		if (err || topts.retval ||
+		    topts.data_size_out != sizeof(pkt_v4))
 			break;
 		memcpy(&iph, buf + sizeof(struct ethhdr), sizeof(iph));
 		if (iph.daddr != val)
 			break;
 	}
 
-	CHECK(err || retval || size != sizeof(pkt_v4) || iph.daddr != val,
-	      "bpf_map_pop_elem",
-	      "err %d errno %d retval %d size %d iph->daddr %u\n",
-	      err, errno, retval, size, iph.daddr);
+	ASSERT_OK(err, "bpf_map_pop_elem");
+	ASSERT_OK(topts.retval, "bpf_map_pop_elem test retval");
+	ASSERT_EQ(topts.data_size_out, sizeof(pkt_v4),
+		  "bpf_map_pop_elem data_size_out");
+	ASSERT_EQ(iph.daddr, val, "bpf_map_pop_elem iph.daddr");
 
 	/* Queue is empty, program should return TC_ACT_SHOT */
-	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
-				buf, &size, &retval, &duration);
-	CHECK(err || retval != 2 /* TC_ACT_SHOT */|| size != sizeof(pkt_v4),
-	      "check-queue-stack-map-empty",
-	      "err %d errno %d retval %d size %d\n",
-	      err, errno, retval, size);
+	topts.data_size_out = sizeof(buf);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "check-queue-stack-map-empty");
+	ASSERT_EQ(topts.retval, 2 /* TC_ACT_SHOT */,
+		  "check-queue-stack-map-empty test retval");
+	ASSERT_EQ(topts.data_size_out, sizeof(pkt_v4),
+		  "check-queue-stack-map-empty data_size_out");
 
 	/* Check that the program pushed elements correctly */
 	for (i = 0; i < MAP_SIZE; i++) {
 		err = bpf_map_lookup_and_delete_elem(map_out_fd, NULL, &val);
-		if (err || val != vals[i] * 5)
-			break;
+		ASSERT_OK(err, "bpf_map_lookup_and_delete_elem");
+		ASSERT_EQ(val, vals[i] * 5, "bpf_map_push_elem val");
 	}
-
-	CHECK(i != MAP_SIZE && (err || val != vals[i] * 5),
-	      "bpf_map_push_elem", "err %d value %u\n", err, val);
-
 out:
 	pkt_v4.iph.saddr = 0;
 	bpf_object__close(obj);
+27 -37
tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c
···
 #include "bpf/libbpf_internal.h"
 #include "test_raw_tp_test_run.skel.h"
 
-static int duration;
-
 void test_raw_tp_test_run(void)
 {
-	struct bpf_prog_test_run_attr test_attr = {};
 	int comm_fd = -1, err, nr_online, i, prog_fd;
 	__u64 args[2] = {0x1234ULL, 0x5678ULL};
 	int expected_retval = 0x1234 + 0x5678;
 	struct test_raw_tp_test_run *skel;
 	char buf[] = "new_name";
 	bool *online = NULL;
-	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
-			    .ctx_in = args,
-			    .ctx_size_in = sizeof(args),
-			    .flags = BPF_F_TEST_RUN_ON_CPU,
-			    );
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		.ctx_in = args,
+		.ctx_size_in = sizeof(args),
+		.flags = BPF_F_TEST_RUN_ON_CPU,
+	);
 
 	err = parse_cpu_mask_file("/sys/devices/system/cpu/online", &online,
 				  &nr_online);
-	if (CHECK(err, "parse_cpu_mask_file", "err %d\n", err))
+	if (!ASSERT_OK(err, "parse_cpu_mask_file"))
 		return;
 
 	skel = test_raw_tp_test_run__open_and_load();
-	if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		goto cleanup;
 
 	err = test_raw_tp_test_run__attach(skel);
-	if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
+	if (!ASSERT_OK(err, "skel_attach"))
 		goto cleanup;
 
 	comm_fd = open("/proc/self/comm", O_WRONLY|O_TRUNC);
-	if (CHECK(comm_fd < 0, "open /proc/self/comm", "err %d\n", errno))
+	if (!ASSERT_GE(comm_fd, 0, "open /proc/self/comm"))
 		goto cleanup;
 
 	err = write(comm_fd, buf, sizeof(buf));
-	CHECK(err < 0, "task rename", "err %d", errno);
+	ASSERT_GE(err, 0, "task rename");
 
-	CHECK(skel->bss->count == 0, "check_count", "didn't increase\n");
-	CHECK(skel->data->on_cpu != 0xffffffff, "check_on_cpu", "got wrong value\n");
+	ASSERT_NEQ(skel->bss->count, 0, "check_count");
+	ASSERT_EQ(skel->data->on_cpu, 0xffffffff, "check_on_cpu");
 
 	prog_fd = bpf_program__fd(skel->progs.rename);
-	test_attr.prog_fd = prog_fd;
-	test_attr.ctx_in = args;
-	test_attr.ctx_size_in = sizeof(__u64);
+	opts.ctx_in = args;
+	opts.ctx_size_in = sizeof(__u64);
 
-	err = bpf_prog_test_run_xattr(&test_attr);
-	CHECK(err == 0, "test_run", "should fail for too small ctx\n");
+	err = bpf_prog_test_run_opts(prog_fd, &opts);
+	ASSERT_NEQ(err, 0, "test_run should fail for too small ctx");
 
-	test_attr.ctx_size_in = sizeof(args);
-	err = bpf_prog_test_run_xattr(&test_attr);
-	CHECK(err < 0, "test_run", "err %d\n", errno);
-	CHECK(test_attr.retval != expected_retval, "check_retval",
-	      "expect 0x%x, got 0x%x\n", expected_retval, test_attr.retval);
+	opts.ctx_size_in = sizeof(args);
+	err = bpf_prog_test_run_opts(prog_fd, &opts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(opts.retval, expected_retval, "check_retval");
 
 	for (i = 0; i < nr_online; i++) {
 		if (!online[i])
···
 		opts.cpu = i;
 		opts.retval = 0;
 		err = bpf_prog_test_run_opts(prog_fd, &opts);
-		CHECK(err < 0, "test_run_opts", "err %d\n", errno);
-		CHECK(skel->data->on_cpu != i, "check_on_cpu",
-		      "expect %d got %d\n", i, skel->data->on_cpu);
-		CHECK(opts.retval != expected_retval,
-		      "check_retval", "expect 0x%x, got 0x%x\n",
-		      expected_retval, opts.retval);
+		ASSERT_OK(err, "test_run_opts");
+		ASSERT_EQ(skel->data->on_cpu, i, "check_on_cpu");
+		ASSERT_EQ(opts.retval, expected_retval, "check_retval");
 	}
 
 	/* invalid cpu ID should fail with ENXIO */
 	opts.cpu = 0xffffffff;
 	err = bpf_prog_test_run_opts(prog_fd, &opts);
-	CHECK(err >= 0 || errno != ENXIO,
-	      "test_run_opts_fail",
-	      "should failed with ENXIO\n");
+	ASSERT_EQ(errno, ENXIO, "test_run_opts should fail with ENXIO");
+	ASSERT_ERR(err, "test_run_opts_fail");
 
 	/* non-zero cpu w/o BPF_F_TEST_RUN_ON_CPU should fail with EINVAL */
 	opts.cpu = 1;
 	opts.flags = 0;
 	err = bpf_prog_test_run_opts(prog_fd, &opts);
-	CHECK(err >= 0 || errno != EINVAL,
-	      "test_run_opts_fail",
-	      "should failed with EINVAL\n");
+	ASSERT_EQ(errno, EINVAL, "test_run_opts should fail with EINVAL");
+	ASSERT_ERR(err, "test_run_opts_fail");
 
 cleanup:
 	close(comm_fd);
+9 -7
tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c
···
 		0,
 	};
 
-	__u32 prog_ret;
-	int err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0,
-				    0, &prog_ret, 0);
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = test_skb,
+		.data_size_in = sizeof(test_skb),
+		.repeat = 1,
+	);
+	int err = bpf_prog_test_run_opts(filter_fd, &topts);
 	CHECK(err != 42, "test_run",
 	      "tracepoint did not modify return value\n");
-	CHECK(prog_ret != 0, "test_run_ret",
+	CHECK(topts.retval != 0, "test_run_ret",
 	      "socket_filter did not return 0\n");
 
 	close(tp_fd);
 
-	err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0, 0,
-				&prog_ret, 0);
+	err = bpf_prog_test_run_opts(filter_fd, &topts);
 	CHECK(err != 0, "test_run_notrace",
 	      "test_run failed with %d errno %d\n", err, errno);
-	CHECK(prog_ret != 0, "test_run_ret_notrace",
+	CHECK(topts.retval != 0, "test_run_ret_notrace",
 	      "socket_filter did not return 0\n");
 
 out_filterfd:
+11 -10
tools/testing/selftests/bpf/prog_tests/signal_pending.c
···
 	struct itimerval timeo = {
 		.it_value.tv_usec = 100000, /* 100ms */
 	};
-	__u32 duration = 0, retval;
 	int prog_fd;
 	int err;
 	int i;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 0xffffffff,
+	);
 
 	for (i = 0; i < ARRAY_SIZE(prog); i++)
 		prog[i] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0);
···
 
 	prog_fd = bpf_test_load_program(prog_type, prog, ARRAY_SIZE(prog),
 					"GPL", 0, NULL, 0);
-	CHECK(prog_fd < 0, "test-run", "errno %d\n", errno);
+	ASSERT_GE(prog_fd, 0, "test-run load");
 
 	err = sigaction(SIGALRM, &sigalrm_action, NULL);
-	CHECK(err, "test-run-signal-sigaction", "errno %d\n", errno);
+	ASSERT_OK(err, "test-run-signal-sigaction");
 
 	err = setitimer(ITIMER_REAL, &timeo, NULL);
-	CHECK(err, "test-run-signal-timer", "errno %d\n", errno);
+	ASSERT_OK(err, "test-run-signal-timer");
 
-	err = bpf_prog_test_run(prog_fd, 0xffffffff, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, &retval, &duration);
-	CHECK(duration > 500000000, /* 500ms */
-	      "test-run-signal-duration",
-	      "duration %dns > 500ms\n",
-	      duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_LE(topts.duration, 500000000 /* 500ms */,
+		  "test-run-signal-duration");
 
 	signal(SIGALRM, SIG_DFL);
 }
+28 -53
tools/testing/selftests/bpf/prog_tests/skb_ctx.c
···
 		.gso_size = 10,
 		.hwtstamp = 11,
 	};
-	struct bpf_prog_test_run_attr tattr = {
+	LIBBPF_OPTS(bpf_test_run_opts, tattr,
 		.data_in = &pkt_v4,
 		.data_size_in = sizeof(pkt_v4),
 		.ctx_in = &skb,
 		.ctx_size_in = sizeof(skb),
 		.ctx_out = &skb,
 		.ctx_size_out = sizeof(skb),
-	};
+	);
 	struct bpf_object *obj;
-	int err;
-	int i;
+	int err, prog_fd, i;
 
-	err = bpf_prog_test_load("./test_skb_ctx.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
-				 &tattr.prog_fd);
-	if (CHECK_ATTR(err, "load", "err %d errno %d\n", err, errno))
+	err = bpf_prog_test_load("./test_skb_ctx.o", BPF_PROG_TYPE_SCHED_CLS,
+				 &obj, &prog_fd);
+	if (!ASSERT_OK(err, "load"))
 		return;
 
 	/* ctx_in != NULL, ctx_size_in == 0 */
 
 	tattr.ctx_size_in = 0;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err == 0, "ctx_size_in", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_NEQ(err, 0, "ctx_size_in");
 	tattr.ctx_size_in = sizeof(skb);
 
 	/* ctx_out != NULL, ctx_size_out == 0 */
 
 	tattr.ctx_size_out = 0;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err == 0, "ctx_size_out", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_NEQ(err, 0, "ctx_size_out");
 	tattr.ctx_size_out = sizeof(skb);
 
 	/* non-zero [len, tc_index] fields should be rejected*/
 
 	skb.len = 1;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err == 0, "len", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_NEQ(err, 0, "len");
 	skb.len = 0;
 
 	skb.tc_index = 1;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err == 0, "tc_index", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_NEQ(err, 0, "tc_index");
 	skb.tc_index = 0;
 
 	/* non-zero [hash, sk] fields should be rejected */
 
 	skb.hash = 1;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err == 0, "hash", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_NEQ(err, 0, "hash");
 	skb.hash = 0;
 
 	skb.sk = (struct bpf_sock *)1;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err == 0, "sk", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_NEQ(err, 0, "sk");
 	skb.sk = 0;
 
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err != 0 || tattr.retval,
-		   "run",
-		   "err %d errno %d retval %d\n",
-		   err, errno, tattr.retval);
-
-	CHECK_ATTR(tattr.ctx_size_out != sizeof(skb),
-		   "ctx_size_out",
-		   "incorrect output size, want %zu have %u\n",
-		   sizeof(skb), tattr.ctx_size_out);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
+	ASSERT_OK(err, "test_run");
+	ASSERT_OK(tattr.retval, "test_run retval");
+	ASSERT_EQ(tattr.ctx_size_out, sizeof(skb), "ctx_size_out");
 
 	for (i = 0; i < 5; i++)
-		CHECK_ATTR(skb.cb[i] != i + 2,
-			   "ctx_out_cb",
-			   "skb->cb[i] == %d, expected %d\n",
-			   skb.cb[i], i + 2);
-	CHECK_ATTR(skb.priority != 7,
-		   "ctx_out_priority",
-		   "skb->priority == %d, expected %d\n",
-		   skb.priority, 7);
-	CHECK_ATTR(skb.ifindex != 1,
-		   "ctx_out_ifindex",
-		   "skb->ifindex == %d, expected %d\n",
-		   skb.ifindex, 1);
-	CHECK_ATTR(skb.ingress_ifindex != 11,
-		   "ctx_out_ingress_ifindex",
-		   "skb->ingress_ifindex == %d, expected %d\n",
-		   skb.ingress_ifindex, 11);
-	CHECK_ATTR(skb.tstamp != 8,
-		   "ctx_out_tstamp",
-		   "skb->tstamp == %lld, expected %d\n",
-		   skb.tstamp, 8);
-	CHECK_ATTR(skb.mark != 10,
-		   "ctx_out_mark",
-		   "skb->mark == %u, expected %d\n",
-		   skb.mark, 10);
+		ASSERT_EQ(skb.cb[i], i + 2, "ctx_out_cb");
+	ASSERT_EQ(skb.priority, 7, "ctx_out_priority");
+	ASSERT_EQ(skb.ifindex, 1, "ctx_out_ifindex");
+	ASSERT_EQ(skb.ingress_ifindex, 11, "ctx_out_ingress_ifindex");
+	ASSERT_EQ(skb.tstamp, 8, "ctx_out_tstamp");
+	ASSERT_EQ(skb.mark, 10, "ctx_out_mark");
 
 	bpf_object__close(obj);
 }
+8 -8
tools/testing/selftests/bpf/prog_tests/skb_helpers.c
···
 		.gso_segs = 8,
 		.gso_size = 10,
 	};
-	struct bpf_prog_test_run_attr tattr = {
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
 		.data_in = &pkt_v4,
 		.data_size_in = sizeof(pkt_v4),
 		.ctx_in = &skb,
 		.ctx_size_in = sizeof(skb),
 		.ctx_out = &skb,
 		.ctx_size_out = sizeof(skb),
-	};
+	);
 	struct bpf_object *obj;
-	int err;
+	int err, prog_fd;
 
-	err = bpf_prog_test_load("./test_skb_helpers.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
-				 &tattr.prog_fd);
-	if (CHECK_ATTR(err, "load", "err %d errno %d\n", err, errno))
+	err = bpf_prog_test_load("./test_skb_helpers.o",
+				 BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
+	if (!ASSERT_OK(err, "load"))
 		return;
-	err = bpf_prog_test_run_xattr(&tattr);
-	CHECK_ATTR(err, "len", "err %d errno %d\n", err, errno);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
 	bpf_object__close(obj);
 }
+41 -17
tools/testing/selftests/bpf/prog_tests/sock_fields.c
···
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2019 Facebook */
 
+#define _GNU_SOURCE
 #include <netinet/in.h>
 #include <arpa/inet.h>
 #include <unistd.h>
+#include <sched.h>
 #include <stdlib.h>
 #include <string.h>
 #include <errno.h>
···
 enum bpf_linum_array_idx {
 	EGRESS_LINUM_IDX,
 	INGRESS_LINUM_IDX,
+	READ_SK_DST_PORT_LINUM_IDX,
 	__NR_BPF_LINUM_ARRAY_IDX,
 };
···
 static int linum_map_fd;
 static __u32 duration;
 
-static __u32 egress_linum_idx = EGRESS_LINUM_IDX;
-static __u32 ingress_linum_idx = INGRESS_LINUM_IDX;
+static bool create_netns(void)
+{
+	if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns"))
+		return false;
+
+	if (!ASSERT_OK(system("ip link set dev lo up"), "bring up lo"))
+		return false;
+
+	return true;
+}
 
 static void print_sk(const struct bpf_sock *sk, const char *prefix)
 {
···
 {
 	struct bpf_tcp_sock srv_tp, cli_tp, listen_tp;
 	struct bpf_sock srv_sk, cli_sk, listen_sk;
-	__u32 ingress_linum, egress_linum;
+	__u32 idx, ingress_linum, egress_linum, linum;
 	int err;
 
-	err = bpf_map_lookup_elem(linum_map_fd, &egress_linum_idx,
-				  &egress_linum);
+	idx = EGRESS_LINUM_IDX;
+	err = bpf_map_lookup_elem(linum_map_fd, &idx, &egress_linum);
 	CHECK(err < 0, "bpf_map_lookup_elem(linum_map_fd)",
 	      "err:%d errno:%d\n", err, errno);
 
-	err = bpf_map_lookup_elem(linum_map_fd, &ingress_linum_idx,
-				  &ingress_linum);
+	idx = INGRESS_LINUM_IDX;
+	err = bpf_map_lookup_elem(linum_map_fd, &idx, &ingress_linum);
 	CHECK(err < 0, "bpf_map_lookup_elem(linum_map_fd)",
 	      "err:%d errno:%d\n", err, errno);
+
+	idx = READ_SK_DST_PORT_LINUM_IDX;
+	err = bpf_map_lookup_elem(linum_map_fd, &idx, &linum);
+	ASSERT_OK(err, "bpf_map_lookup_elem(linum_map_fd, READ_SK_DST_PORT_IDX)");
+	ASSERT_EQ(linum, 0, "failure in read_sk_dst_port on line");
 
 	memcpy(&srv_sk, &skel->bss->srv_sk, sizeof(srv_sk));
 	memcpy(&srv_tp, &skel->bss->srv_tp, sizeof(srv_tp));
···
 	char buf[DATA_LEN];
 
 	/* Prepare listen_fd */
-	listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0, 0);
+	listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0xcafe, 0);
 	/* start_server() has logged the error details */
 	if (CHECK_FAIL(listen_fd == -1))
 		goto done;
···
 void serial_test_sock_fields(void)
 {
-	struct bpf_link *egress_link = NULL, *ingress_link = NULL;
 	int parent_cg_fd = -1, child_cg_fd = -1;
+	struct bpf_link *link;
+
+	/* Use a dedicated netns to have a fixed listen port */
+	if (!create_netns())
+		return;
 
 	/* Create a cgroup, get fd, and join it */
 	parent_cg_fd = test__join_cgroup(PARENT_CGROUP);
···
 	if (CHECK(!skel, "test_sock_fields__open_and_load", "failed\n"))
 		goto done;
 
-	egress_link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields,
-						 child_cg_fd);
-	if (!ASSERT_OK_PTR(egress_link, "attach_cgroup(egress)"))
+	link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields, child_cg_fd);
+	if (!ASSERT_OK_PTR(link, "attach_cgroup(egress_read_sock_fields)"))
 		goto done;
+	skel->links.egress_read_sock_fields = link;
 
-	ingress_link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields,
-						  child_cg_fd);
-	if (!ASSERT_OK_PTR(ingress_link, "attach_cgroup(ingress)"))
+	link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields, child_cg_fd);
+	if (!ASSERT_OK_PTR(link, "attach_cgroup(ingress_read_sock_fields)"))
 		goto done;
+	skel->links.ingress_read_sock_fields = link;
+
+	link = bpf_program__attach_cgroup(skel->progs.read_sk_dst_port, child_cg_fd);
+	if (!ASSERT_OK_PTR(link, "attach_cgroup(read_sk_dst_port"))
+		goto done;
+	skel->links.read_sk_dst_port = link;
 
 	linum_map_fd = bpf_map__fd(skel->maps.linum_map);
 	sk_pkt_out_cnt_fd = bpf_map__fd(skel->maps.sk_pkt_out_cnt);
···
 	test();
 
 done:
-	bpf_link__destroy(egress_link);
-	bpf_link__destroy(ingress_link);
+	test_sock_fields__detach(skel);
 	test_sock_fields__destroy(skel);
 	if (child_cg_fd >= 0)
 		close(child_cg_fd);
+9 -11
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
···
 
 static void test_sockmap_update(enum bpf_map_type map_type)
 {
-	struct bpf_prog_test_run_attr tattr;
 	int err, prog, src, duration = 0;
 	struct test_sockmap_update *skel;
 	struct bpf_map *dst_map;
 	const __u32 zero = 0;
 	char dummy[14] = {0};
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = dummy,
+		.data_size_in = sizeof(dummy),
+		.repeat = 1,
+	);
 	__s64 sk;
 
 	sk = connected_socket_v4();
···
 	if (CHECK(err, "update_elem(src)", "errno=%u\n", errno))
 		goto out;
 
-	tattr = (struct bpf_prog_test_run_attr){
-		.prog_fd = prog,
-		.repeat = 1,
-		.data_in = dummy,
-		.data_size_in = sizeof(dummy),
-	};
-
-	err = bpf_prog_test_run_xattr(&tattr);
-	if (CHECK_ATTR(err || !tattr.retval, "bpf_prog_test_run",
-		       "errno=%u retval=%u\n", errno, tattr.retval))
+	err = bpf_prog_test_run_opts(prog, &topts);
+	if (!ASSERT_OK(err, "test_run"))
+		goto out;
+	if (!ASSERT_NEQ(topts.retval, 0, "test_run retval"))
 		goto out;
 
 	compare_cookies(skel->maps.src, dst_map);
+8 -6
tools/testing/selftests/bpf/prog_tests/spinlock.c
···
 
 static void *spin_lock_thread(void *arg)
 {
-	__u32 duration, retval;
 	int err, prog_fd = *(u32 *) arg;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 10000,
+	);
 
-	err = bpf_prog_test_run(prog_fd, 10000, &pkt_v4, sizeof(pkt_v4),
-				NULL, NULL, &retval, &duration);
-	CHECK(err || retval, "",
-	      "err %d errno %d retval %d duration %d\n",
-	      err, errno, retval, duration);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_OK(topts.retval, "test_run retval");
 	pthread_exit(arg);
 }
 
+1 -1
tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
···
 		return;
 
 	/* override program type */
-	bpf_program__set_perf_event(skel->progs.oncpu);
+	bpf_program__set_type(skel->progs.oncpu, BPF_PROG_TYPE_PERF_EVENT);
 
 	err = test_stacktrace_build_id__load(skel);
 	if (CHECK(err, "skel_load", "skeleton load failed: %d\n", err))
+5 -5
tools/testing/selftests/bpf/prog_tests/syscall.c
···
 		.log_buf = (uintptr_t) verifier_log,
 		.log_size = sizeof(verifier_log),
 	};
-	struct bpf_prog_test_run_attr tattr = {
+	LIBBPF_OPTS(bpf_test_run_opts, tattr,
 		.ctx_in = &ctx,
 		.ctx_size_in = sizeof(ctx),
-	};
+	);
 	struct syscall *skel = NULL;
 	__u64 key = 12, value = 0;
-	int err;
+	int err, prog_fd;
 
 	skel = syscall__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "skel_load"))
 		goto cleanup;
 
-	tattr.prog_fd = bpf_program__fd(skel->progs.bpf_prog);
-	err = bpf_prog_test_run_xattr(&tattr);
+	prog_fd = bpf_program__fd(skel->progs.bpf_prog);
+	err = bpf_prog_test_run_opts(prog_fd, &tattr);
 	ASSERT_EQ(err, 0, "err");
 	ASSERT_EQ(tattr.retval, 1, "retval");
 	ASSERT_GT(ctx.map_fd, 0, "ctx.map_fd");
+123 -115
tools/testing/selftests/bpf/prog_tests/tailcalls.c
···
 	struct bpf_map *prog_array;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char prog_name[32];
 	char buff[128] = {};
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall1.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
 				 &prog_fd);
···
 	}
 
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != i, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, i, "tailcall retval");
 
 		err = bpf_map_delete_elem(map_fd, &i);
 		if (CHECK_FAIL(err))
 			goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 3, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 3, "tailcall retval");
 
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
 		snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
···
 		goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_OK(topts.retval, "tailcall retval");
 
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
 		j = bpf_map__max_entries(prog_array) - 1 - i;
···
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
 		j = bpf_map__max_entries(prog_array) - 1 - i;
 
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != j, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, j, "tailcall retval");
 
 		err = bpf_map_delete_elem(map_fd, &i);
 		if (CHECK_FAIL(err))
 			goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 3, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 3, "tailcall retval");
 
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
 		err = bpf_map_delete_elem(map_fd, &i);
 		if (CHECK_FAIL(err >= 0 || errno != ENOENT))
 			goto out;
 
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != 3, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, 3, "tailcall retval");
 	}
 
 out:
···
 	struct bpf_map *prog_array;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char prog_name[32];
 	char buff[128] = {};
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall2.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
 				 &prog_fd);
···
 		goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 2, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 2, "tailcall retval");
 
 	i = 2;
 	err = bpf_map_delete_elem(map_fd, &i);
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 1, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
 
 	i = 0;
 	err = bpf_map_delete_elem(map_fd, &i);
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 3, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 3, "tailcall retval");
 out:
 	bpf_object__close(obj);
 }
···
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char buff[128] = {};
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load(which, BPF_PROG_TYPE_SCHED_CLS, &obj,
 				 &prog_fd);
···
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 1, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
 
 	data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
 	if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
···
 	i = 0;
 	err = bpf_map_lookup_elem(data_fd, &i, &val);
-	CHECK(err || val != 33, "tailcall count", "err %d errno %d count %d\n",
-	      err, errno, val);
+	ASSERT_OK(err, "tailcall count");
+	ASSERT_EQ(val, 33, "tailcall count");
 
 	i = 0;
 	err = bpf_map_delete_elem(map_fd, &i);
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_OK(topts.retval, "tailcall retval");
 out:
 	bpf_object__close(obj);
 }
···
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	static const int zero = 0;
 	char buff[128] = {};
 	char prog_name[32];
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall4.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
 				 &prog_fd);
···
 		if (CHECK_FAIL(err))
 			goto out;
 
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != i, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, i, "tailcall retval");
 	}
 
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
···
 		if (CHECK_FAIL(err))
 			goto out;
 
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != 3, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, 3, "tailcall retval");
 	}
 out:
 	bpf_object__close(obj);
···
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	static const int zero = 0;
 	char buff[128] = {};
 	char prog_name[32];
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall5.o", BPF_PROG_TYPE_SCHED_CLS, &obj,
 				 &prog_fd);
···
 		if (CHECK_FAIL(err))
 			goto out;
 
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != i, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, i, "tailcall retval");
 	}
 
 	for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
···
 		if (CHECK_FAIL(err))
 			goto out;
 
-		err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-					&duration, &retval, NULL);
-		CHECK(err || retval != 3, "tailcall",
-		      "err %d errno %d retval %d\n", err, errno, retval);
+		err = bpf_prog_test_run_opts(main_fd, &topts);
+		ASSERT_OK(err, "tailcall");
+		ASSERT_EQ(topts.retval, 3, "tailcall retval");
 	}
 out:
 	bpf_object__close(obj);
···
 	struct bpf_map *prog_array;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char prog_name[32];
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall_bpf2bpf1.o", BPF_PROG_TYPE_SCHED_CLS,
 				 &obj, &prog_fd);
···
 		goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				0, &retval, &duration);
-	CHECK(err || retval != 1, "tailcall",
-	      "err %d errno %d retval %d\n", err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
 
 	/* jmp -> nop, call subprog that will do tailcall */
 	i = 1;
···
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				0, &retval, &duration);
-	CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_OK(topts.retval, "tailcall retval");
 
 	/* make sure that subprog can access ctx and entry prog that
 	 * called this subprog can properly return
···
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				0, &retval, &duration);
-	CHECK(err || retval != sizeof(pkt_v4) * 2,
-	      "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 2, "tailcall retval");
 out:
 	bpf_object__close(obj);
 }
···
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char buff[128] = {};
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = buff,
+		.data_size_in = sizeof(buff),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall_bpf2bpf2.o", BPF_PROG_TYPE_SCHED_CLS,
 				 &obj, &prog_fd);
···
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 1, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
 
 	data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
 	if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
···
 	i = 0;
 	err = bpf_map_lookup_elem(data_fd, &i, &val);
-	CHECK(err || val != 33, "tailcall count", "err %d errno %d count %d\n",
-	      err, errno, val);
+	ASSERT_OK(err, "tailcall count");
+	ASSERT_EQ(val, 33, "tailcall count");
 
 	i = 0;
 	err = bpf_map_delete_elem(map_fd, &i);
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, buff, sizeof(buff), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != 0, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_OK(topts.retval, "tailcall retval");
 out:
 	bpf_object__close(obj);
 }
···
 	struct bpf_map *prog_array;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char prog_name[32];
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall_bpf2bpf3.o", BPF_PROG_TYPE_SCHED_CLS,
 				 &obj, &prog_fd);
···
 		goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != sizeof(pkt_v4) * 3,
-	      "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 3, "tailcall retval");
 
 	i = 1;
 	err = bpf_map_delete_elem(map_fd, &i);
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != sizeof(pkt_v4),
-	      "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, sizeof(pkt_v4), "tailcall retval");
 
 	i = 0;
 	err = bpf_map_delete_elem(map_fd, &i);
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != sizeof(pkt_v4) * 2,
-	      "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 2, "tailcall retval");
 out:
 	bpf_object__close(obj);
 }
···
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
-	__u32 retval, duration;
 	char prog_name[32];
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 
 	err = bpf_prog_test_load("tailcall_bpf2bpf4.o", BPF_PROG_TYPE_SCHED_CLS,
 				 &obj, &prog_fd);
···
 	if (CHECK_FAIL(err))
 		goto out;
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != sizeof(pkt_v4) * 3, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, sizeof(pkt_v4) * 3, "tailcall retval");
 
 	i = 0;
 	err = bpf_map_lookup_elem(data_fd, &i, &val);
-	CHECK(err || val.count != 31, "tailcall count", "err %d errno %d count %d\n",
-	      err, errno, val.count);
+	ASSERT_OK(err, "tailcall count");
+	ASSERT_EQ(val.count, 31, "tailcall count");
 
 out:
 	bpf_object__close(obj);
+10 -6
tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
···
 #include <test_progs.h>
 #include "test_task_pt_regs.skel.h"
 
+/* uprobe attach point */
+static void trigger_func(void)
+{
+	asm volatile ("");
+}
+
 void test_task_pt_regs(void)
 {
 	struct test_task_pt_regs *skel;
 	struct bpf_link *uprobe_link;
-	size_t uprobe_offset;
-	ssize_t base_addr;
+	ssize_t uprobe_offset;
 	bool match;
 
-	base_addr = get_base_addr();
-	if (!ASSERT_GT(base_addr, 0, "get_base_addr"))
+	uprobe_offset = get_uprobe_offset(&trigger_func);
+	if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
 		return;
-	uprobe_offset = get_uprobe_offset(&get_base_addr, base_addr);
 
 	skel = test_task_pt_regs__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
···
 	skel->links.handle_uprobe = uprobe_link;
 
 	/* trigger & validate uprobe */
-	get_base_addr();
+	trigger_func();
 
 	if (!ASSERT_EQ(skel->bss->uprobe_res, 1, "check_uprobe_res"))
 		goto cleanup;
+73
tools/testing/selftests/bpf/prog_tests/test_bpf_syscall_macro.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2022 Sony Group Corporation */
+#include <sys/prctl.h>
+#include <test_progs.h>
+#include "bpf_syscall_macro.skel.h"
+
+void test_bpf_syscall_macro(void)
+{
+	struct bpf_syscall_macro *skel = NULL;
+	int err;
+	int exp_arg1 = 1001;
+	unsigned long exp_arg2 = 12;
+	unsigned long exp_arg3 = 13;
+	unsigned long exp_arg4 = 14;
+	unsigned long exp_arg5 = 15;
+
+	/* check whether it can open program */
+	skel = bpf_syscall_macro__open();
+	if (!ASSERT_OK_PTR(skel, "bpf_syscall_macro__open"))
+		return;
+
+	skel->rodata->filter_pid = getpid();
+
+	/* check whether it can load program */
+	err = bpf_syscall_macro__load(skel);
+	if (!ASSERT_OK(err, "bpf_syscall_macro__load"))
+		goto cleanup;
+
+	/* check whether it can attach kprobe */
+	err = bpf_syscall_macro__attach(skel);
+	if (!ASSERT_OK(err, "bpf_syscall_macro__attach"))
+		goto cleanup;
+
+	/* check whether args of syscall are copied correctly */
+	prctl(exp_arg1, exp_arg2, exp_arg3, exp_arg4, exp_arg5);
+#if defined(__aarch64__) || defined(__s390__)
+	ASSERT_NEQ(skel->bss->arg1, exp_arg1, "syscall_arg1");
+#else
+	ASSERT_EQ(skel->bss->arg1, exp_arg1, "syscall_arg1");
+#endif
+	ASSERT_EQ(skel->bss->arg2, exp_arg2, "syscall_arg2");
+	ASSERT_EQ(skel->bss->arg3, exp_arg3, "syscall_arg3");
+	/* it cannot copy arg4 when uses PT_REGS_PARM4 on x86_64 */
+#ifdef __x86_64__
+	ASSERT_NEQ(skel->bss->arg4_cx, exp_arg4, "syscall_arg4_from_cx");
+#else
+	ASSERT_EQ(skel->bss->arg4_cx, exp_arg4, "syscall_arg4_from_cx");
+#endif
+	ASSERT_EQ(skel->bss->arg4, exp_arg4, "syscall_arg4");
+	ASSERT_EQ(skel->bss->arg5, exp_arg5, "syscall_arg5");
+
+	/* check whether args of syscall are copied correctly for CORE variants */
+	ASSERT_EQ(skel->bss->arg1_core, exp_arg1, "syscall_arg1_core_variant");
+	ASSERT_EQ(skel->bss->arg2_core, exp_arg2, "syscall_arg2_core_variant");
+	ASSERT_EQ(skel->bss->arg3_core, exp_arg3, "syscall_arg3_core_variant");
+	/* it cannot copy arg4 when uses PT_REGS_PARM4_CORE on x86_64 */
+#ifdef __x86_64__
+	ASSERT_NEQ(skel->bss->arg4_core_cx, exp_arg4, "syscall_arg4_from_cx_core_variant");
+#else
+	ASSERT_EQ(skel->bss->arg4_core_cx, exp_arg4, "syscall_arg4_from_cx_core_variant");
+#endif
+	ASSERT_EQ(skel->bss->arg4_core, exp_arg4, "syscall_arg4_core_variant");
+	ASSERT_EQ(skel->bss->arg5_core, exp_arg5, "syscall_arg5_core_variant");
+
+	ASSERT_EQ(skel->bss->option_syscall, exp_arg1, "BPF_KPROBE_SYSCALL_option");
+	ASSERT_EQ(skel->bss->arg2_syscall, exp_arg2, "BPF_KPROBE_SYSCALL_arg2");
+	ASSERT_EQ(skel->bss->arg3_syscall, exp_arg3, "BPF_KPROBE_SYSCALL_arg3");
+	ASSERT_EQ(skel->bss->arg4_syscall, exp_arg4, "BPF_KPROBE_SYSCALL_arg4");
+	ASSERT_EQ(skel->bss->arg5_syscall, exp_arg5, "BPF_KPROBE_SYSCALL_arg5");
+
+cleanup:
+	bpf_syscall_macro__destroy(skel);
+}
+7 -7
tools/testing/selftests/bpf/prog_tests/test_profiler.c
···
 
 static int sanity_run(struct bpf_program *prog)
 {
-	struct bpf_prog_test_run_attr test_attr = {};
+	LIBBPF_OPTS(bpf_test_run_opts, test_attr);
 	__u64 args[] = {1, 2, 3};
-	__u32 duration = 0;
 	int err, prog_fd;
 
 	prog_fd = bpf_program__fd(prog);
-	test_attr.prog_fd = prog_fd;
 	test_attr.ctx_in = args;
 	test_attr.ctx_size_in = sizeof(args);
-	err = bpf_prog_test_run_xattr(&test_attr);
-	if (CHECK(err || test_attr.retval, "test_run",
-		  "err %d errno %d retval %d duration %d\n",
-		  err, errno, test_attr.retval, duration))
+	err = bpf_prog_test_run_opts(prog_fd, &test_attr);
+	if (!ASSERT_OK(err, "test_run"))
 		return -1;
+
+	if (!ASSERT_OK(test_attr.retval, "test_run retval"))
+		return -1;
+
 	return 0;
 }
 
+9 -6
tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c
··· 6 6 7 7 static int sanity_run(struct bpf_program *prog) 8 8 { 9 - __u32 duration, retval; 10 9 int err, prog_fd; 10 + LIBBPF_OPTS(bpf_test_run_opts, topts, 11 + .data_in = &pkt_v4, 12 + .data_size_in = sizeof(pkt_v4), 13 + .repeat = 1, 14 + ); 11 15 12 16 prog_fd = bpf_program__fd(prog); 13 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), 14 - NULL, NULL, &retval, &duration); 15 - if (CHECK(err || retval != 123, "test_run", 16 - "err %d errno %d retval %d duration %d\n", 17 - err, errno, retval, duration)) 17 + err = bpf_prog_test_run_opts(prog_fd, &topts); 18 + if (!ASSERT_OK(err, "test_run")) 19 + return -1; 20 + if (!ASSERT_EQ(topts.retval, 123, "test_run retval")) 18 21 return -1; 19 22 return 0; 20 23 }
+3 -4
tools/testing/selftests/bpf/prog_tests/timer.c
··· 6 6 static int timer(struct timer *timer_skel) 7 7 { 8 8 int err, prog_fd; 9 - __u32 duration = 0, retval; 9 + LIBBPF_OPTS(bpf_test_run_opts, topts); 10 10 11 11 err = timer__attach(timer_skel); 12 12 if (!ASSERT_OK(err, "timer_attach")) ··· 16 16 ASSERT_EQ(timer_skel->data->callback2_check, 52, "callback2_check1"); 17 17 18 18 prog_fd = bpf_program__fd(timer_skel->progs.test1); 19 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 20 - NULL, NULL, &retval, &duration); 19 + err = bpf_prog_test_run_opts(prog_fd, &topts); 21 20 ASSERT_OK(err, "test_run"); 22 - ASSERT_EQ(retval, 0, "test_run"); 21 + ASSERT_EQ(topts.retval, 0, "test_run"); 23 22 timer__detach(timer_skel); 24 23 25 24 usleep(50); /* 10 usecs should be enough, but give it extra */
+3 -4
tools/testing/selftests/bpf/prog_tests/timer_mim.c
··· 6 6 7 7 static int timer_mim(struct timer_mim *timer_skel) 8 8 { 9 - __u32 duration = 0, retval; 10 9 __u64 cnt1, cnt2; 11 10 int err, prog_fd, key1 = 1; 11 + LIBBPF_OPTS(bpf_test_run_opts, topts); 12 12 13 13 err = timer_mim__attach(timer_skel); 14 14 if (!ASSERT_OK(err, "timer_attach")) 15 15 return err; 16 16 17 17 prog_fd = bpf_program__fd(timer_skel->progs.test1); 18 - err = bpf_prog_test_run(prog_fd, 1, NULL, 0, 19 - NULL, NULL, &retval, &duration); 18 + err = bpf_prog_test_run_opts(prog_fd, &topts); 20 19 ASSERT_OK(err, "test_run"); 21 - ASSERT_EQ(retval, 0, "test_run"); 20 + ASSERT_EQ(topts.retval, 0, "test_run"); 22 21 timer_mim__detach(timer_skel); 23 22 24 23 /* check that timer_cb[12] are incrementing 'cnt' */
+16 -12
tools/testing/selftests/bpf/prog_tests/trace_ext.c
··· 23 23 int err, pkt_fd, ext_fd; 24 24 struct bpf_program *prog; 25 25 char buf[100]; 26 - __u32 retval; 27 26 __u64 len; 27 + LIBBPF_OPTS(bpf_test_run_opts, topts, 28 + .data_in = &pkt_v4, 29 + .data_size_in = sizeof(pkt_v4), 30 + .repeat = 1, 31 + ); 28 32 29 33 /* open/load/attach test_pkt_md_access */ 30 34 skel_pkt = test_pkt_md_access__open_and_load(); ··· 81 77 82 78 /* load/attach tracing */ 83 79 err = test_trace_ext_tracing__load(skel_trace); 84 - if (CHECK(err, "setup", "tracing/test_pkt_md_access_new load failed\n")) { 80 + if (!ASSERT_OK(err, "tracing/test_pkt_md_access_new load")) { 85 81 libbpf_strerror(err, buf, sizeof(buf)); 86 82 fprintf(stderr, "%s\n", buf); 87 83 goto cleanup; 88 84 } 89 85 90 86 err = test_trace_ext_tracing__attach(skel_trace); 91 - if (CHECK(err, "setup", "tracing/test_pkt_md_access_new attach failed: %d\n", err)) 87 + if (!ASSERT_OK(err, "tracing/test_pkt_md_access_new attach")) 92 88 goto cleanup; 93 89 94 90 /* trigger the test */ 95 - err = bpf_prog_test_run(pkt_fd, 1, &pkt_v4, sizeof(pkt_v4), 96 - NULL, NULL, &retval, &duration); 97 - CHECK(err || retval, "run", "err %d errno %d retval %d\n", err, errno, retval); 91 + err = bpf_prog_test_run_opts(pkt_fd, &topts); 92 + ASSERT_OK(err, "test_run_opts err"); 93 + ASSERT_OK(topts.retval, "test_run_opts retval"); 98 94 99 95 bss_ext = skel_ext->bss; 100 96 bss_trace = skel_trace->bss; 101 97 102 98 len = bss_ext->ext_called; 103 99 104 - CHECK(bss_ext->ext_called == 0, 105 - "check", "failed to trigger freplace/test_pkt_md_access\n"); 106 - CHECK(bss_trace->fentry_called != len, 107 - "check", "failed to trigger fentry/test_pkt_md_access_new\n"); 108 - CHECK(bss_trace->fexit_called != len, 109 - "check", "failed to trigger fexit/test_pkt_md_access_new\n"); 100 + ASSERT_NEQ(bss_ext->ext_called, 0, 101 + "failed to trigger freplace/test_pkt_md_access"); 102 + ASSERT_EQ(bss_trace->fentry_called, len, 103 + "failed to trigger fentry/test_pkt_md_access_new"); 104 + 
ASSERT_EQ(bss_trace->fexit_called, len, 105 + "failed to trigger fexit/test_pkt_md_access_new"); 110 106 111 107 cleanup: 112 108 test_trace_ext_tracing__destroy(skel_trace);
+21 -13
tools/testing/selftests/bpf/prog_tests/xdp.c
··· 13 13 char buf[128]; 14 14 struct ipv6hdr iph6; 15 15 struct iphdr iph; 16 - __u32 duration, retval, size; 17 16 int err, prog_fd, map_fd; 17 + LIBBPF_OPTS(bpf_test_run_opts, topts, 18 + .data_in = &pkt_v4, 19 + .data_size_in = sizeof(pkt_v4), 20 + .data_out = buf, 21 + .data_size_out = sizeof(buf), 22 + .repeat = 1, 23 + ); 18 24 19 25 err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); 20 26 if (CHECK_FAIL(err)) ··· 32 26 bpf_map_update_elem(map_fd, &key4, &value4, 0); 33 27 bpf_map_update_elem(map_fd, &key6, &value6, 0); 34 28 35 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), 36 - buf, &size, &retval, &duration); 29 + err = bpf_prog_test_run_opts(prog_fd, &topts); 37 30 memcpy(&iph, buf + sizeof(struct ethhdr), sizeof(iph)); 38 - CHECK(err || retval != XDP_TX || size != 74 || 39 - iph.protocol != IPPROTO_IPIP, "ipv4", 40 - "err %d errno %d retval %d size %d\n", 41 - err, errno, retval, size); 31 + ASSERT_OK(err, "test_run"); 32 + ASSERT_EQ(topts.retval, XDP_TX, "ipv4 test_run retval"); 33 + ASSERT_EQ(topts.data_size_out, 74, "ipv4 test_run data_size_out"); 34 + ASSERT_EQ(iph.protocol, IPPROTO_IPIP, "ipv4 test_run iph.protocol"); 42 35 43 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v6, sizeof(pkt_v6), 44 - buf, &size, &retval, &duration); 36 + topts.data_in = &pkt_v6; 37 + topts.data_size_in = sizeof(pkt_v6); 38 + topts.data_size_out = sizeof(buf); 39 + 40 + err = bpf_prog_test_run_opts(prog_fd, &topts); 45 41 memcpy(&iph6, buf + sizeof(struct ethhdr), sizeof(iph6)); 46 - CHECK(err || retval != XDP_TX || size != 114 || 47 - iph6.nexthdr != IPPROTO_IPV6, "ipv6", 48 - "err %d errno %d retval %d size %d\n", 49 - err, errno, retval, size); 42 + ASSERT_OK(err, "test_run"); 43 + ASSERT_EQ(topts.retval, XDP_TX, "ipv6 test_run retval"); 44 + ASSERT_EQ(topts.data_size_out, 114, "ipv6 test_run data_size_out"); 45 + ASSERT_EQ(iph6.nexthdr, IPPROTO_IPV6, "ipv6 test_run iph6.nexthdr"); 50 46 out: 51 47 bpf_object__close(obj); 52 48 }
+20 -14
tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c
··· 2 2 #include <test_progs.h> 3 3 #include <network_helpers.h> 4 4 5 - void test_xdp_update_frags(void) 5 + static void test_xdp_update_frags(void) 6 6 { 7 7 const char *file = "./test_xdp_update_frags.o"; 8 - __u32 duration, retval, size; 9 8 struct bpf_program *prog; 10 9 struct bpf_object *obj; 11 10 int err, prog_fd; 12 11 __u32 *offset; 13 12 __u8 *buf; 13 + LIBBPF_OPTS(bpf_test_run_opts, topts); 14 14 15 15 obj = bpf_object__open(file); 16 16 if (libbpf_get_error(obj)) ··· 32 32 buf[*offset] = 0xaa; /* marker at offset 16 (head) */ 33 33 buf[*offset + 15] = 0xaa; /* marker at offset 31 (head) */ 34 34 35 - err = bpf_prog_test_run(prog_fd, 1, buf, 128, 36 - buf, &size, &retval, &duration); 35 + topts.data_in = buf; 36 + topts.data_out = buf; 37 + topts.data_size_in = 128; 38 + topts.data_size_out = 128; 39 + 40 + err = bpf_prog_test_run_opts(prog_fd, &topts); 37 41 38 42 /* test_xdp_update_frags: buf[16,31]: 0xaa -> 0xbb */ 39 43 ASSERT_OK(err, "xdp_update_frag"); 40 - ASSERT_EQ(retval, XDP_PASS, "xdp_update_frag retval"); 44 + ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); 41 45 ASSERT_EQ(buf[16], 0xbb, "xdp_update_frag buf[16]"); 42 46 ASSERT_EQ(buf[31], 0xbb, "xdp_update_frag buf[31]"); 43 47 ··· 57 53 buf[*offset] = 0xaa; /* marker at offset 5000 (frag0) */ 58 54 buf[*offset + 15] = 0xaa; /* marker at offset 5015 (frag0) */ 59 55 60 - err = bpf_prog_test_run(prog_fd, 1, buf, 9000, 61 - buf, &size, &retval, &duration); 56 + topts.data_in = buf; 57 + topts.data_out = buf; 58 + topts.data_size_in = 9000; 59 + topts.data_size_out = 9000; 60 + 61 + err = bpf_prog_test_run_opts(prog_fd, &topts); 62 62 63 63 /* test_xdp_update_frags: buf[5000,5015]: 0xaa -> 0xbb */ 64 64 ASSERT_OK(err, "xdp_update_frag"); 65 - ASSERT_EQ(retval, XDP_PASS, "xdp_update_frag retval"); 65 + ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); 66 66 ASSERT_EQ(buf[5000], 0xbb, "xdp_update_frag buf[5000]"); 67 67 ASSERT_EQ(buf[5015], 0xbb, "xdp_update_frag 
buf[5015]"); 68 68 ··· 76 68 buf[*offset] = 0xaa; /* marker at offset 3510 (head) */ 77 69 buf[*offset + 15] = 0xaa; /* marker at offset 3525 (frag0) */ 78 70 79 - err = bpf_prog_test_run(prog_fd, 1, buf, 9000, 80 - buf, &size, &retval, &duration); 71 + err = bpf_prog_test_run_opts(prog_fd, &topts); 81 72 82 73 /* test_xdp_update_frags: buf[3510,3525]: 0xaa -> 0xbb */ 83 74 ASSERT_OK(err, "xdp_update_frag"); 84 - ASSERT_EQ(retval, XDP_PASS, "xdp_update_frag retval"); 75 + ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); 85 76 ASSERT_EQ(buf[3510], 0xbb, "xdp_update_frag buf[3510]"); 86 77 ASSERT_EQ(buf[3525], 0xbb, "xdp_update_frag buf[3525]"); 87 78 ··· 90 83 buf[*offset] = 0xaa; /* marker at offset 7606 (frag0) */ 91 84 buf[*offset + 15] = 0xaa; /* marker at offset 7621 (frag1) */ 92 85 93 - err = bpf_prog_test_run(prog_fd, 1, buf, 9000, 94 - buf, &size, &retval, &duration); 86 + err = bpf_prog_test_run_opts(prog_fd, &topts); 95 87 96 88 /* test_xdp_update_frags: buf[7606,7621]: 0xaa -> 0xbb */ 97 89 ASSERT_OK(err, "xdp_update_frag"); 98 - ASSERT_EQ(retval, XDP_PASS, "xdp_update_frag retval"); 90 + ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); 99 91 ASSERT_EQ(buf[7606], 0xbb, "xdp_update_frag buf[7606]"); 100 92 ASSERT_EQ(buf[7621], 0xbb, "xdp_update_frag buf[7621]"); 101 93
+76 -50
tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
··· 5 5 static void test_xdp_adjust_tail_shrink(void) 6 6 { 7 7 const char *file = "./test_xdp_adjust_tail_shrink.o"; 8 - __u32 duration, retval, size, expect_sz; 8 + __u32 expect_sz; 9 9 struct bpf_object *obj; 10 10 int err, prog_fd; 11 11 char buf[128]; 12 + LIBBPF_OPTS(bpf_test_run_opts, topts, 13 + .data_in = &pkt_v4, 14 + .data_size_in = sizeof(pkt_v4), 15 + .data_out = buf, 16 + .data_size_out = sizeof(buf), 17 + .repeat = 1, 18 + ); 12 19 13 20 err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); 14 21 if (ASSERT_OK(err, "test_xdp_adjust_tail_shrink")) 15 22 return; 16 23 17 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), 18 - buf, &size, &retval, &duration); 24 + err = bpf_prog_test_run_opts(prog_fd, &topts); 19 25 ASSERT_OK(err, "ipv4"); 20 - ASSERT_EQ(retval, XDP_DROP, "ipv4 retval"); 26 + ASSERT_EQ(topts.retval, XDP_DROP, "ipv4 retval"); 21 27 22 28 expect_sz = sizeof(pkt_v6) - 20; /* Test shrink with 20 bytes */ 23 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v6, sizeof(pkt_v6), 24 - buf, &size, &retval, &duration); 29 + topts.data_in = &pkt_v6; 30 + topts.data_size_in = sizeof(pkt_v6); 31 + topts.data_size_out = sizeof(buf); 32 + err = bpf_prog_test_run_opts(prog_fd, &topts); 25 33 ASSERT_OK(err, "ipv6"); 26 - ASSERT_EQ(retval, XDP_TX, "ipv6 retval"); 27 - ASSERT_EQ(size, expect_sz, "ipv6 size"); 34 + ASSERT_EQ(topts.retval, XDP_TX, "ipv6 retval"); 35 + ASSERT_EQ(topts.data_size_out, expect_sz, "ipv6 size"); 28 36 29 37 bpf_object__close(obj); 30 38 } ··· 42 34 const char *file = "./test_xdp_adjust_tail_grow.o"; 43 35 struct bpf_object *obj; 44 36 char buf[4096]; /* avoid segfault: large buf to hold grow results */ 45 - __u32 duration, retval, size, expect_sz; 37 + __u32 expect_sz; 46 38 int err, prog_fd; 39 + LIBBPF_OPTS(bpf_test_run_opts, topts, 40 + .data_in = &pkt_v4, 41 + .data_size_in = sizeof(pkt_v4), 42 + .data_out = buf, 43 + .data_size_out = sizeof(buf), 44 + .repeat = 1, 45 + ); 47 46 48 47 err = 
bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); 49 48 if (ASSERT_OK(err, "test_xdp_adjust_tail_grow")) 50 49 return; 51 50 52 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), 53 - buf, &size, &retval, &duration); 51 + err = bpf_prog_test_run_opts(prog_fd, &topts); 54 52 ASSERT_OK(err, "ipv4"); 55 - ASSERT_EQ(retval, XDP_DROP, "ipv4 retval"); 53 + ASSERT_EQ(topts.retval, XDP_DROP, "ipv4 retval"); 56 54 57 55 expect_sz = sizeof(pkt_v6) + 40; /* Test grow with 40 bytes */ 58 - err = bpf_prog_test_run(prog_fd, 1, &pkt_v6, sizeof(pkt_v6) /* 74 */, 59 - buf, &size, &retval, &duration); 56 + topts.data_in = &pkt_v6; 57 + topts.data_size_in = sizeof(pkt_v6); 58 + err = bpf_prog_test_run_opts(prog_fd, &topts); 60 59 ASSERT_OK(err, "ipv6"); 61 - ASSERT_EQ(retval, XDP_TX, "ipv6 retval"); 62 - ASSERT_EQ(size, expect_sz, "ipv6 size"); 60 + ASSERT_EQ(topts.retval, XDP_TX, "ipv6 retval"); 61 + ASSERT_EQ(topts.data_size_out, expect_sz, "ipv6 size"); 63 62 64 63 bpf_object__close(obj); 65 64 } ··· 78 63 int tailroom = 320; /* SKB_DATA_ALIGN(sizeof(struct skb_shared_info))*/; 79 64 struct bpf_object *obj; 80 65 int err, cnt, i; 81 - int max_grow; 66 + int max_grow, prog_fd; 82 67 83 - struct bpf_prog_test_run_attr tattr = { 68 + LIBBPF_OPTS(bpf_test_run_opts, tattr, 84 69 .repeat = 1, 85 70 .data_in = &buf, 86 71 .data_out = &buf, 87 72 .data_size_in = 0, /* Per test */ 88 73 .data_size_out = 0, /* Per test */ 89 - }; 74 + ); 90 75 91 - err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &tattr.prog_fd); 76 + err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); 92 77 if (ASSERT_OK(err, "test_xdp_adjust_tail_grow")) 93 78 return; 94 79 ··· 97 82 tattr.data_size_in = 64; /* Determine test case via pkt size */ 98 83 tattr.data_size_out = 128; /* Limit copy_size */ 99 84 /* Kernel side alloc packet memory area that is zero init */ 100 - err = bpf_prog_test_run_xattr(&tattr); 85 + err = bpf_prog_test_run_opts(prog_fd, &tattr); 101 86 102 87 
ASSERT_EQ(errno, ENOSPC, "case-64 errno"); /* Due limit copy_size in bpf_test_finish */ 103 88 ASSERT_EQ(tattr.retval, XDP_TX, "case-64 retval"); ··· 115 100 memset(buf, 2, sizeof(buf)); 116 101 tattr.data_size_in = 128; /* Determine test case via pkt size */ 117 102 tattr.data_size_out = sizeof(buf); /* Copy everything */ 118 - err = bpf_prog_test_run_xattr(&tattr); 103 + err = bpf_prog_test_run_opts(prog_fd, &tattr); 119 104 120 105 max_grow = 4096 - XDP_PACKET_HEADROOM - tailroom; /* 3520 */ 121 106 ASSERT_OK(err, "case-128"); ··· 133 118 bpf_object__close(obj); 134 119 } 135 120 136 - void test_xdp_adjust_frags_tail_shrink(void) 121 + static void test_xdp_adjust_frags_tail_shrink(void) 137 122 { 138 123 const char *file = "./test_xdp_adjust_tail_shrink.o"; 139 - __u32 duration, retval, size, exp_size; 124 + __u32 exp_size; 140 125 struct bpf_program *prog; 141 126 struct bpf_object *obj; 142 127 int err, prog_fd; 143 128 __u8 *buf; 129 + LIBBPF_OPTS(bpf_test_run_opts, topts); 144 130 145 131 /* For the individual test cases, the first byte in the packet 146 132 * indicates which test will be run. 
··· 164 148 165 149 /* Test case removing 10 bytes from last frag, NOT freeing it */ 166 150 exp_size = 8990; /* 9000 - 10 */ 167 - err = bpf_prog_test_run(prog_fd, 1, buf, 9000, 168 - buf, &size, &retval, &duration); 151 + topts.data_in = buf; 152 + topts.data_out = buf; 153 + topts.data_size_in = 9000; 154 + topts.data_size_out = 9000; 155 + err = bpf_prog_test_run_opts(prog_fd, &topts); 169 156 170 157 ASSERT_OK(err, "9Kb-10b"); 171 - ASSERT_EQ(retval, XDP_TX, "9Kb-10b retval"); 172 - ASSERT_EQ(size, exp_size, "9Kb-10b size"); 158 + ASSERT_EQ(topts.retval, XDP_TX, "9Kb-10b retval"); 159 + ASSERT_EQ(topts.data_size_out, exp_size, "9Kb-10b size"); 173 160 174 161 /* Test case removing one of two pages, assuming 4K pages */ 175 162 buf[0] = 1; 176 163 exp_size = 4900; /* 9000 - 4100 */ 177 - err = bpf_prog_test_run(prog_fd, 1, buf, 9000, 178 - buf, &size, &retval, &duration); 164 + 165 + topts.data_size_out = 9000; /* reset from previous invocation */ 166 + err = bpf_prog_test_run_opts(prog_fd, &topts); 179 167 180 168 ASSERT_OK(err, "9Kb-4Kb"); 181 - ASSERT_EQ(retval, XDP_TX, "9Kb-4Kb retval"); 182 - ASSERT_EQ(size, exp_size, "9Kb-4Kb size"); 169 + ASSERT_EQ(topts.retval, XDP_TX, "9Kb-4Kb retval"); 170 + ASSERT_EQ(topts.data_size_out, exp_size, "9Kb-4Kb size"); 183 171 184 172 /* Test case removing two pages resulting in a linear xdp_buff */ 185 173 buf[0] = 2; 186 174 exp_size = 800; /* 9000 - 8200 */ 187 - err = bpf_prog_test_run(prog_fd, 1, buf, 9000, 188 - buf, &size, &retval, &duration); 175 + topts.data_size_out = 9000; /* reset from previous invocation */ 176 + err = bpf_prog_test_run_opts(prog_fd, &topts); 189 177 190 178 ASSERT_OK(err, "9Kb-9Kb"); 191 - ASSERT_EQ(retval, XDP_TX, "9Kb-9Kb retval"); 192 - ASSERT_EQ(size, exp_size, "9Kb-9Kb size"); 179 + ASSERT_EQ(topts.retval, XDP_TX, "9Kb-9Kb retval"); 180 + ASSERT_EQ(topts.data_size_out, exp_size, "9Kb-9Kb size"); 193 181 194 182 free(buf); 195 183 out: 196 184 bpf_object__close(obj); 197 185 } 198 186 
199 - void test_xdp_adjust_frags_tail_grow(void) 187 + static void test_xdp_adjust_frags_tail_grow(void) 200 188 { 201 189 const char *file = "./test_xdp_adjust_tail_grow.o"; 202 - __u32 duration, retval, size, exp_size; 190 + __u32 exp_size; 203 191 struct bpf_program *prog; 204 192 struct bpf_object *obj; 205 193 int err, i, prog_fd; 206 194 __u8 *buf; 195 + LIBBPF_OPTS(bpf_test_run_opts, topts); 207 196 208 197 obj = bpf_object__open(file); 209 198 if (libbpf_get_error(obj)) ··· 226 205 227 206 /* Test case add 10 bytes to last frag */ 228 207 memset(buf, 1, 16384); 229 - size = 9000; 230 - exp_size = size + 10; 231 - err = bpf_prog_test_run(prog_fd, 1, buf, size, 232 - buf, &size, &retval, &duration); 208 + exp_size = 9000 + 10; 209 + 210 + topts.data_in = buf; 211 + topts.data_out = buf; 212 + topts.data_size_in = 9000; 213 + topts.data_size_out = 16384; 214 + err = bpf_prog_test_run_opts(prog_fd, &topts); 233 215 234 216 ASSERT_OK(err, "9Kb+10b"); 235 - ASSERT_EQ(retval, XDP_TX, "9Kb+10b retval"); 236 - ASSERT_EQ(size, exp_size, "9Kb+10b size"); 217 + ASSERT_EQ(topts.retval, XDP_TX, "9Kb+10b retval"); 218 + ASSERT_EQ(topts.data_size_out, exp_size, "9Kb+10b size"); 237 219 238 220 for (i = 0; i < 9000; i++) 239 221 ASSERT_EQ(buf[i], 1, "9Kb+10b-old"); ··· 249 225 250 226 /* Test a too large grow */ 251 227 memset(buf, 1, 16384); 252 - size = 9001; 253 - exp_size = size; 254 - err = bpf_prog_test_run(prog_fd, 1, buf, size, 255 - buf, &size, &retval, &duration); 228 + exp_size = 9001; 229 + 230 + topts.data_in = topts.data_out = buf; 231 + topts.data_size_in = 9001; 232 + topts.data_size_out = 16384; 233 + err = bpf_prog_test_run_opts(prog_fd, &topts); 256 234 257 235 ASSERT_OK(err, "9Kb+10b"); 258 - ASSERT_EQ(retval, XDP_DROP, "9Kb+10b retval"); 259 - ASSERT_EQ(size, exp_size, "9Kb+10b size"); 236 + ASSERT_EQ(topts.retval, XDP_DROP, "9Kb+10b retval"); 237 + ASSERT_EQ(topts.data_size_out, exp_size, "9Kb+10b size"); 260 238 261 239 free(buf); 262 240 out:
+13 -16
tools/testing/selftests/bpf/prog_tests/xdp_attach.c
··· 11 11 const char *file = "./test_xdp.o"; 12 12 struct bpf_prog_info info = {}; 13 13 int err, fd1, fd2, fd3; 14 - DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts, 15 - .old_fd = -1); 14 + LIBBPF_OPTS(bpf_xdp_attach_opts, opts); 16 15 17 16 len = sizeof(info); 18 17 ··· 37 38 if (CHECK_FAIL(err)) 38 39 goto out_2; 39 40 40 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd1, XDP_FLAGS_REPLACE, 41 - &opts); 41 + err = bpf_xdp_attach(IFINDEX_LO, fd1, XDP_FLAGS_REPLACE, &opts); 42 42 if (CHECK(err, "load_ok", "initial load failed")) 43 43 goto out_close; 44 44 45 - err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0); 45 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0); 46 46 if (CHECK(err || id0 != id1, "id1_check", 47 47 "loaded prog id %u != id1 %u, err %d", id0, id1, err)) 48 48 goto out_close; 49 49 50 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd2, XDP_FLAGS_REPLACE, 51 - &opts); 50 + err = bpf_xdp_attach(IFINDEX_LO, fd2, XDP_FLAGS_REPLACE, &opts); 52 51 if (CHECK(!err, "load_fail", "load with expected id didn't fail")) 53 52 goto out; 54 53 55 - opts.old_fd = fd1; 56 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd2, 0, &opts); 54 + opts.old_prog_fd = fd1; 55 + err = bpf_xdp_attach(IFINDEX_LO, fd2, 0, &opts); 57 56 if (CHECK(err, "replace_ok", "replace valid old_fd failed")) 58 57 goto out; 59 - err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0); 58 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0); 60 59 if (CHECK(err || id0 != id2, "id2_check", 61 60 "loaded prog id %u != id2 %u, err %d", id0, id2, err)) 62 61 goto out_close; 63 62 64 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd3, 0, &opts); 63 + err = bpf_xdp_attach(IFINDEX_LO, fd3, 0, &opts); 65 64 if (CHECK(!err, "replace_fail", "replace invalid old_fd didn't fail")) 66 65 goto out; 67 66 68 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, 0, &opts); 67 + err = bpf_xdp_detach(IFINDEX_LO, 0, &opts); 69 68 if (CHECK(!err, "remove_fail", "remove invalid old_fd didn't fail")) 70 69 goto out; 71 70 72 - opts.old_fd = fd2; 
73 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, 0, &opts); 71 + opts.old_prog_fd = fd2; 72 + err = bpf_xdp_detach(IFINDEX_LO, 0, &opts); 74 73 if (CHECK(err, "remove_ok", "remove valid old_fd failed")) 75 74 goto out; 76 75 77 - err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0); 76 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0); 78 77 if (CHECK(err || id0 != 0, "unload_check", 79 78 "loaded prog id %u != 0, err %d", id0, err)) 80 79 goto out_close; 81 80 out: 82 - bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0); 81 + bpf_xdp_detach(IFINDEX_LO, 0, NULL); 83 82 out_close: 84 83 bpf_object__close(obj3); 85 84 out_2:
+9 -5
tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
··· 45 45 struct test_xdp_bpf2bpf *ftrace_skel, 46 46 int pkt_size) 47 47 { 48 - __u32 duration = 0, retval, size; 49 48 __u8 *buf, *buf_in; 50 49 int err; 50 + LIBBPF_OPTS(bpf_test_run_opts, topts); 51 51 52 52 if (!ASSERT_LE(pkt_size, BUF_SZ, "pkt_size") || 53 53 !ASSERT_GE(pkt_size, sizeof(pkt_v4), "pkt_size")) ··· 73 73 } 74 74 75 75 /* Run test program */ 76 - err = bpf_prog_test_run(pkt_fd, 1, buf_in, pkt_size, 77 - buf, &size, &retval, &duration); 76 + topts.data_in = buf_in; 77 + topts.data_size_in = pkt_size; 78 + topts.data_out = buf; 79 + topts.data_size_out = BUF_SZ; 80 + 81 + err = bpf_prog_test_run_opts(pkt_fd, &topts); 78 82 79 83 ASSERT_OK(err, "ipv4"); 80 - ASSERT_EQ(retval, XDP_PASS, "ipv4 retval"); 81 - ASSERT_EQ(size, pkt_size, "ipv4 size"); 84 + ASSERT_EQ(topts.retval, XDP_PASS, "ipv4 retval"); 85 + ASSERT_EQ(topts.data_size_out, pkt_size, "ipv4 size"); 82 86 83 87 /* Make sure bpf_xdp_output() was triggered and it sent the expected 84 88 * data to the perf ring buffer.
+6 -6
tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
··· 8 8 9 9 #define IFINDEX_LO 1 10 10 11 - void test_xdp_with_cpumap_helpers(void) 11 + static void test_xdp_with_cpumap_helpers(void) 12 12 { 13 13 struct test_xdp_with_cpumap_helpers *skel; 14 14 struct bpf_prog_info info = {}; ··· 24 24 return; 25 25 26 26 prog_fd = bpf_program__fd(skel->progs.xdp_redir_prog); 27 - err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE); 27 + err = bpf_xdp_attach(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE, NULL); 28 28 if (!ASSERT_OK(err, "Generic attach of program with 8-byte CPUMAP")) 29 29 goto out_close; 30 30 31 - err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE); 31 + err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL); 32 32 ASSERT_OK(err, "XDP program detach"); 33 33 34 34 prog_fd = bpf_program__fd(skel->progs.xdp_dummy_cm); ··· 46 46 ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id"); 47 47 48 48 /* can not attach BPF_XDP_CPUMAP program to a device */ 49 - err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE); 49 + err = bpf_xdp_attach(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE, NULL); 50 50 if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_CPUMAP program")) 51 - bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE); 51 + bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL); 52 52 53 53 val.qsize = 192; 54 54 val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog); ··· 68 68 test_xdp_with_cpumap_helpers__destroy(skel); 69 69 } 70 70 71 - void test_xdp_with_cpumap_frags_helpers(void) 71 + static void test_xdp_with_cpumap_frags_helpers(void) 72 72 { 73 73 struct test_xdp_with_cpumap_frags_helpers *skel; 74 74 struct bpf_prog_info info = {};
+5 -5
tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
··· 26 26 return; 27 27 28 28 dm_fd = bpf_program__fd(skel->progs.xdp_redir_prog); 29 - err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE); 29 + err = bpf_xdp_attach(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE, NULL); 30 30 if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap")) 31 31 goto out_close; 32 32 33 - err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE); 33 + err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL); 34 34 ASSERT_OK(err, "XDP program detach"); 35 35 36 36 dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm); ··· 48 48 ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id"); 49 49 50 50 /* can not attach BPF_XDP_DEVMAP program to a device */ 51 - err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE); 51 + err = bpf_xdp_attach(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE, NULL); 52 52 if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_DEVMAP program")) 53 - bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE); 53 + bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_SKB_MODE, NULL); 54 54 55 55 val.ifindex = 1; 56 56 val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog); ··· 81 81 } 82 82 } 83 83 84 - void test_xdp_with_devmap_frags_helpers(void) 84 + static void test_xdp_with_devmap_frags_helpers(void) 85 85 { 86 86 struct test_xdp_with_devmap_frags_helpers *skel; 87 87 struct bpf_prog_info info = {};
+7 -7
tools/testing/selftests/bpf/prog_tests/xdp_info.c
··· 14 14 15 15 /* Get prog_id for XDP_ATTACHED_NONE mode */ 16 16 17 - err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, 0); 17 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &prog_id); 18 18 if (CHECK(err, "get_xdp_none", "errno=%d\n", errno)) 19 19 return; 20 20 if (CHECK(prog_id, "prog_id_none", "unexpected prog_id=%u\n", prog_id)) 21 21 return; 22 22 23 - err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_SKB_MODE); 23 + err = bpf_xdp_query_id(IFINDEX_LO, XDP_FLAGS_SKB_MODE, &prog_id); 24 24 if (CHECK(err, "get_xdp_none_skb", "errno=%d\n", errno)) 25 25 return; 26 26 if (CHECK(prog_id, "prog_id_none_skb", "unexpected prog_id=%u\n", ··· 37 37 if (CHECK(err, "get_prog_info", "errno=%d\n", errno)) 38 38 goto out_close; 39 39 40 - err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE); 40 + err = bpf_xdp_attach(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE, NULL); 41 41 if (CHECK(err, "set_xdp_skb", "errno=%d\n", errno)) 42 42 goto out_close; 43 43 44 44 /* Get prog_id for single prog mode */ 45 45 46 - err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, 0); 46 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &prog_id); 47 47 if (CHECK(err, "get_xdp", "errno=%d\n", errno)) 48 48 goto out; 49 49 if (CHECK(prog_id != info.id, "prog_id", "prog_id not available\n")) 50 50 goto out; 51 51 52 - err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_SKB_MODE); 52 + err = bpf_xdp_query_id(IFINDEX_LO, XDP_FLAGS_SKB_MODE, &prog_id); 53 53 if (CHECK(err, "get_xdp_skb", "errno=%d\n", errno)) 54 54 goto out; 55 55 if (CHECK(prog_id != info.id, "prog_id_skb", "prog_id not available\n")) 56 56 goto out; 57 57 58 - err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_DRV_MODE); 58 + err = bpf_xdp_query_id(IFINDEX_LO, XDP_FLAGS_DRV_MODE, &prog_id); 59 59 if (CHECK(err, "get_xdp_drv", "errno=%d\n", errno)) 60 60 goto out; 61 61 if (CHECK(prog_id, "prog_id_drv", "unexpected prog_id=%u\n", prog_id)) 62 62 goto out; 63 63 64 64 out: 65 - bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0); 65 + 
bpf_xdp_detach(IFINDEX_LO, 0, NULL); 66 66 out_close: 67 67 bpf_object__close(obj); 68 68 }
+13 -13
tools/testing/selftests/bpf/prog_tests/xdp_link.c
··· 8 8 9 9 void serial_test_xdp_link(void) 10 10 { 11 - DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts, .old_fd = -1); 12 11 struct test_xdp_link *skel1 = NULL, *skel2 = NULL; 13 12 __u32 id1, id2, id0 = 0, prog_fd1, prog_fd2; 13 + LIBBPF_OPTS(bpf_xdp_attach_opts, opts); 14 14 struct bpf_link_info link_info; 15 15 struct bpf_prog_info prog_info; 16 16 struct bpf_link *link; ··· 41 41 id2 = prog_info.id; 42 42 43 43 /* set initial prog attachment */ 44 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd1, XDP_FLAGS_REPLACE, &opts); 44 + err = bpf_xdp_attach(IFINDEX_LO, prog_fd1, XDP_FLAGS_REPLACE, &opts); 45 45 if (!ASSERT_OK(err, "fd_attach")) 46 46 goto cleanup; 47 47 48 48 /* validate prog ID */ 49 - err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0); 49 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0); 50 50 if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val")) 51 51 goto cleanup; 52 52 ··· 55 55 if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) { 56 56 bpf_link__destroy(link); 57 57 /* best-effort detach prog */ 58 - opts.old_fd = prog_fd1; 59 - bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, XDP_FLAGS_REPLACE, &opts); 58 + opts.old_prog_fd = prog_fd1; 59 + bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_REPLACE, &opts); 60 60 goto cleanup; 61 61 } 62 62 63 63 /* detach BPF program */ 64 - opts.old_fd = prog_fd1; 65 - err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, XDP_FLAGS_REPLACE, &opts); 64 + opts.old_prog_fd = prog_fd1; 65 + err = bpf_xdp_detach(IFINDEX_LO, XDP_FLAGS_REPLACE, &opts); 66 66 if (!ASSERT_OK(err, "prog_detach")) 67 67 goto cleanup; 68 68 ··· 73 73 skel1->links.xdp_handler = link; 74 74 75 75 /* validate prog ID */ 76 - err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0); 76 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0); 77 77 if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val")) 78 78 goto cleanup; 79 79 80 80 /* BPF prog attach is not allowed to replace BPF link */ 81 - opts.old_fd = prog_fd1; 82 - err = 
bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd2, XDP_FLAGS_REPLACE, &opts); 81 + opts.old_prog_fd = prog_fd1; 82 + err = bpf_xdp_attach(IFINDEX_LO, prog_fd2, XDP_FLAGS_REPLACE, &opts); 83 83 if (!ASSERT_ERR(err, "prog_attach_fail")) 84 84 goto cleanup; 85 85 86 86 /* Can't force-update when BPF link is active */ 87 - err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd2, 0); 87 + err = bpf_xdp_attach(IFINDEX_LO, prog_fd2, 0, NULL); 88 88 if (!ASSERT_ERR(err, "prog_update_fail")) 89 89 goto cleanup; 90 90 91 91 /* Can't force-detach when BPF link is active */ 92 - err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0); 92 + err = bpf_xdp_detach(IFINDEX_LO, 0, NULL); 93 93 if (!ASSERT_ERR(err, "prog_detach_fail")) 94 94 goto cleanup; 95 95 ··· 109 109 goto cleanup; 110 110 skel2->links.xdp_handler = link; 111 111 112 - err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0); 112 + err = bpf_xdp_query_id(IFINDEX_LO, 0, &id0); 113 113 if (!ASSERT_OK(err, "id2_check_err") || !ASSERT_EQ(id0, id2, "id2_check_val")) 114 114 goto cleanup; 115 115
+25 -19
tools/testing/selftests/bpf/prog_tests/xdp_noinline.c
··· 25 25 __u8 flags; 26 26 } real_def = {.dst = MAGIC_VAL}; 27 27 __u32 ch_key = 11, real_num = 3; 28 - __u32 duration = 0, retval, size; 29 28 int err, i; 30 29 __u64 bytes = 0, pkts = 0; 31 30 char buf[128]; 32 31 u32 *magic = (u32 *)buf; 32 + LIBBPF_OPTS(bpf_test_run_opts, topts, 33 + .data_in = &pkt_v4, 34 + .data_size_in = sizeof(pkt_v4), 35 + .data_out = buf, 36 + .data_size_out = sizeof(buf), 37 + .repeat = NUM_ITER, 38 + ); 33 39 34 40 skel = test_xdp_noinline__open_and_load(); 35 - if (CHECK(!skel, "skel_open_and_load", "failed\n")) 41 + if (!ASSERT_OK_PTR(skel, "skel_open_and_load")) 36 42 return; 37 43 38 44 bpf_map_update_elem(bpf_map__fd(skel->maps.vip_map), &key, &value, 0); 39 45 bpf_map_update_elem(bpf_map__fd(skel->maps.ch_rings), &ch_key, &real_num, 0); 40 46 bpf_map_update_elem(bpf_map__fd(skel->maps.reals), &real_num, &real_def, 0); 41 47 42 - err = bpf_prog_test_run(bpf_program__fd(skel->progs.balancer_ingress_v4), 43 - NUM_ITER, &pkt_v4, sizeof(pkt_v4), 44 - buf, &size, &retval, &duration); 45 - CHECK(err || retval != 1 || size != 54 || 46 - *magic != MAGIC_VAL, "ipv4", 47 - "err %d errno %d retval %d size %d magic %x\n", 48 - err, errno, retval, size, *magic); 48 + err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.balancer_ingress_v4), &topts); 49 + ASSERT_OK(err, "ipv4 test_run"); 50 + ASSERT_EQ(topts.retval, 1, "ipv4 test_run retval"); 51 + ASSERT_EQ(topts.data_size_out, 54, "ipv4 test_run data_size_out"); 52 + ASSERT_EQ(*magic, MAGIC_VAL, "ipv4 test_run magic"); 49 53 50 - err = bpf_prog_test_run(bpf_program__fd(skel->progs.balancer_ingress_v6), 51 - NUM_ITER, &pkt_v6, sizeof(pkt_v6), 52 - buf, &size, &retval, &duration); 53 - CHECK(err || retval != 1 || size != 74 || 54 - *magic != MAGIC_VAL, "ipv6", 55 - "err %d errno %d retval %d size %d magic %x\n", 56 - err, errno, retval, size, *magic); 54 + topts.data_in = &pkt_v6; 55 + topts.data_size_in = sizeof(pkt_v6); 56 + topts.data_out = buf; 57 + topts.data_size_out = sizeof(buf); 58 
+ 59 + err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.balancer_ingress_v6), &topts); 60 + ASSERT_OK(err, "ipv6 test_run"); 61 + ASSERT_EQ(topts.retval, 1, "ipv6 test_run retval"); 62 + ASSERT_EQ(topts.data_size_out, 74, "ipv6 test_run data_size_out"); 63 + ASSERT_EQ(*magic, MAGIC_VAL, "ipv6 test_run magic"); 57 64 58 65 bpf_map_lookup_elem(bpf_map__fd(skel->maps.stats), &stats_key, stats); 59 66 for (i = 0; i < nr_cpus; i++) { 60 67 bytes += stats[i].bytes; 61 68 pkts += stats[i].pkts; 62 69 } 63 - CHECK(bytes != MAGIC_BYTES * NUM_ITER * 2 || pkts != NUM_ITER * 2, 64 - "stats", "bytes %lld pkts %lld\n", 65 - (unsigned long long)bytes, (unsigned long long)pkts); 70 + ASSERT_EQ(bytes, MAGIC_BYTES * NUM_ITER * 2, "stats bytes"); 71 + ASSERT_EQ(pkts, NUM_ITER * 2, "stats pkts"); 66 72 test_xdp_noinline__destroy(skel); 67 73 }
+11 -8
tools/testing/selftests/bpf/prog_tests/xdp_perf.c
··· 4 4 void test_xdp_perf(void) 5 5 { 6 6 const char *file = "./xdp_dummy.o"; 7 - __u32 duration, retval, size; 8 7 struct bpf_object *obj; 9 8 char in[128], out[128]; 10 9 int err, prog_fd; 10 + LIBBPF_OPTS(bpf_test_run_opts, topts, 11 + .data_in = in, 12 + .data_size_in = sizeof(in), 13 + .data_out = out, 14 + .data_size_out = sizeof(out), 15 + .repeat = 1000000, 16 + ); 11 17 12 18 err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); 13 19 if (CHECK_FAIL(err)) 14 20 return; 15 21 16 - err = bpf_prog_test_run(prog_fd, 1000000, &in[0], 128, 17 - out, &size, &retval, &duration); 18 - 19 - CHECK(err || retval != XDP_PASS || size != 128, 20 - "xdp-perf", 21 - "err %d errno %d retval %d size %d\n", 22 - err, errno, retval, size); 22 + err = bpf_prog_test_run_opts(prog_fd, &topts); 23 + ASSERT_OK(err, "test_run"); 24 + ASSERT_EQ(topts.retval, XDP_PASS, "test_run retval"); 25 + ASSERT_EQ(topts.data_size_out, 128, "test_run data_size_out"); 23 26 24 27 bpf_object__close(obj); 25 28 }
+4 -3
tools/testing/selftests/bpf/progs/bloom_filter_bench.c
··· 5 5 #include <linux/bpf.h> 6 6 #include <stdbool.h> 7 7 #include <bpf/bpf_helpers.h> 8 + #include "bpf_misc.h" 8 9 9 10 char _license[] SEC("license") = "GPL"; 10 11 ··· 88 87 return 0; 89 88 } 90 89 91 - SEC("fentry/__x64_sys_getpgid") 90 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 92 91 int bloom_lookup(void *ctx) 93 92 { 94 93 struct callback_ctx data; ··· 101 100 return 0; 102 101 } 103 102 104 - SEC("fentry/__x64_sys_getpgid") 103 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 105 104 int bloom_update(void *ctx) 106 105 { 107 106 struct callback_ctx data; ··· 114 113 return 0; 115 114 } 116 115 117 - SEC("fentry/__x64_sys_getpgid") 116 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 118 117 int bloom_hashmap_lookup(void *ctx) 119 118 { 120 119 __u64 *result;
+3 -2
tools/testing/selftests/bpf/progs/bloom_filter_map.c
··· 3 3 4 4 #include <linux/bpf.h> 5 5 #include <bpf/bpf_helpers.h> 6 + #include "bpf_misc.h" 6 7 7 8 char _license[] SEC("license") = "GPL"; 8 9 ··· 52 51 return 0; 53 52 } 54 53 55 - SEC("fentry/__x64_sys_getpgid") 54 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 56 55 int inner_map(void *ctx) 57 56 { 58 57 struct bpf_map *inner_map; ··· 71 70 return 0; 72 71 } 73 72 74 - SEC("fentry/__x64_sys_getpgid") 73 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 75 74 int check_bloom(void *ctx) 76 75 { 77 76 struct callback_ctx data;
+54
tools/testing/selftests/bpf/progs/bpf_iter_task.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 + #include <bpf/bpf_tracing.h> 5 6 6 7 char _license[] SEC("license") = "GPL"; 7 8 ··· 22 21 BPF_SEQ_PRINTF(seq, " tgid gid\n"); 23 22 24 23 BPF_SEQ_PRINTF(seq, "%8d %8d\n", task->tgid, task->pid); 24 + return 0; 25 + } 26 + 27 + int num_expected_failure_copy_from_user_task = 0; 28 + int num_success_copy_from_user_task = 0; 29 + 30 + SEC("iter.s/task") 31 + int dump_task_sleepable(struct bpf_iter__task *ctx) 32 + { 33 + struct seq_file *seq = ctx->meta->seq; 34 + struct task_struct *task = ctx->task; 35 + static const char info[] = " === END ==="; 36 + struct pt_regs *regs; 37 + void *ptr; 38 + uint32_t user_data = 0; 39 + int ret; 40 + 41 + if (task == (void *)0) { 42 + BPF_SEQ_PRINTF(seq, "%s\n", info); 43 + return 0; 44 + } 45 + 46 + /* Read an invalid pointer and ensure we get an error */ 47 + ptr = NULL; 48 + ret = bpf_copy_from_user_task(&user_data, sizeof(uint32_t), ptr, task, 0); 49 + if (ret) { 50 + ++num_expected_failure_copy_from_user_task; 51 + } else { 52 + BPF_SEQ_PRINTF(seq, "%s\n", info); 53 + return 0; 54 + } 55 + 56 + /* Try to read the contents of the task's instruction pointer from the 57 + * remote task's address space. 58 + */ 59 + regs = (struct pt_regs *)bpf_task_pt_regs(task); 60 + if (regs == (void *)0) { 61 + BPF_SEQ_PRINTF(seq, "%s\n", info); 62 + return 0; 63 + } 64 + ptr = (void *)PT_REGS_IP(regs); 65 + 66 + ret = bpf_copy_from_user_task(&user_data, sizeof(uint32_t), ptr, task, 0); 67 + if (ret) { 68 + BPF_SEQ_PRINTF(seq, "%s\n", info); 69 + return 0; 70 + } 71 + ++num_success_copy_from_user_task; 72 + 73 + if (ctx->meta->seq_num == 0) 74 + BPF_SEQ_PRINTF(seq, " tgid gid data\n"); 75 + 76 + BPF_SEQ_PRINTF(seq, "%8d %8d %8d\n", task->tgid, task->pid, user_data); 25 77 return 0; 26 78 }
+5 -4
tools/testing/selftests/bpf/progs/bpf_loop.c
··· 3 3 4 4 #include "vmlinux.h" 5 5 #include <bpf/bpf_helpers.h> 6 + #include "bpf_misc.h" 6 7 7 8 char _license[] SEC("license") = "GPL"; 8 9 ··· 54 53 return 0; 55 54 } 56 55 57 - SEC("fentry/__x64_sys_nanosleep") 56 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 58 57 int test_prog(void *ctx) 59 58 { 60 59 struct callback_ctx data = {}; ··· 72 71 return 0; 73 72 } 74 73 75 - SEC("fentry/__x64_sys_nanosleep") 74 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 76 75 int prog_null_ctx(void *ctx) 77 76 { 78 77 if (bpf_get_current_pid_tgid() >> 32 != pid) ··· 83 82 return 0; 84 83 } 85 84 86 - SEC("fentry/__x64_sys_nanosleep") 85 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 87 86 int prog_invalid_flags(void *ctx) 88 87 { 89 88 struct callback_ctx data = {}; ··· 96 95 return 0; 97 96 } 98 97 99 - SEC("fentry/__x64_sys_nanosleep") 98 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 100 99 int prog_nested_calls(void *ctx) 101 100 { 102 101 struct callback_ctx data = {};
+2 -1
tools/testing/selftests/bpf/progs/bpf_loop_bench.c
··· 3 3 4 4 #include "vmlinux.h" 5 5 #include <bpf/bpf_helpers.h> 6 + #include "bpf_misc.h" 6 7 7 8 char _license[] SEC("license") = "GPL"; 8 9 ··· 15 14 return 0; 16 15 } 17 16 18 - SEC("fentry/__x64_sys_getpgid") 17 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 19 18 int benchmark(void *ctx) 20 19 { 21 20 for (int i = 0; i < 1000; i++) {
+19
tools/testing/selftests/bpf/progs/bpf_misc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __BPF_MISC_H__ 3 + #define __BPF_MISC_H__ 4 + 5 + #if defined(__TARGET_ARCH_x86) 6 + #define SYSCALL_WRAPPER 1 7 + #define SYS_PREFIX "__x64_" 8 + #elif defined(__TARGET_ARCH_s390) 9 + #define SYSCALL_WRAPPER 1 10 + #define SYS_PREFIX "__s390x_" 11 + #elif defined(__TARGET_ARCH_arm64) 12 + #define SYSCALL_WRAPPER 1 13 + #define SYS_PREFIX "__arm64_" 14 + #else 15 + #define SYSCALL_WRAPPER 0 16 + #define SYS_PREFIX "__se_" 17 + #endif 18 + 19 + #endif
+84
tools/testing/selftests/bpf/progs/bpf_syscall_macro.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright 2022 Sony Group Corporation */ 3 + #include <vmlinux.h> 4 + 5 + #include <bpf/bpf_core_read.h> 6 + #include <bpf/bpf_helpers.h> 7 + #include <bpf/bpf_tracing.h> 8 + #include "bpf_misc.h" 9 + 10 + int arg1 = 0; 11 + unsigned long arg2 = 0; 12 + unsigned long arg3 = 0; 13 + unsigned long arg4_cx = 0; 14 + unsigned long arg4 = 0; 15 + unsigned long arg5 = 0; 16 + 17 + int arg1_core = 0; 18 + unsigned long arg2_core = 0; 19 + unsigned long arg3_core = 0; 20 + unsigned long arg4_core_cx = 0; 21 + unsigned long arg4_core = 0; 22 + unsigned long arg5_core = 0; 23 + 24 + int option_syscall = 0; 25 + unsigned long arg2_syscall = 0; 26 + unsigned long arg3_syscall = 0; 27 + unsigned long arg4_syscall = 0; 28 + unsigned long arg5_syscall = 0; 29 + 30 + const volatile pid_t filter_pid = 0; 31 + 32 + SEC("kprobe/" SYS_PREFIX "sys_prctl") 33 + int BPF_KPROBE(handle_sys_prctl) 34 + { 35 + struct pt_regs *real_regs; 36 + pid_t pid = bpf_get_current_pid_tgid() >> 32; 37 + unsigned long tmp = 0; 38 + 39 + if (pid != filter_pid) 40 + return 0; 41 + 42 + real_regs = PT_REGS_SYSCALL_REGS(ctx); 43 + 44 + /* test for PT_REGS_PARM */ 45 + 46 + #if !defined(bpf_target_arm64) && !defined(bpf_target_s390) 47 + bpf_probe_read_kernel(&tmp, sizeof(tmp), &PT_REGS_PARM1_SYSCALL(real_regs)); 48 + #endif 49 + arg1 = tmp; 50 + bpf_probe_read_kernel(&arg2, sizeof(arg2), &PT_REGS_PARM2_SYSCALL(real_regs)); 51 + bpf_probe_read_kernel(&arg3, sizeof(arg3), &PT_REGS_PARM3_SYSCALL(real_regs)); 52 + bpf_probe_read_kernel(&arg4_cx, sizeof(arg4_cx), &PT_REGS_PARM4(real_regs)); 53 + bpf_probe_read_kernel(&arg4, sizeof(arg4), &PT_REGS_PARM4_SYSCALL(real_regs)); 54 + bpf_probe_read_kernel(&arg5, sizeof(arg5), &PT_REGS_PARM5_SYSCALL(real_regs)); 55 + 56 + /* test for the CORE variant of PT_REGS_PARM */ 57 + arg1_core = PT_REGS_PARM1_CORE_SYSCALL(real_regs); 58 + arg2_core = PT_REGS_PARM2_CORE_SYSCALL(real_regs); 59 + arg3_core = 
PT_REGS_PARM3_CORE_SYSCALL(real_regs); 60 + arg4_core_cx = PT_REGS_PARM4_CORE(real_regs); 61 + arg4_core = PT_REGS_PARM4_CORE_SYSCALL(real_regs); 62 + arg5_core = PT_REGS_PARM5_CORE_SYSCALL(real_regs); 63 + 64 + return 0; 65 + } 66 + 67 + SEC("kprobe/" SYS_PREFIX "sys_prctl") 68 + int BPF_KPROBE_SYSCALL(prctl_enter, int option, unsigned long arg2, 69 + unsigned long arg3, unsigned long arg4, unsigned long arg5) 70 + { 71 + pid_t pid = bpf_get_current_pid_tgid() >> 32; 72 + 73 + if (pid != filter_pid) 74 + return 0; 75 + 76 + option_syscall = option; 77 + arg2_syscall = arg2; 78 + arg3_syscall = arg3; 79 + arg4_syscall = arg4; 80 + arg5_syscall = arg5; 81 + return 0; 82 + } 83 + 84 + char _license[] SEC("license") = "GPL";
tools/testing/selftests/bpf/progs/btf_decl_tag.c → tools/testing/selftests/bpf/progs/test_btf_decl_tag.c

+40
tools/testing/selftests/bpf/progs/btf_type_tag_user.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2022 Facebook */ 3 + #include "vmlinux.h" 4 + #include <bpf/bpf_helpers.h> 5 + #include <bpf/bpf_tracing.h> 6 + 7 + struct bpf_testmod_btf_type_tag_1 { 8 + int a; 9 + }; 10 + 11 + struct bpf_testmod_btf_type_tag_2 { 12 + struct bpf_testmod_btf_type_tag_1 *p; 13 + }; 14 + 15 + int g; 16 + 17 + SEC("fentry/bpf_testmod_test_btf_type_tag_user_1") 18 + int BPF_PROG(test_user1, struct bpf_testmod_btf_type_tag_1 *arg) 19 + { 20 + g = arg->a; 21 + return 0; 22 + } 23 + 24 + SEC("fentry/bpf_testmod_test_btf_type_tag_user_2") 25 + int BPF_PROG(test_user2, struct bpf_testmod_btf_type_tag_2 *arg) 26 + { 27 + g = arg->p->a; 28 + return 0; 29 + } 30 + 31 + /* int __sys_getsockname(int fd, struct sockaddr __user *usockaddr, 32 + * int __user *usockaddr_len); 33 + */ 34 + SEC("fentry/__sys_getsockname") 35 + int BPF_PROG(test_sys_getsockname, int fd, struct sockaddr *usockaddr, 36 + int *usockaddr_len) 37 + { 38 + g = usockaddr->sa_family; 39 + return 0; 40 + }
+16
tools/testing/selftests/bpf/progs/core_kern.c
··· 101 101 return 0; 102 102 } 103 103 104 + typedef int (*func_proto_typedef___match)(long); 105 + typedef int (*func_proto_typedef___doesnt_match)(char *); 106 + typedef int (*func_proto_typedef_nested1)(func_proto_typedef___match); 107 + 108 + int proto_out[3]; 109 + 110 + SEC("raw_tracepoint/sys_enter") 111 + int core_relo_proto(void *ctx) 112 + { 113 + proto_out[0] = bpf_core_type_exists(func_proto_typedef___match); 114 + proto_out[1] = bpf_core_type_exists(func_proto_typedef___doesnt_match); 115 + proto_out[2] = bpf_core_type_exists(func_proto_typedef_nested1); 116 + 117 + return 0; 118 + } 119 + 104 120 char LICENSE[] SEC("license") = "GPL";
+22
tools/testing/selftests/bpf/progs/core_kern_overflow.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include "vmlinux.h" 3 + 4 + #include <bpf/bpf_helpers.h> 5 + #include <bpf/bpf_tracing.h> 6 + #include <bpf/bpf_core_read.h> 7 + 8 + typedef int (*func_proto_typedef)(long); 9 + typedef int (*func_proto_typedef_nested1)(func_proto_typedef); 10 + typedef int (*func_proto_typedef_nested2)(func_proto_typedef_nested1); 11 + 12 + int proto_out; 13 + 14 + SEC("raw_tracepoint/sys_enter") 15 + int core_relo_proto(void *ctx) 16 + { 17 + proto_out = bpf_core_type_exists(func_proto_typedef_nested2); 18 + 19 + return 0; 20 + } 21 + 22 + char LICENSE[] SEC("license") = "GPL";
+5 -4
tools/testing/selftests/bpf/progs/fexit_sleep.c
··· 3 3 #include "vmlinux.h" 4 4 #include <bpf/bpf_helpers.h> 5 5 #include <bpf/bpf_tracing.h> 6 + #include "bpf_misc.h" 6 7 7 8 char LICENSE[] SEC("license") = "GPL"; 8 9 ··· 11 10 int fentry_cnt = 0; 12 11 int fexit_cnt = 0; 13 12 14 - SEC("fentry/__x64_sys_nanosleep") 15 - int BPF_PROG(nanosleep_fentry, const struct pt_regs *regs) 13 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 14 + int nanosleep_fentry(void *ctx) 16 15 { 17 16 if (bpf_get_current_pid_tgid() >> 32 != pid) 18 17 return 0; ··· 21 20 return 0; 22 21 } 23 22 24 - SEC("fexit/__x64_sys_nanosleep") 25 - int BPF_PROG(nanosleep_fexit, const struct pt_regs *regs, int ret) 23 + SEC("fexit/" SYS_PREFIX "sys_nanosleep") 24 + int nanosleep_fexit(void *ctx) 26 25 { 27 26 if (bpf_get_current_pid_tgid() >> 32 != pid) 28 27 return 0;
+2 -1
tools/testing/selftests/bpf/progs/perfbuf_bench.c
··· 4 4 #include <linux/bpf.h> 5 5 #include <stdint.h> 6 6 #include <bpf/bpf_helpers.h> 7 + #include "bpf_misc.h" 7 8 8 9 char _license[] SEC("license") = "GPL"; 9 10 ··· 19 18 long sample_val = 42; 20 19 long dropped __attribute__((aligned(128))) = 0; 21 20 22 - SEC("fentry/__x64_sys_getpgid") 21 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 23 22 int bench_perfbuf(void *ctx) 24 23 { 25 24 __u64 *sample;
+2 -1
tools/testing/selftests/bpf/progs/ringbuf_bench.c
··· 4 4 #include <linux/bpf.h> 5 5 #include <stdint.h> 6 6 #include <bpf/bpf_helpers.h> 7 + #include "bpf_misc.h" 7 8 8 9 char _license[] SEC("license") = "GPL"; 9 10 ··· 31 30 return sz >= wakeup_data_size ? BPF_RB_FORCE_WAKEUP : BPF_RB_NO_WAKEUP; 32 31 } 33 32 34 - SEC("fentry/__x64_sys_getpgid") 33 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 35 34 int bench_ringbuf(void *ctx) 36 35 { 37 36 long *sample, flags;
+2 -1
tools/testing/selftests/bpf/progs/sockopt_sk.c
··· 72 72 * reasons. 73 73 */ 74 74 75 - if (optval + sizeof(struct tcp_zerocopy_receive) > optval_end) 75 + /* Check that optval contains address (__u64) */ 76 + if (optval + sizeof(__u64) > optval_end) 76 77 return 0; /* bounds check */ 77 78 78 79 if (((struct tcp_zerocopy_receive *)optval)->address != 0)
+1 -14
tools/testing/selftests/bpf/progs/test_probe_user.c
··· 7 7 8 8 #include <bpf/bpf_helpers.h> 9 9 #include <bpf/bpf_tracing.h> 10 - 11 - #if defined(__TARGET_ARCH_x86) 12 - #define SYSCALL_WRAPPER 1 13 - #define SYS_PREFIX "__x64_" 14 - #elif defined(__TARGET_ARCH_s390) 15 - #define SYSCALL_WRAPPER 1 16 - #define SYS_PREFIX "__s390x_" 17 - #elif defined(__TARGET_ARCH_arm64) 18 - #define SYSCALL_WRAPPER 1 19 - #define SYS_PREFIX "__arm64_" 20 - #else 21 - #define SYSCALL_WRAPPER 0 22 - #define SYS_PREFIX "" 23 - #endif 10 + #include "bpf_misc.h" 24 11 25 12 static struct sockaddr_in old; 26 13
+2 -1
tools/testing/selftests/bpf/progs/test_ringbuf.c
··· 3 3 4 4 #include <linux/bpf.h> 5 5 #include <bpf/bpf_helpers.h> 6 + #include "bpf_misc.h" 6 7 7 8 char _license[] SEC("license") = "GPL"; 8 9 ··· 36 35 /* inner state */ 37 36 long seq = 0; 38 37 39 - SEC("fentry/__x64_sys_getpgid") 38 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 40 39 int test_ringbuf(void *ctx) 41 40 { 42 41 int cur_pid = bpf_get_current_pid_tgid() >> 32;
+6
tools/testing/selftests/bpf/progs/test_sk_lookup.c
··· 392 392 { 393 393 struct bpf_sock *sk; 394 394 int err, family; 395 + __u32 val_u32; 395 396 bool v4; 396 397 397 398 v4 = (ctx->family == AF_INET); ··· 417 416 LSB(ctx->remote_port, 2) != 0 || LSB(ctx->remote_port, 3) != 0) 418 417 return SK_DROP; 419 418 if (LSW(ctx->remote_port, 0) != SRC_PORT) 419 + return SK_DROP; 420 + 421 + /* Load from remote_port field with zero padding (backward compatibility) */ 422 + val_u32 = *(__u32 *)&ctx->remote_port; 423 + if (val_u32 != bpf_htonl(bpf_ntohs(SRC_PORT) << 16)) 420 424 return SK_DROP; 421 425 422 426 /* Narrow loads from local_port field. Expect DST_PORT. */
+41
tools/testing/selftests/bpf/progs/test_sock_fields.c
··· 12 12 enum bpf_linum_array_idx { 13 13 EGRESS_LINUM_IDX, 14 14 INGRESS_LINUM_IDX, 15 + READ_SK_DST_PORT_LINUM_IDX, 15 16 __NR_BPF_LINUM_ARRAY_IDX, 16 17 }; 17 18 ··· 247 246 248 247 skcpy(&listen_sk, sk); 249 248 tpcpy(&listen_tp, tp); 249 + 250 + return CG_OK; 251 + } 252 + 253 + static __noinline bool sk_dst_port__load_word(struct bpf_sock *sk) 254 + { 255 + __u32 *word = (__u32 *)&sk->dst_port; 256 + return word[0] == bpf_htonl(0xcafe0000); 257 + } 258 + 259 + static __noinline bool sk_dst_port__load_half(struct bpf_sock *sk) 260 + { 261 + __u16 *half = (__u16 *)&sk->dst_port; 262 + return half[0] == bpf_htons(0xcafe); 263 + } 264 + 265 + static __noinline bool sk_dst_port__load_byte(struct bpf_sock *sk) 266 + { 267 + __u8 *byte = (__u8 *)&sk->dst_port; 268 + return byte[0] == 0xca && byte[1] == 0xfe; 269 + } 270 + 271 + SEC("cgroup_skb/egress") 272 + int read_sk_dst_port(struct __sk_buff *skb) 273 + { 274 + __u32 linum, linum_idx; 275 + struct bpf_sock *sk; 276 + 277 + linum_idx = READ_SK_DST_PORT_LINUM_IDX; 278 + 279 + sk = skb->sk; 280 + if (!sk) 281 + RET_LOG(); 282 + 283 + if (!sk_dst_port__load_word(sk)) 284 + RET_LOG(); 285 + if (!sk_dst_port__load_half(sk)) 286 + RET_LOG(); 287 + if (!sk_dst_port__load_byte(sk)) 288 + RET_LOG(); 250 289 251 290 return CG_OK; 252 291 }
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_frags_helpers.c
··· 12 12 __uint(max_entries, 4); 13 13 } cpu_map SEC(".maps"); 14 14 15 - SEC("xdp_cpumap/dummy_cm") 15 + SEC("xdp/cpumap") 16 16 int xdp_dummy_cm(struct xdp_md *ctx) 17 17 { 18 18 return XDP_PASS;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c
··· 24 24 return XDP_PASS; 25 25 } 26 26 27 - SEC("xdp_cpumap/dummy_cm") 27 + SEC("xdp/cpumap") 28 28 int xdp_dummy_cm(struct xdp_md *ctx) 29 29 { 30 30 if (ctx->ingress_ifindex == IFINDEX_LO)
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_with_devmap_frags_helpers.c
··· 12 12 /* valid program on DEVMAP entry via SEC name; 13 13 * has access to egress and ingress ifindex 14 14 */ 15 - SEC("xdp_devmap/map_prog") 15 + SEC("xdp/devmap") 16 16 int xdp_dummy_dm(struct xdp_md *ctx) 17 17 { 18 18 return XDP_PASS;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
··· 27 27 /* valid program on DEVMAP entry via SEC name; 28 28 * has access to egress and ingress ifindex 29 29 */ 30 - SEC("xdp_devmap/map_prog") 30 + SEC("xdp/devmap") 31 31 int xdp_dummy_dm(struct xdp_md *ctx) 32 32 { 33 33 char fmt[] = "devmap redirect: dev %u -> dev %u len %u\n";
+2 -1
tools/testing/selftests/bpf/progs/trace_printk.c
··· 4 4 #include "vmlinux.h" 5 5 #include <bpf/bpf_helpers.h> 6 6 #include <bpf/bpf_tracing.h> 7 + #include "bpf_misc.h" 7 8 8 9 char _license[] SEC("license") = "GPL"; 9 10 ··· 13 12 14 13 const char fmt[] = "Testing,testing %d\n"; 15 14 16 - SEC("fentry/__x64_sys_nanosleep") 15 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 17 16 int sys_enter(void *ctx) 18 17 { 19 18 trace_printk_ret = bpf_trace_printk(fmt, sizeof(fmt),
+2 -1
tools/testing/selftests/bpf/progs/trace_vprintk.c
··· 4 4 #include "vmlinux.h" 5 5 #include <bpf/bpf_helpers.h> 6 6 #include <bpf/bpf_tracing.h> 7 + #include "bpf_misc.h" 7 8 8 9 char _license[] SEC("license") = "GPL"; 9 10 ··· 12 11 int trace_vprintk_ret = 0; 13 12 int trace_vprintk_ran = 0; 14 13 15 - SEC("fentry/__x64_sys_nanosleep") 14 + SEC("fentry/" SYS_PREFIX "sys_nanosleep") 16 15 int sys_enter(void *ctx) 17 16 { 18 17 static const char one[] = "1";
+5 -4
tools/testing/selftests/bpf/progs/trigger_bench.c
··· 5 5 #include <asm/unistd.h> 6 6 #include <bpf/bpf_helpers.h> 7 7 #include <bpf/bpf_tracing.h> 8 + #include "bpf_misc.h" 8 9 9 10 char _license[] SEC("license") = "GPL"; 10 11 ··· 26 25 return 0; 27 26 } 28 27 29 - SEC("kprobe/__x64_sys_getpgid") 28 + SEC("kprobe/" SYS_PREFIX "sys_getpgid") 30 29 int bench_trigger_kprobe(void *ctx) 31 30 { 32 31 __sync_add_and_fetch(&hits, 1); 33 32 return 0; 34 33 } 35 34 36 - SEC("fentry/__x64_sys_getpgid") 35 + SEC("fentry/" SYS_PREFIX "sys_getpgid") 37 36 int bench_trigger_fentry(void *ctx) 38 37 { 39 38 __sync_add_and_fetch(&hits, 1); 40 39 return 0; 41 40 } 42 41 43 - SEC("fentry.s/__x64_sys_getpgid") 42 + SEC("fentry.s/" SYS_PREFIX "sys_getpgid") 44 43 int bench_trigger_fentry_sleep(void *ctx) 45 44 { 46 45 __sync_add_and_fetch(&hits, 1); 47 46 return 0; 48 47 } 49 48 50 - SEC("fmod_ret/__x64_sys_getpgid") 49 + SEC("fmod_ret/" SYS_PREFIX "sys_getpgid") 51 50 int bench_trigger_fmodret(void *ctx) 52 51 { 53 52 __sync_add_and_fetch(&hits, 1);
+1 -1
tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c
··· 70 70 BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS); 71 71 } 72 72 73 - SEC("xdp_devmap/map_prog") 73 + SEC("xdp/devmap") 74 74 int xdp_devmap_prog(struct xdp_md *ctx) 75 75 { 76 76 void *data_end = (void *)(long)ctx->data_end;
+7 -4
tools/testing/selftests/bpf/test_lru_map.c
··· 61 61 }; 62 62 __u8 data[64] = {}; 63 63 int mfd, pfd, ret, zero = 0; 64 - __u32 retval = 0; 64 + LIBBPF_OPTS(bpf_test_run_opts, topts, 65 + .data_in = data, 66 + .data_size_in = sizeof(data), 67 + .repeat = 1, 68 + ); 65 69 66 70 mfd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL, sizeof(int), sizeof(__u64), 1, NULL); 67 71 if (mfd < 0) ··· 79 75 return -1; 80 76 } 81 77 82 - ret = bpf_prog_test_run(pfd, 1, data, sizeof(data), 83 - NULL, NULL, &retval, NULL); 84 - if (ret < 0 || retval != 42) { 78 + ret = bpf_prog_test_run_opts(pfd, &topts); 79 + if (ret < 0 || topts.retval != 42) { 85 80 ret = -1; 86 81 } else { 87 82 assert(!bpf_map_lookup_elem(mfd, &zero, value));
+78 -72
tools/testing/selftests/bpf/test_lwt_seg6local.sh
··· 23 23 24 24 # Kselftest framework requirement - SKIP code is 4. 25 25 ksft_skip=4 26 + readonly NS1="ns1-$(mktemp -u XXXXXX)" 27 + readonly NS2="ns2-$(mktemp -u XXXXXX)" 28 + readonly NS3="ns3-$(mktemp -u XXXXXX)" 29 + readonly NS4="ns4-$(mktemp -u XXXXXX)" 30 + readonly NS5="ns5-$(mktemp -u XXXXXX)" 31 + readonly NS6="ns6-$(mktemp -u XXXXXX)" 26 32 27 33 msg="skip all tests:" 28 34 if [ $UID != 0 ]; then ··· 47 41 fi 48 42 49 43 set +e 50 - ip netns del ns1 2> /dev/null 51 - ip netns del ns2 2> /dev/null 52 - ip netns del ns3 2> /dev/null 53 - ip netns del ns4 2> /dev/null 54 - ip netns del ns5 2> /dev/null 55 - ip netns del ns6 2> /dev/null 44 + ip netns del ${NS1} 2> /dev/null 45 + ip netns del ${NS2} 2> /dev/null 46 + ip netns del ${NS3} 2> /dev/null 47 + ip netns del ${NS4} 2> /dev/null 48 + ip netns del ${NS5} 2> /dev/null 49 + ip netns del ${NS6} 2> /dev/null 56 50 rm -f $TMP_FILE 57 51 } 58 52 59 53 set -e 60 54 61 - ip netns add ns1 62 - ip netns add ns2 63 - ip netns add ns3 64 - ip netns add ns4 65 - ip netns add ns5 66 - ip netns add ns6 55 + ip netns add ${NS1} 56 + ip netns add ${NS2} 57 + ip netns add ${NS3} 58 + ip netns add ${NS4} 59 + ip netns add ${NS5} 60 + ip netns add ${NS6} 67 61 68 62 trap cleanup 0 2 3 6 9 69 63 ··· 73 67 ip link add veth7 type veth peer name veth8 74 68 ip link add veth9 type veth peer name veth10 75 69 76 - ip link set veth1 netns ns1 77 - ip link set veth2 netns ns2 78 - ip link set veth3 netns ns2 79 - ip link set veth4 netns ns3 80 - ip link set veth5 netns ns3 81 - ip link set veth6 netns ns4 82 - ip link set veth7 netns ns4 83 - ip link set veth8 netns ns5 84 - ip link set veth9 netns ns5 85 - ip link set veth10 netns ns6 70 + ip link set veth1 netns ${NS1} 71 + ip link set veth2 netns ${NS2} 72 + ip link set veth3 netns ${NS2} 73 + ip link set veth4 netns ${NS3} 74 + ip link set veth5 netns ${NS3} 75 + ip link set veth6 netns ${NS4} 76 + ip link set veth7 netns ${NS4} 77 + ip link set veth8 netns ${NS5} 78 + ip 
link set veth9 netns ${NS5} 79 + ip link set veth10 netns ${NS6} 86 80 87 - ip netns exec ns1 ip link set dev veth1 up 88 - ip netns exec ns2 ip link set dev veth2 up 89 - ip netns exec ns2 ip link set dev veth3 up 90 - ip netns exec ns3 ip link set dev veth4 up 91 - ip netns exec ns3 ip link set dev veth5 up 92 - ip netns exec ns4 ip link set dev veth6 up 93 - ip netns exec ns4 ip link set dev veth7 up 94 - ip netns exec ns5 ip link set dev veth8 up 95 - ip netns exec ns5 ip link set dev veth9 up 96 - ip netns exec ns6 ip link set dev veth10 up 97 - ip netns exec ns6 ip link set dev lo up 81 + ip netns exec ${NS1} ip link set dev veth1 up 82 + ip netns exec ${NS2} ip link set dev veth2 up 83 + ip netns exec ${NS2} ip link set dev veth3 up 84 + ip netns exec ${NS3} ip link set dev veth4 up 85 + ip netns exec ${NS3} ip link set dev veth5 up 86 + ip netns exec ${NS4} ip link set dev veth6 up 87 + ip netns exec ${NS4} ip link set dev veth7 up 88 + ip netns exec ${NS5} ip link set dev veth8 up 89 + ip netns exec ${NS5} ip link set dev veth9 up 90 + ip netns exec ${NS6} ip link set dev veth10 up 91 + ip netns exec ${NS6} ip link set dev lo up 98 92 99 93 # All link scope addresses and routes required between veths 100 - ip netns exec ns1 ip -6 addr add fb00::12/16 dev veth1 scope link 101 - ip netns exec ns1 ip -6 route add fb00::21 dev veth1 scope link 102 - ip netns exec ns2 ip -6 addr add fb00::21/16 dev veth2 scope link 103 - ip netns exec ns2 ip -6 addr add fb00::34/16 dev veth3 scope link 104 - ip netns exec ns2 ip -6 route add fb00::43 dev veth3 scope link 105 - ip netns exec ns3 ip -6 route add fb00::65 dev veth5 scope link 106 - ip netns exec ns3 ip -6 addr add fb00::43/16 dev veth4 scope link 107 - ip netns exec ns3 ip -6 addr add fb00::56/16 dev veth5 scope link 108 - ip netns exec ns4 ip -6 addr add fb00::65/16 dev veth6 scope link 109 - ip netns exec ns4 ip -6 addr add fb00::78/16 dev veth7 scope link 110 - ip netns exec ns4 ip -6 route add fb00::87 dev 
veth7 scope link 111 - ip netns exec ns5 ip -6 addr add fb00::87/16 dev veth8 scope link 112 - ip netns exec ns5 ip -6 addr add fb00::910/16 dev veth9 scope link 113 - ip netns exec ns5 ip -6 route add fb00::109 dev veth9 scope link 114 - ip netns exec ns5 ip -6 route add fb00::109 table 117 dev veth9 scope link 115 - ip netns exec ns6 ip -6 addr add fb00::109/16 dev veth10 scope link 94 + ip netns exec ${NS1} ip -6 addr add fb00::12/16 dev veth1 scope link 95 + ip netns exec ${NS1} ip -6 route add fb00::21 dev veth1 scope link 96 + ip netns exec ${NS2} ip -6 addr add fb00::21/16 dev veth2 scope link 97 + ip netns exec ${NS2} ip -6 addr add fb00::34/16 dev veth3 scope link 98 + ip netns exec ${NS2} ip -6 route add fb00::43 dev veth3 scope link 99 + ip netns exec ${NS3} ip -6 route add fb00::65 dev veth5 scope link 100 + ip netns exec ${NS3} ip -6 addr add fb00::43/16 dev veth4 scope link 101 + ip netns exec ${NS3} ip -6 addr add fb00::56/16 dev veth5 scope link 102 + ip netns exec ${NS4} ip -6 addr add fb00::65/16 dev veth6 scope link 103 + ip netns exec ${NS4} ip -6 addr add fb00::78/16 dev veth7 scope link 104 + ip netns exec ${NS4} ip -6 route add fb00::87 dev veth7 scope link 105 + ip netns exec ${NS5} ip -6 addr add fb00::87/16 dev veth8 scope link 106 + ip netns exec ${NS5} ip -6 addr add fb00::910/16 dev veth9 scope link 107 + ip netns exec ${NS5} ip -6 route add fb00::109 dev veth9 scope link 108 + ip netns exec ${NS5} ip -6 route add fb00::109 table 117 dev veth9 scope link 109 + ip netns exec ${NS6} ip -6 addr add fb00::109/16 dev veth10 scope link 116 110 117 - ip netns exec ns1 ip -6 addr add fb00::1/16 dev lo 118 - ip netns exec ns1 ip -6 route add fb00::6 dev veth1 via fb00::21 111 + ip netns exec ${NS1} ip -6 addr add fb00::1/16 dev lo 112 + ip netns exec ${NS1} ip -6 route add fb00::6 dev veth1 via fb00::21 119 113 120 - ip netns exec ns2 ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2 121 - ip netns exec ns2 ip 
-6 route add fd00::1 dev veth3 via fb00::43 scope link 114 + ip netns exec ${NS2} ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2 115 + ip netns exec ${NS2} ip -6 route add fd00::1 dev veth3 via fb00::43 scope link 122 116 123 - ip netns exec ns3 ip -6 route add fc42::1 dev veth5 via fb00::65 124 - ip netns exec ns3 ip -6 route add fd00::1 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec add_egr_x dev veth4 117 + ip netns exec ${NS3} ip -6 route add fc42::1 dev veth5 via fb00::65 118 + ip netns exec ${NS3} ip -6 route add fd00::1 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec add_egr_x dev veth4 125 119 126 - ip netns exec ns4 ip -6 route add fd00::2 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec pop_egr dev veth6 127 - ip netns exec ns4 ip -6 addr add fc42::1 dev lo 128 - ip netns exec ns4 ip -6 route add fd00::3 dev veth7 via fb00::87 120 + ip netns exec ${NS4} ip -6 route add fd00::2 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec pop_egr dev veth6 121 + ip netns exec ${NS4} ip -6 addr add fc42::1 dev lo 122 + ip netns exec ${NS4} ip -6 route add fd00::3 dev veth7 via fb00::87 129 123 130 - ip netns exec ns5 ip -6 route add fd00::4 table 117 dev veth9 via fb00::109 131 - ip netns exec ns5 ip -6 route add fd00::3 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec inspect_t dev veth8 124 + ip netns exec ${NS5} ip -6 route add fd00::4 table 117 dev veth9 via fb00::109 125 + ip netns exec ${NS5} ip -6 route add fd00::3 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec inspect_t dev veth8 132 126 133 - ip netns exec ns6 ip -6 addr add fb00::6/16 dev lo 134 - ip netns exec ns6 ip -6 addr add fd00::4/16 dev lo 127 + ip netns exec ${NS6} ip -6 addr add fb00::6/16 dev lo 128 + ip netns exec ${NS6} ip -6 addr add fd00::4/16 dev lo 135 129 136 - ip netns exec ns1 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 137 
- ip netns exec ns2 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 138 - ip netns exec ns3 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 139 - ip netns exec ns4 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 140 - ip netns exec ns5 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 130 + ip netns exec ${NS1} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 131 + ip netns exec ${NS2} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 132 + ip netns exec ${NS3} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 133 + ip netns exec ${NS4} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 134 + ip netns exec ${NS5} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null 141 135 142 - ip netns exec ns6 sysctl net.ipv6.conf.all.seg6_enabled=1 > /dev/null 143 - ip netns exec ns6 sysctl net.ipv6.conf.lo.seg6_enabled=1 > /dev/null 144 - ip netns exec ns6 sysctl net.ipv6.conf.veth10.seg6_enabled=1 > /dev/null 136 + ip netns exec ${NS6} sysctl net.ipv6.conf.all.seg6_enabled=1 > /dev/null 137 + ip netns exec ${NS6} sysctl net.ipv6.conf.lo.seg6_enabled=1 > /dev/null 138 + ip netns exec ${NS6} sysctl net.ipv6.conf.veth10.seg6_enabled=1 > /dev/null 145 139 146 - ip netns exec ns6 nc -l -6 -u -d 7330 > $TMP_FILE & 147 - ip netns exec ns1 bash -c "echo 'foobar' | nc -w0 -6 -u -p 2121 -s fb00::1 fb00::6 7330" 140 + ip netns exec ${NS6} nc -l -6 -u -d 7330 > $TMP_FILE & 141 + ip netns exec ${NS1} bash -c "echo 'foobar' | nc -w0 -6 -u -p 2121 -s fb00::1 fb00::6 7330" 148 142 sleep 5 # wait enough time to ensure the UDP datagram arrived to the last segment 149 143 kill -TERM $! 150 144
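Both converted scripts use the same collision-avoidance idiom (change 11 in the summary): derive a random per-run suffix with `mktemp -u`, which only generates a name without creating anything, so parallel selftest runs do not fight over fixed namespace names. A minimal standalone sketch:

```shell
#!/bin/sh
# mktemp -u prints a unique name for the XXXXXX template without
# creating a file; two invocations yield distinct random suffixes.
NS1="ns1-$(mktemp -u XXXXXX)"
NS2="ns1-$(mktemp -u XXXXXX)"

echo "$NS1"
echo "$NS2"
```

The scripts then thread `${NS1}`..`${NS6}` through every `ip netns` invocation and delete them in the cleanup trap, as shown in the diff above.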
+1 -1
tools/testing/selftests/bpf/test_maps.c
··· 738 738 sizeof(key), sizeof(value), 739 739 6, NULL); 740 740 if (fd < 0) { 741 - if (!bpf_probe_map_type(BPF_MAP_TYPE_SOCKMAP, 0)) { 741 + if (!libbpf_probe_bpf_map_type(BPF_MAP_TYPE_SOCKMAP, NULL)) { 742 742 printf("%s SKIP (unsupported map type BPF_MAP_TYPE_SOCKMAP)\n", 743 743 __func__); 744 744 skips++;
+3 -2
tools/testing/selftests/bpf/test_tcp_check_syncookie.sh
···
 # Copyright (c) 2019 Cloudflare
 
 set -eu
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
 
 wait_for_ip()
 {
···
 
 ns1_exec()
 {
-	ip netns exec ns1 "$@"
+	ip netns exec ${NS1} "$@"
 }
 
 setup()
 {
-	ip netns add ns1
+	ip netns add ${NS1}
 	ns1_exec ip link set lo up
 
 	ns1_exec sysctl -w net.ipv4.tcp_syncookies=2
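The rename above follows the pattern used throughout this series: derive a unique, collision-free namespace name once with `mktemp -u` and refer to the same readonly variable in setup, helpers, and cleanup. A minimal standalone sketch of that pattern (the `echo` stands in for the real `ip netns` calls):

```shell
#!/bin/bash
# -u ("dry run") only generates and prints a name; no file is created,
# so this works even where mktemp could not write anything.
readonly NS1="ns1-$(mktemp -u XXXXXX)"

# Setup and cleanup then always agree on the name, e.g.:
#   ip netns add ${NS1}
#   trap 'ip netns del ${NS1} 2> /dev/null' EXIT
echo "${NS1}"
```

Because the suffix is randomized per run, parallel invocations of the same selftest no longer race on a shared `ns1`.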
+12 -8
tools/testing/selftests/bpf/test_verifier.c
···
 
 static bool skip_unsupported_map(enum bpf_map_type map_type)
 {
-	if (!bpf_probe_map_type(map_type, 0)) {
+	if (!libbpf_probe_bpf_map_type(map_type, NULL)) {
 		printf("SKIP (unsupported map type %d)\n", map_type);
 		skips++;
 		return true;
···
 {
 	__u8 tmp[TEST_DATA_LEN << 2];
 	__u32 size_tmp = sizeof(tmp);
-	uint32_t retval;
 	int err, saved_errno;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = data,
+		.data_size_in = size_data,
+		.data_out = tmp,
+		.data_size_out = size_tmp,
+		.repeat = 1,
+	);
 
 	if (unpriv)
 		set_admin(true);
-	err = bpf_prog_test_run(fd_prog, 1, data, size_data,
-				tmp, &size_tmp, &retval, NULL);
+	err = bpf_prog_test_run_opts(fd_prog, &topts);
 	saved_errno = errno;
 
 	if (unpriv)
···
 		}
 	}
 
-	if (retval != expected_val &&
-	    expected_val != POINTER_VALUE) {
-		printf("FAIL retval %d != %d ", retval, expected_val);
+	if (topts.retval != expected_val && expected_val != POINTER_VALUE) {
+		printf("FAIL retval %d != %d ", topts.retval, expected_val);
 		return 1;
 	}
 
···
 	 * bpf_probe_prog_type won't give correct answer
 	 */
 	if (fd_prog < 0 && prog_type != BPF_PROG_TYPE_TRACING &&
-	    !bpf_probe_prog_type(prog_type, 0)) {
+	    !libbpf_probe_bpf_prog_type(prog_type, NULL)) {
 		printf("SKIP (unsupported program type %d)\n", prog_type);
 		skips++;
 		goto close_fds;
+20 -18
tools/testing/selftests/bpf/test_xdp_meta.sh
···
 
 # Kselftest framework requirement - SKIP code is 4.
 readonly KSFT_SKIP=4
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
 
 cleanup()
 {
···
 
 	set +e
 	ip link del veth1 2> /dev/null
-	ip netns del ns1 2> /dev/null
-	ip netns del ns2 2> /dev/null
+	ip netns del ${NS1} 2> /dev/null
+	ip netns del ${NS2} 2> /dev/null
 }
 
 ip link set dev lo xdp off 2>/dev/null > /dev/null
···
 fi
 set -e
 
-ip netns add ns1
-ip netns add ns2
+ip netns add ${NS1}
+ip netns add ${NS2}
 
 trap cleanup 0 2 3 6 9
 
 ip link add veth1 type veth peer name veth2
 
-ip link set veth1 netns ns1
-ip link set veth2 netns ns2
+ip link set veth1 netns ${NS1}
+ip link set veth2 netns ${NS2}
 
-ip netns exec ns1 ip addr add 10.1.1.11/24 dev veth1
-ip netns exec ns2 ip addr add 10.1.1.22/24 dev veth2
+ip netns exec ${NS1} ip addr add 10.1.1.11/24 dev veth1
+ip netns exec ${NS2} ip addr add 10.1.1.22/24 dev veth2
 
-ip netns exec ns1 tc qdisc add dev veth1 clsact
-ip netns exec ns2 tc qdisc add dev veth2 clsact
+ip netns exec ${NS1} tc qdisc add dev veth1 clsact
+ip netns exec ${NS2} tc qdisc add dev veth2 clsact
 
-ip netns exec ns1 tc filter add dev veth1 ingress bpf da obj test_xdp_meta.o sec t
-ip netns exec ns2 tc filter add dev veth2 ingress bpf da obj test_xdp_meta.o sec t
+ip netns exec ${NS1} tc filter add dev veth1 ingress bpf da obj test_xdp_meta.o sec t
+ip netns exec ${NS2} tc filter add dev veth2 ingress bpf da obj test_xdp_meta.o sec t
 
-ip netns exec ns1 ip link set dev veth1 xdp obj test_xdp_meta.o sec x
-ip netns exec ns2 ip link set dev veth2 xdp obj test_xdp_meta.o sec x
+ip netns exec ${NS1} ip link set dev veth1 xdp obj test_xdp_meta.o sec x
+ip netns exec ${NS2} ip link set dev veth2 xdp obj test_xdp_meta.o sec x
 
-ip netns exec ns1 ip link set dev veth1 up
-ip netns exec ns2 ip link set dev veth2 up
+ip netns exec ${NS1} ip link set dev veth1 up
+ip netns exec ${NS2} ip link set dev veth2 up
 
-ip netns exec ns1 ping -c 1 10.1.1.22
-ip netns exec ns2 ping -c 1 10.1.1.11
+ip netns exec ${NS1} ping -c 1 10.1.1.22
+ip netns exec ${NS2} ping -c 1 10.1.1.11
 
 exit 0
+16 -14
tools/testing/selftests/bpf/test_xdp_redirect.sh
···
 # |    xdp forwarding   |
 # -----------------------
 
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
 ret=0
 
 setup()
···
 
 	local xdpmode=$1
 
-	ip netns add ns1
-	ip netns add ns2
+	ip netns add ${NS1}
+	ip netns add ${NS2}
 
-	ip link add veth1 index 111 type veth peer name veth11 netns ns1
-	ip link add veth2 index 222 type veth peer name veth22 netns ns2
+	ip link add veth1 index 111 type veth peer name veth11 netns ${NS1}
+	ip link add veth2 index 222 type veth peer name veth22 netns ${NS2}
 
 	ip link set veth1 up
 	ip link set veth2 up
-	ip -n ns1 link set dev veth11 up
-	ip -n ns2 link set dev veth22 up
+	ip -n ${NS1} link set dev veth11 up
+	ip -n ${NS2} link set dev veth22 up
 
-	ip -n ns1 addr add 10.1.1.11/24 dev veth11
-	ip -n ns2 addr add 10.1.1.22/24 dev veth22
+	ip -n ${NS1} addr add 10.1.1.11/24 dev veth11
+	ip -n ${NS2} addr add 10.1.1.22/24 dev veth22
 }
 
 cleanup()
 {
 	ip link del veth1 2> /dev/null
 	ip link del veth2 2> /dev/null
-	ip netns del ns1 2> /dev/null
-	ip netns del ns2 2> /dev/null
+	ip netns del ${NS1} 2> /dev/null
+	ip netns del ${NS2} 2> /dev/null
 }
 
 test_xdp_redirect()
···
 		return 0
 	fi
 
-	ip -n ns1 link set veth11 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
-	ip -n ns2 link set veth22 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
+	ip -n ${NS1} link set veth11 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
+	ip -n ${NS2} link set veth22 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null
 	ip link set dev veth1 $xdpmode obj test_xdp_redirect.o sec redirect_to_222 &> /dev/null
 	ip link set dev veth2 $xdpmode obj test_xdp_redirect.o sec redirect_to_111 &> /dev/null
 
-	if ip netns exec ns1 ping -c 1 10.1.1.22 &> /dev/null &&
-	   ip netns exec ns2 ping -c 1 10.1.1.11 &> /dev/null; then
+	if ip netns exec ${NS1} ping -c 1 10.1.1.22 &> /dev/null &&
+	   ip netns exec ${NS2} ping -c 1 10.1.1.11 &> /dev/null; then
 		echo "selftests: test_xdp_redirect $xdpmode [PASS]";
 	else
 		ret=1
+30 -28
tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
···
 PASS=0
 FAIL=0
 LOG_DIR=$(mktemp -d)
+declare -a NS
+NS[0]="ns0-$(mktemp -u XXXXXX)"
+NS[1]="ns1-$(mktemp -u XXXXXX)"
+NS[2]="ns2-$(mktemp -u XXXXXX)"
+NS[3]="ns3-$(mktemp -u XXXXXX)"
 
 test_pass()
 {
···
 
 clean_up()
 {
-	for i in $(seq $NUM); do
-		ip link del veth$i 2> /dev/null
-		ip netns del ns$i 2> /dev/null
+	for i in $(seq 0 $NUM); do
+		ip netns del ${NS[$i]} 2> /dev/null
 	done
-	ip netns del ns0 2> /dev/null
 }
 
 # Kselftest framework requirement - SKIP code is 4.
···
 		mode="xdpdrv"
 	fi
 
-	ip netns add ns0
+	ip netns add ${NS[0]}
 	for i in $(seq $NUM); do
-		ip netns add ns$i
-		ip -n ns$i link add veth0 index 2 type veth \
-			peer name veth$i netns ns0 index $((1 + $i))
-		ip -n ns0 link set veth$i up
-		ip -n ns$i link set veth0 up
+		ip netns add ${NS[$i]}
+		ip -n ${NS[$i]} link add veth0 type veth peer name veth$i netns ${NS[0]}
+		ip -n ${NS[$i]} link set veth0 up
+		ip -n ${NS[0]} link set veth$i up
 
-		ip -n ns$i addr add 192.0.2.$i/24 dev veth0
-		ip -n ns$i addr add 2001:db8::$i/64 dev veth0
+		ip -n ${NS[$i]} addr add 192.0.2.$i/24 dev veth0
+		ip -n ${NS[$i]} addr add 2001:db8::$i/64 dev veth0
 		# Add a neigh entry for IPv4 ping test
-		ip -n ns$i neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
-		ip -n ns$i link set veth0 $mode obj \
+		ip -n ${NS[$i]} neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
+		ip -n ${NS[$i]} link set veth0 $mode obj \
 			xdp_dummy.o sec xdp &> /dev/null || \
 			{ test_fail "Unable to load dummy xdp" && exit 1; }
 		IFACES="$IFACES veth$i"
-		veth_mac[$i]=$(ip -n ns0 link show veth$i | awk '/link\/ether/ {print $2}')
+		veth_mac[$i]=$(ip -n ${NS[0]} link show veth$i | awk '/link\/ether/ {print $2}')
 	done
 }
 
···
 	local mode=$1
 
 	# mac test
-	ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
-	ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
+	ip netns exec ${NS[2]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
+	ip netns exec ${NS[3]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
 	sleep 0.5
-	ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
+	ip netns exec ${NS[1]} ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
 	sleep 0.5
 	pkill tcpdump
 
···
 	local mode=$1
 
 	# ping6 test: echo request should be redirect back to itself, not others
-	ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
+	ip netns exec ${NS[1]} ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
 
-	ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
-	ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
-	ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
+	ip netns exec ${NS[1]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
+	ip netns exec ${NS[2]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
+	ip netns exec ${NS[3]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
 	sleep 0.5
 	# ARP test
-	ip netns exec ns1 arping -q -c 2 -I veth0 192.0.2.254
+	ip netns exec ${NS[1]} arping -q -c 2 -I veth0 192.0.2.254
 	# IPv4 test
-	ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
+	ip netns exec ${NS[1]} ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
 	# IPv6 test
-	ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
+	ip netns exec ${NS[1]} ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
 	sleep 0.5
 	pkill tcpdump
 
···
 		xdpgeneric) drv_p="-S";;
 	esac
 
-	ip netns exec ns0 ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
+	ip netns exec ${NS[0]} ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
 	xdp_pid=$!
 	sleep 1
 	if ! ps -p $xdp_pid > /dev/null; then
···
 	kill $xdp_pid
 }
 
-trap clean_up EXIT
-
 check_env
+
+trap clean_up EXIT
 
 for mode in ${DRV_MODE}; do
 	setup_ns $mode
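This script goes a step further than the others: instead of separate `NS1`..`NS3` variables it keeps all randomized namespace names in one bash array, so setup and cleanup iterate over the exact same list (ns0 included). A small standalone sketch of that bookkeeping, with `echo` standing in for the real `ip netns` commands:

```shell
#!/bin/bash
NUM=3
declare -a NS
# One unique name per namespace, index 0..NUM.
for i in $(seq 0 $NUM); do
	NS[$i]="ns$i-$(mktemp -u XXXXXX)"
done

# Cleanup walks the same array, so no namespace can be forgotten
# or mis-reconstructed once names are randomized:
for i in $(seq 0 $NUM); do
	echo "would delete ${NS[$i]}"   # stand-in for: ip netns del ${NS[$i]}
done
```

Note the cleanup loop also drops the explicit `ip link del veth$i`: deleting a namespace tears down the veth pairs whose peer lives inside it.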
+21 -18
tools/testing/selftests/bpf/test_xdp_veth.sh
···
 TESTNAME=xdp_veth
 BPF_FS=$(awk '$3 == "bpf" {print $2; exit}' /proc/mounts)
 BPF_DIR=$BPF_FS/test_$TESTNAME
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
+readonly NS3="ns3-$(mktemp -u XXXXXX)"
 
 _cleanup()
 {
···
 	ip link del veth1 2> /dev/null
 	ip link del veth2 2> /dev/null
 	ip link del veth3 2> /dev/null
-	ip netns del ns1 2> /dev/null
-	ip netns del ns2 2> /dev/null
-	ip netns del ns3 2> /dev/null
+	ip netns del ${NS1} 2> /dev/null
+	ip netns del ${NS2} 2> /dev/null
+	ip netns del ${NS3} 2> /dev/null
 	rm -rf $BPF_DIR 2> /dev/null
 }
 
···
 
 trap cleanup_skip EXIT
 
-ip netns add ns1
-ip netns add ns2
-ip netns add ns3
+ip netns add ${NS1}
+ip netns add ${NS2}
+ip netns add ${NS3}
 
-ip link add veth1 index 111 type veth peer name veth11 netns ns1
-ip link add veth2 index 122 type veth peer name veth22 netns ns2
-ip link add veth3 index 133 type veth peer name veth33 netns ns3
+ip link add veth1 index 111 type veth peer name veth11 netns ${NS1}
+ip link add veth2 index 122 type veth peer name veth22 netns ${NS2}
+ip link add veth3 index 133 type veth peer name veth33 netns ${NS3}
 
 ip link set veth1 up
 ip link set veth2 up
 ip link set veth3 up
 
-ip -n ns1 addr add 10.1.1.11/24 dev veth11
-ip -n ns3 addr add 10.1.1.33/24 dev veth33
+ip -n ${NS1} addr add 10.1.1.11/24 dev veth11
+ip -n ${NS3} addr add 10.1.1.33/24 dev veth33
 
-ip -n ns1 link set dev veth11 up
-ip -n ns2 link set dev veth22 up
-ip -n ns3 link set dev veth33 up
+ip -n ${NS1} link set dev veth11 up
+ip -n ${NS2} link set dev veth22 up
+ip -n ${NS3} link set dev veth33 up
 
 mkdir $BPF_DIR
 bpftool prog loadall \
···
 ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
 ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
 
-ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp
-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
-ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp
+ip -n ${NS1} link set dev veth11 xdp obj xdp_dummy.o sec xdp
+ip -n ${NS2} link set dev veth22 xdp obj xdp_tx.o sec xdp
+ip -n ${NS3} link set dev veth33 xdp obj xdp_dummy.o sec xdp
 
 trap cleanup EXIT
 
-ip netns exec ns1 ping -c 1 -W 1 10.1.1.33
+ip netns exec ${NS1} ping -c 1 -W 1 10.1.1.33
 
 exit 0
+34 -32
tools/testing/selftests/bpf/test_xdp_vlan.sh
···
 
 # Kselftest framework requirement - SKIP code is 4.
 readonly KSFT_SKIP=4
+readonly NS1="ns1-$(mktemp -u XXXXXX)"
+readonly NS2="ns2-$(mktemp -u XXXXXX)"
 
 # Allow wrapper scripts to name test
 if [ -z "$TESTNAME" ]; then
···
 
 	if [ -n "$INTERACTIVE" ]; then
 		echo "Namespace setup still active explore with:"
-		echo " ip netns exec ns1 bash"
-		echo " ip netns exec ns2 bash"
+		echo " ip netns exec ${NS1} bash"
+		echo " ip netns exec ${NS2} bash"
 		exit $status
 	fi
 
 	set +e
 	ip link del veth1 2> /dev/null
-	ip netns del ns1 2> /dev/null
-	ip netns del ns2 2> /dev/null
+	ip netns del ${NS1} 2> /dev/null
+	ip netns del ${NS2} 2> /dev/null
 }
 
 # Using external program "getopt" to get --long-options
···
 	# Interactive mode likely require us to cleanup netns
 	if [ -n "$INTERACTIVE" ]; then
 		ip link del veth1 2> /dev/null
-		ip netns del ns1 2> /dev/null
-		ip netns del ns2 2> /dev/null
+		ip netns del ${NS1} 2> /dev/null
+		ip netns del ${NS2} 2> /dev/null
 	fi
 
 	# Exit on failure
···
 fi
 
 # Create two namespaces
-ip netns add ns1
-ip netns add ns2
+ip netns add ${NS1}
+ip netns add ${NS2}
 
 # Run cleanup if failing or on kill
 trap cleanup 0 2 3 6 9
···
 ip link add veth1 type veth peer name veth2
 
 # Move veth1 and veth2 into the respective namespaces
-ip link set veth1 netns ns1
-ip link set veth2 netns ns2
+ip link set veth1 netns ${NS1}
+ip link set veth2 netns ${NS2}
 
 # NOTICE: XDP require VLAN header inside packet payload
 #  - Thus, disable VLAN offloading driver features
 #  - For veth REMEMBER TX side VLAN-offload
 #
 # Disable rx-vlan-offload (mostly needed on ns1)
-ip netns exec ns1 ethtool -K veth1 rxvlan off
-ip netns exec ns2 ethtool -K veth2 rxvlan off
+ip netns exec ${NS1} ethtool -K veth1 rxvlan off
+ip netns exec ${NS2} ethtool -K veth2 rxvlan off
 #
 # Disable tx-vlan-offload (mostly needed on ns2)
-ip netns exec ns2 ethtool -K veth2 txvlan off
-ip netns exec ns1 ethtool -K veth1 txvlan off
+ip netns exec ${NS2} ethtool -K veth2 txvlan off
+ip netns exec ${NS1} ethtool -K veth1 txvlan off
 
 export IPADDR1=100.64.41.1
 export IPADDR2=100.64.41.2
 
 # In ns1/veth1 add IP-addr on plain net_device
-ip netns exec ns1 ip addr add ${IPADDR1}/24 dev veth1
-ip netns exec ns1 ip link set veth1 up
+ip netns exec ${NS1} ip addr add ${IPADDR1}/24 dev veth1
+ip netns exec ${NS1} ip link set veth1 up
 
 # In ns2/veth2 create VLAN device
 export VLAN=4011
 export DEVNS2=veth2
-ip netns exec ns2 ip link add link $DEVNS2 name $DEVNS2.$VLAN type vlan id $VLAN
-ip netns exec ns2 ip addr add ${IPADDR2}/24 dev $DEVNS2.$VLAN
-ip netns exec ns2 ip link set $DEVNS2 up
-ip netns exec ns2 ip link set $DEVNS2.$VLAN up
+ip netns exec ${NS2} ip link add link $DEVNS2 name $DEVNS2.$VLAN type vlan id $VLAN
+ip netns exec ${NS2} ip addr add ${IPADDR2}/24 dev $DEVNS2.$VLAN
+ip netns exec ${NS2} ip link set $DEVNS2 up
+ip netns exec ${NS2} ip link set $DEVNS2.$VLAN up
 
 # Bringup lo in netns (to avoids confusing people using --interactive)
-ip netns exec ns1 ip link set lo up
-ip netns exec ns2 ip link set lo up
+ip netns exec ${NS1} ip link set lo up
+ip netns exec ${NS2} ip link set lo up
 
 # At this point, the hosts cannot reach each-other,
 # because ns2 are using VLAN tags on the packets.
 
-ip netns exec ns2 sh -c 'ping -W 1 -c 1 100.64.41.1 || echo "Success: First ping must fail"'
+ip netns exec ${NS2} sh -c 'ping -W 1 -c 1 100.64.41.1 || echo "Success: First ping must fail"'
 
 
 # Now we can use the test_xdp_vlan.c program to pop/push these VLAN tags
···
 
 # First test: Remove VLAN by setting VLAN ID 0, using "xdp_vlan_change"
 export XDP_PROG=xdp_vlan_change
-ip netns exec ns1 ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
+ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
 
 # In ns1: egress use TC to add back VLAN tag 4011
 #  (del cmd)
 #  tc qdisc del dev $DEVNS1 clsact 2> /dev/null
 #
-ip netns exec ns1 tc qdisc add dev $DEVNS1 clsact
-ip netns exec ns1 tc filter add dev $DEVNS1 egress \
+ip netns exec ${NS1} tc qdisc add dev $DEVNS1 clsact
+ip netns exec ${NS1} tc filter add dev $DEVNS1 egress \
   prio 1 handle 1 bpf da obj $FILE sec tc_vlan_push
 
 # Now the namespaces can reach each-other, test with ping:
-ip netns exec ns2 ping -i 0.2 -W 2 -c 2 $IPADDR1
-ip netns exec ns1 ping -i 0.2 -W 2 -c 2 $IPADDR2
+ip netns exec ${NS2} ping -i 0.2 -W 2 -c 2 $IPADDR1
+ip netns exec ${NS1} ping -i 0.2 -W 2 -c 2 $IPADDR2
 
 # Second test: Replace xdp prog, that fully remove vlan header
 #
···
 # ETH_P_8021Q indication, and this cause overwriting of our changes.
 #
 export XDP_PROG=xdp_vlan_remove_outer2
-ip netns exec ns1 ip link set $DEVNS1 $XDP_MODE off
-ip netns exec ns1 ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
+ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE off
+ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
 
 # Now the namespaces should still be able reach each-other, test with ping:
-ip netns exec ns2 ping -i 0.2 -W 2 -c 2 $IPADDR1
-ip netns exec ns1 ping -i 0.2 -W 2 -c 2 $IPADDR2
+ip netns exec ${NS2} ping -i 0.2 -W 2 -c 2 $IPADDR1
+ip netns exec ${NS1} ping -i 0.2 -W 2 -c 2 $IPADDR2
+31 -39
tools/testing/selftests/bpf/trace_helpers.c
···
 	}
 }
 
+ssize_t get_uprobe_offset(const void *addr)
+{
+	size_t start, end, base;
+	char buf[256];
+	bool found = false;
+	FILE *f;
+
+	f = fopen("/proc/self/maps", "r");
+	if (!f)
+		return -errno;
+
+	while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
+		if (buf[2] == 'x' && (uintptr_t)addr >= start && (uintptr_t)addr < end) {
+			found = true;
+			break;
+		}
+	}
+
+	fclose(f);
+
+	if (!found)
+		return -ESRCH;
+
 #if defined(__powerpc64__) && defined(_CALL_ELF) && _CALL_ELF == 2
 
 #define OP_RT_RA_MASK		0xffff0000UL
 #define LIS_R2			0x3c400000UL
 #define ADDIS_R2_R12		0x3c4c0000UL
 #define ADDI_R2_R2		0x38420000UL
-
-ssize_t get_uprobe_offset(const void *addr, ssize_t base)
-{
-	u32 *insn = (u32 *)(uintptr_t)addr;
 
 	/*
 	 * A PPC64 ABIv2 function may have a local and a global entry
···
 	 *   lis   r2,XXXX
 	 *   addi  r2,r2,XXXX
 	 */
-	if ((((*insn & OP_RT_RA_MASK) == ADDIS_R2_R12) ||
-	     ((*insn & OP_RT_RA_MASK) == LIS_R2)) &&
-	    ((*(insn + 1) & OP_RT_RA_MASK) == ADDI_R2_R2))
-		return (ssize_t)(insn + 2) - base;
-	else
-		return (uintptr_t)addr - base;
-}
+	{
+		const u32 *insn = (const u32 *)(uintptr_t)addr;
 
-#else
-
-ssize_t get_uprobe_offset(const void *addr, ssize_t base)
-{
-	return (uintptr_t)addr - base;
-}
-
-#endif
-
-ssize_t get_base_addr(void)
-{
-	size_t start, offset;
-	char buf[256];
-	FILE *f;
-
-	f = fopen("/proc/self/maps", "r");
-	if (!f)
-		return -errno;
-
-	while (fscanf(f, "%zx-%*x %s %zx %*[^\n]\n",
-		      &start, buf, &offset) == 3) {
-		if (strcmp(buf, "r-xp") == 0) {
-			fclose(f);
-			return start - offset;
-		}
+		if ((((*insn & OP_RT_RA_MASK) == ADDIS_R2_R12) ||
+		     ((*insn & OP_RT_RA_MASK) == LIS_R2)) &&
+		    ((*(insn + 1) & OP_RT_RA_MASK) == ADDI_R2_R2))
+			return (uintptr_t)(insn + 2) - start + base;
 	}
-
-	fclose(f);
-	return -EINVAL;
+#endif
+	return (uintptr_t)addr - start + base;
 }
 
 ssize_t get_rel_offset(uintptr_t addr)
+1 -2
tools/testing/selftests/bpf/trace_helpers.h
···
 
 void read_trace_pipe(void);
 
-ssize_t get_uprobe_offset(const void *addr, ssize_t base);
-ssize_t get_base_addr(void);
+ssize_t get_uprobe_offset(const void *addr);
 ssize_t get_rel_offset(uintptr_t addr);
 
 #endif
+78 -3
tools/testing/selftests/bpf/verifier/sock.c
···
 	.result = ACCEPT,
 },
 {
-	"sk_fullsock(skb->sk): sk->dst_port [narrow load]",
+	"sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)",
+	.insns = {
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+	.result = ACCEPT,
+},
+{
+	"sk_fullsock(skb->sk): sk->dst_port [half load]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
···
 	.result = ACCEPT,
 },
 {
-	"sk_fullsock(skb->sk): sk->dst_port [load 2nd byte]",
+	"sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
···
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
+	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+	.result = REJECT,
+	.errstr = "invalid sock access",
+},
+{
+	"sk_fullsock(skb->sk): sk->dst_port [byte load]",
+	.insns = {
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
+	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+	.result = ACCEPT,
+},
+{
+	"sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)",
+	.insns = {
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+	.result = REJECT,
+	.errstr = "invalid sock access",
+},
+{
+	"sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)",
+	.insns = {
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
+4 -4
tools/testing/selftests/bpf/xdp_redirect_multi.c
···
 	int i;
 
 	for (i = 0; ifaces[i] > 0; i++) {
-		if (bpf_get_link_xdp_id(ifaces[i], &prog_id, xdp_flags)) {
-			printf("bpf_get_link_xdp_id failed\n");
+		if (bpf_xdp_query_id(ifaces[i], xdp_flags, &prog_id)) {
+			printf("bpf_xdp_query_id failed\n");
 			exit(1);
 		}
 		if (prog_id)
-			bpf_set_link_xdp_fd(ifaces[i], -1, xdp_flags);
+			bpf_xdp_detach(ifaces[i], xdp_flags, NULL);
 	}
 
 	exit(0);
···
 	}
 
 	/* bind prog_fd to each interface */
-	ret = bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags);
+	ret = bpf_xdp_attach(ifindex, prog_fd, xdp_flags, NULL);
 	if (ret) {
 		printf("Set xdp fd failed on %d\n", ifindex);
 		goto err_out;
+2 -2
tools/testing/selftests/bpf/xdping.c
···
 
 static void cleanup(int sig)
 {
-	bpf_set_link_xdp_fd(ifindex, -1, xdp_flags);
+	bpf_xdp_detach(ifindex, xdp_flags, NULL);
 	if (sig)
 		exit(1);
 }
···
 
 	printf("XDP setup disrupts network connectivity, hit Ctrl+C to quit\n");
 
-	if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) {
+	if (bpf_xdp_attach(ifindex, prog_fd, xdp_flags, NULL) < 0) {
 		fprintf(stderr, "Link set xdp fd failed for %s\n", ifname);
 		goto done;
 	}
+49 -31
tools/testing/selftests/bpf/xdpxceiver.c
···
 }
 
 static int xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
-				struct ifobject *ifobject, u32 qid)
+				struct ifobject *ifobject, bool shared)
 {
-	struct xsk_socket_config cfg;
+	struct xsk_socket_config cfg = {};
 	struct xsk_ring_cons *rxr;
 	struct xsk_ring_prod *txr;
 
 	xsk->umem = umem;
 	cfg.rx_size = xsk->rxqsize;
 	cfg.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS;
-	cfg.libbpf_flags = 0;
+	cfg.libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
 	cfg.xdp_flags = ifobject->xdp_flags;
 	cfg.bind_flags = ifobject->bind_flags;
+	if (shared)
+		cfg.bind_flags |= XDP_SHARED_UMEM;
 
 	txr = ifobject->tx_on ? &xsk->tx : NULL;
 	rxr = ifobject->rx_on ? &xsk->rx : NULL;
-	return xsk_socket__create(&xsk->xsk, ifobject->ifname, qid, umem->umem, rxr, txr, &cfg);
+	return xsk_socket__create(&xsk->xsk, ifobject->ifname, 0, umem->umem, rxr, txr, &cfg);
 }
 
 static struct option long_options[] = {
···
 	for (i = 0; i < MAX_INTERFACES; i++) {
 		struct ifobject *ifobj = i ? ifobj_rx : ifobj_tx;
 
-		ifobj->umem = &ifobj->umem_arr[0];
 		ifobj->xsk = &ifobj->xsk_arr[0];
 		ifobj->use_poll = false;
 		ifobj->pacing_on = true;
···
 			ifobj->tx_on = false;
 		}
 
+		memset(ifobj->umem, 0, sizeof(*ifobj->umem));
+		ifobj->umem->num_frames = DEFAULT_UMEM_BUFFERS;
+		ifobj->umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+
 		for (j = 0; j < MAX_SOCKETS; j++) {
-			memset(&ifobj->umem_arr[j], 0, sizeof(ifobj->umem_arr[j]));
 			memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
-			ifobj->umem_arr[j].num_frames = DEFAULT_UMEM_BUFFERS;
-			ifobj->umem_arr[j].frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
 			ifobj->xsk_arr[j].rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS;
 		}
 	}
···
 
 static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 {
+	u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
 	int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
+	int ret, ifindex;
+	void *bufs;
 	u32 i;
 
 	ifobject->ns_fd = switch_namespace(ifobject->nsname);
···
 	if (ifobject->umem->unaligned_mode)
 		mmap_flags |= MAP_HUGETLB;
 
+	bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
+	if (bufs == MAP_FAILED)
+		exit_with_error(errno);
+
+	ret = xsk_configure_umem(ifobject->umem, bufs, umem_sz);
+	if (ret)
+		exit_with_error(-ret);
+
 	for (i = 0; i < test->nb_sockets; i++) {
-		u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
 		u32 ctr = 0;
-		void *bufs;
-		int ret;
-
-		bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
-		if (bufs == MAP_FAILED)
-			exit_with_error(errno);
-
-		ret = xsk_configure_umem(&ifobject->umem_arr[i], bufs, umem_sz);
-		if (ret)
-			exit_with_error(-ret);
 
 		while (ctr++ < SOCK_RECONF_CTR) {
-			ret = xsk_configure_socket(&ifobject->xsk_arr[i], &ifobject->umem_arr[i],
-						   ifobject, i);
+			ret = xsk_configure_socket(&ifobject->xsk_arr[i], ifobject->umem,
+						   ifobject, !!i);
 			if (!ret)
 				break;
 
···
 		}
 	}
 
-	ifobject->umem = &ifobject->umem_arr[0];
 	ifobject->xsk = &ifobject->xsk_arr[0];
+
+	if (!ifobject->rx_on)
+		return;
+
+	ifindex = if_nametoindex(ifobject->ifname);
+	if (!ifindex)
+		exit_with_error(errno);
+
+	ret = xsk_setup_xdp_prog(ifindex, &ifobject->xsk_map_fd);
+	if (ret)
+		exit_with_error(-ret);
+
+	ret = xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
+	if (ret)
+		exit_with_error(-ret);
 }
 
 static void testapp_cleanup_xsk_res(struct ifobject *ifobj)
···
 
 static void swap_xsk_resources(struct ifobject *ifobj_tx, struct ifobject *ifobj_rx)
 {
+	int ret;
+
 	xsk_socket__delete(ifobj_tx->xsk->xsk);
-	xsk_umem__delete(ifobj_tx->umem->umem);
 	xsk_socket__delete(ifobj_rx->xsk->xsk);
-	xsk_umem__delete(ifobj_rx->umem->umem);
-	ifobj_tx->umem = &ifobj_tx->umem_arr[1];
 	ifobj_tx->xsk = &ifobj_tx->xsk_arr[1];
-	ifobj_rx->umem = &ifobj_rx->umem_arr[1];
 	ifobj_rx->xsk = &ifobj_rx->xsk_arr[1];
+
+	ret = xsk_socket__update_xskmap(ifobj_rx->xsk->xsk, ifobj_rx->xsk_map_fd);
+	if (ret)
+		exit_with_error(-ret);
 }
 
 static void testapp_bpf_res(struct test_spec *test)
···
 	if (!ifobj->xsk_arr)
 		goto out_xsk_arr;
 
-	ifobj->umem_arr = calloc(MAX_SOCKETS, sizeof(*ifobj->umem_arr));
-	if (!ifobj->umem_arr)
-		goto out_umem_arr;
+	ifobj->umem = calloc(1, sizeof(*ifobj->umem));
+	if (!ifobj->umem)
+		goto out_umem;
 
 	return ifobj;
 
-out_umem_arr:
+out_umem:
 	free(ifobj->xsk_arr);
 out_xsk_arr:
 	free(ifobj);
···
 
 static void ifobject_delete(struct ifobject *ifobj)
 {
-	free(ifobj->umem_arr);
+	free(ifobj->umem);
 	free(ifobj->xsk_arr);
 	free(ifobj);
 }
+1 -1
tools/testing/selftests/bpf/xdpxceiver.h
···
 	struct xsk_socket_info *xsk;
 	struct xsk_socket_info *xsk_arr;
 	struct xsk_umem_info *umem;
-	struct xsk_umem_info *umem_arr;
 	thread_func_t func_ptr;
 	struct pkt_stream *pkt_stream;
 	int ns_fd;
+	int xsk_map_fd;
 	u32 dst_ip;
 	u32 src_ip;
 	u32 xdp_flags;