Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
bpf-next 2021-10-02

We've added 85 non-merge commits during the last 15 day(s) which contain
a total of 132 files changed, 13779 insertions(+), 6724 deletions(-).

The main changes are:

1) Massive update on test_bpf.ko coverage for JITs as preparatory work for
an upcoming MIPS eBPF JIT, from Johan Almbladh.

2) Add a batched interface for RX buffer allocation in AF_XDP buffer pool,
with driver support for i40e and ice from Magnus Karlsson.

3) Add legacy uprobe support to libbpf to complement recently merged legacy
kprobe support, from Andrii Nakryiko.

4) Add bpf_trace_vprintk() as variadic printk helper, from Dave Marchevsky.

5) Support saving the register state in verifier when spilling a <8-byte bounded
scalar to the stack, from Martin Lau.

6) Add libbpf opt-in for stricter BPF program section name handling as part
of libbpf 1.0 effort, from Andrii Nakryiko.

7) Add a document to help clarify BPF licensing, from Alexei Starovoitov.

8) Fix skel_internal.h to propagate errno if the loader indicates an internal
error, from Kumar Kartikeya Dwivedi.

9) Fix build warnings with -Wcast-function-type so that the option can later
be enabled by default for the kernel, from Kees Cook.

10) Fix libbpf to ignore STT_SECTION symbols in legacy map definitions as it
otherwise errors out when encountering them, from Toke Høiland-Jørgensen.

11) Teach libbpf to recognize specialized maps (such as for perf RB) and
internally remove BTF type IDs when creating them, from Hengqi Chen.

12) Various fixes and improvements to BPF selftests.
====================

Link: https://lore.kernel.org/r/20211002001327.15169-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+8209 -1154
+92
Documentation/bpf/bpf_licensing.rst
=============
BPF licensing
=============

Background
==========

* Classic BPF was BSD licensed

"BPF" was originally introduced as BSD Packet Filter in
http://www.tcpdump.org/papers/bpf-usenix93.pdf. The corresponding instruction
set and its implementation came from BSD with BSD license. That original
instruction set is now known as "classic BPF".

However an instruction set is a specification for machine-language interaction,
similar to a programming language. It is not a code. Therefore, the
application of a BSD license may be misleading in a certain context, as the
instruction set may enjoy no copyright protection.

* eBPF (extended BPF) instruction set continues to be BSD

In 2014, the classic BPF instruction set was significantly extended. We
typically refer to this instruction set as eBPF to disambiguate it from cBPF.
The eBPF instruction set is still BSD licensed.

Implementations of eBPF
=======================

Using the eBPF instruction set requires implementing code in both kernel space
and user space.

In Linux Kernel
---------------

The reference implementations of the eBPF interpreter and various just-in-time
compilers are part of Linux and are GPLv2 licensed. The implementation of
eBPF helper functions is also GPLv2 licensed. Interpreters, JITs, helpers,
and verifiers are called eBPF runtime.

In User Space
-------------

There are also implementations of eBPF runtime (interpreter, JITs, helper
functions) under
Apache2 (https://github.com/iovisor/ubpf),
MIT (https://github.com/qmonnet/rbpf), and
BSD (https://github.com/DPDK/dpdk/blob/main/lib/librte_bpf).

In HW
-----

The HW can choose to execute eBPF instruction natively and provide eBPF runtime
in HW or via the use of implementing firmware with a proprietary license.

In other operating systems
--------------------------

Other kernels or user space implementations of eBPF instruction set and runtime
can have proprietary licenses.

Using BPF programs in the Linux kernel
======================================

Linux Kernel (while being GPLv2) allows linking of proprietary kernel modules
under these rules:
Documentation/process/license-rules.rst

When a kernel module is loaded, the linux kernel checks which functions it
intends to use. If any function is marked as "GPL only," the corresponding
module or program has to have GPL compatible license.

Loading BPF program into the Linux kernel is similar to loading a kernel
module. BPF is loaded at run time and not statically linked to the Linux
kernel. BPF program loading follows the same license checking rules as kernel
modules. BPF programs can be proprietary if they don't use "GPL only" BPF
helper functions.

Further, some BPF program types - Linux Security Modules (LSM) and TCP
Congestion Control (struct_ops), as of Aug 2021 - are required to be GPL
compatible even if they don't use "GPL only" helper functions directly. The
registration step of LSM and TCP congestion control modules of the Linux
kernel is done through EXPORT_SYMBOL_GPL kernel functions. In that sense LSM
and struct_ops BPF programs are implicitly calling "GPL only" functions.
The same restriction applies to BPF programs that call kernel functions
directly via unstable interface also known as "kfunc".

Packaging BPF programs with user space applications
====================================================

Generally, proprietary-licensed applications and GPL licensed BPF programs
written for the Linux kernel in the same package can co-exist because they are
separate executable processes. This applies to both cBPF and eBPF programs.
+9
Documentation/bpf/index.rst
···
   s390


+Licensing
+=========
+
+.. toctree::
+   :maxdepth: 1
+
+   bpf_licensing
+
+
 Other
 =====
 
+26 -28
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···
 {
 	u16 ntu = rx_ring->next_to_use;
 	union i40e_rx_desc *rx_desc;
-	struct xdp_buff **bi, *xdp;
+	struct xdp_buff **xdp;
+	u32 nb_buffs, i;
 	dma_addr_t dma;
-	bool ok = true;
 
 	rx_desc = I40E_RX_DESC(rx_ring, ntu);
-	bi = i40e_rx_bi(rx_ring, ntu);
-	do {
-		xdp = xsk_buff_alloc(rx_ring->xsk_pool);
-		if (!xdp) {
-			ok = false;
-			goto no_buffers;
-		}
-		*bi = xdp;
-		dma = xsk_buff_xdp_get_dma(xdp);
+	xdp = i40e_rx_bi(rx_ring, ntu);
+
+	nb_buffs = min_t(u16, count, rx_ring->count - ntu);
+	nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
+	if (!nb_buffs)
+		return false;
+
+	i = nb_buffs;
+	while (i--) {
+		dma = xsk_buff_xdp_get_dma(*xdp);
 		rx_desc->read.pkt_addr = cpu_to_le64(dma);
 		rx_desc->read.hdr_addr = 0;
 
 		rx_desc++;
-		bi++;
-		ntu++;
-
-		if (unlikely(ntu == rx_ring->count)) {
-			rx_desc = I40E_RX_DESC(rx_ring, 0);
-			bi = i40e_rx_bi(rx_ring, 0);
-			ntu = 0;
-		}
-	} while (--count);
-
-no_buffers:
-	if (rx_ring->next_to_use != ntu) {
-		/* clear the status bits for the next_to_use descriptor */
-		rx_desc->wb.qword1.status_error_len = 0;
-		i40e_release_rx_desc(rx_ring, ntu);
+		xdp++;
 	}
 
-	return ok;
+	ntu += nb_buffs;
+	if (ntu == rx_ring->count) {
+		rx_desc = I40E_RX_DESC(rx_ring, 0);
+		xdp = i40e_rx_bi(rx_ring, 0);
+		ntu = 0;
+	}
+
+	/* clear the status bits for the next_to_use descriptor */
+	rx_desc->wb.qword1.status_error_len = 0;
+	i40e_release_rx_desc(rx_ring, ntu);
+
+	return count == nb_buffs ? true : false;
 }
 
 /**
···
 			break;
 
 		bi = *i40e_rx_bi(rx_ring, next_to_clean);
-		bi->data_end = bi->data + size;
+		xsk_buff_set_size(bi, size);
 		xsk_buff_dma_sync_for_cpu(bi, rx_ring->xsk_pool);
 
 		xdp_res = i40e_run_xdp_zc(rx_ring, bi);
+5 -11
drivers/net/ethernet/intel/ice/ice_txrx.h
···
 };
 
 struct ice_rx_buf {
-	union {
-		struct {
-			dma_addr_t dma;
-			struct page *page;
-			unsigned int page_offset;
-			u16 pagecnt_bias;
-		};
-		struct {
-			struct xdp_buff *xdp;
-		};
-	};
+	dma_addr_t dma;
+	struct page *page;
+	unsigned int page_offset;
+	u16 pagecnt_bias;
 };
 
 struct ice_q_stats {
···
 	union {
 		struct ice_tx_buf *tx_buf;
 		struct ice_rx_buf *rx_buf;
+		struct xdp_buff **xdp_buf;
 	};
 	/* CL2 - 2nd cacheline starts here */
 	u16 q_index;		/* Queue number of ring */
+44 -50
drivers/net/ethernet/intel/ice/ice_xsk.c
···
 {
 	union ice_32b_rx_flex_desc *rx_desc;
 	u16 ntu = rx_ring->next_to_use;
-	struct ice_rx_buf *rx_buf;
-	bool ok = true;
+	struct xdp_buff **xdp;
+	u32 nb_buffs, i;
 	dma_addr_t dma;
 
-	if (!count)
-		return true;
-
 	rx_desc = ICE_RX_DESC(rx_ring, ntu);
-	rx_buf = &rx_ring->rx_buf[ntu];
+	xdp = &rx_ring->xdp_buf[ntu];
 
-	do {
-		rx_buf->xdp = xsk_buff_alloc(rx_ring->xsk_pool);
-		if (!rx_buf->xdp) {
-			ok = false;
-			break;
-		}
+	nb_buffs = min_t(u16, count, rx_ring->count - ntu);
+	nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
+	if (!nb_buffs)
+		return false;
 
-		dma = xsk_buff_xdp_get_dma(rx_buf->xdp);
+	i = nb_buffs;
+	while (i--) {
+		dma = xsk_buff_xdp_get_dma(*xdp);
 		rx_desc->read.pkt_addr = cpu_to_le64(dma);
-		rx_desc->wb.status_error0 = 0;
 
 		rx_desc++;
-		rx_buf++;
-		ntu++;
-
-		if (unlikely(ntu == rx_ring->count)) {
-			rx_desc = ICE_RX_DESC(rx_ring, 0);
-			rx_buf = rx_ring->rx_buf;
-			ntu = 0;
-		}
-	} while (--count);
-
-	if (rx_ring->next_to_use != ntu) {
-		/* clear the status bits for the next_to_use descriptor */
-		rx_desc->wb.status_error0 = 0;
-		ice_release_rx_desc(rx_ring, ntu);
+		xdp++;
 	}
 
-	return ok;
+	ntu += nb_buffs;
+	if (ntu == rx_ring->count) {
+		rx_desc = ICE_RX_DESC(rx_ring, 0);
+		xdp = rx_ring->xdp_buf;
+		ntu = 0;
+	}
+
+	/* clear the status bits for the next_to_use descriptor */
+	rx_desc->wb.status_error0 = 0;
+	ice_release_rx_desc(rx_ring, ntu);
+
+	return count == nb_buffs ? true : false;
 }
 
 /**
···
 /**
  * ice_construct_skb_zc - Create an sk_buff from zero-copy buffer
  * @rx_ring: Rx ring
- * @rx_buf: zero-copy Rx buffer
+ * @xdp_arr: Pointer to the SW ring of xdp_buff pointers
  *
  * This function allocates a new skb from a zero-copy Rx buffer.
  *
  * Returns the skb on success, NULL on failure.
  */
 static struct sk_buff *
-ice_construct_skb_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
+ice_construct_skb_zc(struct ice_ring *rx_ring, struct xdp_buff **xdp_arr)
 {
-	unsigned int metasize = rx_buf->xdp->data - rx_buf->xdp->data_meta;
-	unsigned int datasize = rx_buf->xdp->data_end - rx_buf->xdp->data;
-	unsigned int datasize_hard = rx_buf->xdp->data_end -
-				     rx_buf->xdp->data_hard_start;
+	struct xdp_buff *xdp = *xdp_arr;
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	unsigned int datasize = xdp->data_end - xdp->data;
+	unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
 	struct sk_buff *skb;
 
 	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
···
 	if (unlikely(!skb))
 		return NULL;
 
-	skb_reserve(skb, rx_buf->xdp->data - rx_buf->xdp->data_hard_start);
-	memcpy(__skb_put(skb, datasize), rx_buf->xdp->data, datasize);
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
 	if (metasize)
 		skb_metadata_set(skb, metasize);
 
-	xsk_buff_free(rx_buf->xdp);
-	rx_buf->xdp = NULL;
+	xsk_buff_free(xdp);
+	*xdp_arr = NULL;
 	return skb;
 }
 
···
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
 		unsigned int size, xdp_res = 0;
-		struct ice_rx_buf *rx_buf;
+		struct xdp_buff **xdp;
 		struct sk_buff *skb;
 		u16 stat_err_bits;
 		u16 vlan_tag = 0;
···
 		if (!size)
 			break;
 
-		rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
-		rx_buf->xdp->data_end = rx_buf->xdp->data + size;
-		xsk_buff_dma_sync_for_cpu(rx_buf->xdp, rx_ring->xsk_pool);
+		xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean];
+		xsk_buff_set_size(*xdp, size);
+		xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool);
 
-		xdp_res = ice_run_xdp_zc(rx_ring, rx_buf->xdp);
+		xdp_res = ice_run_xdp_zc(rx_ring, *xdp);
 		if (xdp_res) {
 			if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
 				xdp_xmit |= xdp_res;
 			else
-				xsk_buff_free(rx_buf->xdp);
+				xsk_buff_free(*xdp);
 
-			rx_buf->xdp = NULL;
+			*xdp = NULL;
 			total_rx_bytes += size;
 			total_rx_packets++;
 			cleaned_count++;
···
 		}
 
 		/* XDP_PASS path */
-		skb = ice_construct_skb_zc(rx_ring, rx_buf);
+		skb = ice_construct_skb_zc(rx_ring, xdp);
 		if (!skb) {
 			rx_ring->rx_stats.alloc_buf_failed++;
 			break;
···
 	u16 i;
 
 	for (i = 0; i < rx_ring->count; i++) {
-		struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i];
+		struct xdp_buff **xdp = &rx_ring->xdp_buf[i];
 
-		if (!rx_buf->xdp)
+		if (!xdp)
 			continue;
 
-		rx_buf->xdp = NULL;
+		*xdp = NULL;
 	}
 }
 
+6 -1
include/linux/bpf.h
···
 extern spinlock_t btf_idr_lock;
 extern struct kobject *btf_kobj;
 
+typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
 typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
 					struct bpf_iter_aux_info *aux);
 typedef void (*bpf_iter_fini_seq_priv_t)(void *private_data);
···
 	int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env,
 					      struct bpf_func_state *caller,
 					      struct bpf_func_state *callee);
-	int (*map_for_each_callback)(struct bpf_map *map, void *callback_fn,
+	int (*map_for_each_callback)(struct bpf_map *map,
+				     bpf_callback_t callback_fn,
 				     void *callback_ctx, u64 flags);
 
 	/* BTF name and id of struct allocated by map_alloc */
···
 int bpf_prog_calc_tag(struct bpf_prog *fp);
 
 const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
+const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void);
 
 typedef unsigned long (*bpf_ctx_copy_t)(void *dst, const void *src,
 					unsigned long off, unsigned long len);
···
 
 struct btf_id_set;
 bool btf_id_set_contains(const struct btf_id_set *set, u32 id);
+
+#define MAX_BPRINTF_VARARGS		12
 
 int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
 			u32 **bin_buf, u32 num_args);
+3 -4
include/linux/filter.h
···
 		.off   = 0,					\
 		.imm   = TGT })
 
-/* Function call */
+/* Convert function address to BPF immediate */
 
-#define BPF_CAST_CALL(x)					\
-		((u64 (*)(u64, u64, u64, u64, u64))(x))
+#define BPF_CALL_IMM(x)	((void *)(x) - (void *)__bpf_call_base)
 
 #define BPF_EMIT_CALL(FUNC)					\
 	((struct bpf_insn) {					\
···
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
-		.imm   = ((FUNC) - __bpf_call_base) })
+		.imm   = BPF_CALL_IMM(FUNC) })
 
 /* Raw code statement block */
 
+4 -4
include/net/xdp.h
···
  * level RX-ring queues.  It is information that is specific to how
  * the driver have configured a given RX-ring queue.
  *
- * Each xdp_buff frame received in the driver carry a (pointer)
+ * Each xdp_buff frame received in the driver carries a (pointer)
  * reference to this xdp_rxq_info structure.  This provides the XDP
  * data-path read-access to RX-info for both kernel and bpf-side
  * (limited subset).
  *
  * For now, direct access is only safe while running in NAPI/softirq
- * context.  Contents is read-mostly and must not be updated during
+ * context.  Contents are read-mostly and must not be updated during
  * driver NAPI/softirq poll.
  *
  * The driver usage API is a register and unregister API.
···
  * can be attached as long as it doesn't change the underlying
  * RX-ring.  If the RX-ring does change significantly, the NIC driver
  * naturally need to stop the RX-ring before purging and reallocating
- * memory.  In that process the driver MUST call unregistor (which
- * also apply for driver shutdown and unload).  The register API is
+ * memory.  In that process the driver MUST call unregister (which
+ * also applies for driver shutdown and unload).  The register API is
  * also mandatory during RX-ring setup.
  */
 
+22
include/net/xdp_sock_drv.h
···
 	return xp_alloc(pool);
 }
 
+/* Returns as many entries as possible up to max. 0 <= N <= max. */
+static inline u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+{
+	return xp_alloc_batch(pool, xdp, max);
+}
+
 static inline bool xsk_buff_can_alloc(struct xsk_buff_pool *pool, u32 count)
 {
 	return xp_can_alloc(pool, count);
···
 	struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
 
 	xp_free(xskb);
+}
+
+static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
+{
+	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
+	xdp->data_meta = xdp->data;
+	xdp->data_end = xdp->data + size;
 }
 
 static inline dma_addr_t xsk_buff_raw_get_dma(struct xsk_buff_pool *pool,
···
 	return NULL;
 }
 
+static inline u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+{
+	return 0;
+}
+
 static inline bool xsk_buff_can_alloc(struct xsk_buff_pool *pool, u32 count)
 {
 	return false;
 }
 
 static inline void xsk_buff_free(struct xdp_buff *xdp)
+{
+}
+
+static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
 {
 }
 
+46 -2
include/net/xsk_buff_pool.h
···
 #include <linux/if_xdp.h>
 #include <linux/types.h>
 #include <linux/dma-mapping.h>
+#include <linux/bpf.h>
 #include <net/xdp.h>
 
 struct xsk_buff_pool;
···
 	dma_addr_t dma;
 	dma_addr_t frame_dma;
 	struct xsk_buff_pool *pool;
-	bool unaligned;
 	u64 orig_addr;
 	struct list_head free_list_node;
 };
···
 	u32 free_heads_cnt;
 	u32 headroom;
 	u32 chunk_size;
+	u32 chunk_shift;
 	u32 frame_len;
 	u8 cached_need_wakeup;
 	bool uses_need_wakeup;
···
 	struct xdp_buff_xsk *free_heads[];
 };
 
+/* Masks for xdp_umem_page flags.
+ * The low 12-bits of the addr will be 0 since this is the page address, so we
+ * can use them for flags.
+ */
+#define XSK_NEXT_PG_CONTIG_SHIFT 0
+#define XSK_NEXT_PG_CONTIG_MASK BIT_ULL(XSK_NEXT_PG_CONTIG_SHIFT)
+
 /* AF_XDP core. */
 struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 						struct xdp_umem *umem);
···
 int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
 			 struct net_device *dev, u16 queue_id);
 void xp_destroy(struct xsk_buff_pool *pool);
-void xp_release(struct xdp_buff_xsk *xskb);
 void xp_get_pool(struct xsk_buff_pool *pool);
 bool xp_put_pool(struct xsk_buff_pool *pool);
 void xp_clear_dev(struct xsk_buff_pool *pool);
···
 /* AF_XDP, and XDP core. */
 void xp_free(struct xdp_buff_xsk *xskb);
 
+static inline void xp_init_xskb_addr(struct xdp_buff_xsk *xskb, struct xsk_buff_pool *pool,
+				     u64 addr)
+{
+	xskb->orig_addr = addr;
+	xskb->xdp.data_hard_start = pool->addrs + addr + pool->headroom;
+}
+
+static inline void xp_init_xskb_dma(struct xdp_buff_xsk *xskb, struct xsk_buff_pool *pool,
+				    dma_addr_t *dma_pages, u64 addr)
+{
+	xskb->frame_dma = (dma_pages[addr >> PAGE_SHIFT] & ~XSK_NEXT_PG_CONTIG_MASK) +
+			  (addr & ~PAGE_MASK);
+	xskb->dma = xskb->frame_dma + pool->headroom + XDP_PACKET_HEADROOM;
+}
+
 /* AF_XDP ZC drivers, via xdp_sock_buff.h */
 void xp_set_rxq_info(struct xsk_buff_pool *pool, struct xdp_rxq_info *rxq);
 int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 	       unsigned long attrs, struct page **pages, u32 nr_pages);
 void xp_dma_unmap(struct xsk_buff_pool *pool, unsigned long attrs);
 struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool);
+u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max);
 bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count);
 void *xp_raw_get_data(struct xsk_buff_pool *pool, u64 addr);
 dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr);
···
 {
 	return xp_unaligned_extract_addr(addr) +
 		xp_unaligned_extract_offset(addr);
+}
+
+static inline u32 xp_aligned_extract_idx(struct xsk_buff_pool *pool, u64 addr)
+{
+	return xp_aligned_extract_addr(pool, addr) >> pool->chunk_shift;
+}
+
+static inline void xp_release(struct xdp_buff_xsk *xskb)
+{
+	if (xskb->pool->unaligned)
+		xskb->pool->free_heads[xskb->pool->free_heads_cnt++] = xskb;
+}
+
+static inline u64 xp_get_handle(struct xdp_buff_xsk *xskb)
+{
+	u64 offset = xskb->xdp.data - xskb->xdp.data_hard_start;
+
+	offset += xskb->pool->headroom;
+	if (!xskb->pool->unaligned)
+		return xskb->orig_addr + offset;
+	return xskb->orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
 }
 
 #endif /* XSK_BUFF_POOL_H_ */
+14 -2
include/uapi/linux/bpf.h
···
  *		arguments. The *data* are a **u64** array and corresponding format string
  *		values are stored in the array. For strings and pointers where pointees
  *		are accessed, only the pointer values are stored in the *data* array.
- *		The *data_len* is the size of *data* in bytes.
+ *		The *data_len* is the size of *data* in bytes - must be a multiple of 8.
  *
  *		Formats **%s**, **%p{i,I}{4,6}** requires to read kernel memory.
  *		Reading kernel memory may fail due to either invalid address or
···
  *		Each format specifier in **fmt** corresponds to one u64 element
  *		in the **data** array. For strings and pointers where pointees
  *		are accessed, only the pointer values are stored in the *data*
- *		array. The *data_len* is the size of *data* in bytes.
+ *		array. The *data_len* is the size of *data* in bytes - must be
+ *		a multiple of 8.
  *
  *		Formats **%s** and **%p{i,I}{4,6}** require to read kernel
  *		memory. Reading kernel memory may fail due to either invalid
···
  *		**-EINVAL** if *flags* is not zero.
  *
  *		**-ENOENT** if architecture does not support branch records.
+ *
+ * long bpf_trace_vprintk(const char *fmt, u32 fmt_size, const void *data, u32 data_len)
+ *	Description
+ *		Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
+ *		to format and can handle more format args as a result.
+ *
+ *		Arguments are to be used as in **bpf_seq_printf**\ () helper.
+ *	Return
+ *		The number of bytes written to the buffer, or a negative error
+ *		in case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
···
 	FN(get_attach_cookie),		\
 	FN(task_pt_regs),		\
 	FN(get_branch_snapshot),	\
+	FN(trace_vprintk),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
+3 -4
kernel/bpf/arraymap.c
···
 	.seq_priv_size		= sizeof(struct bpf_iter_seq_array_map_info),
 };
 
-static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
+static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
 				   void *callback_ctx, u64 flags)
 {
 	u32 i, key, num_elems = 0;
···
 		val = array->value + array->elem_size * i;
 		num_elems++;
 		key = i;
-		ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
-					(u64)(long)&key, (u64)(long)val,
-					(u64)(long)callback_ctx, 0);
+		ret = callback_fn((u64)(long)map, (u64)(long)&key,
+				  (u64)(long)val, (u64)(long)callback_ctx, 0);
 		/* return value: 0 - continue, 1 - stop and return */
 		if (ret)
 			break;
+5
kernel/bpf/core.c
···
 	return NULL;
 }
 
+const struct bpf_func_proto * __weak bpf_get_trace_vprintk_proto(void)
+{
+	return NULL;
+}
+
 u64 __weak
 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
 		 void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
+6 -7
kernel/bpf/hashtab.c
···
 
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
 		     (void *(*)(struct bpf_map *map, void *key))NULL));
-	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
+	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
 				offsetof(struct htab_elem, key) +
···
 
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
 		     (void *(*)(struct bpf_map *map, void *key))NULL));
-	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
+	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 4);
 	*insn++ = BPF_LDX_MEM(BPF_B, ref_reg, ret,
 			      offsetof(struct htab_elem, lru_node) +
···
 	.seq_priv_size		= sizeof(struct bpf_iter_seq_hash_map_info),
 };
 
-static int bpf_for_each_hash_elem(struct bpf_map *map, void *callback_fn,
+static int bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn,
 				  void *callback_ctx, u64 flags)
 {
 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
···
 			val = elem->key + roundup_key_size;
 		}
 		num_elems++;
-		ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
-				(u64)(long)key, (u64)(long)val,
-				(u64)(long)callback_ctx, 0);
+		ret = callback_fn((u64)(long)map, (u64)(long)key,
+				  (u64)(long)val, (u64)(long)callback_ctx, 0);
 		/* return value: 0 - continue, 1 - stop and return */
 		if (ret) {
 			rcu_read_unlock();
···
 
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
 		     (void *(*)(struct bpf_map *map, void *key))NULL));
-	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
+	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 2);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
 				offsetof(struct htab_elem, key) +
+5 -6
kernel/bpf/helpers.c
···
 	return err;
 }
 
-#define MAX_SNPRINTF_VARARGS		12
-
 BPF_CALL_5(bpf_snprintf, char *, str, u32, str_size, char *, fmt,
 	   const void *, data, u32, data_len)
 {
 	int err, num_args;
 	u32 *bin_args;
 
-	if (data_len % 8 || data_len > MAX_SNPRINTF_VARARGS * 8 ||
+	if (data_len % 8 || data_len > MAX_BPRINTF_VARARGS * 8 ||
 	    (data_len && !data))
 		return -EINVAL;
 	num_args = data_len / 8;
···
 	struct bpf_hrtimer *t = container_of(hrtimer, struct bpf_hrtimer, timer);
 	struct bpf_map *map = t->map;
 	void *value = t->value;
-	void *callback_fn;
+	bpf_callback_t callback_fn;
 	void *key;
 	u32 idx;
···
 		key = value - round_up(map->key_size, 8);
 	}
 
-	BPF_CAST_CALL(callback_fn)((u64)(long)map, (u64)(long)key,
-				   (u64)(long)value, 0, 0);
+	callback_fn((u64)(long)map, (u64)(long)key, (u64)(long)value, 0, 0);
 	/* The verifier checked that return value is zero. */
 
 	this_cpu_write(hrtimer_running, NULL);
···
 		return &bpf_snprintf_proto;
 	case BPF_FUNC_task_pt_regs:
 		return &bpf_task_pt_regs_proto;
+	case BPF_FUNC_trace_vprintk:
+		return bpf_get_trace_vprintk_proto();
 	default:
 		return NULL;
 	}
+81 -44
kernel/bpf/verifier.c
···
 	return btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off);
 }
 
+/* The reg state of a pointer or a bounded scalar was saved when
+ * it was spilled to the stack.
+ */
+static bool is_spilled_reg(const struct bpf_stack_state *stack)
+{
+	return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
+}
+
+static void scrub_spilled_slot(u8 *stype)
+{
+	if (*stype != STACK_INVALID)
+		*stype = STACK_MISC;
+}
+
 static void print_verifier_state(struct bpf_verifier_env *env,
 				 const struct bpf_func_state *state)
 {
···
 			continue;
 		verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
 		print_liveness(env, state->stack[i].spilled_ptr.live);
-		if (state->stack[i].slot_type[0] == STACK_SPILL) {
+		if (is_spilled_reg(&state->stack[i])) {
 			reg = &state->stack[i].spilled_ptr;
 			t = reg->type;
 			verbose(env, "=%s", reg_type_str[t]);
···
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = BPF_CAST_CALL(addr) - __bpf_call_base;
+	desc->imm = BPF_CALL_IMM(addr);
 	err = btf_distill_func_proto(&env->log, btf_vmlinux,
 				     func_proto, func_name,
 				     &desc->func_model);
···
 				reg->precise = true;
 			}
 			for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
-				if (func->stack[j].slot_type[0] != STACK_SPILL)
+				if (!is_spilled_reg(&func->stack[j]))
 					continue;
 				reg = &func->stack[j].spilled_ptr;
 				if (reg->type != SCALAR_VALUE)
···
 	}
 
 	while (spi >= 0) {
-		if (func->stack[spi].slot_type[0] != STACK_SPILL) {
+		if (!is_spilled_reg(&func->stack[spi])) {
 			stack_mask = 0;
 			break;
 		}
···
 				return 0;
 			}
 
-			if (func->stack[i].slot_type[0] != STACK_SPILL) {
+			if (!is_spilled_reg(&func->stack[i])) {
 				stack_mask &= ~(1ull << i);
 				continue;
 			}
···
 }
 
 static void save_register_state(struct bpf_func_state *state,
-				int spi, struct bpf_reg_state *reg)
+				int spi, struct bpf_reg_state *reg,
+				int size)
 {
 	int i;
 
 	state->stack[spi].spilled_ptr = *reg;
-	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+	if (size == BPF_REG_SIZE)
+		state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
 
-	for (i = 0; i < BPF_REG_SIZE; i++)
-		state->stack[spi].slot_type[i] = STACK_SPILL;
+	for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
+		state->stack[spi].slot_type[i - 1] = STACK_SPILL;
+
+	/* size < 8 bytes spill */
+	for (; i; i--)
+		scrub_spilled_slot(&state->stack[spi].slot_type[i - 1]);
 }
 
 /* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
···
 		env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
 	}
 
-	if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
+	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
 	    !register_is_null(reg) && env->bpf_capable) {
 		if (dst_reg != BPF_REG_FP) {
 			/* The backtracking logic can only recognize explicit
···
 			if (err)
 				return err;
 		}
-		save_register_state(state, spi, reg);
+		save_register_state(state, spi, reg, size);
 	} else if (reg && is_spillable_regtype(reg->type)) {
 		/* register containing pointer is being spilled into stack */
 		if (size != BPF_REG_SIZE) {
···
 			verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
 			return -EINVAL;
 		}
-		save_register_state(state, spi, reg);
+		save_register_state(state, spi, reg, size);
 	} else {
 		u8 type = STACK_MISC;
 
 		/* regular write of data into stack destroys any spilled ptr */
 		state->stack[spi].spilled_ptr.type = NOT_INIT;
 		/* Mark slots as STACK_MISC if they belonged to spilled ptr. */
-		if (state->stack[spi].slot_type[0] == STACK_SPILL)
+		if (is_spilled_reg(&state->stack[spi]))
 			for (i = 0; i < BPF_REG_SIZE; i++)
-				state->stack[spi].slot_type[i] = STACK_MISC;
+				scrub_spilled_slot(&state->stack[spi].slot_type[i]);
 
 		/* only mark the slot as written if all 8 bytes were written
 		 * otherwise read propagation may incorrectly stop too soon
···
 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
 	int i, slot = -off - 1, spi = slot / BPF_REG_SIZE;
 	struct bpf_reg_state *reg;
-	u8 *stype;
+	u8 *stype, type;
 
 	stype = reg_state->stack[spi].slot_type;
 	reg = &reg_state->stack[spi].spilled_ptr;
 
-	if (stype[0] == STACK_SPILL) {
+	if (is_spilled_reg(&reg_state->stack[spi])) {
 		if (size != BPF_REG_SIZE) {
+			u8 scalar_size = 0;
+
 			if (reg->type != SCALAR_VALUE) {
 				verbose_linfo(env, env->insn_idx, "; ");
 				verbose(env, "invalid size of register fill\n");
 				return -EACCES;
 			}
-			if (dst_regno >= 0) {
-				mark_reg_unknown(env, state->regs, dst_regno);
-				state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
-			}
+
 			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+			if (dst_regno < 0)
+				return 0;
+
+			for (i = BPF_REG_SIZE; i > 0 && stype[i - 1] == STACK_SPILL; i--)
+				scalar_size++;
+
+			if (!(off % BPF_REG_SIZE) && size == scalar_size) {
+				/* The earlier check_reg_arg() has decided the
+				 * subreg_def for this insn. Save it first.
+				 */
+				s32 subreg_def = state->regs[dst_regno].subreg_def;
+
+				state->regs[dst_regno] = *reg;
+				state->regs[dst_regno].subreg_def = subreg_def;
+			} else {
+				for (i = 0; i < size; i++) {
+					type = stype[(slot - i) % BPF_REG_SIZE];
+					if (type == STACK_SPILL)
+						continue;
+					if (type == STACK_MISC)
+						continue;
+					verbose(env, "invalid read from stack off %d+%d size %d\n",
+						off, i, size);
+					return -EACCES;
+				}
+				mark_reg_unknown(env, state->regs, dst_regno);
+			}
+			state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
 			return 0;
 		}
 		for (i = 1; i < BPF_REG_SIZE; i++) {
···
 		}
 		mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
 	} else {
-		u8 type;
-
 		for (i = 0; i < size; i++) {
 			type = stype[(slot - i) % BPF_REG_SIZE];
 			if (type == STACK_MISC)
···
 			goto mark;
 		}
 
-		if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+		if (is_spilled_reg(&state->stack[spi]) &&
 		    state->stack[spi].spilled_ptr.type == PTR_TO_BTF_ID)
 			goto mark;
 
-		if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+		if (is_spilled_reg(&state->stack[spi]) &&
 		    (state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
 		     env->allow_ptr_leaks)) {
 			if (clobber) {
 				__mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
 				for (j = 0; j < BPF_REG_SIZE; j++)
-					state->stack[spi].slot_type[j] = STACK_MISC;
+					scrub_spilled_slot(&state->stack[spi].slot_type[j]);
 			}
 			goto mark;
 		}
···
 			 * return false to continue verification of this path
 			 */
 			return false;
-		if (i % BPF_REG_SIZE)
+		if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
 			continue;
-		if
(!is_spilled_reg(&old->stack[spi])) 10407 10362 continue; 10408 10363 if (!regsafe(env, &old->stack[spi].spilled_ptr, 10409 10364 &cur->stack[spi].spilled_ptr, idmap)) ··· 10610 10565 } 10611 10566 10612 10567 for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) { 10613 - if (state->stack[i].slot_type[0] != STACK_SPILL) 10568 + if (!is_spilled_reg(&state->stack[i])) 10614 10569 continue; 10615 10570 state_reg = &state->stack[i].spilled_ptr; 10616 10571 if (state_reg->type != SCALAR_VALUE || ··· 12514 12469 if (!bpf_pseudo_call(insn)) 12515 12470 continue; 12516 12471 subprog = insn->off; 12517 - insn->imm = BPF_CAST_CALL(func[subprog]->bpf_func) - 12518 - __bpf_call_base; 12472 + insn->imm = BPF_CALL_IMM(func[subprog]->bpf_func); 12519 12473 } 12520 12474 12521 12475 /* we use the aux data to keep a list of the start addresses ··· 12994 12950 patch_map_ops_generic: 12995 12951 switch (insn->imm) { 12996 12952 case BPF_FUNC_map_lookup_elem: 12997 - insn->imm = BPF_CAST_CALL(ops->map_lookup_elem) - 12998 - __bpf_call_base; 12953 + insn->imm = BPF_CALL_IMM(ops->map_lookup_elem); 12999 12954 continue; 13000 12955 case BPF_FUNC_map_update_elem: 13001 - insn->imm = BPF_CAST_CALL(ops->map_update_elem) - 13002 - __bpf_call_base; 12956 + insn->imm = BPF_CALL_IMM(ops->map_update_elem); 13003 12957 continue; 13004 12958 case BPF_FUNC_map_delete_elem: 13005 - insn->imm = BPF_CAST_CALL(ops->map_delete_elem) - 13006 - __bpf_call_base; 12959 + insn->imm = BPF_CALL_IMM(ops->map_delete_elem); 13007 12960 continue; 13008 12961 case BPF_FUNC_map_push_elem: 13009 - insn->imm = BPF_CAST_CALL(ops->map_push_elem) - 13010 - __bpf_call_base; 12962 + insn->imm = BPF_CALL_IMM(ops->map_push_elem); 13011 12963 continue; 13012 12964 case BPF_FUNC_map_pop_elem: 13013 - insn->imm = BPF_CAST_CALL(ops->map_pop_elem) - 13014 - __bpf_call_base; 12965 + insn->imm = BPF_CALL_IMM(ops->map_pop_elem); 13015 12966 continue; 13016 12967 case BPF_FUNC_map_peek_elem: 13017 - insn->imm = 
BPF_CAST_CALL(ops->map_peek_elem) - 13018 - __bpf_call_base; 12968 + insn->imm = BPF_CALL_IMM(ops->map_peek_elem); 13019 12969 continue; 13020 12970 case BPF_FUNC_redirect_map: 13021 - insn->imm = BPF_CAST_CALL(ops->map_redirect) - 13022 - __bpf_call_base; 12971 + insn->imm = BPF_CALL_IMM(ops->map_redirect); 13023 12972 continue; 13024 12973 } 13025 12974
+51 -3
kernel/trace/bpf_trace.c
··· 398 398 .arg2_type = ARG_CONST_SIZE, 399 399 }; 400 400 401 - const struct bpf_func_proto *bpf_get_trace_printk_proto(void) 401 + static void __set_printk_clr_event(void) 402 402 { 403 403 /* 404 404 * This program might be calling bpf_trace_printk, ··· 410 410 */ 411 411 if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1)) 412 412 pr_warn_ratelimited("could not enable bpf_trace_printk events"); 413 + } 413 414 415 + const struct bpf_func_proto *bpf_get_trace_printk_proto(void) 416 + { 417 + __set_printk_clr_event(); 414 418 return &bpf_trace_printk_proto; 415 419 } 416 420 417 - #define MAX_SEQ_PRINTF_VARARGS 12 421 + BPF_CALL_4(bpf_trace_vprintk, char *, fmt, u32, fmt_size, const void *, data, 422 + u32, data_len) 423 + { 424 + static char buf[BPF_TRACE_PRINTK_SIZE]; 425 + unsigned long flags; 426 + int ret, num_args; 427 + u32 *bin_args; 428 + 429 + if (data_len & 7 || data_len > MAX_BPRINTF_VARARGS * 8 || 430 + (data_len && !data)) 431 + return -EINVAL; 432 + num_args = data_len / 8; 433 + 434 + ret = bpf_bprintf_prepare(fmt, fmt_size, data, &bin_args, num_args); 435 + if (ret < 0) 436 + return ret; 437 + 438 + raw_spin_lock_irqsave(&trace_printk_lock, flags); 439 + ret = bstr_printf(buf, sizeof(buf), fmt, bin_args); 440 + 441 + trace_bpf_trace_printk(buf); 442 + raw_spin_unlock_irqrestore(&trace_printk_lock, flags); 443 + 444 + bpf_bprintf_cleanup(); 445 + 446 + return ret; 447 + } 448 + 449 + static const struct bpf_func_proto bpf_trace_vprintk_proto = { 450 + .func = bpf_trace_vprintk, 451 + .gpl_only = true, 452 + .ret_type = RET_INTEGER, 453 + .arg1_type = ARG_PTR_TO_MEM, 454 + .arg2_type = ARG_CONST_SIZE, 455 + .arg3_type = ARG_PTR_TO_MEM_OR_NULL, 456 + .arg4_type = ARG_CONST_SIZE_OR_ZERO, 457 + }; 458 + 459 + const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void) 460 + { 461 + __set_printk_clr_event(); 462 + return &bpf_trace_vprintk_proto; 463 + } 418 464 419 465 BPF_CALL_5(bpf_seq_printf, struct seq_file *, m, char *, fmt, u32, 
fmt_size, 420 466 const void *, data, u32, data_len) ··· 468 422 int err, num_args; 469 423 u32 *bin_args; 470 424 471 - if (data_len & 7 || data_len > MAX_SEQ_PRINTF_VARARGS * 8 || 425 + if (data_len & 7 || data_len > MAX_BPRINTF_VARARGS * 8 || 472 426 (data_len && !data)) 473 427 return -EINVAL; 474 428 num_args = data_len / 8; ··· 1208 1162 return &bpf_get_func_ip_proto_tracing; 1209 1163 case BPF_FUNC_get_branch_snapshot: 1210 1164 return &bpf_get_branch_snapshot_proto; 1165 + case BPF_FUNC_trace_vprintk: 1166 + return bpf_get_trace_vprintk_proto(); 1211 1167 default: 1212 1168 return bpf_base_func_proto(func_id); 1213 1169 }
+5829 -206
lib/test_bpf.c
··· 52 52 #define FLAG_NO_DATA BIT(0) 53 53 #define FLAG_EXPECTED_FAIL BIT(1) 54 54 #define FLAG_SKB_FRAG BIT(2) 55 + #define FLAG_VERIFIER_ZEXT BIT(3) 55 56 56 57 enum { 57 58 CLASSIC = BIT(6), /* Old BPF instructions only. */ ··· 81 80 int expected_errcode; /* used when FLAG_EXPECTED_FAIL is set in the aux */ 82 81 __u8 frag_data[MAX_DATA]; 83 82 int stack_depth; /* for eBPF only, since tests don't call verifier */ 83 + int nr_testruns; /* Custom run count, defaults to MAX_TESTRUNS if 0 */ 84 84 }; 85 85 86 86 /* Large test cases need separate allocation and fill handler. */ ··· 463 461 return __bpf_fill_stxdw(self, BPF_DW); 464 462 } 465 463 466 - static int bpf_fill_long_jmp(struct bpf_test *self) 464 + static int __bpf_ld_imm64(struct bpf_insn insns[2], u8 reg, s64 imm64) 467 465 { 468 - unsigned int len = BPF_MAXINSNS; 469 - struct bpf_insn *insn; 466 + struct bpf_insn tmp[] = {BPF_LD_IMM64(reg, imm64)}; 467 + 468 + memcpy(insns, tmp, sizeof(tmp)); 469 + return 2; 470 + } 471 + 472 + /* 473 + * Branch conversion tests. Complex operations can expand to a lot 474 + * of instructions when JITed. This in turn may cause jump offsets 475 + * to overflow the field size of the native instruction, triggering 476 + * a branch conversion mechanism in some JITs. 
477 + */ 478 + static int __bpf_fill_max_jmp(struct bpf_test *self, int jmp, int imm) 479 + { 480 + struct bpf_insn *insns; 481 + int len = S16_MAX + 5; 470 482 int i; 483 + 484 + insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL); 485 + if (!insns) 486 + return -ENOMEM; 487 + 488 + i = __bpf_ld_imm64(insns, R1, 0x0123456789abcdefULL); 489 + insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); 490 + insns[i++] = BPF_JMP_IMM(jmp, R0, imm, S16_MAX); 491 + insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 2); 492 + insns[i++] = BPF_EXIT_INSN(); 493 + 494 + while (i < len - 1) { 495 + static const int ops[] = { 496 + BPF_LSH, BPF_RSH, BPF_ARSH, BPF_ADD, 497 + BPF_SUB, BPF_MUL, BPF_DIV, BPF_MOD, 498 + }; 499 + int op = ops[(i >> 1) % ARRAY_SIZE(ops)]; 500 + 501 + if (i & 1) 502 + insns[i++] = BPF_ALU32_REG(op, R0, R1); 503 + else 504 + insns[i++] = BPF_ALU64_REG(op, R0, R1); 505 + } 506 + 507 + insns[i++] = BPF_EXIT_INSN(); 508 + self->u.ptr.insns = insns; 509 + self->u.ptr.len = len; 510 + BUG_ON(i != len); 511 + 512 + return 0; 513 + } 514 + 515 + /* Branch taken by runtime decision */ 516 + static int bpf_fill_max_jmp_taken(struct bpf_test *self) 517 + { 518 + return __bpf_fill_max_jmp(self, BPF_JEQ, 1); 519 + } 520 + 521 + /* Branch not taken by runtime decision */ 522 + static int bpf_fill_max_jmp_not_taken(struct bpf_test *self) 523 + { 524 + return __bpf_fill_max_jmp(self, BPF_JEQ, 0); 525 + } 526 + 527 + /* Branch always taken, known at JIT time */ 528 + static int bpf_fill_max_jmp_always_taken(struct bpf_test *self) 529 + { 530 + return __bpf_fill_max_jmp(self, BPF_JGE, 0); 531 + } 532 + 533 + /* Branch never taken, known at JIT time */ 534 + static int bpf_fill_max_jmp_never_taken(struct bpf_test *self) 535 + { 536 + return __bpf_fill_max_jmp(self, BPF_JLT, 0); 537 + } 538 + 539 + /* ALU result computation used in tests */ 540 + static bool __bpf_alu_result(u64 *res, u64 v1, u64 v2, u8 op) 541 + { 542 + *res = 0; 543 + switch (op) { 544 + case BPF_MOV: 545 + *res = v2; 546 + 
break; 547 + case BPF_AND: 548 + *res = v1 & v2; 549 + break; 550 + case BPF_OR: 551 + *res = v1 | v2; 552 + break; 553 + case BPF_XOR: 554 + *res = v1 ^ v2; 555 + break; 556 + case BPF_LSH: 557 + *res = v1 << v2; 558 + break; 559 + case BPF_RSH: 560 + *res = v1 >> v2; 561 + break; 562 + case BPF_ARSH: 563 + *res = v1 >> v2; 564 + if (v2 > 0 && v1 > S64_MAX) 565 + *res |= ~0ULL << (64 - v2); 566 + break; 567 + case BPF_ADD: 568 + *res = v1 + v2; 569 + break; 570 + case BPF_SUB: 571 + *res = v1 - v2; 572 + break; 573 + case BPF_MUL: 574 + *res = v1 * v2; 575 + break; 576 + case BPF_DIV: 577 + if (v2 == 0) 578 + return false; 579 + *res = div64_u64(v1, v2); 580 + break; 581 + case BPF_MOD: 582 + if (v2 == 0) 583 + return false; 584 + div64_u64_rem(v1, v2, res); 585 + break; 586 + } 587 + return true; 588 + } 589 + 590 + /* Test an ALU shift operation for all valid shift values */ 591 + static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op, 592 + u8 mode, bool alu32) 593 + { 594 + static const s64 regs[] = { 595 + 0x0123456789abcdefLL, /* dword > 0, word < 0 */ 596 + 0xfedcba9876543210LL, /* dowrd < 0, word > 0 */ 597 + 0xfedcba0198765432LL, /* dowrd < 0, word < 0 */ 598 + 0x0123458967abcdefLL, /* dword > 0, word > 0 */ 599 + }; 600 + int bits = alu32 ? 32 : 64; 601 + int len = (2 + 7 * bits) * ARRAY_SIZE(regs) + 3; 602 + struct bpf_insn *insn; 603 + int imm, k; 604 + int i = 0; 471 605 472 606 insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); 473 607 if (!insn) 474 608 return -ENOMEM; 475 609 476 - insn[0] = BPF_ALU64_IMM(BPF_MOV, R0, 1); 477 - insn[1] = BPF_JMP_IMM(BPF_JEQ, R0, 1, len - 2 - 1); 610 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0); 478 611 479 - /* 480 - * Fill with a complex 64-bit operation that expands to a lot of 481 - * instructions on 32-bit JITs. The large jump offset can then 482 - * overflow the conditional branch field size, triggering a branch 483 - * conversion mechanism in some JITs. 
484 - * 485 - * Note: BPF_MAXINSNS of ALU64 MUL is enough to trigger such branch 486 - * conversion on the 32-bit MIPS JIT. For other JITs, the instruction 487 - * count and/or operation may need to be modified to trigger the 488 - * branch conversion. 489 - */ 490 - for (i = 2; i < len - 1; i++) 491 - insn[i] = BPF_ALU64_IMM(BPF_MUL, R0, (i << 16) + i); 612 + for (k = 0; k < ARRAY_SIZE(regs); k++) { 613 + s64 reg = regs[k]; 492 614 493 - insn[len - 1] = BPF_EXIT_INSN(); 615 + i += __bpf_ld_imm64(&insn[i], R3, reg); 616 + 617 + for (imm = 0; imm < bits; imm++) { 618 + u64 val; 619 + 620 + /* Perform operation */ 621 + insn[i++] = BPF_ALU64_REG(BPF_MOV, R1, R3); 622 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R2, imm); 623 + if (alu32) { 624 + if (mode == BPF_K) 625 + insn[i++] = BPF_ALU32_IMM(op, R1, imm); 626 + else 627 + insn[i++] = BPF_ALU32_REG(op, R1, R2); 628 + 629 + if (op == BPF_ARSH) 630 + reg = (s32)reg; 631 + else 632 + reg = (u32)reg; 633 + __bpf_alu_result(&val, reg, imm, op); 634 + val = (u32)val; 635 + } else { 636 + if (mode == BPF_K) 637 + insn[i++] = BPF_ALU64_IMM(op, R1, imm); 638 + else 639 + insn[i++] = BPF_ALU64_REG(op, R1, R2); 640 + __bpf_alu_result(&val, reg, imm, op); 641 + } 642 + 643 + /* 644 + * When debugging a JIT that fails this test, one 645 + * can write the immediate value to R0 here to find 646 + * out which operand values that fail. 
647 + */ 648 + 649 + /* Load reference and check the result */ 650 + i += __bpf_ld_imm64(&insn[i], R4, val); 651 + insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R4, 1); 652 + insn[i++] = BPF_EXIT_INSN(); 653 + } 654 + } 655 + 656 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); 657 + insn[i++] = BPF_EXIT_INSN(); 494 658 495 659 self->u.ptr.insns = insn; 660 + self->u.ptr.len = len; 661 + BUG_ON(i != len); 662 + 663 + return 0; 664 + } 665 + 666 + static int bpf_fill_alu64_lsh_imm(struct bpf_test *self) 667 + { 668 + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_K, false); 669 + } 670 + 671 + static int bpf_fill_alu64_rsh_imm(struct bpf_test *self) 672 + { 673 + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_K, false); 674 + } 675 + 676 + static int bpf_fill_alu64_arsh_imm(struct bpf_test *self) 677 + { 678 + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_K, false); 679 + } 680 + 681 + static int bpf_fill_alu64_lsh_reg(struct bpf_test *self) 682 + { 683 + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_X, false); 684 + } 685 + 686 + static int bpf_fill_alu64_rsh_reg(struct bpf_test *self) 687 + { 688 + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_X, false); 689 + } 690 + 691 + static int bpf_fill_alu64_arsh_reg(struct bpf_test *self) 692 + { 693 + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, false); 694 + } 695 + 696 + static int bpf_fill_alu32_lsh_imm(struct bpf_test *self) 697 + { 698 + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_K, true); 699 + } 700 + 701 + static int bpf_fill_alu32_rsh_imm(struct bpf_test *self) 702 + { 703 + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_K, true); 704 + } 705 + 706 + static int bpf_fill_alu32_arsh_imm(struct bpf_test *self) 707 + { 708 + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_K, true); 709 + } 710 + 711 + static int bpf_fill_alu32_lsh_reg(struct bpf_test *self) 712 + { 713 + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_X, true); 714 + } 715 + 716 + static int bpf_fill_alu32_rsh_reg(struct bpf_test *self) 717 + { 718 
+ return __bpf_fill_alu_shift(self, BPF_RSH, BPF_X, true); 719 + } 720 + 721 + static int bpf_fill_alu32_arsh_reg(struct bpf_test *self) 722 + { 723 + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, true); 724 + } 725 + 726 + /* 727 + * Test an ALU register shift operation for all valid shift values 728 + * for the case when the source and destination are the same. 729 + */ 730 + static int __bpf_fill_alu_shift_same_reg(struct bpf_test *self, u8 op, 731 + bool alu32) 732 + { 733 + int bits = alu32 ? 32 : 64; 734 + int len = 3 + 6 * bits; 735 + struct bpf_insn *insn; 736 + int i = 0; 737 + u64 val; 738 + 739 + insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); 740 + if (!insn) 741 + return -ENOMEM; 742 + 743 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0); 744 + 745 + for (val = 0; val < bits; val++) { 746 + u64 res; 747 + 748 + /* Perform operation */ 749 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R1, val); 750 + if (alu32) 751 + insn[i++] = BPF_ALU32_REG(op, R1, R1); 752 + else 753 + insn[i++] = BPF_ALU64_REG(op, R1, R1); 754 + 755 + /* Compute the reference result */ 756 + __bpf_alu_result(&res, val, val, op); 757 + if (alu32) 758 + res = (u32)res; 759 + i += __bpf_ld_imm64(&insn[i], R2, res); 760 + 761 + /* Check the actual result */ 762 + insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1); 763 + insn[i++] = BPF_EXIT_INSN(); 764 + } 765 + 766 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); 767 + insn[i++] = BPF_EXIT_INSN(); 768 + 769 + self->u.ptr.insns = insn; 770 + self->u.ptr.len = len; 771 + BUG_ON(i != len); 772 + 773 + return 0; 774 + } 775 + 776 + static int bpf_fill_alu64_lsh_same_reg(struct bpf_test *self) 777 + { 778 + return __bpf_fill_alu_shift_same_reg(self, BPF_LSH, false); 779 + } 780 + 781 + static int bpf_fill_alu64_rsh_same_reg(struct bpf_test *self) 782 + { 783 + return __bpf_fill_alu_shift_same_reg(self, BPF_RSH, false); 784 + } 785 + 786 + static int bpf_fill_alu64_arsh_same_reg(struct bpf_test *self) 787 + { 788 + return 
__bpf_fill_alu_shift_same_reg(self, BPF_ARSH, false); 789 + } 790 + 791 + static int bpf_fill_alu32_lsh_same_reg(struct bpf_test *self) 792 + { 793 + return __bpf_fill_alu_shift_same_reg(self, BPF_LSH, true); 794 + } 795 + 796 + static int bpf_fill_alu32_rsh_same_reg(struct bpf_test *self) 797 + { 798 + return __bpf_fill_alu_shift_same_reg(self, BPF_RSH, true); 799 + } 800 + 801 + static int bpf_fill_alu32_arsh_same_reg(struct bpf_test *self) 802 + { 803 + return __bpf_fill_alu_shift_same_reg(self, BPF_ARSH, true); 804 + } 805 + 806 + /* 807 + * Common operand pattern generator for exhaustive power-of-two magnitudes 808 + * tests. The block size parameters can be adjusted to increase/reduce the 809 + * number of combinations tested and thereby execution speed and memory 810 + * footprint. 811 + */ 812 + 813 + static inline s64 value(int msb, int delta, int sign) 814 + { 815 + return sign * (1LL << msb) + delta; 816 + } 817 + 818 + static int __bpf_fill_pattern(struct bpf_test *self, void *arg, 819 + int dbits, int sbits, int block1, int block2, 820 + int (*emit)(struct bpf_test*, void*, 821 + struct bpf_insn*, s64, s64)) 822 + { 823 + static const int sgn[][2] = {{1, 1}, {1, -1}, {-1, 1}, {-1, -1}}; 824 + struct bpf_insn *insns; 825 + int di, si, bt, db, sb; 826 + int count, len, k; 827 + int extra = 1 + 2; 828 + int i = 0; 829 + 830 + /* Total number of iterations for the two patterns */ 831 + count = (dbits - 1) * (sbits - 1) * block1 * block1 * ARRAY_SIZE(sgn); 832 + count += (max(dbits, sbits) - 1) * block2 * block2 * ARRAY_SIZE(sgn); 833 + 834 + /* Compute the maximum number of insns and allocate the buffer */ 835 + len = extra + count * (*emit)(self, arg, NULL, 0, 0); 836 + insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL); 837 + if (!insns) 838 + return -ENOMEM; 839 + 840 + /* Add head instruction(s) */ 841 + insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0); 842 + 843 + /* 844 + * Pattern 1: all combinations of power-of-two magnitudes and sign, 845 + * and with
a block of contiguous values around each magnitude. 846 + */ 847 + for (di = 0; di < dbits - 1; di++) /* Dst magnitudes */ 848 + for (si = 0; si < sbits - 1; si++) /* Src magnitudes */ 849 + for (k = 0; k < ARRAY_SIZE(sgn); k++) /* Sign combos */ 850 + for (db = -(block1 / 2); 851 + db < (block1 + 1) / 2; db++) 852 + for (sb = -(block1 / 2); 853 + sb < (block1 + 1) / 2; sb++) { 854 + s64 dst, src; 855 + 856 + dst = value(di, db, sgn[k][0]); 857 + src = value(si, sb, sgn[k][1]); 858 + i += (*emit)(self, arg, 859 + &insns[i], 860 + dst, src); 861 + } 862 + /* 863 + * Pattern 2: all combinations for a larger block of values 864 + * for each power-of-two magnitude and sign, where the magnitude is 865 + * the same for both operands. 866 + */ 867 + for (bt = 0; bt < max(dbits, sbits) - 1; bt++) /* Magnitude */ 868 + for (k = 0; k < ARRAY_SIZE(sgn); k++) /* Sign combos */ 869 + for (db = -(block2 / 2); db < (block2 + 1) / 2; db++) 870 + for (sb = -(block2 / 2); 871 + sb < (block2 + 1) / 2; sb++) { 872 + s64 dst, src; 873 + 874 + dst = value(bt % dbits, db, sgn[k][0]); 875 + src = value(bt % sbits, sb, sgn[k][1]); 876 + i += (*emit)(self, arg, &insns[i], 877 + dst, src); 878 + } 879 + 880 + /* Append tail instructions */ 881 + insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); 882 + insns[i++] = BPF_EXIT_INSN(); 883 + BUG_ON(i > len); 884 + 885 + self->u.ptr.insns = insns; 886 + self->u.ptr.len = i; 887 + 888 + return 0; 889 + } 890 + 891 + /* 892 + * Block size parameters used in pattern tests below. Tune as needed to 893 + * increase/reduce the number of combinations tested, see following examples.
894 + * block values per operand MSB 895 + * ---------------------------------------- 896 + * 0 none 897 + * 1 (1 << MSB) 898 + * 2 (1 << MSB) + [-1, 0] 899 + * 3 (1 << MSB) + [-1, 0, 1] 900 + */ 901 + #define PATTERN_BLOCK1 1 902 + #define PATTERN_BLOCK2 5 903 + 904 + /* Number of test runs for a pattern test */ 905 + #define NR_PATTERN_RUNS 1 906 + 907 + /* 908 + * Exhaustive tests of ALU operations for all combinations of power-of-two 909 + * magnitudes of the operands, both for positive and negative values. The 910 + * test is designed to verify e.g. the ALU and ALU64 operations for JITs that 911 + * emit different code depending on the magnitude of the immediate value. 912 + */ 913 + static int __bpf_emit_alu64_imm(struct bpf_test *self, void *arg, 914 + struct bpf_insn *insns, s64 dst, s64 imm) 915 + { 916 + int op = *(int *)arg; 917 + int i = 0; 918 + u64 res; 919 + 920 + if (!insns) 921 + return 7; 922 + 923 + if (__bpf_alu_result(&res, dst, (s32)imm, op)) { 924 + i += __bpf_ld_imm64(&insns[i], R1, dst); 925 + i += __bpf_ld_imm64(&insns[i], R3, res); 926 + insns[i++] = BPF_ALU64_IMM(op, R1, imm); 927 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); 928 + insns[i++] = BPF_EXIT_INSN(); 929 + } 930 + 931 + return i; 932 + } 933 + 934 + static int __bpf_emit_alu32_imm(struct bpf_test *self, void *arg, 935 + struct bpf_insn *insns, s64 dst, s64 imm) 936 + { 937 + int op = *(int *)arg; 938 + int i = 0; 939 + u64 res; 940 + 941 + if (!insns) 942 + return 7; 943 + 944 + if (__bpf_alu_result(&res, (u32)dst, (u32)imm, op)) { 945 + i += __bpf_ld_imm64(&insns[i], R1, dst); 946 + i += __bpf_ld_imm64(&insns[i], R3, (u32)res); 947 + insns[i++] = BPF_ALU32_IMM(op, R1, imm); 948 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); 949 + insns[i++] = BPF_EXIT_INSN(); 950 + } 951 + 952 + return i; 953 + } 954 + 955 + static int __bpf_emit_alu64_reg(struct bpf_test *self, void *arg, 956 + struct bpf_insn *insns, s64 dst, s64 src) 957 + { 958 + int op = *(int *)arg; 959 + int i = 0; 
960 + u64 res; 961 + 962 + if (!insns) 963 + return 9; 964 + 965 + if (__bpf_alu_result(&res, dst, src, op)) { 966 + i += __bpf_ld_imm64(&insns[i], R1, dst); 967 + i += __bpf_ld_imm64(&insns[i], R2, src); 968 + i += __bpf_ld_imm64(&insns[i], R3, res); 969 + insns[i++] = BPF_ALU64_REG(op, R1, R2); 970 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); 971 + insns[i++] = BPF_EXIT_INSN(); 972 + } 973 + 974 + return i; 975 + } 976 + 977 + static int __bpf_emit_alu32_reg(struct bpf_test *self, void *arg, 978 + struct bpf_insn *insns, s64 dst, s64 src) 979 + { 980 + int op = *(int *)arg; 981 + int i = 0; 982 + u64 res; 983 + 984 + if (!insns) 985 + return 9; 986 + 987 + if (__bpf_alu_result(&res, (u32)dst, (u32)src, op)) { 988 + i += __bpf_ld_imm64(&insns[i], R1, dst); 989 + i += __bpf_ld_imm64(&insns[i], R2, src); 990 + i += __bpf_ld_imm64(&insns[i], R3, (u32)res); 991 + insns[i++] = BPF_ALU32_REG(op, R1, R2); 992 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); 993 + insns[i++] = BPF_EXIT_INSN(); 994 + } 995 + 996 + return i; 997 + } 998 + 999 + static int __bpf_fill_alu64_imm(struct bpf_test *self, int op) 1000 + { 1001 + return __bpf_fill_pattern(self, &op, 64, 32, 1002 + PATTERN_BLOCK1, PATTERN_BLOCK2, 1003 + &__bpf_emit_alu64_imm); 1004 + } 1005 + 1006 + static int __bpf_fill_alu32_imm(struct bpf_test *self, int op) 1007 + { 1008 + return __bpf_fill_pattern(self, &op, 64, 32, 1009 + PATTERN_BLOCK1, PATTERN_BLOCK2, 1010 + &__bpf_emit_alu32_imm); 1011 + } 1012 + 1013 + static int __bpf_fill_alu64_reg(struct bpf_test *self, int op) 1014 + { 1015 + return __bpf_fill_pattern(self, &op, 64, 64, 1016 + PATTERN_BLOCK1, PATTERN_BLOCK2, 1017 + &__bpf_emit_alu64_reg); 1018 + } 1019 + 1020 + static int __bpf_fill_alu32_reg(struct bpf_test *self, int op) 1021 + { 1022 + return __bpf_fill_pattern(self, &op, 64, 64, 1023 + PATTERN_BLOCK1, PATTERN_BLOCK2, 1024 + &__bpf_emit_alu32_reg); 1025 + } 1026 + 1027 + /* ALU64 immediate operations */ 1028 + static int 
bpf_fill_alu64_mov_imm(struct bpf_test *self) 1029 + { 1030 + return __bpf_fill_alu64_imm(self, BPF_MOV); 1031 + } 1032 + 1033 + static int bpf_fill_alu64_and_imm(struct bpf_test *self) 1034 + { 1035 + return __bpf_fill_alu64_imm(self, BPF_AND); 1036 + } 1037 + 1038 + static int bpf_fill_alu64_or_imm(struct bpf_test *self) 1039 + { 1040 + return __bpf_fill_alu64_imm(self, BPF_OR); 1041 + } 1042 + 1043 + static int bpf_fill_alu64_xor_imm(struct bpf_test *self) 1044 + { 1045 + return __bpf_fill_alu64_imm(self, BPF_XOR); 1046 + } 1047 + 1048 + static int bpf_fill_alu64_add_imm(struct bpf_test *self) 1049 + { 1050 + return __bpf_fill_alu64_imm(self, BPF_ADD); 1051 + } 1052 + 1053 + static int bpf_fill_alu64_sub_imm(struct bpf_test *self) 1054 + { 1055 + return __bpf_fill_alu64_imm(self, BPF_SUB); 1056 + } 1057 + 1058 + static int bpf_fill_alu64_mul_imm(struct bpf_test *self) 1059 + { 1060 + return __bpf_fill_alu64_imm(self, BPF_MUL); 1061 + } 1062 + 1063 + static int bpf_fill_alu64_div_imm(struct bpf_test *self) 1064 + { 1065 + return __bpf_fill_alu64_imm(self, BPF_DIV); 1066 + } 1067 + 1068 + static int bpf_fill_alu64_mod_imm(struct bpf_test *self) 1069 + { 1070 + return __bpf_fill_alu64_imm(self, BPF_MOD); 1071 + } 1072 + 1073 + /* ALU32 immediate operations */ 1074 + static int bpf_fill_alu32_mov_imm(struct bpf_test *self) 1075 + { 1076 + return __bpf_fill_alu32_imm(self, BPF_MOV); 1077 + } 1078 + 1079 + static int bpf_fill_alu32_and_imm(struct bpf_test *self) 1080 + { 1081 + return __bpf_fill_alu32_imm(self, BPF_AND); 1082 + } 1083 + 1084 + static int bpf_fill_alu32_or_imm(struct bpf_test *self) 1085 + { 1086 + return __bpf_fill_alu32_imm(self, BPF_OR); 1087 + } 1088 + 1089 + static int bpf_fill_alu32_xor_imm(struct bpf_test *self) 1090 + { 1091 + return __bpf_fill_alu32_imm(self, BPF_XOR); 1092 + } 1093 + 1094 + static int bpf_fill_alu32_add_imm(struct bpf_test *self) 1095 + { 1096 + return __bpf_fill_alu32_imm(self, BPF_ADD); 1097 + } 1098 + 1099 + static int 
bpf_fill_alu32_sub_imm(struct bpf_test *self) 1100 + { 1101 + return __bpf_fill_alu32_imm(self, BPF_SUB); 1102 + } 1103 + 1104 + static int bpf_fill_alu32_mul_imm(struct bpf_test *self) 1105 + { 1106 + return __bpf_fill_alu32_imm(self, BPF_MUL); 1107 + } 1108 + 1109 + static int bpf_fill_alu32_div_imm(struct bpf_test *self) 1110 + { 1111 + return __bpf_fill_alu32_imm(self, BPF_DIV); 1112 + } 1113 + 1114 + static int bpf_fill_alu32_mod_imm(struct bpf_test *self) 1115 + { 1116 + return __bpf_fill_alu32_imm(self, BPF_MOD); 1117 + } 1118 + 1119 + /* ALU64 register operations */ 1120 + static int bpf_fill_alu64_mov_reg(struct bpf_test *self) 1121 + { 1122 + return __bpf_fill_alu64_reg(self, BPF_MOV); 1123 + } 1124 + 1125 + static int bpf_fill_alu64_and_reg(struct bpf_test *self) 1126 + { 1127 + return __bpf_fill_alu64_reg(self, BPF_AND); 1128 + } 1129 + 1130 + static int bpf_fill_alu64_or_reg(struct bpf_test *self) 1131 + { 1132 + return __bpf_fill_alu64_reg(self, BPF_OR); 1133 + } 1134 + 1135 + static int bpf_fill_alu64_xor_reg(struct bpf_test *self) 1136 + { 1137 + return __bpf_fill_alu64_reg(self, BPF_XOR); 1138 + } 1139 + 1140 + static int bpf_fill_alu64_add_reg(struct bpf_test *self) 1141 + { 1142 + return __bpf_fill_alu64_reg(self, BPF_ADD); 1143 + } 1144 + 1145 + static int bpf_fill_alu64_sub_reg(struct bpf_test *self) 1146 + { 1147 + return __bpf_fill_alu64_reg(self, BPF_SUB); 1148 + } 1149 + 1150 + static int bpf_fill_alu64_mul_reg(struct bpf_test *self) 1151 + { 1152 + return __bpf_fill_alu64_reg(self, BPF_MUL); 1153 + } 1154 + 1155 + static int bpf_fill_alu64_div_reg(struct bpf_test *self) 1156 + { 1157 + return __bpf_fill_alu64_reg(self, BPF_DIV); 1158 + } 1159 + 1160 + static int bpf_fill_alu64_mod_reg(struct bpf_test *self) 1161 + { 1162 + return __bpf_fill_alu64_reg(self, BPF_MOD); 1163 + } 1164 + 1165 + /* ALU32 register operations */ 1166 + static int bpf_fill_alu32_mov_reg(struct bpf_test *self) 1167 + { 1168 + return __bpf_fill_alu32_reg(self, 
BPF_MOV); 1169 + } 1170 + 1171 + static int bpf_fill_alu32_and_reg(struct bpf_test *self) 1172 + { 1173 + return __bpf_fill_alu32_reg(self, BPF_AND); 1174 + } 1175 + 1176 + static int bpf_fill_alu32_or_reg(struct bpf_test *self) 1177 + { 1178 + return __bpf_fill_alu32_reg(self, BPF_OR); 1179 + } 1180 + 1181 + static int bpf_fill_alu32_xor_reg(struct bpf_test *self) 1182 + { 1183 + return __bpf_fill_alu32_reg(self, BPF_XOR); 1184 + } 1185 + 1186 + static int bpf_fill_alu32_add_reg(struct bpf_test *self) 1187 + { 1188 + return __bpf_fill_alu32_reg(self, BPF_ADD); 1189 + } 1190 + 1191 + static int bpf_fill_alu32_sub_reg(struct bpf_test *self) 1192 + { 1193 + return __bpf_fill_alu32_reg(self, BPF_SUB); 1194 + } 1195 + 1196 + static int bpf_fill_alu32_mul_reg(struct bpf_test *self) 1197 + { 1198 + return __bpf_fill_alu32_reg(self, BPF_MUL); 1199 + } 1200 + 1201 + static int bpf_fill_alu32_div_reg(struct bpf_test *self) 1202 + { 1203 + return __bpf_fill_alu32_reg(self, BPF_DIV); 1204 + } 1205 + 1206 + static int bpf_fill_alu32_mod_reg(struct bpf_test *self) 1207 + { 1208 + return __bpf_fill_alu32_reg(self, BPF_MOD); 1209 + } 1210 + 1211 + /* 1212 + * Test JITs that implement complex ALU operations as function 1213 + * calls, and must re-arrange operands for argument passing. 
1214 + */
1215 + static int __bpf_fill_alu_imm_regs(struct bpf_test *self, u8 op, bool alu32)
1216 + {
1217 + 	int len = 2 + 10 * 10;
1218 + 	struct bpf_insn *insns;
1219 + 	u64 dst, res;
1220 + 	int i = 0;
1221 + 	u32 imm;
1222 + 	int rd;
1223 + 
1224 + 	insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL);
1225 + 	if (!insns)
1226 + 		return -ENOMEM;
1227 + 
1228 + 	/* Operand and result values according to operation */
1229 + 	if (alu32)
1230 + 		dst = 0x76543210U;
1231 + 	else
1232 + 		dst = 0x7edcba9876543210ULL;
1233 + 	imm = 0x01234567U;
1234 + 
1235 + 	if (op == BPF_LSH || op == BPF_RSH || op == BPF_ARSH)
1236 + 		imm &= 31;
1237 + 
1238 + 	__bpf_alu_result(&res, dst, imm, op);
1239 + 
1240 + 	if (alu32)
1241 + 		res = (u32)res;
1242 + 
1243 + 	/* Check all operand registers */
1244 + 	for (rd = R0; rd <= R9; rd++) {
1245 + 		i += __bpf_ld_imm64(&insns[i], rd, dst);
1246 + 
1247 + 		if (alu32)
1248 + 			insns[i++] = BPF_ALU32_IMM(op, rd, imm);
1249 + 		else
1250 + 			insns[i++] = BPF_ALU64_IMM(op, rd, imm);
1251 + 
1252 + 		insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, res, 2);
1253 + 		insns[i++] = BPF_MOV64_IMM(R0, __LINE__);
1254 + 		insns[i++] = BPF_EXIT_INSN();
1255 + 
1256 + 		insns[i++] = BPF_ALU64_IMM(BPF_RSH, rd, 32);
1257 + 		insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, res >> 32, 2);
1258 + 		insns[i++] = BPF_MOV64_IMM(R0, __LINE__);
1259 + 		insns[i++] = BPF_EXIT_INSN();
1260 + 	}
1261 + 
1262 + 	insns[i++] = BPF_MOV64_IMM(R0, 1);
1263 + 	insns[i++] = BPF_EXIT_INSN();
1264 + 
1265 + 	self->u.ptr.insns = insns;
1266 + 	self->u.ptr.len = len;
1267 + 	BUG_ON(i != len);
1268 + 
1269 + 	return 0;
1270 + }
1271 + 
1272 + /* ALU64 K registers */
1273 + static int bpf_fill_alu64_mov_imm_regs(struct bpf_test *self)
1274 + {
1275 + 	return __bpf_fill_alu_imm_regs(self, BPF_MOV, false);
1276 + }
1277 + 
1278 + static int bpf_fill_alu64_and_imm_regs(struct bpf_test *self)
1279 + {
1280 + 	return __bpf_fill_alu_imm_regs(self, BPF_AND, false);
1281 + }
1282 + 
1283 + static int bpf_fill_alu64_or_imm_regs(struct bpf_test *self)
1284 + {
1285 + 
return __bpf_fill_alu_imm_regs(self, BPF_OR, false); 1286 + } 1287 + 1288 + static int bpf_fill_alu64_xor_imm_regs(struct bpf_test *self) 1289 + { 1290 + return __bpf_fill_alu_imm_regs(self, BPF_XOR, false); 1291 + } 1292 + 1293 + static int bpf_fill_alu64_lsh_imm_regs(struct bpf_test *self) 1294 + { 1295 + return __bpf_fill_alu_imm_regs(self, BPF_LSH, false); 1296 + } 1297 + 1298 + static int bpf_fill_alu64_rsh_imm_regs(struct bpf_test *self) 1299 + { 1300 + return __bpf_fill_alu_imm_regs(self, BPF_RSH, false); 1301 + } 1302 + 1303 + static int bpf_fill_alu64_arsh_imm_regs(struct bpf_test *self) 1304 + { 1305 + return __bpf_fill_alu_imm_regs(self, BPF_ARSH, false); 1306 + } 1307 + 1308 + static int bpf_fill_alu64_add_imm_regs(struct bpf_test *self) 1309 + { 1310 + return __bpf_fill_alu_imm_regs(self, BPF_ADD, false); 1311 + } 1312 + 1313 + static int bpf_fill_alu64_sub_imm_regs(struct bpf_test *self) 1314 + { 1315 + return __bpf_fill_alu_imm_regs(self, BPF_SUB, false); 1316 + } 1317 + 1318 + static int bpf_fill_alu64_mul_imm_regs(struct bpf_test *self) 1319 + { 1320 + return __bpf_fill_alu_imm_regs(self, BPF_MUL, false); 1321 + } 1322 + 1323 + static int bpf_fill_alu64_div_imm_regs(struct bpf_test *self) 1324 + { 1325 + return __bpf_fill_alu_imm_regs(self, BPF_DIV, false); 1326 + } 1327 + 1328 + static int bpf_fill_alu64_mod_imm_regs(struct bpf_test *self) 1329 + { 1330 + return __bpf_fill_alu_imm_regs(self, BPF_MOD, false); 1331 + } 1332 + 1333 + /* ALU32 K registers */ 1334 + static int bpf_fill_alu32_mov_imm_regs(struct bpf_test *self) 1335 + { 1336 + return __bpf_fill_alu_imm_regs(self, BPF_MOV, true); 1337 + } 1338 + 1339 + static int bpf_fill_alu32_and_imm_regs(struct bpf_test *self) 1340 + { 1341 + return __bpf_fill_alu_imm_regs(self, BPF_AND, true); 1342 + } 1343 + 1344 + static int bpf_fill_alu32_or_imm_regs(struct bpf_test *self) 1345 + { 1346 + return __bpf_fill_alu_imm_regs(self, BPF_OR, true); 1347 + } 1348 + 1349 + static int 
bpf_fill_alu32_xor_imm_regs(struct bpf_test *self) 1350 + { 1351 + return __bpf_fill_alu_imm_regs(self, BPF_XOR, true); 1352 + } 1353 + 1354 + static int bpf_fill_alu32_lsh_imm_regs(struct bpf_test *self) 1355 + { 1356 + return __bpf_fill_alu_imm_regs(self, BPF_LSH, true); 1357 + } 1358 + 1359 + static int bpf_fill_alu32_rsh_imm_regs(struct bpf_test *self) 1360 + { 1361 + return __bpf_fill_alu_imm_regs(self, BPF_RSH, true); 1362 + } 1363 + 1364 + static int bpf_fill_alu32_arsh_imm_regs(struct bpf_test *self) 1365 + { 1366 + return __bpf_fill_alu_imm_regs(self, BPF_ARSH, true); 1367 + } 1368 + 1369 + static int bpf_fill_alu32_add_imm_regs(struct bpf_test *self) 1370 + { 1371 + return __bpf_fill_alu_imm_regs(self, BPF_ADD, true); 1372 + } 1373 + 1374 + static int bpf_fill_alu32_sub_imm_regs(struct bpf_test *self) 1375 + { 1376 + return __bpf_fill_alu_imm_regs(self, BPF_SUB, true); 1377 + } 1378 + 1379 + static int bpf_fill_alu32_mul_imm_regs(struct bpf_test *self) 1380 + { 1381 + return __bpf_fill_alu_imm_regs(self, BPF_MUL, true); 1382 + } 1383 + 1384 + static int bpf_fill_alu32_div_imm_regs(struct bpf_test *self) 1385 + { 1386 + return __bpf_fill_alu_imm_regs(self, BPF_DIV, true); 1387 + } 1388 + 1389 + static int bpf_fill_alu32_mod_imm_regs(struct bpf_test *self) 1390 + { 1391 + return __bpf_fill_alu_imm_regs(self, BPF_MOD, true); 1392 + } 1393 + 1394 + /* 1395 + * Test JITs that implement complex ALU operations as function 1396 + * calls, and must re-arrange operands for argument passing. 
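One subtlety when testing all register pairs: when the destination and source register indices coincide, the second immediate load overwrites the first, so the operation really computes op(src, src) rather than op(dst, src). A minimal sketch of that expected-value rule (`reg_pair_expected` and `op_xor` are names invented here for illustration):

```c
#include <assert.h>
#include <stdint.h>

// Models the operand-aliasing rule: rd is loaded with dst, then rs with
// src; when rd == rs the second load wins, so the expected result is
// op(src, src) -- the "same" value -- not op(dst, src).
static uint64_t op_xor(uint64_t a, uint64_t b)
{
	return a ^ b;
}

static uint64_t reg_pair_expected(int rd, int rs, uint64_t dst, uint64_t src,
				  uint64_t (*op)(uint64_t, uint64_t))
{
	if (rd == rs)
		return op(src, src);	// aliased: both operands hold src
	return op(dst, src);
}
```
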
1397 + */
1398 + static int __bpf_fill_alu_reg_pairs(struct bpf_test *self, u8 op, bool alu32)
1399 + {
1400 + 	int len = 2 + 10 * 10 * 12;
1401 + 	u64 dst, src, res, same;
1402 + 	struct bpf_insn *insns;
1403 + 	int rd, rs;
1404 + 	int i = 0;
1405 + 
1406 + 	insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL);
1407 + 	if (!insns)
1408 + 		return -ENOMEM;
1409 + 
1410 + 	/* Operand and result values according to operation */
1411 + 	if (alu32) {
1412 + 		dst = 0x76543210U;
1413 + 		src = 0x01234567U;
1414 + 	} else {
1415 + 		dst = 0x7edcba9876543210ULL;
1416 + 		src = 0x0123456789abcdefULL;
1417 + 	}
1418 + 
1419 + 	if (op == BPF_LSH || op == BPF_RSH || op == BPF_ARSH)
1420 + 		src &= 31;
1421 + 
1422 + 	__bpf_alu_result(&res, dst, src, op);
1423 + 	__bpf_alu_result(&same, src, src, op);
1424 + 
1425 + 	if (alu32) {
1426 + 		res = (u32)res;
1427 + 		same = (u32)same;
1428 + 	}
1429 + 
1430 + 	/* Check all combinations of operand registers */
1431 + 	for (rd = R0; rd <= R9; rd++) {
1432 + 		for (rs = R0; rs <= R9; rs++) {
1433 + 			u64 val = rd == rs ?
same : res; 1434 + 1435 + i += __bpf_ld_imm64(&insns[i], rd, dst); 1436 + i += __bpf_ld_imm64(&insns[i], rs, src); 1437 + 1438 + if (alu32) 1439 + insns[i++] = BPF_ALU32_REG(op, rd, rs); 1440 + else 1441 + insns[i++] = BPF_ALU64_REG(op, rd, rs); 1442 + 1443 + insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, val, 2); 1444 + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); 1445 + insns[i++] = BPF_EXIT_INSN(); 1446 + 1447 + insns[i++] = BPF_ALU64_IMM(BPF_RSH, rd, 32); 1448 + insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, val >> 32, 2); 1449 + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); 1450 + insns[i++] = BPF_EXIT_INSN(); 1451 + } 1452 + } 1453 + 1454 + insns[i++] = BPF_MOV64_IMM(R0, 1); 1455 + insns[i++] = BPF_EXIT_INSN(); 1456 + 1457 + self->u.ptr.insns = insns; 1458 + self->u.ptr.len = len; 1459 + BUG_ON(i != len); 1460 + 1461 + return 0; 1462 + } 1463 + 1464 + /* ALU64 X register combinations */ 1465 + static int bpf_fill_alu64_mov_reg_pairs(struct bpf_test *self) 1466 + { 1467 + return __bpf_fill_alu_reg_pairs(self, BPF_MOV, false); 1468 + } 1469 + 1470 + static int bpf_fill_alu64_and_reg_pairs(struct bpf_test *self) 1471 + { 1472 + return __bpf_fill_alu_reg_pairs(self, BPF_AND, false); 1473 + } 1474 + 1475 + static int bpf_fill_alu64_or_reg_pairs(struct bpf_test *self) 1476 + { 1477 + return __bpf_fill_alu_reg_pairs(self, BPF_OR, false); 1478 + } 1479 + 1480 + static int bpf_fill_alu64_xor_reg_pairs(struct bpf_test *self) 1481 + { 1482 + return __bpf_fill_alu_reg_pairs(self, BPF_XOR, false); 1483 + } 1484 + 1485 + static int bpf_fill_alu64_lsh_reg_pairs(struct bpf_test *self) 1486 + { 1487 + return __bpf_fill_alu_reg_pairs(self, BPF_LSH, false); 1488 + } 1489 + 1490 + static int bpf_fill_alu64_rsh_reg_pairs(struct bpf_test *self) 1491 + { 1492 + return __bpf_fill_alu_reg_pairs(self, BPF_RSH, false); 1493 + } 1494 + 1495 + static int bpf_fill_alu64_arsh_reg_pairs(struct bpf_test *self) 1496 + { 1497 + return __bpf_fill_alu_reg_pairs(self, BPF_ARSH, false); 1498 + } 1499 + 1500 + static 
int bpf_fill_alu64_add_reg_pairs(struct bpf_test *self) 1501 + { 1502 + return __bpf_fill_alu_reg_pairs(self, BPF_ADD, false); 1503 + } 1504 + 1505 + static int bpf_fill_alu64_sub_reg_pairs(struct bpf_test *self) 1506 + { 1507 + return __bpf_fill_alu_reg_pairs(self, BPF_SUB, false); 1508 + } 1509 + 1510 + static int bpf_fill_alu64_mul_reg_pairs(struct bpf_test *self) 1511 + { 1512 + return __bpf_fill_alu_reg_pairs(self, BPF_MUL, false); 1513 + } 1514 + 1515 + static int bpf_fill_alu64_div_reg_pairs(struct bpf_test *self) 1516 + { 1517 + return __bpf_fill_alu_reg_pairs(self, BPF_DIV, false); 1518 + } 1519 + 1520 + static int bpf_fill_alu64_mod_reg_pairs(struct bpf_test *self) 1521 + { 1522 + return __bpf_fill_alu_reg_pairs(self, BPF_MOD, false); 1523 + } 1524 + 1525 + /* ALU32 X register combinations */ 1526 + static int bpf_fill_alu32_mov_reg_pairs(struct bpf_test *self) 1527 + { 1528 + return __bpf_fill_alu_reg_pairs(self, BPF_MOV, true); 1529 + } 1530 + 1531 + static int bpf_fill_alu32_and_reg_pairs(struct bpf_test *self) 1532 + { 1533 + return __bpf_fill_alu_reg_pairs(self, BPF_AND, true); 1534 + } 1535 + 1536 + static int bpf_fill_alu32_or_reg_pairs(struct bpf_test *self) 1537 + { 1538 + return __bpf_fill_alu_reg_pairs(self, BPF_OR, true); 1539 + } 1540 + 1541 + static int bpf_fill_alu32_xor_reg_pairs(struct bpf_test *self) 1542 + { 1543 + return __bpf_fill_alu_reg_pairs(self, BPF_XOR, true); 1544 + } 1545 + 1546 + static int bpf_fill_alu32_lsh_reg_pairs(struct bpf_test *self) 1547 + { 1548 + return __bpf_fill_alu_reg_pairs(self, BPF_LSH, true); 1549 + } 1550 + 1551 + static int bpf_fill_alu32_rsh_reg_pairs(struct bpf_test *self) 1552 + { 1553 + return __bpf_fill_alu_reg_pairs(self, BPF_RSH, true); 1554 + } 1555 + 1556 + static int bpf_fill_alu32_arsh_reg_pairs(struct bpf_test *self) 1557 + { 1558 + return __bpf_fill_alu_reg_pairs(self, BPF_ARSH, true); 1559 + } 1560 + 1561 + static int bpf_fill_alu32_add_reg_pairs(struct bpf_test *self) 1562 + { 1563 + return 
__bpf_fill_alu_reg_pairs(self, BPF_ADD, true); 1564 + } 1565 + 1566 + static int bpf_fill_alu32_sub_reg_pairs(struct bpf_test *self) 1567 + { 1568 + return __bpf_fill_alu_reg_pairs(self, BPF_SUB, true); 1569 + } 1570 + 1571 + static int bpf_fill_alu32_mul_reg_pairs(struct bpf_test *self) 1572 + { 1573 + return __bpf_fill_alu_reg_pairs(self, BPF_MUL, true); 1574 + } 1575 + 1576 + static int bpf_fill_alu32_div_reg_pairs(struct bpf_test *self) 1577 + { 1578 + return __bpf_fill_alu_reg_pairs(self, BPF_DIV, true); 1579 + } 1580 + 1581 + static int bpf_fill_alu32_mod_reg_pairs(struct bpf_test *self) 1582 + { 1583 + return __bpf_fill_alu_reg_pairs(self, BPF_MOD, true); 1584 + } 1585 + 1586 + /* 1587 + * Exhaustive tests of atomic operations for all power-of-two operand 1588 + * magnitudes, both for positive and negative values. 1589 + */ 1590 + 1591 + static int __bpf_emit_atomic64(struct bpf_test *self, void *arg, 1592 + struct bpf_insn *insns, s64 dst, s64 src) 1593 + { 1594 + int op = *(int *)arg; 1595 + u64 keep, fetch, res; 1596 + int i = 0; 1597 + 1598 + if (!insns) 1599 + return 21; 1600 + 1601 + switch (op) { 1602 + case BPF_XCHG: 1603 + res = src; 1604 + break; 1605 + default: 1606 + __bpf_alu_result(&res, dst, src, BPF_OP(op)); 1607 + } 1608 + 1609 + keep = 0x0123456789abcdefULL; 1610 + if (op & BPF_FETCH) 1611 + fetch = dst; 1612 + else 1613 + fetch = src; 1614 + 1615 + i += __bpf_ld_imm64(&insns[i], R0, keep); 1616 + i += __bpf_ld_imm64(&insns[i], R1, dst); 1617 + i += __bpf_ld_imm64(&insns[i], R2, src); 1618 + i += __bpf_ld_imm64(&insns[i], R3, res); 1619 + i += __bpf_ld_imm64(&insns[i], R4, fetch); 1620 + i += __bpf_ld_imm64(&insns[i], R5, keep); 1621 + 1622 + insns[i++] = BPF_STX_MEM(BPF_DW, R10, R1, -8); 1623 + insns[i++] = BPF_ATOMIC_OP(BPF_DW, op, R10, R2, -8); 1624 + insns[i++] = BPF_LDX_MEM(BPF_DW, R1, R10, -8); 1625 + 1626 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); 1627 + insns[i++] = BPF_EXIT_INSN(); 1628 + 1629 + insns[i++] = 
BPF_JMP_REG(BPF_JEQ, R2, R4, 1); 1630 + insns[i++] = BPF_EXIT_INSN(); 1631 + 1632 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R5, 1); 1633 + insns[i++] = BPF_EXIT_INSN(); 1634 + 1635 + return i; 1636 + } 1637 + 1638 + static int __bpf_emit_atomic32(struct bpf_test *self, void *arg, 1639 + struct bpf_insn *insns, s64 dst, s64 src) 1640 + { 1641 + int op = *(int *)arg; 1642 + u64 keep, fetch, res; 1643 + int i = 0; 1644 + 1645 + if (!insns) 1646 + return 21; 1647 + 1648 + switch (op) { 1649 + case BPF_XCHG: 1650 + res = src; 1651 + break; 1652 + default: 1653 + __bpf_alu_result(&res, (u32)dst, (u32)src, BPF_OP(op)); 1654 + } 1655 + 1656 + keep = 0x0123456789abcdefULL; 1657 + if (op & BPF_FETCH) 1658 + fetch = (u32)dst; 1659 + else 1660 + fetch = src; 1661 + 1662 + i += __bpf_ld_imm64(&insns[i], R0, keep); 1663 + i += __bpf_ld_imm64(&insns[i], R1, (u32)dst); 1664 + i += __bpf_ld_imm64(&insns[i], R2, src); 1665 + i += __bpf_ld_imm64(&insns[i], R3, (u32)res); 1666 + i += __bpf_ld_imm64(&insns[i], R4, fetch); 1667 + i += __bpf_ld_imm64(&insns[i], R5, keep); 1668 + 1669 + insns[i++] = BPF_STX_MEM(BPF_W, R10, R1, -4); 1670 + insns[i++] = BPF_ATOMIC_OP(BPF_W, op, R10, R2, -4); 1671 + insns[i++] = BPF_LDX_MEM(BPF_W, R1, R10, -4); 1672 + 1673 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); 1674 + insns[i++] = BPF_EXIT_INSN(); 1675 + 1676 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R4, 1); 1677 + insns[i++] = BPF_EXIT_INSN(); 1678 + 1679 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R5, 1); 1680 + insns[i++] = BPF_EXIT_INSN(); 1681 + 1682 + return i; 1683 + } 1684 + 1685 + static int __bpf_emit_cmpxchg64(struct bpf_test *self, void *arg, 1686 + struct bpf_insn *insns, s64 dst, s64 src) 1687 + { 1688 + int i = 0; 1689 + 1690 + if (!insns) 1691 + return 23; 1692 + 1693 + i += __bpf_ld_imm64(&insns[i], R0, ~dst); 1694 + i += __bpf_ld_imm64(&insns[i], R1, dst); 1695 + i += __bpf_ld_imm64(&insns[i], R2, src); 1696 + 1697 + /* Result unsuccessful */ 1698 + insns[i++] = BPF_STX_MEM(BPF_DW, R10, 
R1, -8); 1699 + insns[i++] = BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -8); 1700 + insns[i++] = BPF_LDX_MEM(BPF_DW, R3, R10, -8); 1701 + 1702 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 2); 1703 + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); 1704 + insns[i++] = BPF_EXIT_INSN(); 1705 + 1706 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R3, 2); 1707 + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); 1708 + insns[i++] = BPF_EXIT_INSN(); 1709 + 1710 + /* Result successful */ 1711 + insns[i++] = BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -8); 1712 + insns[i++] = BPF_LDX_MEM(BPF_DW, R3, R10, -8); 1713 + 1714 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R3, 2); 1715 + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); 1716 + insns[i++] = BPF_EXIT_INSN(); 1717 + 1718 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2); 1719 + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); 1720 + insns[i++] = BPF_EXIT_INSN(); 1721 + 1722 + return i; 1723 + } 1724 + 1725 + static int __bpf_emit_cmpxchg32(struct bpf_test *self, void *arg, 1726 + struct bpf_insn *insns, s64 dst, s64 src) 1727 + { 1728 + int i = 0; 1729 + 1730 + if (!insns) 1731 + return 27; 1732 + 1733 + i += __bpf_ld_imm64(&insns[i], R0, ~dst); 1734 + i += __bpf_ld_imm64(&insns[i], R1, (u32)dst); 1735 + i += __bpf_ld_imm64(&insns[i], R2, src); 1736 + 1737 + /* Result unsuccessful */ 1738 + insns[i++] = BPF_STX_MEM(BPF_W, R10, R1, -4); 1739 + insns[i++] = BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R2, -4); 1740 + insns[i++] = BPF_ZEXT_REG(R0), /* Zext always inserted by verifier */ 1741 + insns[i++] = BPF_LDX_MEM(BPF_W, R3, R10, -4); 1742 + 1743 + insns[i++] = BPF_JMP32_REG(BPF_JEQ, R1, R3, 2); 1744 + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); 1745 + insns[i++] = BPF_EXIT_INSN(); 1746 + 1747 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R3, 2); 1748 + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); 1749 + insns[i++] = BPF_EXIT_INSN(); 1750 + 1751 + /* Result successful */ 1752 + i += __bpf_ld_imm64(&insns[i], R0, dst); 1753 + insns[i++] = BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, 
R2, -4); 1754 + insns[i++] = BPF_ZEXT_REG(R0), /* Zext always inserted by verifier */ 1755 + insns[i++] = BPF_LDX_MEM(BPF_W, R3, R10, -4); 1756 + 1757 + insns[i++] = BPF_JMP32_REG(BPF_JEQ, R2, R3, 2); 1758 + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); 1759 + insns[i++] = BPF_EXIT_INSN(); 1760 + 1761 + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2); 1762 + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); 1763 + insns[i++] = BPF_EXIT_INSN(); 1764 + 1765 + return i; 1766 + } 1767 + 1768 + static int __bpf_fill_atomic64(struct bpf_test *self, int op) 1769 + { 1770 + return __bpf_fill_pattern(self, &op, 64, 64, 1771 + 0, PATTERN_BLOCK2, 1772 + &__bpf_emit_atomic64); 1773 + } 1774 + 1775 + static int __bpf_fill_atomic32(struct bpf_test *self, int op) 1776 + { 1777 + return __bpf_fill_pattern(self, &op, 64, 64, 1778 + 0, PATTERN_BLOCK2, 1779 + &__bpf_emit_atomic32); 1780 + } 1781 + 1782 + /* 64-bit atomic operations */ 1783 + static int bpf_fill_atomic64_add(struct bpf_test *self) 1784 + { 1785 + return __bpf_fill_atomic64(self, BPF_ADD); 1786 + } 1787 + 1788 + static int bpf_fill_atomic64_and(struct bpf_test *self) 1789 + { 1790 + return __bpf_fill_atomic64(self, BPF_AND); 1791 + } 1792 + 1793 + static int bpf_fill_atomic64_or(struct bpf_test *self) 1794 + { 1795 + return __bpf_fill_atomic64(self, BPF_OR); 1796 + } 1797 + 1798 + static int bpf_fill_atomic64_xor(struct bpf_test *self) 1799 + { 1800 + return __bpf_fill_atomic64(self, BPF_XOR); 1801 + } 1802 + 1803 + static int bpf_fill_atomic64_add_fetch(struct bpf_test *self) 1804 + { 1805 + return __bpf_fill_atomic64(self, BPF_ADD | BPF_FETCH); 1806 + } 1807 + 1808 + static int bpf_fill_atomic64_and_fetch(struct bpf_test *self) 1809 + { 1810 + return __bpf_fill_atomic64(self, BPF_AND | BPF_FETCH); 1811 + } 1812 + 1813 + static int bpf_fill_atomic64_or_fetch(struct bpf_test *self) 1814 + { 1815 + return __bpf_fill_atomic64(self, BPF_OR | BPF_FETCH); 1816 + } 1817 + 1818 + static int bpf_fill_atomic64_xor_fetch(struct bpf_test *self) 
1819 + { 1820 + return __bpf_fill_atomic64(self, BPF_XOR | BPF_FETCH); 1821 + } 1822 + 1823 + static int bpf_fill_atomic64_xchg(struct bpf_test *self) 1824 + { 1825 + return __bpf_fill_atomic64(self, BPF_XCHG); 1826 + } 1827 + 1828 + static int bpf_fill_cmpxchg64(struct bpf_test *self) 1829 + { 1830 + return __bpf_fill_pattern(self, NULL, 64, 64, 0, PATTERN_BLOCK2, 1831 + &__bpf_emit_cmpxchg64); 1832 + } 1833 + 1834 + /* 32-bit atomic operations */ 1835 + static int bpf_fill_atomic32_add(struct bpf_test *self) 1836 + { 1837 + return __bpf_fill_atomic32(self, BPF_ADD); 1838 + } 1839 + 1840 + static int bpf_fill_atomic32_and(struct bpf_test *self) 1841 + { 1842 + return __bpf_fill_atomic32(self, BPF_AND); 1843 + } 1844 + 1845 + static int bpf_fill_atomic32_or(struct bpf_test *self) 1846 + { 1847 + return __bpf_fill_atomic32(self, BPF_OR); 1848 + } 1849 + 1850 + static int bpf_fill_atomic32_xor(struct bpf_test *self) 1851 + { 1852 + return __bpf_fill_atomic32(self, BPF_XOR); 1853 + } 1854 + 1855 + static int bpf_fill_atomic32_add_fetch(struct bpf_test *self) 1856 + { 1857 + return __bpf_fill_atomic32(self, BPF_ADD | BPF_FETCH); 1858 + } 1859 + 1860 + static int bpf_fill_atomic32_and_fetch(struct bpf_test *self) 1861 + { 1862 + return __bpf_fill_atomic32(self, BPF_AND | BPF_FETCH); 1863 + } 1864 + 1865 + static int bpf_fill_atomic32_or_fetch(struct bpf_test *self) 1866 + { 1867 + return __bpf_fill_atomic32(self, BPF_OR | BPF_FETCH); 1868 + } 1869 + 1870 + static int bpf_fill_atomic32_xor_fetch(struct bpf_test *self) 1871 + { 1872 + return __bpf_fill_atomic32(self, BPF_XOR | BPF_FETCH); 1873 + } 1874 + 1875 + static int bpf_fill_atomic32_xchg(struct bpf_test *self) 1876 + { 1877 + return __bpf_fill_atomic32(self, BPF_XCHG); 1878 + } 1879 + 1880 + static int bpf_fill_cmpxchg32(struct bpf_test *self) 1881 + { 1882 + return __bpf_fill_pattern(self, NULL, 64, 64, 0, PATTERN_BLOCK2, 1883 + &__bpf_emit_cmpxchg32); 1884 + } 1885 + 1886 + /* 1887 + * Test JITs that implement 
ATOMIC operations as function calls or 1888 + * other primitives, and must re-arrange operands for argument passing. 1889 + */ 1890 + static int __bpf_fill_atomic_reg_pairs(struct bpf_test *self, u8 width, u8 op) 1891 + { 1892 + struct bpf_insn *insn; 1893 + int len = 2 + 34 * 10 * 10; 1894 + u64 mem, upd, res; 1895 + int rd, rs, i = 0; 1896 + 1897 + insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); 1898 + if (!insn) 1899 + return -ENOMEM; 1900 + 1901 + /* Operand and memory values */ 1902 + if (width == BPF_DW) { 1903 + mem = 0x0123456789abcdefULL; 1904 + upd = 0xfedcba9876543210ULL; 1905 + } else { /* BPF_W */ 1906 + mem = 0x01234567U; 1907 + upd = 0x76543210U; 1908 + } 1909 + 1910 + /* Memory updated according to operation */ 1911 + switch (op) { 1912 + case BPF_XCHG: 1913 + res = upd; 1914 + break; 1915 + case BPF_CMPXCHG: 1916 + res = mem; 1917 + break; 1918 + default: 1919 + __bpf_alu_result(&res, mem, upd, BPF_OP(op)); 1920 + } 1921 + 1922 + /* Test all operand registers */ 1923 + for (rd = R0; rd <= R9; rd++) { 1924 + for (rs = R0; rs <= R9; rs++) { 1925 + u64 cmp, src; 1926 + 1927 + /* Initialize value in memory */ 1928 + i += __bpf_ld_imm64(&insn[i], R0, mem); 1929 + insn[i++] = BPF_STX_MEM(width, R10, R0, -8); 1930 + 1931 + /* Initialize registers in order */ 1932 + i += __bpf_ld_imm64(&insn[i], R0, ~mem); 1933 + i += __bpf_ld_imm64(&insn[i], rs, upd); 1934 + insn[i++] = BPF_MOV64_REG(rd, R10); 1935 + 1936 + /* Perform atomic operation */ 1937 + insn[i++] = BPF_ATOMIC_OP(width, op, rd, rs, -8); 1938 + if (op == BPF_CMPXCHG && width == BPF_W) 1939 + insn[i++] = BPF_ZEXT_REG(R0); 1940 + 1941 + /* Check R0 register value */ 1942 + if (op == BPF_CMPXCHG) 1943 + cmp = mem; /* Expect value from memory */ 1944 + else if (R0 == rd || R0 == rs) 1945 + cmp = 0; /* Aliased, checked below */ 1946 + else 1947 + cmp = ~mem; /* Expect value to be preserved */ 1948 + if (cmp) { 1949 + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, R0, 1950 + (u32)cmp, 2); 1951 + insn[i++] = 
BPF_MOV32_IMM(R0, __LINE__); 1952 + insn[i++] = BPF_EXIT_INSN(); 1953 + insn[i++] = BPF_ALU64_IMM(BPF_RSH, R0, 32); 1954 + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, R0, 1955 + cmp >> 32, 2); 1956 + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); 1957 + insn[i++] = BPF_EXIT_INSN(); 1958 + } 1959 + 1960 + /* Check source register value */ 1961 + if (rs == R0 && op == BPF_CMPXCHG) 1962 + src = 0; /* Aliased with R0, checked above */ 1963 + else if (rs == rd && (op == BPF_CMPXCHG || 1964 + !(op & BPF_FETCH))) 1965 + src = 0; /* Aliased with rd, checked below */ 1966 + else if (op == BPF_CMPXCHG) 1967 + src = upd; /* Expect value to be preserved */ 1968 + else if (op & BPF_FETCH) 1969 + src = mem; /* Expect fetched value from mem */ 1970 + else /* no fetch */ 1971 + src = upd; /* Expect value to be preserved */ 1972 + if (src) { 1973 + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, rs, 1974 + (u32)src, 2); 1975 + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); 1976 + insn[i++] = BPF_EXIT_INSN(); 1977 + insn[i++] = BPF_ALU64_IMM(BPF_RSH, rs, 32); 1978 + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, rs, 1979 + src >> 32, 2); 1980 + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); 1981 + insn[i++] = BPF_EXIT_INSN(); 1982 + } 1983 + 1984 + /* Check destination register value */ 1985 + if (!(rd == R0 && op == BPF_CMPXCHG) && 1986 + !(rd == rs && (op & BPF_FETCH))) { 1987 + insn[i++] = BPF_JMP_REG(BPF_JEQ, rd, R10, 2); 1988 + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); 1989 + insn[i++] = BPF_EXIT_INSN(); 1990 + } 1991 + 1992 + /* Check value in memory */ 1993 + if (rs != rd) { /* No aliasing */ 1994 + i += __bpf_ld_imm64(&insn[i], R1, res); 1995 + } else if (op == BPF_XCHG) { /* Aliased, XCHG */ 1996 + insn[i++] = BPF_MOV64_REG(R1, R10); 1997 + } else if (op == BPF_CMPXCHG) { /* Aliased, CMPXCHG */ 1998 + i += __bpf_ld_imm64(&insn[i], R1, mem); 1999 + } else { /* Aliased, ALU oper */ 2000 + i += __bpf_ld_imm64(&insn[i], R1, mem); 2001 + insn[i++] = BPF_ALU64_REG(BPF_OP(op), R1, R10); 2002 + } 2003 + 2004 + insn[i++] = 
BPF_LDX_MEM(width, R0, R10, -8); 2005 + if (width == BPF_DW) 2006 + insn[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2); 2007 + else /* width == BPF_W */ 2008 + insn[i++] = BPF_JMP32_REG(BPF_JEQ, R0, R1, 2); 2009 + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); 2010 + insn[i++] = BPF_EXIT_INSN(); 2011 + } 2012 + } 2013 + 2014 + insn[i++] = BPF_MOV64_IMM(R0, 1); 2015 + insn[i++] = BPF_EXIT_INSN(); 2016 + 2017 + self->u.ptr.insns = insn; 2018 + self->u.ptr.len = i; 2019 + BUG_ON(i > len); 2020 + 2021 + return 0; 2022 + } 2023 + 2024 + /* 64-bit atomic register tests */ 2025 + static int bpf_fill_atomic64_add_reg_pairs(struct bpf_test *self) 2026 + { 2027 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_ADD); 2028 + } 2029 + 2030 + static int bpf_fill_atomic64_and_reg_pairs(struct bpf_test *self) 2031 + { 2032 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_AND); 2033 + } 2034 + 2035 + static int bpf_fill_atomic64_or_reg_pairs(struct bpf_test *self) 2036 + { 2037 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_OR); 2038 + } 2039 + 2040 + static int bpf_fill_atomic64_xor_reg_pairs(struct bpf_test *self) 2041 + { 2042 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_XOR); 2043 + } 2044 + 2045 + static int bpf_fill_atomic64_add_fetch_reg_pairs(struct bpf_test *self) 2046 + { 2047 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_ADD | BPF_FETCH); 2048 + } 2049 + 2050 + static int bpf_fill_atomic64_and_fetch_reg_pairs(struct bpf_test *self) 2051 + { 2052 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_AND | BPF_FETCH); 2053 + } 2054 + 2055 + static int bpf_fill_atomic64_or_fetch_reg_pairs(struct bpf_test *self) 2056 + { 2057 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_OR | BPF_FETCH); 2058 + } 2059 + 2060 + static int bpf_fill_atomic64_xor_fetch_reg_pairs(struct bpf_test *self) 2061 + { 2062 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_XOR | BPF_FETCH); 2063 + } 2064 + 2065 + static int bpf_fill_atomic64_xchg_reg_pairs(struct 
bpf_test *self) 2066 + { 2067 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_XCHG); 2068 + } 2069 + 2070 + static int bpf_fill_atomic64_cmpxchg_reg_pairs(struct bpf_test *self) 2071 + { 2072 + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_CMPXCHG); 2073 + } 2074 + 2075 + /* 32-bit atomic register tests */ 2076 + static int bpf_fill_atomic32_add_reg_pairs(struct bpf_test *self) 2077 + { 2078 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_ADD); 2079 + } 2080 + 2081 + static int bpf_fill_atomic32_and_reg_pairs(struct bpf_test *self) 2082 + { 2083 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_AND); 2084 + } 2085 + 2086 + static int bpf_fill_atomic32_or_reg_pairs(struct bpf_test *self) 2087 + { 2088 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_OR); 2089 + } 2090 + 2091 + static int bpf_fill_atomic32_xor_reg_pairs(struct bpf_test *self) 2092 + { 2093 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_XOR); 2094 + } 2095 + 2096 + static int bpf_fill_atomic32_add_fetch_reg_pairs(struct bpf_test *self) 2097 + { 2098 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_ADD | BPF_FETCH); 2099 + } 2100 + 2101 + static int bpf_fill_atomic32_and_fetch_reg_pairs(struct bpf_test *self) 2102 + { 2103 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_AND | BPF_FETCH); 2104 + } 2105 + 2106 + static int bpf_fill_atomic32_or_fetch_reg_pairs(struct bpf_test *self) 2107 + { 2108 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_OR | BPF_FETCH); 2109 + } 2110 + 2111 + static int bpf_fill_atomic32_xor_fetch_reg_pairs(struct bpf_test *self) 2112 + { 2113 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_XOR | BPF_FETCH); 2114 + } 2115 + 2116 + static int bpf_fill_atomic32_xchg_reg_pairs(struct bpf_test *self) 2117 + { 2118 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_XCHG); 2119 + } 2120 + 2121 + static int bpf_fill_atomic32_cmpxchg_reg_pairs(struct bpf_test *self) 2122 + { 2123 + return __bpf_fill_atomic_reg_pairs(self, BPF_W, 
BPF_CMPXCHG); 2124 + } 2125 + 2126 + /* 2127 + * Test the two-instruction 64-bit immediate load operation for all 2128 + * power-of-two magnitudes of the immediate operand. For each MSB, a block 2129 + * of immediate values centered around the power-of-two MSB are tested, 2130 + * both for positive and negative values. The test is designed to verify 2131 + * the operation for JITs that emit different code depending on the magnitude 2132 + * of the immediate value. This is often the case if the native instruction 2133 + * immediate field width is narrower than 32 bits. 2134 + */ 2135 + static int bpf_fill_ld_imm64(struct bpf_test *self) 2136 + { 2137 + int block = 64; /* Increase for more tests per MSB position */ 2138 + int len = 3 + 8 * 63 * block * 2; 2139 + struct bpf_insn *insn; 2140 + int bit, adj, sign; 2141 + int i = 0; 2142 + 2143 + insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); 2144 + if (!insn) 2145 + return -ENOMEM; 2146 + 2147 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0); 2148 + 2149 + for (bit = 0; bit <= 62; bit++) { 2150 + for (adj = -block / 2; adj < block / 2; adj++) { 2151 + for (sign = -1; sign <= 1; sign += 2) { 2152 + s64 imm = sign * ((1LL << bit) + adj); 2153 + 2154 + /* Perform operation */ 2155 + i += __bpf_ld_imm64(&insn[i], R1, imm); 2156 + 2157 + /* Load reference */ 2158 + insn[i++] = BPF_ALU32_IMM(BPF_MOV, R2, imm); 2159 + insn[i++] = BPF_ALU32_IMM(BPF_MOV, R3, 2160 + (u32)(imm >> 32)); 2161 + insn[i++] = BPF_ALU64_IMM(BPF_LSH, R3, 32); 2162 + insn[i++] = BPF_ALU64_REG(BPF_OR, R2, R3); 2163 + 2164 + /* Check result */ 2165 + insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1); 2166 + insn[i++] = BPF_EXIT_INSN(); 2167 + } 2168 + } 2169 + } 2170 + 2171 + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); 2172 + insn[i++] = BPF_EXIT_INSN(); 2173 + 2174 + self->u.ptr.insns = insn; 2175 + self->u.ptr.len = len; 2176 + BUG_ON(i != len); 2177 + 2178 + return 0; 2179 + } 2180 + 2181 + /* 2182 + * Exhaustive tests of JMP operations for all combinations 
of power-of-two 2183 + * magnitudes of the operands, both for positive and negative values. The 2184 + * test is designed to verify e.g. the JMP and JMP32 operations for JITs that 2185 + * emit different code depending on the magnitude of the immediate value. 2186 + */ 2187 + 2188 + static bool __bpf_match_jmp_cond(s64 v1, s64 v2, u8 op) 2189 + { 2190 + switch (op) { 2191 + case BPF_JSET: 2192 + return !!(v1 & v2); 2193 + case BPF_JEQ: 2194 + return v1 == v2; 2195 + case BPF_JNE: 2196 + return v1 != v2; 2197 + case BPF_JGT: 2198 + return (u64)v1 > (u64)v2; 2199 + case BPF_JGE: 2200 + return (u64)v1 >= (u64)v2; 2201 + case BPF_JLT: 2202 + return (u64)v1 < (u64)v2; 2203 + case BPF_JLE: 2204 + return (u64)v1 <= (u64)v2; 2205 + case BPF_JSGT: 2206 + return v1 > v2; 2207 + case BPF_JSGE: 2208 + return v1 >= v2; 2209 + case BPF_JSLT: 2210 + return v1 < v2; 2211 + case BPF_JSLE: 2212 + return v1 <= v2; 2213 + } 2214 + return false; 2215 + } 2216 + 2217 + static int __bpf_emit_jmp_imm(struct bpf_test *self, void *arg, 2218 + struct bpf_insn *insns, s64 dst, s64 imm) 2219 + { 2220 + int op = *(int *)arg; 2221 + 2222 + if (insns) { 2223 + bool match = __bpf_match_jmp_cond(dst, (s32)imm, op); 2224 + int i = 0; 2225 + 2226 + insns[i++] = BPF_ALU32_IMM(BPF_MOV, R0, match); 2227 + 2228 + i += __bpf_ld_imm64(&insns[i], R1, dst); 2229 + insns[i++] = BPF_JMP_IMM(op, R1, imm, 1); 2230 + if (!match) 2231 + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); 2232 + insns[i++] = BPF_EXIT_INSN(); 2233 + 2234 + return i; 2235 + } 2236 + 2237 + return 5 + 1; 2238 + } 2239 + 2240 + static int __bpf_emit_jmp32_imm(struct bpf_test *self, void *arg, 2241 + struct bpf_insn *insns, s64 dst, s64 imm) 2242 + { 2243 + int op = *(int *)arg; 2244 + 2245 + if (insns) { 2246 + bool match = __bpf_match_jmp_cond((s32)dst, (s32)imm, op); 2247 + int i = 0; 2248 + 2249 + i += __bpf_ld_imm64(&insns[i], R1, dst); 2250 + insns[i++] = BPF_JMP32_IMM(op, R1, imm, 1); 2251 + if (!match) 2252 + insns[i++] = 
BPF_JMP_IMM(BPF_JA, 0, 0, 1); 2253 + insns[i++] = BPF_EXIT_INSN(); 2254 + 2255 + return i; 2256 + } 2257 + 2258 + return 5; 2259 + } 2260 + 2261 + static int __bpf_emit_jmp_reg(struct bpf_test *self, void *arg, 2262 + struct bpf_insn *insns, s64 dst, s64 src) 2263 + { 2264 + int op = *(int *)arg; 2265 + 2266 + if (insns) { 2267 + bool match = __bpf_match_jmp_cond(dst, src, op); 2268 + int i = 0; 2269 + 2270 + i += __bpf_ld_imm64(&insns[i], R1, dst); 2271 + i += __bpf_ld_imm64(&insns[i], R2, src); 2272 + insns[i++] = BPF_JMP_REG(op, R1, R2, 1); 2273 + if (!match) 2274 + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); 2275 + insns[i++] = BPF_EXIT_INSN(); 2276 + 2277 + return i; 2278 + } 2279 + 2280 + return 7; 2281 + } 2282 + 2283 + static int __bpf_emit_jmp32_reg(struct bpf_test *self, void *arg, 2284 + struct bpf_insn *insns, s64 dst, s64 src) 2285 + { 2286 + int op = *(int *)arg; 2287 + 2288 + if (insns) { 2289 + bool match = __bpf_match_jmp_cond((s32)dst, (s32)src, op); 2290 + int i = 0; 2291 + 2292 + i += __bpf_ld_imm64(&insns[i], R1, dst); 2293 + i += __bpf_ld_imm64(&insns[i], R2, src); 2294 + insns[i++] = BPF_JMP32_REG(op, R1, R2, 1); 2295 + if (!match) 2296 + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); 2297 + insns[i++] = BPF_EXIT_INSN(); 2298 + 2299 + return i; 2300 + } 2301 + 2302 + return 7; 2303 + } 2304 + 2305 + static int __bpf_fill_jmp_imm(struct bpf_test *self, int op) 2306 + { 2307 + return __bpf_fill_pattern(self, &op, 64, 32, 2308 + PATTERN_BLOCK1, PATTERN_BLOCK2, 2309 + &__bpf_emit_jmp_imm); 2310 + } 2311 + 2312 + static int __bpf_fill_jmp32_imm(struct bpf_test *self, int op) 2313 + { 2314 + return __bpf_fill_pattern(self, &op, 64, 32, 2315 + PATTERN_BLOCK1, PATTERN_BLOCK2, 2316 + &__bpf_emit_jmp32_imm); 2317 + } 2318 + 2319 + static int __bpf_fill_jmp_reg(struct bpf_test *self, int op) 2320 + { 2321 + return __bpf_fill_pattern(self, &op, 64, 64, 2322 + PATTERN_BLOCK1, PATTERN_BLOCK2, 2323 + &__bpf_emit_jmp_reg); 2324 + } 2325 + 2326 + static int 
__bpf_fill_jmp32_reg(struct bpf_test *self, int op) 2327 + { 2328 + return __bpf_fill_pattern(self, &op, 64, 64, 2329 + PATTERN_BLOCK1, PATTERN_BLOCK2, 2330 + &__bpf_emit_jmp32_reg); 2331 + } 2332 + 2333 + /* JMP immediate tests */ 2334 + static int bpf_fill_jmp_jset_imm(struct bpf_test *self) 2335 + { 2336 + return __bpf_fill_jmp_imm(self, BPF_JSET); 2337 + } 2338 + 2339 + static int bpf_fill_jmp_jeq_imm(struct bpf_test *self) 2340 + { 2341 + return __bpf_fill_jmp_imm(self, BPF_JEQ); 2342 + } 2343 + 2344 + static int bpf_fill_jmp_jne_imm(struct bpf_test *self) 2345 + { 2346 + return __bpf_fill_jmp_imm(self, BPF_JNE); 2347 + } 2348 + 2349 + static int bpf_fill_jmp_jgt_imm(struct bpf_test *self) 2350 + { 2351 + return __bpf_fill_jmp_imm(self, BPF_JGT); 2352 + } 2353 + 2354 + static int bpf_fill_jmp_jge_imm(struct bpf_test *self) 2355 + { 2356 + return __bpf_fill_jmp_imm(self, BPF_JGE); 2357 + } 2358 + 2359 + static int bpf_fill_jmp_jlt_imm(struct bpf_test *self) 2360 + { 2361 + return __bpf_fill_jmp_imm(self, BPF_JLT); 2362 + } 2363 + 2364 + static int bpf_fill_jmp_jle_imm(struct bpf_test *self) 2365 + { 2366 + return __bpf_fill_jmp_imm(self, BPF_JLE); 2367 + } 2368 + 2369 + static int bpf_fill_jmp_jsgt_imm(struct bpf_test *self) 2370 + { 2371 + return __bpf_fill_jmp_imm(self, BPF_JSGT); 2372 + } 2373 + 2374 + static int bpf_fill_jmp_jsge_imm(struct bpf_test *self) 2375 + { 2376 + return __bpf_fill_jmp_imm(self, BPF_JSGE); 2377 + } 2378 + 2379 + static int bpf_fill_jmp_jslt_imm(struct bpf_test *self) 2380 + { 2381 + return __bpf_fill_jmp_imm(self, BPF_JSLT); 2382 + } 2383 + 2384 + static int bpf_fill_jmp_jsle_imm(struct bpf_test *self) 2385 + { 2386 + return __bpf_fill_jmp_imm(self, BPF_JSLE); 2387 + } 2388 + 2389 + /* JMP32 immediate tests */ 2390 + static int bpf_fill_jmp32_jset_imm(struct bpf_test *self) 2391 + { 2392 + return __bpf_fill_jmp32_imm(self, BPF_JSET); 2393 + } 2394 + 2395 + static int bpf_fill_jmp32_jeq_imm(struct bpf_test *self) 2396 + { 2397 + 
return __bpf_fill_jmp32_imm(self, BPF_JEQ); 2398 + } 2399 + 2400 + static int bpf_fill_jmp32_jne_imm(struct bpf_test *self) 2401 + { 2402 + return __bpf_fill_jmp32_imm(self, BPF_JNE); 2403 + } 2404 + 2405 + static int bpf_fill_jmp32_jgt_imm(struct bpf_test *self) 2406 + { 2407 + return __bpf_fill_jmp32_imm(self, BPF_JGT); 2408 + } 2409 + 2410 + static int bpf_fill_jmp32_jge_imm(struct bpf_test *self) 2411 + { 2412 + return __bpf_fill_jmp32_imm(self, BPF_JGE); 2413 + } 2414 + 2415 + static int bpf_fill_jmp32_jlt_imm(struct bpf_test *self) 2416 + { 2417 + return __bpf_fill_jmp32_imm(self, BPF_JLT); 2418 + } 2419 + 2420 + static int bpf_fill_jmp32_jle_imm(struct bpf_test *self) 2421 + { 2422 + return __bpf_fill_jmp32_imm(self, BPF_JLE); 2423 + } 2424 + 2425 + static int bpf_fill_jmp32_jsgt_imm(struct bpf_test *self) 2426 + { 2427 + return __bpf_fill_jmp32_imm(self, BPF_JSGT); 2428 + } 2429 + 2430 + static int bpf_fill_jmp32_jsge_imm(struct bpf_test *self) 2431 + { 2432 + return __bpf_fill_jmp32_imm(self, BPF_JSGE); 2433 + } 2434 + 2435 + static int bpf_fill_jmp32_jslt_imm(struct bpf_test *self) 2436 + { 2437 + return __bpf_fill_jmp32_imm(self, BPF_JSLT); 2438 + } 2439 + 2440 + static int bpf_fill_jmp32_jsle_imm(struct bpf_test *self) 2441 + { 2442 + return __bpf_fill_jmp32_imm(self, BPF_JSLE); 2443 + } 2444 + 2445 + /* JMP register tests */ 2446 + static int bpf_fill_jmp_jset_reg(struct bpf_test *self) 2447 + { 2448 + return __bpf_fill_jmp_reg(self, BPF_JSET); 2449 + } 2450 + 2451 + static int bpf_fill_jmp_jeq_reg(struct bpf_test *self) 2452 + { 2453 + return __bpf_fill_jmp_reg(self, BPF_JEQ); 2454 + } 2455 + 2456 + static int bpf_fill_jmp_jne_reg(struct bpf_test *self) 2457 + { 2458 + return __bpf_fill_jmp_reg(self, BPF_JNE); 2459 + } 2460 + 2461 + static int bpf_fill_jmp_jgt_reg(struct bpf_test *self) 2462 + { 2463 + return __bpf_fill_jmp_reg(self, BPF_JGT); 2464 + } 2465 + 2466 + static int bpf_fill_jmp_jge_reg(struct bpf_test *self) 2467 + { 2468 + return 
__bpf_fill_jmp_reg(self, BPF_JGE); 2469 + } 2470 + 2471 + static int bpf_fill_jmp_jlt_reg(struct bpf_test *self) 2472 + { 2473 + return __bpf_fill_jmp_reg(self, BPF_JLT); 2474 + } 2475 + 2476 + static int bpf_fill_jmp_jle_reg(struct bpf_test *self) 2477 + { 2478 + return __bpf_fill_jmp_reg(self, BPF_JLE); 2479 + } 2480 + 2481 + static int bpf_fill_jmp_jsgt_reg(struct bpf_test *self) 2482 + { 2483 + return __bpf_fill_jmp_reg(self, BPF_JSGT); 2484 + } 2485 + 2486 + static int bpf_fill_jmp_jsge_reg(struct bpf_test *self) 2487 + { 2488 + return __bpf_fill_jmp_reg(self, BPF_JSGE); 2489 + } 2490 + 2491 + static int bpf_fill_jmp_jslt_reg(struct bpf_test *self) 2492 + { 2493 + return __bpf_fill_jmp_reg(self, BPF_JSLT); 2494 + } 2495 + 2496 + static int bpf_fill_jmp_jsle_reg(struct bpf_test *self) 2497 + { 2498 + return __bpf_fill_jmp_reg(self, BPF_JSLE); 2499 + } 2500 + 2501 + /* JMP32 register tests */ 2502 + static int bpf_fill_jmp32_jset_reg(struct bpf_test *self) 2503 + { 2504 + return __bpf_fill_jmp32_reg(self, BPF_JSET); 2505 + } 2506 + 2507 + static int bpf_fill_jmp32_jeq_reg(struct bpf_test *self) 2508 + { 2509 + return __bpf_fill_jmp32_reg(self, BPF_JEQ); 2510 + } 2511 + 2512 + static int bpf_fill_jmp32_jne_reg(struct bpf_test *self) 2513 + { 2514 + return __bpf_fill_jmp32_reg(self, BPF_JNE); 2515 + } 2516 + 2517 + static int bpf_fill_jmp32_jgt_reg(struct bpf_test *self) 2518 + { 2519 + return __bpf_fill_jmp32_reg(self, BPF_JGT); 2520 + } 2521 + 2522 + static int bpf_fill_jmp32_jge_reg(struct bpf_test *self) 2523 + { 2524 + return __bpf_fill_jmp32_reg(self, BPF_JGE); 2525 + } 2526 + 2527 + static int bpf_fill_jmp32_jlt_reg(struct bpf_test *self) 2528 + { 2529 + return __bpf_fill_jmp32_reg(self, BPF_JLT); 2530 + } 2531 + 2532 + static int bpf_fill_jmp32_jle_reg(struct bpf_test *self) 2533 + { 2534 + return __bpf_fill_jmp32_reg(self, BPF_JLE); 2535 + } 2536 + 2537 + static int bpf_fill_jmp32_jsgt_reg(struct bpf_test *self) 2538 + { 2539 + return 
__bpf_fill_jmp32_reg(self, BPF_JSGT); 2540 + } 2541 + 2542 + static int bpf_fill_jmp32_jsge_reg(struct bpf_test *self) 2543 + { 2544 + return __bpf_fill_jmp32_reg(self, BPF_JSGE); 2545 + } 2546 + 2547 + static int bpf_fill_jmp32_jslt_reg(struct bpf_test *self) 2548 + { 2549 + return __bpf_fill_jmp32_reg(self, BPF_JSLT); 2550 + } 2551 + 2552 + static int bpf_fill_jmp32_jsle_reg(struct bpf_test *self) 2553 + { 2554 + return __bpf_fill_jmp32_reg(self, BPF_JSLE); 2555 + } 2556 + 2557 + /* 2558 + * Set up a sequence of staggered jumps, forwards and backwards with 2559 + * increasing offset. This tests the conversion of relative jumps to 2560 + * JITed native jumps. On some architectures, for example MIPS, a large 2561 + * PC-relative jump offset may overflow the immediate field of the native 2562 + * conditional branch instruction, triggering a conversion to use an 2563 + * absolute jump instead. Since this changes the jump offsets, another 2564 + * offset computation pass is necessary, and that may in turn trigger 2565 + * another branch conversion. This jump sequence is particularly nasty 2566 + * in that regard. 2567 + * 2568 + * The sequence generation is parameterized by size and jump type. 2569 + * The size must be even, and the expected result is always size + 1. 2570 + * Below is an example with size=8 and result=9. 2571 + * 2572 + * ________________________Start 2573 + * R0 = 0 2574 + * R1 = r1 2575 + * R2 = r2 2576 + * ,------- JMP +4 * 3______________Preamble: 4 insns 2577 + * ,----------|-ind 0- if R0 != 7 JMP 8 * 3 + 1 <--------------------. 2578 + * | | R0 = 8 | 2579 + * | | JMP +7 * 3 ------------------------. 2580 + * | ,--------|-----1- if R0 != 5 JMP 7 * 3 + 1 <--------------. | | 2581 + * | | | R0 = 6 | | | 2582 + * | | | JMP +5 * 3 ------------------. | | 2583 + * | | ,------|-----2- if R0 != 3 JMP 6 * 3 + 1 <--------. | | | | 2584 + * | | | | R0 = 4 | | | | | 2585 + * | | | | JMP +3 * 3 ------------. 
| | | | 2586 + * | | | ,----|-----3- if R0 != 1 JMP 5 * 3 + 1 <--. | | | | | | 2587 + * | | | | | R0 = 2 | | | | | | | 2588 + * | | | | | JMP +1 * 3 ------. | | | | | | 2589 + * | | | | ,--t=====4> if R0 != 0 JMP 4 * 3 + 1 1 2 3 4 5 6 7 8 loc 2590 + * | | | | | R0 = 1 -1 +2 -3 +4 -5 +6 -7 +8 off 2591 + * | | | | | JMP -2 * 3 ---' | | | | | | | 2592 + * | | | | | ,------5- if R0 != 2 JMP 3 * 3 + 1 <-----' | | | | | | 2593 + * | | | | | | R0 = 3 | | | | | | 2594 + * | | | | | | JMP -4 * 3 ---------' | | | | | 2595 + * | | | | | | ,----6- if R0 != 4 JMP 2 * 3 + 1 <-----------' | | | | 2596 + * | | | | | | | R0 = 5 | | | | 2597 + * | | | | | | | JMP -6 * 3 ---------------' | | | 2598 + * | | | | | | | ,--7- if R0 != 6 JMP 1 * 3 + 1 <-----------------' | | 2599 + * | | | | | | | | R0 = 7 | | 2600 + * | | Error | | | JMP -8 * 3 ---------------------' | 2601 + * | | paths | | | ,8- if R0 != 8 JMP 0 * 3 + 1 <-----------------------' 2602 + * | | | | | | | | | R0 = 9__________________Sequence: 3 * size - 1 insns 2603 + * `-+-+-+-+-+-+-+-+-> EXIT____________________Return: 1 insn 2604 + * 2605 + */ 2606 + 2607 + /* The maximum size parameter */ 2608 + #define MAX_STAGGERED_JMP_SIZE ((0x7fff / 3) & ~1) 2609 + 2610 + /* We use a reduced number of iterations to get a reasonable execution time */ 2611 + #define NR_STAGGERED_JMP_RUNS 10 2612 + 2613 + static int __bpf_fill_staggered_jumps(struct bpf_test *self, 2614 + const struct bpf_insn *jmp, 2615 + u64 r1, u64 r2) 2616 + { 2617 + int size = self->test[0].result - 1; 2618 + int len = 4 + 3 * (size + 1); 2619 + struct bpf_insn *insns; 2620 + int off, ind; 2621 + 2622 + insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL); 2623 + if (!insns) 2624 + return -ENOMEM; 2625 + 2626 + /* Preamble */ 2627 + insns[0] = BPF_ALU64_IMM(BPF_MOV, R0, 0); 2628 + insns[1] = BPF_ALU64_IMM(BPF_MOV, R1, r1); 2629 + insns[2] = BPF_ALU64_IMM(BPF_MOV, R2, r2); 2630 + insns[3] = BPF_JMP_IMM(BPF_JA, 0, 0, 3 * size / 2); 2631 + 2632 + /* Sequence */ 
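The diagram above can be cross-checked mechanically. This user-space sketch (not kernel code) walks the slots abstractly — each slot verifies R0, stores its location number, and jumps `off` slots — and must reach `size + 1` for any even `size`:

```c
#include <assert.h>
#include <stdlib.h>

/* Abstract walk of the staggered sequence: slot `ind` holds
 * "if (R0 != loc - 1) goto exit; R0 = loc; jump off slots". The final
 * slot's jump is overwritten by EXIT, so the walk ends at ind == size.
 * Returns R0 at exit; the expected value is size + 1 (size even). */
static int staggered_walk(int size)
{
	int ind = size / 2;	/* preamble: BPF_JMP_IMM(BPF_JA, 0, 0, 3 * size / 2) */
	int r0 = 0;

	for (;;) {
		int off = size - 2 * ind;
		int loc;

		if (off <= 0)
			off--;		/* skip offset 0; negatives become odd */
		loc = abs(off);
		if (r0 != loc - 1)
			break;		/* error path: jump straight to EXIT */
		r0 = loc;
		if (ind == size)
			break;		/* last slot falls through to EXIT */
		ind += off;		/* ins[2].off = 3 * (off - 1), in slot units */
	}
	return r0;
}
```

For size = 8 the slots are visited in location order 1..9, alternating backwards and forwards exactly as drawn.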
2633 + for (ind = 0, off = size; ind <= size; ind++, off -= 2) { 2634 + struct bpf_insn *ins = &insns[4 + 3 * ind]; 2635 + int loc; 2636 + 2637 + if (off == 0) 2638 + off--; 2639 + 2640 + loc = abs(off); 2641 + ins[0] = BPF_JMP_IMM(BPF_JNE, R0, loc - 1, 2642 + 3 * (size - ind) + 1); 2643 + ins[1] = BPF_ALU64_IMM(BPF_MOV, R0, loc); 2644 + ins[2] = *jmp; 2645 + ins[2].off = 3 * (off - 1); 2646 + } 2647 + 2648 + /* Return */ 2649 + insns[len - 1] = BPF_EXIT_INSN(); 2650 + 2651 + self->u.ptr.insns = insns; 496 2652 self->u.ptr.len = len; 497 2653 498 2654 return 0; 499 2655 } 2656 + 2657 + /* 64-bit unconditional jump */ 2658 + static int bpf_fill_staggered_ja(struct bpf_test *self) 2659 + { 2660 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JA, 0, 0, 0); 2661 + 2662 + return __bpf_fill_staggered_jumps(self, &jmp, 0, 0); 2663 + } 2664 + 2665 + /* 64-bit immediate jumps */ 2666 + static int bpf_fill_staggered_jeq_imm(struct bpf_test *self) 2667 + { 2668 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JEQ, R1, 1234, 0); 2669 + 2670 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2671 + } 2672 + 2673 + static int bpf_fill_staggered_jne_imm(struct bpf_test *self) 2674 + { 2675 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JNE, R1, 1234, 0); 2676 + 2677 + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 0); 2678 + } 2679 + 2680 + static int bpf_fill_staggered_jset_imm(struct bpf_test *self) 2681 + { 2682 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSET, R1, 0x82, 0); 2683 + 2684 + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0); 2685 + } 2686 + 2687 + static int bpf_fill_staggered_jgt_imm(struct bpf_test *self) 2688 + { 2689 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JGT, R1, 1234, 0); 2690 + 2691 + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 0); 2692 + } 2693 + 2694 + static int bpf_fill_staggered_jge_imm(struct bpf_test *self) 2695 + { 2696 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JGE, R1, 1234, 0); 2697 + 2698 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 
0); 2699 + } 2700 + 2701 + static int bpf_fill_staggered_jlt_imm(struct bpf_test *self) 2702 + { 2703 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JLT, R1, 0x80000000, 0); 2704 + 2705 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2706 + } 2707 + 2708 + static int bpf_fill_staggered_jle_imm(struct bpf_test *self) 2709 + { 2710 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JLE, R1, 1234, 0); 2711 + 2712 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2713 + } 2714 + 2715 + static int bpf_fill_staggered_jsgt_imm(struct bpf_test *self) 2716 + { 2717 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSGT, R1, -2, 0); 2718 + 2719 + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); 2720 + } 2721 + 2722 + static int bpf_fill_staggered_jsge_imm(struct bpf_test *self) 2723 + { 2724 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSGE, R1, -2, 0); 2725 + 2726 + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); 2727 + } 2728 + 2729 + static int bpf_fill_staggered_jslt_imm(struct bpf_test *self) 2730 + { 2731 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSLT, R1, -1, 0); 2732 + 2733 + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); 2734 + } 2735 + 2736 + static int bpf_fill_staggered_jsle_imm(struct bpf_test *self) 2737 + { 2738 + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSLE, R1, -1, 0); 2739 + 2740 + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); 2741 + } 2742 + 2743 + /* 64-bit register jumps */ 2744 + static int bpf_fill_staggered_jeq_reg(struct bpf_test *self) 2745 + { 2746 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JEQ, R1, R2, 0); 2747 + 2748 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); 2749 + } 2750 + 2751 + static int bpf_fill_staggered_jne_reg(struct bpf_test *self) 2752 + { 2753 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JNE, R1, R2, 0); 2754 + 2755 + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 1234); 2756 + } 2757 + 2758 + static int bpf_fill_staggered_jset_reg(struct bpf_test *self) 2759 + { 2760 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSET, 
R1, R2, 0); 2761 + 2762 + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0x82); 2763 + } 2764 + 2765 + static int bpf_fill_staggered_jgt_reg(struct bpf_test *self) 2766 + { 2767 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JGT, R1, R2, 0); 2768 + 2769 + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 1234); 2770 + } 2771 + 2772 + static int bpf_fill_staggered_jge_reg(struct bpf_test *self) 2773 + { 2774 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JGE, R1, R2, 0); 2775 + 2776 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); 2777 + } 2778 + 2779 + static int bpf_fill_staggered_jlt_reg(struct bpf_test *self) 2780 + { 2781 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JLT, R1, R2, 0); 2782 + 2783 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0x80000000); 2784 + } 2785 + 2786 + static int bpf_fill_staggered_jle_reg(struct bpf_test *self) 2787 + { 2788 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JLE, R1, R2, 0); 2789 + 2790 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); 2791 + } 2792 + 2793 + static int bpf_fill_staggered_jsgt_reg(struct bpf_test *self) 2794 + { 2795 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSGT, R1, R2, 0); 2796 + 2797 + return __bpf_fill_staggered_jumps(self, &jmp, -1, -2); 2798 + } 2799 + 2800 + static int bpf_fill_staggered_jsge_reg(struct bpf_test *self) 2801 + { 2802 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSGE, R1, R2, 0); 2803 + 2804 + return __bpf_fill_staggered_jumps(self, &jmp, -2, -2); 2805 + } 2806 + 2807 + static int bpf_fill_staggered_jslt_reg(struct bpf_test *self) 2808 + { 2809 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSLT, R1, R2, 0); 2810 + 2811 + return __bpf_fill_staggered_jumps(self, &jmp, -2, -1); 2812 + } 2813 + 2814 + static int bpf_fill_staggered_jsle_reg(struct bpf_test *self) 2815 + { 2816 + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSLE, R1, R2, 0); 2817 + 2818 + return __bpf_fill_staggered_jumps(self, &jmp, -1, -1); 2819 + } 2820 + 2821 + /* 32-bit immediate jumps */ 2822 + static int 
bpf_fill_staggered_jeq32_imm(struct bpf_test *self) 2823 + { 2824 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JEQ, R1, 1234, 0); 2825 + 2826 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2827 + } 2828 + 2829 + static int bpf_fill_staggered_jne32_imm(struct bpf_test *self) 2830 + { 2831 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JNE, R1, 1234, 0); 2832 + 2833 + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 0); 2834 + } 2835 + 2836 + static int bpf_fill_staggered_jset32_imm(struct bpf_test *self) 2837 + { 2838 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSET, R1, 0x82, 0); 2839 + 2840 + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0); 2841 + } 2842 + 2843 + static int bpf_fill_staggered_jgt32_imm(struct bpf_test *self) 2844 + { 2845 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JGT, R1, 1234, 0); 2846 + 2847 + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 0); 2848 + } 2849 + 2850 + static int bpf_fill_staggered_jge32_imm(struct bpf_test *self) 2851 + { 2852 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JGE, R1, 1234, 0); 2853 + 2854 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2855 + } 2856 + 2857 + static int bpf_fill_staggered_jlt32_imm(struct bpf_test *self) 2858 + { 2859 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JLT, R1, 0x80000000, 0); 2860 + 2861 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2862 + } 2863 + 2864 + static int bpf_fill_staggered_jle32_imm(struct bpf_test *self) 2865 + { 2866 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JLE, R1, 1234, 0); 2867 + 2868 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); 2869 + } 2870 + 2871 + static int bpf_fill_staggered_jsgt32_imm(struct bpf_test *self) 2872 + { 2873 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSGT, R1, -2, 0); 2874 + 2875 + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); 2876 + } 2877 + 2878 + static int bpf_fill_staggered_jsge32_imm(struct bpf_test *self) 2879 + { 2880 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSGE, R1, -2, 0); 2881 + 2882 
+ return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); 2883 + } 2884 + 2885 + static int bpf_fill_staggered_jslt32_imm(struct bpf_test *self) 2886 + { 2887 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSLT, R1, -1, 0); 2888 + 2889 + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); 2890 + } 2891 + 2892 + static int bpf_fill_staggered_jsle32_imm(struct bpf_test *self) 2893 + { 2894 + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSLE, R1, -1, 0); 2895 + 2896 + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); 2897 + } 2898 + 2899 + /* 32-bit register jumps */ 2900 + static int bpf_fill_staggered_jeq32_reg(struct bpf_test *self) 2901 + { 2902 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JEQ, R1, R2, 0); 2903 + 2904 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); 2905 + } 2906 + 2907 + static int bpf_fill_staggered_jne32_reg(struct bpf_test *self) 2908 + { 2909 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JNE, R1, R2, 0); 2910 + 2911 + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 1234); 2912 + } 2913 + 2914 + static int bpf_fill_staggered_jset32_reg(struct bpf_test *self) 2915 + { 2916 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSET, R1, R2, 0); 2917 + 2918 + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0x82); 2919 + } 2920 + 2921 + static int bpf_fill_staggered_jgt32_reg(struct bpf_test *self) 2922 + { 2923 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JGT, R1, R2, 0); 2924 + 2925 + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 1234); 2926 + } 2927 + 2928 + static int bpf_fill_staggered_jge32_reg(struct bpf_test *self) 2929 + { 2930 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JGE, R1, R2, 0); 2931 + 2932 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); 2933 + } 2934 + 2935 + static int bpf_fill_staggered_jlt32_reg(struct bpf_test *self) 2936 + { 2937 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JLT, R1, R2, 0); 2938 + 2939 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0x80000000); 2940 + } 2941 + 2942 + static int 
bpf_fill_staggered_jle32_reg(struct bpf_test *self) 2943 + { 2944 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JLE, R1, R2, 0); 2945 + 2946 + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); 2947 + } 2948 + 2949 + static int bpf_fill_staggered_jsgt32_reg(struct bpf_test *self) 2950 + { 2951 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSGT, R1, R2, 0); 2952 + 2953 + return __bpf_fill_staggered_jumps(self, &jmp, -1, -2); 2954 + } 2955 + 2956 + static int bpf_fill_staggered_jsge32_reg(struct bpf_test *self) 2957 + { 2958 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSGE, R1, R2, 0); 2959 + 2960 + return __bpf_fill_staggered_jumps(self, &jmp, -2, -2); 2961 + } 2962 + 2963 + static int bpf_fill_staggered_jslt32_reg(struct bpf_test *self) 2964 + { 2965 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSLT, R1, R2, 0); 2966 + 2967 + return __bpf_fill_staggered_jumps(self, &jmp, -2, -1); 2968 + } 2969 + 2970 + static int bpf_fill_staggered_jsle32_reg(struct bpf_test *self) 2971 + { 2972 + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSLE, R1, R2, 0); 2973 + 2974 + return __bpf_fill_staggered_jumps(self, &jmp, -1, -1); 2975 + } 2976 + 500 2977 501 2978 static struct bpf_test tests[] = { 502 2979 { ··· 4431 1950 INTERNAL, 4432 1951 { }, 4433 1952 { { 0, -1 } } 4434 - }, 4435 - { 4436 - /* 4437 - * Register (non-)clobbering test, in the case where a 32-bit 4438 - * JIT implements complex ALU64 operations via function calls. 4439 - * If so, the function call must be invisible in the eBPF 4440 - * registers. The JIT must then save and restore relevant 4441 - * registers during the call. The following tests check that 4442 - * the eBPF registers retain their values after such a call. 
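The comment above states the contract these (here removed, relocated in this series) tests enforce: when a 32-bit JIT lowers an ALU64 operation to a helper function call, the call must be invisible to the other eBPF registers. A user-space caricature of the save/restore discipline (hypothetical names, not a real JIT):

```c
#include <assert.h>
#include <stdint.h>

#define NR_REGS 10

/* The helper a 32-bit JIT might call for 64-bit division; eBPF defines
 * division by zero to yield 0 rather than trap. */
static uint64_t div64_helper(uint64_t n, uint64_t d)
{
	return d ? n / d : 0;
}

/* Emulated ALU64 DIV via a function call: spill the register file before
 * the call, reload it after, then commit only the destination register. */
static void emit_div64_imm(uint64_t regs[NR_REGS], int dst, uint64_t imm)
{
	uint64_t saved[NR_REGS];
	uint64_t res;
	int i;

	for (i = 0; i < NR_REGS; i++)
		saved[i] = regs[i];		/* save around the call */
	res = div64_helper(regs[dst], imm);	/* may clobber native regs */
	for (i = 0; i < NR_REGS; i++)
		regs[i] = saved[i];		/* restore everything */
	regs[dst] = res;			/* then commit the result */
}
```

The tests below load distinct constants into R0..R9, perform one such operation, and then check every register — which fails if the JIT forgot any save or restore.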
4443 - */ 4444 - "INT: Register clobbering, R1 updated", 4445 - .u.insns_int = { 4446 - BPF_ALU32_IMM(BPF_MOV, R0, 0), 4447 - BPF_ALU32_IMM(BPF_MOV, R1, 123456789), 4448 - BPF_ALU32_IMM(BPF_MOV, R2, 2), 4449 - BPF_ALU32_IMM(BPF_MOV, R3, 3), 4450 - BPF_ALU32_IMM(BPF_MOV, R4, 4), 4451 - BPF_ALU32_IMM(BPF_MOV, R5, 5), 4452 - BPF_ALU32_IMM(BPF_MOV, R6, 6), 4453 - BPF_ALU32_IMM(BPF_MOV, R7, 7), 4454 - BPF_ALU32_IMM(BPF_MOV, R8, 8), 4455 - BPF_ALU32_IMM(BPF_MOV, R9, 9), 4456 - BPF_ALU64_IMM(BPF_DIV, R1, 123456789), 4457 - BPF_JMP_IMM(BPF_JNE, R0, 0, 10), 4458 - BPF_JMP_IMM(BPF_JNE, R1, 1, 9), 4459 - BPF_JMP_IMM(BPF_JNE, R2, 2, 8), 4460 - BPF_JMP_IMM(BPF_JNE, R3, 3, 7), 4461 - BPF_JMP_IMM(BPF_JNE, R4, 4, 6), 4462 - BPF_JMP_IMM(BPF_JNE, R5, 5, 5), 4463 - BPF_JMP_IMM(BPF_JNE, R6, 6, 4), 4464 - BPF_JMP_IMM(BPF_JNE, R7, 7, 3), 4465 - BPF_JMP_IMM(BPF_JNE, R8, 8, 2), 4466 - BPF_JMP_IMM(BPF_JNE, R9, 9, 1), 4467 - BPF_ALU32_IMM(BPF_MOV, R0, 1), 4468 - BPF_EXIT_INSN(), 4469 - }, 4470 - INTERNAL, 4471 - { }, 4472 - { { 0, 1 } } 4473 - }, 4474 - { 4475 - "INT: Register clobbering, R2 updated", 4476 - .u.insns_int = { 4477 - BPF_ALU32_IMM(BPF_MOV, R0, 0), 4478 - BPF_ALU32_IMM(BPF_MOV, R1, 1), 4479 - BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789), 4480 - BPF_ALU32_IMM(BPF_MOV, R3, 3), 4481 - BPF_ALU32_IMM(BPF_MOV, R4, 4), 4482 - BPF_ALU32_IMM(BPF_MOV, R5, 5), 4483 - BPF_ALU32_IMM(BPF_MOV, R6, 6), 4484 - BPF_ALU32_IMM(BPF_MOV, R7, 7), 4485 - BPF_ALU32_IMM(BPF_MOV, R8, 8), 4486 - BPF_ALU32_IMM(BPF_MOV, R9, 9), 4487 - BPF_ALU64_IMM(BPF_DIV, R2, 123456789), 4488 - BPF_JMP_IMM(BPF_JNE, R0, 0, 10), 4489 - BPF_JMP_IMM(BPF_JNE, R1, 1, 9), 4490 - BPF_JMP_IMM(BPF_JNE, R2, 2, 8), 4491 - BPF_JMP_IMM(BPF_JNE, R3, 3, 7), 4492 - BPF_JMP_IMM(BPF_JNE, R4, 4, 6), 4493 - BPF_JMP_IMM(BPF_JNE, R5, 5, 5), 4494 - BPF_JMP_IMM(BPF_JNE, R6, 6, 4), 4495 - BPF_JMP_IMM(BPF_JNE, R7, 7, 3), 4496 - BPF_JMP_IMM(BPF_JNE, R8, 8, 2), 4497 - BPF_JMP_IMM(BPF_JNE, R9, 9, 1), 4498 - BPF_ALU32_IMM(BPF_MOV, R0, 1), 4499 - 
BPF_EXIT_INSN(), 4500 - }, 4501 - INTERNAL, 4502 - { }, 4503 - { { 0, 1 } } 4504 - }, 4505 - { 4506 - /* 4507 - * Test 32-bit JITs that implement complex ALU64 operations as 4508 - * function calls R0 = f(R1, R2), and must re-arrange operands. 4509 - */ 4510 - #define NUMER 0xfedcba9876543210ULL 4511 - #define DENOM 0x0123456789abcdefULL 4512 - "ALU64_DIV X: Operand register permutations", 4513 - .u.insns_int = { 4514 - /* R0 / R2 */ 4515 - BPF_LD_IMM64(R0, NUMER), 4516 - BPF_LD_IMM64(R2, DENOM), 4517 - BPF_ALU64_REG(BPF_DIV, R0, R2), 4518 - BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1), 4519 - BPF_EXIT_INSN(), 4520 - /* R1 / R0 */ 4521 - BPF_LD_IMM64(R1, NUMER), 4522 - BPF_LD_IMM64(R0, DENOM), 4523 - BPF_ALU64_REG(BPF_DIV, R1, R0), 4524 - BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1), 4525 - BPF_EXIT_INSN(), 4526 - /* R0 / R1 */ 4527 - BPF_LD_IMM64(R0, NUMER), 4528 - BPF_LD_IMM64(R1, DENOM), 4529 - BPF_ALU64_REG(BPF_DIV, R0, R1), 4530 - BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1), 4531 - BPF_EXIT_INSN(), 4532 - /* R2 / R0 */ 4533 - BPF_LD_IMM64(R2, NUMER), 4534 - BPF_LD_IMM64(R0, DENOM), 4535 - BPF_ALU64_REG(BPF_DIV, R2, R0), 4536 - BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1), 4537 - BPF_EXIT_INSN(), 4538 - /* R2 / R1 */ 4539 - BPF_LD_IMM64(R2, NUMER), 4540 - BPF_LD_IMM64(R1, DENOM), 4541 - BPF_ALU64_REG(BPF_DIV, R2, R1), 4542 - BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1), 4543 - BPF_EXIT_INSN(), 4544 - /* R1 / R2 */ 4545 - BPF_LD_IMM64(R1, NUMER), 4546 - BPF_LD_IMM64(R2, DENOM), 4547 - BPF_ALU64_REG(BPF_DIV, R1, R2), 4548 - BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1), 4549 - BPF_EXIT_INSN(), 4550 - /* R1 / R1 */ 4551 - BPF_LD_IMM64(R1, NUMER), 4552 - BPF_ALU64_REG(BPF_DIV, R1, R1), 4553 - BPF_JMP_IMM(BPF_JEQ, R1, 1, 1), 4554 - BPF_EXIT_INSN(), 4555 - /* R2 / R2 */ 4556 - BPF_LD_IMM64(R2, DENOM), 4557 - BPF_ALU64_REG(BPF_DIV, R2, R2), 4558 - BPF_JMP_IMM(BPF_JEQ, R2, 1, 1), 4559 - BPF_EXIT_INSN(), 4560 - /* R3 / R4 */ 4561 - BPF_LD_IMM64(R3, NUMER), 4562 - BPF_LD_IMM64(R4, 
DENOM), 4563 - BPF_ALU64_REG(BPF_DIV, R3, R4), 4564 - BPF_JMP_IMM(BPF_JEQ, R3, NUMER / DENOM, 1), 4565 - BPF_EXIT_INSN(), 4566 - /* Successful return */ 4567 - BPF_LD_IMM64(R0, 1), 4568 - BPF_EXIT_INSN(), 4569 - }, 4570 - INTERNAL, 4571 - { }, 4572 - { { 0, 1 } }, 4573 - #undef NUMER 4574 - #undef DENOM 4575 1953 }, 4576 1954 #ifdef CONFIG_32BIT 4577 1955 { ··· 7595 5255 { }, 7596 5256 { { 0, (u32) cpu_to_be64(0x0123456789abcdefLL) } }, 7597 5257 }, 5258 + { 5259 + "ALU_END_FROM_BE 64: 0x0123456789abcdef >> 32 -> 0x01234567", 5260 + .u.insns_int = { 5261 + BPF_LD_IMM64(R0, 0x0123456789abcdefLL), 5262 + BPF_ENDIAN(BPF_FROM_BE, R0, 64), 5263 + BPF_ALU64_IMM(BPF_RSH, R0, 32), 5264 + BPF_EXIT_INSN(), 5265 + }, 5266 + INTERNAL, 5267 + { }, 5268 + { { 0, (u32) (cpu_to_be64(0x0123456789abcdefLL) >> 32) } }, 5269 + }, 5270 + /* BPF_ALU | BPF_END | BPF_FROM_BE, reversed */ 5271 + { 5272 + "ALU_END_FROM_BE 16: 0xfedcba9876543210 -> 0x3210", 5273 + .u.insns_int = { 5274 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5275 + BPF_ENDIAN(BPF_FROM_BE, R0, 16), 5276 + BPF_EXIT_INSN(), 5277 + }, 5278 + INTERNAL, 5279 + { }, 5280 + { { 0, cpu_to_be16(0x3210) } }, 5281 + }, 5282 + { 5283 + "ALU_END_FROM_BE 32: 0xfedcba9876543210 -> 0x76543210", 5284 + .u.insns_int = { 5285 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5286 + BPF_ENDIAN(BPF_FROM_BE, R0, 32), 5287 + BPF_ALU64_REG(BPF_MOV, R1, R0), 5288 + BPF_ALU64_IMM(BPF_RSH, R1, 32), 5289 + BPF_ALU32_REG(BPF_ADD, R0, R1), /* R1 = 0 */ 5290 + BPF_EXIT_INSN(), 5291 + }, 5292 + INTERNAL, 5293 + { }, 5294 + { { 0, cpu_to_be32(0x76543210) } }, 5295 + }, 5296 + { 5297 + "ALU_END_FROM_BE 64: 0xfedcba9876543210 -> 0x76543210", 5298 + .u.insns_int = { 5299 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5300 + BPF_ENDIAN(BPF_FROM_BE, R0, 64), 5301 + BPF_EXIT_INSN(), 5302 + }, 5303 + INTERNAL, 5304 + { }, 5305 + { { 0, (u32) cpu_to_be64(0xfedcba9876543210ULL) } }, 5306 + }, 5307 + { 5308 + "ALU_END_FROM_BE 64: 0xfedcba9876543210 >> 32 -> 0xfedcba98", 5309 
+ .u.insns_int = { 5310 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5311 + BPF_ENDIAN(BPF_FROM_BE, R0, 64), 5312 + BPF_ALU64_IMM(BPF_RSH, R0, 32), 5313 + BPF_EXIT_INSN(), 5314 + }, 5315 + INTERNAL, 5316 + { }, 5317 + { { 0, (u32) (cpu_to_be64(0xfedcba9876543210ULL) >> 32) } }, 5318 + }, 7598 5319 /* BPF_ALU | BPF_END | BPF_FROM_LE */ 7599 5320 { 7600 5321 "ALU_END_FROM_LE 16: 0x0123456789abcdef -> 0xefcd", ··· 7692 5291 INTERNAL, 7693 5292 { }, 7694 5293 { { 0, (u32) cpu_to_le64(0x0123456789abcdefLL) } }, 5294 + }, 5295 + { 5296 + "ALU_END_FROM_LE 64: 0x0123456789abcdef >> 32 -> 0xefcdab89", 5297 + .u.insns_int = { 5298 + BPF_LD_IMM64(R0, 0x0123456789abcdefLL), 5299 + BPF_ENDIAN(BPF_FROM_LE, R0, 64), 5300 + BPF_ALU64_IMM(BPF_RSH, R0, 32), 5301 + BPF_EXIT_INSN(), 5302 + }, 5303 + INTERNAL, 5304 + { }, 5305 + { { 0, (u32) (cpu_to_le64(0x0123456789abcdefLL) >> 32) } }, 5306 + }, 5307 + /* BPF_ALU | BPF_END | BPF_FROM_LE, reversed */ 5308 + { 5309 + "ALU_END_FROM_LE 16: 0xfedcba9876543210 -> 0x1032", 5310 + .u.insns_int = { 5311 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5312 + BPF_ENDIAN(BPF_FROM_LE, R0, 16), 5313 + BPF_EXIT_INSN(), 5314 + }, 5315 + INTERNAL, 5316 + { }, 5317 + { { 0, cpu_to_le16(0x3210) } }, 5318 + }, 5319 + { 5320 + "ALU_END_FROM_LE 32: 0xfedcba9876543210 -> 0x10325476", 5321 + .u.insns_int = { 5322 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5323 + BPF_ENDIAN(BPF_FROM_LE, R0, 32), 5324 + BPF_ALU64_REG(BPF_MOV, R1, R0), 5325 + BPF_ALU64_IMM(BPF_RSH, R1, 32), 5326 + BPF_ALU32_REG(BPF_ADD, R0, R1), /* R1 = 0 */ 5327 + BPF_EXIT_INSN(), 5328 + }, 5329 + INTERNAL, 5330 + { }, 5331 + { { 0, cpu_to_le32(0x76543210) } }, 5332 + }, 5333 + { 5334 + "ALU_END_FROM_LE 64: 0xfedcba9876543210 -> 0x10325476", 5335 + .u.insns_int = { 5336 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5337 + BPF_ENDIAN(BPF_FROM_LE, R0, 64), 5338 + BPF_EXIT_INSN(), 5339 + }, 5340 + INTERNAL, 5341 + { }, 5342 + { { 0, (u32) cpu_to_le64(0xfedcba9876543210ULL) } }, 5343 + }, 5344 + { 5345 + 
"ALU_END_FROM_LE 64: 0xfedcba9876543210 >> 32 -> 0x98badcfe", 5346 + .u.insns_int = { 5347 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 5348 + BPF_ENDIAN(BPF_FROM_LE, R0, 64), 5349 + BPF_ALU64_IMM(BPF_RSH, R0, 32), 5350 + BPF_EXIT_INSN(), 5351 + }, 5352 + INTERNAL, 5353 + { }, 5354 + { { 0, (u32) (cpu_to_le64(0xfedcba9876543210ULL) >> 32) } }, 5355 + }, 5356 + /* BPF_LDX_MEM B/H/W/DW */ 5357 + { 5358 + "BPF_LDX_MEM | BPF_B", 5359 + .u.insns_int = { 5360 + BPF_LD_IMM64(R1, 0x0102030405060708ULL), 5361 + BPF_LD_IMM64(R2, 0x0000000000000008ULL), 5362 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5363 + #ifdef __BIG_ENDIAN 5364 + BPF_LDX_MEM(BPF_B, R0, R10, -1), 5365 + #else 5366 + BPF_LDX_MEM(BPF_B, R0, R10, -8), 5367 + #endif 5368 + BPF_JMP_REG(BPF_JNE, R0, R2, 1), 5369 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5370 + BPF_EXIT_INSN(), 5371 + }, 5372 + INTERNAL, 5373 + { }, 5374 + { { 0, 0 } }, 5375 + .stack_depth = 8, 5376 + }, 5377 + { 5378 + "BPF_LDX_MEM | BPF_B, MSB set", 5379 + .u.insns_int = { 5380 + BPF_LD_IMM64(R1, 0x8182838485868788ULL), 5381 + BPF_LD_IMM64(R2, 0x0000000000000088ULL), 5382 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5383 + #ifdef __BIG_ENDIAN 5384 + BPF_LDX_MEM(BPF_B, R0, R10, -1), 5385 + #else 5386 + BPF_LDX_MEM(BPF_B, R0, R10, -8), 5387 + #endif 5388 + BPF_JMP_REG(BPF_JNE, R0, R2, 1), 5389 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5390 + BPF_EXIT_INSN(), 5391 + }, 5392 + INTERNAL, 5393 + { }, 5394 + { { 0, 0 } }, 5395 + .stack_depth = 8, 5396 + }, 5397 + { 5398 + "BPF_LDX_MEM | BPF_H", 5399 + .u.insns_int = { 5400 + BPF_LD_IMM64(R1, 0x0102030405060708ULL), 5401 + BPF_LD_IMM64(R2, 0x0000000000000708ULL), 5402 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5403 + #ifdef __BIG_ENDIAN 5404 + BPF_LDX_MEM(BPF_H, R0, R10, -2), 5405 + #else 5406 + BPF_LDX_MEM(BPF_H, R0, R10, -8), 5407 + #endif 5408 + BPF_JMP_REG(BPF_JNE, R0, R2, 1), 5409 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5410 + BPF_EXIT_INSN(), 5411 + }, 5412 + INTERNAL, 5413 + { }, 5414 + { { 0, 0 } }, 5415 + .stack_depth = 8, 5416 + }, 5417 + { 
5418 + "BPF_LDX_MEM | BPF_H, MSB set", 5419 + .u.insns_int = { 5420 + BPF_LD_IMM64(R1, 0x8182838485868788ULL), 5421 + BPF_LD_IMM64(R2, 0x0000000000008788ULL), 5422 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5423 + #ifdef __BIG_ENDIAN 5424 + BPF_LDX_MEM(BPF_H, R0, R10, -2), 5425 + #else 5426 + BPF_LDX_MEM(BPF_H, R0, R10, -8), 5427 + #endif 5428 + BPF_JMP_REG(BPF_JNE, R0, R2, 1), 5429 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5430 + BPF_EXIT_INSN(), 5431 + }, 5432 + INTERNAL, 5433 + { }, 5434 + { { 0, 0 } }, 5435 + .stack_depth = 8, 5436 + }, 5437 + { 5438 + "BPF_LDX_MEM | BPF_W", 5439 + .u.insns_int = { 5440 + BPF_LD_IMM64(R1, 0x0102030405060708ULL), 5441 + BPF_LD_IMM64(R2, 0x0000000005060708ULL), 5442 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5443 + #ifdef __BIG_ENDIAN 5444 + BPF_LDX_MEM(BPF_W, R0, R10, -4), 5445 + #else 5446 + BPF_LDX_MEM(BPF_W, R0, R10, -8), 5447 + #endif 5448 + BPF_JMP_REG(BPF_JNE, R0, R2, 1), 5449 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5450 + BPF_EXIT_INSN(), 5451 + }, 5452 + INTERNAL, 5453 + { }, 5454 + { { 0, 0 } }, 5455 + .stack_depth = 8, 5456 + }, 5457 + { 5458 + "BPF_LDX_MEM | BPF_W, MSB set", 5459 + .u.insns_int = { 5460 + BPF_LD_IMM64(R1, 0x8182838485868788ULL), 5461 + BPF_LD_IMM64(R2, 0x0000000085868788ULL), 5462 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5463 + #ifdef __BIG_ENDIAN 5464 + BPF_LDX_MEM(BPF_W, R0, R10, -4), 5465 + #else 5466 + BPF_LDX_MEM(BPF_W, R0, R10, -8), 5467 + #endif 5468 + BPF_JMP_REG(BPF_JNE, R0, R2, 1), 5469 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5470 + BPF_EXIT_INSN(), 5471 + }, 5472 + INTERNAL, 5473 + { }, 5474 + { { 0, 0 } }, 5475 + .stack_depth = 8, 5476 + }, 5477 + /* BPF_STX_MEM B/H/W/DW */ 5478 + { 5479 + "BPF_STX_MEM | BPF_B", 5480 + .u.insns_int = { 5481 + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), 5482 + BPF_LD_IMM64(R2, 0x0102030405060708ULL), 5483 + BPF_LD_IMM64(R3, 0x8090a0b0c0d0e008ULL), 5484 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5485 + #ifdef __BIG_ENDIAN 5486 + BPF_STX_MEM(BPF_B, R10, R2, -1), 5487 + #else 5488 + BPF_STX_MEM(BPF_B, R10, R2, 
-8), 5489 + #endif 5490 + BPF_LDX_MEM(BPF_DW, R0, R10, -8), 5491 + BPF_JMP_REG(BPF_JNE, R0, R3, 1), 5492 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5493 + BPF_EXIT_INSN(), 5494 + }, 5495 + INTERNAL, 5496 + { }, 5497 + { { 0, 0 } }, 5498 + .stack_depth = 8, 5499 + }, 5500 + { 5501 + "BPF_STX_MEM | BPF_B, MSB set", 5502 + .u.insns_int = { 5503 + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), 5504 + BPF_LD_IMM64(R2, 0x8182838485868788ULL), 5505 + BPF_LD_IMM64(R3, 0x8090a0b0c0d0e088ULL), 5506 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5507 + #ifdef __BIG_ENDIAN 5508 + BPF_STX_MEM(BPF_B, R10, R2, -1), 5509 + #else 5510 + BPF_STX_MEM(BPF_B, R10, R2, -8), 5511 + #endif 5512 + BPF_LDX_MEM(BPF_DW, R0, R10, -8), 5513 + BPF_JMP_REG(BPF_JNE, R0, R3, 1), 5514 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5515 + BPF_EXIT_INSN(), 5516 + }, 5517 + INTERNAL, 5518 + { }, 5519 + { { 0, 0 } }, 5520 + .stack_depth = 8, 5521 + }, 5522 + { 5523 + "BPF_STX_MEM | BPF_H", 5524 + .u.insns_int = { 5525 + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), 5526 + BPF_LD_IMM64(R2, 0x0102030405060708ULL), 5527 + BPF_LD_IMM64(R3, 0x8090a0b0c0d00708ULL), 5528 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5529 + #ifdef __BIG_ENDIAN 5530 + BPF_STX_MEM(BPF_H, R10, R2, -2), 5531 + #else 5532 + BPF_STX_MEM(BPF_H, R10, R2, -8), 5533 + #endif 5534 + BPF_LDX_MEM(BPF_DW, R0, R10, -8), 5535 + BPF_JMP_REG(BPF_JNE, R0, R3, 1), 5536 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5537 + BPF_EXIT_INSN(), 5538 + }, 5539 + INTERNAL, 5540 + { }, 5541 + { { 0, 0 } }, 5542 + .stack_depth = 8, 5543 + }, 5544 + { 5545 + "BPF_STX_MEM | BPF_H, MSB set", 5546 + .u.insns_int = { 5547 + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), 5548 + BPF_LD_IMM64(R2, 0x8182838485868788ULL), 5549 + BPF_LD_IMM64(R3, 0x8090a0b0c0d08788ULL), 5550 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5551 + #ifdef __BIG_ENDIAN 5552 + BPF_STX_MEM(BPF_H, R10, R2, -2), 5553 + #else 5554 + BPF_STX_MEM(BPF_H, R10, R2, -8), 5555 + #endif 5556 + BPF_LDX_MEM(BPF_DW, R0, R10, -8), 5557 + BPF_JMP_REG(BPF_JNE, R0, R3, 1), 5558 + 
BPF_ALU64_IMM(BPF_MOV, R0, 0), 5559 + BPF_EXIT_INSN(), 5560 + }, 5561 + INTERNAL, 5562 + { }, 5563 + { { 0, 0 } }, 5564 + .stack_depth = 8, 5565 + }, 5566 + { 5567 + "BPF_STX_MEM | BPF_W", 5568 + .u.insns_int = { 5569 + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), 5570 + BPF_LD_IMM64(R2, 0x0102030405060708ULL), 5571 + BPF_LD_IMM64(R3, 0x8090a0b005060708ULL), 5572 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5573 + #ifdef __BIG_ENDIAN 5574 + BPF_STX_MEM(BPF_W, R10, R2, -4), 5575 + #else 5576 + BPF_STX_MEM(BPF_W, R10, R2, -8), 5577 + #endif 5578 + BPF_LDX_MEM(BPF_DW, R0, R10, -8), 5579 + BPF_JMP_REG(BPF_JNE, R0, R3, 1), 5580 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5581 + BPF_EXIT_INSN(), 5582 + }, 5583 + INTERNAL, 5584 + { }, 5585 + { { 0, 0 } }, 5586 + .stack_depth = 8, 5587 + }, 5588 + { 5589 + "BPF_STX_MEM | BPF_W, MSB set", 5590 + .u.insns_int = { 5591 + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), 5592 + BPF_LD_IMM64(R2, 0x8182838485868788ULL), 5593 + BPF_LD_IMM64(R3, 0x8090a0b085868788ULL), 5594 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 5595 + #ifdef __BIG_ENDIAN 5596 + BPF_STX_MEM(BPF_W, R10, R2, -4), 5597 + #else 5598 + BPF_STX_MEM(BPF_W, R10, R2, -8), 5599 + #endif 5600 + BPF_LDX_MEM(BPF_DW, R0, R10, -8), 5601 + BPF_JMP_REG(BPF_JNE, R0, R3, 1), 5602 + BPF_ALU64_IMM(BPF_MOV, R0, 0), 5603 + BPF_EXIT_INSN(), 5604 + }, 5605 + INTERNAL, 5606 + { }, 5607 + { { 0, 0 } }, 5608 + .stack_depth = 8, 7695 5609 }, 7696 5610 /* BPF_ST(X) | BPF_MEM | BPF_B/H/W/DW */ 7697 5611 { ··· 8245 5529 * Individual tests are expanded from template macros for all 8246 5530 * combinations of ALU operation, word size and fetching. 8247 5531 */ 5532 + #define BPF_ATOMIC_POISON(width) ((width) == BPF_W ? 
(0xbaadf00dULL << 32) : 0) 5533 + 8248 5534 #define BPF_ATOMIC_OP_TEST1(width, op, logic, old, update, result) \ 8249 5535 { \ 8250 5536 "BPF_ATOMIC | " #width ", " #op ": Test: " \ 8251 5537 #old " " #logic " " #update " = " #result, \ 8252 5538 .u.insns_int = { \ 8253 - BPF_ALU32_IMM(BPF_MOV, R5, update), \ 5539 + BPF_LD_IMM64(R5, (update) | BPF_ATOMIC_POISON(width)), \ 8254 5540 BPF_ST_MEM(width, R10, -40, old), \ 8255 5541 BPF_ATOMIC_OP(width, op, R10, R5, -40), \ 8256 5542 BPF_LDX_MEM(width, R0, R10, -40), \ 5543 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 5544 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 5545 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8257 5546 BPF_EXIT_INSN(), \ 8258 5547 }, \ 8259 5548 INTERNAL, \ ··· 8272 5551 #old " " #logic " " #update " = " #result, \ 8273 5552 .u.insns_int = { \ 8274 5553 BPF_ALU64_REG(BPF_MOV, R1, R10), \ 8275 - BPF_ALU32_IMM(BPF_MOV, R0, update), \ 5554 + BPF_LD_IMM64(R0, (update) | BPF_ATOMIC_POISON(width)), \ 8276 5555 BPF_ST_MEM(BPF_W, R10, -40, old), \ 8277 5556 BPF_ATOMIC_OP(width, op, R10, R0, -40), \ 8278 5557 BPF_ALU64_REG(BPF_MOV, R0, R10), \ 8279 5558 BPF_ALU64_REG(BPF_SUB, R0, R1), \ 5559 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 5560 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 5561 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8280 5562 BPF_EXIT_INSN(), \ 8281 5563 }, \ 8282 5564 INTERNAL, \ ··· 8293 5569 #old " " #logic " " #update " = " #result, \ 8294 5570 .u.insns_int = { \ 8295 5571 BPF_ALU64_REG(BPF_MOV, R0, R10), \ 8296 - BPF_ALU32_IMM(BPF_MOV, R1, update), \ 5572 + BPF_LD_IMM64(R1, (update) | BPF_ATOMIC_POISON(width)), \ 8297 5573 BPF_ST_MEM(width, R10, -40, old), \ 8298 5574 BPF_ATOMIC_OP(width, op, R10, R1, -40), \ 8299 5575 BPF_ALU64_REG(BPF_SUB, R0, R10), \ 5576 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 5577 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 5578 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8300 5579 BPF_EXIT_INSN(), \ 8301 5580 }, \ 8302 5581 INTERNAL, \ ··· 8312 5585 "BPF_ATOMIC | " #width ", " #op ": Test fetch: " \ 8313 5586 #old " " #logic " " #update 
" = " #result, \ 8314 5587 .u.insns_int = { \ 8315 - BPF_ALU32_IMM(BPF_MOV, R3, update), \ 5588 + BPF_LD_IMM64(R3, (update) | BPF_ATOMIC_POISON(width)), \ 8316 5589 BPF_ST_MEM(width, R10, -40, old), \ 8317 5590 BPF_ATOMIC_OP(width, op, R10, R3, -40), \ 8318 - BPF_ALU64_REG(BPF_MOV, R0, R3), \ 5591 + BPF_ALU32_REG(BPF_MOV, R0, R3), \ 8319 5592 BPF_EXIT_INSN(), \ 8320 5593 }, \ 8321 5594 INTERNAL, \ ··· 8413 5686 BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab), 8414 5687 BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab), 8415 5688 BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab), 5689 + #undef BPF_ATOMIC_POISON 8416 5690 #undef BPF_ATOMIC_OP_TEST1 8417 5691 #undef BPF_ATOMIC_OP_TEST2 8418 5692 #undef BPF_ATOMIC_OP_TEST3 ··· 8498 5770 "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful return", 8499 5771 .u.insns_int = { 8500 5772 BPF_LD_IMM64(R1, 0x0123456789abcdefULL), 8501 - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), 5773 + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), 8502 5774 BPF_ALU64_REG(BPF_MOV, R0, R1), 8503 5775 BPF_STX_MEM(BPF_DW, R10, R1, -40), 8504 5776 BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40), ··· 8515 5787 "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful store", 8516 5788 .u.insns_int = { 8517 5789 BPF_LD_IMM64(R1, 0x0123456789abcdefULL), 8518 - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), 5790 + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), 8519 5791 BPF_ALU64_REG(BPF_MOV, R0, R1), 8520 5792 BPF_STX_MEM(BPF_DW, R10, R0, -40), 8521 5793 BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40), ··· 8533 5805 "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure return", 8534 5806 .u.insns_int = { 8535 5807 BPF_LD_IMM64(R1, 0x0123456789abcdefULL), 8536 - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), 5808 + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), 8537 5809 BPF_ALU64_REG(BPF_MOV, R0, R1), 8538 5810 BPF_ALU64_IMM(BPF_ADD, R0, 1), 8539 5811 BPF_STX_MEM(BPF_DW, R10, R1, -40), ··· 8551 5823 "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure store", 
8552 5824 .u.insns_int = { 8553 5825 BPF_LD_IMM64(R1, 0x0123456789abcdefULL), 8554 - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), 5826 + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), 8555 5827 BPF_ALU64_REG(BPF_MOV, R0, R1), 8556 5828 BPF_ALU64_IMM(BPF_ADD, R0, 1), 8557 5829 BPF_STX_MEM(BPF_DW, R10, R1, -40), ··· 8570 5842 "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test side effects", 8571 5843 .u.insns_int = { 8572 5844 BPF_LD_IMM64(R1, 0x0123456789abcdefULL), 8573 - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), 5845 + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), 8574 5846 BPF_ALU64_REG(BPF_MOV, R0, R1), 8575 5847 BPF_STX_MEM(BPF_DW, R10, R1, -40), 8576 5848 BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40), 8577 - BPF_LD_IMM64(R0, 0xfecdba9876543210ULL), 5849 + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), 8578 5850 BPF_JMP_REG(BPF_JNE, R0, R2, 1), 8579 5851 BPF_ALU64_REG(BPF_SUB, R0, R2), 8580 5852 BPF_EXIT_INSN(), ··· 9920 7192 { }, 9921 7193 { { 0, 1 } }, 9922 7194 }, 9923 - { /* Mainly checking JIT here. */ 9924 - "BPF_MAXINSNS: Very long conditional jump", 9925 - { }, 9926 - INTERNAL | FLAG_NO_DATA, 9927 - { }, 9928 - { { 0, 1 } }, 9929 - .fill_helper = bpf_fill_long_jmp, 9930 - }, 9931 7195 { 9932 7196 "JMP_JA: Jump, gap, jump, ...", 9933 7197 { }, ··· 11133 8413 {}, 11134 8414 { { 0, 2 } }, 11135 8415 }, 8416 + /* BPF_LDX_MEM with operand aliasing */ 8417 + { 8418 + "LDX_MEM_B: operand register aliasing", 8419 + .u.insns_int = { 8420 + BPF_ST_MEM(BPF_B, R10, -8, 123), 8421 + BPF_MOV64_REG(R0, R10), 8422 + BPF_LDX_MEM(BPF_B, R0, R0, -8), 8423 + BPF_EXIT_INSN(), 8424 + }, 8425 + INTERNAL, 8426 + { }, 8427 + { { 0, 123 } }, 8428 + .stack_depth = 8, 8429 + }, 8430 + { 8431 + "LDX_MEM_H: operand register aliasing", 8432 + .u.insns_int = { 8433 + BPF_ST_MEM(BPF_H, R10, -8, 12345), 8434 + BPF_MOV64_REG(R0, R10), 8435 + BPF_LDX_MEM(BPF_H, R0, R0, -8), 8436 + BPF_EXIT_INSN(), 8437 + }, 8438 + INTERNAL, 8439 + { }, 8440 + { { 0, 12345 } }, 8441 + .stack_depth = 8, 8442 + }, 8443 + { 8444 + 
"LDX_MEM_W: operand register aliasing", 8445 + .u.insns_int = { 8446 + BPF_ST_MEM(BPF_W, R10, -8, 123456789), 8447 + BPF_MOV64_REG(R0, R10), 8448 + BPF_LDX_MEM(BPF_W, R0, R0, -8), 8449 + BPF_EXIT_INSN(), 8450 + }, 8451 + INTERNAL, 8452 + { }, 8453 + { { 0, 123456789 } }, 8454 + .stack_depth = 8, 8455 + }, 8456 + { 8457 + "LDX_MEM_DW: operand register aliasing", 8458 + .u.insns_int = { 8459 + BPF_LD_IMM64(R1, 0x123456789abcdefULL), 8460 + BPF_STX_MEM(BPF_DW, R10, R1, -8), 8461 + BPF_MOV64_REG(R0, R10), 8462 + BPF_LDX_MEM(BPF_DW, R0, R0, -8), 8463 + BPF_ALU64_REG(BPF_SUB, R0, R1), 8464 + BPF_MOV64_REG(R1, R0), 8465 + BPF_ALU64_IMM(BPF_RSH, R1, 32), 8466 + BPF_ALU64_REG(BPF_OR, R0, R1), 8467 + BPF_EXIT_INSN(), 8468 + }, 8469 + INTERNAL, 8470 + { }, 8471 + { { 0, 0 } }, 8472 + .stack_depth = 8, 8473 + }, 8474 + /* 8475 + * Register (non-)clobbering tests for the case where a JIT implements 8476 + * complex ALU or ATOMIC operations via function calls. If so, the 8477 + * function call must be transparent to the eBPF registers. The JIT 8478 + * must therefore save and restore relevant registers across the call. 8479 + * The following tests check that the eBPF registers retain their 8480 + * values after such an operation. Mainly intended for complex ALU 8481 + * and atomic operation, but we run it for all. You never know... 8482 + * 8483 + * Note that each operations should be tested twice with different 8484 + * destinations, to check preservation for all registers. 
8485 + */ 8486 + #define BPF_TEST_CLOBBER_ALU(alu, op, dst, src) \ 8487 + { \ 8488 + #alu "_" #op " to " #dst ": no clobbering", \ 8489 + .u.insns_int = { \ 8490 + BPF_ALU64_IMM(BPF_MOV, R0, R0), \ 8491 + BPF_ALU64_IMM(BPF_MOV, R1, R1), \ 8492 + BPF_ALU64_IMM(BPF_MOV, R2, R2), \ 8493 + BPF_ALU64_IMM(BPF_MOV, R3, R3), \ 8494 + BPF_ALU64_IMM(BPF_MOV, R4, R4), \ 8495 + BPF_ALU64_IMM(BPF_MOV, R5, R5), \ 8496 + BPF_ALU64_IMM(BPF_MOV, R6, R6), \ 8497 + BPF_ALU64_IMM(BPF_MOV, R7, R7), \ 8498 + BPF_ALU64_IMM(BPF_MOV, R8, R8), \ 8499 + BPF_ALU64_IMM(BPF_MOV, R9, R9), \ 8500 + BPF_##alu(BPF_ ##op, dst, src), \ 8501 + BPF_ALU32_IMM(BPF_MOV, dst, dst), \ 8502 + BPF_JMP_IMM(BPF_JNE, R0, R0, 10), \ 8503 + BPF_JMP_IMM(BPF_JNE, R1, R1, 9), \ 8504 + BPF_JMP_IMM(BPF_JNE, R2, R2, 8), \ 8505 + BPF_JMP_IMM(BPF_JNE, R3, R3, 7), \ 8506 + BPF_JMP_IMM(BPF_JNE, R4, R4, 6), \ 8507 + BPF_JMP_IMM(BPF_JNE, R5, R5, 5), \ 8508 + BPF_JMP_IMM(BPF_JNE, R6, R6, 4), \ 8509 + BPF_JMP_IMM(BPF_JNE, R7, R7, 3), \ 8510 + BPF_JMP_IMM(BPF_JNE, R8, R8, 2), \ 8511 + BPF_JMP_IMM(BPF_JNE, R9, R9, 1), \ 8512 + BPF_ALU64_IMM(BPF_MOV, R0, 1), \ 8513 + BPF_EXIT_INSN(), \ 8514 + }, \ 8515 + INTERNAL, \ 8516 + { }, \ 8517 + { { 0, 1 } } \ 8518 + } 8519 + /* ALU64 operations, register clobbering */ 8520 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, AND, R8, 123456789), 8521 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, AND, R9, 123456789), 8522 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, OR, R8, 123456789), 8523 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, OR, R9, 123456789), 8524 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, XOR, R8, 123456789), 8525 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, XOR, R9, 123456789), 8526 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, LSH, R8, 12), 8527 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, LSH, R9, 12), 8528 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, RSH, R8, 12), 8529 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, RSH, R9, 12), 8530 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ARSH, R8, 12), 8531 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ARSH, R9, 12), 8532 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ADD, R8, 
123456789), 8533 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ADD, R9, 123456789), 8534 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, SUB, R8, 123456789), 8535 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, SUB, R9, 123456789), 8536 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MUL, R8, 123456789), 8537 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MUL, R9, 123456789), 8538 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, DIV, R8, 123456789), 8539 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, DIV, R9, 123456789), 8540 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MOD, R8, 123456789), 8541 + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MOD, R9, 123456789), 8542 + /* ALU32 immediate operations, register clobbering */ 8543 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, AND, R8, 123456789), 8544 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, AND, R9, 123456789), 8545 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, OR, R8, 123456789), 8546 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, OR, R9, 123456789), 8547 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, XOR, R8, 123456789), 8548 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, XOR, R9, 123456789), 8549 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, LSH, R8, 12), 8550 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, LSH, R9, 12), 8551 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, RSH, R8, 12), 8552 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, RSH, R9, 12), 8553 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ARSH, R8, 12), 8554 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ARSH, R9, 12), 8555 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ADD, R8, 123456789), 8556 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ADD, R9, 123456789), 8557 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, SUB, R8, 123456789), 8558 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, SUB, R9, 123456789), 8559 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MUL, R8, 123456789), 8560 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MUL, R9, 123456789), 8561 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, DIV, R8, 123456789), 8562 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, DIV, R9, 123456789), 8563 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MOD, R8, 123456789), 8564 + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MOD, R9, 123456789), 8565 + /* ALU64 register operations, register clobbering */ 8566 + BPF_TEST_CLOBBER_ALU(ALU64_REG, AND, R8, 
R1), 8567 + BPF_TEST_CLOBBER_ALU(ALU64_REG, AND, R9, R1), 8568 + BPF_TEST_CLOBBER_ALU(ALU64_REG, OR, R8, R1), 8569 + BPF_TEST_CLOBBER_ALU(ALU64_REG, OR, R9, R1), 8570 + BPF_TEST_CLOBBER_ALU(ALU64_REG, XOR, R8, R1), 8571 + BPF_TEST_CLOBBER_ALU(ALU64_REG, XOR, R9, R1), 8572 + BPF_TEST_CLOBBER_ALU(ALU64_REG, LSH, R8, R1), 8573 + BPF_TEST_CLOBBER_ALU(ALU64_REG, LSH, R9, R1), 8574 + BPF_TEST_CLOBBER_ALU(ALU64_REG, RSH, R8, R1), 8575 + BPF_TEST_CLOBBER_ALU(ALU64_REG, RSH, R9, R1), 8576 + BPF_TEST_CLOBBER_ALU(ALU64_REG, ARSH, R8, R1), 8577 + BPF_TEST_CLOBBER_ALU(ALU64_REG, ARSH, R9, R1), 8578 + BPF_TEST_CLOBBER_ALU(ALU64_REG, ADD, R8, R1), 8579 + BPF_TEST_CLOBBER_ALU(ALU64_REG, ADD, R9, R1), 8580 + BPF_TEST_CLOBBER_ALU(ALU64_REG, SUB, R8, R1), 8581 + BPF_TEST_CLOBBER_ALU(ALU64_REG, SUB, R9, R1), 8582 + BPF_TEST_CLOBBER_ALU(ALU64_REG, MUL, R8, R1), 8583 + BPF_TEST_CLOBBER_ALU(ALU64_REG, MUL, R9, R1), 8584 + BPF_TEST_CLOBBER_ALU(ALU64_REG, DIV, R8, R1), 8585 + BPF_TEST_CLOBBER_ALU(ALU64_REG, DIV, R9, R1), 8586 + BPF_TEST_CLOBBER_ALU(ALU64_REG, MOD, R8, R1), 8587 + BPF_TEST_CLOBBER_ALU(ALU64_REG, MOD, R9, R1), 8588 + /* ALU32 register operations, register clobbering */ 8589 + BPF_TEST_CLOBBER_ALU(ALU32_REG, AND, R8, R1), 8590 + BPF_TEST_CLOBBER_ALU(ALU32_REG, AND, R9, R1), 8591 + BPF_TEST_CLOBBER_ALU(ALU32_REG, OR, R8, R1), 8592 + BPF_TEST_CLOBBER_ALU(ALU32_REG, OR, R9, R1), 8593 + BPF_TEST_CLOBBER_ALU(ALU32_REG, XOR, R8, R1), 8594 + BPF_TEST_CLOBBER_ALU(ALU32_REG, XOR, R9, R1), 8595 + BPF_TEST_CLOBBER_ALU(ALU32_REG, LSH, R8, R1), 8596 + BPF_TEST_CLOBBER_ALU(ALU32_REG, LSH, R9, R1), 8597 + BPF_TEST_CLOBBER_ALU(ALU32_REG, RSH, R8, R1), 8598 + BPF_TEST_CLOBBER_ALU(ALU32_REG, RSH, R9, R1), 8599 + BPF_TEST_CLOBBER_ALU(ALU32_REG, ARSH, R8, R1), 8600 + BPF_TEST_CLOBBER_ALU(ALU32_REG, ARSH, R9, R1), 8601 + BPF_TEST_CLOBBER_ALU(ALU32_REG, ADD, R8, R1), 8602 + BPF_TEST_CLOBBER_ALU(ALU32_REG, ADD, R9, R1), 8603 + BPF_TEST_CLOBBER_ALU(ALU32_REG, SUB, R8, R1), 8604 + 
BPF_TEST_CLOBBER_ALU(ALU32_REG, SUB, R9, R1), 8605 + BPF_TEST_CLOBBER_ALU(ALU32_REG, MUL, R8, R1), 8606 + BPF_TEST_CLOBBER_ALU(ALU32_REG, MUL, R9, R1), 8607 + BPF_TEST_CLOBBER_ALU(ALU32_REG, DIV, R8, R1), 8608 + BPF_TEST_CLOBBER_ALU(ALU32_REG, DIV, R9, R1), 8609 + BPF_TEST_CLOBBER_ALU(ALU32_REG, MOD, R8, R1), 8610 + BPF_TEST_CLOBBER_ALU(ALU32_REG, MOD, R9, R1), 8611 + #undef BPF_TEST_CLOBBER_ALU 8612 + #define BPF_TEST_CLOBBER_ATOMIC(width, op) \ 8613 + { \ 8614 + "Atomic_" #width " " #op ": no clobbering", \ 8615 + .u.insns_int = { \ 8616 + BPF_ALU64_IMM(BPF_MOV, R0, 0), \ 8617 + BPF_ALU64_IMM(BPF_MOV, R1, 1), \ 8618 + BPF_ALU64_IMM(BPF_MOV, R2, 2), \ 8619 + BPF_ALU64_IMM(BPF_MOV, R3, 3), \ 8620 + BPF_ALU64_IMM(BPF_MOV, R4, 4), \ 8621 + BPF_ALU64_IMM(BPF_MOV, R5, 5), \ 8622 + BPF_ALU64_IMM(BPF_MOV, R6, 6), \ 8623 + BPF_ALU64_IMM(BPF_MOV, R7, 7), \ 8624 + BPF_ALU64_IMM(BPF_MOV, R8, 8), \ 8625 + BPF_ALU64_IMM(BPF_MOV, R9, 9), \ 8626 + BPF_ST_MEM(width, R10, -8, \ 8627 + (op) == BPF_CMPXCHG ? 0 : \ 8628 + (op) & BPF_FETCH ? 
1 : 0), \ 8629 + BPF_ATOMIC_OP(width, op, R10, R1, -8), \ 8630 + BPF_JMP_IMM(BPF_JNE, R0, 0, 10), \ 8631 + BPF_JMP_IMM(BPF_JNE, R1, 1, 9), \ 8632 + BPF_JMP_IMM(BPF_JNE, R2, 2, 8), \ 8633 + BPF_JMP_IMM(BPF_JNE, R3, 3, 7), \ 8634 + BPF_JMP_IMM(BPF_JNE, R4, 4, 6), \ 8635 + BPF_JMP_IMM(BPF_JNE, R5, 5, 5), \ 8636 + BPF_JMP_IMM(BPF_JNE, R6, 6, 4), \ 8637 + BPF_JMP_IMM(BPF_JNE, R7, 7, 3), \ 8638 + BPF_JMP_IMM(BPF_JNE, R8, 8, 2), \ 8639 + BPF_JMP_IMM(BPF_JNE, R9, 9, 1), \ 8640 + BPF_ALU64_IMM(BPF_MOV, R0, 1), \ 8641 + BPF_EXIT_INSN(), \ 8642 + }, \ 8643 + INTERNAL, \ 8644 + { }, \ 8645 + { { 0, 1 } }, \ 8646 + .stack_depth = 8, \ 8647 + } 8648 + /* 64-bit atomic operations, register clobbering */ 8649 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_ADD), 8650 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_AND), 8651 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_OR), 8652 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_XOR), 8653 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_ADD | BPF_FETCH), 8654 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_AND | BPF_FETCH), 8655 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_OR | BPF_FETCH), 8656 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_XOR | BPF_FETCH), 8657 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_XCHG), 8658 + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_CMPXCHG), 8659 + /* 32-bit atomic operations, register clobbering */ 8660 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_ADD), 8661 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_AND), 8662 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_OR), 8663 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_XOR), 8664 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_ADD | BPF_FETCH), 8665 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_AND | BPF_FETCH), 8666 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_OR | BPF_FETCH), 8667 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_XOR | BPF_FETCH), 8668 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_XCHG), 8669 + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_CMPXCHG), 8670 + #undef BPF_TEST_CLOBBER_ATOMIC 8671 + /* Checking that ALU32 src is not zero extended in place */ 8672 + #define BPF_ALU32_SRC_ZEXT(op) \ 8673 + { \ 8674 + 
"ALU32_" #op "_X: src preserved in zext", \ 8675 + .u.insns_int = { \ 8676 + BPF_LD_IMM64(R1, 0x0123456789acbdefULL),\ 8677 + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL),\ 8678 + BPF_ALU64_REG(BPF_MOV, R0, R1), \ 8679 + BPF_ALU32_REG(BPF_##op, R2, R1), \ 8680 + BPF_ALU64_REG(BPF_SUB, R0, R1), \ 8681 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 8682 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 8683 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8684 + BPF_EXIT_INSN(), \ 8685 + }, \ 8686 + INTERNAL, \ 8687 + { }, \ 8688 + { { 0, 0 } }, \ 8689 + } 8690 + BPF_ALU32_SRC_ZEXT(MOV), 8691 + BPF_ALU32_SRC_ZEXT(AND), 8692 + BPF_ALU32_SRC_ZEXT(OR), 8693 + BPF_ALU32_SRC_ZEXT(XOR), 8694 + BPF_ALU32_SRC_ZEXT(ADD), 8695 + BPF_ALU32_SRC_ZEXT(SUB), 8696 + BPF_ALU32_SRC_ZEXT(MUL), 8697 + BPF_ALU32_SRC_ZEXT(DIV), 8698 + BPF_ALU32_SRC_ZEXT(MOD), 8699 + #undef BPF_ALU32_SRC_ZEXT 8700 + /* Checking that ATOMIC32 src is not zero extended in place */ 8701 + #define BPF_ATOMIC32_SRC_ZEXT(op) \ 8702 + { \ 8703 + "ATOMIC_W_" #op ": src preserved in zext", \ 8704 + .u.insns_int = { \ 8705 + BPF_LD_IMM64(R0, 0x0123456789acbdefULL), \ 8706 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 8707 + BPF_ST_MEM(BPF_W, R10, -4, 0), \ 8708 + BPF_ATOMIC_OP(BPF_W, BPF_##op, R10, R1, -4), \ 8709 + BPF_ALU64_REG(BPF_SUB, R0, R1), \ 8710 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 8711 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 8712 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8713 + BPF_EXIT_INSN(), \ 8714 + }, \ 8715 + INTERNAL, \ 8716 + { }, \ 8717 + { { 0, 0 } }, \ 8718 + .stack_depth = 8, \ 8719 + } 8720 + BPF_ATOMIC32_SRC_ZEXT(ADD), 8721 + BPF_ATOMIC32_SRC_ZEXT(AND), 8722 + BPF_ATOMIC32_SRC_ZEXT(OR), 8723 + BPF_ATOMIC32_SRC_ZEXT(XOR), 8724 + #undef BPF_ATOMIC32_SRC_ZEXT 8725 + /* Checking that CMPXCHG32 src is not zero extended in place */ 8726 + { 8727 + "ATOMIC_W_CMPXCHG: src preserved in zext", 8728 + .u.insns_int = { 8729 + BPF_LD_IMM64(R1, 0x0123456789acbdefULL), 8730 + BPF_ALU64_REG(BPF_MOV, R2, R1), 8731 + BPF_ALU64_REG(BPF_MOV, R0, 0), 8732 + BPF_ST_MEM(BPF_W, 
R10, -4, 0), 8733 + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R1, -4), 8734 + BPF_ALU64_REG(BPF_SUB, R1, R2), 8735 + BPF_ALU64_REG(BPF_MOV, R2, R1), 8736 + BPF_ALU64_IMM(BPF_RSH, R2, 32), 8737 + BPF_ALU64_REG(BPF_OR, R1, R2), 8738 + BPF_ALU64_REG(BPF_MOV, R0, R1), 8739 + BPF_EXIT_INSN(), 8740 + }, 8741 + INTERNAL, 8742 + { }, 8743 + { { 0, 0 } }, 8744 + .stack_depth = 8, 8745 + }, 8746 + /* Checking that JMP32 immediate src is not zero extended in place */ 8747 + #define BPF_JMP32_IMM_ZEXT(op) \ 8748 + { \ 8749 + "JMP32_" #op "_K: operand preserved in zext", \ 8750 + .u.insns_int = { \ 8751 + BPF_LD_IMM64(R0, 0x0123456789acbdefULL),\ 8752 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 8753 + BPF_JMP32_IMM(BPF_##op, R0, 1234, 1), \ 8754 + BPF_JMP_A(0), /* Nop */ \ 8755 + BPF_ALU64_REG(BPF_SUB, R0, R1), \ 8756 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 8757 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 8758 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8759 + BPF_EXIT_INSN(), \ 8760 + }, \ 8761 + INTERNAL, \ 8762 + { }, \ 8763 + { { 0, 0 } }, \ 8764 + } 8765 + BPF_JMP32_IMM_ZEXT(JEQ), 8766 + BPF_JMP32_IMM_ZEXT(JNE), 8767 + BPF_JMP32_IMM_ZEXT(JSET), 8768 + BPF_JMP32_IMM_ZEXT(JGT), 8769 + BPF_JMP32_IMM_ZEXT(JGE), 8770 + BPF_JMP32_IMM_ZEXT(JLT), 8771 + BPF_JMP32_IMM_ZEXT(JLE), 8772 + BPF_JMP32_IMM_ZEXT(JSGT), 8773 + BPF_JMP32_IMM_ZEXT(JSGE), 8774 + BPF_JMP32_IMM_ZEXT(JSGT), 8775 + BPF_JMP32_IMM_ZEXT(JSLT), 8776 + BPF_JMP32_IMM_ZEXT(JSLE), 8777 + #undef BPF_JMP32_IMM_ZEXT 8778 + /* Checking that JMP32 dst & src are not zero extended in place */ 8779 + #define BPF_JMP32_REG_ZEXT(op) \ 8780 + { \ 8781 + "JMP32_" #op "_X: operands preserved in zext", \ 8782 + .u.insns_int = { \ 8783 + BPF_LD_IMM64(R0, 0x0123456789acbdefULL),\ 8784 + BPF_LD_IMM64(R1, 0xfedcba9876543210ULL),\ 8785 + BPF_ALU64_REG(BPF_MOV, R2, R0), \ 8786 + BPF_ALU64_REG(BPF_MOV, R3, R1), \ 8787 + BPF_JMP32_REG(BPF_##op, R0, R1, 1), \ 8788 + BPF_JMP_A(0), /* Nop */ \ 8789 + BPF_ALU64_REG(BPF_SUB, R0, R2), \ 8790 + BPF_ALU64_REG(BPF_SUB, R1, R3), \ 8791
+ BPF_ALU64_REG(BPF_OR, R0, R1), \ 8792 + BPF_ALU64_REG(BPF_MOV, R1, R0), \ 8793 + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ 8794 + BPF_ALU64_REG(BPF_OR, R0, R1), \ 8795 + BPF_EXIT_INSN(), \ 8796 + }, \ 8797 + INTERNAL, \ 8798 + { }, \ 8799 + { { 0, 0 } }, \ 8800 + } 8801 + BPF_JMP32_REG_ZEXT(JEQ), 8802 + BPF_JMP32_REG_ZEXT(JNE), 8803 + BPF_JMP32_REG_ZEXT(JSET), 8804 + BPF_JMP32_REG_ZEXT(JGT), 8805 + BPF_JMP32_REG_ZEXT(JGE), 8806 + BPF_JMP32_REG_ZEXT(JLT), 8807 + BPF_JMP32_REG_ZEXT(JLE), 8808 + BPF_JMP32_REG_ZEXT(JSGT), 8809 + BPF_JMP32_REG_ZEXT(JSGE), 8810 + BPF_JMP32_REG_ZEXT(JSGT), 8811 + BPF_JMP32_REG_ZEXT(JSLT), 8812 + BPF_JMP32_REG_ZEXT(JSLE), 8813 + #undef BPF_JMP32_REG_ZEXT 8814 + /* ALU64 K register combinations */ 8815 + { 8816 + "ALU64_MOV_K: registers", 8817 + { }, 8818 + INTERNAL, 8819 + { }, 8820 + { { 0, 1 } }, 8821 + .fill_helper = bpf_fill_alu64_mov_imm_regs, 8822 + }, 8823 + { 8824 + "ALU64_AND_K: registers", 8825 + { }, 8826 + INTERNAL, 8827 + { }, 8828 + { { 0, 1 } }, 8829 + .fill_helper = bpf_fill_alu64_and_imm_regs, 8830 + }, 8831 + { 8832 + "ALU64_OR_K: registers", 8833 + { }, 8834 + INTERNAL, 8835 + { }, 8836 + { { 0, 1 } }, 8837 + .fill_helper = bpf_fill_alu64_or_imm_regs, 8838 + }, 8839 + { 8840 + "ALU64_XOR_K: registers", 8841 + { }, 8842 + INTERNAL, 8843 + { }, 8844 + { { 0, 1 } }, 8845 + .fill_helper = bpf_fill_alu64_xor_imm_regs, 8846 + }, 8847 + { 8848 + "ALU64_LSH_K: registers", 8849 + { }, 8850 + INTERNAL, 8851 + { }, 8852 + { { 0, 1 } }, 8853 + .fill_helper = bpf_fill_alu64_lsh_imm_regs, 8854 + }, 8855 + { 8856 + "ALU64_RSH_K: registers", 8857 + { }, 8858 + INTERNAL, 8859 + { }, 8860 + { { 0, 1 } }, 8861 + .fill_helper = bpf_fill_alu64_rsh_imm_regs, 8862 + }, 8863 + { 8864 + "ALU64_ARSH_K: registers", 8865 + { }, 8866 + INTERNAL, 8867 + { }, 8868 + { { 0, 1 } }, 8869 + .fill_helper = bpf_fill_alu64_arsh_imm_regs, 8870 + }, 8871 + { 8872 + "ALU64_ADD_K: registers", 8873 + { }, 8874 + INTERNAL, 8875 + { }, 8876 + { { 0, 1 } }, 8877 +
.fill_helper = bpf_fill_alu64_add_imm_regs, 8878 + }, 8879 + { 8880 + "ALU64_SUB_K: registers", 8881 + { }, 8882 + INTERNAL, 8883 + { }, 8884 + { { 0, 1 } }, 8885 + .fill_helper = bpf_fill_alu64_sub_imm_regs, 8886 + }, 8887 + { 8888 + "ALU64_MUL_K: registers", 8889 + { }, 8890 + INTERNAL, 8891 + { }, 8892 + { { 0, 1 } }, 8893 + .fill_helper = bpf_fill_alu64_mul_imm_regs, 8894 + }, 8895 + { 8896 + "ALU64_DIV_K: registers", 8897 + { }, 8898 + INTERNAL, 8899 + { }, 8900 + { { 0, 1 } }, 8901 + .fill_helper = bpf_fill_alu64_div_imm_regs, 8902 + }, 8903 + { 8904 + "ALU64_MOD_K: registers", 8905 + { }, 8906 + INTERNAL, 8907 + { }, 8908 + { { 0, 1 } }, 8909 + .fill_helper = bpf_fill_alu64_mod_imm_regs, 8910 + }, 8911 + /* ALU32 K registers */ 8912 + { 8913 + "ALU32_MOV_K: registers", 8914 + { }, 8915 + INTERNAL, 8916 + { }, 8917 + { { 0, 1 } }, 8918 + .fill_helper = bpf_fill_alu32_mov_imm_regs, 8919 + }, 8920 + { 8921 + "ALU32_AND_K: registers", 8922 + { }, 8923 + INTERNAL, 8924 + { }, 8925 + { { 0, 1 } }, 8926 + .fill_helper = bpf_fill_alu32_and_imm_regs, 8927 + }, 8928 + { 8929 + "ALU32_OR_K: registers", 8930 + { }, 8931 + INTERNAL, 8932 + { }, 8933 + { { 0, 1 } }, 8934 + .fill_helper = bpf_fill_alu32_or_imm_regs, 8935 + }, 8936 + { 8937 + "ALU32_XOR_K: registers", 8938 + { }, 8939 + INTERNAL, 8940 + { }, 8941 + { { 0, 1 } }, 8942 + .fill_helper = bpf_fill_alu32_xor_imm_regs, 8943 + }, 8944 + { 8945 + "ALU32_LSH_K: registers", 8946 + { }, 8947 + INTERNAL, 8948 + { }, 8949 + { { 0, 1 } }, 8950 + .fill_helper = bpf_fill_alu32_lsh_imm_regs, 8951 + }, 8952 + { 8953 + "ALU32_RSH_K: registers", 8954 + { }, 8955 + INTERNAL, 8956 + { }, 8957 + { { 0, 1 } }, 8958 + .fill_helper = bpf_fill_alu32_rsh_imm_regs, 8959 + }, 8960 + { 8961 + "ALU32_ARSH_K: registers", 8962 + { }, 8963 + INTERNAL, 8964 + { }, 8965 + { { 0, 1 } }, 8966 + .fill_helper = bpf_fill_alu32_arsh_imm_regs, 8967 + }, 8968 + { 8969 + "ALU32_ADD_K: registers", 8970 + { }, 8971 + INTERNAL, 8972 + { }, 8973 + { { 0, 1 
} }, 8974 + .fill_helper = bpf_fill_alu32_add_imm_regs, 8975 + }, 8976 + { 8977 + "ALU32_SUB_K: registers", 8978 + { }, 8979 + INTERNAL, 8980 + { }, 8981 + { { 0, 1 } }, 8982 + .fill_helper = bpf_fill_alu32_sub_imm_regs, 8983 + }, 8984 + { 8985 + "ALU32_MUL_K: registers", 8986 + { }, 8987 + INTERNAL, 8988 + { }, 8989 + { { 0, 1 } }, 8990 + .fill_helper = bpf_fill_alu32_mul_imm_regs, 8991 + }, 8992 + { 8993 + "ALU32_DIV_K: registers", 8994 + { }, 8995 + INTERNAL, 8996 + { }, 8997 + { { 0, 1 } }, 8998 + .fill_helper = bpf_fill_alu32_div_imm_regs, 8999 + }, 9000 + { 9001 + "ALU32_MOD_K: registers", 9002 + { }, 9003 + INTERNAL, 9004 + { }, 9005 + { { 0, 1 } }, 9006 + .fill_helper = bpf_fill_alu32_mod_imm_regs, 9007 + }, 9008 + /* ALU64 X register combinations */ 9009 + { 9010 + "ALU64_MOV_X: register combinations", 9011 + { }, 9012 + INTERNAL, 9013 + { }, 9014 + { { 0, 1 } }, 9015 + .fill_helper = bpf_fill_alu64_mov_reg_pairs, 9016 + }, 9017 + { 9018 + "ALU64_AND_X: register combinations", 9019 + { }, 9020 + INTERNAL, 9021 + { }, 9022 + { { 0, 1 } }, 9023 + .fill_helper = bpf_fill_alu64_and_reg_pairs, 9024 + }, 9025 + { 9026 + "ALU64_OR_X: register combinations", 9027 + { }, 9028 + INTERNAL, 9029 + { }, 9030 + { { 0, 1 } }, 9031 + .fill_helper = bpf_fill_alu64_or_reg_pairs, 9032 + }, 9033 + { 9034 + "ALU64_XOR_X: register combinations", 9035 + { }, 9036 + INTERNAL, 9037 + { }, 9038 + { { 0, 1 } }, 9039 + .fill_helper = bpf_fill_alu64_xor_reg_pairs, 9040 + }, 9041 + { 9042 + "ALU64_LSH_X: register combinations", 9043 + { }, 9044 + INTERNAL, 9045 + { }, 9046 + { { 0, 1 } }, 9047 + .fill_helper = bpf_fill_alu64_lsh_reg_pairs, 9048 + }, 9049 + { 9050 + "ALU64_RSH_X: register combinations", 9051 + { }, 9052 + INTERNAL, 9053 + { }, 9054 + { { 0, 1 } }, 9055 + .fill_helper = bpf_fill_alu64_rsh_reg_pairs, 9056 + }, 9057 + { 9058 + "ALU64_ARSH_X: register combinations", 9059 + { }, 9060 + INTERNAL, 9061 + { }, 9062 + { { 0, 1 } }, 9063 + .fill_helper = 
bpf_fill_alu64_arsh_reg_pairs, 9064 + }, 9065 + { 9066 + "ALU64_ADD_X: register combinations", 9067 + { }, 9068 + INTERNAL, 9069 + { }, 9070 + { { 0, 1 } }, 9071 + .fill_helper = bpf_fill_alu64_add_reg_pairs, 9072 + }, 9073 + { 9074 + "ALU64_SUB_X: register combinations", 9075 + { }, 9076 + INTERNAL, 9077 + { }, 9078 + { { 0, 1 } }, 9079 + .fill_helper = bpf_fill_alu64_sub_reg_pairs, 9080 + }, 9081 + { 9082 + "ALU64_MUL_X: register combinations", 9083 + { }, 9084 + INTERNAL, 9085 + { }, 9086 + { { 0, 1 } }, 9087 + .fill_helper = bpf_fill_alu64_mul_reg_pairs, 9088 + }, 9089 + { 9090 + "ALU64_DIV_X: register combinations", 9091 + { }, 9092 + INTERNAL, 9093 + { }, 9094 + { { 0, 1 } }, 9095 + .fill_helper = bpf_fill_alu64_div_reg_pairs, 9096 + }, 9097 + { 9098 + "ALU64_MOD_X: register combinations", 9099 + { }, 9100 + INTERNAL, 9101 + { }, 9102 + { { 0, 1 } }, 9103 + .fill_helper = bpf_fill_alu64_mod_reg_pairs, 9104 + }, 9105 + /* ALU32 X register combinations */ 9106 + { 9107 + "ALU32_MOV_X: register combinations", 9108 + { }, 9109 + INTERNAL, 9110 + { }, 9111 + { { 0, 1 } }, 9112 + .fill_helper = bpf_fill_alu32_mov_reg_pairs, 9113 + }, 9114 + { 9115 + "ALU32_AND_X: register combinations", 9116 + { }, 9117 + INTERNAL, 9118 + { }, 9119 + { { 0, 1 } }, 9120 + .fill_helper = bpf_fill_alu32_and_reg_pairs, 9121 + }, 9122 + { 9123 + "ALU32_OR_X: register combinations", 9124 + { }, 9125 + INTERNAL, 9126 + { }, 9127 + { { 0, 1 } }, 9128 + .fill_helper = bpf_fill_alu32_or_reg_pairs, 9129 + }, 9130 + { 9131 + "ALU32_XOR_X: register combinations", 9132 + { }, 9133 + INTERNAL, 9134 + { }, 9135 + { { 0, 1 } }, 9136 + .fill_helper = bpf_fill_alu32_xor_reg_pairs, 9137 + }, 9138 + { 9139 + "ALU32_LSH_X: register combinations", 9140 + { }, 9141 + INTERNAL, 9142 + { }, 9143 + { { 0, 1 } }, 9144 + .fill_helper = bpf_fill_alu32_lsh_reg_pairs, 9145 + }, 9146 + { 9147 + "ALU32_RSH_X: register combinations", 9148 + { }, 9149 + INTERNAL, 9150 + { }, 9151 + { { 0, 1 } }, 9152 + .fill_helper = 
bpf_fill_alu32_rsh_reg_pairs, 9153 + }, 9154 + { 9155 + "ALU32_ARSH_X: register combinations", 9156 + { }, 9157 + INTERNAL, 9158 + { }, 9159 + { { 0, 1 } }, 9160 + .fill_helper = bpf_fill_alu32_arsh_reg_pairs, 9161 + }, 9162 + { 9163 + "ALU32_ADD_X: register combinations", 9164 + { }, 9165 + INTERNAL, 9166 + { }, 9167 + { { 0, 1 } }, 9168 + .fill_helper = bpf_fill_alu32_add_reg_pairs, 9169 + }, 9170 + { 9171 + "ALU32_SUB_X: register combinations", 9172 + { }, 9173 + INTERNAL, 9174 + { }, 9175 + { { 0, 1 } }, 9176 + .fill_helper = bpf_fill_alu32_sub_reg_pairs, 9177 + }, 9178 + { 9179 + "ALU32_MUL_X: register combinations", 9180 + { }, 9181 + INTERNAL, 9182 + { }, 9183 + { { 0, 1 } }, 9184 + .fill_helper = bpf_fill_alu32_mul_reg_pairs, 9185 + }, 9186 + { 9187 + "ALU32_DIV_X: register combinations", 9188 + { }, 9189 + INTERNAL, 9190 + { }, 9191 + { { 0, 1 } }, 9192 + .fill_helper = bpf_fill_alu32_div_reg_pairs, 9193 + }, 9194 + { 9195 + "ALU32_MOD_X: register combinations", 9196 + { }, 9197 + INTERNAL, 9198 + { }, 9199 + { { 0, 1 } }, 9200 + .fill_helper = bpf_fill_alu32_mod_reg_pairs, 9201 + }, 9202 + /* Exhaustive test of ALU64 shift operations */ 9203 + { 9204 + "ALU64_LSH_K: all shift values", 9205 + { }, 9206 + INTERNAL | FLAG_NO_DATA, 9207 + { }, 9208 + { { 0, 1 } }, 9209 + .fill_helper = bpf_fill_alu64_lsh_imm, 9210 + }, 9211 + { 9212 + "ALU64_RSH_K: all shift values", 9213 + { }, 9214 + INTERNAL | FLAG_NO_DATA, 9215 + { }, 9216 + { { 0, 1 } }, 9217 + .fill_helper = bpf_fill_alu64_rsh_imm, 9218 + }, 9219 + { 9220 + "ALU64_ARSH_K: all shift values", 9221 + { }, 9222 + INTERNAL | FLAG_NO_DATA, 9223 + { }, 9224 + { { 0, 1 } }, 9225 + .fill_helper = bpf_fill_alu64_arsh_imm, 9226 + }, 9227 + { 9228 + "ALU64_LSH_X: all shift values", 9229 + { }, 9230 + INTERNAL | FLAG_NO_DATA, 9231 + { }, 9232 + { { 0, 1 } }, 9233 + .fill_helper = bpf_fill_alu64_lsh_reg, 9234 + }, 9235 + { 9236 + "ALU64_RSH_X: all shift values", 9237 + { }, 9238 + INTERNAL | FLAG_NO_DATA, 9239 + { },
9240 + { { 0, 1 } }, 9241 + .fill_helper = bpf_fill_alu64_rsh_reg, 9242 + }, 9243 + { 9244 + "ALU64_ARSH_X: all shift values", 9245 + { }, 9246 + INTERNAL | FLAG_NO_DATA, 9247 + { }, 9248 + { { 0, 1 } }, 9249 + .fill_helper = bpf_fill_alu64_arsh_reg, 9250 + }, 9251 + /* Exhaustive test of ALU32 shift operations */ 9252 + { 9253 + "ALU32_LSH_K: all shift values", 9254 + { }, 9255 + INTERNAL | FLAG_NO_DATA, 9256 + { }, 9257 + { { 0, 1 } }, 9258 + .fill_helper = bpf_fill_alu32_lsh_imm, 9259 + }, 9260 + { 9261 + "ALU32_RSH_K: all shift values", 9262 + { }, 9263 + INTERNAL | FLAG_NO_DATA, 9264 + { }, 9265 + { { 0, 1 } }, 9266 + .fill_helper = bpf_fill_alu32_rsh_imm, 9267 + }, 9268 + { 9269 + "ALU32_ARSH_K: all shift values", 9270 + { }, 9271 + INTERNAL | FLAG_NO_DATA, 9272 + { }, 9273 + { { 0, 1 } }, 9274 + .fill_helper = bpf_fill_alu32_arsh_imm, 9275 + }, 9276 + { 9277 + "ALU32_LSH_X: all shift values", 9278 + { }, 9279 + INTERNAL | FLAG_NO_DATA, 9280 + { }, 9281 + { { 0, 1 } }, 9282 + .fill_helper = bpf_fill_alu32_lsh_reg, 9283 + }, 9284 + { 9285 + "ALU32_RSH_X: all shift values", 9286 + { }, 9287 + INTERNAL | FLAG_NO_DATA, 9288 + { }, 9289 + { { 0, 1 } }, 9290 + .fill_helper = bpf_fill_alu32_rsh_reg, 9291 + }, 9292 + { 9293 + "ALU32_ARSH_X: all shift values", 9294 + { }, 9295 + INTERNAL | FLAG_NO_DATA, 9296 + { }, 9297 + { { 0, 1 } }, 9298 + .fill_helper = bpf_fill_alu32_arsh_reg, 9299 + }, 9300 + /* 9301 + * Exhaustive test of ALU64 shift operations when 9302 + * source and destination register are the same. 
9303 + */ 9304 + { 9305 + "ALU64_LSH_X: all shift values with the same register", 9306 + { }, 9307 + INTERNAL | FLAG_NO_DATA, 9308 + { }, 9309 + { { 0, 1 } }, 9310 + .fill_helper = bpf_fill_alu64_lsh_same_reg, 9311 + }, 9312 + { 9313 + "ALU64_RSH_X: all shift values with the same register", 9314 + { }, 9315 + INTERNAL | FLAG_NO_DATA, 9316 + { }, 9317 + { { 0, 1 } }, 9318 + .fill_helper = bpf_fill_alu64_rsh_same_reg, 9319 + }, 9320 + { 9321 + "ALU64_ARSH_X: all shift values with the same register", 9322 + { }, 9323 + INTERNAL | FLAG_NO_DATA, 9324 + { }, 9325 + { { 0, 1 } }, 9326 + .fill_helper = bpf_fill_alu64_arsh_same_reg, 9327 + }, 9328 + /* 9329 + * Exhaustive test of ALU32 shift operations when 9330 + * source and destination register are the same. 9331 + */ 9332 + { 9333 + "ALU32_LSH_X: all shift values with the same register", 9334 + { }, 9335 + INTERNAL | FLAG_NO_DATA, 9336 + { }, 9337 + { { 0, 1 } }, 9338 + .fill_helper = bpf_fill_alu32_lsh_same_reg, 9339 + }, 9340 + { 9341 + "ALU32_RSH_X: all shift values with the same register", 9342 + { }, 9343 + INTERNAL | FLAG_NO_DATA, 9344 + { }, 9345 + { { 0, 1 } }, 9346 + .fill_helper = bpf_fill_alu32_rsh_same_reg, 9347 + }, 9348 + { 9349 + "ALU32_ARSH_X: all shift values with the same register", 9350 + { }, 9351 + INTERNAL | FLAG_NO_DATA, 9352 + { }, 9353 + { { 0, 1 } }, 9354 + .fill_helper = bpf_fill_alu32_arsh_same_reg, 9355 + }, 9356 + /* ALU64 immediate magnitudes */ 9357 + { 9358 + "ALU64_MOV_K: all immediate value magnitudes", 9359 + { }, 9360 + INTERNAL | FLAG_NO_DATA, 9361 + { }, 9362 + { { 0, 1 } }, 9363 + .fill_helper = bpf_fill_alu64_mov_imm, 9364 + .nr_testruns = NR_PATTERN_RUNS, 9365 + }, 9366 + { 9367 + "ALU64_AND_K: all immediate value magnitudes", 9368 + { }, 9369 + INTERNAL | FLAG_NO_DATA, 9370 + { }, 9371 + { { 0, 1 } }, 9372 + .fill_helper = bpf_fill_alu64_and_imm, 9373 + .nr_testruns = NR_PATTERN_RUNS, 9374 + }, 9375 + { 9376 + "ALU64_OR_K: all immediate value magnitudes", 9377 + { }, 9378 + 
INTERNAL | FLAG_NO_DATA, 9379 + { }, 9380 + { { 0, 1 } }, 9381 + .fill_helper = bpf_fill_alu64_or_imm, 9382 + .nr_testruns = NR_PATTERN_RUNS, 9383 + }, 9384 + { 9385 + "ALU64_XOR_K: all immediate value magnitudes", 9386 + { }, 9387 + INTERNAL | FLAG_NO_DATA, 9388 + { }, 9389 + { { 0, 1 } }, 9390 + .fill_helper = bpf_fill_alu64_xor_imm, 9391 + .nr_testruns = NR_PATTERN_RUNS, 9392 + }, 9393 + { 9394 + "ALU64_ADD_K: all immediate value magnitudes", 9395 + { }, 9396 + INTERNAL | FLAG_NO_DATA, 9397 + { }, 9398 + { { 0, 1 } }, 9399 + .fill_helper = bpf_fill_alu64_add_imm, 9400 + .nr_testruns = NR_PATTERN_RUNS, 9401 + }, 9402 + { 9403 + "ALU64_SUB_K: all immediate value magnitudes", 9404 + { }, 9405 + INTERNAL | FLAG_NO_DATA, 9406 + { }, 9407 + { { 0, 1 } }, 9408 + .fill_helper = bpf_fill_alu64_sub_imm, 9409 + .nr_testruns = NR_PATTERN_RUNS, 9410 + }, 9411 + { 9412 + "ALU64_MUL_K: all immediate value magnitudes", 9413 + { }, 9414 + INTERNAL | FLAG_NO_DATA, 9415 + { }, 9416 + { { 0, 1 } }, 9417 + .fill_helper = bpf_fill_alu64_mul_imm, 9418 + .nr_testruns = NR_PATTERN_RUNS, 9419 + }, 9420 + { 9421 + "ALU64_DIV_K: all immediate value magnitudes", 9422 + { }, 9423 + INTERNAL | FLAG_NO_DATA, 9424 + { }, 9425 + { { 0, 1 } }, 9426 + .fill_helper = bpf_fill_alu64_div_imm, 9427 + .nr_testruns = NR_PATTERN_RUNS, 9428 + }, 9429 + { 9430 + "ALU64_MOD_K: all immediate value magnitudes", 9431 + { }, 9432 + INTERNAL | FLAG_NO_DATA, 9433 + { }, 9434 + { { 0, 1 } }, 9435 + .fill_helper = bpf_fill_alu64_mod_imm, 9436 + .nr_testruns = NR_PATTERN_RUNS, 9437 + }, 9438 + /* ALU32 immediate magnitudes */ 9439 + { 9440 + "ALU32_MOV_K: all immediate value magnitudes", 9441 + { }, 9442 + INTERNAL | FLAG_NO_DATA, 9443 + { }, 9444 + { { 0, 1 } }, 9445 + .fill_helper = bpf_fill_alu32_mov_imm, 9446 + .nr_testruns = NR_PATTERN_RUNS, 9447 + }, 9448 + { 9449 + "ALU32_AND_K: all immediate value magnitudes", 9450 + { }, 9451 + INTERNAL | FLAG_NO_DATA, 9452 + { }, 9453 + { { 0, 1 } }, 9454 + .fill_helper = 
bpf_fill_alu32_and_imm, 9455 + .nr_testruns = NR_PATTERN_RUNS, 9456 + }, 9457 + { 9458 + "ALU32_OR_K: all immediate value magnitudes", 9459 + { }, 9460 + INTERNAL | FLAG_NO_DATA, 9461 + { }, 9462 + { { 0, 1 } }, 9463 + .fill_helper = bpf_fill_alu32_or_imm, 9464 + .nr_testruns = NR_PATTERN_RUNS, 9465 + }, 9466 + { 9467 + "ALU32_XOR_K: all immediate value magnitudes", 9468 + { }, 9469 + INTERNAL | FLAG_NO_DATA, 9470 + { }, 9471 + { { 0, 1 } }, 9472 + .fill_helper = bpf_fill_alu32_xor_imm, 9473 + .nr_testruns = NR_PATTERN_RUNS, 9474 + }, 9475 + { 9476 + "ALU32_ADD_K: all immediate value magnitudes", 9477 + { }, 9478 + INTERNAL | FLAG_NO_DATA, 9479 + { }, 9480 + { { 0, 1 } }, 9481 + .fill_helper = bpf_fill_alu32_add_imm, 9482 + .nr_testruns = NR_PATTERN_RUNS, 9483 + }, 9484 + { 9485 + "ALU32_SUB_K: all immediate value magnitudes", 9486 + { }, 9487 + INTERNAL | FLAG_NO_DATA, 9488 + { }, 9489 + { { 0, 1 } }, 9490 + .fill_helper = bpf_fill_alu32_sub_imm, 9491 + .nr_testruns = NR_PATTERN_RUNS, 9492 + }, 9493 + { 9494 + "ALU32_MUL_K: all immediate value magnitudes", 9495 + { }, 9496 + INTERNAL | FLAG_NO_DATA, 9497 + { }, 9498 + { { 0, 1 } }, 9499 + .fill_helper = bpf_fill_alu32_mul_imm, 9500 + .nr_testruns = NR_PATTERN_RUNS, 9501 + }, 9502 + { 9503 + "ALU32_DIV_K: all immediate value magnitudes", 9504 + { }, 9505 + INTERNAL | FLAG_NO_DATA, 9506 + { }, 9507 + { { 0, 1 } }, 9508 + .fill_helper = bpf_fill_alu32_div_imm, 9509 + .nr_testruns = NR_PATTERN_RUNS, 9510 + }, 9511 + { 9512 + "ALU32_MOD_K: all immediate value magnitudes", 9513 + { }, 9514 + INTERNAL | FLAG_NO_DATA, 9515 + { }, 9516 + { { 0, 1 } }, 9517 + .fill_helper = bpf_fill_alu32_mod_imm, 9518 + .nr_testruns = NR_PATTERN_RUNS, 9519 + }, 9520 + /* ALU64 register magnitudes */ 9521 + { 9522 + "ALU64_MOV_X: all register value magnitudes", 9523 + { }, 9524 + INTERNAL | FLAG_NO_DATA, 9525 + { }, 9526 + { { 0, 1 } }, 9527 + .fill_helper = bpf_fill_alu64_mov_reg, 9528 + .nr_testruns = NR_PATTERN_RUNS, 9529 + }, 9530 + { 
9531 + "ALU64_AND_X: all register value magnitudes", 9532 + { }, 9533 + INTERNAL | FLAG_NO_DATA, 9534 + { }, 9535 + { { 0, 1 } }, 9536 + .fill_helper = bpf_fill_alu64_and_reg, 9537 + .nr_testruns = NR_PATTERN_RUNS, 9538 + }, 9539 + { 9540 + "ALU64_OR_X: all register value magnitudes", 9541 + { }, 9542 + INTERNAL | FLAG_NO_DATA, 9543 + { }, 9544 + { { 0, 1 } }, 9545 + .fill_helper = bpf_fill_alu64_or_reg, 9546 + .nr_testruns = NR_PATTERN_RUNS, 9547 + }, 9548 + { 9549 + "ALU64_XOR_X: all register value magnitudes", 9550 + { }, 9551 + INTERNAL | FLAG_NO_DATA, 9552 + { }, 9553 + { { 0, 1 } }, 9554 + .fill_helper = bpf_fill_alu64_xor_reg, 9555 + .nr_testruns = NR_PATTERN_RUNS, 9556 + }, 9557 + { 9558 + "ALU64_ADD_X: all register value magnitudes", 9559 + { }, 9560 + INTERNAL | FLAG_NO_DATA, 9561 + { }, 9562 + { { 0, 1 } }, 9563 + .fill_helper = bpf_fill_alu64_add_reg, 9564 + .nr_testruns = NR_PATTERN_RUNS, 9565 + }, 9566 + { 9567 + "ALU64_SUB_X: all register value magnitudes", 9568 + { }, 9569 + INTERNAL | FLAG_NO_DATA, 9570 + { }, 9571 + { { 0, 1 } }, 9572 + .fill_helper = bpf_fill_alu64_sub_reg, 9573 + .nr_testruns = NR_PATTERN_RUNS, 9574 + }, 9575 + { 9576 + "ALU64_MUL_X: all register value magnitudes", 9577 + { }, 9578 + INTERNAL | FLAG_NO_DATA, 9579 + { }, 9580 + { { 0, 1 } }, 9581 + .fill_helper = bpf_fill_alu64_mul_reg, 9582 + .nr_testruns = NR_PATTERN_RUNS, 9583 + }, 9584 + { 9585 + "ALU64_DIV_X: all register value magnitudes", 9586 + { }, 9587 + INTERNAL | FLAG_NO_DATA, 9588 + { }, 9589 + { { 0, 1 } }, 9590 + .fill_helper = bpf_fill_alu64_div_reg, 9591 + .nr_testruns = NR_PATTERN_RUNS, 9592 + }, 9593 + { 9594 + "ALU64_MOD_X: all register value magnitudes", 9595 + { }, 9596 + INTERNAL | FLAG_NO_DATA, 9597 + { }, 9598 + { { 0, 1 } }, 9599 + .fill_helper = bpf_fill_alu64_mod_reg, 9600 + .nr_testruns = NR_PATTERN_RUNS, 9601 + }, 9602 + /* ALU32 register magnitudes */ 9603 + { 9604 + "ALU32_MOV_X: all register value magnitudes", 9605 + { }, 9606 + INTERNAL | 
FLAG_NO_DATA, 9607 + { }, 9608 + { { 0, 1 } }, 9609 + .fill_helper = bpf_fill_alu32_mov_reg, 9610 + .nr_testruns = NR_PATTERN_RUNS, 9611 + }, 9612 + { 9613 + "ALU32_AND_X: all register value magnitudes", 9614 + { }, 9615 + INTERNAL | FLAG_NO_DATA, 9616 + { }, 9617 + { { 0, 1 } }, 9618 + .fill_helper = bpf_fill_alu32_and_reg, 9619 + .nr_testruns = NR_PATTERN_RUNS, 9620 + }, 9621 + { 9622 + "ALU32_OR_X: all register value magnitudes", 9623 + { }, 9624 + INTERNAL | FLAG_NO_DATA, 9625 + { }, 9626 + { { 0, 1 } }, 9627 + .fill_helper = bpf_fill_alu32_or_reg, 9628 + .nr_testruns = NR_PATTERN_RUNS, 9629 + }, 9630 + { 9631 + "ALU32_XOR_X: all register value magnitudes", 9632 + { }, 9633 + INTERNAL | FLAG_NO_DATA, 9634 + { }, 9635 + { { 0, 1 } }, 9636 + .fill_helper = bpf_fill_alu32_xor_reg, 9637 + .nr_testruns = NR_PATTERN_RUNS, 9638 + }, 9639 + { 9640 + "ALU32_ADD_X: all register value magnitudes", 9641 + { }, 9642 + INTERNAL | FLAG_NO_DATA, 9643 + { }, 9644 + { { 0, 1 } }, 9645 + .fill_helper = bpf_fill_alu32_add_reg, 9646 + .nr_testruns = NR_PATTERN_RUNS, 9647 + }, 9648 + { 9649 + "ALU32_SUB_X: all register value magnitudes", 9650 + { }, 9651 + INTERNAL | FLAG_NO_DATA, 9652 + { }, 9653 + { { 0, 1 } }, 9654 + .fill_helper = bpf_fill_alu32_sub_reg, 9655 + .nr_testruns = NR_PATTERN_RUNS, 9656 + }, 9657 + { 9658 + "ALU32_MUL_X: all register value magnitudes", 9659 + { }, 9660 + INTERNAL | FLAG_NO_DATA, 9661 + { }, 9662 + { { 0, 1 } }, 9663 + .fill_helper = bpf_fill_alu32_mul_reg, 9664 + .nr_testruns = NR_PATTERN_RUNS, 9665 + }, 9666 + { 9667 + "ALU32_DIV_X: all register value magnitudes", 9668 + { }, 9669 + INTERNAL | FLAG_NO_DATA, 9670 + { }, 9671 + { { 0, 1 } }, 9672 + .fill_helper = bpf_fill_alu32_div_reg, 9673 + .nr_testruns = NR_PATTERN_RUNS, 9674 + }, 9675 + { 9676 + "ALU32_MOD_X: all register value magnitudes", 9677 + { }, 9678 + INTERNAL | FLAG_NO_DATA, 9679 + { }, 9680 + { { 0, 1 } }, 9681 + .fill_helper = bpf_fill_alu32_mod_reg, 9682 + .nr_testruns = 
NR_PATTERN_RUNS, 9683 + }, 9684 + /* LD_IMM64 immediate magnitudes */ 9685 + { 9686 + "LD_IMM64: all immediate value magnitudes", 9687 + { }, 9688 + INTERNAL | FLAG_NO_DATA, 9689 + { }, 9690 + { { 0, 1 } }, 9691 + .fill_helper = bpf_fill_ld_imm64, 9692 + }, 9693 + /* 64-bit ATOMIC register combinations */ 9694 + { 9695 + "ATOMIC_DW_ADD: register combinations", 9696 + { }, 9697 + INTERNAL, 9698 + { }, 9699 + { { 0, 1 } }, 9700 + .fill_helper = bpf_fill_atomic64_add_reg_pairs, 9701 + .stack_depth = 8, 9702 + }, 9703 + { 9704 + "ATOMIC_DW_AND: register combinations", 9705 + { }, 9706 + INTERNAL, 9707 + { }, 9708 + { { 0, 1 } }, 9709 + .fill_helper = bpf_fill_atomic64_and_reg_pairs, 9710 + .stack_depth = 8, 9711 + }, 9712 + { 9713 + "ATOMIC_DW_OR: register combinations", 9714 + { }, 9715 + INTERNAL, 9716 + { }, 9717 + { { 0, 1 } }, 9718 + .fill_helper = bpf_fill_atomic64_or_reg_pairs, 9719 + .stack_depth = 8, 9720 + }, 9721 + { 9722 + "ATOMIC_DW_XOR: register combinations", 9723 + { }, 9724 + INTERNAL, 9725 + { }, 9726 + { { 0, 1 } }, 9727 + .fill_helper = bpf_fill_atomic64_xor_reg_pairs, 9728 + .stack_depth = 8, 9729 + }, 9730 + { 9731 + "ATOMIC_DW_ADD_FETCH: register combinations", 9732 + { }, 9733 + INTERNAL, 9734 + { }, 9735 + { { 0, 1 } }, 9736 + .fill_helper = bpf_fill_atomic64_add_fetch_reg_pairs, 9737 + .stack_depth = 8, 9738 + }, 9739 + { 9740 + "ATOMIC_DW_AND_FETCH: register combinations", 9741 + { }, 9742 + INTERNAL, 9743 + { }, 9744 + { { 0, 1 } }, 9745 + .fill_helper = bpf_fill_atomic64_and_fetch_reg_pairs, 9746 + .stack_depth = 8, 9747 + }, 9748 + { 9749 + "ATOMIC_DW_OR_FETCH: register combinations", 9750 + { }, 9751 + INTERNAL, 9752 + { }, 9753 + { { 0, 1 } }, 9754 + .fill_helper = bpf_fill_atomic64_or_fetch_reg_pairs, 9755 + .stack_depth = 8, 9756 + }, 9757 + { 9758 + "ATOMIC_DW_XOR_FETCH: register combinations", 9759 + { }, 9760 + INTERNAL, 9761 + { }, 9762 + { { 0, 1 } }, 9763 + .fill_helper = bpf_fill_atomic64_xor_fetch_reg_pairs, 9764 + .stack_depth 
= 8, 9765 + }, 9766 + { 9767 + "ATOMIC_DW_XCHG: register combinations", 9768 + { }, 9769 + INTERNAL, 9770 + { }, 9771 + { { 0, 1 } }, 9772 + .fill_helper = bpf_fill_atomic64_xchg_reg_pairs, 9773 + .stack_depth = 8, 9774 + }, 9775 + { 9776 + "ATOMIC_DW_CMPXCHG: register combinations", 9777 + { }, 9778 + INTERNAL, 9779 + { }, 9780 + { { 0, 1 } }, 9781 + .fill_helper = bpf_fill_atomic64_cmpxchg_reg_pairs, 9782 + .stack_depth = 8, 9783 + }, 9784 + /* 32-bit ATOMIC register combinations */ 9785 + { 9786 + "ATOMIC_W_ADD: register combinations", 9787 + { }, 9788 + INTERNAL, 9789 + { }, 9790 + { { 0, 1 } }, 9791 + .fill_helper = bpf_fill_atomic32_add_reg_pairs, 9792 + .stack_depth = 8, 9793 + }, 9794 + { 9795 + "ATOMIC_W_AND: register combinations", 9796 + { }, 9797 + INTERNAL, 9798 + { }, 9799 + { { 0, 1 } }, 9800 + .fill_helper = bpf_fill_atomic32_and_reg_pairs, 9801 + .stack_depth = 8, 9802 + }, 9803 + { 9804 + "ATOMIC_W_OR: register combinations", 9805 + { }, 9806 + INTERNAL, 9807 + { }, 9808 + { { 0, 1 } }, 9809 + .fill_helper = bpf_fill_atomic32_or_reg_pairs, 9810 + .stack_depth = 8, 9811 + }, 9812 + { 9813 + "ATOMIC_W_XOR: register combinations", 9814 + { }, 9815 + INTERNAL, 9816 + { }, 9817 + { { 0, 1 } }, 9818 + .fill_helper = bpf_fill_atomic32_xor_reg_pairs, 9819 + .stack_depth = 8, 9820 + }, 9821 + { 9822 + "ATOMIC_W_ADD_FETCH: register combinations", 9823 + { }, 9824 + INTERNAL, 9825 + { }, 9826 + { { 0, 1 } }, 9827 + .fill_helper = bpf_fill_atomic32_add_fetch_reg_pairs, 9828 + .stack_depth = 8, 9829 + }, 9830 + { 9831 + "ATOMIC_W_AND_FETCH: register combinations", 9832 + { }, 9833 + INTERNAL, 9834 + { }, 9835 + { { 0, 1 } }, 9836 + .fill_helper = bpf_fill_atomic32_and_fetch_reg_pairs, 9837 + .stack_depth = 8, 9838 + }, 9839 + { 9840 + "ATOMIC_W_OR_FETCH: register combinations", 9841 + { }, 9842 + INTERNAL, 9843 + { }, 9844 + { { 0, 1 } }, 9845 + .fill_helper = bpf_fill_atomic32_or_fetch_reg_pairs, 9846 + .stack_depth = 8, 9847 + }, 9848 + { 9849 + 
"ATOMIC_W_XOR_FETCH: register combinations", 9850 + { }, 9851 + INTERNAL, 9852 + { }, 9853 + { { 0, 1 } }, 9854 + .fill_helper = bpf_fill_atomic32_xor_fetch_reg_pairs, 9855 + .stack_depth = 8, 9856 + }, 9857 + { 9858 + "ATOMIC_W_XCHG: register combinations", 9859 + { }, 9860 + INTERNAL, 9861 + { }, 9862 + { { 0, 1 } }, 9863 + .fill_helper = bpf_fill_atomic32_xchg_reg_pairs, 9864 + .stack_depth = 8, 9865 + }, 9866 + { 9867 + "ATOMIC_W_CMPXCHG: register combinations", 9868 + { }, 9869 + INTERNAL, 9870 + { }, 9871 + { { 0, 1 } }, 9872 + .fill_helper = bpf_fill_atomic32_cmpxchg_reg_pairs, 9873 + .stack_depth = 8, 9874 + }, 9875 + /* 64-bit ATOMIC magnitudes */ 9876 + { 9877 + "ATOMIC_DW_ADD: all operand magnitudes", 9878 + { }, 9879 + INTERNAL | FLAG_NO_DATA, 9880 + { }, 9881 + { { 0, 1 } }, 9882 + .fill_helper = bpf_fill_atomic64_add, 9883 + .stack_depth = 8, 9884 + .nr_testruns = NR_PATTERN_RUNS, 9885 + }, 9886 + { 9887 + "ATOMIC_DW_AND: all operand magnitudes", 9888 + { }, 9889 + INTERNAL | FLAG_NO_DATA, 9890 + { }, 9891 + { { 0, 1 } }, 9892 + .fill_helper = bpf_fill_atomic64_and, 9893 + .stack_depth = 8, 9894 + .nr_testruns = NR_PATTERN_RUNS, 9895 + }, 9896 + { 9897 + "ATOMIC_DW_OR: all operand magnitudes", 9898 + { }, 9899 + INTERNAL | FLAG_NO_DATA, 9900 + { }, 9901 + { { 0, 1 } }, 9902 + .fill_helper = bpf_fill_atomic64_or, 9903 + .stack_depth = 8, 9904 + .nr_testruns = NR_PATTERN_RUNS, 9905 + }, 9906 + { 9907 + "ATOMIC_DW_XOR: all operand magnitudes", 9908 + { }, 9909 + INTERNAL | FLAG_NO_DATA, 9910 + { }, 9911 + { { 0, 1 } }, 9912 + .fill_helper = bpf_fill_atomic64_xor, 9913 + .stack_depth = 8, 9914 + .nr_testruns = NR_PATTERN_RUNS, 9915 + }, 9916 + { 9917 + "ATOMIC_DW_ADD_FETCH: all operand magnitudes", 9918 + { }, 9919 + INTERNAL | FLAG_NO_DATA, 9920 + { }, 9921 + { { 0, 1 } }, 9922 + .fill_helper = bpf_fill_atomic64_add_fetch, 9923 + .stack_depth = 8, 9924 + .nr_testruns = NR_PATTERN_RUNS, 9925 + }, 9926 + { 9927 + "ATOMIC_DW_AND_FETCH: all operand 
magnitudes", 9928 + { }, 9929 + INTERNAL | FLAG_NO_DATA, 9930 + { }, 9931 + { { 0, 1 } }, 9932 + .fill_helper = bpf_fill_atomic64_and_fetch, 9933 + .stack_depth = 8, 9934 + .nr_testruns = NR_PATTERN_RUNS, 9935 + }, 9936 + { 9937 + "ATOMIC_DW_OR_FETCH: all operand magnitudes", 9938 + { }, 9939 + INTERNAL | FLAG_NO_DATA, 9940 + { }, 9941 + { { 0, 1 } }, 9942 + .fill_helper = bpf_fill_atomic64_or_fetch, 9943 + .stack_depth = 8, 9944 + .nr_testruns = NR_PATTERN_RUNS, 9945 + }, 9946 + { 9947 + "ATOMIC_DW_XOR_FETCH: all operand magnitudes", 9948 + { }, 9949 + INTERNAL | FLAG_NO_DATA, 9950 + { }, 9951 + { { 0, 1 } }, 9952 + .fill_helper = bpf_fill_atomic64_xor_fetch, 9953 + .stack_depth = 8, 9954 + .nr_testruns = NR_PATTERN_RUNS, 9955 + }, 9956 + { 9957 + "ATOMIC_DW_XCHG: all operand magnitudes", 9958 + { }, 9959 + INTERNAL | FLAG_NO_DATA, 9960 + { }, 9961 + { { 0, 1 } }, 9962 + .fill_helper = bpf_fill_atomic64_xchg, 9963 + .stack_depth = 8, 9964 + .nr_testruns = NR_PATTERN_RUNS, 9965 + }, 9966 + { 9967 + "ATOMIC_DW_CMPXCHG: all operand magnitudes", 9968 + { }, 9969 + INTERNAL | FLAG_NO_DATA, 9970 + { }, 9971 + { { 0, 1 } }, 9972 + .fill_helper = bpf_fill_cmpxchg64, 9973 + .stack_depth = 8, 9974 + .nr_testruns = NR_PATTERN_RUNS, 9975 + }, 9976 + /* 32-bit ATOMIC magnitudes */ 9977 + { 9978 + "ATOMIC_W_ADD: all operand magnitudes", 9979 + { }, 9980 + INTERNAL | FLAG_NO_DATA, 9981 + { }, 9982 + { { 0, 1 } }, 9983 + .fill_helper = bpf_fill_atomic32_add, 9984 + .stack_depth = 8, 9985 + .nr_testruns = NR_PATTERN_RUNS, 9986 + }, 9987 + { 9988 + "ATOMIC_W_AND: all operand magnitudes", 9989 + { }, 9990 + INTERNAL | FLAG_NO_DATA, 9991 + { }, 9992 + { { 0, 1 } }, 9993 + .fill_helper = bpf_fill_atomic32_and, 9994 + .stack_depth = 8, 9995 + .nr_testruns = NR_PATTERN_RUNS, 9996 + }, 9997 + { 9998 + "ATOMIC_W_OR: all operand magnitudes", 9999 + { }, 10000 + INTERNAL | FLAG_NO_DATA, 10001 + { }, 10002 + { { 0, 1 } }, 10003 + .fill_helper = bpf_fill_atomic32_or, 10004 + .stack_depth = 8,
10005 + .nr_testruns = NR_PATTERN_RUNS, 10006 + }, 10007 + { 10008 + "ATOMIC_W_XOR: all operand magnitudes", 10009 + { }, 10010 + INTERNAL | FLAG_NO_DATA, 10011 + { }, 10012 + { { 0, 1 } }, 10013 + .fill_helper = bpf_fill_atomic32_xor, 10014 + .stack_depth = 8, 10015 + .nr_testruns = NR_PATTERN_RUNS, 10016 + }, 10017 + { 10018 + "ATOMIC_W_ADD_FETCH: all operand magnitudes", 10019 + { }, 10020 + INTERNAL | FLAG_NO_DATA, 10021 + { }, 10022 + { { 0, 1 } }, 10023 + .fill_helper = bpf_fill_atomic32_add_fetch, 10024 + .stack_depth = 8, 10025 + .nr_testruns = NR_PATTERN_RUNS, 10026 + }, 10027 + { 10028 + "ATOMIC_W_AND_FETCH: all operand magnitudes", 10029 + { }, 10030 + INTERNAL | FLAG_NO_DATA, 10031 + { }, 10032 + { { 0, 1 } }, 10033 + .fill_helper = bpf_fill_atomic32_and_fetch, 10034 + .stack_depth = 8, 10035 + .nr_testruns = NR_PATTERN_RUNS, 10036 + }, 10037 + { 10038 + "ATOMIC_W_OR_FETCH: all operand magnitudes", 10039 + { }, 10040 + INTERNAL | FLAG_NO_DATA, 10041 + { }, 10042 + { { 0, 1 } }, 10043 + .fill_helper = bpf_fill_atomic32_or_fetch, 10044 + .stack_depth = 8, 10045 + .nr_testruns = NR_PATTERN_RUNS, 10046 + }, 10047 + { 10048 + "ATOMIC_W_XOR_FETCH: all operand magnitudes", 10049 + { }, 10050 + INTERNAL | FLAG_NO_DATA, 10051 + { }, 10052 + { { 0, 1 } }, 10053 + .fill_helper = bpf_fill_atomic32_xor_fetch, 10054 + .stack_depth = 8, 10055 + .nr_testruns = NR_PATTERN_RUNS, 10056 + }, 10057 + { 10058 + "ATOMIC_W_XCHG: all operand magnitudes", 10059 + { }, 10060 + INTERNAL | FLAG_NO_DATA, 10061 + { }, 10062 + { { 0, 1 } }, 10063 + .fill_helper = bpf_fill_atomic32_xchg, 10064 + .stack_depth = 8, 10065 + .nr_testruns = NR_PATTERN_RUNS, 10066 + }, 10067 + { 10068 + "ATOMIC_W_CMPXCHG: all operand magnitudes", 10069 + { }, 10070 + INTERNAL | FLAG_NO_DATA, 10071 + { }, 10072 + { { 0, 1 } }, 10073 + .fill_helper = bpf_fill_cmpxchg32, 10074 + .stack_depth = 8, 10075 + .nr_testruns = NR_PATTERN_RUNS, 10076 + }, 10077 + /* JMP immediate magnitudes */ 10078 + { 10079 + 
"JMP_JSET_K: all immediate value magnitudes", 10080 + { }, 10081 + INTERNAL | FLAG_NO_DATA, 10082 + { }, 10083 + { { 0, 1 } }, 10084 + .fill_helper = bpf_fill_jmp_jset_imm, 10085 + .nr_testruns = NR_PATTERN_RUNS, 10086 + }, 10087 + { 10088 + "JMP_JEQ_K: all immediate value magnitudes", 10089 + { }, 10090 + INTERNAL | FLAG_NO_DATA, 10091 + { }, 10092 + { { 0, 1 } }, 10093 + .fill_helper = bpf_fill_jmp_jeq_imm, 10094 + .nr_testruns = NR_PATTERN_RUNS, 10095 + }, 10096 + { 10097 + "JMP_JNE_K: all immediate value magnitudes", 10098 + { }, 10099 + INTERNAL | FLAG_NO_DATA, 10100 + { }, 10101 + { { 0, 1 } }, 10102 + .fill_helper = bpf_fill_jmp_jne_imm, 10103 + .nr_testruns = NR_PATTERN_RUNS, 10104 + }, 10105 + { 10106 + "JMP_JGT_K: all immediate value magnitudes", 10107 + { }, 10108 + INTERNAL | FLAG_NO_DATA, 10109 + { }, 10110 + { { 0, 1 } }, 10111 + .fill_helper = bpf_fill_jmp_jgt_imm, 10112 + .nr_testruns = NR_PATTERN_RUNS, 10113 + }, 10114 + { 10115 + "JMP_JGE_K: all immediate value magnitudes", 10116 + { }, 10117 + INTERNAL | FLAG_NO_DATA, 10118 + { }, 10119 + { { 0, 1 } }, 10120 + .fill_helper = bpf_fill_jmp_jge_imm, 10121 + .nr_testruns = NR_PATTERN_RUNS, 10122 + }, 10123 + { 10124 + "JMP_JLT_K: all immediate value magnitudes", 10125 + { }, 10126 + INTERNAL | FLAG_NO_DATA, 10127 + { }, 10128 + { { 0, 1 } }, 10129 + .fill_helper = bpf_fill_jmp_jlt_imm, 10130 + .nr_testruns = NR_PATTERN_RUNS, 10131 + }, 10132 + { 10133 + "JMP_JLE_K: all immediate value magnitudes", 10134 + { }, 10135 + INTERNAL | FLAG_NO_DATA, 10136 + { }, 10137 + { { 0, 1 } }, 10138 + .fill_helper = bpf_fill_jmp_jle_imm, 10139 + .nr_testruns = NR_PATTERN_RUNS, 10140 + }, 10141 + { 10142 + "JMP_JSGT_K: all immediate value magnitudes", 10143 + { }, 10144 + INTERNAL | FLAG_NO_DATA, 10145 + { }, 10146 + { { 0, 1 } }, 10147 + .fill_helper = bpf_fill_jmp_jsgt_imm, 10148 + .nr_testruns = NR_PATTERN_RUNS, 10149 + }, 10150 + { 10151 + "JMP_JSGE_K: all immediate value magnitudes", 10152 + { }, 10153 + INTERNAL 
| FLAG_NO_DATA, 10154 + { }, 10155 + { { 0, 1 } }, 10156 + .fill_helper = bpf_fill_jmp_jsge_imm, 10157 + .nr_testruns = NR_PATTERN_RUNS, 10158 + }, 10159 + { 10160 + "JMP_JSLT_K: all immediate value magnitudes", 10161 + { }, 10162 + INTERNAL | FLAG_NO_DATA, 10163 + { }, 10164 + { { 0, 1 } }, 10165 + .fill_helper = bpf_fill_jmp_jslt_imm, 10166 + .nr_testruns = NR_PATTERN_RUNS, 10167 + }, 10168 + { 10169 + "JMP_JSLE_K: all immediate value magnitudes", 10170 + { }, 10171 + INTERNAL | FLAG_NO_DATA, 10172 + { }, 10173 + { { 0, 1 } }, 10174 + .fill_helper = bpf_fill_jmp_jsle_imm, 10175 + .nr_testruns = NR_PATTERN_RUNS, 10176 + }, 10177 + /* JMP register magnitudes */ 10178 + { 10179 + "JMP_JSET_X: all register value magnitudes", 10180 + { }, 10181 + INTERNAL | FLAG_NO_DATA, 10182 + { }, 10183 + { { 0, 1 } }, 10184 + .fill_helper = bpf_fill_jmp_jset_reg, 10185 + .nr_testruns = NR_PATTERN_RUNS, 10186 + }, 10187 + { 10188 + "JMP_JEQ_X: all register value magnitudes", 10189 + { }, 10190 + INTERNAL | FLAG_NO_DATA, 10191 + { }, 10192 + { { 0, 1 } }, 10193 + .fill_helper = bpf_fill_jmp_jeq_reg, 10194 + .nr_testruns = NR_PATTERN_RUNS, 10195 + }, 10196 + { 10197 + "JMP_JNE_X: all register value magnitudes", 10198 + { }, 10199 + INTERNAL | FLAG_NO_DATA, 10200 + { }, 10201 + { { 0, 1 } }, 10202 + .fill_helper = bpf_fill_jmp_jne_reg, 10203 + .nr_testruns = NR_PATTERN_RUNS, 10204 + }, 10205 + { 10206 + "JMP_JGT_X: all register value magnitudes", 10207 + { }, 10208 + INTERNAL | FLAG_NO_DATA, 10209 + { }, 10210 + { { 0, 1 } }, 10211 + .fill_helper = bpf_fill_jmp_jgt_reg, 10212 + .nr_testruns = NR_PATTERN_RUNS, 10213 + }, 10214 + { 10215 + "JMP_JGE_X: all register value magnitudes", 10216 + { }, 10217 + INTERNAL | FLAG_NO_DATA, 10218 + { }, 10219 + { { 0, 1 } }, 10220 + .fill_helper = bpf_fill_jmp_jge_reg, 10221 + .nr_testruns = NR_PATTERN_RUNS, 10222 + }, 10223 + { 10224 + "JMP_JLT_X: all register value magnitudes", 10225 + { }, 10226 + INTERNAL | FLAG_NO_DATA, 10227 + { }, 10228 + { { 
0, 1 } }, 10229 + .fill_helper = bpf_fill_jmp_jlt_reg, 10230 + .nr_testruns = NR_PATTERN_RUNS, 10231 + }, 10232 + { 10233 + "JMP_JLE_X: all register value magnitudes", 10234 + { }, 10235 + INTERNAL | FLAG_NO_DATA, 10236 + { }, 10237 + { { 0, 1 } }, 10238 + .fill_helper = bpf_fill_jmp_jle_reg, 10239 + .nr_testruns = NR_PATTERN_RUNS, 10240 + }, 10241 + { 10242 + "JMP_JSGT_X: all register value magnitudes", 10243 + { }, 10244 + INTERNAL | FLAG_NO_DATA, 10245 + { }, 10246 + { { 0, 1 } }, 10247 + .fill_helper = bpf_fill_jmp_jsgt_reg, 10248 + .nr_testruns = NR_PATTERN_RUNS, 10249 + }, 10250 + { 10251 + "JMP_JSGE_X: all register value magnitudes", 10252 + { }, 10253 + INTERNAL | FLAG_NO_DATA, 10254 + { }, 10255 + { { 0, 1 } }, 10256 + .fill_helper = bpf_fill_jmp_jsge_reg, 10257 + .nr_testruns = NR_PATTERN_RUNS, 10258 + }, 10259 + { 10260 + "JMP_JSLT_X: all register value magnitudes", 10261 + { }, 10262 + INTERNAL | FLAG_NO_DATA, 10263 + { }, 10264 + { { 0, 1 } }, 10265 + .fill_helper = bpf_fill_jmp_jslt_reg, 10266 + .nr_testruns = NR_PATTERN_RUNS, 10267 + }, 10268 + { 10269 + "JMP_JSLE_X: all register value magnitudes", 10270 + { }, 10271 + INTERNAL | FLAG_NO_DATA, 10272 + { }, 10273 + { { 0, 1 } }, 10274 + .fill_helper = bpf_fill_jmp_jsle_reg, 10275 + .nr_testruns = NR_PATTERN_RUNS, 10276 + }, 10277 + /* JMP32 immediate magnitudes */ 10278 + { 10279 + "JMP32_JSET_K: all immediate value magnitudes", 10280 + { }, 10281 + INTERNAL | FLAG_NO_DATA, 10282 + { }, 10283 + { { 0, 1 } }, 10284 + .fill_helper = bpf_fill_jmp32_jset_imm, 10285 + .nr_testruns = NR_PATTERN_RUNS, 10286 + }, 10287 + { 10288 + "JMP32_JEQ_K: all immediate value magnitudes", 10289 + { }, 10290 + INTERNAL | FLAG_NO_DATA, 10291 + { }, 10292 + { { 0, 1 } }, 10293 + .fill_helper = bpf_fill_jmp32_jeq_imm, 10294 + .nr_testruns = NR_PATTERN_RUNS, 10295 + }, 10296 + { 10297 + "JMP32_JNE_K: all immediate value magnitudes", 10298 + { }, 10299 + INTERNAL | FLAG_NO_DATA, 10300 + { }, 10301 + { { 0, 1 } }, 10302 + 
+		.fill_helper = bpf_fill_jmp32_jne_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JGT_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jgt_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JGE_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jge_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JLT_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jlt_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JLE_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jle_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSGT_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jsgt_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSGE_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jsge_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSLT_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jslt_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSLE_K: all immediate value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jsle_imm,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	/* JMP32 register magnitudes */
+	{
+		"JMP32_JSET_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jset_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JEQ_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jeq_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JNE_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jne_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JGT_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jgt_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JGE_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jge_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JLT_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jlt_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JLE_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jle_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSGT_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jsgt_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSGE_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jsge_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSLT_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jslt_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"JMP32_JSLE_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_jmp32_jsle_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	/* Conditional jumps with constant decision */
+	{
+		"JMP_JSET_K: imm = 0 -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_IMM(BPF_JSET, R1, 0, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JLT_K: imm = 0 -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_IMM(BPF_JLT, R1, 0, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JGE_K: imm = 0 -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_IMM(BPF_JGE, R1, 0, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JGT_K: imm = 0xffffffff -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_IMM(BPF_JGT, R1, U32_MAX, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JLE_K: imm = 0xffffffff -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_IMM(BPF_JLE, R1, U32_MAX, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP32_JSGT_K: imm = 0x7fffffff -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSGT, R1, S32_MAX, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP32_JSGE_K: imm = -0x80000000 -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSGE, R1, S32_MIN, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP32_JSLT_K: imm = -0x80000000 -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSLT, R1, S32_MIN, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP32_JSLE_K: imm = 0x7fffffff -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSLE, R1, S32_MAX, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JEQ_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JEQ, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JGE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JGE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JLE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JLE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JSGE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSGE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JSLE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSLE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JNE_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JNE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JGT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JGT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JLT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JLT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JSGT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSGT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JSLT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSLT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	/* Short relative jumps */
+	{
+		"Short relative jump: offset=0",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_JMP_IMM(BPF_JEQ, R0, 0, 0),
+			BPF_EXIT_INSN(),
+			BPF_ALU32_IMM(BPF_MOV, R0, -1),
+		},
+		INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"Short relative jump: offset=1",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_JMP_IMM(BPF_JEQ, R0, 0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU32_IMM(BPF_MOV, R0, -1),
+		},
+		INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"Short relative jump: offset=2",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_JMP_IMM(BPF_JEQ, R0, 0, 2),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU32_IMM(BPF_MOV, R0, -1),
+		},
+		INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"Short relative jump: offset=3",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_JMP_IMM(BPF_JEQ, R0, 0, 3),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU32_IMM(BPF_MOV, R0, -1),
+		},
+		INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"Short relative jump: offset=4",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_JMP_IMM(BPF_JEQ, R0, 0, 4),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_ALU32_IMM(BPF_ADD, R0, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU32_IMM(BPF_MOV, R0, -1),
+		},
+		INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT,
+		{ },
+		{ { 0, 0 } },
+	},
+	/* Conditional branch conversions */
+	{
+		"Long conditional jump: taken at runtime",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_max_jmp_taken,
+	},
+	{
+		"Long conditional jump: not taken at runtime",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 2 } },
+		.fill_helper = bpf_fill_max_jmp_not_taken,
+	},
+	{
+		"Long conditional jump: always taken, known at JIT time",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_max_jmp_always_taken,
+	},
+	{
+		"Long conditional jump: never taken, known at JIT time",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 2 } },
+		.fill_helper = bpf_fill_max_jmp_never_taken,
+	},
+	/* Staggered jump sequences, immediate */
+	{
+		"Staggered jumps: JMP_JA",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_ja,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JEQ_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jeq_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JNE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jne_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSET_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jset_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JGT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jgt_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JGE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jge_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JLT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jlt_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JLE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jle_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSGT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsgt_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSGE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsge_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSLT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jslt_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSLE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsle_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	/* Staggered jump sequences, register */
+	{
+		"Staggered jumps: JMP_JEQ_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jeq_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JNE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jne_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSET_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jset_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JGT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jgt_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JGE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jge_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JLT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jlt_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JLE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jle_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSGT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsgt_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSGE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsge_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSLT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jslt_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP_JSLE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsle_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	/* Staggered jump sequences, JMP32 immediate */
+	{
+		"Staggered jumps: JMP32_JEQ_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jeq32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JNE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jne32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSET_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jset32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JGT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jgt32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JGE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jge32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JLT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jlt32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JLE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jle32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSGT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsgt32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSGE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsge32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSLT_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jslt32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSLE_K",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsle32_imm,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	/* Staggered jump sequences, JMP32 register */
+	{
+		"Staggered jumps: JMP32_JEQ_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jeq32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JNE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jne32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSET_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jset32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JGT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jgt32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JGE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jge32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JLT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jlt32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JLE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jle32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSGT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsgt32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSGE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsge32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSLT_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jslt32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
+	{
+		"Staggered jumps: JMP32_JSLE_X",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, MAX_STAGGERED_JMP_SIZE + 1 } },
+		.fill_helper = bpf_fill_staggered_jsle32_reg,
+		.nr_testruns = NR_STAGGERED_JMP_RUNS,
+	},
 };
 
 static struct net_device dev;
···
 	fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
 	memcpy(fp->insnsi, fptr, fp->len * sizeof(struct bpf_insn));
 	fp->aux->stack_depth = tests[which].stack_depth;
+	fp->aux->verifier_zext = !!(tests[which].aux &
+				    FLAG_VERIFIER_ZEXT);
 
 	/* We cannot error here as we don't need type compatibility
 	 * checks.
···
 static int run_one(const struct bpf_prog *fp, struct bpf_test *test)
 {
 	int err_cnt = 0, i, runs = MAX_TESTRUNS;
+
+	if (test->nr_testruns)
+		runs = min(test->nr_testruns, MAX_TESTRUNS);
 
 	for (i = 0; i < MAX_SUBTESTS; i++) {
 		void *data;
···
 
 static __init int prepare_bpf_tests(void)
 {
-	int i;
-
 	if (test_id >= 0) {
 		/*
 		 * if a test_id was specified, use test_range to
···
 		}
 	}
 
-	for (i = 0; i < ARRAY_SIZE(tests); i++) {
-		if (tests[i].fill_helper &&
-		    tests[i].fill_helper(&tests[i]) < 0)
-			return -ENOMEM;
-	}
-
 	return 0;
 }
 
 static __init void destroy_bpf_tests(void)
 {
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(tests); i++) {
-		if (tests[i].fill_helper)
-			kfree(tests[i].u.ptr.insns);
-	}
 }
 
 static bool exclude_test(int test_id)
···
 
 		pr_info("#%d %s ", i, tests[i].descr);
 
+		if (tests[i].fill_helper &&
+		    tests[i].fill_helper(&tests[i]) < 0) {
+			pr_cont("FAIL to prog_fill\n");
+			continue;
+		}
+
 		fp = generate_filter(i, &err);
+
+		if (tests[i].fill_helper) {
+			kfree(tests[i].u.ptr.insns);
+			tests[i].u.ptr.insns = NULL;
+		}
+
 		if (fp == NULL) {
 			if (err == 0) {
 				pass_cnt++;
···
 struct tail_call_test {
 	const char *descr;
 	struct bpf_insn insns[MAX_INSNS];
+	int flags;
 	int result;
 	int stack_depth;
 };
+
+/* Flags that can be passed to tail call test cases */
+#define FLAG_NEED_STATE		BIT(0)
+#define FLAG_RESULT_IN_STATE	BIT(1)
 
 /*
  * Magic marker used in test snippets for tail calls below.
···
 	BPF_RAW_INSN(BPF_ALU | BPF_MOV | BPF_K, R3, 0,		\
 		     offset, TAIL_CALL_MARKER),			\
 	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
+
+/*
+ * A test function to be called from a BPF program, clobbering a lot of
+ * CPU registers in the process. A JITed BPF program calling this function
+ * must save and restore any caller-saved registers it uses for internal
+ * state, for example the current tail call count.
+ */
+BPF_CALL_1(bpf_test_func, u64, arg)
+{
+	char buf[64];
+	long a = 0;
+	long b = 1;
+	long c = 2;
+	long d = 3;
+	long e = 4;
+	long f = 5;
+	long g = 6;
+	long h = 7;
+
+	return snprintf(buf, sizeof(buf),
+			"%ld %lu %lx %ld %lu %lx %ld %lu %x",
+			a, b, c, d, e, f, g, h, (int)arg);
+}
+#define BPF_FUNC_test_func __BPF_FUNC_MAX_ID
 
 /*
  * Tail call tests. Each test case may call any other test in the table,
···
 	{
 		"Tail call error path, max count reached",
 		.insns = {
-			BPF_ALU64_IMM(BPF_ADD, R1, 1),
-			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_LDX_MEM(BPF_W, R2, R1, 0),
+			BPF_ALU64_IMM(BPF_ADD, R2, 1),
+			BPF_STX_MEM(BPF_W, R1, R2, 0),
 			TAIL_CALL(0),
 			BPF_EXIT_INSN(),
 		},
-		.result = MAX_TAIL_CALL_CNT + 1,
+		.flags = FLAG_NEED_STATE | FLAG_RESULT_IN_STATE,
+		.result = (MAX_TAIL_CALL_CNT + 1 + 1) * MAX_TESTRUNS,
+	},
+	{
+		"Tail call count preserved across function calls",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, R2, R1, 0),
+			BPF_ALU64_IMM(BPF_ADD, R2, 1),
+			BPF_STX_MEM(BPF_W, R1, R2, 0),
+			BPF_STX_MEM(BPF_DW, R10, R1, -8),
+			BPF_CALL_REL(BPF_FUNC_get_numa_node_id),
+			BPF_CALL_REL(BPF_FUNC_ktime_get_ns),
+			BPF_CALL_REL(BPF_FUNC_ktime_get_boot_ns),
+			BPF_CALL_REL(BPF_FUNC_ktime_get_coarse_ns),
+			BPF_CALL_REL(BPF_FUNC_jiffies64),
+			BPF_CALL_REL(BPF_FUNC_test_func),
+			BPF_LDX_MEM(BPF_DW, R1, R10, -8),
+			BPF_ALU32_REG(BPF_MOV, R0, R1),
+			TAIL_CALL(0),
+			BPF_EXIT_INSN(),
+		},
+		.stack_depth = 8,
+		.flags = FLAG_NEED_STATE | FLAG_RESULT_IN_STATE,
+		.result = (MAX_TAIL_CALL_CNT + 1 + 1) * MAX_TESTRUNS,
 	},
 	{
 		"Tail call error path, NULL target",
 		.insns = {
-			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_LDX_MEM(BPF_W, R2, R1, 0),
+			BPF_ALU64_IMM(BPF_ADD, R2, 1),
+			BPF_STX_MEM(BPF_W, R1, R2, 0),
 			TAIL_CALL(TAIL_CALL_NULL),
-			BPF_ALU64_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
 		},
-		.result = 1,
+		.flags = FLAG_NEED_STATE | FLAG_RESULT_IN_STATE,
+		.result = MAX_TESTRUNS,
 	},
 	{
 		"Tail call error path, index out of range",
 		.insns = {
-			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_LDX_MEM(BPF_W, R2, R1, 0),
+			BPF_ALU64_IMM(BPF_ADD, R2, 1),
+			BPF_STX_MEM(BPF_W, R1, R2, 0),
 			TAIL_CALL(TAIL_CALL_INVALID),
-			BPF_ALU64_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
 		},
-		.result = 1,
+		.flags = FLAG_NEED_STATE | FLAG_RESULT_IN_STATE,
+		.result = MAX_TESTRUNS,
 	},
 };
···
 	/* Relocate runtime tail call offsets and addresses */
 	for (i = 0; i < len; i++) {
 		struct bpf_insn *insn = &fp->insnsi[i];
-
-		if (insn->imm != TAIL_CALL_MARKER)
-			continue;
+		long addr = 0;
 
 		switch (insn->code) {
 		case BPF_LD | BPF_DW | BPF_IMM:
+			if (insn->imm != TAIL_CALL_MARKER)
+				break;
 			insn[0].imm = (u32)(long)progs;
 			insn[1].imm = ((u64)(long)progs) >> 32;
 			break;
 
 		case BPF_ALU | BPF_MOV | BPF_K:
+			if (insn->imm != TAIL_CALL_MARKER)
+				break;
 			if (insn->off == TAIL_CALL_NULL)
 				insn->imm = ntests;
 			else if (insn->off == TAIL_CALL_INVALID)
···
 			else
 				insn->imm = which + insn->off;
 			insn->off = 0;
+			break;
+
+		case BPF_JMP | BPF_CALL:
+			if (insn->src_reg != BPF_PSEUDO_CALL)
+				break;
+			switch (insn->imm) {
+			case BPF_FUNC_get_numa_node_id:
+				addr = (long)&numa_node_id;
+				break;
+			case BPF_FUNC_ktime_get_ns:
+				addr = (long)&ktime_get_ns;
+				break;
+			case BPF_FUNC_ktime_get_boot_ns:
+				addr = (long)&ktime_get_boot_fast_ns;
+				break;
+			case BPF_FUNC_ktime_get_coarse_ns:
+				addr = (long)&ktime_get_coarse_ns;
+				break;
+			case BPF_FUNC_jiffies64:
+				addr = (long)&get_jiffies_64;
+				break;
+			case BPF_FUNC_test_func:
+				addr = (long)&bpf_test_func;
+				break;
+			default:
+				err = -EFAULT;
+				goto out_err;
+			}
+			*insn = BPF_EMIT_CALL(addr);
+			if ((long)__bpf_call_base + insn->imm != addr)
+				*insn = BPF_JMP_A(0); /* Skip: NOP */
+			break;
 		}
 	}
 
···
 	for (i = 0; i < ARRAY_SIZE(tail_call_tests); i++) {
 		struct tail_call_test *test = &tail_call_tests[i];
 		struct bpf_prog *fp = progs->ptrs[i];
+		int *data = NULL;
+		int state = 0;
 		u64 duration;
 		int ret;
···
 		if (fp->jited)
 			jit_cnt++;
 
-		ret = __run_one(fp, NULL, MAX_TESTRUNS, &duration);
+		if (test->flags & FLAG_NEED_STATE)
+			data = &state;
+		ret = __run_one(fp, data, MAX_TESTRUNS, &duration);
+		if (test->flags & FLAG_RESULT_IN_STATE)
+			ret = state;
 		if (ret == test->result) {
 			pr_cont("%lld PASS", duration);
 			pass_cnt++;
+4 -2
net/bpf/test_run.c
···
  	if (ret)
  		goto free_data;
  
- 	bpf_prog_change_xdp(NULL, prog);
+ 	if (repeat > 1)
+ 		bpf_prog_change_xdp(NULL, prog);
  	ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
  	/* We convert the xdp_buff back to an xdp_md before checking the return
  	 * code so the reference count of any held netdevice will be decremented
···
  			     sizeof(struct xdp_md));
  
  out:
- 	bpf_prog_change_xdp(prog, NULL);
+ 	if (repeat > 1)
+ 		bpf_prog_change_xdp(prog, NULL);
  free_data:
  	kfree(data);
  free_ctx:
-15
net/xdp/xsk.c
···
  	return 0;
  }
  
- void xp_release(struct xdp_buff_xsk *xskb)
- {
- 	xskb->pool->free_heads[xskb->pool->free_heads_cnt++] = xskb;
- }
- 
- static u64 xp_get_handle(struct xdp_buff_xsk *xskb)
- {
- 	u64 offset = xskb->xdp.data - xskb->xdp.data_hard_start;
- 
- 	offset += xskb->pool->headroom;
- 	if (!xskb->pool->unaligned)
- 		return xskb->orig_addr + offset;
- 	return xskb->orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
- }
- 
  static int __xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
  {
  	struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
+115 -17
net/xdp/xsk_buff_pool.c
···
  struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
  						struct xdp_umem *umem)
  {
+ 	bool unaligned = umem->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
  	struct xsk_buff_pool *pool;
  	struct xdp_buff_xsk *xskb;
- 	u32 i;
+ 	u32 i, entries;
  
- 	pool = kvzalloc(struct_size(pool, free_heads, umem->chunks),
- 			GFP_KERNEL);
+ 	entries = unaligned ? umem->chunks : 0;
+ 	pool = kvzalloc(struct_size(pool, free_heads, entries), GFP_KERNEL);
  	if (!pool)
  		goto out;
···
  	pool->free_heads_cnt = umem->chunks;
  	pool->headroom = umem->headroom;
  	pool->chunk_size = umem->chunk_size;
- 	pool->unaligned = umem->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
+ 	pool->chunk_shift = ffs(umem->chunk_size) - 1;
+ 	pool->unaligned = unaligned;
  	pool->frame_len = umem->chunk_size - umem->headroom -
  		XDP_PACKET_HEADROOM;
  	pool->umem = umem;
···
  		xskb = &pool->heads[i];
  		xskb->pool = pool;
  		xskb->xdp.frame_sz = umem->chunk_size - umem->headroom;
- 		pool->free_heads[i] = xskb;
+ 		if (pool->unaligned)
+ 			pool->free_heads[i] = xskb;
+ 		else
+ 			xp_init_xskb_addr(xskb, pool, i * pool->chunk_size);
  	}
  
  	return pool;
···
  	if (pool->unaligned)
  		xp_check_dma_contiguity(dma_map);
+ 	else
+ 		for (i = 0; i < pool->heads_cnt; i++) {
+ 			struct xdp_buff_xsk *xskb = &pool->heads[i];
+ 
+ 			xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
+ 		}
  
  	err = xp_init_dma_info(pool, dma_map);
  	if (err) {
···
  	if (pool->free_heads_cnt == 0)
  		return NULL;
  
- 	xskb = pool->free_heads[--pool->free_heads_cnt];
- 
  	for (;;) {
  		if (!xskq_cons_peek_addr_unchecked(pool->fq, &addr)) {
  			pool->fq->queue_empty_descs++;
- 			xp_release(xskb);
  			return NULL;
  		}
···
  		}
  		break;
  	}
- 	xskq_cons_release(pool->fq);
- 
- 	xskb->orig_addr = addr;
- 	xskb->xdp.data_hard_start = pool->addrs + addr + pool->headroom;
- 	if (pool->dma_pages_cnt) {
- 		xskb->frame_dma = (pool->dma_pages[addr >> PAGE_SHIFT] &
- 				   ~XSK_NEXT_PG_CONTIG_MASK) +
- 				  (addr & ~PAGE_MASK);
- 		xskb->dma = xskb->frame_dma + pool->headroom +
- 			    XDP_PACKET_HEADROOM;
+ 	if (pool->unaligned) {
+ 		xskb = pool->free_heads[--pool->free_heads_cnt];
+ 		xp_init_xskb_addr(xskb, pool, addr);
+ 		if (pool->dma_pages_cnt)
+ 			xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
+ 	} else {
+ 		xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
  	}
+ 
+ 	xskq_cons_release(pool->fq);
  	return xskb;
  }
···
  	return &xskb->xdp;
  }
  EXPORT_SYMBOL(xp_alloc);
+ 
+ static u32 xp_alloc_new_from_fq(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+ {
+ 	u32 i, cached_cons, nb_entries;
+ 
+ 	if (max > pool->free_heads_cnt)
+ 		max = pool->free_heads_cnt;
+ 	max = xskq_cons_nb_entries(pool->fq, max);
+ 
+ 	cached_cons = pool->fq->cached_cons;
+ 	nb_entries = max;
+ 	i = max;
+ 	while (i--) {
+ 		struct xdp_buff_xsk *xskb;
+ 		u64 addr;
+ 		bool ok;
+ 
+ 		__xskq_cons_read_addr_unchecked(pool->fq, cached_cons++, &addr);
+ 
+ 		ok = pool->unaligned ? xp_check_unaligned(pool, &addr) :
+ 		     xp_check_aligned(pool, &addr);
+ 		if (unlikely(!ok)) {
+ 			pool->fq->invalid_descs++;
+ 			nb_entries--;
+ 			continue;
+ 		}
+ 
+ 		if (pool->unaligned) {
+ 			xskb = pool->free_heads[--pool->free_heads_cnt];
+ 			xp_init_xskb_addr(xskb, pool, addr);
+ 			if (pool->dma_pages_cnt)
+ 				xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
+ 		} else {
+ 			xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
+ 		}
+ 
+ 		*xdp = &xskb->xdp;
+ 		xdp++;
+ 	}
+ 
+ 	xskq_cons_release_n(pool->fq, max);
+ 	return nb_entries;
+ }
+ 
+ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 nb_entries)
+ {
+ 	struct xdp_buff_xsk *xskb;
+ 	u32 i;
+ 
+ 	nb_entries = min_t(u32, nb_entries, pool->free_list_cnt);
+ 
+ 	i = nb_entries;
+ 	while (i--) {
+ 		xskb = list_first_entry(&pool->free_list, struct xdp_buff_xsk, free_list_node);
+ 		list_del(&xskb->free_list_node);
+ 
+ 		*xdp = &xskb->xdp;
+ 		xdp++;
+ 	}
+ 	pool->free_list_cnt -= nb_entries;
+ 
+ 	return nb_entries;
+ }
+ 
+ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+ {
+ 	u32 nb_entries1 = 0, nb_entries2;
+ 
+ 	if (unlikely(pool->dma_need_sync)) {
+ 		/* Slow path */
+ 		*xdp = xp_alloc(pool);
+ 		return !!*xdp;
+ 	}
+ 
+ 	if (unlikely(pool->free_list_cnt)) {
+ 		nb_entries1 = xp_alloc_reused(pool, xdp, max);
+ 		if (nb_entries1 == max)
+ 			return nb_entries1;
+ 
+ 		max -= nb_entries1;
+ 		xdp += nb_entries1;
+ 	}
+ 
+ 	nb_entries2 = xp_alloc_new_from_fq(pool, xdp, max);
+ 	if (!nb_entries2)
+ 		pool->fq->queue_empty_descs++;
+ 
+ 	return nb_entries1 + nb_entries2;
+ }
+ EXPORT_SYMBOL(xp_alloc_batch);
  
  bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count)
  {
+8 -4
net/xdp/xsk_queue.h
···
  
  /* Functions that read and validate content from consumer rings. */
  
- static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
+ static inline void __xskq_cons_read_addr_unchecked(struct xsk_queue *q, u32 cached_cons, u64 *addr)
  {
  	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
+ 	u32 idx = cached_cons & q->ring_mask;
  
+ 	*addr = ring->desc[idx];
+ }
+ 
+ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
+ {
  	if (q->cached_cons != q->cached_prod) {
- 		u32 idx = q->cached_cons & q->ring_mask;
- 
- 		*addr = ring->desc[idx];
+ 		__xskq_cons_read_addr_unchecked(q, q->cached_cons, addr);
  		return true;
  	}
+27 -12
samples/bpf/xdp_router_ipv4_user.c
···
  	printf("%d\n", nh->nlmsg_type);
  
  	memset(&route, 0, sizeof(route));
- 	printf("Destination\t\tGateway\t\tGenmask\t\tMetric\t\tIface\n");
+ 	printf("Destination Gateway Genmask Metric Iface\n");
  	for (; NLMSG_OK(nh, nll); nh = NLMSG_NEXT(nh, nll)) {
  		rt_msg = (struct rtmsg *)NLMSG_DATA(nh);
  		rtm_family = rt_msg->rtm_family;
···
  			int metric;
  			__be32 gw;
  		} *prefix_value;
+ 		struct in_addr dst_addr, gw_addr, mask_addr;
  
  		prefix_key = alloca(sizeof(*prefix_key) + 3);
  		prefix_value = alloca(sizeof(*prefix_value));
···
  		for (i = 0; i < 4; i++)
  			prefix_key->data[i] = (route.dst >> i * 8) & 0xff;
  
- 		printf("%3d.%d.%d.%d\t\t%3x\t\t%d\t\t%d\t\t%s\n",
- 		       (int)prefix_key->data[0],
- 		       (int)prefix_key->data[1],
- 		       (int)prefix_key->data[2],
- 		       (int)prefix_key->data[3],
- 		       route.gw, route.dst_len,
+ 		dst_addr.s_addr = route.dst;
+ 		printf("%-16s", inet_ntoa(dst_addr));
+ 
+ 		gw_addr.s_addr = route.gw;
+ 		printf("%-16s", inet_ntoa(gw_addr));
+ 
+ 		mask_addr.s_addr = htonl(~(0xffffffffU >> route.dst_len));
+ 		printf("%-16s%-7d%s\n", inet_ntoa(mask_addr),
  		       route.metric,
  		       route.iface_name);
+ 
  		if (bpf_map_lookup_elem(lpm_map_fd, prefix_key,
  					prefix_value) < 0) {
  			for (i = 0; i < 4; i++)
···
  	if (nh->nlmsg_type == RTM_GETNEIGH)
  		printf("READING arp entry\n");
- 	printf("Address\tHwAddress\n");
+ 	printf("Address HwAddress\n");
  	for (; NLMSG_OK(nh, nll); nh = NLMSG_NEXT(nh, nll)) {
+ 		struct in_addr dst_addr;
+ 		char mac_str[18];
+ 		int len = 0, i;
+ 
  		rt_msg = (struct ndmsg *)NLMSG_DATA(nh);
  		rt_attr = (struct rtattr *)RTM_RTA(rt_msg);
  		ndm_family = rt_msg->ndm_family;
···
  		}
  		arp_entry.dst = atoi(dsts);
  		arp_entry.mac = atol(mac);
- 		printf("%x\t\t%llx\n", arp_entry.dst, arp_entry.mac);
+ 
+ 		dst_addr.s_addr = arp_entry.dst;
+ 		for (i = 0; i < 6; i++)
+ 			len += snprintf(mac_str + len, 18 - len, "%02llx%s",
+ 					((arp_entry.mac >> i * 8) & 0xff),
+ 					i < 5 ? ":" : "");
+ 		printf("%-16s%s\n", inet_ntoa(dst_addr), mac_str);
+ 
  		if (ndm_family == AF_INET) {
  			if (bpf_map_lookup_elem(exact_match_map_fd,
  						&arp_entry.dst,
···
  	if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
  		return 1;
  
- 	printf("\n**************loading bpf file*********************\n\n\n");
+ 	printf("\n******************loading bpf file*********************\n");
  	if (!prog_fd) {
  		printf("bpf_prog_load_xattr: %s\n", strerror(errno));
  		return 1;
···
  	signal(SIGINT, int_exit);
  	signal(SIGTERM, int_exit);
  
- 	printf("*******************ROUTE TABLE*************************\n\n\n");
+ 	printf("\n*******************ROUTE TABLE*************************\n");
  	get_route_table(AF_INET);
- 	printf("*******************ARP TABLE***************************\n\n\n");
+ 	printf("\n*******************ARP TABLE***************************\n");
  	get_arp_table(AF_INET);
  	if (monitor_route() < 0) {
  		printf("Error in receiving route update");
+1
tools/bpf/bpftool/feature.c
···
  	 */
  	switch (id) {
  	case BPF_FUNC_trace_printk:
+ 	case BPF_FUNC_trace_vprintk:
  	case BPF_FUNC_probe_write_user:
  		if (!full_mode)
  			continue;
+4 -1
tools/bpf/bpftool/gen.c
···
  		}						    \n\
  								    \n\
  			err = %1$s__create_skeleton(obj);	    \n\
- 			err = err ?: bpf_object__open_skeleton(obj->skeleton, opts);\n\
+ 			if (err)				    \n\
+ 				goto err_out;			    \n\
+ 								    \n\
+ 			err = bpf_object__open_skeleton(obj->skeleton, opts);\n\
  			if (err)				    \n\
  				goto err_out;			    \n\
  								    \n\
+14 -2
tools/include/uapi/linux/bpf.h
···
   *		arguments. The *data* are a **u64** array and corresponding format string
   *		values are stored in the array. For strings and pointers where pointees
   *		are accessed, only the pointer values are stored in the *data* array.
-  *		The *data_len* is the size of *data* in bytes.
+  *		The *data_len* is the size of *data* in bytes - must be a multiple of 8.
   *
   *		Formats **%s**, **%p{i,I}{4,6}** requires to read kernel memory.
   *		Reading kernel memory may fail due to either invalid address or
···
   *		Each format specifier in **fmt** corresponds to one u64 element
   *		in the **data** array. For strings and pointers where pointees
   *		are accessed, only the pointer values are stored in the *data*
-  *		array. The *data_len* is the size of *data* in bytes.
+  *		array. The *data_len* is the size of *data* in bytes - must be
+  *		a multiple of 8.
   *
   *		Formats **%s** and **%p{i,I}{4,6}** require to read kernel
   *		memory. Reading kernel memory may fail due to either invalid
···
   *		**-EINVAL** if *flags* is not zero.
   *
   *		**-ENOENT** if architecture does not support branch records.
+  *
+  * long bpf_trace_vprintk(const char *fmt, u32 fmt_size, const void *data, u32 data_len)
+  *	Description
+  *		Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
+  *		to format and can handle more format args as a result.
+  *
+  *		Arguments are to be used as in **bpf_seq_printf**\ () helper.
+  *	Return
+  *		The number of bytes written to the buffer, or a negative error
+  *		in case of failure.
   */
  #define __BPF_FUNC_MAPPER(FN)		\
  	FN(unspec),			\
···
  	FN(get_attach_cookie),		\
  	FN(task_pt_regs),		\
  	FN(get_branch_snapshot),	\
+ 	FN(trace_vprintk),		\
  	/* */
  
  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
+43 -8
tools/lib/bpf/bpf_helpers.h
···
  #define __type(name, val) typeof(val) *name
  #define __array(name, val) typeof(val) *name[]
  
- /* Helper macro to print out debug messages */
- #define bpf_printk(fmt, ...)				\
- ({							\
- 	char ____fmt[] = fmt;				\
- 	bpf_trace_printk(____fmt, sizeof(____fmt),	\
- 			 ##__VA_ARGS__);		\
- })
- 
  /*
   * Helper macro to place programs, maps, license in
   * different sections in elf_bpf file. Section names
···
  	bpf_snprintf(out, out_size, ___fmt,		\
  		     ___param, sizeof(___param));	\
  })
+ 
+ #ifdef BPF_NO_GLOBAL_DATA
+ #define BPF_PRINTK_FMT_MOD
+ #else
+ #define BPF_PRINTK_FMT_MOD static const
+ #endif
+ 
+ #define __bpf_printk(fmt, ...)				\
+ ({							\
+ 	BPF_PRINTK_FMT_MOD char ____fmt[] = fmt;	\
+ 	bpf_trace_printk(____fmt, sizeof(____fmt),	\
+ 			 ##__VA_ARGS__);		\
+ })
+ 
+ /*
+  * __bpf_vprintk wraps the bpf_trace_vprintk helper with variadic arguments
+  * instead of an array of u64.
+  */
+ #define __bpf_vprintk(fmt, args...)			\
+ ({							\
+ 	static const char ___fmt[] = fmt;		\
+ 	unsigned long long ___param[___bpf_narg(args)];	\
+ 							\
+ 	_Pragma("GCC diagnostic push")			\
+ 	_Pragma("GCC diagnostic ignored \"-Wint-conversion\"")	\
+ 	___bpf_fill(___param, args);			\
+ 	_Pragma("GCC diagnostic pop")			\
+ 							\
+ 	bpf_trace_vprintk(___fmt, sizeof(___fmt),	\
+ 			  ___param, sizeof(___param));	\
+ })
+ 
+ /* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
+  * Otherwise use __bpf_vprintk
+  */
+ #define ___bpf_pick_printk(...)				\
+ 	___bpf_nth(_, ##__VA_ARGS__, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk,	\
+ 		   __bpf_vprintk, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk,		\
+ 		   __bpf_vprintk, __bpf_vprintk, __bpf_printk /*3*/, __bpf_printk /*2*/,\
+ 		   __bpf_printk /*1*/, __bpf_printk /*0*/)
+ 
+ /* Helper macro to print out debug messages */
+ #define bpf_printk(fmt, args...) ___bpf_pick_printk(args)(fmt, ##args)
  
  #endif
+6 -1
tools/lib/bpf/gen_loader.c
···
  #include <string.h>
  #include <errno.h>
  #include <linux/filter.h>
+ #include <sys/param.h>
  #include "btf.h"
  #include "bpf.h"
  #include "libbpf.h"
···
  
  static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
  {
+ 	__u32 size8 = roundup(size, 8);
+ 	__u64 zero = 0;
  	void *prev;
  
- 	if (realloc_data_buf(gen, size))
+ 	if (realloc_data_buf(gen, size8))
  		return 0;
  	prev = gen->data_cur;
  	memcpy(gen->data_cur, data, size);
  	gen->data_cur += size;
+ 	memcpy(gen->data_cur, &zero, size8 - size);
+ 	gen->data_cur += size8 - size;
  	return prev - gen->data_start;
  }
+504 -342
tools/lib/bpf/libbpf.c
···
  
  struct bpf_sec_def;
  
- typedef struct bpf_link *(*attach_fn_t)(const struct bpf_program *prog);
+ typedef int (*init_fn_t)(struct bpf_program *prog, long cookie);
+ typedef int (*preload_fn_t)(struct bpf_program *prog, struct bpf_prog_load_params *attr, long cookie);
+ typedef struct bpf_link *(*attach_fn_t)(const struct bpf_program *prog, long cookie);
+ 
+ /* stored as sec_def->cookie for all libbpf-supported SEC()s */
+ enum sec_def_flags {
+ 	SEC_NONE = 0,
+ 	/* expected_attach_type is optional, if kernel doesn't support that */
+ 	SEC_EXP_ATTACH_OPT = 1,
+ 	/* legacy, only used by libbpf_get_type_names() and
+ 	 * libbpf_attach_type_by_name(), not used by libbpf itself at all.
+ 	 * This used to be associated with cgroup (and few other) BPF programs
+ 	 * that were attachable through BPF_PROG_ATTACH command. Pretty
+ 	 * meaningless nowadays, though.
+ 	 */
+ 	SEC_ATTACHABLE = 2,
+ 	SEC_ATTACHABLE_OPT = SEC_ATTACHABLE | SEC_EXP_ATTACH_OPT,
+ 	/* attachment target is specified through BTF ID in either kernel or
+ 	 * other BPF program's BTF object */
+ 	SEC_ATTACH_BTF = 4,
+ 	/* BPF program type allows sleeping/blocking in kernel */
+ 	SEC_SLEEPABLE = 8,
+ 	/* allow non-strict prefix matching */
+ 	SEC_SLOPPY_PFX = 16,
+ };
  
  struct bpf_sec_def {
  	const char *sec;
- 	size_t len;
  	enum bpf_prog_type prog_type;
  	enum bpf_attach_type expected_attach_type;
- 	bool is_exp_attach_type_optional;
- 	bool is_attachable;
- 	bool is_attach_btf;
- 	bool is_sleepable;
+ 	long cookie;
+ 
+ 	init_fn_t init_fn;
+ 	preload_fn_t preload_fn;
  	attach_fn_t attach_fn;
  };
···
  	void *ext_val;
  	__u64 num;
  
- 	if (strncmp(buf, "CONFIG_", 7))
+ 	if (!str_has_pfx(buf, "CONFIG_"))
  		return 0;
  
  	sep = strchr(buf, '=');
···
  			continue;
  		if (sym.st_shndx != obj->efile.maps_shndx)
  			continue;
+ 		if (GELF_ST_TYPE(sym.st_info) == STT_SECTION)
+ 			continue;
  		nr_maps++;
  	}
  	/* Assume equally sized map definitions */
···
  			continue;
  		if (sym.st_shndx != obj->efile.maps_shndx)
  			continue;
+ 		if (GELF_ST_TYPE(sym.st_info) == STT_SECTION)
+ 			continue;
  
  		map = bpf_object__add_map(obj);
  		if (IS_ERR(map))
···
  			return -LIBBPF_ERRNO__FORMAT;
  		}
  
- 		if (GELF_ST_TYPE(sym.st_info) == STT_SECTION
- 		    || GELF_ST_BIND(sym.st_info) == STB_LOCAL) {
+ 		if (GELF_ST_BIND(sym.st_info) == STB_LOCAL) {
  			pr_warn("map '%s' (legacy): static maps are not supported\n", map_name);
  			return -ENOTSUP;
  		}
···
  static bool is_sec_name_dwarf(const char *name)
  {
  	/* approximation, but the actual list is too long */
- 	return strncmp(name, ".debug_", sizeof(".debug_") - 1) == 0;
+ 	return str_has_pfx(name, ".debug_");
  }
  
  static bool ignore_elf_section(GElf_Shdr *hdr, const char *name)
···
  	if (is_sec_name_dwarf(name))
  		return true;
  
- 	if (strncmp(name, ".rel", sizeof(".rel") - 1) == 0) {
+ 	if (str_has_pfx(name, ".rel")) {
  		name += sizeof(".rel") - 1;
  		/* DWARF section relocations */
  		if (is_sec_name_dwarf(name))
···
  		create_attr.inner_map_fd = map->inner_map_fd;
  	}
  
+ 	switch (def->type) {
+ 	case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
+ 	case BPF_MAP_TYPE_CGROUP_ARRAY:
+ 	case BPF_MAP_TYPE_STACK_TRACE:
+ 	case BPF_MAP_TYPE_ARRAY_OF_MAPS:
+ 	case BPF_MAP_TYPE_HASH_OF_MAPS:
+ 	case BPF_MAP_TYPE_DEVMAP:
+ 	case BPF_MAP_TYPE_DEVMAP_HASH:
+ 	case BPF_MAP_TYPE_CPUMAP:
+ 	case BPF_MAP_TYPE_XSKMAP:
+ 	case BPF_MAP_TYPE_SOCKMAP:
+ 	case BPF_MAP_TYPE_SOCKHASH:
+ 	case BPF_MAP_TYPE_QUEUE:
+ 	case BPF_MAP_TYPE_STACK:
+ 	case BPF_MAP_TYPE_RINGBUF:
+ 		create_attr.btf_fd = 0;
+ 		create_attr.btf_key_type_id = 0;
+ 		create_attr.btf_value_type_id = 0;
+ 		map->btf_key_type_id = 0;
+ 		map->btf_value_type_id = 0;
+ 	default:
+ 		break;
+ 	}
+ 
  	if (obj->gen_loader) {
  		bpf_gen__map_create(obj->gen_loader, &create_attr, is_inner ? -1 : map - obj->maps);
  		/* Pretend to have valid FD to pass various fd >= 0 checks.
···
  	return 0;
  }
  
+ static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attach_name,
+ 				     int *btf_obj_fd, int *btf_type_id);
+ 
+ /* this is called as prog->sec_def->preload_fn for libbpf-supported sec_defs */
+ static int libbpf_preload_prog(struct bpf_program *prog,
+ 			       struct bpf_prog_load_params *attr, long cookie)
+ {
+ 	enum sec_def_flags def = cookie;
+ 
+ 	/* old kernels might not support specifying expected_attach_type */
+ 	if ((def & SEC_EXP_ATTACH_OPT) && !kernel_supports(prog->obj, FEAT_EXP_ATTACH_TYPE))
+ 		attr->expected_attach_type = 0;
+ 
+ 	if (def & SEC_SLEEPABLE)
+ 		attr->prog_flags |= BPF_F_SLEEPABLE;
+ 
+ 	if ((prog->type == BPF_PROG_TYPE_TRACING ||
+ 	     prog->type == BPF_PROG_TYPE_LSM ||
+ 	     prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) {
+ 		int btf_obj_fd = 0, btf_type_id = 0, err;
+ 		const char *attach_name;
+ 
+ 		attach_name = strchr(prog->sec_name, '/') + 1;
+ 		err = libbpf_find_attach_btf_id(prog, attach_name, &btf_obj_fd, &btf_type_id);
+ 		if (err)
+ 			return err;
+ 
+ 		/* cache resolved BTF FD and BTF type ID in the prog */
+ 		prog->attach_btf_obj_fd = btf_obj_fd;
+ 		prog->attach_btf_id = btf_type_id;
+ 
+ 		/* but by now libbpf common logic is not utilizing
+ 		 * prog->atach_btf_obj_fd/prog->attach_btf_id anymore because
+ 		 * this callback is called after attrs were populated by
+ 		 * libbpf, so this callback has to update attr explicitly here
+ 		 */
+ 		attr->attach_btf_obj_fd = btf_obj_fd;
+ 		attr->attach_btf_id = btf_type_id;
+ 	}
+ 	return 0;
+ }
+ 
  static int
  load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
  	     char *license, __u32 kern_version, int *pfd)
···
  	char *cp, errmsg[STRERR_BUFSIZE];
  	size_t log_buf_size = 0;
  	char *log_buf = NULL;
- 	int btf_fd, ret;
+ 	int btf_fd, ret, err;
  
  	if (prog->type == BPF_PROG_TYPE_UNSPEC) {
  		/*
···
  		return -EINVAL;
  
  	load_attr.prog_type = prog->type;
- 	/* old kernels might not support specifying expected_attach_type */
- 	if (!kernel_supports(prog->obj, FEAT_EXP_ATTACH_TYPE) && prog->sec_def &&
- 	    prog->sec_def->is_exp_attach_type_optional)
- 		load_attr.expected_attach_type = 0;
- 	else
- 		load_attr.expected_attach_type = prog->expected_attach_type;
+ 	load_attr.expected_attach_type = prog->expected_attach_type;
  	if (kernel_supports(prog->obj, FEAT_PROG_NAME))
  		load_attr.name = prog->name;
  	load_attr.insns = insns;
  	load_attr.insn_cnt = insns_cnt;
  	load_attr.license = license;
  	load_attr.attach_btf_id = prog->attach_btf_id;
- 	if (prog->attach_prog_fd)
- 		load_attr.attach_prog_fd = prog->attach_prog_fd;
- 	else
- 		load_attr.attach_btf_obj_fd = prog->attach_btf_obj_fd;
+ 	load_attr.attach_prog_fd = prog->attach_prog_fd;
+ 	load_attr.attach_btf_obj_fd = prog->attach_btf_obj_fd;
  	load_attr.attach_btf_id = prog->attach_btf_id;
  	load_attr.kern_version = kern_version;
  	load_attr.prog_ifindex = prog->prog_ifindex;
···
  	}
  	load_attr.log_level = prog->log_level;
  	load_attr.prog_flags = prog->prog_flags;
+ 
+ 	/* adjust load_attr if sec_def provides custom preload callback */
+ 	if (prog->sec_def && prog->sec_def->preload_fn) {
+ 		err = prog->sec_def->preload_fn(prog, &load_attr, prog->sec_def->cookie);
+ 		if (err < 0) {
+ 			pr_warn("prog '%s': failed to prepare load attributes: %d\n",
+ 				prog->name, err);
+ 			return err;
+ 		}
+ 	}
  
  	if (prog->obj->gen_loader) {
  		bpf_gen__prog_load(prog->obj->gen_loader, &load_attr,
···
  	return 0;
  }
  
- static int libbpf_find_attach_btf_id(struct bpf_program *prog, int *btf_obj_fd, int *btf_type_id);
- 
  int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver)
  {
  	int err = 0, fd, i;
···
  	if (prog->obj->loaded) {
  		pr_warn("prog '%s': can't load after object was loaded\n", prog->name);
  		return libbpf_err(-EINVAL);
- 	}
- 
- 	if ((prog->type == BPF_PROG_TYPE_TRACING ||
- 	     prog->type == BPF_PROG_TYPE_LSM ||
- 	     prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) {
- 		int btf_obj_fd = 0, btf_type_id = 0;
- 
- 		err = libbpf_find_attach_btf_id(prog, &btf_obj_fd, &btf_type_id);
- 		if (err)
- 			return libbpf_err(err);
- 
- 		prog->attach_btf_obj_fd = btf_obj_fd;
- 		prog->attach_btf_id = btf_type_id;
  	}
  
  	if (prog->instances.nr < 0 || !prog->instances.fds) {
···
  static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object_open_opts *opts)
  {
  	struct bpf_program *prog;
+ 	int err;
  
  	bpf_object__for_each_program(prog, obj) {
  		prog->sec_def = find_sec_def(prog->sec_name);
···
  			continue;
  		}
  
- 		if (prog->sec_def->is_sleepable)
- 			prog->prog_flags |= BPF_F_SLEEPABLE;
  		bpf_program__set_type(prog, prog->sec_def->prog_type);
  		bpf_program__set_expected_attach_type(prog, prog->sec_def->expected_attach_type);
···
  		    prog->sec_def->prog_type == BPF_PROG_TYPE_EXT)
  			prog->attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0);
  #pragma GCC diagnostic pop
+ 
+ 		/* sec_def can have custom callback which should be called
+ 		 * after bpf_program is initialized to adjust its properties
+ 		 */
+ 		if (prog->sec_def->init_fn) {
+ 			err = prog->sec_def->init_fn(prog, prog->sec_def->cookie);
+ 			if (err < 0) {
+ 				pr_warn("prog '%s': failed to initialize: %d\n",
+ 					prog->name, err);
+ 				return err;
+ 			}
+ 		}
  	}
  
  	return 0;
···
  			if (err)
  				return err;
  			pr_debug("extern (kcfg) %s=0x%x\n", ext->name, kver);
- 		} else if (ext->type == EXT_KCFG &&
- 			   strncmp(ext->name, "CONFIG_", 7) == 0) {
+ 		} else if (ext->type == EXT_KCFG && str_has_pfx(ext->name, "CONFIG_")) {
  			need_config = true;
  		} else if (ext->type == EXT_KSYM) {
  			if (ext->ksym.type_id)
···
  	prog->expected_attach_type = type;
  }
  
- #define BPF_PROG_SEC_IMPL(string, ptype, eatype, eatype_optional,	    \
- 			  attachable, attach_btf)			    \
- 	{								    \
- 		.sec = string,						    \
- 		.len = sizeof(string) - 1,				    \
- 		.prog_type = ptype,					    \
- 		.expected_attach_type = eatype,				    \
- 		.is_exp_attach_type_optional = eatype_optional,		    \
- 		.is_attachable = attachable,				    \
- 		.is_attach_btf = attach_btf,				    \
- 	}
- 
- /* Programs that can NOT be attached. */
- #define BPF_PROG_SEC(string, ptype) BPF_PROG_SEC_IMPL(string, ptype, 0, 0, 0, 0)
- 
- /* Programs that can be attached. */
- #define BPF_APROG_SEC(string, ptype, atype) \
- 	BPF_PROG_SEC_IMPL(string, ptype, atype, true, 1, 0)
- 
- /* Programs that must specify expected attach type at load time. */
- #define BPF_EAPROG_SEC(string, ptype, eatype) \
- 	BPF_PROG_SEC_IMPL(string, ptype, eatype, false, 1, 0)
- 
- /* Programs that use BTF to identify attach point */
- #define BPF_PROG_BTF(string, ptype, eatype) \
- 	BPF_PROG_SEC_IMPL(string, ptype, eatype, false, 0, 1)
- 
- /* Programs that can be attached but attach type can't be identified by section
-  * name. Kept for backward compatibility.
-  */
- #define BPF_APROG_COMPAT(string, ptype) BPF_PROG_SEC(string, ptype)
- 
- #define SEC_DEF(sec_pfx, ptype, ...) {				    \
+ #define SEC_DEF(sec_pfx, ptype, atype, flags, ...) {		    \
  	.sec = sec_pfx,						    \
- 	.len = sizeof(sec_pfx) - 1,				    \
  	.prog_type = BPF_PROG_TYPE_##ptype,			    \
+ 	.expected_attach_type = atype,				    \
+ 	.cookie = (long)(flags),				    \
+ 	.preload_fn = libbpf_preload_prog,			    \
  	__VA_ARGS__						    \
  }
  
- static struct bpf_link *attach_kprobe(const struct bpf_program *prog);
- static struct bpf_link *attach_tp(const struct bpf_program *prog);
- static struct bpf_link *attach_raw_tp(const struct bpf_program *prog);
- static struct bpf_link *attach_trace(const struct bpf_program *prog);
- static struct bpf_link *attach_lsm(const struct bpf_program *prog);
- static struct bpf_link *attach_iter(const struct bpf_program *prog);
+ static struct bpf_link *attach_kprobe(const struct bpf_program *prog, long cookie);
+ static struct bpf_link *attach_tp(const struct bpf_program *prog, long cookie);
+ static struct bpf_link *attach_raw_tp(const struct bpf_program *prog, long cookie);
+ static struct bpf_link *attach_trace(const struct bpf_program *prog, long cookie);
+ static struct bpf_link *attach_lsm(const struct bpf_program *prog, long cookie);
+ static struct bpf_link *attach_iter(const struct bpf_program *prog, long cookie);
  
  static const struct bpf_sec_def section_defs[] = {
- 	BPF_PROG_SEC("socket", BPF_PROG_TYPE_SOCKET_FILTER),
- 	BPF_EAPROG_SEC("sk_reuseport/migrate", BPF_PROG_TYPE_SK_REUSEPORT,
- 		       BPF_SK_REUSEPORT_SELECT_OR_MIGRATE),
- 	BPF_EAPROG_SEC("sk_reuseport", BPF_PROG_TYPE_SK_REUSEPORT,
- 		       BPF_SK_REUSEPORT_SELECT),
- 	SEC_DEF("kprobe/", KPROBE,
- 		.attach_fn = attach_kprobe),
- 	BPF_PROG_SEC("uprobe/", BPF_PROG_TYPE_KPROBE),
- 	SEC_DEF("kretprobe/", KPROBE,
- 		.attach_fn = attach_kprobe),
- 	BPF_PROG_SEC("uretprobe/", BPF_PROG_TYPE_KPROBE),
- 	BPF_PROG_SEC("classifier", BPF_PROG_TYPE_SCHED_CLS),
- 	BPF_PROG_SEC("action", BPF_PROG_TYPE_SCHED_ACT),
- 	SEC_DEF("tracepoint/", TRACEPOINT,
- 		.attach_fn = attach_tp),
- 	SEC_DEF("tp/", TRACEPOINT,
- 		.attach_fn = attach_tp),
- 	SEC_DEF("raw_tracepoint/", RAW_TRACEPOINT,
- 		.attach_fn = attach_raw_tp),
- 	SEC_DEF("raw_tp/", RAW_TRACEPOINT,
- 		.attach_fn = attach_raw_tp),
- 	SEC_DEF("tp_btf/", TRACING,
- 		.expected_attach_type = BPF_TRACE_RAW_TP,
- 		.is_attach_btf = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("fentry/", TRACING,
- 		.expected_attach_type = BPF_TRACE_FENTRY,
- 		.is_attach_btf = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("fmod_ret/", TRACING,
- 		.expected_attach_type = BPF_MODIFY_RETURN,
- 		.is_attach_btf = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("fexit/", TRACING,
- 		.expected_attach_type = BPF_TRACE_FEXIT,
- 		.is_attach_btf = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("fentry.s/", TRACING,
- 		.expected_attach_type = BPF_TRACE_FENTRY,
- 		.is_attach_btf = true,
- 		.is_sleepable = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("fmod_ret.s/", TRACING,
- 		.expected_attach_type = BPF_MODIFY_RETURN,
- 		.is_attach_btf = true,
- 		.is_sleepable = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("fexit.s/", TRACING,
- 		.expected_attach_type = BPF_TRACE_FEXIT,
- 		.is_attach_btf = true,
- 		.is_sleepable = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("freplace/", EXT,
- 		.is_attach_btf = true,
- 		.attach_fn = attach_trace),
- 	SEC_DEF("lsm/", LSM,
- 		.is_attach_btf = true,
- 		.expected_attach_type = BPF_LSM_MAC,
- 		.attach_fn = attach_lsm),
- 	SEC_DEF("lsm.s/", LSM,
- 		.is_attach_btf = true,
- 		.is_sleepable = true,
- 		.expected_attach_type = BPF_LSM_MAC,
- 		.attach_fn = attach_lsm),
- 	SEC_DEF("iter/", TRACING,
- 		.expected_attach_type = BPF_TRACE_ITER,
- 		.is_attach_btf = true,
- 		.attach_fn = attach_iter),
- 	SEC_DEF("syscall", SYSCALL,
- 		.is_sleepable = true),
- 	BPF_EAPROG_SEC("xdp_devmap/", BPF_PROG_TYPE_XDP,
- 		       BPF_XDP_DEVMAP),
- 	BPF_EAPROG_SEC("xdp_cpumap/", BPF_PROG_TYPE_XDP,
- 		       BPF_XDP_CPUMAP),
- 	BPF_APROG_SEC("xdp", BPF_PROG_TYPE_XDP,
- 		      BPF_XDP),
- 	BPF_PROG_SEC("perf_event", BPF_PROG_TYPE_PERF_EVENT),
- 	BPF_PROG_SEC("lwt_in", BPF_PROG_TYPE_LWT_IN),
- 	BPF_PROG_SEC("lwt_out", BPF_PROG_TYPE_LWT_OUT),
- 	BPF_PROG_SEC("lwt_xmit", BPF_PROG_TYPE_LWT_XMIT),
- 	BPF_PROG_SEC("lwt_seg6local", BPF_PROG_TYPE_LWT_SEG6LOCAL),
- 	BPF_APROG_SEC("cgroup_skb/ingress", BPF_PROG_TYPE_CGROUP_SKB,
- 		      BPF_CGROUP_INET_INGRESS),
- 	BPF_APROG_SEC("cgroup_skb/egress", BPF_PROG_TYPE_CGROUP_SKB,
- 		      BPF_CGROUP_INET_EGRESS),
- 	BPF_APROG_COMPAT("cgroup/skb", BPF_PROG_TYPE_CGROUP_SKB),
- 	BPF_EAPROG_SEC("cgroup/sock_create", BPF_PROG_TYPE_CGROUP_SOCK,
- 		       BPF_CGROUP_INET_SOCK_CREATE),
- 	BPF_EAPROG_SEC("cgroup/sock_release", BPF_PROG_TYPE_CGROUP_SOCK,
- 		       BPF_CGROUP_INET_SOCK_RELEASE),
- 	BPF_APROG_SEC("cgroup/sock", BPF_PROG_TYPE_CGROUP_SOCK,
- 		      BPF_CGROUP_INET_SOCK_CREATE),
- 	BPF_EAPROG_SEC("cgroup/post_bind4", BPF_PROG_TYPE_CGROUP_SOCK,
- 		       BPF_CGROUP_INET4_POST_BIND),
- 	BPF_EAPROG_SEC("cgroup/post_bind6", BPF_PROG_TYPE_CGROUP_SOCK,
- 		       BPF_CGROUP_INET6_POST_BIND),
- 	BPF_APROG_SEC("cgroup/dev", BPF_PROG_TYPE_CGROUP_DEVICE,
- 		      BPF_CGROUP_DEVICE),
- 	BPF_APROG_SEC("sockops", BPF_PROG_TYPE_SOCK_OPS,
- 		      BPF_CGROUP_SOCK_OPS),
- 	BPF_APROG_SEC("sk_skb/stream_parser", BPF_PROG_TYPE_SK_SKB,
- 		      BPF_SK_SKB_STREAM_PARSER),
- 	BPF_APROG_SEC("sk_skb/stream_verdict", BPF_PROG_TYPE_SK_SKB,
- 		      BPF_SK_SKB_STREAM_VERDICT),
- 	BPF_APROG_COMPAT("sk_skb", BPF_PROG_TYPE_SK_SKB),
- 	BPF_APROG_SEC("sk_msg", BPF_PROG_TYPE_SK_MSG,
- 		      BPF_SK_MSG_VERDICT),
- 	BPF_APROG_SEC("lirc_mode2", BPF_PROG_TYPE_LIRC_MODE2,
- 		      BPF_LIRC_MODE2),
- 	BPF_APROG_SEC("flow_dissector", BPF_PROG_TYPE_FLOW_DISSECTOR,
- 		      BPF_FLOW_DISSECTOR),
- 	BPF_EAPROG_SEC("cgroup/bind4", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET4_BIND),
- 	BPF_EAPROG_SEC("cgroup/bind6", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET6_BIND),
- 	BPF_EAPROG_SEC("cgroup/connect4", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET4_CONNECT),
- 	BPF_EAPROG_SEC("cgroup/connect6", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET6_CONNECT),
- 	BPF_EAPROG_SEC("cgroup/sendmsg4", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_UDP4_SENDMSG),
- 	BPF_EAPROG_SEC("cgroup/sendmsg6", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_UDP6_SENDMSG),
- 	BPF_EAPROG_SEC("cgroup/recvmsg4", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_UDP4_RECVMSG),
- 	BPF_EAPROG_SEC("cgroup/recvmsg6", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_UDP6_RECVMSG),
- 	BPF_EAPROG_SEC("cgroup/getpeername4", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET4_GETPEERNAME),
- 	BPF_EAPROG_SEC("cgroup/getpeername6", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET6_GETPEERNAME),
- 	BPF_EAPROG_SEC("cgroup/getsockname4", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
- 		       BPF_CGROUP_INET4_GETSOCKNAME),
- 	BPF_EAPROG_SEC("cgroup/getsockname6", BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
BPF_CGROUP_INET6_GETSOCKNAME), 8183 - BPF_EAPROG_SEC("cgroup/sysctl", BPF_PROG_TYPE_CGROUP_SYSCTL, 8184 - BPF_CGROUP_SYSCTL), 8185 - BPF_EAPROG_SEC("cgroup/getsockopt", BPF_PROG_TYPE_CGROUP_SOCKOPT, 8186 - BPF_CGROUP_GETSOCKOPT), 8187 - BPF_EAPROG_SEC("cgroup/setsockopt", BPF_PROG_TYPE_CGROUP_SOCKOPT, 8188 - BPF_CGROUP_SETSOCKOPT), 8189 - BPF_PROG_SEC("struct_ops", BPF_PROG_TYPE_STRUCT_OPS), 8190 - BPF_EAPROG_SEC("sk_lookup/", BPF_PROG_TYPE_SK_LOOKUP, 8191 - BPF_SK_LOOKUP), 7928 + SEC_DEF("socket", SOCKET_FILTER, 0, SEC_NONE | SEC_SLOPPY_PFX), 7929 + SEC_DEF("sk_reuseport/migrate", SK_REUSEPORT, BPF_SK_REUSEPORT_SELECT_OR_MIGRATE, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7930 + SEC_DEF("sk_reuseport", SK_REUSEPORT, BPF_SK_REUSEPORT_SELECT, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7931 + SEC_DEF("kprobe/", KPROBE, 0, SEC_NONE, attach_kprobe), 7932 + SEC_DEF("uprobe/", KPROBE, 0, SEC_NONE), 7933 + SEC_DEF("kretprobe/", KPROBE, 0, SEC_NONE, attach_kprobe), 7934 + SEC_DEF("uretprobe/", KPROBE, 0, SEC_NONE), 7935 + SEC_DEF("tc", SCHED_CLS, 0, SEC_NONE), 7936 + SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE | SEC_SLOPPY_PFX), 7937 + SEC_DEF("action", SCHED_ACT, 0, SEC_NONE | SEC_SLOPPY_PFX), 7938 + SEC_DEF("tracepoint/", TRACEPOINT, 0, SEC_NONE, attach_tp), 7939 + SEC_DEF("tp/", TRACEPOINT, 0, SEC_NONE, attach_tp), 7940 + SEC_DEF("raw_tracepoint/", RAW_TRACEPOINT, 0, SEC_NONE, attach_raw_tp), 7941 + SEC_DEF("raw_tp/", RAW_TRACEPOINT, 0, SEC_NONE, attach_raw_tp), 7942 + SEC_DEF("tp_btf/", TRACING, BPF_TRACE_RAW_TP, SEC_ATTACH_BTF, attach_trace), 7943 + SEC_DEF("fentry/", TRACING, BPF_TRACE_FENTRY, SEC_ATTACH_BTF, attach_trace), 7944 + SEC_DEF("fmod_ret/", TRACING, BPF_MODIFY_RETURN, SEC_ATTACH_BTF, attach_trace), 7945 + SEC_DEF("fexit/", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF, attach_trace), 7946 + SEC_DEF("fentry.s/", TRACING, BPF_TRACE_FENTRY, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace), 7947 + SEC_DEF("fmod_ret.s/", TRACING, BPF_MODIFY_RETURN, SEC_ATTACH_BTF | SEC_SLEEPABLE, 
attach_trace), 7948 + SEC_DEF("fexit.s/", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace), 7949 + SEC_DEF("freplace/", EXT, 0, SEC_ATTACH_BTF, attach_trace), 7950 + SEC_DEF("lsm/", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm), 7951 + SEC_DEF("lsm.s/", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm), 7952 + SEC_DEF("iter/", TRACING, BPF_TRACE_ITER, SEC_ATTACH_BTF, attach_iter), 7953 + SEC_DEF("syscall", SYSCALL, 0, SEC_SLEEPABLE), 7954 + SEC_DEF("xdp_devmap/", XDP, BPF_XDP_DEVMAP, SEC_ATTACHABLE), 7955 + SEC_DEF("xdp_cpumap/", XDP, BPF_XDP_CPUMAP, SEC_ATTACHABLE), 7956 + SEC_DEF("xdp", XDP, BPF_XDP, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7957 + SEC_DEF("perf_event", PERF_EVENT, 0, SEC_NONE | SEC_SLOPPY_PFX), 7958 + SEC_DEF("lwt_in", LWT_IN, 0, SEC_NONE | SEC_SLOPPY_PFX), 7959 + SEC_DEF("lwt_out", LWT_OUT, 0, SEC_NONE | SEC_SLOPPY_PFX), 7960 + SEC_DEF("lwt_xmit", LWT_XMIT, 0, SEC_NONE | SEC_SLOPPY_PFX), 7961 + SEC_DEF("lwt_seg6local", LWT_SEG6LOCAL, 0, SEC_NONE | SEC_SLOPPY_PFX), 7962 + SEC_DEF("cgroup_skb/ingress", CGROUP_SKB, BPF_CGROUP_INET_INGRESS, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7963 + SEC_DEF("cgroup_skb/egress", CGROUP_SKB, BPF_CGROUP_INET_EGRESS, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7964 + SEC_DEF("cgroup/skb", CGROUP_SKB, 0, SEC_NONE | SEC_SLOPPY_PFX), 7965 + SEC_DEF("cgroup/sock_create", CGROUP_SOCK, BPF_CGROUP_INET_SOCK_CREATE, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7966 + SEC_DEF("cgroup/sock_release", CGROUP_SOCK, BPF_CGROUP_INET_SOCK_RELEASE, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7967 + SEC_DEF("cgroup/sock", CGROUP_SOCK, BPF_CGROUP_INET_SOCK_CREATE, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7968 + SEC_DEF("cgroup/post_bind4", CGROUP_SOCK, BPF_CGROUP_INET4_POST_BIND, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7969 + SEC_DEF("cgroup/post_bind6", CGROUP_SOCK, BPF_CGROUP_INET6_POST_BIND, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7970 + SEC_DEF("cgroup/dev", CGROUP_DEVICE, BPF_CGROUP_DEVICE, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7971 + 
SEC_DEF("sockops", SOCK_OPS, BPF_CGROUP_SOCK_OPS, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7972 + SEC_DEF("sk_skb/stream_parser", SK_SKB, BPF_SK_SKB_STREAM_PARSER, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7973 + SEC_DEF("sk_skb/stream_verdict",SK_SKB, BPF_SK_SKB_STREAM_VERDICT, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7974 + SEC_DEF("sk_skb", SK_SKB, 0, SEC_NONE | SEC_SLOPPY_PFX), 7975 + SEC_DEF("sk_msg", SK_MSG, BPF_SK_MSG_VERDICT, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7976 + SEC_DEF("lirc_mode2", LIRC_MODE2, BPF_LIRC_MODE2, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7977 + SEC_DEF("flow_dissector", FLOW_DISSECTOR, BPF_FLOW_DISSECTOR, SEC_ATTACHABLE_OPT | SEC_SLOPPY_PFX), 7978 + SEC_DEF("cgroup/bind4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_BIND, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7979 + SEC_DEF("cgroup/bind6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_BIND, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7980 + SEC_DEF("cgroup/connect4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_CONNECT, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7981 + SEC_DEF("cgroup/connect6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_CONNECT, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7982 + SEC_DEF("cgroup/sendmsg4", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP4_SENDMSG, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7983 + SEC_DEF("cgroup/sendmsg6", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP6_SENDMSG, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7984 + SEC_DEF("cgroup/recvmsg4", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP4_RECVMSG, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7985 + SEC_DEF("cgroup/recvmsg6", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP6_RECVMSG, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7986 + SEC_DEF("cgroup/getpeername4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_GETPEERNAME, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7987 + SEC_DEF("cgroup/getpeername6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_GETPEERNAME, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7988 + SEC_DEF("cgroup/getsockname4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_GETSOCKNAME, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7989 + SEC_DEF("cgroup/getsockname6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_GETSOCKNAME, 
SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7990 + SEC_DEF("cgroup/sysctl", CGROUP_SYSCTL, BPF_CGROUP_SYSCTL, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7991 + SEC_DEF("cgroup/getsockopt", CGROUP_SOCKOPT, BPF_CGROUP_GETSOCKOPT, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7992 + SEC_DEF("cgroup/setsockopt", CGROUP_SOCKOPT, BPF_CGROUP_SETSOCKOPT, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 7993 + SEC_DEF("struct_ops+", STRUCT_OPS, 0, SEC_NONE), 7994 + SEC_DEF("sk_lookup", SK_LOOKUP, BPF_SK_LOOKUP, SEC_ATTACHABLE | SEC_SLOPPY_PFX), 8192 7995 }; 8193 - 8194 - #undef BPF_PROG_SEC_IMPL 8195 - #undef BPF_PROG_SEC 8196 - #undef BPF_APROG_SEC 8197 - #undef BPF_EAPROG_SEC 8198 - #undef BPF_APROG_COMPAT 8199 - #undef SEC_DEF 8200 7996 8201 7997 #define MAX_TYPE_NAME_SIZE 32 8202 7998 8203 7999 static const struct bpf_sec_def *find_sec_def(const char *sec_name) 8204 8000 { 8205 - int i, n = ARRAY_SIZE(section_defs); 8001 + const struct bpf_sec_def *sec_def; 8002 + enum sec_def_flags sec_flags; 8003 + int i, n = ARRAY_SIZE(section_defs), len; 8004 + bool strict = libbpf_mode & LIBBPF_STRICT_SEC_NAME; 8206 8005 8207 8006 for (i = 0; i < n; i++) { 8208 - if (strncmp(sec_name, 8209 - section_defs[i].sec, section_defs[i].len)) 8007 + sec_def = &section_defs[i]; 8008 + sec_flags = sec_def->cookie; 8009 + len = strlen(sec_def->sec); 8010 + 8011 + /* "type/" always has to have proper SEC("type/extras") form */ 8012 + if (sec_def->sec[len - 1] == '/') { 8013 + if (str_has_pfx(sec_name, sec_def->sec)) 8014 + return sec_def; 8210 8015 continue; 8211 - return &section_defs[i]; 8016 + } 8017 + 8018 + /* "type+" means it can be either exact SEC("type") or 8019 + * well-formed SEC("type/extras") with proper '/' separator 8020 + */ 8021 + if (sec_def->sec[len - 1] == '+') { 8022 + len--; 8023 + /* not even a prefix */ 8024 + if (strncmp(sec_name, sec_def->sec, len) != 0) 8025 + continue; 8026 + /* exact match or has '/' separator */ 8027 + if (sec_name[len] == '\0' || sec_name[len] == '/') 8028 + return sec_def; 8029 + continue; 
8030 + } 8031 + 8032 + /* SEC_SLOPPY_PFX definitions are allowed to be just prefix 8033 + * matches, unless strict section name mode 8034 + * (LIBBPF_STRICT_SEC_NAME) is enabled, in which case the 8035 + * match has to be exact. 8036 + */ 8037 + if ((sec_flags & SEC_SLOPPY_PFX) && !strict) { 8038 + if (str_has_pfx(sec_name, sec_def->sec)) 8039 + return sec_def; 8040 + continue; 8041 + } 8042 + 8043 + /* Definitions not marked SEC_SLOPPY_PFX (e.g., 8044 + * SEC("syscall")) are exact matches in both modes. 8045 + */ 8046 + if (strcmp(sec_name, sec_def->sec) == 0) 8047 + return sec_def; 8212 8048 } 8213 8049 return NULL; 8214 8050 } ··· 8151 8135 buf[0] = '\0'; 8152 8136 /* Forge string buf with all available names */ 8153 8137 for (i = 0; i < ARRAY_SIZE(section_defs); i++) { 8154 - if (attach_type && !section_defs[i].is_attachable) 8155 - continue; 8138 + const struct bpf_sec_def *sec_def = &section_defs[i]; 8139 + 8140 + if (attach_type) { 8141 + if (sec_def->preload_fn != libbpf_preload_prog) 8142 + continue; 8143 + 8144 + if (!(sec_def->cookie & SEC_ATTACHABLE)) 8145 + continue; 8146 + } 8156 8147 8157 8148 if (strlen(buf) + strlen(section_defs[i].sec) + 2 > len) { 8158 8149 free(buf); ··· 8483 8460 return -ESRCH; 8484 8461 } 8485 8462 8486 - static int libbpf_find_attach_btf_id(struct bpf_program *prog, int *btf_obj_fd, int *btf_type_id) 8463 + static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attach_name, 8464 + int *btf_obj_fd, int *btf_type_id) 8487 8465 { 8488 8466 enum bpf_attach_type attach_type = prog->expected_attach_type; 8489 8467 __u32 attach_prog_fd = prog->attach_prog_fd; 8490 - const char *attach_name; 8491 8468 int err = 0; 8492 - 8493 - if (!prog->sec_def || !prog->sec_def->is_attach_btf) { 8494 - pr_warn("failed to identify BTF ID based on ELF section name '%s'\n", 8495 - prog->sec_name); 8496 - return -ESRCH; 8497 - } 8498 - attach_name = prog->sec_name + prog->sec_def->len; 8499 8469 8500 8470 /* BPF program's BTF ID */ 
 	if (attach_prog_fd) {
···
 		return libbpf_err(-EINVAL);
 	}

-	if (!sec_def->is_attachable)
+	if (sec_def->preload_fn != libbpf_preload_prog)
+		return libbpf_err(-EINVAL);
+	if (!(sec_def->cookie & SEC_ATTACHABLE))
 		return libbpf_err(-EINVAL);

 	*attach_type = sec_def->expected_attach_type;
···
 	return 0;
 }

-static int poke_kprobe_events(bool add, const char *name, bool retprobe, uint64_t offset)
-{
-	int fd, ret = 0;
-	pid_t p = getpid();
-	char cmd[260], probename[128], probefunc[128];
-	const char *file = "/sys/kernel/debug/tracing/kprobe_events";
-
-	if (retprobe)
-		snprintf(probename, sizeof(probename), "kretprobes/%s_libbpf_%u", name, p);
-	else
-		snprintf(probename, sizeof(probename), "kprobes/%s_libbpf_%u", name, p);
-
-	if (offset)
-		snprintf(probefunc, sizeof(probefunc), "%s+%zu", name, (size_t)offset);
-
-	if (add) {
-		snprintf(cmd, sizeof(cmd), "%c:%s %s",
-			 retprobe ? 'r' : 'p',
-			 probename,
-			 offset ? probefunc : name);
-	} else {
-		snprintf(cmd, sizeof(cmd), "-:%s", probename);
-	}
-
-	fd = open(file, O_WRONLY | O_APPEND, 0);
-	if (!fd)
-		return -errno;
-	ret = write(fd, cmd, strlen(cmd));
-	if (ret < 0)
-		ret = -errno;
-	close(fd);
-
-	return ret;
-}
-
-static inline int add_kprobe_event_legacy(const char *name, bool retprobe, uint64_t offset)
-{
-	return poke_kprobe_events(true, name, retprobe, offset);
-}
-
-static inline int remove_kprobe_event_legacy(const char *name, bool retprobe)
-{
-	return poke_kprobe_events(false, name, retprobe, 0);
-}
-
 struct bpf_link_perf {
 	struct bpf_link link;
 	int perf_event_fd;
 	/* legacy kprobe support: keep track of probe identifier and type */
 	char *legacy_probe_name;
+	bool legacy_is_kprobe;
 	bool legacy_is_retprobe;
 };
+
+static int remove_kprobe_event_legacy(const char *probe_name, bool retprobe);
+static int remove_uprobe_event_legacy(const char *probe_name, bool retprobe);

 static int bpf_link_perf_detach(struct bpf_link *link)
 {
···
 	close(perf_link->perf_event_fd);
 	close(link->fd);

-	/* legacy kprobe needs to be removed after perf event fd closure */
-	if (perf_link->legacy_probe_name)
-		err = remove_kprobe_event_legacy(perf_link->legacy_probe_name,
-						 perf_link->legacy_is_retprobe);
+	/* legacy uprobe/kprobe needs to be removed after perf event fd closure */
+	if (perf_link->legacy_probe_name) {
+		if (perf_link->legacy_is_kprobe) {
+			err = remove_kprobe_event_legacy(perf_link->legacy_probe_name,
+							 perf_link->legacy_is_retprobe);
+		} else {
+			err = remove_uprobe_event_legacy(perf_link->legacy_probe_name,
+							 perf_link->legacy_is_retprobe);
+		}
+	}

 	return err;
 }
···
 	return ret;
 }

-static int determine_kprobe_perf_type_legacy(const char *func_name, bool is_retprobe)
-{
-	char file[192];
-
-	snprintf(file, sizeof(file),
-		 "/sys/kernel/debug/tracing/events/%s/%s_libbpf_%d/id",
-		 is_retprobe ? "kretprobes" : "kprobes",
-		 func_name, getpid());
-
-	return parse_uint_from_file(file, "%d\n");
-}
-
 static int determine_kprobe_perf_type(void)
 {
 	const char *file = "/sys/bus/event_source/devices/kprobe/type";
···
 	return pfd;
 }

-static int perf_event_kprobe_open_legacy(bool retprobe, const char *name, uint64_t offset, int pid)
+static int append_to_file(const char *file, const char *fmt, ...)
+{
+	int fd, n, err = 0;
+	va_list ap;
+
+	fd = open(file, O_WRONLY | O_APPEND, 0);
+	if (fd < 0)
+		return -errno;
+
+	va_start(ap, fmt);
+	n = vdprintf(fd, fmt, ap);
+	va_end(ap);
+
+	if (n < 0)
+		err = -errno;
+
+	close(fd);
+	return err;
+}
+
+static void gen_kprobe_legacy_event_name(char *buf, size_t buf_sz,
+					 const char *kfunc_name, size_t offset)
+{
+	snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx", getpid(), kfunc_name, offset);
+}
+
+static int add_kprobe_event_legacy(const char *probe_name, bool retprobe,
+				   const char *kfunc_name, size_t offset)
+{
+	const char *file = "/sys/kernel/debug/tracing/kprobe_events";
+
+	return append_to_file(file, "%c:%s/%s %s+0x%zx",
+			      retprobe ? 'r' : 'p',
+			      retprobe ? "kretprobes" : "kprobes",
+			      probe_name, kfunc_name, offset);
+}
+
+static int remove_kprobe_event_legacy(const char *probe_name, bool retprobe)
+{
+	const char *file = "/sys/kernel/debug/tracing/kprobe_events";
+
+	return append_to_file(file, "-:%s/%s", retprobe ? "kretprobes" : "kprobes", probe_name);
+}
+
+static int determine_kprobe_perf_type_legacy(const char *probe_name, bool retprobe)
+{
+	char file[256];
+
+	snprintf(file, sizeof(file),
+		 "/sys/kernel/debug/tracing/events/%s/%s/id",
+		 retprobe ? "kretprobes" : "kprobes", probe_name);
+
+	return parse_uint_from_file(file, "%d\n");
+}
+
+static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
+					 const char *kfunc_name, size_t offset, int pid)
 {
 	struct perf_event_attr attr = {};
 	char errmsg[STRERR_BUFSIZE];
 	int type, pfd, err;

-	err = add_kprobe_event_legacy(name, retprobe, offset);
+	err = add_kprobe_event_legacy(probe_name, retprobe, kfunc_name, offset);
 	if (err < 0) {
-		pr_warn("failed to add legacy kprobe event: %s\n",
+		pr_warn("failed to add legacy kprobe event for '%s+0x%zx': %s\n",
+			kfunc_name, offset,
 			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
 		return err;
 	}
-	type = determine_kprobe_perf_type_legacy(name, retprobe);
+	type = determine_kprobe_perf_type_legacy(probe_name, retprobe);
 	if (type < 0) {
-		pr_warn("failed to determine legacy kprobe event id: %s\n",
+		pr_warn("failed to determine legacy kprobe event id for '%s+0x%zx': %s\n",
+			kfunc_name, offset,
 			libbpf_strerror_r(type, errmsg, sizeof(errmsg)));
 		return type;
 	}
···
 	char errmsg[STRERR_BUFSIZE];
 	char *legacy_probe = NULL;
 	struct bpf_link *link;
-	unsigned long offset;
+	size_t offset;
 	bool retprobe, legacy;
 	int pfd, err;
···
 					    func_name, offset,
 					    -1 /* pid */, 0 /* ref_ctr_off */);
 	} else {
+		char probe_name[256];
+
+		gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name),
+					     func_name, offset);
+
 		legacy_probe = strdup(func_name);
 		if (!legacy_probe)
 			return libbpf_err_ptr(-ENOMEM);

-		pfd = perf_event_kprobe_open_legacy(retprobe, func_name,
+		pfd = perf_event_kprobe_open_legacy(legacy_probe, retprobe, func_name,
 						    offset, -1 /* pid */);
 	}
 	if (pfd < 0) {
-		pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
-			prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
-			libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
-		return libbpf_err_ptr(pfd);
+		err = -errno;
+		pr_warn("prog '%s': failed to create %s '%s+0x%zx' perf event: %s\n",
+			prog->name, retprobe ? "kretprobe" : "kprobe",
+			func_name, offset,
+			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		goto err_out;
 	}
 	link = bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts);
 	err = libbpf_get_error(link);
 	if (err) {
 		close(pfd);
-		pr_warn("prog '%s': failed to attach to %s '%s': %s\n",
-			prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
+		pr_warn("prog '%s': failed to attach to %s '%s+0x%zx': %s\n",
+			prog->name, retprobe ? "kretprobe" : "kprobe",
+			func_name, offset,
 			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
-		return libbpf_err_ptr(err);
+		goto err_out;
 	}
 	if (legacy) {
 		struct bpf_link_perf *perf_link = container_of(link, struct bpf_link_perf, link);

 		perf_link->legacy_probe_name = legacy_probe;
+		perf_link->legacy_is_kprobe = true;
 		perf_link->legacy_is_retprobe = retprobe;
 	}

 	return link;
+err_out:
+	free(legacy_probe);
+	return libbpf_err_ptr(err);
 }

 struct bpf_link *bpf_program__attach_kprobe(const struct bpf_program *prog,
···
 	return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
 }

-static struct bpf_link *attach_kprobe(const struct bpf_program *prog)
+static struct bpf_link *attach_kprobe(const struct bpf_program *prog, long cookie)
 {
 	DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts);
 	unsigned long offset = 0;
···
 	char *func;
 	int n, err;

-	func_name = prog->sec_name + prog->sec_def->len;
-	opts.retprobe = strcmp(prog->sec_def->sec, "kretprobe/") == 0;
+	opts.retprobe = str_has_pfx(prog->sec_name, "kretprobe/");
+	if (opts.retprobe)
+		func_name = prog->sec_name + sizeof("kretprobe/") - 1;
+	else
+		func_name = prog->sec_name + sizeof("kprobe/") - 1;

 	n = sscanf(func_name, "%m[a-zA-Z0-9_.]+%li", &func, &offset);
 	if (n < 1) {
···
 	return link;
 }

+static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz,
+					 const char *binary_path, uint64_t offset)
+{
+	int i;
+
+	snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx", getpid(), binary_path, (size_t)offset);
+
+	/* sanitize binary_path in the probe name */
+	for (i = 0; buf[i]; i++) {
+		if (!isalnum(buf[i]))
+			buf[i] = '_';
+	}
+}
+
+static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
+					  const char *binary_path, size_t offset)
+{
+	const char *file = "/sys/kernel/debug/tracing/uprobe_events";
+
+	return append_to_file(file, "%c:%s/%s %s:0x%zx",
+			      retprobe ? 'r' : 'p',
+			      retprobe ? "uretprobes" : "uprobes",
+			      probe_name, binary_path, offset);
+}
+
+static inline int remove_uprobe_event_legacy(const char *probe_name, bool retprobe)
+{
+	const char *file = "/sys/kernel/debug/tracing/uprobe_events";
+
+	return append_to_file(file, "-:%s/%s", retprobe ? "uretprobes" : "uprobes", probe_name);
+}
+
+static int determine_uprobe_perf_type_legacy(const char *probe_name, bool retprobe)
+{
+	char file[512];
+
+	snprintf(file, sizeof(file),
+		 "/sys/kernel/debug/tracing/events/%s/%s/id",
+		 retprobe ? "uretprobes" : "uprobes", probe_name);
+
+	return parse_uint_from_file(file, "%d\n");
+}
+
+static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
+					 const char *binary_path, size_t offset, int pid)
+{
+	struct perf_event_attr attr;
+	int type, pfd, err;
+
+	err = add_uprobe_event_legacy(probe_name, retprobe, binary_path, offset);
+	if (err < 0) {
+		pr_warn("failed to add legacy uprobe event for %s:0x%zx: %d\n",
+			binary_path, (size_t)offset, err);
+		return err;
+	}
+	type = determine_uprobe_perf_type_legacy(probe_name, retprobe);
+	if (type < 0) {
+		pr_warn("failed to determine legacy uprobe event id for %s:0x%zx: %d\n",
+			binary_path, offset, err);
+		return type;
+	}
+
+	memset(&attr, 0, sizeof(attr));
+	attr.size = sizeof(attr);
+	attr.config = type;
+	attr.type = PERF_TYPE_TRACEPOINT;
+
+	pfd = syscall(__NR_perf_event_open, &attr,
+		      pid < 0 ? -1 : pid, /* pid */
+		      pid == -1 ? 0 : -1, /* cpu */
+		      -1 /* group_fd */,  PERF_FLAG_FD_CLOEXEC);
+	if (pfd < 0) {
+		err = -errno;
+		pr_warn("legacy uprobe perf_event_open() failed: %d\n", err);
+		return err;
+	}
+	return pfd;
+}
+
 LIBBPF_API struct bpf_link *
 bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
 				const char *binary_path, size_t func_offset,
 				const struct bpf_uprobe_opts *opts)
 {
 	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
-	char errmsg[STRERR_BUFSIZE];
+	char errmsg[STRERR_BUFSIZE], *legacy_probe = NULL;
 	struct bpf_link *link;
 	size_t ref_ctr_off;
 	int pfd, err;
-	bool retprobe;
+	bool retprobe, legacy;

 	if (!OPTS_VALID(opts, bpf_uprobe_opts))
 		return libbpf_err_ptr(-EINVAL);
···
 	ref_ctr_off = OPTS_GET(opts, ref_ctr_offset, 0);
 	pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0);

-	pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
-				    func_offset, pid, ref_ctr_off);
+	legacy = determine_uprobe_perf_type() < 0;
+	if (!legacy) {
+		pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
+					    func_offset, pid, ref_ctr_off);
+	} else {
+		char probe_name[512];
+
+		if (ref_ctr_off)
+			return libbpf_err_ptr(-EINVAL);
+
+		gen_uprobe_legacy_event_name(probe_name, sizeof(probe_name),
+					     binary_path, func_offset);
+
+		legacy_probe = strdup(probe_name);
+		if (!legacy_probe)
+			return libbpf_err_ptr(-ENOMEM);
+
+		pfd = perf_event_uprobe_open_legacy(legacy_probe, retprobe,
+						    binary_path, func_offset, pid);
+	}
 	if (pfd < 0) {
+		err = -errno;
 		pr_warn("prog '%s': failed to create %s '%s:0x%zx' perf event: %s\n",
 			prog->name, retprobe ? "uretprobe" : "uprobe",
 			binary_path, func_offset,
-			libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
-		return libbpf_err_ptr(pfd);
+			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		goto err_out;
 	}
+
 	link = bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts);
 	err = libbpf_get_error(link);
 	if (err) {
···
 			prog->name, retprobe ? "uretprobe" : "uprobe",
 			binary_path, func_offset,
 			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
-		return libbpf_err_ptr(err);
+		goto err_out;
+	}
+	if (legacy) {
+		struct bpf_link_perf *perf_link = container_of(link, struct bpf_link_perf, link);
+
+		perf_link->legacy_probe_name = legacy_probe;
+		perf_link->legacy_is_kprobe = false;
+		perf_link->legacy_is_retprobe = retprobe;
 	}
 	return link;
+err_out:
+	free(legacy_probe);
+	return libbpf_err_ptr(err);
+
 }

 struct bpf_link *bpf_program__attach_uprobe(const struct bpf_program *prog,
···
 	return bpf_program__attach_tracepoint_opts(prog, tp_category, tp_name, NULL);
 }

-static struct bpf_link *attach_tp(const struct bpf_program *prog)
+static struct bpf_link *attach_tp(const struct bpf_program *prog, long cookie)
 {
 	char *sec_name, *tp_cat, *tp_name;
 	struct bpf_link *link;
···
 	if (!sec_name)
 		return libbpf_err_ptr(-ENOMEM);

-	/* extract "tp/<category>/<name>" */
-	tp_cat = sec_name + prog->sec_def->len;
+	/* extract "tp/<category>/<name>" or "tracepoint/<category>/<name>" */
+	if (str_has_pfx(prog->sec_name, "tp/"))
+		tp_cat = sec_name + sizeof("tp/") - 1;
+	else
+		tp_cat = sec_name + sizeof("tracepoint/") - 1;
 	tp_name = strchr(tp_cat, '/');
 	if (!tp_name) {
 		free(sec_name);
···
 	return link;
 }

-static struct bpf_link *attach_raw_tp(const struct bpf_program *prog)
+static struct bpf_link *attach_raw_tp(const struct bpf_program *prog, long cookie)
 {
-	const char *tp_name = prog->sec_name + prog->sec_def->len;
+	const char *tp_name;
+
+	if (str_has_pfx(prog->sec_name, "raw_tp/"))
+		tp_name = prog->sec_name + sizeof("raw_tp/") - 1;
+	else
+		tp_name = prog->sec_name + sizeof("raw_tracepoint/") - 1;

 	return bpf_program__attach_raw_tracepoint(prog, tp_name);
 }
···
 	return bpf_program__attach_btf_id(prog);
 }

-static struct bpf_link *attach_trace(const struct bpf_program *prog)
+static struct bpf_link *attach_trace(const struct bpf_program *prog, long cookie)
 {
 	return bpf_program__attach_trace(prog);
 }

-static struct bpf_link *attach_lsm(const struct bpf_program *prog)
+static struct bpf_link *attach_lsm(const struct bpf_program *prog, long cookie)
 {
 	return bpf_program__attach_lsm(prog);
 }
···
 	return link;
 }

-static struct bpf_link *attach_iter(const struct bpf_program *prog)
+static struct bpf_link *attach_iter(const struct bpf_program *prog, long cookie)
 {
 	return bpf_program__attach_iter(prog, NULL);
 }
···
 	if (!prog->sec_def || !prog->sec_def->attach_fn)
 		return libbpf_err_ptr(-ESRCH);

-	return prog->sec_def->attach_fn(prog);
+	return prog->sec_def->attach_fn(prog, prog->sec_def->cookie);
 }

 static int bpf_link__detach_struct_ops(struct bpf_link *link)
+58 -9
tools/lib/bpf/libbpf.h
··· 269 269 /* custom user-provided value fetchable through bpf_get_attach_cookie() */ 270 270 __u64 bpf_cookie; 271 271 /* function's offset to install kprobe to */ 272 - unsigned long offset; 272 + size_t offset; 273 273 /* kprobe is return probe */ 274 274 bool retprobe; 275 275 size_t :0; ··· 481 481 unsigned int map_flags; 482 482 }; 483 483 484 - /* 485 - * The 'struct bpf_map' in include/linux/bpf.h is internal to the kernel, 486 - * so no need to worry about a name clash. 484 + /** 485 + * @brief **bpf_object__find_map_by_name()** returns BPF map of 486 + * the given name, if it exists within the passed BPF object 487 + * @param obj BPF object 488 + * @param name name of the BPF map 489 + * @return BPF map instance, if such map exists within the BPF object; 490 + * or NULL otherwise. 487 491 */ 488 492 LIBBPF_API struct bpf_map * 489 493 bpf_object__find_map_by_name(const struct bpf_object *obj, const char *name); ··· 513 509 LIBBPF_API struct bpf_map * 514 510 bpf_map__prev(const struct bpf_map *map, const struct bpf_object *obj); 515 511 516 - /* get/set map FD */ 512 + /** 513 + * @brief **bpf_map__fd()** gets the file descriptor of the passed 514 + * BPF map 515 + * @param map the BPF map instance 516 + * @return the file descriptor; or -EINVAL in case of an error 517 + */ 517 518 LIBBPF_API int bpf_map__fd(const struct bpf_map *map); 518 519 LIBBPF_API int bpf_map__reuse_fd(struct bpf_map *map, int fd); 519 520 /* get map definition */ ··· 559 550 const void *data, size_t size); 560 551 LIBBPF_API const void *bpf_map__initial_value(struct bpf_map *map, size_t *psize); 561 552 LIBBPF_API bool bpf_map__is_offload_neutral(const struct bpf_map *map); 553 + 554 + /** 555 + * @brief **bpf_map__is_internal()** tells the caller whether or not the 556 + * passed map is a special map created by libbpf automatically for things like 557 + * global variables, __ksym externs, Kconfig values, etc 558 + * @param map the bpf_map 559 + * @return true, if the map is an 
internal map; false, otherwise 560 + */ 562 561 LIBBPF_API bool bpf_map__is_internal(const struct bpf_map *map); 563 562 LIBBPF_API int bpf_map__set_pin_path(struct bpf_map *map, const char *path); 564 563 LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map); ··· 578 561 LIBBPF_API int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd); 579 562 LIBBPF_API struct bpf_map *bpf_map__inner_map(struct bpf_map *map); 580 563 564 + /** 565 + * @brief **libbpf_get_error()** extracts the error code from the passed 566 + * pointer 567 + * @param ptr pointer returned from libbpf API function 568 + * @return error code; or 0 if no error occurred 569 + * 570 + * Many libbpf API functions which return pointers have logic to encode error 571 + * codes as pointers, and do not return NULL. This means **libbpf_get_error()** 572 + * should be used on the return value from these functions immediately after 573 + * calling the API function, with no intervening calls that could clobber the 574 + * `errno` variable. Consult the individual function's documentation to verify 575 + * if this logic applies. 576 + * 577 + * For these API functions, if `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)` 578 + * is enabled, NULL is returned on error instead. 579 + * 580 + * If ptr is NULL, then errno should already be set by the failing 581 + * API, because libbpf never returns NULL on success and it now always 582 + * sets errno on error.
583 + * 584 + * Example usage: 585 + * 586 + * struct perf_buffer *pb; 587 + * 588 + * pb = perf_buffer__new(bpf_map__fd(obj->maps.events), PERF_BUFFER_PAGES, &opts); 589 + * err = libbpf_get_error(pb); 590 + * if (err) { 591 + * pb = NULL; 592 + * fprintf(stderr, "failed to open perf buffer: %d\n", err); 593 + * goto cleanup; 594 + * } 595 + */ 581 596 LIBBPF_API long libbpf_get_error(const void *ptr); 582 597 583 598 struct bpf_prog_load_attr { ··· 874 825 LIBBPF_API void 875 826 bpf_program__bpil_offs_to_addr(struct bpf_prog_info_linear *info_linear); 876 827 877 - /* 878 - * A helper function to get the number of possible CPUs before looking up 879 - * per-CPU maps. Negative errno is returned on failure. 828 + /** 829 + * @brief **libbpf_num_possible_cpus()** is a helper function to get the 830 + * number of possible CPUs that the host kernel supports and expects. 831 + * @return number of possible CPUs; or error code on failure 880 832 * 881 833 * Example usage: 882 834 * ··· 887 837 * } 888 838 * long values[ncpus]; 889 839 * bpf_map_lookup_elem(per_cpu_map_fd, key, values); 890 - * 891 840 */ 892 841 LIBBPF_API int libbpf_num_possible_cpus(void); 893 842
+7
tools/lib/bpf/libbpf_internal.h
··· 89 89 (offsetof(TYPE, FIELD) + sizeof(((TYPE *)0)->FIELD)) 90 90 #endif 91 91 92 + /* Check whether a string `str` has prefix `pfx`, regardless of whether `pfx` 93 + * is a string literal known at compilation time or a char * pointer known 94 + * only at runtime. 95 + */ 96 + #define str_has_pfx(str, pfx) \ 97 + (strncmp(str, pfx, __builtin_constant_p(pfx) ? sizeof(pfx) - 1 : strlen(pfx)) == 0) 98 + 92 99 /* Symbol versioning is different between static and shared library. 93 100 * Properly versioned symbols are needed for shared library, but 94 101 * only the symbol of the new version is needed for static library.
+9
tools/lib/bpf/libbpf_legacy.h
··· 46 46 */ 47 47 LIBBPF_STRICT_DIRECT_ERRS = 0x02, 48 48 49 + /* 50 + * Enforce strict BPF program section (SEC()) names. 51 + * E.g., while previously SEC("xdp_whatever") or SEC("perf_event_blah") were 52 + * allowed, with LIBBPF_STRICT_SEC_NAME these will become 53 + * unrecognized by libbpf and would have to be just 54 + * SEC("xdp") and SEC("perf_event"). 55 + */ 56 + LIBBPF_STRICT_SEC_NAME = 0x04, 57 + 49 58 __LIBBPF_STRICT_LAST, 50 59
+4 -2
tools/lib/bpf/skel_internal.h
··· 105 105 err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr)); 106 106 if (err < 0 || (int)attr.test.retval < 0) { 107 107 opts->errstr = "failed to execute loader prog"; 108 - if (err < 0) 108 + if (err < 0) { 109 109 err = -errno; 110 - else 110 + } else { 111 111 err = (int)attr.test.retval; 112 + errno = -err; 113 + } 112 114 goto out; 113 115 } 114 116 err = 0;
+2 -1
tools/testing/selftests/bpf/Makefile
··· 315 315 linked_vars.skel.h linked_maps.skel.h 316 316 317 317 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ 318 - test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c 318 + test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \ 319 + trace_vprintk.c 319 320 SKEL_BLACKLIST += $$(LSKELS) 320 321 321 322 test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
+13
tools/testing/selftests/bpf/README.rst
··· 242 242 .. Links 243 243 .. _clang reloc patch: https://reviews.llvm.org/D102712 244 244 .. _kernel llvm reloc: /Documentation/bpf/llvm_reloc.rst 245 + 246 + Clang dependencies for the u32 spill test (xdpwall) 247 + =================================================== 248 + The xdpwall selftest requires a change in `Clang 14`__. 249 + 250 + Without it, the xdpwall selftest will fail and the error message 251 + from running test_progs will look like: 252 + 253 + .. code-block:: console 254 + 255 + test_xdpwall:FAIL:Does LLVM have https://reviews.llvm.org/D109073? unexpected error: -4007 256 + 257 + __ https://reviews.llvm.org/D109073
+20 -4
tools/testing/selftests/bpf/prog_tests/attach_probe.c
··· 14 14 struct test_attach_probe* skel; 15 15 size_t uprobe_offset; 16 16 ssize_t base_addr, ref_ctr_offset; 17 + bool legacy; 18 + 19 + /* Check if new-style kprobe/uprobe API is supported. 20 + * Kernels that support new FD-based kprobe and uprobe BPF attachment 21 + * through perf_event_open() syscall expose 22 + * /sys/bus/event_source/devices/kprobe/type and 23 + * /sys/bus/event_source/devices/uprobe/type files, respectively. They 24 + * contain magic numbers that are passed as "type" field of 25 + * perf_event_attr. Lack of such file in the system indicates legacy 26 + * kernel with old-style kprobe/uprobe attach interface through 27 + * creating per-probe event through tracefs. For such cases 28 + * ref_ctr_offset feature is not supported, so we don't test it. 29 + */ 30 + legacy = access("/sys/bus/event_source/devices/kprobe/type", F_OK) != 0; 17 31 18 32 base_addr = get_base_addr(); 19 33 if (CHECK(base_addr < 0, "get_base_addr", ··· 59 45 goto cleanup; 60 46 skel->links.handle_kretprobe = kretprobe_link; 61 47 62 - ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before"); 48 + if (!legacy) 49 + ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before"); 63 50 64 51 uprobe_opts.retprobe = false; 65 - uprobe_opts.ref_ctr_offset = ref_ctr_offset; 52 + uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset; 66 53 uprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe, 67 54 0 /* self pid */, 68 55 "/proc/self/exe", ··· 73 58 goto cleanup; 74 59 skel->links.handle_uprobe = uprobe_link; 75 60 76 - ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after"); 61 + if (!legacy) 62 + ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after"); 77 63 78 64 /* if uprobe uses ref_ctr, uretprobe has to use ref_ctr as well */ 79 65 uprobe_opts.retprobe = true; 80 - uprobe_opts.ref_ctr_offset = ref_ctr_offset; 66 + uprobe_opts.ref_ctr_offset = legacy ? 
0 : ref_ctr_offset; 81 67 uretprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe, 82 68 -1 /* any pid */, 83 69 "/proc/self/exe",
+21 -6
tools/testing/selftests/bpf/prog_tests/btf_dump.c
··· 358 358 TEST_BTF_DUMP_DATA_OVER(btf, d, NULL, str, int, sizeof(int)-1, "", 1); 359 359 360 360 #ifdef __SIZEOF_INT128__ 361 - TEST_BTF_DUMP_DATA(btf, d, NULL, str, __int128, BTF_F_COMPACT, 362 - "(__int128)0xffffffffffffffff", 363 - 0xffffffffffffffff); 364 - ASSERT_OK(btf_dump_data(btf, d, "__int128", NULL, 0, &i, 16, str, 365 - "(__int128)0xfffffffffffffffffffffffffffffffe"), 366 - "dump __int128"); 361 + /* gcc encodes the unsigned __int128 type with the name "__int128 unsigned" 362 + * in DWARF, while clang encodes it with the name "unsigned __int128". 363 + * Check which variant is available before running the actual test. 364 + */ 365 + if (btf__find_by_name(btf, "unsigned __int128") > 0) { 366 + TEST_BTF_DUMP_DATA(btf, d, NULL, str, unsigned __int128, BTF_F_COMPACT, 367 + "(unsigned __int128)0xffffffffffffffff", 368 + 0xffffffffffffffff); 369 + ASSERT_OK(btf_dump_data(btf, d, "unsigned __int128", NULL, 0, &i, 16, str, 370 + "(unsigned __int128)0xfffffffffffffffffffffffffffffffe"), 371 + "dump unsigned __int128"); 372 + } else if (btf__find_by_name(btf, "__int128 unsigned") > 0) { 373 + TEST_BTF_DUMP_DATA(btf, d, NULL, str, __int128 unsigned, BTF_F_COMPACT, 374 + "(__int128 unsigned)0xffffffffffffffff", 375 + 0xffffffffffffffff); 376 + ASSERT_OK(btf_dump_data(btf, d, "__int128 unsigned", NULL, 0, &i, 16, str, 377 + "(__int128 unsigned)0xfffffffffffffffffffffffffffffffe"), 378 + "dump unsigned __int128"); 379 + } else { 380 + ASSERT_TRUE(false, "unsigned_int128_not_found"); 381 + } 367 382 #endif 368 383 }
+2 -2
tools/testing/selftests/bpf/prog_tests/flow_dissector.c
··· 458 458 return -1; 459 459 460 460 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 461 - snprintf(prog_name, sizeof(prog_name), "flow_dissector/%i", i); 461 + snprintf(prog_name, sizeof(prog_name), "flow_dissector_%d", i); 462 462 463 - prog = bpf_object__find_program_by_title(obj, prog_name); 463 + prog = bpf_object__find_program_by_name(obj, prog_name); 464 464 if (!prog) 465 465 return -1; 466 466
+2 -3
tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
··· 38 38 39 39 static void close_perf_events(void) 40 40 { 41 - int cpu = 0; 42 - int fd; 41 + int cpu, fd; 43 42 44 - while (cpu++ < cpu_cnt) { 43 + for (cpu = 0; cpu < cpu_cnt; cpu++) { 45 44 fd = pfd_array[cpu]; 46 45 if (fd < 0) 47 46 break;
+2 -2
tools/testing/selftests/bpf/prog_tests/probe_user.c
··· 3 3 4 4 void test_probe_user(void) 5 5 { 6 - const char *prog_name = "kprobe/__sys_connect"; 6 + const char *prog_name = "handle_sys_connect"; 7 7 const char *obj_file = "./test_probe_user.o"; 8 8 DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts, ); 9 9 int err, results_map_fd, sock_fd, duration = 0; ··· 18 18 if (!ASSERT_OK_PTR(obj, "obj_open_file")) 19 19 return; 20 20 21 - kprobe_prog = bpf_object__find_program_by_title(obj, prog_name); 21 + kprobe_prog = bpf_object__find_program_by_name(obj, prog_name); 22 22 if (CHECK(!kprobe_prog, "find_probe", 23 23 "prog '%s' not found\n", prog_name)) 24 24 goto cleanup;
+36 -16
tools/testing/selftests/bpf/prog_tests/reference_tracking.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <test_progs.h> 3 3 4 + static void toggle_object_autoload_progs(const struct bpf_object *obj, 5 + const char *name_load) 6 + { 7 + struct bpf_program *prog; 8 + 9 + bpf_object__for_each_program(prog, obj) { 10 + const char *name = bpf_program__name(prog); 11 + 12 + if (!strcmp(name_load, name)) 13 + bpf_program__set_autoload(prog, true); 14 + else 15 + bpf_program__set_autoload(prog, false); 16 + } 17 + } 18 + 4 19 void test_reference_tracking(void) 5 20 { 6 21 const char *file = "test_sk_lookup_kern.o"; ··· 24 9 .object_name = obj_name, 25 10 .relaxed_maps = true, 26 11 ); 27 - struct bpf_object *obj; 12 + struct bpf_object *obj_iter, *obj = NULL; 28 13 struct bpf_program *prog; 29 14 __u32 duration = 0; 30 15 int err = 0; 31 16 32 - obj = bpf_object__open_file(file, &open_opts); 33 - if (!ASSERT_OK_PTR(obj, "obj_open_file")) 17 + obj_iter = bpf_object__open_file(file, &open_opts); 18 + if (!ASSERT_OK_PTR(obj_iter, "obj_iter_open_file")) 34 19 return; 35 20 36 - if (CHECK(strcmp(bpf_object__name(obj), obj_name), "obj_name", 21 + if (CHECK(strcmp(bpf_object__name(obj_iter), obj_name), "obj_name", 37 22 "wrong obj name '%s', expected '%s'\n", 38 - bpf_object__name(obj), obj_name)) 23 + bpf_object__name(obj_iter), obj_name)) 39 24 goto cleanup; 40 25 41 - bpf_object__for_each_program(prog, obj) { 42 - const char *title; 26 + bpf_object__for_each_program(prog, obj_iter) { 27 + const char *name; 43 28 44 - /* Ignore .text sections */ 45 - title = bpf_program__section_name(prog); 46 - if (strstr(title, ".text") != NULL) 29 + name = bpf_program__name(prog); 30 + if (!test__start_subtest(name)) 47 31 continue; 48 32 49 - if (!test__start_subtest(title)) 50 - continue; 33 + obj = bpf_object__open_file(file, &open_opts); 34 + if (!ASSERT_OK_PTR(obj, "obj_open_file")) 35 + goto cleanup; 51 36 37 + toggle_object_autoload_progs(obj, name); 52 38 /* Expect verifier failure if test name has 'err' */ 53 - if (strstr(title, 
"err_") != NULL) { 39 + if (strncmp(name, "err_", sizeof("err_") - 1) == 0) { 54 40 libbpf_print_fn_t old_print_fn; 55 41 56 42 old_print_fn = libbpf_set_print(NULL); 57 - err = !bpf_program__load(prog, "GPL", 0); 43 + err = !bpf_object__load(obj); 58 44 libbpf_set_print(old_print_fn); 59 45 } else { 60 - err = bpf_program__load(prog, "GPL", 0); 46 + err = bpf_object__load(obj); 61 47 } 62 - CHECK(err, title, "\n"); 48 + ASSERT_OK(err, name); 49 + 50 + bpf_object__close(obj); 51 + obj = NULL; 63 52 } 64 53 65 54 cleanup: 66 55 bpf_object__close(obj); 56 + bpf_object__close(obj_iter); 67 57 }
+1 -1
tools/testing/selftests/bpf/prog_tests/sk_assign.c
··· 48 48 return false; 49 49 sprintf(tc_cmd, "%s %s %s %s", "tc filter add dev lo ingress bpf", 50 50 "direct-action object-file ./test_sk_assign.o", 51 - "section classifier/sk_assign_test", 51 + "section tc", 52 52 (env.verbosity < VERBOSE_VERY) ? " 2>/dev/null" : "verbose"); 53 53 if (CHECK(system(tc_cmd), "BPF load failed;", 54 54 "run with -vv for more info\n"))
+15 -15
tools/testing/selftests/bpf/prog_tests/sockopt_multi.c
··· 2 2 #include <test_progs.h> 3 3 #include "cgroup_helpers.h" 4 4 5 - static int prog_attach(struct bpf_object *obj, int cgroup_fd, const char *title) 5 + static int prog_attach(struct bpf_object *obj, int cgroup_fd, const char *title, const char *name) 6 6 { 7 7 enum bpf_attach_type attach_type; 8 8 enum bpf_prog_type prog_type; ··· 15 15 return -1; 16 16 } 17 17 18 - prog = bpf_object__find_program_by_title(obj, title); 18 + prog = bpf_object__find_program_by_name(obj, name); 19 19 if (!prog) { 20 - log_err("Failed to find %s BPF program", title); 20 + log_err("Failed to find %s BPF program", name); 21 21 return -1; 22 22 } 23 23 24 24 err = bpf_prog_attach(bpf_program__fd(prog), cgroup_fd, 25 25 attach_type, BPF_F_ALLOW_MULTI); 26 26 if (err) { 27 - log_err("Failed to attach %s BPF program", title); 27 + log_err("Failed to attach %s BPF program", name); 28 28 return -1; 29 29 } 30 30 31 31 return 0; 32 32 } 33 33 34 - static int prog_detach(struct bpf_object *obj, int cgroup_fd, const char *title) 34 + static int prog_detach(struct bpf_object *obj, int cgroup_fd, const char *title, const char *name) 35 35 { 36 36 enum bpf_attach_type attach_type; 37 37 enum bpf_prog_type prog_type; ··· 42 42 if (err) 43 43 return -1; 44 44 45 - prog = bpf_object__find_program_by_title(obj, title); 45 + prog = bpf_object__find_program_by_name(obj, name); 46 46 if (!prog) 47 47 return -1; 48 48 ··· 89 89 * - child: 0x80 -> 0x90 90 90 */ 91 91 92 - err = prog_attach(obj, cg_child, "cgroup/getsockopt/child"); 92 + err = prog_attach(obj, cg_child, "cgroup/getsockopt", "_getsockopt_child"); 93 93 if (err) 94 94 goto detach; 95 95 ··· 113 113 * - parent: 0x90 -> 0xA0 114 114 */ 115 115 116 - err = prog_attach(obj, cg_parent, "cgroup/getsockopt/parent"); 116 + err = prog_attach(obj, cg_parent, "cgroup/getsockopt", "_getsockopt_parent"); 117 117 if (err) 118 118 goto detach; 119 119 ··· 157 157 * - parent: unexpected 0x40, EPERM 158 158 */ 159 159 160 - err = prog_detach(obj, cg_child, 
"cgroup/getsockopt/child"); 160 + err = prog_detach(obj, cg_child, "cgroup/getsockopt", "_getsockopt_child"); 161 161 if (err) { 162 162 log_err("Failed to detach child program"); 163 163 goto detach; ··· 198 198 } 199 199 200 200 detach: 201 - prog_detach(obj, cg_child, "cgroup/getsockopt/child"); 202 - prog_detach(obj, cg_parent, "cgroup/getsockopt/parent"); 201 + prog_detach(obj, cg_child, "cgroup/getsockopt", "_getsockopt_child"); 202 + prog_detach(obj, cg_parent, "cgroup/getsockopt", "_getsockopt_parent"); 203 203 204 204 return err; 205 205 } ··· 236 236 237 237 /* Attach child program and make sure it adds 0x10. */ 238 238 239 - err = prog_attach(obj, cg_child, "cgroup/setsockopt"); 239 + err = prog_attach(obj, cg_child, "cgroup/setsockopt", "_setsockopt"); 240 240 if (err) 241 241 goto detach; 242 242 ··· 263 263 264 264 /* Attach parent program and make sure it adds another 0x10. */ 265 265 266 - err = prog_attach(obj, cg_parent, "cgroup/setsockopt"); 266 + err = prog_attach(obj, cg_parent, "cgroup/setsockopt", "_setsockopt"); 267 267 if (err) 268 268 goto detach; 269 269 ··· 289 289 } 290 290 291 291 detach: 292 - prog_detach(obj, cg_child, "cgroup/setsockopt"); 293 - prog_detach(obj, cg_parent, "cgroup/setsockopt"); 292 + prog_detach(obj, cg_child, "cgroup/setsockopt", "_setsockopt"); 293 + prog_detach(obj, cg_parent, "cgroup/setsockopt", "_setsockopt"); 294 294 295 295 return err; 296 296 }
+29 -29
tools/testing/selftests/bpf/prog_tests/tailcalls.c
··· 21 21 if (CHECK_FAIL(err)) 22 22 return; 23 23 24 - prog = bpf_object__find_program_by_title(obj, "classifier"); 24 + prog = bpf_object__find_program_by_name(obj, "entry"); 25 25 if (CHECK_FAIL(!prog)) 26 26 goto out; 27 27 ··· 38 38 goto out; 39 39 40 40 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 41 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 41 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 42 42 43 - prog = bpf_object__find_program_by_title(obj, prog_name); 43 + prog = bpf_object__find_program_by_name(obj, prog_name); 44 44 if (CHECK_FAIL(!prog)) 45 45 goto out; 46 46 ··· 70 70 err, errno, retval); 71 71 72 72 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 73 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 73 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 74 74 75 - prog = bpf_object__find_program_by_title(obj, prog_name); 75 + prog = bpf_object__find_program_by_name(obj, prog_name); 76 76 if (CHECK_FAIL(!prog)) 77 77 goto out; 78 78 ··· 92 92 93 93 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 94 94 j = bpf_map__def(prog_array)->max_entries - 1 - i; 95 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", j); 95 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", j); 96 96 97 - prog = bpf_object__find_program_by_title(obj, prog_name); 97 + prog = bpf_object__find_program_by_name(obj, prog_name); 98 98 if (CHECK_FAIL(!prog)) 99 99 goto out; 100 100 ··· 159 159 if (CHECK_FAIL(err)) 160 160 return; 161 161 162 - prog = bpf_object__find_program_by_title(obj, "classifier"); 162 + prog = bpf_object__find_program_by_name(obj, "entry"); 163 163 if (CHECK_FAIL(!prog)) 164 164 goto out; 165 165 ··· 176 176 goto out; 177 177 178 178 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 179 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 179 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 180 180 181 - prog = 
bpf_object__find_program_by_title(obj, prog_name); 181 + prog = bpf_object__find_program_by_name(obj, prog_name); 182 182 if (CHECK_FAIL(!prog)) 183 183 goto out; 184 184 ··· 233 233 if (CHECK_FAIL(err)) 234 234 return; 235 235 236 - prog = bpf_object__find_program_by_title(obj, "classifier"); 236 + prog = bpf_object__find_program_by_name(obj, "entry"); 237 237 if (CHECK_FAIL(!prog)) 238 238 goto out; 239 239 ··· 249 249 if (CHECK_FAIL(map_fd < 0)) 250 250 goto out; 251 251 252 - prog = bpf_object__find_program_by_title(obj, "classifier/0"); 252 + prog = bpf_object__find_program_by_name(obj, "classifier_0"); 253 253 if (CHECK_FAIL(!prog)) 254 254 goto out; 255 255 ··· 329 329 if (CHECK_FAIL(err)) 330 330 return; 331 331 332 - prog = bpf_object__find_program_by_title(obj, "classifier"); 332 + prog = bpf_object__find_program_by_name(obj, "entry"); 333 333 if (CHECK_FAIL(!prog)) 334 334 goto out; 335 335 ··· 354 354 return; 355 355 356 356 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 357 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 357 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 358 358 359 - prog = bpf_object__find_program_by_title(obj, prog_name); 359 + prog = bpf_object__find_program_by_name(obj, prog_name); 360 360 if (CHECK_FAIL(!prog)) 361 361 goto out; 362 362 ··· 417 417 if (CHECK_FAIL(err)) 418 418 return; 419 419 420 - prog = bpf_object__find_program_by_title(obj, "classifier"); 420 + prog = bpf_object__find_program_by_name(obj, "entry"); 421 421 if (CHECK_FAIL(!prog)) 422 422 goto out; 423 423 ··· 442 442 return; 443 443 444 444 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 445 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 445 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 446 446 447 - prog = bpf_object__find_program_by_title(obj, prog_name); 447 + prog = bpf_object__find_program_by_name(obj, prog_name); 448 448 if (CHECK_FAIL(!prog)) 449 449 goto out; 450 
450 ··· 503 503 if (CHECK_FAIL(err)) 504 504 return; 505 505 506 - prog = bpf_object__find_program_by_title(obj, "classifier"); 506 + prog = bpf_object__find_program_by_name(obj, "entry"); 507 507 if (CHECK_FAIL(!prog)) 508 508 goto out; 509 509 ··· 521 521 522 522 /* nop -> jmp */ 523 523 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 524 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 524 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 525 525 526 - prog = bpf_object__find_program_by_title(obj, prog_name); 526 + prog = bpf_object__find_program_by_name(obj, prog_name); 527 527 if (CHECK_FAIL(!prog)) 528 528 goto out; 529 529 ··· 587 587 if (CHECK_FAIL(err)) 588 588 return; 589 589 590 - prog = bpf_object__find_program_by_title(obj, "classifier"); 590 + prog = bpf_object__find_program_by_name(obj, "entry"); 591 591 if (CHECK_FAIL(!prog)) 592 592 goto out; 593 593 ··· 603 603 if (CHECK_FAIL(map_fd < 0)) 604 604 goto out; 605 605 606 - prog = bpf_object__find_program_by_title(obj, "classifier/0"); 606 + prog = bpf_object__find_program_by_name(obj, "classifier_0"); 607 607 if (CHECK_FAIL(!prog)) 608 608 goto out; 609 609 ··· 665 665 if (CHECK_FAIL(err)) 666 666 return; 667 667 668 - prog = bpf_object__find_program_by_title(obj, "classifier"); 668 + prog = bpf_object__find_program_by_name(obj, "entry"); 669 669 if (CHECK_FAIL(!prog)) 670 670 goto out; 671 671 ··· 682 682 goto out; 683 683 684 684 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 685 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 685 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 686 686 687 - prog = bpf_object__find_program_by_title(obj, prog_name); 687 + prog = bpf_object__find_program_by_name(obj, prog_name); 688 688 if (CHECK_FAIL(!prog)) 689 689 goto out; 690 690 ··· 762 762 if (CHECK_FAIL(err)) 763 763 return; 764 764 765 - prog = bpf_object__find_program_by_title(obj, "classifier"); 765 + prog = 
bpf_object__find_program_by_name(obj, "entry"); 766 766 if (CHECK_FAIL(!prog)) 767 767 goto out; 768 768 ··· 779 779 goto out; 780 780 781 781 for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) { 782 - snprintf(prog_name, sizeof(prog_name), "classifier/%i", i); 782 + snprintf(prog_name, sizeof(prog_name), "classifier_%d", i); 783 783 784 - prog = bpf_object__find_program_by_title(obj, prog_name); 784 + prog = bpf_object__find_program_by_name(obj, prog_name); 785 785 if (CHECK_FAIL(!prog)) 786 786 goto out; 787 787
+9 -15
tools/testing/selftests/bpf/prog_tests/trace_printk.c
··· 10 10 11 11 void test_trace_printk(void) 12 12 { 13 - int err, iter = 0, duration = 0, found = 0; 13 + int err = 0, iter = 0, found = 0; 14 14 struct trace_printk__bss *bss; 15 15 struct trace_printk *skel; 16 16 char *buf = NULL; ··· 18 18 size_t buflen; 19 19 20 20 skel = trace_printk__open(); 21 - if (CHECK(!skel, "skel_open", "failed to open skeleton\n")) 21 + if (!ASSERT_OK_PTR(skel, "trace_printk__open")) 22 22 return; 23 23 24 - ASSERT_EQ(skel->rodata->fmt[0], 'T', "invalid printk fmt string"); 24 + ASSERT_EQ(skel->rodata->fmt[0], 'T', "skel->rodata->fmt[0]"); 25 25 skel->rodata->fmt[0] = 't'; 26 26 27 27 err = trace_printk__load(skel); 28 - if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err)) 28 + if (!ASSERT_OK(err, "trace_printk__load")) 29 29 goto cleanup; 30 30 31 31 bss = skel->bss; 32 32 33 33 err = trace_printk__attach(skel); 34 - if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err)) 34 + if (!ASSERT_OK(err, "trace_printk__attach")) 35 35 goto cleanup; 36 36 37 37 fp = fopen(TRACEBUF, "r"); 38 - if (CHECK(fp == NULL, "could not open trace buffer", 39 - "error %d opening %s", errno, TRACEBUF)) 38 + if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)")) 40 39 goto cleanup; 41 40 42 41 /* We do not want to wait forever if this test fails... 
*/ ··· 45 46 usleep(1); 46 47 trace_printk__detach(skel); 47 48 48 - if (CHECK(bss->trace_printk_ran == 0, 49 - "bpf_trace_printk never ran", 50 - "ran == %d", bss->trace_printk_ran)) 49 + if (!ASSERT_GT(bss->trace_printk_ran, 0, "bss->trace_printk_ran")) 51 50 goto cleanup; 52 51 53 - if (CHECK(bss->trace_printk_ret <= 0, 54 - "bpf_trace_printk returned <= 0 value", 55 - "got %d", bss->trace_printk_ret)) 52 + if (!ASSERT_GT(bss->trace_printk_ret, 0, "bss->trace_printk_ret")) 56 53 goto cleanup; 57 54 58 55 /* verify our search string is in the trace buffer */ ··· 61 66 break; 62 67 } 63 68 64 - if (CHECK(!found, "message from bpf_trace_printk not found", 65 - "no instance of %s in %s", SEARCHMSG, TRACEBUF)) 69 + if (!ASSERT_EQ(found, bss->trace_printk_ran, "found")) 66 70 goto cleanup; 67 71 68 72 cleanup:
+68
tools/testing/selftests/bpf/prog_tests/trace_vprintk.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2021 Facebook */ 3 + 4 + #include <test_progs.h> 5 + 6 + #include "trace_vprintk.lskel.h" 7 + 8 + #define TRACEBUF "/sys/kernel/debug/tracing/trace_pipe" 9 + #define SEARCHMSG "1,2,3,4,5,6,7,8,9,10" 10 + 11 + void test_trace_vprintk(void) 12 + { 13 + int err = 0, iter = 0, found = 0; 14 + struct trace_vprintk__bss *bss; 15 + struct trace_vprintk *skel; 16 + char *buf = NULL; 17 + FILE *fp = NULL; 18 + size_t buflen; 19 + 20 + skel = trace_vprintk__open_and_load(); 21 + if (!ASSERT_OK_PTR(skel, "trace_vprintk__open_and_load")) 22 + goto cleanup; 23 + 24 + bss = skel->bss; 25 + 26 + err = trace_vprintk__attach(skel); 27 + if (!ASSERT_OK(err, "trace_vprintk__attach")) 28 + goto cleanup; 29 + 30 + fp = fopen(TRACEBUF, "r"); 31 + if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)")) 32 + goto cleanup; 33 + 34 + /* We do not want to wait forever if this test fails... */ 35 + fcntl(fileno(fp), F_SETFL, O_NONBLOCK); 36 + 37 + /* wait for tracepoint to trigger */ 38 + usleep(1); 39 + trace_vprintk__detach(skel); 40 + 41 + if (!ASSERT_GT(bss->trace_vprintk_ran, 0, "bss->trace_vprintk_ran")) 42 + goto cleanup; 43 + 44 + if (!ASSERT_GT(bss->trace_vprintk_ret, 0, "bss->trace_vprintk_ret")) 45 + goto cleanup; 46 + 47 + /* verify our search string is in the trace buffer */ 48 + while (getline(&buf, &buflen, fp) >= 0 || errno == EAGAIN) { 49 + if (strstr(buf, SEARCHMSG) != NULL) 50 + found++; 51 + if (found == bss->trace_vprintk_ran) 52 + break; 53 + if (++iter > 1000) 54 + break; 55 + } 56 + 57 + if (!ASSERT_EQ(found, bss->trace_vprintk_ran, "found")) 58 + goto cleanup; 59 + 60 + if (!ASSERT_LT(bss->null_data_vprintk_ret, 0, "bss->null_data_vprintk_ret")) 61 + goto cleanup; 62 + 63 + cleanup: 64 + trace_vprintk__destroy(skel); 65 + free(buf); 66 + if (fp) 67 + fclose(fp); 68 + }
+15
tools/testing/selftests/bpf/prog_tests/xdpwall.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2021 Facebook */ 3 + 4 + #include "test_progs.h" 5 + #include "xdpwall.skel.h" 6 + 7 + void test_xdpwall(void) 8 + { 9 + struct xdpwall *skel; 10 + 11 + skel = xdpwall__open_and_load(); 12 + ASSERT_OK_PTR(skel, "Does LLVM have https://reviews.llvm.org/D109073?"); 13 + 14 + xdpwall__destroy(skel); 15 + }
+1 -2
tools/testing/selftests/bpf/progs/bpf_flow.c
··· 19 19 #include <bpf/bpf_helpers.h> 20 20 #include <bpf/bpf_endian.h> 21 21 22 - int _version SEC("version") = 1; 23 22 #define PROG(F) PROG_(F, _##F) 24 - #define PROG_(NUM, NAME) SEC("flow_dissector/"#NUM) int bpf_func##NAME 23 + #define PROG_(NUM, NAME) SEC("flow_dissector") int flow_dissector_##NUM 25 24 26 25 /* These are the identifiers of the BPF programs that will be used in tail 27 26 * calls. Name is limited to 16 characters, with the terminating character and
+2 -2
tools/testing/selftests/bpf/progs/cg_storage_multi_isolated.c
··· 20 20 21 21 __u32 invocations = 0; 22 22 23 - SEC("cgroup_skb/egress/1") 23 + SEC("cgroup_skb/egress") 24 24 int egress1(struct __sk_buff *skb) 25 25 { 26 26 struct cgroup_value *ptr_cg_storage = ··· 32 32 return 1; 33 33 } 34 34 35 - SEC("cgroup_skb/egress/2") 35 + SEC("cgroup_skb/egress") 36 36 int egress2(struct __sk_buff *skb) 37 37 { 38 38 struct cgroup_value *ptr_cg_storage =
+2 -2
tools/testing/selftests/bpf/progs/cg_storage_multi_shared.c
··· 20 20 21 21 __u32 invocations = 0; 22 22 23 - SEC("cgroup_skb/egress/1") 23 + SEC("cgroup_skb/egress") 24 24 int egress1(struct __sk_buff *skb) 25 25 { 26 26 struct cgroup_value *ptr_cg_storage = ··· 32 32 return 1; 33 33 } 34 34 35 - SEC("cgroup_skb/egress/2") 35 + SEC("cgroup_skb/egress") 36 36 int egress2(struct __sk_buff *skb) 37 37 { 38 38 struct cgroup_value *ptr_cg_storage =
+1 -1
tools/testing/selftests/bpf/progs/for_each_array_map_elem.c
··· 47 47 48 48 u32 arraymap_output = 0; 49 49 50 - SEC("classifier") 50 + SEC("tc") 51 51 int test_pkt_access(struct __sk_buff *skb) 52 52 { 53 53 struct callback_ctx data;
+1 -1
tools/testing/selftests/bpf/progs/for_each_hash_map_elem.c
··· 78 78 int hashmap_elems = 0; 79 79 int percpu_map_elems = 0; 80 80 81 - SEC("classifier") 81 + SEC("tc") 82 82 int test_pkt_access(struct __sk_buff *skb) 83 83 { 84 84 struct callback_ctx data;
+2 -2
tools/testing/selftests/bpf/progs/kfree_skb.c
··· 9 9 char _license[] SEC("license") = "GPL"; 10 10 struct { 11 11 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); 12 - __uint(key_size, sizeof(int)); 13 - __uint(value_size, sizeof(int)); 12 + __type(key, int); 13 + __type(value, int); 14 14 } perf_buf_map SEC(".maps"); 15 15 16 16 #define _(P) (__builtin_preserve_access_index(P))
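Several of the map definitions below switch from `__uint(key_size, ...)`/`__uint(value_size, ...)` to `__type(key, ...)`/`__type(value, ...)`, which records the actual key/value BTF types rather than just their sizes; this is enabled by the change in item 11 above, where libbpf learns to recognize specialized maps (such as perf event arrays) and strip BTF type IDs internally when creating them. A hedged sketch of the two spellings (map names are illustrative):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Size-only definition: libbpf knows only the key/value sizes. */
struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} old_style SEC(".maps");

/* BTF-typed definition: the key/value types are preserved in BTF,
 * and libbpf sanitizes them as needed for special map types. */
struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__type(key, int);
	__type(value, int);
} new_style SEC(".maps");
```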
+2 -2
tools/testing/selftests/bpf/progs/kfunc_call_test.c
··· 8 8 extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b, 9 9 __u32 c, __u64 d) __ksym; 10 10 11 - SEC("classifier") 11 + SEC("tc") 12 12 int kfunc_call_test2(struct __sk_buff *skb) 13 13 { 14 14 struct bpf_sock *sk = skb->sk; ··· 23 23 return bpf_kfunc_call_test2((struct sock *)sk, 1, 2); 24 24 } 25 25 26 - SEC("classifier") 26 + SEC("tc") 27 27 int kfunc_call_test1(struct __sk_buff *skb) 28 28 { 29 29 struct bpf_sock *sk = skb->sk;
+1 -1
tools/testing/selftests/bpf/progs/kfunc_call_test_subprog.c
··· 33 33 return (__u32)bpf_kfunc_call_test1((struct sock *)sk, 1, 2, 3, 4); 34 34 } 35 35 36 - SEC("classifier") 36 + SEC("tc") 37 37 int kfunc_call_test1(struct __sk_buff *skb) 38 38 { 39 39 return f1(skb);
+2 -2
tools/testing/selftests/bpf/progs/perf_event_stackmap.c
··· 11 11 struct { 12 12 __uint(type, BPF_MAP_TYPE_STACK_TRACE); 13 13 __uint(max_entries, 16384); 14 - __uint(key_size, sizeof(__u32)); 15 - __uint(value_size, sizeof(stack_trace_t)); 14 + __type(key, __u32); 15 + __type(value, stack_trace_t); 16 16 } stackmap SEC(".maps"); 17 17 18 18 struct {
+1 -1
tools/testing/selftests/bpf/progs/skb_pkt_end.c
··· 25 25 return ip; 26 26 } 27 27 28 - SEC("classifier/cls") 28 + SEC("tc") 29 29 int main_prog(struct __sk_buff *skb) 30 30 { 31 31 struct iphdr *ip = NULL;
+6 -6
tools/testing/selftests/bpf/progs/sockmap_verdict_prog.c
··· 7 7 struct { 8 8 __uint(type, BPF_MAP_TYPE_SOCKMAP); 9 9 __uint(max_entries, 20); 10 - __uint(key_size, sizeof(int)); 11 - __uint(value_size, sizeof(int)); 10 + __type(key, int); 11 + __type(value, int); 12 12 } sock_map_rx SEC(".maps"); 13 13 14 14 struct { 15 15 __uint(type, BPF_MAP_TYPE_SOCKMAP); 16 16 __uint(max_entries, 20); 17 - __uint(key_size, sizeof(int)); 18 - __uint(value_size, sizeof(int)); 17 + __type(key, int); 18 + __type(value, int); 19 19 } sock_map_tx SEC(".maps"); 20 20 21 21 struct { 22 22 __uint(type, BPF_MAP_TYPE_SOCKMAP); 23 23 __uint(max_entries, 20); 24 - __uint(key_size, sizeof(int)); 25 - __uint(value_size, sizeof(int)); 24 + __type(key, int); 25 + __type(value, int); 26 26 } sock_map_msg SEC(".maps"); 27 27 28 28 struct {
+2 -3
tools/testing/selftests/bpf/progs/sockopt_multi.c
··· 4 4 #include <bpf/bpf_helpers.h> 5 5 6 6 char _license[] SEC("license") = "GPL"; 7 - __u32 _version SEC("version") = 1; 8 7 9 - SEC("cgroup/getsockopt/child") 8 + SEC("cgroup/getsockopt") 10 9 int _getsockopt_child(struct bpf_sockopt *ctx) 11 10 { 12 11 __u8 *optval_end = ctx->optval_end; ··· 28 29 return 1; 29 30 } 30 31 31 - SEC("cgroup/getsockopt/parent") 32 + SEC("cgroup/getsockopt") 32 33 int _getsockopt_parent(struct bpf_sockopt *ctx) 33 34 { 34 35 __u8 *optval_end = ctx->optval_end;
+3 -4
tools/testing/selftests/bpf/progs/tailcall1.c
··· 11 11 } jmp_table SEC(".maps"); 12 12 13 13 #define TAIL_FUNC(x) \ 14 - SEC("classifier/" #x) \ 15 - int bpf_func_##x(struct __sk_buff *skb) \ 14 + SEC("tc") \ 15 + int classifier_##x(struct __sk_buff *skb) \ 16 16 { \ 17 17 return x; \ 18 18 } ··· 20 20 TAIL_FUNC(1) 21 21 TAIL_FUNC(2) 22 22 23 - SEC("classifier") 23 + SEC("tc") 24 24 int entry(struct __sk_buff *skb) 25 25 { 26 26 /* Multiple locations to make sure we patch ··· 45 45 } 46 46 47 47 char __license[] SEC("license") = "GPL"; 48 - int _version SEC("version") = 1;
+11 -12
tools/testing/selftests/bpf/progs/tailcall2.c
··· 10 10 __uint(value_size, sizeof(__u32)); 11 11 } jmp_table SEC(".maps"); 12 12 13 - SEC("classifier/0") 14 - int bpf_func_0(struct __sk_buff *skb) 13 + SEC("tc") 14 + int classifier_0(struct __sk_buff *skb) 15 15 { 16 16 bpf_tail_call_static(skb, &jmp_table, 1); 17 17 return 0; 18 18 } 19 19 20 - SEC("classifier/1") 21 - int bpf_func_1(struct __sk_buff *skb) 20 + SEC("tc") 21 + int classifier_1(struct __sk_buff *skb) 22 22 { 23 23 bpf_tail_call_static(skb, &jmp_table, 2); 24 24 return 1; 25 25 } 26 26 27 - SEC("classifier/2") 28 - int bpf_func_2(struct __sk_buff *skb) 27 + SEC("tc") 28 + int classifier_2(struct __sk_buff *skb) 29 29 { 30 30 return 2; 31 31 } 32 32 33 - SEC("classifier/3") 34 - int bpf_func_3(struct __sk_buff *skb) 33 + SEC("tc") 34 + int classifier_3(struct __sk_buff *skb) 35 35 { 36 36 bpf_tail_call_static(skb, &jmp_table, 4); 37 37 return 3; 38 38 } 39 39 40 - SEC("classifier/4") 41 - int bpf_func_4(struct __sk_buff *skb) 40 + SEC("tc") 41 + int classifier_4(struct __sk_buff *skb) 42 42 { 43 43 bpf_tail_call_static(skb, &jmp_table, 3); 44 44 return 4; 45 45 } 46 46 47 - SEC("classifier") 47 + SEC("tc") 48 48 int entry(struct __sk_buff *skb) 49 49 { 50 50 bpf_tail_call_static(skb, &jmp_table, 0); ··· 56 56 } 57 57 58 58 char __license[] SEC("license") = "GPL"; 59 - int _version SEC("version") = 1;
+3 -4
tools/testing/selftests/bpf/progs/tailcall3.c
··· 12 12 13 13 int count = 0; 14 14 15 - SEC("classifier/0") 16 - int bpf_func_0(struct __sk_buff *skb) 15 + SEC("tc") 16 + int classifier_0(struct __sk_buff *skb) 17 17 { 18 18 count++; 19 19 bpf_tail_call_static(skb, &jmp_table, 0); 20 20 return 1; 21 21 } 22 22 23 - SEC("classifier") 23 + SEC("tc") 24 24 int entry(struct __sk_buff *skb) 25 25 { 26 26 bpf_tail_call_static(skb, &jmp_table, 0); ··· 28 28 } 29 29 30 30 char __license[] SEC("license") = "GPL"; 31 - int _version SEC("version") = 1;
+3 -4
tools/testing/selftests/bpf/progs/tailcall4.c
··· 13 13 int selector = 0; 14 14 15 15 #define TAIL_FUNC(x) \ 16 - SEC("classifier/" #x) \ 17 - int bpf_func_##x(struct __sk_buff *skb) \ 16 + SEC("tc") \ 17 + int classifier_##x(struct __sk_buff *skb) \ 18 18 { \ 19 19 return x; \ 20 20 } ··· 22 22 TAIL_FUNC(1) 23 23 TAIL_FUNC(2) 24 24 25 - SEC("classifier") 25 + SEC("tc") 26 26 int entry(struct __sk_buff *skb) 27 27 { 28 28 bpf_tail_call(skb, &jmp_table, selector); ··· 30 30 } 31 31 32 32 char __license[] SEC("license") = "GPL"; 33 - int _version SEC("version") = 1;
+3 -4
tools/testing/selftests/bpf/progs/tailcall5.c
··· 13 13 int selector = 0; 14 14 15 15 #define TAIL_FUNC(x) \ 16 - SEC("classifier/" #x) \ 17 - int bpf_func_##x(struct __sk_buff *skb) \ 16 + SEC("tc") \ 17 + int classifier_##x(struct __sk_buff *skb) \ 18 18 { \ 19 19 return x; \ 20 20 } ··· 22 22 TAIL_FUNC(1) 23 23 TAIL_FUNC(2) 24 24 25 - SEC("classifier") 25 + SEC("tc") 26 26 int entry(struct __sk_buff *skb) 27 27 { 28 28 int idx = 0; ··· 37 37 } 38 38 39 39 char __license[] SEC("license") = "GPL"; 40 - int _version SEC("version") = 1;
+3 -3
tools/testing/selftests/bpf/progs/tailcall6.c
··· 12 12 13 13 int count, which; 14 14 15 - SEC("classifier/0") 16 - int bpf_func_0(struct __sk_buff *skb) 15 + SEC("tc") 16 + int classifier_0(struct __sk_buff *skb) 17 17 { 18 18 count++; 19 19 if (__builtin_constant_p(which)) ··· 22 22 return 1; 23 23 } 24 24 25 - SEC("classifier") 25 + SEC("tc") 26 26 int entry(struct __sk_buff *skb) 27 27 { 28 28 if (__builtin_constant_p(which))
+3 -4
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf1.c
··· 10 10 } jmp_table SEC(".maps"); 11 11 12 12 #define TAIL_FUNC(x) \ 13 - SEC("classifier/" #x) \ 14 - int bpf_func_##x(struct __sk_buff *skb) \ 13 + SEC("tc") \ 14 + int classifier_##x(struct __sk_buff *skb) \ 15 15 { \ 16 16 return x; \ 17 17 } ··· 26 26 return skb->len * 2; 27 27 } 28 28 29 - SEC("classifier") 29 + SEC("tc") 30 30 int entry(struct __sk_buff *skb) 31 31 { 32 32 bpf_tail_call_static(skb, &jmp_table, 1); ··· 35 35 } 36 36 37 37 char __license[] SEC("license") = "GPL"; 38 - int _version SEC("version") = 1;
+3 -4
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf2.c
··· 22 22 23 23 int count = 0; 24 24 25 - SEC("classifier/0") 26 - int bpf_func_0(struct __sk_buff *skb) 25 + SEC("tc") 26 + int classifier_0(struct __sk_buff *skb) 27 27 { 28 28 count++; 29 29 return subprog_tail(skb); 30 30 } 31 31 32 - SEC("classifier") 32 + SEC("tc") 33 33 int entry(struct __sk_buff *skb) 34 34 { 35 35 bpf_tail_call_static(skb, &jmp_table, 0); ··· 38 38 } 39 39 40 40 char __license[] SEC("license") = "GPL"; 41 - int _version SEC("version") = 1;
+5 -6
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf3.c
··· 33 33 return skb->len * 2; 34 34 } 35 35 36 - SEC("classifier/0") 37 - int bpf_func_0(struct __sk_buff *skb) 36 + SEC("tc") 37 + int classifier_0(struct __sk_buff *skb) 38 38 { 39 39 volatile char arr[128] = {}; 40 40 41 41 return subprog_tail2(skb); 42 42 } 43 43 44 - SEC("classifier/1") 45 - int bpf_func_1(struct __sk_buff *skb) 44 + SEC("tc") 45 + int classifier_1(struct __sk_buff *skb) 46 46 { 47 47 volatile char arr[128] = {}; 48 48 49 49 return skb->len * 3; 50 50 } 51 51 52 - SEC("classifier") 52 + SEC("tc") 53 53 int entry(struct __sk_buff *skb) 54 54 { 55 55 volatile char arr[128] = {}; ··· 58 58 } 59 59 60 60 char __license[] SEC("license") = "GPL"; 61 - int _version SEC("version") = 1;
+7 -8
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
··· 50 50 return skb->len; 51 51 } 52 52 53 - SEC("classifier/1") 54 - int bpf_func_1(struct __sk_buff *skb) 53 + SEC("tc") 54 + int classifier_1(struct __sk_buff *skb) 55 55 { 56 56 return subprog_tail_2(skb); 57 57 } 58 58 59 - SEC("classifier/2") 60 - int bpf_func_2(struct __sk_buff *skb) 59 + SEC("tc") 60 + int classifier_2(struct __sk_buff *skb) 61 61 { 62 62 count++; 63 63 return subprog_tail_2(skb); 64 64 } 65 65 66 - SEC("classifier/0") 67 - int bpf_func_0(struct __sk_buff *skb) 66 + SEC("tc") 67 + int classifier_0(struct __sk_buff *skb) 68 68 { 69 69 return subprog_tail_1(skb); 70 70 } 71 71 72 - SEC("classifier") 72 + SEC("tc") 73 73 int entry(struct __sk_buff *skb) 74 74 { 75 75 return subprog_tail(skb); 76 76 } 77 77 78 78 char __license[] SEC("license") = "GPL"; 79 - int _version SEC("version") = 1;
+7 -7
tools/testing/selftests/bpf/progs/test_btf_map_in_map.c
··· 21 21 struct outer_arr { 22 22 __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); 23 23 __uint(max_entries, 3); 24 - __uint(key_size, sizeof(int)); 25 - __uint(value_size, sizeof(int)); 24 + __type(key, int); 25 + __type(value, int); 26 26 /* it's possible to use anonymous struct as inner map definition here */ 27 27 __array(values, struct { 28 28 __uint(type, BPF_MAP_TYPE_ARRAY); ··· 61 61 struct outer_arr_dyn { 62 62 __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); 63 63 __uint(max_entries, 3); 64 - __uint(key_size, sizeof(int)); 65 - __uint(value_size, sizeof(int)); 64 + __type(key, int); 65 + __type(value, int); 66 66 __array(values, struct { 67 67 __uint(type, BPF_MAP_TYPE_ARRAY); 68 68 __uint(map_flags, BPF_F_INNER_MAP); ··· 81 81 struct outer_hash { 82 82 __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS); 83 83 __uint(max_entries, 5); 84 - __uint(key_size, sizeof(int)); 84 + __type(key, int); 85 85 /* Here everything works flawlessly due to reuse of struct inner_map 86 86 * and compiler will complain at the attempt to use non-inner_map 87 87 * references below. This is great experience. ··· 111 111 struct outer_sockarr_sz1 { 112 112 __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); 113 113 __uint(max_entries, 1); 114 - __uint(key_size, sizeof(int)); 115 - __uint(value_size, sizeof(int)); 114 + __type(key, int); 115 + __type(value, int); 116 116 __array(values, struct sockarr_sz1); 117 117 } outer_sockarr SEC(".maps") = { 118 118 .values = { (void *)&sockarr_sz1 },
+1 -1
tools/testing/selftests/bpf/progs/test_btf_skc_cls_ingress.c
··· 145 145 return TC_ACT_OK; 146 146 } 147 147 148 - SEC("classifier/ingress") 148 + SEC("tc") 149 149 int cls_ingress(struct __sk_buff *skb) 150 150 { 151 151 struct ipv6hdr *ip6h;
+2 -2
tools/testing/selftests/bpf/progs/test_cgroup_link.c
··· 6 6 int calls = 0; 7 7 int alt_calls = 0; 8 8 9 - SEC("cgroup_skb/egress1") 9 + SEC("cgroup_skb/egress") 10 10 int egress(struct __sk_buff *skb) 11 11 { 12 12 __sync_fetch_and_add(&calls, 1); 13 13 return 1; 14 14 } 15 15 16 - SEC("cgroup_skb/egress2") 16 + SEC("cgroup_skb/egress") 17 17 int egress_alt(struct __sk_buff *skb) 18 18 { 19 19 __sync_fetch_and_add(&alt_calls, 1);
+6 -6
tools/testing/selftests/bpf/progs/test_check_mtu.c
··· 153 153 return retval; 154 154 } 155 155 156 - SEC("classifier") 156 + SEC("tc") 157 157 int tc_use_helper(struct __sk_buff *ctx) 158 158 { 159 159 int retval = BPF_OK; /* Expected retval on successful test */ ··· 172 172 return retval; 173 173 } 174 174 175 - SEC("classifier") 175 + SEC("tc") 176 176 int tc_exceed_mtu(struct __sk_buff *ctx) 177 177 { 178 178 __u32 ifindex = GLOBAL_USER_IFINDEX; ··· 196 196 return retval; 197 197 } 198 198 199 - SEC("classifier") 199 + SEC("tc") 200 200 int tc_exceed_mtu_da(struct __sk_buff *ctx) 201 201 { 202 202 /* SKB Direct-Access variant */ ··· 223 223 return retval; 224 224 } 225 225 226 - SEC("classifier") 226 + SEC("tc") 227 227 int tc_minus_delta(struct __sk_buff *ctx) 228 228 { 229 229 int retval = BPF_OK; /* Expected retval on successful test */ ··· 245 245 return retval; 246 246 } 247 247 248 - SEC("classifier") 248 + SEC("tc") 249 249 int tc_input_len(struct __sk_buff *ctx) 250 250 { 251 251 int retval = BPF_OK; /* Expected retval on successful test */ ··· 265 265 return retval; 266 266 } 267 267 268 - SEC("classifier") 268 + SEC("tc") 269 269 int tc_input_len_exceed(struct __sk_buff *ctx) 270 270 { 271 271 int retval = BPF_DROP; /* Fail */
+1 -1
tools/testing/selftests/bpf/progs/test_cls_redirect.c
··· 928 928 } 929 929 } 930 930 931 - SEC("classifier/cls_redirect") 931 + SEC("tc") 932 932 int cls_redirect(struct __sk_buff *skb) 933 933 { 934 934 metrics_t *metrics = get_global_metrics();
+1 -1
tools/testing/selftests/bpf/progs/test_global_data.c
··· 68 68 bpf_map_update_elem(&result_##map, &key, var, 0); \ 69 69 } while (0) 70 70 71 - SEC("classifier/static_data_load") 71 + SEC("tc") 72 72 int load_static_data(struct __sk_buff *skb) 73 73 { 74 74 static const __u64 bar = ~0;
+1 -1
tools/testing/selftests/bpf/progs/test_global_func1.c
··· 38 38 return skb->ifindex * val * var; 39 39 } 40 40 41 - SEC("classifier/test") 41 + SEC("tc") 42 42 int test_cls(struct __sk_buff *skb) 43 43 { 44 44 return f0(1, skb) + f1(skb) + f2(2, skb) + f3(3, skb, 4);
+1 -1
tools/testing/selftests/bpf/progs/test_global_func3.c
··· 54 54 } 55 55 #endif 56 56 57 - SEC("classifier/test") 57 + SEC("tc") 58 58 int test_cls(struct __sk_buff *skb) 59 59 { 60 60 #ifndef NO_FN8
+1 -1
tools/testing/selftests/bpf/progs/test_global_func5.c
··· 24 24 return skb->ifindex * val; 25 25 } 26 26 27 - SEC("classifier/test") 27 + SEC("tc") 28 28 int test_cls(struct __sk_buff *skb) 29 29 { 30 30 return f1(skb) + f2(2, skb) + f3(3, skb);
+1 -1
tools/testing/selftests/bpf/progs/test_global_func6.c
··· 24 24 return skb->ifindex * val; 25 25 } 26 26 27 - SEC("classifier/test") 27 + SEC("tc") 28 28 int test_cls(struct __sk_buff *skb) 29 29 { 30 30 return f1(skb) + f2(2, skb) + f3(3, skb);
+1 -1
tools/testing/selftests/bpf/progs/test_global_func7.c
··· 10 10 skb->tc_index = 0; 11 11 } 12 12 13 - SEC("classifier/test") 13 + SEC("tc") 14 14 int test_cls(struct __sk_buff *skb) 15 15 { 16 16 foo(skb);
+5 -7
tools/testing/selftests/bpf/progs/test_map_in_map.c
··· 9 9 __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); 10 10 __uint(max_entries, 1); 11 11 __uint(map_flags, 0); 12 - __uint(key_size, sizeof(__u32)); 13 - /* must be sizeof(__u32) for map in map */ 14 - __uint(value_size, sizeof(__u32)); 12 + __type(key, __u32); 13 + __type(value, __u32); 15 14 } mim_array SEC(".maps"); 16 15 17 16 struct { 18 17 __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS); 19 18 __uint(max_entries, 1); 20 19 __uint(map_flags, 0); 21 - __uint(key_size, sizeof(int)); 22 - /* must be sizeof(__u32) for map in map */ 23 - __uint(value_size, sizeof(__u32)); 20 + __type(key, int); 21 + __type(value, __u32); 24 22 } mim_hash SEC(".maps"); 25 23 26 - SEC("xdp_mimtest") 24 + SEC("xdp") 27 25 int xdp_mimtest0(struct xdp_md *ctx) 28 26 { 29 27 int value = 123;
+1 -1
tools/testing/selftests/bpf/progs/test_map_in_map_invalid.c
··· 13 13 struct { 14 14 __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); 15 15 __uint(max_entries, 0); /* This will make map creation to fail */ 16 - __uint(key_size, sizeof(__u32)); 16 + __type(key, __u32); 17 17 __array(values, struct inner); 18 18 } mim SEC(".maps"); 19 19
+1 -1
tools/testing/selftests/bpf/progs/test_misc_tcp_hdr_options.c
··· 293 293 return check_active_hdr_in(skops); 294 294 } 295 295 296 - SEC("sockops/misc_estab") 296 + SEC("sockops") 297 297 int misc_estab(struct bpf_sock_ops *skops) 298 298 { 299 299 int true_val = 1;
+4 -4
tools/testing/selftests/bpf/progs/test_pe_preserve_elems.c
··· 7 7 struct { 8 8 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); 9 9 __uint(max_entries, 1); 10 - __uint(key_size, sizeof(int)); 11 - __uint(value_size, sizeof(int)); 10 + __type(key, int); 11 + __type(value, int); 12 12 } array_1 SEC(".maps"); 13 13 14 14 struct { 15 15 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); 16 16 __uint(max_entries, 1); 17 - __uint(key_size, sizeof(int)); 18 - __uint(value_size, sizeof(int)); 17 + __type(key, int); 18 + __type(value, int); 19 19 __uint(map_flags, BPF_F_PRESERVE_ELEMS); 20 20 } array_2 SEC(".maps"); 21 21
+2 -2
tools/testing/selftests/bpf/progs/test_perf_buffer.c
··· 8 8 9 9 struct { 10 10 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); 11 - __uint(key_size, sizeof(int)); 12 - __uint(value_size, sizeof(int)); 11 + __type(key, int); 12 + __type(value, int); 13 13 } perf_buf_map SEC(".maps"); 14 14 15 15 SEC("tp/raw_syscalls/sys_enter")
+1 -1
tools/testing/selftests/bpf/progs/test_pkt_access.c
··· 97 97 return 0; 98 98 } 99 99 100 - SEC("classifier/test_pkt_access") 100 + SEC("tc") 101 101 int test_pkt_access(struct __sk_buff *skb) 102 102 { 103 103 void *data_end = (void *)(long)skb->data_end;
+1 -3
tools/testing/selftests/bpf/progs/test_pkt_md_access.c
··· 7 7 #include <linux/pkt_cls.h> 8 8 #include <bpf/bpf_helpers.h> 9 9 10 - int _version SEC("version") = 1; 11 - 12 10 #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ 13 11 #define TEST_FIELD(TYPE, FIELD, MASK) \ 14 12 { \ ··· 25 27 } 26 28 #endif 27 29 28 - SEC("classifier/test_pkt_md_access") 30 + SEC("tc") 29 31 int test_pkt_md_access(struct __sk_buff *skb) 30 32 { 31 33 TEST_FIELD(__u8, len, 0xFF);
+26 -2
tools/testing/selftests/bpf/progs/test_probe_user.c
··· 8 8 #include <bpf/bpf_helpers.h> 9 9 #include <bpf/bpf_tracing.h> 10 10 11 + #if defined(__TARGET_ARCH_x86) 12 + #define SYSCALL_WRAPPER 1 13 + #define SYS_PREFIX "__x64_" 14 + #elif defined(__TARGET_ARCH_s390) 15 + #define SYSCALL_WRAPPER 1 16 + #define SYS_PREFIX "__s390x_" 17 + #elif defined(__TARGET_ARCH_arm64) 18 + #define SYSCALL_WRAPPER 1 19 + #define SYS_PREFIX "__arm64_" 20 + #else 21 + #define SYSCALL_WRAPPER 0 22 + #define SYS_PREFIX "" 23 + #endif 24 + 11 25 static struct sockaddr_in old; 12 26 13 - SEC("kprobe/__sys_connect") 27 + SEC("kprobe/" SYS_PREFIX "sys_connect") 14 28 int BPF_KPROBE(handle_sys_connect) 15 29 { 16 - void *ptr = (void *)PT_REGS_PARM2(ctx); 30 + #if SYSCALL_WRAPPER == 1 31 + struct pt_regs *real_regs; 32 + #endif 17 33 struct sockaddr_in new; 34 + void *ptr; 35 + 36 + #if SYSCALL_WRAPPER == 0 37 + ptr = (void *)PT_REGS_PARM2(ctx); 38 + #else 39 + real_regs = (struct pt_regs *)PT_REGS_PARM1(ctx); 40 + bpf_probe_read_kernel(&ptr, sizeof(ptr), &PT_REGS_PARM2(real_regs)); 41 + #endif 18 42 19 43 bpf_probe_read_user(&old, sizeof(old), ptr); 20 44 __builtin_memset(&new, 0xab, sizeof(new));
+2 -2
tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c
··· 24 24 struct { 25 25 __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); 26 26 __uint(max_entries, 1); 27 - __uint(key_size, sizeof(__u32)); 28 - __uint(value_size, sizeof(__u32)); 27 + __type(key, __u32); 28 + __type(value, __u32); 29 29 } outer_map SEC(".maps"); 30 30 31 31 struct {
+1 -2
tools/testing/selftests/bpf/progs/test_sk_assign.c
··· 36 36 .pinning = PIN_GLOBAL_NS, 37 37 }; 38 38 39 - int _version SEC("version") = 1; 40 39 char _license[] SEC("license") = "GPL"; 41 40 42 41 /* Fill 'tuple' with L3 info, and attempt to find L4. On fail, return NULL. */ ··· 158 159 return ret; 159 160 } 160 161 161 - SEC("classifier/sk_assign_test") 162 + SEC("tc") 162 163 int bpf_sk_assign_test(struct __sk_buff *skb) 163 164 { 164 165 struct bpf_sock_tuple *tuple, ln = {0};
+22 -22
tools/testing/selftests/bpf/progs/test_sk_lookup.c
··· 72 72 static const __u32 DST_IP4 = IP4(127, 0, 0, 1); 73 73 static const __u32 DST_IP6[] = IP6(0xfd000000, 0x0, 0x0, 0x00000001); 74 74 75 - SEC("sk_lookup/lookup_pass") 75 + SEC("sk_lookup") 76 76 int lookup_pass(struct bpf_sk_lookup *ctx) 77 77 { 78 78 return SK_PASS; 79 79 } 80 80 81 - SEC("sk_lookup/lookup_drop") 81 + SEC("sk_lookup") 82 82 int lookup_drop(struct bpf_sk_lookup *ctx) 83 83 { 84 84 return SK_DROP; 85 85 } 86 86 87 - SEC("sk_reuseport/reuse_pass") 87 + SEC("sk_reuseport") 88 88 int reuseport_pass(struct sk_reuseport_md *ctx) 89 89 { 90 90 return SK_PASS; 91 91 } 92 92 93 - SEC("sk_reuseport/reuse_drop") 93 + SEC("sk_reuseport") 94 94 int reuseport_drop(struct sk_reuseport_md *ctx) 95 95 { 96 96 return SK_DROP; 97 97 } 98 98 99 99 /* Redirect packets destined for port DST_PORT to socket at redir_map[0]. */ 100 - SEC("sk_lookup/redir_port") 100 + SEC("sk_lookup") 101 101 int redir_port(struct bpf_sk_lookup *ctx) 102 102 { 103 103 struct bpf_sock *sk; ··· 116 116 } 117 117 118 118 /* Redirect packets destined for DST_IP4 address to socket at redir_map[0]. */ 119 - SEC("sk_lookup/redir_ip4") 119 + SEC("sk_lookup") 120 120 int redir_ip4(struct bpf_sk_lookup *ctx) 121 121 { 122 122 struct bpf_sock *sk; ··· 139 139 } 140 140 141 141 /* Redirect packets destined for DST_IP6 address to socket at redir_map[0]. */ 142 - SEC("sk_lookup/redir_ip6") 142 + SEC("sk_lookup") 143 143 int redir_ip6(struct bpf_sk_lookup *ctx) 144 144 { 145 145 struct bpf_sock *sk; ··· 164 164 return err ? SK_DROP : SK_PASS; 165 165 } 166 166 167 - SEC("sk_lookup/select_sock_a") 167 + SEC("sk_lookup") 168 168 int select_sock_a(struct bpf_sk_lookup *ctx) 169 169 { 170 170 struct bpf_sock *sk; ··· 179 179 return err ? SK_DROP : SK_PASS; 180 180 } 181 181 182 - SEC("sk_lookup/select_sock_a_no_reuseport") 182 + SEC("sk_lookup") 183 183 int select_sock_a_no_reuseport(struct bpf_sk_lookup *ctx) 184 184 { 185 185 struct bpf_sock *sk; ··· 194 194 return err ? 
SK_DROP : SK_PASS; 195 195 } 196 196 197 - SEC("sk_reuseport/select_sock_b") 197 + SEC("sk_reuseport") 198 198 int select_sock_b(struct sk_reuseport_md *ctx) 199 199 { 200 200 __u32 key = KEY_SERVER_B; ··· 205 205 } 206 206 207 207 /* Check that bpf_sk_assign() returns -EEXIST if socket already selected. */ 208 - SEC("sk_lookup/sk_assign_eexist") 208 + SEC("sk_lookup") 209 209 int sk_assign_eexist(struct bpf_sk_lookup *ctx) 210 210 { 211 211 struct bpf_sock *sk; ··· 238 238 } 239 239 240 240 /* Check that bpf_sk_assign(BPF_SK_LOOKUP_F_REPLACE) can override selection. */ 241 - SEC("sk_lookup/sk_assign_replace_flag") 241 + SEC("sk_lookup") 242 242 int sk_assign_replace_flag(struct bpf_sk_lookup *ctx) 243 243 { 244 244 struct bpf_sock *sk; ··· 270 270 } 271 271 272 272 /* Check that bpf_sk_assign(sk=NULL) is accepted. */ 273 - SEC("sk_lookup/sk_assign_null") 273 + SEC("sk_lookup") 274 274 int sk_assign_null(struct bpf_sk_lookup *ctx) 275 275 { 276 276 struct bpf_sock *sk = NULL; ··· 313 313 } 314 314 315 315 /* Check that selected sk is accessible through context. */ 316 - SEC("sk_lookup/access_ctx_sk") 316 + SEC("sk_lookup") 317 317 int access_ctx_sk(struct bpf_sk_lookup *ctx) 318 318 { 319 319 struct bpf_sock *sk1 = NULL, *sk2 = NULL; ··· 379 379 * are not covered because they give bogus results, that is the 380 380 * verifier ignores the offset. 
381 381 */ 382 - SEC("sk_lookup/ctx_narrow_access") 382 + SEC("sk_lookup") 383 383 int ctx_narrow_access(struct bpf_sk_lookup *ctx) 384 384 { 385 385 struct bpf_sock *sk; ··· 553 553 } 554 554 555 555 /* Check that sk_assign rejects SERVER_A socket with -ESOCKNOSUPPORT */ 556 - SEC("sk_lookup/sk_assign_esocknosupport") 556 + SEC("sk_lookup") 557 557 int sk_assign_esocknosupport(struct bpf_sk_lookup *ctx) 558 558 { 559 559 struct bpf_sock *sk; ··· 578 578 return ret; 579 579 } 580 580 581 - SEC("sk_lookup/multi_prog_pass1") 581 + SEC("sk_lookup") 582 582 int multi_prog_pass1(struct bpf_sk_lookup *ctx) 583 583 { 584 584 bpf_map_update_elem(&run_map, &KEY_PROG1, &PROG_DONE, BPF_ANY); 585 585 return SK_PASS; 586 586 } 587 587 588 - SEC("sk_lookup/multi_prog_pass2") 588 + SEC("sk_lookup") 589 589 int multi_prog_pass2(struct bpf_sk_lookup *ctx) 590 590 { 591 591 bpf_map_update_elem(&run_map, &KEY_PROG2, &PROG_DONE, BPF_ANY); 592 592 return SK_PASS; 593 593 } 594 594 595 - SEC("sk_lookup/multi_prog_drop1") 595 + SEC("sk_lookup") 596 596 int multi_prog_drop1(struct bpf_sk_lookup *ctx) 597 597 { 598 598 bpf_map_update_elem(&run_map, &KEY_PROG1, &PROG_DONE, BPF_ANY); 599 599 return SK_DROP; 600 600 } 601 601 602 - SEC("sk_lookup/multi_prog_drop2") 602 + SEC("sk_lookup") 603 603 int multi_prog_drop2(struct bpf_sk_lookup *ctx) 604 604 { 605 605 bpf_map_update_elem(&run_map, &KEY_PROG2, &PROG_DONE, BPF_ANY); ··· 623 623 return SK_PASS; 624 624 } 625 625 626 - SEC("sk_lookup/multi_prog_redir1") 626 + SEC("sk_lookup") 627 627 int multi_prog_redir1(struct bpf_sk_lookup *ctx) 628 628 { 629 629 int ret; ··· 633 633 return SK_PASS; 634 634 } 635 635 636 - SEC("sk_lookup/multi_prog_redir2") 636 + SEC("sk_lookup") 637 637 int multi_prog_redir2(struct bpf_sk_lookup *ctx) 638 638 { 639 639 int ret;
+18 -19
tools/testing/selftests/bpf/progs/test_sk_lookup_kern.c
··· 15 15 #include <bpf/bpf_helpers.h> 16 16 #include <bpf/bpf_endian.h> 17 17 18 - int _version SEC("version") = 1; 19 18 char _license[] SEC("license") = "GPL"; 20 19 21 20 /* Fill 'tuple' with L3 info, and attempt to find L4. On fail, return NULL. */ ··· 52 53 return result; 53 54 } 54 55 55 - SEC("classifier/sk_lookup_success") 56 - int bpf_sk_lookup_test0(struct __sk_buff *skb) 56 + SEC("tc") 57 + int sk_lookup_success(struct __sk_buff *skb) 57 58 { 58 59 void *data_end = (void *)(long)skb->data_end; 59 60 void *data = (void *)(long)skb->data; ··· 78 79 return sk ? TC_ACT_OK : TC_ACT_UNSPEC; 79 80 } 80 81 81 - SEC("classifier/sk_lookup_success_simple") 82 - int bpf_sk_lookup_test1(struct __sk_buff *skb) 82 + SEC("tc") 83 + int sk_lookup_success_simple(struct __sk_buff *skb) 83 84 { 84 85 struct bpf_sock_tuple tuple = {}; 85 86 struct bpf_sock *sk; ··· 90 91 return 0; 91 92 } 92 93 93 - SEC("classifier/err_use_after_free") 94 - int bpf_sk_lookup_uaf(struct __sk_buff *skb) 94 + SEC("tc") 95 + int err_use_after_free(struct __sk_buff *skb) 95 96 { 96 97 struct bpf_sock_tuple tuple = {}; 97 98 struct bpf_sock *sk; ··· 105 106 return family; 106 107 } 107 108 108 - SEC("classifier/err_modify_sk_pointer") 109 - int bpf_sk_lookup_modptr(struct __sk_buff *skb) 109 + SEC("tc") 110 + int err_modify_sk_pointer(struct __sk_buff *skb) 110 111 { 111 112 struct bpf_sock_tuple tuple = {}; 112 113 struct bpf_sock *sk; ··· 120 121 return 0; 121 122 } 122 123 123 - SEC("classifier/err_modify_sk_or_null_pointer") 124 - int bpf_sk_lookup_modptr_or_null(struct __sk_buff *skb) 124 + SEC("tc") 125 + int err_modify_sk_or_null_pointer(struct __sk_buff *skb) 125 126 { 126 127 struct bpf_sock_tuple tuple = {}; 127 128 struct bpf_sock *sk; ··· 134 135 return 0; 135 136 } 136 137 137 - SEC("classifier/err_no_release") 138 - int bpf_sk_lookup_test2(struct __sk_buff *skb) 138 + SEC("tc") 139 + int err_no_release(struct __sk_buff *skb) 139 140 { 140 141 struct bpf_sock_tuple tuple = {}; 141 
142 ··· 143 144 return 0; 144 145 } 145 146 146 - SEC("classifier/err_release_twice") 147 - int bpf_sk_lookup_test3(struct __sk_buff *skb) 147 + SEC("tc") 148 + int err_release_twice(struct __sk_buff *skb) 148 149 { 149 150 struct bpf_sock_tuple tuple = {}; 150 151 struct bpf_sock *sk; ··· 155 156 return 0; 156 157 } 157 158 158 - SEC("classifier/err_release_unchecked") 159 - int bpf_sk_lookup_test4(struct __sk_buff *skb) 159 + SEC("tc") 160 + int err_release_unchecked(struct __sk_buff *skb) 160 161 { 161 162 struct bpf_sock_tuple tuple = {}; 162 163 struct bpf_sock *sk; ··· 172 173 bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 173 174 } 174 175 175 - SEC("classifier/err_no_release_subcall") 176 - int bpf_sk_lookup_test5(struct __sk_buff *skb) 176 + SEC("tc") 177 + int err_no_release_subcall(struct __sk_buff *skb) 177 178 { 178 179 lookup_no_release(skb); 179 180 return 0;
+1 -1
tools/testing/selftests/bpf/progs/test_skb_helpers.c
··· 14 14 15 15 char _license[] SEC("license") = "GPL"; 16 16 17 - SEC("classifier/test_skb_helpers") 17 + SEC("tc") 18 18 int test_skb_helpers(struct __sk_buff *skb) 19 19 { 20 20 struct task_struct *task;
+1 -1
tools/testing/selftests/bpf/progs/test_sockmap_listen.c
··· 56 56 return verdict; 57 57 } 58 58 59 - SEC("sk_skb/skb_verdict") 59 + SEC("sk_skb") 60 60 int prog_skb_verdict(struct __sk_buff *skb) 61 61 { 62 62 unsigned int *count;
+1 -1
tools/testing/selftests/bpf/progs/test_sockmap_skb_verdict_attach.c
··· 9 9 __type(value, __u64); 10 10 } sock_map SEC(".maps"); 11 11 12 - SEC("sk_skb/skb_verdict") 12 + SEC("sk_skb") 13 13 int prog_skb_verdict(struct __sk_buff *skb) 14 14 { 15 15 return SK_DROP;
+1 -1
tools/testing/selftests/bpf/progs/test_sockmap_update.c
··· 24 24 __type(value, __u64); 25 25 } dst_sock_hash SEC(".maps"); 26 26 27 - SEC("classifier/copy_sock_map") 27 + SEC("tc") 28 28 int copy_sock_map(void *ctx) 29 29 { 30 30 struct bpf_sock *sk;
+2 -2
tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c
··· 28 28 __uint(type, BPF_MAP_TYPE_STACK_TRACE); 29 29 __uint(max_entries, 128); 30 30 __uint(map_flags, BPF_F_STACK_BUILD_ID); 31 - __uint(key_size, sizeof(__u32)); 32 - __uint(value_size, sizeof(stack_trace_t)); 31 + __type(key, __u32); 32 + __type(value, stack_trace_t); 33 33 } stackmap SEC(".maps"); 34 34 35 35 struct {
+2 -2
tools/testing/selftests/bpf/progs/test_stacktrace_map.c
··· 27 27 struct { 28 28 __uint(type, BPF_MAP_TYPE_STACK_TRACE); 29 29 __uint(max_entries, 16384); 30 - __uint(key_size, sizeof(__u32)); 31 - __uint(value_size, sizeof(stack_trace_t)); 30 + __type(key, __u32); 31 + __type(value, stack_trace_t); 32 32 } stackmap SEC(".maps"); 33 33 34 34 struct {
+1 -1
tools/testing/selftests/bpf/progs/test_tc_bpf.c
··· 5 5 6 6 /* Dummy prog to test TC-BPF API */ 7 7 8 - SEC("classifier") 8 + SEC("tc") 9 9 int cls(struct __sk_buff *skb) 10 10 { 11 11 return 0;
+3 -3
tools/testing/selftests/bpf/progs/test_tc_neigh.c
··· 70 70 return v6_equal(ip6h->daddr, addr); 71 71 } 72 72 73 - SEC("classifier/chk_egress") 73 + SEC("tc") 74 74 int tc_chk(struct __sk_buff *skb) 75 75 { 76 76 void *data_end = ctx_ptr(skb->data_end); ··· 83 83 return !raw[0] && !raw[1] && !raw[2] ? TC_ACT_SHOT : TC_ACT_OK; 84 84 } 85 85 86 - SEC("classifier/dst_ingress") 86 + SEC("tc") 87 87 int tc_dst(struct __sk_buff *skb) 88 88 { 89 89 __u8 zero[ETH_ALEN * 2]; ··· 108 108 return bpf_redirect_neigh(IFINDEX_SRC, NULL, 0, 0); 109 109 } 110 110 111 - SEC("classifier/src_ingress") 111 + SEC("tc") 112 112 int tc_src(struct __sk_buff *skb) 113 113 { 114 114 __u8 zero[ETH_ALEN * 2];
+3 -3
tools/testing/selftests/bpf/progs/test_tc_neigh_fib.c
··· 75 75 return 0; 76 76 } 77 77 78 - SEC("classifier/chk_egress") 78 + SEC("tc") 79 79 int tc_chk(struct __sk_buff *skb) 80 80 { 81 81 void *data_end = ctx_ptr(skb->data_end); ··· 143 143 /* these are identical, but keep them separate for compatibility with the 144 144 * section names expected by test_tc_redirect.sh 145 145 */ 146 - SEC("classifier/dst_ingress") 146 + SEC("tc") 147 147 int tc_dst(struct __sk_buff *skb) 148 148 { 149 149 return tc_redir(skb); 150 150 } 151 151 152 - SEC("classifier/src_ingress") 152 + SEC("tc") 153 153 int tc_src(struct __sk_buff *skb) 154 154 { 155 155 return tc_redir(skb);
+5 -5
tools/testing/selftests/bpf/progs/test_tc_peer.c
··· 16 16 static const __u8 src_mac[] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55}; 17 17 static const __u8 dst_mac[] = {0x00, 0x22, 0x33, 0x44, 0x55, 0x66}; 18 18 19 - SEC("classifier/chk_egress") 19 + SEC("tc") 20 20 int tc_chk(struct __sk_buff *skb) 21 21 { 22 22 return TC_ACT_SHOT; 23 23 } 24 24 25 - SEC("classifier/dst_ingress") 25 + SEC("tc") 26 26 int tc_dst(struct __sk_buff *skb) 27 27 { 28 28 return bpf_redirect_peer(IFINDEX_SRC, 0); 29 29 } 30 30 31 - SEC("classifier/src_ingress") 31 + SEC("tc") 32 32 int tc_src(struct __sk_buff *skb) 33 33 { 34 34 return bpf_redirect_peer(IFINDEX_DST, 0); 35 35 } 36 36 37 - SEC("classifier/dst_ingress_l3") 37 + SEC("tc") 38 38 int tc_dst_l3(struct __sk_buff *skb) 39 39 { 40 40 return bpf_redirect(IFINDEX_SRC, 0); 41 41 } 42 42 43 - SEC("classifier/src_ingress_l3") 43 + SEC("tc") 44 44 int tc_src_l3(struct __sk_buff *skb) 45 45 { 46 46 __u16 proto = skb->protocol;
+2 -2
tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c
··· 148 148 bpf_sk_release(sk); 149 149 } 150 150 151 - SEC("clsact/check_syncookie") 151 + SEC("tc") 152 152 int check_syncookie_clsact(struct __sk_buff *skb) 153 153 { 154 154 check_syncookie(skb, (void *)(long)skb->data, ··· 156 156 return TC_ACT_OK; 157 157 } 158 158 159 - SEC("xdp/check_syncookie") 159 + SEC("xdp") 160 160 int check_syncookie_xdp(struct xdp_md *ctx) 161 161 { 162 162 check_syncookie(ctx, (void *)(long)ctx->data,
+1 -1
tools/testing/selftests/bpf/progs/test_tcp_hdr_options.c
··· 594 594 return CG_OK; 595 595 } 596 596 597 - SEC("sockops/estab") 597 + SEC("sockops") 598 598 int estab(struct bpf_sock_ops *skops) 599 599 { 600 600 int true_val = 1;
+2 -2
tools/testing/selftests/bpf/progs/test_tcpnotify_kern.c
··· 24 24 struct { 25 25 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); 26 26 __uint(max_entries, 2); 27 - __uint(key_size, sizeof(int)); 28 - __uint(value_size, sizeof(__u32)); 27 + __type(key, int); 28 + __type(value, __u32); 29 29 } perf_event_map SEC(".maps"); 30 30 31 31 int _version SEC("version") = 1;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp.c
··· 210 210 return XDP_TX; 211 211 } 212 212 213 - SEC("xdp_tx_iptunnel") 213 + SEC("xdp") 214 214 int _xdp_tx_iptunnel(struct xdp_md *xdp) 215 215 { 216 216 void *data_end = (void *)(long)xdp->data_end;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c
··· 2 2 #include <linux/bpf.h> 3 3 #include <bpf/bpf_helpers.h> 4 4 5 - SEC("xdp_adjust_tail_grow") 5 + SEC("xdp") 6 6 int _xdp_adjust_tail_grow(struct xdp_md *xdp) 7 7 { 8 8 void *data_end = (void *)(long)xdp->data_end;
+1 -3
tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c
··· 9 9 #include <linux/if_ether.h> 10 10 #include <bpf/bpf_helpers.h> 11 11 12 - int _version SEC("version") = 1; 13 - 14 - SEC("xdp_adjust_tail_shrink") 12 + SEC("xdp") 15 13 int _xdp_adjust_tail_shrink(struct xdp_md *xdp) 16 14 { 17 15 void *data_end = (void *)(long)xdp->data_end;
+2 -2
tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
··· 36 36 37 37 struct { 38 38 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); 39 - __uint(key_size, sizeof(int)); 40 - __uint(value_size, sizeof(int)); 39 + __type(key, int); 40 + __type(value, int); 41 41 } perf_buf_map SEC(".maps"); 42 42 43 43 __u64 test_result_fentry = 0;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_devmap_helpers.c
··· 5 5 #include <linux/bpf.h> 6 6 #include <bpf/bpf_helpers.h> 7 7 8 - SEC("xdp_dm_log") 8 + SEC("xdp") 9 9 int xdpdm_devlog(struct xdp_md *ctx) 10 10 { 11 11 char fmt[] = "devmap redirect: dev %u -> dev %u len %u\n";
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_link.c
··· 5 5 6 6 char LICENSE[] SEC("license") = "GPL"; 7 7 8 - SEC("xdp/handler") 8 + SEC("xdp") 9 9 int xdp_handler(struct xdp_md *xdp) 10 10 { 11 11 return 0;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_loop.c
··· 206 206 return XDP_TX; 207 207 } 208 208 209 - SEC("xdp_tx_iptunnel") 209 + SEC("xdp") 210 210 int _xdp_tx_iptunnel(struct xdp_md *xdp) 211 211 { 212 212 void *data_end = (void *)(long)xdp->data_end;
+2 -2
tools/testing/selftests/bpf/progs/test_xdp_noinline.c
··· 797 797 return XDP_DROP; 798 798 } 799 799 800 - SEC("xdp-test-v4") 800 + SEC("xdp") 801 801 int balancer_ingress_v4(struct xdp_md *ctx) 802 802 { 803 803 void *data = (void *)(long)ctx->data; ··· 816 816 return XDP_DROP; 817 817 } 818 818 819 - SEC("xdp-test-v6") 819 + SEC("xdp") 820 820 int balancer_ingress_v6(struct xdp_md *ctx) 821 821 { 822 822 void *data = (void *)(long)ctx->data;
+2 -2
tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c
··· 12 12 __uint(max_entries, 4); 13 13 } cpu_map SEC(".maps"); 14 14 15 - SEC("xdp_redir") 15 + SEC("xdp") 16 16 int xdp_redir_prog(struct xdp_md *ctx) 17 17 { 18 18 return bpf_redirect_map(&cpu_map, 1, 0); 19 19 } 20 20 21 - SEC("xdp_dummy") 21 + SEC("xdp") 22 22 int xdp_dummy_prog(struct xdp_md *ctx) 23 23 { 24 24 return XDP_PASS;
+2 -2
tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
··· 9 9 __uint(max_entries, 4); 10 10 } dm_ports SEC(".maps"); 11 11 12 - SEC("xdp_redir") 12 + SEC("xdp") 13 13 int xdp_redir_prog(struct xdp_md *ctx) 14 14 { 15 15 return bpf_redirect_map(&dm_ports, 1, 0); ··· 18 18 /* invalid program on DEVMAP entry; 19 19 * SEC name means expected attach type not set 20 20 */ 21 - SEC("xdp_dummy") 21 + SEC("xdp") 22 22 int xdp_dummy_prog(struct xdp_md *ctx) 23 23 { 24 24 return XDP_PASS;
+33
tools/testing/selftests/bpf/progs/trace_vprintk.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2021 Facebook */ 3 + 4 + #include "vmlinux.h" 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + 8 + char _license[] SEC("license") = "GPL"; 9 + 10 + int null_data_vprintk_ret = 0; 11 + int trace_vprintk_ret = 0; 12 + int trace_vprintk_ran = 0; 13 + 14 + SEC("fentry/__x64_sys_nanosleep") 15 + int sys_enter(void *ctx) 16 + { 17 + static const char one[] = "1"; 18 + static const char three[] = "3"; 19 + static const char five[] = "5"; 20 + static const char seven[] = "7"; 21 + static const char nine[] = "9"; 22 + static const char f[] = "%pS\n"; 23 + 24 + /* runner doesn't search for \t, just ensure it compiles */ 25 + bpf_printk("\t"); 26 + 27 + trace_vprintk_ret = __bpf_vprintk("%s,%d,%s,%d,%s,%d,%s,%d,%s,%d %d\n", 28 + one, 2, three, 4, five, 6, seven, 8, nine, 10, ++trace_vprintk_ran); 29 + 30 + /* non-NULL fmt w/ NULL data should result in error */ 31 + null_data_vprintk_ret = bpf_trace_vprintk(f, sizeof(f), NULL, 0); 32 + return 0; 33 + }
+1 -1
tools/testing/selftests/bpf/progs/xdp_dummy.c
··· 4 4 #include <linux/bpf.h> 5 5 #include <bpf/bpf_helpers.h> 6 6 7 - SEC("xdp_dummy") 7 + SEC("xdp") 8 8 int xdp_dummy_prog(struct xdp_md *ctx) 9 9 { 10 10 return XDP_PASS;
+2 -2
tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c
··· 34 34 __uint(max_entries, 128); 35 35 } mac_map SEC(".maps"); 36 36 37 - SEC("xdp_redirect_map_multi") 37 + SEC("xdp") 38 38 int xdp_redirect_map_multi_prog(struct xdp_md *ctx) 39 39 { 40 40 void *data_end = (void *)(long)ctx->data_end; ··· 63 63 } 64 64 65 65 /* The following 2 progs are for 2nd devmap prog testing */ 66 - SEC("xdp_redirect_map_ingress") 66 + SEC("xdp") 67 67 int xdp_redirect_map_all_prog(struct xdp_md *ctx) 68 68 { 69 69 return bpf_redirect_map(&map_egress, 0,
+2 -2
tools/testing/selftests/bpf/progs/xdping_kern.c
··· 86 86 return XDP_TX; 87 87 } 88 88 89 - SEC("xdpclient") 89 + SEC("xdp") 90 90 int xdping_client(struct xdp_md *ctx) 91 91 { 92 92 void *data_end = (void *)(long)ctx->data_end; ··· 150 150 return XDP_TX; 151 151 } 152 152 153 - SEC("xdpserver") 153 + SEC("xdp") 154 154 int xdping_server(struct xdp_md *ctx) 155 155 { 156 156 void *data_end = (void *)(long)ctx->data_end;
+365
tools/testing/selftests/bpf/progs/xdpwall.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2021 Facebook */ 3 + #include <stdbool.h> 4 + #include <stdint.h> 5 + #include <linux/stddef.h> 6 + #include <linux/if_ether.h> 7 + #include <linux/in.h> 8 + #include <linux/in6.h> 9 + #include <linux/ip.h> 10 + #include <linux/ipv6.h> 11 + #include <linux/tcp.h> 12 + #include <linux/udp.h> 13 + #include <linux/bpf.h> 14 + #include <linux/types.h> 15 + #include <bpf/bpf_endian.h> 16 + #include <bpf/bpf_helpers.h> 17 + 18 + enum pkt_parse_err { 19 + NO_ERR, 20 + BAD_IP6_HDR, 21 + BAD_IP4GUE_HDR, 22 + BAD_IP6GUE_HDR, 23 + }; 24 + 25 + enum pkt_flag { 26 + TUNNEL = 0x1, 27 + TCP_SYN = 0x2, 28 + QUIC_INITIAL_FLAG = 0x4, 29 + TCP_ACK = 0x8, 30 + TCP_RST = 0x10 31 + }; 32 + 33 + struct v4_lpm_key { 34 + __u32 prefixlen; 35 + __u32 src; 36 + }; 37 + 38 + struct v4_lpm_val { 39 + struct v4_lpm_key key; 40 + __u8 val; 41 + }; 42 + 43 + struct { 44 + __uint(type, BPF_MAP_TYPE_HASH); 45 + __uint(max_entries, 16); 46 + __type(key, struct in6_addr); 47 + __type(value, bool); 48 + } v6_addr_map SEC(".maps"); 49 + 50 + struct { 51 + __uint(type, BPF_MAP_TYPE_HASH); 52 + __uint(max_entries, 16); 53 + __type(key, __u32); 54 + __type(value, bool); 55 + } v4_addr_map SEC(".maps"); 56 + 57 + struct { 58 + __uint(type, BPF_MAP_TYPE_LPM_TRIE); 59 + __uint(max_entries, 16); 60 + __uint(key_size, sizeof(struct v4_lpm_key)); 61 + __uint(value_size, sizeof(struct v4_lpm_val)); 62 + __uint(map_flags, BPF_F_NO_PREALLOC); 63 + } v4_lpm_val_map SEC(".maps"); 64 + 65 + struct { 66 + __uint(type, BPF_MAP_TYPE_ARRAY); 67 + __uint(max_entries, 16); 68 + __type(key, int); 69 + __type(value, __u8); 70 + } tcp_port_map SEC(".maps"); 71 + 72 + struct { 73 + __uint(type, BPF_MAP_TYPE_ARRAY); 74 + __uint(max_entries, 16); 75 + __type(key, int); 76 + __type(value, __u16); 77 + } udp_port_map SEC(".maps"); 78 + 79 + enum ip_type { V4 = 1, V6 = 2 }; 80 + 81 + struct fw_match_info { 82 + __u8 v4_src_ip_match; 83 + __u8 v6_src_ip_match; 84 + 
__u8 v4_src_prefix_match; 85 + __u8 v4_dst_prefix_match; 86 + __u8 tcp_dp_match; 87 + __u16 udp_sp_match; 88 + __u16 udp_dp_match; 89 + bool is_tcp; 90 + bool is_tcp_syn; 91 + }; 92 + 93 + struct pkt_info { 94 + enum ip_type type; 95 + union { 96 + struct iphdr *ipv4; 97 + struct ipv6hdr *ipv6; 98 + } ip; 99 + int sport; 100 + int dport; 101 + __u16 trans_hdr_offset; 102 + __u8 proto; 103 + __u8 flags; 104 + }; 105 + 106 + static __always_inline struct ethhdr *parse_ethhdr(void *data, void *data_end) 107 + { 108 + struct ethhdr *eth = data; 109 + 110 + if (eth + 1 > data_end) 111 + return NULL; 112 + 113 + return eth; 114 + } 115 + 116 + static __always_inline __u8 filter_ipv6_addr(const struct in6_addr *ipv6addr) 117 + { 118 + __u8 *leaf; 119 + 120 + leaf = bpf_map_lookup_elem(&v6_addr_map, ipv6addr); 121 + 122 + return leaf ? *leaf : 0; 123 + } 124 + 125 + static __always_inline __u8 filter_ipv4_addr(const __u32 ipaddr) 126 + { 127 + __u8 *leaf; 128 + 129 + leaf = bpf_map_lookup_elem(&v4_addr_map, &ipaddr); 130 + 131 + return leaf ? *leaf : 0; 132 + } 133 + 134 + static __always_inline __u8 filter_ipv4_lpm(const __u32 ipaddr) 135 + { 136 + struct v4_lpm_key v4_key = {}; 137 + struct v4_lpm_val *lpm_val; 138 + 139 + v4_key.src = ipaddr; 140 + v4_key.prefixlen = 32; 141 + 142 + lpm_val = bpf_map_lookup_elem(&v4_lpm_val_map, &v4_key); 143 + 144 + return lpm_val ? 
lpm_val->val : 0; 145 + } 146 + 147 + 148 + static __always_inline void 149 + filter_src_dst_ip(struct pkt_info* info, struct fw_match_info* match_info) 150 + { 151 + if (info->type == V6) { 152 + match_info->v6_src_ip_match = 153 + filter_ipv6_addr(&info->ip.ipv6->saddr); 154 + } else if (info->type == V4) { 155 + match_info->v4_src_ip_match = 156 + filter_ipv4_addr(info->ip.ipv4->saddr); 157 + match_info->v4_src_prefix_match = 158 + filter_ipv4_lpm(info->ip.ipv4->saddr); 159 + match_info->v4_dst_prefix_match = 160 + filter_ipv4_lpm(info->ip.ipv4->daddr); 161 + } 162 + } 163 + 164 + static __always_inline void * 165 + get_transport_hdr(__u16 offset, void *data, void *data_end) 166 + { 167 + if (offset > 255 || data + offset > data_end) 168 + return NULL; 169 + 170 + return data + offset; 171 + } 172 + 173 + static __always_inline bool tcphdr_only_contains_flag(struct tcphdr *tcp, 174 + __u32 FLAG) 175 + { 176 + return (tcp_flag_word(tcp) & 177 + (TCP_FLAG_ACK | TCP_FLAG_RST | TCP_FLAG_SYN | TCP_FLAG_FIN)) == FLAG; 178 + } 179 + 180 + static __always_inline void set_tcp_flags(struct pkt_info *info, 181 + struct tcphdr *tcp) { 182 + if (tcphdr_only_contains_flag(tcp, TCP_FLAG_SYN)) 183 + info->flags |= TCP_SYN; 184 + else if (tcphdr_only_contains_flag(tcp, TCP_FLAG_ACK)) 185 + info->flags |= TCP_ACK; 186 + else if (tcphdr_only_contains_flag(tcp, TCP_FLAG_RST)) 187 + info->flags |= TCP_RST; 188 + } 189 + 190 + static __always_inline bool 191 + parse_tcp(struct pkt_info *info, void *transport_hdr, void *data_end) 192 + { 193 + struct tcphdr *tcp = transport_hdr; 194 + 195 + if (tcp + 1 > data_end) 196 + return false; 197 + 198 + info->sport = bpf_ntohs(tcp->source); 199 + info->dport = bpf_ntohs(tcp->dest); 200 + set_tcp_flags(info, tcp); 201 + 202 + return true; 203 + } 204 + 205 + static __always_inline bool 206 + parse_udp(struct pkt_info *info, void *transport_hdr, void *data_end) 207 + { 208 + struct udphdr *udp = transport_hdr; 209 + 210 + if (udp + 1 > 
data_end) 211 + return false; 212 + 213 + info->sport = bpf_ntohs(udp->source); 214 + info->dport = bpf_ntohs(udp->dest); 215 + 216 + return true; 217 + } 218 + 219 + static __always_inline __u8 filter_tcp_port(int port) 220 + { 221 + __u8 *leaf = bpf_map_lookup_elem(&tcp_port_map, &port); 222 + 223 + return leaf ? *leaf : 0; 224 + } 225 + 226 + static __always_inline __u16 filter_udp_port(int port) 227 + { 228 + __u16 *leaf = bpf_map_lookup_elem(&udp_port_map, &port); 229 + 230 + return leaf ? *leaf : 0; 231 + } 232 + 233 + static __always_inline bool 234 + filter_transport_hdr(void *transport_hdr, void *data_end, 235 + struct pkt_info *info, struct fw_match_info *match_info) 236 + { 237 + if (info->proto == IPPROTO_TCP) { 238 + if (!parse_tcp(info, transport_hdr, data_end)) 239 + return false; 240 + 241 + match_info->is_tcp = true; 242 + match_info->is_tcp_syn = (info->flags & TCP_SYN) > 0; 243 + 244 + match_info->tcp_dp_match = filter_tcp_port(info->dport); 245 + } else if (info->proto == IPPROTO_UDP) { 246 + if (!parse_udp(info, transport_hdr, data_end)) 247 + return false; 248 + 249 + match_info->udp_dp_match = filter_udp_port(info->dport); 250 + match_info->udp_sp_match = filter_udp_port(info->sport); 251 + } 252 + 253 + return true; 254 + } 255 + 256 + static __always_inline __u8 257 + parse_gue_v6(struct pkt_info *info, struct ipv6hdr *ip6h, void *data_end) 258 + { 259 + struct udphdr *udp = (struct udphdr *)(ip6h + 1); 260 + void *encap_data = udp + 1; 261 + 262 + if (udp + 1 > data_end) 263 + return BAD_IP6_HDR; 264 + 265 + if (udp->dest != bpf_htons(6666)) 266 + return NO_ERR; 267 + 268 + info->flags |= TUNNEL; 269 + 270 + if (encap_data + 1 > data_end) 271 + return BAD_IP6GUE_HDR; 272 + 273 + if (*(__u8 *)encap_data & 0x30) { 274 + struct ipv6hdr *inner_ip6h = encap_data; 275 + 276 + if (inner_ip6h + 1 > data_end) 277 + return BAD_IP6GUE_HDR; 278 + 279 + info->type = V6; 280 + info->proto = inner_ip6h->nexthdr; 281 + info->ip.ipv6 = inner_ip6h; 282 + 
info->trans_hdr_offset += sizeof(struct ipv6hdr) + sizeof(struct udphdr); 283 + } else { 284 + struct iphdr *inner_ip4h = encap_data; 285 + 286 + if (inner_ip4h + 1 > data_end) 287 + return BAD_IP6GUE_HDR; 288 + 289 + info->type = V4; 290 + info->proto = inner_ip4h->protocol; 291 + info->ip.ipv4 = inner_ip4h; 292 + info->trans_hdr_offset += sizeof(struct iphdr) + sizeof(struct udphdr); 293 + } 294 + 295 + return NO_ERR; 296 + } 297 + 298 + static __always_inline __u8 parse_ipv6_gue(struct pkt_info *info, 299 + void *data, void *data_end) 300 + { 301 + struct ipv6hdr *ip6h = data + sizeof(struct ethhdr); 302 + 303 + if (ip6h + 1 > data_end) 304 + return BAD_IP6_HDR; 305 + 306 + info->proto = ip6h->nexthdr; 307 + info->ip.ipv6 = ip6h; 308 + info->type = V6; 309 + info->trans_hdr_offset = sizeof(struct ethhdr) + sizeof(struct ipv6hdr); 310 + 311 + if (info->proto == IPPROTO_UDP) 312 + return parse_gue_v6(info, ip6h, data_end); 313 + 314 + return NO_ERR; 315 + } 316 + 317 + SEC("xdp") 318 + int edgewall(struct xdp_md *ctx) 319 + { 320 + void *data_end = (void *)(long)(ctx->data_end); 321 + void *data = (void *)(long)(ctx->data); 322 + struct fw_match_info match_info = {}; 323 + struct pkt_info info = {}; 324 + __u8 parse_err = NO_ERR; 325 + void *transport_hdr; 326 + struct ethhdr *eth; 327 + bool filter_res; 328 + __u32 proto; 329 + 330 + eth = parse_ethhdr(data, data_end); 331 + if (!eth) 332 + return XDP_DROP; 333 + 334 + proto = eth->h_proto; 335 + if (proto != bpf_htons(ETH_P_IPV6)) 336 + return XDP_DROP; 337 + 338 + if (parse_ipv6_gue(&info, data, data_end)) 339 + return XDP_DROP; 340 + 341 + if (info.proto == IPPROTO_ICMPV6) 342 + return XDP_PASS; 343 + 344 + if (info.proto != IPPROTO_TCP && info.proto != IPPROTO_UDP) 345 + return XDP_DROP; 346 + 347 + filter_src_dst_ip(&info, &match_info); 348 + 349 + transport_hdr = get_transport_hdr(info.trans_hdr_offset, data, 350 + data_end); 351 + if (!transport_hdr) 352 + return XDP_DROP; 353 + 354 + filter_res = 
filter_transport_hdr(transport_hdr, data_end, 355 + &info, &match_info); 356 + if (!filter_res) 357 + return XDP_DROP; 358 + 359 + if (match_info.is_tcp && !match_info.is_tcp_syn) 360 + return XDP_PASS; 361 + 362 + return XDP_DROP; 363 + } 364 + 365 + char LICENSE[] SEC("license") = "GPL";
+9 -13
tools/testing/selftests/bpf/test_bpftool.py
··· 57 57 return f(*args, iface, **kwargs) 58 58 return wrapper 59 59 60 + DMESG_EMITTING_HELPERS = [ 61 + "bpf_probe_write_user", 62 + "bpf_trace_printk", 63 + "bpf_trace_vprintk", 64 + ] 60 65 61 66 class TestBpftool(unittest.TestCase): 62 67 @classmethod ··· 72 67 73 68 @default_iface 74 69 def test_feature_dev_json(self, iface): 75 - unexpected_helpers = [ 76 - "bpf_probe_write_user", 77 - "bpf_trace_printk", 78 - ] 70 + unexpected_helpers = DMESG_EMITTING_HELPERS 79 71 expected_keys = [ 80 72 "syscall_config", 81 73 "program_types", ··· 96 94 bpftool_json(["feature", "probe"]), 97 95 bpftool_json(["feature"]), 98 96 ] 99 - unexpected_helpers = [ 100 - "bpf_probe_write_user", 101 - "bpf_trace_printk", 102 - ] 97 + unexpected_helpers = DMESG_EMITTING_HELPERS 103 98 expected_keys = [ 104 99 "syscall_config", 105 100 "system_config", ··· 120 121 bpftool_json(["feature", "probe", "kernel", "full"]), 121 122 bpftool_json(["feature", "probe", "full"]), 122 123 ] 123 - expected_helpers = [ 124 - "bpf_probe_write_user", 125 - "bpf_trace_printk", 126 - ] 124 + expected_helpers = DMESG_EMITTING_HELPERS 127 125 128 126 for tc in test_cases: 129 127 # Check if expected helpers are included at least once in any ··· 153 157 not_full_set.add(helper) 154 158 155 159 self.assertCountEqual(full_set - not_full_set, 156 - {"bpf_probe_write_user", "bpf_trace_printk"}) 160 + set(DMESG_EMITTING_HELPERS)) 157 161 self.assertCountEqual(not_full_set - full_set, set()) 158 162 159 163 def test_feature_macros(self):
+2 -2
tools/testing/selftests/bpf/test_tcp_check_syncookie.sh
··· 76 76 TEST_IF=lo 77 77 MAX_PING_TRIES=5 78 78 BPF_PROG_OBJ="${DIR}/test_tcp_check_syncookie_kern.o" 79 - CLSACT_SECTION="clsact/check_syncookie" 80 - XDP_SECTION="xdp/check_syncookie" 79 + CLSACT_SECTION="tc" 80 + XDP_SECTION="xdp" 81 81 BPF_PROG_ID=0 82 82 PROG="${DIR}/test_tcp_check_syncookie_user" 83 83
+3 -2
tools/testing/selftests/bpf/test_tunnel.sh
··· 168 168 ip netns exec at_ns0 \ 169 169 ip link set dev $DEV_NS address 52:54:00:d9:01:00 up 170 170 ip netns exec at_ns0 ip addr add dev $DEV_NS 10.1.1.100/24 171 - ip netns exec at_ns0 arp -s 10.1.1.200 52:54:00:d9:02:00 171 + ip netns exec at_ns0 \ 172 + ip neigh add 10.1.1.200 lladdr 52:54:00:d9:02:00 dev $DEV_NS 172 173 ip netns exec at_ns0 iptables -A OUTPUT -j MARK --set-mark 0x800FF 173 174 174 175 # root namespace 175 176 ip link add dev $DEV type $TYPE external gbp dstport 4789 176 177 ip link set dev $DEV address 52:54:00:d9:02:00 up 177 178 ip addr add dev $DEV 10.1.1.200/24 178 - arp -s 10.1.1.100 52:54:00:d9:01:00 179 + ip neigh add 10.1.1.100 lladdr 52:54:00:d9:01:00 dev $DEV 179 180 } 180 181 181 182 add_ip6vxlan_tunnel()
+4 -1
tools/testing/selftests/bpf/test_xdp_meta.sh
··· 1 1 #!/bin/sh 2 2 3 + # Kselftest framework requirement - SKIP code is 4. 4 + readonly KSFT_SKIP=4 5 + 3 6 cleanup() 4 7 { 5 8 if [ "$?" = "0" ]; then ··· 20 17 ip link set dev lo xdp off 2>/dev/null > /dev/null 21 18 if [ $? -ne 0 ];then 22 19 echo "selftests: [SKIP] Could not run test without the ip xdp support" 23 - exit 0 20 + exit $KSFT_SKIP 24 21 fi 25 22 set -e 26 23
+2 -2
tools/testing/selftests/bpf/test_xdp_redirect.sh
··· 52 52 return 0 53 53 fi 54 54 55 - ip -n ns1 link set veth11 $xdpmode obj xdp_dummy.o sec xdp_dummy &> /dev/null 56 - ip -n ns2 link set veth22 $xdpmode obj xdp_dummy.o sec xdp_dummy &> /dev/null 55 + ip -n ns1 link set veth11 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null 56 + ip -n ns2 link set veth22 $xdpmode obj xdp_dummy.o sec xdp &> /dev/null 57 57 ip link set dev veth1 $xdpmode obj test_xdp_redirect.o sec redirect_to_222 &> /dev/null 58 58 ip link set dev veth2 $xdpmode obj test_xdp_redirect.o sec redirect_to_111 &> /dev/null 59 59
+1 -1
tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
··· 88 88 # Add a neigh entry for IPv4 ping test 89 89 ip -n ns$i neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0 90 90 ip -n ns$i link set veth0 $mode obj \ 91 - xdp_dummy.o sec xdp_dummy &> /dev/null || \ 91 + xdp_dummy.o sec xdp &> /dev/null || \ 92 92 { test_fail "Unable to load dummy xdp" && exit 1; } 93 93 IFACES="$IFACES veth$i" 94 94 veth_mac[$i]=$(ip link show veth$i | awk '/link\/ether/ {print $2}')
+2 -2
tools/testing/selftests/bpf/test_xdp_veth.sh
··· 107 107 ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1 108 108 ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2 109 109 110 - ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy 110 + ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp 111 111 ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp 112 - ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy 112 + ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp 113 113 114 114 trap cleanup EXIT 115 115
+5 -2
tools/testing/selftests/bpf/test_xdp_vlan.sh
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # Author: Jesper Dangaard Brouer <hawk@kernel.org> 4 4 5 + # Kselftest framework requirement - SKIP code is 4. 6 + readonly KSFT_SKIP=4 7 + 5 8 # Allow wrapper scripts to name test 6 9 if [ -z "$TESTNAME" ]; then 7 10 TESTNAME=xdp_vlan ··· 97 94 -h | --help ) 98 95 usage; 99 96 echo "selftests: $TESTNAME [SKIP] usage help info requested" 100 - exit 0 97 + exit $KSFT_SKIP 101 98 ;; 102 99 * ) 103 100 shift ··· 120 117 ip link set dev lo xdpgeneric off 2>/dev/null > /dev/null 121 118 if [ $? -ne 0 ]; then 122 119 echo "selftests: $TESTNAME [SKIP] need ip xdp support" 123 - exit 0 120 + exit $KSFT_SKIP 124 121 fi 125 122 126 123 # Interactive mode likely require us to cleanup netns
+161
tools/testing/selftests/bpf/verifier/spill_fill.c
··· 104 104 .result = ACCEPT, 105 105 .retval = POINTER_VALUE, 106 106 }, 107 + { 108 + "Spill and refill a u32 const scalar. Offset to skb->data", 109 + .insns = { 110 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 111 + offsetof(struct __sk_buff, data)), 112 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 113 + offsetof(struct __sk_buff, data_end)), 114 + /* r4 = 20 */ 115 + BPF_MOV32_IMM(BPF_REG_4, 20), 116 + /* *(u32 *)(r10 -8) = r4 */ 117 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 118 + /* r4 = *(u32 *)(r10 -8) */ 119 + BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8), 120 + /* r0 = r2 */ 121 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 122 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv20 */ 123 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 124 + /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=inv20 */ 125 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 126 + /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=inv20 */ 127 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 128 + BPF_MOV64_IMM(BPF_REG_0, 0), 129 + BPF_EXIT_INSN(), 130 + }, 131 + .result = ACCEPT, 132 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 133 + }, 134 + { 135 + "Spill a u32 const, refill from another half of the uninit u32 from the stack", 136 + .insns = { 137 + /* r4 = 20 */ 138 + BPF_MOV32_IMM(BPF_REG_4, 20), 139 + /* *(u32 *)(r10 -8) = r4 */ 140 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 141 + /* r4 = *(u32 *)(r10 -4) fp-8=????rrrr*/ 142 + BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4), 143 + BPF_MOV64_IMM(BPF_REG_0, 0), 144 + BPF_EXIT_INSN(), 145 + }, 146 + .result = REJECT, 147 + .errstr = "invalid read from stack off -4+0 size 4", 148 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 149 + }, 150 + { 151 + "Spill a u32 const scalar. Refill as u16. 
Offset to skb->data", 152 + .insns = { 153 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 154 + offsetof(struct __sk_buff, data)), 155 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 156 + offsetof(struct __sk_buff, data_end)), 157 + /* r4 = 20 */ 158 + BPF_MOV32_IMM(BPF_REG_4, 20), 159 + /* *(u32 *)(r10 -8) = r4 */ 160 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 161 + /* r4 = *(u16 *)(r10 -8) */ 162 + BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), 163 + /* r0 = r2 */ 164 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 165 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 166 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 167 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 168 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 169 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 170 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 171 + BPF_MOV64_IMM(BPF_REG_0, 0), 172 + BPF_EXIT_INSN(), 173 + }, 174 + .result = REJECT, 175 + .errstr = "invalid access to packet", 176 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 177 + }, 178 + { 179 + "Spill a u32 const scalar. Refill as u16 from fp-6. 
Offset to skb->data", 180 + .insns = { 181 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 182 + offsetof(struct __sk_buff, data)), 183 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 184 + offsetof(struct __sk_buff, data_end)), 185 + /* r4 = 20 */ 186 + BPF_MOV32_IMM(BPF_REG_4, 20), 187 + /* *(u32 *)(r10 -8) = r4 */ 188 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 189 + /* r4 = *(u16 *)(r10 -6) */ 190 + BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -6), 191 + /* r0 = r2 */ 192 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 193 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 194 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 195 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 196 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 197 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 198 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 199 + BPF_MOV64_IMM(BPF_REG_0, 0), 200 + BPF_EXIT_INSN(), 201 + }, 202 + .result = REJECT, 203 + .errstr = "invalid access to packet", 204 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 205 + }, 206 + { 207 + "Spill and refill a u32 const scalar at non 8byte aligned stack addr. 
Offset to skb->data", 208 + .insns = { 209 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 210 + offsetof(struct __sk_buff, data)), 211 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 212 + offsetof(struct __sk_buff, data_end)), 213 + /* r4 = 20 */ 214 + BPF_MOV32_IMM(BPF_REG_4, 20), 215 + /* *(u32 *)(r10 -8) = r4 */ 216 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 217 + /* *(u32 *)(r10 -4) = r4 */ 218 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4), 219 + /* r4 = *(u32 *)(r10 -4), */ 220 + BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4), 221 + /* r0 = r2 */ 222 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 223 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=U32_MAX */ 224 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 225 + /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */ 226 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 227 + /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */ 228 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 229 + BPF_MOV64_IMM(BPF_REG_0, 0), 230 + BPF_EXIT_INSN(), 231 + }, 232 + .result = REJECT, 233 + .errstr = "invalid access to packet", 234 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 235 + }, 236 + { 237 + "Spill and refill a umax=40 bounded scalar. 
Offset to skb->data", 238 + .insns = { 239 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 240 + offsetof(struct __sk_buff, data)), 241 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 242 + offsetof(struct __sk_buff, data_end)), 243 + BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1, 244 + offsetof(struct __sk_buff, tstamp)), 245 + BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2), 246 + BPF_MOV64_IMM(BPF_REG_0, 0), 247 + BPF_EXIT_INSN(), 248 + /* *(u32 *)(r10 -8) = r4 R4=inv,umax=40 */ 249 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 250 + /* r4 = (*u32 *)(r10 - 8) */ 251 + BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8), 252 + /* r2 += r4 R2=pkt R4=inv,umax=40 */ 253 + BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_4), 254 + /* r0 = r2 R2=pkt,umax=40 R4=inv,umax=40 */ 255 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 256 + /* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */ 257 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20), 258 + /* if (r2 > r3) R0=pkt,umax=40 R2=pkt,off=20,umax=40 */ 259 + BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 1), 260 + /* r0 = *(u32 *)r0 R0=pkt,r=20,umax=40 R2=pkt,off=20,r=20,umax=40 */ 261 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), 262 + BPF_MOV64_IMM(BPF_REG_0, 0), 263 + BPF_EXIT_INSN(), 264 + }, 265 + .result = ACCEPT, 266 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 267 + },
+2 -3
tools/testing/selftests/bpf/xdping.c
··· 178 178 return 1; 179 179 } 180 180 181 - main_prog = bpf_object__find_program_by_title(obj, 182 - server ? "xdpserver" : 183 - "xdpclient"); 181 + main_prog = bpf_object__find_program_by_name(obj, 182 + server ? "xdping_server" : "xdping_client"); 184 183 if (main_prog) 185 184 prog_fd = bpf_program__fd(main_prog); 186 185 if (!main_prog || prog_fd < 0) {
+103 -30
tools/testing/selftests/bpf/xdpxceiver.c
··· 384 384 ifobj->umem = &ifobj->umem_arr[0]; 385 385 ifobj->xsk = &ifobj->xsk_arr[0]; 386 386 ifobj->use_poll = false; 387 + ifobj->pacing_on = true; 387 388 ifobj->pkt_stream = test->pkt_stream_default; 388 389 389 390 if (i == 0) { ··· 444 443 static void test_spec_set_name(struct test_spec *test, const char *name) 445 444 { 446 445 strncpy(test->name, name, MAX_TEST_NAME_SIZE); 446 + } 447 + 448 + static void pkt_stream_reset(struct pkt_stream *pkt_stream) 449 + { 450 + if (pkt_stream) 451 + pkt_stream->rx_pkt_nb = 0; 447 452 } 448 453 449 454 static struct pkt *pkt_stream_get_pkt(struct pkt_stream *pkt_stream, u32 pkt_nb) ··· 514 507 515 508 pkt_stream->nb_pkts = nb_pkts; 516 509 for (i = 0; i < nb_pkts; i++) { 517 - pkt_stream->pkts[i].addr = (i % umem->num_frames) * umem->frame_size + 518 - DEFAULT_OFFSET; 510 + pkt_stream->pkts[i].addr = (i % umem->num_frames) * umem->frame_size; 519 511 pkt_stream->pkts[i].len = pkt_len; 520 512 pkt_stream->pkts[i].payload = i; 521 513 ··· 542 536 test->ifobj_rx->pkt_stream = pkt_stream; 543 537 } 544 538 545 - static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, u32 offset) 539 + static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset) 546 540 { 547 541 struct xsk_umem_info *umem = test->ifobj_tx->umem; 548 542 struct pkt_stream *pkt_stream; 549 543 u32 i; 550 544 551 545 pkt_stream = pkt_stream_clone(umem, test->pkt_stream_default); 552 - for (i = 0; i < test->pkt_stream_default->nb_pkts; i += 2) { 546 + for (i = 1; i < test->pkt_stream_default->nb_pkts; i += 2) { 553 547 pkt_stream->pkts[i].addr = (i % umem->num_frames) * umem->frame_size + offset; 554 548 pkt_stream->pkts[i].len = pkt_len; 555 549 } ··· 641 635 fprintf(stdout, "---------------------------------------\n"); 642 636 } 643 637 638 + static bool is_offset_correct(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream, u64 addr, 639 + u64 pkt_stream_addr) 640 + { 641 + u32 headroom = umem->unaligned_mode ? 
0 : umem->frame_headroom; 642 + u32 offset = addr % umem->frame_size, expected_offset = 0; 643 + 644 + if (!pkt_stream->use_addr_for_fill) 645 + pkt_stream_addr = 0; 646 + 647 + expected_offset += (pkt_stream_addr + headroom + XDP_PACKET_HEADROOM) % umem->frame_size; 648 + 649 + if (offset == expected_offset) 650 + return true; 651 + 652 + ksft_test_result_fail("ERROR: [%s] expected [%u], got [%u]\n", __func__, expected_offset, 653 + offset); 654 + return false; 655 + } 656 + 644 657 static bool is_pkt_valid(struct pkt *pkt, void *buffer, u64 addr, u32 len) 645 658 { 646 659 void *data = xsk_umem__get_data(buffer, addr); ··· 742 717 struct pollfd *fds) 743 718 { 744 719 struct pkt *pkt = pkt_stream_get_next_rx_pkt(pkt_stream); 720 + struct xsk_umem_info *umem = xsk->umem; 745 721 u32 idx_rx = 0, idx_fq = 0, rcvd, i; 722 + u32 total = 0; 746 723 int ret; 747 724 748 725 while (pkt) { 749 726 rcvd = xsk_ring_cons__peek(&xsk->rx, BATCH_SIZE, &idx_rx); 750 727 if (!rcvd) { 751 - if (xsk_ring_prod__needs_wakeup(&xsk->umem->fq)) { 728 + if (xsk_ring_prod__needs_wakeup(&umem->fq)) { 752 729 ret = poll(fds, 1, POLL_TMOUT); 753 730 if (ret < 0) 754 731 exit_with_error(-ret); ··· 758 731 continue; 759 732 } 760 733 761 - ret = xsk_ring_prod__reserve(&xsk->umem->fq, rcvd, &idx_fq); 734 + ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq); 762 735 while (ret != rcvd) { 763 736 if (ret < 0) 764 737 exit_with_error(-ret); 765 - if (xsk_ring_prod__needs_wakeup(&xsk->umem->fq)) { 738 + if (xsk_ring_prod__needs_wakeup(&umem->fq)) { 766 739 ret = poll(fds, 1, POLL_TMOUT); 767 740 if (ret < 0) 768 741 exit_with_error(-ret); 769 742 } 770 - ret = xsk_ring_prod__reserve(&xsk->umem->fq, rcvd, &idx_fq); 743 + ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq); 771 744 } 772 745 773 746 for (i = 0; i < rcvd; i++) { ··· 784 757 785 758 orig = xsk_umem__extract_addr(addr); 786 759 addr = xsk_umem__add_offset_to_addr(addr); 787 - if (!is_pkt_valid(pkt, xsk->umem->buffer, addr, 
desc->len)) 760 + 761 + if (!is_pkt_valid(pkt, umem->buffer, addr, desc->len)) 762 + return; 763 + if (!is_offset_correct(umem, pkt_stream, addr, pkt->addr)) 788 764 return; 789 765 790 - *xsk_ring_prod__fill_addr(&xsk->umem->fq, idx_fq++) = orig; 766 + *xsk_ring_prod__fill_addr(&umem->fq, idx_fq++) = orig; 791 767 pkt = pkt_stream_get_next_rx_pkt(pkt_stream); 792 768 } 793 769 794 - xsk_ring_prod__submit(&xsk->umem->fq, rcvd); 770 + xsk_ring_prod__submit(&umem->fq, rcvd); 795 771 xsk_ring_cons__release(&xsk->rx, rcvd); 772 + 773 + pthread_mutex_lock(&pacing_mutex); 774 + pkts_in_flight -= rcvd; 775 + total += rcvd; 776 + if (pkts_in_flight < umem->num_frames) 777 + pthread_cond_signal(&pacing_cond); 778 + pthread_mutex_unlock(&pacing_mutex); 796 779 } 797 780 } 798 781 ··· 828 791 valid_pkts++; 829 792 } 830 793 794 + pthread_mutex_lock(&pacing_mutex); 795 + pkts_in_flight += valid_pkts; 796 + if (ifobject->pacing_on && pkts_in_flight >= ifobject->umem->num_frames - BATCH_SIZE) { 797 + kick_tx(xsk); 798 + pthread_cond_wait(&pacing_cond, &pacing_mutex); 799 + } 800 + pthread_mutex_unlock(&pacing_mutex); 801 + 831 802 xsk_ring_prod__submit(&xsk->tx, i); 832 803 xsk->outstanding_tx += valid_pkts; 833 - complete_pkts(xsk, BATCH_SIZE); 804 + complete_pkts(xsk, i); 834 805 806 + usleep(10); 835 807 return i; 836 808 } 837 809 ··· 859 813 fds.events = POLLOUT; 860 814 861 815 while (pkt_cnt < ifobject->pkt_stream->nb_pkts) { 862 - u32 sent; 863 - 864 816 if (ifobject->use_poll) { 865 817 int ret; 866 818 ··· 870 826 continue; 871 827 } 872 828 873 - sent = __send_pkts(ifobject, pkt_cnt); 874 - pkt_cnt += sent; 875 - usleep(10); 829 + pkt_cnt += __send_pkts(ifobject, pkt_cnt); 876 830 } 877 831 878 832 wait_for_tx_completion(ifobject->xsk); ··· 955 913 u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size; 956 914 u32 ctr = 0; 957 915 void *bufs; 916 + int ret; 958 917 959 918 bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0); 960 919 
if (bufs == MAP_FAILED) 961 920 exit_with_error(errno); 962 921 922 + ret = xsk_configure_umem(&ifobject->umem_arr[i], bufs, umem_sz); 923 + if (ret) 924 + exit_with_error(-ret); 925 + 963 926 while (ctr++ < SOCK_RECONF_CTR) { 964 - int ret; 965 - 966 - ret = xsk_configure_umem(&ifobject->umem_arr[i], bufs, umem_sz); 967 - if (ret) 968 - exit_with_error(-ret); 969 - 970 927 ret = xsk_configure_socket(&ifobject->xsk_arr[i], &ifobject->umem_arr[i], 971 928 ifobject, i); 972 929 if (!ret) ··· 1012 971 1013 972 static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream) 1014 973 { 1015 - u32 idx = 0, i; 974 + u32 idx = 0, i, buffers_to_fill; 1016 975 int ret; 1017 976 1018 - ret = xsk_ring_prod__reserve(&umem->fq, XSK_RING_PROD__DEFAULT_NUM_DESCS, &idx); 1019 - if (ret != XSK_RING_PROD__DEFAULT_NUM_DESCS) 977 + if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS) 978 + buffers_to_fill = umem->num_frames; 979 + else 980 + buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS; 981 + 982 + ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx); 983 + if (ret != buffers_to_fill) 1020 984 exit_with_error(ENOSPC); 1021 - for (i = 0; i < XSK_RING_PROD__DEFAULT_NUM_DESCS; i++) { 985 + for (i = 0; i < buffers_to_fill; i++) { 1022 986 u64 addr; 1023 987 1024 988 if (pkt_stream->use_addr_for_fill) { ··· 1033 987 break; 1034 988 addr = pkt->addr; 1035 989 } else { 1036 - addr = (i % umem->num_frames) * umem->frame_size + DEFAULT_OFFSET; 990 + addr = i * umem->frame_size; 1037 991 } 1038 992 1039 993 *xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr; 1040 994 } 1041 - xsk_ring_prod__submit(&umem->fq, XSK_RING_PROD__DEFAULT_NUM_DESCS); 995 + xsk_ring_prod__submit(&umem->fq, buffers_to_fill); 1042 996 } 1043 997 1044 998 static void *worker_testapp_validate_rx(void *arg) ··· 1078 1032 exit_with_error(errno); 1079 1033 1080 1034 test->current_step++; 1035 + pkt_stream_reset(ifobj_rx->pkt_stream); 1036 + pkts_in_flight = 0; 1081 1037 1082 
1038 /*Spawn RX thread */ 1083 1039 pthread_create(&t0, NULL, ifobj_rx->func_ptr, test); ··· 1156 1108 testapp_validate_traffic(test); 1157 1109 } 1158 1110 1111 + static void testapp_headroom(struct test_spec *test) 1112 + { 1113 + test_spec_set_name(test, "UMEM_HEADROOM"); 1114 + test->ifobj_rx->umem->frame_headroom = UMEM_HEADROOM_TEST_SIZE; 1115 + testapp_validate_traffic(test); 1116 + } 1117 + 1159 1118 static void testapp_stats(struct test_spec *test) 1160 1119 { 1161 1120 int i; ··· 1170 1115 for (i = 0; i < STAT_TEST_TYPE_MAX; i++) { 1171 1116 test_spec_reset(test); 1172 1117 stat_test_type = i; 1118 + /* No or few packets will be received so cannot pace packets */ 1119 + test->ifobj_tx->pacing_on = false; 1173 1120 1174 1121 switch (stat_test_type) { 1175 1122 case STAT_TEST_RX_DROPPED: ··· 1238 1181 test->ifobj_tx->umem->unaligned_mode = true; 1239 1182 test->ifobj_rx->umem->unaligned_mode = true; 1240 1183 /* Let half of the packets straddle a buffer boundrary */ 1241 - pkt_stream_replace_half(test, PKT_SIZE, test->ifobj_tx->umem->frame_size - 32); 1184 + pkt_stream_replace_half(test, PKT_SIZE, -PKT_SIZE / 2); 1242 1185 test->ifobj_rx->pkt_stream->use_addr_for_fill = true; 1243 1186 testapp_validate_traffic(test); 1244 1187 1245 1188 pkt_stream_restore_default(test); 1246 1189 return true; 1190 + } 1191 + 1192 + static void testapp_single_pkt(struct test_spec *test) 1193 + { 1194 + struct pkt pkts[] = {{0x1000, PKT_SIZE, 0, true}}; 1195 + 1196 + pkt_stream_generate_custom(test, pkts, ARRAY_SIZE(pkts)); 1197 + testapp_validate_traffic(test); 1198 + pkt_stream_restore_default(test); 1247 1199 } 1248 1200 1249 1201 static void testapp_invalid_desc(struct test_spec *test) ··· 1336 1270 test_spec_set_name(test, "RUN_TO_COMPLETION"); 1337 1271 testapp_validate_traffic(test); 1338 1272 break; 1273 + case TEST_TYPE_RUN_TO_COMPLETION_SINGLE_PKT: 1274 + test_spec_set_name(test, "RUN_TO_COMPLETION_SINGLE_PKT"); 1275 + testapp_single_pkt(test); 1276 + break; 1339 
1277 case TEST_TYPE_RUN_TO_COMPLETION_2K_FRAME: 1340 1278 test_spec_set_name(test, "RUN_TO_COMPLETION_2K_FRAME_SIZE"); 1341 1279 test->ifobj_tx->umem->frame_size = 2048; ··· 1374 1304 case TEST_TYPE_UNALIGNED: 1375 1305 if (!testapp_unaligned(test)) 1376 1306 return; 1307 + break; 1308 + case TEST_TYPE_HEADROOM: 1309 + testapp_headroom(test); 1377 1310 break; 1378 1311 default: 1379 1312 break;
+9 -2
tools/testing/selftests/bpf/xdpxceiver.h
··· 35 35 #define UDP_PKT_DATA_SIZE (UDP_PKT_SIZE - sizeof(struct udphdr)) 36 36 #define USLEEP_MAX 10000 37 37 #define SOCK_RECONF_CTR 10 38 - #define BATCH_SIZE 8 38 + #define BATCH_SIZE 64 39 39 #define POLL_TMOUT 1000 40 40 #define DEFAULT_PKT_CNT (4 * 1024) 41 41 #define DEFAULT_UMEM_BUFFERS (DEFAULT_PKT_CNT / 4) 42 42 #define UMEM_SIZE (DEFAULT_UMEM_BUFFERS * XSK_UMEM__DEFAULT_FRAME_SIZE) 43 43 #define RX_FULL_RXQSIZE 32 44 - #define DEFAULT_OFFSET 256 44 + #define UMEM_HEADROOM_TEST_SIZE 128 45 45 #define XSK_UMEM__INVALID_FRAME_SIZE (XSK_UMEM__DEFAULT_FRAME_SIZE + 1) 46 46 47 47 #define print_verbose(x...) do { if (opt_verbose) ksft_print_msg(x); } while (0) ··· 55 55 enum test_type { 56 56 TEST_TYPE_RUN_TO_COMPLETION, 57 57 TEST_TYPE_RUN_TO_COMPLETION_2K_FRAME, 58 + TEST_TYPE_RUN_TO_COMPLETION_SINGLE_PKT, 58 59 TEST_TYPE_POLL, 59 60 TEST_TYPE_UNALIGNED, 60 61 TEST_TYPE_ALIGNED_INV_DESC, 61 62 TEST_TYPE_ALIGNED_INV_DESC_2K_FRAME, 62 63 TEST_TYPE_UNALIGNED_INV_DESC, 64 + TEST_TYPE_HEADROOM, 63 65 TEST_TYPE_TEARDOWN, 64 66 TEST_TYPE_BIDI, 65 67 TEST_TYPE_STATS, ··· 138 136 bool tx_on; 139 137 bool rx_on; 140 138 bool use_poll; 139 + bool pacing_on; 141 140 u8 dst_mac[ETH_ALEN]; 142 141 u8 src_mac[ETH_ALEN]; 143 142 }; ··· 154 151 }; 155 152 156 153 pthread_barrier_t barr; 154 + pthread_mutex_t pacing_mutex = PTHREAD_MUTEX_INITIALIZER; 155 + pthread_cond_t pacing_cond = PTHREAD_COND_INITIALIZER; 156 + 157 + u32 pkts_in_flight; 157 158 158 159 #endif /* XDPXCEIVER_H */