
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf

Daniel Borkmann says:

====================
bpf 2022-08-10

We've added 23 non-merge commits during the last 7 day(s) which contain
a total of 19 files changed, 424 insertions(+), 35 deletions(-).

The main changes are:

1) Several fixes for BPF map iterator such as UAFs along with selftests, from Hou Tao.

2) Fix BPF syscall program's {copy,strncpy}_from_bpfptr() to not fault, from Jinghao Jia.

3) Reject BPF syscall programs calling BPF_PROG_RUN, from Alexei Starovoitov and YiFei Zhu.

4) Fix attach_btf_obj_id info to pick proper target BTF, from Stanislav Fomichev.

5) BPF design Q/A doc update to clarify what is not stable ABI, from Paul E. McKenney.

6) Fix BPF map's prealloc_lru_pop to not reinitialize, from Kumar Kartikeya Dwivedi.

7) Fix bpf_trampoline_put to avoid leaking ftrace hash, from Jiri Olsa.

8) Fix arm64 JIT to address sparse errors around BPF trampoline, from Xu Kuohai.

9) Fix arm64 JIT to use kvcalloc instead of kcalloc for internal program address
offset buffer, from Aijun Sun.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf: (23 commits)
selftests/bpf: Ensure sleepable program is rejected by hash map iter
selftests/bpf: Add write tests for sk local storage map iterator
selftests/bpf: Add tests for reading a dangling map iter fd
bpf: Only allow sleepable program for resched-able iterator
bpf: Check the validity of max_rdwr_access for sock local storage map iterator
bpf: Acquire map uref in .init_seq_private for sock{map,hash} iterator
bpf: Acquire map uref in .init_seq_private for sock local storage map iterator
bpf: Acquire map uref in .init_seq_private for hash map iterator
bpf: Acquire map uref in .init_seq_private for array map iterator
bpf: Disallow bpf programs call prog_run command.
bpf, arm64: Fix bpf trampoline instruction endianness
selftests/bpf: Add test for prealloc_lru_pop bug
bpf: Don't reinit map value in prealloc_lru_pop
bpf: Allow calling bpf_prog_test kfuncs in tracing programs
bpf, arm64: Allocate program buffer using kvcalloc instead of kcalloc
selftests/bpf: Excercise bpf_obj_get_info_by_fd for bpf2bpf
bpf: Use proper target btf when exporting attach_btf_obj_id
mptcp, btf: Add struct mptcp_sock definition when CONFIG_MPTCP is disabled
bpf: Cleanup ftrace hash in bpf_trampoline_put
BPF: Fix potential bad pointer dereference in bpf_sys_bpf()
...
====================

Link: https://lore.kernel.org/r/20220810190624.10748-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+424 -35
+25
Documentation/bpf/bpf_design_QA.rst
···
 subject to change and can break with newer kernels. BPF programs need to change
 accordingly when this happens.
 
+Q: Are places where kprobes can attach part of the stable ABI?
+--------------------------------------------------------------
+A: NO. The places to which kprobes can attach are internal implementation
+details, which means that they are subject to change and can break with
+newer kernels. BPF programs need to change accordingly when this happens.
+
 Q: How much stack space a BPF program uses?
 -------------------------------------------
 A: Currently all program types are limited to 512 bytes of stack
···
 functions has changed, both the in-tree and out-of-tree kernel tcp cc
 implementations have to be changed. The same goes for the bpf
 programs and they have to be adjusted accordingly.
+
+Q: Attaching to arbitrary kernel functions is an ABI?
+-----------------------------------------------------
+Q: BPF programs can be attached to many kernel functions. Do these
+kernel functions become part of the ABI?
+
+A: NO.
+
+The kernel function prototypes will change, and BPF programs attaching to
+them will need to change. The BPF compile-once-run-everywhere (CO-RE)
+should be used in order to make it easier to adapt your BPF programs to
+different versions of the kernel.
+
+Q: Marking a function with BTF_ID makes that function an ABI?
+-------------------------------------------------------------
+A: NO.
+
+The BTF_ID macro does not cause a function to become part of the ABI
+any more than does the EXPORT_SYMBOL_GPL macro.
+8 -8
arch/arm64/net/bpf_jit_comp.c
···
         memset(&ctx, 0, sizeof(ctx));
         ctx.prog = prog;
 
-        ctx.offset = kcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
+        ctx.offset = kvcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
         if (ctx.offset == NULL) {
                 prog = orig_prog;
                 goto out_off;
···
                         ctx.offset[i] *= AARCH64_INSN_SIZE;
                 bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
 out_off:
-                kfree(ctx.offset);
+                kvfree(ctx.offset);
                 kfree(jit_data);
                 prog->aux->jit_data = NULL;
         }
···
                       int args_off, int retval_off, int run_ctx_off,
                       bool save_ret)
 {
-        u32 *branch;
+        __le32 *branch;
         u64 enter_prog;
         u64 exit_prog;
         struct bpf_prog *p = l->link.prog;
···
 
         if (ctx->image) {
                 int offset = &ctx->image[ctx->idx] - branch;
-                *branch = A64_CBZ(1, A64_R(0), offset);
+                *branch = cpu_to_le32(A64_CBZ(1, A64_R(0), offset));
         }
 
         /* arg1: prog */
···
 
 static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
                                int args_off, int retval_off, int run_ctx_off,
-                               u32 **branches)
+                               __le32 **branches)
 {
         int i;
 
···
         struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
         struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
         bool save_ret;
-        u32 **branches = NULL;
+        __le32 **branches = NULL;
 
         /* trampoline stack layout:
          * [ parent ip ]
···
                                flags & BPF_TRAMP_F_RET_FENTRY_RET);
 
         if (fmod_ret->nr_links) {
-                branches = kcalloc(fmod_ret->nr_links, sizeof(u32 *),
+                branches = kcalloc(fmod_ret->nr_links, sizeof(__le32 *),
                                    GFP_KERNEL);
                 if (!branches)
                         return -ENOMEM;
···
         /* update the branches saved in invoke_bpf_mod_ret with cbnz */
         for (i = 0; i < fmod_ret->nr_links && ctx->image != NULL; i++) {
                 int offset = &ctx->image[ctx->idx] - branches[i];
-                *branches[i] = A64_CBNZ(1, A64_R(10), offset);
+                *branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset));
         }
 
         for (i = 0; i < fexit->nr_links; i++)
+6 -2
include/linux/bpfptr.h
···
 static inline int copy_from_bpfptr_offset(void *dst, bpfptr_t src,
                                           size_t offset, size_t size)
 {
-        return copy_from_sockptr_offset(dst, (sockptr_t) src, offset, size);
+        if (!bpfptr_is_kernel(src))
+                return copy_from_user(dst, src.user + offset, size);
+        return copy_from_kernel_nofault(dst, src.kernel + offset, size);
 }
 
 static inline int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size)
···
 
 static inline long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count)
 {
-        return strncpy_from_sockptr(dst, (sockptr_t) src, count);
+        if (bpfptr_is_kernel(src))
+                return strncpy_from_kernel_nofault(dst, src.kernel, count);
+        return strncpy_from_user(dst, src.user, count);
 }
 
 #endif /* _LINUX_BPFPTR_H */
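The helpers above work because a bpfptr_t tags whether the wrapped pointer is a kernel or a user address, and the fix branches to the matching copy primitive (non-faulting for kernel pointers) instead of blindly casting to sockptr_t. A rough userspace model of the tagged-pointer dispatch; the names here are illustrative, not the kernel's:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative model: a pointer plus a flag recording which address
 * space it belongs to, loosely like the kernel's bpfptr_t/sockptr_t.
 */
typedef struct {
        void *ptr;
        bool is_kernel;
} tagged_ptr;

/* Copy from a tagged pointer, branching on the address space the way
 * the fixed copy_from_bpfptr_offset() does. In the kernel the two arms
 * would be copy_from_user() and copy_from_kernel_nofault(); in this
 * single-address-space sketch both degenerate to memcpy().
 */
static int copy_from_tagged(void *dst, tagged_ptr src, size_t off, size_t n)
{
        if (!src.is_kernel) {
                memcpy(dst, (char *)src.ptr + off, n); /* ~copy_from_user() */
                return 0;
        }
        memcpy(dst, (char *)src.ptr + off, n); /* ~copy_from_kernel_nofault() */
        return 0;
}
```

The point of the real fix is the choice of primitive, not the copy itself: casting a bpfptr_t to sockptr_t routed kernel pointers through a faulting copy path, which is what bpf_sys_bpf() callers could trip over.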
+4
include/net/mptcp.h
···
 static inline struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk) { return NULL; }
 #endif
 
+#if !IS_ENABLED(CONFIG_MPTCP)
+struct mptcp_sock { };
+#endif
+
 #endif /* __NET_MPTCP_H */
+6
kernel/bpf/arraymap.c
···
                 seq_info->percpu_value_buf = value_buf;
         }
 
+        /* bpf_iter_attach_map() acquires a map uref, and the uref may be
+         * released before or in the middle of iterating map elements, so
+         * acquire an extra map uref for iterator.
+         */
+        bpf_map_inc_with_uref(map);
         seq_info->map = map;
         return 0;
 }
···
 {
         struct bpf_iter_seq_array_map_info *seq_info = priv_data;
 
+        bpf_map_put_with_uref(seq_info->map);
         kfree(seq_info->percpu_value_buf);
 }
 
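The pattern in this hunk, and in the matching hashtab, sock_map, and sk_storage hunks below, is to take an extra map reference in .init_seq_private and drop it in .fini_seq_private, so the map stays alive for the whole lifetime of the iterator fd even if the user closes the map fd and link fd first. A simplified userspace model of that acquire-in-init/release-in-fini discipline, with all names hypothetical:

```c
#include <assert.h>

/* Toy refcounted map: "freed" only when the count drops to zero. */
struct toy_map {
        int refcnt;
        int freed;
};

struct toy_iter {
        struct toy_map *map;
};

static void map_put(struct toy_map *map)
{
        if (--map->refcnt == 0)
                map->freed = 1; /* stand-in for actually freeing the map */
}

/* Like the fixed .init_seq_private: grab a reference so the map cannot
 * be freed while the iterator fd is still readable.
 */
static void iter_init(struct toy_iter *it, struct toy_map *map)
{
        map->refcnt++;
        it->map = map;
}

/* Like the new .fini_seq_private: drop that reference when the
 * iterator fd is released.
 */
static void iter_fini(struct toy_iter *it)
{
        map_put(it->map);
}
```

Without the extra reference, closing the map fd while the iterator is mid-walk frees the map underneath seq_info->map, which is exactly the use-after-free the "reading a dangling map iter fd" selftests below exercise.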
+10 -1
kernel/bpf/bpf_iter.c
···
         iter_priv->done_stop = true;
 }
 
+static inline bool bpf_iter_target_support_resched(const struct bpf_iter_target_info *tinfo)
+{
+        return tinfo->reg_info->feature & BPF_ITER_RESCHED;
+}
+
 static bool bpf_iter_support_resched(struct seq_file *seq)
 {
         struct bpf_iter_priv_data *iter_priv;
 
         iter_priv = container_of(seq->private, struct bpf_iter_priv_data,
                                  target_private);
-        return iter_priv->tinfo->reg_info->feature & BPF_ITER_RESCHED;
+        return bpf_iter_target_support_resched(iter_priv->tinfo);
 }
 
 /* maximum visited objects before bailing out */
···
         mutex_unlock(&targets_mutex);
         if (!tinfo)
                 return -ENOENT;
+
+        /* Only allow sleepable program for resched-able iterator */
+        if (prog->aux->sleepable && !bpf_iter_target_support_resched(tinfo))
+                return -EINVAL;
 
         link = kzalloc(sizeof(*link), GFP_USER | __GFP_NOWARN);
         if (!link)
+3 -5
kernel/bpf/hashtab.c
···
         struct htab_elem *l;
 
         if (node) {
-                u32 key_size = htab->map.key_size;
-
                 l = container_of(node, struct htab_elem, lru_node);
-                memcpy(l->key, key, key_size);
-                check_and_init_map_value(&htab->map,
-                                         l->key + round_up(key_size, 8));
+                memcpy(l->key, key, htab->map.key_size);
                 return l;
         }
 
···
                 seq_info->percpu_value_buf = value_buf;
         }
 
+        bpf_map_inc_with_uref(map);
         seq_info->map = map;
         seq_info->htab = container_of(map, struct bpf_htab, map);
         return 0;
···
 {
         struct bpf_iter_seq_hash_map_info *seq_info = priv_data;
 
+        bpf_map_put_with_uref(seq_info->map);
         kfree(seq_info->percpu_value_buf);
 }
 
+17 -10
kernel/bpf/syscall.c
···
                                    union bpf_attr __user *uattr)
 {
         struct bpf_prog_info __user *uinfo = u64_to_user_ptr(attr->info.info);
+        struct btf *attach_btf = bpf_prog_get_target_btf(prog);
         struct bpf_prog_info info;
         u32 info_len = attr->info.info_len;
         struct bpf_prog_kstats stats;
···
         if (prog->aux->btf)
                 info.btf_id = btf_obj_id(prog->aux->btf);
         info.attach_btf_id = prog->aux->attach_btf_id;
-        if (prog->aux->attach_btf)
-                info.attach_btf_obj_id = btf_obj_id(prog->aux->attach_btf);
-        else if (prog->aux->dst_prog)
-                info.attach_btf_obj_id = btf_obj_id(prog->aux->dst_prog->aux->attach_btf);
+        if (attach_btf)
+                info.attach_btf_obj_id = btf_obj_id(attach_btf);
 
         ulen = info.nr_func_info;
         info.nr_func_info = prog->aux->func_info_cnt;
···
 
 BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
 {
-        struct bpf_prog * __maybe_unused prog;
-        struct bpf_tramp_run_ctx __maybe_unused run_ctx;
-
         switch (cmd) {
         case BPF_MAP_CREATE:
         case BPF_MAP_UPDATE_ELEM:
···
         case BPF_LINK_CREATE:
         case BPF_RAW_TRACEPOINT_OPEN:
                 break;
+        default:
+                return -EINVAL;
+        }
+        return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
+}
+
+int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
+{
+        struct bpf_prog * __maybe_unused prog;
+        struct bpf_tramp_run_ctx __maybe_unused run_ctx;
+
+        switch (cmd) {
 #ifdef CONFIG_BPF_JIT /* __bpf_prog_enter_sleepable used by trampoline and JIT */
         case BPF_PROG_TEST_RUN:
                 if (attr->test.data_in || attr->test.data_out ||
···
                 return 0;
 #endif
         default:
-                return -EINVAL;
+                return ____bpf_sys_bpf(cmd, attr, size);
         }
-        return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
 }
-EXPORT_SYMBOL(bpf_sys_bpf);
+EXPORT_SYMBOL(kern_sys_bpf);
 
 static const struct bpf_func_proto bpf_sys_bpf_proto = {
         .func = bpf_sys_bpf,
+4 -1
kernel/bpf/trampoline.c
···
          * multiple rcu callbacks.
          */
         hlist_del(&tr->hlist);
-        kfree(tr->fops);
+        if (tr->fops) {
+                ftrace_free_filter(tr->fops);
+                kfree(tr->fops);
+        }
         kfree(tr);
 out:
         mutex_unlock(&trampoline_mutex);
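The leak fixed here is the classic "free the container but not what it owns" bug: tr->fops holds an ftrace filter hash that needs its own teardown via ftrace_free_filter() before the ops struct itself is kfree'd. A generic userspace sketch of that ownership pairing, with all types and names invented for illustration:

```c
#include <stdlib.h>

/* An object that owns a separately-allocated resource, loosely like a
 * ftrace_ops owning its filter hash.
 */
struct filter { int dummy; };
struct ops {
        struct filter *filter;
};

static int filters_freed;

/* Stand-in for ftrace_free_filter(): the owned resource has its own
 * release path that must run.
 */
static void free_filter(struct filter *f)
{
        free(f);
        filters_freed++;
}

/* The fixed teardown order: release the owned resource first, then the
 * container. free(ops) alone would leak ops->filter, which is the bug
 * bpf_trampoline_put() had.
 */
static void put_ops(struct ops *ops)
{
        if (ops) {
                free_filter(ops->filter);
                free(ops);
        }
}
```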
+1
net/bpf/test_run.c
···
         int ret;
 
         ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_prog_test_kfunc_set);
+        ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_prog_test_kfunc_set);
         return ret ?: register_btf_id_dtor_kfuncs(bpf_prog_test_dtor_kfunc,
                                                   ARRAY_SIZE(bpf_prog_test_dtor_kfunc),
                                                   THIS_MODULE);
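The `ret = ret ?: ...` line relies on the GNU C conditional-with-omitted-operand extension: `a ?: b` evaluates to `a` if it is nonzero, else `b`. Chaining registrations this way runs each step only while no earlier step has failed, and the first error wins. The same first-error-wins chaining in a small sketch (the step functions are made up for illustration):

```c
/* Two fallible steps, chained so the first nonzero (error) return
 * short-circuits the rest, mirroring the test_run.c init path above.
 */
static int step_ok(void)   { return 0; }
static int step_fail(void) { return -22; } /* -EINVAL */

static int init_chain(int fail_first)
{
        int ret;

        ret = fail_first ? step_fail() : step_ok();
        ret = ret ?: step_ok(); /* GNU ?: keeps an earlier error as-is */
        return ret ?: step_ok();
}
```

This keeps multi-step init/registration code flat: no nested `if (ret) return ret;` ladders, yet the returned value is always the first failure.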
+10 -2
net/core/bpf_sk_storage.c
···
 {
         struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
 
+        bpf_map_inc_with_uref(aux->map);
         seq_info->map = aux->map;
         return 0;
+}
+
+static void bpf_iter_fini_sk_storage_map(void *priv_data)
+{
+        struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
+
+        bpf_map_put_with_uref(seq_info->map);
 }
 
 static int bpf_iter_attach_map(struct bpf_prog *prog,
···
         if (map->map_type != BPF_MAP_TYPE_SK_STORAGE)
                 goto put_map;
 
-        if (prog->aux->max_rdonly_access > map->value_size) {
+        if (prog->aux->max_rdwr_access > map->value_size) {
                 err = -EACCES;
                 goto put_map;
         }
···
 static const struct bpf_iter_seq_info iter_seq_info = {
         .seq_ops = &bpf_sk_storage_map_seq_ops,
         .init_seq_private = bpf_iter_init_sk_storage_map,
-        .fini_seq_private = NULL,
+        .fini_seq_private = bpf_iter_fini_sk_storage_map,
         .seq_priv_size = sizeof(struct bpf_iter_seq_sk_storage_map_info),
 };
 
+19 -1
net/core/sock_map.c
···
 {
         struct sock_map_seq_info *info = priv_data;
 
+        bpf_map_inc_with_uref(aux->map);
         info->map = aux->map;
         return 0;
+}
+
+static void sock_map_fini_seq_private(void *priv_data)
+{
+        struct sock_map_seq_info *info = priv_data;
+
+        bpf_map_put_with_uref(info->map);
 }
 
 static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
         .seq_ops = &sock_map_seq_ops,
         .init_seq_private = sock_map_init_seq_private,
+        .fini_seq_private = sock_map_fini_seq_private,
         .seq_priv_size = sizeof(struct sock_map_seq_info),
 };
 
···
 };
 
 static int sock_hash_init_seq_private(void *priv_data,
-                                       struct bpf_iter_aux_info *aux)
+                                      struct bpf_iter_aux_info *aux)
 {
         struct sock_hash_seq_info *info = priv_data;
 
+        bpf_map_inc_with_uref(aux->map);
         info->map = aux->map;
         info->htab = container_of(aux->map, struct bpf_shtab, map);
         return 0;
 }
 
+static void sock_hash_fini_seq_private(void *priv_data)
+{
+        struct sock_hash_seq_info *info = priv_data;
+
+        bpf_map_put_with_uref(info->map);
+}
+
 static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
         .seq_ops = &sock_hash_seq_ops,
         .init_seq_private = sock_hash_init_seq_private,
+        .fini_seq_private = sock_hash_fini_seq_private,
         .seq_priv_size = sizeof(struct sock_hash_seq_info),
 };
 
+2 -2
tools/lib/bpf/skel_internal.h
···
         const char *errstr;
 };
 
-long bpf_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
+long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
 
 static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
                                unsigned int size)
 {
 #ifdef __KERNEL__
-        return bpf_sys_bpf(cmd, attr, size);
+        return kern_sys_bpf(cmd, attr, size);
 #else
         return syscall(__NR_bpf, cmd, attr, size);
 #endif
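skel_internal.h is compiled both into userspace loaders and into kernel-side light skeletons, so skel_sys_bpf() picks its backend at compile time on __KERNEL__: kern_sys_bpf() in the kernel, the raw bpf(2) syscall otherwise. The same one-call-site/two-backends pattern in a trivial form (function names invented):

```c
/* One call site, two implementations, selected at compile time;
 * mirrors skel_sys_bpf() dispatching to kern_sys_bpf() under
 * __KERNEL__ and to syscall(__NR_bpf, ...) in userspace.
 */
static long backend_kernel(int cmd)  { return cmd + 1000; }
static long backend_syscall(int cmd) { return cmd + 2000; }

static long do_cmd(int cmd)
{
#ifdef __KERNEL__
        return backend_kernel(cmd);
#else
        return backend_syscall(cmd);
#endif
}
```

Because the dispatch is preprocessor-level, renaming the kernel-side entry point (bpf_sys_bpf -> kern_sys_bpf, as in this series) only touches the declaration and the __KERNEL__ branch; the userspace path is untouched.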
+115 -1
tools/testing/selftests/bpf/prog_tests/bpf_iter.c
···
 #include "bpf_iter_test_kern6.skel.h"
 #include "bpf_iter_bpf_link.skel.h"
 #include "bpf_iter_ksym.skel.h"
+#include "bpf_iter_sockmap.skel.h"
 
 static int duration;
 
···
 
 free_link:
         bpf_link__destroy(link);
+}
+
+static void do_read_map_iter_fd(struct bpf_object_skeleton **skel, struct bpf_program *prog,
+                                struct bpf_map *map)
+{
+        DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+        union bpf_iter_link_info linfo;
+        struct bpf_link *link;
+        char buf[16] = {};
+        int iter_fd, len;
+
+        memset(&linfo, 0, sizeof(linfo));
+        linfo.map.map_fd = bpf_map__fd(map);
+        opts.link_info = &linfo;
+        opts.link_info_len = sizeof(linfo);
+        link = bpf_program__attach_iter(prog, &opts);
+        if (!ASSERT_OK_PTR(link, "attach_map_iter"))
+                return;
+
+        iter_fd = bpf_iter_create(bpf_link__fd(link));
+        if (!ASSERT_GE(iter_fd, 0, "create_map_iter")) {
+                bpf_link__destroy(link);
+                return;
+        }
+
+        /* Close link and map fd prematurely */
+        bpf_link__destroy(link);
+        bpf_object__destroy_skeleton(*skel);
+        *skel = NULL;
+
+        /* Try to let map free work to run first if map is freed */
+        usleep(100);
+        /* Memory used by both sock map and sock local storage map are
+         * freed after two synchronize_rcu() calls, so wait for it
+         */
+        kern_sync_rcu();
+        kern_sync_rcu();
+
+        /* Read after both map fd and link fd are closed */
+        while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
+                ;
+        ASSERT_GE(len, 0, "read_iterator");
+
+        close(iter_fd);
 }
 
 static int read_fd_into_buffer(int fd, char *buf, int size)
···
                 goto out;
         }
 
+        /* Sleepable program is prohibited for hash map iterator */
+        linfo.map.map_fd = map_fd;
+        link = bpf_program__attach_iter(skel->progs.sleepable_dummy_dump, &opts);
+        if (!ASSERT_ERR_PTR(link, "attach_sleepable_prog_to_iter"))
+                goto out;
+
         linfo.map.map_fd = map_fd;
         link = bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts);
         if (!ASSERT_OK_PTR(link, "attach_iter"))
···
         bpf_iter_bpf_array_map__destroy(skel);
 }
 
+static void test_bpf_array_map_iter_fd(void)
+{
+        struct bpf_iter_bpf_array_map *skel;
+
+        skel = bpf_iter_bpf_array_map__open_and_load();
+        if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_array_map__open_and_load"))
+                return;
+
+        do_read_map_iter_fd(&skel->skeleton, skel->progs.dump_bpf_array_map,
+                            skel->maps.arraymap1);
+
+        bpf_iter_bpf_array_map__destroy(skel);
+}
+
 static void test_bpf_percpu_array_map(void)
 {
         DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
···
         bpf_iter_bpf_sk_storage_helpers__destroy(skel);
 }
 
+static void test_bpf_sk_stoarge_map_iter_fd(void)
+{
+        struct bpf_iter_bpf_sk_storage_map *skel;
+
+        skel = bpf_iter_bpf_sk_storage_map__open_and_load();
+        if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_sk_storage_map__open_and_load"))
+                return;
+
+        do_read_map_iter_fd(&skel->skeleton, skel->progs.rw_bpf_sk_storage_map,
+                            skel->maps.sk_stg_map);
+
+        bpf_iter_bpf_sk_storage_map__destroy(skel);
+}
+
 static void test_bpf_sk_storage_map(void)
 {
         DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
···
         linfo.map.map_fd = map_fd;
         opts.link_info = &linfo;
         opts.link_info_len = sizeof(linfo);
-        link = bpf_program__attach_iter(skel->progs.dump_bpf_sk_storage_map, &opts);
+        link = bpf_program__attach_iter(skel->progs.oob_write_bpf_sk_storage_map, &opts);
+        err = libbpf_get_error(link);
+        if (!ASSERT_EQ(err, -EACCES, "attach_oob_write_iter")) {
+                if (!err)
+                        bpf_link__destroy(link);
+                goto out;
+        }
+
+        link = bpf_program__attach_iter(skel->progs.rw_bpf_sk_storage_map, &opts);
         if (!ASSERT_OK_PTR(link, "attach_iter"))
                 goto out;
 
···
         if (!ASSERT_GE(iter_fd, 0, "create_iter"))
                 goto free_link;
 
+        skel->bss->to_add_val = time(NULL);
         /* do some tests */
         while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
                 ;
···
 
         if (!ASSERT_EQ(skel->bss->val_sum, expected_val, "val_sum"))
                 goto close_iter;
+
+        for (i = 0; i < num_sockets; i++) {
+                err = bpf_map_lookup_elem(map_fd, &sock_fd[i], &val);
+                if (!ASSERT_OK(err, "map_lookup") ||
+                    !ASSERT_EQ(val, i + 1 + skel->bss->to_add_val, "check_map_value"))
+                        break;
+        }
 
 close_iter:
         close(iter_fd);
···
         bpf_iter_task_vma__destroy(skel);
 }
 
+void test_bpf_sockmap_map_iter_fd(void)
+{
+        struct bpf_iter_sockmap *skel;
+
+        skel = bpf_iter_sockmap__open_and_load();
+        if (!ASSERT_OK_PTR(skel, "bpf_iter_sockmap__open_and_load"))
+                return;
+
+        do_read_map_iter_fd(&skel->skeleton, skel->progs.copy, skel->maps.sockmap);
+
+        bpf_iter_sockmap__destroy(skel);
+}
+
 void test_bpf_iter(void)
 {
         if (test__start_subtest("btf_id_or_null"))
···
                 test_bpf_percpu_hash_map();
         if (test__start_subtest("bpf_array_map"))
                 test_bpf_array_map();
+        if (test__start_subtest("bpf_array_map_iter_fd"))
+                test_bpf_array_map_iter_fd();
         if (test__start_subtest("bpf_percpu_array_map"))
                 test_bpf_percpu_array_map();
         if (test__start_subtest("bpf_sk_storage_map"))
                 test_bpf_sk_storage_map();
+        if (test__start_subtest("bpf_sk_storage_map_iter_fd"))
+                test_bpf_sk_stoarge_map_iter_fd();
         if (test__start_subtest("bpf_sk_storage_delete"))
                 test_bpf_sk_storage_delete();
         if (test__start_subtest("bpf_sk_storage_get"))
···
                 test_link_iter();
         if (test__start_subtest("ksym"))
                 test_ksym_iter();
+        if (test__start_subtest("bpf_sockmap_map_iter_fd"))
+                test_bpf_sockmap_map_iter_fd();
 }
+95
tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
···
 #include <test_progs.h>
 #include <network_helpers.h>
 #include <bpf/btf.h>
+#include "bind4_prog.skel.h"
 
 typedef int (*test_cb)(struct bpf_object *obj);
 
···
                           prog_name, false, NULL);
 }
 
+static int find_prog_btf_id(const char *name, __u32 attach_prog_fd)
+{
+        struct bpf_prog_info info = {};
+        __u32 info_len = sizeof(info);
+        struct btf *btf;
+        int ret;
+
+        ret = bpf_obj_get_info_by_fd(attach_prog_fd, &info, &info_len);
+        if (ret)
+                return ret;
+
+        if (!info.btf_id)
+                return -EINVAL;
+
+        btf = btf__load_from_kernel_by_id(info.btf_id);
+        ret = libbpf_get_error(btf);
+        if (ret)
+                return ret;
+
+        ret = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
+        btf__free(btf);
+        return ret;
+}
+
+static int load_fentry(int attach_prog_fd, int attach_btf_id)
+{
+        LIBBPF_OPTS(bpf_prog_load_opts, opts,
+                    .expected_attach_type = BPF_TRACE_FENTRY,
+                    .attach_prog_fd = attach_prog_fd,
+                    .attach_btf_id = attach_btf_id,
+        );
+        struct bpf_insn insns[] = {
+                BPF_MOV64_IMM(BPF_REG_0, 0),
+                BPF_EXIT_INSN(),
+        };
+
+        return bpf_prog_load(BPF_PROG_TYPE_TRACING,
+                             "bind4_fentry",
+                             "GPL",
+                             insns,
+                             ARRAY_SIZE(insns),
+                             &opts);
+}
+
+static void test_fentry_to_cgroup_bpf(void)
+{
+        struct bind4_prog *skel = NULL;
+        struct bpf_prog_info info = {};
+        __u32 info_len = sizeof(info);
+        int cgroup_fd = -1;
+        int fentry_fd = -1;
+        int btf_id;
+
+        cgroup_fd = test__join_cgroup("/fentry_to_cgroup_bpf");
+        if (!ASSERT_GE(cgroup_fd, 0, "cgroup_fd"))
+                return;
+
+        skel = bind4_prog__open_and_load();
+        if (!ASSERT_OK_PTR(skel, "skel"))
+                goto cleanup;
+
+        skel->links.bind_v4_prog = bpf_program__attach_cgroup(skel->progs.bind_v4_prog, cgroup_fd);
+        if (!ASSERT_OK_PTR(skel->links.bind_v4_prog, "bpf_program__attach_cgroup"))
+                goto cleanup;
+
+        btf_id = find_prog_btf_id("bind_v4_prog", bpf_program__fd(skel->progs.bind_v4_prog));
+        if (!ASSERT_GE(btf_id, 0, "find_prog_btf_id"))
+                goto cleanup;
+
+        fentry_fd = load_fentry(bpf_program__fd(skel->progs.bind_v4_prog), btf_id);
+        if (!ASSERT_GE(fentry_fd, 0, "load_fentry"))
+                goto cleanup;
+
+        /* Make sure bpf_obj_get_info_by_fd works correctly when attaching
+         * to another BPF program.
+         */
+
+        ASSERT_OK(bpf_obj_get_info_by_fd(fentry_fd, &info, &info_len),
+                  "bpf_obj_get_info_by_fd");
+
+        ASSERT_EQ(info.btf_id, 0, "info.btf_id");
+        ASSERT_EQ(info.attach_btf_id, btf_id, "info.attach_btf_id");
+        ASSERT_GT(info.attach_btf_obj_id, 0, "info.attach_btf_obj_id");
+
+cleanup:
+        if (cgroup_fd >= 0)
+                close(cgroup_fd);
+        if (fentry_fd >= 0)
+                close(fentry_fd);
+        bind4_prog__destroy(skel);
+}
+
 /* NOTE: affect other tests, must run in serial mode */
 void serial_test_fexit_bpf2bpf(void)
 {
···
                 test_fmod_ret_freplace();
         if (test__start_subtest("func_replace_global_func"))
                 test_func_replace_global_func();
+        if (test__start_subtest("fentry_to_cgroup_bpf"))
+                test_fentry_to_cgroup_bpf();
 }
+21
tools/testing/selftests/bpf/prog_tests/lru_bug.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+
+#include "lru_bug.skel.h"
+
+void test_lru_bug(void)
+{
+        struct lru_bug *skel;
+        int ret;
+
+        skel = lru_bug__open_and_load();
+        if (!ASSERT_OK_PTR(skel, "lru_bug__open_and_load"))
+                return;
+        ret = lru_bug__attach(skel);
+        if (!ASSERT_OK(ret, "lru_bug__attach"))
+                goto end;
+        usleep(1);
+        ASSERT_OK(skel->data->result, "prealloc_lru_pop doesn't call check_and_init_map_value");
+end:
+        lru_bug__destroy(skel);
+}
+9
tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
···
 
         return 0;
 }
+
+SEC("iter.s/bpf_map_elem")
+int sleepable_dummy_dump(struct bpf_iter__bpf_map_elem *ctx)
+{
+        if (ctx->meta->seq_num == 0)
+                BPF_SEQ_PRINTF(ctx->meta->seq, "map dump starts\n");
+
+        return 0;
+}
+20 -2
tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c
···
 
 __u32 val_sum = 0;
 __u32 ipv6_sk_count = 0;
+__u32 to_add_val = 0;
 
 SEC("iter/bpf_sk_storage_map")
-int dump_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
+int rw_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
 {
         struct sock *sk = ctx->sk;
         __u32 *val = ctx->value;
 
-        if (sk == (void *)0 || val == (void *)0)
+        if (sk == NULL || val == NULL)
                 return 0;
 
         if (sk->sk_family == AF_INET6)
                 ipv6_sk_count++;
 
         val_sum += *val;
+
+        *val += to_add_val;
+
+        return 0;
+}
+
+SEC("iter/bpf_sk_storage_map")
+int oob_write_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
+{
+        struct sock *sk = ctx->sk;
+        __u32 *val = ctx->value;
+
+        if (sk == NULL || val == NULL)
+                return 0;
+
+        *(val + 1) = 0xdeadbeef;
+
         return 0;
 }
+49
tools/testing/selftests/bpf/progs/lru_bug.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct map_value {
+        struct task_struct __kptr *ptr;
+};
+
+struct {
+        __uint(type, BPF_MAP_TYPE_LRU_HASH);
+        __uint(max_entries, 1);
+        __type(key, int);
+        __type(value, struct map_value);
+} lru_map SEC(".maps");
+
+int pid = 0;
+int result = 1;
+
+SEC("fentry/bpf_ktime_get_ns")
+int printk(void *ctx)
+{
+        struct map_value v = {};
+
+        if (pid == bpf_get_current_task_btf()->pid)
+                bpf_map_update_elem(&lru_map, &(int){0}, &v, 0);
+        return 0;
+}
+
+SEC("fentry/do_nanosleep")
+int nanosleep(void *ctx)
+{
+        struct map_value val = {}, *v;
+        struct task_struct *current;
+
+        bpf_map_update_elem(&lru_map, &(int){0}, &val, 0);
+        v = bpf_map_lookup_elem(&lru_map, &(int){0});
+        if (!v)
+                return 0;
+        bpf_map_delete_elem(&lru_map, &(int){0});
+        current = bpf_get_current_task_btf();
+        v->ptr = current;
+        pid = current->pid;
+        bpf_ktime_get_ns();
+        result = !v->ptr;
+        return 0;
+}
+
+char _license[] SEC("license") = "GPL";