
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2020-02-21

The following pull-request contains BPF updates for your *net-next* tree.

We've added 25 non-merge commits during the last 4 day(s) which contain
a total of 33 files changed, 2433 insertions(+), 161 deletions(-).

The main changes are:

1) Allow adding TCP listening sockets to sock_map/hash so they can be used
with reuseport BPF programs, from Jakub Sitnicki.

2) Add a new bpf_program__set_attach_target() helper to libbpf so the tracing
attach target (function or tracepoint) can be specified dynamically, from Eelco Chaudron.

3) Add a bpf_read_branch_records() BPF helper to support use cases such as
profile-guided optimization, from Daniel Xu.

4) Enable bpf_perf_event_read_value() in all tracing programs, from Song Liu.

5) Relax the mandatory BTF check when BTF is only needed by libbpf itself,
e.g. to process BTF-defined maps, from Andrii Nakryiko.

6) Move the BPF selftests -mcpu compilation attribute from 'probe' to 'v3', as
the former has been observed to fail in environments with a low memlock limit, from Yonghong Song.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+2433 -161
+12 -17
Documentation/bpf/bpf_devel_QA.rst
··· 20 20 Q: How do I report bugs for BPF kernel code? 21 21 -------------------------------------------- 22 22 A: Since all BPF kernel development as well as bpftool and iproute2 BPF 23 - loader development happens through the netdev kernel mailing list, 23 + loader development happens through the bpf kernel mailing list, 24 24 please report any found issues around BPF to the following mailing 25 25 list: 26 26 27 - netdev@vger.kernel.org 27 + bpf@vger.kernel.org 28 28 29 29 This may also include issues related to XDP, BPF tracing, etc. 30 30 ··· 46 46 47 47 Q: To which mailing list do I need to submit my BPF patches? 48 48 ------------------------------------------------------------ 49 - A: Please submit your BPF patches to the netdev kernel mailing list: 49 + A: Please submit your BPF patches to the bpf kernel mailing list: 50 50 51 - netdev@vger.kernel.org 52 - 53 - Historically, BPF came out of networking and has always been maintained 54 - by the kernel networking community. Although these days BPF touches 55 - many other subsystems as well, the patches are still routed mainly 56 - through the networking community. 51 + bpf@vger.kernel.org 57 52 58 53 In case your patch has changes in various different subsystems (e.g. 59 - tracing, security, etc), make sure to Cc the related kernel mailing 54 + networking, tracing, security, etc), make sure to Cc the related kernel mailing 60 55 lists and maintainers from there as well, so they are able to review 61 56 the changes and provide their Acked-by's to the patches. 62 57 ··· 163 168 Be aware that this is not a final verdict that the patch will 164 169 automatically get accepted into net or net-next trees eventually: 165 170 166 - On the netdev kernel mailing list reviews can come in at any point 171 + On the bpf kernel mailing list reviews can come in at any point 167 172 in time. 
If discussions around a patch conclude that they cannot 168 173 get included as-is, we will either apply a follow-up fix or drop 169 174 them from the trees entirely. Therefore, we also reserve to rebase ··· 489 494 that set up, proceed with building the latest LLVM and clang version 490 495 from the git repositories:: 491 496 492 - $ git clone http://llvm.org/git/llvm.git 493 - $ cd llvm/tools 494 - $ git clone --depth 1 http://llvm.org/git/clang.git 495 - $ cd ..; mkdir build; cd build 496 - $ cmake .. -DLLVM_TARGETS_TO_BUILD="BPF;X86" \ 497 + $ git clone https://github.com/llvm/llvm-project.git 498 + $ mkdir -p llvm-project/llvm/build/install 499 + $ cd llvm-project/llvm/build 500 + $ cmake .. -G "Ninja" -DLLVM_TARGETS_TO_BUILD="BPF;X86" \ 501 + -DLLVM_ENABLE_PROJECTS="clang" \ 497 502 -DBUILD_SHARED_LIBS=OFF \ 498 503 -DCMAKE_BUILD_TYPE=Release \ 499 504 -DLLVM_BUILD_RUNTIME=OFF 500 - $ make -j $(getconf _NPROCESSORS_ONLN) 505 + $ ninja 501 506 502 507 The built binaries can then be found in the build/bin/ directory, where 503 508 you can point the PATH variable to.
+3 -17
include/linux/skmsg.h
··· 352 352 psock->saved_write_space = sk->sk_write_space; 353 353 354 354 psock->sk_proto = sk->sk_prot; 355 - sk->sk_prot = ops; 355 + /* Pairs with lockless read in sk_clone_lock() */ 356 + WRITE_ONCE(sk->sk_prot, ops); 356 357 } 357 358 358 359 static inline void sk_psock_restore_proto(struct sock *sk, 359 360 struct sk_psock *psock) 360 361 { 361 362 sk->sk_prot->unhash = psock->saved_unhash; 362 - 363 - if (psock->sk_proto) { 364 - struct inet_connection_sock *icsk = inet_csk(sk); 365 - bool has_ulp = !!icsk->icsk_ulp_data; 366 - 367 - if (has_ulp) { 368 - tcp_update_ulp(sk, psock->sk_proto, 369 - psock->saved_write_space); 370 - } else { 371 - sk->sk_prot = psock->sk_proto; 372 - sk->sk_write_space = psock->saved_write_space; 373 - } 374 - psock->sk_proto = NULL; 375 - } else { 376 - sk->sk_write_space = psock->saved_write_space; 377 - } 363 + tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space); 378 364 } 379 365 380 366 static inline void sk_psock_set_state(struct sk_psock *psock,
+35 -2
include/net/sock.h
··· 527 527 SK_PACING_FQ = 2, 528 528 }; 529 529 530 + /* Pointer stored in sk_user_data might not be suitable for copying 531 + * when cloning the socket. For instance, it can point to a reference 532 + * counted object. sk_user_data bottom bit is set if pointer must not 533 + * be copied. 534 + */ 535 + #define SK_USER_DATA_NOCOPY 1UL 536 + #define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY) 537 + 538 + /** 539 + * sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied 540 + * @sk: socket 541 + */ 542 + static inline bool sk_user_data_is_nocopy(const struct sock *sk) 543 + { 544 + return ((uintptr_t)sk->sk_user_data & SK_USER_DATA_NOCOPY); 545 + } 546 + 530 547 #define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data))) 531 548 532 - #define rcu_dereference_sk_user_data(sk) rcu_dereference(__sk_user_data((sk))) 533 - #define rcu_assign_sk_user_data(sk, ptr) rcu_assign_pointer(__sk_user_data((sk)), ptr) 549 + #define rcu_dereference_sk_user_data(sk) \ 550 + ({ \ 551 + void *__tmp = rcu_dereference(__sk_user_data((sk))); \ 552 + (void *)((uintptr_t)__tmp & SK_USER_DATA_PTRMASK); \ 553 + }) 554 + #define rcu_assign_sk_user_data(sk, ptr) \ 555 + ({ \ 556 + uintptr_t __tmp = (uintptr_t)(ptr); \ 557 + WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK); \ 558 + rcu_assign_pointer(__sk_user_data((sk)), __tmp); \ 559 + }) 560 + #define rcu_assign_sk_user_data_nocopy(sk, ptr) \ 561 + ({ \ 562 + uintptr_t __tmp = (uintptr_t)(ptr); \ 563 + WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK); \ 564 + rcu_assign_pointer(__sk_user_data((sk)), \ 565 + __tmp | SK_USER_DATA_NOCOPY); \ 566 + }) 534 567 535 568 /* 536 569 * SK_CAN_REUSE and SK_NO_REUSE on a socket mean that the socket is OK
-2
include/net/sock_reuseport.h
··· 55 55 return ret; 56 56 } 57 57 58 - int reuseport_get_id(struct sock_reuseport *reuse); 59 - 60 58 #endif /* _SOCK_REUSEPORT_H */
+7
include/net/tcp.h
··· 2203 2203 int nonblock, int flags, int *addr_len); 2204 2204 int __tcp_bpf_recvmsg(struct sock *sk, struct sk_psock *psock, 2205 2205 struct msghdr *msg, int len, int flags); 2206 + #ifdef CONFIG_NET_SOCK_MSG 2207 + void tcp_bpf_clone(const struct sock *sk, struct sock *newsk); 2208 + #else 2209 + static inline void tcp_bpf_clone(const struct sock *sk, struct sock *newsk) 2210 + { 2211 + } 2212 + #endif 2206 2213 2207 2214 /* Call BPF_SOCK_OPS program that returns an int. If the return value 2208 2215 * is < 0, then the BPF op failed (for example if the loaded BPF
+24 -1
include/uapi/linux/bpf.h
··· 2890 2890 * Obtain the 64bit jiffies 2891 2891 * Return 2892 2892 * The 64 bit jiffies 2893 + * 2894 + * int bpf_read_branch_records(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 flags) 2895 + * Description 2896 + * For an eBPF program attached to a perf event, retrieve the 2897 + * branch records (struct perf_branch_entry) associated to *ctx* 2898 + * and store it in the buffer pointed by *buf* up to size 2899 + * *size* bytes. 2900 + * Return 2901 + * On success, number of bytes written to *buf*. On error, a 2902 + * negative value. 2903 + * 2904 + * The *flags* can be set to **BPF_F_GET_BRANCH_RECORDS_SIZE** to 2905 + * instead return the number of bytes required to store all the 2906 + * branch entries. If this flag is set, *buf* may be NULL. 2907 + * 2908 + * **-EINVAL** if arguments invalid or **size** not a multiple 2909 + * of sizeof(struct perf_branch_entry). 2910 + * 2911 + * **-ENOENT** if architecture does not support branch records. 2893 2912 */ 2894 2913 #define __BPF_FUNC_MAPPER(FN) \ 2895 2914 FN(unspec), \ ··· 3029 3010 FN(probe_read_kernel_str), \ 3030 3011 FN(tcp_send_ack), \ 3031 3012 FN(send_signal_thread), \ 3032 - FN(jiffies64), 3013 + FN(jiffies64), \ 3014 + FN(read_branch_records), 3033 3015 3034 3016 /* integer value in 'imm' field of BPF_CALL instruction selects which helper 3035 3017 * function eBPF program intends to call ··· 3108 3088 3109 3089 /* BPF_FUNC_sk_storage_get flags */ 3110 3090 #define BPF_SK_STORAGE_GET_F_CREATE (1ULL << 0) 3091 + 3092 + /* BPF_FUNC_read_branch_records flags. */ 3093 + #define BPF_F_GET_BRANCH_RECORDS_SIZE (1ULL << 0) 3111 3094 3112 3095 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 3113 3096 enum bpf_adj_room_mode {
-5
kernel/bpf/reuseport_array.c
··· 305 305 if (err) 306 306 goto put_file_unlock; 307 307 308 - /* Ensure reuse->reuseport_id is set */ 309 - err = reuseport_get_id(reuse); 310 - if (err < 0) 311 - goto put_file_unlock; 312 - 313 308 WRITE_ONCE(nsk->sk_user_data, &array->ptrs[index]); 314 309 rcu_assign_pointer(array->ptrs[index], nsk); 315 310 free_osk = osk;
+7 -3
kernel/bpf/verifier.c
··· 3693 3693 if (func_id != BPF_FUNC_sk_redirect_map && 3694 3694 func_id != BPF_FUNC_sock_map_update && 3695 3695 func_id != BPF_FUNC_map_delete_elem && 3696 - func_id != BPF_FUNC_msg_redirect_map) 3696 + func_id != BPF_FUNC_msg_redirect_map && 3697 + func_id != BPF_FUNC_sk_select_reuseport) 3697 3698 goto error; 3698 3699 break; 3699 3700 case BPF_MAP_TYPE_SOCKHASH: 3700 3701 if (func_id != BPF_FUNC_sk_redirect_hash && 3701 3702 func_id != BPF_FUNC_sock_hash_update && 3702 3703 func_id != BPF_FUNC_map_delete_elem && 3703 - func_id != BPF_FUNC_msg_redirect_hash) 3704 + func_id != BPF_FUNC_msg_redirect_hash && 3705 + func_id != BPF_FUNC_sk_select_reuseport) 3704 3706 goto error; 3705 3707 break; 3706 3708 case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY: ··· 3776 3774 goto error; 3777 3775 break; 3778 3776 case BPF_FUNC_sk_select_reuseport: 3779 - if (map->map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) 3777 + if (map->map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY && 3778 + map->map_type != BPF_MAP_TYPE_SOCKMAP && 3779 + map->map_type != BPF_MAP_TYPE_SOCKHASH) 3780 3780 goto error; 3781 3781 break; 3782 3782 case BPF_FUNC_map_peek_elem:
+43 -2
kernel/trace/bpf_trace.c
··· 843 843 return &bpf_send_signal_proto; 844 844 case BPF_FUNC_send_signal_thread: 845 845 return &bpf_send_signal_thread_proto; 846 + case BPF_FUNC_perf_event_read_value: 847 + return &bpf_perf_event_read_value_proto; 846 848 default: 847 849 return NULL; 848 850 } ··· 860 858 return &bpf_get_stackid_proto; 861 859 case BPF_FUNC_get_stack: 862 860 return &bpf_get_stack_proto; 863 - case BPF_FUNC_perf_event_read_value: 864 - return &bpf_perf_event_read_value_proto; 865 861 #ifdef CONFIG_BPF_KPROBE_OVERRIDE 866 862 case BPF_FUNC_override_return: 867 863 return &bpf_override_return_proto; ··· 1028 1028 .arg3_type = ARG_CONST_SIZE, 1029 1029 }; 1030 1030 1031 + BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx, 1032 + void *, buf, u32, size, u64, flags) 1033 + { 1034 + #ifndef CONFIG_X86 1035 + return -ENOENT; 1036 + #else 1037 + static const u32 br_entry_size = sizeof(struct perf_branch_entry); 1038 + struct perf_branch_stack *br_stack = ctx->data->br_stack; 1039 + u32 to_copy; 1040 + 1041 + if (unlikely(flags & ~BPF_F_GET_BRANCH_RECORDS_SIZE)) 1042 + return -EINVAL; 1043 + 1044 + if (unlikely(!br_stack)) 1045 + return -EINVAL; 1046 + 1047 + if (flags & BPF_F_GET_BRANCH_RECORDS_SIZE) 1048 + return br_stack->nr * br_entry_size; 1049 + 1050 + if (!buf || (size % br_entry_size != 0)) 1051 + return -EINVAL; 1052 + 1053 + to_copy = min_t(u32, br_stack->nr * br_entry_size, size); 1054 + memcpy(buf, br_stack->entries, to_copy); 1055 + 1056 + return to_copy; 1057 + #endif 1058 + } 1059 + 1060 + static const struct bpf_func_proto bpf_read_branch_records_proto = { 1061 + .func = bpf_read_branch_records, 1062 + .gpl_only = true, 1063 + .ret_type = RET_INTEGER, 1064 + .arg1_type = ARG_PTR_TO_CTX, 1065 + .arg2_type = ARG_PTR_TO_MEM_OR_NULL, 1066 + .arg3_type = ARG_CONST_SIZE_OR_ZERO, 1067 + .arg4_type = ARG_ANYTHING, 1068 + }; 1069 + 1031 1070 static const struct bpf_func_proto * 1032 1071 pe_prog_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog) 1033 1072 { ··· 1079 1040 return &bpf_get_stack_proto_tp; 1080 1041 case BPF_FUNC_perf_prog_read_value: 1081 1042 return &bpf_perf_prog_read_value_proto; 1043 + case BPF_FUNC_read_branch_records: 1044 + return &bpf_read_branch_records_proto; 1082 1045 default: 1083 1046 return tracing_func_proto(func_id, prog); 1084 1047 }
+11 -16
net/core/filter.c
··· 8620 8620 BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern, 8621 8621 struct bpf_map *, map, void *, key, u32, flags) 8622 8622 { 8623 + bool is_sockarray = map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY; 8623 8624 struct sock_reuseport *reuse; 8624 8625 struct sock *selected_sk; 8625 8626 ··· 8629 8628 return -ENOENT; 8630 8629 8631 8630 reuse = rcu_dereference(selected_sk->sk_reuseport_cb); 8632 - if (!reuse) 8633 - /* selected_sk is unhashed (e.g. by close()) after the 8634 - * above map_lookup_elem(). Treat selected_sk has already 8635 - * been removed from the map. 8631 + if (!reuse) { 8632 + /* reuseport_array has only sk with non NULL sk_reuseport_cb. 8633 + * The only (!reuse) case here is - the sk has already been 8634 + * unhashed (e.g. by close()), so treat it as -ENOENT. 8635 + * 8636 + * Other maps (e.g. sock_map) do not provide this guarantee and 8637 + * the sk may never be in the reuseport group to begin with. 8636 8638 */ 8637 - return -ENOENT; 8639 + return is_sockarray ? -ENOENT : -EINVAL; 8640 + } 8638 8641 8639 8642 if (unlikely(reuse->reuseport_id != reuse_kern->reuseport_id)) { 8640 - struct sock *sk; 8643 + struct sock *sk = reuse_kern->sk; 8641 8644 8642 - if (unlikely(!reuse_kern->reuseport_id)) 8643 - /* There is a small race between adding the 8644 - * sk to the map and setting the 8645 - * reuse_kern->reuseport_id. 8646 - * Treat it as the sk has not been added to 8647 - * the bpf map yet. 8648 - */ 8649 - return -ENOENT; 8650 - 8651 - sk = reuse_kern->sk; 8652 8645 if (sk->sk_protocol != selected_sk->sk_protocol) 8653 8646 return -EPROTOTYPE; 8654 8647 else if (sk->sk_family != selected_sk->sk_family)
+1 -1
net/core/skmsg.c
··· 512 512 sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED); 513 513 refcount_set(&psock->refcnt, 1); 514 514 515 - rcu_assign_sk_user_data(sk, psock); 515 + rcu_assign_sk_user_data_nocopy(sk, psock); 516 516 sock_hold(sk); 517 517 518 518 return psock;
+11 -3
net/core/sock.c
··· 1572 1572 */ 1573 1573 static void sock_copy(struct sock *nsk, const struct sock *osk) 1574 1574 { 1575 + const struct proto *prot = READ_ONCE(osk->sk_prot); 1575 1576 #ifdef CONFIG_SECURITY_NETWORK 1576 1577 void *sptr = nsk->sk_security; 1577 1578 #endif 1578 1579 memcpy(nsk, osk, offsetof(struct sock, sk_dontcopy_begin)); 1579 1580 1580 1581 memcpy(&nsk->sk_dontcopy_end, &osk->sk_dontcopy_end, 1581 - osk->sk_prot->obj_size - offsetof(struct sock, sk_dontcopy_end)); 1582 + prot->obj_size - offsetof(struct sock, sk_dontcopy_end)); 1582 1583 1583 1584 #ifdef CONFIG_SECURITY_NETWORK 1584 1585 nsk->sk_security = sptr; ··· 1793 1792 */ 1794 1793 struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority) 1795 1794 { 1795 + struct proto *prot = READ_ONCE(sk->sk_prot); 1796 1796 struct sock *newsk; 1797 1797 bool is_charged = true; 1798 1798 1799 - newsk = sk_prot_alloc(sk->sk_prot, priority, sk->sk_family); 1799 + newsk = sk_prot_alloc(prot, priority, sk->sk_family); 1800 1800 if (newsk != NULL) { 1801 1801 struct sk_filter *filter; 1802 1802 1803 1803 sock_copy(newsk, sk); 1804 1804 1805 - newsk->sk_prot_creator = sk->sk_prot; 1805 + newsk->sk_prot_creator = prot; 1806 1806 1807 1807 /* SANITY */ 1808 1808 if (likely(newsk->sk_net_refcnt)) ··· 1864 1862 newsk = NULL; 1865 1863 goto out; 1866 1864 } 1865 + 1866 + /* Clear sk_user_data if parent had the pointer tagged 1867 + * as not suitable for copying when cloning. 1868 + */ 1869 + if (sk_user_data_is_nocopy(newsk)) 1870 + RCU_INIT_POINTER(newsk->sk_user_data, NULL); 1867 1871 1868 1872 newsk->sk_err = 0; 1869 1873 newsk->sk_err_soft = 0;
+144 -23
net/core/sock_map.c
··· 10 10 #include <linux/skmsg.h> 11 11 #include <linux/list.h> 12 12 #include <linux/jhash.h> 13 + #include <linux/sock_diag.h> 13 14 14 15 struct bpf_stab { 15 16 struct bpf_map map; ··· 32 31 return ERR_PTR(-EPERM); 33 32 if (attr->max_entries == 0 || 34 33 attr->key_size != 4 || 35 - attr->value_size != 4 || 34 + (attr->value_size != sizeof(u32) && 35 + attr->value_size != sizeof(u64)) || 36 36 attr->map_flags & ~SOCK_CREATE_FLAG_MASK) 37 37 return ERR_PTR(-EINVAL); 38 38 ··· 230 228 return ret; 231 229 } 232 230 231 + static int sock_map_link_no_progs(struct bpf_map *map, struct sock *sk) 232 + { 233 + struct sk_psock *psock; 234 + int ret; 235 + 236 + psock = sk_psock_get_checked(sk); 237 + if (IS_ERR(psock)) 238 + return PTR_ERR(psock); 239 + 240 + if (psock) { 241 + tcp_bpf_reinit(sk); 242 + return 0; 243 + } 244 + 245 + psock = sk_psock_init(sk, map->numa_node); 246 + if (!psock) 247 + return -ENOMEM; 248 + 249 + ret = tcp_bpf_init(sk); 250 + if (ret < 0) 251 + sk_psock_put(sk, psock); 252 + return ret; 253 + } 254 + 233 255 static void sock_map_free(struct bpf_map *map) 234 256 { 235 257 struct bpf_stab *stab = container_of(map, struct bpf_stab, map); ··· 301 275 302 276 static void *sock_map_lookup(struct bpf_map *map, void *key) 303 277 { 304 - return ERR_PTR(-EOPNOTSUPP); 278 + return __sock_map_lookup_elem(map, *(u32 *)key); 279 + } 280 + 281 + static void *sock_map_lookup_sys(struct bpf_map *map, void *key) 282 + { 283 + struct sock *sk; 284 + 285 + if (map->value_size != sizeof(u64)) 286 + return ERR_PTR(-ENOSPC); 287 + 288 + sk = __sock_map_lookup_elem(map, *(u32 *)key); 289 + if (!sk) 290 + return ERR_PTR(-ENOENT); 291 + 292 + sock_gen_cookie(sk); 293 + return &sk->sk_cookie; 305 294 } 306 295 307 296 static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test, ··· 375 334 return 0; 376 335 } 377 336 337 + static bool sock_map_redirect_allowed(const struct sock *sk) 338 + { 339 + return sk->sk_state != TCP_LISTEN; 340 + } 341 + 378 
342 static int sock_map_update_common(struct bpf_map *map, u32 idx, 379 343 struct sock *sk, u64 flags) 380 344 { ··· 402 356 if (!link) 403 357 return -ENOMEM; 404 358 405 - ret = sock_map_link(map, &stab->progs, sk); 359 + /* Only sockets we can redirect into/from in BPF need to hold 360 + * refs to parser/verdict progs and have their sk_data_ready 361 + * and sk_write_space callbacks overridden. 362 + */ 363 + if (sock_map_redirect_allowed(sk)) 364 + ret = sock_map_link(map, &stab->progs, sk); 365 + else 366 + ret = sock_map_link_no_progs(map, sk); 406 367 if (ret < 0) 407 368 goto out_free; 408 369 ··· 444 391 static bool sock_map_op_okay(const struct bpf_sock_ops_kern *ops) 445 392 { 446 393 return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB || 447 - ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB; 394 + ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB || 395 + ops->op == BPF_SOCK_OPS_TCP_LISTEN_CB; 448 396 } 449 397 450 398 static bool sock_map_sk_is_suitable(const struct sock *sk) ··· 454 400 sk->sk_protocol == IPPROTO_TCP; 455 401 } 456 402 403 + static bool sock_map_sk_state_allowed(const struct sock *sk) 404 + { 405 + return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN); 406 + } 407 + 457 408 static int sock_map_update_elem(struct bpf_map *map, void *key, 458 409 void *value, u64 flags) 459 410 { 460 - u32 ufd = *(u32 *)value; 461 411 u32 idx = *(u32 *)key; 462 412 struct socket *sock; 463 413 struct sock *sk; 464 414 int ret; 415 + u64 ufd; 416 + 417 + if (map->value_size == sizeof(u64)) 418 + ufd = *(u64 *)value; 419 + else 420 + ufd = *(u32 *)value; 421 + if (ufd > S32_MAX) 422 + return -EINVAL; 465 423 466 424 sock = sockfd_lookup(ufd, &ret); 467 425 if (!sock) ··· 489 423 } 490 424 491 425 sock_map_sk_acquire(sk); 492 - if (sk->sk_state != TCP_ESTABLISHED) 426 + if (!sock_map_sk_state_allowed(sk)) 493 427 ret = -EOPNOTSUPP; 494 428 else 495 429 ret = sock_map_update_common(map, idx, sk, flags); ··· 526 460 struct bpf_map *, map, u32, key, u64, 
flags) 527 461 { 528 462 struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); 463 + struct sock *sk; 529 464 530 465 if (unlikely(flags & ~(BPF_F_INGRESS))) 531 466 return SK_DROP; 532 - tcb->bpf.flags = flags; 533 - tcb->bpf.sk_redir = __sock_map_lookup_elem(map, key); 534 - if (!tcb->bpf.sk_redir) 467 + 468 + sk = __sock_map_lookup_elem(map, key); 469 + if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 535 470 return SK_DROP; 471 + 472 + tcb->bpf.flags = flags; 473 + tcb->bpf.sk_redir = sk; 536 474 return SK_PASS; 537 475 } 538 476 ··· 553 483 BPF_CALL_4(bpf_msg_redirect_map, struct sk_msg *, msg, 554 484 struct bpf_map *, map, u32, key, u64, flags) 555 485 { 486 + struct sock *sk; 487 + 556 488 if (unlikely(flags & ~(BPF_F_INGRESS))) 557 489 return SK_DROP; 558 - msg->flags = flags; 559 - msg->sk_redir = __sock_map_lookup_elem(map, key); 560 - if (!msg->sk_redir) 490 + 491 + sk = __sock_map_lookup_elem(map, key); 492 + if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 561 493 return SK_DROP; 494 + 495 + msg->flags = flags; 496 + msg->sk_redir = sk; 562 497 return SK_PASS; 563 498 } 564 499 ··· 581 506 .map_alloc = sock_map_alloc, 582 507 .map_free = sock_map_free, 583 508 .map_get_next_key = sock_map_get_next_key, 509 + .map_lookup_elem_sys_only = sock_map_lookup_sys, 584 510 .map_update_elem = sock_map_update_elem, 585 511 .map_delete_elem = sock_map_delete_elem, 586 512 .map_lookup_elem = sock_map_lookup, ··· 756 680 if (!link) 757 681 return -ENOMEM; 758 682 759 - ret = sock_map_link(map, &htab->progs, sk); 683 + /* Only sockets we can redirect into/from in BPF need to hold 684 + * refs to parser/verdict progs and have their sk_data_ready 685 + * and sk_write_space callbacks overridden. 
686 + */ 687 + if (sock_map_redirect_allowed(sk)) 688 + ret = sock_map_link(map, &htab->progs, sk); 689 + else 690 + ret = sock_map_link_no_progs(map, sk); 760 691 if (ret < 0) 761 692 goto out_free; 762 693 ··· 812 729 static int sock_hash_update_elem(struct bpf_map *map, void *key, 813 730 void *value, u64 flags) 814 731 { 815 - u32 ufd = *(u32 *)value; 816 732 struct socket *sock; 817 733 struct sock *sk; 818 734 int ret; 735 + u64 ufd; 736 + 737 + if (map->value_size == sizeof(u64)) 738 + ufd = *(u64 *)value; 739 + else 740 + ufd = *(u32 *)value; 741 + if (ufd > S32_MAX) 742 + return -EINVAL; 819 743 820 744 sock = sockfd_lookup(ufd, &ret); 821 745 if (!sock) ··· 838 748 } 839 749 840 750 sock_map_sk_acquire(sk); 841 - if (sk->sk_state != TCP_ESTABLISHED) 751 + if (!sock_map_sk_state_allowed(sk)) 842 752 ret = -EOPNOTSUPP; 843 753 else 844 754 ret = sock_hash_update_common(map, key, sk, flags); ··· 898 808 return ERR_PTR(-EPERM); 899 809 if (attr->max_entries == 0 || 900 810 attr->key_size == 0 || 901 - attr->value_size != 4 || 811 + (attr->value_size != sizeof(u32) && 812 + attr->value_size != sizeof(u64)) || 902 813 attr->map_flags & ~SOCK_CREATE_FLAG_MASK) 903 814 return ERR_PTR(-EINVAL); 904 815 if (attr->key_size > MAX_BPF_STACK) ··· 976 885 kfree(htab); 977 886 } 978 887 888 + static void *sock_hash_lookup_sys(struct bpf_map *map, void *key) 889 + { 890 + struct sock *sk; 891 + 892 + if (map->value_size != sizeof(u64)) 893 + return ERR_PTR(-ENOSPC); 894 + 895 + sk = __sock_hash_lookup_elem(map, key); 896 + if (!sk) 897 + return ERR_PTR(-ENOENT); 898 + 899 + sock_gen_cookie(sk); 900 + return &sk->sk_cookie; 901 + } 902 + 903 + static void *sock_hash_lookup(struct bpf_map *map, void *key) 904 + { 905 + return __sock_hash_lookup_elem(map, key); 906 + } 907 + 979 908 static void sock_hash_release_progs(struct bpf_map *map) 980 909 { 981 910 psock_progs_drop(&container_of(map, struct bpf_htab, map)->progs); ··· 1027 916 struct bpf_map *, map, void *, key, u64, 
flags) 1028 917 { 1029 918 struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); 919 + struct sock *sk; 1030 920 1031 921 if (unlikely(flags & ~(BPF_F_INGRESS))) 1032 922 return SK_DROP; 1033 - tcb->bpf.flags = flags; 1034 - tcb->bpf.sk_redir = __sock_hash_lookup_elem(map, key); 1035 - if (!tcb->bpf.sk_redir) 923 + 924 + sk = __sock_hash_lookup_elem(map, key); 925 + if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 1036 926 return SK_DROP; 927 + 928 + tcb->bpf.flags = flags; 929 + tcb->bpf.sk_redir = sk; 1037 930 return SK_PASS; 1038 931 } 1039 932 ··· 1054 939 BPF_CALL_4(bpf_msg_redirect_hash, struct sk_msg *, msg, 1055 940 struct bpf_map *, map, void *, key, u64, flags) 1056 941 { 942 + struct sock *sk; 943 + 1057 944 if (unlikely(flags & ~(BPF_F_INGRESS))) 1058 945 return SK_DROP; 1059 - msg->flags = flags; 1060 - msg->sk_redir = __sock_hash_lookup_elem(map, key); 1061 - if (!msg->sk_redir) 946 + 947 + sk = __sock_hash_lookup_elem(map, key); 948 + if (unlikely(!sk || !sock_map_redirect_allowed(sk))) 1062 949 return SK_DROP; 950 + 951 + msg->flags = flags; 952 + msg->sk_redir = sk; 1063 953 return SK_PASS; 1064 954 } 1065 955 ··· 1084 964 .map_get_next_key = sock_hash_get_next_key, 1085 965 .map_update_elem = sock_hash_update_elem, 1086 966 .map_delete_elem = sock_hash_delete_elem, 1087 - .map_lookup_elem = sock_map_lookup, 967 + .map_lookup_elem = sock_hash_lookup, 968 + .map_lookup_elem_sys_only = sock_hash_lookup_sys, 1088 969 .map_release_uref = sock_hash_release_progs, 1089 970 .map_check_btf = map_check_no_btf, 1090 971 };
+21 -29
net/core/sock_reuseport.c
··· 16 16 17 17 DEFINE_SPINLOCK(reuseport_lock); 18 18 19 - #define REUSEPORT_MIN_ID 1 20 19 static DEFINE_IDA(reuseport_ida); 21 - 22 - int reuseport_get_id(struct sock_reuseport *reuse) 23 - { 24 - int id; 25 - 26 - if (reuse->reuseport_id) 27 - return reuse->reuseport_id; 28 - 29 - id = ida_simple_get(&reuseport_ida, REUSEPORT_MIN_ID, 0, 30 - /* Called under reuseport_lock */ 31 - GFP_ATOMIC); 32 - if (id < 0) 33 - return id; 34 - 35 - reuse->reuseport_id = id; 36 - 37 - return reuse->reuseport_id; 38 - } 39 20 40 21 static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks) 41 22 { ··· 36 55 int reuseport_alloc(struct sock *sk, bool bind_inany) 37 56 { 38 57 struct sock_reuseport *reuse; 58 + int id, ret = 0; 39 59 40 60 /* bh lock used since this function call may precede hlist lock in 41 61 * soft irq of receive path or setsockopt from process context ··· 60 78 61 79 reuse = __reuseport_alloc(INIT_SOCKS); 62 80 if (!reuse) { 63 - spin_unlock_bh(&reuseport_lock); 64 - return -ENOMEM; 81 + ret = -ENOMEM; 82 + goto out; 65 83 } 66 84 85 + id = ida_alloc(&reuseport_ida, GFP_ATOMIC); 86 + if (id < 0) { 87 + kfree(reuse); 88 + ret = id; 89 + goto out; 90 + } 91 + 92 + reuse->reuseport_id = id; 67 93 reuse->socks[0] = sk; 68 94 reuse->num_socks = 1; 69 95 reuse->bind_inany = bind_inany; ··· 80 90 out: 81 91 spin_unlock_bh(&reuseport_lock); 82 92 83 - return 0; 93 + return ret; 84 94 } 85 95 EXPORT_SYMBOL(reuseport_alloc); 86 96 ··· 124 134 125 135 reuse = container_of(head, struct sock_reuseport, rcu); 126 136 sk_reuseport_prog_free(rcu_dereference_protected(reuse->prog, 1)); 127 - if (reuse->reuseport_id) 128 - ida_simple_remove(&reuseport_ida, reuse->reuseport_id); 137 + ida_free(&reuseport_ida, reuse->reuseport_id); 129 138 kfree(reuse); 130 139 } 131 140 ··· 188 199 reuse = rcu_dereference_protected(sk->sk_reuseport_cb, 189 200 lockdep_is_held(&reuseport_lock)); 190 201 191 - /* At least one of the sk in this reuseport group is added to 192 - * a 
bpf map. Notify the bpf side. The bpf map logic will 193 - * remove the sk if it is indeed added to a bpf map. 202 + /* Notify the bpf side. The sk may be added to a sockarray 203 + * map. If so, sockarray logic will remove it from the map. 204 + * 205 + * Other bpf map types that work with reuseport, like sockmap, 206 + * don't need an explicit callback from here. They override sk 207 + * unhash/close ops to remove the sk from the map before we 208 + * get to this point. 194 209 */ 195 - if (reuse->reuseport_id) 196 - bpf_sk_reuseport_detach(sk); 210 + bpf_sk_reuseport_detach(sk); 197 211 198 212 rcu_assign_pointer(sk->sk_reuseport_cb, NULL); 199 213
+17 -1
net/ipv4/tcp_bpf.c
··· 645 645 /* Reinit occurs when program types change e.g. TCP_BPF_TX is removed 646 646 * or added requiring sk_prot hook updates. We keep original saved 647 647 * hooks in this case. 648 + * 649 + * Pairs with lockless read in sk_clone_lock(). 648 650 */ 649 - sk->sk_prot = &tcp_bpf_prots[family][config]; 651 + WRITE_ONCE(sk->sk_prot, &tcp_bpf_prots[family][config]); 650 652 } 651 653 652 654 static int tcp_bpf_assert_proto_ops(struct proto *ops) ··· 692 690 tcp_bpf_update_sk_prot(sk, psock); 693 691 rcu_read_unlock(); 694 692 return 0; 693 + } 694 + 695 + /* If a child got cloned from a listening socket that had tcp_bpf 696 + * protocol callbacks installed, we need to restore the callbacks to 697 + * the default ones because the child does not inherit the psock state 698 + * that tcp_bpf callbacks expect. 699 + */ 700 + void tcp_bpf_clone(const struct sock *sk, struct sock *newsk) 701 + { 702 + int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4; 703 + struct proto *prot = newsk->sk_prot; 704 + 705 + if (prot == &tcp_bpf_prots[family][TCP_BPF_BASE]) 706 + newsk->sk_prot = sk->sk_prot_creator; 695 707 }
+2
net/ipv4/tcp_minisocks.c
··· 548 548 newtp->fastopen_req = NULL; 549 549 RCU_INIT_POINTER(newtp->fastopen_rsk, NULL); 550 550 551 + tcp_bpf_clone(sk, newsk); 552 + 551 553 __TCP_INC_STATS(sock_net(sk), TCP_MIB_PASSIVEOPENS); 552 554 553 555 return newsk;
+2 -1
net/ipv4/tcp_ulp.c
··· 106 106 107 107 if (!icsk->icsk_ulp_ops) { 108 108 sk->sk_write_space = write_space; 109 - sk->sk_prot = proto; 109 + /* Pairs with lockless read in sk_clone_lock() */ 110 + WRITE_ONCE(sk->sk_prot, proto); 110 111 return; 111 112 } 112 113
+2 -1
net/tls/tls_main.c
··· 742 742 ctx->sk_write_space = write_space; 743 743 ctx->sk_proto = p; 744 744 } else { 745 - sk->sk_prot = p; 745 + /* Pairs with lockless read in sk_clone_lock(). */ 746 + WRITE_ONCE(sk->sk_prot, p); 746 747 sk->sk_write_space = write_space; 747 748 } 748 749 }
+24 -1
tools/include/uapi/linux/bpf.h
··· 2890 2890 * Obtain the 64bit jiffies 2891 2891 * Return 2892 2892 * The 64 bit jiffies 2893 + * 2894 + * int bpf_read_branch_records(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 flags) 2895 + * Description 2896 + * For an eBPF program attached to a perf event, retrieve the 2897 + * branch records (struct perf_branch_entry) associated to *ctx* 2898 + * and store it in the buffer pointed by *buf* up to size 2899 + * *size* bytes. 2900 + * Return 2901 + * On success, number of bytes written to *buf*. On error, a 2902 + * negative value. 2903 + * 2904 + * The *flags* can be set to **BPF_F_GET_BRANCH_RECORDS_SIZE** to 2905 + * instead return the number of bytes required to store all the 2906 + * branch entries. If this flag is set, *buf* may be NULL. 2907 + * 2908 + * **-EINVAL** if arguments invalid or **size** not a multiple 2909 + * of sizeof(struct perf_branch_entry). 2910 + * 2911 + * **-ENOENT** if architecture does not support branch records. 2893 2912 */ 2894 2913 #define __BPF_FUNC_MAPPER(FN) \ 2895 2914 FN(unspec), \ ··· 3029 3010 FN(probe_read_kernel_str), \ 3030 3011 FN(tcp_send_ack), \ 3031 3012 FN(send_signal_thread), \ 3032 - FN(jiffies64), 3013 + FN(jiffies64), \ 3014 + FN(read_branch_records), 3033 3015 3034 3016 /* integer value in 'imm' field of BPF_CALL instruction selects which helper 3035 3017 * function eBPF program intends to call ··· 3108 3088 3109 3089 /* BPF_FUNC_sk_storage_get flags */ 3110 3090 #define BPF_SK_STORAGE_GET_F_CREATE (1ULL << 0) 3091 + 3092 + /* BPF_FUNC_read_branch_records flags. */ 3093 + #define BPF_F_GET_BRANCH_RECORDS_SIZE (1ULL << 0) 3111 3094 3112 3095 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 3113 3096 enum bpf_adj_room_mode {
+31 -7
tools/lib/bpf/libbpf.c
··· 2286 2286 2287 2287 static bool bpf_object__is_btf_mandatory(const struct bpf_object *obj) 2288 2288 { 2289 - return obj->efile.btf_maps_shndx >= 0 || 2290 - obj->efile.st_ops_shndx >= 0 || 2291 - obj->nr_extern > 0; 2289 + return obj->efile.st_ops_shndx >= 0 || obj->nr_extern > 0; 2292 2290 } 2293 2291 2294 2292 static int bpf_object__init_btf(struct bpf_object *obj, ··· 4943 4945 { 4944 4946 int err = 0, fd, i, btf_id; 4945 4947 4946 - if (prog->type == BPF_PROG_TYPE_TRACING || 4947 - prog->type == BPF_PROG_TYPE_EXT) { 4948 + if ((prog->type == BPF_PROG_TYPE_TRACING || 4949 + prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) { 4948 4950 btf_id = libbpf_find_attach_btf_id(prog); 4949 4951 if (btf_id <= 0) 4950 4952 return btf_id; ··· 6587 6589 else 6588 6590 err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC); 6589 6591 6592 + if (err <= 0) 6593 + pr_warn("%s is not found in vmlinux BTF\n", name); 6594 + 6590 6595 return err; 6591 6596 } 6592 6597 ··· 6662 6661 err = __find_vmlinux_btf_id(prog->obj->btf_vmlinux, 6663 6662 name + section_defs[i].len, 6664 6663 attach_type); 6665 - if (err <= 0) 6666 - pr_warn("%s is not found in vmlinux BTF\n", name); 6667 6664 return err; 6668 6665 } 6669 6666 pr_warn("failed to identify btf_id based on ELF section name '%s'\n", name); ··· 8135 8136 bpf_prog_info_set_offset_u64(&info_linear->info, 8136 8137 desc->array_offset, addr); 8137 8138 } 8139 + } 8140 + 8141 + int bpf_program__set_attach_target(struct bpf_program *prog, 8142 + int attach_prog_fd, 8143 + const char *attach_func_name) 8144 + { 8145 + int btf_id; 8146 + 8147 + if (!prog || attach_prog_fd < 0 || !attach_func_name) 8148 + return -EINVAL; 8149 + 8150 + if (attach_prog_fd) 8151 + btf_id = libbpf_find_prog_btf_id(attach_func_name, 8152 + attach_prog_fd); 8153 + else 8154 + btf_id = __find_vmlinux_btf_id(prog->obj->btf_vmlinux, 8155 + attach_func_name, 8156 + prog->expected_attach_type); 8157 + 8158 + if (btf_id < 0) 8159 + return btf_id; 8160 + 8161 
+ prog->attach_btf_id = btf_id; 8162 + prog->attach_prog_fd = attach_prog_fd; 8163 + return 0; 8138 8164 } 8139 8165 8140 8166 int parse_cpu_mask_str(const char *s, bool **mask, int *mask_sz)
+4
tools/lib/bpf/libbpf.h
··· 334 334 bpf_program__set_expected_attach_type(struct bpf_program *prog, 335 335 enum bpf_attach_type type); 336 336 337 + LIBBPF_API int 338 + bpf_program__set_attach_target(struct bpf_program *prog, int attach_prog_fd, 339 + const char *attach_func_name); 340 + 337 341 LIBBPF_API bool bpf_program__is_socket_filter(const struct bpf_program *prog); 338 342 LIBBPF_API bool bpf_program__is_tracepoint(const struct bpf_program *prog); 339 343 LIBBPF_API bool bpf_program__is_raw_tracepoint(const struct bpf_program *prog);
+5
tools/lib/bpf/libbpf.map
··· 235 235 btf__align_of; 236 236 libbpf_find_kernel_btf; 237 237 } LIBBPF_0.0.6; 238 + 239 + LIBBPF_0.0.8 { 240 + global: 241 + bpf_program__set_attach_target; 242 + } LIBBPF_0.0.7;
+2 -2
tools/testing/selftests/bpf/Makefile
··· 209 209 $(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2) 210 210 ($(CLANG) $3 -O2 -target bpf -emit-llvm \ 211 211 -c $1 -o - || echo "BPF obj compilation failed") | \ 212 - $(LLC) -mattr=dwarfris -march=bpf -mcpu=probe $4 -filetype=obj -o $2 212 + $(LLC) -mattr=dwarfris -march=bpf -mcpu=v3 $4 -filetype=obj -o $2 213 213 endef 214 214 # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32 215 215 define CLANG_NOALU32_BPF_BUILD_RULE ··· 223 223 $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2) 224 224 ($(CLANG) $3 -O2 -emit-llvm \ 225 225 -c $1 -o - || echo "BPF obj compilation failed") | \ 226 - $(LLC) -march=bpf -mcpu=probe $4 -filetype=obj -o $2 226 + $(LLC) -march=bpf -mcpu=v3 $4 -filetype=obj -o $2 227 227 endef 228 228 # Build BPF object using GCC 229 229 define GCC_BPF_BUILD_RULE
+170
tools/testing/selftests/bpf/prog_tests/perf_branches.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #define _GNU_SOURCE 3 + #include <pthread.h> 4 + #include <sched.h> 5 + #include <sys/socket.h> 6 + #include <test_progs.h> 7 + #include "bpf/libbpf_internal.h" 8 + #include "test_perf_branches.skel.h" 9 + 10 + static void check_good_sample(struct test_perf_branches *skel) 11 + { 12 + int written_global = skel->bss->written_global_out; 13 + int required_size = skel->bss->required_size_out; 14 + int written_stack = skel->bss->written_stack_out; 15 + int pbe_size = sizeof(struct perf_branch_entry); 16 + int duration = 0; 17 + 18 + if (CHECK(!skel->bss->valid, "output not valid", 19 + "no valid sample from prog")) 20 + return; 21 + 22 + /* 23 + * It's hard to validate the contents of the branch entries b/c it 24 + * would require some kind of disassembler and also encoding the 25 + * valid jump instructions for supported architectures. So just check 26 + * the easy stuff for now. 27 + */ 28 + CHECK(required_size <= 0, "read_branches_size", "err %d\n", required_size); 29 + CHECK(written_stack < 0, "read_branches_stack", "err %d\n", written_stack); 30 + CHECK(written_stack % pbe_size != 0, "read_branches_stack", 31 + "stack bytes written=%d not multiple of struct size=%d\n", 32 + written_stack, pbe_size); 33 + CHECK(written_global < 0, "read_branches_global", "err %d\n", written_global); 34 + CHECK(written_global % pbe_size != 0, "read_branches_global", 35 + "global bytes written=%d not multiple of struct size=%d\n", 36 + written_global, pbe_size); 37 + CHECK(written_global < written_stack, "read_branches_size", 38 + "written_global=%d < written_stack=%d\n", written_global, written_stack); 39 + } 40 + 41 + static void check_bad_sample(struct test_perf_branches *skel) 42 + { 43 + int written_global = skel->bss->written_global_out; 44 + int required_size = skel->bss->required_size_out; 45 + int written_stack = skel->bss->written_stack_out; 46 + int duration = 0; 47 + 48 + if (CHECK(!skel->bss->valid, "output not valid", 49 + 
"no valid sample from prog")) 50 + return; 51 + 52 + CHECK((required_size != -EINVAL && required_size != -ENOENT), 53 + "read_branches_size", "err %d\n", required_size); 54 + CHECK((written_stack != -EINVAL && written_stack != -ENOENT), 55 + "read_branches_stack", "written %d\n", written_stack); 56 + CHECK((written_global != -EINVAL && written_global != -ENOENT), 57 + "read_branches_global", "written %d\n", written_global); 58 + } 59 + 60 + static void test_perf_branches_common(int perf_fd, 61 + void (*cb)(struct test_perf_branches *)) 62 + { 63 + struct test_perf_branches *skel; 64 + int err, i, duration = 0; 65 + bool detached = false; 66 + struct bpf_link *link; 67 + volatile int j = 0; 68 + cpu_set_t cpu_set; 69 + 70 + skel = test_perf_branches__open_and_load(); 71 + if (CHECK(!skel, "test_perf_branches_load", 72 + "perf_branches skeleton failed\n")) 73 + return; 74 + 75 + /* attach perf_event */ 76 + link = bpf_program__attach_perf_event(skel->progs.perf_branches, perf_fd); 77 + if (CHECK(IS_ERR(link), "attach_perf_event", "err %ld\n", PTR_ERR(link))) 78 + goto out_destroy_skel; 79 + 80 + /* generate some branches on cpu 0 */ 81 + CPU_ZERO(&cpu_set); 82 + CPU_SET(0, &cpu_set); 83 + err = pthread_setaffinity_np(pthread_self(), sizeof(cpu_set), &cpu_set); 84 + if (CHECK(err, "set_affinity", "cpu #0, err %d\n", err)) 85 + goto out_destroy; 86 + /* spin the loop for a while (random high number) */ 87 + for (i = 0; i < 1000000; ++i) 88 + ++j; 89 + 90 + test_perf_branches__detach(skel); 91 + detached = true; 92 + 93 + cb(skel); 94 + out_destroy: 95 + bpf_link__destroy(link); 96 + out_destroy_skel: 97 + if (!detached) 98 + test_perf_branches__detach(skel); 99 + test_perf_branches__destroy(skel); 100 + } 101 + 102 + static void test_perf_branches_hw(void) 103 + { 104 + struct perf_event_attr attr = {0}; 105 + int duration = 0; 106 + int pfd; 107 + 108 + /* create perf event */ 109 + attr.size = sizeof(attr); 110 + attr.type = PERF_TYPE_HARDWARE; 111 + attr.config = 
PERF_COUNT_HW_CPU_CYCLES; 112 + attr.freq = 1; 113 + attr.sample_freq = 4000; 114 + attr.sample_type = PERF_SAMPLE_BRANCH_STACK; 115 + attr.branch_sample_type = PERF_SAMPLE_BRANCH_USER | PERF_SAMPLE_BRANCH_ANY; 116 + pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC); 117 + 118 + /* 119 + * Some setups don't support branch records (virtual machines, !x86), 120 + * so skip test in this case. 121 + */ 122 + if (pfd == -1) { 123 + if (errno == ENOENT || errno == EOPNOTSUPP) { 124 + printf("%s:SKIP:no PERF_SAMPLE_BRANCH_STACK\n", 125 + __func__); 126 + test__skip(); 127 + return; 128 + } 129 + if (CHECK(pfd < 0, "perf_event_open", "err %d errno %d\n", 130 + pfd, errno)) 131 + return; 132 + } 133 + 134 + test_perf_branches_common(pfd, check_good_sample); 135 + 136 + close(pfd); 137 + } 138 + 139 + /* 140 + * Tests negative case -- run bpf_read_branch_records() on improperly configured 141 + * perf event. 142 + */ 143 + static void test_perf_branches_no_hw(void) 144 + { 145 + struct perf_event_attr attr = {0}; 146 + int duration = 0; 147 + int pfd; 148 + 149 + /* create perf event */ 150 + attr.size = sizeof(attr); 151 + attr.type = PERF_TYPE_SOFTWARE; 152 + attr.config = PERF_COUNT_SW_CPU_CLOCK; 153 + attr.freq = 1; 154 + attr.sample_freq = 4000; 155 + pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC); 156 + if (CHECK(pfd < 0, "perf_event_open", "err %d\n", pfd)) 157 + return; 158 + 159 + test_perf_branches_common(pfd, check_bad_sample); 160 + 161 + close(pfd); 162 + } 163 + 164 + void test_perf_branches(void) 165 + { 166 + if (test__start_subtest("perf_branches_hw")) 167 + test_perf_branches_hw(); 168 + if (test__start_subtest("perf_branches_no_hw")) 169 + test_perf_branches_no_hw(); 170 + }
+53 -10
tools/testing/selftests/bpf/prog_tests/select_reuseport.c
··· 36 36 static __u32 expected_results[NR_RESULTS]; 37 37 static int sk_fds[REUSEPORT_ARRAY_SIZE]; 38 38 static int reuseport_array = -1, outer_map = -1; 39 + static enum bpf_map_type inner_map_type; 39 40 static int select_by_skb_data_prog; 40 41 static int saved_tcp_syncookie = -1; 41 42 static struct bpf_object *obj; ··· 64 63 } \ 65 64 }) 66 65 67 - static int create_maps(void) 66 + static int create_maps(enum bpf_map_type inner_type) 68 67 { 69 68 struct bpf_create_map_attr attr = {}; 70 69 70 + inner_map_type = inner_type; 71 + 71 72 /* Creating reuseport_array */ 72 73 attr.name = "reuseport_array"; 73 - attr.map_type = BPF_MAP_TYPE_REUSEPORT_SOCKARRAY; 74 + attr.map_type = inner_type; 74 75 attr.key_size = sizeof(__u32); 75 76 attr.value_size = sizeof(__u32); 76 77 attr.max_entries = REUSEPORT_ARRAY_SIZE; ··· 731 728 732 729 static void cleanup(void) 733 730 { 734 - if (outer_map != -1) 731 + if (outer_map != -1) { 735 732 close(outer_map); 736 - if (reuseport_array != -1) 733 + outer_map = -1; 734 + } 735 + 736 + if (reuseport_array != -1) { 737 737 close(reuseport_array); 738 - if (obj) 738 + reuseport_array = -1; 739 + } 740 + 741 + if (obj) { 739 742 bpf_object__close(obj); 743 + obj = NULL; 744 + } 745 + 746 + memset(expected_results, 0, sizeof(expected_results)); 747 + } 748 + 749 + static const char *maptype_str(enum bpf_map_type type) 750 + { 751 + switch (type) { 752 + case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY: 753 + return "reuseport_sockarray"; 754 + case BPF_MAP_TYPE_SOCKMAP: 755 + return "sockmap"; 756 + case BPF_MAP_TYPE_SOCKHASH: 757 + return "sockhash"; 758 + default: 759 + return "unknown"; 760 + } 740 761 } 741 762 742 763 static const char *family_str(sa_family_t family) ··· 808 781 const struct test *t; 809 782 810 783 for (t = tests; t < tests + ARRAY_SIZE(tests); t++) { 811 - snprintf(s, sizeof(s), "%s/%s %s %s", 784 + snprintf(s, sizeof(s), "%s %s/%s %s %s", 785 + maptype_str(inner_map_type), 812 786 family_str(family), 
sotype_str(sotype), 813 787 inany ? "INANY" : "LOOPBACK", t->name); 814 788 815 789 if (!test__start_subtest(s)) 816 790 continue; 791 + 792 + if (sotype == SOCK_DGRAM && 793 + inner_map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) { 794 + /* SOCKMAP/SOCKHASH don't support UDP yet */ 795 + test__skip(); 796 + continue; 797 + } 817 798 818 799 setup_per_test(sotype, family, inany, t->no_inner_map); 819 800 t->fn(sotype, family); ··· 851 816 test_config(c->sotype, c->family, c->inany); 852 817 } 853 818 854 - void test_select_reuseport(void) 819 + void test_map_type(enum bpf_map_type mt) 855 820 { 856 - if (create_maps()) 821 + if (create_maps(mt)) 857 822 goto out; 858 823 if (prepare_bpf_obj()) 859 824 goto out; 860 825 826 + test_all(); 827 + out: 828 + cleanup(); 829 + } 830 + 831 + void test_select_reuseport(void) 832 + { 861 833 saved_tcp_fo = read_int_sysctl(TCP_FO_SYSCTL); 862 834 if (saved_tcp_fo < 0) 863 835 goto out; ··· 877 835 if (disable_syncookie()) 878 836 goto out; 879 837 880 - test_all(); 838 + test_map_type(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY); 839 + test_map_type(BPF_MAP_TYPE_SOCKMAP); 840 + test_map_type(BPF_MAP_TYPE_SOCKHASH); 881 841 out: 882 - cleanup(); 883 842 restore_sysctls(); 884 843 }
+124
tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2020 Cloudflare 3 + /* 4 + * Tests for sockmap/sockhash holding kTLS sockets. 5 + */ 6 + 7 + #include "test_progs.h" 8 + 9 + #define MAX_TEST_NAME 80 10 + #define TCP_ULP 31 11 + 12 + static int tcp_server(int family) 13 + { 14 + int err, s; 15 + 16 + s = socket(family, SOCK_STREAM, 0); 17 + if (CHECK_FAIL(s == -1)) { 18 + perror("socket"); 19 + return -1; 20 + } 21 + 22 + err = listen(s, SOMAXCONN); 23 + if (CHECK_FAIL(err)) { 24 + perror("listen"); 25 + return -1; 26 + } 27 + 28 + return s; 29 + } 30 + 31 + static int disconnect(int fd) 32 + { 33 + struct sockaddr unspec = { AF_UNSPEC }; 34 + 35 + return connect(fd, &unspec, sizeof(unspec)); 36 + } 37 + 38 + /* Disconnect (unhash) a kTLS socket after removing it from sockmap. */ 39 + static void test_sockmap_ktls_disconnect_after_delete(int family, int map) 40 + { 41 + struct sockaddr_storage addr = {0}; 42 + socklen_t len = sizeof(addr); 43 + int err, cli, srv, zero = 0; 44 + 45 + srv = tcp_server(family); 46 + if (srv == -1) 47 + return; 48 + 49 + err = getsockname(srv, (struct sockaddr *)&addr, &len); 50 + if (CHECK_FAIL(err)) { 51 + perror("getsockopt"); 52 + goto close_srv; 53 + } 54 + 55 + cli = socket(family, SOCK_STREAM, 0); 56 + if (CHECK_FAIL(cli == -1)) { 57 + perror("socket"); 58 + goto close_srv; 59 + } 60 + 61 + err = connect(cli, (struct sockaddr *)&addr, len); 62 + if (CHECK_FAIL(err)) { 63 + perror("connect"); 64 + goto close_cli; 65 + } 66 + 67 + err = bpf_map_update_elem(map, &zero, &cli, 0); 68 + if (CHECK_FAIL(err)) { 69 + perror("bpf_map_update_elem"); 70 + goto close_cli; 71 + } 72 + 73 + err = setsockopt(cli, IPPROTO_TCP, TCP_ULP, "tls", strlen("tls")); 74 + if (CHECK_FAIL(err)) { 75 + perror("setsockopt(TCP_ULP)"); 76 + goto close_cli; 77 + } 78 + 79 + err = bpf_map_delete_elem(map, &zero); 80 + if (CHECK_FAIL(err)) { 81 + perror("bpf_map_delete_elem"); 82 + goto close_cli; 83 + } 84 + 85 + err = disconnect(cli); 86 + if 
(CHECK_FAIL(err)) 87 + perror("disconnect"); 88 + 89 + close_cli: 90 + close(cli); 91 + close_srv: 92 + close(srv); 93 + } 94 + 95 + static void run_tests(int family, enum bpf_map_type map_type) 96 + { 97 + char test_name[MAX_TEST_NAME]; 98 + int map; 99 + 100 + map = bpf_create_map(map_type, sizeof(int), sizeof(int), 1, 0); 101 + if (CHECK_FAIL(map == -1)) { 102 + perror("bpf_map_create"); 103 + return; 104 + } 105 + 106 + snprintf(test_name, MAX_TEST_NAME, 107 + "sockmap_ktls disconnect_after_delete %s %s", 108 + family == AF_INET ? "IPv4" : "IPv6", 109 + map_type == BPF_MAP_TYPE_SOCKMAP ? "SOCKMAP" : "SOCKHASH"); 110 + if (!test__start_subtest(test_name)) 111 + return; 112 + 113 + test_sockmap_ktls_disconnect_after_delete(family, map); 114 + 115 + close(map); 116 + } 117 + 118 + void test_sockmap_ktls(void) 119 + { 120 + run_tests(AF_INET, BPF_MAP_TYPE_SOCKMAP); 121 + run_tests(AF_INET, BPF_MAP_TYPE_SOCKHASH); 122 + run_tests(AF_INET6, BPF_MAP_TYPE_SOCKMAP); 123 + run_tests(AF_INET6, BPF_MAP_TYPE_SOCKHASH); 124 + }
+1496
tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2020 Cloudflare 3 + /* 4 + * Test suite for SOCKMAP/SOCKHASH holding listening sockets. 5 + * Covers: 6 + * 1. BPF map operations - bpf_map_{update,lookup delete}_elem 7 + * 2. BPF redirect helpers - bpf_{sk,msg}_redirect_map 8 + * 3. BPF reuseport helper - bpf_sk_select_reuseport 9 + */ 10 + 11 + #include <linux/compiler.h> 12 + #include <errno.h> 13 + #include <error.h> 14 + #include <limits.h> 15 + #include <netinet/in.h> 16 + #include <pthread.h> 17 + #include <stdlib.h> 18 + #include <string.h> 19 + #include <unistd.h> 20 + 21 + #include <bpf/bpf.h> 22 + #include <bpf/libbpf.h> 23 + 24 + #include "bpf_util.h" 25 + #include "test_progs.h" 26 + #include "test_sockmap_listen.skel.h" 27 + 28 + #define MAX_STRERR_LEN 256 29 + #define MAX_TEST_NAME 80 30 + 31 + #define _FAIL(errnum, fmt...) \ 32 + ({ \ 33 + error_at_line(0, (errnum), __func__, __LINE__, fmt); \ 34 + CHECK_FAIL(true); \ 35 + }) 36 + #define FAIL(fmt...) _FAIL(0, fmt) 37 + #define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) 38 + #define FAIL_LIBBPF(err, msg) \ 39 + ({ \ 40 + char __buf[MAX_STRERR_LEN]; \ 41 + libbpf_strerror((err), __buf, sizeof(__buf)); \ 42 + FAIL("%s: %s", (msg), __buf); \ 43 + }) 44 + 45 + /* Wrappers that fail the test on error and report it. 
*/ 46 + 47 + #define xaccept(fd, addr, len) \ 48 + ({ \ 49 + int __ret = accept((fd), (addr), (len)); \ 50 + if (__ret == -1) \ 51 + FAIL_ERRNO("accept"); \ 52 + __ret; \ 53 + }) 54 + 55 + #define xbind(fd, addr, len) \ 56 + ({ \ 57 + int __ret = bind((fd), (addr), (len)); \ 58 + if (__ret == -1) \ 59 + FAIL_ERRNO("bind"); \ 60 + __ret; \ 61 + }) 62 + 63 + #define xclose(fd) \ 64 + ({ \ 65 + int __ret = close((fd)); \ 66 + if (__ret == -1) \ 67 + FAIL_ERRNO("close"); \ 68 + __ret; \ 69 + }) 70 + 71 + #define xconnect(fd, addr, len) \ 72 + ({ \ 73 + int __ret = connect((fd), (addr), (len)); \ 74 + if (__ret == -1) \ 75 + FAIL_ERRNO("connect"); \ 76 + __ret; \ 77 + }) 78 + 79 + #define xgetsockname(fd, addr, len) \ 80 + ({ \ 81 + int __ret = getsockname((fd), (addr), (len)); \ 82 + if (__ret == -1) \ 83 + FAIL_ERRNO("getsockname"); \ 84 + __ret; \ 85 + }) 86 + 87 + #define xgetsockopt(fd, level, name, val, len) \ 88 + ({ \ 89 + int __ret = getsockopt((fd), (level), (name), (val), (len)); \ 90 + if (__ret == -1) \ 91 + FAIL_ERRNO("getsockopt(" #name ")"); \ 92 + __ret; \ 93 + }) 94 + 95 + #define xlisten(fd, backlog) \ 96 + ({ \ 97 + int __ret = listen((fd), (backlog)); \ 98 + if (__ret == -1) \ 99 + FAIL_ERRNO("listen"); \ 100 + __ret; \ 101 + }) 102 + 103 + #define xsetsockopt(fd, level, name, val, len) \ 104 + ({ \ 105 + int __ret = setsockopt((fd), (level), (name), (val), (len)); \ 106 + if (__ret == -1) \ 107 + FAIL_ERRNO("setsockopt(" #name ")"); \ 108 + __ret; \ 109 + }) 110 + 111 + #define xsocket(family, sotype, flags) \ 112 + ({ \ 113 + int __ret = socket(family, sotype, flags); \ 114 + if (__ret == -1) \ 115 + FAIL_ERRNO("socket"); \ 116 + __ret; \ 117 + }) 118 + 119 + #define xbpf_map_delete_elem(fd, key) \ 120 + ({ \ 121 + int __ret = bpf_map_delete_elem((fd), (key)); \ 122 + if (__ret == -1) \ 123 + FAIL_ERRNO("map_delete"); \ 124 + __ret; \ 125 + }) 126 + 127 + #define xbpf_map_lookup_elem(fd, key, val) \ 128 + ({ \ 129 + int __ret = 
bpf_map_lookup_elem((fd), (key), (val)); \ 130 + if (__ret == -1) \ 131 + FAIL_ERRNO("map_lookup"); \ 132 + __ret; \ 133 + }) 134 + 135 + #define xbpf_map_update_elem(fd, key, val, flags) \ 136 + ({ \ 137 + int __ret = bpf_map_update_elem((fd), (key), (val), (flags)); \ 138 + if (__ret == -1) \ 139 + FAIL_ERRNO("map_update"); \ 140 + __ret; \ 141 + }) 142 + 143 + #define xbpf_prog_attach(prog, target, type, flags) \ 144 + ({ \ 145 + int __ret = \ 146 + bpf_prog_attach((prog), (target), (type), (flags)); \ 147 + if (__ret == -1) \ 148 + FAIL_ERRNO("prog_attach(" #type ")"); \ 149 + __ret; \ 150 + }) 151 + 152 + #define xbpf_prog_detach2(prog, target, type) \ 153 + ({ \ 154 + int __ret = bpf_prog_detach2((prog), (target), (type)); \ 155 + if (__ret == -1) \ 156 + FAIL_ERRNO("prog_detach2(" #type ")"); \ 157 + __ret; \ 158 + }) 159 + 160 + #define xpthread_create(thread, attr, func, arg) \ 161 + ({ \ 162 + int __ret = pthread_create((thread), (attr), (func), (arg)); \ 163 + errno = __ret; \ 164 + if (__ret) \ 165 + FAIL_ERRNO("pthread_create"); \ 166 + __ret; \ 167 + }) 168 + 169 + #define xpthread_join(thread, retval) \ 170 + ({ \ 171 + int __ret = pthread_join((thread), (retval)); \ 172 + errno = __ret; \ 173 + if (__ret) \ 174 + FAIL_ERRNO("pthread_join"); \ 175 + __ret; \ 176 + }) 177 + 178 + static void init_addr_loopback4(struct sockaddr_storage *ss, socklen_t *len) 179 + { 180 + struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); 181 + 182 + addr4->sin_family = AF_INET; 183 + addr4->sin_port = 0; 184 + addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); 185 + *len = sizeof(*addr4); 186 + } 187 + 188 + static void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len) 189 + { 190 + struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); 191 + 192 + addr6->sin6_family = AF_INET6; 193 + addr6->sin6_port = 0; 194 + addr6->sin6_addr = in6addr_loopback; 195 + *len = sizeof(*addr6); 196 + } 197 + 198 + static void init_addr_loopback(int family, struct 
sockaddr_storage *ss, 199 + socklen_t *len) 200 + { 201 + switch (family) { 202 + case AF_INET: 203 + init_addr_loopback4(ss, len); 204 + return; 205 + case AF_INET6: 206 + init_addr_loopback6(ss, len); 207 + return; 208 + default: 209 + FAIL("unsupported address family %d", family); 210 + } 211 + } 212 + 213 + static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) 214 + { 215 + return (struct sockaddr *)ss; 216 + } 217 + 218 + static int enable_reuseport(int s, int progfd) 219 + { 220 + int err, one = 1; 221 + 222 + err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); 223 + if (err) 224 + return -1; 225 + err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, 226 + sizeof(progfd)); 227 + if (err) 228 + return -1; 229 + 230 + return 0; 231 + } 232 + 233 + static int listen_loopback_reuseport(int family, int sotype, int progfd) 234 + { 235 + struct sockaddr_storage addr; 236 + socklen_t len; 237 + int err, s; 238 + 239 + init_addr_loopback(family, &addr, &len); 240 + 241 + s = xsocket(family, sotype, 0); 242 + if (s == -1) 243 + return -1; 244 + 245 + if (progfd >= 0) 246 + enable_reuseport(s, progfd); 247 + 248 + err = xbind(s, sockaddr(&addr), len); 249 + if (err) 250 + goto close; 251 + 252 + err = xlisten(s, SOMAXCONN); 253 + if (err) 254 + goto close; 255 + 256 + return s; 257 + close: 258 + xclose(s); 259 + return -1; 260 + } 261 + 262 + static int listen_loopback(int family, int sotype) 263 + { 264 + return listen_loopback_reuseport(family, sotype, -1); 265 + } 266 + 267 + static void test_insert_invalid(int family, int sotype, int mapfd) 268 + { 269 + u32 key = 0; 270 + u64 value; 271 + int err; 272 + 273 + value = -1; 274 + err = bpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 275 + if (!err || errno != EINVAL) 276 + FAIL_ERRNO("map_update: expected EINVAL"); 277 + 278 + value = INT_MAX; 279 + err = bpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 280 + if (!err || errno != EBADF) 281 + 
FAIL_ERRNO("map_update: expected EBADF"); 282 + } 283 + 284 + static void test_insert_opened(int family, int sotype, int mapfd) 285 + { 286 + u32 key = 0; 287 + u64 value; 288 + int err, s; 289 + 290 + s = xsocket(family, sotype, 0); 291 + if (s == -1) 292 + return; 293 + 294 + errno = 0; 295 + value = s; 296 + err = bpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 297 + if (!err || errno != EOPNOTSUPP) 298 + FAIL_ERRNO("map_update: expected EOPNOTSUPP"); 299 + 300 + xclose(s); 301 + } 302 + 303 + static void test_insert_bound(int family, int sotype, int mapfd) 304 + { 305 + struct sockaddr_storage addr; 306 + socklen_t len; 307 + u32 key = 0; 308 + u64 value; 309 + int err, s; 310 + 311 + init_addr_loopback(family, &addr, &len); 312 + 313 + s = xsocket(family, sotype, 0); 314 + if (s == -1) 315 + return; 316 + 317 + err = xbind(s, sockaddr(&addr), len); 318 + if (err) 319 + goto close; 320 + 321 + errno = 0; 322 + value = s; 323 + err = bpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 324 + if (!err || errno != EOPNOTSUPP) 325 + FAIL_ERRNO("map_update: expected EOPNOTSUPP"); 326 + close: 327 + xclose(s); 328 + } 329 + 330 + static void test_insert_listening(int family, int sotype, int mapfd) 331 + { 332 + u64 value; 333 + u32 key; 334 + int s; 335 + 336 + s = listen_loopback(family, sotype); 337 + if (s < 0) 338 + return; 339 + 340 + key = 0; 341 + value = s; 342 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 343 + xclose(s); 344 + } 345 + 346 + static void test_delete_after_insert(int family, int sotype, int mapfd) 347 + { 348 + u64 value; 349 + u32 key; 350 + int s; 351 + 352 + s = listen_loopback(family, sotype); 353 + if (s < 0) 354 + return; 355 + 356 + key = 0; 357 + value = s; 358 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 359 + xbpf_map_delete_elem(mapfd, &key); 360 + xclose(s); 361 + } 362 + 363 + static void test_delete_after_close(int family, int sotype, int mapfd) 364 + { 365 + int err, s; 366 + u64 value; 367 + 
u32 key; 368 + 369 + s = listen_loopback(family, sotype); 370 + if (s < 0) 371 + return; 372 + 373 + key = 0; 374 + value = s; 375 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 376 + 377 + xclose(s); 378 + 379 + errno = 0; 380 + err = bpf_map_delete_elem(mapfd, &key); 381 + if (!err || (errno != EINVAL && errno != ENOENT)) 382 + /* SOCKMAP and SOCKHASH return different error codes */ 383 + FAIL_ERRNO("map_delete: expected EINVAL/EINVAL"); 384 + } 385 + 386 + static void test_lookup_after_insert(int family, int sotype, int mapfd) 387 + { 388 + u64 cookie, value; 389 + socklen_t len; 390 + u32 key; 391 + int s; 392 + 393 + s = listen_loopback(family, sotype); 394 + if (s < 0) 395 + return; 396 + 397 + key = 0; 398 + value = s; 399 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 400 + 401 + len = sizeof(cookie); 402 + xgetsockopt(s, SOL_SOCKET, SO_COOKIE, &cookie, &len); 403 + 404 + xbpf_map_lookup_elem(mapfd, &key, &value); 405 + 406 + if (value != cookie) { 407 + FAIL("map_lookup: have %#llx, want %#llx", 408 + (unsigned long long)value, (unsigned long long)cookie); 409 + } 410 + 411 + xclose(s); 412 + } 413 + 414 + static void test_lookup_after_delete(int family, int sotype, int mapfd) 415 + { 416 + int err, s; 417 + u64 value; 418 + u32 key; 419 + 420 + s = listen_loopback(family, sotype); 421 + if (s < 0) 422 + return; 423 + 424 + key = 0; 425 + value = s; 426 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 427 + xbpf_map_delete_elem(mapfd, &key); 428 + 429 + errno = 0; 430 + err = bpf_map_lookup_elem(mapfd, &key, &value); 431 + if (!err || errno != ENOENT) 432 + FAIL_ERRNO("map_lookup: expected ENOENT"); 433 + 434 + xclose(s); 435 + } 436 + 437 + static void test_lookup_32_bit_value(int family, int sotype, int mapfd) 438 + { 439 + u32 key, value32; 440 + int err, s; 441 + 442 + s = listen_loopback(family, sotype); 443 + if (s < 0) 444 + return; 445 + 446 + mapfd = bpf_create_map(BPF_MAP_TYPE_SOCKMAP, sizeof(key), 447 + 
sizeof(value32), 1, 0); 448 + if (mapfd < 0) { 449 + FAIL_ERRNO("map_create"); 450 + goto close; 451 + } 452 + 453 + key = 0; 454 + value32 = s; 455 + xbpf_map_update_elem(mapfd, &key, &value32, BPF_NOEXIST); 456 + 457 + errno = 0; 458 + err = bpf_map_lookup_elem(mapfd, &key, &value32); 459 + if (!err || errno != ENOSPC) 460 + FAIL_ERRNO("map_lookup: expected ENOSPC"); 461 + 462 + xclose(mapfd); 463 + close: 464 + xclose(s); 465 + } 466 + 467 + static void test_update_listening(int family, int sotype, int mapfd) 468 + { 469 + int s1, s2; 470 + u64 value; 471 + u32 key; 472 + 473 + s1 = listen_loopback(family, sotype); 474 + if (s1 < 0) 475 + return; 476 + 477 + s2 = listen_loopback(family, sotype); 478 + if (s2 < 0) 479 + goto close_s1; 480 + 481 + key = 0; 482 + value = s1; 483 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 484 + 485 + value = s2; 486 + xbpf_map_update_elem(mapfd, &key, &value, BPF_EXIST); 487 + xclose(s2); 488 + close_s1: 489 + xclose(s1); 490 + } 491 + 492 + /* Exercise the code path where we destroy child sockets that never 493 + * got accept()'ed, aka orphans, when parent socket gets closed. 494 + */ 495 + static void test_destroy_orphan_child(int family, int sotype, int mapfd) 496 + { 497 + struct sockaddr_storage addr; 498 + socklen_t len; 499 + int err, s, c; 500 + u64 value; 501 + u32 key; 502 + 503 + s = listen_loopback(family, sotype); 504 + if (s < 0) 505 + return; 506 + 507 + len = sizeof(addr); 508 + err = xgetsockname(s, sockaddr(&addr), &len); 509 + if (err) 510 + goto close_srv; 511 + 512 + key = 0; 513 + value = s; 514 + xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST); 515 + 516 + c = xsocket(family, sotype, 0); 517 + if (c == -1) 518 + goto close_srv; 519 + 520 + xconnect(c, sockaddr(&addr), len); 521 + xclose(c); 522 + close_srv: 523 + xclose(s); 524 + } 525 + 526 + /* Perform a passive open after removing listening socket from SOCKMAP 527 + * to ensure that callbacks get restored properly. 
+ */
+static void test_clone_after_delete(int family, int sotype, int mapfd)
+{
+	struct sockaddr_storage addr;
+	socklen_t len;
+	int err, s, c;
+	u64 value;
+	u32 key;
+
+	s = listen_loopback(family, sotype);
+	if (s < 0)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	key = 0;
+	value = s;
+	xbpf_map_update_elem(mapfd, &key, &value, BPF_NOEXIST);
+	xbpf_map_delete_elem(mapfd, &key);
+
+	c = xsocket(family, sotype, 0);
+	if (c < 0)
+		goto close_srv;
+
+	xconnect(c, sockaddr(&addr), len);
+	xclose(c);
+close_srv:
+	xclose(s);
+}
+
+/* Check that child socket that got created while parent was in a
+ * SOCKMAP, but got accept()'ed only after the parent has been removed
+ * from SOCKMAP, gets cloned without parent psock state or callbacks.
+ */
+static void test_accept_after_delete(int family, int sotype, int mapfd)
+{
+	struct sockaddr_storage addr;
+	const u32 zero = 0;
+	int err, s, c, p;
+	socklen_t len;
+	u64 value;
+
+	s = listen_loopback(family, sotype);
+	if (s == -1)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	value = s;
+	err = xbpf_map_update_elem(mapfd, &zero, &value, BPF_NOEXIST);
+	if (err)
+		goto close_srv;
+
+	c = xsocket(family, sotype, 0);
+	if (c == -1)
+		goto close_srv;
+
+	/* Create child while parent is in sockmap */
+	err = xconnect(c, sockaddr(&addr), len);
+	if (err)
+		goto close_cli;
+
+	/* Remove parent from sockmap */
+	err = xbpf_map_delete_elem(mapfd, &zero);
+	if (err)
+		goto close_cli;
+
+	p = xaccept(s, NULL, NULL);
+	if (p == -1)
+		goto close_cli;
+
+	/* Check that child sk_user_data is not set */
+	value = p;
+	xbpf_map_update_elem(mapfd, &zero, &value, BPF_NOEXIST);
+
+	xclose(p);
+close_cli:
+	xclose(c);
+close_srv:
+	xclose(s);
+}
+
+/* Check that child socket that got created and accepted while parent
+ * was in a SOCKMAP is cloned without parent psock state or callbacks.
+ */
+static void test_accept_before_delete(int family, int sotype, int mapfd)
+{
+	struct sockaddr_storage addr;
+	const u32 zero = 0, one = 1;
+	int err, s, c, p;
+	socklen_t len;
+	u64 value;
+
+	s = listen_loopback(family, sotype);
+	if (s == -1)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	value = s;
+	err = xbpf_map_update_elem(mapfd, &zero, &value, BPF_NOEXIST);
+	if (err)
+		goto close_srv;
+
+	c = xsocket(family, sotype, 0);
+	if (c == -1)
+		goto close_srv;
+
+	/* Create & accept child while parent is in sockmap */
+	err = xconnect(c, sockaddr(&addr), len);
+	if (err)
+		goto close_cli;
+
+	p = xaccept(s, NULL, NULL);
+	if (p == -1)
+		goto close_cli;
+
+	/* Check that child sk_user_data is not set */
+	value = p;
+	xbpf_map_update_elem(mapfd, &one, &value, BPF_NOEXIST);
+
+	xclose(p);
+close_cli:
+	xclose(c);
+close_srv:
+	xclose(s);
+}
+
+struct connect_accept_ctx {
+	int sockfd;
+	unsigned int done;
+	unsigned int nr_iter;
+};
+
+static bool is_thread_done(struct connect_accept_ctx *ctx)
+{
+	return READ_ONCE(ctx->done);
+}
+
+static void *connect_accept_thread(void *arg)
+{
+	struct connect_accept_ctx *ctx = arg;
+	struct sockaddr_storage addr;
+	int family, socktype;
+	socklen_t len;
+	int err, i, s;
+
+	s = ctx->sockfd;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto done;
+
+	len = sizeof(family);
+	err = xgetsockopt(s, SOL_SOCKET, SO_DOMAIN, &family, &len);
+	if (err)
+		goto done;
+
+	len = sizeof(socktype);
+	err = xgetsockopt(s, SOL_SOCKET, SO_TYPE, &socktype, &len);
+	if (err)
+		goto done;
+
+	for (i = 0; i < ctx->nr_iter; i++) {
+		int c, p;
+
+		c = xsocket(family, socktype, 0);
+		if (c < 0)
+			break;
+
+		err = xconnect(c, (struct sockaddr *)&addr, sizeof(addr));
+		if (err) {
+			xclose(c);
+			break;
+		}
+
+		p = xaccept(s, NULL, NULL);
+		if (p < 0) {
+			xclose(c);
+			break;
+		}
+
+		xclose(p);
+		xclose(c);
+	}
+done:
+	WRITE_ONCE(ctx->done, 1);
+	return NULL;
+}
+
+static void test_syn_recv_insert_delete(int family, int sotype, int mapfd)
+{
+	struct connect_accept_ctx ctx = { 0 };
+	struct sockaddr_storage addr;
+	socklen_t len;
+	u32 zero = 0;
+	pthread_t t;
+	int err, s;
+	u64 value;
+
+	s = listen_loopback(family, sotype | SOCK_NONBLOCK);
+	if (s < 0)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close;
+
+	ctx.sockfd = s;
+	ctx.nr_iter = 1000;
+
+	err = xpthread_create(&t, NULL, connect_accept_thread, &ctx);
+	if (err)
+		goto close;
+
+	value = s;
+	while (!is_thread_done(&ctx)) {
+		err = xbpf_map_update_elem(mapfd, &zero, &value, BPF_NOEXIST);
+		if (err)
+			break;
+
+		err = xbpf_map_delete_elem(mapfd, &zero);
+		if (err)
+			break;
+	}
+
+	xpthread_join(t, NULL);
+close:
+	xclose(s);
+}
+
+static void *listen_thread(void *arg)
+{
+	struct sockaddr unspec = { AF_UNSPEC };
+	struct connect_accept_ctx *ctx = arg;
+	int err, i, s;
+
+	s = ctx->sockfd;
+
+	for (i = 0; i < ctx->nr_iter; i++) {
+		err = xlisten(s, 1);
+		if (err)
+			break;
+		err = xconnect(s, &unspec, sizeof(unspec));
+		if (err)
+			break;
+	}
+
+	WRITE_ONCE(ctx->done, 1);
+	return NULL;
+}
+
+static void test_race_insert_listen(int family, int socktype, int mapfd)
+{
+	struct connect_accept_ctx ctx = { 0 };
+	const u32 zero = 0;
+	const int one = 1;
+	pthread_t t;
+	int err, s;
+	u64 value;
+
+	s = xsocket(family, socktype, 0);
+	if (s < 0)
+		return;
+
+	err = xsetsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
+	if (err)
+		goto close;
+
+	ctx.sockfd = s;
+	ctx.nr_iter = 10000;
+
+	err = pthread_create(&t, NULL, listen_thread, &ctx);
+	if (err)
+		goto close;
+
+	value = s;
+	while (!is_thread_done(&ctx)) {
+		err = bpf_map_update_elem(mapfd, &zero, &value, BPF_NOEXIST);
+		/* Expecting EOPNOTSUPP before listen() */
+		if (err && errno != EOPNOTSUPP) {
+			FAIL_ERRNO("map_update");
+			break;
+		}
+
+		err = bpf_map_delete_elem(mapfd, &zero);
+		/* Expecting no entry after unhash on connect(AF_UNSPEC) */
+		if (err && errno != EINVAL && errno != ENOENT) {
+			FAIL_ERRNO("map_delete");
+			break;
+		}
+	}
+
+	xpthread_join(t, NULL);
+close:
+	xclose(s);
+}
+
+static void zero_verdict_count(int mapfd)
+{
+	unsigned int zero = 0;
+	int key;
+
+	key = SK_DROP;
+	xbpf_map_update_elem(mapfd, &key, &zero, BPF_ANY);
+	key = SK_PASS;
+	xbpf_map_update_elem(mapfd, &key, &zero, BPF_ANY);
+}
+
+enum redir_mode {
+	REDIR_INGRESS,
+	REDIR_EGRESS,
+};
+
+static const char *redir_mode_str(enum redir_mode mode)
+{
+	switch (mode) {
+	case REDIR_INGRESS:
+		return "ingress";
+	case REDIR_EGRESS:
+		return "egress";
+	default:
+		return "unknown";
+	}
+}
+
+static void redir_to_connected(int family, int sotype, int sock_mapfd,
+			       int verd_mapfd, enum redir_mode mode)
+{
+	const char *log_prefix = redir_mode_str(mode);
+	struct sockaddr_storage addr;
+	int s, c0, c1, p0, p1;
+	unsigned int pass;
+	socklen_t len;
+	int err, n;
+	u64 value;
+	u32 key;
+	char b;
+
+	zero_verdict_count(verd_mapfd);
+
+	s = listen_loopback(family, sotype | SOCK_NONBLOCK);
+	if (s < 0)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	c0 = xsocket(family, sotype, 0);
+	if (c0 < 0)
+		goto close_srv;
+	err = xconnect(c0, sockaddr(&addr), len);
+	if (err)
+		goto close_cli0;
+
+	p0 = xaccept(s, NULL, NULL);
+	if (p0 < 0)
+		goto close_cli0;
+
+	c1 = xsocket(family, sotype, 0);
+	if (c1 < 0)
+		goto close_peer0;
+	err = xconnect(c1, sockaddr(&addr), len);
+	if (err)
+		goto close_cli1;
+
+	p1 = xaccept(s, NULL, NULL);
+	if (p1 < 0)
+		goto close_cli1;
+
+	key = 0;
+	value = p0;
+	err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_peer1;
+
+	key = 1;
+	value = p1;
+	err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_peer1;
+
+	n = write(mode == REDIR_INGRESS ? c1 : p1, "a", 1);
+	if (n < 0)
+		FAIL_ERRNO("%s: write", log_prefix);
+	if (n == 0)
+		FAIL("%s: incomplete write", log_prefix);
+	if (n < 1)
+		goto close_peer1;
+
+	key = SK_PASS;
+	err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
+	if (err)
+		goto close_peer1;
+	if (pass != 1)
+		FAIL("%s: want pass count 1, have %d", log_prefix, pass);
+
+	n = read(c0, &b, 1);
+	if (n < 0)
+		FAIL_ERRNO("%s: read", log_prefix);
+	if (n == 0)
+		FAIL("%s: incomplete read", log_prefix);
+
+close_peer1:
+	xclose(p1);
+close_cli1:
+	xclose(c1);
+close_peer0:
+	xclose(p0);
+close_cli0:
+	xclose(c0);
+close_srv:
+	xclose(s);
+}
+
+static void test_skb_redir_to_connected(struct test_sockmap_listen *skel,
+					struct bpf_map *inner_map, int family,
+					int sotype)
+{
+	int verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
+	int parser = bpf_program__fd(skel->progs.prog_skb_parser);
+	int verdict_map = bpf_map__fd(skel->maps.verdict_map);
+	int sock_map = bpf_map__fd(inner_map);
+	int err;
+
+	err = xbpf_prog_attach(parser, sock_map, BPF_SK_SKB_STREAM_PARSER, 0);
+	if (err)
+		return;
+	err = xbpf_prog_attach(verdict, sock_map, BPF_SK_SKB_STREAM_VERDICT, 0);
+	if (err)
+		goto detach;
+
+	redir_to_connected(family, sotype, sock_map, verdict_map,
+			   REDIR_INGRESS);
+
+	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_STREAM_VERDICT);
+detach:
+	xbpf_prog_detach2(parser, sock_map, BPF_SK_SKB_STREAM_PARSER);
+}
+
+static void test_msg_redir_to_connected(struct test_sockmap_listen *skel,
+					struct bpf_map *inner_map, int family,
+					int sotype)
+{
+	int verdict = bpf_program__fd(skel->progs.prog_msg_verdict);
+	int verdict_map = bpf_map__fd(skel->maps.verdict_map);
+	int sock_map = bpf_map__fd(inner_map);
+	int err;
+
+	err = xbpf_prog_attach(verdict, sock_map, BPF_SK_MSG_VERDICT, 0);
+	if (err)
+		return;
+
+	redir_to_connected(family, sotype, sock_map, verdict_map, REDIR_EGRESS);
+
+	xbpf_prog_detach2(verdict, sock_map, BPF_SK_MSG_VERDICT);
+}
+
+static void redir_to_listening(int family, int sotype, int sock_mapfd,
+			       int verd_mapfd, enum redir_mode mode)
+{
+	const char *log_prefix = redir_mode_str(mode);
+	struct sockaddr_storage addr;
+	int s, c, p, err, n;
+	unsigned int drop;
+	socklen_t len;
+	u64 value;
+	u32 key;
+
+	zero_verdict_count(verd_mapfd);
+
+	s = listen_loopback(family, sotype | SOCK_NONBLOCK);
+	if (s < 0)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	c = xsocket(family, sotype, 0);
+	if (c < 0)
+		goto close_srv;
+	err = xconnect(c, sockaddr(&addr), len);
+	if (err)
+		goto close_cli;
+
+	p = xaccept(s, NULL, NULL);
+	if (p < 0)
+		goto close_cli;
+
+	key = 0;
+	value = s;
+	err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_peer;
+
+	key = 1;
+	value = p;
+	err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_peer;
+
+	n = write(mode == REDIR_INGRESS ? c : p, "a", 1);
+	if (n < 0 && errno != EACCES)
+		FAIL_ERRNO("%s: write", log_prefix);
+	if (n == 0)
+		FAIL("%s: incomplete write", log_prefix);
+	if (n < 1)
+		goto close_peer;
+
+	key = SK_DROP;
+	err = xbpf_map_lookup_elem(verd_mapfd, &key, &drop);
+	if (err)
+		goto close_peer;
+	if (drop != 1)
+		FAIL("%s: want drop count 1, have %d", log_prefix, drop);
+
+close_peer:
+	xclose(p);
+close_cli:
+	xclose(c);
+close_srv:
+	xclose(s);
+}
+
+static void test_skb_redir_to_listening(struct test_sockmap_listen *skel,
+					struct bpf_map *inner_map, int family,
+					int sotype)
+{
+	int verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
+	int parser = bpf_program__fd(skel->progs.prog_skb_parser);
+	int verdict_map = bpf_map__fd(skel->maps.verdict_map);
+	int sock_map = bpf_map__fd(inner_map);
+	int err;
+
+	err = xbpf_prog_attach(parser, sock_map, BPF_SK_SKB_STREAM_PARSER, 0);
+	if (err)
+		return;
+	err = xbpf_prog_attach(verdict, sock_map, BPF_SK_SKB_STREAM_VERDICT, 0);
+	if (err)
+		goto detach;
+
+	redir_to_listening(family, sotype, sock_map, verdict_map,
+			   REDIR_INGRESS);
+
+	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_STREAM_VERDICT);
+detach:
+	xbpf_prog_detach2(parser, sock_map, BPF_SK_SKB_STREAM_PARSER);
+}
+
+static void test_msg_redir_to_listening(struct test_sockmap_listen *skel,
+					struct bpf_map *inner_map, int family,
+					int sotype)
+{
+	int verdict = bpf_program__fd(skel->progs.prog_msg_verdict);
+	int verdict_map = bpf_map__fd(skel->maps.verdict_map);
+	int sock_map = bpf_map__fd(inner_map);
+	int err;
+
+	err = xbpf_prog_attach(verdict, sock_map, BPF_SK_MSG_VERDICT, 0);
+	if (err)
+		return;
+
+	redir_to_listening(family, sotype, sock_map, verdict_map, REDIR_EGRESS);
+
+	xbpf_prog_detach2(verdict, sock_map, BPF_SK_MSG_VERDICT);
+}
+
+static void test_reuseport_select_listening(int family, int sotype,
+					    int sock_map, int verd_map,
+					    int reuseport_prog)
+{
+	struct sockaddr_storage addr;
+	unsigned int pass;
+	int s, c, p, err;
+	socklen_t len;
+	u64 value;
+	u32 key;
+
+	zero_verdict_count(verd_map);
+
+	s = listen_loopback_reuseport(family, sotype, reuseport_prog);
+	if (s < 0)
+		return;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	key = 0;
+	value = s;
+	err = xbpf_map_update_elem(sock_map, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_srv;
+
+	c = xsocket(family, sotype, 0);
+	if (c < 0)
+		goto close_srv;
+	err = xconnect(c, sockaddr(&addr), len);
+	if (err)
+		goto close_cli;
+
+	p = xaccept(s, NULL, NULL);
+	if (p < 0)
+		goto close_cli;
+
+	key = SK_PASS;
+	err = xbpf_map_lookup_elem(verd_map, &key, &pass);
+	if (err)
+		goto close_peer;
+	if (pass != 1)
+		FAIL("want pass count 1, have %d", pass);
+
+close_peer:
+	xclose(p);
+close_cli:
+	xclose(c);
+close_srv:
+	xclose(s);
+}
+
+static void test_reuseport_select_connected(int family, int sotype,
+					    int sock_map, int verd_map,
+					    int reuseport_prog)
+{
+	struct sockaddr_storage addr;
+	int s, c0, c1, p0, err;
+	unsigned int drop;
+	socklen_t len;
+	u64 value;
+	u32 key;
+
+	zero_verdict_count(verd_map);
+
+	s = listen_loopback_reuseport(family, sotype, reuseport_prog);
+	if (s < 0)
+		return;
+
+	/* Populate sock_map[0] to avoid ENOENT on first connection */
+	key = 0;
+	value = s;
+	err = xbpf_map_update_elem(sock_map, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_srv;
+
+	len = sizeof(addr);
+	err = xgetsockname(s, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv;
+
+	c0 = xsocket(family, sotype, 0);
+	if (c0 < 0)
+		goto close_srv;
+
+	err = xconnect(c0, sockaddr(&addr), len);
+	if (err)
+		goto close_cli0;
+
+	p0 = xaccept(s, NULL, NULL);
+	if (p0 < 0)
+		goto close_cli0;
+
+	/* Update sock_map[0] to redirect to a connected socket */
+	key = 0;
+	value = p0;
+	err = xbpf_map_update_elem(sock_map, &key, &value, BPF_EXIST);
+	if (err)
+		goto close_peer0;
+
+	c1 = xsocket(family, sotype, 0);
+	if (c1 < 0)
+		goto close_peer0;
+
+	errno = 0;
+	err = connect(c1, sockaddr(&addr), len);
+	if (!err || errno != ECONNREFUSED)
+		FAIL_ERRNO("connect: expected ECONNREFUSED");
+
+	key = SK_DROP;
+	err = xbpf_map_lookup_elem(verd_map, &key, &drop);
+	if (err)
+		goto close_cli1;
+	if (drop != 1)
+		FAIL("want drop count 1, have %d", drop);
+
+close_cli1:
+	xclose(c1);
+close_peer0:
+	xclose(p0);
+close_cli0:
+	xclose(c0);
+close_srv:
+	xclose(s);
+}
+
+/* Check that redirecting across reuseport groups is not allowed.
+ */
+static void test_reuseport_mixed_groups(int family, int sotype, int sock_map,
+					int verd_map, int reuseport_prog)
+{
+	struct sockaddr_storage addr;
+	int s1, s2, c, err;
+	unsigned int drop;
+	socklen_t len;
+	u64 value;
+	u32 key;
+
+	zero_verdict_count(verd_map);
+
+	/* Create two listeners, each in its own reuseport group */
+	s1 = listen_loopback_reuseport(family, sotype, reuseport_prog);
+	if (s1 < 0)
+		return;
+
+	s2 = listen_loopback_reuseport(family, sotype, reuseport_prog);
+	if (s2 < 0)
+		goto close_srv1;
+
+	key = 0;
+	value = s1;
+	err = xbpf_map_update_elem(sock_map, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_srv2;
+
+	key = 1;
+	value = s2;
+	err = xbpf_map_update_elem(sock_map, &key, &value, BPF_NOEXIST);
+	if (err)
+		goto close_srv2;
+
+	/* Connect to s2, reuseport BPF selects s1 via sock_map[0] */
+	len = sizeof(addr);
+	err = xgetsockname(s2, sockaddr(&addr), &len);
+	if (err)
+		goto close_srv2;
+
+	c = xsocket(family, sotype, 0);
+	if (c < 0)
+		goto close_srv2;
+
+	err = connect(c, sockaddr(&addr), len);
+	if (err && errno != ECONNREFUSED) {
+		FAIL_ERRNO("connect: expected ECONNREFUSED");
+		goto close_cli;
+	}
+
+	/* Expect drop, can't redirect outside of reuseport group */
+	key = SK_DROP;
+	err = xbpf_map_lookup_elem(verd_map, &key, &drop);
+	if (err)
+		goto close_cli;
+	if (drop != 1)
+		FAIL("want drop count 1, have %d", drop);
+
+close_cli:
+	xclose(c);
+close_srv2:
+	xclose(s2);
+close_srv1:
+	xclose(s1);
+}
+
+#define TEST(fn)					\
+	{						\
+		fn, #fn					\
+	}
+
+static void test_ops_cleanup(const struct bpf_map *map)
+{
+	const struct bpf_map_def *def;
+	int err, mapfd;
+	u32 key;
+
+	def = bpf_map__def(map);
+	mapfd = bpf_map__fd(map);
+
+	for (key = 0; key < def->max_entries; key++) {
+		err = bpf_map_delete_elem(mapfd, &key);
+		if (err && errno != EINVAL && errno != ENOENT)
+			FAIL_ERRNO("map_delete: expected EINVAL/ENOENT");
+	}
+}
+
+static const char *family_str(sa_family_t family)
+{
+	switch (family) {
+	case AF_INET:
+		return "IPv4";
+	case AF_INET6:
+		return "IPv6";
+	default:
+		return "unknown";
+	}
+}
+
+static const char *map_type_str(const struct bpf_map *map)
+{
+	const struct bpf_map_def *def;
+
+	def = bpf_map__def(map);
+	if (IS_ERR(def))
+		return "invalid";
+
+	switch (def->type) {
+	case BPF_MAP_TYPE_SOCKMAP:
+		return "sockmap";
+	case BPF_MAP_TYPE_SOCKHASH:
+		return "sockhash";
+	default:
+		return "unknown";
+	}
+}
+
+static void test_ops(struct test_sockmap_listen *skel, struct bpf_map *map,
+		     int family, int sotype)
+{
+	const struct op_test {
+		void (*fn)(int family, int sotype, int mapfd);
+		const char *name;
+	} tests[] = {
+		/* insert */
+		TEST(test_insert_invalid),
+		TEST(test_insert_opened),
+		TEST(test_insert_bound),
+		TEST(test_insert_listening),
+		/* delete */
+		TEST(test_delete_after_insert),
+		TEST(test_delete_after_close),
+		/* lookup */
+		TEST(test_lookup_after_insert),
+		TEST(test_lookup_after_delete),
+		TEST(test_lookup_32_bit_value),
+		/* update */
+		TEST(test_update_listening),
+		/* races with insert/delete */
+		TEST(test_destroy_orphan_child),
+		TEST(test_syn_recv_insert_delete),
+		TEST(test_race_insert_listen),
+		/* child clone */
+		TEST(test_clone_after_delete),
+		TEST(test_accept_after_delete),
+		TEST(test_accept_before_delete),
+	};
+	const char *family_name, *map_name;
+	const struct op_test *t;
+	char s[MAX_TEST_NAME];
+	int map_fd;
+
+	family_name = family_str(family);
+	map_name = map_type_str(map);
+	map_fd = bpf_map__fd(map);
+
+	for (t = tests; t < tests + ARRAY_SIZE(tests); t++) {
+		snprintf(s, sizeof(s), "%s %s %s", map_name, family_name,
+			 t->name);
+
+		if (!test__start_subtest(s))
+			continue;
+
+		t->fn(family, sotype, map_fd);
+		test_ops_cleanup(map);
+	}
+}
+
+static void test_redir(struct test_sockmap_listen *skel, struct bpf_map *map,
+		       int family, int sotype)
+{
+	const struct redir_test {
+		void (*fn)(struct test_sockmap_listen *skel,
+			   struct bpf_map *map, int family, int sotype);
+		const char *name;
+	} tests[] = {
+		TEST(test_skb_redir_to_connected),
+		TEST(test_skb_redir_to_listening),
+		TEST(test_msg_redir_to_connected),
+		TEST(test_msg_redir_to_listening),
+	};
+	const char *family_name, *map_name;
+	const struct redir_test *t;
+	char s[MAX_TEST_NAME];
+
+	family_name = family_str(family);
+	map_name = map_type_str(map);
+
+	for (t = tests; t < tests + ARRAY_SIZE(tests); t++) {
+		snprintf(s, sizeof(s), "%s %s %s", map_name, family_name,
+			 t->name);
+		if (!test__start_subtest(s))
+			continue;
+
+		t->fn(skel, map, family, sotype);
+	}
+}
+
+static void test_reuseport(struct test_sockmap_listen *skel,
+			   struct bpf_map *map, int family, int sotype)
+{
+	const struct reuseport_test {
+		void (*fn)(int family, int sotype, int socket_map,
+			   int verdict_map, int reuseport_prog);
+		const char *name;
+	} tests[] = {
+		TEST(test_reuseport_select_listening),
+		TEST(test_reuseport_select_connected),
+		TEST(test_reuseport_mixed_groups),
+	};
+	int socket_map, verdict_map, reuseport_prog;
+	const char *family_name, *map_name;
+	const struct reuseport_test *t;
+	char s[MAX_TEST_NAME];
+
+	family_name = family_str(family);
+	map_name = map_type_str(map);
+
+	socket_map = bpf_map__fd(map);
+	verdict_map = bpf_map__fd(skel->maps.verdict_map);
+	reuseport_prog = bpf_program__fd(skel->progs.prog_reuseport);
+
+	for (t = tests; t < tests + ARRAY_SIZE(tests); t++) {
+		snprintf(s, sizeof(s), "%s %s %s", map_name, family_name,
+			 t->name);
+
+		if (!test__start_subtest(s))
+			continue;
+
+		t->fn(family, sotype, socket_map, verdict_map, reuseport_prog);
+	}
+}
+
+static void run_tests(struct test_sockmap_listen *skel, struct bpf_map *map,
+		      int family)
+{
+	test_ops(skel, map, family, SOCK_STREAM);
+	test_redir(skel, map, family, SOCK_STREAM);
+	test_reuseport(skel, map, family, SOCK_STREAM);
+}
+
+void test_sockmap_listen(void)
+{
+	struct test_sockmap_listen *skel;
+
+	skel = test_sockmap_listen__open_and_load();
+	if (!skel) {
+		FAIL("skeleton open/load failed");
+		return;
+	}
+
+	skel->bss->test_sockmap = true;
+	run_tests(skel, skel->maps.sock_map, AF_INET);
+	run_tests(skel, skel->maps.sock_map, AF_INET6);
+
+	skel->bss->test_sockmap = false;
+	run_tests(skel, skel->maps.sock_hash, AF_INET);
+	run_tests(skel, skel->maps.sock_hash, AF_INET6);
+
+	test_sockmap_listen__destroy(skel);
+}
+18 -7
tools/testing/selftests/bpf/prog_tests/trampoline_count.c
···
 	/* attach 'allowed' 40 trampoline programs */
 	for (i = 0; i < MAX_TRAMP_PROGS; i++) {
 		obj = bpf_object__open_file(object, NULL);
-		if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj)))
+		if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj))) {
+			obj = NULL;
 			goto cleanup;
+		}

 		err = bpf_object__load(obj);
 		if (CHECK(err, "obj_load", "err %d\n", err))
 			goto cleanup;
 		inst[i].obj = obj;
+		obj = NULL;

 		if (rand() % 2) {
-			link = load(obj, fentry_name);
-			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link)))
+			link = load(inst[i].obj, fentry_name);
+			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link))) {
+				link = NULL;
 				goto cleanup;
+			}
 			inst[i].link_fentry = link;
 		} else {
-			link = load(obj, fexit_name);
-			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link)))
+			link = load(inst[i].obj, fexit_name);
+			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link))) {
+				link = NULL;
 				goto cleanup;
+			}
 			inst[i].link_fexit = link;
 		}
 	}

 	/* and try 1 extra.. */
 	obj = bpf_object__open_file(object, NULL);
-	if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj)))
+	if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj))) {
+		obj = NULL;
 		goto cleanup;
+	}

 	err = bpf_object__load(obj);
 	if (CHECK(err, "obj_load", "err %d\n", err))
···
 cleanup_extra:
 	bpf_object__close(obj);
 cleanup:
-	while (--i) {
+	if (i >= MAX_TRAMP_PROGS)
+		i = MAX_TRAMP_PROGS - 1;
+	for (; i >= 0; i--) {
 		bpf_link__destroy(inst[i].link_fentry);
 		bpf_link__destroy(inst[i].link_fexit);
 		bpf_object__close(inst[i].obj);
+13 -3
tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
···
 	struct test_xdp *pkt_skel = NULL;
 	struct test_xdp_bpf2bpf *ftrace_skel = NULL;
 	struct vip key4 = {.protocol = 6, .family = AF_INET};
-	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct bpf_program *prog;

 	/* Load XDP program to introspect */
 	pkt_skel = test_xdp__open_and_load();
···
 	bpf_map_update_elem(map_fd, &key4, &value4, 0);

 	/* Load trace program */
-	opts.attach_prog_fd = pkt_fd,
-	ftrace_skel = test_xdp_bpf2bpf__open_opts(&opts);
+	ftrace_skel = test_xdp_bpf2bpf__open();
 	if (CHECK(!ftrace_skel, "__open", "ftrace skeleton failed\n"))
 		goto out;

+	/* Demonstrate the bpf_program__set_attach_target() API rather than
+	 * the load with options, i.e. opts.attach_prog_fd.
+	 */
+	prog = ftrace_skel->progs.trace_on_entry;
+	bpf_program__set_expected_attach_type(prog, BPF_TRACE_FENTRY);
+	bpf_program__set_attach_target(prog, pkt_fd, "_xdp_tx_iptunnel");

+	prog = ftrace_skel->progs.trace_on_exit;
+	bpf_program__set_expected_attach_type(prog, BPF_TRACE_FEXIT);
+	bpf_program__set_attach_target(prog, pkt_fd, "_xdp_tx_iptunnel");

 	err = test_xdp_bpf2bpf__load(ftrace_skel);
 	if (CHECK(err, "__load", "ftrace skeleton failed\n"))
+50
tools/testing/selftests/bpf/progs/test_perf_branches.c
···
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <stddef.h>
+#include <linux/ptrace.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_trace_helpers.h"
+
+int valid = 0;
+int required_size_out = 0;
+int written_stack_out = 0;
+int written_global_out = 0;
+
+struct {
+	__u64 _a;
+	__u64 _b;
+	__u64 _c;
+} fpbe[30] = {0};
+
+SEC("perf_event")
+int perf_branches(void *ctx)
+{
+	__u64 entries[4 * 3] = {0};
+	int required_size, written_stack, written_global;
+
+	/* write to stack */
+	written_stack = bpf_read_branch_records(ctx, entries, sizeof(entries), 0);
+	/* ignore spurious events */
+	if (!written_stack)
+		return 1;
+
+	/* get required size */
+	required_size = bpf_read_branch_records(ctx, NULL, 0,
+						BPF_F_GET_BRANCH_RECORDS_SIZE);
+
+	written_global = bpf_read_branch_records(ctx, fpbe, sizeof(fpbe), 0);
+	/* ignore spurious events */
+	if (!written_global)
+		return 1;
+
+	required_size_out = required_size;
+	written_stack_out = written_stack;
+	written_global_out = written_global;
+	valid = 1;
+
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+98
tools/testing/selftests/bpf/progs/test_sockmap_listen.c
···
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Cloudflare
+
+#include <errno.h>
+#include <stdbool.h>
+#include <linux/bpf.h>
+
+#include <bpf/bpf_helpers.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 2);
+	__type(key, __u32);
+	__type(value, __u64);
+} sock_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKHASH);
+	__uint(max_entries, 2);
+	__type(key, __u32);
+	__type(value, __u64);
+} sock_hash SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 2);
+	__type(key, int);
+	__type(value, unsigned int);
+} verdict_map SEC(".maps");
+
+static volatile bool test_sockmap; /* toggled by user-space */
+
+SEC("sk_skb/stream_parser")
+int prog_skb_parser(struct __sk_buff *skb)
+{
+	return skb->len;
+}
+
+SEC("sk_skb/stream_verdict")
+int prog_skb_verdict(struct __sk_buff *skb)
+{
+	unsigned int *count;
+	__u32 zero = 0;
+	int verdict;
+
+	if (test_sockmap)
+		verdict = bpf_sk_redirect_map(skb, &sock_map, zero, 0);
+	else
+		verdict = bpf_sk_redirect_hash(skb, &sock_hash, &zero, 0);
+
+	count = bpf_map_lookup_elem(&verdict_map, &verdict);
+	if (count)
+		(*count)++;
+
+	return verdict;
+}
+
+SEC("sk_msg")
+int prog_msg_verdict(struct sk_msg_md *msg)
+{
+	unsigned int *count;
+	__u32 zero = 0;
+	int verdict;
+
+	if (test_sockmap)
+		verdict = bpf_msg_redirect_map(msg, &sock_map, zero, 0);
+	else
+		verdict = bpf_msg_redirect_hash(msg, &sock_hash, &zero, 0);
+
+	count = bpf_map_lookup_elem(&verdict_map, &verdict);
+	if (count)
+		(*count)++;
+
+	return verdict;
+}
+
+SEC("sk_reuseport")
+int prog_reuseport(struct sk_reuseport_md *reuse)
+{
+	unsigned int *count;
+	int err, verdict;
+	__u32 zero = 0;
+
+	if (test_sockmap)
+		err = bpf_sk_select_reuseport(reuse, &sock_map, &zero, 0);
+	else
+		err = bpf_sk_select_reuseport(reuse, &sock_hash, &zero, 0);
+	verdict = err ? SK_DROP : SK_PASS;
+
+	count = bpf_map_lookup_elem(&verdict_map, &verdict);
+	if (count)
+		(*count)++;
+
+	return verdict;
+}
+
+int _version SEC("version") = 1;
+char _license[] SEC("license") = "GPL";
+2 -2
tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c
···
 } __attribute__((preserve_access_index));

 __u64 test_result_fentry = 0;
-SEC("fentry/_xdp_tx_iptunnel")
+SEC("fentry/FUNC")
 int BPF_PROG(trace_on_entry, struct xdp_buff *xdp)
 {
 	test_result_fentry = xdp->rxq->dev->ifindex;
···
 }

 __u64 test_result_fexit = 0;
-SEC("fexit/_xdp_tx_iptunnel")
+SEC("fexit/FUNC")
 int BPF_PROG(trace_on_exit, struct xdp_buff *xdp, int ret)
 {
 	test_result_fexit = ret;
+1 -5
tools/testing/selftests/bpf/test_maps.c
···
 	/* Test update without programs */
 	for (i = 0; i < 6; i++) {
 		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
-		if (i < 2 && !err) {
-			printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n",
-			       i, sfd[i]);
-			goto out_sockmap;
-		} else if (i >= 2 && err) {
+		if (err) {
 			printf("Failed noprog update sockmap '%i:%i'\n",
 			       i, sfd[i]);
 			goto out_sockmap;