
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-08-16

We've added 17 non-merge commits during the last 6 day(s) which contain
a total of 20 files changed, 1179 insertions(+), 37 deletions(-).

The main changes are:

1) Add a BPF hook in sys_socket() to change the protocol ID
from IPPROTO_TCP to IPPROTO_MPTCP to cover migration for legacy
applications, from Geliang Tang.

2) Follow-up fix for fallout from the SO_REUSEPORT + bpf_sk_assign work:
avoid a splat on non-fullsock sks in inet[6]_steal_sock,
from Lorenz Bauer.

3) Improvements to struct_ops links to avoid forcing presence of
update/validate callbacks. Also add bpf_struct_ops fields documentation,
from David Vernet.

4) Ensure libbpf sets close-on-exec flag on gzopen, from Marco Vedovati.

5) Several new tcx selftest additions and bpftool link show support for
tcx and xdp links, from Daniel Borkmann.

6) Fix a smatch warning on uninitialized symbol in
bpf_perf_link_fill_kprobe, from Yafang Shao.

7) BPF selftest fixes, e.g. a misplaced break in the kfunc_call test,
from Yipeng Zou.

8) Small cleanup to remove unused declaration bpf_link_new_file,
from Yue Haibing.

9) Small typo fix to bpftool's perf help message, from Daniel T. Lee.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
selftests/bpf: Add mptcpify test
selftests/bpf: Fix error checks of mptcp open_and_load
selftests/bpf: Add two mptcp netns helpers
bpf: Add update_socket_protocol hook
bpftool: Implement link show support for xdp
bpftool: Implement link show support for tcx
selftests/bpf: Add selftest for fill_link_info
bpf: Fix uninitialized symbol in bpf_perf_link_fill_kprobe()
net: Fix slab-out-of-bounds in inet[6]_steal_sock
bpf: Document struct bpf_struct_ops fields
bpf: Support default .validate() and .update() behavior for struct_ops links
selftests/bpf: Add various more tcx test cases
selftests/bpf: Clean up fmod_ret in bench_rename test script
selftests/bpf: Fix repeat option when kfunc_call verification fails
libbpf: Set close-on-exec flag on gzopen
bpftool: fix perf help message
bpf: Remove unused declaration bpf_link_new_file()
====================

Link: https://lore.kernel.org/r/20230816212840.1539-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+1179 -37
+47 -1
include/linux/bpf.h
···
 struct btf_member;
 
 #define BPF_STRUCT_OPS_MAX_NR_MEMBERS 64
+/**
+ * struct bpf_struct_ops - A structure of callbacks allowing a subsystem to
+ *                         define a BPF_MAP_TYPE_STRUCT_OPS map type composed
+ *                         of BPF_PROG_TYPE_STRUCT_OPS progs.
+ * @verifier_ops: A structure of callbacks that are invoked by the verifier
+ *                when determining whether the struct_ops progs in the
+ *                struct_ops map are valid.
+ * @init: A callback that is invoked a single time, and before any other
+ *        callback, to initialize the structure. A nonzero return value means
+ *        the subsystem could not be initialized.
+ * @check_member: When defined, a callback invoked by the verifier to allow
+ *                the subsystem to determine if an entry in the struct_ops map
+ *                is valid. A nonzero return value means that the map is
+ *                invalid and should be rejected by the verifier.
+ * @init_member: A callback that is invoked for each member of the struct_ops
+ *               map to allow the subsystem to initialize the member. A nonzero
+ *               value means the member could not be initialized. This callback
+ *               is exclusive with the @type, @type_id, @value_type, and
+ *               @value_id fields.
+ * @reg: A callback that is invoked when the struct_ops map has been
+ *       initialized and is being attached to. Zero means the struct_ops map
+ *       has been successfully registered and is live. A nonzero return value
+ *       means the struct_ops map could not be registered.
+ * @unreg: A callback that is invoked when the struct_ops map should be
+ *         unregistered.
+ * @update: A callback that is invoked when the live struct_ops map is being
+ *          updated to contain new values. This callback is only invoked when
+ *          the struct_ops map is loaded with BPF_F_LINK. If not defined, it
+ *          is assumed that the struct_ops map cannot be updated.
+ * @validate: A callback that is invoked after all of the members have been
+ *            initialized. This callback should perform static checks on the
+ *            map, meaning that it should either fail or succeed
+ *            deterministically. A struct_ops map that has been validated may
+ *            not necessarily succeed in being registered if the call to @reg
+ *            fails. For example, a valid struct_ops map may be loaded, but
+ *            then fail to be registered due to there being another active
+ *            struct_ops map on the system in the subsystem already. For this
+ *            reason, if this callback is not defined, the check is skipped as
+ *            the struct_ops map will have final verification performed in
+ *            @reg.
+ * @type: BTF type.
+ * @value_type: Value type.
+ * @name: The name of the struct bpf_struct_ops object.
+ * @func_models: Func models
+ * @type_id: BTF type id.
+ * @value_id: BTF value id.
+ */
 struct bpf_struct_ops {
 	const struct bpf_verifier_ops *verifier_ops;
 	int (*init)(struct btf *btf);
···
 void bpf_link_inc(struct bpf_link *link);
 void bpf_link_put(struct bpf_link *link);
 int bpf_link_new_fd(struct bpf_link *link);
-struct file *bpf_link_new_file(struct bpf_link *link, int *reserved_fd);
 struct bpf_link *bpf_link_get_from_fd(u32 ufd);
 struct bpf_link *bpf_link_get_curr_or_next(u32 *id);
 
+1 -1
include/net/inet6_hashtables.h
···
 	if (!sk)
 		return NULL;
 
-	if (!prefetched)
+	if (!prefetched || !sk_fullsock(sk))
 		return sk;
 
 	if (sk->sk_protocol == IPPROTO_TCP) {
+1 -1
include/net/inet_hashtables.h
···
 	if (!sk)
 		return NULL;
 
-	if (!prefetched)
+	if (!prefetched || !sk_fullsock(sk))
 		return sk;
 
 	if (sk->sk_protocol == IPPROTO_TCP) {
+9 -6
kernel/bpf/bpf_struct_ops.c
···
 	}
 
 	if (st_map->map.map_flags & BPF_F_LINK) {
-		err = st_ops->validate(kdata);
-		if (err)
-			goto reset_unlock;
+		err = 0;
+		if (st_ops->validate) {
+			err = st_ops->validate(kdata);
+			if (err)
+				goto reset_unlock;
+		}
 		set_memory_rox((long)st_map->image, 1);
 		/* Let bpf_link handle registration & unregistration.
 		 *
···
 	vt = st_ops->value_type;
 	if (attr->value_size != vt->size)
 		return ERR_PTR(-EINVAL);
-
-	if (attr->map_flags & BPF_F_LINK && (!st_ops->validate || !st_ops->update))
-		return ERR_PTR(-EOPNOTSUPP);
 
 	t = st_ops->type;
 
···
 
 	if (!bpf_struct_ops_valid_to_reg(new_map))
 		return -EINVAL;
+
+	if (!st_map->st_ops->update)
+		return -EOPNOTSUPP;
 
 	mutex_lock(&update_mutex);
 
+2 -3
kernel/bpf/syscall.c
···
 
 	if (!ulen ^ !uname)
 		return -EINVAL;
-	if (!uname)
-		return 0;
 
 	err = bpf_get_perf_event_info(event, &prog_id, fd_type, &buf,
 				      probe_offset, probe_addr);
 	if (err)
 		return err;
-
+	if (!uname)
+		return 0;
 	if (buf) {
 		len = strlen(buf);
 		err = bpf_copy_to_user(uname, buf, ulen, len);
+15
net/mptcp/bpf.c
···
 
 	return NULL;
 }
+
+BTF_SET8_START(bpf_mptcp_fmodret_ids)
+BTF_ID_FLAGS(func, update_socket_protocol)
+BTF_SET8_END(bpf_mptcp_fmodret_ids)
+
+static const struct btf_kfunc_id_set bpf_mptcp_fmodret_set = {
+	.owner = THIS_MODULE,
+	.set = &bpf_mptcp_fmodret_ids,
+};
+
+static int __init bpf_mptcp_kfunc_init(void)
+{
+	return register_btf_fmodret_id_set(&bpf_mptcp_fmodret_set);
+}
+late_initcall(bpf_mptcp_kfunc_init);
+25 -1
net/socket.c
···
 	return sock_alloc_file(sock, flags, NULL);
 }
 
+/* A hook for bpf progs to attach to and update socket protocol.
+ *
+ * A static noinline declaration here could cause the compiler to
+ * optimize away the function. A global noinline declaration will
+ * keep the definition, but may optimize away the callsite.
+ * Therefore, __weak is needed to ensure that the call is still
+ * emitted, by telling the compiler that we don't know what the
+ * function might eventually be.
+ *
+ * __diag_* below are needed to dismiss the missing prototype warning.
+ */
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "A fmod_ret entry point for BPF programs");
+
+__weak noinline int update_socket_protocol(int family, int type, int protocol)
+{
+	return protocol;
+}
+
+__diag_pop();
+
 int __sys_socket(int family, int type, int protocol)
 {
 	struct socket *sock;
 	int flags;
 
-	sock = __sys_socket_create(family, type, protocol);
+	sock = __sys_socket_create(family, type,
+				   update_socket_protocol(family, type, protocol));
 	if (IS_ERR(sock))
 		return PTR_ERR(sock);
 
+44
tools/bpf/bpftool/link.c
···
 	jsonw_uint_field(wtr, "attach_type", attach_type);
 }
 
+static void show_link_ifindex_json(__u32 ifindex, json_writer_t *wtr)
+{
+	char devname[IF_NAMESIZE] = "(unknown)";
+
+	if (ifindex)
+		if_indextoname(ifindex, devname);
+	else
+		snprintf(devname, sizeof(devname), "(detached)");
+	jsonw_string_field(wtr, "devname", devname);
+	jsonw_uint_field(wtr, "ifindex", ifindex);
+}
+
 static bool is_iter_map_target(const char *target_name)
 {
 	return strcmp(target_name, "bpf_map_elem") == 0 ||
···
 	case BPF_LINK_TYPE_NETFILTER:
 		netfilter_dump_json(info, json_wtr);
 		break;
+	case BPF_LINK_TYPE_TCX:
+		show_link_ifindex_json(info->tcx.ifindex, json_wtr);
+		show_link_attach_type_json(info->tcx.attach_type, json_wtr);
+		break;
+	case BPF_LINK_TYPE_XDP:
+		show_link_ifindex_json(info->xdp.ifindex, json_wtr);
+		break;
 	case BPF_LINK_TYPE_STRUCT_OPS:
 		jsonw_uint_field(json_wtr, "map_id",
 				 info->struct_ops.map_id);
···
 		printf("attach_type %s ", attach_type_str);
 	else
 		printf("attach_type %u ", attach_type);
+}
+
+static void show_link_ifindex_plain(__u32 ifindex)
+{
+	char devname[IF_NAMESIZE * 2] = "(unknown)";
+	char tmpname[IF_NAMESIZE];
+	char *ret = NULL;
+
+	if (ifindex)
+		ret = if_indextoname(ifindex, tmpname);
+	else
+		snprintf(devname, sizeof(devname), "(detached)");
+	if (ret)
+		snprintf(devname, sizeof(devname), "%s(%d)",
+			 tmpname, ifindex);
+	printf("ifindex %s ", devname);
 }
 
 static void show_iter_plain(struct bpf_link_info *info)
···
 		break;
 	case BPF_LINK_TYPE_NETFILTER:
 		netfilter_dump_plain(info);
+		break;
+	case BPF_LINK_TYPE_TCX:
+		printf("\n\t");
+		show_link_ifindex_plain(info->tcx.ifindex);
+		show_link_attach_type_plain(info->tcx.attach_type);
+		break;
+	case BPF_LINK_TYPE_XDP:
+		printf("\n\t");
+		show_link_ifindex_plain(info->xdp.ifindex);
 		break;
 	case BPF_LINK_TYPE_KPROBE_MULTI:
 		show_kprobe_multi_plain(info);
+1 -1
tools/bpf/bpftool/perf.c
···
 {
 	fprintf(stderr,
 		"Usage: %1$s %2$s { show | list }\n"
-		"       %1$s %2$s help }\n"
+		"       %1$s %2$s help\n"
 		"\n"
 		"       " HELP_SPEC_OPTIONS " }\n"
 		"",
+2 -2
tools/lib/bpf/libbpf.c
···
 		return -ENAMETOOLONG;
 
 	/* gzopen also accepts uncompressed files. */
-	file = gzopen(buf, "r");
+	file = gzopen(buf, "re");
 	if (!file)
-		file = gzopen("/proc/config.gz", "r");
+		file = gzopen("/proc/config.gz", "re");
 
 	if (!file) {
 		pr_warn("failed to open system Kconfig\n");
+3
tools/testing/selftests/bpf/DENYLIST.aarch64
···
 module_attach                                   # prog 'kprobe_multi': failed to auto-attach: -95
 fentry_test/fentry_many_args                    # fentry_many_args:FAIL:fentry_many_args_attach unexpected error: -524
 fexit_test/fexit_many_args                      # fexit_many_args:FAIL:fexit_many_args_attach unexpected error: -524
+fill_link_info/kprobe_multi_link_info           # bpf_program__attach_kprobe_multi_opts unexpected error: -95
+fill_link_info/kretprobe_multi_link_info        # bpf_program__attach_kprobe_multi_opts unexpected error: -95
+fill_link_info/kprobe_multi_invalid_ubuff       # bpf_program__attach_kprobe_multi_opts unexpected error: -95
+1 -1
tools/testing/selftests/bpf/benchs/run_bench_rename.sh
···
 
 set -eufo pipefail
 
-for i in base kprobe kretprobe rawtp fentry fexit fmodret
+for i in base kprobe kretprobe rawtp fentry fexit
 do
 	summary=$(sudo ./bench -w2 -d5 -a rename-$i | tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-)
 	printf "%-10s: %s\n" $i "$summary"
+1 -1
tools/testing/selftests/bpf/prog_tests/kfunc_call.c
···
 	case tc_test:
 		topts.data_in = &pkt_v4;
 		topts.data_size_in = sizeof(pkt_v4);
-		break;
 		topts.repeat = 1;
+		break;
 	}
 
 	skel = kfunc_call_fail__open_opts(&opts);
+161 -19
tools/testing/selftests/bpf/prog_tests/mptcp.c
··· 2 2 /* Copyright (c) 2020, Tessares SA. */ 3 3 /* Copyright (c) 2022, SUSE. */ 4 4 5 + #include <linux/const.h> 6 + #include <netinet/in.h> 5 7 #include <test_progs.h> 6 8 #include "cgroup_helpers.h" 7 9 #include "network_helpers.h" 8 10 #include "mptcp_sock.skel.h" 11 + #include "mptcpify.skel.h" 9 12 10 13 #define NS_TEST "mptcp_ns" 14 + 15 + #ifndef IPPROTO_MPTCP 16 + #define IPPROTO_MPTCP 262 17 + #endif 18 + 19 + #ifndef SOL_MPTCP 20 + #define SOL_MPTCP 284 21 + #endif 22 + #ifndef MPTCP_INFO 23 + #define MPTCP_INFO 1 24 + #endif 25 + #ifndef MPTCP_INFO_FLAG_FALLBACK 26 + #define MPTCP_INFO_FLAG_FALLBACK _BITUL(0) 27 + #endif 28 + #ifndef MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED 29 + #define MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED _BITUL(1) 30 + #endif 11 31 12 32 #ifndef TCP_CA_NAME_MAX 13 33 #define TCP_CA_NAME_MAX 16 14 34 #endif 35 + 36 + struct __mptcp_info { 37 + __u8 mptcpi_subflows; 38 + __u8 mptcpi_add_addr_signal; 39 + __u8 mptcpi_add_addr_accepted; 40 + __u8 mptcpi_subflows_max; 41 + __u8 mptcpi_add_addr_signal_max; 42 + __u8 mptcpi_add_addr_accepted_max; 43 + __u32 mptcpi_flags; 44 + __u32 mptcpi_token; 45 + __u64 mptcpi_write_seq; 46 + __u64 mptcpi_snd_una; 47 + __u64 mptcpi_rcv_nxt; 48 + __u8 mptcpi_local_addr_used; 49 + __u8 mptcpi_local_addr_max; 50 + __u8 mptcpi_csum_enabled; 51 + __u32 mptcpi_retransmits; 52 + __u64 mptcpi_bytes_retrans; 53 + __u64 mptcpi_bytes_sent; 54 + __u64 mptcpi_bytes_received; 55 + __u64 mptcpi_bytes_acked; 56 + }; 15 57 16 58 struct mptcp_storage { 17 59 __u32 invoked; ··· 63 21 struct sock *first; 64 22 char ca_name[TCP_CA_NAME_MAX]; 65 23 }; 24 + 25 + static struct nstoken *create_netns(void) 26 + { 27 + SYS(fail, "ip netns add %s", NS_TEST); 28 + SYS(fail, "ip -net %s link set dev lo up", NS_TEST); 29 + 30 + return open_netns(NS_TEST); 31 + fail: 32 + return NULL; 33 + } 34 + 35 + static void cleanup_netns(struct nstoken *nstoken) 36 + { 37 + if (nstoken) 38 + close_netns(nstoken); 39 + 40 + SYS_NOFAIL("ip netns del %s 
&> /dev/null", NS_TEST); 41 + } 66 42 67 43 static int verify_tsk(int map_fd, int client_fd) 68 44 { ··· 160 100 161 101 sock_skel = mptcp_sock__open_and_load(); 162 102 if (!ASSERT_OK_PTR(sock_skel, "skel_open_load")) 163 - return -EIO; 103 + return libbpf_get_error(sock_skel); 164 104 165 105 err = mptcp_sock__attach(sock_skel); 166 106 if (!ASSERT_OK(err, "skel_attach")) 167 107 goto out; 168 108 169 109 prog_fd = bpf_program__fd(sock_skel->progs._sockops); 170 - if (!ASSERT_GE(prog_fd, 0, "bpf_program__fd")) { 171 - err = -EIO; 172 - goto out; 173 - } 174 - 175 110 map_fd = bpf_map__fd(sock_skel->maps.socket_storage_map); 176 - if (!ASSERT_GE(map_fd, 0, "bpf_map__fd")) { 177 - err = -EIO; 178 - goto out; 179 - } 180 - 181 111 err = bpf_prog_attach(prog_fd, cgroup_fd, BPF_CGROUP_SOCK_OPS, 0); 182 112 if (!ASSERT_OK(err, "bpf_prog_attach")) 183 113 goto out; ··· 197 147 if (!ASSERT_GE(cgroup_fd, 0, "test__join_cgroup")) 198 148 return; 199 149 200 - SYS(fail, "ip netns add %s", NS_TEST); 201 - SYS(fail, "ip -net %s link set dev lo up", NS_TEST); 202 - 203 - nstoken = open_netns(NS_TEST); 204 - if (!ASSERT_OK_PTR(nstoken, "open_netns")) 150 + nstoken = create_netns(); 151 + if (!ASSERT_OK_PTR(nstoken, "create_netns")) 205 152 goto fail; 206 153 207 154 /* without MPTCP */ ··· 221 174 close(server_fd); 222 175 223 176 fail: 224 - if (nstoken) 225 - close_netns(nstoken); 177 + cleanup_netns(nstoken); 178 + close(cgroup_fd); 179 + } 226 180 227 - SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null"); 181 + static void send_byte(int fd) 182 + { 183 + char b = 0x55; 228 184 185 + ASSERT_EQ(write(fd, &b, sizeof(b)), 1, "send single byte"); 186 + } 187 + 188 + static int verify_mptcpify(int server_fd, int client_fd) 189 + { 190 + struct __mptcp_info info; 191 + socklen_t optlen; 192 + int protocol; 193 + int err = 0; 194 + 195 + optlen = sizeof(protocol); 196 + if (!ASSERT_OK(getsockopt(server_fd, SOL_SOCKET, SO_PROTOCOL, &protocol, &optlen), 197 + 
"getsockopt(SOL_PROTOCOL)")) 198 + return -1; 199 + 200 + if (!ASSERT_EQ(protocol, IPPROTO_MPTCP, "protocol isn't MPTCP")) 201 + err++; 202 + 203 + optlen = sizeof(info); 204 + if (!ASSERT_OK(getsockopt(client_fd, SOL_MPTCP, MPTCP_INFO, &info, &optlen), 205 + "getsockopt(MPTCP_INFO)")) 206 + return -1; 207 + 208 + if (!ASSERT_GE(info.mptcpi_flags, 0, "unexpected mptcpi_flags")) 209 + err++; 210 + if (!ASSERT_FALSE(info.mptcpi_flags & MPTCP_INFO_FLAG_FALLBACK, 211 + "MPTCP fallback")) 212 + err++; 213 + if (!ASSERT_TRUE(info.mptcpi_flags & MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED, 214 + "no remote key received")) 215 + err++; 216 + 217 + return err; 218 + } 219 + 220 + static int run_mptcpify(int cgroup_fd) 221 + { 222 + int server_fd, client_fd, err = 0; 223 + struct mptcpify *mptcpify_skel; 224 + 225 + mptcpify_skel = mptcpify__open_and_load(); 226 + if (!ASSERT_OK_PTR(mptcpify_skel, "skel_open_load")) 227 + return libbpf_get_error(mptcpify_skel); 228 + 229 + err = mptcpify__attach(mptcpify_skel); 230 + if (!ASSERT_OK(err, "skel_attach")) 231 + goto out; 232 + 233 + /* without MPTCP */ 234 + server_fd = start_server(AF_INET, SOCK_STREAM, NULL, 0, 0); 235 + if (!ASSERT_GE(server_fd, 0, "start_server")) { 236 + err = -EIO; 237 + goto out; 238 + } 239 + 240 + client_fd = connect_to_fd(server_fd, 0); 241 + if (!ASSERT_GE(client_fd, 0, "connect to fd")) { 242 + err = -EIO; 243 + goto close_server; 244 + } 245 + 246 + send_byte(client_fd); 247 + 248 + err = verify_mptcpify(server_fd, client_fd); 249 + 250 + close(client_fd); 251 + close_server: 252 + close(server_fd); 253 + out: 254 + mptcpify__destroy(mptcpify_skel); 255 + return err; 256 + } 257 + 258 + static void test_mptcpify(void) 259 + { 260 + struct nstoken *nstoken = NULL; 261 + int cgroup_fd; 262 + 263 + cgroup_fd = test__join_cgroup("/mptcpify"); 264 + if (!ASSERT_GE(cgroup_fd, 0, "test__join_cgroup")) 265 + return; 266 + 267 + nstoken = create_netns(); 268 + if (!ASSERT_OK_PTR(nstoken, "create_netns")) 269 + goto 
fail; 270 + 271 + ASSERT_OK(run_mptcpify(cgroup_fd), "run_mptcpify"); 272 + 273 + fail: 274 + cleanup_netns(nstoken); 229 275 close(cgroup_fd); 230 276 } 231 277 ··· 326 186 { 327 187 if (test__start_subtest("base")) 328 188 test_base(); 189 + if (test__start_subtest("mptcpify")) 190 + test_mptcpify(); 329 191 }
+336
tools/testing/selftests/bpf/prog_tests/tc_links.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright (c) 2023 Isovalent */ 3 3 #include <uapi/linux/if_link.h> 4 + #include <uapi/linux/pkt_sched.h> 4 5 #include <net/if.h> 5 6 #include <test_progs.h> 6 7 ··· 1581 1580 { 1582 1581 test_tc_links_dev_cleanup_target(BPF_TCX_INGRESS); 1583 1582 test_tc_links_dev_cleanup_target(BPF_TCX_EGRESS); 1583 + } 1584 + 1585 + static void test_tc_chain_mixed(int target) 1586 + { 1587 + LIBBPF_OPTS(bpf_tc_opts, tc_opts, .handle = 1, .priority = 1); 1588 + LIBBPF_OPTS(bpf_tc_hook, tc_hook, .ifindex = loopback); 1589 + LIBBPF_OPTS(bpf_tcx_opts, optl); 1590 + struct test_tc_link *skel; 1591 + struct bpf_link *link; 1592 + __u32 pid1, pid2, pid3; 1593 + int err; 1594 + 1595 + skel = test_tc_link__open(); 1596 + if (!ASSERT_OK_PTR(skel, "skel_open")) 1597 + goto cleanup; 1598 + 1599 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc4, target), 1600 + 0, "tc4_attach_type"); 1601 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc5, target), 1602 + 0, "tc5_attach_type"); 1603 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc6, target), 1604 + 0, "tc6_attach_type"); 1605 + 1606 + err = test_tc_link__load(skel); 1607 + if (!ASSERT_OK(err, "skel_load")) 1608 + goto cleanup; 1609 + 1610 + pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc4)); 1611 + pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc5)); 1612 + pid3 = id_from_prog_fd(bpf_program__fd(skel->progs.tc6)); 1613 + 1614 + ASSERT_NEQ(pid1, pid2, "prog_ids_1_2"); 1615 + ASSERT_NEQ(pid2, pid3, "prog_ids_2_3"); 1616 + 1617 + assert_mprog_count(target, 0); 1618 + 1619 + tc_hook.attach_point = target == BPF_TCX_INGRESS ? 1620 + BPF_TC_INGRESS : BPF_TC_EGRESS; 1621 + err = bpf_tc_hook_create(&tc_hook); 1622 + err = err == -EEXIST ? 
0 : err; 1623 + if (!ASSERT_OK(err, "bpf_tc_hook_create")) 1624 + goto cleanup; 1625 + 1626 + tc_opts.prog_fd = bpf_program__fd(skel->progs.tc5); 1627 + err = bpf_tc_attach(&tc_hook, &tc_opts); 1628 + if (!ASSERT_OK(err, "bpf_tc_attach")) 1629 + goto cleanup; 1630 + 1631 + link = bpf_program__attach_tcx(skel->progs.tc6, loopback, &optl); 1632 + if (!ASSERT_OK_PTR(link, "link_attach")) 1633 + goto cleanup; 1634 + 1635 + skel->links.tc6 = link; 1636 + 1637 + assert_mprog_count(target, 1); 1638 + 1639 + ASSERT_OK(system(ping_cmd), ping_cmd); 1640 + 1641 + ASSERT_EQ(skel->bss->seen_tc4, false, "seen_tc4"); 1642 + ASSERT_EQ(skel->bss->seen_tc5, false, "seen_tc5"); 1643 + ASSERT_EQ(skel->bss->seen_tc6, true, "seen_tc6"); 1644 + 1645 + skel->bss->seen_tc4 = false; 1646 + skel->bss->seen_tc5 = false; 1647 + skel->bss->seen_tc6 = false; 1648 + 1649 + err = bpf_link__update_program(skel->links.tc6, skel->progs.tc4); 1650 + if (!ASSERT_OK(err, "link_update")) 1651 + goto cleanup; 1652 + 1653 + assert_mprog_count(target, 1); 1654 + 1655 + ASSERT_OK(system(ping_cmd), ping_cmd); 1656 + 1657 + ASSERT_EQ(skel->bss->seen_tc4, true, "seen_tc4"); 1658 + ASSERT_EQ(skel->bss->seen_tc5, true, "seen_tc5"); 1659 + ASSERT_EQ(skel->bss->seen_tc6, false, "seen_tc6"); 1660 + 1661 + skel->bss->seen_tc4 = false; 1662 + skel->bss->seen_tc5 = false; 1663 + skel->bss->seen_tc6 = false; 1664 + 1665 + err = bpf_link__detach(skel->links.tc6); 1666 + if (!ASSERT_OK(err, "prog_detach")) 1667 + goto cleanup; 1668 + 1669 + __assert_mprog_count(target, 0, true, loopback); 1670 + 1671 + ASSERT_OK(system(ping_cmd), ping_cmd); 1672 + 1673 + ASSERT_EQ(skel->bss->seen_tc4, false, "seen_tc4"); 1674 + ASSERT_EQ(skel->bss->seen_tc5, true, "seen_tc5"); 1675 + ASSERT_EQ(skel->bss->seen_tc6, false, "seen_tc6"); 1676 + 1677 + cleanup: 1678 + tc_opts.flags = tc_opts.prog_fd = tc_opts.prog_id = 0; 1679 + err = bpf_tc_detach(&tc_hook, &tc_opts); 1680 + ASSERT_OK(err, "bpf_tc_detach"); 1681 + 1682 + tc_hook.attach_point 
= BPF_TC_INGRESS | BPF_TC_EGRESS; 1683 + bpf_tc_hook_destroy(&tc_hook); 1684 + 1685 + test_tc_link__destroy(skel); 1686 + } 1687 + 1688 + void serial_test_tc_links_chain_mixed(void) 1689 + { 1690 + test_tc_chain_mixed(BPF_TCX_INGRESS); 1691 + test_tc_chain_mixed(BPF_TCX_EGRESS); 1692 + } 1693 + 1694 + static void test_tc_links_ingress(int target, bool chain_tc_old, 1695 + bool tcx_teardown_first) 1696 + { 1697 + LIBBPF_OPTS(bpf_tc_opts, tc_opts, 1698 + .handle = 1, 1699 + .priority = 1, 1700 + ); 1701 + LIBBPF_OPTS(bpf_tc_hook, tc_hook, 1702 + .ifindex = loopback, 1703 + .attach_point = BPF_TC_CUSTOM, 1704 + .parent = TC_H_INGRESS, 1705 + ); 1706 + bool hook_created = false, tc_attached = false; 1707 + LIBBPF_OPTS(bpf_tcx_opts, optl); 1708 + __u32 pid1, pid2, pid3; 1709 + struct test_tc_link *skel; 1710 + struct bpf_link *link; 1711 + int err; 1712 + 1713 + skel = test_tc_link__open(); 1714 + if (!ASSERT_OK_PTR(skel, "skel_open")) 1715 + goto cleanup; 1716 + 1717 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1, target), 1718 + 0, "tc1_attach_type"); 1719 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2, target), 1720 + 0, "tc2_attach_type"); 1721 + 1722 + err = test_tc_link__load(skel); 1723 + if (!ASSERT_OK(err, "skel_load")) 1724 + goto cleanup; 1725 + 1726 + pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1)); 1727 + pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2)); 1728 + pid3 = id_from_prog_fd(bpf_program__fd(skel->progs.tc3)); 1729 + 1730 + ASSERT_NEQ(pid1, pid2, "prog_ids_1_2"); 1731 + ASSERT_NEQ(pid2, pid3, "prog_ids_2_3"); 1732 + 1733 + assert_mprog_count(target, 0); 1734 + 1735 + if (chain_tc_old) { 1736 + ASSERT_OK(system("tc qdisc add dev lo ingress"), "add_ingress"); 1737 + hook_created = true; 1738 + 1739 + tc_opts.prog_fd = bpf_program__fd(skel->progs.tc3); 1740 + err = bpf_tc_attach(&tc_hook, &tc_opts); 1741 + if (!ASSERT_OK(err, "bpf_tc_attach")) 1742 + goto cleanup; 1743 + tc_attached = true; 1744 + 
} 1745 + 1746 + link = bpf_program__attach_tcx(skel->progs.tc1, loopback, &optl); 1747 + if (!ASSERT_OK_PTR(link, "link_attach")) 1748 + goto cleanup; 1749 + 1750 + skel->links.tc1 = link; 1751 + 1752 + link = bpf_program__attach_tcx(skel->progs.tc2, loopback, &optl); 1753 + if (!ASSERT_OK_PTR(link, "link_attach")) 1754 + goto cleanup; 1755 + 1756 + skel->links.tc2 = link; 1757 + 1758 + assert_mprog_count(target, 2); 1759 + 1760 + ASSERT_OK(system(ping_cmd), ping_cmd); 1761 + 1762 + ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1"); 1763 + ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2"); 1764 + ASSERT_EQ(skel->bss->seen_tc3, chain_tc_old, "seen_tc3"); 1765 + 1766 + skel->bss->seen_tc1 = false; 1767 + skel->bss->seen_tc2 = false; 1768 + skel->bss->seen_tc3 = false; 1769 + 1770 + err = bpf_link__detach(skel->links.tc2); 1771 + if (!ASSERT_OK(err, "prog_detach")) 1772 + goto cleanup; 1773 + 1774 + assert_mprog_count(target, 1); 1775 + 1776 + ASSERT_OK(system(ping_cmd), ping_cmd); 1777 + 1778 + ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1"); 1779 + ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2"); 1780 + ASSERT_EQ(skel->bss->seen_tc3, chain_tc_old, "seen_tc3"); 1781 + cleanup: 1782 + if (tc_attached) { 1783 + tc_opts.flags = tc_opts.prog_fd = tc_opts.prog_id = 0; 1784 + err = bpf_tc_detach(&tc_hook, &tc_opts); 1785 + ASSERT_OK(err, "bpf_tc_detach"); 1786 + } 1787 + ASSERT_OK(system(ping_cmd), ping_cmd); 1788 + assert_mprog_count(target, 1); 1789 + if (hook_created && tcx_teardown_first) 1790 + ASSERT_OK(system("tc qdisc del dev lo ingress"), "del_ingress"); 1791 + ASSERT_OK(system(ping_cmd), ping_cmd); 1792 + test_tc_link__destroy(skel); 1793 + ASSERT_OK(system(ping_cmd), ping_cmd); 1794 + if (hook_created && !tcx_teardown_first) 1795 + ASSERT_OK(system("tc qdisc del dev lo ingress"), "del_ingress"); 1796 + ASSERT_OK(system(ping_cmd), ping_cmd); 1797 + assert_mprog_count(target, 0); 1798 + } 1799 + 1800 + void serial_test_tc_links_ingress(void) 1801 + { 1802 + 
test_tc_links_ingress(BPF_TCX_INGRESS, true, true); 1803 + test_tc_links_ingress(BPF_TCX_INGRESS, true, false); 1804 + test_tc_links_ingress(BPF_TCX_INGRESS, false, false); 1805 + } 1806 + 1807 + static void test_tc_links_dev_mixed(int target) 1808 + { 1809 + LIBBPF_OPTS(bpf_tc_opts, tc_opts, .handle = 1, .priority = 1); 1810 + LIBBPF_OPTS(bpf_tc_hook, tc_hook); 1811 + LIBBPF_OPTS(bpf_tcx_opts, optl); 1812 + __u32 pid1, pid2, pid3, pid4; 1813 + struct test_tc_link *skel; 1814 + struct bpf_link *link; 1815 + int err, ifindex; 1816 + 1817 + ASSERT_OK(system("ip link add dev tcx_opts1 type veth peer name tcx_opts2"), "add veth"); 1818 + ifindex = if_nametoindex("tcx_opts1"); 1819 + ASSERT_NEQ(ifindex, 0, "non_zero_ifindex"); 1820 + 1821 + skel = test_tc_link__open(); 1822 + if (!ASSERT_OK_PTR(skel, "skel_open")) 1823 + goto cleanup; 1824 + 1825 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1, target), 1826 + 0, "tc1_attach_type"); 1827 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2, target), 1828 + 0, "tc2_attach_type"); 1829 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc3, target), 1830 + 0, "tc3_attach_type"); 1831 + ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc4, target), 1832 + 0, "tc4_attach_type"); 1833 + 1834 + err = test_tc_link__load(skel); 1835 + if (!ASSERT_OK(err, "skel_load")) 1836 + goto cleanup; 1837 + 1838 + pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1)); 1839 + pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2)); 1840 + pid3 = id_from_prog_fd(bpf_program__fd(skel->progs.tc3)); 1841 + pid4 = id_from_prog_fd(bpf_program__fd(skel->progs.tc4)); 1842 + 1843 + ASSERT_NEQ(pid1, pid2, "prog_ids_1_2"); 1844 + ASSERT_NEQ(pid3, pid4, "prog_ids_3_4"); 1845 + ASSERT_NEQ(pid2, pid3, "prog_ids_2_3"); 1846 + 1847 + assert_mprog_count(target, 0); 1848 + 1849 + link = bpf_program__attach_tcx(skel->progs.tc1, ifindex, &optl); 1850 + if (!ASSERT_OK_PTR(link, "link_attach")) 1851 + goto 
cleanup; 1852 + 1853 + skel->links.tc1 = link; 1854 + 1855 + assert_mprog_count_ifindex(ifindex, target, 1); 1856 + 1857 + link = bpf_program__attach_tcx(skel->progs.tc2, ifindex, &optl); 1858 + if (!ASSERT_OK_PTR(link, "link_attach")) 1859 + goto cleanup; 1860 + 1861 + skel->links.tc2 = link; 1862 + 1863 + assert_mprog_count_ifindex(ifindex, target, 2); 1864 + 1865 + link = bpf_program__attach_tcx(skel->progs.tc3, ifindex, &optl); 1866 + if (!ASSERT_OK_PTR(link, "link_attach")) 1867 + goto cleanup; 1868 + 1869 + skel->links.tc3 = link; 1870 + 1871 + assert_mprog_count_ifindex(ifindex, target, 3); 1872 + 1873 + link = bpf_program__attach_tcx(skel->progs.tc4, ifindex, &optl); 1874 + if (!ASSERT_OK_PTR(link, "link_attach")) 1875 + goto cleanup; 1876 + 1877 + skel->links.tc4 = link; 1878 + 1879 + assert_mprog_count_ifindex(ifindex, target, 4); 1880 + 1881 + tc_hook.ifindex = ifindex; 1882 + tc_hook.attach_point = target == BPF_TCX_INGRESS ? 1883 + BPF_TC_INGRESS : BPF_TC_EGRESS; 1884 + 1885 + err = bpf_tc_hook_create(&tc_hook); 1886 + err = err == -EEXIST ? 
0 : err; 1887 + if (!ASSERT_OK(err, "bpf_tc_hook_create")) 1888 + goto cleanup; 1889 + 1890 + tc_opts.prog_fd = bpf_program__fd(skel->progs.tc5); 1891 + err = bpf_tc_attach(&tc_hook, &tc_opts); 1892 + if (!ASSERT_OK(err, "bpf_tc_attach")) 1893 + goto cleanup; 1894 + 1895 + ASSERT_OK(system("ip link del dev tcx_opts1"), "del veth"); 1896 + ASSERT_EQ(if_nametoindex("tcx_opts1"), 0, "dev1_removed"); 1897 + ASSERT_EQ(if_nametoindex("tcx_opts2"), 0, "dev2_removed"); 1898 + 1899 + ASSERT_EQ(ifindex_from_link_fd(bpf_link__fd(skel->links.tc1)), 0, "tc1_ifindex"); 1900 + ASSERT_EQ(ifindex_from_link_fd(bpf_link__fd(skel->links.tc2)), 0, "tc2_ifindex"); 1901 + ASSERT_EQ(ifindex_from_link_fd(bpf_link__fd(skel->links.tc3)), 0, "tc3_ifindex"); 1902 + ASSERT_EQ(ifindex_from_link_fd(bpf_link__fd(skel->links.tc4)), 0, "tc4_ifindex"); 1903 + 1904 + test_tc_link__destroy(skel); 1905 + return; 1906 + cleanup: 1907 + test_tc_link__destroy(skel); 1908 + 1909 + ASSERT_OK(system("ip link del dev tcx_opts1"), "del veth"); 1910 + ASSERT_EQ(if_nametoindex("tcx_opts1"), 0, "dev1_removed"); 1911 + ASSERT_EQ(if_nametoindex("tcx_opts2"), 0, "dev2_removed"); 1912 + } 1913 + 1914 + void serial_test_tc_links_dev_mixed(void) 1915 + { 1916 + test_tc_links_dev_mixed(BPF_TCX_INGRESS); 1917 + test_tc_links_dev_mixed(BPF_TCX_EGRESS); 1584 1918 }
tools/testing/selftests/bpf/prog_tests/tc_opts.c (+110)
···
 	test_tc_opts_delete_empty(BPF_TCX_INGRESS, true);
 	test_tc_opts_delete_empty(BPF_TCX_EGRESS, true);
 }
+
+static void test_tc_chain_mixed(int target)
+{
+	LIBBPF_OPTS(bpf_tc_opts, tc_opts, .handle = 1, .priority = 1);
+	LIBBPF_OPTS(bpf_tc_hook, tc_hook, .ifindex = loopback);
+	LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+	LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+	__u32 fd1, fd2, fd3, id1, id2, id3;
+	struct test_tc_link *skel;
+	int err, detach_fd;
+
+	skel = test_tc_link__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		goto cleanup;
+
+	fd1 = bpf_program__fd(skel->progs.tc4);
+	fd2 = bpf_program__fd(skel->progs.tc5);
+	fd3 = bpf_program__fd(skel->progs.tc6);
+
+	id1 = id_from_prog_fd(fd1);
+	id2 = id_from_prog_fd(fd2);
+	id3 = id_from_prog_fd(fd3);
+
+	ASSERT_NEQ(id1, id2, "prog_ids_1_2");
+	ASSERT_NEQ(id2, id3, "prog_ids_2_3");
+
+	assert_mprog_count(target, 0);
+
+	tc_hook.attach_point = target == BPF_TCX_INGRESS ?
+			       BPF_TC_INGRESS : BPF_TC_EGRESS;
+	err = bpf_tc_hook_create(&tc_hook);
+	err = err == -EEXIST ? 0 : err;
+	if (!ASSERT_OK(err, "bpf_tc_hook_create"))
+		goto cleanup;
+
+	tc_opts.prog_fd = fd2;
+	err = bpf_tc_attach(&tc_hook, &tc_opts);
+	if (!ASSERT_OK(err, "bpf_tc_attach"))
+		goto cleanup_hook;
+
+	err = bpf_prog_attach_opts(fd3, loopback, target, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup_filter;
+
+	detach_fd = fd3;
+
+	assert_mprog_count(target, 1);
+
+	ASSERT_OK(system(ping_cmd), ping_cmd);
+
+	ASSERT_EQ(skel->bss->seen_tc4, false, "seen_tc4");
+	ASSERT_EQ(skel->bss->seen_tc5, false, "seen_tc5");
+	ASSERT_EQ(skel->bss->seen_tc6, true, "seen_tc6");
+
+	skel->bss->seen_tc4 = false;
+	skel->bss->seen_tc5 = false;
+	skel->bss->seen_tc6 = false;
+
+	LIBBPF_OPTS_RESET(opta,
+		.flags = BPF_F_REPLACE,
+		.replace_prog_fd = fd3,
+	);
+
+	err = bpf_prog_attach_opts(fd1, loopback, target, &opta);
+	if (!ASSERT_EQ(err, 0, "prog_attach"))
+		goto cleanup_opts;
+
+	detach_fd = fd1;
+
+	assert_mprog_count(target, 1);
+
+	ASSERT_OK(system(ping_cmd), ping_cmd);
+
+	ASSERT_EQ(skel->bss->seen_tc4, true, "seen_tc4");
+	ASSERT_EQ(skel->bss->seen_tc5, true, "seen_tc5");
+	ASSERT_EQ(skel->bss->seen_tc6, false, "seen_tc6");
+
+	skel->bss->seen_tc4 = false;
+	skel->bss->seen_tc5 = false;
+	skel->bss->seen_tc6 = false;
+
+cleanup_opts:
+	err = bpf_prog_detach_opts(detach_fd, loopback, target, &optd);
+	ASSERT_OK(err, "prog_detach");
+	__assert_mprog_count(target, 0, true, loopback);
+
+	ASSERT_OK(system(ping_cmd), ping_cmd);
+
+	ASSERT_EQ(skel->bss->seen_tc4, false, "seen_tc4");
+	ASSERT_EQ(skel->bss->seen_tc5, true, "seen_tc5");
+	ASSERT_EQ(skel->bss->seen_tc6, false, "seen_tc6");
+
+cleanup_filter:
+	tc_opts.flags = tc_opts.prog_fd = tc_opts.prog_id = 0;
+	err = bpf_tc_detach(&tc_hook, &tc_opts);
+	ASSERT_OK(err, "bpf_tc_detach");
+
+cleanup_hook:
+	tc_hook.attach_point = BPF_TC_INGRESS | BPF_TC_EGRESS;
+	bpf_tc_hook_destroy(&tc_hook);
+
+cleanup:
+	test_tc_link__destroy(skel);
+}
+
+void serial_test_tc_opts_chain_mixed(void)
+{
+	test_tc_chain_mixed(BPF_TCX_INGRESS);
+	test_tc_chain_mixed(BPF_TCX_EGRESS);
+}
tools/testing/selftests/bpf/progs/mptcpify.c (+20)
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023, SUSE. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_tracing.h>
+#include "bpf_tracing_net.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fmod_ret/update_socket_protocol")
+int BPF_PROG(mptcpify, int family, int type, int protocol)
+{
+	if ((family == AF_INET || family == AF_INET6) &&
+	    type == SOCK_STREAM &&
+	    (!protocol || protocol == IPPROTO_TCP)) {
+		return IPPROTO_MPTCP;
+	}
+
+	return protocol;
+}
tools/testing/selftests/bpf/progs/test_tc_link.c (+16)
···
 bool seen_tc2;
 bool seen_tc3;
 bool seen_tc4;
+bool seen_tc5;
+bool seen_tc6;
 
 SEC("tc/ingress")
 int tc1(struct __sk_buff *skb)
···
 {
 	seen_tc4 = true;
 	return TCX_NEXT;
+}
+
+SEC("tc/egress")
+int tc5(struct __sk_buff *skb)
+{
+	seen_tc5 = true;
+	return TCX_PASS;
+}
+
+SEC("tc/egress")
+int tc6(struct __sk_buff *skb)
+{
+	seen_tc6 = true;
+	return TCX_PASS;
 }