Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2021-06-17

The following pull-request contains BPF updates for your *net-next* tree.

We've added 50 non-merge commits during the last 25 day(s) which contain
a total of 148 files changed, 4779 insertions(+), 1248 deletions(-).

The main changes are:

1) BPF infrastructure to migrate TCP child sockets from one listener to
   another in the same reuseport group/map, from Kuniyuki Iwashima.

2) Add a provably sound, faster and more precise algorithm for tnum_mul() as
noted in https://arxiv.org/abs/2105.05398, from Harishankar Vishwanathan.

3) Streamline error reporting changes in libbpf as planned out in the
'libbpf: the road to v1.0' effort, from Andrii Nakryiko.

4) Add broadcast support to xdp_redirect_map(), from Hangbin Liu.

5) Extend bpf_map_lookup_and_delete_elem() functionality to 4 more map
   types, that is, {LRU_,PERCPU_,LRU_PERCPU_,}HASH, from Denis Salopek.

6) Support new LLVM relocations in libbpf to make them more linker friendly;
   also add a doc describing the BPF backend relocations, from Yonghong Song.

7) Silence long-standing KUBSAN complaints on register-based shifts in the
   interpreter, from Daniel Borkmann and Eric Biggers.

8) Add dummy PT_REGS macros in libbpf to fail BPF program compilation when
target arch cannot be determined, from Lorenz Bauer.

9) Extend AF_XDP to support large umems with 1M+ pages, from Magnus Karlsson.

10) Fix two minor libbpf tc BPF API issues, from Kumar Kartikeya Dwivedi.

11) Move libbpf BPF_SEQ_PRINTF/BPF_SNPRINTF macros that can be used by BPF
programs to bpf_helpers.h header, from Florent Revest.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+4788 -1258
+1
Documentation/bpf/index.rst
··· 84 84 :maxdepth: 1 85 85 86 86 ringbuf 87 + llvm_reloc 87 88 88 89 .. Links: 89 90 .. _networking-filter: ../networking/filter.rst
+240
Documentation/bpf/llvm_reloc.rst
··· 1 + .. SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) 2 + 3 + ==================== 4 + BPF LLVM Relocations 5 + ==================== 6 + 7 + This document describes LLVM BPF backend relocation types. 8 + 9 + Relocation Record 10 + ================= 11 + 12 + LLVM BPF backend records each relocation with the following 16-byte 13 + ELF structure:: 14 + 15 + typedef struct 16 + { 17 + Elf64_Addr r_offset; // Offset from the beginning of section. 18 + Elf64_Xword r_info; // Relocation type and symbol index. 19 + } Elf64_Rel; 20 + 21 + For example, for the following code:: 22 + 23 + int g1 __attribute__((section("sec"))); 24 + int g2 __attribute__((section("sec"))); 25 + static volatile int l1 __attribute__((section("sec"))); 26 + static volatile int l2 __attribute__((section("sec"))); 27 + int test() { 28 + return g1 + g2 + l1 + l2; 29 + } 30 + 31 + Compiled with ``clang -target bpf -O2 -c test.c``, the following is 32 + the code with ``llvm-objdump -dr test.o``:: 33 + 34 + 0: 18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll 35 + 0000000000000000: R_BPF_64_64 g1 36 + 2: 61 11 00 00 00 00 00 00 r1 = *(u32 *)(r1 + 0) 37 + 3: 18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0 ll 38 + 0000000000000018: R_BPF_64_64 g2 39 + 5: 61 20 00 00 00 00 00 00 r0 = *(u32 *)(r2 + 0) 40 + 6: 0f 10 00 00 00 00 00 00 r0 += r1 41 + 7: 18 01 00 00 08 00 00 00 00 00 00 00 00 00 00 00 r1 = 8 ll 42 + 0000000000000038: R_BPF_64_64 sec 43 + 9: 61 11 00 00 00 00 00 00 r1 = *(u32 *)(r1 + 0) 44 + 10: 0f 10 00 00 00 00 00 00 r0 += r1 45 + 11: 18 01 00 00 0c 00 00 00 00 00 00 00 00 00 00 00 r1 = 12 ll 46 + 0000000000000058: R_BPF_64_64 sec 47 + 13: 61 11 00 00 00 00 00 00 r1 = *(u32 *)(r1 + 0) 48 + 14: 0f 10 00 00 00 00 00 00 r0 += r1 49 + 15: 95 00 00 00 00 00 00 00 exit 50 + 51 + There are four relations in the above for four ``LD_imm64`` instructions. 
52 + The following ``llvm-readelf -r test.o`` shows the binary values of the four 53 + relocations:: 54 + 55 + Relocation section '.rel.text' at offset 0x190 contains 4 entries: 56 + Offset Info Type Symbol's Value Symbol's Name 57 + 0000000000000000 0000000600000001 R_BPF_64_64 0000000000000000 g1 58 + 0000000000000018 0000000700000001 R_BPF_64_64 0000000000000004 g2 59 + 0000000000000038 0000000400000001 R_BPF_64_64 0000000000000000 sec 60 + 0000000000000058 0000000400000001 R_BPF_64_64 0000000000000000 sec 61 + 62 + Each relocation is represented by ``Offset`` (8 bytes) and ``Info`` (8 bytes). 63 + For example, the first relocation corresponds to the first instruction 64 + (Offset 0x0) and the corresponding ``Info`` indicates the relocation type 65 + of ``R_BPF_64_64`` (type 1) and the entry in the symbol table (entry 6). 66 + The following is the symbol table with ``llvm-readelf -s test.o``:: 67 + 68 + Symbol table '.symtab' contains 8 entries: 69 + Num: Value Size Type Bind Vis Ndx Name 70 + 0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND 71 + 1: 0000000000000000 0 FILE LOCAL DEFAULT ABS test.c 72 + 2: 0000000000000008 4 OBJECT LOCAL DEFAULT 4 l1 73 + 3: 000000000000000c 4 OBJECT LOCAL DEFAULT 4 l2 74 + 4: 0000000000000000 0 SECTION LOCAL DEFAULT 4 sec 75 + 5: 0000000000000000 128 FUNC GLOBAL DEFAULT 2 test 76 + 6: 0000000000000000 4 OBJECT GLOBAL DEFAULT 4 g1 77 + 7: 0000000000000004 4 OBJECT GLOBAL DEFAULT 4 g2 78 + 79 + The 6th entry is global variable ``g1`` with value 0. 80 + 81 + Similarly, the second relocation is at ``.text`` offset ``0x18``, instruction 3, 82 + for global variable ``g2`` which has a symbol value 4, the offset 83 + from the start of ``.data`` section. 84 + 85 + The third and fourth relocations refers to static variables ``l1`` 86 + and ``l2``. 
From ``.rel.text`` section above, it is not clear 87 + which symbols they really refers to as they both refers to 88 + symbol table entry 4, symbol ``sec``, which has ``STT_SECTION`` type 89 + and represents a section. So for static variable or function, 90 + the section offset is written to the original insn 91 + buffer, which is called ``A`` (addend). Looking at 92 + above insn ``7`` and ``11``, they have section offset ``8`` and ``12``. 93 + From symbol table, we can find that they correspond to entries ``2`` 94 + and ``3`` for ``l1`` and ``l2``. 95 + 96 + In general, the ``A`` is 0 for global variables and functions, 97 + and is the section offset or some computation result based on 98 + section offset for static variables/functions. The non-section-offset 99 + case refers to function calls. See below for more details. 100 + 101 + Different Relocation Types 102 + ========================== 103 + 104 + Six relocation types are supported. The following is an overview and 105 + ``S`` represents the value of the symbol in the symbol table:: 106 + 107 + Enum ELF Reloc Type Description BitSize Offset Calculation 108 + 0 R_BPF_NONE None 109 + 1 R_BPF_64_64 ld_imm64 insn 32 r_offset + 4 S + A 110 + 2 R_BPF_64_ABS64 normal data 64 r_offset S + A 111 + 3 R_BPF_64_ABS32 normal data 32 r_offset S + A 112 + 4 R_BPF_64_NODYLD32 .BTF[.ext] data 32 r_offset S + A 113 + 10 R_BPF_64_32 call insn 32 r_offset + 4 (S + A) / 8 - 1 114 + 115 + For example, ``R_BPF_64_64`` relocation type is used for ``ld_imm64`` instruction. 116 + The actual to-be-relocated data (0 or section offset) 117 + is stored at ``r_offset + 4`` and the read/write 118 + data bitsize is 32 (4 bytes). The relocation can be resolved with 119 + the symbol value plus implicit addend. Note that the ``BitSize`` is 32 which 120 + means the section offset must be less than or equal to ``UINT32_MAX`` and this 121 + is enforced by LLVM BPF backend. 
122 + 123 + In another case, ``R_BPF_64_ABS64`` relocation type is used for normal 64-bit data. 124 + The actual to-be-relocated data is stored at ``r_offset`` and the read/write data 125 + bitsize is 64 (8 bytes). The relocation can be resolved with 126 + the symbol value plus implicit addend. 127 + 128 + Both ``R_BPF_64_ABS32`` and ``R_BPF_64_NODYLD32`` types are for 32-bit data. 129 + But ``R_BPF_64_NODYLD32`` specifically refers to relocations in ``.BTF`` and 130 + ``.BTF.ext`` sections. For cases like bcc where llvm ``ExecutionEngine RuntimeDyld`` 131 + is involved, ``R_BPF_64_NODYLD32`` types of relocations should not be resolved 132 + to actual function/variable address. Otherwise, ``.BTF`` and ``.BTF.ext`` 133 + become unusable by bcc and kernel. 134 + 135 + Type ``R_BPF_64_32`` is used for call instruction. The call target section 136 + offset is stored at ``r_offset + 4`` (32bit) and calculated as 137 + ``(S + A) / 8 - 1``. 138 + 139 + Examples 140 + ======== 141 + 142 + Types ``R_BPF_64_64`` and ``R_BPF_64_32`` are used to resolve ``ld_imm64`` 143 + and ``call`` instructions. 
For example:: 144 + 145 + __attribute__((noinline)) __attribute__((section("sec1"))) 146 + int gfunc(int a, int b) { 147 + return a * b; 148 + } 149 + static __attribute__((noinline)) __attribute__((section("sec1"))) 150 + int lfunc(int a, int b) { 151 + return a + b; 152 + } 153 + int global __attribute__((section("sec2"))); 154 + int test(int a, int b) { 155 + return gfunc(a, b) + lfunc(a, b) + global; 156 + } 157 + 158 + Compiled with ``clang -target bpf -O2 -c test.c``, we will have 159 + following code with `llvm-objdump -dr test.o``:: 160 + 161 + Disassembly of section .text: 162 + 163 + 0000000000000000 <test>: 164 + 0: bf 26 00 00 00 00 00 00 r6 = r2 165 + 1: bf 17 00 00 00 00 00 00 r7 = r1 166 + 2: 85 10 00 00 ff ff ff ff call -1 167 + 0000000000000010: R_BPF_64_32 gfunc 168 + 3: bf 08 00 00 00 00 00 00 r8 = r0 169 + 4: bf 71 00 00 00 00 00 00 r1 = r7 170 + 5: bf 62 00 00 00 00 00 00 r2 = r6 171 + 6: 85 10 00 00 02 00 00 00 call 2 172 + 0000000000000030: R_BPF_64_32 sec1 173 + 7: 0f 80 00 00 00 00 00 00 r0 += r8 174 + 8: 18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll 175 + 0000000000000040: R_BPF_64_64 global 176 + 10: 61 11 00 00 00 00 00 00 r1 = *(u32 *)(r1 + 0) 177 + 11: 0f 10 00 00 00 00 00 00 r0 += r1 178 + 12: 95 00 00 00 00 00 00 00 exit 179 + 180 + Disassembly of section sec1: 181 + 182 + 0000000000000000 <gfunc>: 183 + 0: bf 20 00 00 00 00 00 00 r0 = r2 184 + 1: 2f 10 00 00 00 00 00 00 r0 *= r1 185 + 2: 95 00 00 00 00 00 00 00 exit 186 + 187 + 0000000000000018 <lfunc>: 188 + 3: bf 20 00 00 00 00 00 00 r0 = r2 189 + 4: 0f 10 00 00 00 00 00 00 r0 += r1 190 + 5: 95 00 00 00 00 00 00 00 exit 191 + 192 + The first relocation corresponds to ``gfunc(a, b)`` where ``gfunc`` has a value of 0, 193 + so the ``call`` instruction offset is ``(0 + 0)/8 - 1 = -1``. 194 + The second relocation corresponds to ``lfunc(a, b)`` where ``lfunc`` has a section 195 + offset ``0x18``, so the ``call`` instruction offset is ``(0 + 0x18)/8 - 1 = 2``. 
196 + The third relocation corresponds to ld_imm64 of ``global``, which has a section 197 + offset ``0``. 198 + 199 + The following is an example to show how R_BPF_64_ABS64 could be generated:: 200 + 201 + int global() { return 0; } 202 + struct t { void *g; } gbl = { global }; 203 + 204 + Compiled with ``clang -target bpf -O2 -g -c test.c``, we will see a 205 + relocation below in ``.data`` section with command 206 + ``llvm-readelf -r test.o``:: 207 + 208 + Relocation section '.rel.data' at offset 0x458 contains 1 entries: 209 + Offset Info Type Symbol's Value Symbol's Name 210 + 0000000000000000 0000000700000002 R_BPF_64_ABS64 0000000000000000 global 211 + 212 + The relocation says the first 8-byte of ``.data`` section should be 213 + filled with address of ``global`` variable. 214 + 215 + With ``llvm-readelf`` output, we can see that dwarf sections have a bunch of 216 + ``R_BPF_64_ABS32`` and ``R_BPF_64_ABS64`` relocations:: 217 + 218 + Relocation section '.rel.debug_info' at offset 0x468 contains 13 entries: 219 + Offset Info Type Symbol's Value Symbol's Name 220 + 0000000000000006 0000000300000003 R_BPF_64_ABS32 0000000000000000 .debug_abbrev 221 + 000000000000000c 0000000400000003 R_BPF_64_ABS32 0000000000000000 .debug_str 222 + 0000000000000012 0000000400000003 R_BPF_64_ABS32 0000000000000000 .debug_str 223 + 0000000000000016 0000000600000003 R_BPF_64_ABS32 0000000000000000 .debug_line 224 + 000000000000001a 0000000400000003 R_BPF_64_ABS32 0000000000000000 .debug_str 225 + 000000000000001e 0000000200000002 R_BPF_64_ABS64 0000000000000000 .text 226 + 000000000000002b 0000000400000003 R_BPF_64_ABS32 0000000000000000 .debug_str 227 + 0000000000000037 0000000800000002 R_BPF_64_ABS64 0000000000000000 gbl 228 + 0000000000000040 0000000400000003 R_BPF_64_ABS32 0000000000000000 .debug_str 229 + ...... 
230 + 231 + The .BTF/.BTF.ext sections have R_BPF_64_NODYLD32 relocations:: 232 + 233 + Relocation section '.rel.BTF' at offset 0x538 contains 1 entries: 234 + Offset Info Type Symbol's Value Symbol's Name 235 + 0000000000000084 0000000800000004 R_BPF_64_NODYLD32 0000000000000000 gbl 236 + 237 + Relocation section '.rel.BTF.ext' at offset 0x548 contains 2 entries: 238 + Offset Info Type Symbol's Value Symbol's Name 239 + 000000000000002c 0000000200000004 R_BPF_64_NODYLD32 0000000000000000 .text 240 + 0000000000000040 0000000200000004 R_BPF_64_NODYLD32 0000000000000000 .text
+25
Documentation/networking/ip-sysctl.rst
··· 761 761 network connections you can set this knob to 2 to enable 762 762 unconditionally generation of syncookies. 763 763 764 + tcp_migrate_req - BOOLEAN 765 + The incoming connection is tied to a specific listening socket when 766 + the initial SYN packet is received during the three-way handshake. 767 + When a listener is closed, in-flight request sockets during the 768 + handshake and established sockets in the accept queue are aborted. 769 + 770 + If the listener has SO_REUSEPORT enabled, other listeners on the 771 + same port should have been able to accept such connections. This 772 + option makes it possible to migrate such child sockets to another 773 + listener after close() or shutdown(). 774 + 775 + The BPF_SK_REUSEPORT_SELECT_OR_MIGRATE type of eBPF program should 776 + usually be used to define the policy to pick an alive listener. 777 + Otherwise, the kernel will randomly pick an alive listener only if 778 + this option is enabled. 779 + 780 + Note that migration between listeners with different settings may 781 + crash applications. Let's say migration happens from listener A to 782 + B, and only B has TCP_SAVE_SYN enabled. B cannot read SYN data from 783 + the requests migrated from A. To avoid such a situation, cancel 784 + migration by returning SK_DROP in the type of eBPF program, or 785 + disable this option. 786 + 787 + Default: 0 788 + 764 789 tcp_fastopen - INTEGER 765 790 Enable TCP Fast Open (RFC7413) to send and accept data in the opening 766 791 SYN packet.
+23
include/linux/bpf.h
··· 70 70 void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key); 71 71 int (*map_lookup_batch)(struct bpf_map *map, const union bpf_attr *attr, 72 72 union bpf_attr __user *uattr); 73 + int (*map_lookup_and_delete_elem)(struct bpf_map *map, void *key, 74 + void *value, u64 flags); 73 75 int (*map_lookup_and_delete_batch)(struct bpf_map *map, 74 76 const union bpf_attr *attr, 75 77 union bpf_attr __user *uattr); ··· 1501 1499 struct net_device *dev_rx); 1502 1500 int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, 1503 1501 struct net_device *dev_rx); 1502 + int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx, 1503 + struct bpf_map *map, bool exclude_ingress); 1504 1504 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, 1505 1505 struct bpf_prog *xdp_prog); 1506 + int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, 1507 + struct bpf_prog *xdp_prog, struct bpf_map *map, 1508 + bool exclude_ingress); 1506 1509 bool dev_map_can_have_prog(struct bpf_map *map); 1507 1510 1508 1511 void __cpu_map_flush(void); ··· 1675 1668 return 0; 1676 1669 } 1677 1670 1671 + static inline 1672 + int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx, 1673 + struct bpf_map *map, bool exclude_ingress) 1674 + { 1675 + return 0; 1676 + } 1677 + 1678 1678 struct sk_buff; 1679 1679 1680 1680 static inline int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, 1681 1681 struct sk_buff *skb, 1682 1682 struct bpf_prog *xdp_prog) 1683 + { 1684 + return 0; 1685 + } 1686 + 1687 + static inline 1688 + int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, 1689 + struct bpf_prog *xdp_prog, struct bpf_map *map, 1690 + bool exclude_ingress) 1683 1691 { 1684 1692 return 0; 1685 1693 } ··· 2048 2026 struct sk_buff *skb; 2049 2027 struct sock *sk; 2050 2028 struct sock *selected_sk; 2029 + struct sock *migrating_sk; 2051 2030 void *data_end; 2052 2031 u32 hash; 2053 
2032 u32 reuseport_id;
+2 -2
include/linux/bpf_local_storage.h
··· 58 58 * from the object's bpf_local_storage. 59 59 * 60 60 * Put it in the same cacheline as the data to minimize 61 - * the number of cachelines access during the cache hit case. 61 + * the number of cachelines accessed during the cache hit case. 62 62 */ 63 63 struct bpf_local_storage_map __rcu *smap; 64 64 u8 data[] __aligned(8); ··· 71 71 struct bpf_local_storage __rcu *local_storage; 72 72 struct rcu_head rcu; 73 73 /* 8 bytes hole */ 74 - /* The data is stored in aother cacheline to minimize 74 + /* The data is stored in another cacheline to minimize 75 75 * the number of cachelines access during a cache hit. 76 76 */ 77 77 struct bpf_local_storage_data sdata ____cacheline_aligned;
+17 -4
include/linux/filter.h
··· 646 646 u32 flags; 647 647 u32 tgt_index; 648 648 void *tgt_value; 649 + struct bpf_map *map; 649 650 u32 map_id; 650 651 enum bpf_map_type map_type; 651 652 u32 kern_flags; ··· 996 995 #ifdef CONFIG_INET 997 996 struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk, 998 997 struct bpf_prog *prog, struct sk_buff *skb, 998 + struct sock *migrating_sk, 999 999 u32 hash); 1000 1000 #else 1001 1001 static inline struct sock * 1002 1002 bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk, 1003 1003 struct bpf_prog *prog, struct sk_buff *skb, 1004 + struct sock *migrating_sk, 1004 1005 u32 hash) 1005 1006 { 1006 1007 return NULL; ··· 1467 1464 } 1468 1465 #endif /* IS_ENABLED(CONFIG_IPV6) */ 1469 1466 1470 - static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifindex, u64 flags, 1467 + static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifindex, 1468 + u64 flags, const u64 flag_mask, 1471 1469 void *lookup_elem(struct bpf_map *map, u32 key)) 1472 1470 { 1473 1471 struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); 1472 + const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX; 1474 1473 1475 1474 /* Lower bits of the flags are used as return code on lookup failure */ 1476 - if (unlikely(flags > XDP_TX)) 1475 + if (unlikely(flags & ~(action_mask | flag_mask))) 1477 1476 return XDP_ABORTED; 1478 1477 1479 1478 ri->tgt_value = lookup_elem(map, ifindex); 1480 - if (unlikely(!ri->tgt_value)) { 1479 + if (unlikely(!ri->tgt_value) && !(flags & BPF_F_BROADCAST)) { 1481 1480 /* If the lookup fails we want to clear out the state in the 1482 1481 * redirect_info struct completely, so that if an eBPF program 1483 1482 * performs multiple lookups, the last one always takes ··· 1487 1482 */ 1488 1483 ri->map_id = INT_MAX; /* Valid map id idr range: [1,INT_MAX[ */ 1489 1484 ri->map_type = BPF_MAP_TYPE_UNSPEC; 1490 - return flags; 1485 + return flags & action_mask; 1491 
1486 } 1492 1487 1493 1488 ri->tgt_index = ifindex; 1494 1489 ri->map_id = map->id; 1495 1490 ri->map_type = map->map_type; 1491 + 1492 + if (flags & BPF_F_BROADCAST) { 1493 + WRITE_ONCE(ri->map, map); 1494 + ri->flags = flags; 1495 + } else { 1496 + WRITE_ONCE(ri->map, NULL); 1497 + ri->flags = 0; 1498 + } 1496 1499 1497 1500 return XDP_REDIRECT; 1498 1501 }
+1
include/net/netns/ipv4.h
··· 126 126 u8 sysctl_tcp_syn_retries; 127 127 u8 sysctl_tcp_synack_retries; 128 128 u8 sysctl_tcp_syncookies; 129 + u8 sysctl_tcp_migrate_req; 129 130 int sysctl_tcp_reordering; 130 131 u8 sysctl_tcp_retries1; 131 132 u8 sysctl_tcp_retries2;
+7 -2
include/net/sock_reuseport.h
··· 13 13 struct sock_reuseport { 14 14 struct rcu_head rcu; 15 15 16 - u16 max_socks; /* length of socks */ 17 - u16 num_socks; /* elements in socks */ 16 + u16 max_socks; /* length of socks */ 17 + u16 num_socks; /* elements in socks */ 18 + u16 num_closed_socks; /* closed elements in socks */ 18 19 /* The last synq overflow event timestamp of this 19 20 * reuse->socks[] group. 20 21 */ ··· 32 31 extern int reuseport_add_sock(struct sock *sk, struct sock *sk2, 33 32 bool bind_inany); 34 33 extern void reuseport_detach_sock(struct sock *sk); 34 + void reuseport_stop_listen_sock(struct sock *sk); 35 35 extern struct sock *reuseport_select_sock(struct sock *sk, 36 36 u32 hash, 37 37 struct sk_buff *skb, 38 38 int hdr_len); 39 + struct sock *reuseport_migrate_sock(struct sock *sk, 40 + struct sock *migrating_sk, 41 + struct sk_buff *skb); 39 42 extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog); 40 43 extern int reuseport_detach_prog(struct sock *sk); 41 44
+1
include/net/xdp.h
··· 170 170 struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf, 171 171 struct net_device *dev); 172 172 int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp); 173 + struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf); 173 174 174 175 static inline 175 176 void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
+5 -1
include/trace/events/xdp.h
··· 110 110 u32 ifindex = 0, map_index = index; 111 111 112 112 if (map_type == BPF_MAP_TYPE_DEVMAP || map_type == BPF_MAP_TYPE_DEVMAP_HASH) { 113 - ifindex = ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex; 113 + /* Just leave to_ifindex to 0 if do broadcast redirect, 114 + * as tgt will be NULL. 115 + */ 116 + if (tgt) 117 + ifindex = ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex; 114 118 } else if (map_type == BPF_MAP_TYPE_UNSPEC && map_id == INT_MAX) { 115 119 ifindex = index; 116 120 map_index = 0;
+41 -2
include/uapi/linux/bpf.h
··· 527 527 * Look up an element with the given *key* in the map referred to 528 528 * by the file descriptor *fd*, and if found, delete the element. 529 529 * 530 + * For **BPF_MAP_TYPE_QUEUE** and **BPF_MAP_TYPE_STACK** map 531 + * types, the *flags* argument needs to be set to 0, but for other 532 + * map types, it may be specified as: 533 + * 534 + * **BPF_F_LOCK** 535 + * Look up and delete the value of a spin-locked map 536 + * without returning the lock. This must be specified if 537 + * the elements contain a spinlock. 538 + * 530 539 * The **BPF_MAP_TYPE_QUEUE** and **BPF_MAP_TYPE_STACK** map types 531 540 * implement this command as a "pop" operation, deleting the top 532 541 * element rather than one corresponding to *key*. ··· 545 536 * This command is only valid for the following map types: 546 537 * * **BPF_MAP_TYPE_QUEUE** 547 538 * * **BPF_MAP_TYPE_STACK** 539 + * * **BPF_MAP_TYPE_HASH** 540 + * * **BPF_MAP_TYPE_PERCPU_HASH** 541 + * * **BPF_MAP_TYPE_LRU_HASH** 542 + * * **BPF_MAP_TYPE_LRU_PERCPU_HASH** 548 543 * 549 544 * Return 550 545 * Returns zero on success. On error, -1 is returned and *errno* ··· 994 981 BPF_SK_LOOKUP, 995 982 BPF_XDP, 996 983 BPF_SK_SKB_VERDICT, 984 + BPF_SK_REUSEPORT_SELECT, 985 + BPF_SK_REUSEPORT_SELECT_OR_MIGRATE, 997 986 __MAX_BPF_ATTACH_TYPE 998 987 }; 999 988 ··· 2557 2542 * The lower two bits of *flags* are used as the return code if 2558 2543 * the map lookup fails. This is so that the return value can be 2559 2544 * one of the XDP program return codes up to **XDP_TX**, as chosen 2560 - * by the caller. Any higher bits in the *flags* argument must be 2561 - * unset. 2545 + * by the caller. The higher bits of *flags* can be set to 2546 + * BPF_F_BROADCAST or BPF_F_EXCLUDE_INGRESS as defined below. 2547 + * 2548 + * With BPF_F_BROADCAST the packet will be broadcasted to all the 2549 + * interfaces in the map, with BPF_F_EXCLUDE_INGRESS the ingress 2550 + * interface will be excluded when do broadcasting. 
2562 2551 * 2563 2552 * See also **bpf_redirect**\ (), which only supports redirecting 2564 2553 * to an ifindex, but doesn't require a map to do so. ··· 5128 5109 BPF_F_BPRM_SECUREEXEC = (1ULL << 0), 5129 5110 }; 5130 5111 5112 + /* Flags for bpf_redirect_map helper */ 5113 + enum { 5114 + BPF_F_BROADCAST = (1ULL << 3), 5115 + BPF_F_EXCLUDE_INGRESS = (1ULL << 4), 5116 + }; 5117 + 5131 5118 #define __bpf_md_ptr(type, name) \ 5132 5119 union { \ 5133 5120 type name; \ ··· 5418 5393 __u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */ 5419 5394 __u32 bind_inany; /* Is sock bound to an INANY address? */ 5420 5395 __u32 hash; /* A hash of the packet 4 tuples */ 5396 + /* When reuse->migrating_sk is NULL, it is selecting a sk for the 5397 + * new incoming connection request (e.g. selecting a listen sk for 5398 + * the received SYN in the TCP case). reuse->sk is one of the sk 5399 + * in the reuseport group. The bpf prog can use reuse->sk to learn 5400 + * the local listening ip/port without looking into the skb. 5401 + * 5402 + * When reuse->migrating_sk is not NULL, reuse->sk is closed and 5403 + * reuse->migrating_sk is the socket that needs to be migrated 5404 + * to another listening socket. migrating_sk could be a fullsock 5405 + * sk that is fully established or a reqsk that is in-the-middle 5406 + * of 3-way handshake. 5407 + */ 5408 + __bpf_md_ptr(struct bpf_sock *, sk); 5409 + __bpf_md_ptr(struct bpf_sock *, migrating_sk); 5421 5410 }; 5422 5411 5423 5412 #define BPF_TAG_SIZE 8
+1 -1
kernel/bpf/bpf_inode_storage.c
··· 72 72 return; 73 73 } 74 74 75 - /* Netiher the bpf_prog nor the bpf-map's syscall 75 + /* Neither the bpf_prog nor the bpf-map's syscall 76 76 * could be modifying the local_storage->list now. 77 77 * Thus, no elem can be added-to or deleted-from the 78 78 * local_storage->list by the bpf_prog or by the bpf-map's syscall.
+1 -1
kernel/bpf/bpf_lsm.c
··· 127 127 } 128 128 129 129 /* The set of hooks which are called without pagefaults disabled and are allowed 130 - * to "sleep" and thus can be used for sleeable BPF programs. 130 + * to "sleep" and thus can be used for sleepable BPF programs. 131 131 */ 132 132 BTF_SET_START(sleepable_lsm_hooks) 133 133 BTF_ID(func, bpf_lsm_bpf)
+3 -3
kernel/bpf/btf.c
··· 51 51 * The BTF type section contains a list of 'struct btf_type' objects. 52 52 * Each one describes a C type. Recall from the above section 53 53 * that a 'struct btf_type' object could be immediately followed by extra 54 - * data in order to desribe some particular C types. 54 + * data in order to describe some particular C types. 55 55 * 56 56 * type_id: 57 57 * ~~~~~~~ ··· 1143 1143 1144 1144 /* 1145 1145 * We need a new copy to our safe object, either because we haven't 1146 - * yet copied and are intializing safe data, or because the data 1146 + * yet copied and are initializing safe data, or because the data 1147 1147 * we want falls outside the boundaries of the safe object. 1148 1148 */ 1149 1149 if (!safe) { ··· 3417 3417 * BTF_KIND_FUNC_PROTO cannot be directly referred by 3418 3418 * a struct's member. 3419 3419 * 3420 - * It should be a funciton pointer instead. 3420 + * It should be a function pointer instead. 3421 3421 * (i.e. struct's member -> BTF_KIND_PTR -> BTF_KIND_FUNC_PROTO) 3422 3422 * 3423 3423 * Hence, there is no btf_func_check_member().
+43 -18
kernel/bpf/core.c
··· 1392 1392 select_insn: 1393 1393 goto *jumptable[insn->code]; 1394 1394 1395 - /* ALU */ 1396 - #define ALU(OPCODE, OP) \ 1397 - ALU64_##OPCODE##_X: \ 1398 - DST = DST OP SRC; \ 1399 - CONT; \ 1400 - ALU_##OPCODE##_X: \ 1401 - DST = (u32) DST OP (u32) SRC; \ 1402 - CONT; \ 1403 - ALU64_##OPCODE##_K: \ 1404 - DST = DST OP IMM; \ 1405 - CONT; \ 1406 - ALU_##OPCODE##_K: \ 1407 - DST = (u32) DST OP (u32) IMM; \ 1395 + /* Explicitly mask the register-based shift amounts with 63 or 31 1396 + * to avoid undefined behavior. Normally this won't affect the 1397 + * generated code, for example, in case of native 64 bit archs such 1398 + * as x86-64 or arm64, the compiler is optimizing the AND away for 1399 + * the interpreter. In case of JITs, each of the JIT backends compiles 1400 + * the BPF shift operations to machine instructions which produce 1401 + * implementation-defined results in such a case; the resulting 1402 + * contents of the register may be arbitrary, but program behaviour 1403 + * as a whole remains defined. In other words, in case of JIT backends, 1404 + * the AND must /not/ be added to the emitted LSH/RSH/ARSH translation. 
1405 + */ 1406 + /* ALU (shifts) */ 1407 + #define SHT(OPCODE, OP) \ 1408 + ALU64_##OPCODE##_X: \ 1409 + DST = DST OP (SRC & 63); \ 1410 + CONT; \ 1411 + ALU_##OPCODE##_X: \ 1412 + DST = (u32) DST OP ((u32) SRC & 31); \ 1413 + CONT; \ 1414 + ALU64_##OPCODE##_K: \ 1415 + DST = DST OP IMM; \ 1416 + CONT; \ 1417 + ALU_##OPCODE##_K: \ 1418 + DST = (u32) DST OP (u32) IMM; \ 1408 1419 CONT; 1409 - 1420 + /* ALU (rest) */ 1421 + #define ALU(OPCODE, OP) \ 1422 + ALU64_##OPCODE##_X: \ 1423 + DST = DST OP SRC; \ 1424 + CONT; \ 1425 + ALU_##OPCODE##_X: \ 1426 + DST = (u32) DST OP (u32) SRC; \ 1427 + CONT; \ 1428 + ALU64_##OPCODE##_K: \ 1429 + DST = DST OP IMM; \ 1430 + CONT; \ 1431 + ALU_##OPCODE##_K: \ 1432 + DST = (u32) DST OP (u32) IMM; \ 1433 + CONT; 1410 1434 ALU(ADD, +) 1411 1435 ALU(SUB, -) 1412 1436 ALU(AND, &) 1413 1437 ALU(OR, |) 1414 - ALU(LSH, <<) 1415 - ALU(RSH, >>) 1416 1438 ALU(XOR, ^) 1417 1439 ALU(MUL, *) 1440 + SHT(LSH, <<) 1441 + SHT(RSH, >>) 1442 + #undef SHT 1418 1443 #undef ALU 1419 1444 ALU_NEG: 1420 1445 DST = (u32) -DST; ··· 1464 1439 insn++; 1465 1440 CONT; 1466 1441 ALU_ARSH_X: 1467 - DST = (u64) (u32) (((s32) DST) >> SRC); 1442 + DST = (u64) (u32) (((s32) DST) >> (SRC & 31)); 1468 1443 CONT; 1469 1444 ALU_ARSH_K: 1470 1445 DST = (u64) (u32) (((s32) DST) >> IMM); 1471 1446 CONT; 1472 1447 ALU64_ARSH_X: 1473 - (*(s64 *) &DST) >>= SRC; 1448 + (*(s64 *) &DST) >>= (SRC & 63); 1474 1449 CONT; 1475 1450 ALU64_ARSH_K: 1476 1451 (*(s64 *) &DST) >>= IMM;
+2 -1
kernel/bpf/cpumap.c
··· 601 601 602 602 static int cpu_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags) 603 603 { 604 - return __bpf_xdp_redirect_map(map, ifindex, flags, __cpu_map_lookup_elem); 604 + return __bpf_xdp_redirect_map(map, ifindex, flags, 0, 605 + __cpu_map_lookup_elem); 605 606 } 606 607 607 608 static int cpu_map_btf_id;
+254 -53
kernel/bpf/devmap.c
··· 57 57 struct list_head flush_node; 58 58 struct net_device *dev; 59 59 struct net_device *dev_rx; 60 + struct bpf_prog *xdp_prog; 60 61 unsigned int count; 61 62 }; 62 63 ··· 198 197 list_del_rcu(&dtab->list); 199 198 spin_unlock(&dev_map_lock); 200 199 200 + bpf_clear_redirect_map(map); 201 201 synchronize_rcu(); 202 202 203 203 /* Make sure prior __dev_map_entry_free() have completed. */ ··· 328 326 return false; 329 327 } 330 328 329 + static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog, 330 + struct xdp_frame **frames, int n, 331 + struct net_device *dev) 332 + { 333 + struct xdp_txq_info txq = { .dev = dev }; 334 + struct xdp_buff xdp; 335 + int i, nframes = 0; 336 + 337 + for (i = 0; i < n; i++) { 338 + struct xdp_frame *xdpf = frames[i]; 339 + u32 act; 340 + int err; 341 + 342 + xdp_convert_frame_to_buff(xdpf, &xdp); 343 + xdp.txq = &txq; 344 + 345 + act = bpf_prog_run_xdp(xdp_prog, &xdp); 346 + switch (act) { 347 + case XDP_PASS: 348 + err = xdp_update_frame_from_buff(&xdp, xdpf); 349 + if (unlikely(err < 0)) 350 + xdp_return_frame_rx_napi(xdpf); 351 + else 352 + frames[nframes++] = xdpf; 353 + break; 354 + default: 355 + bpf_warn_invalid_xdp_action(act); 356 + fallthrough; 357 + case XDP_ABORTED: 358 + trace_xdp_exception(dev, xdp_prog, act); 359 + fallthrough; 360 + case XDP_DROP: 361 + xdp_return_frame_rx_napi(xdpf); 362 + break; 363 + } 364 + } 365 + return nframes; /* sent frames count */ 366 + } 367 + 331 368 static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags) 332 369 { 333 370 struct net_device *dev = bq->dev; 371 + unsigned int cnt = bq->count; 334 372 int sent = 0, err = 0; 373 + int to_send = cnt; 335 374 int i; 336 375 337 - if (unlikely(!bq->count)) 376 + if (unlikely(!cnt)) 338 377 return; 339 378 340 - for (i = 0; i < bq->count; i++) { 379 + for (i = 0; i < cnt; i++) { 341 380 struct xdp_frame *xdpf = bq->q[i]; 342 381 343 382 prefetch(xdpf); 344 383 } 345 384 346 - sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, 
bq->q, flags); 385 + if (bq->xdp_prog) { 386 + to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev); 387 + if (!to_send) 388 + goto out; 389 + } 390 + 391 + sent = dev->netdev_ops->ndo_xdp_xmit(dev, to_send, bq->q, flags); 347 392 if (sent < 0) { 348 393 /* If ndo_xdp_xmit fails with an errno, no frames have 349 394 * been xmit'ed. ··· 402 353 /* If not all frames have been transmitted, it is our 403 354 * responsibility to free them 404 355 */ 405 - for (i = sent; unlikely(i < bq->count); i++) 356 + for (i = sent; unlikely(i < to_send); i++) 406 357 xdp_return_frame_rx_napi(bq->q[i]); 407 358 408 - trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, bq->count - sent, err); 409 - bq->dev_rx = NULL; 359 + out: 410 360 bq->count = 0; 411 - __list_del_clearprev(&bq->flush_node); 361 + trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, cnt - sent, err); 412 362 } 413 363 414 364 /* __dev_flush is called from xdp_do_flush() which _must_ be signaled ··· 425 377 struct list_head *flush_list = this_cpu_ptr(&dev_flush_list); 426 378 struct xdp_dev_bulk_queue *bq, *tmp; 427 379 428 - list_for_each_entry_safe(bq, tmp, flush_list, flush_node) 380 + list_for_each_entry_safe(bq, tmp, flush_list, flush_node) { 429 381 bq_xmit_all(bq, XDP_XMIT_FLUSH); 382 + bq->dev_rx = NULL; 383 + bq->xdp_prog = NULL; 384 + __list_del_clearprev(&bq->flush_node); 385 + } 430 386 } 431 387 432 388 /* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete and/or 433 - * update happens in parallel here a dev_put wont happen until after reading the 434 - * ifindex. 389 + * update happens in parallel here a dev_put won't happen until after reading 390 + * the ifindex. 435 391 */ 436 392 static void *__dev_map_lookup_elem(struct bpf_map *map, u32 key) 437 393 { ··· 453 401 * Thus, safe percpu variable access. 
454 402 */ 455 403 static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, 456 - struct net_device *dev_rx) 404 + struct net_device *dev_rx, struct bpf_prog *xdp_prog) 457 405 { 458 406 struct list_head *flush_list = this_cpu_ptr(&dev_flush_list); 459 407 struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq); ··· 464 412 /* Ingress dev_rx will be the same for all xdp_frame's in 465 413 * bulk_queue, because bq stored per-CPU and must be flushed 466 414 * from net_device drivers NAPI func end. 415 + * 416 + * Do the same with xdp_prog and flush_list since these fields 417 + * are only ever modified together. 467 418 */ 468 - if (!bq->dev_rx) 419 + if (!bq->dev_rx) { 469 420 bq->dev_rx = dev_rx; 421 + bq->xdp_prog = xdp_prog; 422 + list_add(&bq->flush_node, flush_list); 423 + } 470 424 471 425 bq->q[bq->count++] = xdpf; 472 - 473 - if (!bq->flush_node.prev) 474 - list_add(&bq->flush_node, flush_list); 475 426 } 476 427 477 428 static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, 478 - struct net_device *dev_rx) 429 + struct net_device *dev_rx, 430 + struct bpf_prog *xdp_prog) 479 431 { 480 432 struct xdp_frame *xdpf; 481 433 int err; ··· 495 439 if (unlikely(!xdpf)) 496 440 return -EOVERFLOW; 497 441 498 - bq_enqueue(dev, xdpf, dev_rx); 442 + bq_enqueue(dev, xdpf, dev_rx, xdp_prog); 499 443 return 0; 500 - } 501 - 502 - static struct xdp_buff *dev_map_run_prog(struct net_device *dev, 503 - struct xdp_buff *xdp, 504 - struct bpf_prog *xdp_prog) 505 - { 506 - struct xdp_txq_info txq = { .dev = dev }; 507 - u32 act; 508 - 509 - xdp_set_data_meta_invalid(xdp); 510 - xdp->txq = &txq; 511 - 512 - act = bpf_prog_run_xdp(xdp_prog, xdp); 513 - switch (act) { 514 - case XDP_PASS: 515 - return xdp; 516 - case XDP_DROP: 517 - break; 518 - default: 519 - bpf_warn_invalid_xdp_action(act); 520 - fallthrough; 521 - case XDP_ABORTED: 522 - trace_xdp_exception(dev, xdp_prog, act); 523 - break; 524 - } 525 - 526 - xdp_return_buff(xdp); 527 
- return NULL; 528 444 } 529 445 530 446 int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, 531 447 struct net_device *dev_rx) 532 448 { 533 - return __xdp_enqueue(dev, xdp, dev_rx); 449 + return __xdp_enqueue(dev, xdp, dev_rx, NULL); 534 450 } 535 451 536 452 int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, ··· 510 482 { 511 483 struct net_device *dev = dst->dev; 512 484 513 - if (dst->xdp_prog) { 514 - xdp = dev_map_run_prog(dev, xdp, dst->xdp_prog); 515 - if (!xdp) 516 - return 0; 485 + return __xdp_enqueue(dev, xdp, dev_rx, dst->xdp_prog); 486 + } 487 + 488 + static bool is_valid_dst(struct bpf_dtab_netdev *obj, struct xdp_buff *xdp, 489 + int exclude_ifindex) 490 + { 491 + if (!obj || obj->dev->ifindex == exclude_ifindex || 492 + !obj->dev->netdev_ops->ndo_xdp_xmit) 493 + return false; 494 + 495 + if (xdp_ok_fwd_dev(obj->dev, xdp->data_end - xdp->data)) 496 + return false; 497 + 498 + return true; 499 + } 500 + 501 + static int dev_map_enqueue_clone(struct bpf_dtab_netdev *obj, 502 + struct net_device *dev_rx, 503 + struct xdp_frame *xdpf) 504 + { 505 + struct xdp_frame *nxdpf; 506 + 507 + nxdpf = xdpf_clone(xdpf); 508 + if (!nxdpf) 509 + return -ENOMEM; 510 + 511 + bq_enqueue(obj->dev, nxdpf, dev_rx, obj->xdp_prog); 512 + 513 + return 0; 514 + } 515 + 516 + int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx, 517 + struct bpf_map *map, bool exclude_ingress) 518 + { 519 + struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); 520 + int exclude_ifindex = exclude_ingress ? 
dev_rx->ifindex : 0; 521 + struct bpf_dtab_netdev *dst, *last_dst = NULL; 522 + struct hlist_head *head; 523 + struct xdp_frame *xdpf; 524 + unsigned int i; 525 + int err; 526 + 527 + xdpf = xdp_convert_buff_to_frame(xdp); 528 + if (unlikely(!xdpf)) 529 + return -EOVERFLOW; 530 + 531 + if (map->map_type == BPF_MAP_TYPE_DEVMAP) { 532 + for (i = 0; i < map->max_entries; i++) { 533 + dst = READ_ONCE(dtab->netdev_map[i]); 534 + if (!is_valid_dst(dst, xdp, exclude_ifindex)) 535 + continue; 536 + 537 + /* we only need n-1 clones; last_dst enqueued below */ 538 + if (!last_dst) { 539 + last_dst = dst; 540 + continue; 541 + } 542 + 543 + err = dev_map_enqueue_clone(last_dst, dev_rx, xdpf); 544 + if (err) 545 + return err; 546 + 547 + last_dst = dst; 548 + } 549 + } else { /* BPF_MAP_TYPE_DEVMAP_HASH */ 550 + for (i = 0; i < dtab->n_buckets; i++) { 551 + head = dev_map_index_hash(dtab, i); 552 + hlist_for_each_entry_rcu(dst, head, index_hlist, 553 + lockdep_is_held(&dtab->index_lock)) { 554 + if (!is_valid_dst(dst, xdp, exclude_ifindex)) 555 + continue; 556 + 557 + /* we only need n-1 clones; last_dst enqueued below */ 558 + if (!last_dst) { 559 + last_dst = dst; 560 + continue; 561 + } 562 + 563 + err = dev_map_enqueue_clone(last_dst, dev_rx, xdpf); 564 + if (err) 565 + return err; 566 + 567 + last_dst = dst; 568 + } 569 + } 517 570 } 518 - return __xdp_enqueue(dev, xdp, dev_rx); 571 + 572 + /* consume the last copy of the frame */ 573 + if (last_dst) 574 + bq_enqueue(last_dst->dev, xdpf, dev_rx, last_dst->xdp_prog); 575 + else 576 + xdp_return_frame_rx_napi(xdpf); /* dtab is empty */ 577 + 578 + return 0; 519 579 } 520 580 521 581 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, ··· 617 501 skb->dev = dst->dev; 618 502 generic_xdp_tx(skb, xdp_prog); 619 503 504 + return 0; 505 + } 506 + 507 + static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst, 508 + struct sk_buff *skb, 509 + struct bpf_prog *xdp_prog) 510 + { 511 + struct sk_buff 
*nskb; 512 + int err; 513 + 514 + nskb = skb_clone(skb, GFP_ATOMIC); 515 + if (!nskb) 516 + return -ENOMEM; 517 + 518 + err = dev_map_generic_redirect(dst, nskb, xdp_prog); 519 + if (unlikely(err)) { 520 + consume_skb(nskb); 521 + return err; 522 + } 523 + 524 + return 0; 525 + } 526 + 527 + int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, 528 + struct bpf_prog *xdp_prog, struct bpf_map *map, 529 + bool exclude_ingress) 530 + { 531 + struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); 532 + int exclude_ifindex = exclude_ingress ? dev->ifindex : 0; 533 + struct bpf_dtab_netdev *dst, *last_dst = NULL; 534 + struct hlist_head *head; 535 + struct hlist_node *next; 536 + unsigned int i; 537 + int err; 538 + 539 + if (map->map_type == BPF_MAP_TYPE_DEVMAP) { 540 + for (i = 0; i < map->max_entries; i++) { 541 + dst = READ_ONCE(dtab->netdev_map[i]); 542 + if (!dst || dst->dev->ifindex == exclude_ifindex) 543 + continue; 544 + 545 + /* we only need n-1 clones; last_dst enqueued below */ 546 + if (!last_dst) { 547 + last_dst = dst; 548 + continue; 549 + } 550 + 551 + err = dev_map_redirect_clone(last_dst, skb, xdp_prog); 552 + if (err) 553 + return err; 554 + 555 + last_dst = dst; 556 + } 557 + } else { /* BPF_MAP_TYPE_DEVMAP_HASH */ 558 + for (i = 0; i < dtab->n_buckets; i++) { 559 + head = dev_map_index_hash(dtab, i); 560 + hlist_for_each_entry_safe(dst, next, head, index_hlist) { 561 + if (!dst || dst->dev->ifindex == exclude_ifindex) 562 + continue; 563 + 564 + /* we only need n-1 clones; last_dst enqueued below */ 565 + if (!last_dst) { 566 + last_dst = dst; 567 + continue; 568 + } 569 + 570 + err = dev_map_redirect_clone(last_dst, skb, xdp_prog); 571 + if (err) 572 + return err; 573 + 574 + last_dst = dst; 575 + } 576 + } 577 + } 578 + 579 + /* consume the first skb and return */ 580 + if (last_dst) 581 + return dev_map_generic_redirect(last_dst, skb, xdp_prog); 582 + 583 + /* dtab is empty */ 584 + consume_skb(skb); 620 585 return 0; 
621 586 } 622 587 ··· 927 730 928 731 static int dev_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags) 929 732 { 930 - return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_lookup_elem); 733 + return __bpf_xdp_redirect_map(map, ifindex, flags, 734 + BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS, 735 + __dev_map_lookup_elem); 931 736 } 932 737 933 738 static int dev_hash_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags) 934 739 { 935 - return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_hash_lookup_elem); 740 + return __bpf_xdp_redirect_map(map, ifindex, flags, 741 + BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS, 742 + __dev_map_hash_lookup_elem); 936 743 } 937 744 938 745 static int dev_map_btf_id;
+100 -2
kernel/bpf/hashtab.c
··· 46 46 * events, kprobes and tracing to be invoked before the prior invocation 47 47 * from one of these contexts completed. sys_bpf() uses the same mechanism 48 48 * by pinning the task to the current CPU and incrementing the recursion 49 - * protection accross the map operation. 49 + * protection across the map operation. 50 50 * 51 51 * This has subtle implications on PREEMPT_RT. PREEMPT_RT forbids certain 52 52 * operations like memory allocations (even with GFP_ATOMIC) from atomic 53 53 * contexts. This is required because even with GFP_ATOMIC the memory 54 - * allocator calls into code pathes which acquire locks with long held lock 54 + * allocator calls into code paths which acquire locks with long held lock 55 55 * sections. To ensure the deterministic behaviour these locks are regular 56 56 * spinlocks, which are converted to 'sleepable' spinlocks on RT. The only 57 57 * true atomic contexts on an RT kernel are the low level hardware ··· 1401 1401 rcu_read_unlock(); 1402 1402 } 1403 1403 1404 + static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key, 1405 + void *value, bool is_lru_map, 1406 + bool is_percpu, u64 flags) 1407 + { 1408 + struct bpf_htab *htab = container_of(map, struct bpf_htab, map); 1409 + struct hlist_nulls_head *head; 1410 + unsigned long bflags; 1411 + struct htab_elem *l; 1412 + u32 hash, key_size; 1413 + struct bucket *b; 1414 + int ret; 1415 + 1416 + key_size = map->key_size; 1417 + 1418 + hash = htab_map_hash(key, key_size, htab->hashrnd); 1419 + b = __select_bucket(htab, hash); 1420 + head = &b->head; 1421 + 1422 + ret = htab_lock_bucket(htab, b, hash, &bflags); 1423 + if (ret) 1424 + return ret; 1425 + 1426 + l = lookup_elem_raw(head, hash, key, key_size); 1427 + if (!l) { 1428 + ret = -ENOENT; 1429 + } else { 1430 + if (is_percpu) { 1431 + u32 roundup_value_size = round_up(map->value_size, 8); 1432 + void __percpu *pptr; 1433 + int off = 0, cpu; 1434 + 1435 + pptr = htab_elem_get_ptr(l, key_size); 1436 + 
for_each_possible_cpu(cpu) { 1437 + bpf_long_memcpy(value + off, 1438 + per_cpu_ptr(pptr, cpu), 1439 + roundup_value_size); 1440 + off += roundup_value_size; 1441 + } 1442 + } else { 1443 + u32 roundup_key_size = round_up(map->key_size, 8); 1444 + 1445 + if (flags & BPF_F_LOCK) 1446 + copy_map_value_locked(map, value, l->key + 1447 + roundup_key_size, 1448 + true); 1449 + else 1450 + copy_map_value(map, value, l->key + 1451 + roundup_key_size); 1452 + check_and_init_map_lock(map, value); 1453 + } 1454 + 1455 + hlist_nulls_del_rcu(&l->hash_node); 1456 + if (!is_lru_map) 1457 + free_htab_elem(htab, l); 1458 + } 1459 + 1460 + htab_unlock_bucket(htab, b, hash, bflags); 1461 + 1462 + if (is_lru_map && l) 1463 + bpf_lru_push_free(&htab->lru, &l->lru_node); 1464 + 1465 + return ret; 1466 + } 1467 + 1468 + static int htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key, 1469 + void *value, u64 flags) 1470 + { 1471 + return __htab_map_lookup_and_delete_elem(map, key, value, false, false, 1472 + flags); 1473 + } 1474 + 1475 + static int htab_percpu_map_lookup_and_delete_elem(struct bpf_map *map, 1476 + void *key, void *value, 1477 + u64 flags) 1478 + { 1479 + return __htab_map_lookup_and_delete_elem(map, key, value, false, true, 1480 + flags); 1481 + } 1482 + 1483 + static int htab_lru_map_lookup_and_delete_elem(struct bpf_map *map, void *key, 1484 + void *value, u64 flags) 1485 + { 1486 + return __htab_map_lookup_and_delete_elem(map, key, value, true, false, 1487 + flags); 1488 + } 1489 + 1490 + static int htab_lru_percpu_map_lookup_and_delete_elem(struct bpf_map *map, 1491 + void *key, void *value, 1492 + u64 flags) 1493 + { 1494 + return __htab_map_lookup_and_delete_elem(map, key, value, true, true, 1495 + flags); 1496 + } 1497 + 1404 1498 static int 1405 1499 __htab_map_lookup_and_delete_batch(struct bpf_map *map, 1406 1500 const union bpf_attr *attr, ··· 2028 1934 .map_free = htab_map_free, 2029 1935 .map_get_next_key = htab_map_get_next_key, 2030 1936 
.map_lookup_elem = htab_map_lookup_elem, 1937 + .map_lookup_and_delete_elem = htab_map_lookup_and_delete_elem, 2031 1938 .map_update_elem = htab_map_update_elem, 2032 1939 .map_delete_elem = htab_map_delete_elem, 2033 1940 .map_gen_lookup = htab_map_gen_lookup, ··· 2049 1954 .map_free = htab_map_free, 2050 1955 .map_get_next_key = htab_map_get_next_key, 2051 1956 .map_lookup_elem = htab_lru_map_lookup_elem, 1957 + .map_lookup_and_delete_elem = htab_lru_map_lookup_and_delete_elem, 2052 1958 .map_lookup_elem_sys_only = htab_lru_map_lookup_elem_sys, 2053 1959 .map_update_elem = htab_lru_map_update_elem, 2054 1960 .map_delete_elem = htab_lru_map_delete_elem, ··· 2173 2077 .map_free = htab_map_free, 2174 2078 .map_get_next_key = htab_map_get_next_key, 2175 2079 .map_lookup_elem = htab_percpu_map_lookup_elem, 2080 + .map_lookup_and_delete_elem = htab_percpu_map_lookup_and_delete_elem, 2176 2081 .map_update_elem = htab_percpu_map_update_elem, 2177 2082 .map_delete_elem = htab_map_delete_elem, 2178 2083 .map_seq_show_elem = htab_percpu_map_seq_show_elem, ··· 2193 2096 .map_free = htab_map_free, 2194 2097 .map_get_next_key = htab_map_get_next_key, 2195 2098 .map_lookup_elem = htab_lru_percpu_map_lookup_elem, 2099 + .map_lookup_and_delete_elem = htab_lru_percpu_map_lookup_and_delete_elem, 2196 2100 .map_update_elem = htab_lru_percpu_map_update_elem, 2197 2101 .map_delete_elem = htab_lru_map_delete_elem, 2198 2102 .map_seq_show_elem = htab_percpu_map_seq_show_elem,
-1
kernel/bpf/preload/iterators/iterators.bpf.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include <linux/bpf.h> 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 #include <bpf/bpf_core_read.h> 7 6 8 7 #pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
+1 -1
kernel/bpf/reuseport_array.c
··· 102 102 /* 103 103 * ops->map_*_elem() will not be able to access this 104 104 * array now. Hence, this function only races with 105 - * bpf_sk_reuseport_detach() which was triggerred by 105 + * bpf_sk_reuseport_detach() which was triggered by 106 106 * close() or disconnect(). 107 107 * 108 108 * This function and bpf_sk_reuseport_detach() are
+43 -4
kernel/bpf/syscall.c
··· 1484 1484 return err; 1485 1485 } 1486 1486 1487 - #define BPF_MAP_LOOKUP_AND_DELETE_ELEM_LAST_FIELD value 1487 + #define BPF_MAP_LOOKUP_AND_DELETE_ELEM_LAST_FIELD flags 1488 1488 1489 1489 static int map_lookup_and_delete_elem(union bpf_attr *attr) 1490 1490 { ··· 1500 1500 if (CHECK_ATTR(BPF_MAP_LOOKUP_AND_DELETE_ELEM)) 1501 1501 return -EINVAL; 1502 1502 1503 + if (attr->flags & ~BPF_F_LOCK) 1504 + return -EINVAL; 1505 + 1503 1506 f = fdget(ufd); 1504 1507 map = __bpf_map_get(f); 1505 1508 if (IS_ERR(map)) ··· 1513 1510 goto err_put; 1514 1511 } 1515 1512 1513 + if (attr->flags && 1514 + (map->map_type == BPF_MAP_TYPE_QUEUE || 1515 + map->map_type == BPF_MAP_TYPE_STACK)) { 1516 + err = -EINVAL; 1517 + goto err_put; 1518 + } 1519 + 1520 + if ((attr->flags & BPF_F_LOCK) && 1521 + !map_value_has_spin_lock(map)) { 1522 + err = -EINVAL; 1523 + goto err_put; 1524 + } 1525 + 1516 1526 key = __bpf_copy_key(ukey, map->key_size); 1517 1527 if (IS_ERR(key)) { 1518 1528 err = PTR_ERR(key); 1519 1529 goto err_put; 1520 1530 } 1521 1531 1522 - value_size = map->value_size; 1532 + value_size = bpf_map_value_size(map); 1523 1533 1524 1534 err = -ENOMEM; 1525 1535 value = kmalloc(value_size, GFP_USER | __GFP_NOWARN); 1526 1536 if (!value) 1527 1537 goto free_key; 1528 1538 1539 + err = -ENOTSUPP; 1529 1540 if (map->map_type == BPF_MAP_TYPE_QUEUE || 1530 1541 map->map_type == BPF_MAP_TYPE_STACK) { 1531 1542 err = map->ops->map_pop_elem(map, value); 1532 - } else { 1533 - err = -ENOTSUPP; 1543 + } else if (map->map_type == BPF_MAP_TYPE_HASH || 1544 + map->map_type == BPF_MAP_TYPE_PERCPU_HASH || 1545 + map->map_type == BPF_MAP_TYPE_LRU_HASH || 1546 + map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) { 1547 + if (!bpf_map_is_dev_bound(map)) { 1548 + bpf_disable_instrumentation(); 1549 + rcu_read_lock(); 1550 + err = map->ops->map_lookup_and_delete_elem(map, key, value, attr->flags); 1551 + rcu_read_unlock(); 1552 + bpf_enable_instrumentation(); 1553 + } 1534 1554 } 1535 1555 1536 
1556 if (err) ··· 1973 1947 attr->expected_attach_type = 1974 1948 BPF_CGROUP_INET_SOCK_CREATE; 1975 1949 break; 1950 + case BPF_PROG_TYPE_SK_REUSEPORT: 1951 + if (!attr->expected_attach_type) 1952 + attr->expected_attach_type = 1953 + BPF_SK_REUSEPORT_SELECT; 1954 + break; 1976 1955 } 1977 1956 } 1978 1957 ··· 2061 2030 if (expected_attach_type == BPF_SK_LOOKUP) 2062 2031 return 0; 2063 2032 return -EINVAL; 2033 + case BPF_PROG_TYPE_SK_REUSEPORT: 2034 + switch (expected_attach_type) { 2035 + case BPF_SK_REUSEPORT_SELECT: 2036 + case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE: 2037 + return 0; 2038 + default: 2039 + return -EINVAL; 2040 + } 2064 2041 case BPF_PROG_TYPE_SYSCALL: 2065 2042 case BPF_PROG_TYPE_EXT: 2066 2043 if (expected_attach_type)
+21 -18
kernel/bpf/tnum.c
··· 111 111 return TNUM(v & ~mu, mu); 112 112 } 113 113 114 - /* half-multiply add: acc += (unknown * mask * value). 115 - * An intermediate step in the multiply algorithm. 114 + /* Generate partial products by multiplying each bit in the multiplier (tnum a) 115 + * with the multiplicand (tnum b), and add the partial products after 116 + * appropriately bit-shifting them. Instead of directly performing tnum addition 117 + * on the generated partial products, equivalenty, decompose each partial 118 + * product into two tnums, consisting of the value-sum (acc_v) and the 119 + * mask-sum (acc_m) and then perform tnum addition on them. The following paper 120 + * explains the algorithm in more detail: https://arxiv.org/abs/2105.05398. 116 121 */ 117 - static struct tnum hma(struct tnum acc, u64 value, u64 mask) 118 - { 119 - while (mask) { 120 - if (mask & 1) 121 - acc = tnum_add(acc, TNUM(0, value)); 122 - mask >>= 1; 123 - value <<= 1; 124 - } 125 - return acc; 126 - } 127 - 128 122 struct tnum tnum_mul(struct tnum a, struct tnum b) 129 123 { 130 - struct tnum acc; 131 - u64 pi; 124 + u64 acc_v = a.value * b.value; 125 + struct tnum acc_m = TNUM(0, 0); 132 126 133 - pi = a.value * b.value; 134 - acc = hma(TNUM(pi, 0), a.mask, b.mask | b.value); 135 - return hma(acc, b.mask, a.value); 127 + while (a.value || a.mask) { 128 + /* LSB of tnum a is a certain 1 */ 129 + if (a.value & 1) 130 + acc_m = tnum_add(acc_m, TNUM(0, b.mask)); 131 + /* LSB of tnum a is uncertain */ 132 + else if (a.mask & 1) 133 + acc_m = tnum_add(acc_m, TNUM(0, b.value | b.mask)); 134 + /* Note: no case for LSB is certain 0 */ 135 + a = tnum_rshift(a, 1); 136 + b = tnum_lshift(b, 1); 137 + } 138 + return tnum_add(TNUM(acc_v, 0), acc_m); 136 139 } 137 140 138 141 /* Note that if a and b disagree - i.e. one has a 'known 1' where the other has
+1 -1
kernel/bpf/trampoline.c
··· 552 552 * __bpf_prog_enter returns: 553 553 * 0 - skip execution of the bpf prog 554 554 * 1 - execute bpf prog 555 - * [2..MAX_U64] - excute bpf prog and record execution time. 555 + * [2..MAX_U64] - execute bpf prog and record execution time. 556 556 * This is start time. 557 557 */ 558 558 u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
+6 -6
kernel/bpf/verifier.c
··· 47 47 * - unreachable insns exist (shouldn't be a forest. program = one function) 48 48 * - out of bounds or malformed jumps 49 49 * The second pass is all possible path descent from the 1st insn. 50 - * Since it's analyzing all pathes through the program, the length of the 50 + * Since it's analyzing all paths through the program, the length of the 51 51 * analysis is limited to 64k insn, which may be hit even if total number of 52 52 * insn is less then 4K, but there are too many branches that change stack/regs. 53 53 * Number of 'branches to be analyzed' is limited to 1k ··· 132 132 * If it's ok, then verifier allows this BPF_CALL insn and looks at 133 133 * .ret_type which is RET_PTR_TO_MAP_VALUE_OR_NULL, so it sets 134 134 * R0->type = PTR_TO_MAP_VALUE_OR_NULL which means bpf_map_lookup_elem() function 135 - * returns ether pointer to map value or NULL. 135 + * returns either pointer to map value or NULL. 136 136 * 137 137 * When type PTR_TO_MAP_VALUE_OR_NULL passes through 'if (reg != 0) goto +off' 138 138 * insn, the register holding that pointer in the true branch changes state to ··· 2616 2616 if (dst_reg != BPF_REG_FP) { 2617 2617 /* The backtracking logic can only recognize explicit 2618 2618 * stack slot address like [fp - 8]. Other spill of 2619 - * scalar via different register has to be conervative. 2619 + * scalar via different register has to be conservative. 2620 2620 * Backtrack from here and mark all registers as precise 2621 2621 * that contributed into 'reg' being a constant. 2622 2622 */ ··· 9059 9059 !prog->aux->attach_func_proto->type) 9060 9060 return 0; 9061 9061 9062 - /* eBPF calling convetion is such that R0 is used 9062 + /* eBPF calling convention is such that R0 is used 9063 9063 * to return the value from eBPF program. 
9064 9064 * Make sure that it's readable at this time 9065 9065 * of bpf_exit, which means that program wrote ··· 9850 9850 * Since the verifier pushes the branch states as it sees them while exploring 9851 9851 * the program the condition of walking the branch instruction for the second 9852 9852 * time means that all states below this branch were already explored and 9853 - * their final liveness markes are already propagated. 9853 + * their final liveness marks are already propagated. 9854 9854 * Hence when the verifier completes the search of state list in is_state_visited() 9855 9855 * we can call this clean_live_states() function to mark all liveness states 9856 9856 * as REG_LIVE_DONE to indicate that 'parent' pointers of 'struct bpf_reg_state' ··· 12470 12470 prog->aux->max_pkt_offset = MAX_PACKET_OFF; 12471 12471 12472 12472 /* mark bpf_tail_call as different opcode to avoid 12473 - * conditional branch in the interpeter for every normal 12473 + * conditional branch in the interpreter for every normal 12474 12474 * call and to prevent accidental JITing by JIT compiler 12475 12475 * that doesn't support bpf_tail_call yet 12476 12476 */
+57 -3
net/core/filter.c
··· 3931 3931 } 3932 3932 EXPORT_SYMBOL_GPL(xdp_do_flush); 3933 3933 3934 + void bpf_clear_redirect_map(struct bpf_map *map) 3935 + { 3936 + struct bpf_redirect_info *ri; 3937 + int cpu; 3938 + 3939 + for_each_possible_cpu(cpu) { 3940 + ri = per_cpu_ptr(&bpf_redirect_info, cpu); 3941 + /* Avoid polluting remote cacheline due to writes if 3942 + * not needed. Once we pass this test, we need the 3943 + * cmpxchg() to make sure it hasn't been changed in 3944 + * the meantime by remote CPU. 3945 + */ 3946 + if (unlikely(READ_ONCE(ri->map) == map)) 3947 + cmpxchg(&ri->map, map, NULL); 3948 + } 3949 + } 3950 + 3934 3951 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, 3935 3952 struct bpf_prog *xdp_prog) 3936 3953 { ··· 3955 3938 enum bpf_map_type map_type = ri->map_type; 3956 3939 void *fwd = ri->tgt_value; 3957 3940 u32 map_id = ri->map_id; 3941 + struct bpf_map *map; 3958 3942 int err; 3959 3943 3960 3944 ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ ··· 3965 3947 case BPF_MAP_TYPE_DEVMAP: 3966 3948 fallthrough; 3967 3949 case BPF_MAP_TYPE_DEVMAP_HASH: 3968 - err = dev_map_enqueue(fwd, xdp, dev); 3950 + map = READ_ONCE(ri->map); 3951 + if (unlikely(map)) { 3952 + WRITE_ONCE(ri->map, NULL); 3953 + err = dev_map_enqueue_multi(xdp, dev, map, 3954 + ri->flags & BPF_F_EXCLUDE_INGRESS); 3955 + } else { 3956 + err = dev_map_enqueue(fwd, xdp, dev); 3957 + } 3969 3958 break; 3970 3959 case BPF_MAP_TYPE_CPUMAP: 3971 3960 err = cpu_map_enqueue(fwd, xdp, dev); ··· 4014 3989 enum bpf_map_type map_type, u32 map_id) 4015 3990 { 4016 3991 struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); 3992 + struct bpf_map *map; 4017 3993 int err; 4018 3994 4019 3995 switch (map_type) { 4020 3996 case BPF_MAP_TYPE_DEVMAP: 4021 3997 fallthrough; 4022 3998 case BPF_MAP_TYPE_DEVMAP_HASH: 4023 - err = dev_map_generic_redirect(fwd, skb, xdp_prog); 3999 + map = READ_ONCE(ri->map); 4000 + if (unlikely(map)) { 4001 + WRITE_ONCE(ri->map, NULL); 4002 + err = 
dev_map_redirect_multi(dev, skb, xdp_prog, map, 4003 + ri->flags & BPF_F_EXCLUDE_INGRESS); 4004 + } else { 4005 + err = dev_map_generic_redirect(fwd, skb, xdp_prog); 4006 + } 4024 4007 if (unlikely(err)) 4025 4008 goto err; 4026 4009 break; ··· 10045 10012 static void bpf_init_reuseport_kern(struct sk_reuseport_kern *reuse_kern, 10046 10013 struct sock_reuseport *reuse, 10047 10014 struct sock *sk, struct sk_buff *skb, 10015 + struct sock *migrating_sk, 10048 10016 u32 hash) 10049 10017 { 10050 10018 reuse_kern->skb = skb; 10051 10019 reuse_kern->sk = sk; 10052 10020 reuse_kern->selected_sk = NULL; 10021 + reuse_kern->migrating_sk = migrating_sk; 10053 10022 reuse_kern->data_end = skb->data + skb_headlen(skb); 10054 10023 reuse_kern->hash = hash; 10055 10024 reuse_kern->reuseport_id = reuse->reuseport_id; ··· 10060 10025 10061 10026 struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk, 10062 10027 struct bpf_prog *prog, struct sk_buff *skb, 10028 + struct sock *migrating_sk, 10063 10029 u32 hash) 10064 10030 { 10065 10031 struct sk_reuseport_kern reuse_kern; 10066 10032 enum sk_action action; 10067 10033 10068 - bpf_init_reuseport_kern(&reuse_kern, reuse, sk, skb, hash); 10034 + bpf_init_reuseport_kern(&reuse_kern, reuse, sk, skb, migrating_sk, hash); 10069 10035 action = BPF_PROG_RUN(prog, &reuse_kern); 10070 10036 10071 10037 if (action == SK_PASS) ··· 10176 10140 return &sk_reuseport_load_bytes_proto; 10177 10141 case BPF_FUNC_skb_load_bytes_relative: 10178 10142 return &sk_reuseport_load_bytes_relative_proto; 10143 + case BPF_FUNC_get_socket_cookie: 10144 + return &bpf_get_socket_ptr_cookie_proto; 10179 10145 default: 10180 10146 return bpf_base_func_proto(func_id); 10181 10147 } ··· 10206 10168 10207 10169 case offsetof(struct sk_reuseport_md, hash): 10208 10170 return size == size_default; 10171 + 10172 + case offsetof(struct sk_reuseport_md, sk): 10173 + info->reg_type = PTR_TO_SOCKET; 10174 + return size == sizeof(__u64); 10175 + 
10176 + case offsetof(struct sk_reuseport_md, migrating_sk): 10177 + info->reg_type = PTR_TO_SOCK_COMMON_OR_NULL; 10178 + return size == sizeof(__u64); 10209 10179 10210 10180 /* Fields that allow narrowing */ 10211 10181 case bpf_ctx_range(struct sk_reuseport_md, eth_protocol): ··· 10286 10240 10287 10241 case offsetof(struct sk_reuseport_md, bind_inany): 10288 10242 SK_REUSEPORT_LOAD_FIELD(bind_inany); 10243 + break; 10244 + 10245 + case offsetof(struct sk_reuseport_md, sk): 10246 + SK_REUSEPORT_LOAD_FIELD(sk); 10247 + break; 10248 + 10249 + case offsetof(struct sk_reuseport_md, migrating_sk): 10250 + SK_REUSEPORT_LOAD_FIELD(migrating_sk); 10289 10251 break; 10290 10252 } 10291 10253
+320 -39
net/core/sock_reuseport.c
··· 17 17 DEFINE_SPINLOCK(reuseport_lock); 18 18 19 19 static DEFINE_IDA(reuseport_ida); 20 + static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse, 21 + struct sock_reuseport *reuse, bool bind_inany); 22 + 23 + static int reuseport_sock_index(struct sock *sk, 24 + const struct sock_reuseport *reuse, 25 + bool closed) 26 + { 27 + int left, right; 28 + 29 + if (!closed) { 30 + left = 0; 31 + right = reuse->num_socks; 32 + } else { 33 + left = reuse->max_socks - reuse->num_closed_socks; 34 + right = reuse->max_socks; 35 + } 36 + 37 + for (; left < right; left++) 38 + if (reuse->socks[left] == sk) 39 + return left; 40 + return -1; 41 + } 42 + 43 + static void __reuseport_add_sock(struct sock *sk, 44 + struct sock_reuseport *reuse) 45 + { 46 + reuse->socks[reuse->num_socks] = sk; 47 + /* paired with smp_rmb() in reuseport_(select|migrate)_sock() */ 48 + smp_wmb(); 49 + reuse->num_socks++; 50 + } 51 + 52 + static bool __reuseport_detach_sock(struct sock *sk, 53 + struct sock_reuseport *reuse) 54 + { 55 + int i = reuseport_sock_index(sk, reuse, false); 56 + 57 + if (i == -1) 58 + return false; 59 + 60 + reuse->socks[i] = reuse->socks[reuse->num_socks - 1]; 61 + reuse->num_socks--; 62 + 63 + return true; 64 + } 65 + 66 + static void __reuseport_add_closed_sock(struct sock *sk, 67 + struct sock_reuseport *reuse) 68 + { 69 + reuse->socks[reuse->max_socks - reuse->num_closed_socks - 1] = sk; 70 + /* paired with READ_ONCE() in inet_csk_bind_conflict() */ 71 + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks + 1); 72 + } 73 + 74 + static bool __reuseport_detach_closed_sock(struct sock *sk, 75 + struct sock_reuseport *reuse) 76 + { 77 + int i = reuseport_sock_index(sk, reuse, true); 78 + 79 + if (i == -1) 80 + return false; 81 + 82 + reuse->socks[i] = reuse->socks[reuse->max_socks - reuse->num_closed_socks]; 83 + /* paired with READ_ONCE() in inet_csk_bind_conflict() */ 84 + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks - 1); 
85 + 86 + return true; 87 + } 20 88 21 89 static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks) 22 90 { ··· 117 49 reuse = rcu_dereference_protected(sk->sk_reuseport_cb, 118 50 lockdep_is_held(&reuseport_lock)); 119 51 if (reuse) { 52 + if (reuse->num_closed_socks) { 53 + /* sk was shutdown()ed before */ 54 + ret = reuseport_resurrect(sk, reuse, NULL, bind_inany); 55 + goto out; 56 + } 57 + 120 58 /* Only set reuse->bind_inany if the bind_inany is true. 121 59 * Otherwise, it will overwrite the reuse->bind_inany 122 60 * which was set by the bind/hash path. ··· 146 72 } 147 73 148 74 reuse->reuseport_id = id; 75 + reuse->bind_inany = bind_inany; 149 76 reuse->socks[0] = sk; 150 77 reuse->num_socks = 1; 151 - reuse->bind_inany = bind_inany; 152 78 rcu_assign_pointer(sk->sk_reuseport_cb, reuse); 153 79 154 80 out: ··· 164 90 u32 more_socks_size, i; 165 91 166 92 more_socks_size = reuse->max_socks * 2U; 167 - if (more_socks_size > U16_MAX) 93 + if (more_socks_size > U16_MAX) { 94 + if (reuse->num_closed_socks) { 95 + /* Make room by removing a closed sk. 96 + * The child has already been migrated. 97 + * Only reqsk left at this point. 
98 + */ 99 + struct sock *sk; 100 + 101 + sk = reuse->socks[reuse->max_socks - reuse->num_closed_socks]; 102 + RCU_INIT_POINTER(sk->sk_reuseport_cb, NULL); 103 + __reuseport_detach_closed_sock(sk, reuse); 104 + 105 + return reuse; 106 + } 107 + 168 108 return NULL; 109 + } 169 110 170 111 more_reuse = __reuseport_alloc(more_socks_size); 171 112 if (!more_reuse) 172 113 return NULL; 173 114 174 115 more_reuse->num_socks = reuse->num_socks; 116 + more_reuse->num_closed_socks = reuse->num_closed_socks; 175 117 more_reuse->prog = reuse->prog; 176 118 more_reuse->reuseport_id = reuse->reuseport_id; 177 119 more_reuse->bind_inany = reuse->bind_inany; ··· 195 105 196 106 memcpy(more_reuse->socks, reuse->socks, 197 107 reuse->num_socks * sizeof(struct sock *)); 108 + memcpy(more_reuse->socks + 109 + (more_reuse->max_socks - more_reuse->num_closed_socks), 110 + reuse->socks + (reuse->max_socks - reuse->num_closed_socks), 111 + reuse->num_closed_socks * sizeof(struct sock *)); 198 112 more_reuse->synq_overflow_ts = READ_ONCE(reuse->synq_overflow_ts); 199 113 200 - for (i = 0; i < reuse->num_socks; ++i) 114 + for (i = 0; i < reuse->max_socks; ++i) 201 115 rcu_assign_pointer(reuse->socks[i]->sk_reuseport_cb, 202 116 more_reuse); 203 117 ··· 246 152 reuse = rcu_dereference_protected(sk2->sk_reuseport_cb, 247 153 lockdep_is_held(&reuseport_lock)); 248 154 old_reuse = rcu_dereference_protected(sk->sk_reuseport_cb, 249 - lockdep_is_held(&reuseport_lock)); 155 + lockdep_is_held(&reuseport_lock)); 156 + if (old_reuse && old_reuse->num_closed_socks) { 157 + /* sk was shutdown()ed before */ 158 + int err = reuseport_resurrect(sk, old_reuse, reuse, reuse->bind_inany); 159 + 160 + spin_unlock_bh(&reuseport_lock); 161 + return err; 162 + } 163 + 250 164 if (old_reuse && old_reuse->num_socks != 1) { 251 165 spin_unlock_bh(&reuseport_lock); 252 166 return -EBUSY; 253 167 } 254 168 255 - if (reuse->num_socks == reuse->max_socks) { 169 + if (reuse->num_socks + reuse->num_closed_socks == 
reuse->max_socks) { 256 170 reuse = reuseport_grow(reuse); 257 171 if (!reuse) { 258 172 spin_unlock_bh(&reuseport_lock); ··· 268 166 } 269 167 } 270 168 271 - reuse->socks[reuse->num_socks] = sk; 272 - /* paired with smp_rmb() in reuseport_select_sock() */ 273 - smp_wmb(); 274 - reuse->num_socks++; 169 + __reuseport_add_sock(sk, reuse); 275 170 rcu_assign_pointer(sk->sk_reuseport_cb, reuse); 276 171 277 172 spin_unlock_bh(&reuseport_lock); ··· 279 180 } 280 181 EXPORT_SYMBOL(reuseport_add_sock); 281 182 183 + static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse, 184 + struct sock_reuseport *reuse, bool bind_inany) 185 + { 186 + if (old_reuse == reuse) { 187 + /* If sk was in the same reuseport group, just pop sk out of 188 + * the closed section and push sk into the listening section. 189 + */ 190 + __reuseport_detach_closed_sock(sk, old_reuse); 191 + __reuseport_add_sock(sk, old_reuse); 192 + return 0; 193 + } 194 + 195 + if (!reuse) { 196 + /* In bind()/listen() path, we cannot carry over the eBPF prog 197 + * for the shutdown()ed socket. In setsockopt() path, we should 198 + * not change the eBPF prog of listening sockets by attaching a 199 + * prog to the shutdown()ed socket. Thus, we will allocate a new 200 + * reuseport group and detach sk from the old group. 201 + */ 202 + int id; 203 + 204 + reuse = __reuseport_alloc(INIT_SOCKS); 205 + if (!reuse) 206 + return -ENOMEM; 207 + 208 + id = ida_alloc(&reuseport_ida, GFP_ATOMIC); 209 + if (id < 0) { 210 + kfree(reuse); 211 + return id; 212 + } 213 + 214 + reuse->reuseport_id = id; 215 + reuse->bind_inany = bind_inany; 216 + } else { 217 + /* Move sk from the old group to the new one if 218 + * - all the other listeners in the old group were close()d or 219 + * shutdown()ed, and then sk2 has listen()ed on the same port 220 + * OR 221 + * - sk listen()ed without bind() (or with autobind), was 222 + * shutdown()ed, and then listen()s on another port which 223 + * sk2 listen()s on. 
224 + */ 225 + if (reuse->num_socks + reuse->num_closed_socks == reuse->max_socks) { 226 + reuse = reuseport_grow(reuse); 227 + if (!reuse) 228 + return -ENOMEM; 229 + } 230 + } 231 + 232 + __reuseport_detach_closed_sock(sk, old_reuse); 233 + __reuseport_add_sock(sk, reuse); 234 + rcu_assign_pointer(sk->sk_reuseport_cb, reuse); 235 + 236 + if (old_reuse->num_socks + old_reuse->num_closed_socks == 0) 237 + call_rcu(&old_reuse->rcu, reuseport_free_rcu); 238 + 239 + return 0; 240 + } 241 + 282 242 void reuseport_detach_sock(struct sock *sk) 283 243 { 284 244 struct sock_reuseport *reuse; 285 - int i; 286 245 287 246 spin_lock_bh(&reuseport_lock); 288 247 reuse = rcu_dereference_protected(sk->sk_reuseport_cb, 289 248 lockdep_is_held(&reuseport_lock)); 249 + 250 + /* reuseport_grow() has detached a closed sk */ 251 + if (!reuse) 252 + goto out; 290 253 291 254 /* Notify the bpf side. The sk may be added to a sockarray 292 255 * map. If so, sockarray logic will remove it from the map. ··· 362 201 363 202 rcu_assign_pointer(sk->sk_reuseport_cb, NULL); 364 203 365 - for (i = 0; i < reuse->num_socks; i++) { 366 - if (reuse->socks[i] == sk) { 367 - reuse->socks[i] = reuse->socks[reuse->num_socks - 1]; 368 - reuse->num_socks--; 369 - if (reuse->num_socks == 0) 370 - call_rcu(&reuse->rcu, reuseport_free_rcu); 371 - break; 372 - } 373 - } 204 + if (!__reuseport_detach_closed_sock(sk, reuse)) 205 + __reuseport_detach_sock(sk, reuse); 206 + 207 + if (reuse->num_socks + reuse->num_closed_socks == 0) 208 + call_rcu(&reuse->rcu, reuseport_free_rcu); 209 + 210 + out: 374 211 spin_unlock_bh(&reuseport_lock); 375 212 } 376 213 EXPORT_SYMBOL(reuseport_detach_sock); 214 + 215 + void reuseport_stop_listen_sock(struct sock *sk) 216 + { 217 + if (sk->sk_protocol == IPPROTO_TCP) { 218 + struct sock_reuseport *reuse; 219 + struct bpf_prog *prog; 220 + 221 + spin_lock_bh(&reuseport_lock); 222 + 223 + reuse = rcu_dereference_protected(sk->sk_reuseport_cb, 224 + 
lockdep_is_held(&reuseport_lock)); 225 + prog = rcu_dereference_protected(reuse->prog, 226 + lockdep_is_held(&reuseport_lock)); 227 + 228 + if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req || 229 + (prog && prog->expected_attach_type == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE)) { 230 + /* Migration capable, move sk from the listening section 231 + * to the closed section. 232 + */ 233 + bpf_sk_reuseport_detach(sk); 234 + 235 + __reuseport_detach_sock(sk, reuse); 236 + __reuseport_add_closed_sock(sk, reuse); 237 + 238 + spin_unlock_bh(&reuseport_lock); 239 + return; 240 + } 241 + 242 + spin_unlock_bh(&reuseport_lock); 243 + } 244 + 245 + /* Not capable to do migration, detach immediately */ 246 + reuseport_detach_sock(sk); 247 + } 248 + EXPORT_SYMBOL(reuseport_stop_listen_sock); 377 249 378 250 static struct sock *run_bpf_filter(struct sock_reuseport *reuse, u16 socks, 379 251 struct bpf_prog *prog, struct sk_buff *skb, ··· 436 242 return NULL; 437 243 438 244 return reuse->socks[index]; 245 + } 246 + 247 + static struct sock *reuseport_select_sock_by_hash(struct sock_reuseport *reuse, 248 + u32 hash, u16 num_socks) 249 + { 250 + int i, j; 251 + 252 + i = j = reciprocal_scale(hash, num_socks); 253 + while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) { 254 + i++; 255 + if (i >= num_socks) 256 + i = 0; 257 + if (i == j) 258 + return NULL; 259 + } 260 + 261 + return reuse->socks[i]; 439 262 } 440 263 441 264 /** ··· 485 274 prog = rcu_dereference(reuse->prog); 486 275 socks = READ_ONCE(reuse->num_socks); 487 276 if (likely(socks)) { 488 - /* paired with smp_wmb() in reuseport_add_sock() */ 277 + /* paired with smp_wmb() in __reuseport_add_sock() */ 489 278 smp_rmb(); 490 279 491 280 if (!prog || !skb) 492 281 goto select_by_hash; 493 282 494 283 if (prog->type == BPF_PROG_TYPE_SK_REUSEPORT) 495 - sk2 = bpf_run_sk_reuseport(reuse, sk, prog, skb, hash); 284 + sk2 = bpf_run_sk_reuseport(reuse, sk, prog, skb, NULL, hash); 496 285 else 497 286 sk2 = run_bpf_filter(reuse, socks, 
prog, skb, hdr_len); 498 287 499 288 select_by_hash: 500 289 /* no bpf or invalid bpf result: fall back to hash usage */ 501 - if (!sk2) { 502 - int i, j; 503 - 504 - i = j = reciprocal_scale(hash, socks); 505 - while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) { 506 - i++; 507 - if (i >= socks) 508 - i = 0; 509 - if (i == j) 510 - goto out; 511 - } 512 - sk2 = reuse->socks[i]; 513 - } 290 + if (!sk2) 291 + sk2 = reuseport_select_sock_by_hash(reuse, hash, socks); 514 292 } 515 293 516 294 out: ··· 508 308 } 509 309 EXPORT_SYMBOL(reuseport_select_sock); 510 310 311 + /** 312 + * reuseport_migrate_sock - Select a socket from an SO_REUSEPORT group. 313 + * @sk: close()ed or shutdown()ed socket in the group. 314 + * @migrating_sk: ESTABLISHED/SYN_RECV full socket in the accept queue or 315 + * NEW_SYN_RECV request socket during 3WHS. 316 + * @skb: skb to run through BPF filter. 317 + * Returns a socket (with sk_refcnt +1) that should accept the child socket 318 + * (or NULL on error). 319 + */ 320 + struct sock *reuseport_migrate_sock(struct sock *sk, 321 + struct sock *migrating_sk, 322 + struct sk_buff *skb) 323 + { 324 + struct sock_reuseport *reuse; 325 + struct sock *nsk = NULL; 326 + bool allocated = false; 327 + struct bpf_prog *prog; 328 + u16 socks; 329 + u32 hash; 330 + 331 + rcu_read_lock(); 332 + 333 + reuse = rcu_dereference(sk->sk_reuseport_cb); 334 + if (!reuse) 335 + goto out; 336 + 337 + socks = READ_ONCE(reuse->num_socks); 338 + if (unlikely(!socks)) 339 + goto out; 340 + 341 + /* paired with smp_wmb() in __reuseport_add_sock() */ 342 + smp_rmb(); 343 + 344 + hash = migrating_sk->sk_hash; 345 + prog = rcu_dereference(reuse->prog); 346 + if (!prog || prog->expected_attach_type != BPF_SK_REUSEPORT_SELECT_OR_MIGRATE) { 347 + if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req) 348 + goto select_by_hash; 349 + goto out; 350 + } 351 + 352 + if (!skb) { 353 + skb = alloc_skb(0, GFP_ATOMIC); 354 + if (!skb) 355 + goto out; 356 + allocated = true; 357 + } 358 + 
359 + nsk = bpf_run_sk_reuseport(reuse, sk, prog, skb, migrating_sk, hash); 360 + 361 + if (allocated) 362 + kfree_skb(skb); 363 + 364 + select_by_hash: 365 + if (!nsk) 366 + nsk = reuseport_select_sock_by_hash(reuse, hash, socks); 367 + 368 + if (IS_ERR_OR_NULL(nsk) || unlikely(!refcount_inc_not_zero(&nsk->sk_refcnt))) 369 + nsk = NULL; 370 + 371 + out: 372 + rcu_read_unlock(); 373 + return nsk; 374 + } 375 + EXPORT_SYMBOL(reuseport_migrate_sock); 376 + 511 377 int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog) 512 378 { 513 379 struct sock_reuseport *reuse; 514 380 struct bpf_prog *old_prog; 515 381 516 - if (sk_unhashed(sk) && sk->sk_reuseport) { 517 - int err = reuseport_alloc(sk, false); 382 + if (sk_unhashed(sk)) { 383 + int err; 518 384 385 + if (!sk->sk_reuseport) 386 + return -EINVAL; 387 + 388 + err = reuseport_alloc(sk, false); 519 389 if (err) 520 390 return err; 521 391 } else if (!rcu_access_pointer(sk->sk_reuseport_cb)) { ··· 611 341 struct sock_reuseport *reuse; 612 342 struct bpf_prog *old_prog; 613 343 614 - if (!rcu_access_pointer(sk->sk_reuseport_cb)) 615 - return sk->sk_reuseport ? -ENOENT : -EINVAL; 616 - 617 344 old_prog = NULL; 618 345 spin_lock_bh(&reuseport_lock); 619 346 reuse = rcu_dereference_protected(sk->sk_reuseport_cb, 620 347 lockdep_is_held(&reuseport_lock)); 348 + 349 + /* reuse must be checked after acquiring the reuseport_lock 350 + * because reuseport_grow() can detach a closed sk. 351 + */ 352 + if (!reuse) { 353 + spin_unlock_bh(&reuseport_lock); 354 + return sk->sk_reuseport ? -ENOENT : -EINVAL; 355 + } 356 + 357 + if (sk_unhashed(sk) && reuse->num_closed_socks) { 358 + spin_unlock_bh(&reuseport_lock); 359 + return -ENOENT; 360 + } 361 + 621 362 old_prog = rcu_replace_pointer(reuse->prog, old_prog, 622 363 lockdep_is_held(&reuseport_lock)); 623 364 spin_unlock_bh(&reuseport_lock);
+28
net/core/xdp.c
··· 584 584 return __xdp_build_skb_from_frame(xdpf, skb, dev); 585 585 } 586 586 EXPORT_SYMBOL_GPL(xdp_build_skb_from_frame); 587 + 588 + struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf) 589 + { 590 + unsigned int headroom, totalsize; 591 + struct xdp_frame *nxdpf; 592 + struct page *page; 593 + void *addr; 594 + 595 + headroom = xdpf->headroom + sizeof(*xdpf); 596 + totalsize = headroom + xdpf->len; 597 + 598 + if (unlikely(totalsize > PAGE_SIZE)) 599 + return NULL; 600 + page = dev_alloc_page(); 601 + if (!page) 602 + return NULL; 603 + addr = page_to_virt(page); 604 + 605 + memcpy(addr, xdpf, totalsize); 606 + 607 + nxdpf = addr; 608 + nxdpf->data = addr + headroom; 609 + nxdpf->frame_sz = PAGE_SIZE; 610 + nxdpf->mem.type = MEM_TYPE_PAGE_ORDER0; 611 + nxdpf->mem.id = 0; 612 + 613 + return nxdpf; 614 + }
+179 -12
net/ipv4/inet_connection_sock.c
··· 135 135 bool relax, bool reuseport_ok) 136 136 { 137 137 struct sock *sk2; 138 + bool reuseport_cb_ok; 138 139 bool reuse = sk->sk_reuse; 139 140 bool reuseport = !!sk->sk_reuseport; 141 + struct sock_reuseport *reuseport_cb; 140 142 kuid_t uid = sock_i_uid((struct sock *)sk); 143 + 144 + rcu_read_lock(); 145 + reuseport_cb = rcu_dereference(sk->sk_reuseport_cb); 146 + /* paired with WRITE_ONCE() in __reuseport_(add|detach)_closed_sock */ 147 + reuseport_cb_ok = !reuseport_cb || READ_ONCE(reuseport_cb->num_closed_socks); 148 + rcu_read_unlock(); 141 149 142 150 /* 143 151 * Unlike other sk lookup places we do not check ··· 164 156 if ((!relax || 165 157 (!reuseport_ok && 166 158 reuseport && sk2->sk_reuseport && 167 - !rcu_access_pointer(sk->sk_reuseport_cb) && 159 + reuseport_cb_ok && 168 160 (sk2->sk_state == TCP_TIME_WAIT || 169 161 uid_eq(uid, sock_i_uid(sk2))))) && 170 162 inet_rcv_saddr_equal(sk, sk2, true)) 171 163 break; 172 164 } else if (!reuseport_ok || 173 165 !reuseport || !sk2->sk_reuseport || 174 - rcu_access_pointer(sk->sk_reuseport_cb) || 166 + !reuseport_cb_ok || 175 167 (sk2->sk_state != TCP_TIME_WAIT && 176 168 !uid_eq(uid, sock_i_uid(sk2)))) { 177 169 if (inet_rcv_saddr_equal(sk, sk2, true)) ··· 695 687 } 696 688 EXPORT_SYMBOL(inet_rtx_syn_ack); 697 689 690 + static struct request_sock *inet_reqsk_clone(struct request_sock *req, 691 + struct sock *sk) 692 + { 693 + struct sock *req_sk, *nreq_sk; 694 + struct request_sock *nreq; 695 + 696 + nreq = kmem_cache_alloc(req->rsk_ops->slab, GFP_ATOMIC | __GFP_NOWARN); 697 + if (!nreq) { 698 + /* paired with refcount_inc_not_zero() in reuseport_migrate_sock() */ 699 + sock_put(sk); 700 + return NULL; 701 + } 702 + 703 + req_sk = req_to_sk(req); 704 + nreq_sk = req_to_sk(nreq); 705 + 706 + memcpy(nreq_sk, req_sk, 707 + offsetof(struct sock, sk_dontcopy_begin)); 708 + memcpy(&nreq_sk->sk_dontcopy_end, &req_sk->sk_dontcopy_end, 709 + req->rsk_ops->obj_size - offsetof(struct sock, sk_dontcopy_end)); 710 
+ 711 + sk_node_init(&nreq_sk->sk_node); 712 + nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping; 713 + #ifdef CONFIG_XPS 714 + nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping; 715 + #endif 716 + nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu; 717 + 718 + nreq->rsk_listener = sk; 719 + 720 + /* We need not acquire fastopenq->lock 721 + * because the child socket is locked in inet_csk_listen_stop(). 722 + */ 723 + if (sk->sk_protocol == IPPROTO_TCP && tcp_rsk(nreq)->tfo_listener) 724 + rcu_assign_pointer(tcp_sk(nreq->sk)->fastopen_rsk, nreq); 725 + 726 + return nreq; 727 + } 728 + 729 + static void reqsk_queue_migrated(struct request_sock_queue *queue, 730 + const struct request_sock *req) 731 + { 732 + if (req->num_timeout == 0) 733 + atomic_inc(&queue->young); 734 + atomic_inc(&queue->qlen); 735 + } 736 + 737 + static void reqsk_migrate_reset(struct request_sock *req) 738 + { 739 + req->saved_syn = NULL; 740 + #if IS_ENABLED(CONFIG_IPV6) 741 + inet_rsk(req)->ipv6_opt = NULL; 742 + inet_rsk(req)->pktopts = NULL; 743 + #else 744 + inet_rsk(req)->ireq_opt = NULL; 745 + #endif 746 + } 747 + 698 748 /* return true if req was found in the ehash table */ 699 749 static bool reqsk_queue_unlink(struct request_sock *req) 700 750 { ··· 793 727 static void reqsk_timer_handler(struct timer_list *t) 794 728 { 795 729 struct request_sock *req = from_timer(req, t, rsk_timer); 730 + struct request_sock *nreq = NULL, *oreq = req; 796 731 struct sock *sk_listener = req->rsk_listener; 797 - struct net *net = sock_net(sk_listener); 798 - struct inet_connection_sock *icsk = inet_csk(sk_listener); 799 - struct request_sock_queue *queue = &icsk->icsk_accept_queue; 732 + struct inet_connection_sock *icsk; 733 + struct request_sock_queue *queue; 734 + struct net *net; 800 735 int max_syn_ack_retries, qlen, expire = 0, resend = 0; 801 736 802 - if (inet_sk_state_load(sk_listener) != TCP_LISTEN) 803 - goto drop; 737 + if (inet_sk_state_load(sk_listener) != TCP_LISTEN) 
{ 738 + struct sock *nsk; 804 739 740 + nsk = reuseport_migrate_sock(sk_listener, req_to_sk(req), NULL); 741 + if (!nsk) 742 + goto drop; 743 + 744 + nreq = inet_reqsk_clone(req, nsk); 745 + if (!nreq) 746 + goto drop; 747 + 748 + /* The new timer for the cloned req can decrease the 2 749 + * by calling inet_csk_reqsk_queue_drop_and_put(), so 750 + * hold another count to prevent use-after-free and 751 + * call reqsk_put() just before return. 752 + */ 753 + refcount_set(&nreq->rsk_refcnt, 2 + 1); 754 + timer_setup(&nreq->rsk_timer, reqsk_timer_handler, TIMER_PINNED); 755 + reqsk_queue_migrated(&inet_csk(nsk)->icsk_accept_queue, req); 756 + 757 + req = nreq; 758 + sk_listener = nsk; 759 + } 760 + 761 + icsk = inet_csk(sk_listener); 762 + net = sock_net(sk_listener); 805 763 max_syn_ack_retries = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_synack_retries; 806 764 /* Normally all the openreqs are young and become mature 807 765 * (i.e. converted to established socket) for first timeout. ··· 844 754 * embrions; and abort old ones without pity, if old 845 755 * ones are about to clog our table. 
846 756 */ 757 + queue = &icsk->icsk_accept_queue; 847 758 qlen = reqsk_queue_len(queue); 848 759 if ((qlen << 1) > max(8U, READ_ONCE(sk_listener->sk_max_ack_backlog))) { 849 760 int young = reqsk_queue_len_young(queue) << 1; ··· 869 778 atomic_dec(&queue->young); 870 779 timeo = min(TCP_TIMEOUT_INIT << req->num_timeout, TCP_RTO_MAX); 871 780 mod_timer(&req->rsk_timer, jiffies + timeo); 781 + 782 + if (!nreq) 783 + return; 784 + 785 + if (!inet_ehash_insert(req_to_sk(nreq), req_to_sk(oreq), NULL)) { 786 + /* delete timer */ 787 + inet_csk_reqsk_queue_drop(sk_listener, nreq); 788 + goto drop; 789 + } 790 + 791 + reqsk_migrate_reset(oreq); 792 + reqsk_queue_removed(&inet_csk(oreq->rsk_listener)->icsk_accept_queue, oreq); 793 + reqsk_put(oreq); 794 + 795 + reqsk_put(nreq); 872 796 return; 873 797 } 798 + 874 799 drop: 875 - inet_csk_reqsk_queue_drop_and_put(sk_listener, req); 800 + /* Even if we can clone the req, we may need not retransmit any more 801 + * SYN+ACKs (nreq->num_timeout > max_syn_ack_retries, etc), or another 802 + * CPU may win the "own_req" race so that inet_ehash_insert() fails. 803 + */ 804 + if (nreq) { 805 + reqsk_migrate_reset(nreq); 806 + reqsk_queue_removed(queue, nreq); 807 + __reqsk_free(nreq); 808 + } 809 + 810 + inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq); 876 811 } 877 812 878 813 static void reqsk_queue_hash_req(struct request_sock *req, ··· 1114 997 struct request_sock *req, bool own_req) 1115 998 { 1116 999 if (own_req) { 1117 - inet_csk_reqsk_queue_drop(sk, req); 1118 - reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req); 1119 - if (inet_csk_reqsk_queue_add(sk, req, child)) 1000 + inet_csk_reqsk_queue_drop(req->rsk_listener, req); 1001 + reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req); 1002 + 1003 + if (sk != req->rsk_listener) { 1004 + /* another listening sk has been selected, 1005 + * migrate the req to it. 
1006 + */ 1007 + struct request_sock *nreq; 1008 + 1009 + /* hold a refcnt for the nreq->rsk_listener 1010 + * which is assigned in inet_reqsk_clone() 1011 + */ 1012 + sock_hold(sk); 1013 + nreq = inet_reqsk_clone(req, sk); 1014 + if (!nreq) { 1015 + inet_child_forget(sk, req, child); 1016 + goto child_put; 1017 + } 1018 + 1019 + refcount_set(&nreq->rsk_refcnt, 1); 1020 + if (inet_csk_reqsk_queue_add(sk, nreq, child)) { 1021 + reqsk_migrate_reset(req); 1022 + reqsk_put(req); 1023 + return child; 1024 + } 1025 + 1026 + reqsk_migrate_reset(nreq); 1027 + __reqsk_free(nreq); 1028 + } else if (inet_csk_reqsk_queue_add(sk, req, child)) { 1120 1029 return child; 1030 + } 1121 1031 } 1122 1032 /* Too bad, another child took ownership of the request, undo. */ 1033 + child_put: 1123 1034 bh_unlock_sock(child); 1124 1035 sock_put(child); 1125 1036 return NULL; ··· 1173 1028 * of the variants now. --ANK 1174 1029 */ 1175 1030 while ((req = reqsk_queue_remove(queue, sk)) != NULL) { 1176 - struct sock *child = req->sk; 1031 + struct sock *child = req->sk, *nsk; 1032 + struct request_sock *nreq; 1177 1033 1178 1034 local_bh_disable(); 1179 1035 bh_lock_sock(child); 1180 1036 WARN_ON(sock_owned_by_user(child)); 1181 1037 sock_hold(child); 1182 1038 1039 + nsk = reuseport_migrate_sock(sk, child, NULL); 1040 + if (nsk) { 1041 + nreq = inet_reqsk_clone(req, nsk); 1042 + if (nreq) { 1043 + refcount_set(&nreq->rsk_refcnt, 1); 1044 + 1045 + if (inet_csk_reqsk_queue_add(nsk, nreq, child)) { 1046 + reqsk_migrate_reset(req); 1047 + } else { 1048 + reqsk_migrate_reset(nreq); 1049 + __reqsk_free(nreq); 1050 + } 1051 + 1052 + /* inet_csk_reqsk_queue_add() has already 1053 + * called inet_child_forget() on failure case. 1054 + */ 1055 + goto skip_child_forget; 1056 + } 1057 + } 1058 + 1183 1059 inet_child_forget(sk, req, child); 1060 + skip_child_forget: 1184 1061 reqsk_put(req); 1185 1062 bh_unlock_sock(child); 1186 1063 local_bh_enable();
+1 -1
net/ipv4/inet_hashtables.c
··· 697 697 goto unlock; 698 698 699 699 if (rcu_access_pointer(sk->sk_reuseport_cb)) 700 - reuseport_detach_sock(sk); 700 + reuseport_stop_listen_sock(sk); 701 701 if (ilb) { 702 702 inet_unhash2(hashinfo, sk); 703 703 ilb->count--;
+9
net/ipv4/sysctl_net_ipv4.c
··· 961 961 }, 962 962 #endif 963 963 { 964 + .procname = "tcp_migrate_req", 965 + .data = &init_net.ipv4.sysctl_tcp_migrate_req, 966 + .maxlen = sizeof(u8), 967 + .mode = 0644, 968 + .proc_handler = proc_dou8vec_minmax, 969 + .extra1 = SYSCTL_ZERO, 970 + .extra2 = SYSCTL_ONE 971 + }, 972 + { 964 973 .procname = "tcp_reordering", 965 974 .data = &init_net.ipv4.sysctl_tcp_reordering, 966 975 .maxlen = sizeof(int),
+14 -6
net/ipv4/tcp_ipv4.c
··· 2002 2002 goto csum_error; 2003 2003 } 2004 2004 if (unlikely(sk->sk_state != TCP_LISTEN)) { 2005 - inet_csk_reqsk_queue_drop_and_put(sk, req); 2006 - goto lookup; 2005 + nsk = reuseport_migrate_sock(sk, req_to_sk(req), skb); 2006 + if (!nsk) { 2007 + inet_csk_reqsk_queue_drop_and_put(sk, req); 2008 + goto lookup; 2009 + } 2010 + sk = nsk; 2011 + /* reuseport_migrate_sock() has already held one sk_refcnt 2012 + * before returning. 2013 + */ 2014 + } else { 2015 + /* We own a reference on the listener, increase it again 2016 + * as we might lose it too soon. 2017 + */ 2018 + sock_hold(sk); 2007 2019 } 2008 - /* We own a reference on the listener, increase it again 2009 - * as we might lose it too soon. 2010 - */ 2011 - sock_hold(sk); 2012 2020 refcounted = true; 2013 2021 nsk = NULL; 2014 2022 if (!tcp_filter(sk, skb)) {
+2 -2
net/ipv4/tcp_minisocks.c
··· 775 775 goto listen_overflow; 776 776 777 777 if (own_req && rsk_drop_req(req)) { 778 - reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req); 779 - inet_csk_reqsk_queue_drop_and_put(sk, req); 778 + reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req); 779 + inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req); 780 780 return child; 781 781 } 782 782
+11 -3
net/ipv6/tcp_ipv6.c
··· 1664 1664 goto csum_error; 1665 1665 } 1666 1666 if (unlikely(sk->sk_state != TCP_LISTEN)) { 1667 - inet_csk_reqsk_queue_drop_and_put(sk, req); 1668 - goto lookup; 1667 + nsk = reuseport_migrate_sock(sk, req_to_sk(req), skb); 1668 + if (!nsk) { 1669 + inet_csk_reqsk_queue_drop_and_put(sk, req); 1670 + goto lookup; 1671 + } 1672 + sk = nsk; 1673 + /* reuseport_migrate_sock() has already held one sk_refcnt 1674 + * before returning. 1675 + */ 1676 + } else { 1677 + sock_hold(sk); 1669 1678 } 1670 - sock_hold(sk); 1671 1679 refcounted = true; 1672 1680 nsk = NULL; 1673 1681 if (!tcp_filter(sk, skb)) {
+3 -4
net/xdp/xdp_umem.c
··· 27 27 { 28 28 unpin_user_pages_dirty_lock(umem->pgs, umem->npgs, true); 29 29 30 - kfree(umem->pgs); 30 + kvfree(umem->pgs); 31 31 umem->pgs = NULL; 32 32 } 33 33 ··· 99 99 long npgs; 100 100 int err; 101 101 102 - umem->pgs = kcalloc(umem->npgs, sizeof(*umem->pgs), 103 - GFP_KERNEL | __GFP_NOWARN); 102 + umem->pgs = kvcalloc(umem->npgs, sizeof(*umem->pgs), GFP_KERNEL | __GFP_NOWARN); 104 103 if (!umem->pgs) 105 104 return -ENOMEM; 106 105 ··· 122 123 out_pin: 123 124 xdp_umem_unpin_pages(umem); 124 125 out_pgs: 125 - kfree(umem->pgs); 126 + kvfree(umem->pgs); 126 127 umem->pgs = NULL; 127 128 return err; 128 129 }
+2 -1
net/xdp/xskmap.c
··· 226 226 227 227 static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags) 228 228 { 229 - return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem); 229 + return __bpf_xdp_redirect_map(map, ifindex, flags, 0, 230 + __xsk_map_lookup_elem); 230 231 } 231 232 232 233 void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
+3
samples/bpf/Makefile
··· 41 41 tprogs-y += per_socket_stats_example 42 42 tprogs-y += xdp_redirect 43 43 tprogs-y += xdp_redirect_map 44 + tprogs-y += xdp_redirect_map_multi 44 45 tprogs-y += xdp_redirect_cpu 45 46 tprogs-y += xdp_monitor 46 47 tprogs-y += xdp_rxq_info ··· 100 99 per_socket_stats_example-objs := cookie_uid_helper_example.o 101 100 xdp_redirect-objs := xdp_redirect_user.o 102 101 xdp_redirect_map-objs := xdp_redirect_map_user.o 102 + xdp_redirect_map_multi-objs := xdp_redirect_map_multi_user.o 103 103 xdp_redirect_cpu-objs := xdp_redirect_cpu_user.o 104 104 xdp_monitor-objs := xdp_monitor_user.o 105 105 xdp_rxq_info-objs := xdp_rxq_info_user.o ··· 162 160 always-y += tcp_dumpstats_kern.o 163 161 always-y += xdp_redirect_kern.o 164 162 always-y += xdp_redirect_map_kern.o 163 + always-y += xdp_redirect_map_multi_kern.o 165 164 always-y += xdp_redirect_cpu_kern.o 166 165 always-y += xdp_monitor_kern.o 167 166 always-y += xdp_rxq_info_kern.o
+1 -1
samples/bpf/ibumad_kern.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 3 - /** 3 + /* 4 4 * ibumad BPF sample kernel side 5 5 * 6 6 * This program is free software; you can redistribute it and/or
+1 -1
samples/bpf/ibumad_user.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 3 - /** 3 + /* 4 4 * ibumad BPF sample user side 5 5 * 6 6 * This program is free software; you can redistribute it and/or
+2
samples/bpf/xdp_fwd_user.c
··· 67 67 "usage: %s [OPTS] interface-list\n" 68 68 "\nOPTS:\n" 69 69 " -d detach program\n" 70 + " -S use skb-mode\n" 71 + " -F force loading prog\n" 70 72 " -D direct table lookups (skip fib rules)\n", 71 73 prog); 72 74 }
+88
samples/bpf/xdp_redirect_map_multi_kern.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #define KBUILD_MODNAME "foo" 3 + #include <uapi/linux/bpf.h> 4 + #include <linux/in.h> 5 + #include <linux/if_ether.h> 6 + #include <linux/ip.h> 7 + #include <linux/ipv6.h> 8 + #include <bpf/bpf_helpers.h> 9 + 10 + struct { 11 + __uint(type, BPF_MAP_TYPE_DEVMAP_HASH); 12 + __uint(key_size, sizeof(int)); 13 + __uint(value_size, sizeof(int)); 14 + __uint(max_entries, 32); 15 + } forward_map_general SEC(".maps"); 16 + 17 + struct { 18 + __uint(type, BPF_MAP_TYPE_DEVMAP_HASH); 19 + __uint(key_size, sizeof(int)); 20 + __uint(value_size, sizeof(struct bpf_devmap_val)); 21 + __uint(max_entries, 32); 22 + } forward_map_native SEC(".maps"); 23 + 24 + struct { 25 + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 26 + __type(key, u32); 27 + __type(value, long); 28 + __uint(max_entries, 1); 29 + } rxcnt SEC(".maps"); 30 + 31 + /* map to store egress interfaces mac addresses, set the 32 + * max_entries to 1 and extend it in user space prog. 33 + */ 34 + struct { 35 + __uint(type, BPF_MAP_TYPE_ARRAY); 36 + __type(key, u32); 37 + __type(value, __be64); 38 + __uint(max_entries, 1); 39 + } mac_map SEC(".maps"); 40 + 41 + static int xdp_redirect_map(struct xdp_md *ctx, void *forward_map) 42 + { 43 + long *value; 44 + u32 key = 0; 45 + 46 + /* count packet in global counter */ 47 + value = bpf_map_lookup_elem(&rxcnt, &key); 48 + if (value) 49 + *value += 1; 50 + 51 + return bpf_redirect_map(forward_map, key, 52 + BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS); 53 + } 54 + 55 + SEC("xdp_redirect_general") 56 + int xdp_redirect_map_general(struct xdp_md *ctx) 57 + { 58 + return xdp_redirect_map(ctx, &forward_map_general); 59 + } 60 + 61 + SEC("xdp_redirect_native") 62 + int xdp_redirect_map_native(struct xdp_md *ctx) 63 + { 64 + return xdp_redirect_map(ctx, &forward_map_native); 65 + } 66 + 67 + SEC("xdp_devmap/map_prog") 68 + int xdp_devmap_prog(struct xdp_md *ctx) 69 + { 70 + void *data_end = (void *)(long)ctx->data_end; 71 + void *data = (void 
*)(long)ctx->data; 72 + u32 key = ctx->egress_ifindex; 73 + struct ethhdr *eth = data; 74 + __be64 *mac; 75 + u64 nh_off; 76 + 77 + nh_off = sizeof(*eth); 78 + if (data + nh_off > data_end) 79 + return XDP_DROP; 80 + 81 + mac = bpf_map_lookup_elem(&mac_map, &key); 82 + if (mac) 83 + __builtin_memcpy(eth->h_source, mac, ETH_ALEN); 84 + 85 + return XDP_PASS; 86 + } 87 + 88 + char _license[] SEC("license") = "GPL";
+302
samples/bpf/xdp_redirect_map_multi_user.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <linux/if_link.h> 4 + #include <assert.h> 5 + #include <errno.h> 6 + #include <signal.h> 7 + #include <stdio.h> 8 + #include <stdlib.h> 9 + #include <string.h> 10 + #include <net/if.h> 11 + #include <unistd.h> 12 + #include <libgen.h> 13 + #include <sys/resource.h> 14 + #include <sys/ioctl.h> 15 + #include <sys/types.h> 16 + #include <sys/socket.h> 17 + #include <netinet/in.h> 18 + 19 + #include "bpf_util.h" 20 + #include <bpf/bpf.h> 21 + #include <bpf/libbpf.h> 22 + 23 + #define MAX_IFACE_NUM 32 24 + 25 + static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST; 26 + static int ifaces[MAX_IFACE_NUM] = {}; 27 + static int rxcnt_map_fd; 28 + 29 + static void int_exit(int sig) 30 + { 31 + __u32 prog_id = 0; 32 + int i; 33 + 34 + for (i = 0; ifaces[i] > 0; i++) { 35 + if (bpf_get_link_xdp_id(ifaces[i], &prog_id, xdp_flags)) { 36 + printf("bpf_get_link_xdp_id failed\n"); 37 + exit(1); 38 + } 39 + if (prog_id) 40 + bpf_set_link_xdp_fd(ifaces[i], -1, xdp_flags); 41 + } 42 + 43 + exit(0); 44 + } 45 + 46 + static void poll_stats(int interval) 47 + { 48 + unsigned int nr_cpus = bpf_num_possible_cpus(); 49 + __u64 values[nr_cpus], prev[nr_cpus]; 50 + 51 + memset(prev, 0, sizeof(prev)); 52 + 53 + while (1) { 54 + __u64 sum = 0; 55 + __u32 key = 0; 56 + int i; 57 + 58 + sleep(interval); 59 + assert(bpf_map_lookup_elem(rxcnt_map_fd, &key, values) == 0); 60 + for (i = 0; i < nr_cpus; i++) 61 + sum += (values[i] - prev[i]); 62 + if (sum) 63 + printf("Forwarding %10llu pkt/s\n", sum / interval); 64 + memcpy(prev, values, sizeof(values)); 65 + } 66 + } 67 + 68 + static int get_mac_addr(unsigned int ifindex, void *mac_addr) 69 + { 70 + char ifname[IF_NAMESIZE]; 71 + struct ifreq ifr; 72 + int fd, ret = -1; 73 + 74 + fd = socket(AF_INET, SOCK_DGRAM, 0); 75 + if (fd < 0) 76 + return ret; 77 + 78 + if (!if_indextoname(ifindex, ifname)) 79 + goto err_out; 80 + 81 + strcpy(ifr.ifr_name, ifname); 82 + 83 + if 
(ioctl(fd, SIOCGIFHWADDR, &ifr) != 0) 84 + goto err_out; 85 + 86 + memcpy(mac_addr, ifr.ifr_hwaddr.sa_data, 6 * sizeof(char)); 87 + ret = 0; 88 + 89 + err_out: 90 + close(fd); 91 + return ret; 92 + } 93 + 94 + static int update_mac_map(struct bpf_object *obj) 95 + { 96 + int i, ret = -1, mac_map_fd; 97 + unsigned char mac_addr[6]; 98 + unsigned int ifindex; 99 + 100 + mac_map_fd = bpf_object__find_map_fd_by_name(obj, "mac_map"); 101 + if (mac_map_fd < 0) { 102 + printf("find mac map fd failed\n"); 103 + return ret; 104 + } 105 + 106 + for (i = 0; ifaces[i] > 0; i++) { 107 + ifindex = ifaces[i]; 108 + 109 + ret = get_mac_addr(ifindex, mac_addr); 110 + if (ret < 0) { 111 + printf("get interface %d mac failed\n", ifindex); 112 + return ret; 113 + } 114 + 115 + ret = bpf_map_update_elem(mac_map_fd, &ifindex, mac_addr, 0); 116 + if (ret) { 117 + perror("bpf_update_elem mac_map_fd"); 118 + return ret; 119 + } 120 + } 121 + 122 + return 0; 123 + } 124 + 125 + static void usage(const char *prog) 126 + { 127 + fprintf(stderr, 128 + "usage: %s [OPTS] <IFNAME|IFINDEX> <IFNAME|IFINDEX> ...\n" 129 + "OPTS:\n" 130 + " -S use skb-mode\n" 131 + " -N enforce native mode\n" 132 + " -F force loading prog\n" 133 + " -X load xdp program on egress\n", 134 + prog); 135 + } 136 + 137 + int main(int argc, char **argv) 138 + { 139 + int i, ret, opt, forward_map_fd, max_ifindex = 0; 140 + struct bpf_program *ingress_prog, *egress_prog; 141 + int ingress_prog_fd, egress_prog_fd = 0; 142 + struct bpf_devmap_val devmap_val; 143 + bool attach_egress_prog = false; 144 + char ifname[IF_NAMESIZE]; 145 + struct bpf_map *mac_map; 146 + struct bpf_object *obj; 147 + unsigned int ifindex; 148 + char filename[256]; 149 + 150 + while ((opt = getopt(argc, argv, "SNFX")) != -1) { 151 + switch (opt) { 152 + case 'S': 153 + xdp_flags |= XDP_FLAGS_SKB_MODE; 154 + break; 155 + case 'N': 156 + /* default, set below */ 157 + break; 158 + case 'F': 159 + xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST; 160 + break; 161 
+ case 'X': 162 + attach_egress_prog = true; 163 + break; 164 + default: 165 + usage(basename(argv[0])); 166 + return 1; 167 + } 168 + } 169 + 170 + if (!(xdp_flags & XDP_FLAGS_SKB_MODE)) { 171 + xdp_flags |= XDP_FLAGS_DRV_MODE; 172 + } else if (attach_egress_prog) { 173 + printf("Load xdp program on egress with SKB mode not supported yet\n"); 174 + return 1; 175 + } 176 + 177 + if (optind == argc) { 178 + printf("usage: %s <IFNAME|IFINDEX> <IFNAME|IFINDEX> ...\n", argv[0]); 179 + return 1; 180 + } 181 + 182 + printf("Get interfaces"); 183 + for (i = 0; i < MAX_IFACE_NUM && argv[optind + i]; i++) { 184 + ifaces[i] = if_nametoindex(argv[optind + i]); 185 + if (!ifaces[i]) 186 + ifaces[i] = strtoul(argv[optind + i], NULL, 0); 187 + if (!if_indextoname(ifaces[i], ifname)) { 188 + perror("Invalid interface name or i"); 189 + return 1; 190 + } 191 + 192 + /* Find the largest index number */ 193 + if (ifaces[i] > max_ifindex) 194 + max_ifindex = ifaces[i]; 195 + 196 + printf(" %d", ifaces[i]); 197 + } 198 + printf("\n"); 199 + 200 + snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 201 + 202 + obj = bpf_object__open(filename); 203 + if (libbpf_get_error(obj)) { 204 + printf("ERROR: opening BPF object file failed\n"); 205 + obj = NULL; 206 + goto err_out; 207 + } 208 + 209 + /* Reset the map size to max ifindex + 1 */ 210 + if (attach_egress_prog) { 211 + mac_map = bpf_object__find_map_by_name(obj, "mac_map"); 212 + ret = bpf_map__resize(mac_map, max_ifindex + 1); 213 + if (ret < 0) { 214 + printf("ERROR: reset mac map size failed\n"); 215 + goto err_out; 216 + } 217 + } 218 + 219 + /* load BPF program */ 220 + if (bpf_object__load(obj)) { 221 + printf("ERROR: loading BPF object file failed\n"); 222 + goto err_out; 223 + } 224 + 225 + if (xdp_flags & XDP_FLAGS_SKB_MODE) { 226 + ingress_prog = bpf_object__find_program_by_name(obj, "xdp_redirect_map_general"); 227 + forward_map_fd = bpf_object__find_map_fd_by_name(obj, "forward_map_general"); 228 + } else { 229 + 
ingress_prog = bpf_object__find_program_by_name(obj, "xdp_redirect_map_native"); 230 + forward_map_fd = bpf_object__find_map_fd_by_name(obj, "forward_map_native"); 231 + } 232 + if (!ingress_prog || forward_map_fd < 0) { 233 + printf("finding ingress_prog/forward_map in obj file failed\n"); 234 + goto err_out; 235 + } 236 + 237 + ingress_prog_fd = bpf_program__fd(ingress_prog); 238 + if (ingress_prog_fd < 0) { 239 + printf("find ingress_prog fd failed\n"); 240 + goto err_out; 241 + } 242 + 243 + rxcnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rxcnt"); 244 + if (rxcnt_map_fd < 0) { 245 + printf("bpf_object__find_map_fd_by_name failed\n"); 246 + goto err_out; 247 + } 248 + 249 + if (attach_egress_prog) { 250 + /* Update mac_map with all egress interfaces' mac addr */ 251 + if (update_mac_map(obj) < 0) { 252 + printf("Error: update mac map failed"); 253 + goto err_out; 254 + } 255 + 256 + /* Find egress prog fd */ 257 + egress_prog = bpf_object__find_program_by_name(obj, "xdp_devmap_prog"); 258 + if (!egress_prog) { 259 + printf("finding egress_prog in obj file failed\n"); 260 + goto err_out; 261 + } 262 + egress_prog_fd = bpf_program__fd(egress_prog); 263 + if (egress_prog_fd < 0) { 264 + printf("find egress_prog fd failed\n"); 265 + goto err_out; 266 + } 267 + } 268 + 269 + /* Remove attached program when program is interrupted or killed */ 270 + signal(SIGINT, int_exit); 271 + signal(SIGTERM, int_exit); 272 + 273 + /* Init forward multicast groups */ 274 + for (i = 0; ifaces[i] > 0; i++) { 275 + ifindex = ifaces[i]; 276 + 277 + /* bind prog_fd to each interface */ 278 + ret = bpf_set_link_xdp_fd(ifindex, ingress_prog_fd, xdp_flags); 279 + if (ret) { 280 + printf("Set xdp fd failed on %d\n", ifindex); 281 + goto err_out; 282 + } 283 + 284 + /* Add all the interfaces to forward group and attach 285 + * egress devmap programe if exist 286 + */ 287 + devmap_val.ifindex = ifindex; 288 + devmap_val.bpf_prog.fd = egress_prog_fd; 289 + ret = 
bpf_map_update_elem(forward_map_fd, &ifindex, &devmap_val, 0); 290 + if (ret) { 291 + perror("bpf_map_update_elem forward_map"); 292 + goto err_out; 293 + } 294 + } 295 + 296 + poll_stats(2); 297 + 298 + return 0; 299 + 300 + err_out: 301 + return 1; 302 + }
+2 -1
samples/bpf/xdp_sample_pkts_user.c
··· 103 103 fprintf(stderr, 104 104 "%s: %s [OPTS] <ifname|ifindex>\n\n" 105 105 "OPTS:\n" 106 - " -F force loading prog\n", 106 + " -F force loading prog\n" 107 + " -S use skb-mode\n", 107 108 __func__, prog); 108 109 } 109 110
+4 -1
tools/bpf/bpftool/Makefile
··· 136 136 137 137 BPFTOOL_BOOTSTRAP := $(BOOTSTRAP_OUTPUT)bpftool 138 138 139 - BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o xlated_dumper.o btf_dumper.o) $(OUTPUT)disasm.o 139 + BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o xlated_dumper.o btf_dumper.o disasm.o) 140 140 OBJS = $(patsubst %.c,$(OUTPUT)%.o,$(SRCS)) $(OUTPUT)disasm.o 141 141 142 142 VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \ ··· 179 179 endif 180 180 181 181 CFLAGS += $(if $(BUILD_BPF_SKELS),,-DBPFTOOL_WITHOUT_SKELETONS) 182 + 183 + $(BOOTSTRAP_OUTPUT)disasm.o: $(srctree)/kernel/bpf/disasm.c 184 + $(QUIET_CC)$(HOSTCC) $(CFLAGS) -c -MMD -o $@ $< 182 185 183 186 $(OUTPUT)disasm.o: $(srctree)/kernel/bpf/disasm.c 184 187 $(QUIET_CC)$(CC) $(CFLAGS) -c -MMD -o $@ $<
+18 -9
tools/bpf/bpftool/gen.c
··· 713 713 #ifndef %2$s \n\ 714 714 #define %2$s \n\ 715 715 \n\ 716 + #include <errno.h> \n\ 716 717 #include <stdlib.h> \n\ 717 718 #include <bpf/libbpf.h> \n\ 718 719 \n\ ··· 794 793 %1$s__open_opts(const struct bpf_object_open_opts *opts) \n\ 795 794 { \n\ 796 795 struct %1$s *obj; \n\ 796 + int err; \n\ 797 797 \n\ 798 798 obj = (struct %1$s *)calloc(1, sizeof(*obj)); \n\ 799 - if (!obj) \n\ 799 + if (!obj) { \n\ 800 + errno = ENOMEM; \n\ 800 801 return NULL; \n\ 801 - if (%1$s__create_skeleton(obj)) \n\ 802 - goto err; \n\ 803 - if (bpf_object__open_skeleton(obj->skeleton, opts)) \n\ 804 - goto err; \n\ 802 + } \n\ 803 + \n\ 804 + err = %1$s__create_skeleton(obj); \n\ 805 + err = err ?: bpf_object__open_skeleton(obj->skeleton, opts);\n\ 806 + if (err) \n\ 807 + goto err_out; \n\ 805 808 \n\ 806 809 return obj; \n\ 807 - err: \n\ 810 + err_out: \n\ 808 811 %1$s__destroy(obj); \n\ 812 + errno = -err; \n\ 809 813 return NULL; \n\ 810 814 } \n\ 811 815 \n\ ··· 830 824 %1$s__open_and_load(void) \n\ 831 825 { \n\ 832 826 struct %1$s *obj; \n\ 827 + int err; \n\ 833 828 \n\ 834 829 obj = %1$s__open(); \n\ 835 830 if (!obj) \n\ 836 831 return NULL; \n\ 837 - if (%1$s__load(obj)) { \n\ 832 + err = %1$s__load(obj); \n\ 833 + if (err) { \n\ 838 834 %1$s__destroy(obj); \n\ 835 + errno = -err; \n\ 839 836 return NULL; \n\ 840 837 } \n\ 841 838 return obj; \n\ ··· 869 860 \n\ 870 861 s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\ 871 862 if (!s) \n\ 872 - return -1; \n\ 863 + goto err; \n\ 873 864 obj->skeleton = s; \n\ 874 865 \n\ 875 866 s->sz = sizeof(*s); \n\ ··· 958 949 return 0; \n\ 959 950 err: \n\ 960 951 bpf_object__destroy_skeleton(s); \n\ 961 - return -1; \n\ 952 + return -ENOMEM; \n\ 962 953 } \n\ 963 954 \n\ 964 955 #endif /* %s */ \n\
+3 -1
tools/bpf/bpftool/main.c
··· 341 341 n_argc = make_args(buf, n_argv, BATCH_ARG_NB_MAX, lines); 342 342 if (!n_argc) 343 343 continue; 344 - if (n_argc < 0) 344 + if (n_argc < 0) { 345 + err = n_argc; 345 346 goto err_close; 347 + } 346 348 347 349 if (json_output) { 348 350 jsonw_start_object(json_wtr);
+41 -2
tools/include/uapi/linux/bpf.h
··· 527 527 * Look up an element with the given *key* in the map referred to 528 528 * by the file descriptor *fd*, and if found, delete the element. 529 529 * 530 + * For **BPF_MAP_TYPE_QUEUE** and **BPF_MAP_TYPE_STACK** map 531 + * types, the *flags* argument needs to be set to 0, but for other 532 + * map types, it may be specified as: 533 + * 534 + * **BPF_F_LOCK** 535 + * Look up and delete the value of a spin-locked map 536 + * without returning the lock. This must be specified if 537 + * the elements contain a spinlock. 538 + * 530 539 * The **BPF_MAP_TYPE_QUEUE** and **BPF_MAP_TYPE_STACK** map types 531 540 * implement this command as a "pop" operation, deleting the top 532 541 * element rather than one corresponding to *key*. ··· 545 536 * This command is only valid for the following map types: 546 537 * * **BPF_MAP_TYPE_QUEUE** 547 538 * * **BPF_MAP_TYPE_STACK** 539 + * * **BPF_MAP_TYPE_HASH** 540 + * * **BPF_MAP_TYPE_PERCPU_HASH** 541 + * * **BPF_MAP_TYPE_LRU_HASH** 542 + * * **BPF_MAP_TYPE_LRU_PERCPU_HASH** 548 543 * 549 544 * Return 550 545 * Returns zero on success. On error, -1 is returned and *errno* ··· 994 981 BPF_SK_LOOKUP, 995 982 BPF_XDP, 996 983 BPF_SK_SKB_VERDICT, 984 + BPF_SK_REUSEPORT_SELECT, 985 + BPF_SK_REUSEPORT_SELECT_OR_MIGRATE, 997 986 __MAX_BPF_ATTACH_TYPE 998 987 }; 999 988 ··· 2557 2542 * The lower two bits of *flags* are used as the return code if 2558 2543 * the map lookup fails. This is so that the return value can be 2559 2544 * one of the XDP program return codes up to **XDP_TX**, as chosen 2560 - * by the caller. Any higher bits in the *flags* argument must be 2561 - * unset. 2545 + * by the caller. The higher bits of *flags* can be set to 2546 + * BPF_F_BROADCAST or BPF_F_EXCLUDE_INGRESS as defined below. 2547 + * 2548 + * With BPF_F_BROADCAST the packet will be broadcasted to all the 2549 + * interfaces in the map, with BPF_F_EXCLUDE_INGRESS the ingress 2550 + * interface will be excluded when do broadcasting. 
2562 2551 * 2563 2552 * See also **bpf_redirect**\ (), which only supports redirecting 2564 2553 * to an ifindex, but doesn't require a map to do so. ··· 5128 5109 BPF_F_BPRM_SECUREEXEC = (1ULL << 0), 5129 5110 }; 5130 5111 5112 + /* Flags for bpf_redirect_map helper */ 5113 + enum { 5114 + BPF_F_BROADCAST = (1ULL << 3), 5115 + BPF_F_EXCLUDE_INGRESS = (1ULL << 4), 5116 + }; 5117 + 5131 5118 #define __bpf_md_ptr(type, name) \ 5132 5119 union { \ 5133 5120 type name; \ ··· 5418 5393 __u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */ 5419 5394 __u32 bind_inany; /* Is sock bound to an INANY address? */ 5420 5395 __u32 hash; /* A hash of the packet 4 tuples */ 5396 + /* When reuse->migrating_sk is NULL, it is selecting a sk for the 5397 + * new incoming connection request (e.g. selecting a listen sk for 5398 + * the received SYN in the TCP case). reuse->sk is one of the sk 5399 + * in the reuseport group. The bpf prog can use reuse->sk to learn 5400 + * the local listening ip/port without looking into the skb. 5401 + * 5402 + * When reuse->migrating_sk is not NULL, reuse->sk is closed and 5403 + * reuse->migrating_sk is the socket that needs to be migrated 5404 + * to another listening socket. migrating_sk could be a fullsock 5405 + * sk that is fully established or a reqsk that is in-the-middle 5406 + * of 3-way handshake. 5407 + */ 5408 + __bpf_md_ptr(struct bpf_sock *, sk); 5409 + __bpf_md_ptr(struct bpf_sock *, migrating_sk); 5421 5410 }; 5422 5411 5423 5412 #define BPF_TAG_SIZE 8
+7 -11
tools/lib/bpf/Makefile
··· 223 223 $(call do_install_mkdir,$(libdir_SQ)); \ 224 224 cp -fpR $(LIB_FILE) $(DESTDIR)$(libdir_SQ) 225 225 226 + INSTALL_HEADERS = bpf.h libbpf.h btf.h libbpf_common.h libbpf_legacy.h xsk.h \ 227 + bpf_helpers.h $(BPF_HELPER_DEFS) bpf_tracing.h \ 228 + bpf_endian.h bpf_core_read.h skel_internal.h 229 + 226 230 install_headers: $(BPF_HELPER_DEFS) 227 - $(call QUIET_INSTALL, headers) \ 228 - $(call do_install,bpf.h,$(prefix)/include/bpf,644); \ 229 - $(call do_install,libbpf.h,$(prefix)/include/bpf,644); \ 230 - $(call do_install,btf.h,$(prefix)/include/bpf,644); \ 231 - $(call do_install,libbpf_common.h,$(prefix)/include/bpf,644); \ 232 - $(call do_install,xsk.h,$(prefix)/include/bpf,644); \ 233 - $(call do_install,bpf_helpers.h,$(prefix)/include/bpf,644); \ 234 - $(call do_install,$(BPF_HELPER_DEFS),$(prefix)/include/bpf,644); \ 235 - $(call do_install,bpf_tracing.h,$(prefix)/include/bpf,644); \ 236 - $(call do_install,bpf_endian.h,$(prefix)/include/bpf,644); \ 237 - $(call do_install,bpf_core_read.h,$(prefix)/include/bpf,644); 231 + $(call QUIET_INSTALL, headers) \ 232 + $(foreach hdr,$(INSTALL_HEADERS), \ 233 + $(call do_install,$(hdr),$(prefix)/include/bpf,644);) 238 234 239 235 install_pkgconfig: $(PC_FILE) 240 236 $(call QUIET_INSTALL, $(PC_FILE)) \
+141 -60
tools/lib/bpf/bpf.c
··· 80 80 int bpf_create_map_xattr(const struct bpf_create_map_attr *create_attr) 81 81 { 82 82 union bpf_attr attr; 83 + int fd; 83 84 84 85 memset(&attr, '\0', sizeof(attr)); 85 86 ··· 103 102 else 104 103 attr.inner_map_fd = create_attr->inner_map_fd; 105 104 106 - return sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr)); 105 + fd = sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr)); 106 + return libbpf_err_errno(fd); 107 107 } 108 108 109 109 int bpf_create_map_node(enum bpf_map_type map_type, const char *name, ··· 162 160 __u32 map_flags, int node) 163 161 { 164 162 union bpf_attr attr; 163 + int fd; 165 164 166 165 memset(&attr, '\0', sizeof(attr)); 167 166 ··· 181 178 attr.numa_node = node; 182 179 } 183 180 184 - return sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr)); 181 + fd = sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr)); 182 + return libbpf_err_errno(fd); 185 183 } 186 184 187 185 int bpf_create_map_in_map(enum bpf_map_type map_type, const char *name, ··· 226 222 int fd; 227 223 228 224 if (!load_attr->log_buf != !load_attr->log_buf_sz) 229 - return -EINVAL; 225 + return libbpf_err(-EINVAL); 230 226 231 227 if (load_attr->log_level > (4 | 2 | 1) || (load_attr->log_level && !load_attr->log_buf)) 232 - return -EINVAL; 228 + return libbpf_err(-EINVAL); 233 229 234 230 memset(&attr, 0, sizeof(attr)); 235 231 attr.prog_type = load_attr->prog_type; ··· 285 281 load_attr->func_info_cnt, 286 282 load_attr->func_info_rec_size, 287 283 attr.func_info_rec_size); 288 - if (!finfo) 284 + if (!finfo) { 285 + errno = E2BIG; 289 286 goto done; 287 + } 290 288 291 289 attr.func_info = ptr_to_u64(finfo); 292 290 attr.func_info_rec_size = load_attr->func_info_rec_size; ··· 299 293 load_attr->line_info_cnt, 300 294 load_attr->line_info_rec_size, 301 295 attr.line_info_rec_size); 302 - if (!linfo) 296 + if (!linfo) { 297 + errno = E2BIG; 303 298 goto done; 299 + } 304 300 305 301 attr.line_info = ptr_to_u64(linfo); 306 302 attr.line_info_rec_size = load_attr->line_info_rec_size; ··· 326 
318 327 319 fd = sys_bpf_prog_load(&attr, sizeof(attr)); 328 320 done: 321 + /* free() doesn't affect errno, so we don't need to restore it */ 329 322 free(finfo); 330 323 free(linfo); 331 - return fd; 324 + return libbpf_err_errno(fd); 332 325 } 333 326 334 327 int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr, ··· 338 329 struct bpf_prog_load_params p = {}; 339 330 340 331 if (!load_attr || !log_buf != !log_buf_sz) 341 - return -EINVAL; 332 + return libbpf_err(-EINVAL); 342 333 343 334 p.prog_type = load_attr->prog_type; 344 335 p.expected_attach_type = load_attr->expected_attach_type; ··· 400 391 int log_level) 401 392 { 402 393 union bpf_attr attr; 394 + int fd; 403 395 404 396 memset(&attr, 0, sizeof(attr)); 405 397 attr.prog_type = type; ··· 414 404 attr.kern_version = kern_version; 415 405 attr.prog_flags = prog_flags; 416 406 417 - return sys_bpf_prog_load(&attr, sizeof(attr)); 407 + fd = sys_bpf_prog_load(&attr, sizeof(attr)); 408 + return libbpf_err_errno(fd); 418 409 } 419 410 420 411 int bpf_map_update_elem(int fd, const void *key, const void *value, 421 412 __u64 flags) 422 413 { 423 414 union bpf_attr attr; 415 + int ret; 424 416 425 417 memset(&attr, 0, sizeof(attr)); 426 418 attr.map_fd = fd; ··· 430 418 attr.value = ptr_to_u64(value); 431 419 attr.flags = flags; 432 420 433 - return sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)); 421 + ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)); 422 + return libbpf_err_errno(ret); 434 423 } 435 424 436 425 int bpf_map_lookup_elem(int fd, const void *key, void *value) 437 426 { 438 427 union bpf_attr attr; 428 + int ret; 439 429 440 430 memset(&attr, 0, sizeof(attr)); 441 431 attr.map_fd = fd; 442 432 attr.key = ptr_to_u64(key); 443 433 attr.value = ptr_to_u64(value); 444 434 445 - return sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr)); 435 + ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr)); 436 + return libbpf_err_errno(ret); 446 437 } 447 438 448 439 int 
bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags) 440 + { 441 + union bpf_attr attr; 442 + int ret; 443 + 444 + memset(&attr, 0, sizeof(attr)); 445 + attr.map_fd = fd; 446 + attr.key = ptr_to_u64(key); 447 + attr.value = ptr_to_u64(value); 448 + attr.flags = flags; 449 + 450 + ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr)); 451 + return libbpf_err_errno(ret); 452 + } 453 + 454 + int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value) 455 + { 456 + union bpf_attr attr; 457 + int ret; 458 + 459 + memset(&attr, 0, sizeof(attr)); 460 + attr.map_fd = fd; 461 + attr.key = ptr_to_u64(key); 462 + attr.value = ptr_to_u64(value); 463 + 464 + ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr)); 465 + return libbpf_err_errno(ret); 466 + } 467 + 468 + int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags) 449 469 { 450 470 union bpf_attr attr; 451 471 ··· 486 442 attr.key = ptr_to_u64(key); 487 443 attr.value = ptr_to_u64(value); 488 444 attr.flags = flags; 489 - 490 - return sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr)); 491 - } 492 - 493 - int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value) 494 - { 495 - union bpf_attr attr; 496 - 497 - memset(&attr, 0, sizeof(attr)); 498 - attr.map_fd = fd; 499 - attr.key = ptr_to_u64(key); 500 - attr.value = ptr_to_u64(value); 501 445 502 446 return sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr)); 503 447 } ··· 493 461 int bpf_map_delete_elem(int fd, const void *key) 494 462 { 495 463 union bpf_attr attr; 464 + int ret; 496 465 497 466 memset(&attr, 0, sizeof(attr)); 498 467 attr.map_fd = fd; 499 468 attr.key = ptr_to_u64(key); 500 469 501 - return sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr)); 470 + ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr)); 471 + return libbpf_err_errno(ret); 502 472 } 503 473 504 474 int bpf_map_get_next_key(int fd, const void *key, void *next_key) 505 475 { 506 
476 union bpf_attr attr; 477 + int ret; 507 478 508 479 memset(&attr, 0, sizeof(attr)); 509 480 attr.map_fd = fd; 510 481 attr.key = ptr_to_u64(key); 511 482 attr.next_key = ptr_to_u64(next_key); 512 483 513 - return sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr)); 484 + ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr)); 485 + return libbpf_err_errno(ret); 514 486 } 515 487 516 488 int bpf_map_freeze(int fd) 517 489 { 518 490 union bpf_attr attr; 491 + int ret; 519 492 520 493 memset(&attr, 0, sizeof(attr)); 521 494 attr.map_fd = fd; 522 495 523 - return sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr)); 496 + ret = sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr)); 497 + return libbpf_err_errno(ret); 524 498 } 525 499 526 500 static int bpf_map_batch_common(int cmd, int fd, void *in_batch, ··· 538 500 int ret; 539 501 540 502 if (!OPTS_VALID(opts, bpf_map_batch_opts)) 541 - return -EINVAL; 503 + return libbpf_err(-EINVAL); 542 504 543 505 memset(&attr, 0, sizeof(attr)); 544 506 attr.batch.map_fd = fd; ··· 553 515 ret = sys_bpf(cmd, &attr, sizeof(attr)); 554 516 *count = attr.batch.count; 555 517 556 - return ret; 518 + return libbpf_err_errno(ret); 557 519 } 558 520 559 521 int bpf_map_delete_batch(int fd, void *keys, __u32 *count, ··· 590 552 int bpf_obj_pin(int fd, const char *pathname) 591 553 { 592 554 union bpf_attr attr; 555 + int ret; 593 556 594 557 memset(&attr, 0, sizeof(attr)); 595 558 attr.pathname = ptr_to_u64((void *)pathname); 596 559 attr.bpf_fd = fd; 597 560 598 - return sys_bpf(BPF_OBJ_PIN, &attr, sizeof(attr)); 561 + ret = sys_bpf(BPF_OBJ_PIN, &attr, sizeof(attr)); 562 + return libbpf_err_errno(ret); 599 563 } 600 564 601 565 int bpf_obj_get(const char *pathname) 602 566 { 603 567 union bpf_attr attr; 568 + int fd; 604 569 605 570 memset(&attr, 0, sizeof(attr)); 606 571 attr.pathname = ptr_to_u64((void *)pathname); 607 572 608 - return sys_bpf(BPF_OBJ_GET, &attr, sizeof(attr)); 573 + fd = sys_bpf(BPF_OBJ_GET, &attr, sizeof(attr)); 574 + return 
libbpf_err_errno(fd); 609 575 } 610 576 611 577 int bpf_prog_attach(int prog_fd, int target_fd, enum bpf_attach_type type, ··· 627 585 const struct bpf_prog_attach_opts *opts) 628 586 { 629 587 union bpf_attr attr; 588 + int ret; 630 589 631 590 if (!OPTS_VALID(opts, bpf_prog_attach_opts)) 632 - return -EINVAL; 591 + return libbpf_err(-EINVAL); 633 592 634 593 memset(&attr, 0, sizeof(attr)); 635 594 attr.target_fd = target_fd; ··· 639 596 attr.attach_flags = OPTS_GET(opts, flags, 0); 640 597 attr.replace_bpf_fd = OPTS_GET(opts, replace_prog_fd, 0); 641 598 642 - return sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr)); 599 + ret = sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr)); 600 + return libbpf_err_errno(ret); 643 601 } 644 602 645 603 int bpf_prog_detach(int target_fd, enum bpf_attach_type type) 646 604 { 647 605 union bpf_attr attr; 606 + int ret; 648 607 649 608 memset(&attr, 0, sizeof(attr)); 650 609 attr.target_fd = target_fd; 651 610 attr.attach_type = type; 652 611 653 - return sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr)); 612 + ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr)); 613 + return libbpf_err_errno(ret); 654 614 } 655 615 656 616 int bpf_prog_detach2(int prog_fd, int target_fd, enum bpf_attach_type type) 657 617 { 658 618 union bpf_attr attr; 619 + int ret; 659 620 660 621 memset(&attr, 0, sizeof(attr)); 661 622 attr.target_fd = target_fd; 662 623 attr.attach_bpf_fd = prog_fd; 663 624 attr.attach_type = type; 664 625 665 - return sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr)); 626 + ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr)); 627 + return libbpf_err_errno(ret); 666 628 } 667 629 668 630 int bpf_link_create(int prog_fd, int target_fd, ··· 676 628 { 677 629 __u32 target_btf_id, iter_info_len; 678 630 union bpf_attr attr; 631 + int fd; 679 632 680 633 if (!OPTS_VALID(opts, bpf_link_create_opts)) 681 - return -EINVAL; 634 + return libbpf_err(-EINVAL); 682 635 683 636 iter_info_len = OPTS_GET(opts, iter_info_len, 0); 684 637 target_btf_id = 
OPTS_GET(opts, target_btf_id, 0); 685 638 686 639 if (iter_info_len && target_btf_id) 687 - return -EINVAL; 640 + return libbpf_err(-EINVAL); 688 641 689 642 memset(&attr, 0, sizeof(attr)); 690 643 attr.link_create.prog_fd = prog_fd; ··· 701 652 attr.link_create.target_btf_id = target_btf_id; 702 653 } 703 654 704 - return sys_bpf(BPF_LINK_CREATE, &attr, sizeof(attr)); 655 + fd = sys_bpf(BPF_LINK_CREATE, &attr, sizeof(attr)); 656 + return libbpf_err_errno(fd); 705 657 } 706 658 707 659 int bpf_link_detach(int link_fd) 708 660 { 709 661 union bpf_attr attr; 662 + int ret; 710 663 711 664 memset(&attr, 0, sizeof(attr)); 712 665 attr.link_detach.link_fd = link_fd; 713 666 714 - return sys_bpf(BPF_LINK_DETACH, &attr, sizeof(attr)); 667 + ret = sys_bpf(BPF_LINK_DETACH, &attr, sizeof(attr)); 668 + return libbpf_err_errno(ret); 715 669 } 716 670 717 671 int bpf_link_update(int link_fd, int new_prog_fd, 718 672 const struct bpf_link_update_opts *opts) 719 673 { 720 674 union bpf_attr attr; 675 + int ret; 721 676 722 677 if (!OPTS_VALID(opts, bpf_link_update_opts)) 723 - return -EINVAL; 678 + return libbpf_err(-EINVAL); 724 679 725 680 memset(&attr, 0, sizeof(attr)); 726 681 attr.link_update.link_fd = link_fd; ··· 732 679 attr.link_update.flags = OPTS_GET(opts, flags, 0); 733 680 attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0); 734 681 735 - return sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr)); 682 + ret = sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr)); 683 + return libbpf_err_errno(ret); 736 684 } 737 685 738 686 int bpf_iter_create(int link_fd) 739 687 { 740 688 union bpf_attr attr; 689 + int fd; 741 690 742 691 memset(&attr, 0, sizeof(attr)); 743 692 attr.iter_create.link_fd = link_fd; 744 693 745 - return sys_bpf(BPF_ITER_CREATE, &attr, sizeof(attr)); 694 + fd = sys_bpf(BPF_ITER_CREATE, &attr, sizeof(attr)); 695 + return libbpf_err_errno(fd); 746 696 } 747 697 748 698 int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags, ··· 762 
706 attr.query.prog_ids = ptr_to_u64(prog_ids); 763 707 764 708 ret = sys_bpf(BPF_PROG_QUERY, &attr, sizeof(attr)); 709 + 765 710 if (attach_flags) 766 711 *attach_flags = attr.query.attach_flags; 767 712 *prog_cnt = attr.query.prog_cnt; 768 - return ret; 713 + 714 + return libbpf_err_errno(ret); 769 715 } 770 716 771 717 int bpf_prog_test_run(int prog_fd, int repeat, void *data, __u32 size, ··· 785 727 attr.test.repeat = repeat; 786 728 787 729 ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr)); 730 + 788 731 if (size_out) 789 732 *size_out = attr.test.data_size_out; 790 733 if (retval) 791 734 *retval = attr.test.retval; 792 735 if (duration) 793 736 *duration = attr.test.duration; 794 - return ret; 737 + 738 + return libbpf_err_errno(ret); 795 739 } 796 740 797 741 int bpf_prog_test_run_xattr(struct bpf_prog_test_run_attr *test_attr) ··· 802 742 int ret; 803 743 804 744 if (!test_attr->data_out && test_attr->data_size_out > 0) 805 - return -EINVAL; 745 + return libbpf_err(-EINVAL); 806 746 807 747 memset(&attr, 0, sizeof(attr)); 808 748 attr.test.prog_fd = test_attr->prog_fd; ··· 817 757 attr.test.repeat = test_attr->repeat; 818 758 819 759 ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr)); 760 + 820 761 test_attr->data_size_out = attr.test.data_size_out; 821 762 test_attr->ctx_size_out = attr.test.ctx_size_out; 822 763 test_attr->retval = attr.test.retval; 823 764 test_attr->duration = attr.test.duration; 824 - return ret; 765 + 766 + return libbpf_err_errno(ret); 825 767 } 826 768 827 769 int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts) ··· 832 770 int ret; 833 771 834 772 if (!OPTS_VALID(opts, bpf_test_run_opts)) 835 - return -EINVAL; 773 + return libbpf_err(-EINVAL); 836 774 837 775 memset(&attr, 0, sizeof(attr)); 838 776 attr.test.prog_fd = prog_fd; ··· 850 788 attr.test.data_out = ptr_to_u64(OPTS_GET(opts, data_out, NULL)); 851 789 852 790 ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr)); 791 + 853 792 OPTS_SET(opts, 
data_size_out, attr.test.data_size_out); 854 793 OPTS_SET(opts, ctx_size_out, attr.test.ctx_size_out); 855 794 OPTS_SET(opts, duration, attr.test.duration); 856 795 OPTS_SET(opts, retval, attr.test.retval); 857 - return ret; 796 + 797 + return libbpf_err_errno(ret); 858 798 } 859 799 860 800 static int bpf_obj_get_next_id(__u32 start_id, __u32 *next_id, int cmd) ··· 871 807 if (!err) 872 808 *next_id = attr.next_id; 873 809 874 - return err; 810 + return libbpf_err_errno(err); 875 811 } 876 812 877 813 int bpf_prog_get_next_id(__u32 start_id, __u32 *next_id) ··· 897 833 int bpf_prog_get_fd_by_id(__u32 id) 898 834 { 899 835 union bpf_attr attr; 836 + int fd; 900 837 901 838 memset(&attr, 0, sizeof(attr)); 902 839 attr.prog_id = id; 903 840 904 - return sys_bpf(BPF_PROG_GET_FD_BY_ID, &attr, sizeof(attr)); 841 + fd = sys_bpf(BPF_PROG_GET_FD_BY_ID, &attr, sizeof(attr)); 842 + return libbpf_err_errno(fd); 905 843 } 906 844 907 845 int bpf_map_get_fd_by_id(__u32 id) 908 846 { 909 847 union bpf_attr attr; 848 + int fd; 910 849 911 850 memset(&attr, 0, sizeof(attr)); 912 851 attr.map_id = id; 913 852 914 - return sys_bpf(BPF_MAP_GET_FD_BY_ID, &attr, sizeof(attr)); 853 + fd = sys_bpf(BPF_MAP_GET_FD_BY_ID, &attr, sizeof(attr)); 854 + return libbpf_err_errno(fd); 915 855 } 916 856 917 857 int bpf_btf_get_fd_by_id(__u32 id) 918 858 { 919 859 union bpf_attr attr; 860 + int fd; 920 861 921 862 memset(&attr, 0, sizeof(attr)); 922 863 attr.btf_id = id; 923 864 924 - return sys_bpf(BPF_BTF_GET_FD_BY_ID, &attr, sizeof(attr)); 865 + fd = sys_bpf(BPF_BTF_GET_FD_BY_ID, &attr, sizeof(attr)); 866 + return libbpf_err_errno(fd); 925 867 } 926 868 927 869 int bpf_link_get_fd_by_id(__u32 id) 928 870 { 929 871 union bpf_attr attr; 872 + int fd; 930 873 931 874 memset(&attr, 0, sizeof(attr)); 932 875 attr.link_id = id; 933 876 934 - return sys_bpf(BPF_LINK_GET_FD_BY_ID, &attr, sizeof(attr)); 877 + fd = sys_bpf(BPF_LINK_GET_FD_BY_ID, &attr, sizeof(attr)); 878 + return libbpf_err_errno(fd); 935 
879 } 936 880 937 881 int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len) ··· 953 881 attr.info.info = ptr_to_u64(info); 954 882 955 883 err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr)); 884 + 956 885 if (!err) 957 886 *info_len = attr.info.info_len; 958 887 959 - return err; 888 + return libbpf_err_errno(err); 960 889 } 961 890 962 891 int bpf_raw_tracepoint_open(const char *name, int prog_fd) 963 892 { 964 893 union bpf_attr attr; 894 + int fd; 965 895 966 896 memset(&attr, 0, sizeof(attr)); 967 897 attr.raw_tracepoint.name = ptr_to_u64(name); 968 898 attr.raw_tracepoint.prog_fd = prog_fd; 969 899 970 - return sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr)); 900 + fd = sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr)); 901 + return libbpf_err_errno(fd); 971 902 } 972 903 973 904 int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size, ··· 990 915 } 991 916 992 917 fd = sys_bpf(BPF_BTF_LOAD, &attr, sizeof(attr)); 993 - if (fd == -1 && !do_log && log_buf && log_buf_size) { 918 + 919 + if (fd < 0 && !do_log && log_buf && log_buf_size) { 994 920 do_log = true; 995 921 goto retry; 996 922 } 997 923 998 - return fd; 924 + return libbpf_err_errno(fd); 999 925 } 1000 926 1001 927 int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len, ··· 1013 937 attr.task_fd_query.buf_len = *buf_len; 1014 938 1015 939 err = sys_bpf(BPF_TASK_FD_QUERY, &attr, sizeof(attr)); 940 + 1016 941 *buf_len = attr.task_fd_query.buf_len; 1017 942 *prog_id = attr.task_fd_query.prog_id; 1018 943 *fd_type = attr.task_fd_query.fd_type; 1019 944 *probe_offset = attr.task_fd_query.probe_offset; 1020 945 *probe_addr = attr.task_fd_query.probe_addr; 1021 946 1022 - return err; 947 + return libbpf_err_errno(err); 1023 948 } 1024 949 1025 950 int bpf_enable_stats(enum bpf_stats_type type) 1026 951 { 1027 952 union bpf_attr attr; 953 + int fd; 1028 954 1029 955 memset(&attr, 0, sizeof(attr)); 1030 956 attr.enable_stats.type 
= type; 1031 957 1032 - return sys_bpf(BPF_ENABLE_STATS, &attr, sizeof(attr)); 958 + fd = sys_bpf(BPF_ENABLE_STATS, &attr, sizeof(attr)); 959 + return libbpf_err_errno(fd); 1033 960 } 1034 961 1035 962 int bpf_prog_bind_map(int prog_fd, int map_fd, 1036 963 const struct bpf_prog_bind_opts *opts) 1037 964 { 1038 965 union bpf_attr attr; 966 + int ret; 1039 967 1040 968 if (!OPTS_VALID(opts, bpf_prog_bind_opts)) 1041 - return -EINVAL; 969 + return libbpf_err(-EINVAL); 1042 970 1043 971 memset(&attr, 0, sizeof(attr)); 1044 972 attr.prog_bind_map.prog_fd = prog_fd; 1045 973 attr.prog_bind_map.map_fd = map_fd; 1046 974 attr.prog_bind_map.flags = OPTS_GET(opts, flags, 0); 1047 975 1048 - return sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr)); 976 + ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr)); 977 + return libbpf_err_errno(ret); 1049 978 }
+2
tools/lib/bpf/bpf.h
··· 124 124 __u64 flags); 125 125 LIBBPF_API int bpf_map_lookup_and_delete_elem(int fd, const void *key, 126 126 void *value); 127 + LIBBPF_API int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, 128 + void *value, __u64 flags); 127 129 LIBBPF_API int bpf_map_delete_elem(int fd, const void *key); 128 130 LIBBPF_API int bpf_map_get_next_key(int fd, const void *key, void *next_key); 129 131 LIBBPF_API int bpf_map_freeze(int fd);
+66
tools/lib/bpf/bpf_helpers.h
··· 158 158 #define __kconfig __attribute__((section(".kconfig"))) 159 159 #define __ksym __attribute__((section(".ksyms"))) 160 160 161 + #ifndef ___bpf_concat 162 + #define ___bpf_concat(a, b) a ## b 163 + #endif 164 + #ifndef ___bpf_apply 165 + #define ___bpf_apply(fn, n) ___bpf_concat(fn, n) 166 + #endif 167 + #ifndef ___bpf_nth 168 + #define ___bpf_nth(_, _1, _2, _3, _4, _5, _6, _7, _8, _9, _a, _b, _c, N, ...) N 169 + #endif 170 + #ifndef ___bpf_narg 171 + #define ___bpf_narg(...) \ 172 + ___bpf_nth(_, ##__VA_ARGS__, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0) 173 + #endif 174 + 175 + #define ___bpf_fill0(arr, p, x) do {} while (0) 176 + #define ___bpf_fill1(arr, p, x) arr[p] = x 177 + #define ___bpf_fill2(arr, p, x, args...) arr[p] = x; ___bpf_fill1(arr, p + 1, args) 178 + #define ___bpf_fill3(arr, p, x, args...) arr[p] = x; ___bpf_fill2(arr, p + 1, args) 179 + #define ___bpf_fill4(arr, p, x, args...) arr[p] = x; ___bpf_fill3(arr, p + 1, args) 180 + #define ___bpf_fill5(arr, p, x, args...) arr[p] = x; ___bpf_fill4(arr, p + 1, args) 181 + #define ___bpf_fill6(arr, p, x, args...) arr[p] = x; ___bpf_fill5(arr, p + 1, args) 182 + #define ___bpf_fill7(arr, p, x, args...) arr[p] = x; ___bpf_fill6(arr, p + 1, args) 183 + #define ___bpf_fill8(arr, p, x, args...) arr[p] = x; ___bpf_fill7(arr, p + 1, args) 184 + #define ___bpf_fill9(arr, p, x, args...) arr[p] = x; ___bpf_fill8(arr, p + 1, args) 185 + #define ___bpf_fill10(arr, p, x, args...) arr[p] = x; ___bpf_fill9(arr, p + 1, args) 186 + #define ___bpf_fill11(arr, p, x, args...) arr[p] = x; ___bpf_fill10(arr, p + 1, args) 187 + #define ___bpf_fill12(arr, p, x, args...) arr[p] = x; ___bpf_fill11(arr, p + 1, args) 188 + #define ___bpf_fill(arr, args...) \ 189 + ___bpf_apply(___bpf_fill, ___bpf_narg(args))(arr, 0, args) 190 + 191 + /* 192 + * BPF_SEQ_PRINTF to wrap bpf_seq_printf to-be-printed values 193 + * in a structure. 194 + */ 195 + #define BPF_SEQ_PRINTF(seq, fmt, args...) 
\ 196 + ({ \ 197 + static const char ___fmt[] = fmt; \ 198 + unsigned long long ___param[___bpf_narg(args)]; \ 199 + \ 200 + _Pragma("GCC diagnostic push") \ 201 + _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 202 + ___bpf_fill(___param, args); \ 203 + _Pragma("GCC diagnostic pop") \ 204 + \ 205 + bpf_seq_printf(seq, ___fmt, sizeof(___fmt), \ 206 + ___param, sizeof(___param)); \ 207 + }) 208 + 209 + /* 210 + * BPF_SNPRINTF wraps the bpf_snprintf helper with variadic arguments instead of 211 + * an array of u64. 212 + */ 213 + #define BPF_SNPRINTF(out, out_size, fmt, args...) \ 214 + ({ \ 215 + static const char ___fmt[] = fmt; \ 216 + unsigned long long ___param[___bpf_narg(args)]; \ 217 + \ 218 + _Pragma("GCC diagnostic push") \ 219 + _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 220 + ___bpf_fill(___param, args); \ 221 + _Pragma("GCC diagnostic pop") \ 222 + \ 223 + bpf_snprintf(out, out_size, ___fmt, \ 224 + ___param, sizeof(___param)); \ 225 + }) 226 + 161 227 #endif
+9 -9
tools/lib/bpf/bpf_prog_linfo.c
··· 106 106 nr_linfo = info->nr_line_info; 107 107 108 108 if (!nr_linfo) 109 - return NULL; 109 + return errno = EINVAL, NULL; 110 110 111 111 /* 112 112 * The min size that bpf_prog_linfo has to access for ··· 114 114 */ 115 115 if (info->line_info_rec_size < 116 116 offsetof(struct bpf_line_info, file_name_off)) 117 - return NULL; 117 + return errno = EINVAL, NULL; 118 118 119 119 prog_linfo = calloc(1, sizeof(*prog_linfo)); 120 120 if (!prog_linfo) 121 - return NULL; 121 + return errno = ENOMEM, NULL; 122 122 123 123 /* Copy xlated line_info */ 124 124 prog_linfo->nr_linfo = nr_linfo; ··· 174 174 175 175 err_free: 176 176 bpf_prog_linfo__free(prog_linfo); 177 - return NULL; 177 + return errno = EINVAL, NULL; 178 178 } 179 179 180 180 const struct bpf_line_info * ··· 186 186 const __u64 *jited_linfo; 187 187 188 188 if (func_idx >= prog_linfo->nr_jited_func) 189 - return NULL; 189 + return errno = ENOENT, NULL; 190 190 191 191 nr_linfo = prog_linfo->nr_jited_linfo_per_func[func_idx]; 192 192 if (nr_skip >= nr_linfo) 193 - return NULL; 193 + return errno = ENOENT, NULL; 194 194 195 195 start = prog_linfo->jited_linfo_func_idx[func_idx] + nr_skip; 196 196 jited_rec_size = prog_linfo->jited_rec_size; ··· 198 198 (start * jited_rec_size); 199 199 jited_linfo = raw_jited_linfo; 200 200 if (addr < *jited_linfo) 201 - return NULL; 201 + return errno = ENOENT, NULL; 202 202 203 203 nr_linfo -= nr_skip; 204 204 rec_size = prog_linfo->rec_size; ··· 225 225 226 226 nr_linfo = prog_linfo->nr_linfo; 227 227 if (nr_skip >= nr_linfo) 228 - return NULL; 228 + return errno = ENOENT, NULL; 229 229 230 230 rec_size = prog_linfo->rec_size; 231 231 raw_linfo = prog_linfo->raw_linfo + (nr_skip * rec_size); 232 232 linfo = raw_linfo; 233 233 if (insn_off < linfo->insn_off) 234 - return NULL; 234 + return errno = ENOENT, NULL; 235 235 236 236 nr_linfo -= nr_skip; 237 237 for (i = 0; i < nr_linfo; i++) {
+50 -58
tools/lib/bpf/bpf_tracing.h
··· 25 25 #define bpf_target_sparc 26 26 #define bpf_target_defined 27 27 #else 28 - #undef bpf_target_defined 29 - #endif 30 28 31 29 /* Fall back to what the compiler says */ 32 - #ifndef bpf_target_defined 33 30 #if defined(__x86_64__) 34 31 #define bpf_target_x86 32 + #define bpf_target_defined 35 33 #elif defined(__s390__) 36 34 #define bpf_target_s390 35 + #define bpf_target_defined 37 36 #elif defined(__arm__) 38 37 #define bpf_target_arm 38 + #define bpf_target_defined 39 39 #elif defined(__aarch64__) 40 40 #define bpf_target_arm64 41 + #define bpf_target_defined 41 42 #elif defined(__mips__) 42 43 #define bpf_target_mips 44 + #define bpf_target_defined 43 45 #elif defined(__powerpc__) 44 46 #define bpf_target_powerpc 47 + #define bpf_target_defined 45 48 #elif defined(__sparc__) 46 49 #define bpf_target_sparc 50 + #define bpf_target_defined 51 + #endif /* no compiler target */ 52 + 47 53 #endif 54 + 55 + #ifndef __BPF_TARGET_MISSING 56 + #define __BPF_TARGET_MISSING "GCC error \"Must specify a BPF target arch via __TARGET_ARCH_xxx\"" 48 57 #endif 49 58 50 59 #if defined(bpf_target_x86) ··· 296 287 #elif defined(bpf_target_sparc) 297 288 #define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ (ip) = PT_REGS_RET(ctx); }) 298 289 #define BPF_KRETPROBE_READ_RET_IP BPF_KPROBE_READ_RET_IP 299 - #else 290 + #elif defined(bpf_target_defined) 300 291 #define BPF_KPROBE_READ_RET_IP(ip, ctx) \ 301 292 ({ bpf_probe_read_kernel(&(ip), sizeof(ip), (void *)PT_REGS_RET(ctx)); }) 302 293 #define BPF_KRETPROBE_READ_RET_IP(ip, ctx) \ ··· 304 295 (void *)(PT_REGS_FP(ctx) + sizeof(ip))); }) 305 296 #endif 306 297 298 + #if !defined(bpf_target_defined) 299 + 300 + #define PT_REGS_PARM1(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 301 + #define PT_REGS_PARM2(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 302 + #define PT_REGS_PARM3(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 303 + #define PT_REGS_PARM4(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 304 + #define PT_REGS_PARM5(x) ({ 
_Pragma(__BPF_TARGET_MISSING); 0l; }) 305 + #define PT_REGS_RET(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 306 + #define PT_REGS_FP(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 307 + #define PT_REGS_RC(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 308 + #define PT_REGS_SP(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 309 + #define PT_REGS_IP(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 310 + 311 + #define PT_REGS_PARM1_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 312 + #define PT_REGS_PARM2_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 313 + #define PT_REGS_PARM3_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 314 + #define PT_REGS_PARM4_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 315 + #define PT_REGS_PARM5_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 316 + #define PT_REGS_RET_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 317 + #define PT_REGS_FP_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 318 + #define PT_REGS_RC_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 319 + #define PT_REGS_SP_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 320 + #define PT_REGS_IP_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 321 + 322 + #define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 323 + #define BPF_KRETPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; }) 324 + 325 + #endif /* !defined(bpf_target_defined) */ 326 + 327 + #ifndef ___bpf_concat 307 328 #define ___bpf_concat(a, b) a ## b 329 + #endif 330 + #ifndef ___bpf_apply 308 331 #define ___bpf_apply(fn, n) ___bpf_concat(fn, n) 332 + #endif 333 + #ifndef ___bpf_nth 309 334 #define ___bpf_nth(_, _1, _2, _3, _4, _5, _6, _7, _8, _9, _a, _b, _c, N, ...) N 335 + #endif 336 + #ifndef ___bpf_narg 310 337 #define ___bpf_narg(...) \ 311 338 ___bpf_nth(_, ##__VA_ARGS__, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0) 312 - #define ___bpf_empty(...) 
\ 313 - ___bpf_nth(_, ##__VA_ARGS__, N, N, N, N, N, N, N, N, N, N, 0) 339 + #endif 314 340 315 341 #define ___bpf_ctx_cast0() ctx 316 342 #define ___bpf_ctx_cast1(x) ___bpf_ctx_cast0(), (void *)ctx[0] ··· 456 412 _Pragma("GCC diagnostic pop") \ 457 413 } \ 458 414 static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args) 459 - 460 - #define ___bpf_fill0(arr, p, x) do {} while (0) 461 - #define ___bpf_fill1(arr, p, x) arr[p] = x 462 - #define ___bpf_fill2(arr, p, x, args...) arr[p] = x; ___bpf_fill1(arr, p + 1, args) 463 - #define ___bpf_fill3(arr, p, x, args...) arr[p] = x; ___bpf_fill2(arr, p + 1, args) 464 - #define ___bpf_fill4(arr, p, x, args...) arr[p] = x; ___bpf_fill3(arr, p + 1, args) 465 - #define ___bpf_fill5(arr, p, x, args...) arr[p] = x; ___bpf_fill4(arr, p + 1, args) 466 - #define ___bpf_fill6(arr, p, x, args...) arr[p] = x; ___bpf_fill5(arr, p + 1, args) 467 - #define ___bpf_fill7(arr, p, x, args...) arr[p] = x; ___bpf_fill6(arr, p + 1, args) 468 - #define ___bpf_fill8(arr, p, x, args...) arr[p] = x; ___bpf_fill7(arr, p + 1, args) 469 - #define ___bpf_fill9(arr, p, x, args...) arr[p] = x; ___bpf_fill8(arr, p + 1, args) 470 - #define ___bpf_fill10(arr, p, x, args...) arr[p] = x; ___bpf_fill9(arr, p + 1, args) 471 - #define ___bpf_fill11(arr, p, x, args...) arr[p] = x; ___bpf_fill10(arr, p + 1, args) 472 - #define ___bpf_fill12(arr, p, x, args...) arr[p] = x; ___bpf_fill11(arr, p + 1, args) 473 - #define ___bpf_fill(arr, args...) \ 474 - ___bpf_apply(___bpf_fill, ___bpf_narg(args))(arr, 0, args) 475 - 476 - /* 477 - * BPF_SEQ_PRINTF to wrap bpf_seq_printf to-be-printed values 478 - * in a structure. 479 - */ 480 - #define BPF_SEQ_PRINTF(seq, fmt, args...) 
\ 481 - ({ \ 482 - static const char ___fmt[] = fmt; \ 483 - unsigned long long ___param[___bpf_narg(args)]; \ 484 - \ 485 - _Pragma("GCC diagnostic push") \ 486 - _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 487 - ___bpf_fill(___param, args); \ 488 - _Pragma("GCC diagnostic pop") \ 489 - \ 490 - bpf_seq_printf(seq, ___fmt, sizeof(___fmt), \ 491 - ___param, sizeof(___param)); \ 492 - }) 493 - 494 - /* 495 - * BPF_SNPRINTF wraps the bpf_snprintf helper with variadic arguments instead of 496 - * an array of u64. 497 - */ 498 - #define BPF_SNPRINTF(out, out_size, fmt, args...) \ 499 - ({ \ 500 - static const char ___fmt[] = fmt; \ 501 - unsigned long long ___param[___bpf_narg(args)]; \ 502 - \ 503 - _Pragma("GCC diagnostic push") \ 504 - _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 505 - ___bpf_fill(___param, args); \ 506 - _Pragma("GCC diagnostic pop") \ 507 - \ 508 - bpf_snprintf(out, out_size, ___fmt, \ 509 - ___param, sizeof(___param)); \ 510 - }) 511 415 512 416 #endif
+152 -150
tools/lib/bpf/btf.c
··· 443 443 const struct btf_type *btf__type_by_id(const struct btf *btf, __u32 type_id) 444 444 { 445 445 if (type_id >= btf->start_id + btf->nr_types) 446 - return NULL; 446 + return errno = EINVAL, NULL; 447 447 return btf_type_by_id((struct btf *)btf, type_id); 448 448 } 449 449 ··· 510 510 int btf__set_pointer_size(struct btf *btf, size_t ptr_sz) 511 511 { 512 512 if (ptr_sz != 4 && ptr_sz != 8) 513 - return -EINVAL; 513 + return libbpf_err(-EINVAL); 514 514 btf->ptr_sz = ptr_sz; 515 515 return 0; 516 516 } ··· 537 537 int btf__set_endianness(struct btf *btf, enum btf_endianness endian) 538 538 { 539 539 if (endian != BTF_LITTLE_ENDIAN && endian != BTF_BIG_ENDIAN) 540 - return -EINVAL; 540 + return libbpf_err(-EINVAL); 541 541 542 542 btf->swapped_endian = is_host_big_endian() != (endian == BTF_BIG_ENDIAN); 543 543 if (!btf->swapped_endian) { ··· 568 568 int i; 569 569 570 570 t = btf__type_by_id(btf, type_id); 571 - for (i = 0; i < MAX_RESOLVE_DEPTH && !btf_type_is_void_or_null(t); 572 - i++) { 571 + for (i = 0; i < MAX_RESOLVE_DEPTH && !btf_type_is_void_or_null(t); i++) { 573 572 switch (btf_kind(t)) { 574 573 case BTF_KIND_INT: 575 574 case BTF_KIND_STRUCT: ··· 591 592 case BTF_KIND_ARRAY: 592 593 array = btf_array(t); 593 594 if (nelems && array->nelems > UINT32_MAX / nelems) 594 - return -E2BIG; 595 + return libbpf_err(-E2BIG); 595 596 nelems *= array->nelems; 596 597 type_id = array->type; 597 598 break; 598 599 default: 599 - return -EINVAL; 600 + return libbpf_err(-EINVAL); 600 601 } 601 602 602 603 t = btf__type_by_id(btf, type_id); ··· 604 605 605 606 done: 606 607 if (size < 0) 607 - return -EINVAL; 608 + return libbpf_err(-EINVAL); 608 609 if (nelems && size > UINT32_MAX / nelems) 609 - return -E2BIG; 610 + return libbpf_err(-E2BIG); 610 611 611 612 return nelems * size; 612 613 } ··· 639 640 for (i = 0; i < vlen; i++, m++) { 640 641 align = btf__align_of(btf, m->type); 641 642 if (align <= 0) 642 - return align; 643 + return libbpf_err(align); 643 
644 max_align = max(max_align, align); 644 645 } 645 646 ··· 647 648 } 648 649 default: 649 650 pr_warn("unsupported BTF_KIND:%u\n", btf_kind(t)); 650 - return 0; 651 + return errno = EINVAL, 0; 651 652 } 652 653 } 653 654 ··· 666 667 } 667 668 668 669 if (depth == MAX_RESOLVE_DEPTH || btf_type_is_void_or_null(t)) 669 - return -EINVAL; 670 + return libbpf_err(-EINVAL); 670 671 671 672 return type_id; 672 673 } ··· 686 687 return i; 687 688 } 688 689 689 - return -ENOENT; 690 + return libbpf_err(-ENOENT); 690 691 } 691 692 692 693 __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name, ··· 708 709 return i; 709 710 } 710 711 711 - return -ENOENT; 712 + return libbpf_err(-ENOENT); 712 713 } 713 714 714 715 static bool btf_is_modifiable(const struct btf *btf) ··· 784 785 785 786 struct btf *btf__new_empty(void) 786 787 { 787 - return btf_new_empty(NULL); 788 + return libbpf_ptr(btf_new_empty(NULL)); 788 789 } 789 790 790 791 struct btf *btf__new_empty_split(struct btf *base_btf) 791 792 { 792 - return btf_new_empty(base_btf); 793 + return libbpf_ptr(btf_new_empty(base_btf)); 793 794 } 794 795 795 796 static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf) ··· 845 846 846 847 struct btf *btf__new(const void *data, __u32 size) 847 848 { 848 - return btf_new(data, size, NULL); 849 + return libbpf_ptr(btf_new(data, size, NULL)); 849 850 } 850 851 851 852 static struct btf *btf_parse_elf(const char *path, struct btf *base_btf, ··· 936 937 goto done; 937 938 } 938 939 btf = btf_new(btf_data->d_buf, btf_data->d_size, base_btf); 939 - if (IS_ERR(btf)) 940 + err = libbpf_get_error(btf); 941 + if (err) 940 942 goto done; 941 943 942 944 switch (gelf_getclass(elf)) { ··· 953 953 } 954 954 955 955 if (btf_ext && btf_ext_data) { 956 - *btf_ext = btf_ext__new(btf_ext_data->d_buf, 957 - btf_ext_data->d_size); 958 - if (IS_ERR(*btf_ext)) 956 + *btf_ext = btf_ext__new(btf_ext_data->d_buf, btf_ext_data->d_size); 957 + err = 
libbpf_get_error(*btf_ext); 958 + if (err) 959 959 goto done; 960 960 } else if (btf_ext) { 961 961 *btf_ext = NULL; ··· 965 965 elf_end(elf); 966 966 close(fd); 967 967 968 - if (err) 969 - return ERR_PTR(err); 970 - /* 971 - * btf is always parsed before btf_ext, so no need to clean up 972 - * btf_ext, if btf loading failed 973 - */ 974 - if (IS_ERR(btf)) 968 + if (!err) 975 969 return btf; 976 - if (btf_ext && IS_ERR(*btf_ext)) { 977 - btf__free(btf); 978 - err = PTR_ERR(*btf_ext); 979 - return ERR_PTR(err); 980 - } 981 - return btf; 970 + 971 + if (btf_ext) 972 + btf_ext__free(*btf_ext); 973 + btf__free(btf); 974 + 975 + return ERR_PTR(err); 982 976 } 983 977 984 978 struct btf *btf__parse_elf(const char *path, struct btf_ext **btf_ext) 985 979 { 986 - return btf_parse_elf(path, NULL, btf_ext); 980 + return libbpf_ptr(btf_parse_elf(path, NULL, btf_ext)); 987 981 } 988 982 989 983 struct btf *btf__parse_elf_split(const char *path, struct btf *base_btf) 990 984 { 991 - return btf_parse_elf(path, base_btf, NULL); 985 + return libbpf_ptr(btf_parse_elf(path, base_btf, NULL)); 992 986 } 993 987 994 988 static struct btf *btf_parse_raw(const char *path, struct btf *base_btf) ··· 1050 1056 1051 1057 struct btf *btf__parse_raw(const char *path) 1052 1058 { 1053 - return btf_parse_raw(path, NULL); 1059 + return libbpf_ptr(btf_parse_raw(path, NULL)); 1054 1060 } 1055 1061 1056 1062 struct btf *btf__parse_raw_split(const char *path, struct btf *base_btf) 1057 1063 { 1058 - return btf_parse_raw(path, base_btf); 1064 + return libbpf_ptr(btf_parse_raw(path, base_btf)); 1059 1065 } 1060 1066 1061 1067 static struct btf *btf_parse(const char *path, struct btf *base_btf, struct btf_ext **btf_ext) 1062 1068 { 1063 1069 struct btf *btf; 1070 + int err; 1064 1071 1065 1072 if (btf_ext) 1066 1073 *btf_ext = NULL; 1067 1074 1068 1075 btf = btf_parse_raw(path, base_btf); 1069 - if (!IS_ERR(btf) || PTR_ERR(btf) != -EPROTO) 1076 + err = libbpf_get_error(btf); 1077 + if (!err) 1070 1078 
return btf; 1071 - 1079 + if (err != -EPROTO) 1080 + return ERR_PTR(err); 1072 1081 return btf_parse_elf(path, base_btf, btf_ext); 1073 1082 } 1074 1083 1075 1084 struct btf *btf__parse(const char *path, struct btf_ext **btf_ext) 1076 1085 { 1077 - return btf_parse(path, NULL, btf_ext); 1086 + return libbpf_ptr(btf_parse(path, NULL, btf_ext)); 1078 1087 } 1079 1088 1080 1089 struct btf *btf__parse_split(const char *path, struct btf *base_btf) 1081 1090 { 1082 - return btf_parse(path, base_btf, NULL); 1091 + return libbpf_ptr(btf_parse(path, base_btf, NULL)); 1083 1092 } 1084 1093 1085 1094 static int compare_vsi_off(const void *_a, const void *_b) ··· 1175 1178 } 1176 1179 } 1177 1180 1178 - return err; 1181 + return libbpf_err(err); 1179 1182 } 1180 1183 1181 1184 static void *btf_get_raw_data(const struct btf *btf, __u32 *size, bool swap_endian); ··· 1188 1191 int err = 0; 1189 1192 1190 1193 if (btf->fd >= 0) 1191 - return -EEXIST; 1194 + return libbpf_err(-EEXIST); 1192 1195 1193 1196 retry_load: 1194 1197 if (log_buf_size) { 1195 1198 log_buf = malloc(log_buf_size); 1196 1199 if (!log_buf) 1197 - return -ENOMEM; 1200 + return libbpf_err(-ENOMEM); 1198 1201 1199 1202 *log_buf = 0; 1200 1203 } ··· 1226 1229 1227 1230 done: 1228 1231 free(log_buf); 1229 - return err; 1232 + return libbpf_err(err); 1230 1233 } 1231 1234 1232 1235 int btf__fd(const struct btf *btf) ··· 1302 1305 1303 1306 data = btf_get_raw_data(btf, &data_sz, btf->swapped_endian); 1304 1307 if (!data) 1305 - return NULL; 1308 + return errno = -ENOMEM, NULL; 1306 1309 1307 1310 btf->raw_size = data_sz; 1308 1311 if (btf->swapped_endian) ··· 1320 1323 else if (offset - btf->start_str_off < btf->hdr->str_len) 1321 1324 return btf_strs_data(btf) + (offset - btf->start_str_off); 1322 1325 else 1323 - return NULL; 1326 + return errno = EINVAL, NULL; 1324 1327 } 1325 1328 1326 1329 const char *btf__name_by_offset(const struct btf *btf, __u32 offset) ··· 1385 1388 int btf__get_from_id(__u32 id, struct btf 
**btf) 1386 1389 { 1387 1390 struct btf *res; 1388 - int btf_fd; 1391 + int err, btf_fd; 1389 1392 1390 1393 *btf = NULL; 1391 1394 btf_fd = bpf_btf_get_fd_by_id(id); 1392 1395 if (btf_fd < 0) 1393 - return -errno; 1396 + return libbpf_err(-errno); 1394 1397 1395 1398 res = btf_get_from_fd(btf_fd, NULL); 1399 + err = libbpf_get_error(res); 1400 + 1396 1401 close(btf_fd); 1397 - if (IS_ERR(res)) 1398 - return PTR_ERR(res); 1402 + 1403 + if (err) 1404 + return libbpf_err(err); 1399 1405 1400 1406 *btf = res; 1401 1407 return 0; ··· 1415 1415 __s64 key_size, value_size; 1416 1416 __s32 container_id; 1417 1417 1418 - if (snprintf(container_name, max_name, "____btf_map_%s", map_name) == 1419 - max_name) { 1418 + if (snprintf(container_name, max_name, "____btf_map_%s", map_name) == max_name) { 1420 1419 pr_warn("map:%s length of '____btf_map_%s' is too long\n", 1421 1420 map_name, map_name); 1422 - return -EINVAL; 1421 + return libbpf_err(-EINVAL); 1423 1422 } 1424 1423 1425 1424 container_id = btf__find_by_name(btf, container_name); 1426 1425 if (container_id < 0) { 1427 1426 pr_debug("map:%s container_name:%s cannot be found in BTF. 
Missing BPF_ANNOTATE_KV_PAIR?\n", 1428 1427 map_name, container_name); 1429 - return container_id; 1428 + return libbpf_err(container_id); 1430 1429 } 1431 1430 1432 1431 container_type = btf__type_by_id(btf, container_id); 1433 1432 if (!container_type) { 1434 1433 pr_warn("map:%s cannot find BTF type for container_id:%u\n", 1435 1434 map_name, container_id); 1436 - return -EINVAL; 1435 + return libbpf_err(-EINVAL); 1437 1436 } 1438 1437 1439 1438 if (!btf_is_struct(container_type) || btf_vlen(container_type) < 2) { 1440 1439 pr_warn("map:%s container_name:%s is an invalid container struct\n", 1441 1440 map_name, container_name); 1442 - return -EINVAL; 1441 + return libbpf_err(-EINVAL); 1443 1442 } 1444 1443 1445 1444 key = btf_members(container_type); ··· 1447 1448 key_size = btf__resolve_size(btf, key->type); 1448 1449 if (key_size < 0) { 1449 1450 pr_warn("map:%s invalid BTF key_type_size\n", map_name); 1450 - return key_size; 1451 + return libbpf_err(key_size); 1451 1452 } 1452 1453 1453 1454 if (expected_key_size != key_size) { 1454 1455 pr_warn("map:%s btf_key_type_size:%u != map_def_key_size:%u\n", 1455 1456 map_name, (__u32)key_size, expected_key_size); 1456 - return -EINVAL; 1457 + return libbpf_err(-EINVAL); 1457 1458 } 1458 1459 1459 1460 value_size = btf__resolve_size(btf, value->type); 1460 1461 if (value_size < 0) { 1461 1462 pr_warn("map:%s invalid BTF value_type_size\n", map_name); 1462 - return value_size; 1463 + return libbpf_err(value_size); 1463 1464 } 1464 1465 1465 1466 if (expected_value_size != value_size) { 1466 1467 pr_warn("map:%s btf_value_type_size:%u != map_def_value_size:%u\n", 1467 1468 map_name, (__u32)value_size, expected_value_size); 1468 - return -EINVAL; 1469 + return libbpf_err(-EINVAL); 1469 1470 } 1470 1471 1471 1472 *key_type_id = key->type; ··· 1562 1563 1563 1564 /* BTF needs to be in a modifiable state to build string lookup index */ 1564 1565 if (btf_ensure_modifiable(btf)) 1565 - return -ENOMEM; 1566 + return 
libbpf_err(-ENOMEM); 1566 1567 1567 1568 off = strset__find_str(btf->strs_set, s); 1568 1569 if (off < 0) 1569 - return off; 1570 + return libbpf_err(off); 1570 1571 1571 1572 return btf->start_str_off + off; 1572 1573 } ··· 1587 1588 } 1588 1589 1589 1590 if (btf_ensure_modifiable(btf)) 1590 - return -ENOMEM; 1591 + return libbpf_err(-ENOMEM); 1591 1592 1592 1593 off = strset__add_str(btf->strs_set, s); 1593 1594 if (off < 0) 1594 - return off; 1595 + return libbpf_err(off); 1595 1596 1596 1597 btf->hdr->str_len = strset__data_size(btf->strs_set); 1597 1598 ··· 1615 1616 1616 1617 err = btf_add_type_idx_entry(btf, btf->hdr->type_len); 1617 1618 if (err) 1618 - return err; 1619 + return libbpf_err(err); 1619 1620 1620 1621 btf->hdr->type_len += data_sz; 1621 1622 btf->hdr->str_off += data_sz; ··· 1652 1653 1653 1654 sz = btf_type_size(src_type); 1654 1655 if (sz < 0) 1655 - return sz; 1656 + return libbpf_err(sz); 1656 1657 1657 1658 /* deconstruct BTF, if necessary, and invalidate raw_data */ 1658 1659 if (btf_ensure_modifiable(btf)) 1659 - return -ENOMEM; 1660 + return libbpf_err(-ENOMEM); 1660 1661 1661 1662 t = btf_add_type_mem(btf, sz); 1662 1663 if (!t) 1663 - return -ENOMEM; 1664 + return libbpf_err(-ENOMEM); 1664 1665 1665 1666 memcpy(t, src_type, sz); 1666 1667 1667 1668 err = btf_type_visit_str_offs(t, btf_rewrite_str, &p); 1668 1669 if (err) 1669 - return err; 1670 + return libbpf_err(err); 1670 1671 1671 1672 return btf_commit_type(btf, sz); 1672 1673 } ··· 1687 1688 1688 1689 /* non-empty name */ 1689 1690 if (!name || !name[0]) 1690 - return -EINVAL; 1691 + return libbpf_err(-EINVAL); 1691 1692 /* byte_sz must be power of 2 */ 1692 1693 if (!byte_sz || (byte_sz & (byte_sz - 1)) || byte_sz > 16) 1693 - return -EINVAL; 1694 + return libbpf_err(-EINVAL); 1694 1695 if (encoding & ~(BTF_INT_SIGNED | BTF_INT_CHAR | BTF_INT_BOOL)) 1695 - return -EINVAL; 1696 + return libbpf_err(-EINVAL); 1696 1697 1697 1698 /* deconstruct BTF, if necessary, and invalidate 
raw_data */ 1698 1699 if (btf_ensure_modifiable(btf)) 1699 - return -ENOMEM; 1700 + return libbpf_err(-ENOMEM); 1700 1701 1701 1702 sz = sizeof(struct btf_type) + sizeof(int); 1702 1703 t = btf_add_type_mem(btf, sz); 1703 1704 if (!t) 1704 - return -ENOMEM; 1705 + return libbpf_err(-ENOMEM); 1705 1706 1706 1707 /* if something goes wrong later, we might end up with an extra string, 1707 1708 * but that shouldn't be a problem, because BTF can't be constructed ··· 1735 1736 1736 1737 /* non-empty name */ 1737 1738 if (!name || !name[0]) 1738 - return -EINVAL; 1739 + return libbpf_err(-EINVAL); 1739 1740 1740 1741 /* byte_sz must be one of the explicitly allowed values */ 1741 1742 if (byte_sz != 2 && byte_sz != 4 && byte_sz != 8 && byte_sz != 12 && 1742 1743 byte_sz != 16) 1743 - return -EINVAL; 1744 + return libbpf_err(-EINVAL); 1744 1745 1745 1746 if (btf_ensure_modifiable(btf)) 1746 - return -ENOMEM; 1747 + return libbpf_err(-ENOMEM); 1747 1748 1748 1749 sz = sizeof(struct btf_type); 1749 1750 t = btf_add_type_mem(btf, sz); 1750 1751 if (!t) 1751 - return -ENOMEM; 1752 + return libbpf_err(-ENOMEM); 1752 1753 1753 1754 name_off = btf__add_str(btf, name); 1754 1755 if (name_off < 0) ··· 1779 1780 int sz, name_off = 0; 1780 1781 1781 1782 if (validate_type_id(ref_type_id)) 1782 - return -EINVAL; 1783 + return libbpf_err(-EINVAL); 1783 1784 1784 1785 if (btf_ensure_modifiable(btf)) 1785 - return -ENOMEM; 1786 + return libbpf_err(-ENOMEM); 1786 1787 1787 1788 sz = sizeof(struct btf_type); 1788 1789 t = btf_add_type_mem(btf, sz); 1789 1790 if (!t) 1790 - return -ENOMEM; 1791 + return libbpf_err(-ENOMEM); 1791 1792 1792 1793 if (name && name[0]) { 1793 1794 name_off = btf__add_str(btf, name); ··· 1830 1831 int sz; 1831 1832 1832 1833 if (validate_type_id(index_type_id) || validate_type_id(elem_type_id)) 1833 - return -EINVAL; 1834 + return libbpf_err(-EINVAL); 1834 1835 1835 1836 if (btf_ensure_modifiable(btf)) 1836 - return -ENOMEM; 1837 + return libbpf_err(-ENOMEM); 
1837 1838 1838 1839 sz = sizeof(struct btf_type) + sizeof(struct btf_array); 1839 1840 t = btf_add_type_mem(btf, sz); 1840 1841 if (!t) 1841 - return -ENOMEM; 1842 + return libbpf_err(-ENOMEM); 1842 1843 1843 1844 t->name_off = 0; 1844 1845 t->info = btf_type_info(BTF_KIND_ARRAY, 0, 0); ··· 1859 1860 int sz, name_off = 0; 1860 1861 1861 1862 if (btf_ensure_modifiable(btf)) 1862 - return -ENOMEM; 1863 + return libbpf_err(-ENOMEM); 1863 1864 1864 1865 sz = sizeof(struct btf_type); 1865 1866 t = btf_add_type_mem(btf, sz); 1866 1867 if (!t) 1867 - return -ENOMEM; 1868 + return libbpf_err(-ENOMEM); 1868 1869 1869 1870 if (name && name[0]) { 1870 1871 name_off = btf__add_str(btf, name); ··· 1942 1943 1943 1944 /* last type should be union/struct */ 1944 1945 if (btf->nr_types == 0) 1945 - return -EINVAL; 1946 + return libbpf_err(-EINVAL); 1946 1947 t = btf_last_type(btf); 1947 1948 if (!btf_is_composite(t)) 1948 - return -EINVAL; 1949 + return libbpf_err(-EINVAL); 1949 1950 1950 1951 if (validate_type_id(type_id)) 1951 - return -EINVAL; 1952 + return libbpf_err(-EINVAL); 1952 1953 /* best-effort bit field offset/size enforcement */ 1953 1954 is_bitfield = bit_size || (bit_offset % 8 != 0); 1954 1955 if (is_bitfield && (bit_size == 0 || bit_size > 255 || bit_offset > 0xffffff)) 1955 - return -EINVAL; 1956 + return libbpf_err(-EINVAL); 1956 1957 1957 1958 /* only offset 0 is allowed for unions */ 1958 1959 if (btf_is_union(t) && bit_offset) 1959 - return -EINVAL; 1960 + return libbpf_err(-EINVAL); 1960 1961 1961 1962 /* decompose and invalidate raw data */ 1962 1963 if (btf_ensure_modifiable(btf)) 1963 - return -ENOMEM; 1964 + return libbpf_err(-ENOMEM); 1964 1965 1965 1966 sz = sizeof(struct btf_member); 1966 1967 m = btf_add_type_mem(btf, sz); 1967 1968 if (!m) 1968 - return -ENOMEM; 1969 + return libbpf_err(-ENOMEM); 1969 1970 1970 1971 if (name && name[0]) { 1971 1972 name_off = btf__add_str(btf, name); ··· 2007 2008 2008 2009 /* byte_sz must be power of 2 */ 2009 2010 
if (!byte_sz || (byte_sz & (byte_sz - 1)) || byte_sz > 8) 2010 - return -EINVAL; 2011 + return libbpf_err(-EINVAL); 2011 2012 2012 2013 if (btf_ensure_modifiable(btf)) 2013 - return -ENOMEM; 2014 + return libbpf_err(-ENOMEM); 2014 2015 2015 2016 sz = sizeof(struct btf_type); 2016 2017 t = btf_add_type_mem(btf, sz); 2017 2018 if (!t) 2018 - return -ENOMEM; 2019 + return libbpf_err(-ENOMEM); 2019 2020 2020 2021 if (name && name[0]) { 2021 2022 name_off = btf__add_str(btf, name); ··· 2047 2048 2048 2049 /* last type should be BTF_KIND_ENUM */ 2049 2050 if (btf->nr_types == 0) 2050 - return -EINVAL; 2051 + return libbpf_err(-EINVAL); 2051 2052 t = btf_last_type(btf); 2052 2053 if (!btf_is_enum(t)) 2053 - return -EINVAL; 2054 + return libbpf_err(-EINVAL); 2054 2055 2055 2056 /* non-empty name */ 2056 2057 if (!name || !name[0]) 2057 - return -EINVAL; 2058 + return libbpf_err(-EINVAL); 2058 2059 if (value < INT_MIN || value > UINT_MAX) 2059 - return -E2BIG; 2060 + return libbpf_err(-E2BIG); 2060 2061 2061 2062 /* decompose and invalidate raw data */ 2062 2063 if (btf_ensure_modifiable(btf)) 2063 - return -ENOMEM; 2064 + return libbpf_err(-ENOMEM); 2064 2065 2065 2066 sz = sizeof(struct btf_enum); 2066 2067 v = btf_add_type_mem(btf, sz); 2067 2068 if (!v) 2068 - return -ENOMEM; 2069 + return libbpf_err(-ENOMEM); 2069 2070 2070 2071 name_off = btf__add_str(btf, name); 2071 2072 if (name_off < 0) ··· 2095 2096 int btf__add_fwd(struct btf *btf, const char *name, enum btf_fwd_kind fwd_kind) 2096 2097 { 2097 2098 if (!name || !name[0]) 2098 - return -EINVAL; 2099 + return libbpf_err(-EINVAL); 2099 2100 2100 2101 switch (fwd_kind) { 2101 2102 case BTF_FWD_STRUCT: ··· 2116 2117 */ 2117 2118 return btf__add_enum(btf, name, sizeof(int)); 2118 2119 default: 2119 - return -EINVAL; 2120 + return libbpf_err(-EINVAL); 2120 2121 } 2121 2122 } 2122 2123 ··· 2131 2132 int btf__add_typedef(struct btf *btf, const char *name, int ref_type_id) 2132 2133 { 2133 2134 if (!name || !name[0]) 2134 
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	return btf_add_ref_kind(btf, BTF_KIND_TYPEDEF, name, ref_type_id);
 }
···
 	int id;
 
 	if (!name || !name[0])
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	if (linkage != BTF_FUNC_STATIC && linkage != BTF_FUNC_GLOBAL &&
 	    linkage != BTF_FUNC_EXTERN)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	id = btf_add_ref_kind(btf, BTF_KIND_FUNC, name, proto_type_id);
 	if (id > 0) {
···
 
 		t->info = btf_type_info(BTF_KIND_FUNC, linkage, 0);
 	}
-	return id;
+	return libbpf_err(id);
 }
 
 /*
···
 	int sz;
 
 	if (validate_type_id(ret_type_id))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (btf_ensure_modifiable(btf))
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	sz = sizeof(struct btf_type);
 	t = btf_add_type_mem(btf, sz);
 	if (!t)
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	/* start out with vlen=0; this will be adjusted when adding enum
 	 * values, if necessary
···
 	int sz, name_off = 0;
 
 	if (validate_type_id(type_id))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	/* last type should be BTF_KIND_FUNC_PROTO */
 	if (btf->nr_types == 0)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	t = btf_last_type(btf);
 	if (!btf_is_func_proto(t))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	/* decompose and invalidate raw data */
 	if (btf_ensure_modifiable(btf))
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	sz = sizeof(struct btf_param);
 	p = btf_add_type_mem(btf, sz);
 	if (!p)
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	if (name && name[0]) {
 		name_off = btf__add_str(btf, name);
···
 
 	/* non-empty name */
 	if (!name || !name[0])
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	if (linkage != BTF_VAR_STATIC && linkage != BTF_VAR_GLOBAL_ALLOCATED &&
 	    linkage != BTF_VAR_GLOBAL_EXTERN)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	if (validate_type_id(type_id))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	/* deconstruct BTF, if necessary, and invalidate raw_data */
 	if (btf_ensure_modifiable(btf))
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	sz = sizeof(struct btf_type) + sizeof(struct btf_var);
 	t = btf_add_type_mem(btf, sz);
 	if (!t)
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	name_off = btf__add_str(btf, name);
 	if (name_off < 0)
···
 
 	/* non-empty name */
 	if (!name || !name[0])
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (btf_ensure_modifiable(btf))
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	sz = sizeof(struct btf_type);
 	t = btf_add_type_mem(btf, sz);
 	if (!t)
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	name_off = btf__add_str(btf, name);
 	if (name_off < 0)
···
 
 	/* last type should be BTF_KIND_DATASEC */
 	if (btf->nr_types == 0)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	t = btf_last_type(btf);
 	if (!btf_is_datasec(t))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (validate_type_id(var_type_id))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	/* decompose and invalidate raw data */
 	if (btf_ensure_modifiable(btf))
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	sz = sizeof(struct btf_var_secinfo);
 	v = btf_add_type_mem(btf, sz);
 	if (!v)
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	v->type = var_type_id;
 	v->offset = offset;
···
 
 	err = btf_ext_parse_hdr(data, size);
 	if (err)
-		return ERR_PTR(err);
+		return libbpf_err_ptr(err);
 
 	btf_ext = calloc(1, sizeof(struct btf_ext));
 	if (!btf_ext)
-		return ERR_PTR(-ENOMEM);
+		return libbpf_err_ptr(-ENOMEM);
 
 	btf_ext->data_size = size;
 	btf_ext->data = malloc(size);
···
 	}
 	memcpy(btf_ext->data, data, size);
 
-	if (btf_ext->hdr->hdr_len <
-	    offsetofend(struct btf_ext_header, line_info_len))
+	if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, line_info_len)) {
+		err = -EINVAL;
 		goto done;
+	}
+
 	err = btf_ext_setup_func_info(btf_ext);
 	if (err)
 		goto done;
···
 	if (err)
 		goto done;
 
-	if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, core_relo_len))
+	if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, core_relo_len)) {
+		err = -EINVAL;
 		goto done;
+	}
+
 	err = btf_ext_setup_core_relos(btf_ext);
 	if (err)
 		goto done;
···
 done:
 	if (err) {
 		btf_ext__free(btf_ext);
-		return ERR_PTR(err);
+		return libbpf_err_ptr(err);
 	}
 
 	return btf_ext;
···
 	existing_len = (*cnt) * record_size;
 	data = realloc(*info, existing_len + records_len);
 	if (!data)
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	memcpy(data + existing_len, sinfo->data, records_len);
 	/* adjust insn_off only, the rest data will be passed
···
 			__u32 *insn_off;
 
 			insn_off = data + existing_len + (i * record_size);
-			*insn_off = *insn_off / sizeof(struct bpf_insn) +
-				insns_cnt;
+			*insn_off = *insn_off / sizeof(struct bpf_insn) + insns_cnt;
 		}
 		*info = data;
 		*cnt += sinfo->num_info;
 		return 0;
 	}
 
-	return -ENOENT;
+	return libbpf_err(-ENOENT);
 }
 
 int btf_ext__reloc_func_info(const struct btf *btf,
···
 
 	if (IS_ERR(d)) {
 		pr_debug("btf_dedup_new failed: %ld", PTR_ERR(d));
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (btf_ensure_modifiable(btf))
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 
 	err = btf_dedup_prep(d);
 	if (err) {
···
 
 done:
 	btf_dedup_free(d);
-	return err;
+	return libbpf_err(err);
 }
 
 #define BTF_UNPROCESSED_ID ((__u32)-1)
···
 	char path[PATH_MAX + 1];
 	struct utsname buf;
 	struct btf *btf;
-	int i;
+	int i, err;
 
 	uname(&buf);
 
···
 			btf = btf__parse_raw(path);
 		else
 			btf = btf__parse_elf(path, NULL);
-
-		pr_debug("loading kernel BTF '%s': %ld\n",
-			 path, IS_ERR(btf) ? PTR_ERR(btf) : 0);
-		if (IS_ERR(btf))
+		err = libbpf_get_error(btf);
+		pr_debug("loading kernel BTF '%s': %d\n", path, err);
+		if (err)
 			continue;
 
 		return btf;
 	}
 
 	pr_warn("failed to find valid kernel BTF\n");
-	return ERR_PTR(-ESRCH);
+	return libbpf_err_ptr(-ESRCH);
 }
 
 int btf_type_visit_type_ids(struct btf_type *t, type_id_visit_fn visit, void *ctx)
+7 -7
tools/lib/bpf/btf_dump.c
···
 
 	d = calloc(1, sizeof(struct btf_dump));
 	if (!d)
-		return ERR_PTR(-ENOMEM);
+		return libbpf_err_ptr(-ENOMEM);
 
 	d->btf = btf;
 	d->btf_ext = btf_ext;
···
 	return d;
 err:
 	btf_dump__free(d);
-	return ERR_PTR(err);
+	return libbpf_err_ptr(err);
 }
 
 static int btf_dump_resize(struct btf_dump *d)
···
 	int err, i;
 
 	if (id > btf__get_nr_types(d->btf))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	err = btf_dump_resize(d);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	d->emit_queue_cnt = 0;
 	err = btf_dump_order_type(d, id, false);
 	if (err < 0)
-		return err;
+		return libbpf_err(err);
 
 	for (i = 0; i < d->emit_queue_cnt; i++)
 		btf_dump_emit_type(d, d->emit_queue[i], 0 /*top-level*/);
···
 	int lvl, err;
 
 	if (!OPTS_VALID(opts, btf_dump_emit_type_decl_opts))
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	err = btf_dump_resize(d);
 	if (err)
-		return -EINVAL;
+		return libbpf_err(err);
 
 	fname = OPTS_GET(opts, field_name, "");
 	lvl = OPTS_GET(opts, indent_level, 0);
+288 -247
tools/lib/bpf/libbpf.c
···
 	return (__u64) (unsigned long) ptr;
 }
 
+/* this goes away in libbpf 1.0 */
+enum libbpf_strict_mode libbpf_mode = LIBBPF_STRICT_NONE;
+
+int libbpf_set_strict_mode(enum libbpf_strict_mode mode)
+{
+	/* __LIBBPF_STRICT_LAST is the last power-of-2 value used + 1, so to
+	 * get all possible values we compensate last +1, and then (2*x - 1)
+	 * to get the bit mask
+	 */
+	if (mode != LIBBPF_STRICT_ALL
+	    && (mode & ~((__LIBBPF_STRICT_LAST - 1) * 2 - 1)))
+		return errno = EINVAL, -EINVAL;
+
+	libbpf_mode = mode;
+	return 0;
+}
+
 enum kern_feature_id {
 	/* v4.14: kernel support for program & map names. */
 	FEAT_PROG_NAME,
···
 	err = err ?: bpf_object__init_global_data_maps(obj);
 	err = err ?: bpf_object__init_kconfig_map(obj);
 	err = err ?: bpf_object__init_struct_ops_maps(obj);
-	if (err)
-		return err;
 
-	return 0;
+	return err;
 }
 
 static bool section_have_execinstr(struct bpf_object *obj, int idx)
···
 
 	if (btf_data) {
 		obj->btf = btf__new(btf_data->d_buf, btf_data->d_size);
-		if (IS_ERR(obj->btf)) {
-			err = PTR_ERR(obj->btf);
+		err = libbpf_get_error(obj->btf);
+		if (err) {
 			obj->btf = NULL;
-			pr_warn("Error loading ELF section %s: %d.\n",
-				BTF_ELF_SEC, err);
+			pr_warn("Error loading ELF section %s: %d.\n", BTF_ELF_SEC, err);
 			goto out;
 		}
 		/* enforce 8-byte pointers for BPF-targeted BTFs */
 		btf__set_pointer_size(obj->btf, 8);
-		err = 0;
 	}
 	if (btf_ext_data) {
 		if (!obj->btf) {
···
 				BTF_EXT_ELF_SEC, BTF_ELF_SEC);
 			goto out;
 		}
-		obj->btf_ext = btf_ext__new(btf_ext_data->d_buf,
-					    btf_ext_data->d_size);
-		if (IS_ERR(obj->btf_ext)) {
-			pr_warn("Error loading ELF section %s: %ld. Ignored and continue.\n",
-				BTF_EXT_ELF_SEC, PTR_ERR(obj->btf_ext));
+		obj->btf_ext = btf_ext__new(btf_ext_data->d_buf, btf_ext_data->d_size);
+		err = libbpf_get_error(obj->btf_ext);
+		if (err) {
+			pr_warn("Error loading ELF section %s: %d. Ignored and continue.\n",
+				BTF_EXT_ELF_SEC, err);
 			obj->btf_ext = NULL;
 			goto out;
 		}
···
 		return 0;
 
 	obj->btf_vmlinux = libbpf_find_kernel_btf();
-	if (IS_ERR(obj->btf_vmlinux)) {
-		err = PTR_ERR(obj->btf_vmlinux);
+	err = libbpf_get_error(obj->btf_vmlinux);
+	if (err) {
 		pr_warn("Error loading vmlinux BTF: %d\n", err);
 		obj->btf_vmlinux = NULL;
 		return err;
···
 	/* clone BTF to sanitize a copy and leave the original intact */
 	raw_data = btf__get_raw_data(obj->btf, &sz);
 	kern_btf = btf__new(raw_data, sz);
-	if (IS_ERR(kern_btf))
-		return PTR_ERR(kern_btf);
+	err = libbpf_get_error(kern_btf);
+	if (err)
+		return err;
 
 	/* enforce 8-byte pointers for BPF-targeted BTFs */
 	btf__set_pointer_size(obj->btf, 8);
···
 		if (pos->sec_name && !strcmp(pos->sec_name, title))
 			return pos;
 	}
-	return NULL;
+	return errno = ENOENT, NULL;
 }
 
 static bool prog_is_subprog(const struct bpf_object *obj,
···
 		if (!strcmp(prog->name, name))
 			return prog;
 	}
-	return NULL;
+	return errno = ENOENT, NULL;
 }
 
 static bool bpf_object__shndx_is_data(const struct bpf_object *obj,
···
 
 	err = bpf_obj_get_info_by_fd(fd, &info, &len);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	new_name = strdup(info.name);
 	if (!new_name)
-		return -errno;
+		return libbpf_err(-errno);
 
 	new_fd = open("/", O_RDONLY | O_CLOEXEC);
 	if (new_fd < 0) {
···
 	close(new_fd);
 err_free_new_name:
 	free(new_name);
-	return err;
+	return libbpf_err(err);
 }
 
 __u32 bpf_map__max_entries(const struct bpf_map *map)
···
 struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
 {
 	if (!bpf_map_type__is_map_in_map(map->def.type))
-		return NULL;
+		return errno = EINVAL, NULL;
 
 	return map->inner_map;
 }
···
 int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->def.max_entries = max_entries;
 	return 0;
 }
···
 int bpf_map__resize(struct bpf_map *map, __u32 max_entries)
 {
 	if (!map || !max_entries)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	return bpf_map__set_max_entries(map, max_entries);
 }
···
 		BPF_EXIT_INSN(),
 	};
 	int ret;
+
+	if (obj->gen_loader)
+		return 0;
 
 	/* make sure basic loading works */
 
···
 		targ_map = map->init_slots[i];
 		fd = bpf_map__fd(targ_map);
 		if (obj->gen_loader) {
-			pr_warn("// TODO map_update_elem: idx %ld key %d value==map_idx %ld\n",
+			pr_warn("// TODO map_update_elem: idx %td key %d value==map_idx %td\n",
 				map - obj->maps, i, targ_map - obj->maps);
 			return -ENOTSUP;
 		} else {
···
 	}
 
 	btf = btf_get_from_fd(fd, obj->btf_vmlinux);
-	if (IS_ERR(btf)) {
-		pr_warn("failed to load module [%s]'s BTF object #%d: %ld\n",
-			name, id, PTR_ERR(btf));
-		err = PTR_ERR(btf);
+	err = libbpf_get_error(btf);
+	if (err) {
+		pr_warn("failed to load module [%s]'s BTF object #%d: %d\n",
+			name, id, err);
 		goto err_out;
 	}
 
···
 		return -EINVAL;
 
 	if (prog->obj->gen_loader) {
-		pr_warn("// TODO core_relo: prog %ld insn[%d] %s %s kind %d\n",
+		pr_warn("// TODO core_relo: prog %td insn[%d] %s %s kind %d\n",
 			prog - prog->obj->programs, relo->insn_off / 8,
 			local_name, spec_str, relo->kind);
 		return -ENOTSUP;
···
 
 	if (targ_btf_path) {
 		obj->btf_vmlinux_override = btf__parse(targ_btf_path, NULL);
-		if (IS_ERR_OR_NULL(obj->btf_vmlinux_override)) {
-			err = PTR_ERR(obj->btf_vmlinux_override);
+		err = libbpf_get_error(obj->btf_vmlinux_override);
+		if (err) {
 			pr_warn("failed to parse target BTF: %d\n", err);
 			return err;
 		}
···
 
 	if (prog->obj->loaded) {
 		pr_warn("prog '%s': can't load after object was loaded\n", prog->name);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if ((prog->type == BPF_PROG_TYPE_TRACING ||
···
 
 		err = libbpf_find_attach_btf_id(prog, &btf_obj_fd, &btf_type_id);
 		if (err)
-			return err;
+			return libbpf_err(err);
 
 		prog->attach_btf_obj_fd = btf_obj_fd;
 		prog->attach_btf_id = btf_type_id;
···
 	if (prog->preprocessor) {
 		pr_warn("Internal error: can't load program '%s'\n",
 			prog->name);
-		return -LIBBPF_ERRNO__INTERNAL;
+		return libbpf_err(-LIBBPF_ERRNO__INTERNAL);
 	}
 
 	prog->instances.fds = malloc(sizeof(int));
 	if (!prog->instances.fds) {
 		pr_warn("Not enough memory for BPF fds\n");
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 	}
 	prog->instances.nr = 1;
 	prog->instances.fds[0] = -1;
···
 	pr_warn("failed to load program '%s'\n", prog->name);
 	zfree(&prog->insns);
 	prog->insns_cnt = 0;
-	return err;
+	return libbpf_err(err);
 }
 
 static int
···
 
 struct bpf_object *bpf_object__open_xattr(struct bpf_object_open_attr *attr)
 {
-	return __bpf_object__open_xattr(attr, 0);
+	return libbpf_ptr(__bpf_object__open_xattr(attr, 0));
 }
 
 struct bpf_object *bpf_object__open(const char *path)
···
 		.prog_type	= BPF_PROG_TYPE_UNSPEC,
 	};
 
-	return bpf_object__open_xattr(&attr);
+	return libbpf_ptr(__bpf_object__open_xattr(&attr, 0));
 }
 
 struct bpf_object *
 bpf_object__open_file(const char *path, const struct bpf_object_open_opts *opts)
 {
 	if (!path)
-		return ERR_PTR(-EINVAL);
+		return libbpf_err_ptr(-EINVAL);
 
 	pr_debug("loading %s\n", path);
 
-	return __bpf_object__open(path, NULL, 0, opts);
+	return libbpf_ptr(__bpf_object__open(path, NULL, 0, opts));
 }
 
 struct bpf_object *
···
 		     const struct bpf_object_open_opts *opts)
 {
 	if (!obj_buf || obj_buf_sz == 0)
-		return ERR_PTR(-EINVAL);
+		return libbpf_err_ptr(-EINVAL);
 
-	return __bpf_object__open(NULL, obj_buf, obj_buf_sz, opts);
+	return libbpf_ptr(__bpf_object__open(NULL, obj_buf, obj_buf_sz, opts));
 }
 
 struct bpf_object *
···
 
 	/* returning NULL is wrong, but backwards-compatible */
 	if (!obj_buf || obj_buf_sz == 0)
-		return NULL;
+		return errno = EINVAL, NULL;
 
-	return bpf_object__open_mem(obj_buf, obj_buf_sz, &opts);
+	return libbpf_ptr(__bpf_object__open(NULL, obj_buf, obj_buf_sz, &opts));
 }
 
 int bpf_object__unload(struct bpf_object *obj)
···
 	size_t i;
 
 	if (!obj)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	for (i = 0; i < obj->nr_maps; i++) {
 		zclose(obj->maps[i].fd);
···
 	int err, i;
 
 	if (!attr)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	obj = attr->obj;
 	if (!obj)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (obj->loaded) {
 		pr_warn("object '%s': load can't be attempted twice\n", obj->name);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (obj->gen_loader)
···
 
 	bpf_object__unload(obj);
 	pr_warn("failed to load object '%s'\n", obj->path);
-	return err;
+	return libbpf_err(err);
 }
 
 int bpf_object__load(struct bpf_object *obj)
···
 
 	err = make_parent_dir(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	err = check_path(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	if (prog == NULL) {
 		pr_warn("invalid program pointer\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (instance < 0 || instance >= prog->instances.nr) {
 		pr_warn("invalid prog instance %d of prog %s (max %d)\n",
 			instance, prog->name, prog->instances.nr);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (bpf_obj_pin(prog->instances.fds[instance], path)) {
 		err = -errno;
 		cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
 		pr_warn("failed to pin program: %s\n", cp);
-		return err;
+		return libbpf_err(err);
 	}
 	pr_debug("pinned program '%s'\n", path);
 
···
 
 	err = check_path(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	if (prog == NULL) {
 		pr_warn("invalid program pointer\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (instance < 0 || instance >= prog->instances.nr) {
 		pr_warn("invalid prog instance %d of prog %s (max %d)\n",
 			instance, prog->name, prog->instances.nr);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	err = unlink(path);
 	if (err != 0)
-		return -errno;
+		return libbpf_err(-errno);
+
 	pr_debug("unpinned program '%s'\n", path);
 
 	return 0;
···
 
 	err = make_parent_dir(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	err = check_path(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	if (prog == NULL) {
 		pr_warn("invalid program pointer\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (prog->instances.nr <= 0) {
 		pr_warn("no instances of prog %s to pin\n", prog->name);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (prog->instances.nr == 1) {
···
 
 	rmdir(path);
 
-	return err;
+	return libbpf_err(err);
 }
 
 int bpf_program__unpin(struct bpf_program *prog, const char *path)
···
 
 	err = check_path(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	if (prog == NULL) {
 		pr_warn("invalid program pointer\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (prog->instances.nr <= 0) {
 		pr_warn("no instances of prog %s to pin\n", prog->name);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (prog->instances.nr == 1) {
···
 
 		len = snprintf(buf, PATH_MAX, "%s/%d", path, i);
 		if (len < 0)
-			return -EINVAL;
+			return libbpf_err(-EINVAL);
 		else if (len >= PATH_MAX)
-			return -ENAMETOOLONG;
+			return libbpf_err(-ENAMETOOLONG);
 
 		err = bpf_program__unpin_instance(prog, buf, i);
 		if (err)
···
 
 	err = rmdir(path);
 	if (err)
-		return -errno;
+		return libbpf_err(-errno);
 
 	return 0;
 }
···
 
 	if (map == NULL) {
 		pr_warn("invalid map pointer\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (map->pin_path) {
 		if (path && strcmp(path, map->pin_path)) {
 			pr_warn("map '%s' already has pin path '%s' different from '%s'\n",
 				bpf_map__name(map), map->pin_path, path);
-			return -EINVAL;
+			return libbpf_err(-EINVAL);
 		} else if (map->pinned) {
 			pr_debug("map '%s' already pinned at '%s'; not re-pinning\n",
 				 bpf_map__name(map), map->pin_path);
···
 		if (!path) {
 			pr_warn("missing a path to pin map '%s' at\n",
 				bpf_map__name(map));
-			return -EINVAL;
+			return libbpf_err(-EINVAL);
 		} else if (map->pinned) {
 			pr_warn("map '%s' already pinned\n", bpf_map__name(map));
-			return -EEXIST;
+			return libbpf_err(-EEXIST);
 		}
 
 		map->pin_path = strdup(path);
···
 
 	err = make_parent_dir(map->pin_path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	err = check_path(map->pin_path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	if (bpf_obj_pin(map->fd, map->pin_path)) {
 		err = -errno;
···
 out_err:
 	cp = libbpf_strerror_r(-err, errmsg, sizeof(errmsg));
 	pr_warn("failed to pin map: %s\n", cp);
-	return err;
+	return libbpf_err(err);
 }
 
 int bpf_map__unpin(struct bpf_map *map, const char *path)
···
 
 	if (map == NULL) {
 		pr_warn("invalid map pointer\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	if (map->pin_path) {
 		if (path && strcmp(path, map->pin_path)) {
 			pr_warn("map '%s' already has pin path '%s' different from '%s'\n",
 				bpf_map__name(map), map->pin_path, path);
-			return -EINVAL;
+			return libbpf_err(-EINVAL);
 		}
 		path = map->pin_path;
 	} else if (!path) {
 		pr_warn("no path to unpin map '%s' from\n",
 			bpf_map__name(map));
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	err = check_path(path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	err = unlink(path);
 	if (err != 0)
-		return -errno;
+		return libbpf_err(-errno);
 
 	map->pinned = false;
 	pr_debug("unpinned map '%s' from '%s'\n", bpf_map__name(map), path);
···
 	if (path) {
 		new = strdup(path);
 		if (!new)
-			return -errno;
+			return libbpf_err(-errno);
 	}
 
 	free(map->pin_path);
···
 	int err;
 
 	if (!obj)
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 
 	if (!obj->loaded) {
 		pr_warn("object not yet loaded; load it first\n");
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 	}
 
 	bpf_object__for_each_map(map, obj) {
···
 		bpf_map__unpin(map, NULL);
 	}
 
-	return err;
+	return libbpf_err(err);
 }
 
 int bpf_object__unpin_maps(struct bpf_object *obj, const char *path)
···
 	int err;
 
 	if (!obj)
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 
 	bpf_object__for_each_map(map, obj) {
 		char *pin_path = NULL;
···
 			len = snprintf(buf, PATH_MAX, "%s/%s", path,
 				       bpf_map__name(map));
 			if (len < 0)
-				return -EINVAL;
+				return libbpf_err(-EINVAL);
 			else if (len >= PATH_MAX)
-				return -ENAMETOOLONG;
+				return libbpf_err(-ENAMETOOLONG);
 			sanitize_pin_path(buf);
 			pin_path = buf;
 		} else if (!map->pin_path) {
···
 
 		err = bpf_map__unpin(map, pin_path);
 		if (err)
-			return err;
+			return libbpf_err(err);
 	}
 
 	return 0;
···
 	int err;
 
 	if (!obj)
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 
 	if (!obj->loaded) {
 		pr_warn("object not yet loaded; load it first\n");
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 	}
 
 	bpf_object__for_each_program(prog, obj) {
···
 		bpf_program__unpin(prog, buf);
 	}
 
-	return err;
+	return libbpf_err(err);
 }
 
 int bpf_object__unpin_programs(struct bpf_object *obj, const char *path)
···
 	int err;
 
 	if (!obj)
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 
 	bpf_object__for_each_program(prog, obj) {
 		char buf[PATH_MAX];
···
 		len = snprintf(buf, PATH_MAX, "%s/%s", path,
 			       prog->pin_name);
 		if (len < 0)
-			return -EINVAL;
+			return libbpf_err(-EINVAL);
 		else if (len >= PATH_MAX)
-			return -ENAMETOOLONG;
+			return libbpf_err(-ENAMETOOLONG);
 
 		err = bpf_program__unpin(prog, buf);
 		if (err)
-			return err;
+			return libbpf_err(err);
 	}
 
 	return 0;
···
 
 	err = bpf_object__pin_maps(obj, path);
 	if (err)
-		return err;
+		return libbpf_err(err);
 
 	err = bpf_object__pin_programs(obj, path);
 	if (err) {
 		bpf_object__unpin_maps(obj, path);
-		return err;
+		return libbpf_err(err);
 	}
 
 	return 0;
···
 
 const char *bpf_object__name(const struct bpf_object *obj)
 {
-	return obj ? obj->name : ERR_PTR(-EINVAL);
+	return obj ? obj->name : libbpf_err_ptr(-EINVAL);
 }
 
 unsigned int bpf_object__kversion(const struct bpf_object *obj)
···
 int bpf_object__set_kversion(struct bpf_object *obj, __u32 kern_version)
 {
 	if (obj->loaded)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	obj->kern_version = kern_version;
 
···
 
 void *bpf_object__priv(const struct bpf_object *obj)
 {
-	return obj ? obj->priv : ERR_PTR(-EINVAL);
+	return obj ? obj->priv : libbpf_err_ptr(-EINVAL);
 }
 
 int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts)
···
 
 	if (p->obj != obj) {
 		pr_warn("error: program handler doesn't match object\n");
-		return NULL;
+		return errno = EINVAL, NULL;
 	}
 
 	idx = (p - obj->programs) + (forward ? 1 : -1);
···
 
 void *bpf_program__priv(const struct bpf_program *prog)
 {
-	return prog ? prog->priv : ERR_PTR(-EINVAL);
+	return prog ? prog->priv : libbpf_err_ptr(-EINVAL);
 }
 
 void bpf_program__set_ifindex(struct bpf_program *prog, __u32 ifindex)
···
 		title = strdup(title);
 		if (!title) {
 			pr_warn("failed to strdup program title\n");
-			return ERR_PTR(-ENOMEM);
+			return libbpf_err_ptr(-ENOMEM);
 		}
 	}
 
···
 int bpf_program__set_autoload(struct bpf_program *prog, bool autoload)
 {
 	if (prog->obj->loaded)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	prog->load = autoload;
 	return 0;
···
 	int *instances_fds;
 
 	if (nr_instances <= 0 || !prep)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (prog->instances.nr > 0 || prog->instances.fds) {
 		pr_warn("Can't set pre-processor after loading\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	instances_fds = malloc(sizeof(int) * nr_instances);
 	if (!instances_fds) {
 		pr_warn("alloc memory failed for fds\n");
-		return -ENOMEM;
+		return libbpf_err(-ENOMEM);
 	}
 
 	/* fill all fd with -1 */
···
 	int fd;
 
 	if (!prog)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (n >= prog->instances.nr || n < 0) {
 		pr_warn("Can't get the %dth fd from program %s: only %d instances\n",
 			n, prog->name, prog->instances.nr);
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 
 	fd = prog->instances.fds[n];
 	if (fd < 0) {
 		pr_warn("%dth instance of program '%s' is invalid\n",
 			n, prog->name);
-		return -ENOENT;
+		return libbpf_err(-ENOENT);
 	}
 
 	return fd;
···
 int bpf_program__set_##NAME(struct bpf_program *prog)		\
 {								\
 	if (!prog)						\
-		return -EINVAL;					\
+		return libbpf_err(-EINVAL);			\
 	bpf_program__set_type(prog, TYPE);			\
 	return 0;						\
 }								\
···
 
 static const struct bpf_sec_def section_defs[] = {
 	BPF_PROG_SEC("socket",			BPF_PROG_TYPE_SOCKET_FILTER),
-	BPF_PROG_SEC("sk_reuseport",		BPF_PROG_TYPE_SK_REUSEPORT),
+	BPF_EAPROG_SEC("sk_reuseport/migrate",	BPF_PROG_TYPE_SK_REUSEPORT,
+						BPF_SK_REUSEPORT_SELECT_OR_MIGRATE),
+	BPF_EAPROG_SEC("sk_reuseport",		BPF_PROG_TYPE_SK_REUSEPORT,
+						BPF_SK_REUSEPORT_SELECT),
 	SEC_DEF("kprobe/", KPROBE,
 		.attach_fn = attach_kprobe),
 	BPF_PROG_SEC("uprobe/",			BPF_PROG_TYPE_KPROBE),
···
 	char *type_names;
 
 	if (!name)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	sec_def = find_sec_def(name);
 	if (sec_def) {
···
 		free(type_names);
 	}
 
-	return -ESRCH;
+	return libbpf_err(-ESRCH);
 }
 
 static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
···
 	int err;
 
 	btf = libbpf_find_kernel_btf();
-	if (IS_ERR(btf)) {
+	err = libbpf_get_error(btf);
+	if (err) {
 		pr_warn("vmlinux BTF is not found\n");
-		return -EINVAL;
+		return libbpf_err(err);
 	}
 
 	err = find_attach_btf_id(btf, name, attach_type);
···
 		pr_warn("%s is not found in vmlinux BTF\n", name);
 
 	btf__free(btf);
-	return err;
+	return libbpf_err(err);
 }
 
 static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
···
 	int err = -EINVAL;
 
 	info_linear = bpf_program__get_prog_info_linear(attach_prog_fd, 0);
-	if (IS_ERR_OR_NULL(info_linear)) {
+	err = libbpf_get_error(info_linear);
+	if (err) {
 		pr_warn("failed get_prog_info_linear for FD %d\n",
 			attach_prog_fd);
-		return -EINVAL;
+		return err;
 	}
 	info = &info_linear->info;
 	if (!info->btf_id) {
···
 	int i;
 
 	if (!name)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	for (i = 0; i < ARRAY_SIZE(section_defs); i++) {
 		if (strncmp(name, section_defs[i].sec, section_defs[i].len))
 			continue;
 		if (!section_defs[i].is_attachable)
-			return -EINVAL;
+			return libbpf_err(-EINVAL);
 		*attach_type = section_defs[i].expected_attach_type;
 		return 0;
 	}
···
 		free(type_names);
 	}
 
-	return -EINVAL;
+	return libbpf_err(-EINVAL);
 }
 
 int bpf_map__fd(const struct bpf_map *map)
 {
-	return map ? map->fd : -EINVAL;
+	return map ? map->fd : libbpf_err(-EINVAL);
 }
 
 const struct bpf_map_def *bpf_map__def(const struct bpf_map *map)
 {
-	return map ? &map->def : ERR_PTR(-EINVAL);
+	return map ? &map->def : libbpf_err_ptr(-EINVAL);
 }
 
 const char *bpf_map__name(const struct bpf_map *map)
···
 int bpf_map__set_type(struct bpf_map *map, enum bpf_map_type type)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->def.type = type;
 	return 0;
 }
···
 int bpf_map__set_map_flags(struct bpf_map *map, __u32 flags)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->def.map_flags = flags;
 	return 0;
 }
···
 int bpf_map__set_numa_node(struct bpf_map *map, __u32 numa_node)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->numa_node = numa_node;
 	return 0;
 }
···
 int bpf_map__set_key_size(struct bpf_map *map, __u32 size)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->def.key_size = size;
 	return 0;
 }
···
 int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->def.value_size = size;
 	return 0;
 }
···
 			 bpf_map_clear_priv_t clear_priv)
 {
 	if (!map)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	if (map->priv) {
 		if (map->clear_priv)
···
 
 void *bpf_map__priv(const struct bpf_map *map)
 {
-	return map ? map->priv : ERR_PTR(-EINVAL);
+	return map ? map->priv : libbpf_err_ptr(-EINVAL);
 }
 
 int bpf_map__set_initial_value(struct bpf_map *map,
···
 {
 	if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG ||
 	    size != map->def.value_size || map->fd >= 0)
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 
 	memcpy(map->mmaped, data, size);
 	return 0;
···
 int bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex)
 {
 	if (map->fd >= 0)
-		return -EBUSY;
+		return libbpf_err(-EBUSY);
 	map->map_ifindex = ifindex;
 	return 0;
 }
···
 {
 	if (!bpf_map_type__is_map_in_map(map->def.type)) {
 		pr_warn("error: unsupported map type\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 	if (map->inner_map_fd != -1) {
 		pr_warn("error: inner_map_fd already specified\n");
-		return -EINVAL;
+		return libbpf_err(-EINVAL);
 	}
 	zfree(&map->inner_map);
 	map->inner_map_fd = fd;
···
 	struct bpf_map *s, *e;
 
 	if (!obj || !obj->maps)
-		return NULL;
+		return errno = EINVAL, NULL;
 
 	s = obj->maps;
 	e = obj->maps + obj->nr_maps;
···
 	if ((m < s) || (m >= e)) {
 		pr_warn("error in %s: map handler doesn't belong to object\n",
 			 __func__);
-		return NULL;
+		return errno = EINVAL, NULL;
 	}
 
 	idx = (m - obj->maps) + i;
···
 		if (pos->name && !strcmp(pos->name, name))
 			return pos;
 	}
-	return NULL;
+	return errno = ENOENT, NULL;
 }
 
 int
···
 struct bpf_map *
 bpf_object__find_map_by_offset(struct bpf_object *obj, size_t offset)
 {
-	return ERR_PTR(-ENOTSUP);
+	return libbpf_err_ptr(-ENOTSUP);
 }
 
 long libbpf_get_error(const void *ptr)
 {
9901 - return PTR_ERR_OR_ZERO(ptr); 9878 + if (!IS_ERR_OR_NULL(ptr)) 9879 + return 0; 9880 + 9881 + if (IS_ERR(ptr)) 9882 + errno = -PTR_ERR(ptr); 9883 + 9884 + /* If ptr == NULL, then errno should be already set by the failing 9885 + * API, because libbpf never returns NULL on success and it now always 9886 + * sets errno on error. So no extra errno handling for ptr == NULL 9887 + * case. 9888 + */ 9889 + return -errno; 9902 9890 } 9903 9891 9904 9892 int bpf_prog_load(const char *file, enum bpf_prog_type type, ··· 9935 9901 int err; 9936 9902 9937 9903 if (!attr) 9938 - return -EINVAL; 9904 + return libbpf_err(-EINVAL); 9939 9905 if (!attr->file) 9940 - return -EINVAL; 9906 + return libbpf_err(-EINVAL); 9941 9907 9942 9908 open_attr.file = attr->file; 9943 9909 open_attr.prog_type = attr->prog_type; 9944 9910 9945 9911 obj = bpf_object__open_xattr(&open_attr); 9946 - if (IS_ERR_OR_NULL(obj)) 9947 - return -ENOENT; 9912 + err = libbpf_get_error(obj); 9913 + if (err) 9914 + return libbpf_err(-ENOENT); 9948 9915 9949 9916 bpf_object__for_each_program(prog, obj) { 9950 9917 enum bpf_attach_type attach_type = attr->expected_attach_type; ··· 9965 9930 * didn't provide a fallback type, too bad... 
9966 9931 */ 9967 9932 bpf_object__close(obj); 9968 - return -EINVAL; 9933 + return libbpf_err(-EINVAL); 9969 9934 } 9970 9935 9971 9936 prog->prog_ifindex = attr->ifindex; ··· 9983 9948 if (!first_prog) { 9984 9949 pr_warn("object file doesn't contain bpf program\n"); 9985 9950 bpf_object__close(obj); 9986 - return -ENOENT; 9951 + return libbpf_err(-ENOENT); 9987 9952 } 9988 9953 9989 9954 err = bpf_object__load(obj); 9990 9955 if (err) { 9991 9956 bpf_object__close(obj); 9992 - return err; 9957 + return libbpf_err(err); 9993 9958 } 9994 9959 9995 9960 *pobj = obj; ··· 10008 9973 /* Replace link's underlying BPF program with the new one */ 10009 9974 int bpf_link__update_program(struct bpf_link *link, struct bpf_program *prog) 10010 9975 { 10011 - return bpf_link_update(bpf_link__fd(link), bpf_program__fd(prog), NULL); 9976 + int ret; 9977 + 9978 + ret = bpf_link_update(bpf_link__fd(link), bpf_program__fd(prog), NULL); 9979 + return libbpf_err_errno(ret); 10012 9980 } 10013 9981 10014 9982 /* Release "ownership" of underlying BPF resource (typically, BPF program ··· 10044 10006 free(link->pin_path); 10045 10007 free(link); 10046 10008 10047 - return err; 10009 + return libbpf_err(err); 10048 10010 } 10049 10011 10050 10012 int bpf_link__fd(const struct bpf_link *link) ··· 10059 10021 10060 10022 static int bpf_link__detach_fd(struct bpf_link *link) 10061 10023 { 10062 - return close(link->fd); 10024 + return libbpf_err_errno(close(link->fd)); 10063 10025 } 10064 10026 10065 10027 struct bpf_link *bpf_link__open(const char *path) ··· 10071 10033 if (fd < 0) { 10072 10034 fd = -errno; 10073 10035 pr_warn("failed to open link at %s: %d\n", path, fd); 10074 - return ERR_PTR(fd); 10036 + return libbpf_err_ptr(fd); 10075 10037 } 10076 10038 10077 10039 link = calloc(1, sizeof(*link)); 10078 10040 if (!link) { 10079 10041 close(fd); 10080 - return ERR_PTR(-ENOMEM); 10042 + return libbpf_err_ptr(-ENOMEM); 10081 10043 } 10082 10044 link->detach = &bpf_link__detach_fd; 
10083 10045 link->fd = fd; ··· 10085 10047 link->pin_path = strdup(path); 10086 10048 if (!link->pin_path) { 10087 10049 bpf_link__destroy(link); 10088 - return ERR_PTR(-ENOMEM); 10050 + return libbpf_err_ptr(-ENOMEM); 10089 10051 } 10090 10052 10091 10053 return link; ··· 10101 10063 int err; 10102 10064 10103 10065 if (link->pin_path) 10104 - return -EBUSY; 10066 + return libbpf_err(-EBUSY); 10105 10067 err = make_parent_dir(path); 10106 10068 if (err) 10107 - return err; 10069 + return libbpf_err(err); 10108 10070 err = check_path(path); 10109 10071 if (err) 10110 - return err; 10072 + return libbpf_err(err); 10111 10073 10112 10074 link->pin_path = strdup(path); 10113 10075 if (!link->pin_path) 10114 - return -ENOMEM; 10076 + return libbpf_err(-ENOMEM); 10115 10077 10116 10078 if (bpf_obj_pin(link->fd, link->pin_path)) { 10117 10079 err = -errno; 10118 10080 zfree(&link->pin_path); 10119 - return err; 10081 + return libbpf_err(err); 10120 10082 } 10121 10083 10122 10084 pr_debug("link fd=%d: pinned at %s\n", link->fd, link->pin_path); ··· 10128 10090 int err; 10129 10091 10130 10092 if (!link->pin_path) 10131 - return -EINVAL; 10093 + return libbpf_err(-EINVAL); 10132 10094 10133 10095 err = unlink(link->pin_path); 10134 10096 if (err != 0) 10135 - return -errno; 10097 + return libbpf_err_errno(err); 10136 10098 10137 10099 pr_debug("link fd=%d: unpinned from %s\n", link->fd, link->pin_path); 10138 10100 zfree(&link->pin_path); ··· 10148 10110 err = -errno; 10149 10111 10150 10112 close(link->fd); 10151 - return err; 10113 + return libbpf_err(err); 10152 10114 } 10153 10115 10154 - struct bpf_link *bpf_program__attach_perf_event(struct bpf_program *prog, 10155 - int pfd) 10116 + struct bpf_link *bpf_program__attach_perf_event(struct bpf_program *prog, int pfd) 10156 10117 { 10157 10118 char errmsg[STRERR_BUFSIZE]; 10158 10119 struct bpf_link *link; ··· 10160 10123 if (pfd < 0) { 10161 10124 pr_warn("prog '%s': invalid perf event FD %d\n", 10162 10125 
prog->name, pfd); 10163 - return ERR_PTR(-EINVAL); 10126 + return libbpf_err_ptr(-EINVAL); 10164 10127 } 10165 10128 prog_fd = bpf_program__fd(prog); 10166 10129 if (prog_fd < 0) { 10167 10130 pr_warn("prog '%s': can't attach BPF program w/o FD (did you load it?)\n", 10168 10131 prog->name); 10169 - return ERR_PTR(-EINVAL); 10132 + return libbpf_err_ptr(-EINVAL); 10170 10133 } 10171 10134 10172 10135 link = calloc(1, sizeof(*link)); 10173 10136 if (!link) 10174 - return ERR_PTR(-ENOMEM); 10137 + return libbpf_err_ptr(-ENOMEM); 10175 10138 link->detach = &bpf_link__detach_perf_event; 10176 10139 link->fd = pfd; 10177 10140 ··· 10183 10146 if (err == -EPROTO) 10184 10147 pr_warn("prog '%s': try add PERF_SAMPLE_CALLCHAIN to or remove exclude_callchain_[kernel|user] from pfd %d\n", 10185 10148 prog->name, pfd); 10186 - return ERR_PTR(err); 10149 + return libbpf_err_ptr(err); 10187 10150 } 10188 10151 if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) { 10189 10152 err = -errno; 10190 10153 free(link); 10191 10154 pr_warn("prog '%s': failed to enable pfd %d: %s\n", 10192 10155 prog->name, pfd, libbpf_strerror_r(err, errmsg, sizeof(errmsg))); 10193 - return ERR_PTR(err); 10156 + return libbpf_err_ptr(err); 10194 10157 } 10195 10158 return link; 10196 10159 } ··· 10314 10277 pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n", 10315 10278 prog->name, retprobe ? "kretprobe" : "kprobe", func_name, 10316 10279 libbpf_strerror_r(pfd, errmsg, sizeof(errmsg))); 10317 - return ERR_PTR(pfd); 10280 + return libbpf_err_ptr(pfd); 10318 10281 } 10319 10282 link = bpf_program__attach_perf_event(prog, pfd); 10320 - if (IS_ERR(link)) { 10283 + err = libbpf_get_error(link); 10284 + if (err) { 10321 10285 close(pfd); 10322 - err = PTR_ERR(link); 10323 10286 pr_warn("prog '%s': failed to attach to %s '%s': %s\n", 10324 10287 prog->name, retprobe ? 
"kretprobe" : "kprobe", func_name, 10325 10288 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); 10326 - return link; 10289 + return libbpf_err_ptr(err); 10327 10290 } 10328 10291 return link; 10329 10292 } ··· 10356 10319 prog->name, retprobe ? "uretprobe" : "uprobe", 10357 10320 binary_path, func_offset, 10358 10321 libbpf_strerror_r(pfd, errmsg, sizeof(errmsg))); 10359 - return ERR_PTR(pfd); 10322 + return libbpf_err_ptr(pfd); 10360 10323 } 10361 10324 link = bpf_program__attach_perf_event(prog, pfd); 10362 - if (IS_ERR(link)) { 10325 + err = libbpf_get_error(link); 10326 + if (err) { 10363 10327 close(pfd); 10364 - err = PTR_ERR(link); 10365 10328 pr_warn("prog '%s': failed to attach to %s '%s:0x%zx': %s\n", 10366 10329 prog->name, retprobe ? "uretprobe" : "uprobe", 10367 10330 binary_path, func_offset, 10368 10331 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); 10369 - return link; 10332 + return libbpf_err_ptr(err); 10370 10333 } 10371 10334 return link; 10372 10335 } ··· 10434 10397 pr_warn("prog '%s': failed to create tracepoint '%s/%s' perf event: %s\n", 10435 10398 prog->name, tp_category, tp_name, 10436 10399 libbpf_strerror_r(pfd, errmsg, sizeof(errmsg))); 10437 - return ERR_PTR(pfd); 10400 + return libbpf_err_ptr(pfd); 10438 10401 } 10439 10402 link = bpf_program__attach_perf_event(prog, pfd); 10440 - if (IS_ERR(link)) { 10403 + err = libbpf_get_error(link); 10404 + if (err) { 10441 10405 close(pfd); 10442 - err = PTR_ERR(link); 10443 10406 pr_warn("prog '%s': failed to attach to tracepoint '%s/%s': %s\n", 10444 10407 prog->name, tp_category, tp_name, 10445 10408 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); 10446 - return link; 10409 + return libbpf_err_ptr(err); 10447 10410 } 10448 10411 return link; 10449 10412 } ··· 10456 10419 10457 10420 sec_name = strdup(prog->sec_name); 10458 10421 if (!sec_name) 10459 - return ERR_PTR(-ENOMEM); 10422 + return libbpf_err_ptr(-ENOMEM); 10460 10423 10461 10424 /* extract "tp/<category>/<name>" */ 10462 10425 
tp_cat = sec_name + sec->len; 10463 10426 tp_name = strchr(tp_cat, '/'); 10464 10427 if (!tp_name) { 10465 - link = ERR_PTR(-EINVAL); 10466 - goto out; 10428 + free(sec_name); 10429 + return libbpf_err_ptr(-EINVAL); 10467 10430 } 10468 10431 *tp_name = '\0'; 10469 10432 tp_name++; 10470 10433 10471 10434 link = bpf_program__attach_tracepoint(prog, tp_cat, tp_name); 10472 - out: 10473 10435 free(sec_name); 10474 10436 return link; 10475 10437 } ··· 10483 10447 prog_fd = bpf_program__fd(prog); 10484 10448 if (prog_fd < 0) { 10485 10449 pr_warn("prog '%s': can't attach before loaded\n", prog->name); 10486 - return ERR_PTR(-EINVAL); 10450 + return libbpf_err_ptr(-EINVAL); 10487 10451 } 10488 10452 10489 10453 link = calloc(1, sizeof(*link)); 10490 10454 if (!link) 10491 - return ERR_PTR(-ENOMEM); 10455 + return libbpf_err_ptr(-ENOMEM); 10492 10456 link->detach = &bpf_link__detach_fd; 10493 10457 10494 10458 pfd = bpf_raw_tracepoint_open(tp_name, prog_fd); ··· 10497 10461 free(link); 10498 10462 pr_warn("prog '%s': failed to attach to raw tracepoint '%s': %s\n", 10499 10463 prog->name, tp_name, libbpf_strerror_r(pfd, errmsg, sizeof(errmsg))); 10500 - return ERR_PTR(pfd); 10464 + return libbpf_err_ptr(pfd); 10501 10465 } 10502 10466 link->fd = pfd; 10503 10467 return link; ··· 10521 10485 prog_fd = bpf_program__fd(prog); 10522 10486 if (prog_fd < 0) { 10523 10487 pr_warn("prog '%s': can't attach before loaded\n", prog->name); 10524 - return ERR_PTR(-EINVAL); 10488 + return libbpf_err_ptr(-EINVAL); 10525 10489 } 10526 10490 10527 10491 link = calloc(1, sizeof(*link)); 10528 10492 if (!link) 10529 - return ERR_PTR(-ENOMEM); 10493 + return libbpf_err_ptr(-ENOMEM); 10530 10494 link->detach = &bpf_link__detach_fd; 10531 10495 10532 10496 pfd = bpf_raw_tracepoint_open(NULL, prog_fd); ··· 10535 10499 free(link); 10536 10500 pr_warn("prog '%s': failed to attach: %s\n", 10537 10501 prog->name, libbpf_strerror_r(pfd, errmsg, sizeof(errmsg))); 10538 - return ERR_PTR(pfd); 10502 + 
return libbpf_err_ptr(pfd); 10539 10503 } 10540 10504 link->fd = pfd; 10541 10505 return (struct bpf_link *)link; ··· 10563 10527 return bpf_program__attach_lsm(prog); 10564 10528 } 10565 10529 10566 - static struct bpf_link *attach_iter(const struct bpf_sec_def *sec, 10567 - struct bpf_program *prog) 10568 - { 10569 - return bpf_program__attach_iter(prog, NULL); 10570 - } 10571 - 10572 10530 static struct bpf_link * 10573 10531 bpf_program__attach_fd(struct bpf_program *prog, int target_fd, int btf_id, 10574 10532 const char *target_name) ··· 10577 10547 prog_fd = bpf_program__fd(prog); 10578 10548 if (prog_fd < 0) { 10579 10549 pr_warn("prog '%s': can't attach before loaded\n", prog->name); 10580 - return ERR_PTR(-EINVAL); 10550 + return libbpf_err_ptr(-EINVAL); 10581 10551 } 10582 10552 10583 10553 link = calloc(1, sizeof(*link)); 10584 10554 if (!link) 10585 - return ERR_PTR(-ENOMEM); 10555 + return libbpf_err_ptr(-ENOMEM); 10586 10556 link->detach = &bpf_link__detach_fd; 10587 10557 10588 10558 attach_type = bpf_program__get_expected_attach_type(prog); ··· 10593 10563 pr_warn("prog '%s': failed to attach to %s: %s\n", 10594 10564 prog->name, target_name, 10595 10565 libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg))); 10596 - return ERR_PTR(link_fd); 10566 + return libbpf_err_ptr(link_fd); 10597 10567 } 10598 10568 link->fd = link_fd; 10599 10569 return link; ··· 10626 10596 if (!!target_fd != !!attach_func_name) { 10627 10597 pr_warn("prog '%s': supply none or both of target_fd and attach_func_name\n", 10628 10598 prog->name); 10629 - return ERR_PTR(-EINVAL); 10599 + return libbpf_err_ptr(-EINVAL); 10630 10600 } 10631 10601 10632 10602 if (prog->type != BPF_PROG_TYPE_EXT) { 10633 10603 pr_warn("prog '%s': only BPF_PROG_TYPE_EXT can attach as freplace", 10634 10604 prog->name); 10635 - return ERR_PTR(-EINVAL); 10605 + return libbpf_err_ptr(-EINVAL); 10636 10606 } 10637 10607 10638 10608 if (target_fd) { 10639 10609 btf_id = 
libbpf_find_prog_btf_id(attach_func_name, target_fd); 10640 10610 if (btf_id < 0) 10641 - return ERR_PTR(btf_id); 10611 + return libbpf_err_ptr(btf_id); 10642 10612 10643 10613 return bpf_program__attach_fd(prog, target_fd, btf_id, "freplace"); 10644 10614 } else { ··· 10660 10630 __u32 target_fd = 0; 10661 10631 10662 10632 if (!OPTS_VALID(opts, bpf_iter_attach_opts)) 10663 - return ERR_PTR(-EINVAL); 10633 + return libbpf_err_ptr(-EINVAL); 10664 10634 10665 10635 link_create_opts.iter_info = OPTS_GET(opts, link_info, (void *)0); 10666 10636 link_create_opts.iter_info_len = OPTS_GET(opts, link_info_len, 0); ··· 10668 10638 prog_fd = bpf_program__fd(prog); 10669 10639 if (prog_fd < 0) { 10670 10640 pr_warn("prog '%s': can't attach before loaded\n", prog->name); 10671 - return ERR_PTR(-EINVAL); 10641 + return libbpf_err_ptr(-EINVAL); 10672 10642 } 10673 10643 10674 10644 link = calloc(1, sizeof(*link)); 10675 10645 if (!link) 10676 - return ERR_PTR(-ENOMEM); 10646 + return libbpf_err_ptr(-ENOMEM); 10677 10647 link->detach = &bpf_link__detach_fd; 10678 10648 10679 10649 link_fd = bpf_link_create(prog_fd, target_fd, BPF_TRACE_ITER, ··· 10683 10653 free(link); 10684 10654 pr_warn("prog '%s': failed to attach to iterator: %s\n", 10685 10655 prog->name, libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg))); 10686 - return ERR_PTR(link_fd); 10656 + return libbpf_err_ptr(link_fd); 10687 10657 } 10688 10658 link->fd = link_fd; 10689 10659 return link; 10660 + } 10661 + 10662 + static struct bpf_link *attach_iter(const struct bpf_sec_def *sec, 10663 + struct bpf_program *prog) 10664 + { 10665 + return bpf_program__attach_iter(prog, NULL); 10690 10666 } 10691 10667 10692 10668 struct bpf_link *bpf_program__attach(struct bpf_program *prog) ··· 10701 10665 10702 10666 sec_def = find_sec_def(prog->sec_name); 10703 10667 if (!sec_def || !sec_def->attach_fn) 10704 - return ERR_PTR(-ESRCH); 10668 + return libbpf_err_ptr(-ESRCH); 10705 10669 10706 10670 return 
sec_def->attach_fn(sec_def, prog); 10707 10671 } ··· 10724 10688 int err; 10725 10689 10726 10690 if (!bpf_map__is_struct_ops(map) || map->fd == -1) 10727 - return ERR_PTR(-EINVAL); 10691 + return libbpf_err_ptr(-EINVAL); 10728 10692 10729 10693 link = calloc(1, sizeof(*link)); 10730 10694 if (!link) 10731 - return ERR_PTR(-EINVAL); 10695 + return libbpf_err_ptr(-EINVAL); 10732 10696 10733 10697 st_ops = map->st_ops; 10734 10698 for (i = 0; i < btf_vlen(st_ops->type); i++) { ··· 10748 10712 if (err) { 10749 10713 err = -errno; 10750 10714 free(link); 10751 - return ERR_PTR(err); 10715 + return libbpf_err_ptr(err); 10752 10716 } 10753 10717 10754 10718 link->detach = bpf_link__detach_struct_ops; ··· 10802 10766 } 10803 10767 10804 10768 ring_buffer_write_tail(header, data_tail); 10805 - return ret; 10769 + return libbpf_err(ret); 10806 10770 } 10807 10771 10808 10772 struct perf_buffer; ··· 10955 10919 p.lost_cb = opts ? opts->lost_cb : NULL; 10956 10920 p.ctx = opts ? opts->ctx : NULL; 10957 10921 10958 - return __perf_buffer__new(map_fd, page_cnt, &p); 10922 + return libbpf_ptr(__perf_buffer__new(map_fd, page_cnt, &p)); 10959 10923 } 10960 10924 10961 10925 struct perf_buffer * ··· 10971 10935 p.cpus = opts->cpus; 10972 10936 p.map_keys = opts->map_keys; 10973 10937 10974 - return __perf_buffer__new(map_fd, page_cnt, &p); 10938 + return libbpf_ptr(__perf_buffer__new(map_fd, page_cnt, &p)); 10975 10939 } 10976 10940 10977 10941 static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt, ··· 11192 11156 int i, cnt, err; 11193 11157 11194 11158 cnt = epoll_wait(pb->epoll_fd, pb->events, pb->cpu_cnt, timeout_ms); 11159 + if (cnt < 0) 11160 + return libbpf_err_errno(cnt); 11161 + 11195 11162 for (i = 0; i < cnt; i++) { 11196 11163 struct perf_cpu_buf *cpu_buf = pb->events[i].data.ptr; 11197 11164 11198 11165 err = perf_buffer__process_records(pb, cpu_buf); 11199 11166 if (err) { 11200 11167 pr_warn("error while processing records: %d\n", err); 11201 - 
return err; 11168 + return libbpf_err(err); 11202 11169 } 11203 11170 } 11204 - return cnt < 0 ? -errno : cnt; 11171 + return cnt; 11205 11172 } 11206 11173 11207 11174 /* Return number of PERF_EVENT_ARRAY map slots set up by this perf_buffer ··· 11225 11186 struct perf_cpu_buf *cpu_buf; 11226 11187 11227 11188 if (buf_idx >= pb->cpu_cnt) 11228 - return -EINVAL; 11189 + return libbpf_err(-EINVAL); 11229 11190 11230 11191 cpu_buf = pb->cpu_bufs[buf_idx]; 11231 11192 if (!cpu_buf) 11232 - return -ENOENT; 11193 + return libbpf_err(-ENOENT); 11233 11194 11234 11195 return cpu_buf->fd; 11235 11196 } ··· 11247 11208 struct perf_cpu_buf *cpu_buf; 11248 11209 11249 11210 if (buf_idx >= pb->cpu_cnt) 11250 - return -EINVAL; 11211 + return libbpf_err(-EINVAL); 11251 11212 11252 11213 cpu_buf = pb->cpu_bufs[buf_idx]; 11253 11214 if (!cpu_buf) 11254 - return -ENOENT; 11215 + return libbpf_err(-ENOENT); 11255 11216 11256 11217 return perf_buffer__process_records(pb, cpu_buf); 11257 11218 } ··· 11269 11230 err = perf_buffer__process_records(pb, cpu_buf); 11270 11231 if (err) { 11271 11232 pr_warn("perf_buffer: failed to process records in buffer #%d: %d\n", i, err); 11272 - return err; 11233 + return libbpf_err(err); 11273 11234 } 11274 11235 } 11275 11236 return 0; ··· 11381 11342 void *ptr; 11382 11343 11383 11344 if (arrays >> BPF_PROG_INFO_LAST_ARRAY) 11384 - return ERR_PTR(-EINVAL); 11345 + return libbpf_err_ptr(-EINVAL); 11385 11346 11386 11347 /* step 1: get array dimensions */ 11387 11348 err = bpf_obj_get_info_by_fd(fd, &info, &info_len); 11388 11349 if (err) { 11389 11350 pr_debug("can't get prog info: %s", strerror(errno)); 11390 - return ERR_PTR(-EFAULT); 11351 + return libbpf_err_ptr(-EFAULT); 11391 11352 } 11392 11353 11393 11354 /* step 2: calculate total size of all arrays */ ··· 11419 11380 data_len = roundup(data_len, sizeof(__u64)); 11420 11381 info_linear = malloc(sizeof(struct bpf_prog_info_linear) + data_len); 11421 11382 if (!info_linear) 11422 - return 
ERR_PTR(-ENOMEM); 11383 + return libbpf_err_ptr(-ENOMEM); 11423 11384 11424 11385 /* step 4: fill data to info_linear->info */ 11425 11386 info_linear->arrays = arrays; ··· 11451 11412 if (err) { 11452 11413 pr_debug("can't get prog info: %s", strerror(errno)); 11453 11414 free(info_linear); 11454 - return ERR_PTR(-EFAULT); 11415 + return libbpf_err_ptr(-EFAULT); 11455 11416 } 11456 11417 11457 11418 /* step 6: verify the data */ ··· 11530 11491 int btf_obj_fd = 0, btf_id = 0, err; 11531 11492 11532 11493 if (!prog || attach_prog_fd < 0 || !attach_func_name) 11533 - return -EINVAL; 11494 + return libbpf_err(-EINVAL); 11534 11495 11535 11496 if (prog->obj->loaded) 11536 - return -EINVAL; 11497 + return libbpf_err(-EINVAL); 11537 11498 11538 11499 if (attach_prog_fd) { 11539 11500 btf_id = libbpf_find_prog_btf_id(attach_func_name, 11540 11501 attach_prog_fd); 11541 11502 if (btf_id < 0) 11542 - return btf_id; 11503 + return libbpf_err(btf_id); 11543 11504 } else { 11544 11505 /* load btf_vmlinux, if not yet */ 11545 11506 err = bpf_object__load_vmlinux_btf(prog->obj, true); 11546 11507 if (err) 11547 - return err; 11508 + return libbpf_err(err); 11548 11509 err = find_kernel_btf_id(prog->obj, attach_func_name, 11549 11510 prog->expected_attach_type, 11550 11511 &btf_obj_fd, &btf_id); 11551 11512 if (err) 11552 - return err; 11513 + return libbpf_err(err); 11553 11514 } 11554 11515 11555 11516 prog->attach_btf_id = btf_id; ··· 11648 11609 11649 11610 err = parse_cpu_mask_file(fcpu, &mask, &n); 11650 11611 if (err) 11651 - return err; 11612 + return libbpf_err(err); 11652 11613 11653 11614 tmp_cpus = 0; 11654 11615 for (i = 0; i < n; i++) { ··· 11668 11629 .object_name = s->name, 11669 11630 ); 11670 11631 struct bpf_object *obj; 11671 - int i; 11632 + int i, err; 11672 11633 11673 11634 /* Attempt to preserve opts->object_name, unless overriden by user 11674 11635 * explicitly. 
Overwriting object name for skeletons is discouraged, ··· 11683 11644 } 11684 11645 11685 11646 obj = bpf_object__open_mem(s->data, s->data_sz, &skel_opts); 11686 - if (IS_ERR(obj)) { 11687 - pr_warn("failed to initialize skeleton BPF object '%s': %ld\n", 11688 - s->name, PTR_ERR(obj)); 11689 - return PTR_ERR(obj); 11647 + err = libbpf_get_error(obj); 11648 + if (err) { 11649 + pr_warn("failed to initialize skeleton BPF object '%s': %d\n", 11650 + s->name, err); 11651 + return libbpf_err(err); 11690 11652 } 11691 11653 11692 11654 *s->obj = obj; ··· 11700 11660 *map = bpf_object__find_map_by_name(obj, name); 11701 11661 if (!*map) { 11702 11662 pr_warn("failed to find skeleton map '%s'\n", name); 11703 - return -ESRCH; 11663 + return libbpf_err(-ESRCH); 11704 11664 } 11705 11665 11706 11666 /* externs shouldn't be pre-setup from user code */ ··· 11715 11675 *prog = bpf_object__find_program_by_name(obj, name); 11716 11676 if (!*prog) { 11717 11677 pr_warn("failed to find skeleton program '%s'\n", name); 11718 - return -ESRCH; 11678 + return libbpf_err(-ESRCH); 11719 11679 } 11720 11680 } 11721 11681 ··· 11729 11689 err = bpf_object__load(*s->obj); 11730 11690 if (err) { 11731 11691 pr_warn("failed to load BPF skeleton '%s': %d\n", s->name, err); 11732 - return err; 11692 + return libbpf_err(err); 11733 11693 } 11734 11694 11735 11695 for (i = 0; i < s->map_cnt; i++) { ··· 11768 11728 *mmaped = NULL; 11769 11729 pr_warn("failed to re-mmap() map '%s': %d\n", 11770 11730 bpf_map__name(map), err); 11771 - return err; 11731 + return libbpf_err(err); 11772 11732 } 11773 11733 } 11774 11734 ··· 11777 11737 11778 11738 int bpf_object__attach_skeleton(struct bpf_object_skeleton *s) 11779 11739 { 11780 - int i; 11740 + int i, err; 11781 11741 11782 11742 for (i = 0; i < s->prog_cnt; i++) { 11783 11743 struct bpf_program *prog = *s->progs[i].prog; ··· 11792 11752 continue; 11793 11753 11794 11754 *link = sec_def->attach_fn(sec_def, prog); 11795 - if (IS_ERR(*link)) { 11796 - 
pr_warn("failed to auto-attach program '%s': %ld\n", 11797 - bpf_program__name(prog), PTR_ERR(*link)); 11798 - return PTR_ERR(*link); 11755 + err = libbpf_get_error(*link); 11756 + if (err) { 11757 + pr_warn("failed to auto-attach program '%s': %d\n", 11758 + bpf_program__name(prog), err); 11759 + return libbpf_err(err); 11799 11760 } 11800 11761 } 11801 11762
+1
tools/lib/bpf/libbpf.h
··· 18 18 #include <linux/bpf.h> 19 19 20 20 #include "libbpf_common.h" 21 + #include "libbpf_legacy.h" 21 22 22 23 #ifdef __cplusplus 23 24 extern "C" {
+8 -2
tools/lib/bpf/libbpf.map
··· 359 359 bpf_linker__finalize; 360 360 bpf_linker__free; 361 361 bpf_linker__new; 362 - bpf_map__initial_value; 363 362 bpf_map__inner_map; 364 - bpf_object__gen_loader; 365 363 bpf_object__set_kversion; 366 364 bpf_tc_attach; 367 365 bpf_tc_detach; ··· 367 369 bpf_tc_hook_destroy; 368 370 bpf_tc_query; 369 371 } LIBBPF_0.3.0; 372 + 373 + LIBBPF_0.5.0 { 374 + global: 375 + bpf_map__initial_value; 376 + bpf_map_lookup_and_delete_elem_flags; 377 + bpf_object__gen_loader; 378 + libbpf_set_strict_mode; 379 + } LIBBPF_0.4.0;
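libbpf.map is a GNU ld version script; the hunk above retroactively moves `bpf_map__initial_value` and `bpf_object__gen_loader` out of the (unreleased) LIBBPF_0.4.0 node into a new LIBBPF_0.5.0 node that also exports `libbpf_set_strict_mode`. The general shape of such a script, with hypothetical symbol names:

```
/* hypothetical minimal version script: one node per release */
LIBBPF_0.4.0 {
	global:
		old_symbol;
	local:
		*;		/* everything else stays hidden */
};

LIBBPF_0.5.0 {
	global:
		new_symbol;
} LIBBPF_0.4.0;		/* chains to, and extends, the 0.4.0 node */
```

Binaries linked against the library record the version node of each symbol they use, so a symbol may only move between nodes before the node ships in a release.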
+4 -3
tools/lib/bpf/libbpf_errno.c
··· 12 12 #include <string.h> 13 13 14 14 #include "libbpf.h" 15 + #include "libbpf_internal.h" 15 16 16 17 /* make sure libbpf doesn't use kernel-only integer typedefs */ 17 18 #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64 ··· 40 39 int libbpf_strerror(int err, char *buf, size_t size) 41 40 { 42 41 if (!buf || !size) 43 - return -1; 42 + return libbpf_err(-EINVAL); 44 43 45 44 err = err > 0 ? err : -err; 46 45 ··· 49 48 50 49 ret = strerror_r(err, buf, size); 51 50 buf[size - 1] = '\0'; 52 - return ret; 51 + return libbpf_err_errno(ret); 53 52 } 54 53 55 54 if (err < __LIBBPF_ERRNO__END) { ··· 63 62 64 63 snprintf(buf, size, "Unknown libbpf error %d", err); 65 64 buf[size - 1] = '\0'; 66 - return -1; 65 + return libbpf_err(-ENOENT); 67 66 }
+59
tools/lib/bpf/libbpf_internal.h
··· 11 11 12 12 #include <stdlib.h> 13 13 #include <limits.h> 14 + #include <errno.h> 15 + #include <linux/err.h> 16 + #include "libbpf_legacy.h" 14 17 15 18 /* make sure libbpf doesn't use kernel-only integer typedefs */ 16 19 #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64 ··· 30 27 31 28 #ifndef R_BPF_64_64 32 29 #define R_BPF_64_64 1 30 + #endif 31 + #ifndef R_BPF_64_ABS64 32 + #define R_BPF_64_ABS64 2 33 + #endif 34 + #ifndef R_BPF_64_ABS32 35 + #define R_BPF_64_ABS32 3 33 36 #endif 34 37 #ifndef R_BPF_64_32 35 38 #define R_BPF_64_32 10 ··· 443 434 int btf_type_visit_str_offs(struct btf_type *t, str_off_visit_fn visit, void *ctx); 444 435 int btf_ext_visit_type_ids(struct btf_ext *btf_ext, type_id_visit_fn visit, void *ctx); 445 436 int btf_ext_visit_str_offs(struct btf_ext *btf_ext, str_off_visit_fn visit, void *ctx); 437 + 438 + extern enum libbpf_strict_mode libbpf_mode; 439 + 440 + /* handle direct returned errors */ 441 + static inline int libbpf_err(int ret) 442 + { 443 + if (ret < 0) 444 + errno = -ret; 445 + return ret; 446 + } 447 + 448 + /* handle errno-based (e.g., syscall or libc) errors according to libbpf's 449 + * strict mode settings 450 + */ 451 + static inline int libbpf_err_errno(int ret) 452 + { 453 + if (libbpf_mode & LIBBPF_STRICT_DIRECT_ERRS) 454 + /* errno is already assumed to be set on error */ 455 + return ret < 0 ? 
-errno : ret; 456 + 457 + /* legacy: on error return -1 directly and don't touch errno */ 458 + return ret; 459 + } 460 + 461 + /* handle error for pointer-returning APIs, err is assumed to be < 0 always */ 462 + static inline void *libbpf_err_ptr(int err) 463 + { 464 + /* set errno on error, this doesn't break anything */ 465 + errno = -err; 466 + 467 + if (libbpf_mode & LIBBPF_STRICT_CLEAN_PTRS) 468 + return NULL; 469 + 470 + /* legacy: encode err as ptr */ 471 + return ERR_PTR(err); 472 + } 473 + 474 + /* handle pointer-returning APIs' error handling */ 475 + static inline void *libbpf_ptr(void *ret) 476 + { 477 + /* set errno on error, this doesn't break anything */ 478 + if (IS_ERR(ret)) 479 + errno = -PTR_ERR(ret); 480 + 481 + if (libbpf_mode & LIBBPF_STRICT_CLEAN_PTRS) 482 + return IS_ERR(ret) ? NULL : ret; 483 + 484 + /* legacy: pass-through original pointer */ 485 + return ret; 486 + } 446 487 447 488 #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
+59
tools/lib/bpf/libbpf_legacy.h
··· 1 + /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ 2 + 3 + /* 4 + * Libbpf legacy APIs (either discouraged or deprecated, as mentioned in [0]) 5 + * 6 + * [0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY 7 + * 8 + * Copyright (C) 2021 Facebook 9 + */ 10 + #ifndef __LIBBPF_LEGACY_BPF_H 11 + #define __LIBBPF_LEGACY_BPF_H 12 + 13 + #include <linux/bpf.h> 14 + #include <stdbool.h> 15 + #include <stddef.h> 16 + #include <stdint.h> 17 + #include "libbpf_common.h" 18 + 19 + #ifdef __cplusplus 20 + extern "C" { 21 + #endif 22 + 23 + enum libbpf_strict_mode { 24 + /* Turn on all supported strict features of libbpf to simulate libbpf 25 + * v1.0 behavior. 26 + * This will be the default behavior in libbpf v1.0. 27 + */ 28 + LIBBPF_STRICT_ALL = 0xffffffff, 29 + 30 + /* 31 + * Disable any libbpf 1.0 behaviors. This is the default before libbpf 32 + * v1.0. It won't be supported anymore in v1.0, please update your 33 + * code so that it handles LIBBPF_STRICT_ALL mode before libbpf v1.0. 34 + */ 35 + LIBBPF_STRICT_NONE = 0x00, 36 + /* 37 + * Return NULL pointers on error, not ERR_PTR(err). 38 + * Additionally, libbpf also always sets errno to corresponding Exx 39 + * (positive) error code. 40 + */ 41 + LIBBPF_STRICT_CLEAN_PTRS = 0x01, 42 + /* 43 + * Return actual error codes from low-level APIs directly, not just -1. 44 + * Additionally, libbpf also always sets errno to corresponding Exx 45 + * (positive) error code. 46 + */ 47 + LIBBPF_STRICT_DIRECT_ERRS = 0x02, 48 + 49 + __LIBBPF_STRICT_LAST, 50 + }; 51 + 52 + LIBBPF_API int libbpf_set_strict_mode(enum libbpf_strict_mode mode); 53 + 54 + 55 + #ifdef __cplusplus 56 + } /* extern "C" */ 57 + #endif 58 + 59 + #endif /* __LIBBPF_LEGACY_BPF_H */
+13 -12
tools/lib/bpf/linker.c
··· 220 220 int err; 221 221 222 222 if (!OPTS_VALID(opts, bpf_linker_opts)) 223 - return NULL; 223 + return errno = EINVAL, NULL; 224 224 225 225 if (elf_version(EV_CURRENT) == EV_NONE) { 226 226 pr_warn_elf("libelf initialization failed"); 227 - return NULL; 227 + return errno = EINVAL, NULL; 228 228 } 229 229 230 230 linker = calloc(1, sizeof(*linker)); 231 231 if (!linker) 232 - return NULL; 232 + return errno = ENOMEM, NULL; 233 233 234 234 linker->fd = -1; 235 235 ··· 241 241 242 242 err_out: 243 243 bpf_linker__free(linker); 244 - return NULL; 244 + return errno = -err, NULL; 245 245 } 246 246 247 247 static struct dst_sec *add_dst_sec(struct bpf_linker *linker, const char *sec_name) ··· 444 444 int err = 0; 445 445 446 446 if (!OPTS_VALID(opts, bpf_linker_file_opts)) 447 - return -EINVAL; 447 + return libbpf_err(-EINVAL); 448 448 449 449 if (!linker->elf) 450 - return -EINVAL; 450 + return libbpf_err(-EINVAL); 451 451 452 452 err = err ?: linker_load_obj_file(linker, filename, opts, &obj); 453 453 err = err ?: linker_append_sec_data(linker, &obj); ··· 467 467 if (obj.fd >= 0) 468 468 close(obj.fd); 469 469 470 - return err; 470 + return libbpf_err(err); 471 471 } 472 472 473 473 static bool is_dwarf_sec_name(const char *name) ··· 892 892 size_t sym_idx = ELF64_R_SYM(relo->r_info); 893 893 size_t sym_type = ELF64_R_TYPE(relo->r_info); 894 894 895 - if (sym_type != R_BPF_64_64 && sym_type != R_BPF_64_32) { 895 + if (sym_type != R_BPF_64_64 && sym_type != R_BPF_64_32 && 896 + sym_type != R_BPF_64_ABS64 && sym_type != R_BPF_64_ABS32) { 896 897 pr_warn("ELF relo #%d in section #%zu has unexpected type %zu in %s\n", 897 898 i, sec->sec_idx, sym_type, obj->filename); 898 899 return -EINVAL; ··· 2548 2547 int err, i; 2549 2548 2550 2549 if (!linker->elf) 2551 - return -EINVAL; 2550 + return libbpf_err(-EINVAL); 2552 2551 2553 2552 err = finalize_btf(linker); 2554 2553 if (err) 2555 - return err; 2554 + return libbpf_err(err); 2556 2555 2557 2556 /* Finalize strings 
*/ 2558 2557 strs_sz = strset__data_size(linker->strtab_strs); ··· 2584 2583 if (elf_update(linker->elf, ELF_C_NULL) < 0) { 2585 2584 err = -errno; 2586 2585 pr_warn_elf("failed to finalize ELF layout"); 2587 - return err; 2586 + return libbpf_err(err); 2588 2587 } 2589 2588 2590 2589 /* Write out final ELF contents */ 2591 2590 if (elf_update(linker->elf, ELF_C_WRITE) < 0) { 2592 2591 err = -errno; 2593 2592 pr_warn_elf("failed to write ELF contents"); 2594 - return err; 2593 + return libbpf_err(err); 2595 2594 } 2596 2595 2597 2596 elf_end(linker->elf);
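The constructors in this hunk adopt libbpf's v1.0 error-reporting convention for pointer-returning APIs: on failure, set errno and return NULL in a single comma-expression. A minimal self-contained sketch of the pattern (the `widget` type and function are hypothetical, not libbpf API):

```c
#include <errno.h>
#include <stdlib.h>

struct widget { int fd; };

/* Hypothetical constructor illustrating the comma-expression
 * return used in bpf_linker__new(): on failure, set errno and
 * return NULL in one statement so callers get both signals. */
static struct widget *widget__new(int flags)
{
	struct widget *w;

	if (flags != 0)			/* no flags supported */
		return errno = EINVAL, NULL;

	w = calloc(1, sizeof(*w));
	if (!w)
		return errno = ENOMEM, NULL;

	w->fd = -1;			/* not yet opened */
	return w;
}
```

A caller checks `if (!w)` and then inspects `errno`, which is how NULL-returning libbpf v1.0 APIs are meant to be consumed, replacing the older `IS_ERR()`/`PTR_ERR()` style.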
+48 -37
tools/lib/bpf/netlink.c
··· 225 225 int bpf_set_link_xdp_fd_opts(int ifindex, int fd, __u32 flags, 226 226 const struct bpf_xdp_set_link_opts *opts) 227 227 { 228 - int old_fd = -1; 228 + int old_fd = -1, ret; 229 229 230 230 if (!OPTS_VALID(opts, bpf_xdp_set_link_opts)) 231 - return -EINVAL; 231 + return libbpf_err(-EINVAL); 232 232 233 233 if (OPTS_HAS(opts, old_fd)) { 234 234 old_fd = OPTS_GET(opts, old_fd, -1); 235 235 flags |= XDP_FLAGS_REPLACE; 236 236 } 237 237 238 - return __bpf_set_link_xdp_fd_replace(ifindex, fd, old_fd, flags); 238 + ret = __bpf_set_link_xdp_fd_replace(ifindex, fd, old_fd, flags); 239 + return libbpf_err(ret); 239 240 } 240 241 241 242 int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags) 242 243 { 243 - return __bpf_set_link_xdp_fd_replace(ifindex, fd, 0, flags); 244 + int ret; 245 + 246 + ret = __bpf_set_link_xdp_fd_replace(ifindex, fd, 0, flags); 247 + return libbpf_err(ret); 244 248 } 245 249 246 250 static int __dump_link_nlmsg(struct nlmsghdr *nlh, ··· 325 321 }; 326 322 327 323 if (flags & ~XDP_FLAGS_MASK || !info_size) 328 - return -EINVAL; 324 + return libbpf_err(-EINVAL); 329 325 330 326 /* Check whether the single {HW,DRV,SKB} mode is set */ 331 327 flags &= (XDP_FLAGS_SKB_MODE | XDP_FLAGS_DRV_MODE | XDP_FLAGS_HW_MODE); 332 328 mask = flags - 1; 333 329 if (flags && flags & mask) 334 - return -EINVAL; 330 + return libbpf_err(-EINVAL); 335 331 336 332 xdp_id.ifindex = ifindex; 337 333 xdp_id.flags = flags; ··· 345 341 memset((void *) info + sz, 0, info_size - sz); 346 342 } 347 343 348 - return ret; 344 + return libbpf_err(ret); 349 345 } 350 346 351 347 static __u32 get_xdp_id(struct xdp_link_info *info, __u32 flags) ··· 373 369 if (!ret) 374 370 *prog_id = get_xdp_id(&info, flags); 375 371 376 - return ret; 372 + return libbpf_err(ret); 377 373 } 378 374 379 375 typedef int (*qdisc_config_t)(struct nlmsghdr *nh, struct tcmsg *t, ··· 457 453 458 454 static int tc_qdisc_create_excl(struct bpf_tc_hook *hook) 459 455 { 460 - return 
tc_qdisc_modify(hook, RTM_NEWQDISC, NLM_F_CREATE); 456 + return tc_qdisc_modify(hook, RTM_NEWQDISC, NLM_F_CREATE | NLM_F_EXCL); 461 457 } 462 458 463 459 static int tc_qdisc_delete(struct bpf_tc_hook *hook) ··· 467 463 468 464 int bpf_tc_hook_create(struct bpf_tc_hook *hook) 469 465 { 466 + int ret; 467 + 470 468 if (!hook || !OPTS_VALID(hook, bpf_tc_hook) || 471 469 OPTS_GET(hook, ifindex, 0) <= 0) 472 - return -EINVAL; 470 + return libbpf_err(-EINVAL); 473 471 474 - return tc_qdisc_create_excl(hook); 472 + ret = tc_qdisc_create_excl(hook); 473 + return libbpf_err(ret); 475 474 } 476 475 477 476 static int __bpf_tc_detach(const struct bpf_tc_hook *hook, ··· 485 478 { 486 479 if (!hook || !OPTS_VALID(hook, bpf_tc_hook) || 487 480 OPTS_GET(hook, ifindex, 0) <= 0) 488 - return -EINVAL; 481 + return libbpf_err(-EINVAL); 489 482 490 483 switch (OPTS_GET(hook, attach_point, 0)) { 491 484 case BPF_TC_INGRESS: 492 485 case BPF_TC_EGRESS: 493 - return __bpf_tc_detach(hook, NULL, true); 486 + return libbpf_err(__bpf_tc_detach(hook, NULL, true)); 494 487 case BPF_TC_INGRESS | BPF_TC_EGRESS: 495 - return tc_qdisc_delete(hook); 488 + return libbpf_err(tc_qdisc_delete(hook)); 496 489 case BPF_TC_CUSTOM: 497 - return -EOPNOTSUPP; 490 + return libbpf_err(-EOPNOTSUPP); 498 491 default: 499 - return -EINVAL; 492 + return libbpf_err(-EINVAL); 500 493 } 501 494 } 502 495 ··· 581 574 if (!hook || !opts || 582 575 !OPTS_VALID(hook, bpf_tc_hook) || 583 576 !OPTS_VALID(opts, bpf_tc_opts)) 584 - return -EINVAL; 577 + return libbpf_err(-EINVAL); 585 578 586 579 ifindex = OPTS_GET(hook, ifindex, 0); 587 580 parent = OPTS_GET(hook, parent, 0); ··· 594 587 flags = OPTS_GET(opts, flags, 0); 595 588 596 589 if (ifindex <= 0 || !prog_fd || prog_id) 597 - return -EINVAL; 590 + return libbpf_err(-EINVAL); 598 591 if (priority > UINT16_MAX) 599 - return -EINVAL; 592 + return libbpf_err(-EINVAL); 600 593 if (flags & ~BPF_TC_F_REPLACE) 601 - return -EINVAL; 594 + return libbpf_err(-EINVAL); 602 595 
603 596 flags = (flags & BPF_TC_F_REPLACE) ? NLM_F_REPLACE : NLM_F_EXCL; 604 597 protocol = ETH_P_ALL; ··· 615 608 616 609 ret = tc_get_tcm_parent(attach_point, &parent); 617 610 if (ret < 0) 618 - return ret; 611 + return libbpf_err(ret); 619 612 req.tc.tcm_parent = parent; 620 613 621 614 ret = nlattr_add(&req.nh, sizeof(req), TCA_KIND, "bpf", sizeof("bpf")); 622 615 if (ret < 0) 623 - return ret; 616 + return libbpf_err(ret); 624 617 nla = nlattr_begin_nested(&req.nh, sizeof(req), TCA_OPTIONS); 625 618 if (!nla) 626 - return -EMSGSIZE; 619 + return libbpf_err(-EMSGSIZE); 627 620 ret = tc_add_fd_and_name(&req.nh, sizeof(req), prog_fd); 628 621 if (ret < 0) 629 - return ret; 622 + return libbpf_err(ret); 630 623 bpf_flags = TCA_BPF_FLAG_ACT_DIRECT; 631 624 ret = nlattr_add(&req.nh, sizeof(req), TCA_BPF_FLAGS, &bpf_flags, 632 625 sizeof(bpf_flags)); 633 626 if (ret < 0) 634 - return ret; 627 + return libbpf_err(ret); 635 628 nlattr_end_nested(&req.nh, nla); 636 629 637 630 info.opts = opts; 638 631 639 632 ret = libbpf_netlink_send_recv(&req.nh, get_tc_info, NULL, &info); 640 633 if (ret < 0) 641 - return ret; 634 + return libbpf_err(ret); 642 635 if (!info.processed) 643 - return -ENOENT; 636 + return libbpf_err(-ENOENT); 644 637 return ret; 645 638 } 646 639 ··· 674 667 if (ifindex <= 0 || flags || prog_fd || prog_id) 675 668 return -EINVAL; 676 669 if (priority > UINT16_MAX) 677 - return -EINVAL; 678 - if (flags & ~BPF_TC_F_REPLACE) 679 670 return -EINVAL; 680 671 if (!flush) { 681 672 if (!handle || !priority) ··· 713 708 int bpf_tc_detach(const struct bpf_tc_hook *hook, 714 709 const struct bpf_tc_opts *opts) 715 710 { 716 - return !opts ? 
-EINVAL : __bpf_tc_detach(hook, opts, false); 711 + int ret; 712 + 713 + if (!opts) 714 + return libbpf_err(-EINVAL); 715 + 716 + ret = __bpf_tc_detach(hook, opts, false); 717 + return libbpf_err(ret); 717 718 } 718 719 719 720 int bpf_tc_query(const struct bpf_tc_hook *hook, struct bpf_tc_opts *opts) ··· 736 725 if (!hook || !opts || 737 726 !OPTS_VALID(hook, bpf_tc_hook) || 738 727 !OPTS_VALID(opts, bpf_tc_opts)) 739 - return -EINVAL; 728 + return libbpf_err(-EINVAL); 740 729 741 730 ifindex = OPTS_GET(hook, ifindex, 0); 742 731 parent = OPTS_GET(hook, parent, 0); ··· 750 739 751 740 if (ifindex <= 0 || flags || prog_fd || prog_id || 752 741 !handle || !priority) 753 - return -EINVAL; 742 + return libbpf_err(-EINVAL); 754 743 if (priority > UINT16_MAX) 755 - return -EINVAL; 744 + return libbpf_err(-EINVAL); 756 745 757 746 protocol = ETH_P_ALL; 758 747 ··· 767 756 768 757 ret = tc_get_tcm_parent(attach_point, &parent); 769 758 if (ret < 0) 770 - return ret; 759 + return libbpf_err(ret); 771 760 req.tc.tcm_parent = parent; 772 761 773 762 ret = nlattr_add(&req.nh, sizeof(req), TCA_KIND, "bpf", sizeof("bpf")); 774 763 if (ret < 0) 775 - return ret; 764 + return libbpf_err(ret); 776 765 777 766 info.opts = opts; 778 767 779 768 ret = libbpf_netlink_send_recv(&req.nh, get_tc_info, NULL, &info); 780 769 if (ret < 0) 781 - return ret; 770 + return libbpf_err(ret); 782 771 if (!info.processed) 783 - return -ENOENT; 772 + return libbpf_err(-ENOENT); 784 773 return ret; 785 774 }
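Every int-returning entry point in this hunk now funnels its result through `libbpf_err()`. Judging by its use, the helper preserves the negative return code while also mirroring it into errno, roughly as below (a sketch; the real definition lives in libbpf's internal headers and may differ):

```c
#include <errno.h>

/* Keep the negative error return, but also set errno, so that
 * callers using either error-checking convention see the
 * failure. Passing through non-negative values unchanged lets
 * call sites write "return libbpf_err(ret);" unconditionally. */
static inline int libbpf_err(int ret)
{
	if (ret < 0)
		errno = -ret;
	return ret;
}
```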
+13 -13
tools/lib/bpf/ringbuf.c
··· 69 69 err = -errno; 70 70 pr_warn("ringbuf: failed to get map info for fd=%d: %d\n", 71 71 map_fd, err); 72 - return err; 72 + return libbpf_err(err); 73 73 } 74 74 75 75 if (info.type != BPF_MAP_TYPE_RINGBUF) { 76 76 pr_warn("ringbuf: map fd=%d is not BPF_MAP_TYPE_RINGBUF\n", 77 77 map_fd); 78 - return -EINVAL; 78 + return libbpf_err(-EINVAL); 79 79 } 80 80 81 81 tmp = libbpf_reallocarray(rb->rings, rb->ring_cnt + 1, sizeof(*rb->rings)); 82 82 if (!tmp) 83 - return -ENOMEM; 83 + return libbpf_err(-ENOMEM); 84 84 rb->rings = tmp; 85 85 86 86 tmp = libbpf_reallocarray(rb->events, rb->ring_cnt + 1, sizeof(*rb->events)); 87 87 if (!tmp) 88 - return -ENOMEM; 88 + return libbpf_err(-ENOMEM); 89 89 rb->events = tmp; 90 90 91 91 r = &rb->rings[rb->ring_cnt]; ··· 103 103 err = -errno; 104 104 pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n", 105 105 map_fd, err); 106 - return err; 106 + return libbpf_err(err); 107 107 } 108 108 r->consumer_pos = tmp; 109 109 ··· 118 118 ringbuf_unmap_ring(rb, r); 119 119 pr_warn("ringbuf: failed to mmap data pages for map fd=%d: %d\n", 120 120 map_fd, err); 121 - return err; 121 + return libbpf_err(err); 122 122 } 123 123 r->producer_pos = tmp; 124 124 r->data = tmp + rb->page_size; ··· 133 133 ringbuf_unmap_ring(rb, r); 134 134 pr_warn("ringbuf: failed to epoll add map fd=%d: %d\n", 135 135 map_fd, err); 136 - return err; 136 + return libbpf_err(err); 137 137 } 138 138 139 139 rb->ring_cnt++; ··· 165 165 int err; 166 166 167 167 if (!OPTS_VALID(opts, ring_buffer_opts)) 168 - return NULL; 168 + return errno = EINVAL, NULL; 169 169 170 170 rb = calloc(1, sizeof(*rb)); 171 171 if (!rb) 172 - return NULL; 172 + return errno = ENOMEM, NULL; 173 173 174 174 rb->page_size = getpagesize(); 175 175 ··· 188 188 189 189 err_out: 190 190 ring_buffer__free(rb); 191 - return NULL; 191 + return errno = -err, NULL; 192 192 } 193 193 194 194 static inline int roundup_len(__u32 len) ··· 260 260 261 261 err = 
ringbuf_process_ring(ring); 262 262 if (err < 0) 263 - return err; 263 + return libbpf_err(err); 264 264 res += err; 265 265 } 266 266 if (res > INT_MAX) ··· 279 279 280 280 cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms); 281 281 if (cnt < 0) 282 - return -errno; 282 + return libbpf_err(-errno); 283 283 284 284 for (i = 0; i < cnt; i++) { 285 285 __u32 ring_id = rb->events[i].data.fd; ··· 287 287 288 288 err = ringbuf_process_ring(ring); 289 289 if (err < 0) 290 - return err; 290 + return libbpf_err(err); 291 291 res += err; 292 292 } 293 293 if (res > INT_MAX)
+3
tools/testing/selftests/bpf/.gitignore
··· 10 10 fixdep 11 11 test_dev_cgroup 12 12 /test_progs* 13 + !test_progs.h 13 14 test_verifier_log 14 15 feature 15 16 test_sock ··· 38 37 /runqslower 39 38 /bench 40 39 *.ko 40 + *.tmp 41 41 xdpxceiver 42 + xdp_redirect_multi
+2 -1
tools/testing/selftests/bpf/Makefile
··· 54 54 # Order correspond to 'make run_tests' order 55 55 TEST_PROGS := test_kmod.sh \ 56 56 test_xdp_redirect.sh \ 57 + test_xdp_redirect_multi.sh \ 57 58 test_xdp_meta.sh \ 58 59 test_xdp_veth.sh \ 59 60 test_offload.py \ ··· 85 84 TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \ 86 85 flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \ 87 86 test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \ 88 - xdpxceiver 87 + xdpxceiver xdp_redirect_multi 89 88 90 89 TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read 91 90
+2 -1
tools/testing/selftests/bpf/Makefile.docs
··· 52 52 ifndef RST2MAN_DEP 53 53 $$(error "rst2man not found, but required to generate man pages") 54 54 endif 55 - $$(QUIET_GEN)rst2man $$< > $$@ 55 + $$(QUIET_GEN)rst2man --exit-status=1 $$< > $$@.tmp 56 + $$(QUIET_GEN)mv $$@.tmp $$@ 56 57 57 58 docs-clean-$1: 58 59 $$(call QUIET_CLEAN, eBPF_$1-manpage)
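With `--exit-status=1`, an rst2man warning now aborts the recipe, and writing to `$@.tmp` before the `mv` guarantees a failed run cannot leave a truncated man page at `$@`. The same write-then-rename idiom, sketched in C (function and path names are illustrative):

```c
#include <stdio.h>

/* Write to a temporary path, then rename() it into place, so a
 * failed generation step never leaves a truncated output file.
 * rename() within one filesystem is atomic on POSIX systems. */
static int write_page(const char *path, const char *content)
{
	char tmp[256];
	FILE *f;

	snprintf(tmp, sizeof(tmp), "%s.tmp", path);
	f = fopen(tmp, "w");
	if (!f)
		return -1;
	if (fputs(content, f) < 0 || fclose(f) != 0)
		return -1;
	return rename(tmp, path);
}
```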
+19
tools/testing/selftests/bpf/README.rst
··· 202 202 Clang that contains the fix. 203 203 204 204 __ https://reviews.llvm.org/D100362 205 + 206 + Clang relocation changes 207 + ======================== 208 + 209 + Clang 13 patch `clang reloc patch`_ made some changes to relocations such 210 + that existing relocation types are broken into more types and 211 + each new type corresponds to only one way to resolve a relocation. 212 + See `kernel llvm reloc`_ for more explanation and some examples. 213 + Compiling an old libbpf that has static linker support with clang 13 214 + will result in a compilation failure:: 215 + 216 + libbpf: ELF relo #0 in section #6 has unexpected type 2 in .../bpf_tcp_nogpl.o 217 + 218 + Here, ``type 2`` refers to the new relocation type ``R_BPF_64_ABS64``. 219 + To fix this issue, use a newer libbpf. 220 + 221 + .. Links 222 + .. _clang reloc patch: https://reviews.llvm.org/D102712 223 + .. _kernel llvm reloc: /Documentation/bpf/llvm_reloc.rst
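The README text pins down only one numeric value (type 2 is `R_BPF_64_ABS64`). A small lookup like the one below can help decode such libbpf messages; the values other than 2 are taken from the LLVM BPF backend and should be verified against `Documentation/bpf/llvm_reloc.rst`:

```c
/* Decode BPF ELF relocation type numbers. Type 2 is confirmed
 * by the README text; the other values follow LLVM's BPF
 * backend and should be double-checked against llvm_reloc.rst. */
static const char *bpf_reloc_name(unsigned int type)
{
	switch (type) {
	case 0:  return "R_BPF_NONE";
	case 1:  return "R_BPF_64_64";
	case 2:  return "R_BPF_64_ABS64";
	case 3:  return "R_BPF_64_ABS32";
	case 10: return "R_BPF_64_32";
	default: return "unknown";
	}
}
```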
+1
tools/testing/selftests/bpf/bench.c
··· 43 43 { 44 44 int err; 45 45 46 + libbpf_set_strict_mode(LIBBPF_STRICT_ALL); 46 47 libbpf_set_print(libbpf_print_fn); 47 48 48 49 err = bump_memlock_rlimit();
+1 -1
tools/testing/selftests/bpf/benchs/bench_rename.c
··· 65 65 struct bpf_link *link; 66 66 67 67 link = bpf_program__attach(prog); 68 - if (IS_ERR(link)) { 68 + if (!link) { 69 69 fprintf(stderr, "failed to attach program!\n"); 70 70 exit(1); 71 71 }
+3 -3
tools/testing/selftests/bpf/benchs/bench_ringbufs.c
··· 181 181 } 182 182 183 183 link = bpf_program__attach(ctx->skel->progs.bench_ringbuf); 184 - if (IS_ERR(link)) { 184 + if (!link) { 185 185 fprintf(stderr, "failed to attach program!\n"); 186 186 exit(1); 187 187 } ··· 271 271 } 272 272 273 273 link = bpf_program__attach(ctx->skel->progs.bench_ringbuf); 274 - if (IS_ERR(link)) { 274 + if (!link) { 275 275 fprintf(stderr, "failed to attach program\n"); 276 276 exit(1); 277 277 } ··· 430 430 } 431 431 432 432 link = bpf_program__attach(ctx->skel->progs.bench_perfbuf); 433 - if (IS_ERR(link)) { 433 + if (!link) { 434 434 fprintf(stderr, "failed to attach program\n"); 435 435 exit(1); 436 436 }
+1 -1
tools/testing/selftests/bpf/benchs/bench_trigger.c
··· 60 60 struct bpf_link *link; 61 61 62 62 link = bpf_program__attach(prog); 63 - if (IS_ERR(link)) { 63 + if (!link) { 64 64 fprintf(stderr, "failed to attach program!\n"); 65 65 exit(1); 66 66 }
+4 -8
tools/testing/selftests/bpf/prog_tests/attach_probe.c
··· 85 85 kprobe_link = bpf_program__attach_kprobe(skel->progs.handle_kprobe, 86 86 false /* retprobe */, 87 87 SYS_NANOSLEEP_KPROBE_NAME); 88 - if (CHECK(IS_ERR(kprobe_link), "attach_kprobe", 89 - "err %ld\n", PTR_ERR(kprobe_link))) 88 + if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe")) 90 89 goto cleanup; 91 90 skel->links.handle_kprobe = kprobe_link; 92 91 93 92 kretprobe_link = bpf_program__attach_kprobe(skel->progs.handle_kretprobe, 94 93 true /* retprobe */, 95 94 SYS_NANOSLEEP_KPROBE_NAME); 96 - if (CHECK(IS_ERR(kretprobe_link), "attach_kretprobe", 97 - "err %ld\n", PTR_ERR(kretprobe_link))) 95 + if (!ASSERT_OK_PTR(kretprobe_link, "attach_kretprobe")) 98 96 goto cleanup; 99 97 skel->links.handle_kretprobe = kretprobe_link; 100 98 ··· 101 103 0 /* self pid */, 102 104 "/proc/self/exe", 103 105 uprobe_offset); 104 - if (CHECK(IS_ERR(uprobe_link), "attach_uprobe", 105 - "err %ld\n", PTR_ERR(uprobe_link))) 106 + if (!ASSERT_OK_PTR(uprobe_link, "attach_uprobe")) 106 107 goto cleanup; 107 108 skel->links.handle_uprobe = uprobe_link; 108 109 ··· 110 113 -1 /* any pid */, 111 114 "/proc/self/exe", 112 115 uprobe_offset); 113 - if (CHECK(IS_ERR(uretprobe_link), "attach_uretprobe", 114 - "err %ld\n", PTR_ERR(uretprobe_link))) 116 + if (!ASSERT_OK_PTR(uretprobe_link, "attach_uretprobe")) 115 117 goto cleanup; 116 118 skel->links.handle_uretprobe = uretprobe_link; 117 119
+14 -17
tools/testing/selftests/bpf/prog_tests/bpf_iter.c
··· 47 47 int iter_fd, len; 48 48 49 49 link = bpf_program__attach_iter(prog, NULL); 50 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 50 + if (!ASSERT_OK_PTR(link, "attach_iter")) 51 51 return; 52 52 53 53 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 201 201 int ret = 0; 202 202 203 203 link = bpf_program__attach_iter(prog, NULL); 204 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 204 + if (!ASSERT_OK_PTR(link, "attach_iter")) 205 205 return ret; 206 206 207 207 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 396 396 return; 397 397 398 398 link = bpf_program__attach_iter(skel1->progs.dump_task, NULL); 399 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 399 + if (!ASSERT_OK_PTR(link, "attach_iter")) 400 400 goto out; 401 401 402 402 /* unlink this path if it exists. */ ··· 502 502 skel->bss->map2_id = map_info.id; 503 503 504 504 link = bpf_program__attach_iter(skel->progs.dump_bpf_map, NULL); 505 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 505 + if (!ASSERT_OK_PTR(link, "attach_iter")) 506 506 goto free_map2; 507 507 508 508 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 607 607 opts.link_info = &linfo; 608 608 opts.link_info_len = sizeof(linfo); 609 609 link = bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts); 610 - if (CHECK(!IS_ERR(link), "attach_iter", 611 - "attach_iter for hashmap2 unexpected succeeded\n")) 610 + if (!ASSERT_ERR_PTR(link, "attach_iter")) 612 611 goto out; 613 612 614 613 linfo.map.map_fd = bpf_map__fd(skel->maps.hashmap3); 615 614 link = bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts); 616 - if (CHECK(!IS_ERR(link), "attach_iter", 617 - "attach_iter for hashmap3 unexpected succeeded\n")) 615 + if (!ASSERT_ERR_PTR(link, "attach_iter")) 618 616 goto out; 619 617 620 618 /* hashmap1 should be good, update map values here */ ··· 634 636 635 637 linfo.map.map_fd = map_fd; 636 638 link = 
bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts); 637 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 639 + if (!ASSERT_OK_PTR(link, "attach_iter")) 638 640 goto out; 639 641 640 642 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 725 727 opts.link_info = &linfo; 726 728 opts.link_info_len = sizeof(linfo); 727 729 link = bpf_program__attach_iter(skel->progs.dump_bpf_percpu_hash_map, &opts); 728 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 730 + if (!ASSERT_OK_PTR(link, "attach_iter")) 729 731 goto out; 730 732 731 733 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 796 798 opts.link_info = &linfo; 797 799 opts.link_info_len = sizeof(linfo); 798 800 link = bpf_program__attach_iter(skel->progs.dump_bpf_array_map, &opts); 799 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 801 + if (!ASSERT_OK_PTR(link, "attach_iter")) 800 802 goto out; 801 803 802 804 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 892 894 opts.link_info = &linfo; 893 895 opts.link_info_len = sizeof(linfo); 894 896 link = bpf_program__attach_iter(skel->progs.dump_bpf_percpu_array_map, &opts); 895 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 897 + if (!ASSERT_OK_PTR(link, "attach_iter")) 896 898 goto out; 897 899 898 900 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 955 957 opts.link_info_len = sizeof(linfo); 956 958 link = bpf_program__attach_iter(skel->progs.delete_bpf_sk_storage_map, 957 959 &opts); 958 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 960 + if (!ASSERT_OK_PTR(link, "attach_iter")) 959 961 goto out; 960 962 961 963 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 1073 1075 opts.link_info = &linfo; 1074 1076 opts.link_info_len = sizeof(linfo); 1075 1077 link = bpf_program__attach_iter(skel->progs.dump_bpf_sk_storage_map, &opts); 1076 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 1078 + if (!ASSERT_OK_PTR(link, "attach_iter")) 1077 1079 goto 
out; 1078 1080 1079 1081 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 1126 1128 opts.link_info = &linfo; 1127 1129 opts.link_info_len = sizeof(linfo); 1128 1130 link = bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts); 1129 - if (CHECK(!IS_ERR(link), "attach_iter", "unexpected success\n")) 1131 + if (!ASSERT_ERR_PTR(link, "attach_iter")) 1130 1132 bpf_link__destroy(link); 1131 1133 1132 1134 bpf_iter_test_kern5__destroy(skel); ··· 1184 1186 skel->links.proc_maps = bpf_program__attach_iter( 1185 1187 skel->progs.proc_maps, NULL); 1186 1188 1187 - if (CHECK(IS_ERR(skel->links.proc_maps), "bpf_program__attach_iter", 1188 - "attach iterator failed\n")) { 1189 + if (!ASSERT_OK_PTR(skel->links.proc_maps, "bpf_program__attach_iter")) { 1189 1190 skel->links.proc_maps = NULL; 1190 1191 goto out; 1191 1192 }
+3 -5
tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
··· 82 82 bytes, total_bytes, nr_sent, errno); 83 83 84 84 done: 85 - if (fd != -1) 85 + if (fd >= 0) 86 86 close(fd); 87 87 if (err) { 88 88 WRITE_ONCE(stop, 1); ··· 191 191 return; 192 192 193 193 link = bpf_map__attach_struct_ops(cubic_skel->maps.cubic); 194 - if (CHECK(IS_ERR(link), "bpf_map__attach_struct_ops", "err:%ld\n", 195 - PTR_ERR(link))) { 194 + if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) { 196 195 bpf_cubic__destroy(cubic_skel); 197 196 return; 198 197 } ··· 212 213 return; 213 214 214 215 link = bpf_map__attach_struct_ops(dctcp_skel->maps.dctcp); 215 - if (CHECK(IS_ERR(link), "bpf_map__attach_struct_ops", "err:%ld\n", 216 - PTR_ERR(link))) { 216 + if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) { 217 217 bpf_dctcp__destroy(dctcp_skel); 218 218 return; 219 219 }
+47 -46
tools/testing/selftests/bpf/prog_tests/btf.c
··· 3811 3811 always_log); 3812 3812 free(raw_btf); 3813 3813 3814 - err = ((btf_fd == -1) != test->btf_load_err); 3814 + err = ((btf_fd < 0) != test->btf_load_err); 3815 3815 if (CHECK(err, "btf_fd:%d test->btf_load_err:%u", 3816 3816 btf_fd, test->btf_load_err) || 3817 3817 CHECK(test->err_str && !strstr(btf_log_buf, test->err_str), ··· 3820 3820 goto done; 3821 3821 } 3822 3822 3823 - if (err || btf_fd == -1) 3823 + if (err || btf_fd < 0) 3824 3824 goto done; 3825 3825 3826 3826 create_attr.name = test->map_name; ··· 3834 3834 3835 3835 map_fd = bpf_create_map_xattr(&create_attr); 3836 3836 3837 - err = ((map_fd == -1) != test->map_create_err); 3837 + err = ((map_fd < 0) != test->map_create_err); 3838 3838 CHECK(err, "map_fd:%d test->map_create_err:%u", 3839 3839 map_fd, test->map_create_err); 3840 3840 3841 3841 done: 3842 3842 if (*btf_log_buf && (err || always_log)) 3843 3843 fprintf(stderr, "\n%s", btf_log_buf); 3844 - if (btf_fd != -1) 3844 + if (btf_fd >= 0) 3845 3845 close(btf_fd); 3846 - if (map_fd != -1) 3846 + if (map_fd >= 0) 3847 3847 close(map_fd); 3848 3848 } 3849 3849 ··· 3941 3941 btf_fd = bpf_load_btf(raw_btf, raw_btf_size, 3942 3942 btf_log_buf, BTF_LOG_BUF_SIZE, 3943 3943 always_log); 3944 - if (CHECK(btf_fd == -1, "errno:%d", errno)) { 3944 + if (CHECK(btf_fd < 0, "errno:%d", errno)) { 3945 3945 err = -1; 3946 3946 goto done; 3947 3947 } ··· 3987 3987 free(raw_btf); 3988 3988 free(user_btf); 3989 3989 3990 - if (btf_fd != -1) 3990 + if (btf_fd >= 0) 3991 3991 close(btf_fd); 3992 3992 3993 3993 return err; ··· 4029 4029 btf_fd[0] = bpf_load_btf(raw_btf, raw_btf_size, 4030 4030 btf_log_buf, BTF_LOG_BUF_SIZE, 4031 4031 always_log); 4032 - if (CHECK(btf_fd[0] == -1, "errno:%d", errno)) { 4032 + if (CHECK(btf_fd[0] < 0, "errno:%d", errno)) { 4033 4033 err = -1; 4034 4034 goto done; 4035 4035 } ··· 4043 4043 } 4044 4044 4045 4045 btf_fd[1] = bpf_btf_get_fd_by_id(info[0].id); 4046 - if (CHECK(btf_fd[1] == -1, "errno:%d", errno)) { 4046 + if 
(CHECK(btf_fd[1] < 0, "errno:%d", errno)) { 4047 4047 err = -1; 4048 4048 goto done; 4049 4049 } ··· 4071 4071 create_attr.btf_value_type_id = 2; 4072 4072 4073 4073 map_fd = bpf_create_map_xattr(&create_attr); 4074 - if (CHECK(map_fd == -1, "errno:%d", errno)) { 4074 + if (CHECK(map_fd < 0, "errno:%d", errno)) { 4075 4075 err = -1; 4076 4076 goto done; 4077 4077 } ··· 4094 4094 4095 4095 /* Test BTF ID is removed from the kernel */ 4096 4096 btf_fd[0] = bpf_btf_get_fd_by_id(map_info.btf_id); 4097 - if (CHECK(btf_fd[0] == -1, "errno:%d", errno)) { 4097 + if (CHECK(btf_fd[0] < 0, "errno:%d", errno)) { 4098 4098 err = -1; 4099 4099 goto done; 4100 4100 } ··· 4105 4105 close(map_fd); 4106 4106 map_fd = -1; 4107 4107 btf_fd[0] = bpf_btf_get_fd_by_id(map_info.btf_id); 4108 - if (CHECK(btf_fd[0] != -1, "BTF lingers")) { 4108 + if (CHECK(btf_fd[0] >= 0, "BTF lingers")) { 4109 4109 err = -1; 4110 4110 goto done; 4111 4111 } ··· 4117 4117 fprintf(stderr, "\n%s", btf_log_buf); 4118 4118 4119 4119 free(raw_btf); 4120 - if (map_fd != -1) 4120 + if (map_fd >= 0) 4121 4121 close(map_fd); 4122 4122 for (i = 0; i < 2; i++) { 4123 4123 free(user_btf[i]); 4124 - if (btf_fd[i] != -1) 4124 + if (btf_fd[i] >= 0) 4125 4125 close(btf_fd[i]); 4126 4126 } 4127 4127 ··· 4166 4166 btf_fd = bpf_load_btf(raw_btf, raw_btf_size, 4167 4167 btf_log_buf, BTF_LOG_BUF_SIZE, 4168 4168 always_log); 4169 - if (CHECK(btf_fd == -1, "errno:%d", errno)) { 4169 + if (CHECK(btf_fd <= 0, "errno:%d", errno)) { 4170 4170 err = -1; 4171 4171 goto done; 4172 4172 } ··· 4212 4212 free(raw_btf); 4213 4213 free(user_btf); 4214 4214 4215 - if (btf_fd != -1) 4215 + if (btf_fd >= 0) 4216 4216 close(btf_fd); 4217 4217 } 4218 4218 ··· 4249 4249 return; 4250 4250 4251 4251 btf = btf__parse_elf(test->file, &btf_ext); 4252 - if (IS_ERR(btf)) { 4253 - if (PTR_ERR(btf) == -ENOENT) { 4252 + err = libbpf_get_error(btf); 4253 + if (err) { 4254 + if (err == -ENOENT) { 4254 4255 printf("%s:SKIP: No ELF %s found", __func__, 
BTF_ELF_SEC); 4255 4256 test__skip(); 4256 4257 return; ··· 4264 4263 btf_ext__free(btf_ext); 4265 4264 4266 4265 obj = bpf_object__open(test->file); 4267 - if (CHECK(IS_ERR(obj), "obj: %ld", PTR_ERR(obj))) 4266 + err = libbpf_get_error(obj); 4267 + if (CHECK(err, "obj: %d", err)) 4268 4268 return; 4269 4269 4270 4270 prog = bpf_program__next(NULL, obj); ··· 4300 4298 info_len = sizeof(struct bpf_prog_info); 4301 4299 err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); 4302 4300 4303 - if (CHECK(err == -1, "invalid get info (1st) errno:%d", errno)) { 4301 + if (CHECK(err < 0, "invalid get info (1st) errno:%d", errno)) { 4304 4302 fprintf(stderr, "%s\n", btf_log_buf); 4305 4303 err = -1; 4306 4304 goto done; ··· 4332 4330 4333 4331 err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); 4334 4332 4335 - if (CHECK(err == -1, "invalid get info (2nd) errno:%d", errno)) { 4333 + if (CHECK(err < 0, "invalid get info (2nd) errno:%d", errno)) { 4336 4334 fprintf(stderr, "%s\n", btf_log_buf); 4337 4335 err = -1; 4338 4336 goto done; ··· 4888 4886 always_log); 4889 4887 free(raw_btf); 4890 4888 4891 - if (CHECK(btf_fd == -1, "errno:%d", errno)) { 4889 + if (CHECK(btf_fd < 0, "errno:%d", errno)) { 4892 4890 err = -1; 4893 4891 goto done; 4894 4892 } ··· 4903 4901 create_attr.btf_value_type_id = test->value_type_id; 4904 4902 4905 4903 map_fd = bpf_create_map_xattr(&create_attr); 4906 - if (CHECK(map_fd == -1, "errno:%d", errno)) { 4904 + if (CHECK(map_fd < 0, "errno:%d", errno)) { 4907 4905 err = -1; 4908 4906 goto done; 4909 4907 } ··· 4984 4982 4985 4983 err = check_line(expected_line, nexpected_line, 4986 4984 sizeof(expected_line), line); 4987 - if (err == -1) 4985 + if (err < 0) 4988 4986 goto done; 4989 4987 } 4990 4988 ··· 5000 4998 cpu, cmapv); 5001 4999 err = check_line(expected_line, nexpected_line, 5002 5000 sizeof(expected_line), line); 5003 - if (err == -1) 5001 + if (err < 0) 5004 5002 goto done; 5005 5003 5006 5004 cmapv = cmapv + rounded_value_size; ··· 
5038 5036 fprintf(stderr, "OK"); 5039 5037 if (*btf_log_buf && (err || always_log)) 5040 5038 fprintf(stderr, "\n%s", btf_log_buf); 5041 - if (btf_fd != -1) 5039 + if (btf_fd >= 0) 5042 5040 close(btf_fd); 5043 - if (map_fd != -1) 5041 + if (map_fd >= 0) 5044 5042 close(map_fd); 5045 5043 if (pin_file) 5046 5044 fclose(pin_file); ··· 5952 5950 /* get necessary lens */ 5953 5951 info_len = sizeof(struct bpf_prog_info); 5954 5952 err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); 5955 - if (CHECK(err == -1, "invalid get info (1st) errno:%d", errno)) { 5953 + if (CHECK(err < 0, "invalid get info (1st) errno:%d", errno)) { 5956 5954 fprintf(stderr, "%s\n", btf_log_buf); 5957 5955 return -1; 5958 5956 } ··· 5982 5980 info.func_info_rec_size = rec_size; 5983 5981 info.func_info = ptr_to_u64(func_info); 5984 5982 err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); 5985 - if (CHECK(err == -1, "invalid get info (2nd) errno:%d", errno)) { 5983 + if (CHECK(err < 0, "invalid get info (2nd) errno:%d", errno)) { 5986 5984 fprintf(stderr, "%s\n", btf_log_buf); 5987 5985 err = -1; 5988 5986 goto done; ··· 6046 6044 6047 6045 info_len = sizeof(struct bpf_prog_info); 6048 6046 err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); 6049 - if (CHECK(err == -1, "err:%d errno:%d", err, errno)) { 6047 + if (CHECK(err < 0, "err:%d errno:%d", err, errno)) { 6050 6048 err = -1; 6051 6049 goto done; 6052 6050 } ··· 6125 6123 * Only recheck the info.*line_info* fields. 6126 6124 * Other fields are not the concern of this test. 
6127 6125 */ 6128 - if (CHECK(err == -1 || 6126 + if (CHECK(err < 0 || 6129 6127 info.nr_line_info != cnt || 6130 6128 (jited_cnt && !info.jited_line_info) || 6131 6129 info.nr_jited_line_info != jited_cnt || ··· 6262 6260 always_log); 6263 6261 free(raw_btf); 6264 6262 6265 - if (CHECK(btf_fd == -1, "invalid btf_fd errno:%d", errno)) { 6263 + if (CHECK(btf_fd < 0, "invalid btf_fd errno:%d", errno)) { 6266 6264 err = -1; 6267 6265 goto done; 6268 6266 } ··· 6275 6273 patched_linfo = patch_name_tbd(test->line_info, 6276 6274 test->str_sec, linfo_str_off, 6277 6275 test->str_sec_size, &linfo_size); 6278 - if (IS_ERR(patched_linfo)) { 6276 + err = libbpf_get_error(patched_linfo); 6277 + if (err) { 6279 6278 fprintf(stderr, "error in creating raw bpf_line_info"); 6280 6279 err = -1; 6281 6280 goto done; ··· 6300 6297 } 6301 6298 6302 6299 prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); 6303 - err = ((prog_fd == -1) != test->expected_prog_load_failure); 6300 + err = ((prog_fd < 0) != test->expected_prog_load_failure); 6304 6301 if (CHECK(err, "prog_fd:%d expected_prog_load_failure:%u errno:%d", 6305 6302 prog_fd, test->expected_prog_load_failure, errno) || 6306 6303 CHECK(test->err_str && !strstr(btf_log_buf, test->err_str), ··· 6309 6306 goto done; 6310 6307 } 6311 6308 6312 - if (prog_fd == -1) 6309 + if (prog_fd < 0) 6313 6310 goto done; 6314 6311 6315 6312 err = test_get_finfo(test, prog_fd); ··· 6326 6323 if (*btf_log_buf && (err || always_log)) 6327 6324 fprintf(stderr, "\n%s", btf_log_buf); 6328 6325 6329 - if (btf_fd != -1) 6326 + if (btf_fd >= 0) 6330 6327 close(btf_fd); 6331 - if (prog_fd != -1) 6328 + if (prog_fd >= 0) 6332 6329 close(prog_fd); 6333 6330 6334 - if (!IS_ERR(patched_linfo)) 6331 + if (!libbpf_get_error(patched_linfo)) 6335 6332 free(patched_linfo); 6336 6333 } 6337 6334 ··· 6842 6839 return; 6843 6840 6844 6841 test_btf = btf__new((__u8 *)raw_btf, raw_btf_size); 6842 + err = libbpf_get_error(test_btf); 6845 6843 free(raw_btf); 
6846 - if (CHECK(IS_ERR(test_btf), "invalid test_btf errno:%ld", 6847 - PTR_ERR(test_btf))) { 6844 + if (CHECK(err, "invalid test_btf errno:%d", err)) { 6848 6845 err = -1; 6849 6846 goto done; 6850 6847 } ··· 6856 6853 if (!raw_btf) 6857 6854 return; 6858 6855 expect_btf = btf__new((__u8 *)raw_btf, raw_btf_size); 6856 + err = libbpf_get_error(expect_btf); 6859 6857 free(raw_btf); 6860 - if (CHECK(IS_ERR(expect_btf), "invalid expect_btf errno:%ld", 6861 - PTR_ERR(expect_btf))) { 6858 + if (CHECK(err, "invalid expect_btf errno:%d", err)) { 6862 6859 err = -1; 6863 6860 goto done; 6864 6861 } ··· 6969 6966 } 6970 6967 6971 6968 done: 6972 - if (!IS_ERR(test_btf)) 6973 - btf__free(test_btf); 6974 - if (!IS_ERR(expect_btf)) 6975 - btf__free(expect_btf); 6969 + btf__free(test_btf); 6970 + btf__free(expect_btf); 6976 6971 } 6977 6972 6978 6973 void test_btf(void)
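The btf.c churn above consistently replaces `== -1` fd checks with `< 0`: once APIs report errors libbpf-1.0 style, a failing call may return any negative errno-style code rather than exactly -1, so only `< 0` is reliable. A tiny illustration (the helper is hypothetical):

```c
#include <errno.h>

/* Hypothetical errno-style opener: failure is reported as a
 * negative errno code, not necessarily -1, so callers must
 * test the result with "< 0" rather than "== -1". */
static int open_resource(int ok)
{
	return ok ? 3 : -ENOENT;	/* 3: a plausible fd value */
}
```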
+4 -4
tools/testing/selftests/bpf/prog_tests/btf_dump.c
··· 32 32 int err = 0, id; 33 33 34 34 d = btf_dump__new(btf, NULL, opts, btf_dump_printf); 35 - if (IS_ERR(d)) 36 - return PTR_ERR(d); 35 + err = libbpf_get_error(d); 36 + if (err) 37 + return err; 37 38 38 39 for (id = 1; id <= type_cnt; id++) { 39 40 err = btf_dump__dump_type(d, id); ··· 57 56 snprintf(test_file, sizeof(test_file), "%s.o", t->file); 58 57 59 58 btf = btf__parse_elf(test_file, NULL); 60 - if (CHECK(IS_ERR(btf), "btf_parse_elf", 61 - "failed to load test BTF: %ld\n", PTR_ERR(btf))) { 59 + if (!ASSERT_OK_PTR(btf, "btf_parse_elf")) { 62 60 err = -PTR_ERR(btf); 63 61 btf = NULL; 64 62 goto done;
+1 -3
tools/testing/selftests/bpf/prog_tests/btf_write.c
··· 4 4 #include <bpf/btf.h> 5 5 #include "btf_helpers.h" 6 6 7 - static int duration = 0; 8 - 9 7 void test_btf_write() { 10 8 const struct btf_var_secinfo *vi; 11 9 const struct btf_type *t; ··· 14 16 int id, err, str_off; 15 17 16 18 btf = btf__new_empty(); 17 - if (CHECK(IS_ERR(btf), "new_empty", "failed: %ld\n", PTR_ERR(btf))) 19 + if (!ASSERT_OK_PTR(btf, "new_empty")) 18 20 return; 19 21 20 22 str_off = btf__find_str(btf, "int");
+28 -56
tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
··· 102 102 */ 103 103 parent_link = bpf_program__attach_cgroup(obj->progs.egress, 104 104 parent_cgroup_fd); 105 - if (CHECK(IS_ERR(parent_link), "parent-cg-attach", 106 - "err %ld", PTR_ERR(parent_link))) 105 + if (!ASSERT_OK_PTR(parent_link, "parent-cg-attach")) 107 106 goto close_bpf_object; 108 107 err = connect_send(CHILD_CGROUP); 109 108 if (CHECK(err, "first-connect-send", "errno %d", errno)) ··· 125 126 */ 126 127 child_link = bpf_program__attach_cgroup(obj->progs.egress, 127 128 child_cgroup_fd); 128 - if (CHECK(IS_ERR(child_link), "child-cg-attach", 129 - "err %ld", PTR_ERR(child_link))) 129 + if (!ASSERT_OK_PTR(child_link, "child-cg-attach")) 130 130 goto close_bpf_object; 131 131 err = connect_send(CHILD_CGROUP); 132 132 if (CHECK(err, "second-connect-send", "errno %d", errno)) ··· 145 147 goto close_bpf_object; 146 148 147 149 close_bpf_object: 148 - if (!IS_ERR(parent_link)) 149 - bpf_link__destroy(parent_link); 150 - if (!IS_ERR(child_link)) 151 - bpf_link__destroy(child_link); 150 + bpf_link__destroy(parent_link); 151 + bpf_link__destroy(child_link); 152 152 153 153 cg_storage_multi_egress_only__destroy(obj); 154 154 } ··· 172 176 */ 173 177 parent_egress1_link = bpf_program__attach_cgroup(obj->progs.egress1, 174 178 parent_cgroup_fd); 175 - if (CHECK(IS_ERR(parent_egress1_link), "parent-egress1-cg-attach", 176 - "err %ld", PTR_ERR(parent_egress1_link))) 179 + if (!ASSERT_OK_PTR(parent_egress1_link, "parent-egress1-cg-attach")) 177 180 goto close_bpf_object; 178 181 parent_egress2_link = bpf_program__attach_cgroup(obj->progs.egress2, 179 182 parent_cgroup_fd); 180 - if (CHECK(IS_ERR(parent_egress2_link), "parent-egress2-cg-attach", 181 - "err %ld", PTR_ERR(parent_egress2_link))) 183 + if (!ASSERT_OK_PTR(parent_egress2_link, "parent-egress2-cg-attach")) 182 184 goto close_bpf_object; 183 185 parent_ingress_link = bpf_program__attach_cgroup(obj->progs.ingress, 184 186 parent_cgroup_fd); 185 - if (CHECK(IS_ERR(parent_ingress_link), 
"parent-ingress-cg-attach", 186 - "err %ld", PTR_ERR(parent_ingress_link))) 187 + if (!ASSERT_OK_PTR(parent_ingress_link, "parent-ingress-cg-attach")) 187 188 goto close_bpf_object; 188 189 err = connect_send(CHILD_CGROUP); 189 190 if (CHECK(err, "first-connect-send", "errno %d", errno)) ··· 214 221 */ 215 222 child_egress1_link = bpf_program__attach_cgroup(obj->progs.egress1, 216 223 child_cgroup_fd); 217 - if (CHECK(IS_ERR(child_egress1_link), "child-egress1-cg-attach", 218 - "err %ld", PTR_ERR(child_egress1_link))) 224 + if (!ASSERT_OK_PTR(child_egress1_link, "child-egress1-cg-attach")) 219 225 goto close_bpf_object; 220 226 child_egress2_link = bpf_program__attach_cgroup(obj->progs.egress2, 221 227 child_cgroup_fd); 222 - if (CHECK(IS_ERR(child_egress2_link), "child-egress2-cg-attach", 223 - "err %ld", PTR_ERR(child_egress2_link))) 228 + if (!ASSERT_OK_PTR(child_egress2_link, "child-egress2-cg-attach")) 224 229 goto close_bpf_object; 225 230 child_ingress_link = bpf_program__attach_cgroup(obj->progs.ingress, 226 231 child_cgroup_fd); 227 - if (CHECK(IS_ERR(child_ingress_link), "child-ingress-cg-attach", 228 - "err %ld", PTR_ERR(child_ingress_link))) 232 + if (!ASSERT_OK_PTR(child_ingress_link, "child-ingress-cg-attach")) 229 233 goto close_bpf_object; 230 234 err = connect_send(CHILD_CGROUP); 231 235 if (CHECK(err, "second-connect-send", "errno %d", errno)) ··· 254 264 goto close_bpf_object; 255 265 256 266 close_bpf_object: 257 - if (!IS_ERR(parent_egress1_link)) 258 - bpf_link__destroy(parent_egress1_link); 259 - if (!IS_ERR(parent_egress2_link)) 260 - bpf_link__destroy(parent_egress2_link); 261 - if (!IS_ERR(parent_ingress_link)) 262 - bpf_link__destroy(parent_ingress_link); 263 - if (!IS_ERR(child_egress1_link)) 264 - bpf_link__destroy(child_egress1_link); 265 - if (!IS_ERR(child_egress2_link)) 266 - bpf_link__destroy(child_egress2_link); 267 - if (!IS_ERR(child_ingress_link)) 268 - bpf_link__destroy(child_ingress_link); 267 + 
bpf_link__destroy(parent_egress1_link); 268 + bpf_link__destroy(parent_egress2_link); 269 + bpf_link__destroy(parent_ingress_link); 270 + bpf_link__destroy(child_egress1_link); 271 + bpf_link__destroy(child_egress2_link); 272 + bpf_link__destroy(child_ingress_link); 269 273 270 274 cg_storage_multi_isolated__destroy(obj); 271 275 } ··· 285 301 */ 286 302 parent_egress1_link = bpf_program__attach_cgroup(obj->progs.egress1, 287 303 parent_cgroup_fd); 288 - if (CHECK(IS_ERR(parent_egress1_link), "parent-egress1-cg-attach", 289 - "err %ld", PTR_ERR(parent_egress1_link))) 304 + if (!ASSERT_OK_PTR(parent_egress1_link, "parent-egress1-cg-attach")) 290 305 goto close_bpf_object; 291 306 parent_egress2_link = bpf_program__attach_cgroup(obj->progs.egress2, 292 307 parent_cgroup_fd); 293 - if (CHECK(IS_ERR(parent_egress2_link), "parent-egress2-cg-attach", 294 - "err %ld", PTR_ERR(parent_egress2_link))) 308 + if (!ASSERT_OK_PTR(parent_egress2_link, "parent-egress2-cg-attach")) 295 309 goto close_bpf_object; 296 310 parent_ingress_link = bpf_program__attach_cgroup(obj->progs.ingress, 297 311 parent_cgroup_fd); 298 - if (CHECK(IS_ERR(parent_ingress_link), "parent-ingress-cg-attach", 299 - "err %ld", PTR_ERR(parent_ingress_link))) 312 + if (!ASSERT_OK_PTR(parent_ingress_link, "parent-ingress-cg-attach")) 300 313 goto close_bpf_object; 301 314 err = connect_send(CHILD_CGROUP); 302 315 if (CHECK(err, "first-connect-send", "errno %d", errno)) ··· 319 338 */ 320 339 child_egress1_link = bpf_program__attach_cgroup(obj->progs.egress1, 321 340 child_cgroup_fd); 322 - if (CHECK(IS_ERR(child_egress1_link), "child-egress1-cg-attach", 323 - "err %ld", PTR_ERR(child_egress1_link))) 341 + if (!ASSERT_OK_PTR(child_egress1_link, "child-egress1-cg-attach")) 324 342 goto close_bpf_object; 325 343 child_egress2_link = bpf_program__attach_cgroup(obj->progs.egress2, 326 344 child_cgroup_fd); 327 - if (CHECK(IS_ERR(child_egress2_link), "child-egress2-cg-attach", 328 - "err %ld", 
PTR_ERR(child_egress2_link))) 345 + if (!ASSERT_OK_PTR(child_egress2_link, "child-egress2-cg-attach")) 329 346 goto close_bpf_object; 330 347 child_ingress_link = bpf_program__attach_cgroup(obj->progs.ingress, 331 348 child_cgroup_fd); 332 - if (CHECK(IS_ERR(child_ingress_link), "child-ingress-cg-attach", 333 - "err %ld", PTR_ERR(child_ingress_link))) 349 + if (!ASSERT_OK_PTR(child_ingress_link, "child-ingress-cg-attach")) 334 350 goto close_bpf_object; 335 351 err = connect_send(CHILD_CGROUP); 336 352 if (CHECK(err, "second-connect-send", "errno %d", errno)) ··· 353 375 goto close_bpf_object; 354 376 355 377 close_bpf_object: 356 - if (!IS_ERR(parent_egress1_link)) 357 - bpf_link__destroy(parent_egress1_link); 358 - if (!IS_ERR(parent_egress2_link)) 359 - bpf_link__destroy(parent_egress2_link); 360 - if (!IS_ERR(parent_ingress_link)) 361 - bpf_link__destroy(parent_ingress_link); 362 - if (!IS_ERR(child_egress1_link)) 363 - bpf_link__destroy(child_egress1_link); 364 - if (!IS_ERR(child_egress2_link)) 365 - bpf_link__destroy(child_egress2_link); 366 - if (!IS_ERR(child_ingress_link)) 367 - bpf_link__destroy(child_ingress_link); 378 + bpf_link__destroy(parent_egress1_link); 379 + bpf_link__destroy(parent_egress2_link); 380 + bpf_link__destroy(parent_ingress_link); 381 + bpf_link__destroy(child_egress1_link); 382 + bpf_link__destroy(child_egress2_link); 383 + bpf_link__destroy(child_ingress_link); 368 384 369 385 cg_storage_multi_shared__destroy(obj); 370 386 }
+1 -1
tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
··· 167 167 prog_cnt = 2; 168 168 CHECK_FAIL(bpf_prog_query(cg5, BPF_CGROUP_INET_EGRESS, 169 169 BPF_F_QUERY_EFFECTIVE, &attach_flags, 170 - prog_ids, &prog_cnt) != -1); 170 + prog_ids, &prog_cnt) >= 0); 171 171 CHECK_FAIL(errno != ENOSPC); 172 172 CHECK_FAIL(prog_cnt != 4); 173 173 /* check that prog_ids are returned even when buffer is too small */
+5 -9
tools/testing/selftests/bpf/prog_tests/cgroup_link.c
··· 65 65 for (i = 0; i < cg_nr; i++) { 66 66 links[i] = bpf_program__attach_cgroup(skel->progs.egress, 67 67 cgs[i].fd); 68 - if (CHECK(IS_ERR(links[i]), "cg_attach", "i: %d, err: %ld\n", 69 - i, PTR_ERR(links[i]))) 68 + if (!ASSERT_OK_PTR(links[i], "cg_attach")) 70 69 goto cleanup; 71 70 } 72 71 ··· 120 121 121 122 links[last_cg] = bpf_program__attach_cgroup(skel->progs.egress, 122 123 cgs[last_cg].fd); 123 - if (CHECK(IS_ERR(links[last_cg]), "cg_attach", "err: %ld\n", 124 - PTR_ERR(links[last_cg]))) 124 + if (!ASSERT_OK_PTR(links[last_cg], "cg_attach")) 125 125 goto cleanup; 126 126 127 127 ping_and_check(cg_nr + 1, 0); ··· 145 147 /* attempt to mix in with multi-attach bpf_link */ 146 148 tmp_link = bpf_program__attach_cgroup(skel->progs.egress, 147 149 cgs[last_cg].fd); 148 - if (CHECK(!IS_ERR(tmp_link), "cg_attach_fail", "unexpected success!\n")) { 150 + if (!ASSERT_ERR_PTR(tmp_link, "cg_attach_fail")) { 149 151 bpf_link__destroy(tmp_link); 150 152 goto cleanup; 151 153 } ··· 163 165 /* attach back link-based one */ 164 166 links[last_cg] = bpf_program__attach_cgroup(skel->progs.egress, 165 167 cgs[last_cg].fd); 166 - if (CHECK(IS_ERR(links[last_cg]), "cg_attach", "err: %ld\n", 167 - PTR_ERR(links[last_cg]))) 168 + if (!ASSERT_OK_PTR(links[last_cg], "cg_attach")) 168 169 goto cleanup; 169 170 170 171 ping_and_check(cg_nr, 0); ··· 246 249 BPF_CGROUP_INET_EGRESS); 247 250 248 251 for (i = 0; i < cg_nr; i++) { 249 - if (!IS_ERR(links[i])) 250 - bpf_link__destroy(links[i]); 252 + bpf_link__destroy(links[i]); 251 253 } 252 254 test_cgroup_link__destroy(skel); 253 255
+1 -1
tools/testing/selftests/bpf/prog_tests/cgroup_skb_sk_lookup.c
··· 60 60 goto cleanup; 61 61 62 62 link = bpf_program__attach_cgroup(skel->progs.ingress_lookup, cgfd); 63 - if (CHECK(IS_ERR(link), "cgroup_attach", "err: %ld\n", PTR_ERR(link))) 63 + if (!ASSERT_OK_PTR(link, "cgroup_attach")) 64 64 goto cleanup; 65 65 66 66 run_lookup_test(&skel->bss->g_serv_port, out_sk);
+1 -1
tools/testing/selftests/bpf/prog_tests/check_mtu.c
··· 53 53 prog = skel->progs.xdp_use_helper_basic; 54 54 55 55 link = bpf_program__attach_xdp(prog, IFINDEX_LO); 56 - if (CHECK(IS_ERR(link), "link_attach", "failed: %ld\n", PTR_ERR(link))) 56 + if (!ASSERT_OK_PTR(link, "link_attach")) 57 57 goto out; 58 58 skel->links.xdp_use_helper_basic = link; 59 59
+5 -10
tools/testing/selftests/bpf/prog_tests/core_reloc.c
··· 369 369 const char *name; 370 370 int i; 371 371 372 - if (CHECK(IS_ERR(local_btf), "local_btf", "failed: %ld\n", PTR_ERR(local_btf)) || 373 - CHECK(IS_ERR(targ_btf), "targ_btf", "failed: %ld\n", PTR_ERR(targ_btf))) { 372 + if (!ASSERT_OK_PTR(local_btf, "local_btf") || !ASSERT_OK_PTR(targ_btf, "targ_btf")) { 374 373 btf__free(local_btf); 375 374 btf__free(targ_btf); 376 375 return -EINVAL; ··· 847 848 } 848 849 849 850 obj = bpf_object__open_file(test_case->bpf_obj_file, NULL); 850 - if (CHECK(IS_ERR(obj), "obj_open", "failed to open '%s': %ld\n", 851 - test_case->bpf_obj_file, PTR_ERR(obj))) 851 + if (!ASSERT_OK_PTR(obj, "obj_open")) 852 852 continue; 853 853 854 854 probe_name = "raw_tracepoint/sys_enter"; ··· 897 899 data->my_pid_tgid = my_pid_tgid; 898 900 899 901 link = bpf_program__attach_raw_tracepoint(prog, tp_name); 900 - if (CHECK(IS_ERR(link), "attach_raw_tp", "err %ld\n", 901 - PTR_ERR(link))) 902 + if (!ASSERT_OK_PTR(link, "attach_raw_tp")) 902 903 goto cleanup; 903 904 904 905 /* trigger test run */ ··· 938 941 CHECK_FAIL(munmap(mmap_data, mmap_sz)); 939 942 mmap_data = NULL; 940 943 } 941 - if (!IS_ERR_OR_NULL(link)) { 942 - bpf_link__destroy(link); 943 - link = NULL; 944 - } 944 + bpf_link__destroy(link); 945 + link = NULL; 945 946 bpf_object__close(obj); 946 947 } 947 948 }
+8 -17
tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
··· 146 146 147 147 close_prog: 148 148 for (i = 0; i < prog_cnt; i++) 149 - if (!IS_ERR_OR_NULL(link[i])) 150 - bpf_link__destroy(link[i]); 151 - if (!IS_ERR_OR_NULL(obj)) 152 - bpf_object__close(obj); 149 + bpf_link__destroy(link[i]); 150 + bpf_object__close(obj); 153 151 bpf_object__close(tgt_obj); 154 152 free(link); 155 153 free(prog); ··· 229 231 return err; 230 232 231 233 link = bpf_program__attach_freplace(prog, tgt_fd, tgt_name); 232 - if (CHECK(IS_ERR(link), "second_link", "failed to attach second link prog_fd %d tgt_fd %d\n", bpf_program__fd(prog), tgt_fd)) 234 + if (!ASSERT_OK_PTR(link, "second_link")) 233 235 goto out; 234 236 235 237 err = bpf_prog_test_run(tgt_fd, 1, &pkt_v6, sizeof(pkt_v6), ··· 281 283 opts.attach_prog_fd = pkt_fd; 282 284 283 285 freplace_obj = bpf_object__open_file(freplace_name, &opts); 284 - if (CHECK(IS_ERR_OR_NULL(freplace_obj), "freplace_obj_open", 285 - "failed to open %s: %ld\n", freplace_name, 286 - PTR_ERR(freplace_obj))) 286 + if (!ASSERT_OK_PTR(freplace_obj, "freplace_obj_open")) 287 287 goto out; 288 288 289 289 err = bpf_object__load(freplace_obj); ··· 290 294 291 295 prog = bpf_program__next(NULL, freplace_obj); 292 296 freplace_link = bpf_program__attach_trace(prog); 293 - if (CHECK(IS_ERR(freplace_link), "freplace_attach_trace", "failed to link\n")) 297 + if (!ASSERT_OK_PTR(freplace_link, "freplace_attach_trace")) 294 298 goto out; 295 299 296 300 opts.attach_prog_fd = bpf_program__fd(prog); 297 301 fmod_obj = bpf_object__open_file(fmod_ret_name, &opts); 298 - if (CHECK(IS_ERR_OR_NULL(fmod_obj), "fmod_obj_open", 299 - "failed to open %s: %ld\n", fmod_ret_name, 300 - PTR_ERR(fmod_obj))) 302 + if (!ASSERT_OK_PTR(fmod_obj, "fmod_obj_open")) 301 303 goto out; 302 304 303 305 err = bpf_object__load(fmod_obj); ··· 344 350 ); 345 351 346 352 obj = bpf_object__open_file(obj_file, &opts); 347 - if (CHECK(IS_ERR_OR_NULL(obj), "obj_open", 348 - "failed to open %s: %ld\n", obj_file, 349 - PTR_ERR(obj))) 353 + if 
(!ASSERT_OK_PTR(obj, "obj_open")) 350 354 goto close_prog; 351 355 352 356 /* It should fail to load the program */ ··· 353 361 goto close_prog; 354 362 355 363 close_prog: 356 - if (!IS_ERR_OR_NULL(obj)) 357 - bpf_object__close(obj); 364 + bpf_object__close(obj); 358 365 bpf_object__close(pkt_obj); 359 366 } 360 367
+1 -1
tools/testing/selftests/bpf/prog_tests/flow_dissector.c
··· 541 541 return; 542 542 543 543 link = bpf_program__attach_netns(skel->progs._dissect, net_fd); 544 - if (CHECK(IS_ERR(link), "attach_netns", "err %ld\n", PTR_ERR(link))) 544 + if (!ASSERT_OK_PTR(link, "attach_netns")) 545 545 goto out_close; 546 546 547 547 run_tests_skb_less(tap_fd, skel->maps.last_dissection);
+5 -5
tools/testing/selftests/bpf/prog_tests/flow_dissector_reattach.c
··· 134 134 /* Expect failure creating link when another link exists */ 135 135 errno = 0; 136 136 link2 = bpf_link_create(prog2, netns, BPF_FLOW_DISSECTOR, &opts); 137 - if (CHECK_FAIL(link2 != -1 || errno != E2BIG)) 137 + if (CHECK_FAIL(link2 >= 0 || errno != E2BIG)) 138 138 perror("bpf_prog_attach(prog2) expected E2BIG"); 139 - if (link2 != -1) 139 + if (link2 >= 0) 140 140 close(link2); 141 141 CHECK_FAIL(query_attached_prog_id(netns) != query_prog_id(prog1)); 142 142 ··· 159 159 /* Expect failure creating link when prog attached */ 160 160 errno = 0; 161 161 link = bpf_link_create(prog2, netns, BPF_FLOW_DISSECTOR, &opts); 162 - if (CHECK_FAIL(link != -1 || errno != EEXIST)) 162 + if (CHECK_FAIL(link >= 0 || errno != EEXIST)) 163 163 perror("bpf_link_create(prog2) expected EEXIST"); 164 - if (link != -1) 164 + if (link >= 0) 165 165 close(link); 166 166 CHECK_FAIL(query_attached_prog_id(netns) != query_prog_id(prog1)); 167 167 ··· 623 623 } 624 624 out_close: 625 625 for (i = 0; i < ARRAY_SIZE(progs); i++) { 626 - if (progs[i] != -1) 626 + if (progs[i] >= 0) 627 627 CHECK_FAIL(close(progs[i])); 628 628 } 629 629 }
+4 -6
tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c
··· 121 121 goto close_prog; 122 122 123 123 link = bpf_program__attach_raw_tracepoint(prog, "sys_enter"); 124 - if (CHECK(IS_ERR(link), "attach_raw_tp", "err %ld\n", PTR_ERR(link))) 124 + if (!ASSERT_OK_PTR(link, "attach_raw_tp")) 125 125 goto close_prog; 126 126 127 127 pb_opts.sample_cb = get_stack_print_output; 128 128 pb = perf_buffer__new(bpf_map__fd(map), 8, &pb_opts); 129 - if (CHECK(IS_ERR(pb), "perf_buf__new", "err %ld\n", PTR_ERR(pb))) 129 + if (!ASSERT_OK_PTR(pb, "perf_buf__new")) 130 130 goto close_prog; 131 131 132 132 /* trigger some syscall action */ ··· 141 141 } 142 142 143 143 close_prog: 144 - if (!IS_ERR_OR_NULL(link)) 145 - bpf_link__destroy(link); 146 - if (!IS_ERR_OR_NULL(pb)) 147 - perf_buffer__free(pb); 144 + bpf_link__destroy(link); 145 + perf_buffer__free(pb); 148 146 bpf_object__close(obj); 149 147 }
+3 -6
tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
··· 48 48 49 49 skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, 50 50 pmu_fd); 51 - CHECK(!IS_ERR(skel->links.oncpu), "attach_perf_event_no_callchain", 52 - "should have failed\n"); 51 + ASSERT_ERR_PTR(skel->links.oncpu, "attach_perf_event_no_callchain"); 53 52 close(pmu_fd); 54 53 55 54 /* add PERF_SAMPLE_CALLCHAIN, attach should succeed */ ··· 64 65 65 66 skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, 66 67 pmu_fd); 67 - CHECK(IS_ERR(skel->links.oncpu), "attach_perf_event_callchain", 68 - "err: %ld\n", PTR_ERR(skel->links.oncpu)); 68 + ASSERT_OK_PTR(skel->links.oncpu, "attach_perf_event_callchain"); 69 69 close(pmu_fd); 70 70 71 71 /* add exclude_callchain_kernel, attach should fail */ ··· 80 82 81 83 skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, 82 84 pmu_fd); 83 - CHECK(!IS_ERR(skel->links.oncpu), "attach_perf_event_exclude_callchain_kernel", 84 - "should have failed\n"); 85 + ASSERT_ERR_PTR(skel->links.oncpu, "attach_perf_event_exclude_callchain_kernel"); 85 86 close(pmu_fd); 86 87 87 88 cleanup:
+3 -6
tools/testing/selftests/bpf/prog_tests/hashmap.c
··· 48 48 struct hashmap *map; 49 49 50 50 map = hashmap__new(hash_fn, equal_fn, NULL); 51 - if (CHECK(IS_ERR(map), "hashmap__new", 52 - "failed to create map: %ld\n", PTR_ERR(map))) 51 + if (!ASSERT_OK_PTR(map, "hashmap__new")) 53 52 return; 54 53 55 54 for (i = 0; i < ELEM_CNT; i++) { ··· 266 267 267 268 /* force collisions */ 268 269 map = hashmap__new(collision_hash_fn, equal_fn, NULL); 269 - if (CHECK(IS_ERR(map), "hashmap__new", 270 - "failed to create map: %ld\n", PTR_ERR(map))) 270 + if (!ASSERT_OK_PTR(map, "hashmap__new")) 271 271 return; 272 272 273 273 /* set up multimap: ··· 337 339 338 340 /* force collisions */ 339 341 map = hashmap__new(hash_fn, equal_fn, NULL); 340 - if (CHECK(IS_ERR(map), "hashmap__new", 341 - "failed to create map: %ld\n", PTR_ERR(map))) 342 + if (!ASSERT_OK_PTR(map, "hashmap__new")) 342 343 goto cleanup; 343 344 344 345 if (CHECK(hashmap__size(map) != 0, "hashmap__size",
+7 -12
tools/testing/selftests/bpf/prog_tests/kfree_skb.c
··· 97 97 goto close_prog; 98 98 99 99 link = bpf_program__attach_raw_tracepoint(prog, NULL); 100 - if (CHECK(IS_ERR(link), "attach_raw_tp", "err %ld\n", PTR_ERR(link))) 100 + if (!ASSERT_OK_PTR(link, "attach_raw_tp")) 101 101 goto close_prog; 102 102 link_fentry = bpf_program__attach_trace(fentry); 103 - if (CHECK(IS_ERR(link_fentry), "attach fentry", "err %ld\n", 104 - PTR_ERR(link_fentry))) 103 + if (!ASSERT_OK_PTR(link_fentry, "attach fentry")) 105 104 goto close_prog; 106 105 link_fexit = bpf_program__attach_trace(fexit); 107 - if (CHECK(IS_ERR(link_fexit), "attach fexit", "err %ld\n", 108 - PTR_ERR(link_fexit))) 106 + if (!ASSERT_OK_PTR(link_fexit, "attach fexit")) 109 107 goto close_prog; 110 108 111 109 perf_buf_map = bpf_object__find_map_by_name(obj2, "perf_buf_map"); ··· 114 116 pb_opts.sample_cb = on_sample; 115 117 pb_opts.ctx = &passed; 116 118 pb = perf_buffer__new(bpf_map__fd(perf_buf_map), 1, &pb_opts); 117 - if (CHECK(IS_ERR(pb), "perf_buf__new", "err %ld\n", PTR_ERR(pb))) 119 + if (!ASSERT_OK_PTR(pb, "perf_buf__new")) 118 120 goto close_prog; 119 121 120 122 memcpy(skb.cb, &cb, sizeof(cb)); ··· 142 144 CHECK_FAIL(!test_ok[0] || !test_ok[1]); 143 145 close_prog: 144 146 perf_buffer__free(pb); 145 - if (!IS_ERR_OR_NULL(link)) 146 - bpf_link__destroy(link); 147 - if (!IS_ERR_OR_NULL(link_fentry)) 148 - bpf_link__destroy(link_fentry); 149 - if (!IS_ERR_OR_NULL(link_fexit)) 150 - bpf_link__destroy(link_fexit); 147 + bpf_link__destroy(link); 148 + bpf_link__destroy(link_fentry); 149 + bpf_link__destroy(link_fexit); 151 150 bpf_object__close(obj); 152 151 bpf_object__close(obj2); 153 152 }
+1 -2
tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
··· 87 87 struct btf *btf; 88 88 89 89 btf = libbpf_find_kernel_btf(); 90 - if (CHECK(IS_ERR(btf), "btf_exists", "failed to load kernel BTF: %ld\n", 91 - PTR_ERR(btf))) 90 + if (!ASSERT_OK_PTR(btf, "btf_exists")) 92 91 return; 93 92 94 93 percpu_datasec = btf__find_by_name_kind(btf, ".data..percpu",
+288
tools/testing/selftests/bpf/prog_tests/lookup_and_delete.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #include <test_progs.h> 4 + #include "test_lookup_and_delete.skel.h" 5 + 6 + #define START_VALUE 1234 7 + #define NEW_VALUE 4321 8 + #define MAX_ENTRIES 2 9 + 10 + static int duration; 11 + static int nr_cpus; 12 + 13 + static int fill_values(int map_fd) 14 + { 15 + __u64 key, value = START_VALUE; 16 + int err; 17 + 18 + for (key = 1; key < MAX_ENTRIES + 1; key++) { 19 + err = bpf_map_update_elem(map_fd, &key, &value, BPF_NOEXIST); 20 + if (!ASSERT_OK(err, "bpf_map_update_elem")) 21 + return -1; 22 + } 23 + 24 + return 0; 25 + } 26 + 27 + static int fill_values_percpu(int map_fd) 28 + { 29 + __u64 key, value[nr_cpus]; 30 + int i, err; 31 + 32 + for (i = 0; i < nr_cpus; i++) 33 + value[i] = START_VALUE; 34 + 35 + for (key = 1; key < MAX_ENTRIES + 1; key++) { 36 + err = bpf_map_update_elem(map_fd, &key, value, BPF_NOEXIST); 37 + if (!ASSERT_OK(err, "bpf_map_update_elem")) 38 + return -1; 39 + } 40 + 41 + return 0; 42 + } 43 + 44 + static struct test_lookup_and_delete *setup_prog(enum bpf_map_type map_type, 45 + int *map_fd) 46 + { 47 + struct test_lookup_and_delete *skel; 48 + int err; 49 + 50 + skel = test_lookup_and_delete__open(); 51 + if (!ASSERT_OK_PTR(skel, "test_lookup_and_delete__open")) 52 + return NULL; 53 + 54 + err = bpf_map__set_type(skel->maps.hash_map, map_type); 55 + if (!ASSERT_OK(err, "bpf_map__set_type")) 56 + goto cleanup; 57 + 58 + err = bpf_map__set_max_entries(skel->maps.hash_map, MAX_ENTRIES); 59 + if (!ASSERT_OK(err, "bpf_map__set_max_entries")) 60 + goto cleanup; 61 + 62 + err = test_lookup_and_delete__load(skel); 63 + if (!ASSERT_OK(err, "test_lookup_and_delete__load")) 64 + goto cleanup; 65 + 66 + *map_fd = bpf_map__fd(skel->maps.hash_map); 67 + if (!ASSERT_GE(*map_fd, 0, "bpf_map__fd")) 68 + goto cleanup; 69 + 70 + return skel; 71 + 72 + cleanup: 73 + test_lookup_and_delete__destroy(skel); 74 + return NULL; 75 + } 76 + 77 + /* Triggers BPF program that updates map with given key and 
value */ 78 + static int trigger_tp(struct test_lookup_and_delete *skel, __u64 key, 79 + __u64 value) 80 + { 81 + int err; 82 + 83 + skel->bss->set_pid = getpid(); 84 + skel->bss->set_key = key; 85 + skel->bss->set_value = value; 86 + 87 + err = test_lookup_and_delete__attach(skel); 88 + if (!ASSERT_OK(err, "test_lookup_and_delete__attach")) 89 + return -1; 90 + 91 + syscall(__NR_getpgid); 92 + 93 + test_lookup_and_delete__detach(skel); 94 + 95 + return 0; 96 + } 97 + 98 + static void test_lookup_and_delete_hash(void) 99 + { 100 + struct test_lookup_and_delete *skel; 101 + __u64 key, value; 102 + int map_fd, err; 103 + 104 + /* Setup program and fill the map. */ 105 + skel = setup_prog(BPF_MAP_TYPE_HASH, &map_fd); 106 + if (!ASSERT_OK_PTR(skel, "setup_prog")) 107 + return; 108 + 109 + err = fill_values(map_fd); 110 + if (!ASSERT_OK(err, "fill_values")) 111 + goto cleanup; 112 + 113 + /* Lookup and delete element. */ 114 + key = 1; 115 + err = bpf_map_lookup_and_delete_elem(map_fd, &key, &value); 116 + if (!ASSERT_OK(err, "bpf_map_lookup_and_delete_elem")) 117 + goto cleanup; 118 + 119 + /* Fetched value should match the initially set value. */ 120 + if (CHECK(value != START_VALUE, "bpf_map_lookup_and_delete_elem", 121 + "unexpected value=%lld\n", value)) 122 + goto cleanup; 123 + 124 + /* Check that the entry is non existent. */ 125 + err = bpf_map_lookup_elem(map_fd, &key, &value); 126 + if (!ASSERT_ERR(err, "bpf_map_lookup_elem")) 127 + goto cleanup; 128 + 129 + cleanup: 130 + test_lookup_and_delete__destroy(skel); 131 + } 132 + 133 + static void test_lookup_and_delete_percpu_hash(void) 134 + { 135 + struct test_lookup_and_delete *skel; 136 + __u64 key, val, value[nr_cpus]; 137 + int map_fd, err, i; 138 + 139 + /* Setup program and fill the map. 
*/ 140 + skel = setup_prog(BPF_MAP_TYPE_PERCPU_HASH, &map_fd); 141 + if (!ASSERT_OK_PTR(skel, "setup_prog")) 142 + return; 143 + 144 + err = fill_values_percpu(map_fd); 145 + if (!ASSERT_OK(err, "fill_values_percpu")) 146 + goto cleanup; 147 + 148 + /* Lookup and delete element. */ 149 + key = 1; 150 + err = bpf_map_lookup_and_delete_elem(map_fd, &key, value); 151 + if (!ASSERT_OK(err, "bpf_map_lookup_and_delete_elem")) 152 + goto cleanup; 153 + 154 + for (i = 0; i < nr_cpus; i++) { 155 + val = value[i]; 156 + 157 + /* Fetched value should match the initially set value. */ 158 + if (CHECK(val != START_VALUE, "map value", 159 + "unexpected for cpu %d: %lld\n", i, val)) 160 + goto cleanup; 161 + } 162 + 163 + /* Check that the entry is non existent. */ 164 + err = bpf_map_lookup_elem(map_fd, &key, value); 165 + if (!ASSERT_ERR(err, "bpf_map_lookup_elem")) 166 + goto cleanup; 167 + 168 + cleanup: 169 + test_lookup_and_delete__destroy(skel); 170 + } 171 + 172 + static void test_lookup_and_delete_lru_hash(void) 173 + { 174 + struct test_lookup_and_delete *skel; 175 + __u64 key, value; 176 + int map_fd, err; 177 + 178 + /* Setup program and fill the LRU map. */ 179 + skel = setup_prog(BPF_MAP_TYPE_LRU_HASH, &map_fd); 180 + if (!ASSERT_OK_PTR(skel, "setup_prog")) 181 + return; 182 + 183 + err = fill_values(map_fd); 184 + if (!ASSERT_OK(err, "fill_values")) 185 + goto cleanup; 186 + 187 + /* Insert new element at key=3, should reuse LRU element. */ 188 + key = 3; 189 + err = trigger_tp(skel, key, NEW_VALUE); 190 + if (!ASSERT_OK(err, "trigger_tp")) 191 + goto cleanup; 192 + 193 + /* Lookup and delete element 3. */ 194 + err = bpf_map_lookup_and_delete_elem(map_fd, &key, &value); 195 + if (!ASSERT_OK(err, "bpf_map_lookup_and_delete_elem")) 196 + goto cleanup; 197 + 198 + /* Value should match the new value. 
*/ 199 + if (CHECK(value != NEW_VALUE, "bpf_map_lookup_and_delete_elem", 200 + "unexpected value=%lld\n", value)) 201 + goto cleanup; 202 + 203 + /* Check that entries 3 and 1 are non existent. */ 204 + err = bpf_map_lookup_elem(map_fd, &key, &value); 205 + if (!ASSERT_ERR(err, "bpf_map_lookup_elem")) 206 + goto cleanup; 207 + 208 + key = 1; 209 + err = bpf_map_lookup_elem(map_fd, &key, &value); 210 + if (!ASSERT_ERR(err, "bpf_map_lookup_elem")) 211 + goto cleanup; 212 + 213 + cleanup: 214 + test_lookup_and_delete__destroy(skel); 215 + } 216 + 217 + static void test_lookup_and_delete_lru_percpu_hash(void) 218 + { 219 + struct test_lookup_and_delete *skel; 220 + __u64 key, val, value[nr_cpus]; 221 + int map_fd, err, i, cpucnt = 0; 222 + 223 + /* Setup program and fill the LRU map. */ 224 + skel = setup_prog(BPF_MAP_TYPE_LRU_PERCPU_HASH, &map_fd); 225 + if (!ASSERT_OK_PTR(skel, "setup_prog")) 226 + return; 227 + 228 + err = fill_values_percpu(map_fd); 229 + if (!ASSERT_OK(err, "fill_values_percpu")) 230 + goto cleanup; 231 + 232 + /* Insert new element at key=3, should reuse LRU element 1. */ 233 + key = 3; 234 + err = trigger_tp(skel, key, NEW_VALUE); 235 + if (!ASSERT_OK(err, "trigger_tp")) 236 + goto cleanup; 237 + 238 + /* Clean value. */ 239 + for (i = 0; i < nr_cpus; i++) 240 + value[i] = 0; 241 + 242 + /* Lookup and delete element 3. */ 243 + err = bpf_map_lookup_and_delete_elem(map_fd, &key, value); 244 + if (!ASSERT_OK(err, "bpf_map_lookup_and_delete_elem")) { 245 + goto cleanup; 246 + } 247 + 248 + /* Check if only one CPU has set the value. */ 249 + for (i = 0; i < nr_cpus; i++) { 250 + val = value[i]; 251 + if (val) { 252 + if (CHECK(val != NEW_VALUE, "map value", 253 + "unexpected for cpu %d: %lld\n", i, val)) 254 + goto cleanup; 255 + cpucnt++; 256 + } 257 + } 258 + if (CHECK(cpucnt != 1, "map value", "set for %d CPUs instead of 1!\n", 259 + cpucnt)) 260 + goto cleanup; 261 + 262 + /* Check that entries 3 and 1 are non existent. 
*/ 263 + err = bpf_map_lookup_elem(map_fd, &key, &value); 264 + if (!ASSERT_ERR(err, "bpf_map_lookup_elem")) 265 + goto cleanup; 266 + 267 + key = 1; 268 + err = bpf_map_lookup_elem(map_fd, &key, &value); 269 + if (!ASSERT_ERR(err, "bpf_map_lookup_elem")) 270 + goto cleanup; 271 + 272 + cleanup: 273 + test_lookup_and_delete__destroy(skel); 274 + } 275 + 276 + void test_lookup_and_delete(void) 277 + { 278 + nr_cpus = bpf_num_possible_cpus(); 279 + 280 + if (test__start_subtest("lookup_and_delete")) 281 + test_lookup_and_delete_hash(); 282 + if (test__start_subtest("lookup_and_delete_percpu")) 283 + test_lookup_and_delete_percpu_hash(); 284 + if (test__start_subtest("lookup_and_delete_lru")) 285 + test_lookup_and_delete_lru_hash(); 286 + if (test__start_subtest("lookup_and_delete_lru_percpu")) 287 + test_lookup_and_delete_lru_percpu_hash(); 288 + }
+559
tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Check if we can migrate child sockets. 4 + * 5 + * 1. call listen() for 4 server sockets. 6 + * 2. call connect() for 25 client sockets. 7 + * 3. call listen() for 1 server socket. (migration target) 8 + * 4. update a map to migrate all child sockets 9 + * to the last server socket (migrate_map[cookie] = 4) 10 + * 5. call shutdown() for first 4 server sockets 11 + * and migrate the requests in the accept queue 12 + * to the last server socket. 13 + * 6. call listen() for the second server socket. 14 + * 7. call shutdown() for the last server 15 + * and migrate the requests in the accept queue 16 + * to the second server socket. 17 + * 8. call listen() for the last server. 18 + * 9. call shutdown() for the second server 19 + * and migrate the requests in the accept queue 20 + * to the last server socket. 21 + * 10. call accept() for the last server socket. 22 + * 23 + * Author: Kuniyuki Iwashima <kuniyu@amazon.co.jp> 24 + */ 25 + 26 + #include <bpf/bpf.h> 27 + #include <bpf/libbpf.h> 28 + 29 + #include "test_progs.h" 30 + #include "test_migrate_reuseport.skel.h" 31 + #include "network_helpers.h" 32 + 33 + #ifndef TCP_FASTOPEN_CONNECT 34 + #define TCP_FASTOPEN_CONNECT 30 35 + #endif 36 + 37 + #define IFINDEX_LO 1 38 + 39 + #define NR_SERVERS 5 40 + #define NR_CLIENTS (NR_SERVERS * 5) 41 + #define MIGRATED_TO (NR_SERVERS - 1) 42 + 43 + /* fastopenq->max_qlen and sk->sk_max_ack_backlog */ 44 + #define QLEN (NR_CLIENTS * 5) 45 + 46 + #define MSG "Hello World\0" 47 + #define MSGLEN 12 48 + 49 + static struct migrate_reuseport_test_case { 50 + const char *name; 51 + __s64 servers[NR_SERVERS]; 52 + __s64 clients[NR_CLIENTS]; 53 + struct sockaddr_storage addr; 54 + socklen_t addrlen; 55 + int family; 56 + int state; 57 + bool drop_ack; 58 + bool expire_synack_timer; 59 + bool fastopen; 60 + struct bpf_link *link; 61 + } test_cases[] = { 62 + { 63 + .name = "IPv4 TCP_ESTABLISHED inet_csk_listen_stop", 64 + .family = 
AF_INET, 65 + .state = BPF_TCP_ESTABLISHED, 66 + .drop_ack = false, 67 + .expire_synack_timer = false, 68 + .fastopen = false, 69 + }, 70 + { 71 + .name = "IPv4 TCP_SYN_RECV inet_csk_listen_stop", 72 + .family = AF_INET, 73 + .state = BPF_TCP_SYN_RECV, 74 + .drop_ack = true, 75 + .expire_synack_timer = false, 76 + .fastopen = true, 77 + }, 78 + { 79 + .name = "IPv4 TCP_NEW_SYN_RECV reqsk_timer_handler", 80 + .family = AF_INET, 81 + .state = BPF_TCP_NEW_SYN_RECV, 82 + .drop_ack = true, 83 + .expire_synack_timer = true, 84 + .fastopen = false, 85 + }, 86 + { 87 + .name = "IPv4 TCP_NEW_SYN_RECV inet_csk_complete_hashdance", 88 + .family = AF_INET, 89 + .state = BPF_TCP_NEW_SYN_RECV, 90 + .drop_ack = true, 91 + .expire_synack_timer = false, 92 + .fastopen = false, 93 + }, 94 + { 95 + .name = "IPv6 TCP_ESTABLISHED inet_csk_listen_stop", 96 + .family = AF_INET6, 97 + .state = BPF_TCP_ESTABLISHED, 98 + .drop_ack = false, 99 + .expire_synack_timer = false, 100 + .fastopen = false, 101 + }, 102 + { 103 + .name = "IPv6 TCP_SYN_RECV inet_csk_listen_stop", 104 + .family = AF_INET6, 105 + .state = BPF_TCP_SYN_RECV, 106 + .drop_ack = true, 107 + .expire_synack_timer = false, 108 + .fastopen = true, 109 + }, 110 + { 111 + .name = "IPv6 TCP_NEW_SYN_RECV reqsk_timer_handler", 112 + .family = AF_INET6, 113 + .state = BPF_TCP_NEW_SYN_RECV, 114 + .drop_ack = true, 115 + .expire_synack_timer = true, 116 + .fastopen = false, 117 + }, 118 + { 119 + .name = "IPv6 TCP_NEW_SYN_RECV inet_csk_complete_hashdance", 120 + .family = AF_INET6, 121 + .state = BPF_TCP_NEW_SYN_RECV, 122 + .drop_ack = true, 123 + .expire_synack_timer = false, 124 + .fastopen = false, 125 + } 126 + }; 127 + 128 + static void init_fds(__s64 fds[], int len) 129 + { 130 + int i; 131 + 132 + for (i = 0; i < len; i++) 133 + fds[i] = -1; 134 + } 135 + 136 + static void close_fds(__s64 fds[], int len) 137 + { 138 + int i; 139 + 140 + for (i = 0; i < len; i++) { 141 + if (fds[i] != -1) { 142 + close(fds[i]); 143 + fds[i] = -1; 
144 + } 145 + } 146 + } 147 + 148 + static int setup_fastopen(char *buf, int size, int *saved_len, bool restore) 149 + { 150 + int err = 0, fd, len; 151 + 152 + fd = open("/proc/sys/net/ipv4/tcp_fastopen", O_RDWR); 153 + if (!ASSERT_NEQ(fd, -1, "open")) 154 + return -1; 155 + 156 + if (restore) { 157 + len = write(fd, buf, *saved_len); 158 + if (!ASSERT_EQ(len, *saved_len, "write - restore")) 159 + err = -1; 160 + } else { 161 + *saved_len = read(fd, buf, size); 162 + if (!ASSERT_GE(*saved_len, 1, "read")) { 163 + err = -1; 164 + goto close; 165 + } 166 + 167 + err = lseek(fd, 0, SEEK_SET); 168 + if (!ASSERT_OK(err, "lseek")) 169 + goto close; 170 + 171 + /* (TFO_CLIENT_ENABLE | TFO_SERVER_ENABLE | 172 + * TFO_CLIENT_NO_COOKIE | TFO_SERVER_COOKIE_NOT_REQD) 173 + */ 174 + len = write(fd, "519", 3); 175 + if (!ASSERT_EQ(len, 3, "write - setup")) 176 + err = -1; 177 + } 178 + 179 + close: 180 + close(fd); 181 + 182 + return err; 183 + } 184 + 185 + static int drop_ack(struct migrate_reuseport_test_case *test_case, 186 + struct test_migrate_reuseport *skel) 187 + { 188 + if (test_case->family == AF_INET) 189 + skel->bss->server_port = ((struct sockaddr_in *) 190 + &test_case->addr)->sin_port; 191 + else 192 + skel->bss->server_port = ((struct sockaddr_in6 *) 193 + &test_case->addr)->sin6_port; 194 + 195 + test_case->link = bpf_program__attach_xdp(skel->progs.drop_ack, 196 + IFINDEX_LO); 197 + if (!ASSERT_OK_PTR(test_case->link, "bpf_program__attach_xdp")) 198 + return -1; 199 + 200 + return 0; 201 + } 202 + 203 + static int pass_ack(struct migrate_reuseport_test_case *test_case) 204 + { 205 + int err; 206 + 207 + err = bpf_link__detach(test_case->link); 208 + if (!ASSERT_OK(err, "bpf_link__detach")) 209 + return -1; 210 + 211 + test_case->link = NULL; 212 + 213 + return 0; 214 + } 215 + 216 + static int start_servers(struct migrate_reuseport_test_case *test_case, 217 + struct test_migrate_reuseport *skel) 218 + { 219 + int i, err, prog_fd, reuseport = 1, qlen = QLEN; 
220 + 221 + prog_fd = bpf_program__fd(skel->progs.migrate_reuseport); 222 + 223 + make_sockaddr(test_case->family, 224 + test_case->family == AF_INET ? "127.0.0.1" : "::1", 0, 225 + &test_case->addr, &test_case->addrlen); 226 + 227 + for (i = 0; i < NR_SERVERS; i++) { 228 + test_case->servers[i] = socket(test_case->family, SOCK_STREAM, 229 + IPPROTO_TCP); 230 + if (!ASSERT_NEQ(test_case->servers[i], -1, "socket")) 231 + return -1; 232 + 233 + err = setsockopt(test_case->servers[i], SOL_SOCKET, 234 + SO_REUSEPORT, &reuseport, sizeof(reuseport)); 235 + if (!ASSERT_OK(err, "setsockopt - SO_REUSEPORT")) 236 + return -1; 237 + 238 + err = bind(test_case->servers[i], 239 + (struct sockaddr *)&test_case->addr, 240 + test_case->addrlen); 241 + if (!ASSERT_OK(err, "bind")) 242 + return -1; 243 + 244 + if (i == 0) { 245 + err = setsockopt(test_case->servers[i], SOL_SOCKET, 246 + SO_ATTACH_REUSEPORT_EBPF, 247 + &prog_fd, sizeof(prog_fd)); 248 + if (!ASSERT_OK(err, 249 + "setsockopt - SO_ATTACH_REUSEPORT_EBPF")) 250 + return -1; 251 + 252 + err = getsockname(test_case->servers[i], 253 + (struct sockaddr *)&test_case->addr, 254 + &test_case->addrlen); 255 + if (!ASSERT_OK(err, "getsockname")) 256 + return -1; 257 + } 258 + 259 + if (test_case->fastopen) { 260 + err = setsockopt(test_case->servers[i], 261 + SOL_TCP, TCP_FASTOPEN, 262 + &qlen, sizeof(qlen)); 263 + if (!ASSERT_OK(err, "setsockopt - TCP_FASTOPEN")) 264 + return -1; 265 + } 266 + 267 + /* All requests will be tied to the first four listeners */ 268 + if (i != MIGRATED_TO) { 269 + err = listen(test_case->servers[i], qlen); 270 + if (!ASSERT_OK(err, "listen")) 271 + return -1; 272 + } 273 + } 274 + 275 + return 0; 276 + } 277 + 278 + static int start_clients(struct migrate_reuseport_test_case *test_case) 279 + { 280 + char buf[MSGLEN] = MSG; 281 + int i, err; 282 + 283 + for (i = 0; i < NR_CLIENTS; i++) { 284 + test_case->clients[i] = socket(test_case->family, SOCK_STREAM, 285 + IPPROTO_TCP); 286 + if 
(!ASSERT_NEQ(test_case->clients[i], -1, "socket")) 287 + return -1; 288 + 289 + /* The attached XDP program drops only the final ACK, so 290 + * clients will transition to TCP_ESTABLISHED immediately. 291 + */ 292 + err = settimeo(test_case->clients[i], 100); 293 + if (!ASSERT_OK(err, "settimeo")) 294 + return -1; 295 + 296 + if (test_case->fastopen) { 297 + int fastopen = 1; 298 + 299 + err = setsockopt(test_case->clients[i], IPPROTO_TCP, 300 + TCP_FASTOPEN_CONNECT, &fastopen, 301 + sizeof(fastopen)); 302 + if (!ASSERT_OK(err, 303 + "setsockopt - TCP_FASTOPEN_CONNECT")) 304 + return -1; 305 + } 306 + 307 + err = connect(test_case->clients[i], 308 + (struct sockaddr *)&test_case->addr, 309 + test_case->addrlen); 310 + if (!ASSERT_OK(err, "connect")) 311 + return -1; 312 + 313 + err = write(test_case->clients[i], buf, MSGLEN); 314 + if (!ASSERT_EQ(err, MSGLEN, "write")) 315 + return -1; 316 + } 317 + 318 + return 0; 319 + } 320 + 321 + static int update_maps(struct migrate_reuseport_test_case *test_case, 322 + struct test_migrate_reuseport *skel) 323 + { 324 + int i, err, migrated_to = MIGRATED_TO; 325 + int reuseport_map_fd, migrate_map_fd; 326 + __u64 value; 327 + 328 + reuseport_map_fd = bpf_map__fd(skel->maps.reuseport_map); 329 + migrate_map_fd = bpf_map__fd(skel->maps.migrate_map); 330 + 331 + for (i = 0; i < NR_SERVERS; i++) { 332 + value = (__u64)test_case->servers[i]; 333 + err = bpf_map_update_elem(reuseport_map_fd, &i, &value, 334 + BPF_NOEXIST); 335 + if (!ASSERT_OK(err, "bpf_map_update_elem - reuseport_map")) 336 + return -1; 337 + 338 + err = bpf_map_lookup_elem(reuseport_map_fd, &i, &value); 339 + if (!ASSERT_OK(err, "bpf_map_lookup_elem - reuseport_map")) 340 + return -1; 341 + 342 + err = bpf_map_update_elem(migrate_map_fd, &value, &migrated_to, 343 + BPF_NOEXIST); 344 + if (!ASSERT_OK(err, "bpf_map_update_elem - migrate_map")) 345 + return -1; 346 + } 347 + 348 + return 0; 349 + } 350 + 351 + static int migrate_dance(struct 
migrate_reuseport_test_case *test_case) 352 + { 353 + int i, err; 354 + 355 + /* Migrate TCP_ESTABLISHED and TCP_SYN_RECV requests 356 + * to the last listener based on eBPF. 357 + */ 358 + for (i = 0; i < MIGRATED_TO; i++) { 359 + err = shutdown(test_case->servers[i], SHUT_RDWR); 360 + if (!ASSERT_OK(err, "shutdown")) 361 + return -1; 362 + } 363 + 364 + /* No dance for TCP_NEW_SYN_RECV to migrate based on eBPF */ 365 + if (test_case->state == BPF_TCP_NEW_SYN_RECV) 366 + return 0; 367 + 368 + /* Note that we use the second listener instead of the 369 + * first one here. 370 + * 371 + * The first listener is bind()ed with port 0, and 372 + * SOCK_BINDPORT_LOCK is not set in sk_userlocks, so 373 + * calling listen() again will bind() the first listener 374 + * on a new ephemeral port and detach it from the existing 375 + * reuseport group. (See: __inet_bind(), tcp_set_state()) 376 + * 377 + * OTOH, the second one is bind()ed with a specific port, 378 + * and SOCK_BINDPORT_LOCK is set. Thus, re-listen() will 379 + * resurrect the listener on the existing reuseport group. 380 + */ 381 + err = listen(test_case->servers[1], QLEN); 382 + if (!ASSERT_OK(err, "listen")) 383 + return -1; 384 + 385 + /* Migrate from the last listener to the second one. 386 + * 387 + * All listeners were detached from the reuseport_map, 388 + * so migration will be done by kernel random pick from here. 
389 + */ 390 + err = shutdown(test_case->servers[MIGRATED_TO], SHUT_RDWR); 391 + if (!ASSERT_OK(err, "shutdown")) 392 + return -1; 393 + 394 + /* Back to the existing reuseport group */ 395 + err = listen(test_case->servers[MIGRATED_TO], QLEN); 396 + if (!ASSERT_OK(err, "listen")) 397 + return -1; 398 + 399 + /* Migrate back to the last one from the second one */ 400 + err = shutdown(test_case->servers[1], SHUT_RDWR); 401 + if (!ASSERT_OK(err, "shutdown")) 402 + return -1; 403 + 404 + return 0; 405 + } 406 + 407 + static void count_requests(struct migrate_reuseport_test_case *test_case, 408 + struct test_migrate_reuseport *skel) 409 + { 410 + struct sockaddr_storage addr; 411 + socklen_t len = sizeof(addr); 412 + int err, cnt = 0, client; 413 + char buf[MSGLEN]; 414 + 415 + err = settimeo(test_case->servers[MIGRATED_TO], 4000); 416 + if (!ASSERT_OK(err, "settimeo")) 417 + goto out; 418 + 419 + for (; cnt < NR_CLIENTS; cnt++) { 420 + client = accept(test_case->servers[MIGRATED_TO], 421 + (struct sockaddr *)&addr, &len); 422 + if (!ASSERT_NEQ(client, -1, "accept")) 423 + goto out; 424 + 425 + memset(buf, 0, MSGLEN); 426 + read(client, &buf, MSGLEN); 427 + close(client); 428 + 429 + if (!ASSERT_STREQ(buf, MSG, "read")) 430 + goto out; 431 + } 432 + 433 + out: 434 + ASSERT_EQ(cnt, NR_CLIENTS, "count in userspace"); 435 + 436 + switch (test_case->state) { 437 + case BPF_TCP_ESTABLISHED: 438 + cnt = skel->bss->migrated_at_close; 439 + break; 440 + case BPF_TCP_SYN_RECV: 441 + cnt = skel->bss->migrated_at_close_fastopen; 442 + break; 443 + case BPF_TCP_NEW_SYN_RECV: 444 + if (test_case->expire_synack_timer) 445 + cnt = skel->bss->migrated_at_send_synack; 446 + else 447 + cnt = skel->bss->migrated_at_recv_ack; 448 + break; 449 + default: 450 + cnt = 0; 451 + } 452 + 453 + ASSERT_EQ(cnt, NR_CLIENTS, "count in BPF prog"); 454 + } 455 + 456 + static void run_test(struct migrate_reuseport_test_case *test_case, 457 + struct test_migrate_reuseport *skel) 458 + { 459 + int err, 
saved_len; 460 + char buf[16]; 461 + 462 + skel->bss->migrated_at_close = 0; 463 + skel->bss->migrated_at_close_fastopen = 0; 464 + skel->bss->migrated_at_send_synack = 0; 465 + skel->bss->migrated_at_recv_ack = 0; 466 + 467 + init_fds(test_case->servers, NR_SERVERS); 468 + init_fds(test_case->clients, NR_CLIENTS); 469 + 470 + if (test_case->fastopen) { 471 + memset(buf, 0, sizeof(buf)); 472 + 473 + err = setup_fastopen(buf, sizeof(buf), &saved_len, false); 474 + if (!ASSERT_OK(err, "setup_fastopen - setup")) 475 + return; 476 + } 477 + 478 + err = start_servers(test_case, skel); 479 + if (!ASSERT_OK(err, "start_servers")) 480 + goto close_servers; 481 + 482 + if (test_case->drop_ack) { 483 + /* Drop the final ACK of the 3-way handshake and stick the 484 + * in-flight requests on TCP_SYN_RECV or TCP_NEW_SYN_RECV. 485 + */ 486 + err = drop_ack(test_case, skel); 487 + if (!ASSERT_OK(err, "drop_ack")) 488 + goto close_servers; 489 + } 490 + 491 + /* Tie requests to the first four listeners */ 492 + err = start_clients(test_case); 493 + if (!ASSERT_OK(err, "start_clients")) 494 + goto close_clients; 495 + 496 + err = listen(test_case->servers[MIGRATED_TO], QLEN); 497 + if (!ASSERT_OK(err, "listen")) 498 + goto close_clients; 499 + 500 + err = update_maps(test_case, skel); 501 + if (!ASSERT_OK(err, "update_maps")) 502 + goto close_clients; 503 + 504 + /* Migrate the requests in the accept queue only. 505 + * TCP_NEW_SYN_RECV requests are not migrated at this point. 506 + */ 507 + err = migrate_dance(test_case); 508 + if (!ASSERT_OK(err, "migrate_dance")) 509 + goto close_clients; 510 + 511 + if (test_case->expire_synack_timer) { 512 + /* Wait for SYN+ACK timers to expire so that 513 + * reqsk_timer_handler() migrates TCP_NEW_SYN_RECV requests. 
514 + */ 515 + sleep(1); 516 + } 517 + 518 + if (test_case->link) { 519 + /* Resume 3WHS and migrate TCP_NEW_SYN_RECV requests */ 520 + err = pass_ack(test_case); 521 + if (!ASSERT_OK(err, "pass_ack")) 522 + goto close_clients; 523 + } 524 + 525 + count_requests(test_case, skel); 526 + 527 + close_clients: 528 + close_fds(test_case->clients, NR_CLIENTS); 529 + 530 + if (test_case->link) { 531 + err = pass_ack(test_case); 532 + ASSERT_OK(err, "pass_ack - clean up"); 533 + } 534 + 535 + close_servers: 536 + close_fds(test_case->servers, NR_SERVERS); 537 + 538 + if (test_case->fastopen) { 539 + err = setup_fastopen(buf, sizeof(buf), &saved_len, true); 540 + ASSERT_OK(err, "setup_fastopen - restore"); 541 + } 542 + } 543 + 544 + void test_migrate_reuseport(void) 545 + { 546 + struct test_migrate_reuseport *skel; 547 + int i; 548 + 549 + skel = test_migrate_reuseport__open_and_load(); 550 + if (!ASSERT_OK_PTR(skel, "open_and_load")) 551 + return; 552 + 553 + for (i = 0; i < ARRAY_SIZE(test_cases); i++) { 554 + test__start_subtest(test_cases[i].name); 555 + run_test(&test_cases[i], skel); 556 + } 557 + 558 + test_migrate_reuseport__destroy(skel); 559 + }
+4 -4
tools/testing/selftests/bpf/prog_tests/obj_name.c
··· 38 38 39 39 fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); 40 40 CHECK((tests[i].success && fd < 0) || 41 - (!tests[i].success && fd != -1) || 41 + (!tests[i].success && fd >= 0) || 42 42 (!tests[i].success && errno != tests[i].expected_errno), 43 43 "check-bpf-prog-name", 44 44 "fd %d(%d) errno %d(%d)\n", 45 45 fd, tests[i].success, errno, tests[i].expected_errno); 46 46 47 - if (fd != -1) 47 + if (fd >= 0) 48 48 close(fd); 49 49 50 50 /* test different attr.map_name during BPF_MAP_CREATE */ ··· 59 59 memcpy(attr.map_name, tests[i].name, ncopy); 60 60 fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr)); 61 61 CHECK((tests[i].success && fd < 0) || 62 - (!tests[i].success && fd != -1) || 62 + (!tests[i].success && fd >= 0) || 63 63 (!tests[i].success && errno != tests[i].expected_errno), 64 64 "check-bpf-map-name", 65 65 "fd %d(%d) errno %d(%d)\n", 66 66 fd, tests[i].success, errno, tests[i].expected_errno); 67 67 68 - if (fd != -1) 68 + if (fd >= 0) 69 69 close(fd); 70 70 } 71 71 }
+2 -2
tools/testing/selftests/bpf/prog_tests/perf_branches.c
··· 74 74 75 75 /* attach perf_event */ 76 76 link = bpf_program__attach_perf_event(skel->progs.perf_branches, perf_fd); 77 - if (CHECK(IS_ERR(link), "attach_perf_event", "err %ld\n", PTR_ERR(link))) 77 + if (!ASSERT_OK_PTR(link, "attach_perf_event")) 78 78 goto out_destroy_skel; 79 79 80 80 /* generate some branches on cpu 0 */ ··· 119 119 * Some setups don't support branch records (virtual machines, !x86), 120 120 * so skip test in this case. 121 121 */ 122 - if (pfd == -1) { 122 + if (pfd < 0) { 123 123 if (errno == ENOENT || errno == EOPNOTSUPP) { 124 124 printf("%s:SKIP:no PERF_SAMPLE_BRANCH_STACK\n", 125 125 __func__);
+1 -1
tools/testing/selftests/bpf/prog_tests/perf_buffer.c
··· 80 80 pb_opts.sample_cb = on_sample; 81 81 pb_opts.ctx = &cpu_seen; 82 82 pb = perf_buffer__new(bpf_map__fd(skel->maps.perf_buf_map), 1, &pb_opts); 83 - if (CHECK(IS_ERR(pb), "perf_buf__new", "err %ld\n", PTR_ERR(pb))) 83 + if (!ASSERT_OK_PTR(pb, "perf_buf__new")) 84 84 goto out_close; 85 85 86 86 CHECK(perf_buffer__epoll_fd(pb) < 0, "epoll_fd",
+1 -2
tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c
··· 97 97 98 98 skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, 99 99 pmu_fd); 100 - if (CHECK(IS_ERR(skel->links.oncpu), "attach_perf_event", 101 - "err %ld\n", PTR_ERR(skel->links.oncpu))) { 100 + if (!ASSERT_OK_PTR(skel->links.oncpu, "attach_perf_event")) { 102 101 close(pmu_fd); 103 102 goto cleanup; 104 103 }
+2 -5
tools/testing/selftests/bpf/prog_tests/probe_user.c
··· 15 15 static const int zero = 0; 16 16 17 17 obj = bpf_object__open_file(obj_file, &opts); 18 - if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj))) 18 + if (!ASSERT_OK_PTR(obj, "obj_open_file")) 19 19 return; 20 20 21 21 kprobe_prog = bpf_object__find_program_by_title(obj, prog_name); ··· 33 33 goto cleanup; 34 34 35 35 kprobe_link = bpf_program__attach(kprobe_prog); 36 - if (CHECK(IS_ERR(kprobe_link), "attach_kprobe", 37 - "err %ld\n", PTR_ERR(kprobe_link))) { 38 - kprobe_link = NULL; 36 + if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe")) 39 37 goto cleanup; 40 - } 41 38 42 39 memset(&curr, 0, sizeof(curr)); 43 40 in->sin_family = AF_INET;
+2 -2
tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
··· 46 46 tattr.prog_fd = bpf_program__fd(skel->progs.test_pkt_access); 47 47 48 48 err = bpf_prog_test_run_xattr(&tattr); 49 - CHECK_ATTR(err != -1 || errno != ENOSPC || tattr.retval, "run", 49 + CHECK_ATTR(err >= 0 || errno != ENOSPC || tattr.retval, "run", 50 50 "err %d errno %d retval %d\n", err, errno, tattr.retval); 51 51 52 52 CHECK_ATTR(tattr.data_size_out != sizeof(pkt_v4), "data_size_out", ··· 78 78 cleanup: 79 79 if (skel) 80 80 test_pkt_access__destroy(skel); 81 - if (stats_fd != -1) 81 + if (stats_fd >= 0) 82 82 close(stats_fd); 83 83 }
+2 -2
tools/testing/selftests/bpf/prog_tests/raw_tp_test_run.c
··· 77 77 /* invalid cpu ID should fail with ENXIO */ 78 78 opts.cpu = 0xffffffff; 79 79 err = bpf_prog_test_run_opts(prog_fd, &opts); 80 - CHECK(err != -1 || errno != ENXIO, 80 + CHECK(err >= 0 || errno != ENXIO, 81 81 "test_run_opts_fail", 82 82 "should failed with ENXIO\n"); 83 83 ··· 85 85 opts.cpu = 1; 86 86 opts.flags = 0; 87 87 err = bpf_prog_test_run_opts(prog_fd, &opts); 88 - CHECK(err != -1 || errno != EINVAL, 88 + CHECK(err >= 0 || errno != EINVAL, 89 89 "test_run_opts_fail", 90 90 "should failed with EINVAL\n"); 91 91
+2 -5
tools/testing/selftests/bpf/prog_tests/rdonly_maps.c
··· 30 30 struct bss bss; 31 31 32 32 obj = bpf_object__open_file(file, NULL); 33 - if (CHECK(IS_ERR(obj), "obj_open", "err %ld\n", PTR_ERR(obj))) 33 + if (!ASSERT_OK_PTR(obj, "obj_open")) 34 34 return; 35 35 36 36 err = bpf_object__load(obj); ··· 58 58 goto cleanup; 59 59 60 60 link = bpf_program__attach_raw_tracepoint(prog, "sys_enter"); 61 - if (CHECK(IS_ERR(link), "attach_prog", "prog '%s', err %ld\n", 62 - t->prog_name, PTR_ERR(link))) { 63 - link = NULL; 61 + if (!ASSERT_OK_PTR(link, "attach_prog")) 64 62 goto cleanup; 65 - } 66 63 67 64 /* trigger probe */ 68 65 usleep(1);
+1 -1
tools/testing/selftests/bpf/prog_tests/reference_tracking.c
··· 15 15 int err = 0; 16 16 17 17 obj = bpf_object__open_file(file, &open_opts); 18 - if (CHECK_FAIL(IS_ERR(obj))) 18 + if (!ASSERT_OK_PTR(obj, "obj_open_file")) 19 19 return; 20 20 21 21 if (CHECK(strcmp(bpf_object__name(obj), obj_name), "obj_name",
+1 -1
tools/testing/selftests/bpf/prog_tests/resolve_btfids.c
··· 76 76 } 77 77 78 78 for (i = 0; i < ARRAY_SIZE(test_symbols); i++) { 79 - if (test_symbols[i].id != -1) 79 + if (test_symbols[i].id >= 0) 80 80 continue; 81 81 82 82 if (BTF_INFO_KIND(type->info) != test_symbols[i].type)
+1 -1
tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
··· 63 63 goto cleanup; 64 64 65 65 proto_fd = bpf_create_map(BPF_MAP_TYPE_RINGBUF, 0, 0, page_size, 0); 66 - if (CHECK(proto_fd == -1, "bpf_create_map", "bpf_create_map failed\n")) 66 + if (CHECK(proto_fd < 0, "bpf_create_map", "bpf_create_map failed\n")) 67 67 goto cleanup; 68 68 69 69 err = bpf_map__set_inner_map_fd(skel->maps.ringbuf_hash, proto_fd);
+27 -26
tools/testing/selftests/bpf/prog_tests/select_reuseport.c
··· 78 78 attr.max_entries = REUSEPORT_ARRAY_SIZE; 79 79 80 80 reuseport_array = bpf_create_map_xattr(&attr); 81 - RET_ERR(reuseport_array == -1, "creating reuseport_array", 81 + RET_ERR(reuseport_array < 0, "creating reuseport_array", 82 82 "reuseport_array:%d errno:%d\n", reuseport_array, errno); 83 83 84 84 /* Creating outer_map */ ··· 89 89 attr.max_entries = 1; 90 90 attr.inner_map_fd = reuseport_array; 91 91 outer_map = bpf_create_map_xattr(&attr); 92 - RET_ERR(outer_map == -1, "creating outer_map", 92 + RET_ERR(outer_map < 0, "creating outer_map", 93 93 "outer_map:%d errno:%d\n", outer_map, errno); 94 94 95 95 return 0; ··· 102 102 int err; 103 103 104 104 obj = bpf_object__open("test_select_reuseport_kern.o"); 105 - RET_ERR(IS_ERR_OR_NULL(obj), "open test_select_reuseport_kern.o", 106 - "obj:%p PTR_ERR(obj):%ld\n", obj, PTR_ERR(obj)); 105 + err = libbpf_get_error(obj); 106 + RET_ERR(err, "open test_select_reuseport_kern.o", 107 + "obj:%p PTR_ERR(obj):%d\n", obj, err); 107 108 108 109 map = bpf_object__find_map_by_name(obj, "outer_map"); 109 110 RET_ERR(!map, "find outer_map", "!map\n"); ··· 117 116 prog = bpf_program__next(NULL, obj); 118 117 RET_ERR(!prog, "get first bpf_program", "!prog\n"); 119 118 select_by_skb_data_prog = bpf_program__fd(prog); 120 - RET_ERR(select_by_skb_data_prog == -1, "get prog fd", 119 + RET_ERR(select_by_skb_data_prog < 0, "get prog fd", 121 120 "select_by_skb_data_prog:%d\n", select_by_skb_data_prog); 122 121 123 122 map = bpf_object__find_map_by_name(obj, "result_map"); 124 123 RET_ERR(!map, "find result_map", "!map\n"); 125 124 result_map = bpf_map__fd(map); 126 - RET_ERR(result_map == -1, "get result_map fd", 125 + RET_ERR(result_map < 0, "get result_map fd", 127 126 "result_map:%d\n", result_map); 128 127 129 128 map = bpf_object__find_map_by_name(obj, "tmp_index_ovr_map"); 130 129 RET_ERR(!map, "find tmp_index_ovr_map\n", "!map"); 131 130 tmp_index_ovr_map = bpf_map__fd(map); 132 - RET_ERR(tmp_index_ovr_map == -1, "get 
tmp_index_ovr_map fd", 131 + RET_ERR(tmp_index_ovr_map < 0, "get tmp_index_ovr_map fd", 133 132 "tmp_index_ovr_map:%d\n", tmp_index_ovr_map); 134 133 135 134 map = bpf_object__find_map_by_name(obj, "linum_map"); 136 135 RET_ERR(!map, "find linum_map", "!map\n"); 137 136 linum_map = bpf_map__fd(map); 138 - RET_ERR(linum_map == -1, "get linum_map fd", 137 + RET_ERR(linum_map < 0, "get linum_map fd", 139 138 "linum_map:%d\n", linum_map); 140 139 141 140 map = bpf_object__find_map_by_name(obj, "data_check_map"); 142 141 RET_ERR(!map, "find data_check_map", "!map\n"); 143 142 data_check_map = bpf_map__fd(map); 144 - RET_ERR(data_check_map == -1, "get data_check_map fd", 143 + RET_ERR(data_check_map < 0, "get data_check_map fd", 145 144 "data_check_map:%d\n", data_check_map); 146 145 147 146 return 0; ··· 238 237 int err; 239 238 240 239 err = bpf_map_lookup_elem(linum_map, &index_zero, &linum); 241 - RET_ERR(err == -1, "lookup_elem(linum_map)", "err:%d errno:%d\n", 240 + RET_ERR(err < 0, "lookup_elem(linum_map)", "err:%d errno:%d\n", 242 241 err, errno); 243 242 244 243 return linum; ··· 255 254 addrlen = sizeof(cli_sa); 256 255 err = getsockname(cli_fd, (struct sockaddr *)&cli_sa, 257 256 &addrlen); 258 - RET_IF(err == -1, "getsockname(cli_fd)", "err:%d errno:%d\n", 257 + RET_IF(err < 0, "getsockname(cli_fd)", "err:%d errno:%d\n", 259 258 err, errno); 260 259 261 260 err = bpf_map_lookup_elem(data_check_map, &index_zero, &result); 262 - RET_IF(err == -1, "lookup_elem(data_check_map)", "err:%d errno:%d\n", 261 + RET_IF(err < 0, "lookup_elem(data_check_map)", "err:%d errno:%d\n", 263 262 err, errno); 264 263 265 264 if (type == SOCK_STREAM) { ··· 348 347 349 348 for (i = 0; i < NR_RESULTS; i++) { 350 349 err = bpf_map_lookup_elem(result_map, &i, &results[i]); 351 - RET_IF(err == -1, "lookup_elem(result_map)", 350 + RET_IF(err < 0, "lookup_elem(result_map)", 352 351 "i:%u err:%d errno:%d\n", i, err, errno); 353 352 } 354 353 ··· 525 524 */ 526 525 err = 
bpf_map_update_elem(tmp_index_ovr_map, &index_zero, 527 526 &tmp_index, BPF_ANY); 528 - RET_IF(err == -1, "update_elem(tmp_index_ovr_map, 0, 1)", 527 + RET_IF(err < 0, "update_elem(tmp_index_ovr_map, 0, 1)", 529 528 "err:%d errno:%d\n", err, errno); 530 529 do_test(type, family, &cmd, PASS); 531 530 err = bpf_map_lookup_elem(tmp_index_ovr_map, &index_zero, 532 531 &tmp_index); 533 - RET_IF(err == -1 || tmp_index != -1, 532 + RET_IF(err < 0 || tmp_index >= 0, 534 533 "lookup_elem(tmp_index_ovr_map)", 535 534 "err:%d errno:%d tmp_index:%d\n", 536 535 err, errno, tmp_index); ··· 570 569 571 570 for (i = 0; i < NR_RESULTS; i++) { 572 571 err = bpf_map_lookup_elem(result_map, &i, &tmp); 573 - RET_IF(err == -1, "lookup_elem(result_map)", 572 + RET_IF(err < 0, "lookup_elem(result_map)", 574 573 "i:%u err:%d errno:%d\n", i, err, errno); 575 574 nr_run_before += tmp; 576 575 } ··· 585 584 586 585 for (i = 0; i < NR_RESULTS; i++) { 587 586 err = bpf_map_lookup_elem(result_map, &i, &tmp); 588 - RET_IF(err == -1, "lookup_elem(result_map)", 587 + RET_IF(err < 0, "lookup_elem(result_map)", 589 588 "i:%u err:%d errno:%d\n", i, err, errno); 590 589 nr_run_after += tmp; 591 590 } ··· 633 632 SO_ATTACH_REUSEPORT_EBPF, 634 633 &select_by_skb_data_prog, 635 634 sizeof(select_by_skb_data_prog)); 636 - RET_IF(err == -1, "setsockopt(SO_ATTACH_REUEPORT_EBPF)", 635 + RET_IF(err < 0, "setsockopt(SO_ATTACH_REUEPORT_EBPF)", 637 636 "err:%d errno:%d\n", err, errno); 638 637 } 639 638 640 639 err = bind(sk_fds[i], (struct sockaddr *)&srv_sa, addrlen); 641 - RET_IF(err == -1, "bind()", "sk_fds[%d] err:%d errno:%d\n", 640 + RET_IF(err < 0, "bind()", "sk_fds[%d] err:%d errno:%d\n", 642 641 i, err, errno); 643 642 644 643 if (type == SOCK_STREAM) { 645 644 err = listen(sk_fds[i], 10); 646 - RET_IF(err == -1, "listen()", 645 + RET_IF(err < 0, "listen()", 647 646 "sk_fds[%d] err:%d errno:%d\n", 648 647 i, err, errno); 649 648 } 650 649 651 650 err = bpf_map_update_elem(reuseport_array, &i, 
&sk_fds[i], 652 651 BPF_NOEXIST); 653 - RET_IF(err == -1, "update_elem(reuseport_array)", 652 + RET_IF(err < 0, "update_elem(reuseport_array)", 654 653 "sk_fds[%d] err:%d errno:%d\n", i, err, errno); 655 654 656 655 if (i == first) { ··· 683 682 prepare_sk_fds(type, family, inany); 684 683 err = bpf_map_update_elem(tmp_index_ovr_map, &index_zero, &ovr, 685 684 BPF_ANY); 686 - RET_IF(err == -1, "update_elem(tmp_index_ovr_map, 0, -1)", 685 + RET_IF(err < 0, "update_elem(tmp_index_ovr_map, 0, -1)", 687 686 "err:%d errno:%d\n", err, errno); 688 687 689 688 /* Install reuseport_array to outer_map? */ ··· 692 691 693 692 err = bpf_map_update_elem(outer_map, &index_zero, &reuseport_array, 694 693 BPF_ANY); 695 - RET_IF(err == -1, "update_elem(outer_map, 0, reuseport_array)", 694 + RET_IF(err < 0, "update_elem(outer_map, 0, reuseport_array)", 696 695 "err:%d errno:%d\n", err, errno); 697 696 } 698 697 ··· 721 720 return; 722 721 723 722 err = bpf_map_delete_elem(outer_map, &index_zero); 724 - RET_IF(err == -1, "delete_elem(outer_map)", 723 + RET_IF(err < 0, "delete_elem(outer_map)", 725 724 "err:%d errno:%d\n", err, errno); 726 725 } 727 726 728 727 static void cleanup(void) 729 728 { 730 - if (outer_map != -1) { 729 + if (outer_map >= 0) { 731 730 close(outer_map); 732 731 outer_map = -1; 733 732 } 734 733 735 - if (reuseport_array != -1) { 734 + if (reuseport_array >= 0) { 736 735 close(reuseport_array); 737 736 reuseport_array = -1; 738 737 }
+1 -2
tools/testing/selftests/bpf/prog_tests/send_signal.c
··· 91 91 92 92 skel->links.send_signal_perf = 93 93 bpf_program__attach_perf_event(skel->progs.send_signal_perf, pmu_fd); 94 - if (CHECK(IS_ERR(skel->links.send_signal_perf), "attach_perf_event", 95 - "err %ld\n", PTR_ERR(skel->links.send_signal_perf))) 94 + if (!ASSERT_OK_PTR(skel->links.send_signal_perf, "attach_perf_event")) 96 95 goto disable_pmu; 97 96 } 98 97
+1 -1
tools/testing/selftests/bpf/prog_tests/sk_lookup.c
··· 480 480 } 481 481 482 482 link = bpf_program__attach_netns(prog, net_fd); 483 - if (CHECK(IS_ERR(link), "bpf_program__attach_netns", "failed\n")) { 483 + if (!ASSERT_OK_PTR(link, "bpf_program__attach_netns")) { 484 484 errno = -PTR_ERR(link); 485 485 log_err("failed to attach program '%s' to netns", 486 486 bpf_program__name(prog));
+6 -8
tools/testing/selftests/bpf/prog_tests/sock_fields.c
··· 97 97 98 98 err = bpf_map_lookup_elem(linum_map_fd, &egress_linum_idx, 99 99 &egress_linum); 100 - CHECK(err == -1, "bpf_map_lookup_elem(linum_map_fd)", 100 + CHECK(err < 0, "bpf_map_lookup_elem(linum_map_fd)", 101 101 "err:%d errno:%d\n", err, errno); 102 102 103 103 err = bpf_map_lookup_elem(linum_map_fd, &ingress_linum_idx, 104 104 &ingress_linum); 105 - CHECK(err == -1, "bpf_map_lookup_elem(linum_map_fd)", 105 + CHECK(err < 0, "bpf_map_lookup_elem(linum_map_fd)", 106 106 "err:%d errno:%d\n", err, errno); 107 107 108 108 memcpy(&srv_sk, &skel->bss->srv_sk, sizeof(srv_sk)); ··· 355 355 356 356 egress_link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields, 357 357 child_cg_fd); 358 - if (CHECK(IS_ERR(egress_link), "attach_cgroup(egress)", "err:%ld\n", 359 - PTR_ERR(egress_link))) 358 + if (!ASSERT_OK_PTR(egress_link, "attach_cgroup(egress)")) 360 359 goto done; 361 360 362 361 ingress_link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields, 363 362 child_cg_fd); 364 - if (CHECK(IS_ERR(ingress_link), "attach_cgroup(ingress)", "err:%ld\n", 365 - PTR_ERR(ingress_link))) 363 + if (!ASSERT_OK_PTR(ingress_link, "attach_cgroup(ingress)")) 366 364 goto done; 367 365 368 366 linum_map_fd = bpf_map__fd(skel->maps.linum_map); ··· 373 375 bpf_link__destroy(egress_link); 374 376 bpf_link__destroy(ingress_link); 375 377 test_sock_fields__destroy(skel); 376 - if (child_cg_fd != -1) 378 + if (child_cg_fd >= 0) 377 379 close(child_cg_fd); 378 - if (parent_cg_fd != -1) 380 + if (parent_cg_fd >= 0) 379 381 close(parent_cg_fd); 380 382 }
+4 -4
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
··· 88 88 int s, map, err; 89 89 90 90 s = connected_socket_v4(); 91 - if (CHECK_FAIL(s == -1)) 91 + if (CHECK_FAIL(s < 0)) 92 92 return; 93 93 94 94 map = bpf_create_map(map_type, sizeof(int), sizeof(int), 1, 0); 95 - if (CHECK_FAIL(map == -1)) { 95 + if (CHECK_FAIL(map < 0)) { 96 96 perror("bpf_create_map"); 97 97 goto out; 98 98 } ··· 245 245 opts.link_info = &linfo; 246 246 opts.link_info_len = sizeof(linfo); 247 247 link = bpf_program__attach_iter(skel->progs.copy, &opts); 248 - if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n")) 248 + if (!ASSERT_OK_PTR(link, "attach_iter")) 249 249 goto out; 250 250 251 251 iter_fd = bpf_iter_create(bpf_link__fd(link)); ··· 304 304 } 305 305 306 306 err = bpf_prog_attach(verdict, map, second, 0); 307 - assert(err == -1 && errno == EBUSY); 307 + ASSERT_EQ(err, -EBUSY, "prog_attach_fail"); 308 308 309 309 err = bpf_prog_detach2(verdict, map, first); 310 310 if (CHECK_FAIL(err)) {
+1 -1
tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
··· 98 98 int map; 99 99 100 100 map = bpf_create_map(map_type, sizeof(int), sizeof(int), 1, 0); 101 - if (CHECK_FAIL(map == -1)) { 101 + if (CHECK_FAIL(map < 0)) { 102 102 perror("bpf_map_create"); 103 103 return; 104 104 }
+5 -5
tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
··· 139 139 #define xbpf_map_delete_elem(fd, key) \ 140 140 ({ \ 141 141 int __ret = bpf_map_delete_elem((fd), (key)); \ 142 - if (__ret == -1) \ 142 + if (__ret < 0) \ 143 143 FAIL_ERRNO("map_delete"); \ 144 144 __ret; \ 145 145 }) ··· 147 147 #define xbpf_map_lookup_elem(fd, key, val) \ 148 148 ({ \ 149 149 int __ret = bpf_map_lookup_elem((fd), (key), (val)); \ 150 - if (__ret == -1) \ 150 + if (__ret < 0) \ 151 151 FAIL_ERRNO("map_lookup"); \ 152 152 __ret; \ 153 153 }) ··· 155 155 #define xbpf_map_update_elem(fd, key, val, flags) \ 156 156 ({ \ 157 157 int __ret = bpf_map_update_elem((fd), (key), (val), (flags)); \ 158 - if (__ret == -1) \ 158 + if (__ret < 0) \ 159 159 FAIL_ERRNO("map_update"); \ 160 160 __ret; \ 161 161 }) ··· 164 164 ({ \ 165 165 int __ret = \ 166 166 bpf_prog_attach((prog), (target), (type), (flags)); \ 167 - if (__ret == -1) \ 167 + if (__ret < 0) \ 168 168 FAIL_ERRNO("prog_attach(" #type ")"); \ 169 169 __ret; \ 170 170 }) ··· 172 172 #define xbpf_prog_detach2(prog, target, type) \ 173 173 ({ \ 174 174 int __ret = bpf_prog_detach2((prog), (target), (type)); \ 175 - if (__ret == -1) \ 175 + if (__ret < 0) \ 176 176 FAIL_ERRNO("prog_detach2(" #type ")"); \ 177 177 __ret; \ 178 178 })
+1 -2
tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
··· 62 62 63 63 skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, 64 64 pmu_fd); 65 - if (CHECK(IS_ERR(skel->links.oncpu), "attach_perf_event", 66 - "err %ld\n", PTR_ERR(skel->links.oncpu))) { 65 + if (!ASSERT_OK_PTR(skel->links.oncpu, "attach_perf_event")) { 67 66 close(pmu_fd); 68 67 goto cleanup; 69 68 }
+1 -1
tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
··· 21 21 goto close_prog; 22 22 23 23 link = bpf_program__attach_tracepoint(prog, "sched", "sched_switch"); 24 - if (CHECK(IS_ERR(link), "attach_tp", "err %ld\n", PTR_ERR(link))) 24 + if (!ASSERT_OK_PTR(link, "attach_tp")) 25 25 goto close_prog; 26 26 27 27 /* find map fds */
+2 -3
tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
··· 21 21 goto close_prog; 22 22 23 23 link = bpf_program__attach_raw_tracepoint(prog, "sched_switch"); 24 - if (CHECK(IS_ERR(link), "attach_raw_tp", "err %ld\n", PTR_ERR(link))) 24 + if (!ASSERT_OK_PTR(link, "attach_raw_tp")) 25 25 goto close_prog; 26 26 27 27 /* find map fds */ ··· 59 59 goto close_prog; 60 60 61 61 close_prog: 62 - if (!IS_ERR_OR_NULL(link)) 63 - bpf_link__destroy(link); 62 + bpf_link__destroy(link); 64 63 bpf_object__close(obj); 65 64 }
+5 -10
tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c
··· 353 353 return; 354 354 355 355 link = bpf_program__attach_cgroup(skel->progs.estab, cg_fd); 356 - if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n", 357 - PTR_ERR(link))) 356 + if (!ASSERT_OK_PTR(link, "attach_cgroup(estab)")) 358 357 return; 359 358 360 359 if (sk_fds_connect(&sk_fds, true)) { ··· 397 398 return; 398 399 399 400 link = bpf_program__attach_cgroup(skel->progs.estab, cg_fd); 400 - if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n", 401 - PTR_ERR(link))) 401 + if (!ASSERT_OK_PTR(link, "attach_cgroup(estab)")) 402 402 return; 403 403 404 404 if (sk_fds_connect(&sk_fds, false)) { ··· 429 431 return; 430 432 431 433 link = bpf_program__attach_cgroup(skel->progs.estab, cg_fd); 432 - if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n", 433 - PTR_ERR(link))) 434 + if (!ASSERT_OK_PTR(link, "attach_cgroup(estab)")) 434 435 return; 435 436 436 437 if (sk_fds_connect(&sk_fds, false)) { ··· 468 471 return; 469 472 470 473 link = bpf_program__attach_cgroup(skel->progs.estab, cg_fd); 471 - if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n", 472 - PTR_ERR(link))) 474 + if (!ASSERT_OK_PTR(link, "attach_cgroup(estab)")) 473 475 return; 474 476 475 477 if (sk_fds_connect(&sk_fds, false)) { ··· 505 509 return; 506 510 507 511 link = bpf_program__attach_cgroup(misc_skel->progs.misc_estab, cg_fd); 508 - if (CHECK(IS_ERR(link), "attach_cgroup(misc_estab)", "err: %ld\n", 509 - PTR_ERR(link))) 512 + if (!ASSERT_OK_PTR(link, "attach_cgroup(misc_estab)")) 510 513 return; 511 514 512 515 if (sk_fds_connect(&sk_fds, false)) {
+6 -6
tools/testing/selftests/bpf/prog_tests/test_overhead.c
··· 73 73 return; 74 74 75 75 obj = bpf_object__open_file("./test_overhead.o", NULL); 76 - if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj))) 76 + if (!ASSERT_OK_PTR(obj, "obj_open_file")) 77 77 return; 78 78 79 79 kprobe_prog = bpf_object__find_program_by_title(obj, kprobe_name); ··· 108 108 /* attach kprobe */ 109 109 link = bpf_program__attach_kprobe(kprobe_prog, false /* retprobe */, 110 110 kprobe_func); 111 - if (CHECK(IS_ERR(link), "attach_kprobe", "err %ld\n", PTR_ERR(link))) 111 + if (!ASSERT_OK_PTR(link, "attach_kprobe")) 112 112 goto cleanup; 113 113 test_run("kprobe"); 114 114 bpf_link__destroy(link); ··· 116 116 /* attach kretprobe */ 117 117 link = bpf_program__attach_kprobe(kretprobe_prog, true /* retprobe */, 118 118 kprobe_func); 119 - if (CHECK(IS_ERR(link), "attach kretprobe", "err %ld\n", PTR_ERR(link))) 119 + if (!ASSERT_OK_PTR(link, "attach_kretprobe")) 120 120 goto cleanup; 121 121 test_run("kretprobe"); 122 122 bpf_link__destroy(link); 123 123 124 124 /* attach raw_tp */ 125 125 link = bpf_program__attach_raw_tracepoint(raw_tp_prog, "task_rename"); 126 - if (CHECK(IS_ERR(link), "attach fentry", "err %ld\n", PTR_ERR(link))) 126 + if (!ASSERT_OK_PTR(link, "attach_raw_tp")) 127 127 goto cleanup; 128 128 test_run("raw_tp"); 129 129 bpf_link__destroy(link); 130 130 131 131 /* attach fentry */ 132 132 link = bpf_program__attach_trace(fentry_prog); 133 - if (CHECK(IS_ERR(link), "attach fentry", "err %ld\n", PTR_ERR(link))) 133 + if (!ASSERT_OK_PTR(link, "attach_fentry")) 134 134 goto cleanup; 135 135 test_run("fentry"); 136 136 bpf_link__destroy(link); 137 137 138 138 /* attach fexit */ 139 139 link = bpf_program__attach_trace(fexit_prog); 140 - if (CHECK(IS_ERR(link), "attach fexit", "err %ld\n", PTR_ERR(link))) 140 + if (!ASSERT_OK_PTR(link, "attach_fexit")) 141 141 goto cleanup; 142 142 test_run("fexit"); 143 143 bpf_link__destroy(link);
+8 -6
tools/testing/selftests/bpf/prog_tests/trampoline_count.c
··· 55 55 /* attach 'allowed' trampoline programs */ 56 56 for (i = 0; i < MAX_TRAMP_PROGS; i++) { 57 57 obj = bpf_object__open_file(object, NULL); 58 - if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj))) { 58 + if (!ASSERT_OK_PTR(obj, "obj_open_file")) { 59 59 obj = NULL; 60 60 goto cleanup; 61 61 } ··· 68 68 69 69 if (rand() % 2) { 70 70 link = load(inst[i].obj, fentry_name); 71 - if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link))) { 71 + if (!ASSERT_OK_PTR(link, "attach_prog")) { 72 72 link = NULL; 73 73 goto cleanup; 74 74 } 75 75 inst[i].link_fentry = link; 76 76 } else { 77 77 link = load(inst[i].obj, fexit_name); 78 - if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link))) { 78 + if (!ASSERT_OK_PTR(link, "attach_prog")) { 79 79 link = NULL; 80 80 goto cleanup; 81 81 } ··· 85 85 86 86 /* and try 1 extra.. */ 87 87 obj = bpf_object__open_file(object, NULL); 88 - if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj))) { 88 + if (!ASSERT_OK_PTR(obj, "obj_open_file")) { 89 89 obj = NULL; 90 90 goto cleanup; 91 91 } ··· 96 96 97 97 /* ..that needs to fail */ 98 98 link = load(obj, fentry_name); 99 - if (CHECK(!IS_ERR(link), "cannot attach over the limit", "err %ld\n", PTR_ERR(link))) { 99 + err = libbpf_get_error(link); 100 + if (!ASSERT_ERR_PTR(link, "cannot attach over the limit")) { 100 101 bpf_link__destroy(link); 101 102 goto cleanup_extra; 102 103 } 103 104 104 105 /* with E2BIG error */ 105 - CHECK(PTR_ERR(link) != -E2BIG, "proper error check", "err %ld\n", PTR_ERR(link)); 106 + ASSERT_EQ(err, -E2BIG, "proper error check"); 107 + ASSERT_EQ(link, NULL, "ptr_is_null"); 106 108 107 109 /* and finaly execute the probe */ 108 110 if (CHECK_FAIL(prctl(PR_GET_NAME, comm, 0L, 0L, 0L)))
+3 -4
tools/testing/selftests/bpf/prog_tests/udp_limit.c
··· 22 22 goto close_cgroup_fd; 23 23 24 24 skel->links.sock = bpf_program__attach_cgroup(skel->progs.sock, cgroup_fd); 25 + if (!ASSERT_OK_PTR(skel->links.sock, "cg_attach_sock")) 26 + goto close_skeleton; 25 27 skel->links.sock_release = bpf_program__attach_cgroup(skel->progs.sock_release, cgroup_fd); 26 - if (CHECK(IS_ERR(skel->links.sock) || IS_ERR(skel->links.sock_release), 27 - "cg-attach", "sock %ld sock_release %ld", 28 - PTR_ERR(skel->links.sock), 29 - PTR_ERR(skel->links.sock_release))) 28 + if (!ASSERT_OK_PTR(skel->links.sock_release, "cg_attach_sock_release")) 30 29 goto close_skeleton; 31 30 32 31 /* BPF program enforces a single UDP socket per cgroup,
+1 -1
tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
··· 90 90 pb_opts.ctx = &passed; 91 91 pb = perf_buffer__new(bpf_map__fd(ftrace_skel->maps.perf_buf_map), 92 92 1, &pb_opts); 93 - if (CHECK(IS_ERR(pb), "perf_buf__new", "err %ld\n", PTR_ERR(pb))) 93 + if (!ASSERT_OK_PTR(pb, "perf_buf__new")) 94 94 goto out; 95 95 96 96 /* Run test program */
+4 -4
tools/testing/selftests/bpf/prog_tests/xdp_link.c
··· 51 51 52 52 /* BPF link is not allowed to replace prog attachment */ 53 53 link = bpf_program__attach_xdp(skel1->progs.xdp_handler, IFINDEX_LO); 54 - if (CHECK(!IS_ERR(link), "link_attach_fail", "unexpected success\n")) { 54 + if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) { 55 55 bpf_link__destroy(link); 56 56 /* best-effort detach prog */ 57 57 opts.old_fd = prog_fd1; ··· 67 67 68 68 /* now BPF link should attach successfully */ 69 69 link = bpf_program__attach_xdp(skel1->progs.xdp_handler, IFINDEX_LO); 70 - if (CHECK(IS_ERR(link), "link_attach", "failed: %ld\n", PTR_ERR(link))) 70 + if (!ASSERT_OK_PTR(link, "link_attach")) 71 71 goto cleanup; 72 72 skel1->links.xdp_handler = link; 73 73 ··· 95 95 96 96 /* BPF link is not allowed to replace another BPF link */ 97 97 link = bpf_program__attach_xdp(skel2->progs.xdp_handler, IFINDEX_LO); 98 - if (CHECK(!IS_ERR(link), "link_attach_fail", "unexpected success\n")) { 98 + if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) { 99 99 bpf_link__destroy(link); 100 100 goto cleanup; 101 101 } ··· 105 105 106 106 /* new link attach should succeed */ 107 107 link = bpf_program__attach_xdp(skel2->progs.xdp_handler, IFINDEX_LO); 108 - if (CHECK(IS_ERR(link), "link_attach", "failed: %ld\n", PTR_ERR(link))) 108 + if (!ASSERT_OK_PTR(link, "link_attach")) 109 109 goto cleanup; 110 110 skel2->links.xdp_handler = link; 111 111
-1
tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 7 6 char _license[] SEC("license") = "GPL"; 8 7
-1
tools/testing/selftests/bpf/progs/bpf_iter_bpf_map.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 7 6 char _license[] SEC("license") = "GPL"; 8 7
-1
tools/testing/selftests/bpf/progs/bpf_iter_ipv6_route.c
··· 3 3 #include "bpf_iter.h" 4 4 #include "bpf_tracing_net.h" 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 8 7 char _license[] SEC("license") = "GPL"; 9 8
-1
tools/testing/selftests/bpf/progs/bpf_iter_netlink.c
··· 3 3 #include "bpf_iter.h" 4 4 #include "bpf_tracing_net.h" 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 8 7 char _license[] SEC("license") = "GPL"; 9 8
-1
tools/testing/selftests/bpf/progs/bpf_iter_task.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 7 6 char _license[] SEC("license") = "GPL"; 8 7
-1
tools/testing/selftests/bpf/progs/bpf_iter_task_btf.c
··· 2 2 /* Copyright (c) 2020, Oracle and/or its affiliates. */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 #include <bpf/bpf_core_read.h> 7 6 8 7 #include <errno.h>
-1
tools/testing/selftests/bpf/progs/bpf_iter_task_file.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 7 6 char _license[] SEC("license") = "GPL"; 8 7
-1
tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 7 6 char _license[] SEC("license") = "GPL"; 8 7
-1
tools/testing/selftests/bpf/progs/bpf_iter_task_vma.c
··· 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #include "bpf_iter.h" 4 4 #include <bpf/bpf_helpers.h> 5 - #include <bpf/bpf_tracing.h> 6 5 7 6 char _license[] SEC("license") = "GPL"; 8 7
-1
tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
··· 3 3 #include "bpf_iter.h" 4 4 #include "bpf_tracing_net.h" 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 #include <bpf/bpf_endian.h> 8 7 9 8 char _license[] SEC("license") = "GPL";
-1
tools/testing/selftests/bpf/progs/bpf_iter_tcp6.c
··· 3 3 #include "bpf_iter.h" 4 4 #include "bpf_tracing_net.h" 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 #include <bpf/bpf_endian.h> 8 7 9 8 char _license[] SEC("license") = "GPL";
-1
tools/testing/selftests/bpf/progs/bpf_iter_udp4.c
··· 3 3 #include "bpf_iter.h" 4 4 #include "bpf_tracing_net.h" 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 #include <bpf/bpf_endian.h> 8 7 9 8 char _license[] SEC("license") = "GPL";
-1
tools/testing/selftests/bpf/progs/bpf_iter_udp6.c
··· 3 3 #include "bpf_iter.h" 4 4 #include "bpf_tracing_net.h" 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 #include <bpf/bpf_endian.h> 8 7 9 8 char _license[] SEC("license") = "GPL";
+26
tools/testing/selftests/bpf/progs/test_lookup_and_delete.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "vmlinux.h" 4 + #include <bpf/bpf_helpers.h> 5 + 6 + __u32 set_pid = 0; 7 + __u64 set_key = 0; 8 + __u64 set_value = 0; 9 + 10 + struct { 11 + __uint(type, BPF_MAP_TYPE_HASH); 12 + __uint(max_entries, 2); 13 + __type(key, __u64); 14 + __type(value, __u64); 15 + } hash_map SEC(".maps"); 16 + 17 + SEC("tp/syscalls/sys_enter_getpgid") 18 + int bpf_lookup_and_delete_test(const void *ctx) 19 + { 20 + if (set_pid == bpf_get_current_pid_tgid() >> 32) 21 + bpf_map_update_elem(&hash_map, &set_key, &set_value, BPF_NOEXIST); 22 + 23 + return 0; 24 + } 25 + 26 + char _license[] SEC("license") = "GPL";
+135
tools/testing/selftests/bpf/progs/test_migrate_reuseport.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Check if we can migrate child sockets. 4 + * 5 + * 1. If reuse_md->migrating_sk is NULL (SYN packet), 6 + * return SK_PASS without selecting a listener. 7 + * 2. If reuse_md->migrating_sk is not NULL (socket migration), 8 + * select a listener (reuseport_map[migrate_map[cookie]]) 9 + * 10 + * Author: Kuniyuki Iwashima <kuniyu@amazon.co.jp> 11 + */ 12 + 13 + #include <stddef.h> 14 + #include <string.h> 15 + #include <linux/bpf.h> 16 + #include <linux/if_ether.h> 17 + #include <linux/ip.h> 18 + #include <linux/ipv6.h> 19 + #include <linux/tcp.h> 20 + #include <linux/in.h> 21 + #include <bpf/bpf_endian.h> 22 + #include <bpf/bpf_helpers.h> 23 + 24 + struct { 25 + __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY); 26 + __uint(max_entries, 256); 27 + __type(key, int); 28 + __type(value, __u64); 29 + } reuseport_map SEC(".maps"); 30 + 31 + struct { 32 + __uint(type, BPF_MAP_TYPE_HASH); 33 + __uint(max_entries, 256); 34 + __type(key, __u64); 35 + __type(value, int); 36 + } migrate_map SEC(".maps"); 37 + 38 + int migrated_at_close = 0; 39 + int migrated_at_close_fastopen = 0; 40 + int migrated_at_send_synack = 0; 41 + int migrated_at_recv_ack = 0; 42 + __be16 server_port; 43 + 44 + SEC("xdp") 45 + int drop_ack(struct xdp_md *xdp) 46 + { 47 + void *data_end = (void *)(long)xdp->data_end; 48 + void *data = (void *)(long)xdp->data; 49 + struct ethhdr *eth = data; 50 + struct tcphdr *tcp = NULL; 51 + 52 + if (eth + 1 > data_end) 53 + goto pass; 54 + 55 + switch (bpf_ntohs(eth->h_proto)) { 56 + case ETH_P_IP: { 57 + struct iphdr *ip = (struct iphdr *)(eth + 1); 58 + 59 + if (ip + 1 > data_end) 60 + goto pass; 61 + 62 + if (ip->protocol != IPPROTO_TCP) 63 + goto pass; 64 + 65 + tcp = (struct tcphdr *)((void *)ip + ip->ihl * 4); 66 + break; 67 + } 68 + case ETH_P_IPV6: { 69 + struct ipv6hdr *ipv6 = (struct ipv6hdr *)(eth + 1); 70 + 71 + if (ipv6 + 1 > data_end) 72 + goto pass; 73 + 74 + if (ipv6->nexthdr != IPPROTO_TCP) 75 + 
goto pass; 76 + 77 + tcp = (struct tcphdr *)(ipv6 + 1); 78 + break; 79 + } 80 + default: 81 + goto pass; 82 + } 83 + 84 + if (tcp + 1 > data_end) 85 + goto pass; 86 + 87 + if (tcp->dest != server_port) 88 + goto pass; 89 + 90 + if (!tcp->syn && tcp->ack) 91 + return XDP_DROP; 92 + 93 + pass: 94 + return XDP_PASS; 95 + } 96 + 97 + SEC("sk_reuseport/migrate") 98 + int migrate_reuseport(struct sk_reuseport_md *reuse_md) 99 + { 100 + int *key, flags = 0, state, err; 101 + __u64 cookie; 102 + 103 + if (!reuse_md->migrating_sk) 104 + return SK_PASS; 105 + 106 + state = reuse_md->migrating_sk->state; 107 + cookie = bpf_get_socket_cookie(reuse_md->sk); 108 + 109 + key = bpf_map_lookup_elem(&migrate_map, &cookie); 110 + if (!key) 111 + return SK_DROP; 112 + 113 + err = bpf_sk_select_reuseport(reuse_md, &reuseport_map, key, flags); 114 + if (err) 115 + return SK_PASS; 116 + 117 + switch (state) { 118 + case BPF_TCP_ESTABLISHED: 119 + __sync_fetch_and_add(&migrated_at_close, 1); 120 + break; 121 + case BPF_TCP_SYN_RECV: 122 + __sync_fetch_and_add(&migrated_at_close_fastopen, 1); 123 + break; 124 + case BPF_TCP_NEW_SYN_RECV: 125 + if (!reuse_md->len) 126 + __sync_fetch_and_add(&migrated_at_send_synack, 1); 127 + else 128 + __sync_fetch_and_add(&migrated_at_recv_ack, 1); 129 + break; 130 + } 131 + 132 + return SK_PASS; 133 + } 134 + 135 + char _license[] SEC("license") = "GPL";
-1
tools/testing/selftests/bpf/progs/test_snprintf.c
··· 3 3 4 4 #include <linux/bpf.h> 5 5 #include <bpf/bpf_helpers.h> 6 - #include <bpf/bpf_tracing.h> 7 6 8 7 __u32 pid = 0; 9 8
+94
tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #define KBUILD_MODNAME "foo" 3 + #include <string.h> 4 + #include <linux/in.h> 5 + #include <linux/if_ether.h> 6 + #include <linux/if_packet.h> 7 + #include <linux/ip.h> 8 + #include <linux/ipv6.h> 9 + 10 + #include <linux/bpf.h> 11 + #include <bpf/bpf_helpers.h> 12 + #include <bpf/bpf_endian.h> 13 + 14 + /* One map use devmap, another one use devmap_hash for testing */ 15 + struct { 16 + __uint(type, BPF_MAP_TYPE_DEVMAP); 17 + __uint(key_size, sizeof(int)); 18 + __uint(value_size, sizeof(int)); 19 + __uint(max_entries, 1024); 20 + } map_all SEC(".maps"); 21 + 22 + struct { 23 + __uint(type, BPF_MAP_TYPE_DEVMAP_HASH); 24 + __uint(key_size, sizeof(int)); 25 + __uint(value_size, sizeof(struct bpf_devmap_val)); 26 + __uint(max_entries, 128); 27 + } map_egress SEC(".maps"); 28 + 29 + /* map to store egress interfaces mac addresses */ 30 + struct { 31 + __uint(type, BPF_MAP_TYPE_HASH); 32 + __type(key, __u32); 33 + __type(value, __be64); 34 + __uint(max_entries, 128); 35 + } mac_map SEC(".maps"); 36 + 37 + SEC("xdp_redirect_map_multi") 38 + int xdp_redirect_map_multi_prog(struct xdp_md *ctx) 39 + { 40 + void *data_end = (void *)(long)ctx->data_end; 41 + void *data = (void *)(long)ctx->data; 42 + int if_index = ctx->ingress_ifindex; 43 + struct ethhdr *eth = data; 44 + __u16 h_proto; 45 + __u64 nh_off; 46 + 47 + nh_off = sizeof(*eth); 48 + if (data + nh_off > data_end) 49 + return XDP_DROP; 50 + 51 + h_proto = eth->h_proto; 52 + 53 + /* Using IPv4 for (BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS) testing */ 54 + if (h_proto == bpf_htons(ETH_P_IP)) 55 + return bpf_redirect_map(&map_all, 0, 56 + BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS); 57 + /* Using IPv6 for none flag testing */ 58 + else if (h_proto == bpf_htons(ETH_P_IPV6)) 59 + return bpf_redirect_map(&map_all, if_index, 0); 60 + /* All others for BPF_F_BROADCAST testing */ 61 + else 62 + return bpf_redirect_map(&map_all, 0, BPF_F_BROADCAST); 63 + } 64 + 65 + /* The following 
2 progs are for 2nd devmap prog testing */ 66 + SEC("xdp_redirect_map_ingress") 67 + int xdp_redirect_map_all_prog(struct xdp_md *ctx) 68 + { 69 + return bpf_redirect_map(&map_egress, 0, 70 + BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS); 71 + } 72 + 73 + SEC("xdp_devmap/map_prog") 74 + int xdp_devmap_prog(struct xdp_md *ctx) 75 + { 76 + void *data_end = (void *)(long)ctx->data_end; 77 + void *data = (void *)(long)ctx->data; 78 + __u32 key = ctx->egress_ifindex; 79 + struct ethhdr *eth = data; 80 + __u64 nh_off; 81 + __be64 *mac; 82 + 83 + nh_off = sizeof(*eth); 84 + if (data + nh_off > data_end) 85 + return XDP_DROP; 86 + 87 + mac = bpf_map_lookup_elem(&mac_map, &key); 88 + if (mac) 89 + __builtin_memcpy(eth->h_source, mac, ETH_ALEN); 90 + 91 + return XDP_PASS; 92 + } 93 + 94 + char _license[] SEC("license") = "GPL";
+1
tools/testing/selftests/bpf/test_doc_build.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 3 + set -e 3 4 4 5 # Assume script is located under tools/testing/selftests/bpf/. We want to start 5 6 # build attempts from the top of kernel repository.
+8
tools/testing/selftests/bpf/test_lru_map.c
··· 231 231 assert(bpf_map_lookup_elem(lru_map_fd, &key, value) == -1 && 232 232 errno == ENOENT); 233 233 234 + /* lookup elem key=1 and delete it, then check it doesn't exist */ 235 + key = 1; 236 + assert(!bpf_map_lookup_and_delete_elem(lru_map_fd, &key, &value)); 237 + assert(value[0] == 1234); 238 + 239 + /* remove the same element from the expected map */ 240 + assert(!bpf_map_delete_elem(expected_map_fd, &key)); 241 + 234 242 assert(map_equal(lru_map_fd, expected_map_fd)); 235 243 236 244 close(expected_map_fd);
+102 -83
tools/testing/selftests/bpf/test_maps.c
··· 53 53 54 54 value = 0; 55 55 /* BPF_NOEXIST means add new element if it doesn't exist. */ 56 - assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) == -1 && 56 + assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) < 0 && 57 57 /* key=1 already exists. */ 58 58 errno == EEXIST); 59 59 60 60 /* -1 is an invalid flag. */ 61 - assert(bpf_map_update_elem(fd, &key, &value, -1) == -1 && 61 + assert(bpf_map_update_elem(fd, &key, &value, -1) < 0 && 62 62 errno == EINVAL); 63 63 64 64 /* Check that key=1 can be found. */ 65 65 assert(bpf_map_lookup_elem(fd, &key, &value) == 0 && value == 1234); 66 66 67 67 key = 2; 68 + value = 1234; 69 + /* Insert key=2 element. */ 70 + assert(bpf_map_update_elem(fd, &key, &value, BPF_ANY) == 0); 71 + 72 + /* Check that key=2 matches the value and delete it */ 73 + assert(bpf_map_lookup_and_delete_elem(fd, &key, &value) == 0 && value == 1234); 74 + 68 75 /* Check that key=2 is not found. */ 69 - assert(bpf_map_lookup_elem(fd, &key, &value) == -1 && errno == ENOENT); 76 + assert(bpf_map_lookup_elem(fd, &key, &value) < 0 && errno == ENOENT); 70 77 71 78 /* BPF_EXIST means update existing element. */ 72 - assert(bpf_map_update_elem(fd, &key, &value, BPF_EXIST) == -1 && 79 + assert(bpf_map_update_elem(fd, &key, &value, BPF_EXIST) < 0 && 73 80 /* key=2 is not there. */ 74 81 errno == ENOENT); 75 82 ··· 87 80 * inserted due to max_entries limit. 88 81 */ 89 82 key = 0; 90 - assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) == -1 && 83 + assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) < 0 && 91 84 errno == E2BIG); 92 85 93 86 /* Update existing element, though the map is full. */ ··· 96 89 key = 2; 97 90 assert(bpf_map_update_elem(fd, &key, &value, BPF_ANY) == 0); 98 91 key = 3; 99 - assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) == -1 && 92 + assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) < 0 && 100 93 errno == E2BIG); 101 94 102 95 /* Check that key = 0 doesn't exist. 
*/ 103 96 key = 0; 104 - assert(bpf_map_delete_elem(fd, &key) == -1 && errno == ENOENT); 97 + assert(bpf_map_delete_elem(fd, &key) < 0 && errno == ENOENT); 105 98 106 99 /* Iterate over two elements. */ 107 100 assert(bpf_map_get_next_key(fd, NULL, &first_key) == 0 && ··· 111 104 assert(bpf_map_get_next_key(fd, &next_key, &next_key) == 0 && 112 105 (next_key == 1 || next_key == 2) && 113 106 (next_key != first_key)); 114 - assert(bpf_map_get_next_key(fd, &next_key, &next_key) == -1 && 107 + assert(bpf_map_get_next_key(fd, &next_key, &next_key) < 0 && 115 108 errno == ENOENT); 116 109 117 110 /* Delete both elements. */ ··· 119 112 assert(bpf_map_delete_elem(fd, &key) == 0); 120 113 key = 2; 121 114 assert(bpf_map_delete_elem(fd, &key) == 0); 122 - assert(bpf_map_delete_elem(fd, &key) == -1 && errno == ENOENT); 115 + assert(bpf_map_delete_elem(fd, &key) < 0 && errno == ENOENT); 123 116 124 117 key = 0; 125 118 /* Check that map is empty. */ 126 - assert(bpf_map_get_next_key(fd, NULL, &next_key) == -1 && 119 + assert(bpf_map_get_next_key(fd, NULL, &next_key) < 0 && 127 120 errno == ENOENT); 128 - assert(bpf_map_get_next_key(fd, &key, &next_key) == -1 && 121 + assert(bpf_map_get_next_key(fd, &key, &next_key) < 0 && 129 122 errno == ENOENT); 130 123 131 124 close(fd); ··· 173 166 /* Insert key=1 element. */ 174 167 assert(!(expected_key_mask & key)); 175 168 assert(bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0); 169 + 170 + /* Lookup and delete elem key=1 and check value. */ 171 + assert(bpf_map_lookup_and_delete_elem(fd, &key, value) == 0 && 172 + bpf_percpu(value,0) == 100); 173 + 174 + for (i = 0; i < nr_cpus; i++) 175 + bpf_percpu(value,i) = i + 100; 176 + 177 + /* Insert key=1 element which should not exist. */ 178 + assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == 0); 176 179 expected_key_mask |= key; 177 180 178 181 /* BPF_NOEXIST means add new element if it doesn't exist. 
*/ 179 - assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == -1 && 182 + assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) < 0 && 180 183 /* key=1 already exists. */ 181 184 errno == EEXIST); 182 185 183 186 /* -1 is an invalid flag. */ 184 - assert(bpf_map_update_elem(fd, &key, value, -1) == -1 && 187 + assert(bpf_map_update_elem(fd, &key, value, -1) < 0 && 185 188 errno == EINVAL); 186 189 187 190 /* Check that key=1 can be found. Value could be 0 if the lookup ··· 203 186 204 187 key = 2; 205 188 /* Check that key=2 is not found. */ 206 - assert(bpf_map_lookup_elem(fd, &key, value) == -1 && errno == ENOENT); 189 + assert(bpf_map_lookup_elem(fd, &key, value) < 0 && errno == ENOENT); 207 190 208 191 /* BPF_EXIST means update existing element. */ 209 - assert(bpf_map_update_elem(fd, &key, value, BPF_EXIST) == -1 && 192 + assert(bpf_map_update_elem(fd, &key, value, BPF_EXIST) < 0 && 210 193 /* key=2 is not there. */ 211 194 errno == ENOENT); 212 195 ··· 219 202 * inserted due to max_entries limit. 220 203 */ 221 204 key = 0; 222 - assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == -1 && 205 + assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) < 0 && 223 206 errno == E2BIG); 224 207 225 208 /* Check that key = 0 doesn't exist. */ 226 - assert(bpf_map_delete_elem(fd, &key) == -1 && errno == ENOENT); 209 + assert(bpf_map_delete_elem(fd, &key) < 0 && errno == ENOENT); 227 210 228 211 /* Iterate over two elements. */ 229 212 assert(bpf_map_get_next_key(fd, NULL, &first_key) == 0 && ··· 254 237 assert(bpf_map_delete_elem(fd, &key) == 0); 255 238 key = 2; 256 239 assert(bpf_map_delete_elem(fd, &key) == 0); 257 - assert(bpf_map_delete_elem(fd, &key) == -1 && errno == ENOENT); 240 + assert(bpf_map_delete_elem(fd, &key) < 0 && errno == ENOENT); 258 241 259 242 key = 0; 260 243 /* Check that map is empty. 
*/ 261 - assert(bpf_map_get_next_key(fd, NULL, &next_key) == -1 && 244 + assert(bpf_map_get_next_key(fd, NULL, &next_key) < 0 && 262 245 errno == ENOENT); 263 - assert(bpf_map_get_next_key(fd, &key, &next_key) == -1 && 246 + assert(bpf_map_get_next_key(fd, &key, &next_key) < 0 && 264 247 errno == ENOENT); 265 248 266 249 close(fd); ··· 377 360 assert(bpf_map_update_elem(fd, &key, &value, BPF_ANY) == 0); 378 361 379 362 value = 0; 380 - assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) == -1 && 363 + assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) < 0 && 381 364 errno == EEXIST); 382 365 383 366 /* Check that key=1 can be found. */ ··· 391 374 * due to max_entries limit. 392 375 */ 393 376 key = 2; 394 - assert(bpf_map_update_elem(fd, &key, &value, BPF_EXIST) == -1 && 377 + assert(bpf_map_update_elem(fd, &key, &value, BPF_EXIST) < 0 && 395 378 errno == E2BIG); 396 379 397 380 /* Check that key = 2 doesn't exist. */ 398 - assert(bpf_map_lookup_elem(fd, &key, &value) == -1 && errno == ENOENT); 381 + assert(bpf_map_lookup_elem(fd, &key, &value) < 0 && errno == ENOENT); 399 382 400 383 /* Iterate over two elements. */ 401 384 assert(bpf_map_get_next_key(fd, NULL, &next_key) == 0 && ··· 404 387 next_key == 0); 405 388 assert(bpf_map_get_next_key(fd, &next_key, &next_key) == 0 && 406 389 next_key == 1); 407 - assert(bpf_map_get_next_key(fd, &next_key, &next_key) == -1 && 390 + assert(bpf_map_get_next_key(fd, &next_key, &next_key) < 0 && 408 391 errno == ENOENT); 409 392 410 393 /* Delete shouldn't succeed. 
*/ 411 394 key = 1; 412 - assert(bpf_map_delete_elem(fd, &key) == -1 && errno == EINVAL); 395 + assert(bpf_map_delete_elem(fd, &key) < 0 && errno == EINVAL); 413 396 414 397 close(fd); 415 398 } ··· 435 418 assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0); 436 419 437 420 bpf_percpu(values, 0) = 0; 438 - assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 && 421 + assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) < 0 && 439 422 errno == EEXIST); 440 423 441 424 /* Check that key=1 can be found. */ ··· 450 433 451 434 /* Check that key=2 cannot be inserted due to max_entries limit. */ 452 435 key = 2; 453 - assert(bpf_map_update_elem(fd, &key, values, BPF_EXIST) == -1 && 436 + assert(bpf_map_update_elem(fd, &key, values, BPF_EXIST) < 0 && 454 437 errno == E2BIG); 455 438 456 439 /* Check that key = 2 doesn't exist. */ 457 - assert(bpf_map_lookup_elem(fd, &key, values) == -1 && errno == ENOENT); 440 + assert(bpf_map_lookup_elem(fd, &key, values) < 0 && errno == ENOENT); 458 441 459 442 /* Iterate over two elements. */ 460 443 assert(bpf_map_get_next_key(fd, NULL, &next_key) == 0 && ··· 463 446 next_key == 0); 464 447 assert(bpf_map_get_next_key(fd, &next_key, &next_key) == 0 && 465 448 next_key == 1); 466 - assert(bpf_map_get_next_key(fd, &next_key, &next_key) == -1 && 449 + assert(bpf_map_get_next_key(fd, &next_key, &next_key) < 0 && 467 450 errno == ENOENT); 468 451 469 452 /* Delete shouldn't succeed. 
*/ 470 453 key = 1; 471 - assert(bpf_map_delete_elem(fd, &key) == -1 && errno == EINVAL); 454 + assert(bpf_map_delete_elem(fd, &key) < 0 && errno == EINVAL); 472 455 473 456 close(fd); 474 457 } ··· 572 555 assert(bpf_map_update_elem(fd, NULL, &vals[i], 0) == 0); 573 556 574 557 /* Check that element cannot be pushed due to max_entries limit */ 575 - assert(bpf_map_update_elem(fd, NULL, &val, 0) == -1 && 558 + assert(bpf_map_update_elem(fd, NULL, &val, 0) < 0 && 576 559 errno == E2BIG); 577 560 578 561 /* Peek element */ ··· 588 571 val == vals[i]); 589 572 590 573 /* Check that there are not elements left */ 591 - assert(bpf_map_lookup_and_delete_elem(fd, NULL, &val) == -1 && 574 + assert(bpf_map_lookup_and_delete_elem(fd, NULL, &val) < 0 && 592 575 errno == ENOENT); 593 576 594 577 /* Check that non supported functions set errno to EINVAL */ 595 - assert(bpf_map_delete_elem(fd, NULL) == -1 && errno == EINVAL); 596 - assert(bpf_map_get_next_key(fd, NULL, NULL) == -1 && errno == EINVAL); 578 + assert(bpf_map_delete_elem(fd, NULL) < 0 && errno == EINVAL); 579 + assert(bpf_map_get_next_key(fd, NULL, NULL) < 0 && errno == EINVAL); 597 580 598 581 close(fd); 599 582 } ··· 630 613 assert(bpf_map_update_elem(fd, NULL, &vals[i], 0) == 0); 631 614 632 615 /* Check that element cannot be pushed due to max_entries limit */ 633 - assert(bpf_map_update_elem(fd, NULL, &val, 0) == -1 && 616 + assert(bpf_map_update_elem(fd, NULL, &val, 0) < 0 && 634 617 errno == E2BIG); 635 618 636 619 /* Peek element */ ··· 646 629 val == vals[i]); 647 630 648 631 /* Check that there are not elements left */ 649 - assert(bpf_map_lookup_and_delete_elem(fd, NULL, &val) == -1 && 632 + assert(bpf_map_lookup_and_delete_elem(fd, NULL, &val) < 0 && 650 633 errno == ENOENT); 651 634 652 635 /* Check that non supported functions set errno to EINVAL */ 653 - assert(bpf_map_delete_elem(fd, NULL) == -1 && errno == EINVAL); 654 - assert(bpf_map_get_next_key(fd, NULL, NULL) == -1 && errno == EINVAL); 636 + 
assert(bpf_map_delete_elem(fd, NULL) < 0 && errno == EINVAL); 637 + assert(bpf_map_get_next_key(fd, NULL, NULL) < 0 && errno == EINVAL); 655 638 656 639 close(fd); 657 640 } ··· 852 835 } 853 836 854 837 bpf_map_rx = bpf_object__find_map_by_name(obj, "sock_map_rx"); 855 - if (IS_ERR(bpf_map_rx)) { 838 + if (!bpf_map_rx) { 856 839 printf("Failed to load map rx from verdict prog\n"); 857 840 goto out_sockmap; 858 841 } ··· 864 847 } 865 848 866 849 bpf_map_tx = bpf_object__find_map_by_name(obj, "sock_map_tx"); 867 - if (IS_ERR(bpf_map_tx)) { 850 + if (!bpf_map_tx) { 868 851 printf("Failed to load map tx from verdict prog\n"); 869 852 goto out_sockmap; 870 853 } ··· 876 859 } 877 860 878 861 bpf_map_msg = bpf_object__find_map_by_name(obj, "sock_map_msg"); 879 - if (IS_ERR(bpf_map_msg)) { 862 + if (!bpf_map_msg) { 880 863 printf("Failed to load map msg from msg_verdict prog\n"); 881 864 goto out_sockmap; 882 865 } ··· 888 871 } 889 872 890 873 bpf_map_break = bpf_object__find_map_by_name(obj, "sock_map_break"); 891 - if (IS_ERR(bpf_map_break)) { 874 + if (!bpf_map_break) { 892 875 printf("Failed to load map tx from verdict prog\n"); 893 876 goto out_sockmap; 894 877 } ··· 1170 1153 } 1171 1154 1172 1155 map = bpf_object__find_map_by_name(obj, "mim_array"); 1173 - if (IS_ERR(map)) { 1156 + if (!map) { 1174 1157 printf("Failed to load array of maps from test prog\n"); 1175 1158 goto out_map_in_map; 1176 1159 } ··· 1181 1164 } 1182 1165 1183 1166 map = bpf_object__find_map_by_name(obj, "mim_hash"); 1184 - if (IS_ERR(map)) { 1167 + if (!map) { 1185 1168 printf("Failed to load hash of maps from test prog\n"); 1186 1169 goto out_map_in_map; 1187 1170 } ··· 1194 1177 bpf_object__load(obj); 1195 1178 1196 1179 map = bpf_object__find_map_by_name(obj, "mim_array"); 1197 - if (IS_ERR(map)) { 1180 + if (!map) { 1198 1181 printf("Failed to load array of maps from test prog\n"); 1199 1182 goto out_map_in_map; 1200 1183 } ··· 1211 1194 } 1212 1195 1213 1196 map = 
bpf_object__find_map_by_name(obj, "mim_hash"); 1214 - if (IS_ERR(map)) { 1197 + if (!map) { 1215 1198 printf("Failed to load hash of maps from test prog\n"); 1216 1199 goto out_map_in_map; 1217 1200 } ··· 1263 1246 } 1264 1247 1265 1248 key.c = -1; 1266 - assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) == -1 && 1249 + assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) < 0 && 1267 1250 errno == E2BIG); 1268 1251 1269 1252 /* Iterate through all elements. */ ··· 1271 1254 key.c = -1; 1272 1255 for (i = 0; i < MAP_SIZE; i++) 1273 1256 assert(bpf_map_get_next_key(fd, &key, &key) == 0); 1274 - assert(bpf_map_get_next_key(fd, &key, &key) == -1 && errno == ENOENT); 1257 + assert(bpf_map_get_next_key(fd, &key, &key) < 0 && errno == ENOENT); 1275 1258 1276 1259 key.c = 0; 1277 1260 assert(bpf_map_lookup_elem(fd, &key, &value) == 0 && value == 0); 1278 1261 key.a = 1; 1279 - assert(bpf_map_lookup_elem(fd, &key, &value) == -1 && errno == ENOENT); 1262 + assert(bpf_map_lookup_elem(fd, &key, &value) < 0 && errno == ENOENT); 1280 1263 1281 1264 close(fd); 1282 1265 } ··· 1408 1391 run_parallel(TASKS, test_update_delete, data); 1409 1392 1410 1393 /* Check that key=0 is already there. */ 1411 - assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) == -1 && 1394 + assert(bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST) < 0 && 1412 1395 errno == EEXIST); 1413 1396 1414 1397 /* Check that all elements were inserted. */ ··· 1416 1399 key = -1; 1417 1400 for (i = 0; i < MAP_SIZE; i++) 1418 1401 assert(bpf_map_get_next_key(fd, &key, &key) == 0); 1419 - assert(bpf_map_get_next_key(fd, &key, &key) == -1 && errno == ENOENT); 1402 + assert(bpf_map_get_next_key(fd, &key, &key) < 0 && errno == ENOENT); 1420 1403 1421 1404 /* Another check for all elements */ 1422 1405 for (i = 0; i < MAP_SIZE; i++) { ··· 1432 1415 1433 1416 /* Nothing should be left. 
*/ 1434 1417 key = -1; 1435 - assert(bpf_map_get_next_key(fd, NULL, &key) == -1 && errno == ENOENT); 1436 - assert(bpf_map_get_next_key(fd, &key, &key) == -1 && errno == ENOENT); 1418 + assert(bpf_map_get_next_key(fd, NULL, &key) < 0 && errno == ENOENT); 1419 + assert(bpf_map_get_next_key(fd, &key, &key) < 0 && errno == ENOENT); 1437 1420 } 1438 1421 1439 1422 static void test_map_rdonly(void) ··· 1451 1434 key = 1; 1452 1435 value = 1234; 1453 1436 /* Try to insert key=1 element. */ 1454 - assert(bpf_map_update_elem(fd, &key, &value, BPF_ANY) == -1 && 1437 + assert(bpf_map_update_elem(fd, &key, &value, BPF_ANY) < 0 && 1455 1438 errno == EPERM); 1456 1439 1457 1440 /* Check that key=1 is not found. */ 1458 - assert(bpf_map_lookup_elem(fd, &key, &value) == -1 && errno == ENOENT); 1459 - assert(bpf_map_get_next_key(fd, &key, &value) == -1 && errno == ENOENT); 1441 + assert(bpf_map_lookup_elem(fd, &key, &value) < 0 && errno == ENOENT); 1442 + assert(bpf_map_get_next_key(fd, &key, &value) < 0 && errno == ENOENT); 1460 1443 1461 1444 close(fd); 1462 1445 } ··· 1479 1462 assert(bpf_map_update_elem(fd, &key, &value, BPF_ANY) == 0); 1480 1463 1481 1464 /* Check that reading elements and keys from the map is not allowed. 
*/ 1482 - assert(bpf_map_lookup_elem(fd, &key, &value) == -1 && errno == EPERM); 1483 - assert(bpf_map_get_next_key(fd, &key, &value) == -1 && errno == EPERM); 1465 + assert(bpf_map_lookup_elem(fd, &key, &value) < 0 && errno == EPERM); 1466 + assert(bpf_map_get_next_key(fd, &key, &value) < 0 && errno == EPERM); 1484 1467 1485 1468 close(fd); 1486 1469 } ··· 1507 1490 assert(bpf_map_update_elem(fd, NULL, &value, BPF_ANY) == 0); 1508 1491 1509 1492 /* Peek element should fail */ 1510 - assert(bpf_map_lookup_elem(fd, NULL, &value) == -1 && errno == EPERM); 1493 + assert(bpf_map_lookup_elem(fd, NULL, &value) < 0 && errno == EPERM); 1511 1494 1512 1495 /* Pop element should fail */ 1513 - assert(bpf_map_lookup_and_delete_elem(fd, NULL, &value) == -1 && 1496 + assert(bpf_map_lookup_and_delete_elem(fd, NULL, &value) < 0 && 1514 1497 errno == EPERM); 1515 1498 1516 1499 close(fd); ··· 1564 1547 value = &fd32; 1565 1548 } 1566 1549 err = bpf_map_update_elem(map_fd, &index0, value, BPF_ANY); 1567 - CHECK(err != -1 || errno != EINVAL, 1550 + CHECK(err >= 0 || errno != EINVAL, 1568 1551 "reuseport array update unbound sk", 1569 1552 "sock_type:%d err:%d errno:%d\n", 1570 1553 type, err, errno); ··· 1593 1576 */ 1594 1577 err = bpf_map_update_elem(map_fd, &index0, value, 1595 1578 BPF_ANY); 1596 - CHECK(err != -1 || errno != EINVAL, 1579 + CHECK(err >= 0 || errno != EINVAL, 1597 1580 "reuseport array update non-listening sk", 1598 1581 "sock_type:%d err:%d errno:%d\n", 1599 1582 type, err, errno); ··· 1623 1606 1624 1607 map_fd = bpf_create_map(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, 1625 1608 sizeof(__u32), sizeof(__u64), array_size, 0); 1626 - CHECK(map_fd == -1, "reuseport array create", 1609 + CHECK(map_fd < 0, "reuseport array create", 1627 1610 "map_fd:%d, errno:%d\n", map_fd, errno); 1628 1611 1629 1612 /* Test lookup/update/delete with invalid index */ 1630 1613 err = bpf_map_delete_elem(map_fd, &bad_index); 1631 - CHECK(err != -1 || errno != E2BIG, "reuseport array del 
>=max_entries", 1614 + CHECK(err >= 0 || errno != E2BIG, "reuseport array del >=max_entries", 1632 1615 "err:%d errno:%d\n", err, errno); 1633 1616 1634 1617 err = bpf_map_update_elem(map_fd, &bad_index, &fd64, BPF_ANY); 1635 - CHECK(err != -1 || errno != E2BIG, 1618 + CHECK(err >= 0 || errno != E2BIG, 1636 1619 "reuseport array update >=max_entries", 1637 1620 "err:%d errno:%d\n", err, errno); 1638 1621 1639 1622 err = bpf_map_lookup_elem(map_fd, &bad_index, &map_cookie); 1640 - CHECK(err != -1 || errno != ENOENT, 1623 + CHECK(err >= 0 || errno != ENOENT, 1641 1624 "reuseport array update >=max_entries", 1642 1625 "err:%d errno:%d\n", err, errno); 1643 1626 1644 1627 /* Test lookup/delete non existence elem */ 1645 1628 err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); 1646 - CHECK(err != -1 || errno != ENOENT, 1629 + CHECK(err >= 0 || errno != ENOENT, 1647 1630 "reuseport array lookup not-exist elem", 1648 1631 "err:%d errno:%d\n", err, errno); 1649 1632 err = bpf_map_delete_elem(map_fd, &index3); 1650 - CHECK(err != -1 || errno != ENOENT, 1633 + CHECK(err >= 0 || errno != ENOENT, 1651 1634 "reuseport array del not-exist elem", 1652 1635 "err:%d errno:%d\n", err, errno); 1653 1636 ··· 1661 1644 /* BPF_EXIST failure case */ 1662 1645 err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], 1663 1646 BPF_EXIST); 1664 - CHECK(err != -1 || errno != ENOENT, 1647 + CHECK(err >= 0 || errno != ENOENT, 1665 1648 "reuseport array update empty elem BPF_EXIST", 1666 1649 "sock_type:%d err:%d errno:%d\n", 1667 1650 type, err, errno); ··· 1670 1653 /* BPF_NOEXIST success case */ 1671 1654 err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], 1672 1655 BPF_NOEXIST); 1673 - CHECK(err == -1, 1656 + CHECK(err < 0, 1674 1657 "reuseport array update empty elem BPF_NOEXIST", 1675 1658 "sock_type:%d err:%d errno:%d\n", 1676 1659 type, err, errno); ··· 1679 1662 /* BPF_EXIST success case. 
*/ 1680 1663 err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], 1681 1664 BPF_EXIST); 1682 - CHECK(err == -1, 1665 + CHECK(err < 0, 1683 1666 "reuseport array update same elem BPF_EXIST", 1684 1667 "sock_type:%d err:%d errno:%d\n", type, err, errno); 1685 1668 fds_idx = REUSEPORT_FD_IDX(err, fds_idx); ··· 1687 1670 /* BPF_NOEXIST failure case */ 1688 1671 err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], 1689 1672 BPF_NOEXIST); 1690 - CHECK(err != -1 || errno != EEXIST, 1673 + CHECK(err >= 0 || errno != EEXIST, 1691 1674 "reuseport array update non-empty elem BPF_NOEXIST", 1692 1675 "sock_type:%d err:%d errno:%d\n", 1693 1676 type, err, errno); ··· 1696 1679 /* BPF_ANY case (always succeed) */ 1697 1680 err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], 1698 1681 BPF_ANY); 1699 - CHECK(err == -1, 1682 + CHECK(err < 0, 1700 1683 "reuseport array update same sk with BPF_ANY", 1701 1684 "sock_type:%d err:%d errno:%d\n", type, err, errno); 1702 1685 ··· 1705 1688 1706 1689 /* The same sk cannot be added to reuseport_array twice */ 1707 1690 err = bpf_map_update_elem(map_fd, &index3, &fd64, BPF_ANY); 1708 - CHECK(err != -1 || errno != EBUSY, 1691 + CHECK(err >= 0 || errno != EBUSY, 1709 1692 "reuseport array update same sk with same index", 1710 1693 "sock_type:%d err:%d errno:%d\n", 1711 1694 type, err, errno); 1712 1695 1713 1696 err = bpf_map_update_elem(map_fd, &index0, &fd64, BPF_ANY); 1714 - CHECK(err != -1 || errno != EBUSY, 1697 + CHECK(err >= 0 || errno != EBUSY, 1715 1698 "reuseport array update same sk with different index", 1716 1699 "sock_type:%d err:%d errno:%d\n", 1717 1700 type, err, errno); 1718 1701 1719 1702 /* Test delete elem */ 1720 1703 err = bpf_map_delete_elem(map_fd, &index3); 1721 - CHECK(err == -1, "reuseport array delete sk", 1704 + CHECK(err < 0, "reuseport array delete sk", 1722 1705 "sock_type:%d err:%d errno:%d\n", 1723 1706 type, err, errno); 1724 1707 1725 1708 /* Add it back with BPF_NOEXIST 
*/ 1726 1709 err = bpf_map_update_elem(map_fd, &index3, &fd64, BPF_NOEXIST); 1727 - CHECK(err == -1, 1710 + CHECK(err < 0, 1728 1711 "reuseport array re-add with BPF_NOEXIST after del", 1729 1712 "sock_type:%d err:%d errno:%d\n", type, err, errno); 1730 1713 1731 1714 /* Test cookie */ 1732 1715 err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); 1733 - CHECK(err == -1 || sk_cookie != map_cookie, 1716 + CHECK(err < 0 || sk_cookie != map_cookie, 1734 1717 "reuseport array lookup re-added sk", 1735 1718 "sock_type:%d err:%d errno:%d sk_cookie:0x%llx map_cookie:0x%llxn", 1736 1719 type, err, errno, sk_cookie, map_cookie); ··· 1739 1722 for (f = 0; f < ARRAY_SIZE(grpa_fds64); f++) 1740 1723 close(grpa_fds64[f]); 1741 1724 err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); 1742 - CHECK(err != -1 || errno != ENOENT, 1725 + CHECK(err >= 0 || errno != ENOENT, 1743 1726 "reuseport array lookup after close()", 1744 1727 "sock_type:%d err:%d errno:%d\n", 1745 1728 type, err, errno); ··· 1750 1733 CHECK(fd64 == -1, "socket(SOCK_RAW)", "err:%d errno:%d\n", 1751 1734 err, errno); 1752 1735 err = bpf_map_update_elem(map_fd, &index3, &fd64, BPF_NOEXIST); 1753 - CHECK(err != -1 || errno != ENOTSUPP, "reuseport array update SOCK_RAW", 1736 + CHECK(err >= 0 || errno != ENOTSUPP, "reuseport array update SOCK_RAW", 1754 1737 "err:%d errno:%d\n", err, errno); 1755 1738 close(fd64); 1756 1739 ··· 1760 1743 /* Test 32 bit fd */ 1761 1744 map_fd = bpf_create_map(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, 1762 1745 sizeof(__u32), sizeof(__u32), array_size, 0); 1763 - CHECK(map_fd == -1, "reuseport array create", 1746 + CHECK(map_fd < 0, "reuseport array create", 1764 1747 "map_fd:%d, errno:%d\n", map_fd, errno); 1765 1748 prepare_reuseport_grp(SOCK_STREAM, map_fd, sizeof(__u32), &fd64, 1766 1749 &sk_cookie, 1); 1767 1750 fd = fd64; 1768 1751 err = bpf_map_update_elem(map_fd, &index3, &fd, BPF_NOEXIST); 1769 - CHECK(err == -1, "reuseport array update 32 bit fd", 1752 + CHECK(err < 0, 
"reuseport array update 32 bit fd", 1770 1753 "err:%d errno:%d\n", err, errno); 1771 1754 err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); 1772 - CHECK(err != -1 || errno != ENOSPC, 1755 + CHECK(err >= 0 || errno != ENOSPC, 1773 1756 "reuseport array lookup 32 bit fd", 1774 1757 "err:%d errno:%d\n", err, errno); 1775 1758 close(fd); ··· 1814 1797 int main(void) 1815 1798 { 1816 1799 srand(time(NULL)); 1800 + 1801 + libbpf_set_strict_mode(LIBBPF_STRICT_ALL); 1817 1802 1818 1803 map_flags = 0; 1819 1804 run_all_tests();
+3
tools/testing/selftests/bpf/test_progs.c
··· 737 737 if (err) 738 738 return err; 739 739 740 + /* Use libbpf 1.0 API mode */ 741 + libbpf_set_strict_mode(LIBBPF_STRICT_ALL); 742 + 740 743 libbpf_set_print(libbpf_print_fn); 741 744 742 745 srand(time(NULL));
+5 -4
tools/testing/selftests/bpf/test_progs.h
··· 249 249 #define ASSERT_OK_PTR(ptr, name) ({ \ 250 250 static int duration = 0; \ 251 251 const void *___res = (ptr); \ 252 - bool ___ok = !IS_ERR_OR_NULL(___res); \ 253 - CHECK(!___ok, (name), \ 254 - "unexpected error: %ld\n", PTR_ERR(___res)); \ 252 + int ___err = libbpf_get_error(___res); \ 253 + bool ___ok = ___err == 0; \ 254 + CHECK(!___ok, (name), "unexpected error: %d\n", ___err); \ 255 255 ___ok; \ 256 256 }) 257 257 258 258 #define ASSERT_ERR_PTR(ptr, name) ({ \ 259 259 static int duration = 0; \ 260 260 const void *___res = (ptr); \ 261 - bool ___ok = IS_ERR(___res); \ 261 + int ___err = libbpf_get_error(___res); \ 262 + bool ___ok = ___err != 0; \ 262 263 CHECK(!___ok, (name), "unexpected pointer: %p\n", ___res); \ 263 264 ___ok; \ 264 265 })
+4 -3
tools/testing/selftests/bpf/test_tcpnotify_user.c
··· 82 82 cpu_set_t cpuset; 83 83 __u32 key = 0; 84 84 85 + libbpf_set_strict_mode(LIBBPF_STRICT_ALL); 86 + 85 87 CPU_ZERO(&cpuset); 86 88 CPU_SET(0, &cpuset); 87 89 pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset); ··· 118 116 119 117 pb_opts.sample_cb = dummyfn; 120 118 pb = perf_buffer__new(bpf_map__fd(perf_map), 8, &pb_opts); 121 - if (IS_ERR(pb)) 119 + if (!pb) 122 120 goto err; 123 121 124 122 pthread_create(&tid, NULL, poller_thread, pb); ··· 165 163 bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS); 166 164 close(cg_fd); 167 165 cleanup_cgroup_environment(); 168 - if (!IS_ERR_OR_NULL(pb)) 169 - perf_buffer__free(pb); 166 + perf_buffer__free(pb); 170 167 return error; 171 168 }
+204
tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Test topology: 5 + # - - - - - - - - - - - - - - - - - - - - - - - - - 6 + # | veth1 veth2 veth3 | ... init net 7 + # - -| - - - - - - | - - - - - - | - - 8 + # --------- --------- --------- 9 + # | veth0 | | veth0 | | veth0 | ... 10 + # --------- --------- --------- 11 + # ns1 ns2 ns3 12 + # 13 + # Test modules: 14 + # XDP modes: generic, native, native + egress_prog 15 + # 16 + # Test cases: 17 + # ARP: Testing BPF_F_BROADCAST, the ingress interface also should receive 18 + # the redirects. 19 + # ns1 -> gw: ns1, ns2, ns3, should receive the arp request 20 + # IPv4: Testing BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS, the ingress 21 + # interface should not receive the redirects. 22 + # ns1 -> gw: ns1 should not receive, ns2, ns3 should receive redirects. 23 + # IPv6: Testing none flag, all the pkts should be redirected back 24 + # ping test: ns1 -> ns2 (block), echo requests will be redirect back 25 + # egress_prog: 26 + # all src mac should be egress interface's mac 27 + 28 + # netns numbers 29 + NUM=3 30 + IFACES="" 31 + DRV_MODE="xdpgeneric xdpdrv xdpegress" 32 + PASS=0 33 + FAIL=0 34 + 35 + test_pass() 36 + { 37 + echo "Pass: $@" 38 + PASS=$((PASS + 1)) 39 + } 40 + 41 + test_fail() 42 + { 43 + echo "fail: $@" 44 + FAIL=$((FAIL + 1)) 45 + } 46 + 47 + clean_up() 48 + { 49 + for i in $(seq $NUM); do 50 + ip link del veth$i 2> /dev/null 51 + ip netns del ns$i 2> /dev/null 52 + done 53 + } 54 + 55 + # Kselftest framework requirement - SKIP code is 4. 56 + check_env() 57 + { 58 + ip link set dev lo xdpgeneric off &>/dev/null 59 + if [ $? -ne 0 ];then 60 + echo "selftests: [SKIP] Could not run test without the ip xdpgeneric support" 61 + exit 4 62 + fi 63 + 64 + which tcpdump &>/dev/null 65 + if [ $? 
-ne 0 ];then 66 + echo "selftests: [SKIP] Could not run test without tcpdump" 67 + exit 4 68 + fi 69 + } 70 + 71 + setup_ns() 72 + { 73 + local mode=$1 74 + IFACES="" 75 + 76 + if [ "$mode" = "xdpegress" ]; then 77 + mode="xdpdrv" 78 + fi 79 + 80 + for i in $(seq $NUM); do 81 + ip netns add ns$i 82 + ip link add veth$i type veth peer name veth0 netns ns$i 83 + ip link set veth$i up 84 + ip -n ns$i link set veth0 up 85 + 86 + ip -n ns$i addr add 192.0.2.$i/24 dev veth0 87 + ip -n ns$i addr add 2001:db8::$i/64 dev veth0 88 + # Add a neigh entry for IPv4 ping test 89 + ip -n ns$i neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0 90 + ip -n ns$i link set veth0 $mode obj \ 91 + xdp_dummy.o sec xdp_dummy &> /dev/null || \ 92 + { test_fail "Unable to load dummy xdp" && exit 1; } 93 + IFACES="$IFACES veth$i" 94 + veth_mac[$i]=$(ip link show veth$i | awk '/link\/ether/ {print $2}') 95 + done 96 + } 97 + 98 + do_egress_tests() 99 + { 100 + local mode=$1 101 + 102 + # mac test 103 + ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> mac_ns1-2_${mode}.log & 104 + ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> mac_ns1-3_${mode}.log & 105 + sleep 0.5 106 + ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null 107 + sleep 0.5 108 + pkill -9 tcpdump 109 + 110 + # mac check 111 + grep -q "${veth_mac[2]} > ff:ff:ff:ff:ff:ff" mac_ns1-2_${mode}.log && \ 112 + test_pass "$mode mac ns1-2" || test_fail "$mode mac ns1-2" 113 + grep -q "${veth_mac[3]} > ff:ff:ff:ff:ff:ff" mac_ns1-3_${mode}.log && \ 114 + test_pass "$mode mac ns1-3" || test_fail "$mode mac ns1-3" 115 + } 116 + 117 + do_ping_tests() 118 + { 119 + local mode=$1 120 + 121 + # ping6 test: echo request should be redirect back to itself, not others 122 + ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02 123 + 124 + ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ns1-1_${mode}.log & 125 + ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ns1-2_${mode}.log & 126 + ip netns exec ns3 tcpdump 
-i veth0 -nn -l -e &> ns1-3_${mode}.log & 127 + sleep 0.5 128 + # ARP test 129 + ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null 130 + # IPv4 test 131 + ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null 132 + # IPv6 test 133 + ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null 134 + sleep 0.5 135 + pkill -9 tcpdump 136 + 137 + # All netns should receive the redirect arp requests 138 + [ $(grep -c "who-has 192.0.2.254" ns1-1_${mode}.log) -gt 4 ] && \ 139 + test_pass "$mode arp(F_BROADCAST) ns1-1" || \ 140 + test_fail "$mode arp(F_BROADCAST) ns1-1" 141 + [ $(grep -c "who-has 192.0.2.254" ns1-2_${mode}.log) -le 4 ] && \ 142 + test_pass "$mode arp(F_BROADCAST) ns1-2" || \ 143 + test_fail "$mode arp(F_BROADCAST) ns1-2" 144 + [ $(grep -c "who-has 192.0.2.254" ns1-3_${mode}.log) -le 4 ] && \ 145 + test_pass "$mode arp(F_BROADCAST) ns1-3" || \ 146 + test_fail "$mode arp(F_BROADCAST) ns1-3" 147 + 148 + # ns1 should not receive the redirect echo request, others should 149 + [ $(grep -c "ICMP echo request" ns1-1_${mode}.log) -eq 4 ] && \ 150 + test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1" || \ 151 + test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1" 152 + [ $(grep -c "ICMP echo request" ns1-2_${mode}.log) -eq 4 ] && \ 153 + test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2" || \ 154 + test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2" 155 + [ $(grep -c "ICMP echo request" ns1-3_${mode}.log) -eq 4 ] && \ 156 + test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3" || \ 157 + test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3" 158 + 159 + # ns1 should receive the echo request, ns2 should not 160 + [ $(grep -c "ICMP6, echo request" ns1-1_${mode}.log) -eq 4 ] && \ 161 + test_pass "$mode IPv6 (no flags) ns1-1" || \ 162 + test_fail "$mode IPv6 (no flags) ns1-1" 163 + [ $(grep -c "ICMP6, echo request" ns1-2_${mode}.log) -eq 0 ] && \ 164 + test_pass "$mode IPv6 (no flags) ns1-2" || \ 165 + 
test_fail "$mode IPv6 (no flags) ns1-2" 166 + } 167 + 168 + do_tests() 169 + { 170 + local mode=$1 171 + local drv_p 172 + 173 + case ${mode} in 174 + xdpdrv) drv_p="-N";; 175 + xdpegress) drv_p="-X";; 176 + xdpgeneric) drv_p="-S";; 177 + esac 178 + 179 + ./xdp_redirect_multi $drv_p $IFACES &> xdp_redirect_${mode}.log & 180 + xdp_pid=$! 181 + sleep 1 182 + 183 + if [ "$mode" = "xdpegress" ]; then 184 + do_egress_tests $mode 185 + else 186 + do_ping_tests $mode 187 + fi 188 + 189 + kill $xdp_pid 190 + } 191 + 192 + trap clean_up 0 2 3 6 9 193 + 194 + check_env 195 + rm -f xdp_redirect_*.log ns*.log mac_ns*.log 196 + 197 + for mode in ${DRV_MODE}; do 198 + setup_ns $mode 199 + do_tests $mode 200 + clean_up 201 + done 202 + 203 + echo "Summary: PASS $PASS, FAIL $FAIL" 204 + [ $FAIL -eq 0 ] && exit 0 || exit 1
+226
tools/testing/selftests/bpf/xdp_redirect_multi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <linux/if_link.h> 4 + #include <assert.h> 5 + #include <errno.h> 6 + #include <signal.h> 7 + #include <stdio.h> 8 + #include <stdlib.h> 9 + #include <string.h> 10 + #include <net/if.h> 11 + #include <unistd.h> 12 + #include <libgen.h> 13 + #include <sys/resource.h> 14 + #include <sys/ioctl.h> 15 + #include <sys/types.h> 16 + #include <sys/socket.h> 17 + #include <netinet/in.h> 18 + 19 + #include "bpf_util.h" 20 + #include <bpf/bpf.h> 21 + #include <bpf/libbpf.h> 22 + 23 + #define MAX_IFACE_NUM 32 24 + #define MAX_INDEX_NUM 1024 25 + 26 + static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST; 27 + static int ifaces[MAX_IFACE_NUM] = {}; 28 + 29 + static void int_exit(int sig) 30 + { 31 + __u32 prog_id = 0; 32 + int i; 33 + 34 + for (i = 0; ifaces[i] > 0; i++) { 35 + if (bpf_get_link_xdp_id(ifaces[i], &prog_id, xdp_flags)) { 36 + printf("bpf_get_link_xdp_id failed\n"); 37 + exit(1); 38 + } 39 + if (prog_id) 40 + bpf_set_link_xdp_fd(ifaces[i], -1, xdp_flags); 41 + } 42 + 43 + exit(0); 44 + } 45 + 46 + static int get_mac_addr(unsigned int ifindex, void *mac_addr) 47 + { 48 + char ifname[IF_NAMESIZE]; 49 + struct ifreq ifr; 50 + int fd, ret = -1; 51 + 52 + fd = socket(AF_INET, SOCK_DGRAM, 0); 53 + if (fd < 0) 54 + return ret; 55 + 56 + if (!if_indextoname(ifindex, ifname)) 57 + goto err_out; 58 + 59 + strcpy(ifr.ifr_name, ifname); 60 + 61 + if (ioctl(fd, SIOCGIFHWADDR, &ifr) != 0) 62 + goto err_out; 63 + 64 + memcpy(mac_addr, ifr.ifr_hwaddr.sa_data, 6 * sizeof(char)); 65 + ret = 0; 66 + 67 + err_out: 68 + close(fd); 69 + return ret; 70 + } 71 + 72 + static void usage(const char *prog) 73 + { 74 + fprintf(stderr, 75 + "usage: %s [OPTS] <IFNAME|IFINDEX> <IFNAME|IFINDEX> ...\n" 76 + "OPTS:\n" 77 + " -S use skb-mode\n" 78 + " -N enforce native mode\n" 79 + " -F force loading prog\n" 80 + " -X load xdp program on egress\n", 81 + prog); 82 + } 83 + 84 + int main(int argc, char **argv) 85 + { 86 
+ int prog_fd, group_all, mac_map; 87 + struct bpf_program *ingress_prog, *egress_prog; 88 + struct bpf_prog_load_attr prog_load_attr = { 89 + .prog_type = BPF_PROG_TYPE_UNSPEC, 90 + }; 91 + int i, ret, opt, egress_prog_fd = 0; 92 + struct bpf_devmap_val devmap_val; 93 + bool attach_egress_prog = false; 94 + unsigned char mac_addr[6]; 95 + char ifname[IF_NAMESIZE]; 96 + struct bpf_object *obj; 97 + unsigned int ifindex; 98 + char filename[256]; 99 + 100 + while ((opt = getopt(argc, argv, "SNFX")) != -1) { 101 + switch (opt) { 102 + case 'S': 103 + xdp_flags |= XDP_FLAGS_SKB_MODE; 104 + break; 105 + case 'N': 106 + /* default, set below */ 107 + break; 108 + case 'F': 109 + xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST; 110 + break; 111 + case 'X': 112 + attach_egress_prog = true; 113 + break; 114 + default: 115 + usage(basename(argv[0])); 116 + return 1; 117 + } 118 + } 119 + 120 + if (!(xdp_flags & XDP_FLAGS_SKB_MODE)) { 121 + xdp_flags |= XDP_FLAGS_DRV_MODE; 122 + } else if (attach_egress_prog) { 123 + printf("Load xdp program on egress with SKB mode not supported yet\n"); 124 + goto err_out; 125 + } 126 + 127 + if (optind == argc) { 128 + printf("usage: %s <IFNAME|IFINDEX> <IFNAME|IFINDEX> ...\n", argv[0]); 129 + goto err_out; 130 + } 131 + 132 + printf("Get interfaces"); 133 + for (i = 0; i < MAX_IFACE_NUM && argv[optind + i]; i++) { 134 + ifaces[i] = if_nametoindex(argv[optind + i]); 135 + if (!ifaces[i]) 136 + ifaces[i] = strtoul(argv[optind + i], NULL, 0); 137 + if (!if_indextoname(ifaces[i], ifname)) { 138 + perror("Invalid interface name or i"); 139 + goto err_out; 140 + } 141 + if (ifaces[i] > MAX_INDEX_NUM) { 142 + printf("Interface index to large\n"); 143 + goto err_out; 144 + } 145 + printf(" %d", ifaces[i]); 146 + } 147 + printf("\n"); 148 + 149 + snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); 150 + prog_load_attr.file = filename; 151 + 152 + if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) 153 + goto err_out; 154 + 155 + if 
(attach_egress_prog) 156 + group_all = bpf_object__find_map_fd_by_name(obj, "map_egress"); 157 + else 158 + group_all = bpf_object__find_map_fd_by_name(obj, "map_all"); 159 + mac_map = bpf_object__find_map_fd_by_name(obj, "mac_map"); 160 + 161 + if (group_all < 0 || mac_map < 0) { 162 + printf("bpf_object__find_map_fd_by_name failed\n"); 163 + goto err_out; 164 + } 165 + 166 + if (attach_egress_prog) { 167 + /* Find ingress/egress prog for 2nd xdp prog */ 168 + ingress_prog = bpf_object__find_program_by_name(obj, "xdp_redirect_map_all_prog"); 169 + egress_prog = bpf_object__find_program_by_name(obj, "xdp_devmap_prog"); 170 + if (!ingress_prog || !egress_prog) { 171 + printf("finding ingress/egress_prog in obj file failed\n"); 172 + goto err_out; 173 + } 174 + prog_fd = bpf_program__fd(ingress_prog); 175 + egress_prog_fd = bpf_program__fd(egress_prog); 176 + if (prog_fd < 0 || egress_prog_fd < 0) { 177 + printf("find egress_prog fd failed\n"); 178 + goto err_out; 179 + } 180 + } 181 + 182 + signal(SIGINT, int_exit); 183 + signal(SIGTERM, int_exit); 184 + 185 + /* Init forward multicast groups and exclude group */ 186 + for (i = 0; ifaces[i] > 0; i++) { 187 + ifindex = ifaces[i]; 188 + 189 + if (attach_egress_prog) { 190 + ret = get_mac_addr(ifindex, mac_addr); 191 + if (ret < 0) { 192 + printf("get interface %d mac failed\n", ifindex); 193 + goto err_out; 194 + } 195 + ret = bpf_map_update_elem(mac_map, &ifindex, mac_addr, 0); 196 + if (ret) { 197 + perror("bpf_update_elem mac_map failed\n"); 198 + goto err_out; 199 + } 200 + } 201 + 202 + /* Add all the interfaces to group all */ 203 + devmap_val.ifindex = ifindex; 204 + devmap_val.bpf_prog.fd = egress_prog_fd; 205 + ret = bpf_map_update_elem(group_all, &ifindex, &devmap_val, 0); 206 + if (ret) { 207 + perror("bpf_map_update_elem"); 208 + goto err_out; 209 + } 210 + 211 + /* bind prog_fd to each interface */ 212 + ret = bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags); 213 + if (ret) { 214 + printf("Set xdp fd 
failed on %d\n", ifindex); 215 + goto err_out; 216 + } 217 + } 218 + 219 + /* sleep some time for testing */ 220 + sleep(999); 221 + 222 + return 0; 223 + 224 + err_out: 225 + return 1; 226 + }
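The loader above stores `egress_prog_fd` into `devmap_val.bpf_prog.fd`, which is the mechanism for attaching a second XDP program to a devmap entry so it runs on egress (the `-X` mode the shell harness drives). A sketch of what such an egress program can look like — the selftest's real version rewrites the source MAC from `mac_map`; the body here is illustrative, not the committed program:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(key_size, sizeof(int));
	__uint(value_size, 6);		/* one MAC address per ifindex */
	__uint(max_entries, 32);
} mac_map SEC(".maps");

SEC("xdp_devmap")
int xdp_devmap_prog(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	int key = ctx->egress_ifindex;	/* device the frame is leaving on */
	unsigned char *mac;

	/* Bounds check: need a full Ethernet header. */
	if (data + 14 > data_end)
		return XDP_DROP;

	mac = bpf_map_lookup_elem(&mac_map, &key);
	if (mac)
		__builtin_memcpy(data + 6, mac, 6);	/* source MAC */

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

This is why the `do_egress_tests()` check in the shell script greps for the egress interface's MAC as the source address of the broadcast frames.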