Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Alexei Starovoitov says:

====================
pull-request: bpf-next 2023-07-13

We've added 67 non-merge commits during the last 15 day(s) which contain
a total of 106 files changed, 4444 insertions(+), 619 deletions(-).

The main changes are:

1) Fix bpftool build in presence of stale vmlinux.h,
from Alexander Lobakin.

2) Introduce bpf_me_mcache_free_rcu() and fix OOM under stress,
from Alexei Starovoitov.

3) Teach verifier actual bounds of bpf_get_smp_processor_id()
and fix perf+libbpf issue related to custom section handling,
from Andrii Nakryiko.

4) Introduce bpf map element count, from Anton Protopopov.

5) Check skb ownership against full socket, from Kui-Feng Lee.

6) Support for up to 12 arguments in BPF trampoline, from Menglong Dong.

7) Export rcu_request_urgent_qs_task, from Paul E. McKenney.

8) Fix BTF walking of unions, from Yafang Shao.

9) Extend link_info for kprobe_multi and perf_event links,
from Yafang Shao.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (67 commits)
selftests/bpf: Add selftest for PTR_UNTRUSTED
bpf: Fix an error in verifying a field in a union
selftests/bpf: Add selftests for nested_trust
bpf: Fix an error around PTR_UNTRUSTED
selftests/bpf: add testcase for TRACING with 6+ arguments
bpf, x86: allow function arguments up to 12 for TRACING
bpf, x86: save/restore regs with BPF_DW size
bpftool: Use "fallthrough;" keyword instead of comments
bpf: Add object leak check.
bpf: Convert bpf_cpumask to bpf_mem_cache_free_rcu.
bpf: Introduce bpf_mem_free_rcu() similar to kfree_rcu().
selftests/bpf: Improve test coverage of bpf_mem_alloc.
rcu: Export rcu_request_urgent_qs_task()
bpf: Allow reuse from waiting_for_gp_ttrace list.
bpf: Add a hint to allocated objects.
bpf: Change bpf_mem_cache draining process.
bpf: Further refactor alloc_bulk().
bpf: Factor out inc/dec of active flag into helpers.
bpf: Refactor alloc_bulk().
bpf: Let free_all() return the number of freed elements.
...
====================

Link: https://lore.kernel.org/r/20230714020910.80794-1-alexei.starovoitov@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4445 -620
+5 -5
Documentation/bpf/bpf_devel_QA.rst
··· 635 635 636 636 Q: clang flag for target bpf? 637 637 ----------------------------- 638 - Q: In some cases clang flag ``-target bpf`` is used but in other cases the 638 + Q: In some cases clang flag ``--target=bpf`` is used but in other cases the 639 639 default clang target, which matches the underlying architecture, is used. 640 640 What is the difference and when I should use which? 641 641 642 642 A: Although LLVM IR generation and optimization try to stay architecture 643 - independent, ``-target <arch>`` still has some impact on generated code: 643 + independent, ``--target=<arch>`` still has some impact on generated code: 644 644 645 645 - BPF program may recursively include header file(s) with file scope 646 646 inline assembly codes. The default target can handle this well, ··· 658 658 The clang option ``-fno-jump-tables`` can be used to disable 659 659 switch table generation. 660 660 661 - - For clang ``-target bpf``, it is guaranteed that pointer or long / 661 + - For clang ``--target=bpf``, it is guaranteed that pointer or long / 662 662 unsigned long types will always have a width of 64 bit, no matter 663 663 whether underlying clang binary or default target (or kernel) is 664 664 32 bit. However, when native clang target is used, then it will ··· 668 668 while the BPF LLVM back end still operates in 64 bit. The native 669 669 target is mostly needed in tracing for the case of walking ``pt_regs`` 670 670 or other kernel structures where CPU's register width matters. 671 - Otherwise, ``clang -target bpf`` is generally recommended. 671 + Otherwise, ``clang --target=bpf`` is generally recommended. 672 672 673 673 You should use default target when: 674 674 ··· 685 685 into these structures is verified by the BPF verifier and may result 686 686 in verification failures if the native architecture is not aligned with 687 687 the BPF architecture, e.g. 64-bit. 
An example of this is 688 - BPF_PROG_TYPE_SK_MSG require ``-target bpf`` 688 + BPF_PROG_TYPE_SK_MSG require ``--target=bpf`` 689 689 690 690 691 691 .. Links
+2 -2
Documentation/bpf/btf.rst
···
 990  990      } g2;
 991  991      int main() { return 0; }
 992  992      int test() { return 0; }
 993       -   -bash-4.4$ clang -c -g -O2 -target bpf t2.c
      993  +   -bash-4.4$ clang -c -g -O2 --target=bpf t2.c
 994  994      -bash-4.4$ readelf -S t2.o
 995  995      ......
 996  996      [ 8] .BTF              PROGBITS         0000000000000000  00000247
···
1000 1000      [10] .rel.BTF.ext      REL              0000000000000000  000007e0
1001 1001           0000000000000040  0000000000000010          16     9     8
1002 1002      ......
1003       -   -bash-4.4$ clang -S -g -O2 -target bpf t2.c
     1003  +   -bash-4.4$ clang -S -g -O2 --target=bpf t2.c
1004 1004      -bash-4.4$ cat t2.s
1005 1005      ......
1006 1006      .section .BTF,"",@progbits
+1 -2
Documentation/bpf/index.rst
···
12 12   .. toctree::
13 13      :maxdepth: 1
14 14
15    -    instruction-set
16 15      verifier
17 16      libbpf/index
   17 +    standardization/index
18 18      btf
19 19      faq
20 20      syscall_api
···
29 29      bpf_licensing
30 30      test_debug
31 31      clang-notes
32    -    linux-notes
33 32      other
34 33      redirect
35 34
+1 -1
Documentation/bpf/instruction-set.rst → Documentation/bpf/standardization/instruction-set.rst
···
165 165   BPF_AND  0x50  dst &= src
166 166   BPF_LSH  0x60  dst <<= (src & mask)
167 167   BPF_RSH  0x70  dst >>= (src & mask)
168     - BPF_NEG  0x80  dst = ~src
    168 + BPF_NEG  0x80  dst = -src
169 169   BPF_MOD  0x90  dst = (src != 0) ? (dst % src) : dst
170 170   BPF_XOR  0xa0  dst ^= src
171 171   BPF_MOV  0xb0  dst = src
+2 -1
Documentation/bpf/linux-notes.rst → Documentation/bpf/standardization/linux-notes.rst
···
45 45   Legacy BPF Packet access instructions
46 46   =====================================
47 47
48    - As mentioned in the `ISA standard documentation <instruction-set.rst#legacy-bpf-packet-access-instructions>`_,
   48 + As mentioned in the `ISA standard documentation
   49 + <instruction-set.html#legacy-bpf-packet-access-instructions>`_,
49 50   Linux has special eBPF instructions for access to packet data that have been
50 51   carried over from classic BPF to retain the performance of legacy socket
51 52   filters running in the eBPF interpreter.
+3 -3
Documentation/bpf/llvm_reloc.rst
···
 28  28       return g1 + g2 + l1 + l2;
 29  29     }
 30  30
 31      - Compiled with ``clang -target bpf -O2 -c test.c``, the following is
     31  + Compiled with ``clang --target=bpf -O2 -c test.c``, the following is
 32  32   the code with ``llvm-objdump -dr test.o``::
 33  33
 34  34     0: 18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00  r1 = 0 ll
···
157 157       return gfunc(a, b) + lfunc(a, b) + global;
158 158     }
159 159
160     - Compiled with ``clang -target bpf -O2 -c test.c``, we will have
    160 + Compiled with ``clang --target=bpf -O2 -c test.c``, we will have
161 161   following code with `llvm-objdump -dr test.o``::
162 162
163 163   Disassembly of section .text:
···
203 203     int global() { return 0; }
204 204     struct t { void *g; } gbl = { global };
205 205
206     - Compiled with ``clang -target bpf -O2 -g -c test.c``, we will see a
    206 + Compiled with ``clang --target=bpf -O2 -g -c test.c``, we will see a
207 207   relocation below in ``.data`` section with command
208 208   ``llvm-readelf -r test.o``::
209 209
+18
Documentation/bpf/standardization/index.rst
···
 1 + .. SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
 2 +
 3 + ===================
 4 + BPF Standardization
 5 + ===================
 6 +
 7 + This directory contains documents that are being iterated on as part of the BPF
 8 + standardization effort with the IETF. See the `IETF BPF Working Group`_ page
 9 + for the working group charter, documents, and more.
10 +
11 + .. toctree::
12 +    :maxdepth: 1
13 +
14 +    instruction-set
15 +    linux-notes
16 +
17 + .. Links:
18 + .. _IETF BPF Working Group: https://datatracker.ietf.org/wg/bpf/about/
+1 -1
MAINTAINERS
···
3694 3694   L: bpf@vger.kernel.org
3695 3695   L: bpf@ietf.org
3696 3696   S: Maintained
3697      - F: Documentation/bpf/instruction-set.rst
     3697 + F: Documentation/bpf/standardization/
3698 3698
3699 3699   BPF [GENERAL] (Safe Dynamic Programs and Tools)
3700 3700   M: Alexei Starovoitov <ast@kernel.org>
+204 -44
arch/x86/net/bpf_jit_comp.c
··· 1857 1857 return proglen; 1858 1858 } 1859 1859 1860 - static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs, 1861 - int stack_size) 1860 + static void clean_stack_garbage(const struct btf_func_model *m, 1861 + u8 **pprog, int nr_stack_slots, 1862 + int stack_size) 1862 1863 { 1863 - int i, j, arg_size; 1864 - bool next_same_struct = false; 1864 + int arg_size, off; 1865 + u8 *prog; 1866 + 1867 + /* Generally speaking, the compiler will pass the arguments 1868 + * on-stack with "push" instruction, which will take 8-byte 1869 + * on the stack. In this case, there won't be garbage values 1870 + * while we copy the arguments from origin stack frame to current 1871 + * in BPF_DW. 1872 + * 1873 + * However, sometimes the compiler will only allocate 4-byte on 1874 + * the stack for the arguments. For now, this case will only 1875 + * happen if there is only one argument on-stack and its size 1876 + * not more than 4 byte. In this case, there will be garbage 1877 + * values on the upper 4-byte where we store the argument on 1878 + * current stack frame. 1879 + * 1880 + * arguments on origin stack: 1881 + * 1882 + * stack_arg_1(4-byte) xxx(4-byte) 1883 + * 1884 + * what we copy: 1885 + * 1886 + * stack_arg_1(8-byte): stack_arg_1(origin) xxx 1887 + * 1888 + * and the xxx is the garbage values which we should clean here. 
1889 + */ 1890 + if (nr_stack_slots != 1) 1891 + return; 1892 + 1893 + /* the size of the last argument */ 1894 + arg_size = m->arg_size[m->nr_args - 1]; 1895 + if (arg_size <= 4) { 1896 + off = -(stack_size - 4); 1897 + prog = *pprog; 1898 + /* mov DWORD PTR [rbp + off], 0 */ 1899 + if (!is_imm8(off)) 1900 + EMIT2_off32(0xC7, 0x85, off); 1901 + else 1902 + EMIT3(0xC7, 0x45, off); 1903 + EMIT(0, 4); 1904 + *pprog = prog; 1905 + } 1906 + } 1907 + 1908 + /* get the count of the regs that are used to pass arguments */ 1909 + static int get_nr_used_regs(const struct btf_func_model *m) 1910 + { 1911 + int i, arg_regs, nr_used_regs = 0; 1912 + 1913 + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) { 1914 + arg_regs = (m->arg_size[i] + 7) / 8; 1915 + if (nr_used_regs + arg_regs <= 6) 1916 + nr_used_regs += arg_regs; 1917 + 1918 + if (nr_used_regs >= 6) 1919 + break; 1920 + } 1921 + 1922 + return nr_used_regs; 1923 + } 1924 + 1925 + static void save_args(const struct btf_func_model *m, u8 **prog, 1926 + int stack_size, bool for_call_origin) 1927 + { 1928 + int arg_regs, first_off, nr_regs = 0, nr_stack_slots = 0; 1929 + int i, j; 1865 1930 1866 1931 /* Store function arguments to stack. 1867 1932 * For a function that accepts two pointers the sequence will be: 1868 1933 * mov QWORD PTR [rbp-0x10],rdi 1869 1934 * mov QWORD PTR [rbp-0x8],rsi 1870 1935 */ 1871 - for (i = 0, j = 0; i < min(nr_regs, 6); i++) { 1872 - /* The arg_size is at most 16 bytes, enforced by the verifier. */ 1873 - arg_size = m->arg_size[j]; 1874 - if (arg_size > 8) { 1875 - arg_size = 8; 1876 - next_same_struct = !next_same_struct; 1936 + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) { 1937 + arg_regs = (m->arg_size[i] + 7) / 8; 1938 + 1939 + /* According to the research of Yonghong, struct members 1940 + * should be all in register or all on the stack. 1941 + * Meanwhile, the compiler will pass the argument on regs 1942 + * if the remaining regs can hold the argument. 
1943 + * 1944 + * Disorder of the args can happen. For example: 1945 + * 1946 + * struct foo_struct { 1947 + * long a; 1948 + * int b; 1949 + * }; 1950 + * int foo(char, char, char, char, char, struct foo_struct, 1951 + * char); 1952 + * 1953 + * the arg1-5,arg7 will be passed by regs, and arg6 will 1954 + * by stack. 1955 + */ 1956 + if (nr_regs + arg_regs > 6) { 1957 + /* copy function arguments from origin stack frame 1958 + * into current stack frame. 1959 + * 1960 + * The starting address of the arguments on-stack 1961 + * is: 1962 + * rbp + 8(push rbp) + 1963 + * 8(return addr of origin call) + 1964 + * 8(return addr of the caller) 1965 + * which means: rbp + 24 1966 + */ 1967 + for (j = 0; j < arg_regs; j++) { 1968 + emit_ldx(prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 1969 + nr_stack_slots * 8 + 0x18); 1970 + emit_stx(prog, BPF_DW, BPF_REG_FP, BPF_REG_0, 1971 + -stack_size); 1972 + 1973 + if (!nr_stack_slots) 1974 + first_off = stack_size; 1975 + stack_size -= 8; 1976 + nr_stack_slots++; 1977 + } 1978 + } else { 1979 + /* Only copy the arguments on-stack to current 1980 + * 'stack_size' and ignore the regs, used to 1981 + * prepare the arguments on-stack for orign call. 1982 + */ 1983 + if (for_call_origin) { 1984 + nr_regs += arg_regs; 1985 + continue; 1986 + } 1987 + 1988 + /* copy the arguments from regs into stack */ 1989 + for (j = 0; j < arg_regs; j++) { 1990 + emit_stx(prog, BPF_DW, BPF_REG_FP, 1991 + nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + nr_regs, 1992 + -stack_size); 1993 + stack_size -= 8; 1994 + nr_regs++; 1995 + } 1877 1996 } 1878 - 1879 - emit_stx(prog, bytes_to_bpf_size(arg_size), 1880 - BPF_REG_FP, 1881 - i == 5 ? X86_REG_R9 : BPF_REG_1 + i, 1882 - -(stack_size - i * 8)); 1883 - 1884 - j = next_same_struct ? 
j : j + 1; 1885 1997 } 1998 + 1999 + clean_stack_garbage(m, prog, nr_stack_slots, first_off); 1886 2000 } 1887 2001 1888 - static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_regs, 2002 + static void restore_regs(const struct btf_func_model *m, u8 **prog, 1889 2003 int stack_size) 1890 2004 { 1891 - int i, j, arg_size; 1892 - bool next_same_struct = false; 2005 + int i, j, arg_regs, nr_regs = 0; 1893 2006 1894 2007 /* Restore function arguments from stack. 1895 2008 * For a function that accepts two pointers the sequence will be: 1896 2009 * EMIT4(0x48, 0x8B, 0x7D, 0xF0); mov rdi,QWORD PTR [rbp-0x10] 1897 2010 * EMIT4(0x48, 0x8B, 0x75, 0xF8); mov rsi,QWORD PTR [rbp-0x8] 2011 + * 2012 + * The logic here is similar to what we do in save_args() 1898 2013 */ 1899 - for (i = 0, j = 0; i < min(nr_regs, 6); i++) { 1900 - /* The arg_size is at most 16 bytes, enforced by the verifier. */ 1901 - arg_size = m->arg_size[j]; 1902 - if (arg_size > 8) { 1903 - arg_size = 8; 1904 - next_same_struct = !next_same_struct; 2014 + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) { 2015 + arg_regs = (m->arg_size[i] + 7) / 8; 2016 + if (nr_regs + arg_regs <= 6) { 2017 + for (j = 0; j < arg_regs; j++) { 2018 + emit_ldx(prog, BPF_DW, 2019 + nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + nr_regs, 2020 + BPF_REG_FP, 2021 + -stack_size); 2022 + stack_size -= 8; 2023 + nr_regs++; 2024 + } 2025 + } else { 2026 + stack_size -= 8 * arg_regs; 1905 2027 } 1906 2028 1907 - emit_ldx(prog, bytes_to_bpf_size(arg_size), 1908 - i == 5 ? X86_REG_R9 : BPF_REG_1 + i, 1909 - BPF_REG_FP, 1910 - -(stack_size - i * 8)); 1911 - 1912 - j = next_same_struct ? 
j : j + 1; 2029 + if (nr_regs >= 6) 2030 + break; 1913 2031 } 1914 2032 } 1915 2033 ··· 2056 1938 /* arg1: mov rdi, progs[i] */ 2057 1939 emit_mov_imm64(&prog, BPF_REG_1, (long) p >> 32, (u32) (long) p); 2058 1940 /* arg2: lea rsi, [rbp - ctx_cookie_off] */ 2059 - EMIT4(0x48, 0x8D, 0x75, -run_ctx_off); 1941 + if (!is_imm8(-run_ctx_off)) 1942 + EMIT3_off32(0x48, 0x8D, 0xB5, -run_ctx_off); 1943 + else 1944 + EMIT4(0x48, 0x8D, 0x75, -run_ctx_off); 2060 1945 2061 1946 if (emit_rsb_call(&prog, bpf_trampoline_enter(p), prog)) 2062 1947 return -EINVAL; ··· 2075 1954 emit_nops(&prog, 2); 2076 1955 2077 1956 /* arg1: lea rdi, [rbp - stack_size] */ 2078 - EMIT4(0x48, 0x8D, 0x7D, -stack_size); 1957 + if (!is_imm8(-stack_size)) 1958 + EMIT3_off32(0x48, 0x8D, 0xBD, -stack_size); 1959 + else 1960 + EMIT4(0x48, 0x8D, 0x7D, -stack_size); 2079 1961 /* arg2: progs[i]->insnsi for interpreter */ 2080 1962 if (!p->jited) 2081 1963 emit_mov_imm64(&prog, BPF_REG_2, ··· 2108 1984 /* arg2: mov rsi, rbx <- start time in nsec */ 2109 1985 emit_mov_reg(&prog, true, BPF_REG_2, BPF_REG_6); 2110 1986 /* arg3: lea rdx, [rbp - run_ctx_off] */ 2111 - EMIT4(0x48, 0x8D, 0x55, -run_ctx_off); 1987 + if (!is_imm8(-run_ctx_off)) 1988 + EMIT3_off32(0x48, 0x8D, 0x95, -run_ctx_off); 1989 + else 1990 + EMIT4(0x48, 0x8D, 0x55, -run_ctx_off); 2112 1991 if (emit_rsb_call(&prog, bpf_trampoline_exit(p), prog)) 2113 1992 return -EINVAL; 2114 1993 ··· 2263 2136 void *func_addr) 2264 2137 { 2265 2138 int i, ret, nr_regs = m->nr_args, stack_size = 0; 2266 - int regs_off, nregs_off, ip_off, run_ctx_off; 2139 + int regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off, rbx_off; 2267 2140 struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY]; 2268 2141 struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT]; 2269 2142 struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN]; ··· 2277 2150 if (m->arg_flags[i] & BTF_FMODEL_STRUCT_ARG) 2278 2151 nr_regs += (m->arg_size[i] + 7) / 8 - 1; 2279 2152 2280 - 
/* x86-64 supports up to 6 arguments. 7+ can be added in the future */ 2281 - if (nr_regs > 6) 2153 + /* x86-64 supports up to MAX_BPF_FUNC_ARGS arguments. 1-6 2154 + * are passed through regs, the remains are through stack. 2155 + */ 2156 + if (nr_regs > MAX_BPF_FUNC_ARGS) 2282 2157 return -ENOTSUPP; 2283 2158 2284 2159 /* Generated trampoline stack layout: ··· 2299 2170 * 2300 2171 * RBP - ip_off [ traced function ] BPF_TRAMP_F_IP_ARG flag 2301 2172 * 2173 + * RBP - rbx_off [ rbx value ] always 2174 + * 2302 2175 * RBP - run_ctx_off [ bpf_tramp_run_ctx ] 2176 + * 2177 + * [ stack_argN ] BPF_TRAMP_F_CALL_ORIG 2178 + * [ ... ] 2179 + * [ stack_arg2 ] 2180 + * RBP - arg_stack_off [ stack_arg1 ] 2303 2181 */ 2304 2182 2305 2183 /* room for return value of orig_call or fentry prog */ ··· 2326 2190 2327 2191 ip_off = stack_size; 2328 2192 2193 + stack_size += 8; 2194 + rbx_off = stack_size; 2195 + 2329 2196 stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7; 2330 2197 run_ctx_off = stack_size; 2198 + 2199 + if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG)) { 2200 + /* the space that used to pass arguments on-stack */ 2201 + stack_size += (nr_regs - get_nr_used_regs(m)) * 8; 2202 + /* make sure the stack pointer is 16-byte aligned if we 2203 + * need pass arguments on stack, which means 2204 + * [stack_size + 8(rbp) + 8(rip) + 8(origin rip)] 2205 + * should be 16-byte aligned. Following code depend on 2206 + * that stack_size is already 8-byte aligned. 2207 + */ 2208 + stack_size += (stack_size % 16) ? 
0 : 8; 2209 + } 2210 + 2211 + arg_stack_off = stack_size; 2331 2212 2332 2213 if (flags & BPF_TRAMP_F_SKIP_FRAME) { 2333 2214 /* skip patched call instruction and point orig_call to actual ··· 2365 2212 x86_call_depth_emit_accounting(&prog, NULL); 2366 2213 EMIT1(0x55); /* push rbp */ 2367 2214 EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */ 2368 - EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */ 2369 - EMIT1(0x53); /* push rbx */ 2215 + if (!is_imm8(stack_size)) 2216 + /* sub rsp, stack_size */ 2217 + EMIT3_off32(0x48, 0x81, 0xEC, stack_size); 2218 + else 2219 + /* sub rsp, stack_size */ 2220 + EMIT4(0x48, 0x83, 0xEC, stack_size); 2221 + /* mov QWORD PTR [rbp - rbx_off], rbx */ 2222 + emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off); 2370 2223 2371 2224 /* Store number of argument registers of the traced function: 2372 2225 * mov rax, nr_regs ··· 2390 2231 emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off); 2391 2232 } 2392 2233 2393 - save_regs(m, &prog, nr_regs, regs_off); 2234 + save_args(m, &prog, regs_off, false); 2394 2235 2395 2236 if (flags & BPF_TRAMP_F_CALL_ORIG) { 2396 2237 /* arg1: mov rdi, im */ ··· 2420 2261 } 2421 2262 2422 2263 if (flags & BPF_TRAMP_F_CALL_ORIG) { 2423 - restore_regs(m, &prog, nr_regs, regs_off); 2264 + restore_regs(m, &prog, regs_off); 2265 + save_args(m, &prog, arg_stack_off, true); 2424 2266 2425 2267 if (flags & BPF_TRAMP_F_ORIG_STACK) { 2426 2268 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8); ··· 2462 2302 } 2463 2303 2464 2304 if (flags & BPF_TRAMP_F_RESTORE_REGS) 2465 - restore_regs(m, &prog, nr_regs, regs_off); 2305 + restore_regs(m, &prog, regs_off); 2466 2306 2467 2307 /* This needs to be done regardless. 
If there were fmod_ret programs, 2468 2308 * the return value is only updated on the stack and still needs to be ··· 2481 2321 if (save_ret) 2482 2322 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8); 2483 2323 2484 - EMIT1(0x5B); /* pop rbx */ 2324 + emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off); 2485 2325 EMIT1(0xC9); /* leave */ 2486 2326 if (flags & BPF_TRAMP_F_SKIP_FRAME) 2487 2327 /* skip our return address and return to parent */
+1 -1
drivers/hid/bpf/entrypoints/Makefile
···
58 58
59 59   $(OUTPUT)/entrypoints.bpf.o: entrypoints.bpf.c $(OUTPUT)/vmlinux.h $(BPFOBJ) | $(OUTPUT)
60 60   	$(call msg,BPF,$@)
61     - 	$(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES) \
    61 + 	$(Q)$(CLANG) -g -O2 --target=bpf $(INCLUDES) \
62 62   		-c $(filter %.c,$^) -o $@ && \
63 63   	$(LLVM_STRIP) -g $@
64 64
+2 -2
include/linux/bpf-cgroup.h
···
199 199   #define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk, skb)			      \
200 200   ({									      \
201 201   	int __ret = 0;							      \
202     - 	if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk && sk == skb->sk) { \
    202 + 	if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk) {		      \
203 203   		typeof(sk) __sk = sk_to_full_sk(sk);			      \
204     - 		if (sk_fullsock(__sk) &&				      \
    204 + 		if (sk_fullsock(__sk) && __sk == skb_to_full_sk(skb) &&	      \
205 205   		    cgroup_bpf_sock_enabled(__sk, CGROUP_INET_EGRESS))	      \
206 206   			__ret = __cgroup_bpf_run_filter_skb(__sk, skb,	      \
207 207   						      CGROUP_INET_EGRESS);    \
+30
include/linux/bpf.h
··· 275 275 } owner; 276 276 bool bypass_spec_v1; 277 277 bool frozen; /* write-once; write-protected by freeze_mutex */ 278 + s64 __percpu *elem_count; 278 279 }; 279 280 280 281 static inline const char *btf_field_type_name(enum btf_field_type type) ··· 2040 2039 return __alloc_percpu_gfp(size, align, flags); 2041 2040 } 2042 2041 #endif 2042 + 2043 + static inline int 2044 + bpf_map_init_elem_count(struct bpf_map *map) 2045 + { 2046 + size_t size = sizeof(*map->elem_count), align = size; 2047 + gfp_t flags = GFP_USER | __GFP_NOWARN; 2048 + 2049 + map->elem_count = bpf_map_alloc_percpu(map, size, align, flags); 2050 + if (!map->elem_count) 2051 + return -ENOMEM; 2052 + 2053 + return 0; 2054 + } 2055 + 2056 + static inline void 2057 + bpf_map_free_elem_count(struct bpf_map *map) 2058 + { 2059 + free_percpu(map->elem_count); 2060 + } 2061 + 2062 + static inline void bpf_map_inc_elem_count(struct bpf_map *map) 2063 + { 2064 + this_cpu_inc(*map->elem_count); 2065 + } 2066 + 2067 + static inline void bpf_map_dec_elem_count(struct bpf_map *map) 2068 + { 2069 + this_cpu_dec(*map->elem_count); 2070 + } 2043 2071 2044 2072 extern int sysctl_unprivileged_bpf_disabled; 2045 2073
+2
include/linux/bpf_mem_alloc.h
···
27 27   /* kmalloc/kfree equivalent: */
28 28   void *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size);
29 29   void bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr);
   30 + void bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr);
30 31
31 32   /* kmem_cache_alloc/free equivalent: */
32 33   void *bpf_mem_cache_alloc(struct bpf_mem_alloc *ma);
33 34   void bpf_mem_cache_free(struct bpf_mem_alloc *ma, void *ptr);
   35 + void bpf_mem_cache_free_rcu(struct bpf_mem_alloc *ma, void *ptr);
34 36   void bpf_mem_cache_raw_free(void *ptr);
35 37   void *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags);
36 38
+2
include/linux/rcutiny.h
···
138 138   	return 0;
139 139   }
140 140
    141 + static inline void rcu_request_urgent_qs_task(struct task_struct *t) { }
    142 +
141 143   /*
142 144    * Take advantage of the fact that there is only one CPU, which
143 145    * allows us to ignore virtualization-based context switches.
+1
include/linux/rcutree.h
···
21 21   void rcu_note_context_switch(bool preempt);
22 22   int rcu_needs_cpu(void);
23 23   void rcu_cpu_stall_reset(void);
   24 + void rcu_request_urgent_qs_task(struct task_struct *t);
24 25
25 26   /*
26 27    * Note a virtualization-based context switch. This is simply a
+2 -1
include/linux/trace_events.h
···
867 867   extern void perf_uprobe_destroy(struct perf_event *event);
868 868   extern int bpf_get_uprobe_info(const struct perf_event *event,
869 869                                  u32 *fd_type, const char **filename,
870     -                                u64 *probe_offset, bool perf_type_tracepoint);
    870 +                                u64 *probe_offset, u64 *probe_addr,
    871 +                                bool perf_type_tracepoint);
871 872   #endif
872 873   extern int ftrace_profile_set_filter(struct perf_event *event, int event_id,
873 874                                        char *filter_str);
+40
include/uapi/linux/bpf.h
··· 1057 1057 MAX_BPF_LINK_TYPE, 1058 1058 }; 1059 1059 1060 + enum bpf_perf_event_type { 1061 + BPF_PERF_EVENT_UNSPEC = 0, 1062 + BPF_PERF_EVENT_UPROBE = 1, 1063 + BPF_PERF_EVENT_URETPROBE = 2, 1064 + BPF_PERF_EVENT_KPROBE = 3, 1065 + BPF_PERF_EVENT_KRETPROBE = 4, 1066 + BPF_PERF_EVENT_TRACEPOINT = 5, 1067 + BPF_PERF_EVENT_EVENT = 6, 1068 + }; 1069 + 1060 1070 /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command 1061 1071 * 1062 1072 * NONE(default): No further bpf programs allowed in the subtree. ··· 6449 6439 __s32 priority; 6450 6440 __u32 flags; 6451 6441 } netfilter; 6442 + struct { 6443 + __aligned_u64 addrs; 6444 + __u32 count; /* in/out: kprobe_multi function count */ 6445 + __u32 flags; 6446 + } kprobe_multi; 6447 + struct { 6448 + __u32 type; /* enum bpf_perf_event_type */ 6449 + __u32 :32; 6450 + union { 6451 + struct { 6452 + __aligned_u64 file_name; /* in/out */ 6453 + __u32 name_len; 6454 + __u32 offset; /* offset from file_name */ 6455 + } uprobe; /* BPF_PERF_EVENT_UPROBE, BPF_PERF_EVENT_URETPROBE */ 6456 + struct { 6457 + __aligned_u64 func_name; /* in/out */ 6458 + __u32 name_len; 6459 + __u32 offset; /* offset from func_name */ 6460 + __u64 addr; 6461 + } kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */ 6462 + struct { 6463 + __aligned_u64 tp_name; /* in/out */ 6464 + __u32 name_len; 6465 + } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */ 6466 + struct { 6467 + __u64 config; 6468 + __u32 type; 6469 + } event; /* BPF_PERF_EVENT_EVENT */ 6470 + }; 6471 + } perf_event; 6452 6472 }; 6453 6473 } __attribute__((aligned(8))); 6454 6474
+12 -12
kernel/bpf/btf.c
··· 6133 6133 const char *tname, *mname, *tag_value; 6134 6134 u32 vlen, elem_id, mid; 6135 6135 6136 - *flag = 0; 6137 6136 again: 6137 + if (btf_type_is_modifier(t)) 6138 + t = btf_type_skip_modifiers(btf, t->type, NULL); 6138 6139 tname = __btf_name_by_offset(btf, t->name_off); 6139 6140 if (!btf_type_is_struct(t)) { 6140 6141 bpf_log(log, "Type '%s' is not a struct\n", tname); ··· 6143 6142 } 6144 6143 6145 6144 vlen = btf_type_vlen(t); 6145 + if (BTF_INFO_KIND(t->info) == BTF_KIND_UNION && vlen != 1 && !(*flag & PTR_UNTRUSTED)) 6146 + /* 6147 + * walking unions yields untrusted pointers 6148 + * with exception of __bpf_md_ptr and other 6149 + * unions with a single member 6150 + */ 6151 + *flag |= PTR_UNTRUSTED; 6152 + 6146 6153 if (off + size > t->size) { 6147 6154 /* If the last element is a variable size array, we may 6148 6155 * need to relax the rule. ··· 6311 6302 * of this field or inside of this struct 6312 6303 */ 6313 6304 if (btf_type_is_struct(mtype)) { 6314 - if (BTF_INFO_KIND(mtype->info) == BTF_KIND_UNION && 6315 - btf_type_vlen(mtype) != 1) 6316 - /* 6317 - * walking unions yields untrusted pointers 6318 - * with exception of __bpf_md_ptr and other 6319 - * unions with a single member 6320 - */ 6321 - *flag |= PTR_UNTRUSTED; 6322 - 6323 6305 /* our field must be inside that union or struct */ 6324 6306 t = mtype; 6325 6307 ··· 6368 6368 * that also allows using an array of int as a scratch 6369 6369 * space. e.g. skb->cb[]. 6370 6370 */ 6371 - if (off + size > mtrue_end) { 6371 + if (off + size > mtrue_end && !(*flag & PTR_UNTRUSTED)) { 6372 6372 bpf_log(log, 6373 6373 "access beyond the end of member %s (mend:%u) in struct %s with off %u size %u\n", 6374 6374 mname, mtrue_end, tname, off, size); ··· 6476 6476 bool strict) 6477 6477 { 6478 6478 const struct btf_type *type; 6479 - enum bpf_type_flag flag; 6479 + enum bpf_type_flag flag = 0; 6480 6480 int err; 6481 6481 6482 6482 /* Are we already done? */
+6 -14
kernel/bpf/cpumask.c
··· 9 9 /** 10 10 * struct bpf_cpumask - refcounted BPF cpumask wrapper structure 11 11 * @cpumask: The actual cpumask embedded in the struct. 12 - * @rcu: The RCU head used to free the cpumask with RCU safety. 13 12 * @usage: Object reference counter. When the refcount goes to 0, the 14 13 * memory is released back to the BPF allocator, which provides 15 14 * RCU safety. ··· 24 25 */ 25 26 struct bpf_cpumask { 26 27 cpumask_t cpumask; 27 - struct rcu_head rcu; 28 28 refcount_t usage; 29 29 }; 30 30 ··· 80 82 return cpumask; 81 83 } 82 84 83 - static void cpumask_free_cb(struct rcu_head *head) 84 - { 85 - struct bpf_cpumask *cpumask; 86 - 87 - cpumask = container_of(head, struct bpf_cpumask, rcu); 88 - migrate_disable(); 89 - bpf_mem_cache_free(&bpf_cpumask_ma, cpumask); 90 - migrate_enable(); 91 - } 92 - 93 85 /** 94 86 * bpf_cpumask_release() - Release a previously acquired BPF cpumask. 95 87 * @cpumask: The cpumask being released. ··· 90 102 */ 91 103 __bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask) 92 104 { 93 - if (refcount_dec_and_test(&cpumask->usage)) 94 - call_rcu(&cpumask->rcu, cpumask_free_cb); 105 + if (!refcount_dec_and_test(&cpumask->usage)) 106 + return; 107 + 108 + migrate_disable(); 109 + bpf_mem_cache_free_rcu(&bpf_cpumask_ma, cpumask); 110 + migrate_enable(); 95 111 } 96 112 97 113 /**
+20 -2
kernel/bpf/hashtab.c
··· 302 302 struct htab_elem *l; 303 303 304 304 if (node) { 305 + bpf_map_inc_elem_count(&htab->map); 305 306 l = container_of(node, struct htab_elem, lru_node); 306 307 memcpy(l->key, key, htab->map.key_size); 307 308 return l; ··· 511 510 htab->n_buckets > U32_MAX / sizeof(struct bucket)) 512 511 goto free_htab; 513 512 513 + err = bpf_map_init_elem_count(&htab->map); 514 + if (err) 515 + goto free_htab; 516 + 514 517 err = -ENOMEM; 515 518 htab->buckets = bpf_map_area_alloc(htab->n_buckets * 516 519 sizeof(struct bucket), 517 520 htab->map.numa_node); 518 521 if (!htab->buckets) 519 - goto free_htab; 522 + goto free_elem_count; 520 523 521 524 for (i = 0; i < HASHTAB_MAP_LOCK_COUNT; i++) { 522 525 htab->map_locked[i] = bpf_map_alloc_percpu(&htab->map, ··· 598 593 bpf_map_area_free(htab->buckets); 599 594 bpf_mem_alloc_destroy(&htab->pcpu_ma); 600 595 bpf_mem_alloc_destroy(&htab->ma); 596 + free_elem_count: 597 + bpf_map_free_elem_count(&htab->map); 601 598 free_htab: 602 599 lockdep_unregister_key(&htab->lockdep_key); 603 600 bpf_map_area_free(htab); ··· 811 804 if (l == tgt_l) { 812 805 hlist_nulls_del_rcu(&l->hash_node); 813 806 check_and_free_fields(htab, l); 807 + bpf_map_dec_elem_count(&htab->map); 814 808 break; 815 809 } 816 810 ··· 908 900 909 901 static void inc_elem_count(struct bpf_htab *htab) 910 902 { 903 + bpf_map_inc_elem_count(&htab->map); 904 + 911 905 if (htab->use_percpu_counter) 912 906 percpu_counter_add_batch(&htab->pcount, 1, PERCPU_COUNTER_BATCH); 913 907 else ··· 918 908 919 909 static void dec_elem_count(struct bpf_htab *htab) 920 910 { 911 + bpf_map_dec_elem_count(&htab->map); 912 + 921 913 if (htab->use_percpu_counter) 922 914 percpu_counter_add_batch(&htab->pcount, -1, PERCPU_COUNTER_BATCH); 923 915 else ··· 932 920 htab_put_fd_value(htab, l); 933 921 934 922 if (htab_is_prealloc(htab)) { 923 + bpf_map_dec_elem_count(&htab->map); 935 924 check_and_free_fields(htab, l); 936 925 __pcpu_freelist_push(&htab->freelist, &l->fnode); 937 
926 } else { ··· 1013 1000 if (!l) 1014 1001 return ERR_PTR(-E2BIG); 1015 1002 l_new = container_of(l, struct htab_elem, fnode); 1003 + bpf_map_inc_elem_count(&htab->map); 1016 1004 } 1017 1005 } else { 1018 1006 if (is_map_full(htab)) ··· 1182 1168 static void htab_lru_push_free(struct bpf_htab *htab, struct htab_elem *elem) 1183 1169 { 1184 1170 check_and_free_fields(htab, elem); 1171 + bpf_map_dec_elem_count(&htab->map); 1185 1172 bpf_lru_push_free(&htab->lru, &elem->lru_node); 1186 1173 } 1187 1174 ··· 1372 1357 err: 1373 1358 htab_unlock_bucket(htab, b, hash, flags); 1374 1359 err_lock_bucket: 1375 - if (l_new) 1360 + if (l_new) { 1361 + bpf_map_dec_elem_count(&htab->map); 1376 1362 bpf_lru_push_free(&htab->lru, &l_new->lru_node); 1363 + } 1377 1364 return ret; 1378 1365 } 1379 1366 ··· 1540 1523 prealloc_destroy(htab); 1541 1524 } 1542 1525 1526 + bpf_map_free_elem_count(map); 1543 1527 free_percpu(htab->extra_elems); 1544 1528 bpf_map_area_free(htab->buckets); 1545 1529 bpf_mem_alloc_destroy(&htab->pcpu_ma);
+38 -1
kernel/bpf/map_iter.c
··· 93 93 .ctx_arg_info_size = 1, 94 94 .ctx_arg_info = { 95 95 { offsetof(struct bpf_iter__bpf_map, map), 96 - PTR_TO_BTF_ID_OR_NULL }, 96 + PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED }, 97 97 }, 98 98 .seq_info = &bpf_map_seq_info, 99 99 }; ··· 193 193 } 194 194 195 195 late_initcall(bpf_map_iter_init); 196 + 197 + __diag_push(); 198 + __diag_ignore_all("-Wmissing-prototypes", 199 + "Global functions as their definitions will be in vmlinux BTF"); 200 + 201 + __bpf_kfunc s64 bpf_map_sum_elem_count(struct bpf_map *map) 202 + { 203 + s64 *pcount; 204 + s64 ret = 0; 205 + int cpu; 206 + 207 + if (!map || !map->elem_count) 208 + return 0; 209 + 210 + for_each_possible_cpu(cpu) { 211 + pcount = per_cpu_ptr(map->elem_count, cpu); 212 + ret += READ_ONCE(*pcount); 213 + } 214 + return ret; 215 + } 216 + 217 + __diag_pop(); 218 + 219 + BTF_SET8_START(bpf_map_iter_kfunc_ids) 220 + BTF_ID_FLAGS(func, bpf_map_sum_elem_count, KF_TRUSTED_ARGS) 221 + BTF_SET8_END(bpf_map_iter_kfunc_ids) 222 + 223 + static const struct btf_kfunc_id_set bpf_map_iter_kfunc_set = { 224 + .owner = THIS_MODULE, 225 + .set = &bpf_map_iter_kfunc_ids, 226 + }; 227 + 228 + static int init_subsystem(void) 229 + { 230 + return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_map_iter_kfunc_set); 231 + } 232 + late_initcall(init_subsystem);
+286 -92
kernel/bpf/memalloc.c
··· 98 98 int free_cnt; 99 99 int low_watermark, high_watermark, batch; 100 100 int percpu_size; 101 + bool draining; 102 + struct bpf_mem_cache *tgt; 101 103 102 - struct rcu_head rcu; 104 + /* list of objects to be freed after RCU GP */ 103 105 struct llist_head free_by_rcu; 106 + struct llist_node *free_by_rcu_tail; 104 107 struct llist_head waiting_for_gp; 108 + struct llist_node *waiting_for_gp_tail; 109 + struct rcu_head rcu; 105 110 atomic_t call_rcu_in_progress; 111 + struct llist_head free_llist_extra_rcu; 112 + 113 + /* list of objects to be freed after RCU tasks trace GP */ 114 + struct llist_head free_by_rcu_ttrace; 115 + struct llist_head waiting_for_gp_ttrace; 116 + struct rcu_head rcu_ttrace; 117 + atomic_t call_rcu_ttrace_in_progress; 106 118 }; 107 119 108 120 struct bpf_mem_caches { ··· 165 153 #endif 166 154 } 167 155 156 + static void inc_active(struct bpf_mem_cache *c, unsigned long *flags) 157 + { 158 + if (IS_ENABLED(CONFIG_PREEMPT_RT)) 159 + /* In RT irq_work runs in per-cpu kthread, so disable 160 + * interrupts to avoid preemption and interrupts and 161 + * reduce the chance of bpf prog executing on this cpu 162 + * when active counter is busy. 163 + */ 164 + local_irq_save(*flags); 165 + /* alloc_bulk runs from irq_work which will not preempt a bpf 166 + * program that does unit_alloc/unit_free since IRQs are 167 + * disabled there. There is no race to increment 'active' 168 + * counter. It protects free_llist from corruption in case NMI 169 + * bpf prog preempted this loop. 
170 + */ 171 + WARN_ON_ONCE(local_inc_return(&c->active) != 1); 172 + } 173 + 174 + static void dec_active(struct bpf_mem_cache *c, unsigned long flags) 175 + { 176 + local_dec(&c->active); 177 + if (IS_ENABLED(CONFIG_PREEMPT_RT)) 178 + local_irq_restore(flags); 179 + } 180 + 181 + static void add_obj_to_free_list(struct bpf_mem_cache *c, void *obj) 182 + { 183 + unsigned long flags; 184 + 185 + inc_active(c, &flags); 186 + __llist_add(obj, &c->free_llist); 187 + c->free_cnt++; 188 + dec_active(c, flags); 189 + } 190 + 168 191 /* Mostly runs from irq_work except __init phase. */ 169 192 static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node) 170 193 { 171 194 struct mem_cgroup *memcg = NULL, *old_memcg; 172 - unsigned long flags; 173 195 void *obj; 174 196 int i; 175 197 176 - memcg = get_memcg(c); 177 - old_memcg = set_active_memcg(memcg); 178 198 for (i = 0; i < cnt; i++) { 179 199 /* 180 - * free_by_rcu is only manipulated by irq work refill_work(). 181 - * IRQ works on the same CPU are called sequentially, so it is 182 - * safe to use __llist_del_first() here. If alloc_bulk() is 183 - * invoked by the initial prefill, there will be no running 184 - * refill_work(), so __llist_del_first() is fine as well. 185 - * 186 - * In most cases, objects on free_by_rcu are from the same CPU. 187 - * If some objects come from other CPUs, it doesn't incur any 188 - * harm because NUMA_NO_NODE means the preference for current 189 - * numa node and it is not a guarantee. 200 + * For every 'c' llist_del_first(&c->free_by_rcu_ttrace); is 201 + * done only by one CPU == current CPU. Other CPUs might 202 + * llist_add() and llist_del_all() in parallel. 190 203 */ 191 - obj = __llist_del_first(&c->free_by_rcu); 192 - if (!obj) { 193 - /* Allocate, but don't deplete atomic reserves that typical 194 - * GFP_ATOMIC would do. irq_work runs on this cpu and kmalloc 195 - * will allocate from the current numa node which is what we 196 - * want here. 
197 - */ 198 - obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT); 199 - if (!obj) 200 - break; 201 - } 202 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 203 - /* In RT irq_work runs in per-cpu kthread, so disable 204 - * interrupts to avoid preemption and interrupts and 205 - * reduce the chance of bpf prog executing on this cpu 206 - * when active counter is busy. 207 - */ 208 - local_irq_save(flags); 209 - /* alloc_bulk runs from irq_work which will not preempt a bpf 210 - * program that does unit_alloc/unit_free since IRQs are 211 - * disabled there. There is no race to increment 'active' 212 - * counter. It protects free_llist from corruption in case NMI 213 - * bpf prog preempted this loop. 204 + obj = llist_del_first(&c->free_by_rcu_ttrace); 205 + if (!obj) 206 + break; 207 + add_obj_to_free_list(c, obj); 208 + } 209 + if (i >= cnt) 210 + return; 211 + 212 + for (; i < cnt; i++) { 213 + obj = llist_del_first(&c->waiting_for_gp_ttrace); 214 + if (!obj) 215 + break; 216 + add_obj_to_free_list(c, obj); 217 + } 218 + if (i >= cnt) 219 + return; 220 + 221 + memcg = get_memcg(c); 222 + old_memcg = set_active_memcg(memcg); 223 + for (; i < cnt; i++) { 224 + /* Allocate, but don't deplete atomic reserves that typical 225 + * GFP_ATOMIC would do. irq_work runs on this cpu and kmalloc 226 + * will allocate from the current numa node which is what we 227 + * want here. 
214 228 */ 215 - WARN_ON_ONCE(local_inc_return(&c->active) != 1); 216 - __llist_add(obj, &c->free_llist); 217 - c->free_cnt++; 218 - local_dec(&c->active); 219 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 220 - local_irq_restore(flags); 229 + obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT); 230 + if (!obj) 231 + break; 232 + add_obj_to_free_list(c, obj); 221 233 } 222 234 set_active_memcg(old_memcg); 223 235 mem_cgroup_put(memcg); ··· 258 222 kfree(obj); 259 223 } 260 224 261 - static void free_all(struct llist_node *llnode, bool percpu) 225 + static int free_all(struct llist_node *llnode, bool percpu) 262 226 { 263 227 struct llist_node *pos, *t; 228 + int cnt = 0; 264 229 265 - llist_for_each_safe(pos, t, llnode) 230 + llist_for_each_safe(pos, t, llnode) { 266 231 free_one(pos, percpu); 232 + cnt++; 233 + } 234 + return cnt; 267 235 } 268 236 269 237 static void __free_rcu(struct rcu_head *head) 270 238 { 271 - struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu); 239 + struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu_ttrace); 272 240 273 - free_all(llist_del_all(&c->waiting_for_gp), !!c->percpu_size); 274 - atomic_set(&c->call_rcu_in_progress, 0); 241 + free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size); 242 + atomic_set(&c->call_rcu_ttrace_in_progress, 0); 275 243 } 276 244 277 245 static void __free_rcu_tasks_trace(struct rcu_head *head) ··· 294 254 struct llist_node *llnode = obj; 295 255 296 256 /* bpf_mem_cache is a per-cpu object. Freeing happens in irq_work. 297 - * Nothing races to add to free_by_rcu list. 257 + * Nothing races to add to free_by_rcu_ttrace list. 
298 258 */ 299 - __llist_add(llnode, &c->free_by_rcu); 259 + llist_add(llnode, &c->free_by_rcu_ttrace); 300 260 } 301 261 302 - static void do_call_rcu(struct bpf_mem_cache *c) 262 + static void do_call_rcu_ttrace(struct bpf_mem_cache *c) 303 263 { 304 264 struct llist_node *llnode, *t; 305 265 306 - if (atomic_xchg(&c->call_rcu_in_progress, 1)) 266 + if (atomic_xchg(&c->call_rcu_ttrace_in_progress, 1)) { 267 + if (unlikely(READ_ONCE(c->draining))) { 268 + llnode = llist_del_all(&c->free_by_rcu_ttrace); 269 + free_all(llnode, !!c->percpu_size); 270 + } 307 271 return; 272 + } 308 273 309 - WARN_ON_ONCE(!llist_empty(&c->waiting_for_gp)); 310 - llist_for_each_safe(llnode, t, __llist_del_all(&c->free_by_rcu)) 311 - /* There is no concurrent __llist_add(waiting_for_gp) access. 312 - * It doesn't race with llist_del_all either. 313 - * But there could be two concurrent llist_del_all(waiting_for_gp): 314 - * from __free_rcu() and from drain_mem_cache(). 315 - */ 316 - __llist_add(llnode, &c->waiting_for_gp); 274 + WARN_ON_ONCE(!llist_empty(&c->waiting_for_gp_ttrace)); 275 + llist_for_each_safe(llnode, t, llist_del_all(&c->free_by_rcu_ttrace)) 276 + llist_add(llnode, &c->waiting_for_gp_ttrace); 277 + 278 + if (unlikely(READ_ONCE(c->draining))) { 279 + __free_rcu(&c->rcu_ttrace); 280 + return; 281 + } 282 + 317 283 /* Use call_rcu_tasks_trace() to wait for sleepable progs to finish. 318 284 * If RCU Tasks Trace grace period implies RCU grace period, free 319 285 * these elements directly, else use call_rcu() to wait for normal 320 286 * progs to finish and finally do free_one() on each element. 
321 287 */ 322 - call_rcu_tasks_trace(&c->rcu, __free_rcu_tasks_trace); 288 + call_rcu_tasks_trace(&c->rcu_ttrace, __free_rcu_tasks_trace); 323 289 } 324 290 325 291 static void free_bulk(struct bpf_mem_cache *c) 326 292 { 293 + struct bpf_mem_cache *tgt = c->tgt; 327 294 struct llist_node *llnode, *t; 328 295 unsigned long flags; 329 296 int cnt; 330 297 298 + WARN_ON_ONCE(tgt->unit_size != c->unit_size); 299 + 331 300 do { 332 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 333 - local_irq_save(flags); 334 - WARN_ON_ONCE(local_inc_return(&c->active) != 1); 301 + inc_active(c, &flags); 335 302 llnode = __llist_del_first(&c->free_llist); 336 303 if (llnode) 337 304 cnt = --c->free_cnt; 338 305 else 339 306 cnt = 0; 340 - local_dec(&c->active); 341 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 342 - local_irq_restore(flags); 307 + dec_active(c, flags); 343 308 if (llnode) 344 - enque_to_free(c, llnode); 309 + enque_to_free(tgt, llnode); 345 310 } while (cnt > (c->high_watermark + c->low_watermark) / 2); 346 311 347 312 /* and drain free_llist_extra */ 348 313 llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra)) 349 - enque_to_free(c, llnode); 350 - do_call_rcu(c); 314 + enque_to_free(tgt, llnode); 315 + do_call_rcu_ttrace(tgt); 316 + } 317 + 318 + static void __free_by_rcu(struct rcu_head *head) 319 + { 320 + struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu); 321 + struct bpf_mem_cache *tgt = c->tgt; 322 + struct llist_node *llnode; 323 + 324 + llnode = llist_del_all(&c->waiting_for_gp); 325 + if (!llnode) 326 + goto out; 327 + 328 + llist_add_batch(llnode, c->waiting_for_gp_tail, &tgt->free_by_rcu_ttrace); 329 + 330 + /* Objects went through regular RCU GP. 
Send them to RCU tasks trace */ 331 + do_call_rcu_ttrace(tgt); 332 + out: 333 + atomic_set(&c->call_rcu_in_progress, 0); 334 + } 335 + 336 + static void check_free_by_rcu(struct bpf_mem_cache *c) 337 + { 338 + struct llist_node *llnode, *t; 339 + unsigned long flags; 340 + 341 + /* drain free_llist_extra_rcu */ 342 + if (unlikely(!llist_empty(&c->free_llist_extra_rcu))) { 343 + inc_active(c, &flags); 344 + llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra_rcu)) 345 + if (__llist_add(llnode, &c->free_by_rcu)) 346 + c->free_by_rcu_tail = llnode; 347 + dec_active(c, flags); 348 + } 349 + 350 + if (llist_empty(&c->free_by_rcu)) 351 + return; 352 + 353 + if (atomic_xchg(&c->call_rcu_in_progress, 1)) { 354 + /* 355 + * Instead of kmalloc-ing new rcu_head and triggering 10k 356 + * call_rcu() to hit rcutree.qhimark and force RCU to notice 357 + * the overload just ask RCU to hurry up. There could be many 358 + * objects in free_by_rcu list. 359 + * This hint reduces memory consumption for an artificial 360 + * benchmark from 2 Gbyte to 150 Mbyte. 
361 + */ 362 + rcu_request_urgent_qs_task(current); 363 + return; 364 + } 365 + 366 + WARN_ON_ONCE(!llist_empty(&c->waiting_for_gp)); 367 + 368 + inc_active(c, &flags); 369 + WRITE_ONCE(c->waiting_for_gp.first, __llist_del_all(&c->free_by_rcu)); 370 + c->waiting_for_gp_tail = c->free_by_rcu_tail; 371 + dec_active(c, flags); 372 + 373 + if (unlikely(READ_ONCE(c->draining))) { 374 + free_all(llist_del_all(&c->waiting_for_gp), !!c->percpu_size); 375 + atomic_set(&c->call_rcu_in_progress, 0); 376 + } else { 377 + call_rcu_hurry(&c->rcu, __free_by_rcu); 378 + } 351 379 } 352 380 353 381 static void bpf_mem_refill(struct irq_work *work) ··· 432 324 alloc_bulk(c, c->batch, NUMA_NO_NODE); 433 325 else if (cnt > c->high_watermark) 434 326 free_bulk(c); 327 + 328 + check_free_by_rcu(c); 435 329 } 436 330 437 331 static void notrace irq_work_raise(struct bpf_mem_cache *c) ··· 516 406 c->unit_size = unit_size; 517 407 c->objcg = objcg; 518 408 c->percpu_size = percpu_size; 409 + c->tgt = c; 519 410 prefill_mem_cache(c, cpu); 520 411 } 521 412 ma->cache = pc; ··· 539 428 c = &cc->cache[i]; 540 429 c->unit_size = sizes[i]; 541 430 c->objcg = objcg; 431 + c->tgt = c; 542 432 prefill_mem_cache(c, cpu); 543 433 } 544 434 } ··· 553 441 554 442 /* No progs are using this bpf_mem_cache, but htab_map_free() called 555 443 * bpf_mem_cache_free() for all remaining elements and they can be in 556 - * free_by_rcu or in waiting_for_gp lists, so drain those lists now. 444 + * free_by_rcu_ttrace or in waiting_for_gp_ttrace lists, so drain those lists now. 557 445 * 558 - * Except for waiting_for_gp list, there are no concurrent operations 446 + * Except for waiting_for_gp_ttrace list, there are no concurrent operations 559 447 * on these lists, so it is safe to use __llist_del_all(). 
560 448 */ 561 - free_all(__llist_del_all(&c->free_by_rcu), percpu); 562 - free_all(llist_del_all(&c->waiting_for_gp), percpu); 449 + free_all(llist_del_all(&c->free_by_rcu_ttrace), percpu); 450 + free_all(llist_del_all(&c->waiting_for_gp_ttrace), percpu); 563 451 free_all(__llist_del_all(&c->free_llist), percpu); 564 452 free_all(__llist_del_all(&c->free_llist_extra), percpu); 453 + free_all(__llist_del_all(&c->free_by_rcu), percpu); 454 + free_all(__llist_del_all(&c->free_llist_extra_rcu), percpu); 455 + free_all(llist_del_all(&c->waiting_for_gp), percpu); 456 + } 457 + 458 + static void check_mem_cache(struct bpf_mem_cache *c) 459 + { 460 + WARN_ON_ONCE(!llist_empty(&c->free_by_rcu_ttrace)); 461 + WARN_ON_ONCE(!llist_empty(&c->waiting_for_gp_ttrace)); 462 + WARN_ON_ONCE(!llist_empty(&c->free_llist)); 463 + WARN_ON_ONCE(!llist_empty(&c->free_llist_extra)); 464 + WARN_ON_ONCE(!llist_empty(&c->free_by_rcu)); 465 + WARN_ON_ONCE(!llist_empty(&c->free_llist_extra_rcu)); 466 + WARN_ON_ONCE(!llist_empty(&c->waiting_for_gp)); 467 + } 468 + 469 + static void check_leaked_objs(struct bpf_mem_alloc *ma) 470 + { 471 + struct bpf_mem_caches *cc; 472 + struct bpf_mem_cache *c; 473 + int cpu, i; 474 + 475 + if (ma->cache) { 476 + for_each_possible_cpu(cpu) { 477 + c = per_cpu_ptr(ma->cache, cpu); 478 + check_mem_cache(c); 479 + } 480 + } 481 + if (ma->caches) { 482 + for_each_possible_cpu(cpu) { 483 + cc = per_cpu_ptr(ma->caches, cpu); 484 + for (i = 0; i < NUM_CACHES; i++) { 485 + c = &cc->cache[i]; 486 + check_mem_cache(c); 487 + } 488 + } 489 + } 565 490 } 566 491 567 492 static void free_mem_alloc_no_barrier(struct bpf_mem_alloc *ma) 568 493 { 494 + check_leaked_objs(ma); 569 495 free_percpu(ma->cache); 570 496 free_percpu(ma->caches); 571 497 ma->cache = NULL; ··· 612 462 613 463 static void free_mem_alloc(struct bpf_mem_alloc *ma) 614 464 { 615 - /* waiting_for_gp lists was drained, but __free_rcu might 616 - * still execute. 
Wait for it now before we freeing percpu caches. 465 + /* waiting_for_gp[_ttrace] lists were drained, but RCU callbacks 466 + * might still execute. Wait for them. 617 467 * 618 468 * rcu_barrier_tasks_trace() doesn't imply synchronize_rcu_tasks_trace(), 619 469 * but rcu_barrier_tasks_trace() and rcu_barrier() below are only used ··· 622 472 * rcu_trace_implies_rcu_gp(), it will be OK to skip rcu_barrier() by 623 473 * using rcu_trace_implies_rcu_gp() as well. 624 474 */ 625 - rcu_barrier_tasks_trace(); 475 + rcu_barrier(); /* wait for __free_by_rcu */ 476 + rcu_barrier_tasks_trace(); /* wait for __free_rcu */ 626 477 if (!rcu_trace_implies_rcu_gp()) 627 478 rcu_barrier(); 628 479 free_mem_alloc_no_barrier(ma); ··· 649 498 return; 650 499 } 651 500 652 - copy = kmalloc(sizeof(*ma), GFP_KERNEL); 501 + copy = kmemdup(ma, sizeof(*ma), GFP_KERNEL); 653 502 if (!copy) { 654 503 /* Slow path with inline barrier-s */ 655 504 free_mem_alloc(ma); ··· 657 506 } 658 507 659 508 /* Defer barriers into worker to let the rest of map memory to be freed */ 660 - copy->cache = ma->cache; 661 - ma->cache = NULL; 662 - copy->caches = ma->caches; 663 - ma->caches = NULL; 509 + memset(ma, 0, sizeof(*ma)); 664 510 INIT_WORK(&copy->work, free_mem_alloc_deferred); 665 511 queue_work(system_unbound_wq, &copy->work); 666 512 } ··· 672 524 rcu_in_progress = 0; 673 525 for_each_possible_cpu(cpu) { 674 526 c = per_cpu_ptr(ma->cache, cpu); 675 - /* 676 - * refill_work may be unfinished for PREEMPT_RT kernel 677 - * in which irq work is invoked in a per-CPU RT thread. 678 - * It is also possible for kernel with 679 - * arch_irq_work_has_interrupt() being false and irq 680 - * work is invoked in timer interrupt. So waiting for 681 - * the completion of irq work to ease the handling of 682 - * concurrency. 
683 - */ 527 + WRITE_ONCE(c->draining, true); 684 528 irq_work_sync(&c->refill_work); 685 529 drain_mem_cache(c); 530 + rcu_in_progress += atomic_read(&c->call_rcu_ttrace_in_progress); 686 531 rcu_in_progress += atomic_read(&c->call_rcu_in_progress); 687 532 } 688 533 /* objcg is the same across cpus */ ··· 689 548 cc = per_cpu_ptr(ma->caches, cpu); 690 549 for (i = 0; i < NUM_CACHES; i++) { 691 550 c = &cc->cache[i]; 551 + WRITE_ONCE(c->draining, true); 692 552 irq_work_sync(&c->refill_work); 693 553 drain_mem_cache(c); 554 + rcu_in_progress += atomic_read(&c->call_rcu_ttrace_in_progress); 694 555 rcu_in_progress += atomic_read(&c->call_rcu_in_progress); 695 556 } 696 557 } ··· 724 581 local_irq_save(flags); 725 582 if (local_inc_return(&c->active) == 1) { 726 583 llnode = __llist_del_first(&c->free_llist); 727 - if (llnode) 584 + if (llnode) { 728 585 cnt = --c->free_cnt; 586 + *(struct bpf_mem_cache **)llnode = c; 587 + } 729 588 } 730 589 local_dec(&c->active); 731 590 local_irq_restore(flags); ··· 751 606 752 607 BUILD_BUG_ON(LLIST_NODE_SZ > 8); 753 608 609 + /* 610 + * Remember bpf_mem_cache that allocated this object. 611 + * The hint is not accurate. 
612 + */ 613 + c->tgt = *(struct bpf_mem_cache **)llnode; 614 + 754 615 local_irq_save(flags); 755 616 if (local_inc_return(&c->active) == 1) { 756 617 __llist_add(llnode, &c->free_llist); ··· 775 624 776 625 if (cnt > c->high_watermark) 777 626 /* free few objects from current cpu into global kmalloc pool */ 627 + irq_work_raise(c); 628 + } 629 + 630 + static void notrace unit_free_rcu(struct bpf_mem_cache *c, void *ptr) 631 + { 632 + struct llist_node *llnode = ptr - LLIST_NODE_SZ; 633 + unsigned long flags; 634 + 635 + c->tgt = *(struct bpf_mem_cache **)llnode; 636 + 637 + local_irq_save(flags); 638 + if (local_inc_return(&c->active) == 1) { 639 + if (__llist_add(llnode, &c->free_by_rcu)) 640 + c->free_by_rcu_tail = llnode; 641 + } else { 642 + llist_add(llnode, &c->free_llist_extra_rcu); 643 + } 644 + local_dec(&c->active); 645 + local_irq_restore(flags); 646 + 647 + if (!atomic_read(&c->call_rcu_in_progress)) 778 648 irq_work_raise(c); 779 649 } 780 650 ··· 832 660 unit_free(this_cpu_ptr(ma->caches)->cache + idx, ptr); 833 661 } 834 662 663 + void notrace bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr) 664 + { 665 + int idx; 666 + 667 + if (!ptr) 668 + return; 669 + 670 + idx = bpf_mem_cache_idx(ksize(ptr - LLIST_NODE_SZ)); 671 + if (idx < 0) 672 + return; 673 + 674 + unit_free_rcu(this_cpu_ptr(ma->caches)->cache + idx, ptr); 675 + } 676 + 835 677 void notrace *bpf_mem_cache_alloc(struct bpf_mem_alloc *ma) 836 678 { 837 679 void *ret; ··· 860 674 return; 861 675 862 676 unit_free(this_cpu_ptr(ma->cache), ptr); 677 + } 678 + 679 + void notrace bpf_mem_cache_free_rcu(struct bpf_mem_alloc *ma, void *ptr) 680 + { 681 + if (!ptr) 682 + return; 683 + 684 + unit_free_rcu(this_cpu_ptr(ma->cache), ptr); 863 685 } 864 686 865 687 /* Directly does a kfree() without putting 'ptr' back to the free_llist
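One subtle piece of the memalloc changes is the allocation hint: `unit_alloc()` stores the owning `bpf_mem_cache` pointer in the object's hidden `llist_node`-sized header (`*(struct bpf_mem_cache **)llnode = c`), and the free paths read it back into `c->tgt` so objects can be routed toward the cache they came from (the diff's comment notes the hint is not required to be accurate). A userspace sketch of stashing an owner pointer in a hidden header, assuming `malloc()` in place of the kernel allocator and hypothetical `fake_cache` names:

```c
#include <assert.h>
#include <stdlib.h>

/* Hidden header is one pointer wide, playing the role of LLIST_NODE_SZ. */
#define HDR_SZ sizeof(struct fake_cache *)

struct fake_cache {
	int unit_size;		/* stand-in for bpf_mem_cache state */
};

/* unit_alloc() analogue: allocate payload plus hidden header, record
 * the owning cache in the header, hand out the payload pointer. */
static void *cache_alloc(struct fake_cache *c)
{
	char *mem = malloc(HDR_SZ + c->unit_size);

	if (!mem)
		return NULL;
	*(struct fake_cache **)mem = c;
	return mem + HDR_SZ;
}

/* unit_free()/unit_free_rcu() analogue: step back over the header and
 * recover the owner, as c->tgt = *(struct bpf_mem_cache **)llnode does. */
static struct fake_cache *cache_owner(void *ptr)
{
	return *(struct fake_cache **)((char *)ptr - HDR_SZ);
}

static void cache_free(void *ptr)
{
	free((char *)ptr - HDR_SZ);
}
```

In the kernel code the same header bytes are reused as the `llist_node` once the object sits on a free list, which is why the hint is only a hint: it is overwritten while the object is queued and rewritten on the next allocation.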
+1 -1
kernel/bpf/preload/iterators/Makefile
··· 50 50 $(OUTPUT)/%/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT) 51 51 $(call msg,BPF,$@) 52 52 $(Q)mkdir -p $(@D) 53 - $(Q)$(CLANG) -g -O2 -target bpf -m$* $(INCLUDES) \ 53 + $(Q)$(CLANG) -g -O2 --target=bpf -m$* $(INCLUDES) \ 54 54 -c $(filter %.c,$^) -o $@ && \ 55 55 $(LLVM_STRIP) -g $@ 56 56
+7 -2
kernel/bpf/preload/iterators/iterators.bpf.c
··· 73 73 return str + name_off; 74 74 } 75 75 76 + __s64 bpf_map_sum_elem_count(struct bpf_map *map) __ksym; 77 + 76 78 SEC("iter/bpf_map") 77 79 int dump_bpf_map(struct bpf_iter__bpf_map *ctx) 78 80 { ··· 86 84 return 0; 87 85 88 86 if (seq_num == 0) 89 - BPF_SEQ_PRINTF(seq, " id name max_entries\n"); 87 + BPF_SEQ_PRINTF(seq, " id name max_entries cur_entries\n"); 90 88 91 - BPF_SEQ_PRINTF(seq, "%4u %-16s%6d\n", map->id, map->name, map->max_entries); 89 + BPF_SEQ_PRINTF(seq, "%4u %-16s %10d %10lld\n", 90 + map->id, map->name, map->max_entries, 91 + bpf_map_sum_elem_count(map)); 92 + 92 93 return 0; 93 94 } 94 95
+268 -258
kernel/bpf/preload/iterators/iterators.lskel-little-endian.h
··· 1 1 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ 2 - /* THIS FILE IS AUTOGENERATED! */ 2 + /* THIS FILE IS AUTOGENERATED BY BPFTOOL! */ 3 3 #ifndef __ITERATORS_BPF_SKEL_H__ 4 4 #define __ITERATORS_BPF_SKEL_H__ 5 5 ··· 18 18 int dump_bpf_map_fd; 19 19 int dump_bpf_prog_fd; 20 20 } links; 21 - struct iterators_bpf__rodata { 22 - } *rodata; 23 21 }; 24 22 25 23 static inline int ··· 66 68 iterators_bpf__detach(skel); 67 69 skel_closenz(skel->progs.dump_bpf_map.prog_fd); 68 70 skel_closenz(skel->progs.dump_bpf_prog.prog_fd); 69 - skel_free_map_data(skel->rodata, skel->maps.rodata.initial_value, 4096); 70 71 skel_closenz(skel->maps.rodata.map_fd); 71 72 skel_free(skel); 72 73 } ··· 78 81 if (!skel) 79 82 goto cleanup; 80 83 skel->ctx.sz = (void *)&skel->links - (void *)skel; 81 - skel->rodata = skel_prep_map_data((void *)"\ 82 - \x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ 83 - \x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\ 84 - \x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\ 85 - \x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\ 86 - \x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x0a\0", 4096, 98); 87 - if (!skel->rodata) 88 - goto cleanup; 89 - skel->maps.rodata.initial_value = (__u64) (long) skel->rodata; 90 84 return skel; 91 85 cleanup: 92 86 iterators_bpf__destroy(skel); ··· 91 103 int err; 92 104 93 105 opts.ctx = (struct bpf_loader_ctx *)skel; 94 - opts.data_sz = 6056; 106 + opts.data_sz = 6208; 95 107 opts.data = (void *)"\ 96 108 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 97 109 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ ··· 126 138 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 127 139 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 128 140 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x9f\xeb\x01\0\ 129 - \x18\0\0\0\0\0\0\0\x1c\x04\0\0\x1c\x04\0\0\xf9\x04\0\0\0\0\0\0\0\0\0\x02\x02\0\ 141 + \x18\0\0\0\0\0\0\0\x80\x04\0\0\x80\x04\0\0\x31\x05\0\0\0\0\0\0\0\0\0\x02\x02\0\ 130 142 \0\0\x01\0\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\0\x04\ 131 143 \0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\0\0\0\0\0\ 132 144 \0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\0\0\0\x20\ 133 - \0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xa3\0\0\0\x03\0\0\x04\x18\0\0\0\xb1\0\ 134 - \0\0\x09\0\0\0\0\0\0\0\xb5\0\0\0\x0b\0\0\0\x40\0\0\0\xc0\0\0\0\x0b\0\0\0\x80\0\ 135 - \0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xc8\0\0\0\0\0\0\x07\0\0\0\0\xd1\0\0\0\0\0\0\ 136 - \x08\x0c\0\0\0\xd7\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\x94\x01\0\0\x03\0\0\x04\ 137 - \x18\0\0\0\x9c\x01\0\0\x0e\0\0\0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xa4\ 138 - \x01\0\0\x0e\0\0\0\xa0\0\0\0\xb0\x01\0\0\0\0\0\x08\x0f\0\0\0\xb6\x01\0\0\0\0\0\ 139 - \x01\x04\0\0\0\x20\0\0\0\xc3\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\ 140 - \0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xc8\x01\0\0\0\0\0\x01\x04\0\0\0\ 141 - \x20\0\0\0\0\0\0\0\0\0\0\x02\x14\0\0\0\x2c\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\ 142 - \0\0\x03\0\0\0\0\0\0\0\x3f\x02\0\0\x15\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x18\0\ 143 - \0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x13\0\0\0\x44\x02\0\0\x01\0\0\x0c\ 144 - \x16\0\0\0\x90\x02\0\0\x01\0\0\x04\x08\0\0\0\x99\x02\0\0\x19\0\0\0\0\0\0\0\0\0\ 145 - \0\0\0\0\0\x02\x1a\0\0\0\xea\x02\0\0\x06\0\0\x04\x38\0\0\0\x9c\x01\0\0\x0e\0\0\ 146 - \0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xf7\x02\0\0\x1b\0\0\0\xc0\0\0\0\x08\ 147 - \x03\0\0\x15\0\0\0\0\x01\0\0\x11\x03\0\0\x1d\0\0\0\x40\x01\0\0\x1b\x03\0\0\x1e\ 148 - \0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x1c\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\0\0\0\0\ 149 - \0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\x65\x03\0\0\x02\0\0\x04\ 150 - 
\x08\0\0\0\x73\x03\0\0\x0e\0\0\0\0\0\0\0\x7c\x03\0\0\x0e\0\0\0\x20\0\0\0\x1b\ 151 - \x03\0\0\x03\0\0\x04\x18\0\0\0\x86\x03\0\0\x1b\0\0\0\0\0\0\0\x8e\x03\0\0\x21\0\ 152 - \0\0\x40\0\0\0\x94\x03\0\0\x23\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x22\0\0\0\0\0\ 153 - \0\0\0\0\0\x02\x24\0\0\0\x98\x03\0\0\x01\0\0\x04\x04\0\0\0\xa3\x03\0\0\x0e\0\0\ 154 - \0\0\0\0\0\x0c\x04\0\0\x01\0\0\x04\x04\0\0\0\x15\x04\0\0\x0e\0\0\0\0\0\0\0\0\0\ 155 - \0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\0\x8b\x04\0\0\0\0\0\x0e\x25\ 156 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\0\x9f\x04\ 157 - \0\0\0\0\0\x0e\x27\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\ 158 - \x20\0\0\0\xb5\x04\0\0\0\0\0\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\ 159 - \x1c\0\0\0\x12\0\0\0\x11\0\0\0\xca\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\ 160 - \0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\xe1\x04\0\0\0\0\0\x0e\x2d\0\0\ 161 - \0\x01\0\0\0\xe9\x04\0\0\x04\0\0\x0f\x62\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\0\x28\ 162 - \0\0\0\x23\0\0\0\x0e\0\0\0\x2a\0\0\0\x31\0\0\0\x20\0\0\0\x2c\0\0\0\x51\0\0\0\ 163 - \x11\0\0\0\xf1\x04\0\0\x01\0\0\x0f\x04\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\0\0\0\x62\ 164 - \x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\x74\ 165 - \x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ 166 - \x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x30\ 167 - \x3a\x30\0\x2f\x77\x2f\x6e\x65\x74\x2d\x6e\x65\x78\x74\x2f\x6b\x65\x72\x6e\x65\ 168 - \x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\ 169 - \x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\ 170 - \x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\ 171 - \x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\ 172 - \x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\ 173 - 
\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\ 174 - \x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x75\x6e\x73\x69\x67\x6e\ 175 - \x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x30\x3a\x31\0\x09\x73\x74\ 176 - \x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\ 177 - \x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\ 178 - \x29\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\ 179 - \x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x30\ 180 - \x3a\x32\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\ 181 - \x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\ 182 - \x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\ 183 - \x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\ 184 - \x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\ 185 - \0\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\ 186 - \x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\ 187 - \x52\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\ 188 - \x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\ 189 - \x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\ 190 - \x2d\x3e\x69\x64\x2c\x20\x6d\x61\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\ 191 - \x70\x2d\x3e\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x29\x3b\0\x7d\0\x62\ 192 - \x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x70\x72\ 193 - \x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x69\x74\x65\ 194 - \x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\ 195 - \x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\ 196 - 
\x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\x20\x28\x21\x70\x72\x6f\x67\x29\0\ 197 - \x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\x78\0\x09\x61\x75\x78\x20\x3d\x20\ 198 - \x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\ 199 - \x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\ 200 - \x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\ 201 - \x74\x61\x63\x68\x65\x64\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\ 202 - \x5f\x61\x75\x78\0\x61\x74\x74\x61\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\ 203 - \x65\0\x64\x73\x74\x5f\x70\x72\x6f\x67\0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\ 204 - \x62\x74\x66\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\ 205 - \x73\x65\x71\x2c\x20\x22\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\ 206 - \x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\ 207 - \x30\x3a\x35\0\x09\x69\x66\x20\x28\x21\x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\ 208 - \x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x69\x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\ 209 - \x70\x65\x5f\x69\x64\0\x30\0\x73\x74\x72\x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\ 210 - \0\x68\x64\x72\0\x62\x74\x66\x5f\x68\x65\x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\ 211 - \x65\x6e\0\x09\x74\x79\x70\x65\x73\x20\x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\ 212 - \x65\x73\x3b\0\x09\x62\x70\x66\x5f\x70\x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\ 213 - \x6b\x65\x72\x6e\x65\x6c\x28\x26\x74\x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\ 214 - \x29\x2c\x20\x74\x79\x70\x65\x73\x20\x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\ 215 - \x09\x73\x74\x72\x20\x3d\x20\x62\x74\x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\ 216 - \x3b\0\x62\x74\x66\x5f\x74\x79\x70\x65\0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\ 217 - \x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\ 218 - \x5f\x52\x45\x41\x44\x28\x74\x2c\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\ 219 - 
\x30\x3a\x32\x3a\x30\0\x09\x69\x66\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\ 220 - \x3e\x3d\x20\x62\x74\x66\x2d\x3e\x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\ 221 - \x29\0\x09\x72\x65\x74\x75\x72\x6e\x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\ 222 - \x5f\x6f\x66\x66\x3b\0\x30\x3a\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\ 223 - \x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\ 224 - \x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\ 225 - \x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ 226 - \x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\ 227 - \x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\0\0\ 228 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x2d\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\ 229 - \0\x04\0\0\0\x62\0\0\0\x01\0\0\0\x80\x04\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\ 230 - \x61\x74\x6f\x72\x2e\x72\x6f\x64\x61\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x2f\0\0\ 231 - \0\0\0\0\0\0\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\ 232 - \x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\ 233 - \x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\ 234 - \x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ 235 - \x61\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\ 236 - \x25\x73\x20\x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 237 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\ 238 - \x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1b\0\0\ 239 - \0\0\0\x79\x11\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\ 240 - \0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\ 241 - \0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x23\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\ 242 - 
\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\ 243 - \0\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\ 244 - \x7b\x1a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\ 245 - \x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x23\0\0\0\xb7\x03\0\0\x0e\0\0\0\ 246 - \xb7\x05\0\0\x18\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ 247 - \0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x3c\x01\0\x01\0\0\0\x42\0\0\ 248 - \0\x7b\0\0\0\x24\x3c\x01\0\x02\0\0\0\x42\0\0\0\xee\0\0\0\x1d\x44\x01\0\x03\0\0\ 249 - \0\x42\0\0\0\x0f\x01\0\0\x06\x4c\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\x40\ 250 - \x01\0\x05\0\0\0\x42\0\0\0\x1a\x01\0\0\x1d\x40\x01\0\x06\0\0\0\x42\0\0\0\x43\ 251 - \x01\0\0\x06\x58\x01\0\x08\0\0\0\x42\0\0\0\x56\x01\0\0\x03\x5c\x01\0\x0f\0\0\0\ 252 - \x42\0\0\0\xdc\x01\0\0\x02\x64\x01\0\x1f\0\0\0\x42\0\0\0\x2a\x02\0\0\x01\x6c\ 253 - \x01\0\0\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\ 254 - \0\x10\0\0\0\x02\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\ 255 - \x28\0\0\0\x08\0\0\0\x3f\x01\0\0\0\0\0\0\x78\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\ 256 - \x88\0\0\0\x0d\0\0\0\xea\0\0\0\0\0\0\0\xa8\0\0\0\x0d\0\0\0\x3f\x01\0\0\0\0\0\0\ 257 - \x1a\0\0\0\x21\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 258 - \0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\ 259 - \0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\ 260 - \0\0\0\0\0\x0a\0\0\0\x01\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 261 - \0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x6d\ 262 - \x61\x70\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\ 263 - \0\0\0\0\x79\x12\x08\0\0\0\0\0\x15\x02\x3c\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x79\ 264 - \x27\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\ 265 - 
\0\x07\x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\ 266 - \x31\0\0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x7b\ 267 - \x6a\xc8\xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\xff\0\0\0\0\xb7\x03\0\0\ 268 - \x04\0\0\0\xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\x71\x28\0\0\0\0\0\x79\ 269 - \x78\x30\0\0\0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\ 270 - \0\x61\x11\x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\0\x03\0\0\0\x0f\x13\0\ 271 - \0\0\0\0\0\x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf8\xff\xff\xff\ 272 - \xb7\x02\0\0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\0\0\0\0\x79\xa3\xf8\xff\ 273 - \0\0\0\0\x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf4\xff\xff\xff\ 274 - \xb7\x02\0\0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\x04\0\0\0\x61\xa1\xf4\ 275 - \xff\0\0\0\0\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\x0f\x16\0\0\0\0\0\0\ 276 - \xbf\x69\0\0\0\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\0\0\0\0\x7b\x1a\xe0\ 277 - \xff\0\0\0\0\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x31\0\0\0\0\0\0\x7b\ 278 - \x1a\xe8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\x79\xa1\ 279 - \xc8\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x51\0\0\0\xb7\x03\0\0\x11\0\0\0\ 280 - \xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ 281 - \0\0\0\0\x17\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x80\x01\0\x01\0\0\0\x42\0\0\ 282 - \0\x7b\0\0\0\x24\x80\x01\0\x02\0\0\0\x42\0\0\0\x60\x02\0\0\x1f\x88\x01\0\x03\0\ 283 - \0\0\x42\0\0\0\x84\x02\0\0\x06\x94\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\ 284 - \x84\x01\0\x05\0\0\0\x42\0\0\0\x9d\x02\0\0\x0e\xa0\x01\0\x06\0\0\0\x42\0\0\0\ 285 - \x1a\x01\0\0\x1d\x84\x01\0\x07\0\0\0\x42\0\0\0\x43\x01\0\0\x06\xa4\x01\0\x09\0\ 286 - \0\0\x42\0\0\0\xaf\x02\0\0\x03\xa8\x01\0\x11\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\ 287 - \xb0\x01\0\x18\0\0\0\x42\0\0\0\x5a\x03\0\0\x06\x04\x01\0\x1b\0\0\0\x42\0\0\0\0\ 288 - 
\0\0\0\0\0\0\0\x1c\0\0\0\x42\0\0\0\xab\x03\0\0\x0f\x10\x01\0\x1d\0\0\0\x42\0\0\ 289 - \0\xc0\x03\0\0\x2d\x14\x01\0\x1f\0\0\0\x42\0\0\0\xf7\x03\0\0\x0d\x0c\x01\0\x21\ 290 - \0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x22\0\0\0\x42\0\0\0\xc0\x03\0\0\x02\x14\x01\0\ 291 - \x25\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x28\0\0\0\x42\0\0\0\0\0\0\0\0\0\ 292 - \0\0\x29\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x1e\x04\ 293 - \0\0\x0d\x18\x01\0\x2d\0\0\0\x42\0\0\0\x4c\x04\0\0\x1b\x1c\x01\0\x2e\0\0\0\x42\ 294 - \0\0\0\x4c\x04\0\0\x06\x1c\x01\0\x2f\0\0\0\x42\0\0\0\x6f\x04\0\0\x0d\x24\x01\0\ 295 - \x31\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\xb0\x01\0\x40\0\0\0\x42\0\0\0\x2a\x02\0\0\ 296 - \x01\xc0\x01\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\ 297 - \0\0\0\0\0\x10\0\0\0\x14\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x14\0\0\0\x3e\0\0\0\ 298 - \0\0\0\0\x28\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x30\0\0\0\x08\0\0\0\x3f\x01\0\0\ 299 - \0\0\0\0\x88\0\0\0\x1a\0\0\0\x3e\0\0\0\0\0\0\0\x98\0\0\0\x1a\0\0\0\xea\0\0\0\0\ 300 - \0\0\0\xb0\0\0\0\x1a\0\0\0\x52\x03\0\0\0\0\0\0\xb8\0\0\0\x1a\0\0\0\x56\x03\0\0\ 301 - \0\0\0\0\xc8\0\0\0\x1f\0\0\0\x84\x03\0\0\0\0\0\0\xe0\0\0\0\x20\0\0\0\xea\0\0\0\ 302 - \0\0\0\0\xf8\0\0\0\x20\0\0\0\x3e\0\0\0\0\0\0\0\x20\x01\0\0\x24\0\0\0\x3e\0\0\0\ 303 - \0\0\0\0\x58\x01\0\0\x1a\0\0\0\xea\0\0\0\0\0\0\0\x68\x01\0\0\x20\0\0\0\x46\x04\ 304 - \0\0\0\0\0\0\x90\x01\0\0\x1a\0\0\0\x3f\x01\0\0\0\0\0\0\xa0\x01\0\0\x1a\0\0\0\ 305 - \x87\x04\0\0\0\0\0\0\xa8\x01\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x42\0\0\ 306 - \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 307 - \0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\ 308 - \0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x1a\0\ 309 - \0\0\x01\0\0\0\0\0\0\0\x13\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\ 310 - \0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\ 311 - \0\0\0\0"; 312 - 
opts.insns_sz = 2216; 145 + \0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xb0\0\0\0\x03\0\0\x04\x18\0\0\0\xbe\0\ 146 + \0\0\x09\0\0\0\0\0\0\0\xc2\0\0\0\x0b\0\0\0\x40\0\0\0\xcd\0\0\0\x0b\0\0\0\x80\0\ 147 + \0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xd5\0\0\0\0\0\0\x07\0\0\0\0\xde\0\0\0\0\0\0\ 148 + \x08\x0c\0\0\0\xe4\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\xae\x01\0\0\x03\0\0\x04\ 149 + \x18\0\0\0\xb6\x01\0\0\x0e\0\0\0\0\0\0\0\xb9\x01\0\0\x11\0\0\0\x20\0\0\0\xbe\ 150 + \x01\0\0\x0e\0\0\0\xa0\0\0\0\xca\x01\0\0\0\0\0\x08\x0f\0\0\0\xd0\x01\0\0\0\0\0\ 151 + \x01\x04\0\0\0\x20\0\0\0\xdd\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\ 152 + \0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xe2\x01\0\0\0\0\0\x01\x04\0\0\0\ 153 + \x20\0\0\0\0\0\0\0\x01\0\0\x0d\x14\0\0\0\x26\x05\0\0\x04\0\0\0\x2b\x02\0\0\0\0\ 154 + \0\x08\x15\0\0\0\x31\x02\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\x01\x3b\x02\0\0\x01\0\ 155 + \0\x0c\x13\0\0\0\0\0\0\0\0\0\0\x02\x18\0\0\0\x52\x02\0\0\x02\0\0\x04\x10\0\0\0\ 156 + \x13\0\0\0\x03\0\0\0\0\0\0\0\x65\x02\0\0\x19\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\ 157 + \x1c\0\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x17\0\0\0\x6a\x02\0\0\x01\0\ 158 + \0\x0c\x1a\0\0\0\xb6\x02\0\0\x01\0\0\x04\x08\0\0\0\xbf\x02\0\0\x1d\0\0\0\0\0\0\ 159 + \0\0\0\0\0\0\0\0\x02\x1e\0\0\0\x10\x03\0\0\x06\0\0\x04\x38\0\0\0\xb6\x01\0\0\ 160 + \x0e\0\0\0\0\0\0\0\xb9\x01\0\0\x11\0\0\0\x20\0\0\0\x1d\x03\0\0\x1f\0\0\0\xc0\0\ 161 + \0\0\x2e\x03\0\0\x19\0\0\0\0\x01\0\0\x37\x03\0\0\x21\0\0\0\x40\x01\0\0\x41\x03\ 162 + \0\0\x22\0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\ 163 + \0\0\0\0\0\0\0\0\0\x02\x23\0\0\0\0\0\0\0\0\0\0\x02\x24\0\0\0\x8b\x03\0\0\x02\0\ 164 + \0\x04\x08\0\0\0\x99\x03\0\0\x0e\0\0\0\0\0\0\0\xa2\x03\0\0\x0e\0\0\0\x20\0\0\0\ 165 + \x41\x03\0\0\x03\0\0\x04\x18\0\0\0\xac\x03\0\0\x1f\0\0\0\0\0\0\0\xb4\x03\0\0\ 166 + \x25\0\0\0\x40\0\0\0\xba\x03\0\0\x27\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x26\0\0\ 167 + \0\0\0\0\0\0\0\0\x02\x28\0\0\0\xbe\x03\0\0\x01\0\0\x04\x04\0\0\0\xc9\x03\0\0\ 168 + 
\x0e\0\0\0\0\0\0\0\x32\x04\0\0\x01\0\0\x04\x04\0\0\0\x3b\x04\0\0\x0e\0\0\0\0\0\ 169 + \0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x20\0\0\0\x12\0\0\0\x30\0\0\0\xb1\x04\0\0\0\0\0\ 170 + \x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x20\0\0\0\x12\0\0\0\x1a\0\0\0\ 171 + \xc5\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x20\0\0\0\ 172 + \x12\0\0\0\x20\0\0\0\xdb\x04\0\0\0\0\0\x0e\x2d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\ 173 + \0\0\0\0\x20\0\0\0\x12\0\0\0\x11\0\0\0\xf0\x04\0\0\0\0\0\x0e\x2f\0\0\0\0\0\0\0\ 174 + \0\0\0\0\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\x07\x05\0\0\0\0\0\x0e\ 175 + \x31\0\0\0\x01\0\0\0\x0f\x05\0\0\x01\0\0\x0f\x04\0\0\0\x36\0\0\0\0\0\0\0\x04\0\ 176 + \0\0\x16\x05\0\0\x04\0\0\x0f\x7b\0\0\0\x2a\0\0\0\0\0\0\0\x30\0\0\0\x2c\0\0\0\ 177 + \x30\0\0\0\x1a\0\0\0\x2e\0\0\0\x4a\0\0\0\x20\0\0\0\x30\0\0\0\x6a\0\0\0\x11\0\0\ 178 + \0\x1e\x05\0\0\x01\0\0\x0f\x04\0\0\0\x32\0\0\0\0\0\0\0\x04\0\0\0\x26\x05\0\0\0\ 179 + \0\0\x0e\x06\0\0\0\x01\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\ 180 + \x66\x5f\x6d\x61\x70\0\x6d\x65\x74\x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\ 181 + \x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\ 182 + \x62\x70\x66\x5f\x6d\x61\x70\0\x30\x3a\x30\0\x2f\x68\x6f\x6d\x65\x2f\x61\x73\ 183 + \x70\x73\x6b\x2f\x73\x72\x63\x2f\x62\x70\x66\x2d\x6e\x65\x78\x74\x2f\x6b\x65\ 184 + \x72\x6e\x65\x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\ 185 + \x65\x72\x61\x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\ 186 + \x70\x66\x2e\x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\ 187 + \x65\x20\x2a\x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\ 188 + \x3e\x73\x65\x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\ 189 + \x73\x65\x71\0\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\ 190 + \x75\x6d\0\x73\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x75\x6e\x73\ 191 + 
\x69\x67\x6e\x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x30\x3a\x31\0\ 192 + \x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\ 193 + \x70\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\ 194 + \x6d\x61\x70\x29\0\x30\x3a\x32\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\ 195 + \x6e\x75\x6d\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\ 196 + \x71\x5f\x6e\x75\x6d\x3b\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\ 197 + \x3d\x3d\x20\x30\x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\ 198 + \x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\ 199 + \x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\ 200 + \x72\x69\x65\x73\x20\x20\x63\x75\x72\x5f\x65\x6e\x74\x72\x69\x65\x73\x5c\x6e\ 201 + \x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\0\x6d\ 202 + \x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\x73\ 203 + \x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\x52\ 204 + \x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\x5f\ 205 + \x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\ 206 + \x75\x20\x25\x2d\x31\x36\x73\x20\x20\x25\x31\x30\x64\x20\x20\x20\x25\x31\x30\ 207 + \x6c\x6c\x64\x5c\x6e\x22\x2c\0\x7d\0\x5f\x5f\x73\x36\x34\0\x6c\x6f\x6e\x67\x20\ 208 + \x6c\x6f\x6e\x67\0\x62\x70\x66\x5f\x6d\x61\x70\x5f\x73\x75\x6d\x5f\x65\x6c\x65\ 209 + \x6d\x5f\x63\x6f\x75\x6e\x74\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\ 210 + \x66\x5f\x70\x72\x6f\x67\0\x70\x72\x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\ 211 + \x5f\x70\x72\x6f\x67\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\ 212 + \x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\ 213 + \x72\x6f\x67\x20\x3d\x20\x63\x74\x78\x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\ 214 + 
\x20\x28\x21\x70\x72\x6f\x67\x29\0\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\ 215 + \x78\0\x09\x61\x75\x78\x20\x3d\x20\x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\ 216 + \x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\ 217 + \x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\ 218 + \x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\x64\x5c\x6e\x22\x29\ 219 + \x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x61\x75\x78\0\x61\x74\x74\x61\x63\ 220 + \x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\x65\0\x64\x73\x74\x5f\x70\x72\x6f\x67\ 221 + \0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x62\x74\x66\0\x09\x42\x50\x46\x5f\x53\ 222 + \x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\ 223 + \x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\ 224 + \x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\x30\x3a\x35\0\x09\x69\x66\x20\x28\x21\ 225 + \x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x69\ 226 + \x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\x70\x65\x5f\x69\x64\0\x30\0\x73\x74\x72\ 227 + \x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\0\x68\x64\x72\0\x62\x74\x66\x5f\x68\x65\ 228 + \x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\x65\x6e\0\x09\x74\x79\x70\x65\x73\x20\ 229 + \x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\x65\x73\x3b\0\x09\x62\x70\x66\x5f\x70\ 230 + \x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\x6b\x65\x72\x6e\x65\x6c\x28\x26\x74\ 231 + \x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\x29\x2c\x20\x74\x79\x70\x65\x73\x20\ 232 + \x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\x09\x73\x74\x72\x20\x3d\x20\x62\x74\ 233 + \x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\x3b\0\x62\x74\x66\x5f\x74\x79\x70\x65\ 234 + \0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\ 235 + \x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\x5f\x52\x45\x41\x44\x28\x74\x2c\x20\ 236 + \x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\x30\x3a\x32\x3a\x30\0\x09\x69\x66\ 237 + 
\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3e\x3d\x20\x62\x74\x66\x2d\x3e\ 238 + \x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\x29\0\x09\x72\x65\x74\x75\x72\x6e\ 239 + \x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x3b\0\x30\x3a\x33\ 240 + \0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\ 241 + \0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\ 242 + \x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\ 243 + \x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\ 244 + \x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\x4e\x53\x45\0\x2e\x6b\x73\x79\x6d\ 245 + \x73\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\x64\x75\x6d\ 246 + \x6d\x79\x5f\x6b\x73\x79\x6d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 247 + \xc9\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\0\x04\0\0\0\x7b\0\0\0\x01\0\0\0\ 248 + \x80\0\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\x61\x74\x6f\x72\x2e\x72\x6f\x64\x61\ 249 + \x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x20\x20\ 250 + \x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ 251 + \x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x20\x20\x63\x75\x72\x5f\x65\ 252 + \x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x20\x25\ 253 + \x31\x30\x64\x20\x20\x20\x25\x31\x30\x6c\x6c\x64\x0a\0\x20\x20\x69\x64\x20\x6e\ 254 + \x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\ 255 + \x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\ 256 + \x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 257 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\ 258 + \0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1d\0\0\0\0\0\x79\x21\ 259 + \x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe0\xff\ 260 + 
\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\ 261 + \x30\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\ 262 + \xe0\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\ 263 + \x7b\x2a\xe8\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\x7b\x1a\xf0\xff\0\0\0\0\xbf\x71\ 264 + \0\0\0\0\0\0\x85\x20\0\0\0\0\0\0\x7b\x0a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\ 265 + \x07\x04\0\0\xe0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\ 266 + \x30\0\0\0\xb7\x03\0\0\x1a\0\0\0\xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\ 267 + \0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x88\0\0\0\ 268 + \x1e\x44\x01\0\x01\0\0\0\x42\0\0\0\x88\0\0\0\x24\x44\x01\0\x02\0\0\0\x42\0\0\0\ 269 + \xfb\0\0\0\x1d\x4c\x01\0\x03\0\0\0\x42\0\0\0\x1c\x01\0\0\x06\x54\x01\0\x04\0\0\ 270 + \0\x42\0\0\0\x2b\x01\0\0\x1d\x48\x01\0\x05\0\0\0\x42\0\0\0\x50\x01\0\0\x06\x60\ 271 + \x01\0\x07\0\0\0\x42\0\0\0\x63\x01\0\0\x03\x64\x01\0\x0e\0\0\0\x42\0\0\0\xf6\ 272 + \x01\0\0\x02\x6c\x01\0\x21\0\0\0\x42\0\0\0\x29\x02\0\0\x01\x80\x01\0\0\0\0\0\ 273 + \x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\ 274 + \x02\0\0\0\xf7\0\0\0\0\0\0\0\x20\0\0\0\x08\0\0\0\x27\x01\0\0\0\0\0\0\x70\0\0\0\ 275 + \x0d\0\0\0\x3e\0\0\0\0\0\0\0\x80\0\0\0\x0d\0\0\0\xf7\0\0\0\0\0\0\0\xa0\0\0\0\ 276 + \x0d\0\0\0\x27\x01\0\0\0\0\0\0\x1a\0\0\0\x23\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 277 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\ 278 + \x70\x66\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\ 279 + \0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x01\0\0\0\0\0\0\0\x07\0\0\ 280 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\ 281 + \x65\x72\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\x62\x70\x66\x5f\x6d\ 282 + \x61\x70\x5f\x73\x75\x6d\x5f\x65\x6c\x65\x6d\x5f\x63\x6f\x75\x6e\x74\0\0\x47\ 283 + 
\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x11\x08\0\0\0\0\ 284 + \0\x15\x01\x3b\0\0\0\0\0\x79\x17\0\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\ 285 + \0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\ 286 + \x18\x62\0\0\0\0\0\0\0\0\0\0\x4a\0\0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\ 287 + \0\x85\0\0\0\x7e\0\0\0\x7b\x6a\xc8\xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\ 288 + \xff\0\0\0\0\xb7\x03\0\0\x04\0\0\0\xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\ 289 + \x71\x28\0\0\0\0\0\x79\x78\x30\0\0\0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\ 290 + \0\0\x0f\x21\0\0\0\0\0\0\x61\x11\x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\ 291 + \0\x03\0\0\0\x0f\x13\0\0\0\0\0\0\x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\ 292 + \x01\0\0\xf8\xff\xff\xff\xb7\x02\0\0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\ 293 + \0\0\0\0\x79\xa3\xf8\xff\0\0\0\0\x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\ 294 + \x01\0\0\xf4\xff\xff\xff\xb7\x02\0\0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\ 295 + \x04\0\0\0\x61\xa1\xf4\xff\0\0\0\0\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\ 296 + \x0f\x16\0\0\0\0\0\0\xbf\x69\0\0\0\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\ 297 + \0\0\0\0\x7b\x1a\xe0\xff\0\0\0\0\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\ 298 + \x31\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\ 299 + \xff\xff\xff\x79\xa1\xc8\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x6a\0\0\0\xb7\ 300 + \x03\0\0\x11\0\0\0\xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\ 301 + \x95\0\0\0\0\0\0\0\0\0\0\0\x1b\0\0\0\0\0\0\0\x42\0\0\0\x88\0\0\0\x1e\x94\x01\0\ 302 + \x01\0\0\0\x42\0\0\0\x88\0\0\0\x24\x94\x01\0\x02\0\0\0\x42\0\0\0\x86\x02\0\0\ 303 + \x1f\x9c\x01\0\x03\0\0\0\x42\0\0\0\xaa\x02\0\0\x06\xa8\x01\0\x04\0\0\0\x42\0\0\ 304 + \0\xc3\x02\0\0\x0e\xb4\x01\0\x05\0\0\0\x42\0\0\0\x2b\x01\0\0\x1d\x98\x01\0\x06\ 305 + \0\0\0\x42\0\0\0\x50\x01\0\0\x06\xb8\x01\0\x08\0\0\0\x42\0\0\0\xd5\x02\0\0\x03\ 306 + 
\xbc\x01\0\x10\0\0\0\x42\0\0\0\x45\x03\0\0\x02\xc4\x01\0\x17\0\0\0\x42\0\0\0\ 307 + \x80\x03\0\0\x06\x04\x01\0\x1a\0\0\0\x42\0\0\0\x45\x03\0\0\x02\xc4\x01\0\x1b\0\ 308 + \0\0\x42\0\0\0\xd1\x03\0\0\x0f\x10\x01\0\x1c\0\0\0\x42\0\0\0\xe6\x03\0\0\x2d\ 309 + \x14\x01\0\x1e\0\0\0\x42\0\0\0\x1d\x04\0\0\x0d\x0c\x01\0\x20\0\0\0\x42\0\0\0\ 310 + \x45\x03\0\0\x02\xc4\x01\0\x21\0\0\0\x42\0\0\0\xe6\x03\0\0\x02\x14\x01\0\x24\0\ 311 + \0\0\x42\0\0\0\x44\x04\0\0\x0d\x18\x01\0\x27\0\0\0\x42\0\0\0\x45\x03\0\0\x02\ 312 + \xc4\x01\0\x28\0\0\0\x42\0\0\0\x44\x04\0\0\x0d\x18\x01\0\x2b\0\0\0\x42\0\0\0\ 313 + \x44\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x72\x04\0\0\x1b\x1c\x01\0\x2d\0\ 314 + \0\0\x42\0\0\0\x72\x04\0\0\x06\x1c\x01\0\x2e\0\0\0\x42\0\0\0\x95\x04\0\0\x0d\ 315 + \x24\x01\0\x30\0\0\0\x42\0\0\0\x45\x03\0\0\x02\xc4\x01\0\x3f\0\0\0\x42\0\0\0\ 316 + \x29\x02\0\0\x01\xd4\x01\0\0\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\ 317 + \0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x18\0\0\0\xf7\0\0\0\0\0\0\0\x20\0\0\0\x1c\0\0\ 318 + \0\x3e\0\0\0\0\0\0\0\x28\0\0\0\x08\0\0\0\x27\x01\0\0\0\0\0\0\x80\0\0\0\x1e\0\0\ 319 + \0\x3e\0\0\0\0\0\0\0\x90\0\0\0\x1e\0\0\0\xf7\0\0\0\0\0\0\0\xa8\0\0\0\x1e\0\0\0\ 320 + \x78\x03\0\0\0\0\0\0\xb0\0\0\0\x1e\0\0\0\x7c\x03\0\0\0\0\0\0\xc0\0\0\0\x23\0\0\ 321 + \0\xaa\x03\0\0\0\0\0\0\xd8\0\0\0\x24\0\0\0\xf7\0\0\0\0\0\0\0\xf0\0\0\0\x24\0\0\ 322 + \0\x3e\0\0\0\0\0\0\0\x18\x01\0\0\x28\0\0\0\x3e\0\0\0\0\0\0\0\x50\x01\0\0\x1e\0\ 323 + \0\0\xf7\0\0\0\0\0\0\0\x60\x01\0\0\x24\0\0\0\x6c\x04\0\0\0\0\0\0\x88\x01\0\0\ 324 + \x1e\0\0\0\x27\x01\0\0\0\0\0\0\x98\x01\0\0\x1e\0\0\0\xad\x04\0\0\0\0\0\0\xa0\ 325 + \x01\0\0\x1c\0\0\0\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x41\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 326 + \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\ 327 + \x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\ 328 + \0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x19\0\0\0\x01\0\0\0\0\0\0\0\ 329 + 
\x12\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\ 330 + \x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0"; 331 + opts.insns_sz = 2456; 313 332 opts.insns = (void *)"\ 314 333 \xbf\x16\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\x78\xff\xff\xff\xb7\x02\0\ 315 334 \0\x88\0\0\0\xb7\x03\0\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x05\0\x14\0\0\0\0\0\x61\ ··· 326 331 \0\0\0\x85\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x01\0\0\0\0\ 327 332 \0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xbf\x70\0\0\ 328 333 \0\0\0\0\x95\0\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 329 - \x48\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\ 330 - \0\0\x44\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\ 331 - \0\0\0\0\x38\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\ 332 - \x18\x61\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\ 333 - \0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\ 334 + \xe8\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\ 335 + \0\0\xe4\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\ 336 + \0\0\0\0\xd8\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\ 337 + \x18\x61\0\0\0\0\0\0\0\0\0\0\xd0\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\ 338 + \0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xd0\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\ 334 339 \xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xd4\xff\0\0\0\0\x63\x7a\x78\xff\0\0\0\0\ 335 - \x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x0e\0\0\x63\x01\0\0\0\ 340 + \x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\x0f\0\0\x63\x01\0\0\0\ 336 341 \0\0\0\x61\x60\x1c\0\0\0\0\0\x15\0\x03\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 337 - \x5c\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\ 338 - 
\0\x50\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\ 342 + \xfc\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\ 343 + \0\xf0\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\ 339 344 \xc5\x07\xc3\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x63\x71\0\0\0\0\0\ 340 - \0\x79\x63\x20\0\0\0\0\0\x15\x03\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x98\ 341 - \x0e\0\0\xb7\x02\0\0\x62\0\0\0\x61\x60\x04\0\0\0\0\0\x45\0\x02\0\x01\0\0\0\x85\ 345 + \0\x79\x63\x20\0\0\0\0\0\x15\x03\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x38\ 346 + \x0f\0\0\xb7\x02\0\0\x7b\0\0\0\x61\x60\x04\0\0\0\0\0\x45\0\x02\0\x01\0\0\0\x85\ 342 347 \0\0\0\x94\0\0\0\x05\0\x01\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x18\x62\0\0\0\0\0\0\0\ 343 - \0\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\x63\ 344 - \x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\ 345 - \0\0\x10\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x98\x0e\0\0\ 346 - \x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\ 347 - \0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\ 348 + \0\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc0\x0f\0\0\x63\ 349 + \x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xb8\x0f\0\0\x18\x61\0\0\0\0\0\0\0\ 350 + \0\0\0\xc8\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\ 351 + \x18\x61\0\0\0\0\0\0\0\0\0\0\xd0\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\ 352 + \0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xc0\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\ 348 353 \xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x9f\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\ 349 - \0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\x63\ 350 - \x01\0\0\0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\ 354 + \0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\x0f\0\0\x63\ 355 + 
\x01\0\0\0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xe0\x0f\0\0\ 351 356 \xb7\x03\0\0\x04\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x92\xff\ 352 - \0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 353 - \x78\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\x18\ 354 - \x61\0\0\0\0\0\0\0\0\0\0\x70\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ 355 - \0\0\0\x40\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x11\0\0\x7b\x01\0\0\0\0\0\0\ 356 - \x18\x60\0\0\0\0\0\0\0\0\0\0\x48\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc8\x11\0\ 357 - \0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x10\0\0\x18\x61\0\0\0\0\ 358 - \0\0\0\0\0\0\xe8\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\ 359 - \0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\ 360 - \0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x11\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\ 361 - \0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x84\x11\0\0\x63\x01\0\0\0\0\0\0\x79\x60\ 362 - \x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\ 363 - \xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x11\0\0\x63\x01\0\0\0\0\0\ 364 - \0\x18\x61\0\0\0\0\0\0\0\0\0\0\xf8\x11\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\ 357 + \0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ 358 + \x20\x12\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xf0\x0f\0\0\x18\ 359 + \x61\0\0\0\0\0\0\0\0\0\0\x18\x12\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ 360 + \0\0\0\x08\x11\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x60\x12\0\0\x7b\x01\0\0\0\0\0\0\ 361 + \x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x11\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x70\x12\0\ 362 + \0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xa0\x11\0\0\x18\x61\0\0\0\0\ 363 + \0\0\0\0\0\0\x90\x12\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\ 364 + \0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x12\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\ 365 + 
\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x12\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\ 366 + \0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x2c\x12\0\0\x63\x01\0\0\0\0\0\0\x79\x60\ 367 + \x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x30\x12\0\0\x7b\x01\0\0\0\0\0\0\x61\ 368 + \xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x58\x12\0\0\x63\x01\0\0\0\0\0\ 369 + \0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa0\x12\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\ 365 370 \x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\ 366 - \x5c\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x68\x11\0\0\x63\x70\x6c\0\0\0\0\0\ 367 - \x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\ 368 - \0\0\0\0\0\0\0\0\x68\x11\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\ 369 - \0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd8\x11\0\0\x61\x01\0\0\0\0\0\0\xd5\ 370 - \x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x4a\xff\0\0\ 371 - \0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\x18\x61\0\ 372 - \0\0\0\0\0\0\0\0\0\x10\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\ 373 - \x18\x12\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\ 374 - \x60\0\0\0\0\0\0\0\0\0\0\x28\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x50\x17\0\0\ 375 - \x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x14\0\0\x18\x61\0\0\0\0\0\ 376 - \0\0\0\0\0\x60\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd0\x15\ 377 - \0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\ 378 - \0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x17\0\0\x7b\x01\0\0\0\0\ 379 - \0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x17\0\0\x63\x01\0\0\ 380 - \0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x1c\x17\0\0\x63\x01\ 381 - \0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\x7b\ 382 - \x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x48\x17\0\ 383 - 
\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x17\0\0\xb7\x02\0\0\x12\ 384 - \0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\ 385 - \0\0\0\0\0\xc5\x07\x13\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\x63\ 386 - \x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\ 387 - \0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\ 388 - \xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x70\x17\0\0\x61\x01\ 389 - \0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\ 390 - \x07\x01\xff\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\xd5\x01\ 391 - \x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\0\0\0\0\ 392 - \x63\x06\x28\0\0\0\0\0\x61\xa0\x84\xff\0\0\0\0\x63\x06\x2c\0\0\0\0\0\x18\x61\0\ 393 - \0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x18\0\0\0\0\0\xb7\0\0\0\ 394 - \0\0\0\0\x95\0\0\0\0\0\0\0"; 371 + \x5c\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\x63\x70\x6c\0\0\0\0\0\ 372 + \x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\x18\x68\0\0\0\0\0\0\0\0\0\0\xa8\ 373 + \x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x12\0\0\xb7\x02\0\0\x17\0\0\0\xb7\x03\ 374 + \0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\ 375 + \x07\x4d\xff\0\0\0\0\x75\x07\x03\0\0\0\0\0\x62\x08\x04\0\0\0\0\0\x6a\x08\x02\0\ 376 + \0\0\0\0\x05\0\x0a\0\0\0\0\0\x63\x78\x04\0\0\0\0\0\xbf\x79\0\0\0\0\0\0\x77\x09\ 377 + \0\0\x20\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\x63\x90\0\0\0\0\0\0\x55\ 378 + \x09\x02\0\0\0\0\0\x6a\x08\x02\0\0\0\0\0\x05\0\x01\0\0\0\0\0\x6a\x08\x02\0\x40\ 379 + \0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\xb7\x03\0\ 380 + \0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\ 381 + \0\0\x01\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\ 382 + \0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x80\x12\0\0\x61\x01\0\0\0\0\0\0\ 383 
+ \xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x2c\xff\ 384 + \0\0\0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd0\x12\0\0\x18\ 385 + \x61\0\0\0\0\0\0\0\0\0\0\xa8\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ 386 + \0\0\0\xd8\x12\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa0\x17\0\0\x7b\x01\0\0\0\0\0\0\ 387 + \x18\x60\0\0\0\0\0\0\0\0\0\0\xe0\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe8\x17\0\ 388 + \0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x14\0\0\x18\x61\0\0\0\0\ 389 + \0\0\0\0\0\0\xf8\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x78\ 390 + \x16\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x18\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\ 391 + \0\0\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x10\x18\0\0\x7b\x01\0\0\ 392 + \0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x17\0\0\x63\x01\ 393 + \0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb4\x17\0\0\x63\ 394 + \x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x17\0\0\ 395 + \x7b\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\ 396 + \x17\0\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x18\0\0\xb7\x02\0\ 397 + \0\x12\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\ 398 + \x07\0\0\0\0\0\0\xc5\x07\xf5\xfe\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x98\x17\0\ 399 + \0\x63\x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\ 400 + \0\x05\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x98\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\ 401 + \0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x08\x18\0\0\ 402 + \x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\ 403 + \0\0\xc5\x07\xe3\xfe\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\ 404 + \xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\ 405 + \0\0\0\0\x63\x06\x28\0\0\0\0\0\x61\xa0\x84\xff\0\0\0\0\x63\x06\x2c\0\0\0\0\0\ 406 + 
\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x18\0\0\0\0\0\ 407 + \xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0"; 395 408 err = bpf_load_and_run(&opts); 396 409 if (err < 0) 397 410 return err; 398 - skel->rodata = skel_finalize_map_data(&skel->maps.rodata.initial_value, 399 - 4096, PROT_READ, skel->maps.rodata.map_fd); 400 - if (!skel->rodata) 401 - return -ENOMEM; 402 411 return 0; 403 412 } 404 413 ··· 419 420 return NULL; 420 421 } 421 422 return skel; 423 + } 424 + 425 + __attribute__((unused)) static void 426 + iterators_bpf__assert(struct iterators_bpf *s __attribute__((unused))) 427 + { 428 + #ifdef __cplusplus 429 + #define _Static_assert static_assert 430 + #endif 431 + #ifdef __cplusplus 432 + #undef _Static_assert 433 + #endif 422 434 } 423 435 424 436 #endif /* __ITERATORS_BPF_SKEL_H__ */
+11 -15
kernel/bpf/ringbuf.c
··· 23 23 24 24 #define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4) 25 25 26 - /* Maximum size of ring buffer area is limited by 32-bit page offset within 27 - * record header, counted in pages. Reserve 8 bits for extensibility, and take 28 - * into account few extra pages for consumer/producer pages and 29 - * non-mmap()'able parts. This gives 64GB limit, which seems plenty for single 30 - * ring buffer. 31 - */ 32 - #define RINGBUF_MAX_DATA_SZ \ 33 - (((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE) 34 - 35 26 struct bpf_ringbuf { 36 27 wait_queue_head_t waitq; 37 28 struct irq_work work; ··· 152 161 wake_up_all(&rb->waitq); 153 162 } 154 163 164 + /* Maximum size of ring buffer area is limited by 32-bit page offset within 165 + * record header, counted in pages. Reserve 8 bits for extensibility, and 166 + * take into account few extra pages for consumer/producer pages and 167 + * non-mmap()'able parts, the current maximum size would be: 168 + * 169 + * (((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE) 170 + * 171 + * This gives 64GB limit, which seems plenty for single ring buffer. Now 172 + * considering that the maximum value of data_sz is (4GB - 1), there 173 + * will be no overflow, so just note the size limit in the comments. 174 + */ 155 175 static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node) 156 176 { 157 177 struct bpf_ringbuf *rb; ··· 194 192 !is_power_of_2(attr->max_entries) || 195 193 !PAGE_ALIGNED(attr->max_entries)) 196 194 return ERR_PTR(-EINVAL); 197 - 198 - #ifdef CONFIG_64BIT 199 - /* on 32-bit arch, it's impossible to overflow record's hdr->pgoff */ 200 - if (attr->max_entries > RINGBUF_MAX_DATA_SZ) 201 - return ERR_PTR(-E2BIG); 202 - #endif 203 195 204 196 rb_map = bpf_map_area_alloc(sizeof(*rb_map), NUMA_NO_NODE); 205 197 if (!rb_map)
+166 -14
kernel/bpf/syscall.c
··· 3295 3295 raw_tp_link->btp->tp->name); 3296 3296 } 3297 3297 3298 + static int bpf_copy_to_user(char __user *ubuf, const char *buf, u32 ulen, 3299 + u32 len) 3300 + { 3301 + if (ulen >= len + 1) { 3302 + if (copy_to_user(ubuf, buf, len + 1)) 3303 + return -EFAULT; 3304 + } else { 3305 + char zero = '\0'; 3306 + 3307 + if (copy_to_user(ubuf, buf, ulen - 1)) 3308 + return -EFAULT; 3309 + if (put_user(zero, ubuf + ulen - 1)) 3310 + return -EFAULT; 3311 + return -ENOSPC; 3312 + } 3313 + 3314 + return 0; 3315 + } 3316 + 3298 3317 static int bpf_raw_tp_link_fill_link_info(const struct bpf_link *link, 3299 3318 struct bpf_link_info *info) 3300 3319 { ··· 3332 3313 if (!ubuf) 3333 3314 return 0; 3334 3315 3335 - if (ulen >= tp_len + 1) { 3336 - if (copy_to_user(ubuf, tp_name, tp_len + 1)) 3337 - return -EFAULT; 3338 - } else { 3339 - char zero = '\0'; 3340 - 3341 - if (copy_to_user(ubuf, tp_name, ulen - 1)) 3342 - return -EFAULT; 3343 - if (put_user(zero, ubuf + ulen - 1)) 3344 - return -EFAULT; 3345 - return -ENOSPC; 3346 - } 3347 - 3348 - return 0; 3316 + return bpf_copy_to_user(ubuf, tp_name, ulen, tp_len); 3349 3317 } 3350 3318 3351 3319 static const struct bpf_link_ops bpf_raw_tp_link_lops = { ··· 3364 3358 kfree(perf_link); 3365 3359 } 3366 3360 3361 + static int bpf_perf_link_fill_common(const struct perf_event *event, 3362 + char __user *uname, u32 ulen, 3363 + u64 *probe_offset, u64 *probe_addr, 3364 + u32 *fd_type) 3365 + { 3366 + const char *buf; 3367 + u32 prog_id; 3368 + size_t len; 3369 + int err; 3370 + 3371 + if (!ulen ^ !uname) 3372 + return -EINVAL; 3373 + if (!uname) 3374 + return 0; 3375 + 3376 + err = bpf_get_perf_event_info(event, &prog_id, fd_type, &buf, 3377 + probe_offset, probe_addr); 3378 + if (err) 3379 + return err; 3380 + 3381 + if (buf) { 3382 + len = strlen(buf); 3383 + err = bpf_copy_to_user(uname, buf, ulen, len); 3384 + if (err) 3385 + return err; 3386 + } else { 3387 + char zero = '\0'; 3388 + 3389 + if (put_user(zero, uname)) 3390 + 
return -EFAULT; 3391 + } 3392 + return 0; 3393 + } 3394 + 3395 + #ifdef CONFIG_KPROBE_EVENTS 3396 + static int bpf_perf_link_fill_kprobe(const struct perf_event *event, 3397 + struct bpf_link_info *info) 3398 + { 3399 + char __user *uname; 3400 + u64 addr, offset; 3401 + u32 ulen, type; 3402 + int err; 3403 + 3404 + uname = u64_to_user_ptr(info->perf_event.kprobe.func_name); 3405 + ulen = info->perf_event.kprobe.name_len; 3406 + err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr, 3407 + &type); 3408 + if (err) 3409 + return err; 3410 + if (type == BPF_FD_TYPE_KRETPROBE) 3411 + info->perf_event.type = BPF_PERF_EVENT_KRETPROBE; 3412 + else 3413 + info->perf_event.type = BPF_PERF_EVENT_KPROBE; 3414 + 3415 + info->perf_event.kprobe.offset = offset; 3416 + if (!kallsyms_show_value(current_cred())) 3417 + addr = 0; 3418 + info->perf_event.kprobe.addr = addr; 3419 + return 0; 3420 + } 3421 + #endif 3422 + 3423 + #ifdef CONFIG_UPROBE_EVENTS 3424 + static int bpf_perf_link_fill_uprobe(const struct perf_event *event, 3425 + struct bpf_link_info *info) 3426 + { 3427 + char __user *uname; 3428 + u64 addr, offset; 3429 + u32 ulen, type; 3430 + int err; 3431 + 3432 + uname = u64_to_user_ptr(info->perf_event.uprobe.file_name); 3433 + ulen = info->perf_event.uprobe.name_len; 3434 + err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr, 3435 + &type); 3436 + if (err) 3437 + return err; 3438 + 3439 + if (type == BPF_FD_TYPE_URETPROBE) 3440 + info->perf_event.type = BPF_PERF_EVENT_URETPROBE; 3441 + else 3442 + info->perf_event.type = BPF_PERF_EVENT_UPROBE; 3443 + info->perf_event.uprobe.offset = offset; 3444 + return 0; 3445 + } 3446 + #endif 3447 + 3448 + static int bpf_perf_link_fill_probe(const struct perf_event *event, 3449 + struct bpf_link_info *info) 3450 + { 3451 + #ifdef CONFIG_KPROBE_EVENTS 3452 + if (event->tp_event->flags & TRACE_EVENT_FL_KPROBE) 3453 + return bpf_perf_link_fill_kprobe(event, info); 3454 + #endif 3455 + #ifdef 
CONFIG_UPROBE_EVENTS 3456 + if (event->tp_event->flags & TRACE_EVENT_FL_UPROBE) 3457 + return bpf_perf_link_fill_uprobe(event, info); 3458 + #endif 3459 + return -EOPNOTSUPP; 3460 + } 3461 + 3462 + static int bpf_perf_link_fill_tracepoint(const struct perf_event *event, 3463 + struct bpf_link_info *info) 3464 + { 3465 + char __user *uname; 3466 + u32 ulen; 3467 + 3468 + uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name); 3469 + ulen = info->perf_event.tracepoint.name_len; 3470 + info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT; 3471 + return bpf_perf_link_fill_common(event, uname, ulen, NULL, NULL, NULL); 3472 + } 3473 + 3474 + static int bpf_perf_link_fill_perf_event(const struct perf_event *event, 3475 + struct bpf_link_info *info) 3476 + { 3477 + info->perf_event.event.type = event->attr.type; 3478 + info->perf_event.event.config = event->attr.config; 3479 + info->perf_event.type = BPF_PERF_EVENT_EVENT; 3480 + return 0; 3481 + } 3482 + 3483 + static int bpf_perf_link_fill_link_info(const struct bpf_link *link, 3484 + struct bpf_link_info *info) 3485 + { 3486 + struct bpf_perf_link *perf_link; 3487 + const struct perf_event *event; 3488 + 3489 + perf_link = container_of(link, struct bpf_perf_link, link); 3490 + event = perf_get_event(perf_link->perf_file); 3491 + if (IS_ERR(event)) 3492 + return PTR_ERR(event); 3493 + 3494 + switch (event->prog->type) { 3495 + case BPF_PROG_TYPE_PERF_EVENT: 3496 + return bpf_perf_link_fill_perf_event(event, info); 3497 + case BPF_PROG_TYPE_TRACEPOINT: 3498 + return bpf_perf_link_fill_tracepoint(event, info); 3499 + case BPF_PROG_TYPE_KPROBE: 3500 + return bpf_perf_link_fill_probe(event, info); 3501 + default: 3502 + return -EOPNOTSUPP; 3503 + } 3504 + } 3505 + 3367 3506 static const struct bpf_link_ops bpf_perf_link_lops = { 3368 3507 .release = bpf_perf_link_release, 3369 3508 .dealloc = bpf_perf_link_dealloc, 3509 + .fill_link_info = bpf_perf_link_fill_link_info, 3370 3510 }; 3371 3511 3372 3512 static int 
bpf_perf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+31 -11
kernel/bpf/verifier.c
··· 25 25 #include <linux/btf_ids.h> 26 26 #include <linux/poison.h> 27 27 #include <linux/module.h> 28 + #include <linux/cpumask.h> 28 29 29 30 #include "disasm.h" 30 31 ··· 6068 6067 type_is_rcu_or_null(env, reg, field_name, btf_id)) { 6069 6068 /* __rcu tagged pointers can be NULL */ 6070 6069 flag |= MEM_RCU | PTR_MAYBE_NULL; 6070 + 6071 + /* We always trust them */ 6072 + if (type_is_rcu_or_null(env, reg, field_name, btf_id) && 6073 + flag & PTR_UNTRUSTED) 6074 + flag &= ~PTR_UNTRUSTED; 6071 6075 } else if (flag & (MEM_PERCPU | MEM_USER)) { 6072 6076 /* keep as-is */ 6073 6077 } else { ··· 9123 9117 { 9124 9118 struct bpf_reg_state *ret_reg = &regs[BPF_REG_0]; 9125 9119 9126 - if (ret_type != RET_INTEGER || 9127 - (func_id != BPF_FUNC_get_stack && 9128 - func_id != BPF_FUNC_get_task_stack && 9129 - func_id != BPF_FUNC_probe_read_str && 9130 - func_id != BPF_FUNC_probe_read_kernel_str && 9131 - func_id != BPF_FUNC_probe_read_user_str)) 9120 + if (ret_type != RET_INTEGER) 9132 9121 return; 9133 9122 9134 - ret_reg->smax_value = meta->msize_max_value; 9135 - ret_reg->s32_max_value = meta->msize_max_value; 9136 - ret_reg->smin_value = -MAX_ERRNO; 9137 - ret_reg->s32_min_value = -MAX_ERRNO; 9138 - reg_bounds_sync(ret_reg); 9123 + switch (func_id) { 9124 + case BPF_FUNC_get_stack: 9125 + case BPF_FUNC_get_task_stack: 9126 + case BPF_FUNC_probe_read_str: 9127 + case BPF_FUNC_probe_read_kernel_str: 9128 + case BPF_FUNC_probe_read_user_str: 9129 + ret_reg->smax_value = meta->msize_max_value; 9130 + ret_reg->s32_max_value = meta->msize_max_value; 9131 + ret_reg->smin_value = -MAX_ERRNO; 9132 + ret_reg->s32_min_value = -MAX_ERRNO; 9133 + reg_bounds_sync(ret_reg); 9134 + break; 9135 + case BPF_FUNC_get_smp_processor_id: 9136 + ret_reg->umax_value = nr_cpu_ids - 1; 9137 + ret_reg->u32_max_value = nr_cpu_ids - 1; 9138 + ret_reg->smax_value = nr_cpu_ids - 1; 9139 + ret_reg->s32_max_value = nr_cpu_ids - 1; 9140 + ret_reg->umin_value = 0; 9141 + ret_reg->u32_min_value = 0; 
9142 + ret_reg->smin_value = 0; 9143 + ret_reg->s32_min_value = 0; 9144 + reg_bounds_sync(ret_reg); 9145 + break; 9146 + } 9139 9147 } 9140 9148 9141 9149 static int
-2
kernel/rcu/rcu.h
··· 493 493 static inline void rcu_unexpedite_gp(void) { } 494 494 static inline void rcu_async_hurry(void) { } 495 495 static inline void rcu_async_relax(void) { } 496 - static inline void rcu_request_urgent_qs_task(struct task_struct *t) { } 497 496 #else /* #ifdef CONFIG_TINY_RCU */ 498 497 bool rcu_gp_is_normal(void); /* Internal RCU use. */ 499 498 bool rcu_gp_is_expedited(void); /* Internal RCU use. */ ··· 507 508 #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */ 508 509 static inline void show_rcu_tasks_gp_kthreads(void) {} 509 510 #endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */ 510 - void rcu_request_urgent_qs_task(struct task_struct *t); 511 511 #endif /* #else #ifdef CONFIG_TINY_RCU */ 512 512 513 513 #define RCU_SCHEDULER_INACTIVE 0
+45 -4
kernel/trace/bpf_trace.c
··· 2369 2369 if (is_tracepoint || is_syscall_tp) { 2370 2370 *buf = is_tracepoint ? event->tp_event->tp->name 2371 2371 : event->tp_event->name; 2372 - *fd_type = BPF_FD_TYPE_TRACEPOINT; 2373 - *probe_offset = 0x0; 2374 - *probe_addr = 0x0; 2372 + /* We allow NULL pointer for tracepoint */ 2373 + if (fd_type) 2374 + *fd_type = BPF_FD_TYPE_TRACEPOINT; 2375 + if (probe_offset) 2376 + *probe_offset = 0x0; 2377 + if (probe_addr) 2378 + *probe_addr = 0x0; 2375 2379 } else { 2376 2380 /* kprobe/uprobe */ 2377 2381 err = -EOPNOTSUPP; ··· 2388 2384 #ifdef CONFIG_UPROBE_EVENTS 2389 2385 if (flags & TRACE_EVENT_FL_UPROBE) 2390 2386 err = bpf_get_uprobe_info(event, fd_type, buf, 2391 - probe_offset, 2387 + probe_offset, probe_addr, 2392 2388 event->attr.type == PERF_TYPE_TRACEPOINT); 2393 2389 #endif 2394 2390 } ··· 2473 2469 u32 cnt; 2474 2470 u32 mods_cnt; 2475 2471 struct module **mods; 2472 + u32 flags; 2476 2473 }; 2477 2474 2478 2475 struct bpf_kprobe_multi_run_ctx { ··· 2563 2558 kfree(kmulti_link); 2564 2559 } 2565 2560 2561 + static int bpf_kprobe_multi_link_fill_link_info(const struct bpf_link *link, 2562 + struct bpf_link_info *info) 2563 + { 2564 + u64 __user *uaddrs = u64_to_user_ptr(info->kprobe_multi.addrs); 2565 + struct bpf_kprobe_multi_link *kmulti_link; 2566 + u32 ucount = info->kprobe_multi.count; 2567 + int err = 0, i; 2568 + 2569 + if (!uaddrs ^ !ucount) 2570 + return -EINVAL; 2571 + 2572 + kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link); 2573 + info->kprobe_multi.count = kmulti_link->cnt; 2574 + info->kprobe_multi.flags = kmulti_link->flags; 2575 + 2576 + if (!uaddrs) 2577 + return 0; 2578 + if (ucount < kmulti_link->cnt) 2579 + err = -ENOSPC; 2580 + else 2581 + ucount = kmulti_link->cnt; 2582 + 2583 + if (kallsyms_show_value(current_cred())) { 2584 + if (copy_to_user(uaddrs, kmulti_link->addrs, ucount * sizeof(u64))) 2585 + return -EFAULT; 2586 + } else { 2587 + for (i = 0; i < ucount; i++) { 2588 + if (put_user(0, uaddrs + i)) 
2589 + return -EFAULT; 2590 + } 2591 + } 2592 + return err; 2593 + } 2594 + 2566 2595 static const struct bpf_link_ops bpf_kprobe_multi_link_lops = { 2567 2596 .release = bpf_kprobe_multi_link_release, 2568 2597 .dealloc = bpf_kprobe_multi_link_dealloc, 2598 + .fill_link_info = bpf_kprobe_multi_link_fill_link_info, 2569 2599 }; 2570 2600 2571 2601 static void bpf_kprobe_multi_cookie_swap(void *a, void *b, int size, const void *priv) ··· 2914 2874 link->addrs = addrs; 2915 2875 link->cookies = cookies; 2916 2876 link->cnt = cnt; 2877 + link->flags = flags; 2917 2878 2918 2879 if (cookies) { 2919 2880 /*
+4 -9
kernel/trace/trace_kprobe.c
··· 1561 1561 1562 1562 *fd_type = trace_kprobe_is_return(tk) ? BPF_FD_TYPE_KRETPROBE 1563 1563 : BPF_FD_TYPE_KPROBE; 1564 - if (tk->symbol) { 1565 - *symbol = tk->symbol; 1566 - *probe_offset = tk->rp.kp.offset; 1567 - *probe_addr = 0; 1568 - } else { 1569 - *symbol = NULL; 1570 - *probe_offset = 0; 1571 - *probe_addr = (unsigned long)tk->rp.kp.addr; 1572 - } 1564 + *probe_offset = tk->rp.kp.offset; 1565 + *probe_addr = kallsyms_show_value(current_cred()) ? 1566 + (unsigned long)tk->rp.kp.addr : 0; 1567 + *symbol = tk->symbol; 1573 1568 return 0; 1574 1569 } 1575 1570 #endif /* CONFIG_PERF_EVENTS */
+2 -1
kernel/trace/trace_uprobe.c
··· 1417 1417 1418 1418 int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type, 1419 1419 const char **filename, u64 *probe_offset, 1420 - bool perf_type_tracepoint) 1420 + u64 *probe_addr, bool perf_type_tracepoint) 1421 1421 { 1422 1422 const char *pevent = trace_event_name(event->tp_event); 1423 1423 const char *group = event->tp_event->class->system; ··· 1434 1434 : BPF_FD_TYPE_UPROBE; 1435 1435 *filename = tu->filename; 1436 1436 *probe_offset = tu->offset; 1437 + *probe_addr = 0; 1437 1438 return 0; 1438 1439 } 1439 1440 #endif /* CONFIG_PERF_EVENTS */
+1 -11
lib/test_bpf.c
··· 14381 14381 * single fragment to the skb, filled with 14382 14382 * test->frag_data. 14383 14383 */ 14384 - void *ptr; 14385 - 14386 14384 page = alloc_page(GFP_KERNEL); 14387 - 14388 14385 if (!page) 14389 14386 goto err_kfree_skb; 14390 14387 14391 - ptr = kmap(page); 14392 - if (!ptr) 14393 - goto err_free_page; 14394 - memcpy(ptr, test->frag_data, MAX_DATA); 14395 - kunmap(page); 14388 + memcpy(page_address(page), test->frag_data, MAX_DATA); 14396 14389 skb_add_rx_frag(skb, 0, page, 0, MAX_DATA, MAX_DATA); 14397 14390 } 14398 14391 14399 14392 return skb; 14400 - 14401 - err_free_page: 14402 - __free_page(page); 14403 14393 err_kfree_skb: 14404 14394 kfree_skb(skb); 14405 14395 return NULL;
+17 -1
net/bpf/test_run.c
··· 555 555 return *a; 556 556 } 557 557 558 + void noinline bpf_fentry_test_sinfo(struct skb_shared_info *sinfo) 559 + { 560 + } 561 + 558 562 __bpf_kfunc int bpf_modify_return_test(int a, int *b) 559 563 { 560 564 *b += 1; 561 565 return a + *b; 566 + } 567 + 568 + __bpf_kfunc int bpf_modify_return_test2(int a, int *b, short c, int d, 569 + void *e, char f, int g) 570 + { 571 + *b += 1; 572 + return a + *b + c + d + (long)e + f + g; 562 573 } 563 574 564 575 int noinline bpf_fentry_shadow_test(int a) ··· 607 596 608 597 BTF_SET8_START(bpf_test_modify_return_ids) 609 598 BTF_ID_FLAGS(func, bpf_modify_return_test) 599 + BTF_ID_FLAGS(func, bpf_modify_return_test2) 610 600 BTF_ID_FLAGS(func, bpf_fentry_test1, KF_SLEEPABLE) 611 601 BTF_SET8_END(bpf_test_modify_return_ids) 612 602 ··· 675 663 case BPF_MODIFY_RETURN: 676 664 ret = bpf_modify_return_test(1, &b); 677 665 if (b != 2) 678 - side_effect = 1; 666 + side_effect++; 667 + b = 2; 668 + ret += bpf_modify_return_test2(1, &b, 3, 4, (void *)5, 6, 7); 669 + if (b != 2) 670 + side_effect++; 679 671 break; 680 672 default: 681 673 goto out;
+3 -3
samples/bpf/Makefile
··· 248 248 BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF) 249 249 BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm') 250 250 BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \ 251 - $(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \ 251 + $(CLANG) --target=bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \ 252 252 $(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \ 253 253 /bin/rm -f ./llvm_btf_verify.o) 254 254 ··· 370 370 clean-files += vmlinux.h 371 371 372 372 # Get Clang's default includes on this system, as opposed to those seen by 373 - # '-target bpf'. This fixes "missing" files on some architectures/distros, 373 + # '--target=bpf'. This fixes "missing" files on some architectures/distros, 374 374 # such as asm/byteorder.h, asm/socket.h, asm/sockios.h, sys/cdefs.h etc. 375 375 # 376 376 # Use '-idirafter': Don't interfere with include mechanics except where the ··· 392 392 393 393 $(obj)/%.bpf.o: $(src)/%.bpf.c $(obj)/vmlinux.h $(src)/xdp_sample.bpf.h $(src)/xdp_sample_shared.h 394 394 @echo " CLANG-BPF " $@ 395 - $(Q)$(CLANG) -g -O2 -target bpf -D__TARGET_ARCH_$(SRCARCH) \ 395 + $(Q)$(CLANG) -g -O2 --target=bpf -D__TARGET_ARCH_$(SRCARCH) \ 396 396 -Wno-compare-distinct-pointer-types -I$(srctree)/include \ 397 397 -I$(srctree)/samples/bpf -I$(srctree)/tools/include \ 398 398 -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \
+1 -1
samples/bpf/gnu/stubs.h
··· 1 - /* dummy .h to trick /usr/include/features.h to work with 'clang -target bpf' */ 1 + /* dummy .h to trick /usr/include/features.h to work with 'clang --target=bpf' */
+4
samples/bpf/syscall_tp_kern.c
··· 44 44 bpf_map_update_elem(map, &key, &init_val, BPF_NOEXIST); 45 45 } 46 46 47 + #if !defined(__aarch64__) 47 48 SEC("tracepoint/syscalls/sys_enter_open") 48 49 int trace_enter_open(struct syscalls_enter_open_args *ctx) 49 50 { 50 51 count(&enter_open_map); 51 52 return 0; 52 53 } 54 + #endif 53 55 54 56 SEC("tracepoint/syscalls/sys_enter_openat") 55 57 int trace_enter_open_at(struct syscalls_enter_open_args *ctx) ··· 67 65 return 0; 68 66 } 69 67 68 + #if !defined(__aarch64__) 70 69 SEC("tracepoint/syscalls/sys_exit_open") 71 70 int trace_enter_exit(struct syscalls_exit_open_args *ctx) 72 71 { 73 72 count(&exit_open_map); 74 73 return 0; 75 74 } 75 + #endif 76 76 77 77 SEC("tracepoint/syscalls/sys_exit_openat") 78 78 int trace_enter_exit_at(struct syscalls_exit_open_args *ctx)
+1 -1
samples/bpf/test_lwt_bpf.sh
··· 376 376 SRC_MAC=$(lookup_mac $VETH0) 377 377 DST_IFINDEX=$(cat /sys/class/net/$VETH0/ifindex) 378 378 379 - CLANG_OPTS="-O2 -target bpf -I ../include/" 379 + CLANG_OPTS="-O2 --target=bpf -I ../include/" 380 380 CLANG_OPTS+=" -DSRC_MAC=$SRC_MAC -DDST_MAC=$DST_MAC -DDST_IFINDEX=$DST_IFINDEX" 381 381 clang $CLANG_OPTS -c $PROG_SRC -o $BPF_PROG 382 382
+3 -3
samples/hid/Makefile
··· 86 86 BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF) 87 87 BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm') 88 88 BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \ 89 - $(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \ 89 + $(CLANG) --target=bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \ 90 90 $(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \ 91 91 /bin/rm -f ./llvm_btf_verify.o) 92 92 ··· 181 181 clean-files += vmlinux.h 182 182 183 183 # Get Clang's default includes on this system, as opposed to those seen by 184 - # '-target bpf'. This fixes "missing" files on some architectures/distros, 184 + # '--target=bpf'. This fixes "missing" files on some architectures/distros, 185 185 # such as asm/byteorder.h, asm/socket.h, asm/sockios.h, sys/cdefs.h etc. 186 186 # 187 187 # Use '-idirafter': Don't interfere with include mechanics except where the ··· 198 198 199 199 $(obj)/%.bpf.o: $(src)/%.bpf.c $(EXTRA_BPF_HEADERS_SRC) $(obj)/vmlinux.h 200 200 @echo " CLANG-BPF " $@ 201 - $(Q)$(CLANG) -g -O2 -target bpf -D__TARGET_ARCH_$(SRCARCH) \ 201 + $(Q)$(CLANG) -g -O2 --target=bpf -D__TARGET_ARCH_$(SRCARCH) \ 202 202 -Wno-compare-distinct-pointer-types -I$(srctree)/include \ 203 203 -I$(srctree)/samples/bpf -I$(srctree)/tools/include \ 204 204 -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \
+2 -2
tools/bpf/bpftool/Documentation/bpftool-gen.rst
··· 260 260 This is example BPF application with two BPF programs and a mix of BPF maps 261 261 and global variables. Source code is split across two source code files. 262 262 263 - **$ clang -target bpf -g example1.bpf.c -o example1.bpf.o** 263 + **$ clang --target=bpf -g example1.bpf.c -o example1.bpf.o** 264 264 265 - **$ clang -target bpf -g example2.bpf.c -o example2.bpf.o** 265 + **$ clang --target=bpf -g example2.bpf.c -o example2.bpf.o** 266 266 267 267 **$ bpftool gen object example.bpf.o example1.bpf.o example2.bpf.o** 268 268
+1 -1
tools/bpf/bpftool/Makefile
··· 216 216 -I$(srctree)/tools/include/uapi/ \ 217 217 -I$(LIBBPF_BOOTSTRAP_INCLUDE) \ 218 218 -g -O2 -Wall -fno-stack-protector \ 219 - -target bpf -c $< -o $@ 219 + --target=bpf -c $< -o $@ 220 220 $(Q)$(LLVM_STRIP) -g $@ 221 221 222 222 $(OUTPUT)%.skel.h: $(OUTPUT)%.bpf.o $(BPFTOOL_BOOTSTRAP)
+1 -1
tools/bpf/bpftool/btf_dumper.c
··· 835 835 case '|': 836 836 case ' ': 837 837 putchar('\\'); 838 - /* fallthrough */ 838 + fallthrough; 839 839 default: 840 840 putchar(*s); 841 841 }
+1 -1
tools/bpf/bpftool/feature.c
··· 757 757 case BPF_FUNC_probe_write_user: 758 758 if (!full_mode) 759 759 continue; 760 - /* fallthrough */ 760 + fallthrough; 761 761 default: 762 762 probe_res |= probe_helper_for_progtype(prog_type, supported_type, 763 763 define_prefix, id, prog_type_str,
+428 -4
tools/bpf/bpftool/link.c
··· 5 5 #include <linux/err.h> 6 6 #include <linux/netfilter.h> 7 7 #include <linux/netfilter_arp.h> 8 + #include <linux/perf_event.h> 8 9 #include <net/if.h> 9 10 #include <stdio.h> 10 11 #include <unistd.h> ··· 15 14 16 15 #include "json_writer.h" 17 16 #include "main.h" 17 + #include "xlated_dumper.h" 18 + 19 + #define PERF_HW_CACHE_LEN 128 18 20 19 21 static struct hashmap *link_table; 22 + static struct dump_data dd; 23 + 24 + static const char *perf_type_name[PERF_TYPE_MAX] = { 25 + [PERF_TYPE_HARDWARE] = "hardware", 26 + [PERF_TYPE_SOFTWARE] = "software", 27 + [PERF_TYPE_TRACEPOINT] = "tracepoint", 28 + [PERF_TYPE_HW_CACHE] = "hw-cache", 29 + [PERF_TYPE_RAW] = "raw", 30 + [PERF_TYPE_BREAKPOINT] = "breakpoint", 31 + }; 32 + 33 + const char *event_symbols_hw[PERF_COUNT_HW_MAX] = { 34 + [PERF_COUNT_HW_CPU_CYCLES] = "cpu-cycles", 35 + [PERF_COUNT_HW_INSTRUCTIONS] = "instructions", 36 + [PERF_COUNT_HW_CACHE_REFERENCES] = "cache-references", 37 + [PERF_COUNT_HW_CACHE_MISSES] = "cache-misses", 38 + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = "branch-instructions", 39 + [PERF_COUNT_HW_BRANCH_MISSES] = "branch-misses", 40 + [PERF_COUNT_HW_BUS_CYCLES] = "bus-cycles", 41 + [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = "stalled-cycles-frontend", 42 + [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = "stalled-cycles-backend", 43 + [PERF_COUNT_HW_REF_CPU_CYCLES] = "ref-cycles", 44 + }; 45 + 46 + const char *event_symbols_sw[PERF_COUNT_SW_MAX] = { 47 + [PERF_COUNT_SW_CPU_CLOCK] = "cpu-clock", 48 + [PERF_COUNT_SW_TASK_CLOCK] = "task-clock", 49 + [PERF_COUNT_SW_PAGE_FAULTS] = "page-faults", 50 + [PERF_COUNT_SW_CONTEXT_SWITCHES] = "context-switches", 51 + [PERF_COUNT_SW_CPU_MIGRATIONS] = "cpu-migrations", 52 + [PERF_COUNT_SW_PAGE_FAULTS_MIN] = "minor-faults", 53 + [PERF_COUNT_SW_PAGE_FAULTS_MAJ] = "major-faults", 54 + [PERF_COUNT_SW_ALIGNMENT_FAULTS] = "alignment-faults", 55 + [PERF_COUNT_SW_EMULATION_FAULTS] = "emulation-faults", 56 + [PERF_COUNT_SW_DUMMY] = "dummy", 57 + 
[PERF_COUNT_SW_BPF_OUTPUT] = "bpf-output", 58 + [PERF_COUNT_SW_CGROUP_SWITCHES] = "cgroup-switches", 59 + }; 60 + 61 + const char *evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX] = { 62 + [PERF_COUNT_HW_CACHE_L1D] = "L1-dcache", 63 + [PERF_COUNT_HW_CACHE_L1I] = "L1-icache", 64 + [PERF_COUNT_HW_CACHE_LL] = "LLC", 65 + [PERF_COUNT_HW_CACHE_DTLB] = "dTLB", 66 + [PERF_COUNT_HW_CACHE_ITLB] = "iTLB", 67 + [PERF_COUNT_HW_CACHE_BPU] = "branch", 68 + [PERF_COUNT_HW_CACHE_NODE] = "node", 69 + }; 70 + 71 + const char *evsel__hw_cache_op[PERF_COUNT_HW_CACHE_OP_MAX] = { 72 + [PERF_COUNT_HW_CACHE_OP_READ] = "load", 73 + [PERF_COUNT_HW_CACHE_OP_WRITE] = "store", 74 + [PERF_COUNT_HW_CACHE_OP_PREFETCH] = "prefetch", 75 + }; 76 + 77 + const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX] = { 78 + [PERF_COUNT_HW_CACHE_RESULT_ACCESS] = "refs", 79 + [PERF_COUNT_HW_CACHE_RESULT_MISS] = "misses", 80 + }; 81 + 82 + #define perf_event_name(array, id) ({ \ 83 + const char *event_str = NULL; \ 84 + \ 85 + if ((id) >= 0 && (id) < ARRAY_SIZE(array)) \ 86 + event_str = array[id]; \ 87 + event_str; \ 88 + }) 20 89 21 90 static int link_parse_fd(int *argc, char ***argv) 22 91 { ··· 237 166 return err; 238 167 } 239 168 169 + static int cmp_u64(const void *A, const void *B) 170 + { 171 + const __u64 *a = A, *b = B; 172 + 173 + return *a - *b; 174 + } 175 + 176 + static void 177 + show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr) 178 + { 179 + __u32 i, j = 0; 180 + __u64 *addrs; 181 + 182 + jsonw_bool_field(json_wtr, "retprobe", 183 + info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN); 184 + jsonw_uint_field(json_wtr, "func_cnt", info->kprobe_multi.count); 185 + jsonw_name(json_wtr, "funcs"); 186 + jsonw_start_array(json_wtr); 187 + addrs = u64_to_ptr(info->kprobe_multi.addrs); 188 + qsort(addrs, info->kprobe_multi.count, sizeof(addrs[0]), cmp_u64); 189 + 190 + /* Load it once for all. 
*/ 191 + if (!dd.sym_count) 192 + kernel_syms_load(&dd); 193 + for (i = 0; i < dd.sym_count; i++) { 194 + if (dd.sym_mapping[i].address != addrs[j]) 195 + continue; 196 + jsonw_start_object(json_wtr); 197 + jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address); 198 + jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name); 199 + /* Print null if it is vmlinux */ 200 + if (dd.sym_mapping[i].module[0] == '\0') { 201 + jsonw_name(json_wtr, "module"); 202 + jsonw_null(json_wtr); 203 + } else { 204 + jsonw_string_field(json_wtr, "module", dd.sym_mapping[i].module); 205 + } 206 + jsonw_end_object(json_wtr); 207 + if (j++ == info->kprobe_multi.count) 208 + break; 209 + } 210 + jsonw_end_array(json_wtr); 211 + } 212 + 213 + static void 214 + show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr) 215 + { 216 + jsonw_bool_field(wtr, "retprobe", info->perf_event.type == BPF_PERF_EVENT_KRETPROBE); 217 + jsonw_uint_field(wtr, "addr", info->perf_event.kprobe.addr); 218 + jsonw_string_field(wtr, "func", 219 + u64_to_ptr(info->perf_event.kprobe.func_name)); 220 + jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset); 221 + } 222 + 223 + static void 224 + show_perf_event_uprobe_json(struct bpf_link_info *info, json_writer_t *wtr) 225 + { 226 + jsonw_bool_field(wtr, "retprobe", info->perf_event.type == BPF_PERF_EVENT_URETPROBE); 227 + jsonw_string_field(wtr, "file", 228 + u64_to_ptr(info->perf_event.uprobe.file_name)); 229 + jsonw_uint_field(wtr, "offset", info->perf_event.uprobe.offset); 230 + } 231 + 232 + static void 233 + show_perf_event_tracepoint_json(struct bpf_link_info *info, json_writer_t *wtr) 234 + { 235 + jsonw_string_field(wtr, "tracepoint", 236 + u64_to_ptr(info->perf_event.tracepoint.tp_name)); 237 + } 238 + 239 + static char *perf_config_hw_cache_str(__u64 config) 240 + { 241 + const char *hw_cache, *result, *op; 242 + char *str = malloc(PERF_HW_CACHE_LEN); 243 + 244 + if (!str) { 245 + p_err("mem alloc failed"); 246 + 
return NULL; 247 + } 248 + 249 + hw_cache = perf_event_name(evsel__hw_cache, config & 0xff); 250 + if (hw_cache) 251 + snprintf(str, PERF_HW_CACHE_LEN, "%s-", hw_cache); 252 + else 253 + snprintf(str, PERF_HW_CACHE_LEN, "%lld-", config & 0xff); 254 + 255 + op = perf_event_name(evsel__hw_cache_op, (config >> 8) & 0xff); 256 + if (op) 257 + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str), 258 + "%s-", op); 259 + else 260 + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str), 261 + "%lld-", (config >> 8) & 0xff); 262 + 263 + result = perf_event_name(evsel__hw_cache_result, config >> 16); 264 + if (result) 265 + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str), 266 + "%s", result); 267 + else 268 + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str), 269 + "%lld", config >> 16); 270 + return str; 271 + } 272 + 273 + static const char *perf_config_str(__u32 type, __u64 config) 274 + { 275 + const char *perf_config; 276 + 277 + switch (type) { 278 + case PERF_TYPE_HARDWARE: 279 + perf_config = perf_event_name(event_symbols_hw, config); 280 + break; 281 + case PERF_TYPE_SOFTWARE: 282 + perf_config = perf_event_name(event_symbols_sw, config); 283 + break; 284 + case PERF_TYPE_HW_CACHE: 285 + perf_config = perf_config_hw_cache_str(config); 286 + break; 287 + default: 288 + perf_config = NULL; 289 + break; 290 + } 291 + return perf_config; 292 + } 293 + 294 + static void 295 + show_perf_event_event_json(struct bpf_link_info *info, json_writer_t *wtr) 296 + { 297 + __u64 config = info->perf_event.event.config; 298 + __u32 type = info->perf_event.event.type; 299 + const char *perf_type, *perf_config; 300 + 301 + perf_type = perf_event_name(perf_type_name, type); 302 + if (perf_type) 303 + jsonw_string_field(wtr, "event_type", perf_type); 304 + else 305 + jsonw_uint_field(wtr, "event_type", type); 306 + 307 + perf_config = perf_config_str(type, config); 308 + if (perf_config) 309 + jsonw_string_field(wtr, "event_config", perf_config); 310 
+ else 311 + jsonw_uint_field(wtr, "event_config", config); 312 + 313 + if (type == PERF_TYPE_HW_CACHE && perf_config) 314 + free((void *)perf_config); 315 + } 316 + 240 317 static int show_link_close_json(int fd, struct bpf_link_info *info) 241 318 { 242 319 struct bpf_prog_info prog_info; ··· 436 217 case BPF_LINK_TYPE_STRUCT_OPS: 437 218 jsonw_uint_field(json_wtr, "map_id", 438 219 info->struct_ops.map_id); 220 + break; 221 + case BPF_LINK_TYPE_KPROBE_MULTI: 222 + show_kprobe_multi_json(info, json_wtr); 223 + break; 224 + case BPF_LINK_TYPE_PERF_EVENT: 225 + switch (info->perf_event.type) { 226 + case BPF_PERF_EVENT_EVENT: 227 + show_perf_event_event_json(info, json_wtr); 228 + break; 229 + case BPF_PERF_EVENT_TRACEPOINT: 230 + show_perf_event_tracepoint_json(info, json_wtr); 231 + break; 232 + case BPF_PERF_EVENT_KPROBE: 233 + case BPF_PERF_EVENT_KRETPROBE: 234 + show_perf_event_kprobe_json(info, json_wtr); 235 + break; 236 + case BPF_PERF_EVENT_UPROBE: 237 + case BPF_PERF_EVENT_URETPROBE: 238 + show_perf_event_uprobe_json(info, json_wtr); 239 + break; 240 + default: 241 + break; 242 + } 439 243 break; 440 244 default: 441 245 break; ··· 593 351 printf(" flags 0x%x", info->netfilter.flags); 594 352 } 595 353 354 + static void show_kprobe_multi_plain(struct bpf_link_info *info) 355 + { 356 + __u32 i, j = 0; 357 + __u64 *addrs; 358 + 359 + if (!info->kprobe_multi.count) 360 + return; 361 + 362 + if (info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN) 363 + printf("\n\tkretprobe.multi "); 364 + else 365 + printf("\n\tkprobe.multi "); 366 + printf("func_cnt %u ", info->kprobe_multi.count); 367 + addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs); 368 + qsort(addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64); 369 + 370 + /* Load it once for all. 
*/ 371 + if (!dd.sym_count) 372 + kernel_syms_load(&dd); 373 + if (!dd.sym_count) 374 + return; 375 + 376 + printf("\n\t%-16s %s", "addr", "func [module]"); 377 + for (i = 0; i < dd.sym_count; i++) { 378 + if (dd.sym_mapping[i].address != addrs[j]) 379 + continue; 380 + printf("\n\t%016lx %s", 381 + dd.sym_mapping[i].address, dd.sym_mapping[i].name); 382 + if (dd.sym_mapping[i].module[0] != '\0') 383 + printf(" [%s] ", dd.sym_mapping[i].module); 384 + else 385 + printf(" "); 386 + 387 + if (j++ == info->kprobe_multi.count) 388 + break; 389 + } 390 + } 391 + 392 + static void show_perf_event_kprobe_plain(struct bpf_link_info *info) 393 + { 394 + const char *buf; 395 + 396 + buf = u64_to_ptr(info->perf_event.kprobe.func_name); 397 + if (buf[0] == '\0' && !info->perf_event.kprobe.addr) 398 + return; 399 + 400 + if (info->perf_event.type == BPF_PERF_EVENT_KRETPROBE) 401 + printf("\n\tkretprobe "); 402 + else 403 + printf("\n\tkprobe "); 404 + if (info->perf_event.kprobe.addr) 405 + printf("%llx ", info->perf_event.kprobe.addr); 406 + printf("%s", buf); 407 + if (info->perf_event.kprobe.offset) 408 + printf("+%#x", info->perf_event.kprobe.offset); 409 + printf(" "); 410 + } 411 + 412 + static void show_perf_event_uprobe_plain(struct bpf_link_info *info) 413 + { 414 + const char *buf; 415 + 416 + buf = u64_to_ptr(info->perf_event.uprobe.file_name); 417 + if (buf[0] == '\0') 418 + return; 419 + 420 + if (info->perf_event.type == BPF_PERF_EVENT_URETPROBE) 421 + printf("\n\turetprobe "); 422 + else 423 + printf("\n\tuprobe "); 424 + printf("%s+%#x ", buf, info->perf_event.uprobe.offset); 425 + } 426 + 427 + static void show_perf_event_tracepoint_plain(struct bpf_link_info *info) 428 + { 429 + const char *buf; 430 + 431 + buf = u64_to_ptr(info->perf_event.tracepoint.tp_name); 432 + if (buf[0] == '\0') 433 + return; 434 + 435 + printf("\n\ttracepoint %s ", buf); 436 + } 437 + 438 + static void show_perf_event_event_plain(struct bpf_link_info *info) 439 + { 440 + __u64 config 
= info->perf_event.event.config; 441 + __u32 type = info->perf_event.event.type; 442 + const char *perf_type, *perf_config; 443 + 444 + printf("\n\tevent "); 445 + perf_type = perf_event_name(perf_type_name, type); 446 + if (perf_type) 447 + printf("%s:", perf_type); 448 + else 449 + printf("%u :", type); 450 + 451 + perf_config = perf_config_str(type, config); 452 + if (perf_config) 453 + printf("%s ", perf_config); 454 + else 455 + printf("%llu ", config); 456 + 457 + if (type == PERF_TYPE_HW_CACHE && perf_config) 458 + free((void *)perf_config); 459 + } 460 + 596 461 static int show_link_close_plain(int fd, struct bpf_link_info *info) 597 462 { 598 463 struct bpf_prog_info prog_info; ··· 745 396 case BPF_LINK_TYPE_NETFILTER: 746 397 netfilter_dump_plain(info); 747 398 break; 399 + case BPF_LINK_TYPE_KPROBE_MULTI: 400 + show_kprobe_multi_plain(info); 401 + break; 402 + case BPF_LINK_TYPE_PERF_EVENT: 403 + switch (info->perf_event.type) { 404 + case BPF_PERF_EVENT_EVENT: 405 + show_perf_event_event_plain(info); 406 + break; 407 + case BPF_PERF_EVENT_TRACEPOINT: 408 + show_perf_event_tracepoint_plain(info); 409 + break; 410 + case BPF_PERF_EVENT_KPROBE: 411 + case BPF_PERF_EVENT_KRETPROBE: 412 + show_perf_event_kprobe_plain(info); 413 + break; 414 + case BPF_PERF_EVENT_UPROBE: 415 + case BPF_PERF_EVENT_URETPROBE: 416 + show_perf_event_uprobe_plain(info); 417 + break; 418 + default: 419 + break; 420 + } 421 + break; 748 422 default: 749 423 break; 750 424 } ··· 789 417 { 790 418 struct bpf_link_info info; 791 419 __u32 len = sizeof(info); 792 - char buf[256]; 420 + __u64 *addrs = NULL; 421 + char buf[PATH_MAX]; 422 + int count; 793 423 int err; 794 424 795 425 memset(&info, 0, sizeof(info)); 426 + buf[0] = '\0'; 796 427 again: 797 428 err = bpf_link_get_info_by_fd(fd, &info, &len); 798 429 if (err) { ··· 806 431 } 807 432 if (info.type == BPF_LINK_TYPE_RAW_TRACEPOINT && 808 433 !info.raw_tracepoint.tp_name) { 809 - info.raw_tracepoint.tp_name = (unsigned long)&buf; 
434 + info.raw_tracepoint.tp_name = ptr_to_u64(&buf); 810 435 info.raw_tracepoint.tp_name_len = sizeof(buf); 811 436 goto again; 812 437 } 813 438 if (info.type == BPF_LINK_TYPE_ITER && 814 439 !info.iter.target_name) { 815 - info.iter.target_name = (unsigned long)&buf; 440 + info.iter.target_name = ptr_to_u64(&buf); 816 441 info.iter.target_name_len = sizeof(buf); 817 442 goto again; 443 + } 444 + if (info.type == BPF_LINK_TYPE_KPROBE_MULTI && 445 + !info.kprobe_multi.addrs) { 446 + count = info.kprobe_multi.count; 447 + if (count) { 448 + addrs = calloc(count, sizeof(__u64)); 449 + if (!addrs) { 450 + p_err("mem alloc failed"); 451 + close(fd); 452 + return -ENOMEM; 453 + } 454 + info.kprobe_multi.addrs = ptr_to_u64(addrs); 455 + goto again; 456 + } 457 + } 458 + if (info.type == BPF_LINK_TYPE_PERF_EVENT) { 459 + switch (info.perf_event.type) { 460 + case BPF_PERF_EVENT_TRACEPOINT: 461 + if (!info.perf_event.tracepoint.tp_name) { 462 + info.perf_event.tracepoint.tp_name = ptr_to_u64(&buf); 463 + info.perf_event.tracepoint.name_len = sizeof(buf); 464 + goto again; 465 + } 466 + break; 467 + case BPF_PERF_EVENT_KPROBE: 468 + case BPF_PERF_EVENT_KRETPROBE: 469 + if (!info.perf_event.kprobe.func_name) { 470 + info.perf_event.kprobe.func_name = ptr_to_u64(&buf); 471 + info.perf_event.kprobe.name_len = sizeof(buf); 472 + goto again; 473 + } 474 + break; 475 + case BPF_PERF_EVENT_UPROBE: 476 + case BPF_PERF_EVENT_URETPROBE: 477 + if (!info.perf_event.uprobe.file_name) { 478 + info.perf_event.uprobe.file_name = ptr_to_u64(&buf); 479 + info.perf_event.uprobe.name_len = sizeof(buf); 480 + goto again; 481 + } 482 + break; 483 + default: 484 + break; 485 + } 818 486 } 819 487 820 488 if (json_output) ··· 865 447 else 866 448 show_link_close_plain(fd, &info); 867 449 450 + if (addrs) 451 + free(addrs); 868 452 close(fd); 869 453 return 0; 870 454 } ··· 891 471 fd = link_parse_fd(&argc, &argv); 892 472 if (fd < 0) 893 473 return fd; 894 - return do_show_link(fd); 474 + 
do_show_link(fd); 475 + goto out; 895 476 } 896 477 897 478 if (argc) ··· 931 510 if (show_pinned) 932 511 delete_pinned_obj_table(link_table); 933 512 513 + out: 514 + if (dd.sym_count) 515 + kernel_syms_destroy(&dd); 934 516 return errno == ENOENT ? 0 : -1; 935 517 } 936 518
+21 -5
tools/bpf/bpftool/skeleton/pid_iter.bpf.c
··· 15 15 BPF_OBJ_BTF, 16 16 }; 17 17 18 + struct bpf_perf_link___local { 19 + struct bpf_link link; 20 + struct file *perf_file; 21 + } __attribute__((preserve_access_index)); 22 + 23 + struct perf_event___local { 24 + u64 bpf_cookie; 25 + } __attribute__((preserve_access_index)); 26 + 27 + enum bpf_link_type___local { 28 + BPF_LINK_TYPE_PERF_EVENT___local = 7, 29 + }; 30 + 18 31 extern const void bpf_link_fops __ksym; 19 32 extern const void bpf_map_fops __ksym; 20 33 extern const void bpf_prog_fops __ksym; ··· 54 41 /* could be used only with BPF_LINK_TYPE_PERF_EVENT links */ 55 42 static __u64 get_bpf_cookie(struct bpf_link *link) 56 43 { 57 - struct bpf_perf_link *perf_link; 58 - struct perf_event *event; 44 + struct bpf_perf_link___local *perf_link; 45 + struct perf_event___local *event; 59 46 60 - perf_link = container_of(link, struct bpf_perf_link, link); 47 + perf_link = container_of(link, struct bpf_perf_link___local, link); 61 48 event = BPF_CORE_READ(perf_link, perf_file, private_data); 62 49 return BPF_CORE_READ(event, bpf_cookie); 63 50 } ··· 97 84 e.pid = task->tgid; 98 85 e.id = get_obj_id(file->private_data, obj_type); 99 86 100 - if (obj_type == BPF_OBJ_LINK) { 87 + if (obj_type == BPF_OBJ_LINK && 88 + bpf_core_enum_value_exists(enum bpf_link_type___local, 89 + BPF_LINK_TYPE_PERF_EVENT___local)) { 101 90 struct bpf_link *link = (struct bpf_link *) file->private_data; 102 91 103 - if (BPF_CORE_READ(link, type) == BPF_LINK_TYPE_PERF_EVENT) { 92 + if (link->type == bpf_core_enum_value(enum bpf_link_type___local, 93 + BPF_LINK_TYPE_PERF_EVENT___local)) { 104 94 e.has_bpf_cookie = true; 105 95 e.bpf_cookie = get_bpf_cookie(link); 106 96 }
+17 -10
tools/bpf/bpftool/skeleton/profiler.bpf.c
··· 4 4 #include <bpf/bpf_helpers.h> 5 5 #include <bpf/bpf_tracing.h> 6 6 7 + struct bpf_perf_event_value___local { 8 + __u64 counter; 9 + __u64 enabled; 10 + __u64 running; 11 + } __attribute__((preserve_access_index)); 12 + 7 13 /* map of perf event fds, num_cpu * num_metric entries */ 8 14 struct { 9 15 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); ··· 21 15 struct { 22 16 __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 23 17 __uint(key_size, sizeof(u32)); 24 - __uint(value_size, sizeof(struct bpf_perf_event_value)); 18 + __uint(value_size, sizeof(struct bpf_perf_event_value___local)); 25 19 } fentry_readings SEC(".maps"); 26 20 27 21 /* accumulated readings */ 28 22 struct { 29 23 __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 30 24 __uint(key_size, sizeof(u32)); 31 - __uint(value_size, sizeof(struct bpf_perf_event_value)); 25 + __uint(value_size, sizeof(struct bpf_perf_event_value___local)); 32 26 } accum_readings SEC(".maps"); 33 27 34 28 /* sample counts, one per cpu */ ··· 45 39 SEC("fentry/XXX") 46 40 int BPF_PROG(fentry_XXX) 47 41 { 48 - struct bpf_perf_event_value *ptrs[MAX_NUM_MATRICS]; 42 + struct bpf_perf_event_value___local *ptrs[MAX_NUM_MATRICS]; 49 43 u32 key = bpf_get_smp_processor_id(); 50 44 u32 i; 51 45 ··· 59 53 } 60 54 61 55 for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) { 62 - struct bpf_perf_event_value reading; 56 + struct bpf_perf_event_value___local reading; 63 57 int err; 64 58 65 - err = bpf_perf_event_read_value(&events, key, &reading, 59 + err = bpf_perf_event_read_value(&events, key, (void *)&reading, 66 60 sizeof(reading)); 67 61 if (err) 68 62 return 0; ··· 74 68 } 75 69 76 70 static inline void 77 - fexit_update_maps(u32 id, struct bpf_perf_event_value *after) 71 + fexit_update_maps(u32 id, struct bpf_perf_event_value___local *after) 78 72 { 79 - struct bpf_perf_event_value *before, diff; 73 + struct bpf_perf_event_value___local *before, diff; 80 74 81 75 before = bpf_map_lookup_elem(&fentry_readings, &id); 82 76 /* only account 
samples with a valid fentry_reading */ 83 77 if (before && before->counter) { 84 - struct bpf_perf_event_value *accum; 78 + struct bpf_perf_event_value___local *accum; 85 79 86 80 diff.counter = after->counter - before->counter; 87 81 diff.enabled = after->enabled - before->enabled; ··· 99 93 SEC("fexit/XXX") 100 94 int BPF_PROG(fexit_XXX) 101 95 { 102 - struct bpf_perf_event_value readings[MAX_NUM_MATRICS]; 96 + struct bpf_perf_event_value___local readings[MAX_NUM_MATRICS]; 103 97 u32 cpu = bpf_get_smp_processor_id(); 104 98 u32 i, zero = 0; 105 99 int err; ··· 108 102 /* read all events before updating the maps, to reduce error */ 109 103 for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) { 110 104 err = bpf_perf_event_read_value(&events, cpu + i * num_cpu, 111 - readings + i, sizeof(*readings)); 105 + (void *)(readings + i), 106 + sizeof(*readings)); 112 107 if (err) 113 108 return 0; 114 109 }
+5 -1
tools/bpf/bpftool/xlated_dumper.c
··· 46 46 } 47 47 dd->sym_mapping = tmp; 48 48 sym = &dd->sym_mapping[dd->sym_count]; 49 - if (sscanf(buff, "%p %*c %s", &address, sym->name) != 2) 49 + 50 + /* module is optional */ 51 + sym->module[0] = '\0'; 52 + /* trim the square brackets around the module name */ 53 + if (sscanf(buff, "%p %*c %s [%[^]]s", &address, sym->name, sym->module) < 2) 50 54 continue; 51 55 sym->address = (unsigned long)address; 52 56 if (!strcmp(sym->name, "__bpf_call_base")) {
+2
tools/bpf/bpftool/xlated_dumper.h
··· 5 5 #define __BPF_TOOL_XLATED_DUMPER_H 6 6 7 7 #define SYM_MAX_NAME 256 8 + #define MODULE_MAX_NAME 64 8 9 9 10 struct bpf_prog_linfo; 10 11 11 12 struct kernel_sym { 12 13 unsigned long address; 13 14 char name[SYM_MAX_NAME]; 15 + char module[MODULE_MAX_NAME]; 14 16 }; 15 17 16 18 struct dump_data {
+1 -1
tools/bpf/runqslower/Makefile
··· 62 62 $(QUIET_GEN)$(BPFTOOL) gen skeleton $< > $@ 63 63 64 64 $(OUTPUT)/%.bpf.o: %.bpf.c $(BPFOBJ) | $(OUTPUT) 65 - $(QUIET_GEN)$(CLANG) -g -O2 -target bpf $(INCLUDES) \ 65 + $(QUIET_GEN)$(CLANG) -g -O2 --target=bpf $(INCLUDES) \ 66 66 -c $(filter %.c,$^) -o $@ && \ 67 67 $(LLVM_STRIP) -g $@ 68 68
+1 -1
tools/build/feature/Makefile
··· 372 372 $(BUILD) -lzstd 373 373 374 374 $(OUTPUT)test-clang-bpf-co-re.bin: 375 - $(CLANG) -S -g -target bpf -o - $(patsubst %.bin,%.c,$(@F)) | \ 375 + $(CLANG) -S -g --target=bpf -o - $(patsubst %.bin,%.c,$(@F)) | \ 376 376 grep BTF_KIND_VAR 377 377 378 378 $(OUTPUT)test-file-handle.bin:
+40
tools/include/uapi/linux/bpf.h
··· 1057 1057 MAX_BPF_LINK_TYPE, 1058 1058 }; 1059 1059 1060 + enum bpf_perf_event_type { 1061 + BPF_PERF_EVENT_UNSPEC = 0, 1062 + BPF_PERF_EVENT_UPROBE = 1, 1063 + BPF_PERF_EVENT_URETPROBE = 2, 1064 + BPF_PERF_EVENT_KPROBE = 3, 1065 + BPF_PERF_EVENT_KRETPROBE = 4, 1066 + BPF_PERF_EVENT_TRACEPOINT = 5, 1067 + BPF_PERF_EVENT_EVENT = 6, 1068 + }; 1069 + 1060 1070 /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command 1061 1071 * 1062 1072 * NONE(default): No further bpf programs allowed in the subtree. ··· 6449 6439 __s32 priority; 6450 6440 __u32 flags; 6451 6441 } netfilter; 6442 + struct { 6443 + __aligned_u64 addrs; 6444 + __u32 count; /* in/out: kprobe_multi function count */ 6445 + __u32 flags; 6446 + } kprobe_multi; 6447 + struct { 6448 + __u32 type; /* enum bpf_perf_event_type */ 6449 + __u32 :32; 6450 + union { 6451 + struct { 6452 + __aligned_u64 file_name; /* in/out */ 6453 + __u32 name_len; 6454 + __u32 offset; /* offset from file_name */ 6455 + } uprobe; /* BPF_PERF_EVENT_UPROBE, BPF_PERF_EVENT_URETPROBE */ 6456 + struct { 6457 + __aligned_u64 func_name; /* in/out */ 6458 + __u32 name_len; 6459 + __u32 offset; /* offset from func_name */ 6460 + __u64 addr; 6461 + } kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */ 6462 + struct { 6463 + __aligned_u64 tp_name; /* in/out */ 6464 + __u32 name_len; 6465 + } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */ 6466 + struct { 6467 + __u64 config; 6468 + __u32 type; 6469 + } event; /* BPF_PERF_EVENT_EVENT */ 6470 + }; 6471 + } perf_event; 6452 6472 }; 6453 6473 } __attribute__((aligned(8))); 6454 6474
+8
tools/lib/bpf/bpf.c
··· 741 741 if (!OPTS_ZEROED(opts, tracing)) 742 742 return libbpf_err(-EINVAL); 743 743 break; 744 + case BPF_NETFILTER: 745 + attr.link_create.netfilter.pf = OPTS_GET(opts, netfilter.pf, 0); 746 + attr.link_create.netfilter.hooknum = OPTS_GET(opts, netfilter.hooknum, 0); 747 + attr.link_create.netfilter.priority = OPTS_GET(opts, netfilter.priority, 0); 748 + attr.link_create.netfilter.flags = OPTS_GET(opts, netfilter.flags, 0); 749 + if (!OPTS_ZEROED(opts, netfilter)) 750 + return libbpf_err(-EINVAL); 751 + break; 744 752 default: 745 753 if (!OPTS_ZEROED(opts, flags)) 746 754 return libbpf_err(-EINVAL);
+6
tools/lib/bpf/bpf.h
··· 349 349 struct { 350 350 __u64 cookie; 351 351 } tracing; 352 + struct { 353 + __u32 pf; 354 + __u32 hooknum; 355 + __s32 priority; 356 + __u32 flags; 357 + } netfilter; 352 358 }; 353 359 size_t :0; 354 360 };
-10
tools/lib/bpf/hashmap.h
··· 80 80 size_t sz; 81 81 }; 82 82 83 - #define HASHMAP_INIT(hash_fn, equal_fn, ctx) { \ 84 - .hash_fn = (hash_fn), \ 85 - .equal_fn = (equal_fn), \ 86 - .ctx = (ctx), \ 87 - .buckets = NULL, \ 88 - .cap = 0, \ 89 - .cap_bits = 0, \ 90 - .sz = 0, \ 91 - } 92 - 93 83 void hashmap__init(struct hashmap *map, hashmap_hash_fn hash_fn, 94 84 hashmap_equal_fn equal_fn, void *ctx); 95 85 struct hashmap *hashmap__new(hashmap_hash_fn hash_fn,
+234 -24
tools/lib/bpf/libbpf.c
··· 5471 5471 err = bpf_btf_get_next_id(id, &id); 5472 5472 if (err && errno == ENOENT) 5473 5473 return 0; 5474 + if (err && errno == EPERM) { 5475 + pr_debug("skipping module BTFs loading, missing privileges\n"); 5476 + return 0; 5477 + } 5474 5478 if (err) { 5475 5479 err = -errno; 5476 5480 pr_warn("failed to iterate BTF objects: %d\n", err); ··· 6161 6157 if (main_prog == subprog) 6162 6158 return 0; 6163 6159 relos = libbpf_reallocarray(main_prog->reloc_desc, new_cnt, sizeof(*relos)); 6164 - if (!relos) 6160 + /* if new count is zero, reallocarray can return a valid NULL result; 6161 + * in this case the previous pointer will be freed, so we *have to* 6162 + * reassign old pointer to the new value (even if it's NULL) 6163 + */ 6164 + if (!relos && new_cnt) 6165 6165 return -ENOMEM; 6166 6166 if (subprog->nr_reloc) 6167 6167 memcpy(relos + main_prog->nr_reloc, subprog->reloc_desc, ··· 8536 8528 return -EBUSY; 8537 8529 8538 8530 insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns)); 8539 - if (!insns) { 8531 + /* NULL is a valid return from reallocarray if the new count is zero */ 8532 + if (!insns && new_insn_cnt) { 8540 8533 pr_warn("prog '%s': failed to realloc prog code\n", prog->name); 8541 8534 return -ENOMEM; 8542 8535 } ··· 8567 8558 return prog->type; 8568 8559 } 8569 8560 8561 + static size_t custom_sec_def_cnt; 8562 + static struct bpf_sec_def *custom_sec_defs; 8563 + static struct bpf_sec_def custom_fallback_def; 8564 + static bool has_custom_fallback_def; 8565 + static int last_custom_sec_def_handler_id; 8566 + 8570 8567 int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type) 8571 8568 { 8572 8569 if (prog->obj->loaded) 8573 8570 return libbpf_err(-EBUSY); 8574 8571 8572 + /* if type is not changed, do nothing */ 8573 + if (prog->type == type) 8574 + return 0; 8575 + 8575 8576 prog->type = type; 8576 - prog->sec_def = NULL; 8577 + 8578 + /* If a program type was changed, we need to reset associated SEC() 8579 
+ * handler, as it will be invalid now. The only exception is a generic 8580 + * fallback handler, which by definition is program type-agnostic and 8581 + * is a catch-all custom handler, optionally set by the application, 8582 + * so should be able to handle any type of BPF program. 8583 + */ 8584 + if (prog->sec_def != &custom_fallback_def) 8585 + prog->sec_def = NULL; 8577 8586 return 0; 8578 8587 } 8579 8588 ··· 8767 8740 SEC_DEF("netfilter", NETFILTER, BPF_NETFILTER, SEC_NONE), 8768 8741 }; 8769 8742 8770 - static size_t custom_sec_def_cnt; 8771 - static struct bpf_sec_def *custom_sec_defs; 8772 - static struct bpf_sec_def custom_fallback_def; 8773 - static bool has_custom_fallback_def; 8774 - 8775 - static int last_custom_sec_def_handler_id; 8776 - 8777 8743 int libbpf_register_prog_handler(const char *sec, 8778 8744 enum bpf_prog_type prog_type, 8779 8745 enum bpf_attach_type exp_attach_type, ··· 8846 8826 8847 8827 /* try to shrink the array, but it's ok if we couldn't */ 8848 8828 sec_defs = libbpf_reallocarray(custom_sec_defs, custom_sec_def_cnt, sizeof(*sec_defs)); 8849 - if (sec_defs) 8829 + /* if new count is zero, reallocarray can return a valid NULL result; 8830 + * in this case the previous pointer will be freed, so we *have to* 8831 + * reassign old pointer to the new value (even if it's NULL) 8832 + */ 8833 + if (sec_defs || custom_sec_def_cnt == 0) 8850 8834 custom_sec_defs = sec_defs; 8851 8835 8852 8836 return 0; ··· 10248 10224 return use_debugfs() ? DEBUGFS"/uprobe_events" : TRACEFS"/uprobe_events"; 10249 10225 } 10250 10226 10227 + static const char *tracefs_available_filter_functions(void) 10228 + { 10229 + return use_debugfs() ? DEBUGFS"/available_filter_functions" 10230 + : TRACEFS"/available_filter_functions"; 10231 + } 10232 + 10233 + static const char *tracefs_available_filter_functions_addrs(void) 10234 + { 10235 + return use_debugfs() ? 
DEBUGFS"/available_filter_functions_addrs" 10236 + : TRACEFS"/available_filter_functions_addrs"; 10237 + } 10238 + 10251 10239 static void gen_kprobe_legacy_event_name(char *buf, size_t buf_sz, 10252 10240 const char *kfunc_name, size_t offset) 10253 10241 { ··· 10575 10539 size_t cnt; 10576 10540 }; 10577 10541 10578 - static int 10579 - resolve_kprobe_multi_cb(unsigned long long sym_addr, char sym_type, 10580 - const char *sym_name, void *ctx) 10542 + struct avail_kallsyms_data { 10543 + char **syms; 10544 + size_t cnt; 10545 + struct kprobe_multi_resolve *res; 10546 + }; 10547 + 10548 + static int avail_func_cmp(const void *a, const void *b) 10581 10549 { 10582 - struct kprobe_multi_resolve *res = ctx; 10550 + return strcmp(*(const char **)a, *(const char **)b); 10551 + } 10552 + 10553 + static int avail_kallsyms_cb(unsigned long long sym_addr, char sym_type, 10554 + const char *sym_name, void *ctx) 10555 + { 10556 + struct avail_kallsyms_data *data = ctx; 10557 + struct kprobe_multi_resolve *res = data->res; 10583 10558 int err; 10584 10559 10585 - if (!glob_match(sym_name, res->pattern)) 10560 + if (!bsearch(&sym_name, data->syms, data->cnt, sizeof(*data->syms), avail_func_cmp)) 10586 10561 return 0; 10587 10562 10588 - err = libbpf_ensure_mem((void **) &res->addrs, &res->cap, sizeof(unsigned long), 10589 - res->cnt + 1); 10563 + err = libbpf_ensure_mem((void **)&res->addrs, &res->cap, sizeof(*res->addrs), res->cnt + 1); 10590 10564 if (err) 10591 10565 return err; 10592 10566 10593 - res->addrs[res->cnt++] = (unsigned long) sym_addr; 10567 + res->addrs[res->cnt++] = (unsigned long)sym_addr; 10594 10568 return 0; 10569 + } 10570 + 10571 + static int libbpf_available_kallsyms_parse(struct kprobe_multi_resolve *res) 10572 + { 10573 + const char *available_functions_file = tracefs_available_filter_functions(); 10574 + struct avail_kallsyms_data data; 10575 + char sym_name[500]; 10576 + FILE *f; 10577 + int err = 0, ret, i; 10578 + char **syms = NULL; 10579 + 
size_t cap = 0, cnt = 0; 10580 + 10581 + f = fopen(available_functions_file, "re"); 10582 + if (!f) { 10583 + err = -errno; 10584 + pr_warn("failed to open %s: %d\n", available_functions_file, err); 10585 + return err; 10586 + } 10587 + 10588 + while (true) { 10589 + char *name; 10590 + 10591 + ret = fscanf(f, "%499s%*[^\n]\n", sym_name); 10592 + if (ret == EOF && feof(f)) 10593 + break; 10594 + 10595 + if (ret != 1) { 10596 + pr_warn("failed to parse available_filter_functions entry: %d\n", ret); 10597 + err = -EINVAL; 10598 + goto cleanup; 10599 + } 10600 + 10601 + if (!glob_match(sym_name, res->pattern)) 10602 + continue; 10603 + 10604 + err = libbpf_ensure_mem((void **)&syms, &cap, sizeof(*syms), cnt + 1); 10605 + if (err) 10606 + goto cleanup; 10607 + 10608 + name = strdup(sym_name); 10609 + if (!name) { 10610 + err = -errno; 10611 + goto cleanup; 10612 + } 10613 + 10614 + syms[cnt++] = name; 10615 + } 10616 + 10617 + /* no entries found, bail out */ 10618 + if (cnt == 0) { 10619 + err = -ENOENT; 10620 + goto cleanup; 10621 + } 10622 + 10623 + /* sort available functions */ 10624 + qsort(syms, cnt, sizeof(*syms), avail_func_cmp); 10625 + 10626 + data.syms = syms; 10627 + data.res = res; 10628 + data.cnt = cnt; 10629 + libbpf_kallsyms_parse(avail_kallsyms_cb, &data); 10630 + 10631 + if (res->cnt == 0) 10632 + err = -ENOENT; 10633 + 10634 + cleanup: 10635 + for (i = 0; i < cnt; i++) 10636 + free((char *)syms[i]); 10637 + free(syms); 10638 + 10639 + fclose(f); 10640 + return err; 10641 + } 10642 + 10643 + static bool has_available_filter_functions_addrs(void) 10644 + { 10645 + return access(tracefs_available_filter_functions_addrs(), R_OK) != -1; 10646 + } 10647 + 10648 + static int libbpf_available_kprobes_parse(struct kprobe_multi_resolve *res) 10649 + { 10650 + const char *available_path = tracefs_available_filter_functions_addrs(); 10651 + char sym_name[500]; 10652 + FILE *f; 10653 + int ret, err = 0; 10654 + unsigned long long sym_addr; 10655 + 10656 + f = 
fopen(available_path, "re"); 10657 + if (!f) { 10658 + err = -errno; 10659 + pr_warn("failed to open %s: %d\n", available_path, err); 10660 + return err; 10661 + } 10662 + 10663 + while (true) { 10664 + ret = fscanf(f, "%llx %499s%*[^\n]\n", &sym_addr, sym_name); 10665 + if (ret == EOF && feof(f)) 10666 + break; 10667 + 10668 + if (ret != 2) { 10669 + pr_warn("failed to parse available_filter_functions_addrs entry: %d\n", 10670 + ret); 10671 + err = -EINVAL; 10672 + goto cleanup; 10673 + } 10674 + 10675 + if (!glob_match(sym_name, res->pattern)) 10676 + continue; 10677 + 10678 + err = libbpf_ensure_mem((void **)&res->addrs, &res->cap, 10679 + sizeof(*res->addrs), res->cnt + 1); 10680 + if (err) 10681 + goto cleanup; 10682 + 10683 + res->addrs[res->cnt++] = (unsigned long)sym_addr; 10684 + } 10685 + 10686 + if (res->cnt == 0) 10687 + err = -ENOENT; 10688 + 10689 + cleanup: 10690 + fclose(f); 10691 + return err; 10595 10692 } 10596 10693 10597 10694 struct bpf_link * ··· 10763 10594 return libbpf_err_ptr(-EINVAL); 10764 10595 10765 10596 if (pattern) { 10766 - err = libbpf_kallsyms_parse(resolve_kprobe_multi_cb, &res); 10597 + if (has_available_filter_functions_addrs()) 10598 + err = libbpf_available_kprobes_parse(&res); 10599 + else 10600 + err = libbpf_available_kallsyms_parse(&res); 10767 10601 if (err) 10768 10602 goto error; 10769 - if (!res.cnt) { 10770 - err = -ENOENT; 10771 - goto error; 10772 - } 10773 10603 addrs = res.addrs; 10774 10604 cnt = res.cnt; 10775 10605 } ··· 11977 11809 { 11978 11810 *link = bpf_program__attach_iter(prog, NULL); 11979 11811 return libbpf_get_error(*link); 11812 + } 11813 + 11814 + struct bpf_link *bpf_program__attach_netfilter(const struct bpf_program *prog, 11815 + const struct bpf_netfilter_opts *opts) 11816 + { 11817 + LIBBPF_OPTS(bpf_link_create_opts, lopts); 11818 + struct bpf_link *link; 11819 + int prog_fd, link_fd; 11820 + 11821 + if (!OPTS_VALID(opts, bpf_netfilter_opts)) 11822 + return libbpf_err_ptr(-EINVAL); 11823 + 
11824 + prog_fd = bpf_program__fd(prog); 11825 + if (prog_fd < 0) { 11826 + pr_warn("prog '%s': can't attach before loaded\n", prog->name); 11827 + return libbpf_err_ptr(-EINVAL); 11828 + } 11829 + 11830 + link = calloc(1, sizeof(*link)); 11831 + if (!link) 11832 + return libbpf_err_ptr(-ENOMEM); 11833 + 11834 + link->detach = &bpf_link__detach_fd; 11835 + 11836 + lopts.netfilter.pf = OPTS_GET(opts, pf, 0); 11837 + lopts.netfilter.hooknum = OPTS_GET(opts, hooknum, 0); 11838 + lopts.netfilter.priority = OPTS_GET(opts, priority, 0); 11839 + lopts.netfilter.flags = OPTS_GET(opts, flags, 0); 11840 + 11841 + link_fd = bpf_link_create(prog_fd, 0, BPF_NETFILTER, &lopts); 11842 + if (link_fd < 0) { 11843 + char errmsg[STRERR_BUFSIZE]; 11844 + 11845 + link_fd = -errno; 11846 + free(link); 11847 + pr_warn("prog '%s': failed to attach to netfilter: %s\n", 11848 + prog->name, libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg))); 11849 + return libbpf_err_ptr(link_fd); 11850 + } 11851 + link->fd = link_fd; 11852 + 11853 + return link; 11980 11854 } 11981 11855 11982 11856 struct bpf_link *bpf_program__attach(const struct bpf_program *prog)
+15
tools/lib/bpf/libbpf.h
··· 718 718 bpf_program__attach_freplace(const struct bpf_program *prog, 719 719 int target_fd, const char *attach_func_name); 720 720 721 + struct bpf_netfilter_opts { 722 + /* size of this struct, for forward/backward compatibility */ 723 + size_t sz; 724 + 725 + __u32 pf; 726 + __u32 hooknum; 727 + __s32 priority; 728 + __u32 flags; 729 + }; 730 + #define bpf_netfilter_opts__last_field flags 731 + 732 + LIBBPF_API struct bpf_link * 733 + bpf_program__attach_netfilter(const struct bpf_program *prog, 734 + const struct bpf_netfilter_opts *opts); 735 + 721 736 struct bpf_map; 722 737 723 738 LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
+1
tools/lib/bpf/libbpf.map
··· 395 395 LIBBPF_1.3.0 { 396 396 global: 397 397 bpf_obj_pin_opts; 398 + bpf_program__attach_netfilter; 398 399 } LIBBPF_1.2.0;
+4 -1
tools/lib/bpf/usdt.c
··· 852 852 * system is so exhausted on memory, it's the least of user's 853 853 * concerns, probably. 854 854 * So just do our best here to return those IDs to usdt_manager. 855 + * Another edge case when we can legitimately get NULL is when 856 + * new_cnt is zero, which can happen in some edge cases, so we 857 + * need to be careful about that. 855 858 */ 856 - if (new_free_ids) { 859 + if (new_free_ids || new_cnt == 0) { 857 860 memcpy(new_free_ids + man->free_spec_cnt, usdt_link->spec_ids, 858 861 usdt_link->spec_cnt * sizeof(*usdt_link->spec_ids)); 859 862 man->free_spec_ids = new_free_ids;
+2
tools/testing/selftests/bpf/DENYLIST.aarch64
··· 10 10 kprobe_multi_test/link_api_syms # link_fd unexpected link_fd: actual -95 < expected 0 11 11 kprobe_multi_test/skel_api # libbpf: failed to load BPF skeleton 'kprobe_multi': -3 12 12 module_attach # prog 'kprobe_multi': failed to auto-attach: -95 13 + fentry_test/fentry_many_args # fentry_many_args:FAIL:fentry_many_args_attach unexpected error: -524 14 + fexit_test/fexit_many_args # fexit_many_args:FAIL:fexit_many_args_attach unexpected error: -524
+10 -3
tools/testing/selftests/bpf/Makefile
··· 12 12 TOOLSINCDIR := $(TOOLSDIR)/include 13 13 BPFTOOLDIR := $(TOOLSDIR)/bpf/bpftool 14 14 APIDIR := $(TOOLSINCDIR)/uapi 15 + ifneq ($(O),) 16 + GENDIR := $(O)/include/generated 17 + else 15 18 GENDIR := $(abspath ../../../../include/generated) 19 + endif 16 20 GENHDR := $(GENDIR)/autoconf.h 17 21 HOSTPKG_CONFIG := pkg-config 18 22 ··· 335 331 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ) 336 332 337 333 # Get Clang's default includes on this system, as opposed to those seen by 338 - # '-target bpf'. This fixes "missing" files on some architectures/distros, 334 + # '--target=bpf'. This fixes "missing" files on some architectures/distros, 339 335 # such as asm/byteorder.h, asm/socket.h, asm/sockios.h, sys/cdefs.h etc. 340 336 # 341 337 # Use '-idirafter': Don't interfere with include mechanics except where the ··· 376 372 # $3 - CFLAGS 377 373 define CLANG_BPF_BUILD_RULE 378 374 $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2) 379 - $(Q)$(CLANG) $3 -O2 -target bpf -c $1 -mcpu=v3 -o $2 375 + $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v3 -o $2 380 376 endef 381 377 # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32 382 378 define CLANG_NOALU32_BPF_BUILD_RULE 383 379 $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2) 384 - $(Q)$(CLANG) $3 -O2 -target bpf -c $1 -mcpu=v2 -o $2 380 + $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v2 -o $2 385 381 endef 386 382 # Build BPF object using GCC 387 383 define GCC_BPF_BUILD_RULE ··· 648 644 $(OUTPUT)/bench_local_storage_rcu_tasks_trace.o: $(OUTPUT)/local_storage_rcu_tasks_trace_bench.skel.h 649 645 $(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.skel.h 650 646 $(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h 647 + $(OUTPUT)/bench_htab_mem.o: $(OUTPUT)/htab_mem_bench.skel.h 651 648 $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ) 652 649 $(OUTPUT)/bench: LDLIBS += -lm 653 650 $(OUTPUT)/bench: $(OUTPUT)/bench.o \ 654 651 $(TESTING_HELPERS) \ 655 652 
$(TRACE_HELPERS) \ 653 + $(CGROUP_HELPERS) \ 656 654 $(OUTPUT)/bench_count.o \ 657 655 $(OUTPUT)/bench_rename.o \ 658 656 $(OUTPUT)/bench_trigger.o \ ··· 667 661 $(OUTPUT)/bench_local_storage_rcu_tasks_trace.o \ 668 662 $(OUTPUT)/bench_bpf_hashmap_lookup.o \ 669 663 $(OUTPUT)/bench_local_storage_create.o \ 664 + $(OUTPUT)/bench_htab_mem.o \ 670 665 # 671 666 $(call msg,BINARY,,$@) 672 667 $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
+4
tools/testing/selftests/bpf/bench.c
··· 279 279 extern struct argp bench_strncmp_argp; 280 280 extern struct argp bench_hashmap_lookup_argp; 281 281 extern struct argp bench_local_storage_create_argp; 282 + extern struct argp bench_htab_mem_argp; 282 283 283 284 static const struct argp_child bench_parsers[] = { 284 285 { &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 }, ··· 291 290 "local_storage RCU Tasks Trace slowdown benchmark", 0 }, 292 291 { &bench_hashmap_lookup_argp, 0, "Hashmap lookup benchmark", 0 }, 293 292 { &bench_local_storage_create_argp, 0, "local-storage-create benchmark", 0 }, 293 + { &bench_htab_mem_argp, 0, "hash map memory benchmark", 0 }, 294 294 {}, 295 295 }; 296 296 ··· 522 520 extern const struct bench bench_local_storage_tasks_trace; 523 521 extern const struct bench bench_bpf_hashmap_lookup; 524 522 extern const struct bench bench_local_storage_create; 523 + extern const struct bench bench_htab_mem; 525 524 526 525 static const struct bench *benchs[] = { 527 526 &bench_count_global, ··· 564 561 &bench_local_storage_tasks_trace, 565 562 &bench_bpf_hashmap_lookup, 566 563 &bench_local_storage_create, 564 + &bench_htab_mem, 567 565 }; 568 566 569 567 static void find_benchmark(void)
+350
tools/testing/selftests/bpf/benchs/bench_htab_mem.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (C) 2023. Huawei Technologies Co., Ltd */ 3 + #include <argp.h> 4 + #include <stdbool.h> 5 + #include <pthread.h> 6 + #include <sys/types.h> 7 + #include <sys/stat.h> 8 + #include <sys/param.h> 9 + #include <fcntl.h> 10 + 11 + #include "bench.h" 12 + #include "bpf_util.h" 13 + #include "cgroup_helpers.h" 14 + #include "htab_mem_bench.skel.h" 15 + 16 + struct htab_mem_use_case { 17 + const char *name; 18 + const char **progs; 19 + /* Do synchronization between addition thread and deletion thread */ 20 + bool need_sync; 21 + }; 22 + 23 + static struct htab_mem_ctx { 24 + const struct htab_mem_use_case *uc; 25 + struct htab_mem_bench *skel; 26 + pthread_barrier_t *notify; 27 + int fd; 28 + } ctx; 29 + 30 + const char *ow_progs[] = {"overwrite", NULL}; 31 + const char *batch_progs[] = {"batch_add_batch_del", NULL}; 32 + const char *add_del_progs[] = {"add_only", "del_only", NULL}; 33 + const static struct htab_mem_use_case use_cases[] = { 34 + { .name = "overwrite", .progs = ow_progs }, 35 + { .name = "batch_add_batch_del", .progs = batch_progs }, 36 + { .name = "add_del_on_diff_cpu", .progs = add_del_progs, .need_sync = true }, 37 + }; 38 + 39 + static struct htab_mem_args { 40 + u32 value_size; 41 + const char *use_case; 42 + bool preallocated; 43 + } args = { 44 + .value_size = 8, 45 + .use_case = "overwrite", 46 + .preallocated = false, 47 + }; 48 + 49 + enum { 50 + ARG_VALUE_SIZE = 10000, 51 + ARG_USE_CASE = 10001, 52 + ARG_PREALLOCATED = 10002, 53 + }; 54 + 55 + static const struct argp_option opts[] = { 56 + { "value-size", ARG_VALUE_SIZE, "VALUE_SIZE", 0, 57 + "Set the value size of hash map (default 8)" }, 58 + { "use-case", ARG_USE_CASE, "USE_CASE", 0, 59 + "Set the use case of hash map: overwrite|batch_add_batch_del|add_del_on_diff_cpu" }, 60 + { "preallocated", ARG_PREALLOCATED, NULL, 0, "use preallocated hash map" }, 61 + {}, 62 + }; 63 + 64 + static error_t htab_mem_parse_arg(int key, char *arg, 
struct argp_state *state) 65 + { 66 + switch (key) { 67 + case ARG_VALUE_SIZE: 68 + args.value_size = strtoul(arg, NULL, 10); 69 + if (args.value_size > 4096) { 70 + fprintf(stderr, "too big value size %u\n", args.value_size); 71 + argp_usage(state); 72 + } 73 + break; 74 + case ARG_USE_CASE: 75 + args.use_case = strdup(arg); 76 + if (!args.use_case) { 77 + fprintf(stderr, "no mem for use-case\n"); 78 + argp_usage(state); 79 + } 80 + break; 81 + case ARG_PREALLOCATED: 82 + args.preallocated = true; 83 + break; 84 + default: 85 + return ARGP_ERR_UNKNOWN; 86 + } 87 + 88 + return 0; 89 + } 90 + 91 + const struct argp bench_htab_mem_argp = { 92 + .options = opts, 93 + .parser = htab_mem_parse_arg, 94 + }; 95 + 96 + static void htab_mem_validate(void) 97 + { 98 + if (!strcmp(use_cases[2].name, args.use_case) && env.producer_cnt % 2) { 99 + fprintf(stderr, "%s needs an even number of producers\n", args.use_case); 100 + exit(1); 101 + } 102 + } 103 + 104 + static int htab_mem_bench_init_barriers(void) 105 + { 106 + pthread_barrier_t *barriers; 107 + unsigned int i, nr; 108 + 109 + if (!ctx.uc->need_sync) 110 + return 0; 111 + 112 + nr = (env.producer_cnt + 1) / 2; 113 + barriers = calloc(nr, sizeof(*barriers)); 114 + if (!barriers) 115 + return -1; 116 + 117 + /* Used for synchronization between two threads */ 118 + for (i = 0; i < nr; i++) 119 + pthread_barrier_init(&barriers[i], NULL, 2); 120 + 121 + ctx.notify = barriers; 122 + return 0; 123 + } 124 + 125 + static void htab_mem_bench_exit_barriers(void) 126 + { 127 + unsigned int i, nr; 128 + 129 + if (!ctx.notify) 130 + return; 131 + 132 + nr = (env.producer_cnt + 1) / 2; 133 + for (i = 0; i < nr; i++) 134 + pthread_barrier_destroy(&ctx.notify[i]); 135 + free(ctx.notify); 136 + } 137 + 138 + static const struct htab_mem_use_case *htab_mem_find_use_case_or_exit(const char *name) 139 + { 140 + unsigned int i; 141 + 142 + for (i = 0; i < ARRAY_SIZE(use_cases); i++) { 143 + if (!strcmp(name, use_cases[i].name)) 144 + 
return &use_cases[i]; 145 + } 146 + 147 + fprintf(stderr, "no such use-case: %s\n", name); 148 + fprintf(stderr, "available use case:"); 149 + for (i = 0; i < ARRAY_SIZE(use_cases); i++) 150 + fprintf(stderr, " %s", use_cases[i].name); 151 + fprintf(stderr, "\n"); 152 + exit(1); 153 + } 154 + 155 + static void htab_mem_setup(void) 156 + { 157 + struct bpf_map *map; 158 + const char **names; 159 + int err; 160 + 161 + setup_libbpf(); 162 + 163 + ctx.uc = htab_mem_find_use_case_or_exit(args.use_case); 164 + err = htab_mem_bench_init_barriers(); 165 + if (err) { 166 + fprintf(stderr, "failed to init barrier\n"); 167 + exit(1); 168 + } 169 + 170 + ctx.fd = cgroup_setup_and_join("/htab_mem"); 171 + if (ctx.fd < 0) 172 + goto cleanup; 173 + 174 + ctx.skel = htab_mem_bench__open(); 175 + if (!ctx.skel) { 176 + fprintf(stderr, "failed to open skeleton\n"); 177 + goto cleanup; 178 + } 179 + 180 + map = ctx.skel->maps.htab; 181 + bpf_map__set_value_size(map, args.value_size); 182 + /* Ensure that different CPUs can operate on different subset */ 183 + bpf_map__set_max_entries(map, MAX(8192, 64 * env.nr_cpus)); 184 + if (args.preallocated) 185 + bpf_map__set_map_flags(map, bpf_map__map_flags(map) & ~BPF_F_NO_PREALLOC); 186 + 187 + names = ctx.uc->progs; 188 + while (*names) { 189 + struct bpf_program *prog; 190 + 191 + prog = bpf_object__find_program_by_name(ctx.skel->obj, *names); 192 + if (!prog) { 193 + fprintf(stderr, "no such program %s\n", *names); 194 + goto cleanup; 195 + } 196 + bpf_program__set_autoload(prog, true); 197 + names++; 198 + } 199 + ctx.skel->bss->nr_thread = env.producer_cnt; 200 + 201 + err = htab_mem_bench__load(ctx.skel); 202 + if (err) { 203 + fprintf(stderr, "failed to load skeleton\n"); 204 + goto cleanup; 205 + } 206 + err = htab_mem_bench__attach(ctx.skel); 207 + if (err) { 208 + fprintf(stderr, "failed to attach skeleton\n"); 209 + goto cleanup; 210 + } 211 + return; 212 + 213 + cleanup: 214 + htab_mem_bench__destroy(ctx.skel); 215 + 
htab_mem_bench_exit_barriers(); 216 + if (ctx.fd >= 0) { 217 + close(ctx.fd); 218 + cleanup_cgroup_environment(); 219 + } 220 + exit(1); 221 + } 222 + 223 + static void htab_mem_add_fn(pthread_barrier_t *notify) 224 + { 225 + while (true) { 226 + /* Do addition */ 227 + (void)syscall(__NR_getpgid, 0); 228 + /* Notify deletion thread to do deletion */ 229 + pthread_barrier_wait(notify); 230 + /* Wait for deletion to complete */ 231 + pthread_barrier_wait(notify); 232 + } 233 + } 234 + 235 + static void htab_mem_delete_fn(pthread_barrier_t *notify) 236 + { 237 + while (true) { 238 + /* Wait for addition to complete */ 239 + pthread_barrier_wait(notify); 240 + /* Do deletion */ 241 + (void)syscall(__NR_getppid); 242 + /* Notify addition thread to do addition */ 243 + pthread_barrier_wait(notify); 244 + } 245 + } 246 + 247 + static void *htab_mem_producer(void *arg) 248 + { 249 + pthread_barrier_t *notify; 250 + int seq; 251 + 252 + if (!ctx.uc->need_sync) { 253 + while (true) 254 + (void)syscall(__NR_getpgid, 0); 255 + return NULL; 256 + } 257 + 258 + seq = (long)arg; 259 + notify = &ctx.notify[seq / 2]; 260 + if (seq & 1) 261 + htab_mem_delete_fn(notify); 262 + else 263 + htab_mem_add_fn(notify); 264 + return NULL; 265 + } 266 + 267 + static void htab_mem_read_mem_cgrp_file(const char *name, unsigned long *value) 268 + { 269 + char buf[32]; 270 + ssize_t got; 271 + int fd; 272 + 273 + fd = openat(ctx.fd, name, O_RDONLY); 274 + if (fd < 0) { 275 + /* cgroup v1 ? 
*/ 276 + fprintf(stderr, "no %s\n", name); 277 + *value = 0; 278 + return; 279 + } 280 + 281 + got = read(fd, buf, sizeof(buf) - 1); 282 + if (got <= 0) { 283 + *value = 0; 284 + return; 285 + } 286 + buf[got] = 0; 287 + 288 + *value = strtoull(buf, NULL, 0); 289 + 290 + close(fd); 291 + } 292 + 293 + static void htab_mem_measure(struct bench_res *res) 294 + { 295 + res->hits = atomic_swap(&ctx.skel->bss->op_cnt, 0) / env.producer_cnt; 296 + htab_mem_read_mem_cgrp_file("memory.current", &res->gp_ct); 297 + } 298 + 299 + static void htab_mem_report_progress(int iter, struct bench_res *res, long delta_ns) 300 + { 301 + double loop, mem; 302 + 303 + loop = res->hits / 1000.0 / (delta_ns / 1000000000.0); 304 + mem = res->gp_ct / 1048576.0; 305 + printf("Iter %3d (%7.3lfus): ", iter, (delta_ns - 1000000000) / 1000.0); 306 + printf("per-prod-op %7.2lfk/s, memory usage %7.2lfMiB\n", loop, mem); 307 + } 308 + 309 + static void htab_mem_report_final(struct bench_res res[], int res_cnt) 310 + { 311 + double mem_mean = 0.0, mem_stddev = 0.0; 312 + double loop_mean = 0.0, loop_stddev = 0.0; 313 + unsigned long peak_mem; 314 + int i; 315 + 316 + for (i = 0; i < res_cnt; i++) { 317 + loop_mean += res[i].hits / 1000.0 / (0.0 + res_cnt); 318 + mem_mean += res[i].gp_ct / 1048576.0 / (0.0 + res_cnt); 319 + } 320 + if (res_cnt > 1) { 321 + for (i = 0; i < res_cnt; i++) { 322 + loop_stddev += (loop_mean - res[i].hits / 1000.0) * 323 + (loop_mean - res[i].hits / 1000.0) / 324 + (res_cnt - 1.0); 325 + mem_stddev += (mem_mean - res[i].gp_ct / 1048576.0) * 326 + (mem_mean - res[i].gp_ct / 1048576.0) / 327 + (res_cnt - 1.0); 328 + } 329 + loop_stddev = sqrt(loop_stddev); 330 + mem_stddev = sqrt(mem_stddev); 331 + } 332 + 333 + htab_mem_read_mem_cgrp_file("memory.peak", &peak_mem); 334 + printf("Summary: per-prod-op %7.2lf \u00B1 %7.2lfk/s, memory usage %7.2lf \u00B1 %7.2lfMiB," 335 + " peak memory usage %7.2lfMiB\n", 336 + loop_mean, loop_stddev, mem_mean, mem_stddev, peak_mem / 
1048576.0); 337 + 338 + cleanup_cgroup_environment(); 339 + } 340 + 341 + const struct bench bench_htab_mem = { 342 + .name = "htab-mem", 343 + .argp = &bench_htab_mem_argp, 344 + .validate = htab_mem_validate, 345 + .setup = htab_mem_setup, 346 + .producer_thread = htab_mem_producer, 347 + .measure = htab_mem_measure, 348 + .report_progress = htab_mem_report_progress, 349 + .report_final = htab_mem_report_final, 350 + };
+1 -1
tools/testing/selftests/bpf/benchs/bench_ringbufs.c
··· 399 399 ctx->skel = perfbuf_setup_skeleton(); 400 400 401 401 memset(&attr, 0, sizeof(attr)); 402 - attr.config = PERF_COUNT_SW_BPF_OUTPUT, 402 + attr.config = PERF_COUNT_SW_BPF_OUTPUT; 403 403 attr.type = PERF_TYPE_SOFTWARE; 404 404 attr.sample_type = PERF_SAMPLE_RAW; 405 405 /* notify only every Nth sample */
+40
tools/testing/selftests/bpf/benchs/run_bench_htab_mem.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source ./benchs/run_common.sh 5 + 6 + set -eufo pipefail 7 + 8 + htab_mem() 9 + { 10 + echo -n "per-prod-op: " 11 + echo -n "$*" | sed -E "s/.* per-prod-op\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+k\/s).*/\1/" 12 + echo -n -e ", avg mem: " 13 + echo -n "$*" | sed -E "s/.* memory usage\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+MiB).*/\1/" 14 + echo -n ", peak mem: " 15 + echo "$*" | sed -E "s/.* peak memory usage\s+([0-9]+\.[0-9]+MiB).*/\1/" 16 + } 17 + 18 + summarize_htab_mem() 19 + { 20 + local bench="$1" 21 + local summary=$(echo $2 | tail -n1) 22 + 23 + printf "%-20s %s\n" "$bench" "$(htab_mem $summary)" 24 + } 25 + 26 + htab_mem_bench() 27 + { 28 + local name 29 + 30 + for name in overwrite batch_add_batch_del add_del_on_diff_cpu 31 + do 32 + summarize_htab_mem "$name" "$($RUN_BENCH htab-mem --use-case $name -p8 "$@")" 33 + done 34 + } 35 + 36 + header "preallocated" 37 + htab_mem_bench "--preallocated" 38 + 39 + header "normal bpf ma" 40 + htab_mem_bench
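The `htab_mem()` helper above extracts three fields from the benchmark's Summary line with GNU sed (`\s` is a GNU extension). A quick check against a synthetic line (all numbers made up) in the format printed by `htab_mem_report_final()`:

```shell
# Synthetic Summary line in htab_mem_report_final()'s format; values are
# invented purely to exercise the sed patterns from the script above.
line='Summary: per-prod-op 1234.56 ± 1234.56k/s, memory usage 1234.56 ± 1234.56MiB, peak memory usage 1234.56MiB'
echo "$line" | sed -E 's/.* per-prod-op\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+k\/s).*/\1/'
echo "$line" | sed -E 's/.* peak memory usage\s+([0-9]+\.[0-9]+MiB).*/\1/'
```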
+48 -1
tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
··· 34 34 int b[]; 35 35 }; 36 36 37 + struct bpf_testmod_struct_arg_4 { 38 + u64 a; 39 + int b; 40 + }; 41 + 37 42 __diag_push(); 38 43 __diag_ignore_all("-Wmissing-prototypes", 39 44 "Global functions as their definitions will be in bpf_testmod.ko BTF"); ··· 77 72 noinline int 78 73 bpf_testmod_test_struct_arg_6(struct bpf_testmod_struct_arg_3 *a) { 79 74 bpf_testmod_test_struct_arg_result = a->b[0]; 75 + return bpf_testmod_test_struct_arg_result; 76 + } 77 + 78 + noinline int 79 + bpf_testmod_test_struct_arg_7(u64 a, void *b, short c, int d, void *e, 80 + struct bpf_testmod_struct_arg_4 f) 81 + { 82 + bpf_testmod_test_struct_arg_result = a + (long)b + c + d + 83 + (long)e + f.a + f.b; 84 + return bpf_testmod_test_struct_arg_result; 85 + } 86 + 87 + noinline int 88 + bpf_testmod_test_struct_arg_8(u64 a, void *b, short c, int d, void *e, 89 + struct bpf_testmod_struct_arg_4 f, int g) 90 + { 91 + bpf_testmod_test_struct_arg_result = a + (long)b + c + d + 92 + (long)e + f.a + f.b + g; 80 93 return bpf_testmod_test_struct_arg_result; 81 94 } 82 95 ··· 214 191 return a + b + c; 215 192 } 216 193 194 + noinline int bpf_testmod_fentry_test7(u64 a, void *b, short c, int d, 195 + void *e, char f, int g) 196 + { 197 + return a + (long)b + c + d + (long)e + f + g; 198 + } 199 + 200 + noinline int bpf_testmod_fentry_test11(u64 a, void *b, short c, int d, 201 + void *e, char f, int g, 202 + unsigned int h, long i, __u64 j, 203 + unsigned long k) 204 + { 205 + return a + (long)b + c + d + (long)e + f + g + h + i + j + k; 206 + } 207 + 217 208 int bpf_testmod_fentry_ok; 218 209 219 210 noinline ssize_t ··· 243 206 struct bpf_testmod_struct_arg_1 struct_arg1 = {10}; 244 207 struct bpf_testmod_struct_arg_2 struct_arg2 = {2, 3}; 245 208 struct bpf_testmod_struct_arg_3 *struct_arg3; 209 + struct bpf_testmod_struct_arg_4 struct_arg4 = {21, 22}; 246 210 int i = 1; 247 211 248 212 while (bpf_testmod_return_ptr(i)) ··· 254 216 (void)bpf_testmod_test_struct_arg_3(1, 4, struct_arg2); 255 
217 (void)bpf_testmod_test_struct_arg_4(struct_arg1, 1, 2, 3, struct_arg2); 256 218 (void)bpf_testmod_test_struct_arg_5(); 219 + (void)bpf_testmod_test_struct_arg_7(16, (void *)17, 18, 19, 220 + (void *)20, struct_arg4); 221 + (void)bpf_testmod_test_struct_arg_8(16, (void *)17, 18, 19, 222 + (void *)20, struct_arg4, 23); 223 + 257 224 258 225 struct_arg3 = kmalloc((sizeof(struct bpf_testmod_struct_arg_3) + 259 226 sizeof(int)), GFP_KERNEL); ··· 286 243 287 244 if (bpf_testmod_fentry_test1(1) != 2 || 288 245 bpf_testmod_fentry_test2(2, 3) != 5 || 289 - bpf_testmod_fentry_test3(4, 5, 6) != 15) 246 + bpf_testmod_fentry_test3(4, 5, 6) != 15 || 247 + bpf_testmod_fentry_test7(16, (void *)17, 18, 19, (void *)20, 248 + 21, 22) != 133 || 249 + bpf_testmod_fentry_test11(16, (void *)17, 18, 19, (void *)20, 250 + 21, 22, 23, 24, 25, 26) != 231) 290 251 goto out; 291 252 292 253 bpf_testmod_fentry_ok = 1;
+12
tools/testing/selftests/bpf/cgroup_helpers.c
··· 278 278 } 279 279 280 280 /** 281 + * join_root_cgroup() - Join the root cgroup 282 + * 283 + * This function joins the root cgroup. 284 + * 285 + * On success, it returns 0, otherwise on failure it returns 1. 286 + */ 287 + int join_root_cgroup(void) 288 + { 289 + return join_cgroup_from_top(CGROUP_MOUNT_PATH); 290 + } 291 + 292 + /** 281 293 * join_parent_cgroup() - Join a cgroup in the parent process workdir 282 294 * @relative_path: The cgroup path, relative to parent process workdir, to join 283 295 *
+1
tools/testing/selftests/bpf/cgroup_helpers.h
··· 22 22 unsigned long long get_cgroup_id(const char *relative_path); 23 23 24 24 int join_cgroup(const char *relative_path); 25 + int join_root_cgroup(void); 25 26 int join_parent_cgroup(const char *relative_path); 26 27 27 28 int setup_cgroup_environment(void);
+35
tools/testing/selftests/bpf/cgroup_tcp_skb.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ 3 + 4 + /* Define states of a socket to tracking messages sending to and from the 5 + * socket. 6 + * 7 + * These states are based on rfc9293 with some modifications to support 8 + * tracking of messages sent out from a socket. For example, when a SYN is 9 + * received, a new socket is transiting to the SYN_RECV state defined in 10 + * rfc9293. But, we put it in SYN_RECV_SENDING_SYN_ACK state and when 11 + * SYN-ACK is sent out, it moves to SYN_RECV state. With this modification, 12 + * we can track the message sent out from a socket. 13 + */ 14 + 15 + #ifndef __CGROUP_TCP_SKB_H__ 16 + #define __CGROUP_TCP_SKB_H__ 17 + 18 + enum { 19 + INIT, 20 + CLOSED, 21 + SYN_SENT, 22 + SYN_RECV_SENDING_SYN_ACK, 23 + SYN_RECV, 24 + ESTABLISHED, 25 + FIN_WAIT1, 26 + FIN_WAIT2, 27 + CLOSE_WAIT_SENDING_ACK, 28 + CLOSE_WAIT, 29 + CLOSING, 30 + LAST_ACK, 31 + TIME_WAIT_SENDING_ACK, 32 + TIME_WAIT, 33 + }; 34 + 35 + #endif /* __CGROUP_TCP_SKB_H__ */
+1 -1
tools/testing/selftests/bpf/gnu/stubs.h
··· 1 - /* dummy .h to trick /usr/include/features.h to work with 'clang -target bpf' */ 1 + /* dummy .h to trick /usr/include/features.h to work with 'clang --target=bpf' */
+447
tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2023 Isovalent */ 3 + 4 + #include <errno.h> 5 + #include <unistd.h> 6 + #include <pthread.h> 7 + 8 + #include <bpf/bpf.h> 9 + #include <bpf/libbpf.h> 10 + 11 + #include <bpf_util.h> 12 + #include <test_maps.h> 13 + 14 + #include "map_percpu_stats.skel.h" 15 + 16 + #define MAX_ENTRIES 16384 17 + #define MAX_ENTRIES_HASH_OF_MAPS 64 18 + #define N_THREADS 8 19 + #define MAX_MAP_KEY_SIZE 4 20 + 21 + static void map_info(int map_fd, struct bpf_map_info *info) 22 + { 23 + __u32 len = sizeof(*info); 24 + int ret; 25 + 26 + memset(info, 0, sizeof(*info)); 27 + 28 + ret = bpf_obj_get_info_by_fd(map_fd, info, &len); 29 + CHECK(ret < 0, "bpf_obj_get_info_by_fd", "error: %s\n", strerror(errno)); 30 + } 31 + 32 + static const char *map_type_to_s(__u32 type) 33 + { 34 + switch (type) { 35 + case BPF_MAP_TYPE_HASH: 36 + return "HASH"; 37 + case BPF_MAP_TYPE_PERCPU_HASH: 38 + return "PERCPU_HASH"; 39 + case BPF_MAP_TYPE_LRU_HASH: 40 + return "LRU_HASH"; 41 + case BPF_MAP_TYPE_LRU_PERCPU_HASH: 42 + return "LRU_PERCPU_HASH"; 43 + case BPF_MAP_TYPE_HASH_OF_MAPS: 44 + return "BPF_MAP_TYPE_HASH_OF_MAPS"; 45 + default: 46 + return "<define-me>"; 47 + } 48 + } 49 + 50 + static __u32 map_count_elements(__u32 type, int map_fd) 51 + { 52 + __u32 key = -1; 53 + int n = 0; 54 + 55 + while (!bpf_map_get_next_key(map_fd, &key, &key)) 56 + n++; 57 + return n; 58 + } 59 + 60 + #define BATCH true 61 + 62 + static void delete_and_lookup_batch(int map_fd, void *keys, __u32 count) 63 + { 64 + static __u8 values[(8 << 10) * MAX_ENTRIES]; 65 + void *in_batch = NULL, *out_batch; 66 + __u32 save_count = count; 67 + int ret; 68 + 69 + ret = bpf_map_lookup_and_delete_batch(map_fd, 70 + &in_batch, &out_batch, 71 + keys, values, &count, 72 + NULL); 73 + 74 + /* 75 + * Despite what uapi header says, lookup_and_delete_batch will return 76 + * -ENOENT in case we successfully have deleted all elements, so check 77 + * this separately 78 + */ 79 + 
CHECK(ret < 0 && (errno != ENOENT || !count), "bpf_map_lookup_and_delete_batch", 80 + "error: %s\n", strerror(errno)); 81 + 82 + CHECK(count != save_count, 83 + "bpf_map_lookup_and_delete_batch", 84 + "deleted not all elements: removed=%u expected=%u\n", 85 + count, save_count); 86 + } 87 + 88 + static void delete_all_elements(__u32 type, int map_fd, bool batch) 89 + { 90 + static __u8 val[8 << 10]; /* enough for 1024 CPUs */ 91 + __u32 key = -1; 92 + void *keys; 93 + __u32 i, n; 94 + int ret; 95 + 96 + keys = calloc(MAX_MAP_KEY_SIZE, MAX_ENTRIES); 97 + CHECK(!keys, "calloc", "error: %s\n", strerror(errno)); 98 + 99 + for (n = 0; !bpf_map_get_next_key(map_fd, &key, &key); n++) 100 + memcpy(keys + n*MAX_MAP_KEY_SIZE, &key, MAX_MAP_KEY_SIZE); 101 + 102 + if (batch) { 103 + /* Can't mix delete_batch and delete_and_lookup_batch because 104 + * they have different semantics in relation to the keys 105 + * argument. However, delete_batch utilize map_delete_elem, 106 + * so we actually test it in non-batch scenario */ 107 + delete_and_lookup_batch(map_fd, keys, n); 108 + } else { 109 + /* Intentionally mix delete and lookup_and_delete so we can test both */ 110 + for (i = 0; i < n; i++) { 111 + void *keyp = keys + i*MAX_MAP_KEY_SIZE; 112 + 113 + if (i % 2 || type == BPF_MAP_TYPE_HASH_OF_MAPS) { 114 + ret = bpf_map_delete_elem(map_fd, keyp); 115 + CHECK(ret < 0, "bpf_map_delete_elem", 116 + "error: key %u: %s\n", i, strerror(errno)); 117 + } else { 118 + ret = bpf_map_lookup_and_delete_elem(map_fd, keyp, val); 119 + CHECK(ret < 0, "bpf_map_lookup_and_delete_elem", 120 + "error: key %u: %s\n", i, strerror(errno)); 121 + } 122 + } 123 + } 124 + 125 + free(keys); 126 + } 127 + 128 + static bool is_lru(__u32 map_type) 129 + { 130 + return map_type == BPF_MAP_TYPE_LRU_HASH || 131 + map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH; 132 + } 133 + 134 + struct upsert_opts { 135 + __u32 map_type; 136 + int map_fd; 137 + __u32 n; 138 + }; 139 + 140 + static int create_small_hash(void) 141 + 
{ 142 + int map_fd; 143 + 144 + map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "small", 4, 4, 4, NULL); 145 + CHECK(map_fd < 0, "bpf_map_create()", "error:%s (name=%s)\n", 146 + strerror(errno), "small"); 147 + 148 + return map_fd; 149 + } 150 + 151 + static void *patch_map_thread(void *arg) 152 + { 153 + struct upsert_opts *opts = arg; 154 + int val; 155 + int ret; 156 + int i; 157 + 158 + for (i = 0; i < opts->n; i++) { 159 + if (opts->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) 160 + val = create_small_hash(); 161 + else 162 + val = rand(); 163 + ret = bpf_map_update_elem(opts->map_fd, &i, &val, 0); 164 + CHECK(ret < 0, "bpf_map_update_elem", "key=%d error: %s\n", i, strerror(errno)); 165 + 166 + if (opts->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) 167 + close(val); 168 + } 169 + return NULL; 170 + } 171 + 172 + static void upsert_elements(struct upsert_opts *opts) 173 + { 174 + pthread_t threads[N_THREADS]; 175 + int ret; 176 + int i; 177 + 178 + for (i = 0; i < ARRAY_SIZE(threads); i++) { 179 + ret = pthread_create(&i[threads], NULL, patch_map_thread, opts); 180 + CHECK(ret != 0, "pthread_create", "error: %s\n", strerror(ret)); 181 + } 182 + 183 + for (i = 0; i < ARRAY_SIZE(threads); i++) { 184 + ret = pthread_join(i[threads], NULL); 185 + CHECK(ret != 0, "pthread_join", "error: %s\n", strerror(ret)); 186 + } 187 + } 188 + 189 + static __u32 read_cur_elements(int iter_fd) 190 + { 191 + char buf[64]; 192 + ssize_t n; 193 + __u32 ret; 194 + 195 + n = read(iter_fd, buf, sizeof(buf)-1); 196 + CHECK(n <= 0, "read", "error: %s\n", strerror(errno)); 197 + buf[n] = '\0'; 198 + 199 + errno = 0; 200 + ret = (__u32)strtol(buf, NULL, 10); 201 + CHECK(errno != 0, "strtol", "error: %s\n", strerror(errno)); 202 + 203 + return ret; 204 + } 205 + 206 + static __u32 get_cur_elements(int map_id) 207 + { 208 + struct map_percpu_stats *skel; 209 + struct bpf_link *link; 210 + __u32 n_elements; 211 + int iter_fd; 212 + int ret; 213 + 214 + skel = map_percpu_stats__open(); 215 + CHECK(skel == 
NULL, "map_percpu_stats__open", "error: %s", strerror(errno)); 216 + 217 + skel->bss->target_id = map_id; 218 + 219 + ret = map_percpu_stats__load(skel); 220 + CHECK(ret != 0, "map_percpu_stats__load", "error: %s", strerror(errno)); 221 + 222 + link = bpf_program__attach_iter(skel->progs.dump_bpf_map, NULL); 223 + CHECK(!link, "bpf_program__attach_iter", "error: %s\n", strerror(errno)); 224 + 225 + iter_fd = bpf_iter_create(bpf_link__fd(link)); 226 + CHECK(iter_fd < 0, "bpf_iter_create", "error: %s\n", strerror(errno)); 227 + 228 + n_elements = read_cur_elements(iter_fd); 229 + 230 + close(iter_fd); 231 + bpf_link__destroy(link); 232 + map_percpu_stats__destroy(skel); 233 + 234 + return n_elements; 235 + } 236 + 237 + static void check_expected_number_elements(__u32 n_inserted, int map_fd, 238 + struct bpf_map_info *info) 239 + { 240 + __u32 n_real; 241 + __u32 n_iter; 242 + 243 + /* Count the current number of elements in the map by iterating through 244 + * all the map keys via bpf_get_next_key 245 + */ 246 + n_real = map_count_elements(info->type, map_fd); 247 + 248 + /* The "real" number of elements should be the same as the inserted 249 + * number of elements in all cases except LRU maps, where some elements 250 + * may have been evicted 251 + */ 252 + if (n_inserted == 0 || !is_lru(info->type)) 253 + CHECK(n_inserted != n_real, "map_count_elements", 254 + "n_real(%u) != n_inserted(%u)\n", n_real, n_inserted); 255 + 256 + /* Count the current number of elements in the map using an iterator */ 257 + n_iter = get_cur_elements(info->id); 258 + 259 + /* Both counts should be the same, as all updates are over */ 260 + CHECK(n_iter != n_real, "get_cur_elements", 261 + "n_iter=%u, expected %u (map_type=%s,map_flags=%08x)\n", 262 + n_iter, n_real, map_type_to_s(info->type), info->map_flags); 263 + } 264 + 265 + static void __test(int map_fd) 266 + { 267 + struct upsert_opts opts = { 268 + .map_fd = map_fd, 269 + }; 270 + struct bpf_map_info info; 271 + 272 + 
map_info(map_fd, &info); 273 + opts.map_type = info.type; 274 + opts.n = info.max_entries; 275 + 276 + /* Reduce the number of elements we are updating such that we don't 277 + * bump into -E2BIG from non-preallocated hash maps, but still will 278 + * have some evictions for LRU maps */ 279 + if (opts.map_type != BPF_MAP_TYPE_HASH_OF_MAPS) 280 + opts.n -= 512; 281 + else 282 + opts.n /= 2; 283 + 284 + /* 285 + * Upsert keys [0, n) under some competition: with random values from 286 + * N_THREADS threads. Check values, then delete all elements and check 287 + * values again. 288 + */ 289 + upsert_elements(&opts); 290 + check_expected_number_elements(opts.n, map_fd, &info); 291 + delete_all_elements(info.type, map_fd, !BATCH); 292 + check_expected_number_elements(0, map_fd, &info); 293 + 294 + /* Now do the same, but using batch delete operations */ 295 + upsert_elements(&opts); 296 + check_expected_number_elements(opts.n, map_fd, &info); 297 + delete_all_elements(info.type, map_fd, BATCH); 298 + check_expected_number_elements(0, map_fd, &info); 299 + 300 + close(map_fd); 301 + } 302 + 303 + static int map_create_opts(__u32 type, const char *name, 304 + struct bpf_map_create_opts *map_opts, 305 + __u32 key_size, __u32 val_size) 306 + { 307 + int max_entries; 308 + int map_fd; 309 + 310 + if (type == BPF_MAP_TYPE_HASH_OF_MAPS) 311 + max_entries = MAX_ENTRIES_HASH_OF_MAPS; 312 + else 313 + max_entries = MAX_ENTRIES; 314 + 315 + map_fd = bpf_map_create(type, name, key_size, val_size, max_entries, map_opts); 316 + CHECK(map_fd < 0, "bpf_map_create()", "error:%s (name=%s)\n", 317 + strerror(errno), name); 318 + 319 + return map_fd; 320 + } 321 + 322 + static int map_create(__u32 type, const char *name, struct bpf_map_create_opts *map_opts) 323 + { 324 + return map_create_opts(type, name, map_opts, sizeof(int), sizeof(int)); 325 + } 326 + 327 + static int create_hash(void) 328 + { 329 + struct bpf_map_create_opts map_opts = { 330 + .sz = sizeof(map_opts), 331 + .map_flags 
= BPF_F_NO_PREALLOC, 332 + }; 333 + 334 + return map_create(BPF_MAP_TYPE_HASH, "hash", &map_opts); 335 + } 336 + 337 + static int create_percpu_hash(void) 338 + { 339 + struct bpf_map_create_opts map_opts = { 340 + .sz = sizeof(map_opts), 341 + .map_flags = BPF_F_NO_PREALLOC, 342 + }; 343 + 344 + return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash", &map_opts); 345 + } 346 + 347 + static int create_hash_prealloc(void) 348 + { 349 + return map_create(BPF_MAP_TYPE_HASH, "hash", NULL); 350 + } 351 + 352 + static int create_percpu_hash_prealloc(void) 353 + { 354 + return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash_prealloc", NULL); 355 + } 356 + 357 + static int create_lru_hash(__u32 type, __u32 map_flags) 358 + { 359 + struct bpf_map_create_opts map_opts = { 360 + .sz = sizeof(map_opts), 361 + .map_flags = map_flags, 362 + }; 363 + 364 + return map_create(type, "lru_hash", &map_opts); 365 + } 366 + 367 + static int create_hash_of_maps(void) 368 + { 369 + struct bpf_map_create_opts map_opts = { 370 + .sz = sizeof(map_opts), 371 + .map_flags = BPF_F_NO_PREALLOC, 372 + .inner_map_fd = create_small_hash(), 373 + }; 374 + int ret; 375 + 376 + ret = map_create_opts(BPF_MAP_TYPE_HASH_OF_MAPS, "hash_of_maps", 377 + &map_opts, sizeof(int), sizeof(int)); 378 + close(map_opts.inner_map_fd); 379 + return ret; 380 + } 381 + 382 + static void map_percpu_stats_hash(void) 383 + { 384 + __test(create_hash()); 385 + printf("test_%s:PASS\n", __func__); 386 + } 387 + 388 + static void map_percpu_stats_percpu_hash(void) 389 + { 390 + __test(create_percpu_hash()); 391 + printf("test_%s:PASS\n", __func__); 392 + } 393 + 394 + static void map_percpu_stats_hash_prealloc(void) 395 + { 396 + __test(create_hash_prealloc()); 397 + printf("test_%s:PASS\n", __func__); 398 + } 399 + 400 + static void map_percpu_stats_percpu_hash_prealloc(void) 401 + { 402 + __test(create_percpu_hash_prealloc()); 403 + printf("test_%s:PASS\n", __func__); 404 + } 405 + 406 + static void 
map_percpu_stats_lru_hash(void) 407 + { 408 + __test(create_lru_hash(BPF_MAP_TYPE_LRU_HASH, 0)); 409 + printf("test_%s:PASS\n", __func__); 410 + } 411 + 412 + static void map_percpu_stats_lru_hash_no_common(void) 413 + { 414 + __test(create_lru_hash(BPF_MAP_TYPE_LRU_HASH, BPF_F_NO_COMMON_LRU)); 415 + printf("test_%s:PASS\n", __func__); 416 + } 417 + 418 + static void map_percpu_stats_percpu_lru_hash(void) 419 + { 420 + __test(create_lru_hash(BPF_MAP_TYPE_LRU_PERCPU_HASH, 0)); 421 + printf("test_%s:PASS\n", __func__); 422 + } 423 + 424 + static void map_percpu_stats_percpu_lru_hash_no_common(void) 425 + { 426 + __test(create_lru_hash(BPF_MAP_TYPE_LRU_PERCPU_HASH, BPF_F_NO_COMMON_LRU)); 427 + printf("test_%s:PASS\n", __func__); 428 + } 429 + 430 + static void map_percpu_stats_hash_of_maps(void) 431 + { 432 + __test(create_hash_of_maps()); 433 + printf("test_%s:PASS\n", __func__); 434 + } 435 + 436 + void test_map_percpu_stats(void) 437 + { 438 + map_percpu_stats_hash(); 439 + map_percpu_stats_percpu_hash(); 440 + map_percpu_stats_hash_prealloc(); 441 + map_percpu_stats_percpu_hash_prealloc(); 442 + map_percpu_stats_lru_hash(); 443 + map_percpu_stats_lru_hash_no_common(); 444 + map_percpu_stats_percpu_lru_hash(); 445 + map_percpu_stats_percpu_lru_hash_no_common(); 446 + map_percpu_stats_hash_of_maps(); 447 + }
+3 -2
tools/testing/selftests/bpf/prog_tests/bpf_nf.c
··· 123 123 ASSERT_EQ(skel->data->test_snat_addr, 0, "Test for source natting"); 124 124 ASSERT_EQ(skel->data->test_dnat_addr, 0, "Test for destination natting"); 125 125 end: 126 - if (srv_client_fd != -1) 127 - close(srv_client_fd); 128 126 if (client_fd != -1) 129 127 close(client_fd); 128 + if (srv_client_fd != -1) 129 + close(srv_client_fd); 130 130 if (srv_fd != -1) 131 131 close(srv_fd); 132 + 132 133 snprintf(cmd, sizeof(cmd), iptables, "-D"); 133 134 system(cmd); 134 135 test_bpf_nf__destroy(skel);
+402
tools/testing/selftests/bpf/prog_tests/cgroup_tcp_skb.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Facebook */
+#include <test_progs.h>
+#include <linux/in6.h>
+#include <sys/socket.h>
+#include <sched.h>
+#include <unistd.h>
+#include "cgroup_helpers.h"
+#include "testing_helpers.h"
+#include "cgroup_tcp_skb.skel.h"
+#include "cgroup_tcp_skb.h"
+
+#define CGROUP_TCP_SKB_PATH "/test_cgroup_tcp_skb"
+
+static int install_filters(int cgroup_fd,
+                           struct bpf_link **egress_link,
+                           struct bpf_link **ingress_link,
+                           struct bpf_program *egress_prog,
+                           struct bpf_program *ingress_prog,
+                           struct cgroup_tcp_skb *skel)
+{
+    /* Prepare filters */
+    skel->bss->g_sock_state = 0;
+    skel->bss->g_unexpected = 0;
+    *egress_link =
+        bpf_program__attach_cgroup(egress_prog,
+                                   cgroup_fd);
+    if (!ASSERT_OK_PTR(egress_link, "egress_link"))
+        return -1;
+    *ingress_link =
+        bpf_program__attach_cgroup(ingress_prog,
+                                   cgroup_fd);
+    if (!ASSERT_OK_PTR(ingress_link, "ingress_link"))
+        return -1;
+
+    return 0;
+}
+
+static void uninstall_filters(struct bpf_link **egress_link,
+                              struct bpf_link **ingress_link)
+{
+    bpf_link__destroy(*egress_link);
+    *egress_link = NULL;
+    bpf_link__destroy(*ingress_link);
+    *ingress_link = NULL;
+}
+
+static int create_client_sock_v6(void)
+{
+    int fd;
+
+    fd = socket(AF_INET6, SOCK_STREAM, 0);
+    if (fd < 0) {
+        perror("socket");
+        return -1;
+    }
+
+    return fd;
+}
+
+static int create_server_sock_v6(void)
+{
+    struct sockaddr_in6 addr = {
+        .sin6_family = AF_INET6,
+        .sin6_port = htons(0),
+        .sin6_addr = IN6ADDR_LOOPBACK_INIT,
+    };
+    int fd, err;
+
+    fd = socket(AF_INET6, SOCK_STREAM, 0);
+    if (fd < 0) {
+        perror("socket");
+        return -1;
+    }
+
+    err = bind(fd, (struct sockaddr *)&addr, sizeof(addr));
+    if (err < 0) {
+        perror("bind");
+        return -1;
+    }
+
+    err = listen(fd, 1);
+    if (err < 0) {
+        perror("listen");
+        return -1;
+    }
+
+    return fd;
+}
+
+static int get_sock_port_v6(int fd)
+{
+    struct sockaddr_in6 addr;
+    socklen_t len;
+    int err;
+
+    len = sizeof(addr);
+    err = getsockname(fd, (struct sockaddr *)&addr, &len);
+    if (err < 0) {
+        perror("getsockname");
+        return -1;
+    }
+
+    return ntohs(addr.sin6_port);
+}
+
+static int connect_client_server_v6(int client_fd, int listen_fd)
+{
+    struct sockaddr_in6 addr = {
+        .sin6_family = AF_INET6,
+        .sin6_addr = IN6ADDR_LOOPBACK_INIT,
+    };
+    int err;
+
+    addr.sin6_port = htons(get_sock_port_v6(listen_fd));
+    if (addr.sin6_port < 0)
+        return -1;
+
+    err = connect(client_fd, (struct sockaddr *)&addr, sizeof(addr));
+    if (err < 0) {
+        perror("connect");
+        return -1;
+    }
+
+    return 0;
+}
+
+/* Connect to the server in a cgroup from the outside of the cgroup. */
+static int talk_to_cgroup(int *client_fd, int *listen_fd, int *service_fd,
+                          struct cgroup_tcp_skb *skel)
+{
+    int err, cp;
+    char buf[5];
+
+    /* Create client & server socket */
+    err = join_root_cgroup();
+    if (!ASSERT_OK(err, "join_root_cgroup"))
+        return -1;
+    *client_fd = create_client_sock_v6();
+    if (!ASSERT_GE(*client_fd, 0, "client_fd"))
+        return -1;
+    err = join_cgroup(CGROUP_TCP_SKB_PATH);
+    if (!ASSERT_OK(err, "join_cgroup"))
+        return -1;
+    *listen_fd = create_server_sock_v6();
+    if (!ASSERT_GE(*listen_fd, 0, "listen_fd"))
+        return -1;
+    skel->bss->g_sock_port = get_sock_port_v6(*listen_fd);
+
+    /* Connect client to server */
+    err = connect_client_server_v6(*client_fd, *listen_fd);
+    if (!ASSERT_OK(err, "connect_client_server_v6"))
+        return -1;
+    *service_fd = accept(*listen_fd, NULL, NULL);
+    if (!ASSERT_GE(*service_fd, 0, "service_fd"))
+        return -1;
+    err = join_root_cgroup();
+    if (!ASSERT_OK(err, "join_root_cgroup"))
+        return -1;
+    cp = write(*client_fd, "hello", 5);
+    if (!ASSERT_EQ(cp, 5, "write"))
+        return -1;
+    cp = read(*service_fd, buf, 5);
+    if (!ASSERT_EQ(cp, 5, "read"))
+        return -1;
+
+    return 0;
+}
+
+/* Connect to the server out of a cgroup from inside the cgroup. */
+static int talk_to_outside(int *client_fd, int *listen_fd, int *service_fd,
+                           struct cgroup_tcp_skb *skel)
+
+{
+    int err, cp;
+    char buf[5];
+
+    /* Create client & server socket */
+    err = join_root_cgroup();
+    if (!ASSERT_OK(err, "join_root_cgroup"))
+        return -1;
+    *listen_fd = create_server_sock_v6();
+    if (!ASSERT_GE(*listen_fd, 0, "listen_fd"))
+        return -1;
+    err = join_cgroup(CGROUP_TCP_SKB_PATH);
+    if (!ASSERT_OK(err, "join_cgroup"))
+        return -1;
+    *client_fd = create_client_sock_v6();
+    if (!ASSERT_GE(*client_fd, 0, "client_fd"))
+        return -1;
+    err = join_root_cgroup();
+    if (!ASSERT_OK(err, "join_root_cgroup"))
+        return -1;
+    skel->bss->g_sock_port = get_sock_port_v6(*listen_fd);
+
+    /* Connect client to server */
+    err = connect_client_server_v6(*client_fd, *listen_fd);
+    if (!ASSERT_OK(err, "connect_client_server_v6"))
+        return -1;
+    *service_fd = accept(*listen_fd, NULL, NULL);
+    if (!ASSERT_GE(*service_fd, 0, "service_fd"))
+        return -1;
+    cp = write(*client_fd, "hello", 5);
+    if (!ASSERT_EQ(cp, 5, "write"))
+        return -1;
+    cp = read(*service_fd, buf, 5);
+    if (!ASSERT_EQ(cp, 5, "read"))
+        return -1;
+
+    return 0;
+}
+
+static int close_connection(int *closing_fd, int *peer_fd, int *listen_fd,
+                            struct cgroup_tcp_skb *skel)
+{
+    __u32 saved_packet_count = 0;
+    int err;
+    int i;
+
+    /* Wait for ACKs to be sent */
+    saved_packet_count = skel->bss->g_packet_count;
+    usleep(100000);  /* 0.1s */
+    for (i = 0;
+         skel->bss->g_packet_count != saved_packet_count && i < 10;
+         i++) {
+        saved_packet_count = skel->bss->g_packet_count;
+        usleep(100000);  /* 0.1s */
+    }
+    if (!ASSERT_EQ(skel->bss->g_packet_count, saved_packet_count,
+                   "packet_count"))
+        return -1;
+
+    skel->bss->g_packet_count = 0;
+    saved_packet_count = 0;
+
+    /* Half shutdown to make sure the closing socket having a chance to
+     * receive a FIN from the peer.
+     */
+    err = shutdown(*closing_fd, SHUT_WR);
+    if (!ASSERT_OK(err, "shutdown closing_fd"))
+        return -1;
+
+    /* Wait for FIN and the ACK of the FIN to be observed */
+    for (i = 0;
+         skel->bss->g_packet_count < saved_packet_count + 2 && i < 10;
+         i++)
+        usleep(100000);  /* 0.1s */
+    if (!ASSERT_GE(skel->bss->g_packet_count, saved_packet_count + 2,
+                   "packet_count"))
+        return -1;
+
+    saved_packet_count = skel->bss->g_packet_count;
+
+    /* Fully shutdown the connection */
+    err = close(*peer_fd);
+    if (!ASSERT_OK(err, "close peer_fd"))
+        return -1;
+    *peer_fd = -1;
+
+    /* Wait for FIN and the ACK of the FIN to be observed */
+    for (i = 0;
+         skel->bss->g_packet_count < saved_packet_count + 2 && i < 10;
+         i++)
+        usleep(100000);  /* 0.1s */
+    if (!ASSERT_GE(skel->bss->g_packet_count, saved_packet_count + 2,
+                   "packet_count"))
+        return -1;
+
+    err = close(*closing_fd);
+    if (!ASSERT_OK(err, "close closing_fd"))
+        return -1;
+    *closing_fd = -1;
+
+    close(*listen_fd);
+    *listen_fd = -1;
+
+    return 0;
+}
+
+/* This test case includes four scenarios:
+ * 1. Connect to the server from outside the cgroup and close the connection
+ *    from outside the cgroup.
+ * 2. Connect to the server from outside the cgroup and close the connection
+ *    from inside the cgroup.
+ * 3. Connect to the server from inside the cgroup and close the connection
+ *    from outside the cgroup.
+ * 4. Connect to the server from inside the cgroup and close the connection
+ *    from inside the cgroup.
+ *
+ * The test case is to verify that cgroup_skb/{egress,ingress} filters
+ * receive expected packets including SYN, SYN/ACK, ACK, FIN, and FIN/ACK.
+ */
+void test_cgroup_tcp_skb(void)
+{
+    struct bpf_link *ingress_link = NULL;
+    struct bpf_link *egress_link = NULL;
+    int client_fd = -1, listen_fd = -1;
+    struct cgroup_tcp_skb *skel;
+    int service_fd = -1;
+    int cgroup_fd = -1;
+    int err;
+
+    skel = cgroup_tcp_skb__open_and_load();
+    if (!ASSERT_OK(!skel, "skel_open_load"))
+        return;
+
+    err = setup_cgroup_environment();
+    if (!ASSERT_OK(err, "setup_cgroup_environment"))
+        goto cleanup;
+
+    cgroup_fd = create_and_get_cgroup(CGROUP_TCP_SKB_PATH);
+    if (!ASSERT_GE(cgroup_fd, 0, "cgroup_fd"))
+        goto cleanup;
+
+    /* Scenario 1 */
+    err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+                          skel->progs.server_egress,
+                          skel->progs.server_ingress,
+                          skel);
+    if (!ASSERT_OK(err, "install_filters"))
+        goto cleanup;
+
+    err = talk_to_cgroup(&client_fd, &listen_fd, &service_fd, skel);
+    if (!ASSERT_OK(err, "talk_to_cgroup"))
+        goto cleanup;
+
+    err = close_connection(&client_fd, &service_fd, &listen_fd, skel);
+    if (!ASSERT_OK(err, "close_connection"))
+        goto cleanup;
+
+    ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+    ASSERT_EQ(skel->bss->g_sock_state, CLOSED, "g_sock_state");
+
+    uninstall_filters(&egress_link, &ingress_link);
+
+    /* Scenario 2 */
+    err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+                          skel->progs.server_egress_srv,
+                          skel->progs.server_ingress_srv,
+                          skel);
+
+    err = talk_to_cgroup(&client_fd, &listen_fd, &service_fd, skel);
+    if (!ASSERT_OK(err, "talk_to_cgroup"))
+        goto cleanup;
+
+    err = close_connection(&service_fd, &client_fd, &listen_fd, skel);
+    if (!ASSERT_OK(err, "close_connection"))
+        goto cleanup;
+
+    ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+    ASSERT_EQ(skel->bss->g_sock_state, TIME_WAIT, "g_sock_state");
+
+    uninstall_filters(&egress_link, &ingress_link);
+
+    /* Scenario 3 */
+    err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+                          skel->progs.client_egress_srv,
+                          skel->progs.client_ingress_srv,
+                          skel);
+
+    err = talk_to_outside(&client_fd, &listen_fd, &service_fd, skel);
+    if (!ASSERT_OK(err, "talk_to_outside"))
+        goto cleanup;
+
+    err = close_connection(&service_fd, &client_fd, &listen_fd, skel);
+    if (!ASSERT_OK(err, "close_connection"))
+        goto cleanup;
+
+    ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+    ASSERT_EQ(skel->bss->g_sock_state, CLOSED, "g_sock_state");
+
+    uninstall_filters(&egress_link, &ingress_link);
+
+    /* Scenario 4 */
+    err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+                          skel->progs.client_egress,
+                          skel->progs.client_ingress,
+                          skel);
+
+    err = talk_to_outside(&client_fd, &listen_fd, &service_fd, skel);
+    if (!ASSERT_OK(err, "talk_to_outside"))
+        goto cleanup;
+
+    err = close_connection(&client_fd, &service_fd, &listen_fd, skel);
+    if (!ASSERT_OK(err, "close_connection"))
+        goto cleanup;
+
+    ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+    ASSERT_EQ(skel->bss->g_sock_state, TIME_WAIT, "g_sock_state");
+
+    uninstall_filters(&egress_link, &ingress_link);
+
+cleanup:
+    close(client_fd);
+    close(listen_fd);
+    close(service_fd);
+    close(cgroup_fd);
+    bpf_link__destroy(egress_link);
+    bpf_link__destroy(ingress_link);
+    cleanup_cgroup_environment();
+    cgroup_tcp_skb__destroy(skel);
+}
+39 -4
tools/testing/selftests/bpf/prog_tests/fentry_test.c
···
 /* Copyright (c) 2019 Facebook */
 #include <test_progs.h>
 #include "fentry_test.lskel.h"
+#include "fentry_many_args.skel.h"
 
-static int fentry_test(struct fentry_test_lskel *fentry_skel)
+static int fentry_test_common(struct fentry_test_lskel *fentry_skel)
 {
     int err, prog_fd, i;
     int link_fd;
···
     return 0;
 }
 
-void test_fentry_test(void)
+static void fentry_test(void)
 {
     struct fentry_test_lskel *fentry_skel = NULL;
     int err;
···
     if (!ASSERT_OK_PTR(fentry_skel, "fentry_skel_load"))
         goto cleanup;
 
-    err = fentry_test(fentry_skel);
+    err = fentry_test_common(fentry_skel);
     if (!ASSERT_OK(err, "fentry_first_attach"))
         goto cleanup;
 
-    err = fentry_test(fentry_skel);
+    err = fentry_test_common(fentry_skel);
     ASSERT_OK(err, "fentry_second_attach");
 
 cleanup:
     fentry_test_lskel__destroy(fentry_skel);
+}
+
+static void fentry_many_args(void)
+{
+    struct fentry_many_args *fentry_skel = NULL;
+    int err;
+
+    fentry_skel = fentry_many_args__open_and_load();
+    if (!ASSERT_OK_PTR(fentry_skel, "fentry_many_args_skel_load"))
+        goto cleanup;
+
+    err = fentry_many_args__attach(fentry_skel);
+    if (!ASSERT_OK(err, "fentry_many_args_attach"))
+        goto cleanup;
+
+    ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+
+    ASSERT_EQ(fentry_skel->bss->test1_result, 1,
+              "fentry_many_args_result1");
+    ASSERT_EQ(fentry_skel->bss->test2_result, 1,
+              "fentry_many_args_result2");
+    ASSERT_EQ(fentry_skel->bss->test3_result, 1,
+              "fentry_many_args_result3");
+
+cleanup:
+    fentry_many_args__destroy(fentry_skel);
+}
+
+void test_fentry_test(void)
+{
+    if (test__start_subtest("fentry"))
+        fentry_test();
+    if (test__start_subtest("fentry_many_args"))
+        fentry_many_args();
 }
+39 -4
tools/testing/selftests/bpf/prog_tests/fexit_test.c
···
 /* Copyright (c) 2019 Facebook */
 #include <test_progs.h>
 #include "fexit_test.lskel.h"
+#include "fexit_many_args.skel.h"
 
-static int fexit_test(struct fexit_test_lskel *fexit_skel)
+static int fexit_test_common(struct fexit_test_lskel *fexit_skel)
 {
     int err, prog_fd, i;
     int link_fd;
···
     return 0;
 }
 
-void test_fexit_test(void)
+static void fexit_test(void)
 {
     struct fexit_test_lskel *fexit_skel = NULL;
     int err;
···
     if (!ASSERT_OK_PTR(fexit_skel, "fexit_skel_load"))
         goto cleanup;
 
-    err = fexit_test(fexit_skel);
+    err = fexit_test_common(fexit_skel);
     if (!ASSERT_OK(err, "fexit_first_attach"))
         goto cleanup;
 
-    err = fexit_test(fexit_skel);
+    err = fexit_test_common(fexit_skel);
     ASSERT_OK(err, "fexit_second_attach");
 
 cleanup:
     fexit_test_lskel__destroy(fexit_skel);
+}
+
+static void fexit_many_args(void)
+{
+    struct fexit_many_args *fexit_skel = NULL;
+    int err;
+
+    fexit_skel = fexit_many_args__open_and_load();
+    if (!ASSERT_OK_PTR(fexit_skel, "fexit_many_args_skel_load"))
+        goto cleanup;
+
+    err = fexit_many_args__attach(fexit_skel);
+    if (!ASSERT_OK(err, "fexit_many_args_attach"))
+        goto cleanup;
+
+    ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+
+    ASSERT_EQ(fexit_skel->bss->test1_result, 1,
+              "fexit_many_args_result1");
+    ASSERT_EQ(fexit_skel->bss->test2_result, 1,
+              "fexit_many_args_result2");
+    ASSERT_EQ(fexit_skel->bss->test3_result, 1,
+              "fexit_many_args_result3");
+
+cleanup:
+    fexit_many_args__destroy(fexit_skel);
+}
+
+void test_fexit_test(void)
+{
+    if (test__start_subtest("fexit"))
+        fexit_test();
+    if (test__start_subtest("fexit_many_args"))
+        fexit_many_args();
 }
+3 -1
tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
···
     prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
     err = bpf_prog_test_run_opts(prog_fd, &topts);
     ASSERT_OK(err, "test_run");
-    ASSERT_EQ(topts.retval, 1234, "test_run");
+
+    ASSERT_EQ(topts.retval >> 16, 1, "test_run");
+    ASSERT_EQ(topts.retval & 0xffff, 1234 + 29, "test_run");
 
     ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
     ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+11 -3
tools/testing/selftests/bpf/prog_tests/global_map_resize.c
···
     struct test_global_map_resize *skel;
     struct bpf_map *map;
     const __u32 desired_sz = sizeof(skel->bss->sum) + sysconf(_SC_PAGE_SIZE) * 2;
-    size_t array_len, actual_sz;
+    size_t array_len, actual_sz, new_sz;
 
     skel = test_global_map_resize__open();
     if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
···
         goto teardown;
     if (!ASSERT_EQ(bpf_map__value_size(map), desired_sz, "resize"))
         goto teardown;
+
+    new_sz = sizeof(skel->data_percpu_arr->percpu_arr[0]) * libbpf_num_possible_cpus();
+    err = bpf_map__set_value_size(skel->maps.data_percpu_arr, new_sz);
+    ASSERT_OK(err, "percpu_arr_resize");
 
     /* set the expected number of elements based on the resized array */
     array_len = (desired_sz - sizeof(skel->bss->sum)) / sizeof(skel->bss->array[0]);
···
 
 static void global_map_resize_data_subtest(void)
 {
-    int err;
     struct test_global_map_resize *skel;
     struct bpf_map *map;
     const __u32 desired_sz = sysconf(_SC_PAGE_SIZE) * 2;
-    size_t array_len, actual_sz;
+    size_t array_len, actual_sz, new_sz;
+    int err;
 
     skel = test_global_map_resize__open();
     if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
···
         goto teardown;
     if (!ASSERT_EQ(bpf_map__value_size(map), desired_sz, "resize"))
         goto teardown;
+
+    new_sz = sizeof(skel->data_percpu_arr->percpu_arr[0]) * libbpf_num_possible_cpus();
+    err = bpf_map__set_value_size(skel->maps.data_percpu_arr, new_sz);
+    ASSERT_OK(err, "percpu_arr_resize");
 
     /* set the expected number of elements based on the resized array */
     array_len = (desired_sz - sizeof(skel->bss->sum)) / sizeof(skel->data_custom->my_array[0]);
+7 -3
tools/testing/selftests/bpf/prog_tests/modify_return.c
···
     ASSERT_EQ(skel->bss->fexit_result, 1, "modify_return fexit_result");
     ASSERT_EQ(skel->bss->fmod_ret_result, 1, "modify_return fmod_ret_result");
 
+    ASSERT_EQ(skel->bss->fentry_result2, 1, "modify_return fentry_result2");
+    ASSERT_EQ(skel->bss->fexit_result2, 1, "modify_return fexit_result2");
+    ASSERT_EQ(skel->bss->fmod_ret_result2, 1, "modify_return fmod_ret_result2");
+
 cleanup:
     modify_return__destroy(skel);
 }
···
 void serial_test_modify_return(void)
 {
     run_test(0 /* input_retval */,
-             1 /* want_side_effect */,
-             4 /* want_ret */);
+             2 /* want_side_effect */,
+             33 /* want_ret */);
     run_test(-EINVAL /* input_retval */,
              0 /* want_side_effect */,
-             -EINVAL /* want_ret */);
+             -EINVAL * 2 /* want_ret */);
 }
+36
tools/testing/selftests/bpf/prog_tests/ptr_untrusted.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Yafang Shao <laoar.shao@gmail.com> */
+
+#include <string.h>
+#include <linux/bpf.h>
+#include <test_progs.h>
+#include "test_ptr_untrusted.skel.h"
+
+#define TP_NAME "sched_switch"
+
+void serial_test_ptr_untrusted(void)
+{
+    struct test_ptr_untrusted *skel;
+    int err;
+
+    skel = test_ptr_untrusted__open_and_load();
+    if (!ASSERT_OK_PTR(skel, "skel_open"))
+        goto cleanup;
+
+    /* First, attach lsm prog */
+    skel->links.lsm_run = bpf_program__attach_lsm(skel->progs.lsm_run);
+    if (!ASSERT_OK_PTR(skel->links.lsm_run, "lsm_attach"))
+        goto cleanup;
+
+    /* Second, attach raw_tp prog. The lsm prog will be triggered. */
+    skel->links.raw_tp_run = bpf_program__attach_raw_tracepoint(skel->progs.raw_tp_run,
+                                                                TP_NAME);
+    if (!ASSERT_OK_PTR(skel->links.raw_tp_run, "raw_tp_attach"))
+        goto cleanup;
+
+    err = strncmp(skel->bss->tp_name, TP_NAME, strlen(TP_NAME));
+    ASSERT_EQ(err, 0, "cmp_tp_name");
+
+cleanup:
+    test_ptr_untrusted__destroy(skel);
+}
+1 -1
tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c
···
     exp_active_estab_in.max_delack_ms = 22;
 
     exp_passive_hdr_stg.syncookie = true;
-    exp_active_hdr_stg.resend_syn = true,
+    exp_active_hdr_stg.resend_syn = true;
 
     prepare_out();
 
+19
tools/testing/selftests/bpf/prog_tests/tracing_struct.c
···
 
     ASSERT_EQ(skel->bss->t6, 1, "t6 ret");
 
+    ASSERT_EQ(skel->bss->t7_a, 16, "t7:a");
+    ASSERT_EQ(skel->bss->t7_b, 17, "t7:b");
+    ASSERT_EQ(skel->bss->t7_c, 18, "t7:c");
+    ASSERT_EQ(skel->bss->t7_d, 19, "t7:d");
+    ASSERT_EQ(skel->bss->t7_e, 20, "t7:e");
+    ASSERT_EQ(skel->bss->t7_f_a, 21, "t7:f.a");
+    ASSERT_EQ(skel->bss->t7_f_b, 22, "t7:f.b");
+    ASSERT_EQ(skel->bss->t7_ret, 133, "t7 ret");
+
+    ASSERT_EQ(skel->bss->t8_a, 16, "t8:a");
+    ASSERT_EQ(skel->bss->t8_b, 17, "t8:b");
+    ASSERT_EQ(skel->bss->t8_c, 18, "t8:c");
+    ASSERT_EQ(skel->bss->t8_d, 19, "t8:d");
+    ASSERT_EQ(skel->bss->t8_e, 20, "t8:e");
+    ASSERT_EQ(skel->bss->t8_f_a, 21, "t8:f.a");
+    ASSERT_EQ(skel->bss->t8_f_b, 22, "t8:f.b");
+    ASSERT_EQ(skel->bss->t8_g, 23, "t8:g");
+    ASSERT_EQ(skel->bss->t8_ret, 156, "t8 ret");
+
     tracing_struct__detach(skel);
 destroy_skel:
     tracing_struct__destroy(skel);
+2 -2
tools/testing/selftests/bpf/prog_tests/trampoline_count.c
···
     if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
         goto cleanup;
 
-    ASSERT_EQ(opts.retval & 0xffff, 4, "bpf_modify_return_test.result");
-    ASSERT_EQ(opts.retval >> 16, 1, "bpf_modify_return_test.side_effect");
+    ASSERT_EQ(opts.retval & 0xffff, 33, "bpf_modify_return_test.result");
+    ASSERT_EQ(opts.retval >> 16, 2, "bpf_modify_return_test.side_effect");
 
 cleanup:
     for (; i >= 0; i--) {
+2
tools/testing/selftests/bpf/prog_tests/verifier.c
···
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_subprog_precision.skel.h"
 #include "verifier_subreg.skel.h"
+#include "verifier_typedef.skel.h"
 #include "verifier_uninit.skel.h"
 #include "verifier_unpriv.skel.h"
 #include "verifier_unpriv_perf.skel.h"
···
 void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); }
 void test_verifier_subprog_precision(void) { RUN(verifier_subprog_precision); }
 void test_verifier_subreg(void) { RUN(verifier_subreg); }
+void test_verifier_typedef(void) { RUN(verifier_typedef); }
 void test_verifier_uninit(void) { RUN(verifier_uninit); }
 void test_verifier_unpriv(void) { RUN(verifier_unpriv); }
 void test_verifier_unpriv_perf(void) { RUN(verifier_unpriv_perf); }
+382
tools/testing/selftests/bpf/progs/cgroup_tcp_skb.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+#include <linux/bpf.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_helpers.h>
+
+#include <linux/if_ether.h>
+#include <linux/in.h>
+#include <linux/in6.h>
+#include <linux/ipv6.h>
+#include <linux/tcp.h>
+
+#include <sys/types.h>
+#include <sys/socket.h>
+
+#include "cgroup_tcp_skb.h"
+
+char _license[] SEC("license") = "GPL";
+
+__u16 g_sock_port = 0;
+__u32 g_sock_state = 0;
+int g_unexpected = 0;
+__u32 g_packet_count = 0;
+
+int needed_tcp_pkt(struct __sk_buff *skb, struct tcphdr *tcph)
+{
+    struct ipv6hdr ip6h;
+
+    if (skb->protocol != bpf_htons(ETH_P_IPV6))
+        return 0;
+    if (bpf_skb_load_bytes(skb, 0, &ip6h, sizeof(ip6h)))
+        return 0;
+
+    if (ip6h.nexthdr != IPPROTO_TCP)
+        return 0;
+
+    if (bpf_skb_load_bytes(skb, sizeof(ip6h), tcph, sizeof(*tcph)))
+        return 0;
+
+    if (tcph->source != bpf_htons(g_sock_port) &&
+        tcph->dest != bpf_htons(g_sock_port))
+        return 0;
+
+    return 1;
+}
+
+/* Run accept() on a socket in the cgroup to receive a new connection. */
+static int egress_accept(struct tcphdr *tcph)
+{
+    if (g_sock_state == SYN_RECV_SENDING_SYN_ACK) {
+        if (tcph->fin || !tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = SYN_RECV;
+        return 1;
+    }
+
+    return 0;
+}
+
+static int ingress_accept(struct tcphdr *tcph)
+{
+    switch (g_sock_state) {
+    case INIT:
+        if (!tcph->syn || tcph->fin || tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = SYN_RECV_SENDING_SYN_ACK;
+        break;
+    case SYN_RECV:
+        if (tcph->fin || tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = ESTABLISHED;
+        break;
+    default:
+        return 0;
+    }
+
+    return 1;
+}
+
+/* Run connect() on a socket in the cgroup to start a new connection. */
+static int egress_connect(struct tcphdr *tcph)
+{
+    if (g_sock_state == INIT) {
+        if (!tcph->syn || tcph->fin || tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = SYN_SENT;
+        return 1;
+    }
+
+    return 0;
+}
+
+static int ingress_connect(struct tcphdr *tcph)
+{
+    if (g_sock_state == SYN_SENT) {
+        if (tcph->fin || !tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = ESTABLISHED;
+        return 1;
+    }
+
+    return 0;
+}
+
+/* The connection is closed by the peer outside the cgroup. */
+static int egress_close_remote(struct tcphdr *tcph)
+{
+    switch (g_sock_state) {
+    case ESTABLISHED:
+        break;
+    case CLOSE_WAIT_SENDING_ACK:
+        if (tcph->fin || tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = CLOSE_WAIT;
+        break;
+    case CLOSE_WAIT:
+        if (!tcph->fin)
+            g_unexpected++;
+        else
+            g_sock_state = LAST_ACK;
+        break;
+    default:
+        return 0;
+    }
+
+    return 1;
+}
+
+static int ingress_close_remote(struct tcphdr *tcph)
+{
+    switch (g_sock_state) {
+    case ESTABLISHED:
+        if (tcph->fin)
+            g_sock_state = CLOSE_WAIT_SENDING_ACK;
+        break;
+    case LAST_ACK:
+        if (tcph->fin || tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = CLOSED;
+        break;
+    default:
+        return 0;
+    }
+
+    return 1;
+}
+
+/* The connection is closed by the endpoint inside the cgroup. */
+static int egress_close_local(struct tcphdr *tcph)
+{
+    switch (g_sock_state) {
+    case ESTABLISHED:
+        if (tcph->fin)
+            g_sock_state = FIN_WAIT1;
+        break;
+    case TIME_WAIT_SENDING_ACK:
+        if (tcph->fin || tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = TIME_WAIT;
+        break;
+    default:
+        return 0;
+    }
+
+    return 1;
+}
+
+static int ingress_close_local(struct tcphdr *tcph)
+{
+    switch (g_sock_state) {
+    case ESTABLISHED:
+        break;
+    case FIN_WAIT1:
+        if (tcph->fin || tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = FIN_WAIT2;
+        break;
+    case FIN_WAIT2:
+        if (!tcph->fin || tcph->syn || !tcph->ack)
+            g_unexpected++;
+        else
+            g_sock_state = TIME_WAIT_SENDING_ACK;
+        break;
+    default:
+        return 0;
+    }
+
+    return 1;
+}
+
+/* Check the types of outgoing packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/egress")
+int server_egress(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Egress of the server socket. */
+    if (egress_accept(&tcph) || egress_close_remote(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of incoming packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/ingress")
+int server_ingress(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Ingress of the server socket. */
+    if (ingress_accept(&tcph) || ingress_close_remote(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of outgoing packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/egress")
+int server_egress_srv(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Egress of the server socket. */
+    if (egress_accept(&tcph) || egress_close_local(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of incoming packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/ingress")
+int server_ingress_srv(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Ingress of the server socket. */
+    if (ingress_accept(&tcph) || ingress_close_local(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of outgoing packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/egress")
+int client_egress_srv(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Egress of the server socket. */
+    if (egress_connect(&tcph) || egress_close_remote(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of incoming packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/ingress")
+int client_ingress_srv(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Ingress of the server socket. */
+    if (ingress_connect(&tcph) || ingress_close_remote(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of outgoing packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/egress")
+int client_egress(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Egress of the server socket. */
+    if (egress_connect(&tcph) || egress_close_local(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+
+/* Check the types of incoming packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/ingress")
+int client_ingress(struct __sk_buff *skb)
+{
+    struct tcphdr tcph;
+
+    if (!needed_tcp_pkt(skb, &tcph))
+        return 1;
+
+    g_packet_count++;
+
+    /* Ingress of the server socket. */
+    if (ingress_connect(&tcph) || ingress_close_local(&tcph))
+        return 1;
+
+    g_unexpected++;
+    return 1;
+}
+39
tools/testing/selftests/bpf/progs/fentry_many_args.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Tencent */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_testmod_fentry_test7")
+int BPF_PROG(test1, __u64 a, void *b, short c, int d, void *e, char f,
+             int g)
+{
+    test1_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+        e == (void *)20 && f == 21 && g == 22;
+    return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fentry/bpf_testmod_fentry_test11")
+int BPF_PROG(test2, __u64 a, void *b, short c, int d, void *e, char f,
+             int g, unsigned int h, long i, __u64 j, unsigned long k)
+{
+    test2_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+        e == (void *)20 && f == 21 && g == 22 && h == 23 &&
+        i == 24 && j == 25 && k == 26;
+    return 0;
+}
+
+__u64 test3_result = 0;
+SEC("fentry/bpf_testmod_fentry_test11")
+int BPF_PROG(test3, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f,
+             __u64 g, __u64 h, __u64 i, __u64 j, __u64 k)
+{
+    test3_result = a == 16 && b == 17 && c == 18 && d == 19 &&
+        e == 20 && f == 21 && g == 22 && h == 23 &&
+        i == 24 && j == 25 && k == 26;
+    return 0;
+}
+40
tools/testing/selftests/bpf/progs/fexit_many_args.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Tencent */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_result = 0;
+SEC("fexit/bpf_testmod_fentry_test7")
+int BPF_PROG(test1, __u64 a, void *b, short c, int d, void *e, char f,
+             int g, int ret)
+{
+    test1_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+        e == (void *)20 && f == 21 && g == 22 && ret == 133;
+    return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_testmod_fentry_test11")
+int BPF_PROG(test2, __u64 a, void *b, short c, int d, void *e, char f,
+             int g, unsigned int h, long i, __u64 j, unsigned long k,
+             int ret)
+{
+    test2_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+        e == (void *)20 && f == 21 && g == 22 && h == 23 &&
+        i == 24 && j == 25 && k == 26 && ret == 231;
+    return 0;
+}
+
+__u64 test3_result = 0;
+SEC("fexit/bpf_testmod_fentry_test11")
+int BPF_PROG(test3, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f,
+             __u64 g, __u64 h, __u64 i, __u64 j, __u64 k, __u64 ret)
+{
+    test3_result = a == 16 && b == 17 && c == 18 && d == 19 &&
+        e == 20 && f == 21 && g == 22 && h == 23 &&
+        i == 24 && j == 25 && k == 26 && ret == 231;
+    return 0;
+}
+105
tools/testing/selftests/bpf/progs/htab_mem_bench.c
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#include <stdbool.h>
+#include <errno.h>
+#include <linux/types.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#define OP_BATCH 64
+
+struct update_ctx {
+        unsigned int from;
+        unsigned int step;
+};
+
+struct {
+        __uint(type, BPF_MAP_TYPE_HASH);
+        __uint(key_size, 4);
+        __uint(map_flags, BPF_F_NO_PREALLOC);
+} htab SEC(".maps");
+
+char _license[] SEC("license") = "GPL";
+
+unsigned char zeroed_value[4096];
+unsigned int nr_thread = 0;
+long op_cnt = 0;
+
+static int write_htab(unsigned int i, struct update_ctx *ctx, unsigned int flags)
+{
+        bpf_map_update_elem(&htab, &ctx->from, zeroed_value, flags);
+        ctx->from += ctx->step;
+
+        return 0;
+}
+
+static int overwrite_htab(unsigned int i, struct update_ctx *ctx)
+{
+        return write_htab(i, ctx, 0);
+}
+
+static int newwrite_htab(unsigned int i, struct update_ctx *ctx)
+{
+        return write_htab(i, ctx, BPF_NOEXIST);
+}
+
+static int del_htab(unsigned int i, struct update_ctx *ctx)
+{
+        bpf_map_delete_elem(&htab, &ctx->from);
+        ctx->from += ctx->step;
+
+        return 0;
+}
+
+SEC("?tp/syscalls/sys_enter_getpgid")
+int overwrite(void *ctx)
+{
+        struct update_ctx update;
+
+        update.from = bpf_get_smp_processor_id();
+        update.step = nr_thread;
+        bpf_loop(OP_BATCH, overwrite_htab, &update, 0);
+        __sync_fetch_and_add(&op_cnt, 1);
+        return 0;
+}
+
+SEC("?tp/syscalls/sys_enter_getpgid")
+int batch_add_batch_del(void *ctx)
+{
+        struct update_ctx update;
+
+        update.from = bpf_get_smp_processor_id();
+        update.step = nr_thread;
+        bpf_loop(OP_BATCH, overwrite_htab, &update, 0);
+
+        update.from = bpf_get_smp_processor_id();
+        bpf_loop(OP_BATCH, del_htab, &update, 0);
+
+        __sync_fetch_and_add(&op_cnt, 2);
+        return 0;
+}
+
+SEC("?tp/syscalls/sys_enter_getpgid")
+int add_only(void *ctx)
+{
+        struct update_ctx update;
+
+        update.from = bpf_get_smp_processor_id() / 2;
+        update.step = nr_thread / 2;
+        bpf_loop(OP_BATCH, newwrite_htab, &update, 0);
+        __sync_fetch_and_add(&op_cnt, 1);
+        return 0;
+}
+
+SEC("?tp/syscalls/sys_enter_getppid")
+int del_only(void *ctx)
+{
+        struct update_ctx update;
+
+        update.from = bpf_get_smp_processor_id() / 2;
+        update.step = nr_thread / 2;
+        bpf_loop(OP_BATCH, del_htab, &update, 0);
+        __sync_fetch_and_add(&op_cnt, 1);
+        return 0;
+}
+1 -1
tools/testing/selftests/bpf/progs/linked_list.c
 int list_push_pop_multiple(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map)
 {
         struct bpf_list_node *n;
-        struct foo *f[8], *pf;
+        struct foo *f[200], *pf;
         int i;
 
         /* Loop following this check adds nodes 2-at-a-time in order to
+24
tools/testing/selftests/bpf/progs/map_percpu_stats.c
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Isovalent */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+__u32 target_id;
+
+__s64 bpf_map_sum_elem_count(struct bpf_map *map) __ksym;
+
+SEC("iter/bpf_map")
+int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
+{
+        struct seq_file *seq = ctx->meta->seq;
+        struct bpf_map *map = ctx->map;
+
+        if (map && map->id == target_id)
+                BPF_SEQ_PRINTF(seq, "%lld", bpf_map_sum_elem_count(map));
+
+        return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+40
tools/testing/selftests/bpf/progs/modify_return.c
 
         return 0;
 }
+
+static int sequence2;
+
+__u64 fentry_result2 = 0;
+SEC("fentry/bpf_modify_return_test2")
+int BPF_PROG(fentry_test2, int a, int *b, short c, int d, void *e, char f,
+             int g)
+{
+        sequence2++;
+        fentry_result2 = (sequence2 == 1);
+        return 0;
+}
+
+__u64 fmod_ret_result2 = 0;
+SEC("fmod_ret/bpf_modify_return_test2")
+int BPF_PROG(fmod_ret_test2, int a, int *b, short c, int d, void *e, char f,
+             int g, int ret)
+{
+        sequence2++;
+        /* This is the first fmod_ret program, the ret passed should be 0 */
+        fmod_ret_result2 = (sequence2 == 2 && ret == 0);
+        return input_retval;
+}
+
+__u64 fexit_result2 = 0;
+SEC("fexit/bpf_modify_return_test2")
+int BPF_PROG(fexit_test2, int a, int *b, short c, int d, void *e, char f,
+             int g, int ret)
+{
+        sequence2++;
+        /* If input_retval is non-zero a successful modification should have
+         * occurred.
+         */
+        if (input_retval)
+                fexit_result2 = (sequence2 == 3 && ret == input_retval);
+        else
+                fexit_result2 = (sequence2 == 3 && ret == 29);
+
+        return 0;
+}
+16
tools/testing/selftests/bpf/progs/nested_trust_failure.c
 
 char _license[] SEC("license") = "GPL";
 
+struct {
+        __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+        __uint(map_flags, BPF_F_NO_PREALLOC);
+        __type(key, int);
+        __type(value, u64);
+} sk_storage_map SEC(".maps");
+
 /* Prototype for all of the program trace events below:
  *
  * TRACE_EVENT(task_newtask,
···
 int BPF_PROG(test_invalid_nested_offset, struct task_struct *task, u64 clone_flags)
 {
         bpf_cpumask_first_zero(&task->cpus_mask);
+        return 0;
+}
+
+/* Although R2 is of type sk_buff while sock_common is expected, we hit the untrusted ptr first. */
+SEC("tp_btf/tcp_probe")
+__failure __msg("R2 type=untrusted_ptr_ expected=ptr_, trusted_ptr_, rcu_ptr_")
+int BPF_PROG(test_invalid_skb_field, struct sock *sk, struct sk_buff *skb)
+{
+        bpf_sk_storage_get(&sk_storage_map, skb->next, 0, 0);
         return 0;
 }
+15
tools/testing/selftests/bpf/progs/nested_trust_success.c
 
 char _license[] SEC("license") = "GPL";
 
+struct {
+        __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+        __uint(map_flags, BPF_F_NO_PREALLOC);
+        __type(key, int);
+        __type(value, u64);
+} sk_storage_map SEC(".maps");
+
 SEC("tp_btf/task_newtask")
 __success
 int BPF_PROG(test_read_cpumask, struct task_struct *task, u64 clone_flags)
 {
         bpf_cpumask_test_cpu(0, task->cpus_ptr);
+        return 0;
+}
+
+SEC("tp_btf/tcp_probe")
+__success
+int BPF_PROG(test_skb_field, struct sock *sk, struct sk_buff *skb)
+{
+        bpf_sk_storage_get(&sk_storage_map, skb->sk, 0, 0);
         return 0;
 }
+6 -2
tools/testing/selftests/bpf/progs/test_global_map_resize.c
 int my_array_first[1] SEC(".data.array_not_last");
 int my_int_last SEC(".data.array_not_last");
 
+int percpu_arr[1] SEC(".data.percpu_arr");
+
 SEC("tp/syscalls/sys_enter_getpid")
 int bss_array_sum(void *ctx)
 {
         if (pid != (bpf_get_current_pid_tgid() >> 32))
                 return 0;
 
-        sum = 0;
+        /* this will be zero, we just rely on verifier not rejecting this */
+        sum = percpu_arr[bpf_get_smp_processor_id()];
 
         for (size_t i = 0; i < bss_array_len; ++i)
                 sum += array[i];
···
         if (pid != (bpf_get_current_pid_tgid() >> 32))
                 return 0;
 
-        sum = 0;
+        /* this will be zero, we just rely on verifier not rejecting this */
+        sum = percpu_arr[bpf_get_smp_processor_id()];
 
         for (size_t i = 0; i < data_array_len; ++i)
                 sum += my_array[i];
+29
tools/testing/selftests/bpf/progs/test_ptr_untrusted.c
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Yafang Shao <laoar.shao@gmail.com> */
+
+#include "vmlinux.h"
+#include <bpf/bpf_tracing.h>
+
+char tp_name[128];
+
+SEC("lsm/bpf")
+int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
+{
+        switch (cmd) {
+        case BPF_RAW_TRACEPOINT_OPEN:
+                bpf_probe_read_user_str(tp_name, sizeof(tp_name) - 1,
+                                        (void *)attr->raw_tracepoint.name);
+                break;
+        default:
+                break;
+        }
+        return 0;
+}
+
+SEC("raw_tracepoint")
+int BPF_PROG(raw_tp_run)
+{
+        return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+54
tools/testing/selftests/bpf/progs/tracing_struct.c
 int b[];
 };
 
+struct bpf_testmod_struct_arg_4 {
+        u64 a;
+        int b;
+};
+
 long t1_a_a, t1_a_b, t1_b, t1_c, t1_ret, t1_nregs;
 __u64 t1_reg0, t1_reg1, t1_reg2, t1_reg3;
 long t2_a, t2_b_a, t2_b_b, t2_c, t2_ret;
···
 long t4_a_a, t4_b, t4_c, t4_d, t4_e_a, t4_e_b, t4_ret;
 long t5_ret;
 int t6;
+long t7_a, t7_b, t7_c, t7_d, t7_e, t7_f_a, t7_f_b, t7_ret;
+long t8_a, t8_b, t8_c, t8_d, t8_e, t8_f_a, t8_f_b, t8_g, t8_ret;
+
 
 SEC("fentry/bpf_testmod_test_struct_arg_1")
 int BPF_PROG2(test_struct_arg_1, struct bpf_testmod_struct_arg_2, a, int, b, int, c)
···
 int BPF_PROG2(test_struct_arg_11, struct bpf_testmod_struct_arg_3 *, a)
 {
         t6 = a->b[0];
+        return 0;
+}
+
+SEC("fentry/bpf_testmod_test_struct_arg_7")
+int BPF_PROG2(test_struct_arg_12, __u64, a, void *, b, short, c, int, d,
+              void *, e, struct bpf_testmod_struct_arg_4, f)
+{
+        t7_a = a;
+        t7_b = (long)b;
+        t7_c = c;
+        t7_d = d;
+        t7_e = (long)e;
+        t7_f_a = f.a;
+        t7_f_b = f.b;
+        return 0;
+}
+
+SEC("fexit/bpf_testmod_test_struct_arg_7")
+int BPF_PROG2(test_struct_arg_13, __u64, a, void *, b, short, c, int, d,
+              void *, e, struct bpf_testmod_struct_arg_4, f, int, ret)
+{
+        t7_ret = ret;
+        return 0;
+}
+
+SEC("fentry/bpf_testmod_test_struct_arg_8")
+int BPF_PROG2(test_struct_arg_14, __u64, a, void *, b, short, c, int, d,
+              void *, e, struct bpf_testmod_struct_arg_4, f, int, g)
+{
+        t8_a = a;
+        t8_b = (long)b;
+        t8_c = c;
+        t8_d = d;
+        t8_e = (long)e;
+        t8_f_a = f.a;
+        t8_f_b = f.b;
+        t8_g = g;
+        return 0;
+}
+
+SEC("fexit/bpf_testmod_test_struct_arg_8")
+int BPF_PROG2(test_struct_arg_15, __u64, a, void *, b, short, c, int, d,
+              void *, e, struct bpf_testmod_struct_arg_4, f, int, g,
+              int, ret)
+{
+        t8_ret = ret;
         return 0;
 }
 
+23
tools/testing/selftests/bpf/progs/verifier_typedef.c
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("fentry/bpf_fentry_test_sinfo")
+__description("typedef: resolve")
+__success __retval(0)
+__naked void resolve_typedef(void)
+{
+        asm volatile ("                                 \
+        r1 = *(u64 *)(r1 +0);                           \
+        r2 = *(u64 *)(r1 +%[frags_offs]);               \
+        r0 = 0;                                         \
+        exit;                                           \
+"       :
+        : __imm_const(frags_offs,
+                      offsetof(struct skb_shared_info, frags))
+        : __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
+4 -1
tools/testing/selftests/bpf/trace_helpers.c
 #define TRACEFS_PIPE "/sys/kernel/tracing/trace_pipe"
 #define DEBUGFS_PIPE "/sys/kernel/debug/tracing/trace_pipe"
 
-#define MAX_SYMS 300000
+#define MAX_SYMS 400000
 static struct ksym syms[MAX_SYMS];
 static int sym_cnt;
 
···
                         break;
                 if (!addr)
                         continue;
+                if (i >= MAX_SYMS)
+                        return -EFBIG;
+
                 syms[i].addr = (long) addr;
                 syms[i].name = strdup(func);
                 i++;
+1
tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
         .result = REJECT,
         .errstr = "R0 invalid mem access",
         .errstr_unpriv = "R10 partial copy of pointer",
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
+2
tools/testing/selftests/bpf/verifier/ctx_skb.c
         },
         .result = ACCEPT,
         .prog_type = BPF_PROG_TYPE_SK_SKB,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "pkt_end < pkt taken check",
···
         },
         .result = ACCEPT,
         .prog_type = BPF_PROG_TYPE_SK_SKB,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
+8
tools/testing/selftests/bpf/verifier/jmp32.c
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jgt32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jle32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jlt32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jsge32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jsgt32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jsle32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jslt32: BPF_K",
···
         .result_unpriv = REJECT,
         .result = ACCEPT,
         .retval = 2,
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "jgt32: range bound deduction, reg op imm",
+2
tools/testing/selftests/bpf/verifier/map_kptr.c
         .fixup_map_kptr = { 1 },
         .result = REJECT,
         .errstr = "kptr access cannot have variable offset",
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "map_kptr: bpf_kptr_xchg non-const var_off",
···
         .fixup_map_kptr = { 1 },
         .result = REJECT,
         .errstr = "kptr access misaligned expected=0 off=7",
+        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
         "map_kptr: reject var_off != 0",
+1 -1
tools/testing/selftests/bpf/verifier/precise.c
         },
         .fixup_map_ringbuf = { 1 },
         .prog_type = BPF_PROG_TYPE_XDP,
-        .flags = BPF_F_TEST_STATE_FREQ,
+        .flags = BPF_F_TEST_STATE_FREQ | F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
         .errstr = "invalid access to memory, mem_size=1 off=42 size=8",
         .result = REJECT,
 },
+3 -3
tools/testing/selftests/hid/Makefile
                 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ)
 
 # Get Clang's default includes on this system, as opposed to those seen by
-# '-target bpf'. This fixes "missing" files on some architectures/distros,
+# '--target=bpf'. This fixes "missing" files on some architectures/distros,
 # such as asm/byteorder.h, asm/socket.h, asm/sockios.h, sys/cdefs.h etc.
 #
 # Use '-idirafter': Don't interfere with include mechanics except where the
···
 # $3 - CFLAGS
 define CLANG_BPF_BUILD_RULE
         $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
-        $(Q)$(CLANG) $3 -O2 -target bpf -c $1 -mcpu=v3 -o $2
+        $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v3 -o $2
 endef
 # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
 define CLANG_NOALU32_BPF_BUILD_RULE
         $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
-        $(Q)$(CLANG) $3 -O2 -target bpf -c $1 -mcpu=v2 -o $2
+        $(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v2 -o $2
 endef
 # Build BPF object using GCC
 define GCC_BPF_BUILD_RULE
+2 -2
tools/testing/selftests/net/Makefile
         mkdir -p $@
 
 # Get Clang's default includes on this system, as opposed to those seen by
-# '-target bpf'. This fixes "missing" files on some architectures/distros,
+# '--target=bpf'. This fixes "missing" files on some architectures/distros,
 # such as asm/byteorder.h, asm/socket.h, asm/sockios.h, sys/cdefs.h etc.
 #
 # Use '-idirafter': Don't interfere with include mechanics except where the
···
 CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH))
 
 $(OUTPUT)/nat6to4.o: nat6to4.c $(BPFOBJ) | $(MAKE_DIRS)
-        $(CLANG) -O2 -target bpf -c $< $(CCINCLUDE) $(CLANG_SYS_INCLUDES) -o $@
+        $(CLANG) -O2 --target=bpf -c $< $(CCINCLUDE) $(CLANG_SYS_INCLUDES) -o $@
 
 $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
            $(APIDIR)/linux/bpf.h \
+1 -1
tools/testing/selftests/tc-testing/Makefile
 
 $(OUTPUT)/%.o: %.c
         $(CLANG) $(CLANG_FLAGS) \
-              -O2 -target bpf -emit-llvm -c $< -o - | \
+              -O2 --target=bpf -emit-llvm -c $< -o - | \
         $(LLC) -march=bpf -mcpu=$(CPU) $(LLC_FLAGS) -filetype=obj -o $@
 
 TEST_PROGS += ./tdc.sh