Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

trace: bpf: Allow bpf to attach to bare tracepoints

Some subsystems only have bare tracepoints (a tracepoint with no
associated trace event) to avoid the problem of trace events being an
ABI that can't be changed.

From the bpf perspective, bare tracepoints are what it calls
RAW_TRACEPOINT().

Because bpf assumed a 1:1 mapping between tracepoints and trace events,
it relied on hooking into the DEFINE_EVENT() macro to create its bpf
mapping of the tracepoints. Since bare tracepoints are created with
DECLARE_TRACE() instead, bpf had no knowledge of their existence.

By teaching bpf_probe.h to parse DECLARE_TRACE() in a similar fashion to
DEFINE_EVENT(), bpf can find and attach to the new raw tracepoints.

Enabling that comes with the contract that changes to raw tracepoints
don't constitute a regression if they break existing bpf programs.
We need the ability to continue to morph and modify these raw
tracepoints without worrying about any ABI.

Update Documentation/bpf/bpf_design_QA.rst to document this contract.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210119122237.2426878-2-qais.yousef@arm.com

Authored by Qais Yousef, committed by Alexei Starovoitov
6939f4ef 86e6b4e9

+16 -2

Documentation/bpf/bpf_design_QA.rst (+6)

@@ -208,6 +208,12 @@
 kernel internals are subject to change and can break with newer kernels
 such that the program needs to be adapted accordingly.
 
+Q: Are tracepoints part of the stable ABI?
+------------------------------------------
+A: NO. Tracepoints are tied to internal implementation details hence they are
+subject to change and can break with newer kernels. BPF programs need to change
+accordingly when this happens.
+
 Q: How much stack space a BPF program uses?
 -------------------------------------------
 A: Currently all program types are limited to 512 bytes of stack
include/trace/bpf_probe.h (+10 -2)

@@ -55,14 +55,17 @@
 /* tracepoints with more than 12 arguments will hit build error */
 #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
 
-#undef DECLARE_EVENT_CLASS
-#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+#define __BPF_DECLARE_TRACE(call, proto, args) \
 static notrace void \
 __bpf_trace_##call(void *__data, proto) \
 { \
 	struct bpf_prog *prog = __data; \
 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args)); \
 }
+
+#undef DECLARE_EVENT_CLASS
+#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
 
 /*
  * This part is compiled out, it is only here as a build time check
@@ -110,6 +113,11 @@
 #undef DEFINE_EVENT_PRINT
 #define DEFINE_EVENT_PRINT(template, name, proto, args, print) \
 	DEFINE_EVENT(template, name, PARAMS(proto), PARAMS(args))
+
+#undef DECLARE_TRACE
+#define DECLARE_TRACE(call, proto, args) \
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \
+	__DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0)
 
 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)