
perf kwork: Constify control data for BPF

The control knobs set before loading BPF programs should be declared as
'const volatile' so that they can be optimized by the BPF core.
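The pattern can be sketched as follows. This is a simplified, hypothetical stand-in for the libbpf-generated skeleton, not the actual kwork sources; only the field names (rodata, has_cpu_filter, has_name_filter) mirror the diff below, everything else is illustrative:

```c
/* Hypothetical user-space sketch of the pattern this patch applies.
 * The struct stands in for the libbpf-generated skeleton; the
 * surrounding code is illustrative only. */
#include <assert.h>
#include <stddef.h>

struct kwork_rodata {
	int has_cpu_filter;
	int has_name_filter;
};

struct kwork_trace_bpf {
	/* 'const volatile' globals in the BPF object land in .rodata;
	 * libbpf lets user space write them only before __load(). */
	struct kwork_rodata *rodata;
};

/* Before this patch the flags lived in .bss and were set after load,
 * so the verifier had to keep the filtering branches.  Setting
 * .rodata before load lets the verifier treat the flags as known
 * constants and prune the dead branches entirely. */
static void prepare(struct kwork_trace_bpf *skel, const char *cpu_list,
		    const char *profile_name)
{
	if (cpu_list != NULL)
		skel->rodata->has_cpu_filter = 1;
	if (profile_name != NULL)
		skel->rodata->has_name_filter = 1;
	/* kwork_trace_bpf__load(skel) would run here, freezing .rodata */
}
```

After load, writes to .rodata fail, which is why the diff moves the flag assignments from the per-class load_prepare() callbacks to just before the skeleton load call.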

Committer testing:

root@x1:~# perf kwork report --use-bpf
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)intel_atomic_commit_work [ | 0009 | 18.680 ms | 2 | 18.553 ms | 362410.681580 s | 362410.700133 s |
(w)pm_runtime_work | 0007 | 13.300 ms | 1 | 13.300 ms | 362410.254996 s | 362410.268295 s |
(w)intel_atomic_commit_work [ | 0009 | 9.846 ms | 2 | 9.717 ms | 362410.172352 s | 362410.182069 s |
(w)acpi_ec_event_processor | 0002 | 8.106 ms | 1 | 8.106 ms | 362410.463187 s | 362410.471293 s |
(s)SCHED:7 | 0000 | 1.351 ms | 106 | 0.063 ms | 362410.658017 s | 362410.658080 s |
i915:157 | 0008 | 0.994 ms | 13 | 0.361 ms | 362411.222125 s | 362411.222486 s |
(s)SCHED:7 | 0001 | 0.703 ms | 98 | 0.047 ms | 362410.245004 s | 362410.245051 s |
(s)SCHED:7 | 0005 | 0.674 ms | 42 | 0.074 ms | 362411.483039 s | 362411.483113 s |
(s)NET_RX:3 | 0001 | 0.556 ms | 10 | 0.079 ms | 362411.066388 s | 362411.066467 s |
<SNIP>

root@x1:~# perf trace -e bpf --max-events 5 perf kwork report --use-bpf
0.000 ( 0.016 ms): perf/2948007 bpf(cmd: 36, uattr: 0x7ffededa6660, size: 8) = -1 EOPNOTSUPP (Operation not supported)
0.026 ( 0.106 ms): perf/2948007 bpf(cmd: PROG_LOAD, uattr: 0x7ffededa6390, size: 148) = 12
0.152 ( 0.032 ms): perf/2948007 bpf(cmd: PROG_LOAD, uattr: 0x7ffededa6450, size: 148) = 12
26.247 ( 0.138 ms): perf/2948007 bpf(cmd: PROG_LOAD, uattr: 0x7ffededa6300, size: 148) = 12
26.396 ( 0.012 ms): perf/2948007 bpf(uattr: 0x7ffededa64b0, size: 80) = 12
Starting trace, Hit <Ctrl+C> to stop and report
root@x1:~#

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <song@kernel.org>
Cc: Yang Jihong <yangjihong@bytedance.com>
Link: https://lore.kernel.org/r/20240902200515.2103769-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>


4 files changed, 13 insertions(+), 10 deletions(-)
tools/perf/util/bpf_kwork.c (+5 -4)
@@ -176,8 +176,6 @@
 			bpf_map_update_elem(fd, &cpu.cpu, &val, BPF_ANY);
 		}
 		perf_cpu_map__put(map);
-
-		skel->bss->has_cpu_filter = 1;
 	}
 
 	if (kwork->profile_name != NULL) {
@@ -195,8 +197,6 @@
 
 		key = 0;
 		bpf_map_update_elem(fd, &key, kwork->profile_name, BPF_ANY);
-
-		skel->bss->has_name_filter = 1;
 	}
 
 	return 0;
@@ -234,6 +238,11 @@
 		if (class_bpf->load_prepare != NULL)
 			class_bpf->load_prepare(kwork);
 	}
+
+	if (kwork->cpu_list != NULL)
+		skel->rodata->has_cpu_filter = 1;
+	if (kwork->profile_name != NULL)
+		skel->rodata->has_name_filter = 1;
 
 	if (kwork_trace_bpf__load(skel)) {
 		pr_debug("Failed to load kwork trace skeleton\n");
tools/perf/util/bpf_kwork_top.c (+4 -3)
@@ -151,14 +151,12 @@
 			bpf_map_update_elem(fd, &cpu.cpu, &val, BPF_ANY);
 		}
 		perf_cpu_map__put(map);
-
-		skel->bss->has_cpu_filter = 1;
 	}
 
 	return 0;
 }
 
-int perf_kwork__top_prepare_bpf(struct perf_kwork *kwork __maybe_unused)
+int perf_kwork__top_prepare_bpf(struct perf_kwork *kwork)
 {
 	struct bpf_program *prog;
 	struct kwork_class *class;
@@ -190,6 +192,9 @@
 		if (class_bpf->load_prepare)
 			class_bpf->load_prepare();
 	}
+
+	if (kwork->cpu_list)
+		skel->rodata->has_cpu_filter = 1;
 
 	if (kwork_top_bpf__load(skel)) {
 		pr_debug("Failed to load kwork top skeleton\n");
tools/perf/util/bpf_skel/kwork_top.bpf.c (+1 -1)
@@ -84,7 +84,7 @@
 
 int enabled = 0;
 
-int has_cpu_filter = 0;
+const volatile int has_cpu_filter = 0;
 
 __u64 from_timestamp = 0;
 __u64 to_timestamp = 0;
tools/perf/util/bpf_skel/kwork_trace.bpf.c (+3 -2)
@@ -68,8 +68,9 @@
 } perf_kwork_name_filter SEC(".maps");
 
 int enabled = 0;
-int has_cpu_filter = 0;
-int has_name_filter = 0;
+
+const volatile int has_cpu_filter = 0;
+const volatile int has_name_filter = 0;
 
 static __always_inline int local_strncmp(const char *s1,
 					 unsigned int sz, const char *s2)