Merge branch 'perf-s390-regression-move-uid-filtering-to-bpf-filters'

Ilya Leoshkevich says:

====================
perf/s390: Regression: Move uid filtering to BPF filters

v4: https://lore.kernel.org/bpf/20250806114227.14617-1-iii@linux.ibm.com/
v4 -> v5: Fix a typo in the commit message (Yonghong).

v3: https://lore.kernel.org/bpf/20250805130346.1225535-1-iii@linux.ibm.com/
v3 -> v4: Rename the new field to dont_enable (Alexei, Eduard).
Switch the Fixes: tag in patch 2 (Alexander, Thomas).
Fix typos in the cover letter (Thomas).

v2: https://lore.kernel.org/bpf/20250728144340.711196-1-tmricht@linux.ibm.com/
v2 -> v3: Use no_ioctl_enable in perf.

v1: https://lore.kernel.org/bpf/20250725093405.3629253-1-tmricht@linux.ibm.com/
v1 -> v2: Introduce no_ioctl_enable (Jiri).

Hi,

This series fixes a regression caused by moving UID filtering to BPF.
The regression affects all events that support auxiliary data, most
notably "cycles" events on s390, but also Intel PT events. The
symptom is missing events when UID filtering is enabled.

Patch 1 introduces a new option for the
bpf_program__attach_perf_event_opts() function.
Patch 2 makes use of it in perf and explains in detail why the
problem occurs.
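For other libbpf users, the new option can be used roughly as below. This is a minimal sketch, not part of the series: it assumes `prog` is an already loaded BPF program and `pfd` a perf event FD from perf_event_open(), and it omits program setup entirely. With .dont_enable set, libbpf skips its own PERF_EVENT_IOC_ENABLE ioctl, so the caller enables the event once all setup is done:

```c
#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <bpf/libbpf.h>

/* Sketch: attach without auto-enable, then enable manually. */
static struct bpf_link *attach_without_enable(struct bpf_program *prog, int pfd)
{
	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts,
			    .dont_enable = true);
	struct bpf_link *link;

	link = bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts);
	if (!link)
		return NULL;

	/* Any remaining setup goes here, before the event starts counting. */

	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
		bpf_link__destroy(link);
		return NULL;
	}
	return link;
}
```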

Thanks to Thomas Richter for the investigation and the initial version
of this fix, and to Jiri Olsa for suggestions.

Best regards,
Ilya
====================

Link: https://patch.msgid.link/20250806162417.19666-1-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

+15 -7
+8 -5
tools/lib/bpf/libbpf.c
@@ -10965,11 +10965,14 @@
 		}
 		link->link.fd = pfd;
 	}
-	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
-		err = -errno;
-		pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
-			prog->name, pfd, errstr(err));
-		goto err_out;
+
+	if (!OPTS_GET(opts, dont_enable, false)) {
+		if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
+			err = -errno;
+			pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
+				prog->name, pfd, errstr(err));
+			goto err_out;
+		}
 	}

 	return &link->link;
+3 -1
tools/lib/bpf/libbpf.h
@@ -499,9 +499,11 @@
 	__u64 bpf_cookie;
 	/* don't use BPF link when attach BPF program */
 	bool force_ioctl_attach;
+	/* don't automatically enable the event */
+	bool dont_enable;
 	size_t :0;
 };
-#define bpf_perf_event_opts__last_field force_ioctl_attach
+#define bpf_perf_event_opts__last_field dont_enable

 LIBBPF_API struct bpf_link *
 bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd);
+4 -1
tools/perf/util/bpf-filter.c
@@ -451,6 +451,8 @@
 	struct bpf_link *link;
 	struct perf_bpf_filter_entry *entry;
 	bool needs_idx_hash = !target__has_cpu(target);
+	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts,
+			    .dont_enable = true);

 	entry = calloc(MAX_FILTERS, sizeof(*entry));
 	if (entry == NULL)
@@ -524,7 +522,8 @@
 	prog = skel->progs.perf_sample_filter;
 	for (x = 0; x < xyarray__max_x(evsel->core.fd); x++) {
 		for (y = 0; y < xyarray__max_y(evsel->core.fd); y++) {
-			link = bpf_program__attach_perf_event(prog, FD(evsel, x, y));
+			link = bpf_program__attach_perf_event_opts(prog, FD(evsel, x, y),
+								   &pe_opts);
 			if (IS_ERR(link)) {
 				pr_err("Failed to attach perf sample-filter program\n");
 				ret = PTR_ERR(link);