Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'perf-tools-for-v6.14-2025-01-21' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools

Pull perf-tools updates from Namhyung Kim:
"There are a lot of changes in the perf tools in this cycle.

build:

- Use generic syscall table to generate syscall numbers on supported
archs

- This also makes it possible to get rid of libaudit, which was used
for syscall numbers

- Remove python2 support as it has been deprecated for years

- Fix issues on static build with libzstd

perf record:

- Intel-PT supports the "aux-action" config term to pause or resume
tracing in the aux-buffer. Users can start the intel_pt event as
"start-paused" and configure other events to control the Intel-PT
tracing:

# perf record --kcore -e intel_pt/aux-action=start-paused/ \
-e syscalls:sys_enter_newuname/aux-action=resume/ \
-e syscalls:sys_exit_newuname/aux-action=pause/ -- uname

This requires kernel support (which was added in v6.13)

perf lock:

- The 'perf lock contention' command can now symbolize locks in
dynamically allocated objects using the slab cache name when it runs
with BPF. Such dynamic locks have a "&" prefix in their names to
distinguish them from ordinary (static) locks

# perf lock con -abl -E 5 sleep 1
 contended   total wait     max wait     avg wait         address   symbol

         2      1.95 us      1.77 us       975 ns  ffff9d5e852d3498   &task_struct (mutex)
         1      1.18 us      1.18 us      1.18 us  ffff9d5e852d3538   &task_struct (mutex)
         4      1.12 us       354 ns       279 ns  ffff9d5e841ca800   &kmalloc-cg-512 (mutex)
         2       859 ns       617 ns       429 ns  ffffffffa41c3620   delayed_uprobe_lock (mutex)
         3       691 ns       388 ns       230 ns  ffffffffa41c0940   pack_mutex (mutex)

This also requires kernel/BPF support (which was added in v6.13)

perf ftrace:

- The 'perf ftrace latency' command gets a couple of options to support
linear buckets instead of exponential ones. It's also possible to
specify the max and min latency for the linear buckets:

# perf ftrace latency -abn -T switch_mm_irqs_off --bucket-range=100 \
--min-latency=200 --max-latency=800 -- sleep 1
#   DURATION     |      COUNT | GRAPH          |
     0 - 200  ns |        186 | ###            |
   200 - 300  ns |        256 | #####          |
   300 - 400  ns |        364 | #######        |
   400 - 500  ns |        223 | ####           |
   500 - 600  ns |        111 | ##             |
   600 - 700  ns |         41 |                |
   700 - 800  ns |        141 | ##             |
   800 - ...  ns |        169 | ###            |

# statistics  (in nsec)
  total time:              2162212
    avg time:                  967
    max time:                16817
    min time:                  132
       count:                 2236

- As you can see in the above example, it now shows the statistics
at the end so that users can see the avg/max/min latencies easily
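The linear bucketing described above can be sketched as follows. This is an illustrative model of the observed output, not perf's actual implementation: bucket 0 collects everything below --min-latency, the last bucket collects everything at or above --max-latency, and the span between them is split into fixed-width buckets of --bucket-range.

```python
def linear_bucket(latency, bucket_range, min_latency, max_latency):
    """Return the histogram bucket index for one latency sample."""
    # Total buckets: underflow + fixed-width middle buckets + overflow.
    n_buckets = 2 + (max_latency - min_latency) // bucket_range
    if latency < min_latency:
        return 0                      # the "0 - min" underflow bucket
    if latency >= max_latency:
        return n_buckets - 1          # the "max - ..." overflow bucket
    return 1 + (latency - min_latency) // bucket_range

# Mirroring the example output: --bucket-range=100 --min-latency=200
# --max-latency=800 gives 8 buckets, matching the 8 rows shown above.
assert linear_bucket(150, 100, 200, 800) == 0   # "0 - 200 ns" row
assert linear_bucket(250, 100, 200, 800) == 1   # "200 - 300 ns" row
assert linear_bucket(799, 100, 200, 800) == 6   # "700 - 800 ns" row
assert linear_bucket(950, 100, 200, 800) == 7   # "800 - ... ns" row
```

This also makes the documented 22-bucket limit concrete: (max - min) / range plus the two open-ended buckets must not exceed the display width.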

- The 'perf ftrace profile' command has a --graph-opts option, like
'perf ftrace trace', so that it can control the tracing behavior in
the same way. For example, it can limit the function call depth or
set a duration threshold

perf script:

- Improve physical memory resolution in the 'mem-phys-addr' script by
parsing the /proc/iomem file

# perf script mem-phys-addr -- find /
...
Event: mem_inst_retired.all_loads:P
Memory type                                    count  percentage
----------------------------------------  ----------  ----------
100000000-85f7fffff : System RAM                 8929        69.7
547600000-54785d23f : Kernel data                1240         9.7
546a00000-5474bdfff : Kernel rodata               490         3.8
5480ce000-5485fffff : Kernel bss                  121         0.9
0-fff : Reserved                                 3860        30.1
100000-89c01fff : System RAM                       18         0.1
8a22c000-8df6efff : System RAM                      5         0.0
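The idea behind the /proc/iomem lookup can be sketched like this. This is a minimal model of the technique, not the mem-phys-addr script itself: each line maps a physical address range to a region name, and indented entries (such as "Kernel data") are nested inside a parent region, so the innermost match wins. Note that on a real system the addresses in /proc/iomem are only visible to privileged readers.

```python
def parse_iomem(text):
    """Parse /proc/iomem-style lines into (start, end, name) tuples."""
    regions = []
    for line in text.splitlines():
        rng, _, name = line.partition(" : ")
        start, _, end = rng.strip().partition("-")
        regions.append((int(start, 16), int(end, 16), name.strip()))
    return regions

def resolve(regions, phys_addr):
    """Return the name of the innermost region containing phys_addr."""
    match = None
    for start, end, name in regions:
        if start <= phys_addr <= end:
            match = name  # nested entries come later and override outer ones
    return match

# Hypothetical sample in /proc/iomem format (addresses made up):
sample = """\
00100000-89c01fff : System RAM
  54760000-5478ffff : Kernel data"""
regions = parse_iomem(sample)
print(resolve(regions, 0x54761000))   # → Kernel data
```

Counting samples per resolved region name then yields a table like the one above.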

Others:

- 'perf test' gets a --runs-per-test option to run the test cases
repeatedly. This is helpful to see whether a test is flaky

- Add a 'parse_events' method to the Python perf extension module, so
that users can use the same event parsing logic in Python code. One
more step towards implementing perf tools in Python. :)

- Support opening tracepoint events without libtraceevent. This is
helpful when the tracing data isn't used, as in 'perf stat'

- Update ARM Neoverse N2/V2 JSON events and metrics"

* tag 'perf-tools-for-v6.14-2025-01-21' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools: (176 commits)
perf test: Update event_groups test to use instructions
perf bench: Fix undefined behavior in cmpworker()
perf annotate: Prefer passing evsel to evsel->core.idx
perf lock: Rename fields in lock_type_table
perf lock: Add percpu-rwsem for type filter
perf lock: Fix parse_lock_type which only retrieve one lock flag
perf lock: Fix return code for functions in __cmd_contention
perf hist: Fix width calculation in hpp__fmt()
perf hist: Fix bogus profiles when filters are enabled
perf hist: Deduplicate cmp/sort/collapse code
perf test: Improve verbose documentation
perf test: Add a runs-per-test flag
perf test: Fix parallel/sequential option documentation
perf test: Send list output to stdout rather than stderr
perf test: Rename functions and variables for better clarity
perf tools: Expose quiet/verbose variables in Makefile.perf
perf config: Add a function to set one variable in .perfconfig
perf test perftool_testsuite: Return correct value for skipping
perf test perftool_testsuite: Add missing description
perf test record+probe_libc_inet_pton: Make test resilient
...

+11044 -2977
+24
Documentation/ABI/testing/sysfs-bus-event_source-devices
···
+What:		/sys/bus/event_source/devices/<pmu>
+Date:		2014/02/24
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:	Performance Monitoring Unit (<pmu>)
+
+		Each <pmu> directory, for a PMU device, is a name
+		optionally followed by an underscore and then either a
+		decimal or hexadecimal number. For example, cpu is a
+		PMU name without a suffix as is intel_bts,
+		uncore_imc_0 is a PMU name with a 0 numeric suffix,
+		ddr_pmu_87e1b0000000 is a PMU name with a hex
+		suffix. The hex suffix must be more than two
+		characters long to avoid ambiguity with PMUs like the
+		S390 cpum_cf.
+
+		Tools can treat PMUs with the same name that differ by
+		suffix as instances of the same PMU for the sake of,
+		for example, opening an event. For example, the PMUs
+		uncore_imc_free_running_0 and
+		uncore_imc_free_running_1 have an event data_read;
+		opening the data_read event on a PMU specified as
+		uncore_imc_free_running should be treated as opening
+		the data_read event on PMU uncore_imc_free_running_0
+		and PMU uncore_imc_free_running_1.
+6 -4
Documentation/ABI/testing/sysfs-bus-event_source-devices-events
···
 		performance monitoring event supported by the <pmu>. The name
 		of the file is the name of the event.

-		As performance monitoring event names are case
-		insensitive in the perf tool, the perf tool only looks
-		for lower or upper case event names in sysfs to avoid
+		As performance monitoring event names are case insensitive
+		in the perf tool, the perf tool only looks for all lower
+		case or all upper case event names in sysfs to avoid
 		scanning the directory. It is therefore required the
-		name of the event here is either lower or upper case.
+		name of the event here is either completely lower or upper
+		case, with no mixed-case characters. Numbers, '.', '_', and
+		'-' are also allowed.

 		File contents:
+1 -1
Documentation/admin-guide/workload-tracing.rst
···
 the necessary tools::

   sudo apt-get build-essentials flex bison yacc
-  sudo apt install libelf-dev systemtap-sdt-dev libaudit-dev libslang2-dev libperl-dev libdw-dev
+  sudo apt install libelf-dev systemtap-sdt-dev libslang2-dev libperl-dev libdw-dev

 cscope is a good tool to browse kernel sources. Let's install it now::
+7
tools/bpf/bpftool/Makefile
···
 FEATURE_TESTS += libbfd-liberty-z
 FEATURE_TESTS += disassembler-four-args
 FEATURE_TESTS += disassembler-init-styled
+FEATURE_TESTS += libelf-zstd

 FEATURE_DISPLAY := clang-bpf-co-re
 FEATURE_DISPLAY += llvm
···

 LIBS = $(LIBBPF) -lelf -lz
 LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz
+
+ifeq ($(feature-libelf-zstd),1)
+LIBS += -lzstd
+LIBS_BOOTSTRAP += -lzstd
+endif
+
 ifeq ($(feature-libcap), 1)
 CFLAGS += -DUSE_LIBCAP
 LIBS += -lcap
+2
tools/build/Build.include
···
 comma := ,
 squote := '
 pound := \#
+empty :=
+space := $(empty) $(empty)

 ###
 # Name of target with a '.' as filename prefix. foo/bar.o => foo/.bar.o
-20
tools/build/Makefile.build
···
 PHONY := __build
 __build:

-ifeq ($(V),1)
-  quiet =
-  Q =
-else
-  quiet=quiet_
-  Q=@
-endif
-
-# If the user is running make -s (silent mode), suppress echoing of commands
-# make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS.
-ifeq ($(filter 3.%,$(MAKE_VERSION)),)
-short-opts := $(firstword -$(MAKEFLAGS))
-else
-short-opts := $(filter-out --%,$(MAKEFLAGS))
-endif
-
-ifneq ($(findstring s,$(short-opts)),)
-  quiet=silent_
-endif
-
 build-dir := $(srctree)/tools/build

 # Define $(fixdep) for dep-cmd function
+36 -10
tools/build/Makefile.feature
···
 # the rule that uses them - an example for that is the 'bionic'
 # feature check. ]
 #
+# These + the ones in FEATURE_TESTS_EXTRA are included in
+# tools/build/feature/test-all.c and we try to build it all together
+# then setting all those features to '1' meaning they are all enabled.
+#
+# There are things like fortify-source that will be set to 1 because test-all
+# is built with the flags needed to test if its enabled, resulting in
+#
+#   $ rm -rf /tmp/b ; mkdir /tmp/b ; make -C tools/perf O=/tmp/b feature-dump
+#   $ grep fortify-source /tmp/b/FEATURE-DUMP
+#   feature-fortify-source=1
+#   $
+#
+# All the others should have lines in tools/build/feature/test-all.c like:
+#
+#  #define main main_test_disassembler_init_styled
+#  # include "test-disassembler-init-styled.c"
+#  #undef main
+#
+#  #define main main_test_libzstd
+#  # include "test-libzstd.c"
+#  #undef main
+#
+#  int main(int argc, char *argv[])
+#  {
+#	main_test_disassembler_four_args();
+#	main_test_libzstd();
+#	return 0;
+#  }
+#
+# If the sample above works, then we end up with these lines in the FEATURE-DUMP
+# file:
+#
+# feature-disassembler-four-args=1
+# feature-libzstd=1
+#
 FEATURE_TESTS_BASIC :=                  \
         backtrace                       \
         libdw                           \
···
         glibc                           \
         libbfd                          \
         libbfd-buildid                  \
-        libcap                          \
         libelf                          \
         libelf-getphdrnum               \
         libelf-gelf_getnote             \
         libelf-getshdrstrndx            \
+        libelf-zstd                     \
         libnuma                         \
         numa_num_possible_cpus          \
         libperl                         \
         libpython                       \
         libslang                        \
-        libslang-include-subdir         \
         libtraceevent                   \
         libtracefs                      \
         libcpupower                     \
···
         libbfd-liberty                  \
         libbfd-liberty-z                \
         libopencsd                      \
-        libunwind-x86                   \
-        libunwind-x86_64                \
-        libunwind-arm                   \
-        libunwind-aarch64               \
-        libunwind-debug-frame           \
-        libunwind-debug-frame-arm       \
-        libunwind-debug-frame-aarch64   \
         cxx                             \
         llvm                            \
         clang                           \
···
         glibc                           \
         libbfd                          \
         libbfd-buildid                  \
-        libcap                          \
         libelf                          \
         libnuma                         \
         numa_num_possible_cpus          \
+5 -5
tools/build/feature/Makefile
···
         test-gtk2.bin                   \
         test-gtk2-infobar.bin           \
         test-hello.bin                  \
-        test-libaudit.bin               \
         test-libbfd.bin                 \
         test-libbfd-buildid.bin         \
         test-disassembler-four-args.bin \
···
         test-libelf-getphdrnum.bin      \
         test-libelf-gelf_getnote.bin    \
         test-libelf-getshdrstrndx.bin   \
+        test-libelf-zstd.bin            \
         test-libdebuginfod.bin          \
         test-libnuma.bin                \
         test-numa_num_possible_cpus.bin \
···
 __BUILD = $(CC) $(CFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.c,$(@F)) $(LDFLAGS)
 BUILD = $(__BUILD) > $(@:.bin=.make.output) 2>&1
 BUILD_BFD = $(BUILD) -DPACKAGE='"perf"' -lbfd -ldl
-BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd -lcap
+BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd

 __BUILDXX = $(CXX) $(CXXFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.cpp,$(@F)) $(LDFLAGS)
 BUILDXX = $(__BUILDXX) > $(@:.bin=.make.output) 2>&1
···
 $(OUTPUT)test-libelf-getshdrstrndx.bin:
 	$(BUILD) -lelf

+$(OUTPUT)test-libelf-zstd.bin:
+	$(BUILD) -lelf -lz -lzstd
+
 $(OUTPUT)test-libdebuginfod.bin:
 	$(BUILD) -ldebuginfod
···
 $(OUTPUT)test-libunwind-debug-frame-aarch64.bin:
 	$(BUILD) -lelf -llzma -lunwind-aarch64
-
-$(OUTPUT)test-libaudit.bin:
-	$(BUILD) -laudit

 $(OUTPUT)test-libslang.bin:
 	$(BUILD) -lslang
+12 -3
tools/build/feature/test-all.c
···
 # include "test-libelf-getshdrstrndx.c"
 #undef main

-#define main main_test_libunwind
-# include "test-libunwind.c"
+#define main main_test_libelf_zstd
+# include "test-libelf-zstd.c"
 #undef main

 #define main main_test_libslang
···
 # include "test-libzstd.c"
 #undef main

+#define main main_test_libtraceevent
+# include "test-libtraceevent.c"
+#undef main
+
+#define main main_test_libtracefs
+# include "test-libtracefs.c"
+#undef main
+
 int main(int argc, char *argv[])
 {
 	main_test_libpython();
···
 	main_test_libelf_getphdrnum();
 	main_test_libelf_gelf_getnote();
 	main_test_libelf_getshdrstrndx();
-	main_test_libunwind();
 	main_test_libslang();
 	main_test_libbfd();
 	main_test_libbfd_buildid();
···
 	main_test_reallocarray();
 	main_test_disassembler_four_args();
 	main_test_libzstd();
+	main_test_libtraceevent();
+	main_test_libtracefs();

 	return 0;
 }
-11
tools/build/feature/test-libaudit.c
···
-// SPDX-License-Identifier: GPL-2.0
-#include <libaudit.h>
-
-extern int printf(const char *format, ...);
-
-int main(void)
-{
-	printf("error message: %s\n", audit_errno_to_name(0));
-
-	return audit_open();
-}
+9
tools/build/feature/test-libelf-zstd.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include <stddef.h>
+#include <libelf.h>
+
+int main(void)
+{
+	elf_compress(NULL, ELFCOMPRESS_ZSTD, 0);
+	return 0;
+}
+3 -3
tools/lib/api/fs/fs.c
···
 	int fd = open(filename, O_RDONLY), err = -1;

 	if (fd < 0)
-		return -1;
+		return -errno;

 	if (read(fd, line, sizeof(line)) > 0) {
 		*value = atoi(line);
···
 	int fd = open(filename, O_RDONLY), err = -1;

 	if (fd < 0)
-		return -1;
+		return -errno;

 	if (read(fd, line, sizeof(line)) > 0) {
 		*value = strtoull(line, NULL, base);
···
 	char buf[64];

 	if (fd < 0)
-		return err;
+		return -errno;

 	sprintf(buf, "%d", value);
 	if (write(fd, buf, sizeof(buf)) == sizeof(buf))
-1
tools/lib/perf/Documentation/libperf.txt
···

   struct perf_cpu_map *perf_cpu_map__new_any_cpu(void);
   struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list);
-  struct perf_cpu_map *perf_cpu_map__read(FILE *file);
   struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map);
   struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
                                            struct perf_cpu_map *other);
+42 -89
tools/lib/perf/cpumap.c
···
 // SPDX-License-Identifier: GPL-2.0-only
+#include <errno.h>
 #include <perf/cpumap.h>
 #include <stdlib.h>
 #include <linux/refcount.h>
···
 #include <ctype.h>
 #include <limits.h>
 #include "internal.h"
+#include <api/fs/fs.h>
+
+#define MAX_NR_CPUS 4096

 void perf_cpu_map__set_nr(struct perf_cpu_map *map, int nr_cpus)
 {
···
 static struct perf_cpu_map *cpu_map__new_sysfs_online(void)
 {
 	struct perf_cpu_map *cpus = NULL;
-	FILE *onlnf;
+	char *buf = NULL;
+	size_t buf_len;

-	onlnf = fopen("/sys/devices/system/cpu/online", "r");
-	if (onlnf) {
-		cpus = perf_cpu_map__read(onlnf);
-		fclose(onlnf);
+	if (sysfs__read_str("devices/system/cpu/online", &buf, &buf_len) >= 0) {
+		cpus = perf_cpu_map__new(buf);
+		free(buf);
 	}
 	return cpus;
 }
···
 	return cpus;
 }

-struct perf_cpu_map *perf_cpu_map__read(FILE *file)
-{
-	struct perf_cpu_map *cpus = NULL;
-	int nr_cpus = 0;
-	struct perf_cpu *tmp_cpus = NULL, *tmp;
-	int max_entries = 0;
-	int n, cpu, prev;
-	char sep;
-
-	sep = 0;
-	prev = -1;
-	for (;;) {
-		n = fscanf(file, "%u%c", &cpu, &sep);
-		if (n <= 0)
-			break;
-		if (prev >= 0) {
-			int new_max = nr_cpus + cpu - prev - 1;
-
-			WARN_ONCE(new_max >= MAX_NR_CPUS, "Perf can support %d CPUs. "
-				  "Consider raising MAX_NR_CPUS\n", MAX_NR_CPUS);
-
-			if (new_max >= max_entries) {
-				max_entries = new_max + MAX_NR_CPUS / 2;
-				tmp = realloc(tmp_cpus, max_entries * sizeof(struct perf_cpu));
-				if (tmp == NULL)
-					goto out_free_tmp;
-				tmp_cpus = tmp;
-			}
-
-			while (++prev < cpu)
-				tmp_cpus[nr_cpus++].cpu = prev;
-		}
-		if (nr_cpus == max_entries) {
-			max_entries += MAX_NR_CPUS;
-			tmp = realloc(tmp_cpus, max_entries * sizeof(struct perf_cpu));
-			if (tmp == NULL)
-				goto out_free_tmp;
-			tmp_cpus = tmp;
-		}
-
-		tmp_cpus[nr_cpus++].cpu = cpu;
-		if (n == 2 && sep == '-')
-			prev = cpu;
-		else
-			prev = -1;
-		if (n == 1 || sep == '\n')
-			break;
-	}
-
-	if (nr_cpus > 0)
-		cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
-out_free_tmp:
-	free(tmp_cpus);
-	return cpus;
-}
-
 struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
 {
 	struct perf_cpu_map *cpus = NULL;
···
 		p = NULL;
 		start_cpu = strtoul(cpu_list, &p, 0);
 		if (start_cpu >= INT_MAX
-		    || (*p != '\0' && *p != ',' && *p != '-'))
+		    || (*p != '\0' && *p != ',' && *p != '-' && *p != '\n'))
 			goto invalid;

 		if (*p == '-') {
···
 			p = NULL;
 			end_cpu = strtoul(cpu_list, &p, 0);

-			if (end_cpu >= INT_MAX || (*p != '\0' && *p != ','))
+			if (end_cpu >= INT_MAX || (*p != '\0' && *p != ',' && *p != '\n'))
 				goto invalid;

 			if (end_cpu < start_cpu)
···
 				goto invalid;

 			if (nr_cpus == max_entries) {
-				max_entries += MAX_NR_CPUS;
+				max_entries += max(end_cpu - start_cpu + 1, 16UL);
 				tmp = realloc(tmp_cpus, max_entries * sizeof(struct perf_cpu));
 				if (tmp == NULL)
 					goto invalid;
···
 		cpu_list = p;
 	}

-	if (nr_cpus > 0)
+	if (nr_cpus > 0) {
 		cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
-	else if (*cpu_list != '\0') {
+	} else if (*cpu_list != '\0') {
 		pr_warning("Unexpected characters at end of cpu list ('%s'), using online CPUs.",
 			   cpu_list);
 		cpus = perf_cpu_map__new_online_cpus();
-	} else
+	} else {
 		cpus = perf_cpu_map__new_any_cpu();
+	}
 invalid:
 	free(tmp_cpus);
 out:
···
 }

 /*
- * Merge two cpumaps
+ * Merge two cpumaps.
  *
- * orig either gets freed and replaced with a new map, or reused
- * with no reference count change (similar to "realloc")
- * other has its reference count increased.
+ * If 'other' is subset of '*orig', '*orig' keeps itself with no reference count
+ * change (similar to "realloc").
+ *
+ * If '*orig' is subset of 'other', '*orig' reuses 'other' with its reference
+ * count increased.
+ *
+ * Otherwise, '*orig' gets freed and replaced with a new map.
  */
-
-struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
-					 struct perf_cpu_map *other)
+int perf_cpu_map__merge(struct perf_cpu_map **orig, struct perf_cpu_map *other)
 {
 	struct perf_cpu *tmp_cpus;
 	int tmp_len;
 	int i, j, k;
 	struct perf_cpu_map *merged;

-	if (perf_cpu_map__is_subset(orig, other))
-		return orig;
-	if (perf_cpu_map__is_subset(other, orig)) {
-		perf_cpu_map__put(orig);
-		return perf_cpu_map__get(other);
+	if (perf_cpu_map__is_subset(*orig, other))
+		return 0;
+	if (perf_cpu_map__is_subset(other, *orig)) {
+		perf_cpu_map__put(*orig);
+		*orig = perf_cpu_map__get(other);
+		return 0;
 	}

-	tmp_len = __perf_cpu_map__nr(orig) + __perf_cpu_map__nr(other);
+	tmp_len = __perf_cpu_map__nr(*orig) + __perf_cpu_map__nr(other);
 	tmp_cpus = malloc(tmp_len * sizeof(struct perf_cpu));
 	if (!tmp_cpus)
-		return NULL;
+		return -ENOMEM;

 	/* Standard merge algorithm from wikipedia */
 	i = j = k = 0;
-	while (i < __perf_cpu_map__nr(orig) && j < __perf_cpu_map__nr(other)) {
-		if (__perf_cpu_map__cpu(orig, i).cpu <= __perf_cpu_map__cpu(other, j).cpu) {
-			if (__perf_cpu_map__cpu(orig, i).cpu == __perf_cpu_map__cpu(other, j).cpu)
+	while (i < __perf_cpu_map__nr(*orig) && j < __perf_cpu_map__nr(other)) {
+		if (__perf_cpu_map__cpu(*orig, i).cpu <= __perf_cpu_map__cpu(other, j).cpu) {
+			if (__perf_cpu_map__cpu(*orig, i).cpu == __perf_cpu_map__cpu(other, j).cpu)
 				j++;
-			tmp_cpus[k++] = __perf_cpu_map__cpu(orig, i++);
+			tmp_cpus[k++] = __perf_cpu_map__cpu(*orig, i++);
 		} else
 			tmp_cpus[k++] = __perf_cpu_map__cpu(other, j++);
 	}

-	while (i < __perf_cpu_map__nr(orig))
-		tmp_cpus[k++] = __perf_cpu_map__cpu(orig, i++);
+	while (i < __perf_cpu_map__nr(*orig))
+		tmp_cpus[k++] = __perf_cpu_map__cpu(*orig, i++);

 	while (j < __perf_cpu_map__nr(other))
 		tmp_cpus[k++] = __perf_cpu_map__cpu(other, j++);
···
 	merged = cpu_map__trim_new(k, tmp_cpus);
 	free(tmp_cpus);
-	perf_cpu_map__put(orig);
-	return merged;
+	perf_cpu_map__put(*orig);
+	*orig = merged;
+	return 0;
 }

 struct perf_cpu_map *perf_cpu_map__intersect(struct perf_cpu_map *orig,
+1 -1
tools/lib/perf/evlist.c
···
 		evsel->threads = perf_thread_map__get(evlist->threads);
 	}

-	evlist->all_cpus = perf_cpu_map__merge(evlist->all_cpus, evsel->cpus);
+	perf_cpu_map__merge(&evlist->all_cpus, evsel->cpus);
 }

 static void perf_evlist__propagate_maps(struct perf_evlist *evlist)
-4
tools/lib/perf/include/internal/cpumap.h
···
 	struct perf_cpu map[];
 };

-#ifndef MAX_NR_CPUS
-#define MAX_NR_CPUS	2048
-#endif
-
 struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus);
 int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu);
 bool perf_cpu_map__is_subset(const struct perf_cpu_map *a, const struct perf_cpu_map *b);
+2 -4
tools/lib/perf/include/perf/cpumap.h
···
 #define __LIBPERF_CPUMAP_H

 #include <perf/core.h>
-#include <stdio.h>
 #include <stdbool.h>

 /** A wrapper around a CPU to avoid confusion with the perf_cpu_map's map's indices. */
···
  * perf_cpu_map__new_online_cpus is returned.
  */
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list);
-LIBPERF_API struct perf_cpu_map *perf_cpu_map__read(FILE *file);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map);
-LIBPERF_API struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
-						     struct perf_cpu_map *other);
+LIBPERF_API int perf_cpu_map__merge(struct perf_cpu_map **orig,
+				    struct perf_cpu_map *other);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__intersect(struct perf_cpu_map *orig,
 							 struct perf_cpu_map *other);
 LIBPERF_API void perf_cpu_map__put(struct perf_cpu_map *map);
-1
tools/lib/perf/libperf.map
···
 		perf_cpu_map__get;
 		perf_cpu_map__put;
 		perf_cpu_map__new;
-		perf_cpu_map__read;
 		perf_cpu_map__nr;
 		perf_cpu_map__cpu;
 		perf_cpu_map__has_any_cpu_or_is_empty;
-2
tools/perf/Documentation/perf-check.txt
···
         dwarf_getlocations      /  HAVE_LIBDW_SUPPORT
         dwarf-unwind            /  HAVE_DWARF_UNWIND_SUPPORT
         auxtrace                /  HAVE_AUXTRACE_SUPPORT
-        libaudit                /  HAVE_LIBAUDIT_SUPPORT
         libbfd                  /  HAVE_LIBBFD_SUPPORT
         libcapstone             /  HAVE_LIBCAPSTONE_SUPPORT
         libcrypto               /  HAVE_LIBCRYPTO_SUPPORT
···
         libunwind               /  HAVE_LIBUNWIND_SUPPORT
         lzma                    /  HAVE_LZMA_SUPPORT
         numa_num_possible_cpus  /  HAVE_LIBNUMA_SUPPORT
-        syscall_table           /  HAVE_SYSCALL_TABLE_SUPPORT
         zlib                    /  HAVE_ZLIB_SUPPORT
         zstd                    /  HAVE_ZSTD_SUPPORT
+1 -1
tools/perf/Documentation/perf-config.txt
···
 The file '$(sysconfdir)/perfconfig' can be used to
 store a system-wide default configuration.

-One an disable reading config files by setting the PERF_CONFIG environment
+One can disable reading config files by setting the PERF_CONFIG environment
 variable to /dev/null, or provide an alternate config file by setting that
 variable.
+19
tools/perf/Documentation/perf-ftrace.txt
···
 --use-nsec::
 	Use nano-second instead of micro-second as a base unit of the histogram.

+--bucket-range=::
+	Bucket range in ms or ns (according to -n/--use-nsec), default is log2() mode.
+
+--min-latency=::
+	Minimum latency for the start of the first bucket, in ms or ns (according to
+	-n/--use-nsec).
+
+--max-latency=::
+	Maximum latency for the start of the last bucket, in ms or ns (according to
+	-n/--use-nsec). The setting is ignored if the value results in more than
+	22 buckets.

 OPTIONS for 'perf ftrace profile'
 ---------------------------------
···
 --sort=::
 	Sort the result by the given field. Available values are:
 	total, avg, max, count, name. Default is 'total'.
+
+--graph-opts::
+	List of options allowed to set:
+
+	- nosleep-time - Measure on-CPU time only for function_graph tracer.
+	- noirqs - Ignore functions that happen inside interrupt.
+	- thresh=<n> - Setup trace duration threshold in microseconds.
+	- depth=<n> - Set max depth for function graph tracer to follow.

 SEE ALSO
+376 -220
tools/perf/Documentation/perf-intel-pt.txt
···
 There are two ways that instructions-per-cycle (IPC) can be calculated depending
 on the recording.

-If the 'cyc' config term (see config terms section below) was used, then IPC
+If the 'cyc' config term (see <<_config_terms,config terms>> section below) was used, then IPC
 and cycle events are calculated using the cycle count from CYC packets, otherwise
 MTC packets are used - refer to the 'mtc' config term. When MTC is used, however,
 the values are less accurate because the timing is less accurate.
···

 	-e intel_pt/tsc=1,noretcomp=0/

-Note there are now new config terms - see section 'config terms' further below.
+Note there are other config terms - see section <<_config_terms,config terms>> further below.

 The config terms are listed in /sys/devices/intel_pt/format. They are bit
 fields within the config member of the struct perf_event_attr which is
···
 config terms
 ~~~~~~~~~~~~

-The June 2015 version of Intel 64 and IA-32 Architectures Software Developer
-Manuals, Chapter 36 Intel Processor Trace, defined new Intel PT features.
-Some of the features are reflect in new config terms. All the config terms are
-described below.
-
-tsc		Always supported. Produces TSC timestamp packets to provide
-		timing information. In some cases it is possible to decode
-		without timing information, for example a per-thread context
-		that does not overlap executable memory maps.
-
-		The default config selects tsc (i.e. tsc=1).
-
-noretcomp	Always supported. Disables "return compression" so a TIP packet
-		is produced when a function returns. Causes more packets to be
-		produced but might make decoding more reliable.
-
-		The default config does not select noretcomp (i.e. noretcomp=0).
-
-psb_period	Allows the frequency of PSB packets to be specified.
-
-		The PSB packet is a synchronization packet that provides a
-		starting point for decoding or recovery from errors.
-
-		Support for psb_period is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/psb_cyc
-
-		which contains "1" if the feature is supported and "0"
-		otherwise.
-
-		Valid values are given by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/psb_periods
-
-		which contains a hexadecimal value, the bits of which represent
-		valid values e.g. bit 2 set means value 2 is valid.
-
-		The psb_period value is converted to the approximate number of
-		trace bytes between PSB packets as:
-
-		2 ^ (value + 11)
-
-		e.g. value 3 means 16KiB bytes between PSBs
-
-		If an invalid value is entered, the error message
-		will give a list of valid values e.g.
-
-		$ perf record -e intel_pt/psb_period=15/u uname
-		Invalid psb_period for intel_pt. Valid values are: 0-5
-
-		If MTC packets are selected, the default config selects a value
-		of 3 (i.e. psb_period=3) or the nearest lower value that is
-		supported (0 is always supported). Otherwise the default is 0.
-
-		If decoding is expected to be reliable and the buffer is large
-		then a large PSB period can be used.
-
-		Because a TSC packet is produced with PSB, the PSB period can
-		also affect the granularity to timing information in the absence
-		of MTC or CYC.
-
-mtc		Produces MTC timing packets.
-
-		MTC packets provide finer grain timestamp information than TSC
-		packets. MTC packets record time using the hardware crystal
-		clock (CTC) which is related to TSC packets using a TMA packet.
-
-		Support for this feature is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/mtc
-
-		which contains "1" if the feature is supported and
-		"0" otherwise.
-
-		The frequency of MTC packets can also be specified - see
-		mtc_period below.
-
-mtc_period	Specifies how frequently MTC packets are produced - see mtc
-		above for how to determine if MTC packets are supported.
-
-		Valid values are given by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/mtc_periods
-
-		which contains a hexadecimal value, the bits of which represent
-		valid values e.g. bit 2 set means value 2 is valid.
-
-		The mtc_period value is converted to the MTC frequency as:
-
-		CTC-frequency / (2 ^ value)
-
-		e.g. value 3 means one eighth of CTC-frequency
-
-		Where CTC is the hardware crystal clock, the frequency of which
-		can be related to TSC via values provided in cpuid leaf 0x15.
-
-		If an invalid value is entered, the error message
-		will give a list of valid values e.g.
-
-		$ perf record -e intel_pt/mtc_period=15/u uname
-		Invalid mtc_period for intel_pt. Valid values are: 0,3,6,9
-
-		The default value is 3 or the nearest lower value
-		that is supported (0 is always supported).
-
-cyc		Produces CYC timing packets.
-
-		CYC packets provide even finer grain timestamp information than
-		MTC and TSC packets. A CYC packet contains the number of CPU
-		cycles since the last CYC packet. Unlike MTC and TSC packets,
-		CYC packets are only sent when another packet is also sent.
-
-		Support for this feature is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/psb_cyc
-
-		which contains "1" if the feature is supported and
-		"0" otherwise.
-
-		The number of CYC packets produced can be reduced by specifying
-		a threshold - see cyc_thresh below.
-
-cyc_thresh	Specifies how frequently CYC packets are produced - see cyc
-		above for how to determine if CYC packets are supported.
-
-		Valid cyc_thresh values are given by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/cycle_thresholds
-
-		which contains a hexadecimal value, the bits of which represent
-		valid values e.g. bit 2 set means value 2 is valid.
-
-		The cyc_thresh value represents the minimum number of CPU cycles
-		that must have passed before a CYC packet can be sent. The
-		number of CPU cycles is:
-
-		2 ^ (value - 1)
-
-		e.g. value 4 means 8 CPU cycles must pass before a CYC packet
-		can be sent. Note a CYC packet is still only sent when another
-		packet is sent, not at, e.g. every 8 CPU cycles.
-
-		If an invalid value is entered, the error message
-		will give a list of valid values e.g.
-
-		$ perf record -e intel_pt/cyc,cyc_thresh=15/u uname
-		Invalid cyc_thresh for intel_pt. Valid values are: 0-12
-
-		CYC packets are not requested by default.
-
-pt		Specifies pass-through which enables the 'branch' config term.
-
-		The default config selects 'pt' if it is available, so a user will
-		never need to specify this term.
-
-branch		Enable branch tracing. Branch tracing is enabled by default so to
-		disable branch tracing use 'branch=0'.
-
-		The default config selects 'branch' if it is available.
-
-ptw		Enable PTWRITE packets which are produced when a ptwrite instruction
-		is executed.
-
-		Support for this feature is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/ptwrite
-
-		which contains "1" if the feature is supported and
-		"0" otherwise.
-
-		As an alternative, refer to "Emulated PTWRITE" further below.
-
-fup_on_ptw	Enable a FUP packet to follow the PTWRITE packet. The FUP packet
-		provides the address of the ptwrite instruction. In the absence of
-		fup_on_ptw, the decoder will use the address of the previous branch
-		if branch tracing is enabled, otherwise the address will be zero.
-		Note that fup_on_ptw will work even when branch tracing is disabled.
-
-pwr_evt		Enable power events. The power events provide information about
-		changes to the CPU C-state.
-
-		Support for this feature is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/power_event_trace
-
-		which contains "1" if the feature is supported and
-		"0" otherwise.
-
-event		Enable Event Trace. The events provide information about asynchronous
-		events.
-
-		Support for this feature is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/event_trace
-
-		which contains "1" if the feature is supported and
-		"0" otherwise.
-
-notnt		Disable TNT packets. Without TNT packets, it is not possible to walk
-		executable code to reconstruct control flow, however FUP, TIP, TIP.PGE
-		and TIP.PGD packets still indicate asynchronous control flow, and (if
-		return compression is disabled - see noretcomp) return statements.
-		The advantage of eliminating TNT packets is reducing the size of the
-		trace and corresponding tracing overhead.
-
-		Support for this feature is indicated by:
-
-		/sys/bus/event_source/devices/intel_pt/caps/tnt_disable
-
-		which contains "1" if the feature is supported and
-		"0" otherwise.
-
+Config terms are parameters specified with the -e intel_pt// event option,
+for example:
+
+	-e intel_pt/cyc/
+
+which selects cycle accurate mode. Each config term can have a value which
+defaults to 1, so the above is the same as:
+
+	-e intel_pt/cyc=1/
+
+Some terms are set by default, so must be set to 0 to turn them off.
For 325 + example, to turn off branch tracing: 326 + 327 + -e intel_pt/branch=0/ 328 + 329 + Multiple config terms are separated by commas, for example: 330 + 331 + -e intel_pt/cyc,mtc_period=9/ 332 + 333 + There are also common config terms, see linkperf:perf-record[1] documentation. 334 + 335 + Intel PT config terms are described below. 336 + 337 + *tsc*:: 338 + Always supported. Produces TSC timestamp packets to provide 339 + timing information. In some cases it is possible to decode 340 + without timing information, for example a per-thread context 341 + that does not overlap executable memory maps. 342 + + 343 + The default config selects tsc (i.e. tsc=1). 344 + 345 + *noretcomp*:: 346 + Always supported. Disables "return compression" so a TIP packet 347 + is produced when a function returns. Causes more packets to be 348 + produced but might make decoding more reliable. 349 + + 350 + The default config does not select noretcomp (i.e. noretcomp=0). 351 + 352 + *psb_period*:: 353 + Allows the frequency of PSB packets to be specified. 354 + + 355 + The PSB packet is a synchronization packet that provides a 356 + starting point for decoding or recovery from errors. 357 + + 358 + Support for psb_period is indicated by: 359 + + 360 + /sys/bus/event_source/devices/intel_pt/caps/psb_cyc 361 + + 362 + which contains "1" if the feature is supported and "0" 363 + otherwise. 364 + + 365 + Valid values are given by: 366 + + 367 + /sys/bus/event_source/devices/intel_pt/caps/psb_periods 368 + + 369 + which contains a hexadecimal value, the bits of which represent 370 + valid values e.g. bit 2 set means value 2 is valid. 371 + + 372 + The psb_period value is converted to the approximate number of 373 + trace bytes between PSB packets as: 374 + + 375 + 2 ^ (value + 11) 376 + + 377 + e.g. value 3 means 16KiB bytes between PSBs 378 + + 379 + If an invalid value is entered, the error message 380 + will give a list of valid values e.g. 
+ +
+ $ perf record -e intel_pt/psb_period=15/u uname
+ Invalid psb_period for intel_pt. Valid values are: 0-5
+ +
+ If MTC packets are selected, the default config selects a value
+ of 3 (i.e. psb_period=3) or the nearest lower value that is
+ supported (0 is always supported). Otherwise the default is 0.
+ +
+ If decoding is expected to be reliable and the buffer is large
+ then a large PSB period can be used.
+ +
+ Because a TSC packet is produced with PSB, the PSB period can
+ also affect the granularity to timing information in the absence
+ of MTC or CYC.
+
+ *mtc*::
+ Produces MTC timing packets.
+ +
+ MTC packets provide finer grain timestamp information than TSC
+ packets. MTC packets record time using the hardware crystal
+ clock (CTC) which is related to TSC packets using a TMA packet.
+ +
+ Support for this feature is indicated by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/mtc
+ +
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+ +
+ The frequency of MTC packets can also be specified - see
+ mtc_period below.
+
+ *mtc_period*::
+ Specifies how frequently MTC packets are produced - see mtc
+ above for how to determine if MTC packets are supported.
+ +
+ Valid values are given by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/mtc_periods
+ +
+ which contains a hexadecimal value, the bits of which represent
+ valid values e.g. bit 2 set means value 2 is valid.
+ +
+ The mtc_period value is converted to the MTC frequency as:
+
+ CTC-frequency / (2 ^ value)
+ +
+ e.g. value 3 means one eighth of CTC-frequency
+ +
+ Where CTC is the hardware crystal clock, the frequency of which
+ can be related to TSC via values provided in cpuid leaf 0x15.
+ +
+ If an invalid value is entered, the error message
+ will give a list of valid values e.g.
+ +
+ $ perf record -e intel_pt/mtc_period=15/u uname
+ Invalid mtc_period for intel_pt. Valid values are: 0,3,6,9
+ +
+ The default value is 3 or the nearest lower value
+ that is supported (0 is always supported).
+
+ *cyc*::
+ Produces CYC timing packets.
+ +
+ CYC packets provide even finer grain timestamp information than
+ MTC and TSC packets. A CYC packet contains the number of CPU
+ cycles since the last CYC packet. Unlike MTC and TSC packets,
+ CYC packets are only sent when another packet is also sent.
+ +
+ Support for this feature is indicated by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/psb_cyc
+ +
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+ +
+ The number of CYC packets produced can be reduced by specifying
+ a threshold - see cyc_thresh below.
+
+ *cyc_thresh*::
+ Specifies how frequently CYC packets are produced - see cyc
+ above for how to determine if CYC packets are supported.
+ +
+ Valid cyc_thresh values are given by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/cycle_thresholds
+ +
+ which contains a hexadecimal value, the bits of which represent
+ valid values e.g. bit 2 set means value 2 is valid.
+ +
+ The cyc_thresh value represents the minimum number of CPU cycles
+ that must have passed before a CYC packet can be sent. The
+ number of CPU cycles is:
+ +
+ 2 ^ (value - 1)
+ +
+ e.g. value 4 means 8 CPU cycles must pass before a CYC packet
+ can be sent. Note a CYC packet is still only sent when another
+ packet is sent, not at, e.g. every 8 CPU cycles.
+ +
+ If an invalid value is entered, the error message
+ will give a list of valid values e.g.
+ +
+ $ perf record -e intel_pt/cyc,cyc_thresh=15/u uname
+ Invalid cyc_thresh for intel_pt. Valid values are: 0-12
+ +
+ CYC packets are not requested by default.
+
+ *pt*::
+ Specifies pass-through which enables the 'branch' config term.
+ +
+ The default config selects 'pt' if it is available, so a user will
+ never need to specify this term.
+
+ *branch*::
+ Enable branch tracing. Branch tracing is enabled by default so to
+ disable branch tracing use 'branch=0'.
+ +
+ The default config selects 'branch' if it is available.
+
+ *ptw*::
+ Enable PTWRITE packets which are produced when a ptwrite instruction
+ is executed.
+ +
+ Support for this feature is indicated by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/ptwrite
+ +
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+ +
+ As an alternative, refer to "Emulated PTWRITE" further below.
+
+ *fup_on_ptw*::
+ Enable a FUP packet to follow the PTWRITE packet. The FUP packet
+ provides the address of the ptwrite instruction. In the absence of
+ fup_on_ptw, the decoder will use the address of the previous branch
+ if branch tracing is enabled, otherwise the address will be zero.
+ Note that fup_on_ptw will work even when branch tracing is disabled.
+
+ *pwr_evt*::
+ Enable power events. The power events provide information about
+ changes to the CPU C-state.
+ +
+ Support for this feature is indicated by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/power_event_trace
+ +
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+
+ *event*::
+ Enable Event Trace. The events provide information about asynchronous
+ events.
+ +
+ Support for this feature is indicated by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/event_trace
+ +
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+
+ *notnt*::
+ Disable TNT packets. Without TNT packets, it is not possible to walk
+ executable code to reconstruct control flow, however FUP, TIP, TIP.PGE
+ and TIP.PGD packets still indicate asynchronous control flow, and (if
+ return compression is disabled - see noretcomp) return statements.
+ The advantage of eliminating TNT packets is reducing the size of the
+ trace and corresponding tracing overhead.
+ +
+ Support for this feature is indicated by:
+ +
+ /sys/bus/event_source/devices/intel_pt/caps/tnt_disable
+ +
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+
+ *aux-action=start-paused*::
+ Start tracing paused, refer to the section <<_pause_or_resume_tracing,Pause or Resume Tracing>>
+
+
+ config terms on other events
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Some Intel PT features work with other events, features such as AUX area sampling
+ and PEBS-via-PT. In those cases, the other events can have config terms below:
+
+ *aux-sample-size*::
+ Used to set the AUX area sample size, refer to the section
+ <<_aux_area_sampling_option,AUX area sampling option>>
+
+ *aux-output*::
+ Used to select PEBS-via-PT, refer to the
+ section <<_pebs_via_intel_pt,PEBS via Intel PT>>
+
+ *aux-action*::
+ Used to pause or resume tracing, refer to the section
+ <<_pause_or_resume_tracing,Pause or Resume Tracing>>
  
  AUX area sampling option
  ~~~~~~~~~~~~~~~~~~~~~~~~
···
  nor snapshot size is specified, then the default is 4MiB for privileged users
  (or if /proc/sys/kernel/perf_event_paranoid < 0), 128KiB for unprivileged users.
  If an unprivileged user does not specify mmap pages, the mmap pages will be
- reduced as described in the 'new auxtrace mmap size option' section below.
+ reduced as described in the <<_new_auxtrace_mmap_size_option,new auxtrace mmap size option>>
+ section below.
  
  The snapshot size is displayed if the option -vv is used e.g.
  
···
  
  Note that "instructions", "cycles", "branches" and "transactions" events
  depend on code flow packets which can be disabled by using the config term
- "branch=0". Refer to the config terms section above.
+ "branch=0". Refer to the <<_config_terms,config terms>> section above.
  
  "ptwrite" events record the payload of the ptwrite instruction and whether
  "fup_on_ptw" was used. "ptwrite" events depend on PTWRITE packets which are
- recorded only if the "ptw" config term was used. Refer to the config terms
+ recorded only if the "ptw" config term was used. Refer to the <<_config_terms,config terms>>
  section above. perf script "synth" field displays "ptwrite" information like
  this: "ip: 0 payload: 0x123456789abcdef0" where "ip" is 1 if "fup_on_ptw" was
  used.
···
  "Power" events correspond to power event packets and CBR (core-to-bus ratio)
  packets. While CBR packets are always recorded when tracing is enabled, power
  event packets are recorded only if the "pwr_evt" config term was used. Refer to
- the config terms section above. The power events record information about
+ the <<_config_terms,config terms>> section above. The power events record information about
  C-state changes, whereas CBR is indicative of CPU frequency. perf script
  "event,synth" fields display information like this:
  
···
  - asynchronous branches such as interrupts
  - indirect branches
  - function return target address *if* the noretcomp config term (refer
- config terms section) was used
+ <<_config_terms,config terms>> section) was used
  - start of (control-flow) tracing
  - end of (control-flow) tracing, if it is not out of context
  - power events, ptwrite, transaction start and abort
···
  less detail. The decoder decodes only extended PSB (PSB+) packets, getting the
  instruction pointer if there is a FUP packet within PSB+ (i.e. between PSB and
  PSBEND). Note PSB packets occur regularly in the trace based on the psb_period
- config term (refer config terms section). There will be a FUP packet if the
+ config term (refer <<_config_terms,config terms>> section). There will be a FUP packet if the
  PSB+ occurs while control flow is being traced.
  
  What will *not* be decoded with the qq option:
···
  
  For pipe mode, the order of events and timestamps can presumably
  be messed up.
+
+
+ Pause or Resume Tracing
+ -----------------------
+
+ With newer Kernels, it is possible to use other selected events to pause
+ or resume Intel PT tracing. This is configured by using the "aux-action"
+ config term:
+
+ "aux-action=pause" is used with events that are to pause Intel PT tracing.
+
+ "aux-action=resume" is used with events that are to resume Intel PT tracing.
+
+ "aux-action=start-paused" is used with the Intel PT event to start in a
+ paused state.
+
+ For example, to trace only the uname system call (sys_newuname) when running the
+ command line utility uname:
+
+ $ perf record --kcore -e intel_pt/aux-action=start-paused/k,syscalls:sys_enter_newuname/aux-action=resume/,syscalls:sys_exit_newuname/aux-action=pause/ uname
+ Linux
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 0.043 MB perf.data ]
+ $ perf script --call-trace
+ uname 30805 [000] 24001.058782799: name: 0x7ffc9c1865b0
+ uname 30805 [000] 24001.058784424: psb offs: 0
+ uname 30805 [000] 24001.058784424: cbr: 39 freq: 3904 MHz (139%)
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) __x64_sys_newuname
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) down_read
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) __cond_resched
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) preempt_count_add
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) in_lock_functions
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) preempt_count_sub
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) up_read
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) preempt_count_add
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) in_lock_functions
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) preempt_count_sub
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) _copy_to_user
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) syscall_exit_to_user_mode
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) syscall_exit_work
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) perf_syscall_exit
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_trace_buf_alloc
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_swevent_get_recursion_context
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_tp_event
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_trace_buf_update
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) tracing_gen_ctx_irq_test
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_swevent_event
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __perf_event_account_interrupt
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __this_cpu_preempt_check
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_event_output_forward
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_event_aux_pause
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) ring_buffer_get
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __rcu_read_lock
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __rcu_read_unlock
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) pt_event_stop
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) native_write_msr
+ uname 30805 [000] 24001.058785463: ([kernel.kallsyms]) native_write_msr
+ uname 30805 [000] 24001.058785639: 0x0
+
+ The example above uses tracepoints, but any kind of sampled event can be used.
+
+ For example:
+
+ Tracing between arch_cpu_idle_enter() and arch_cpu_idle_exit() using breakpoint events:
+
+ $ sudo cat /proc/kallsyms | sort | grep ' arch_cpu_idle_enter\| arch_cpu_idle_exit'
+ ffffffffb605bf60 T arch_cpu_idle_enter
+ ffffffffb614d8a0 W arch_cpu_idle_exit
+ $ sudo perf record --kcore -a -e intel_pt/aux-action=start-paused/k -e mem:0xffffffffb605bf60:x/aux-action=resume/ -e mem:0xffffffffb614d8a0:x/aux-action=pause/ -- sleep 1
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 1.387 MB perf.data ]
+
+ Tracing __alloc_pages() using kprobes:
+
+ $ sudo perf probe --add '__alloc_pages order'
+ Added new event: probe:__alloc_pages (on __alloc_pages with order)
+ $ sudo perf probe --add __alloc_pages%return
+ Added new event: probe:__alloc_pages__return (on __alloc_pages%return)
+ $ sudo perf record --kcore -aR -e intel_pt/aux-action=start-paused/k -e probe:__alloc_pages/aux-action=resume/ -e probe:__alloc_pages__return/aux-action=pause/ -- sleep 1
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 1.490 MB perf.data ]
+
+ Tracing starting at main() using a uprobe event:
+
+ $ sudo perf probe -x /usr/bin/uname main
+ Added new event: probe_uname:main (on main in /usr/bin/uname)
+ $ sudo perf record -e intel_pt/-aux-action=start-paused/u -e probe_uname:main/aux-action=resume/ -- uname
+ Linux
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 0.031 MB perf.data ]
+
+ Tracing occasionally using cycles events with different periods:
+
+ $ perf record --kcore -a -m,64M -e intel_pt/aux-action=start-paused/k -e cycles/aux-action=pause,period=1000000/Pk -e cycles/aux-action=resume,period=10500000/Pk -- firefox
+ [ perf record: Woken up 19 times to write data ]
+ [ perf record: Captured and wrote 16.561 MB perf.data ]
  
  
  EXAMPLE
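As a quick aside (not part of the patch above), the three numeric conversions that the perf-intel-pt.txt text describes, and the "bit N set means value N is valid" caps bitmask convention, can be sketched in a few lines. The functions and the sample caps value 0x249 are illustrative assumptions, not code from the kernel tree; on a real system the hexadecimal values would be read from the /sys/bus/event_source/devices/intel_pt/caps/ files mentioned in the documentation.

```python
def valid_values(caps_hex: str) -> list[int]:
    """Decode a caps bitmask: bit N set means value N is valid."""
    mask = int(caps_hex, 16)
    return [n for n in range(mask.bit_length()) if mask & (1 << n)]

def psb_period_bytes(value: int) -> int:
    """Approximate trace bytes between PSB packets: 2 ^ (value + 11)."""
    return 2 ** (value + 11)

def mtc_freq(ctc_freq_hz: float, value: int) -> float:
    """MTC packet frequency: CTC-frequency / (2 ^ value)."""
    return ctc_freq_hz / (2 ** value)

def cyc_thresh_cycles(value: int) -> int:
    """Minimum CPU cycles before another CYC packet can be sent: 2 ^ (value - 1)."""
    return 2 ** (value - 1)

# psb_period=3 -> 16KiB between PSBs, as the text says
assert psb_period_bytes(3) == 16 * 1024
# mtc_period=3 -> one eighth of the CTC frequency
assert mtc_freq(24e6, 3) == 3e6
# cyc_thresh=4 -> 8 CPU cycles
assert cyc_thresh_cycles(4) == 8
# a hypothetical caps value of 0x249 has bits 0, 3, 6 and 9 set,
# matching the "Valid values are: 0,3,6,9" error message format
assert valid_values("249") == [0, 3, 6, 9]
```

This also shows why the mtc_period error message in the patch lists "0,3,6,9" rather than a contiguous range: the valid values come from a sparse bitmask, not an interval.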
+2 -2
tools/perf/Documentation/perf-lock.txt
···
  Show lock contention only for given lock types (comma separated list).
  Available values are:
  semaphore, spinlock, rwlock, rwlock:R, rwlock:W, rwsem, rwsem:R, rwsem:W,
- rtmutex, rwlock-rt, rwlock-rt:R, rwlock-rt:W, pcpu-sem, pcpu-sem:R, pcpu-sem:W,
- mutex
+ rtmutex, rwlock-rt, rwlock-rt:R, rwlock-rt:W, percpu-rwmem, pcpu-sem,
+ pcpu-sem:R, pcpu-sem:W, mutex
  
  Note that RW-variant of locks have :R and :W suffix. Names without the
  suffix are shortcuts for the both variants. Ex) rwsem = rwsem:R + rwsem:W.
+4
tools/perf/Documentation/perf-record.txt
···
  like this: name=\'CPU_CLK_UNHALTED.THREAD:cmask=0x1\'.
  - 'aux-output': Generate AUX records instead of events. This requires
  that an AUX area event is also provided.
+ - 'aux-action': "pause" or "resume" to pause or resume an AUX
+ area event (the group leader) when this event occurs.
+ "start-paused" on an AUX area event itself, will
+ start in a paused state.
  - 'aux-sample-size': Set sample size for AUX area sampling. If the
  '--aux-sample' option has been used, set aux-sample-size=0 to disable
  AUX area sampling for the event.
+11 -7
tools/perf/Documentation/perf-test.txt
···
  Tests to skip (comma separated numeric list).
  
  -v::
+ -vv::
+ -vvv::
  --verbose::
- Be more verbose.
+ With a single '-v', verbose level 1, only failing test output
+ is displayed. With '-vv' and higher all test output is shown.
  
  -S::
  --sequential::
- Run tests one after the other, this is the default mode.
+ Run all tests one after the other. By default "exclusive"
+ tests are run sequentially, but other tests are run in
+ parallel to speed execution.
  
- -p::
- --parallel::
- Run tests in parallel, speeds up the whole process but is not safe with
- the current infrastructure, where some tests that compete for some resources,
- for instance, 'perf probe' tests that add/remove probes or clean all probes, etc.
+ -r::
+ --runs-per-test::
+ Run each test the given number of times, by default once. This
+ option can be useful to determine if a test is flaky.
  
  -F::
  --dont-fork::
+5
tools/perf/Documentation/perf-trace.txt
···
  printing using the existing 'perf trace' syscall arg beautifiers to map integer
  arguments to strings (pid to comm, syscall id to syscall name, etc).
  
+ --force-btf::
+ Use btf_dump to pretty print syscall argument data, instead of using hand-crafted pretty
+ printers. This option is intended for testing BTF integration in perf trace. btf_dump-based
+ pretty-printing serves as a fallback to hand-crafted pretty printers, as the latter can
+ better pretty-print integer flags and struct pointers.
  
  PAGEFAULTS
  ----------
+3
tools/perf/MANIFEST
+ COPYING
+ LICENSES/preferred/GPL-2.0
  arch/arm64/tools/gen-sysreg.awk
  arch/arm64/tools/sysreg
+ arch/*/include/uapi/asm/bpf_perf_event.h
  tools/perf
  tools/arch
  tools/scripts
+59 -73
tools/perf/Makefile.config
···
  
  $(call detected_var,SRCARCH)
  
- ifneq ($(NO_SYSCALL_TABLE),1)
- NO_SYSCALL_TABLE := 1
-
- ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 powerpc arm64 s390 mips loongarch riscv))
- NO_SYSCALL_TABLE := 0
- endif
-
- ifneq ($(NO_SYSCALL_TABLE),1)
- CFLAGS += -DHAVE_SYSCALL_TABLE_SUPPORT
- endif
- endif
+ CFLAGS += -I$(OUTPUT)arch/$(SRCARCH)/include/generated
  
  # Additional ARCH settings for ppc
  ifeq ($(SRCARCH),powerpc)
- CFLAGS += -I$(OUTPUT)arch/powerpc/include/generated
- LIBUNWIND_LIBS := -lunwind -lunwind-ppc64
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS := -lunwind -lunwind-ppc64
+ endif
  endif
  
  # Additional ARCH settings for x86
  ifeq ($(SRCARCH),x86)
  $(call detected,CONFIG_X86)
- CFLAGS += -I$(OUTPUT)arch/x86/include/generated
  ifeq (${IS_64_BIT}, 1)
  CFLAGS += -DHAVE_ARCH_X86_64_SUPPORT
  ARCH_INCLUDE = ../../arch/x86/lib/memcpy_64.S ../../arch/x86/lib/memset_64.S
- LIBUNWIND_LIBS = -lunwind-x86_64 -lunwind -llzma
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind-x86_64 -lunwind -llzma
+ endif
  $(call detected,CONFIG_X86_64)
  else
- LIBUNWIND_LIBS = -lunwind-x86 -llzma -lunwind
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind-x86 -llzma -lunwind
+ endif
  endif
  endif
  
  ifeq ($(SRCARCH),arm)
- LIBUNWIND_LIBS = -lunwind -lunwind-arm
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-arm
+ endif
  endif
  
  ifeq ($(SRCARCH),arm64)
- CFLAGS += -I$(OUTPUT)arch/arm64/include/generated
- LIBUNWIND_LIBS = -lunwind -lunwind-aarch64
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-aarch64
+ endif
  endif
  
  ifeq ($(SRCARCH),loongarch)
- CFLAGS += -I$(OUTPUT)arch/loongarch/include/generated
- LIBUNWIND_LIBS = -lunwind -lunwind-loongarch64
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-loongarch64
+ endif
  endif
  
  ifeq ($(ARCH),s390)
- CFLAGS += -fPIC -I$(OUTPUT)arch/s390/include/generated
+ CFLAGS += -fPIC
  endif
  
  ifeq ($(ARCH),mips)
- CFLAGS += -I$(OUTPUT)arch/mips/include/generated
- LIBUNWIND_LIBS = -lunwind -lunwind-mips
- endif
-
- ifeq ($(ARCH),riscv)
- CFLAGS += -I$(OUTPUT)arch/riscv/include/generated
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-mips
+ endif
  endif
  
  # So far there's only x86 and arm libdw unwind support merged in perf.
···
  $(foreach libunwind_arch,$(LIBUNWIND_ARCHS),$(call libunwind_arch_set_flags,$(libunwind_arch)))
  endif
  
- # Set per-feature check compilation flags
- FEATURE_CHECK_CFLAGS-libunwind = $(LIBUNWIND_CFLAGS)
- FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
- FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS)
- FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
-
- FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm
- FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64
- FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86
- FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64
+ ifndef NO_LIBUNWIND
+ # Set per-feature check compilation flags
+ FEATURE_CHECK_CFLAGS-libunwind = $(LIBUNWIND_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+ FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+
+ FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm
+ FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64
+ FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86
+ FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64
+ endif
  
  FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto
  
···
  endif
  DWARFLIBS := -ldw
  ifeq ($(findstring -static,${LDFLAGS}),-static)
- DWARFLIBS += -lelf -lz -llzma -lbz2 -lzstd
+ DWARFLIBS += -lelf -lz -llzma -lbz2
  
  LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw).0.0
  LIBDW_VERSION_1 := $(word 1, $(subst ., ,$(LIBDW_VERSION)))
···
  CFLAGS += -DHAVE_ELF_GETSHDRSTRNDX_SUPPORT
  endif
  
+ ifeq ($(feature-libelf-zstd), 1)
+ ifdef NO_LIBZSTD
+ $(error Error: libzstd is required by libelf, please do not set NO_LIBZSTD)
+ endif
+ endif
+
  ifndef NO_LIBDEBUGINFOD
  $(call feature_check,libdebuginfod)
  ifeq ($(feature-libdebuginfod), 1)
···
  $(call detected,CONFIG_DWARF_UNWIND)
  endif
  
- ifndef NO_LOCAL_LIBUNWIND
- ifeq ($(SRCARCH),$(filter $(SRCARCH),arm arm64))
- $(call feature_check,libunwind-debug-frame)
- ifneq ($(feature-libunwind-debug-frame), 1)
- $(warning No debug_frame support found in libunwind)
+ ifndef NO_LIBUNWIND
+ ifndef NO_LOCAL_LIBUNWIND
+ ifeq ($(SRCARCH),$(filter $(SRCARCH),arm arm64))
+ $(call feature_check,libunwind-debug-frame)
+ ifneq ($(feature-libunwind-debug-frame), 1)
+ $(warning No debug_frame support found in libunwind)
+ CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME
+ endif
+ else
+ # non-ARM has no dwarf_find_debug_frame() function:
  CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME
  endif
- else
- # non-ARM has no dwarf_find_debug_frame() function:
- CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME
+ EXTLIBS += $(LIBUNWIND_LIBS)
+ LDFLAGS += $(LIBUNWIND_LIBS)
  endif
- EXTLIBS += $(LIBUNWIND_LIBS)
- LDFLAGS += $(LIBUNWIND_LIBS)
- endif
- ifeq ($(findstring -static,${LDFLAGS}),-static)
- # gcc -static links libgcc_eh which contans piece of libunwind
- LIBUNWIND_LDFLAGS += -Wl,--allow-multiple-definition
- endif
-
- ifndef NO_LIBUNWIND
+ ifeq ($(findstring -static,${LDFLAGS}),-static)
+ # gcc -static links libgcc_eh which contans piece of libunwind
+ LIBUNWIND_LDFLAGS += -Wl,--allow-multiple-definition
+ endif
  CFLAGS += -DHAVE_LIBUNWIND_SUPPORT
  CFLAGS += $(LIBUNWIND_CFLAGS)
  LDFLAGS += $(LIBUNWIND_LDFLAGS)
···
  endif
  
  ifneq ($(NO_LIBTRACEEVENT),1)
- ifeq ($(NO_SYSCALL_TABLE),0)
- $(call detected,CONFIG_TRACE)
- else
- ifndef NO_LIBAUDIT
- $(call feature_check,libaudit)
- ifneq ($(feature-libaudit), 1)
- $(warning No libaudit.h found, disables 'trace' tool, please install audit-libs-devel or libaudit-dev)
- NO_LIBAUDIT := 1
- else
- CFLAGS += -DHAVE_LIBAUDIT_SUPPORT
- EXTLIBS += -laudit
- $(call detected,CONFIG_TRACE)
- endif
- endif
- endif
+ $(call detected,CONFIG_TRACE)
  endif
  
  ifndef NO_LIBCRYPTO
···
  
  # libtraceevent is a recommended dependency picked up from the system.
  ifneq ($(NO_LIBTRACEEVENT),1)
- $(call feature_check,libtraceevent)
  ifeq ($(feature-libtraceevent), 1)
  CFLAGS += -DHAVE_LIBTRACEEVENT $(shell $(PKG_CONFIG) --cflags libtraceevent)
  LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtraceevent)
···
  $(error ERROR: libtraceevent is missing. Please install libtraceevent-dev/libtraceevent-devel and/or set LIBTRACEEVENT_DIR or build with NO_LIBTRACEEVENT=1)
  endif
  
- $(call feature_check,libtracefs)
  ifeq ($(feature-libtracefs), 1)
  CFLAGS += $(shell $(PKG_CONFIG) --cflags libtracefs)
  LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtracefs)
+42 -14
tools/perf/Makefile.perf
···
  #
  # Define NO_LIBNUMA if you do not want numa perf benchmark
  #
- # Define NO_LIBAUDIT if you do not want libaudit support
- #
  # Define NO_LIBBIONIC if you do not want bionic support
  #
  # Define NO_LIBCRYPTO if you do not want libcrypto (openssl) support
···
  #
  # Define LIBBPF_DYNAMIC to enable libbpf dynamic linking.
  #
- # Define NO_SYSCALL_TABLE=1 to disable the use of syscall id to/from name tables
- # generated from the kernel .tbl or unistd.h files and use, if available, libaudit
- # for doing the conversions to/from strings/id.
- #
  # Define NO_LIBPFM4 to disable libpfm4 events extension.
  #
  # Define NO_LIBDEBUGINFOD if you do not want support debuginfod
···
  SOURCE := $(shell ln -sf $(srctree)/tools/perf $(OUTPUT)/source)
  endif

+ # Beautify output
+ # ---------------------------------------------------------------------------
+ #
+ # Most of build commands in Kbuild start with "cmd_". You can optionally define
+ # "quiet_cmd_*". If defined, the short log is printed. Otherwise, no log from
+ # that command is printed by default.
+ #
+ # e.g.)
+ #   quiet_cmd_depmod = DEPMOD $(MODLIB)
+ #         cmd_depmod = $(srctree)/scripts/depmod.sh $(DEPMOD) $(KERNELRELEASE)
+ #
+ # A simple variant is to prefix commands with $(Q) - that's useful
+ # for commands that shall be hidden in non-verbose mode.
+ #
+ #   $(Q)$(MAKE) $(build)=scripts/basic
+ #
+ # To put more focus on warnings, be less verbose as default
+ # Use 'make V=1' to see the full commands
+
  ifeq ($(V),1)
+   quiet =
    Q =
  else
-   Q = @
+   quiet=quiet_
+   Q=@
  endif
+
+ # If the user is running make -s (silent mode), suppress echoing of commands
+ # make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS.
+ ifeq ($(filter 3.%,$(MAKE_VERSION)),)
+ short-opts := $(firstword -$(MAKEFLAGS))
+ else
+ short-opts := $(filter-out --%,$(MAKEFLAGS))
+ endif
+
+ ifneq ($(findstring s,$(short-opts)),)
+ quiet=silent_
+ endif
+
+ export quiet Q

  # Do not use make's built-in rules
  # (this improves performance and avoids hard-to-debug behaviour);
···
  FEATURE_TESTS := all
  endif
  endif
+ include $(srctree)/tools/perf/scripts/Makefile.syscalls
  include Makefile.config
  endif
···

  EXTLIBS := $(call filter-out,$(EXCLUDE_EXTLIBS),$(EXTLIBS))
  LIBS = -Wl,--whole-archive $(PERFLIBS) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive -Wl,--start-group $(EXTLIBS) -Wl,--end-group
+
+ PERFLIBS_PY := $(call filter-out,$(LIBPERF_BENCH) $(LIBPERF_TEST),$(PERFLIBS))
+ LIBS_PY = -Wl,--whole-archive $(PERFLIBS_PY) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive -Wl,--start-group $(EXTLIBS) -Wl,--end-group

  export INSTALL SHELL_PATH
···
  # Create python binding output directory if not already present
  $(shell [ -d '$(OUTPUT)python' ] || mkdir -p '$(OUTPUT)python')

- $(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): util/python.c util/setup.py $(PERFLIBS)
+ $(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): util/python.c util/setup.py $(PERFLIBS_PY)
  	$(QUIET_GEN)LDSHARED="$(CC) -pthread -shared" \
- 	  CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS) $(LIBS)' \
+ 	  CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS) $(LIBS_PY)' \
  	  $(PYTHON_WORD) util/setup.py \
  	  --quiet build_ext; \
  	  cp $(PYTHON_EXTBUILD_LIB)perf*.so $(OUTPUT)python/
···
  	$(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
  $(call QUIET_INSTALL, perf-iostat) \
  	$(INSTALL) $(OUTPUT)perf-iostat -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
- ifndef NO_LIBAUDIT
- 	$(call QUIET_INSTALL, strace/groups) \
- 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'; \
- 		$(INSTALL) trace/strace/groups/* -m 644 -t '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'
- endif
  ifndef NO_LIBPERL
  	$(call QUIET_INSTALL, perl-scripts) \
  		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'; \
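The "Beautify output" hunk ports Kbuild's quiet/Q convention into Makefile.perf. Its `V=1` and `make -s` detection can be reproduced with a toy Makefile; the conditional logic below is copied from the hunk, while the `all` target that prints the resulting `quiet` value is illustrative:

```shell
tmp=$(mktemp -d)
printf '%s\n' \
  'ifeq ($(V),1)' \
  '  quiet =' \
  '  Q =' \
  'else' \
  '  quiet = quiet_' \
  '  Q = @' \
  'endif' \
  '' \
  'ifeq ($(filter 3.%,$(MAKE_VERSION)),)' \
  'short-opts := $(firstword -$(MAKEFLAGS))' \
  'else' \
  'short-opts := $(filter-out --%,$(MAKEFLAGS))' \
  'endif' \
  '' \
  'ifneq ($(findstring s,$(short-opts)),)' \
  '  quiet = silent_' \
  'endif' \
  '' \
  'all:' > "$tmp/Makefile"
# recipe line needs a literal tab
printf '\t@echo quiet=$(quiet)\n' >> "$tmp/Makefile"

make --no-print-directory -C "$tmp"        # prints: quiet=quiet_
make --no-print-directory -C "$tmp" V=1    # prints: quiet=
make -s --no-print-directory -C "$tmp"     # prints: quiet=silent_
rm -rf "$tmp"
```

The `firstword -$(MAKEFLAGS)` trick relies on make 4.x putting all single-letter options in the first word of `MAKEFLAGS`, so an `s` elsewhere (say, in a long option or variable assignment) is not misread as silent mode.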
+2
tools/perf/arch/alpha/entry/syscalls/Kbuild
···
+ # SPDX-License-Identifier: GPL-2.0
+ syscall-y += syscalls_64.h
+5
tools/perf/arch/alpha/entry/syscalls/Makefile.syscalls
···
+ # SPDX-License-Identifier: GPL-2.0
+
+ syscall_abis_64 +=
+
+ syscalltbl = $(srctree)/tools/perf/arch/alpha/entry/syscalls/syscall.tbl
+504
tools/perf/arch/alpha/entry/syscalls/syscall.tbl
···
+ # SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+ #
+ # system call numbers and entry vectors for alpha
+ #
+ # The format is:
+ # <number> <abi> <name> <entry point>
+ #
+ # The <abi> is always "common" for this file
+ #
+ 0    common  osf_syscall         alpha_syscall_zero
+ 1    common  exit                sys_exit
+ 2    common  fork                alpha_fork
+ 3    common  read                sys_read
+ 4    common  write               sys_write
+ 5    common  osf_old_open        sys_ni_syscall
···
+ 493  common  perf_event_open     sys_perf_event_open
···
+ 532  common  getppid             sys_getppid
+ # all other architectures have common numbers for new syscall, alpha
+ # is the exception.
+ 534  common  pidfd_send_signal   sys_pidfd_send_signal
···
+ 570  common  lsm_set_self_attr   sys_lsm_set_self_attr
+ 571  common  lsm_list_modules    sys_lsm_list_modules
+ 572  common  mseal               sys_mseal
+2
tools/perf/arch/alpha/include/syscall_table.h
···
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #include <asm/syscalls_64.h>
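The per-arch syscall.tbl rows above are what the generic syscall-table scripts turn into the `syscalls_64.h` header included here. As a rough self-contained illustration (not the actual generator, which lives under the kernel's scripts/), an awk filter shows the number-to-name mapping each .tbl row encodes:

```shell
tbl=$(mktemp)
printf '%s\n' \
  '# <number> <abi> <name> <entry point>' \
  '1 common exit sys_exit' \
  '3 common read sys_read' \
  '4 common write sys_write' > "$tbl"

# skip comment lines; print "<number> <name>"
awk '!/^#/ { print $1, $3 }' "$tbl"
# prints:
# 1 exit
# 3 read
# 4 write
rm -f "$tbl"
```

This is why dropping libaudit is possible: the id-to-name conversion that `perf trace` needs comes straight from these tables at build time.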
+2
tools/perf/arch/arc/entry/syscalls/Kbuild
···
+ # SPDX-License-Identifier: GPL-2.0
+ syscall-y += syscalls_32.h
+3
tools/perf/arch/arc/entry/syscalls/Makefile.syscalls
···
+ # SPDX-License-Identifier: GPL-2.0
+
+ syscall_abis_32 += arc time32 renameat stat64 rlimit
+2
tools/perf/arch/arc/include/syscall_table.h
···
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #include <asm/syscalls_32.h>
+2
tools/perf/arch/arm/entry/syscalls/Kbuild
···
+ # SPDX-License-Identifier: GPL-2.0
+ syscall-y += syscalls_32.h
+4
tools/perf/arch/arm/entry/syscalls/Makefile.syscalls
···
+ # SPDX-License-Identifier: GPL-2.0
+
+ syscall_abis_32 += oabi
+ syscalltbl = $(srctree)/tools/perf/arch/arm/entry/syscalls/syscall.tbl
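arm enables the extra `oabi` ABI via `syscall_abis_32`, so oabi-only rows are kept alongside `common` ones when the 32-bit header is generated. A hypothetical sketch of that ABI filtering (the real selection is done by the kernel's syscall header scripts; the sample rows are taken from the table below):

```shell
tbl=$(mktemp)
printf '%s\n' \
  '0 common restart_syscall sys_restart_syscall' \
  '13 oabi time sys_time32' \
  '82 oabi select sys_old_select' \
  '120 common clone sys_clone' > "$tbl"

# keep rows whose <abi> column is in the enabled list (common + oabi here)
awk -v abis='common oabi' '
  BEGIN { n = split(abis, a, " "); for (i = 1; i <= n; i++) ok[a[i]] = 1 }
  ok[$2] { print $1, $3 }' "$tbl"
rm -f "$tbl"
```

Running the same filter with `abis='common'` would drop the `time` and `select` rows, which is effectively what an EABI-only build does.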
+483
tools/perf/arch/arm/entry/syscalls/syscall.tbl
···
+ # SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+ #
+ # Linux system call numbers and entry vectors
+ #
+ # The format is:
+ # <num> <abi> <name> [<entry point> [<oabi compat entry point>]]
+ #
+ # Where abi is:
+ #  common - for system calls shared between oabi and eabi (may have compat)
+ #  oabi   - for oabi-only system calls (may have compat)
+ #  eabi   - for eabi-only system calls
+ #
+ # For each syscall number, "common" is mutually exclusive with oabi and eabi
+ #
+ 0    common  restart_syscall  sys_restart_syscall
+ 1    common  exit             sys_exit
+ 2    common  fork             sys_fork
+ 3    common  read             sys_read
+ 4    common  write            sys_write
···
+ 102  oabi    socketcall       sys_socketcall  sys_oabi_socketcall
···
+ 198  common  lchown32         sys_lchown
+ 199  common  getuid32         sys_getuid
218 + 200 common getgid32 sys_getgid 219 + 201 common geteuid32 sys_geteuid 220 + 202 common getegid32 sys_getegid 221 + 203 common setreuid32 sys_setreuid 222 + 204 common setregid32 sys_setregid 223 + 205 common getgroups32 sys_getgroups 224 + 206 common setgroups32 sys_setgroups 225 + 207 common fchown32 sys_fchown 226 + 208 common setresuid32 sys_setresuid 227 + 209 common getresuid32 sys_getresuid 228 + 210 common setresgid32 sys_setresgid 229 + 211 common getresgid32 sys_getresgid 230 + 212 common chown32 sys_chown 231 + 213 common setuid32 sys_setuid 232 + 214 common setgid32 sys_setgid 233 + 215 common setfsuid32 sys_setfsuid 234 + 216 common setfsgid32 sys_setfsgid 235 + 217 common getdents64 sys_getdents64 236 + 218 common pivot_root sys_pivot_root 237 + 219 common mincore sys_mincore 238 + 220 common madvise sys_madvise 239 + 221 common fcntl64 sys_fcntl64 sys_oabi_fcntl64 240 + # 222 for tux 241 + # 223 is unused 242 + 224 common gettid sys_gettid 243 + 225 common readahead sys_readahead sys_oabi_readahead 244 + 226 common setxattr sys_setxattr 245 + 227 common lsetxattr sys_lsetxattr 246 + 228 common fsetxattr sys_fsetxattr 247 + 229 common getxattr sys_getxattr 248 + 230 common lgetxattr sys_lgetxattr 249 + 231 common fgetxattr sys_fgetxattr 250 + 232 common listxattr sys_listxattr 251 + 233 common llistxattr sys_llistxattr 252 + 234 common flistxattr sys_flistxattr 253 + 235 common removexattr sys_removexattr 254 + 236 common lremovexattr sys_lremovexattr 255 + 237 common fremovexattr sys_fremovexattr 256 + 238 common tkill sys_tkill 257 + 239 common sendfile64 sys_sendfile64 258 + 240 common futex sys_futex_time32 259 + 241 common sched_setaffinity sys_sched_setaffinity 260 + 242 common sched_getaffinity sys_sched_getaffinity 261 + 243 common io_setup sys_io_setup 262 + 244 common io_destroy sys_io_destroy 263 + 245 common io_getevents sys_io_getevents_time32 264 + 246 common io_submit sys_io_submit 265 + 247 common io_cancel sys_io_cancel 266 + 248 
common exit_group sys_exit_group 267 + 249 common lookup_dcookie sys_ni_syscall 268 + 250 common epoll_create sys_epoll_create 269 + 251 common epoll_ctl sys_epoll_ctl sys_oabi_epoll_ctl 270 + 252 common epoll_wait sys_epoll_wait 271 + 253 common remap_file_pages sys_remap_file_pages 272 + # 254 for set_thread_area 273 + # 255 for get_thread_area 274 + 256 common set_tid_address sys_set_tid_address 275 + 257 common timer_create sys_timer_create 276 + 258 common timer_settime sys_timer_settime32 277 + 259 common timer_gettime sys_timer_gettime32 278 + 260 common timer_getoverrun sys_timer_getoverrun 279 + 261 common timer_delete sys_timer_delete 280 + 262 common clock_settime sys_clock_settime32 281 + 263 common clock_gettime sys_clock_gettime32 282 + 264 common clock_getres sys_clock_getres_time32 283 + 265 common clock_nanosleep sys_clock_nanosleep_time32 284 + 266 common statfs64 sys_statfs64_wrapper 285 + 267 common fstatfs64 sys_fstatfs64_wrapper 286 + 268 common tgkill sys_tgkill 287 + 269 common utimes sys_utimes_time32 288 + 270 common arm_fadvise64_64 sys_arm_fadvise64_64 289 + 271 common pciconfig_iobase sys_pciconfig_iobase 290 + 272 common pciconfig_read sys_pciconfig_read 291 + 273 common pciconfig_write sys_pciconfig_write 292 + 274 common mq_open sys_mq_open 293 + 275 common mq_unlink sys_mq_unlink 294 + 276 common mq_timedsend sys_mq_timedsend_time32 295 + 277 common mq_timedreceive sys_mq_timedreceive_time32 296 + 278 common mq_notify sys_mq_notify 297 + 279 common mq_getsetattr sys_mq_getsetattr 298 + 280 common waitid sys_waitid 299 + 281 common socket sys_socket 300 + 282 common bind sys_bind sys_oabi_bind 301 + 283 common connect sys_connect sys_oabi_connect 302 + 284 common listen sys_listen 303 + 285 common accept sys_accept 304 + 286 common getsockname sys_getsockname 305 + 287 common getpeername sys_getpeername 306 + 288 common socketpair sys_socketpair 307 + 289 common send sys_send 308 + 290 common sendto sys_sendto sys_oabi_sendto 309 + 
291 common recv sys_recv 310 + 292 common recvfrom sys_recvfrom 311 + 293 common shutdown sys_shutdown 312 + 294 common setsockopt sys_setsockopt 313 + 295 common getsockopt sys_getsockopt 314 + 296 common sendmsg sys_sendmsg sys_oabi_sendmsg 315 + 297 common recvmsg sys_recvmsg 316 + 298 common semop sys_semop sys_oabi_semop 317 + 299 common semget sys_semget 318 + 300 common semctl sys_old_semctl 319 + 301 common msgsnd sys_msgsnd 320 + 302 common msgrcv sys_msgrcv 321 + 303 common msgget sys_msgget 322 + 304 common msgctl sys_old_msgctl 323 + 305 common shmat sys_shmat 324 + 306 common shmdt sys_shmdt 325 + 307 common shmget sys_shmget 326 + 308 common shmctl sys_old_shmctl 327 + 309 common add_key sys_add_key 328 + 310 common request_key sys_request_key 329 + 311 common keyctl sys_keyctl 330 + 312 common semtimedop sys_semtimedop_time32 sys_oabi_semtimedop 331 + 313 common vserver 332 + 314 common ioprio_set sys_ioprio_set 333 + 315 common ioprio_get sys_ioprio_get 334 + 316 common inotify_init sys_inotify_init 335 + 317 common inotify_add_watch sys_inotify_add_watch 336 + 318 common inotify_rm_watch sys_inotify_rm_watch 337 + 319 common mbind sys_mbind 338 + 320 common get_mempolicy sys_get_mempolicy 339 + 321 common set_mempolicy sys_set_mempolicy 340 + 322 common openat sys_openat 341 + 323 common mkdirat sys_mkdirat 342 + 324 common mknodat sys_mknodat 343 + 325 common fchownat sys_fchownat 344 + 326 common futimesat sys_futimesat_time32 345 + 327 common fstatat64 sys_fstatat64 sys_oabi_fstatat64 346 + 328 common unlinkat sys_unlinkat 347 + 329 common renameat sys_renameat 348 + 330 common linkat sys_linkat 349 + 331 common symlinkat sys_symlinkat 350 + 332 common readlinkat sys_readlinkat 351 + 333 common fchmodat sys_fchmodat 352 + 334 common faccessat sys_faccessat 353 + 335 common pselect6 sys_pselect6_time32 354 + 336 common ppoll sys_ppoll_time32 355 + 337 common unshare sys_unshare 356 + 338 common set_robust_list sys_set_robust_list 357 + 339 common 
get_robust_list sys_get_robust_list 358 + 340 common splice sys_splice 359 + 341 common arm_sync_file_range sys_sync_file_range2 360 + 342 common tee sys_tee 361 + 343 common vmsplice sys_vmsplice 362 + 344 common move_pages sys_move_pages 363 + 345 common getcpu sys_getcpu 364 + 346 common epoll_pwait sys_epoll_pwait 365 + 347 common kexec_load sys_kexec_load 366 + 348 common utimensat sys_utimensat_time32 367 + 349 common signalfd sys_signalfd 368 + 350 common timerfd_create sys_timerfd_create 369 + 351 common eventfd sys_eventfd 370 + 352 common fallocate sys_fallocate 371 + 353 common timerfd_settime sys_timerfd_settime32 372 + 354 common timerfd_gettime sys_timerfd_gettime32 373 + 355 common signalfd4 sys_signalfd4 374 + 356 common eventfd2 sys_eventfd2 375 + 357 common epoll_create1 sys_epoll_create1 376 + 358 common dup3 sys_dup3 377 + 359 common pipe2 sys_pipe2 378 + 360 common inotify_init1 sys_inotify_init1 379 + 361 common preadv sys_preadv 380 + 362 common pwritev sys_pwritev 381 + 363 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo 382 + 364 common perf_event_open sys_perf_event_open 383 + 365 common recvmmsg sys_recvmmsg_time32 384 + 366 common accept4 sys_accept4 385 + 367 common fanotify_init sys_fanotify_init 386 + 368 common fanotify_mark sys_fanotify_mark 387 + 369 common prlimit64 sys_prlimit64 388 + 370 common name_to_handle_at sys_name_to_handle_at 389 + 371 common open_by_handle_at sys_open_by_handle_at 390 + 372 common clock_adjtime sys_clock_adjtime32 391 + 373 common syncfs sys_syncfs 392 + 374 common sendmmsg sys_sendmmsg 393 + 375 common setns sys_setns 394 + 376 common process_vm_readv sys_process_vm_readv 395 + 377 common process_vm_writev sys_process_vm_writev 396 + 378 common kcmp sys_kcmp 397 + 379 common finit_module sys_finit_module 398 + 380 common sched_setattr sys_sched_setattr 399 + 381 common sched_getattr sys_sched_getattr 400 + 382 common renameat2 sys_renameat2 401 + 383 common seccomp sys_seccomp 402 + 384 common getrandom 
sys_getrandom 403 + 385 common memfd_create sys_memfd_create 404 + 386 common bpf sys_bpf 405 + 387 common execveat sys_execveat 406 + 388 common userfaultfd sys_userfaultfd 407 + 389 common membarrier sys_membarrier 408 + 390 common mlock2 sys_mlock2 409 + 391 common copy_file_range sys_copy_file_range 410 + 392 common preadv2 sys_preadv2 411 + 393 common pwritev2 sys_pwritev2 412 + 394 common pkey_mprotect sys_pkey_mprotect 413 + 395 common pkey_alloc sys_pkey_alloc 414 + 396 common pkey_free sys_pkey_free 415 + 397 common statx sys_statx 416 + 398 common rseq sys_rseq 417 + 399 common io_pgetevents sys_io_pgetevents_time32 418 + 400 common migrate_pages sys_migrate_pages 419 + 401 common kexec_file_load sys_kexec_file_load 420 + # 402 is unused 421 + 403 common clock_gettime64 sys_clock_gettime 422 + 404 common clock_settime64 sys_clock_settime 423 + 405 common clock_adjtime64 sys_clock_adjtime 424 + 406 common clock_getres_time64 sys_clock_getres 425 + 407 common clock_nanosleep_time64 sys_clock_nanosleep 426 + 408 common timer_gettime64 sys_timer_gettime 427 + 409 common timer_settime64 sys_timer_settime 428 + 410 common timerfd_gettime64 sys_timerfd_gettime 429 + 411 common timerfd_settime64 sys_timerfd_settime 430 + 412 common utimensat_time64 sys_utimensat 431 + 413 common pselect6_time64 sys_pselect6 432 + 414 common ppoll_time64 sys_ppoll 433 + 416 common io_pgetevents_time64 sys_io_pgetevents 434 + 417 common recvmmsg_time64 sys_recvmmsg 435 + 418 common mq_timedsend_time64 sys_mq_timedsend 436 + 419 common mq_timedreceive_time64 sys_mq_timedreceive 437 + 420 common semtimedop_time64 sys_semtimedop 438 + 421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait 439 + 422 common futex_time64 sys_futex 440 + 423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval 441 + 424 common pidfd_send_signal sys_pidfd_send_signal 442 + 425 common io_uring_setup sys_io_uring_setup 443 + 426 common io_uring_enter sys_io_uring_enter 444 + 427 common 
io_uring_register sys_io_uring_register 445 + 428 common open_tree sys_open_tree 446 + 429 common move_mount sys_move_mount 447 + 430 common fsopen sys_fsopen 448 + 431 common fsconfig sys_fsconfig 449 + 432 common fsmount sys_fsmount 450 + 433 common fspick sys_fspick 451 + 434 common pidfd_open sys_pidfd_open 452 + 435 common clone3 sys_clone3 453 + 436 common close_range sys_close_range 454 + 437 common openat2 sys_openat2 455 + 438 common pidfd_getfd sys_pidfd_getfd 456 + 439 common faccessat2 sys_faccessat2 457 + 440 common process_madvise sys_process_madvise 458 + 441 common epoll_pwait2 sys_epoll_pwait2 459 + 442 common mount_setattr sys_mount_setattr 460 + 443 common quotactl_fd sys_quotactl_fd 461 + 444 common landlock_create_ruleset sys_landlock_create_ruleset 462 + 445 common landlock_add_rule sys_landlock_add_rule 463 + 446 common landlock_restrict_self sys_landlock_restrict_self 464 + # 447 reserved for memfd_secret 465 + 448 common process_mrelease sys_process_mrelease 466 + 449 common futex_waitv sys_futex_waitv 467 + 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 468 + 451 common cachestat sys_cachestat 469 + 452 common fchmodat2 sys_fchmodat2 470 + 453 common map_shadow_stack sys_map_shadow_stack 471 + 454 common futex_wake sys_futex_wake 472 + 455 common futex_wait sys_futex_wait 473 + 456 common futex_requeue sys_futex_requeue 474 + 457 common statmount sys_statmount 475 + 458 common listmount sys_listmount 476 + 459 common lsm_get_self_attr sys_lsm_get_self_attr 477 + 460 common lsm_set_self_attr sys_lsm_set_self_attr 478 + 461 common lsm_list_modules sys_lsm_list_modules 479 + 462 common mseal sys_mseal 480 + 463 common setxattrat sys_setxattrat 481 + 464 common getxattrat sys_getxattrat 482 + 465 common listxattrat sys_listxattrat 483 + 466 common removexattrat sys_removexattrat
+2
tools/perf/arch/arm/include/syscall_table.h
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #include <asm/syscalls_32.h>
-22
tools/perf/arch/arm64/Makefile
  # SPDX-License-Identifier: GPL-2.0
  PERF_HAVE_JITDUMP := 1
  HAVE_KVM_STAT_SUPPORT := 1
- 
- #
- # Syscall table generation for perf
- #
- 
- out := $(OUTPUT)arch/arm64/include/generated/asm
- header := $(out)/syscalls.c
- incpath := $(srctree)/tools
- sysdef := $(srctree)/tools/arch/arm64/include/uapi/asm/unistd.h
- sysprf := $(srctree)/tools/perf/arch/arm64/entry/syscalls/
- systbl := $(sysprf)/mksyscalltbl
- 
- # Create output directory if not already present
- $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
- 
- $(header): $(sysdef) $(systbl)
- 	$(Q)$(SHELL) '$(systbl)' '$(CC)' '$(HOSTCC)' $(incpath) $(sysdef) > $@
- 
- clean::
- 	$(call QUIET_CLEAN, arm64) $(RM) $(header)
- 
- archheaders: $(header)
+3
tools/perf/arch/arm64/entry/syscalls/Kbuild
+ # SPDX-License-Identifier: GPL-2.0
+ syscall-y += syscalls_32.h
+ syscall-y += syscalls_64.h
+6
tools/perf/arch/arm64/entry/syscalls/Makefile.syscalls
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ syscall_abis_32 +=
+ syscall_abis_64 += renameat rlimit memfd_secret
+ 
+ syscalltbl = $(srctree)/tools/perf/arch/arm64/entry/syscalls/syscall_%.tbl
-46
tools/perf/arch/arm64/entry/syscalls/mksyscalltbl
- #!/bin/sh
- # SPDX-License-Identifier: GPL-2.0
- #
- # Generate system call table for perf. Derived from
- # powerpc script.
- #
- # Copyright IBM Corp. 2017
- # Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
- # Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
- # Changed by: Kim Phillips <kim.phillips@arm.com>
- 
- gcc=$1
- hostcc=$2
- incpath=$3
- input=$4
- 
- if ! test -r $input; then
- 	echo "Could not read input file" >&2
- 	exit 1
- fi
- 
- create_sc_table()
- {
- 	local sc nr max_nr
- 
- 	while read sc nr; do
- 		printf "%s\n" "	[$nr] = \"$sc\","
- 		max_nr=$nr
- 	done
- 
- 	echo "#define SYSCALLTBL_ARM64_MAX_ID $max_nr"
- }
- 
- create_table()
- {
- 	echo "#include \"$input\""
- 	echo "static const char *const syscalltbl_arm64[] = {"
- 	create_sc_table
- 	echo "};"
- }
- 
- $gcc -E -dM -x c -I $incpath/include/uapi $input \
- 	|awk '$2 ~ "__NR" && $3 !~ "__NR3264_" {
- 		sub("^#define __NR(3264)?_", "");
- 		print | "sort -k2 -n"}' \
- 	|create_table
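The deleted mksyscalltbl above preprocessed unistd.h and emitted a C string table of syscall names keyed by number. A rough shell sketch of that same transformation, here fed from lines in the `*.tbl` format that replaces it (`<number> <abi> <name> [<entry point>]`); the function name `tbl_to_c` and the sample entries are illustrative only, not part of the real build:

```shell
# tbl_to_c: hypothetical helper mimicking the removed script's output style.
# Reads syscall.tbl-format lines on stdin, skips comment lines, and prints
# one designated-initializer entry per syscall, as mksyscalltbl used to.
tbl_to_c() {
	awk '$1 !~ /^#/ { printf "\t[%s] = \"%s\",\n", $1, $3 }'
}

# Sample input in the new table format (entries abbreviated for illustration):
tbl_to_c <<'EOF'
# 7 was sys_waitpid
0	common	restart_syscall		sys_restart_syscall
3	common	read			sys_read
EOF
```

In-tree, this role is now played by the kernel's generic syscall-table tooling driven by the Kbuild and Makefile.syscalls files added in this series, so each arch no longer carries its own generator script.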
+476
tools/perf/arch/arm64/entry/syscalls/syscall_32.tbl
+ # SPDX-License-Identifier: GPL-2.0-only
+ #
+ # AArch32 (compat) system call definitions.
+ #
+ # Copyright (C) 2001-2005 Russell King
+ # Copyright (C) 2012 ARM Ltd.
+ #
+ # This file corresponds to arch/arm/tools/syscall.tbl
+ # for the native EABI syscalls and should be kept in sync
+ # Instead of the OABI syscalls, it contains pointers to
+ # the compat entry points where they differ from the native
+ # syscalls.
+ #
+ 0 common restart_syscall sys_restart_syscall
+ 1 common exit sys_exit
+ 2 common fork sys_fork
+ 3 common read sys_read
+ 4 common write sys_write
+ 5 common open sys_open compat_sys_open
+ 6 common close sys_close
+ # 7 was sys_waitpid
+ 8 common creat sys_creat
+ 9 common link sys_link
+ 10 common unlink sys_unlink
+ 11 common execve sys_execve compat_sys_execve
+ 12 common chdir sys_chdir
+ # 13 was sys_time
+ 14 common mknod sys_mknod
+ 15 common chmod sys_chmod
+ 16 common lchown sys_lchown16
+ # 17 was sys_break
+ # 18 was sys_stat
+ 19 common lseek sys_lseek compat_sys_lseek
+ 20 common getpid sys_getpid
+ 21 common mount sys_mount
+ # 22 was sys_umount
+ 23 common setuid sys_setuid16
+ 24 common getuid sys_getuid16
+ # 25 was sys_stime
+ 26 common ptrace sys_ptrace compat_sys_ptrace
+ # 27 was sys_alarm
+ # 28 was sys_fstat
+ 29 common pause sys_pause
+ # 30 was sys_utime
+ # 31 was sys_stty
+ # 32 was sys_gtty
+ 33 common access sys_access
+ 34 common nice sys_nice
+ # 35 was sys_ftime
+ 36 common sync sys_sync
+ 37 common kill sys_kill
+ 38 common rename sys_rename
+ 39 common mkdir sys_mkdir
+ 40 common rmdir sys_rmdir
+ 41 common dup sys_dup
+ 42 common pipe sys_pipe
+ 43 common times sys_times compat_sys_times
+ # 44 was sys_prof
+ 45 common brk sys_brk
+ 46 common setgid sys_setgid16
+ 47 common getgid sys_getgid16
+ # 48 was sys_signal
+ 49 common geteuid sys_geteuid16
+ 50 common getegid sys_getegid16
+ 51 common acct sys_acct
+ 52 common umount2 sys_umount
+ # 53 was sys_lock
+ 54 common ioctl sys_ioctl compat_sys_ioctl
+ 55 common fcntl sys_fcntl compat_sys_fcntl
+ # 56 was sys_mpx
+ 57 common setpgid sys_setpgid
+ # 58 was sys_ulimit
+ # 59 was sys_olduname
+ 60 common umask sys_umask
+ 61 common chroot sys_chroot
+ 62 common ustat sys_ustat compat_sys_ustat
+ 63 common dup2 sys_dup2
+ 64 common getppid sys_getppid
+ 65 common getpgrp sys_getpgrp
+ 66 common setsid sys_setsid
+ 67 common sigaction sys_sigaction compat_sys_sigaction
+ # 68 was sys_sgetmask
+ # 69 was sys_ssetmask
+ 70 common setreuid sys_setreuid16
+ 71 common setregid sys_setregid16
+ 72 common sigsuspend sys_sigsuspend
+ 73 common sigpending sys_sigpending compat_sys_sigpending
+ 74 common sethostname sys_sethostname
+ 75 common setrlimit sys_setrlimit compat_sys_setrlimit
+ # 76 was compat_sys_getrlimit
+ 77 common getrusage sys_getrusage compat_sys_getrusage
+ 78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday
+ 79 common settimeofday sys_settimeofday compat_sys_settimeofday
+ 80 common getgroups sys_getgroups16
+ 81 common setgroups sys_setgroups16
+ # 82 was compat_sys_select
+ 83 common symlink sys_symlink
+ # 84 was sys_lstat
+ 85 common readlink sys_readlink
+ 86 common uselib sys_uselib
+ 87 common swapon sys_swapon
+ 88 common reboot sys_reboot
+ # 89 was sys_readdir
+ # 90 was sys_mmap
+ 91 common munmap sys_munmap
+ 92 common truncate sys_truncate compat_sys_truncate
+ 93 common ftruncate sys_ftruncate compat_sys_ftruncate
+ 94 common fchmod sys_fchmod
+ 95 common fchown sys_fchown16
+ 96 common getpriority sys_getpriority
+ 97 common setpriority sys_setpriority
+ # 98 was sys_profil
+ 99 common statfs sys_statfs compat_sys_statfs
+ 100 common fstatfs sys_fstatfs compat_sys_fstatfs
+ # 101 was sys_ioperm
+ # 102 was sys_socketcall
+ 103 common syslog sys_syslog
+ 104 common setitimer sys_setitimer compat_sys_setitimer
+ 105 common getitimer sys_getitimer compat_sys_getitimer
+ 106 common stat sys_newstat compat_sys_newstat
+ 107 common lstat sys_newlstat compat_sys_newlstat
+ 108 common fstat sys_newfstat compat_sys_newfstat
+ # 109 was sys_uname
+ # 110 was sys_iopl
+ 111 common vhangup sys_vhangup
+ # 112 was sys_idle
+ # 113 was sys_syscall
+ 114 common wait4 sys_wait4 compat_sys_wait4
+ 115 common swapoff sys_swapoff
+ 116 common sysinfo sys_sysinfo compat_sys_sysinfo
+ # 117 was sys_ipc
+ 118 common fsync sys_fsync
+ 119 common sigreturn sys_sigreturn_wrapper compat_sys_sigreturn
+ 120 common clone sys_clone
+ 121 common setdomainname sys_setdomainname
+ 122 common uname sys_newuname
+ # 123 was sys_modify_ldt
+ 124 common adjtimex sys_adjtimex_time32
+ 125 common mprotect sys_mprotect
+ 126 common sigprocmask sys_sigprocmask compat_sys_sigprocmask
+ # 127 was sys_create_module
+ 128 common init_module sys_init_module
+ 129 common delete_module sys_delete_module
+ # 130 was sys_get_kernel_syms
+ 131 common quotactl sys_quotactl
+ 132 common getpgid sys_getpgid
+ 133 common fchdir sys_fchdir
+ 134 common bdflush sys_ni_syscall
+ 135 common sysfs sys_sysfs
+ 136 common personality sys_personality
+ # 137 was sys_afs_syscall
+ 138 common setfsuid sys_setfsuid16
+ 139 common setfsgid sys_setfsgid16
+ 140 common _llseek sys_llseek
+ 141 common getdents sys_getdents compat_sys_getdents
+ 142 common _newselect sys_select compat_sys_select
+ 143 common flock sys_flock
+ 144 common msync sys_msync
+ 145 common readv sys_readv
+ 146 common writev sys_writev
+ 147 common getsid sys_getsid
+ 148 common fdatasync sys_fdatasync
+ 149 common _sysctl sys_ni_syscall
+ 150 common mlock sys_mlock
+ 151 common munlock sys_munlock
+ 152 common mlockall sys_mlockall
+ 153 common munlockall sys_munlockall
+ 154 common sched_setparam sys_sched_setparam
+ 155 common sched_getparam sys_sched_getparam
+ 156 common sched_setscheduler sys_sched_setscheduler
+ 157 common sched_getscheduler sys_sched_getscheduler
+ 158 common sched_yield sys_sched_yield
+ 159 common sched_get_priority_max sys_sched_get_priority_max
+ 160 common sched_get_priority_min sys_sched_get_priority_min
+ 161 common sched_rr_get_interval sys_sched_rr_get_interval_time32
+ 162 common nanosleep sys_nanosleep_time32
+ 163 common mremap sys_mremap
+ 164 common setresuid sys_setresuid16
+ 165 common getresuid sys_getresuid16
+ # 166 was sys_vm86
+ # 167 was sys_query_module
+ 168 common poll sys_poll
+ 169 common nfsservctl sys_ni_syscall
+ 170 common setresgid sys_setresgid16
+ 171 common getresgid sys_getresgid16
+ 172 common prctl sys_prctl
+ 173 common rt_sigreturn sys_rt_sigreturn_wrapper compat_sys_rt_sigreturn
+ 174 common rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
+ 175 common rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
+ 176 common rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
+ 177 common rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
+ 178 common rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
+ 179 common rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
+ 180 common pread64 sys_pread64 compat_sys_aarch32_pread64
+ 181 common pwrite64 sys_pwrite64 compat_sys_aarch32_pwrite64
+ 182 common chown sys_chown16
+ 183 common getcwd sys_getcwd
+ 184 common capget sys_capget
+ 185 common capset sys_capset
+ 186 common sigaltstack sys_sigaltstack compat_sys_sigaltstack
+ 187 common sendfile sys_sendfile compat_sys_sendfile
+ # 188 reserved
+ # 189 reserved
+ 190 common vfork sys_vfork
+ # SuS compliant getrlimit
+ 191 common ugetrlimit sys_getrlimit compat_sys_getrlimit
+ 192 common mmap2 sys_mmap2 compat_sys_aarch32_mmap2
+ 193 common truncate64 sys_truncate64 compat_sys_aarch32_truncate64
+ 194 common ftruncate64 sys_ftruncate64 compat_sys_aarch32_ftruncate64
+ 195 common stat64 sys_stat64
+ 196 common lstat64 sys_lstat64
+ 197 common fstat64 sys_fstat64
+ 198 common lchown32 sys_lchown
+ 199 common getuid32 sys_getuid
+ 200 common getgid32 sys_getgid
+ 201 common geteuid32 sys_geteuid
+ 202 common getegid32 sys_getegid
+ 203 common setreuid32 sys_setreuid
+ 204 common setregid32 sys_setregid
+ 205 common getgroups32 sys_getgroups
+ 206 common setgroups32 sys_setgroups
+ 207 common fchown32 sys_fchown
+ 208 common setresuid32 sys_setresuid
+ 209 common getresuid32 sys_getresuid
+ 210 common setresgid32 sys_setresgid
+ 211 common getresgid32 sys_getresgid
+ 212 common chown32 sys_chown
+ 213 common setuid32 sys_setuid
+ 214 common setgid32 sys_setgid
+ 215 common setfsuid32 sys_setfsuid
+ 216 common setfsgid32 sys_setfsgid
+ 217 common getdents64 sys_getdents64
+ 218 common pivot_root sys_pivot_root
+ 219 common mincore sys_mincore
+ 220 common madvise sys_madvise
+ 221 common fcntl64 sys_fcntl64 compat_sys_fcntl64
+ # 222 for tux
+ # 223 is unused
+ 224 common gettid sys_gettid
+ 225 common readahead sys_readahead compat_sys_aarch32_readahead
+ 226 common setxattr sys_setxattr
+ 227 common lsetxattr sys_lsetxattr
+ 228 common fsetxattr sys_fsetxattr
+ 229 common getxattr sys_getxattr
+ 230 common lgetxattr sys_lgetxattr
+ 231 common fgetxattr sys_fgetxattr
+ 232 common listxattr sys_listxattr
+ 233 common llistxattr sys_llistxattr
+ 234 common flistxattr sys_flistxattr
+ 235 common removexattr sys_removexattr
+ 236 common lremovexattr sys_lremovexattr
+ 237 common fremovexattr sys_fremovexattr
+ 238 common tkill sys_tkill
+ 239 common sendfile64 sys_sendfile64
+ 240 common futex sys_futex_time32
+ 241 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
+ 242 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
+ 243 common io_setup sys_io_setup compat_sys_io_setup
+ 244 common io_destroy sys_io_destroy
+ 245 common io_getevents sys_io_getevents_time32
+ 246 common io_submit sys_io_submit compat_sys_io_submit
+ 247 common io_cancel sys_io_cancel
+ 248 common exit_group sys_exit_group
+ 249 common lookup_dcookie sys_ni_syscall
+ 250 common epoll_create sys_epoll_create
+ 251 common epoll_ctl sys_epoll_ctl
+ 252 common epoll_wait sys_epoll_wait
+ 253 common remap_file_pages sys_remap_file_pages
+ # 254 for set_thread_area
+ # 255 for get_thread_area
+ 256 common set_tid_address sys_set_tid_address
+ 257 common timer_create sys_timer_create compat_sys_timer_create
+ 258 common timer_settime sys_timer_settime32
+ 259 common timer_gettime sys_timer_gettime32
+ 260 common timer_getoverrun sys_timer_getoverrun
+ 261 common timer_delete sys_timer_delete
+ 262 common clock_settime sys_clock_settime32
+ 263 common clock_gettime sys_clock_gettime32
+ 264 common clock_getres sys_clock_getres_time32
+ 265 common clock_nanosleep sys_clock_nanosleep_time32
+ 266 common statfs64 sys_statfs64_wrapper compat_sys_aarch32_statfs64
+ 267 common fstatfs64 sys_fstatfs64_wrapper compat_sys_aarch32_fstatfs64
+ 268 common tgkill sys_tgkill
+ 269 common utimes sys_utimes_time32
+ 270 common arm_fadvise64_64 sys_arm_fadvise64_64 compat_sys_aarch32_fadvise64_64
+ 271 common pciconfig_iobase sys_pciconfig_iobase
+ 272 common pciconfig_read sys_pciconfig_read
+ 273 common pciconfig_write sys_pciconfig_write
+ 274 common mq_open sys_mq_open compat_sys_mq_open
+ 275 common mq_unlink sys_mq_unlink
+ 276 common mq_timedsend sys_mq_timedsend_time32
+ 277 common mq_timedreceive sys_mq_timedreceive_time32
+ 278 common mq_notify sys_mq_notify compat_sys_mq_notify
+ 279 common mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
+ 280 common waitid sys_waitid compat_sys_waitid
+ 281 common socket sys_socket
+ 282 common bind sys_bind
+ 283 common connect sys_connect
+ 284 common listen sys_listen
+ 285 common accept sys_accept
+ 286 common getsockname sys_getsockname
+ 287 common getpeername sys_getpeername
+ 288 common socketpair sys_socketpair
+ 289 common send sys_send
+ 290 common sendto sys_sendto
+ 291 common recv sys_recv compat_sys_recv
+ 292 common recvfrom sys_recvfrom compat_sys_recvfrom
+ 293 common shutdown sys_shutdown
+ 294 common setsockopt sys_setsockopt
+ 295 common getsockopt sys_getsockopt
+ 296 common sendmsg sys_sendmsg compat_sys_sendmsg
+ 297 common recvmsg sys_recvmsg compat_sys_recvmsg
+ 298 common semop sys_semop
+ 299 common semget sys_semget
+ 300 common semctl sys_old_semctl compat_sys_old_semctl
+ 301 common msgsnd sys_msgsnd compat_sys_msgsnd
+ 302 common msgrcv sys_msgrcv compat_sys_msgrcv
+ 303 common msgget sys_msgget
+ 304 common msgctl sys_old_msgctl compat_sys_old_msgctl
+ 305 common shmat sys_shmat compat_sys_shmat
+ 306 common shmdt sys_shmdt
+ 307 common shmget sys_shmget
+ 308 common shmctl sys_old_shmctl compat_sys_old_shmctl
+ 309 common add_key sys_add_key
+ 310 common request_key sys_request_key
+ 311 common keyctl sys_keyctl compat_sys_keyctl
+ 312 common semtimedop sys_semtimedop_time32
+ 313 common vserver sys_ni_syscall
+ 314 common ioprio_set sys_ioprio_set
+ 315 common ioprio_get sys_ioprio_get
+ 316 common inotify_init sys_inotify_init
+ 317 common inotify_add_watch sys_inotify_add_watch
+ 318 common inotify_rm_watch sys_inotify_rm_watch
+ 319 common mbind sys_mbind
+ 320 common get_mempolicy sys_get_mempolicy
+ 321 common set_mempolicy sys_set_mempolicy
+ 322 common openat sys_openat compat_sys_openat
+ 323 common mkdirat sys_mkdirat
+ 324 common mknodat sys_mknodat
+ 325 common fchownat sys_fchownat
+ 326 common futimesat sys_futimesat_time32
+ 327 common fstatat64 sys_fstatat64
+ 328 common unlinkat sys_unlinkat
+ 329 common renameat sys_renameat
+ 330 common linkat sys_linkat
+ 331 common symlinkat sys_symlinkat
+ 332 common readlinkat sys_readlinkat
+ 333 common fchmodat sys_fchmodat
+ 334 common faccessat sys_faccessat
+ 335 common pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
+ 336 common ppoll sys_ppoll_time32 compat_sys_ppoll_time32
+ 337 common unshare sys_unshare
+ 338 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
+ 339 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
+ 340 common splice sys_splice
+ 341 common arm_sync_file_range sys_sync_file_range2 compat_sys_aarch32_sync_file_range2
+ 342 common tee sys_tee
+ 343 common vmsplice sys_vmsplice
+ 344 common move_pages sys_move_pages
+ 345 common getcpu sys_getcpu
+ 346 common epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
+ 347 common kexec_load sys_kexec_load compat_sys_kexec_load
+ 348 common utimensat sys_utimensat_time32
+ 349 common signalfd sys_signalfd compat_sys_signalfd
+ 350 common timerfd_create sys_timerfd_create
+ 351 common eventfd sys_eventfd
+ 352 common fallocate sys_fallocate compat_sys_aarch32_fallocate
+ 353 common timerfd_settime sys_timerfd_settime32
+ 354 common timerfd_gettime sys_timerfd_gettime32
+ 355 common signalfd4 sys_signalfd4 compat_sys_signalfd4
+ 356 common eventfd2 sys_eventfd2
+ 357 common epoll_create1 sys_epoll_create1
+ 358 common dup3 sys_dup3
+ 359 common pipe2 sys_pipe2
+ 360 common inotify_init1 sys_inotify_init1
+ 361 common preadv sys_preadv compat_sys_preadv
+ 362 common pwritev sys_pwritev compat_sys_pwritev
+ 363 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
+ 364 common perf_event_open sys_perf_event_open
+ 365 common recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
+ 366 common accept4 sys_accept4
+ 367 common fanotify_init sys_fanotify_init
+ 368 common fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
+ 369 common prlimit64 sys_prlimit64
+ 370 common name_to_handle_at sys_name_to_handle_at
+ 371 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
+ 372 common clock_adjtime sys_clock_adjtime32
+ 373 common syncfs sys_syncfs
+ 374 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
+ 375 common setns sys_setns
+ 376 common process_vm_readv sys_process_vm_readv
+ 377 common process_vm_writev sys_process_vm_writev
+ 378 common kcmp sys_kcmp
+ 379 common finit_module sys_finit_module
+ 380 common sched_setattr sys_sched_setattr
+ 381 common sched_getattr sys_sched_getattr
+ 382 common renameat2 sys_renameat2
+ 383 common seccomp sys_seccomp
+ 384 common getrandom sys_getrandom
+ 385 common memfd_create sys_memfd_create
+ 386 common bpf sys_bpf
+ 387 common execveat sys_execveat compat_sys_execveat
+ 388 common userfaultfd sys_userfaultfd
+ 389 common membarrier sys_membarrier
+ 390 common mlock2 sys_mlock2
+ 391 common copy_file_range sys_copy_file_range
+ 392 common preadv2 sys_preadv2 compat_sys_preadv2
+ 393 common pwritev2 sys_pwritev2 compat_sys_pwritev2
+ 394 common pkey_mprotect sys_pkey_mprotect
+ 395 common pkey_alloc sys_pkey_alloc
+ 396
common pkey_free sys_pkey_free 412 + 397 common statx sys_statx 413 + 398 common rseq sys_rseq 414 + 399 common io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents 415 + 400 common migrate_pages sys_migrate_pages 416 + 401 common kexec_file_load sys_kexec_file_load 417 + # 402 is unused 418 + 403 common clock_gettime64 sys_clock_gettime 419 + 404 common clock_settime64 sys_clock_settime 420 + 405 common clock_adjtime64 sys_clock_adjtime 421 + 406 common clock_getres_time64 sys_clock_getres 422 + 407 common clock_nanosleep_time64 sys_clock_nanosleep 423 + 408 common timer_gettime64 sys_timer_gettime 424 + 409 common timer_settime64 sys_timer_settime 425 + 410 common timerfd_gettime64 sys_timerfd_gettime 426 + 411 common timerfd_settime64 sys_timerfd_settime 427 + 412 common utimensat_time64 sys_utimensat 428 + 413 common pselect6_time64 sys_pselect6 compat_sys_pselect6_time64 429 + 414 common ppoll_time64 sys_ppoll compat_sys_ppoll_time64 430 + 416 common io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64 431 + 417 common recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64 432 + 418 common mq_timedsend_time64 sys_mq_timedsend 433 + 419 common mq_timedreceive_time64 sys_mq_timedreceive 434 + 420 common semtimedop_time64 sys_semtimedop 435 + 421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64 436 + 422 common futex_time64 sys_futex 437 + 423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval 438 + 424 common pidfd_send_signal sys_pidfd_send_signal 439 + 425 common io_uring_setup sys_io_uring_setup 440 + 426 common io_uring_enter sys_io_uring_enter 441 + 427 common io_uring_register sys_io_uring_register 442 + 428 common open_tree sys_open_tree 443 + 429 common move_mount sys_move_mount 444 + 430 common fsopen sys_fsopen 445 + 431 common fsconfig sys_fsconfig 446 + 432 common fsmount sys_fsmount 447 + 433 common fspick sys_fspick 448 + 434 common pidfd_open sys_pidfd_open 449 + 435 
common clone3 sys_clone3 450 + 436 common close_range sys_close_range 451 + 437 common openat2 sys_openat2 452 + 438 common pidfd_getfd sys_pidfd_getfd 453 + 439 common faccessat2 sys_faccessat2 454 + 440 common process_madvise sys_process_madvise 455 + 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 456 + 442 common mount_setattr sys_mount_setattr 457 + 443 common quotactl_fd sys_quotactl_fd 458 + 444 common landlock_create_ruleset sys_landlock_create_ruleset 459 + 445 common landlock_add_rule sys_landlock_add_rule 460 + 446 common landlock_restrict_self sys_landlock_restrict_self 461 + # 447 reserved for memfd_secret 462 + 448 common process_mrelease sys_process_mrelease 463 + 449 common futex_waitv sys_futex_waitv 464 + 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 465 + 451 common cachestat sys_cachestat 466 + 452 common fchmodat2 sys_fchmodat2 467 + 453 common map_shadow_stack sys_map_shadow_stack 468 + 454 common futex_wake sys_futex_wake 469 + 455 common futex_wait sys_futex_wait 470 + 456 common futex_requeue sys_futex_requeue 471 + 457 common statmount sys_statmount 472 + 458 common listmount sys_listmount 473 + 459 common lsm_get_self_attr sys_lsm_get_self_attr 474 + 460 common lsm_set_self_attr sys_lsm_set_self_attr 475 + 461 common lsm_list_modules sys_lsm_list_modules 476 + 462 common mseal sys_mseal
+8
tools/perf/arch/arm64/include/syscall_table.h
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/bitsperlong.h>
+
+#if __BITS_PER_LONG == 64
+#include <asm/syscalls_64.h>
+#else
+#include <asm/syscalls_32.h>
+#endif
+58 -32
tools/perf/arch/arm64/util/arm-spe.c
···
 	evsel__set_sample_bit(evsel, PHYS_ADDR);
 }
 
-static int arm_spe_recording_options(struct auxtrace_record *itr,
-				     struct evlist *evlist,
-				     struct record_opts *opts)
+static int arm_spe_setup_aux_buffer(struct record_opts *opts)
 {
-	struct arm_spe_recording *sper =
-			container_of(itr, struct arm_spe_recording, itr);
-	struct evsel *evsel, *tmp;
-	struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
 	bool privileged = perf_event_paranoid_check(-1);
-	struct evsel *tracking_evsel;
-	int err;
-
-	sper->evlist = evlist;
-
-	evlist__for_each_entry(evlist, evsel) {
-		if (evsel__is_aux_event(evsel)) {
-			if (!strstarts(evsel->pmu->name, ARM_SPE_PMU_NAME)) {
-				pr_err("Found unexpected auxtrace event: %s\n",
-				       evsel->pmu->name);
-				return -EINVAL;
-			}
-			opts->full_auxtrace = true;
-		}
-	}
-
-	if (!opts->full_auxtrace)
-		return 0;
 
 	/*
 	 * we are in snapshot mode.
···
 		pr_err("Failed to calculate default snapshot size and/or AUX area tracing mmap pages\n");
 		return -EINVAL;
 	}
+
+	pr_debug2("%sx snapshot size: %zu\n", ARM_SPE_PMU_NAME,
+		  opts->auxtrace_snapshot_size);
 }
 
 /* We are in full trace mode but '-m,xyz' wasn't specified */
···
 		}
 	}
 
-	if (opts->auxtrace_snapshot_mode)
-		pr_debug2("%sx snapshot size: %zu\n", ARM_SPE_PMU_NAME,
-			  opts->auxtrace_snapshot_size);
+	return 0;
+}
 
-	evlist__for_each_entry_safe(evlist, tmp, evsel) {
-		if (evsel__is_aux_event(evsel))
-			arm_spe_setup_evsel(evsel, cpus);
-	}
+static int arm_spe_setup_tracking_event(struct evlist *evlist,
+					struct record_opts *opts)
+{
+	int err;
+	struct evsel *tracking_evsel;
+	struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
 
 	/* Add dummy event to keep tracking */
 	err = parse_event(evlist, "dummy:u");
···
 	}
 
 	return 0;
+}
+
+static int arm_spe_recording_options(struct auxtrace_record *itr,
+				     struct evlist *evlist,
+				     struct record_opts *opts)
+{
+	struct arm_spe_recording *sper =
+			container_of(itr, struct arm_spe_recording, itr);
+	struct evsel *evsel, *tmp;
+	struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+	bool discard = false;
+	int err;
+
+	sper->evlist = evlist;
+
+	evlist__for_each_entry(evlist, evsel) {
+		if (evsel__is_aux_event(evsel)) {
+			if (!strstarts(evsel->pmu->name, ARM_SPE_PMU_NAME)) {
+				pr_err("Found unexpected auxtrace event: %s\n",
+				       evsel->pmu->name);
+				return -EINVAL;
+			}
+			opts->full_auxtrace = true;
+		}
+	}
+
+	if (!opts->full_auxtrace)
+		return 0;
+
+	evlist__for_each_entry_safe(evlist, tmp, evsel) {
+		if (evsel__is_aux_event(evsel)) {
+			arm_spe_setup_evsel(evsel, cpus);
+			if (evsel->core.attr.config &
+			    perf_pmu__format_bits(evsel->pmu, "discard"))
+				discard = true;
+		}
+	}
+
+	if (discard)
+		return 0;
+
+	err = arm_spe_setup_aux_buffer(opts);
+	if (err)
+		return err;
+
+	return arm_spe_setup_tracking_event(evlist, opts);
 }
 
 static int arm_spe_parse_snapshot_options(struct auxtrace_record *itr __maybe_unused,
+2
tools/perf/arch/csky/entry/syscalls/Kbuild
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_32.h
+3
tools/perf/arch/csky/entry/syscalls/Makefile.syscalls
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_32 += csky time32 stat64 rlimit
+2
tools/perf/arch/csky/include/syscall_table.h
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/syscalls_32.h>
-22
tools/perf/arch/loongarch/Makefile
 # SPDX-License-Identifier: GPL-2.0
 PERF_HAVE_JITDUMP := 1
 HAVE_KVM_STAT_SUPPORT := 1
-
-#
-# Syscall table generation for perf
-#
-
-out := $(OUTPUT)arch/loongarch/include/generated/asm
-header := $(out)/syscalls.c
-incpath := $(srctree)/tools
-sysdef := $(srctree)/tools/arch/loongarch/include/uapi/asm/unistd.h
-sysprf := $(srctree)/tools/perf/arch/loongarch/entry/syscalls/
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-$(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
-	$(Q)$(SHELL) '$(systbl)' '$(CC)' '$(HOSTCC)' $(incpath) $(sysdef) > $@
-
-clean::
-	$(call QUIET_CLEAN, loongarch) $(RM) $(header)
-
-archheaders: $(header)
+2
tools/perf/arch/loongarch/entry/syscalls/Kbuild
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_64.h
+3
tools/perf/arch/loongarch/entry/syscalls/Makefile.syscalls
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_64 +=
-45
tools/perf/arch/loongarch/entry/syscalls/mksyscalltbl
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# powerpc script.
-#
-# Author(s): Ming Wang <wangming01@loongson.cn>
-# Author(s): Huacai Chen <chenhuacai@loongson.cn>
-# Copyright (C) 2020-2023 Loongson Technology Corporation Limited
-
-gcc=$1
-hostcc=$2
-incpath=$3
-input=$4
-
-if ! test -r $input; then
-	echo "Could not read input file" >&2
-	exit 1
-fi
-
-create_sc_table()
-{
-	local sc nr max_nr
-
-	while read sc nr; do
-		printf "%s\n" "	[$nr] = \"$sc\","
-		max_nr=$nr
-	done
-
-	echo "#define SYSCALLTBL_LOONGARCH_MAX_ID $max_nr"
-}
-
-create_table()
-{
-	echo "#include \"$input\""
-	echo "static const char *const syscalltbl_loongarch[] = {"
-	create_sc_table
-	echo "};"
-}
-
-$gcc -E -dM -x c -I $incpath/include/uapi $input \
-	|awk '$2 ~ "__NR" && $3 !~ "__NR3264_" {
-		sub("^#define __NR(3264)?_", "");
-		print | "sort -k2 -n"}' \
-	|create_table
+2
tools/perf/arch/loongarch/include/syscall_table.h
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/syscall_table_64.h>
-18
tools/perf/arch/mips/Makefile
-# SPDX-License-Identifier: GPL-2.0
-# Syscall table generation for perf
-out := $(OUTPUT)arch/mips/include/generated/asm
-header := $(out)/syscalls_n64.c
-sysprf := $(srctree)/tools/perf/arch/mips/entry/syscalls
-sysdef := $(sysprf)/syscall_n64.tbl
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-$(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
-	$(Q)$(SHELL) '$(systbl)' $(sysdef) > $@
-
-clean::
-	$(call QUIET_CLEAN, mips) $(RM) $(header)
-
-archheaders: $(header)
+2
tools/perf/arch/mips/entry/syscalls/Kbuild
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_64.h
+5
tools/perf/arch/mips/entry/syscalls/Makefile.syscalls
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_64 += n64
+
+syscalltbl = $(srctree)/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
-32
tools/perf/arch/mips/entry/syscalls/mksyscalltbl
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# s390 script.
-#
-# Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-# Changed by: Tiezhu Yang <yangtiezhu@loongson.cn>
-
-SYSCALL_TBL=$1
-
-if ! test -r $SYSCALL_TBL; then
-	echo "Could not read input file" >&2
-	exit 1
-fi
-
-create_table()
-{
-	local max_nr nr abi sc discard
-
-	echo 'static const char *const syscalltbl_mips_n64[] = {'
-	while read nr abi sc discard; do
-		printf '\t[%d] = "%s",\n' $nr $sc
-		max_nr=$nr
-	done
-	echo '};'
-	echo "#define SYSCALLTBL_MIPS_N64_MAX_ID $max_nr"
-}
-
-grep -E "^[[:digit:]]+[[:space:]]+(n64)" $SYSCALL_TBL \
-	|sort -k1 -n \
-	|create_table
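The per-arch mksyscalltbl scripts removed above all performed the same transformation the generic syscall-table build now handles centrally: filter a syscall table for one ABI, sort by number, and emit a C array indexed by syscall number. A minimal sketch of that transformation in Python (hypothetical helper, not part of the perf build):

```python
# Sketch of the removed mksyscalltbl logic: syscall table text -> C name array.
# gen_syscall_array and its sample input are illustrative, not perf build code.

def gen_syscall_array(tbl_text: str, abi: str, arch: str) -> str:
    """Filter table lines for one ABI and emit a C array mapping nr -> name."""
    entries = []
    for line in tbl_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip comments and blank lines
            continue
        nr, line_abi, name = line.split()[:3]
        if line_abi == abi:
            entries.append((int(nr), name))
    entries.sort()                             # designated initializers, by nr
    out = [f"static const char *const syscalltbl_{arch}[] = {{"]
    out += [f'\t[{nr}] = "{name}",' for nr, name in entries]
    out.append("};")
    out.append(f"#define SYSCALLTBL_{arch.upper()}_MAX_ID {entries[-1][0]}")
    return "\n".join(out)

sample = """\
0 n64 read sys_read
1 n64 write sys_write
# 2 was reserved
5 n64 fstat sys_newfstat
"""
print(gen_syscall_array(sample, "n64", "mips_n64"))
```

Sparse numbers come out naturally because the emitted array uses designated initializers, matching what the removed shell scripts printed.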
+2
tools/perf/arch/mips/include/syscall_table.h
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/syscalls_64.h>
+3
tools/perf/arch/parisc/entry/syscalls/Kbuild
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_32.h
+syscall-y += syscalls_64.h
+6
tools/perf/arch/parisc/entry/syscalls/Makefile.syscalls
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_32 +=
+syscall_abis_64 +=
+
+syscalltbl = $(srctree)/tools/perf/arch/parisc/entry/syscalls/syscall.tbl
+463
tools/perf/arch/parisc/entry/syscalls/syscall.tbl
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for parisc
+#
+# The format is:
+# <number> <abi> <name> <entry point> <compat entry point>
+#
+# The <abi> can be common, 64, or 32 for this file.
+#
+0	common	restart_syscall		sys_restart_syscall
+1	common	exit			sys_exit
+2	common	fork			sys_fork_wrapper
+3	common	read			sys_read
+4	common	write			sys_write
+5	common	open			sys_open			compat_sys_open
+6	common	close			sys_close
+7	common	waitpid			sys_waitpid
+8	common	creat			sys_creat
+9	common	link			sys_link
+10	common	unlink			sys_unlink
+11	common	execve			sys_execve			compat_sys_execve
+12	common	chdir			sys_chdir
+13	32	time			sys_time32
+13	64	time			sys_time
+14	common	mknod			sys_mknod
+15	common	chmod			sys_chmod
+16	common	lchown			sys_lchown
+17	common	socket			sys_socket
+18	common	stat			sys_newstat			compat_sys_newstat
+19	common	lseek			sys_lseek			compat_sys_lseek
+20	common	getpid			sys_getpid
+21	common	mount			sys_mount
+22	common	bind			sys_bind
+23	common	setuid			sys_setuid
+24	common	getuid			sys_getuid
+25	32	stime			sys_stime32
+25	64	stime			sys_stime
+26	common	ptrace			sys_ptrace			compat_sys_ptrace
+27	common	alarm			sys_alarm
+28	common	fstat			sys_newfstat			compat_sys_newfstat
+29	common	pause			sys_pause
+30	32	utime			sys_utime32
+30	64	utime			sys_utime
+31	common	connect			sys_connect
+32	common	listen			sys_listen
+33	common	access			sys_access
+34	common	nice			sys_nice
+35	common	accept			sys_accept
+36	common	sync			sys_sync
+37	common	kill			sys_kill
+38	common	rename			sys_rename
+39	common	mkdir			sys_mkdir
+40	common	rmdir			sys_rmdir
+41	common	dup			sys_dup
+42	common	pipe			sys_pipe
+43	common	times			sys_times			compat_sys_times
+44	common	getsockname		sys_getsockname
+45	common	brk			sys_brk
+46	common	setgid			sys_setgid
+47	common	getgid			sys_getgid
+48	common	signal			sys_signal
+49	common	geteuid			sys_geteuid
+50	common	getegid			sys_getegid
+51	common	acct			sys_acct
+52	common	umount2			sys_umount
+53	common	getpeername		sys_getpeername
+54	common	ioctl			sys_ioctl			compat_sys_ioctl
+55	common	fcntl			sys_fcntl			compat_sys_fcntl
+56	common	socketpair		sys_socketpair
+57	common	setpgid			sys_setpgid
+58	common	send			sys_send
+59	common	uname			sys_newuname
+60	common	umask			sys_umask
+61	common	chroot			sys_chroot
+62	common	ustat			sys_ustat			compat_sys_ustat
+63	common	dup2			sys_dup2
+64	common	getppid			sys_getppid
+65	common	getpgrp			sys_getpgrp
+66	common	setsid			sys_setsid
+67	common	pivot_root		sys_pivot_root
+68	common	sgetmask		sys_sgetmask			sys32_unimplemented
+69	common	ssetmask		sys_ssetmask			sys32_unimplemented
+70	common	setreuid		sys_setreuid
+71	common	setregid		sys_setregid
+72	common	mincore			sys_mincore
+73	common	sigpending		sys_sigpending			compat_sys_sigpending
+74	common	sethostname		sys_sethostname
+75	common	setrlimit		sys_setrlimit			compat_sys_setrlimit
+76	common	getrlimit		sys_getrlimit			compat_sys_getrlimit
+77	common	getrusage		sys_getrusage			compat_sys_getrusage
+78	common	gettimeofday		sys_gettimeofday		compat_sys_gettimeofday
+79	common	settimeofday		sys_settimeofday		compat_sys_settimeofday
+80	common	getgroups		sys_getgroups
+81	common	setgroups		sys_setgroups
+82	common	sendto			sys_sendto
+83	common	symlink			sys_symlink
+84	common	lstat			sys_newlstat			compat_sys_newlstat
+85	common	readlink		sys_readlink
+86	common	uselib			sys_ni_syscall
+87	common	swapon			sys_swapon
+88	common	reboot			sys_reboot
+89	common	mmap2			sys_mmap2
+90	common	mmap			sys_mmap
+91	common	munmap			sys_munmap
+92	common	truncate		sys_truncate			compat_sys_truncate
+93	common	ftruncate		sys_ftruncate			compat_sys_ftruncate
+94	common	fchmod			sys_fchmod
+95	common	fchown			sys_fchown
+96	common	getpriority		sys_getpriority
+97	common	setpriority		sys_setpriority
+98	common	recv			sys_recv			compat_sys_recv
+99	common	statfs			sys_statfs			compat_sys_statfs
+100	common	fstatfs			sys_fstatfs			compat_sys_fstatfs
+101	common	stat64			sys_stat64
+# 102 was socketcall
+103	common	syslog			sys_syslog
+104	common	setitimer		sys_setitimer			compat_sys_setitimer
+105	common	getitimer		sys_getitimer			compat_sys_getitimer
+106	common	capget			sys_capget
+107	common	capset			sys_capset
+108	32	pread64			parisc_pread64
+108	64	pread64			sys_pread64
+109	32	pwrite64		parisc_pwrite64
+109	64	pwrite64		sys_pwrite64
+110	common	getcwd			sys_getcwd
+111	common	vhangup			sys_vhangup
+112	common	fstat64			sys_fstat64
+113	common	vfork			sys_vfork_wrapper
+114	common	wait4			sys_wait4			compat_sys_wait4
+115	common	swapoff			sys_swapoff
+116	common	sysinfo			sys_sysinfo			compat_sys_sysinfo
+117	common	shutdown		sys_shutdown
+118	common	fsync			sys_fsync
+119	common	madvise			parisc_madvise
+120	common	clone			sys_clone_wrapper
+121	common	setdomainname		sys_setdomainname
+122	common	sendfile		sys_sendfile			compat_sys_sendfile
+123	common	recvfrom		sys_recvfrom			compat_sys_recvfrom
+124	32	adjtimex		sys_adjtimex_time32
+124	64	adjtimex		sys_adjtimex
+125	common	mprotect		sys_mprotect
+126	common	sigprocmask		sys_sigprocmask			compat_sys_sigprocmask
+# 127 was create_module
+128	common	init_module		sys_init_module
+129	common	delete_module		sys_delete_module
+# 130 was get_kernel_syms
+131	common	quotactl		sys_quotactl
+132	common	getpgid			sys_getpgid
+133	common	fchdir			sys_fchdir
+134	common	bdflush			sys_ni_syscall
+135	common	sysfs			sys_sysfs
+136	32	personality		parisc_personality
+136	64	personality		sys_personality
+# 137 was afs_syscall
+138	common	setfsuid		sys_setfsuid
+139	common	setfsgid		sys_setfsgid
+140	common	_llseek			sys_llseek
+141	common	getdents		sys_getdents			compat_sys_getdents
+142	common	_newselect		sys_select			compat_sys_select
+143	common	flock			sys_flock
+144	common	msync			sys_msync
+145	common	readv			sys_readv
+146	common	writev			sys_writev
+147	common	getsid			sys_getsid
+148	common	fdatasync		sys_fdatasync
+149	common	_sysctl			sys_ni_syscall
+150	common	mlock			sys_mlock
+151	common	munlock			sys_munlock
+152	common	mlockall		sys_mlockall
+153	common	munlockall		sys_munlockall
+154	common	sched_setparam		sys_sched_setparam
+155	common	sched_getparam		sys_sched_getparam
+156	common	sched_setscheduler	sys_sched_setscheduler
+157	common	sched_getscheduler	sys_sched_getscheduler
+158	common	sched_yield		sys_sched_yield
+159	common	sched_get_priority_max	sys_sched_get_priority_max
+160	common	sched_get_priority_min	sys_sched_get_priority_min
+161	32	sched_rr_get_interval	sys_sched_rr_get_interval_time32
+161	64	sched_rr_get_interval	sys_sched_rr_get_interval
+162	32	nanosleep		sys_nanosleep_time32
+162	64	nanosleep		sys_nanosleep
+163	common	mremap			sys_mremap
+164	common	setresuid		sys_setresuid
+165	common	getresuid		sys_getresuid
+166	common	sigaltstack		sys_sigaltstack			compat_sys_sigaltstack
+# 167 was query_module
+168	common	poll			sys_poll
+# 169 was nfsservctl
+170	common	setresgid		sys_setresgid
+171	common	getresgid		sys_getresgid
+172	common	prctl			sys_prctl
+173	common	rt_sigreturn		sys_rt_sigreturn_wrapper
+174	common	rt_sigaction		sys_rt_sigaction		compat_sys_rt_sigaction
+175	common	rt_sigprocmask		sys_rt_sigprocmask		compat_sys_rt_sigprocmask
+176	common	rt_sigpending		sys_rt_sigpending		compat_sys_rt_sigpending
+177	32	rt_sigtimedwait		sys_rt_sigtimedwait_time32	compat_sys_rt_sigtimedwait_time32
+177	64	rt_sigtimedwait		sys_rt_sigtimedwait
+178	common	rt_sigqueueinfo		sys_rt_sigqueueinfo		compat_sys_rt_sigqueueinfo
+179	common	rt_sigsuspend		sys_rt_sigsuspend		compat_sys_rt_sigsuspend
+180	common	chown			sys_chown
+181	common	setsockopt		sys_setsockopt			sys_setsockopt
+182	common	getsockopt		sys_getsockopt			sys_getsockopt
+183	common	sendmsg			sys_sendmsg			compat_sys_sendmsg
+184	common	recvmsg			sys_recvmsg			compat_sys_recvmsg
+185	common	semop			sys_semop
+186	common	semget			sys_semget
+187	common	semctl			sys_semctl			compat_sys_semctl
+188	common	msgsnd			sys_msgsnd			compat_sys_msgsnd
+189	common	msgrcv			sys_msgrcv			compat_sys_msgrcv
+190	common	msgget			sys_msgget
+191	common	msgctl			sys_msgctl			compat_sys_msgctl
+192	common	shmat			sys_shmat			compat_sys_shmat
+193	common	shmdt			sys_shmdt
+194	common	shmget			sys_shmget
+195	common	shmctl			sys_shmctl			compat_sys_shmctl
+# 196 was getpmsg
+# 197 was putpmsg
+198	common	lstat64			sys_lstat64
+199	32	truncate64		parisc_truncate64
+199	64	truncate64		sys_truncate64
+200	32	ftruncate64		parisc_ftruncate64
+200	64	ftruncate64		sys_ftruncate64
+201	common	getdents64		sys_getdents64
+202	common	fcntl64			sys_fcntl64			compat_sys_fcntl64
+# 203 was attrctl
+# 204 was acl_get
+# 205 was acl_set
+206	common	gettid			sys_gettid
+207	32	readahead		parisc_readahead
+207	64	readahead		sys_readahead
+208	common	tkill			sys_tkill
+209	common	sendfile64		sys_sendfile64			compat_sys_sendfile64
+210	32	futex			sys_futex_time32
+210	64	futex			sys_futex
+211	common	sched_setaffinity	sys_sched_setaffinity		compat_sys_sched_setaffinity
+212	common	sched_getaffinity	sys_sched_getaffinity		compat_sys_sched_getaffinity
+# 213 was set_thread_area
+# 214 was get_thread_area
+215	common	io_setup		sys_io_setup			compat_sys_io_setup
+216	common	io_destroy		sys_io_destroy
+217	32	io_getevents		sys_io_getevents_time32
+217	64	io_getevents		sys_io_getevents
+218	common	io_submit		sys_io_submit			compat_sys_io_submit
+219	common	io_cancel		sys_io_cancel
+# 220 was alloc_hugepages
+# 221 was free_hugepages
+222	common	exit_group		sys_exit_group
+223	common	lookup_dcookie		sys_ni_syscall
+224	common	epoll_create		sys_epoll_create
+225	common	epoll_ctl		sys_epoll_ctl
+226	common	epoll_wait		sys_epoll_wait
+227	common	remap_file_pages	sys_remap_file_pages
+228	32	semtimedop		sys_semtimedop_time32
+228	64	semtimedop		sys_semtimedop
+229	common	mq_open			sys_mq_open			compat_sys_mq_open
+230	common	mq_unlink		sys_mq_unlink
+231	32	mq_timedsend		sys_mq_timedsend_time32
+231	64	mq_timedsend		sys_mq_timedsend
+232	32	mq_timedreceive		sys_mq_timedreceive_time32
+232	64	mq_timedreceive		sys_mq_timedreceive
+233	common	mq_notify		sys_mq_notify			compat_sys_mq_notify
+234	common	mq_getsetattr		sys_mq_getsetattr		compat_sys_mq_getsetattr
+235	common	waitid			sys_waitid			compat_sys_waitid
+236	32	fadvise64_64		parisc_fadvise64_64
+236	64	fadvise64_64		sys_fadvise64_64
+237	common	set_tid_address		sys_set_tid_address
+238	common	setxattr		sys_setxattr
+239	common	lsetxattr		sys_lsetxattr
+240	common	fsetxattr		sys_fsetxattr
+241	common	getxattr		sys_getxattr
+242	common	lgetxattr		sys_lgetxattr
+243	common	fgetxattr		sys_fgetxattr
+244	common	listxattr		sys_listxattr
+245	common	llistxattr		sys_llistxattr
+246	common	flistxattr		sys_flistxattr
+247	common	removexattr		sys_removexattr
+248	common	lremovexattr		sys_lremovexattr
+249	common	fremovexattr		sys_fremovexattr
+250	common	timer_create		sys_timer_create		compat_sys_timer_create
+251	32	timer_settime		sys_timer_settime32
+251	64	timer_settime		sys_timer_settime
+252	32	timer_gettime		sys_timer_gettime32
+252	64	timer_gettime		sys_timer_gettime
+253	common	timer_getoverrun	sys_timer_getoverrun
+254	common	timer_delete		sys_timer_delete
+255	32	clock_settime		sys_clock_settime32
+255	64	clock_settime		sys_clock_settime
+256	32	clock_gettime		sys_clock_gettime32
+256	64	clock_gettime		sys_clock_gettime
+257	32	clock_getres		sys_clock_getres_time32
+257	64	clock_getres		sys_clock_getres
+258	32	clock_nanosleep		sys_clock_nanosleep_time32
+258	64	clock_nanosleep		sys_clock_nanosleep
+259	common	tgkill			sys_tgkill
+260	common	mbind			sys_mbind
+261	common	get_mempolicy		sys_get_mempolicy
+262	common	set_mempolicy		sys_set_mempolicy
+# 263 was vserver
+264	common	add_key			sys_add_key
+265	common	request_key		sys_request_key
+266	common	keyctl			sys_keyctl			compat_sys_keyctl
+267	common	ioprio_set		sys_ioprio_set
+268	common	ioprio_get		sys_ioprio_get
+269	common	inotify_init		sys_inotify_init
+270	common	inotify_add_watch	sys_inotify_add_watch
+271	common	inotify_rm_watch	sys_inotify_rm_watch
+272	common	migrate_pages		sys_migrate_pages
+273	32	pselect6		sys_pselect6_time32		compat_sys_pselect6_time32
+273	64	pselect6		sys_pselect6
+274	32	ppoll			sys_ppoll_time32		compat_sys_ppoll_time32
+274	64	ppoll			sys_ppoll
+275	common	openat			sys_openat			compat_sys_openat
+276	common	mkdirat			sys_mkdirat
+277	common	mknodat			sys_mknodat
+278	common	fchownat		sys_fchownat
+279	32	futimesat		sys_futimesat_time32
+279	64	futimesat		sys_futimesat
+280	common	fstatat64		sys_fstatat64
+281	common	unlinkat		sys_unlinkat
+282	common	renameat		sys_renameat
+283	common	linkat			sys_linkat
+284	common	symlinkat		sys_symlinkat
+285	common	readlinkat		sys_readlinkat
+286	common	fchmodat		sys_fchmodat
+287	common	faccessat		sys_faccessat
+288	common	unshare			sys_unshare
+289	common	set_robust_list		sys_set_robust_list		compat_sys_set_robust_list
+290	common	get_robust_list		sys_get_robust_list		compat_sys_get_robust_list
+291	common	splice			sys_splice
+292	32	sync_file_range		parisc_sync_file_range
+292	64	sync_file_range		sys_sync_file_range
+293	common	tee			sys_tee
+294	common	vmsplice		sys_vmsplice
+295	common	move_pages		sys_move_pages
+296	common	getcpu			sys_getcpu
+297	common	epoll_pwait		sys_epoll_pwait			compat_sys_epoll_pwait
+298	common	statfs64		sys_statfs64			compat_sys_statfs64
+299	common	fstatfs64		sys_fstatfs64			compat_sys_fstatfs64
+300	common	kexec_load		sys_kexec_load			compat_sys_kexec_load
+301	32	utimensat		sys_utimensat_time32
+301	64	utimensat		sys_utimensat
+302	common	signalfd		sys_signalfd			compat_sys_signalfd
+# 303 was timerfd
+304	common	eventfd			sys_eventfd
+305	32	fallocate		parisc_fallocate
+305	64	fallocate		sys_fallocate
+306	common	timerfd_create		parisc_timerfd_create
+307	32	timerfd_settime		sys_timerfd_settime32
+307	64	timerfd_settime		sys_timerfd_settime
+308	32	timerfd_gettime		sys_timerfd_gettime32
+308	64	timerfd_gettime		sys_timerfd_gettime
+309	common	signalfd4		parisc_signalfd4		parisc_compat_signalfd4
+310	common	eventfd2		parisc_eventfd2
+311	common	epoll_create1		sys_epoll_create1
+312	common	dup3			sys_dup3
+313	common	pipe2			parisc_pipe2
+314	common	inotify_init1		parisc_inotify_init1
+315	common	preadv			sys_preadv			compat_sys_preadv
+316	common	pwritev			sys_pwritev			compat_sys_pwritev
+317	common	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo		compat_sys_rt_tgsigqueueinfo
+318	common	perf_event_open		sys_perf_event_open
+319	32	recvmmsg		sys_recvmmsg_time32		compat_sys_recvmmsg_time32
+319	64	recvmmsg		sys_recvmmsg
+320	common	accept4			sys_accept4
+321	common	prlimit64		sys_prlimit64
+322	common	fanotify_init		sys_fanotify_init
+323	common	fanotify_mark		sys_fanotify_mark		compat_sys_fanotify_mark
+324	32	clock_adjtime		sys_clock_adjtime32
+324	64	clock_adjtime		sys_clock_adjtime
+325	common	name_to_handle_at	sys_name_to_handle_at
+326	common	open_by_handle_at	sys_open_by_handle_at		compat_sys_open_by_handle_at
+327	common	syncfs			sys_syncfs
+328	common	setns			sys_setns
+329	common	sendmmsg		sys_sendmmsg			compat_sys_sendmmsg
+330	common	process_vm_readv	sys_process_vm_readv
+331	common	process_vm_writev	sys_process_vm_writev
+332	common	kcmp			sys_kcmp
+333	common	finit_module		sys_finit_module
+334	common	sched_setattr		sys_sched_setattr
+335	common	sched_getattr		sys_sched_getattr
+336	32	utimes			sys_utimes_time32
+336	64	utimes			sys_utimes
+337	common	renameat2		sys_renameat2
+338	common	seccomp			sys_seccomp
+339	common	getrandom		sys_getrandom
+340	common	memfd_create		sys_memfd_create
+341	common	bpf			sys_bpf
+342	common	execveat		sys_execveat			compat_sys_execveat
+343	common	membarrier		sys_membarrier
+344	common	userfaultfd		parisc_userfaultfd
+345	common	mlock2			sys_mlock2
+346	common	copy_file_range		sys_copy_file_range
+347	common	preadv2			sys_preadv2			compat_sys_preadv2
+348	common	pwritev2		sys_pwritev2			compat_sys_pwritev2
+349	common	statx			sys_statx
+350	32	io_pgetevents		sys_io_pgetevents_time32	compat_sys_io_pgetevents
+350	64	io_pgetevents		sys_io_pgetevents
+351	common	pkey_mprotect		sys_pkey_mprotect
+352	common	pkey_alloc		sys_pkey_alloc
+353	common	pkey_free		sys_pkey_free
+354	common	rseq			sys_rseq
+355	common	kexec_file_load		sys_kexec_file_load		sys_kexec_file_load
+356	common	cacheflush		sys_cacheflush
+# up to 402 is unassigned and reserved for arch specific syscalls
+403	32	clock_gettime64		sys_clock_gettime		sys_clock_gettime
+404	32	clock_settime64		sys_clock_settime		sys_clock_settime
+405	32	clock_adjtime64		sys_clock_adjtime		sys_clock_adjtime
+406	32	clock_getres_time64	sys_clock_getres		sys_clock_getres
+407	32	clock_nanosleep_time64	sys_clock_nanosleep		sys_clock_nanosleep
+408	32	timer_gettime64		sys_timer_gettime		sys_timer_gettime
+409	32	timer_settime64		sys_timer_settime		sys_timer_settime
+410	32	timerfd_gettime64	sys_timerfd_gettime		sys_timerfd_gettime
+411	32	timerfd_settime64	sys_timerfd_settime		sys_timerfd_settime
+412	32	utimensat_time64	sys_utimensat			sys_utimensat
+413	32	pselect6_time64		sys_pselect6			compat_sys_pselect6_time64
+414	32	ppoll_time64		sys_ppoll			compat_sys_ppoll_time64
+416	32	io_pgetevents_time64	sys_io_pgetevents		compat_sys_io_pgetevents_time64
+417	32	recvmmsg_time64		sys_recvmmsg			compat_sys_recvmmsg_time64
+418	32	mq_timedsend_time64	sys_mq_timedsend		sys_mq_timedsend
+419	32	mq_timedreceive_time64	sys_mq_timedreceive		sys_mq_timedreceive
+420	32	semtimedop_time64	sys_semtimedop			sys_semtimedop
+421	32	rt_sigtimedwait_time64	sys_rt_sigtimedwait		compat_sys_rt_sigtimedwait_time64
+422	32	futex_time64		sys_futex			sys_futex
+423	32	sched_rr_get_interval_time64	sys_sched_rr_get_interval	sys_sched_rr_get_interval
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+425	common	io_uring_setup		sys_io_uring_setup
+426	common	io_uring_enter		sys_io_uring_enter
+427	common	io_uring_register	sys_io_uring_register
+428	common	open_tree		sys_open_tree
+429	common	move_mount		sys_move_mount
+430	common	fsopen			sys_fsopen
+431	common	fsconfig		sys_fsconfig
+432	common	fsmount			sys_fsmount
+433	common	fspick			sys_fspick
+434	common	pidfd_open		sys_pidfd_open
+435	common	clone3			sys_clone3_wrapper
+436	common	close_range		sys_close_range
+437	common	openat2			sys_openat2
+438	common	pidfd_getfd		sys_pidfd_getfd
+439	common	faccessat2		sys_faccessat2
+440	common	process_madvise		sys_process_madvise
+441	common	epoll_pwait2		sys_epoll_pwait2		compat_sys_epoll_pwait2
+442	common	mount_setattr		sys_mount_setattr
+443	common	quotactl_fd		sys_quotactl_fd
+444	common	landlock_create_ruleset	sys_landlock_create_ruleset
+445	common	landlock_add_rule	sys_landlock_add_rule
+446	common	landlock_restrict_self	sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448	common	process_mrelease	sys_process_mrelease
+449	common	futex_waitv		sys_futex_waitv
+450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	cachestat		sys_cachestat
+452	common	fchmodat2		sys_fchmodat2
+453	common	map_shadow_stack	sys_map_shadow_stack
+454	common	futex_wake		sys_futex_wake
+455	common	futex_wait		sys_futex_wait
+456	common	futex_requeue		sys_futex_requeue
+457	common	statmount		sys_statmount
+458	common	listmount		sys_listmount
+459	common	lsm_get_self_attr	sys_lsm_get_self_attr
+460	common	lsm_set_self_attr	sys_lsm_set_self_attr
+461	common	lsm_list_modules	sys_lsm_list_modules
+462	common	mseal			sys_mseal
+8
tools/perf/arch/parisc/include/syscall_table.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/bitsperlong.h>
+
+#if __BITS_PER_LONG == 64
+#include <asm/syscalls_64.h>
+#else
+#include <asm/syscalls_32.h>
+#endif
-25
tools/perf/arch/powerpc/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 HAVE_KVM_STAT_SUPPORT := 1
 PERF_HAVE_JITDUMP := 1
-
-#
-# Syscall table generation for perf
-#
-
-out      := $(OUTPUT)arch/powerpc/include/generated/asm
-header32 := $(out)/syscalls_32.c
-header64 := $(out)/syscalls_64.c
-sysprf   := $(srctree)/tools/perf/arch/powerpc/entry/syscalls
-sysdef   := $(sysprf)/syscall.tbl
-systbl   := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-$(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header64): $(sysdef) $(systbl)
-	$(Q)$(SHELL) '$(systbl)' '64' $(sysdef) > $@
-
-$(header32): $(sysdef) $(systbl)
-	$(Q)$(SHELL) '$(systbl)' '32' $(sysdef) > $@
-
-clean::
-	$(call QUIET_CLEAN, powerpc) $(RM) $(header32) $(header64)
-
-archheaders: $(header32) $(header64)
+3
tools/perf/arch/powerpc/entry/syscalls/Kbuild
···
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_32.h
+syscall-y += syscalls_64.h
+6
tools/perf/arch/powerpc/entry/syscalls/Makefile.syscalls
···
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_32 += nospu
+syscall_abis_64 += nospu
+
+syscalltbl = $(srctree)/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
-39
tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl
···
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# s390 script.
-#
-# Copyright IBM Corp. 2017
-# Author(s):  Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-# Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
-
-wordsize=$1
-SYSCALL_TBL=$2
-
-if ! test -r $SYSCALL_TBL; then
-	echo "Could not read input file" >&2
-	exit 1
-fi
-
-create_table()
-{
-	local wordsize=$1
-	local max_nr nr abi sc discard
-	max_nr=-1
-	nr=0
-
-	echo "static const char *const syscalltbl_powerpc_${wordsize}[] = {"
-	while read nr abi sc discard; do
-		if [ "$max_nr" -lt "$nr" ]; then
-			printf '\t[%d] = "%s",\n' $nr $sc
-			max_nr=$nr
-		fi
-	done
-	echo '};'
-	echo "#define SYSCALLTBL_POWERPC_${wordsize}_MAX_ID $max_nr"
-}
-
-grep -E "^[[:digit:]]+[[:space:]]+(common|spu|nospu|${wordsize})" $SYSCALL_TBL \
-	|sort -k1 -n \
-	|create_table ${wordsize}
+8
tools/perf/arch/powerpc/include/syscall_table.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/bitsperlong.h>
+
+#if __BITS_PER_LONG == 64
+#include <asm/syscalls_64.h>
+#else
+#include <asm/syscalls_32.h>
+#endif
+2 -1
tools/perf/arch/powerpc/util/perf_regs.c
···
 
 #define PVR_POWER9  0x004E
 #define PVR_POWER10 0x0080
+#define PVR_POWER11 0x0082
 
 static const struct sample_reg sample_reg_masks[] = {
 	SMPL_REG(r0, PERF_REG_POWERPC_R0),
···
 	version = (((mfspr(SPRN_PVR)) >> 16) & 0xFFFF);
 	if (version == PVR_POWER9)
 		extended_mask = PERF_REG_PMU_MASK_300;
-	else if (version == PVR_POWER10)
+	else if ((version == PVR_POWER10) || (version == PVR_POWER11))
 		extended_mask = PERF_REG_PMU_MASK_31;
 	else
 		return mask;
-22
tools/perf/arch/riscv/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 PERF_HAVE_JITDUMP := 1
 HAVE_KVM_STAT_SUPPORT := 1
-
-#
-# Syscall table generation for perf
-#
-
-out     := $(OUTPUT)arch/riscv/include/generated/asm
-header  := $(out)/syscalls.c
-incpath := $(srctree)/tools
-sysdef  := $(srctree)/tools/arch/riscv/include/uapi/asm/unistd.h
-sysprf  := $(srctree)/tools/perf/arch/riscv/entry/syscalls/
-systbl  := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-$(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
-	$(Q)$(SHELL) '$(systbl)' '$(CC)' '$(HOSTCC)' $(incpath) $(sysdef) > $@
-
-clean::
-	$(call QUIET_CLEAN, riscv) $(RM) $(header)
-
-archheaders: $(header)
+2
tools/perf/arch/riscv/entry/syscalls/Kbuild
···
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_64.h
+4
tools/perf/arch/riscv/entry/syscalls/Makefile.syscalls
···
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_32 += riscv memfd_secret
+syscall_abis_64 += riscv rlimit memfd_secret
-47
tools/perf/arch/riscv/entry/syscalls/mksyscalltbl
···
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# powerpc script.
-#
-# Copyright IBM Corp. 2017
-# Author(s):  Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-# Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
-# Changed by: Kim Phillips <kim.phillips@arm.com>
-# Changed by: Björn Töpel <bjorn@rivosinc.com>
-
-gcc=$1
-hostcc=$2
-incpath=$3
-input=$4
-
-if ! test -r $input; then
-	echo "Could not read input file" >&2
-	exit 1
-fi
-
-create_sc_table()
-{
-	local sc nr max_nr
-
-	while read sc nr; do
-		printf "%s\n" "	[$nr] = \"$sc\","
-		max_nr=$nr
-	done
-
-	echo "#define SYSCALLTBL_RISCV_MAX_ID $max_nr"
-}
-
-create_table()
-{
-	echo "#include \"$input\""
-	echo "static const char *const syscalltbl_riscv[] = {"
-	create_sc_table
-	echo "};"
-}
-
-$gcc -E -dM -x c -I $incpath/include/uapi $input \
-	|awk '$2 ~ "__NR" && $3 !~ "__NR3264_" {
-		sub("^#define __NR(3264)?_", "");
-		print | "sort -k2 -n"}' \
-	|create_table
+8
tools/perf/arch/riscv/include/syscall_table.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/bitsperlong.h>
+
+#if __BITS_PER_LONG == 64
+#include <asm/syscalls_64.h>
+#else
+#include <asm/syscalls_32.h>
+#endif
-21
tools/perf/arch/s390/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
 HAVE_KVM_STAT_SUPPORT := 1
 PERF_HAVE_JITDUMP := 1
-
-#
-# Syscall table generation for perf
-#
-
-out    := $(OUTPUT)arch/s390/include/generated/asm
-header := $(out)/syscalls_64.c
-sysprf := $(srctree)/tools/perf/arch/s390/entry/syscalls
-sysdef := $(sysprf)/syscall.tbl
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-$(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
-	$(Q)$(SHELL) '$(systbl)' $(sysdef) > $@
-
-clean::
-	$(call QUIET_CLEAN, s390) $(RM) $(header)
-
-archheaders: $(header)
+2
tools/perf/arch/s390/entry/syscalls/Kbuild
···
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_64.h
+5
tools/perf/arch/s390/entry/syscalls/Makefile.syscalls
···
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_64 += renameat rlimit memfd_secret
+
+syscalltbl = $(srctree)/tools/perf/arch/s390/entry/syscalls/syscall.tbl
-32
tools/perf/arch/s390/entry/syscalls/mksyscalltbl
···
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf
-#
-# Copyright IBM Corp. 2017, 2018
-# Author(s):  Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-#
-
-SYSCALL_TBL=$1
-
-if ! test -r $SYSCALL_TBL; then
-	echo "Could not read input file" >&2
-	exit 1
-fi
-
-create_table()
-{
-	local max_nr nr abi sc discard
-
-	echo 'static const char *const syscalltbl_s390_64[] = {'
-	while read nr abi sc discard; do
-		printf '\t[%d] = "%s",\n' $nr $sc
-		max_nr=$nr
-	done
-	echo '};'
-	echo "#define SYSCALLTBL_S390_64_MAX_ID $max_nr"
-}
-
-grep -E "^[[:digit:]]+[[:space:]]+(common|64)" $SYSCALL_TBL \
-	|sort -k1 -n \
-	|create_table
+2
tools/perf/arch/s390/include/syscall_table.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/syscalls_64.h>
+2
tools/perf/arch/sh/entry/syscalls/Kbuild
···
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_32.h
+4
tools/perf/arch/sh/entry/syscalls/Makefile.syscalls
···
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_32 +=
+syscalltbl = $(srctree)/tools/perf/arch/sh/entry/syscalls/syscall.tbl
+472
tools/perf/arch/sh/entry/syscalls/syscall.tbl
···
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for sh
+#
+# The format is:
+# <number> <abi> <name> <entry point>
+#
+# The <abi> is always "common" for this file
+#
+0	common	restart_syscall	sys_restart_syscall
+1	common	exit	sys_exit
+2	common	fork	sys_fork
+3	common	read	sys_read
+4	common	write	sys_write
+5	common	open	sys_open
+6	common	close	sys_close
+7	common	waitpid	sys_waitpid
+8	common	creat	sys_creat
+9	common	link	sys_link
+10	common	unlink	sys_unlink
+11	common	execve	sys_execve
+12	common	chdir	sys_chdir
+13	common	time	sys_time32
+14	common	mknod	sys_mknod
+15	common	chmod	sys_chmod
+16	common	lchown	sys_lchown16
+# 17 was break
+18	common	oldstat	sys_stat
+19	common	lseek	sys_lseek
+20	common	getpid	sys_getpid
+21	common	mount	sys_mount
+22	common	umount	sys_oldumount
+23	common	setuid	sys_setuid16
+24	common	getuid	sys_getuid16
+25	common	stime	sys_stime32
+26	common	ptrace	sys_ptrace
+27	common	alarm	sys_alarm
+28	common	oldfstat	sys_fstat
+29	common	pause	sys_pause
+30	common	utime	sys_utime32
+# 31 was stty
+# 32 was gtty
+33	common	access	sys_access
+34	common	nice	sys_nice
+# 35 was ftime
+36	common	sync	sys_sync
+37	common	kill	sys_kill
+38	common	rename	sys_rename
+39	common	mkdir	sys_mkdir
+40	common	rmdir	sys_rmdir
+41	common	dup	sys_dup
+42	common	pipe	sys_sh_pipe
+43	common	times	sys_times
+# 44 was prof
+45	common	brk	sys_brk
+46	common	setgid	sys_setgid16
+47	common	getgid	sys_getgid16
+48	common	signal	sys_signal
+49	common	geteuid	sys_geteuid16
+50	common	getegid	sys_getegid16
+51	common	acct	sys_acct
+52	common	umount2	sys_umount
+# 53 was lock
+54	common	ioctl	sys_ioctl
+55	common	fcntl	sys_fcntl
+# 56 was mpx
+57	common	setpgid	sys_setpgid
+# 58 was ulimit
+# 59 was olduname
+60	common	umask	sys_umask
+61	common	chroot	sys_chroot
+62	common	ustat	sys_ustat
+63	common	dup2	sys_dup2
+64	common	getppid	sys_getppid
+65	common	getpgrp	sys_getpgrp
+66	common	setsid	sys_setsid
+67	common	sigaction	sys_sigaction
+68	common	sgetmask	sys_sgetmask
+69	common	ssetmask	sys_ssetmask
+70	common	setreuid	sys_setreuid16
+71	common	setregid	sys_setregid16
+72	common	sigsuspend	sys_sigsuspend
+73	common	sigpending	sys_sigpending
+74	common	sethostname	sys_sethostname
+75	common	setrlimit	sys_setrlimit
+76	common	getrlimit	sys_old_getrlimit
+77	common	getrusage	sys_getrusage
+78	common	gettimeofday	sys_gettimeofday
+79	common	settimeofday	sys_settimeofday
+80	common	getgroups	sys_getgroups16
+81	common	setgroups	sys_setgroups16
+# 82 was select
+83	common	symlink	sys_symlink
+84	common	oldlstat	sys_lstat
+85	common	readlink	sys_readlink
+86	common	uselib	sys_uselib
+87	common	swapon	sys_swapon
+88	common	reboot	sys_reboot
+89	common	readdir	sys_old_readdir
+90	common	mmap	old_mmap
+91	common	munmap	sys_munmap
+92	common	truncate	sys_truncate
+93	common	ftruncate	sys_ftruncate
+94	common	fchmod	sys_fchmod
+95	common	fchown	sys_fchown16
+96	common	getpriority	sys_getpriority
+97	common	setpriority	sys_setpriority
+# 98 was profil
+99	common	statfs	sys_statfs
+100	common	fstatfs	sys_fstatfs
+# 101 was ioperm
+102	common	socketcall	sys_socketcall
+103	common	syslog	sys_syslog
+104	common	setitimer	sys_setitimer
+105	common	getitimer	sys_getitimer
+106	common	stat	sys_newstat
+107	common	lstat	sys_newlstat
+108	common	fstat	sys_newfstat
+109	common	olduname	sys_uname
+# 110 was iopl
+111	common	vhangup	sys_vhangup
+# 112 was idle
+# 113 was vm86old
+114	common	wait4	sys_wait4
+115	common	swapoff	sys_swapoff
+116	common	sysinfo	sys_sysinfo
+117	common	ipc	sys_ipc
+118	common	fsync	sys_fsync
+119	common	sigreturn	sys_sigreturn
+120	common	clone	sys_clone
+121	common	setdomainname	sys_setdomainname
+122	common	uname	sys_newuname
+123	common	cacheflush	sys_cacheflush
+124	common	adjtimex	sys_adjtimex_time32
+125	common	mprotect	sys_mprotect
+126	common	sigprocmask	sys_sigprocmask
+# 127 was create_module
+128	common	init_module	sys_init_module
+129	common	delete_module	sys_delete_module
+# 130 was get_kernel_syms
+131	common	quotactl	sys_quotactl
+132	common	getpgid	sys_getpgid
+133	common	fchdir	sys_fchdir
+134	common	bdflush	sys_ni_syscall
+135	common	sysfs	sys_sysfs
+136	common	personality	sys_personality
+# 137 was afs_syscall
+138	common	setfsuid	sys_setfsuid16
+139	common	setfsgid	sys_setfsgid16
+140	common	_llseek	sys_llseek
+141	common	getdents	sys_getdents
+142	common	_newselect	sys_select
+143	common	flock	sys_flock
+144	common	msync	sys_msync
+145	common	readv	sys_readv
+146	common	writev	sys_writev
+147	common	getsid	sys_getsid
+148	common	fdatasync	sys_fdatasync
+149	common	_sysctl	sys_ni_syscall
+150	common	mlock	sys_mlock
+151	common	munlock	sys_munlock
+152	common	mlockall	sys_mlockall
+153	common	munlockall	sys_munlockall
+154	common	sched_setparam	sys_sched_setparam
+155	common	sched_getparam	sys_sched_getparam
+156	common	sched_setscheduler	sys_sched_setscheduler
+157	common	sched_getscheduler	sys_sched_getscheduler
+158	common	sched_yield	sys_sched_yield
+159	common	sched_get_priority_max	sys_sched_get_priority_max
+160	common	sched_get_priority_min	sys_sched_get_priority_min
+161	common	sched_rr_get_interval	sys_sched_rr_get_interval_time32
+162	common	nanosleep	sys_nanosleep_time32
+163	common	mremap	sys_mremap
+164	common	setresuid	sys_setresuid16
+165	common	getresuid	sys_getresuid16
+# 166 was vm86
+# 167 was query_module
+168	common	poll	sys_poll
+169	common	nfsservctl	sys_ni_syscall
+170	common	setresgid	sys_setresgid16
+171	common	getresgid	sys_getresgid16
+172	common	prctl	sys_prctl
+173	common	rt_sigreturn	sys_rt_sigreturn
+174	common	rt_sigaction	sys_rt_sigaction
+175	common	rt_sigprocmask	sys_rt_sigprocmask
+176	common	rt_sigpending	sys_rt_sigpending
+177	common	rt_sigtimedwait	sys_rt_sigtimedwait_time32
+178	common	rt_sigqueueinfo	sys_rt_sigqueueinfo
+179	common	rt_sigsuspend	sys_rt_sigsuspend
+180	common	pread64	sys_pread_wrapper
+181	common	pwrite64	sys_pwrite_wrapper
+182	common	chown	sys_chown16
+183	common	getcwd	sys_getcwd
+184	common	capget	sys_capget
+185	common	capset	sys_capset
+186	common	sigaltstack	sys_sigaltstack
+187	common	sendfile	sys_sendfile
+# 188 is reserved for getpmsg
+# 189 is reserved for putpmsg
+190	common	vfork	sys_vfork
+191	common	ugetrlimit	sys_getrlimit
+192	common	mmap2	sys_mmap2
+193	common	truncate64	sys_truncate64
+194	common	ftruncate64	sys_ftruncate64
+195	common	stat64	sys_stat64
+196	common	lstat64	sys_lstat64
+197	common	fstat64	sys_fstat64
+198	common	lchown32	sys_lchown
+199	common	getuid32	sys_getuid
+200	common	getgid32	sys_getgid
+201	common	geteuid32	sys_geteuid
+202	common	getegid32	sys_getegid
+203	common	setreuid32	sys_setreuid
+204	common	setregid32	sys_setregid
+205	common	getgroups32	sys_getgroups
+206	common	setgroups32	sys_setgroups
+207	common	fchown32	sys_fchown
+208	common	setresuid32	sys_setresuid
+209	common	getresuid32	sys_getresuid
+210	common	setresgid32	sys_setresgid
+211	common	getresgid32	sys_getresgid
+212	common	chown32	sys_chown
+213	common	setuid32	sys_setuid
+214	common	setgid32	sys_setgid
+215	common	setfsuid32	sys_setfsuid
+216	common	setfsgid32	sys_setfsgid
+217	common	pivot_root	sys_pivot_root
+218	common	mincore	sys_mincore
+219	common	madvise	sys_madvise
+220	common	getdents64	sys_getdents64
+221	common	fcntl64	sys_fcntl64
+# 222 is reserved for tux
+# 223 is unused
+224	common	gettid	sys_gettid
+225	common	readahead	sys_readahead
+226	common	setxattr	sys_setxattr
+227	common	lsetxattr	sys_lsetxattr
+228	common	fsetxattr	sys_fsetxattr
+229	common	getxattr	sys_getxattr
+230	common	lgetxattr	sys_lgetxattr
+231	common	fgetxattr	sys_fgetxattr
+232	common	listxattr	sys_listxattr
+233	common	llistxattr	sys_llistxattr
+234	common	flistxattr	sys_flistxattr
+235	common	removexattr	sys_removexattr
+236	common	lremovexattr	sys_lremovexattr
+237	common	fremovexattr	sys_fremovexattr
+238	common	tkill	sys_tkill
+239	common	sendfile64	sys_sendfile64
+240	common	futex	sys_futex_time32
+241	common	sched_setaffinity	sys_sched_setaffinity
+242	common	sched_getaffinity	sys_sched_getaffinity
+# 243 is reserved for set_thread_area
+# 244 is reserved for get_thread_area
+245	common	io_setup	sys_io_setup
+246	common	io_destroy	sys_io_destroy
+247	common	io_getevents	sys_io_getevents_time32
+248	common	io_submit	sys_io_submit
+249	common	io_cancel	sys_io_cancel
+250	common	fadvise64	sys_fadvise64
+# 251 is unused
+252	common	exit_group	sys_exit_group
+253	common	lookup_dcookie	sys_ni_syscall
+254	common	epoll_create	sys_epoll_create
+255	common	epoll_ctl	sys_epoll_ctl
+256	common	epoll_wait	sys_epoll_wait
+257	common	remap_file_pages	sys_remap_file_pages
+258	common	set_tid_address	sys_set_tid_address
+259	common	timer_create	sys_timer_create
+260	common	timer_settime	sys_timer_settime32
+261	common	timer_gettime	sys_timer_gettime32
+262	common	timer_getoverrun	sys_timer_getoverrun
+263	common	timer_delete	sys_timer_delete
+264	common	clock_settime	sys_clock_settime32
+265	common	clock_gettime	sys_clock_gettime32
+266	common	clock_getres	sys_clock_getres_time32
+267	common	clock_nanosleep	sys_clock_nanosleep_time32
+268	common	statfs64	sys_statfs64
+269	common	fstatfs64	sys_fstatfs64
+270	common	tgkill	sys_tgkill
+271	common	utimes	sys_utimes_time32
+272	common	fadvise64_64	sys_fadvise64_64_wrapper
+# 273 is reserved for vserver
+274	common	mbind	sys_mbind
+275	common	get_mempolicy	sys_get_mempolicy
+276	common	set_mempolicy	sys_set_mempolicy
+277	common	mq_open	sys_mq_open
+278	common	mq_unlink	sys_mq_unlink
+279	common	mq_timedsend	sys_mq_timedsend_time32
+280	common	mq_timedreceive	sys_mq_timedreceive_time32
+281	common	mq_notify	sys_mq_notify
+282	common	mq_getsetattr	sys_mq_getsetattr
+283	common	kexec_load	sys_kexec_load
+284	common	waitid	sys_waitid
+285	common	add_key	sys_add_key
+286	common	request_key	sys_request_key
+287	common	keyctl	sys_keyctl
+288	common	ioprio_set	sys_ioprio_set
+289	common	ioprio_get	sys_ioprio_get
+290	common	inotify_init	sys_inotify_init
+291	common	inotify_add_watch	sys_inotify_add_watch
+292	common	inotify_rm_watch	sys_inotify_rm_watch
+# 293 is unused
+294	common	migrate_pages	sys_migrate_pages
+295	common	openat	sys_openat
+296	common	mkdirat	sys_mkdirat
+297	common	mknodat	sys_mknodat
+298	common	fchownat	sys_fchownat
+299	common	futimesat	sys_futimesat_time32
+300	common	fstatat64	sys_fstatat64
+301	common	unlinkat	sys_unlinkat
+302	common	renameat	sys_renameat
+303	common	linkat	sys_linkat
+304	common	symlinkat	sys_symlinkat
+305	common	readlinkat	sys_readlinkat
+306	common	fchmodat	sys_fchmodat
+307	common	faccessat	sys_faccessat
+308	common	pselect6	sys_pselect6_time32
+309	common	ppoll	sys_ppoll_time32
+310	common	unshare	sys_unshare
+311	common	set_robust_list	sys_set_robust_list
+312	common	get_robust_list	sys_get_robust_list
+313	common	splice	sys_splice
+314	common	sync_file_range	sys_sh_sync_file_range6
+315	common	tee	sys_tee
+316	common	vmsplice	sys_vmsplice
+317	common	move_pages	sys_move_pages
+318	common	getcpu	sys_getcpu
+319	common	epoll_pwait	sys_epoll_pwait
+320	common	utimensat	sys_utimensat_time32
+321	common	signalfd	sys_signalfd
+322	common	timerfd_create	sys_timerfd_create
+323	common	eventfd	sys_eventfd
+324	common	fallocate	sys_fallocate
+325	common	timerfd_settime	sys_timerfd_settime32
+326	common	timerfd_gettime	sys_timerfd_gettime32
+327	common	signalfd4	sys_signalfd4
+328	common	eventfd2	sys_eventfd2
+329	common	epoll_create1	sys_epoll_create1
+330	common	dup3	sys_dup3
+331	common	pipe2	sys_pipe2
+332	common	inotify_init1	sys_inotify_init1
+333	common	preadv	sys_preadv
+334	common	pwritev	sys_pwritev
+335	common	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo
+336	common	perf_event_open	sys_perf_event_open
+337	common	fanotify_init	sys_fanotify_init
+338	common	fanotify_mark	sys_fanotify_mark
+339	common	prlimit64	sys_prlimit64
+340	common	socket	sys_socket
+341	common	bind	sys_bind
+342	common	connect	sys_connect
+343	common	listen	sys_listen
+344	common	accept	sys_accept
+345	common	getsockname	sys_getsockname
+346	common	getpeername	sys_getpeername
+347	common	socketpair	sys_socketpair
+348	common	send	sys_send
+349	common	sendto	sys_sendto
+350	common	recv	sys_recv
+351	common	recvfrom	sys_recvfrom
+352	common	shutdown	sys_shutdown
+353	common	setsockopt	sys_setsockopt
+354	common	getsockopt	sys_getsockopt
+355	common	sendmsg	sys_sendmsg
+356	common	recvmsg	sys_recvmsg
+357	common	recvmmsg	sys_recvmmsg_time32
+358	common	accept4	sys_accept4
+359	common	name_to_handle_at	sys_name_to_handle_at
+360	common	open_by_handle_at	sys_open_by_handle_at
+361	common	clock_adjtime	sys_clock_adjtime32
+362	common	syncfs	sys_syncfs
+363	common	sendmmsg	sys_sendmmsg
+364	common	setns	sys_setns
+365	common	process_vm_readv	sys_process_vm_readv
+366	common	process_vm_writev	sys_process_vm_writev
+367	common	kcmp	sys_kcmp
+368	common	finit_module	sys_finit_module
+369	common	sched_getattr	sys_sched_getattr
+370	common	sched_setattr	sys_sched_setattr
+371	common	renameat2	sys_renameat2
+372	common	seccomp	sys_seccomp
+373	common	getrandom	sys_getrandom
+374	common	memfd_create	sys_memfd_create
+375	common	bpf	sys_bpf
+376	common	execveat	sys_execveat
+377	common	userfaultfd	sys_userfaultfd
+378	common	membarrier	sys_membarrier
+379	common	mlock2	sys_mlock2
+380	common	copy_file_range	sys_copy_file_range
+381	common	preadv2	sys_preadv2
+382	common	pwritev2	sys_pwritev2
+383	common	statx	sys_statx
+384	common	pkey_mprotect	sys_pkey_mprotect
+385	common	pkey_alloc	sys_pkey_alloc
+386	common	pkey_free	sys_pkey_free
+387	common	rseq	sys_rseq
+388	common	sync_file_range2	sys_sync_file_range2
+# room for arch specific syscalls
+393	common	semget	sys_semget
+394	common	semctl	sys_semctl
+395	common	shmget	sys_shmget
+396	common	shmctl	sys_shmctl
+397	common	shmat	sys_shmat
+398	common	shmdt	sys_shmdt
+399	common	msgget	sys_msgget
+400	common	msgsnd	sys_msgsnd
+401	common	msgrcv	sys_msgrcv
+402	common	msgctl	sys_msgctl
+403	common	clock_gettime64	sys_clock_gettime
+404	common	clock_settime64	sys_clock_settime
+405	common	clock_adjtime64	sys_clock_adjtime
+406	common	clock_getres_time64	sys_clock_getres
+407	common	clock_nanosleep_time64	sys_clock_nanosleep
+408	common	timer_gettime64	sys_timer_gettime
+409	common	timer_settime64	sys_timer_settime
+410	common	timerfd_gettime64	sys_timerfd_gettime
+411	common	timerfd_settime64	sys_timerfd_settime
+412	common	utimensat_time64	sys_utimensat
+413	common	pselect6_time64	sys_pselect6
+414	common	ppoll_time64	sys_ppoll
+416	common	io_pgetevents_time64	sys_io_pgetevents
+417	common	recvmmsg_time64	sys_recvmmsg
+418	common	mq_timedsend_time64	sys_mq_timedsend
+419	common	mq_timedreceive_time64	sys_mq_timedreceive
+420	common	semtimedop_time64	sys_semtimedop
+421	common	rt_sigtimedwait_time64	sys_rt_sigtimedwait
+422	common	futex_time64	sys_futex
+423	common	sched_rr_get_interval_time64	sys_sched_rr_get_interval
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+425	common	io_uring_setup	sys_io_uring_setup
+426	common	io_uring_enter	sys_io_uring_enter
+427	common	io_uring_register	sys_io_uring_register
+428	common	open_tree	sys_open_tree
+429	common	move_mount	sys_move_mount
+430	common	fsopen	sys_fsopen
+431	common	fsconfig	sys_fsconfig
+432	common	fsmount	sys_fsmount
+433	common	fspick	sys_fspick
+434	common	pidfd_open	sys_pidfd_open
+# 435 reserved for clone3
+436	common	close_range	sys_close_range
+437	common	openat2	sys_openat2
+438	common	pidfd_getfd	sys_pidfd_getfd
+439	common	faccessat2	sys_faccessat2
+440	common	process_madvise	sys_process_madvise
+441	common	epoll_pwait2	sys_epoll_pwait2
+442	common	mount_setattr	sys_mount_setattr
+443	common	quotactl_fd	sys_quotactl_fd
+444	common	landlock_create_ruleset	sys_landlock_create_ruleset
+445	common	landlock_add_rule	sys_landlock_add_rule
+446	common	landlock_restrict_self	sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448	common	process_mrelease	sys_process_mrelease
+449	common	futex_waitv	sys_futex_waitv
+450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	cachestat	sys_cachestat
+452	common	fchmodat2	sys_fchmodat2
+453	common	map_shadow_stack	sys_map_shadow_stack
+454	common	futex_wake	sys_futex_wake
+455	common	futex_wait	sys_futex_wait
+456	common	futex_requeue	sys_futex_requeue
+457	common	statmount	sys_statmount
+458	common	listmount	sys_listmount
+459	common	lsm_get_self_attr	sys_lsm_get_self_attr
+460	common	lsm_set_self_attr	sys_lsm_set_self_attr
+461	common	lsm_list_modules	sys_lsm_list_modules
+462	common	mseal	sys_mseal
+463	common	setxattrat	sys_setxattrat
+464	common	getxattrat	sys_getxattrat
+465	common	listxattrat	sys_listxattrat
+466	common	removexattrat	sys_removexattrat
+2
tools/perf/arch/sh/include/syscall_table.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/syscalls_32.h>
+3
tools/perf/arch/sparc/entry/syscalls/Kbuild
···
+# SPDX-License-Identifier: GPL-2.0
+syscall-y += syscalls_32.h
+syscall-y += syscalls_64.h
+5
tools/perf/arch/sparc/entry/syscalls/Makefile.syscalls
···
+# SPDX-License-Identifier: GPL-2.0
+
+syscall_abis_32 +=
+syscall_abis_64 +=
+syscalltbl = $(srctree)/tools/perf/arch/sparc/entry/syscalls/syscall.tbl
+514
tools/perf/arch/sparc/entry/syscalls/syscall.tbl
# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
#
# system call numbers and entry vectors for sparc
#
# The format is:
# <number> <abi> <name> <entry point> <compat entry point>
#
# The <abi> can be common, 64, or 32 for this file.
#
0 common restart_syscall sys_restart_syscall
1 32 exit sys_exit sparc_exit
1 64 exit sparc_exit
2 common fork sys_fork
3 common read sys_read
4 common write sys_write
5 common open sys_open compat_sys_open
6 common close sys_close
7 common wait4 sys_wait4 compat_sys_wait4
8 common creat sys_creat
9 common link sys_link
10 common unlink sys_unlink
11 32 execv sunos_execv
11 64 execv sys_nis_syscall
12 common chdir sys_chdir
13 32 chown sys_chown16
13 64 chown sys_chown
14 common mknod sys_mknod
15 common chmod sys_chmod
16 32 lchown sys_lchown16
16 64 lchown sys_lchown
17 common brk sys_brk
18 common perfctr sys_nis_syscall
19 common lseek sys_lseek compat_sys_lseek
20 common getpid sys_getpid
21 common capget sys_capget
22 common capset sys_capset
23 32 setuid sys_setuid16
23 64 setuid sys_setuid
24 32 getuid sys_getuid16
24 64 getuid sys_getuid
25 common vmsplice sys_vmsplice
26 common ptrace sys_ptrace compat_sys_ptrace
27 common alarm sys_alarm
28 common sigaltstack sys_sigaltstack compat_sys_sigaltstack
29 32 pause sys_pause
29 64 pause sys_nis_syscall
30 32 utime sys_utime32
30 64 utime sys_utime
31 32 lchown32 sys_lchown
32 32 fchown32 sys_fchown
33 common access sys_access
34 common nice sys_nice
35 32 chown32 sys_chown
36 common sync sys_sync
37 common kill sys_kill
38 common stat sys_newstat compat_sys_newstat
39 32 sendfile sys_sendfile compat_sys_sendfile
39 64 sendfile sys_sendfile64
40 common lstat sys_newlstat compat_sys_newlstat
41 common dup sys_dup
42 common pipe sys_sparc_pipe
43 common times sys_times compat_sys_times
44 32 getuid32 sys_getuid
45 common umount2 sys_umount
46 32 setgid sys_setgid16
46 64 setgid sys_setgid
47 32 getgid sys_getgid16
47 64 getgid sys_getgid
48 common signal sys_signal
49 32 geteuid sys_geteuid16
49 64 geteuid sys_geteuid
50 32 getegid sys_getegid16
50 64 getegid sys_getegid
51 common acct sys_acct
52 64 memory_ordering sys_memory_ordering
53 32 getgid32 sys_getgid
54 common ioctl sys_ioctl compat_sys_ioctl
55 common reboot sys_reboot
56 32 mmap2 sys_mmap2 sys32_mmap2
57 common symlink sys_symlink
58 common readlink sys_readlink
59 32 execve sys_execve sys32_execve
59 64 execve sys64_execve
60 common umask sys_umask
61 common chroot sys_chroot
62 common fstat sys_newfstat compat_sys_newfstat
63 common fstat64 sys_fstat64 compat_sys_fstat64
64 common getpagesize sys_getpagesize
65 common msync sys_msync
66 common vfork sys_vfork
67 common pread64 sys_pread64 compat_sys_pread64
68 common pwrite64 sys_pwrite64 compat_sys_pwrite64
69 32 geteuid32 sys_geteuid
70 32 getegid32 sys_getegid
71 common mmap sys_mmap
72 32 setreuid32 sys_setreuid
73 32 munmap sys_munmap
73 64 munmap sys_64_munmap
74 common mprotect sys_mprotect
75 common madvise sys_madvise
76 common vhangup sys_vhangup
77 32 truncate64 sys_truncate64 compat_sys_truncate64
78 common mincore sys_mincore
79 32 getgroups sys_getgroups16
79 64 getgroups sys_getgroups
80 32 setgroups sys_setgroups16
80 64 setgroups sys_setgroups
81 common getpgrp sys_getpgrp
82 32 setgroups32 sys_setgroups
83 common setitimer sys_setitimer compat_sys_setitimer
84 32 ftruncate64 sys_ftruncate64 compat_sys_ftruncate64
85 common swapon sys_swapon
86 common getitimer sys_getitimer compat_sys_getitimer
87 32 setuid32 sys_setuid
88 common sethostname sys_sethostname
89 32 setgid32 sys_setgid
90 common dup2 sys_dup2
91 32 setfsuid32 sys_setfsuid
92 common fcntl sys_fcntl compat_sys_fcntl
93 common select sys_select compat_sys_select
94 32 setfsgid32 sys_setfsgid
95 common fsync sys_fsync
96 common setpriority sys_setpriority
97 common socket sys_socket
98 common connect sys_connect
99 common accept sys_accept
100 common getpriority sys_getpriority
101 common rt_sigreturn sys_rt_sigreturn sys32_rt_sigreturn
102 common rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
103 common rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
104 common rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
105 32 rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
105 64 rt_sigtimedwait sys_rt_sigtimedwait
106 common rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
107 common rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
108 32 setresuid32 sys_setresuid
108 64 setresuid sys_setresuid
109 32 getresuid32 sys_getresuid
109 64 getresuid sys_getresuid
110 32 setresgid32 sys_setresgid
110 64 setresgid sys_setresgid
111 32 getresgid32 sys_getresgid
111 64 getresgid sys_getresgid
112 32 setregid32 sys_setregid
113 common recvmsg sys_recvmsg compat_sys_recvmsg
114 common sendmsg sys_sendmsg compat_sys_sendmsg
115 32 getgroups32 sys_getgroups
116 common gettimeofday sys_gettimeofday compat_sys_gettimeofday
117 common getrusage sys_getrusage compat_sys_getrusage
118 common getsockopt sys_getsockopt sys_getsockopt
119 common getcwd sys_getcwd
120 common readv sys_readv
121 common writev sys_writev
122 common settimeofday sys_settimeofday compat_sys_settimeofday
123 32 fchown sys_fchown16
123 64 fchown sys_fchown
124 common fchmod sys_fchmod
125 common recvfrom sys_recvfrom compat_sys_recvfrom
126 32 setreuid sys_setreuid16
126 64 setreuid sys_setreuid
127 32 setregid sys_setregid16
127 64 setregid sys_setregid
128 common rename sys_rename
129 common truncate sys_truncate compat_sys_truncate
130 common ftruncate sys_ftruncate compat_sys_ftruncate
131 common flock sys_flock
132 common lstat64 sys_lstat64 compat_sys_lstat64
133 common sendto sys_sendto
134 common shutdown sys_shutdown
135 common socketpair sys_socketpair
136 common mkdir sys_mkdir
137 common rmdir sys_rmdir
138 32 utimes sys_utimes_time32
138 64 utimes sys_utimes
139 common stat64 sys_stat64 compat_sys_stat64
140 common sendfile64 sys_sendfile64
141 common getpeername sys_getpeername
142 32 futex sys_futex_time32
142 64 futex sys_futex
143 common gettid sys_gettid
144 common getrlimit sys_getrlimit compat_sys_getrlimit
145 common setrlimit sys_setrlimit compat_sys_setrlimit
146 common pivot_root sys_pivot_root
147 common prctl sys_prctl
148 common pciconfig_read sys_pciconfig_read
149 common pciconfig_write sys_pciconfig_write
150 common getsockname sys_getsockname
151 common inotify_init sys_inotify_init
152 common inotify_add_watch sys_inotify_add_watch
153 common poll sys_poll
154 common getdents64 sys_getdents64
155 32 fcntl64 sys_fcntl64 compat_sys_fcntl64
156 common inotify_rm_watch sys_inotify_rm_watch
157 common statfs sys_statfs compat_sys_statfs
158 common fstatfs sys_fstatfs compat_sys_fstatfs
159 common umount sys_oldumount
160 common sched_set_affinity sys_sched_setaffinity compat_sys_sched_setaffinity
161 common sched_get_affinity sys_sched_getaffinity compat_sys_sched_getaffinity
162 common getdomainname sys_getdomainname
163 common setdomainname sys_setdomainname
164 64 utrap_install sys_utrap_install
165 common quotactl sys_quotactl
166 common set_tid_address sys_set_tid_address
167 common mount sys_mount
168 common ustat sys_ustat compat_sys_ustat
169 common setxattr sys_setxattr
170 common lsetxattr sys_lsetxattr
171 common fsetxattr sys_fsetxattr
172 common getxattr sys_getxattr
173 common lgetxattr sys_lgetxattr
174 common getdents sys_getdents compat_sys_getdents
175 common setsid sys_setsid
176 common fchdir sys_fchdir
177 common fgetxattr sys_fgetxattr
178 common listxattr sys_listxattr
179 common llistxattr sys_llistxattr
180 common flistxattr sys_flistxattr
181 common removexattr sys_removexattr
182 common lremovexattr sys_lremovexattr
183 32 sigpending sys_sigpending compat_sys_sigpending
183 64 sigpending sys_nis_syscall
184 common query_module sys_ni_syscall
185 common setpgid sys_setpgid
186 common fremovexattr sys_fremovexattr
187 common tkill sys_tkill
188 32 exit_group sys_exit_group sparc_exit_group
188 64 exit_group sparc_exit_group
189 common uname sys_newuname
190 common init_module sys_init_module
191 32 personality sys_personality sys_sparc64_personality
191 64 personality sys_sparc64_personality
192 32 remap_file_pages sys_sparc_remap_file_pages sys_remap_file_pages
192 64 remap_file_pages sys_remap_file_pages
193 common epoll_create sys_epoll_create
194 common epoll_ctl sys_epoll_ctl
195 common epoll_wait sys_epoll_wait
196 common ioprio_set sys_ioprio_set
197 common getppid sys_getppid
198 32 sigaction sys_sparc_sigaction compat_sys_sparc_sigaction
198 64 sigaction sys_nis_syscall
199 common sgetmask sys_sgetmask
200 common ssetmask sys_ssetmask
201 32 sigsuspend sys_sigsuspend
201 64 sigsuspend sys_nis_syscall
202 common oldlstat sys_newlstat compat_sys_newlstat
203 common uselib sys_uselib
204 32 readdir sys_old_readdir compat_sys_old_readdir
204 64 readdir sys_nis_syscall
205 common readahead sys_readahead compat_sys_readahead
206 common socketcall sys_socketcall compat_sys_socketcall
207 common syslog sys_syslog
208 common lookup_dcookie sys_ni_syscall
209 common fadvise64 sys_fadvise64 compat_sys_fadvise64
210 common fadvise64_64 sys_fadvise64_64 compat_sys_fadvise64_64
211 common tgkill sys_tgkill
212 common waitpid sys_waitpid
213 common swapoff sys_swapoff
214 common sysinfo sys_sysinfo compat_sys_sysinfo
215 32 ipc sys_ipc compat_sys_ipc
215 64 ipc sys_sparc_ipc
216 32 sigreturn sys_sigreturn sys32_sigreturn
216 64 sigreturn sys_nis_syscall
217 common clone sys_clone
218 common ioprio_get sys_ioprio_get
219 32 adjtimex sys_adjtimex_time32
219 64 adjtimex sys_sparc_adjtimex
220 32 sigprocmask sys_sigprocmask compat_sys_sigprocmask
220 64 sigprocmask sys_nis_syscall
221 common create_module sys_ni_syscall
222 common delete_module sys_delete_module
223 common get_kernel_syms sys_ni_syscall
224 common getpgid sys_getpgid
225 common bdflush sys_ni_syscall
226 common sysfs sys_sysfs
227 common afs_syscall sys_nis_syscall
228 common setfsuid sys_setfsuid16
229 common setfsgid sys_setfsgid16
230 common _newselect sys_select compat_sys_select
231 32 time sys_time32
232 common splice sys_splice
233 32 stime sys_stime32
233 64 stime sys_stime
234 common statfs64 sys_statfs64 compat_sys_statfs64
235 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
236 common _llseek sys_llseek
237 common mlock sys_mlock
238 common munlock sys_munlock
239 common mlockall sys_mlockall
240 common munlockall sys_munlockall
241 common sched_setparam sys_sched_setparam
242 common sched_getparam sys_sched_getparam
243 common sched_setscheduler sys_sched_setscheduler
244 common sched_getscheduler sys_sched_getscheduler
245 common sched_yield sys_sched_yield
246 common sched_get_priority_max sys_sched_get_priority_max
247 common sched_get_priority_min sys_sched_get_priority_min
248 32 sched_rr_get_interval sys_sched_rr_get_interval_time32
248 64 sched_rr_get_interval sys_sched_rr_get_interval
249 32 nanosleep sys_nanosleep_time32
249 64 nanosleep sys_nanosleep
250 32 mremap sys_mremap
250 64 mremap sys_64_mremap
251 common _sysctl sys_ni_syscall
252 common getsid sys_getsid
253 common fdatasync sys_fdatasync
254 32 nfsservctl sys_ni_syscall sys_nis_syscall
254 64 nfsservctl sys_nis_syscall
255 common sync_file_range sys_sync_file_range compat_sys_sync_file_range
256 32 clock_settime sys_clock_settime32
256 64 clock_settime sys_clock_settime
257 32 clock_gettime sys_clock_gettime32
257 64 clock_gettime sys_clock_gettime
258 32 clock_getres sys_clock_getres_time32
258 64 clock_getres sys_clock_getres
259 32 clock_nanosleep sys_clock_nanosleep_time32
259 64 clock_nanosleep sys_clock_nanosleep
260 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
261 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
262 32 timer_settime sys_timer_settime32
262 64 timer_settime sys_timer_settime
263 32 timer_gettime sys_timer_gettime32
263 64 timer_gettime sys_timer_gettime
264 common timer_getoverrun sys_timer_getoverrun
265 common timer_delete sys_timer_delete
266 common timer_create sys_timer_create compat_sys_timer_create
# 267 was vserver
267 common vserver sys_nis_syscall
268 common io_setup sys_io_setup compat_sys_io_setup
269 common io_destroy sys_io_destroy
270 common io_submit sys_io_submit compat_sys_io_submit
271 common io_cancel sys_io_cancel
272 32 io_getevents sys_io_getevents_time32
272 64 io_getevents sys_io_getevents
273 common mq_open sys_mq_open compat_sys_mq_open
274 common mq_unlink sys_mq_unlink
275 32 mq_timedsend sys_mq_timedsend_time32
275 64 mq_timedsend sys_mq_timedsend
276 32 mq_timedreceive sys_mq_timedreceive_time32
276 64 mq_timedreceive sys_mq_timedreceive
277 common mq_notify sys_mq_notify compat_sys_mq_notify
278 common mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
279 common waitid sys_waitid compat_sys_waitid
280 common tee sys_tee
281 common add_key sys_add_key
282 common request_key sys_request_key
283 common keyctl sys_keyctl compat_sys_keyctl
284 common openat sys_openat compat_sys_openat
285 common mkdirat sys_mkdirat
286 common mknodat sys_mknodat
287 common fchownat sys_fchownat
288 32 futimesat sys_futimesat_time32
288 64 futimesat sys_futimesat
289 common fstatat64 sys_fstatat64 compat_sys_fstatat64
290 common unlinkat sys_unlinkat
291 common renameat sys_renameat
292 common linkat sys_linkat
293 common symlinkat sys_symlinkat
294 common readlinkat sys_readlinkat
295 common fchmodat sys_fchmodat
296 common faccessat sys_faccessat
297 32 pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
297 64 pselect6 sys_pselect6
298 32 ppoll sys_ppoll_time32 compat_sys_ppoll_time32
298 64 ppoll sys_ppoll
299 common unshare sys_unshare
300 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
301 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
302 common migrate_pages sys_migrate_pages
303 common mbind sys_mbind
304 common get_mempolicy sys_get_mempolicy
305 common set_mempolicy sys_set_mempolicy
306 common kexec_load sys_kexec_load compat_sys_kexec_load
307 common move_pages sys_move_pages
308 common getcpu sys_getcpu
309 common epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
310 32 utimensat sys_utimensat_time32
310 64 utimensat sys_utimensat
311 common signalfd sys_signalfd compat_sys_signalfd
312 common timerfd_create sys_timerfd_create
313 common eventfd sys_eventfd
314 common fallocate sys_fallocate compat_sys_fallocate
315 32 timerfd_settime sys_timerfd_settime32
315 64 timerfd_settime sys_timerfd_settime
316 32 timerfd_gettime sys_timerfd_gettime32
316 64 timerfd_gettime sys_timerfd_gettime
317 common signalfd4 sys_signalfd4 compat_sys_signalfd4
318 common eventfd2 sys_eventfd2
319 common epoll_create1 sys_epoll_create1
320 common dup3 sys_dup3
321 common pipe2 sys_pipe2
322 common inotify_init1 sys_inotify_init1
323 common accept4 sys_accept4
324 common preadv sys_preadv compat_sys_preadv
325 common pwritev sys_pwritev compat_sys_pwritev
326 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
327 common perf_event_open sys_perf_event_open
328 32 recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
328 64 recvmmsg sys_recvmmsg
329 common fanotify_init sys_fanotify_init
330 common fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
331 common prlimit64 sys_prlimit64
332 common name_to_handle_at sys_name_to_handle_at
333 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
334 32 clock_adjtime sys_clock_adjtime32
334 64 clock_adjtime sys_sparc_clock_adjtime
335 common syncfs sys_syncfs
336 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
337 common setns sys_setns
338 common process_vm_readv sys_process_vm_readv
339 common process_vm_writev sys_process_vm_writev
340 32 kern_features sys_ni_syscall sys_kern_features
340 64 kern_features sys_kern_features
341 common kcmp sys_kcmp
342 common finit_module sys_finit_module
343 common sched_setattr sys_sched_setattr
344 common sched_getattr sys_sched_getattr
345 common renameat2 sys_renameat2
346 common seccomp sys_seccomp
347 common getrandom sys_getrandom
348 common memfd_create sys_memfd_create
349 common bpf sys_bpf
350 32 execveat sys_execveat sys32_execveat
350 64 execveat sys64_execveat
351 common membarrier sys_membarrier
352 common userfaultfd sys_userfaultfd
353 common bind sys_bind
354 common listen sys_listen
355 common setsockopt sys_setsockopt sys_setsockopt
356 common mlock2 sys_mlock2
357 common copy_file_range sys_copy_file_range
358 common preadv2 sys_preadv2 compat_sys_preadv2
359 common pwritev2 sys_pwritev2 compat_sys_pwritev2
360 common statx sys_statx
361 32 io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
361 64 io_pgetevents sys_io_pgetevents
362 common pkey_mprotect sys_pkey_mprotect
363 common pkey_alloc sys_pkey_alloc
364 common pkey_free sys_pkey_free
365 common rseq sys_rseq
# room for arch specific syscalls
392 64 semtimedop sys_semtimedop
393 common semget sys_semget
394 common semctl sys_semctl compat_sys_semctl
395 common shmget sys_shmget
396 common shmctl sys_shmctl compat_sys_shmctl
397 common shmat sys_shmat compat_sys_shmat
398 common shmdt sys_shmdt
399 common msgget sys_msgget
400 common msgsnd sys_msgsnd compat_sys_msgsnd
401 common msgrcv sys_msgrcv compat_sys_msgrcv
402 common msgctl sys_msgctl compat_sys_msgctl
403 32 clock_gettime64 sys_clock_gettime sys_clock_gettime
404 32 clock_settime64 sys_clock_settime sys_clock_settime
405 32 clock_adjtime64 sys_clock_adjtime sys_clock_adjtime
406 32 clock_getres_time64 sys_clock_getres sys_clock_getres
407 32 clock_nanosleep_time64 sys_clock_nanosleep sys_clock_nanosleep
408 32 timer_gettime64 sys_timer_gettime sys_timer_gettime
409 32 timer_settime64 sys_timer_settime sys_timer_settime
410 32 timerfd_gettime64 sys_timerfd_gettime sys_timerfd_gettime
411 32 timerfd_settime64 sys_timerfd_settime sys_timerfd_settime
412 32 utimensat_time64 sys_utimensat sys_utimensat
413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
416 32 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend
419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
420 32 semtimedop_time64 sys_semtimedop sys_semtimedop
421 32 rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64
422 32 futex_time64 sys_futex sys_futex
423 32 sched_rr_get_interval_time64 sys_sched_rr_get_interval sys_sched_rr_get_interval
424 common pidfd_send_signal sys_pidfd_send_signal
425 common io_uring_setup sys_io_uring_setup
426 common io_uring_enter sys_io_uring_enter
427 common io_uring_register sys_io_uring_register
428 common open_tree sys_open_tree
429 common move_mount sys_move_mount
430 common fsopen sys_fsopen
431 common fsconfig sys_fsconfig
432 common fsmount sys_fsmount
433 common fspick sys_fspick
434 common pidfd_open sys_pidfd_open
# 435 reserved for clone3
436 common close_range sys_close_range
437 common openat2 sys_openat2
438 common pidfd_getfd sys_pidfd_getfd
439 common faccessat2 sys_faccessat2
440 common process_madvise sys_process_madvise
441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
442 common mount_setattr sys_mount_setattr
443 common quotactl_fd sys_quotactl_fd
444 common landlock_create_ruleset sys_landlock_create_ruleset
445 common landlock_add_rule sys_landlock_add_rule
446 common landlock_restrict_self sys_landlock_restrict_self
# 447 reserved for memfd_secret
448 common process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv
450 common set_mempolicy_home_node sys_set_mempolicy_home_node
451 common cachestat sys_cachestat
452 common fchmodat2 sys_fchmodat2
453 common map_shadow_stack sys_map_shadow_stack
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
457 common statmount sys_statmount
458 common listmount sys_listmount
459 common lsm_get_self_attr sys_lsm_get_self_attr
460 common lsm_set_self_attr sys_lsm_set_self_attr
461 common lsm_list_modules sys_lsm_list_modules
462 common mseal sys_mseal
463 common setxattrat sys_setxattrat
464 common getxattrat sys_getxattrat
465 common listxattrat sys_listxattrat
466 common removexattrat sys_removexattrat
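The rows above follow the format documented in the file header: a row matches a syscall if its number matches and its ABI column is either the caller's ABI or "common". As an illustration of that resolution rule (not part of the patch), a lookup can be sketched in shell; the `lookup` helper and the four-row inline excerpt are illustrative scaffolding copied from the first entries of the table:

```shell
# Resolve a syscall number + ABI to a name in a *.tbl file.
# Format: <number> <abi> <name> <entry point> [<compat entry point>]
tbl=$(mktemp)
cat > "$tbl" <<'EOF'
0	common	restart_syscall	sys_restart_syscall
1	32	exit	sys_exit	sparc_exit
1	64	exit	sparc_exit
2	common	fork	sys_fork
EOF
# A row matches when its number matches and its ABI is ours or "common".
lookup() {
	awk -v nr="$1" -v abi="$2" \
		'$1 == nr && ($2 == abi || $2 == "common") { print $3; exit }' "$tbl"
}
name_exit=$(lookup 1 64)   # "exit" (via the 64-bit row)
name_fork=$(lookup 2 64)   # "fork" (via the "common" row)
echo "$name_exit $name_fork"
rm -f "$tbl"
```

This mirrors the matching behavior the generic syscall-table machinery applies when generating per-ABI headers from the same file.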
+8
tools/perf/arch/sparc/include/syscall_table.h
/* SPDX-License-Identifier: GPL-2.0 */
#include <asm/bitsperlong.h>

#if __BITS_PER_LONG == 64
#include <asm/syscalls_64.h>
#else
#include <asm/syscalls_32.h>
#endif
-1
tools/perf/arch/x86/Build
  perf-test-y += tests/

  ifdef SHELLCHECK
-   SHELL_TESTS := entry/syscalls/syscalltbl.sh
    TEST_LOGS := $(SHELL_TESTS:%=%.shellcheck_log)
  else
    SHELL_TESTS :=
-25
tools/perf/arch/x86/Makefile
  # SPDX-License-Identifier: GPL-2.0
  HAVE_KVM_STAT_SUPPORT := 1
  PERF_HAVE_JITDUMP := 1
-
- ###
- # Syscall table generation
- #
-
- generated := $(OUTPUT)arch/x86/include/generated
- out := $(generated)/asm
- header := $(out)/syscalls_64.c
- header_32 := $(out)/syscalls_32.c
- sys := $(srctree)/tools/perf/arch/x86/entry/syscalls
- systbl := $(sys)/syscalltbl.sh
-
- # Create output directory if not already present
- $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
- $(header): $(sys)/syscall_64.tbl $(systbl)
- 	$(Q)$(SHELL) '$(systbl)' $(sys)/syscall_64.tbl 'x86_64' > $@
-
- $(header_32): $(sys)/syscall_32.tbl $(systbl)
- 	$(Q)$(SHELL) '$(systbl)' $(sys)/syscall_32.tbl 'x86' > $@
-
- clean::
- 	$(call QUIET_CLEAN, x86) $(RM) -r $(header) $(generated)
-
- archheaders: $(header) $(header_32)
+3
tools/perf/arch/x86/entry/syscalls/Kbuild
# SPDX-License-Identifier: GPL-2.0
syscall-y += syscalls_32.h
syscall-y += syscalls_64.h
+6
tools/perf/arch/x86/entry/syscalls/Makefile.syscalls
# SPDX-License-Identifier: GPL-2.0

syscall_abis_32 += i386
syscall_abis_64 +=

syscalltbl = $(srctree)/tools/perf/arch/x86/entry/syscalls/syscall_%.tbl
-42
tools/perf/arch/x86/entry/syscalls/syscalltbl.sh
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0

in="$1"
arch="$2"

syscall_macro() {
	nr="$1"
	name="$2"

	echo "	[$nr] = \"$name\","
}

emit() {
	nr="$1"
	entry="$2"

	syscall_macro "$nr" "$entry"
}

echo "static const char *const syscalltbl_${arch}[] = {"

sorted_table=$(mktemp /tmp/syscalltbl.XXXXXX)
grep '^[0-9]' "$in" | sort -n > $sorted_table

max_nr=0
# the params are: nr abi name entry compat
# use _ for intentionally unused variables according to SC2034
while read nr _ name _ _; do
	if [ $nr -ge 512 ] ; then # discard compat sycalls
		break
	fi

	emit "$nr" "$name"
	max_nr=$nr
done < $sorted_table

rm -f $sorted_table

echo "};"

echo "#define SYSCALLTBL_${arch}_MAX_ID ${max_nr}"
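The removed script's job was to turn a *.tbl file into a C string array indexed by syscall number, which the old build compiled into perf. A minimal re-creation of its core loop, run on an illustrative two-row inline sample rather than the real x86 table:

```shell
# Re-creation of the removed syscalltbl.sh core loop on a two-row sample.
# The sample rows below are illustrative, not the full x86 table.
arch=x86_64
tbl=$(mktemp)
cat > "$tbl" <<'EOF'
0	common	read	sys_read
1	common	write	sys_write
EOF
out=$(
	echo "static const char *const syscalltbl_${arch}[] = {"
	max=0
	while read nr _ name _ _; do
		# the script discarded entries >= 512 (compat syscalls)
		[ "$nr" -ge 512 ] && break
		echo "	[$nr] = \"$name\","
		max=$nr
	done < "$tbl"
	echo "};"
	echo "#define SYSCALLTBL_${arch}_MAX_ID ${max}"
)
echo "$out"
rm -f "$tbl"
```

Running it prints a `syscalltbl_x86_64[]` array with entries `[0] = "read"` and `[1] = "write"` and a `MAX_ID` of 1; the generic syscall-table machinery now produces equivalent per-ABI headers from the shared *.tbl files instead.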
+8
tools/perf/arch/x86/include/syscall_table.h
/* SPDX-License-Identifier: GPL-2.0 */
#include <asm/bitsperlong.h>

#if __BITS_PER_LONG == 64
#include <asm/syscalls_64.h>
#else
#include <asm/syscalls_32.h>
#endif
+1 -1
tools/perf/arch/x86/util/Build
  perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o

  perf-util-$(CONFIG_AUXTRACE) += auxtrace.o
- perf-util-$(CONFIG_AUXTRACE) += archinsn.o
+ perf-util-y += archinsn.o
  perf-util-$(CONFIG_AUXTRACE) += intel-pt.o
  perf-util-$(CONFIG_AUXTRACE) += intel-bts.o
+4
tools/perf/arch/x86/util/iostat.c
  	struct iio_root_port *rp = evlist->selected->priv;

  	if (rp) {
+ 		/*
+ 		 * TODO: This is the incorrect format in JSON mode.
+ 		 * See prepare_timestamp()
+ 		 */
  		if (ts)
  			sprintf(prefix, "%6lu.%09lu%s%04x:%02x%s",
  				ts->tv_sec, ts->tv_nsec,
+2
tools/perf/arch/xtensa/entry/syscalls/Kbuild
# SPDX-License-Identifier: GPL-2.0
syscall-y += syscalls_32.h
+4
tools/perf/arch/xtensa/entry/syscalls/Makefile.syscalls
# SPDX-License-Identifier: GPL-2.0

syscall_abis_32 +=
syscalltbl = $(srctree)/tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
+439
tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
#
# system call numbers and entry vectors for xtensa
#
# The format is:
# <number> <abi> <name> <entry point>
#
# The <abi> is always "common" for this file
#
0 common spill sys_ni_syscall
1 common xtensa sys_ni_syscall
2 common available4 sys_ni_syscall
3 common available5 sys_ni_syscall
4 common available6 sys_ni_syscall
5 common available7 sys_ni_syscall
6 common available8 sys_ni_syscall
7 common available9 sys_ni_syscall
# File Operations
8 common open sys_open
9 common close sys_close
10 common dup sys_dup
11 common dup2 sys_dup2
12 common read sys_read
13 common write sys_write
14 common select sys_select
15 common lseek sys_lseek
16 common poll sys_poll
17 common _llseek sys_llseek
18 common epoll_wait sys_epoll_wait
19 common epoll_ctl sys_epoll_ctl
20 common epoll_create sys_epoll_create
21 common creat sys_creat
22 common truncate sys_truncate
23 common ftruncate sys_ftruncate
24 common readv sys_readv
25 common writev sys_writev
26 common fsync sys_fsync
27 common fdatasync sys_fdatasync
28 common truncate64 sys_truncate64
29 common ftruncate64 sys_ftruncate64
30 common pread64 sys_pread64
31 common pwrite64 sys_pwrite64
32 common link sys_link
33 common rename sys_rename
34 common symlink sys_symlink
35 common readlink sys_readlink
36 common mknod sys_mknod
37 common pipe sys_pipe
38 common unlink sys_unlink
39 common rmdir sys_rmdir
40 common mkdir sys_mkdir
41 common chdir sys_chdir
42 common fchdir sys_fchdir
43 common getcwd sys_getcwd
44 common chmod sys_chmod
45 common chown sys_chown
46 common stat sys_newstat
47 common stat64 sys_stat64
48 common lchown sys_lchown
49 common lstat sys_newlstat
50 common lstat64 sys_lstat64
51 common available51 sys_ni_syscall
52 common fchmod sys_fchmod
53 common fchown sys_fchown
54 common fstat sys_newfstat
55 common fstat64 sys_fstat64
56 common flock sys_flock
57 common access sys_access
58 common umask sys_umask
59 common getdents sys_getdents
60 common getdents64 sys_getdents64
61 common fcntl64 sys_fcntl64
62 common fallocate sys_fallocate
63 common fadvise64_64 xtensa_fadvise64_64
64 common utime sys_utime32
65 common utimes sys_utimes_time32
66 common ioctl sys_ioctl
67 common fcntl sys_fcntl
68 common setxattr sys_setxattr
69 common getxattr sys_getxattr
70 common listxattr sys_listxattr
71 common removexattr sys_removexattr
72 common lsetxattr sys_lsetxattr
73 common lgetxattr sys_lgetxattr
74 common llistxattr sys_llistxattr
75 common lremovexattr sys_lremovexattr
76 common fsetxattr sys_fsetxattr
77 common fgetxattr sys_fgetxattr
78 common flistxattr sys_flistxattr
79 common fremovexattr sys_fremovexattr
# File Map / Shared Memory Operations
80 common mmap2 sys_mmap_pgoff
81 common munmap sys_munmap
82 common mprotect sys_mprotect
83 common brk sys_brk
84 common mlock sys_mlock
85 common munlock sys_munlock
86 common mlockall sys_mlockall
87 common munlockall sys_munlockall
88 common mremap sys_mremap
89 common msync sys_msync
90 common mincore sys_mincore
91 common madvise sys_madvise
92 common shmget sys_shmget
93 common shmat xtensa_shmat
94 common shmctl sys_old_shmctl
95 common shmdt sys_shmdt
# Socket Operations
96 common socket sys_socket
97 common setsockopt sys_setsockopt
98 common getsockopt sys_getsockopt
99 common shutdown sys_shutdown
100 common bind sys_bind
101 common connect sys_connect
102 common listen sys_listen
103 common accept sys_accept
104 common getsockname sys_getsockname
105 common getpeername sys_getpeername
106 common sendmsg sys_sendmsg
107 common recvmsg sys_recvmsg
108 common send sys_send
109 common recv sys_recv
110 common sendto sys_sendto
111 common recvfrom sys_recvfrom
112 common socketpair sys_socketpair
113 common sendfile sys_sendfile
114 common sendfile64 sys_sendfile64
115 common sendmmsg sys_sendmmsg
# Process Operations
116 common clone sys_clone
117 common execve sys_execve
118 common exit sys_exit
119 common exit_group sys_exit_group
120 common getpid sys_getpid
121 common wait4 sys_wait4
122 common waitid sys_waitid
123 common kill sys_kill
124 common tkill sys_tkill
125 common tgkill sys_tgkill
126 common set_tid_address sys_set_tid_address
127 common gettid sys_gettid
128 common setsid sys_setsid
129 common getsid sys_getsid
130 common prctl sys_prctl
131 common personality sys_personality
132 common getpriority sys_getpriority
133 common setpriority sys_setpriority
134 common setitimer sys_setitimer
135 common getitimer sys_getitimer
136 common setuid sys_setuid
137 common getuid sys_getuid
138 common setgid sys_setgid
139 common getgid sys_getgid
140 common geteuid sys_geteuid
141 common getegid sys_getegid
142 common setreuid sys_setreuid
143 common setregid sys_setregid
144 common setresuid sys_setresuid
145 common getresuid sys_getresuid
146 common setresgid sys_setresgid
147 common getresgid sys_getresgid
148 common setpgid sys_setpgid
149 common getpgid sys_getpgid
150 common getppid sys_getppid
151 common getpgrp sys_getpgrp
# 152 was set_thread_area
152 common reserved152 sys_ni_syscall
# 153 was get_thread_area
153 common reserved153 sys_ni_syscall
154 common times sys_times
155 common acct sys_acct
156 common sched_setaffinity sys_sched_setaffinity
157 common sched_getaffinity sys_sched_getaffinity
158 common capget sys_capget
159 common capset sys_capset
160 common ptrace sys_ptrace
161 common semtimedop sys_semtimedop_time32
162 common semget sys_semget
163 common semop sys_semop
164 common semctl sys_old_semctl
165 common available165 sys_ni_syscall
166 common msgget sys_msgget
167 common msgsnd sys_msgsnd
168 common msgrcv sys_msgrcv
169 common msgctl sys_old_msgctl
170 common available170 sys_ni_syscall
# File System
171 common umount2 sys_umount
172 common mount sys_mount
173 common swapon sys_swapon
174 common chroot sys_chroot
175 common pivot_root sys_pivot_root
176 common umount sys_oldumount
177 common swapoff sys_swapoff
178 common sync sys_sync
179 common syncfs sys_syncfs
180 common setfsuid sys_setfsuid
181 common setfsgid sys_setfsgid
182 common sysfs sys_sysfs
183 common ustat sys_ustat
184 common statfs sys_statfs
185 common fstatfs sys_fstatfs
186 common statfs64 sys_statfs64
187 common fstatfs64 sys_fstatfs64
# System
188 common setrlimit sys_setrlimit
189 common getrlimit sys_getrlimit
190 common getrusage sys_getrusage
191 common futex sys_futex_time32
192 common gettimeofday sys_gettimeofday
193 common settimeofday sys_settimeofday
194 common adjtimex sys_adjtimex_time32
195 common nanosleep sys_nanosleep_time32
196 common getgroups sys_getgroups
197 common setgroups sys_setgroups
198 common sethostname sys_sethostname
199 common setdomainname sys_setdomainname
200 common syslog sys_syslog
201 common vhangup sys_vhangup
202 common uselib sys_uselib
203 common reboot sys_reboot
204 common quotactl sys_quotactl
# 205 was old nfsservctl
205 common nfsservctl sys_ni_syscall
206 common _sysctl sys_ni_syscall
207 common bdflush sys_ni_syscall
208 common uname sys_newuname
209 common sysinfo sys_sysinfo
210 common init_module sys_init_module
211 common delete_module sys_delete_module
212 common sched_setparam sys_sched_setparam
213 common sched_getparam sys_sched_getparam
214 common sched_setscheduler sys_sched_setscheduler
215 common sched_getscheduler sys_sched_getscheduler
216 common sched_get_priority_max sys_sched_get_priority_max
217 common sched_get_priority_min sys_sched_get_priority_min
218 common sched_rr_get_interval sys_sched_rr_get_interval_time32
219 common sched_yield sys_sched_yield
222 common available222 sys_ni_syscall
# Signal Handling
223 common restart_syscall sys_restart_syscall
224 common sigaltstack sys_sigaltstack
225 common rt_sigreturn xtensa_rt_sigreturn
226 common rt_sigaction sys_rt_sigaction
227 common rt_sigprocmask sys_rt_sigprocmask
228 common rt_sigpending sys_rt_sigpending
229 common rt_sigtimedwait sys_rt_sigtimedwait_time32
230 common rt_sigqueueinfo sys_rt_sigqueueinfo
231 common rt_sigsuspend sys_rt_sigsuspend
# Message
232 common mq_open sys_mq_open
233 common mq_unlink sys_mq_unlink
234 common mq_timedsend sys_mq_timedsend_time32
235 common mq_timedreceive sys_mq_timedreceive_time32
236 common mq_notify sys_mq_notify
237 common mq_getsetattr sys_mq_getsetattr
238 common available238 sys_ni_syscall
239 common io_setup sys_io_setup
# IO
240 common io_destroy sys_io_destroy
241 common io_submit sys_io_submit
242 common io_getevents sys_io_getevents_time32
243 common io_cancel sys_io_cancel
244
common clock_settime sys_clock_settime32 265 + 245 common clock_gettime sys_clock_gettime32 266 + 246 common clock_getres sys_clock_getres_time32 267 + 247 common clock_nanosleep sys_clock_nanosleep_time32 268 + # Timer 269 + 248 common timer_create sys_timer_create 270 + 249 common timer_delete sys_timer_delete 271 + 250 common timer_settime sys_timer_settime32 272 + 251 common timer_gettime sys_timer_gettime32 273 + 252 common timer_getoverrun sys_timer_getoverrun 274 + # System 275 + 253 common reserved253 sys_ni_syscall 276 + 254 common lookup_dcookie sys_ni_syscall 277 + 255 common available255 sys_ni_syscall 278 + 256 common add_key sys_add_key 279 + 257 common request_key sys_request_key 280 + 258 common keyctl sys_keyctl 281 + 259 common available259 sys_ni_syscall 282 + 260 common readahead sys_readahead 283 + 261 common remap_file_pages sys_remap_file_pages 284 + 262 common migrate_pages sys_migrate_pages 285 + 263 common mbind sys_mbind 286 + 264 common get_mempolicy sys_get_mempolicy 287 + 265 common set_mempolicy sys_set_mempolicy 288 + 266 common unshare sys_unshare 289 + 267 common move_pages sys_move_pages 290 + 268 common splice sys_splice 291 + 269 common tee sys_tee 292 + 270 common vmsplice sys_vmsplice 293 + 271 common available271 sys_ni_syscall 294 + 272 common pselect6 sys_pselect6_time32 295 + 273 common ppoll sys_ppoll_time32 296 + 274 common epoll_pwait sys_epoll_pwait 297 + 275 common epoll_create1 sys_epoll_create1 298 + 276 common inotify_init sys_inotify_init 299 + 277 common inotify_add_watch sys_inotify_add_watch 300 + 278 common inotify_rm_watch sys_inotify_rm_watch 301 + 279 common inotify_init1 sys_inotify_init1 302 + 280 common getcpu sys_getcpu 303 + 281 common kexec_load sys_ni_syscall 304 + 282 common ioprio_set sys_ioprio_set 305 + 283 common ioprio_get sys_ioprio_get 306 + 284 common set_robust_list sys_set_robust_list 307 + 285 common get_robust_list sys_get_robust_list 308 + 286 common available286 sys_ni_syscall 309 + 
287 common available287 sys_ni_syscall 310 + # Relative File Operations 311 + 288 common openat sys_openat 312 + 289 common mkdirat sys_mkdirat 313 + 290 common mknodat sys_mknodat 314 + 291 common unlinkat sys_unlinkat 315 + 292 common renameat sys_renameat 316 + 293 common linkat sys_linkat 317 + 294 common symlinkat sys_symlinkat 318 + 295 common readlinkat sys_readlinkat 319 + 296 common utimensat sys_utimensat_time32 320 + 297 common fchownat sys_fchownat 321 + 298 common futimesat sys_futimesat_time32 322 + 299 common fstatat64 sys_fstatat64 323 + 300 common fchmodat sys_fchmodat 324 + 301 common faccessat sys_faccessat 325 + 302 common available302 sys_ni_syscall 326 + 303 common available303 sys_ni_syscall 327 + 304 common signalfd sys_signalfd 328 + # 305 was timerfd 329 + 306 common eventfd sys_eventfd 330 + 307 common recvmmsg sys_recvmmsg_time32 331 + 308 common setns sys_setns 332 + 309 common signalfd4 sys_signalfd4 333 + 310 common dup3 sys_dup3 334 + 311 common pipe2 sys_pipe2 335 + 312 common timerfd_create sys_timerfd_create 336 + 313 common timerfd_settime sys_timerfd_settime32 337 + 314 common timerfd_gettime sys_timerfd_gettime32 338 + 315 common available315 sys_ni_syscall 339 + 316 common eventfd2 sys_eventfd2 340 + 317 common preadv sys_preadv 341 + 318 common pwritev sys_pwritev 342 + 319 common available319 sys_ni_syscall 343 + 320 common fanotify_init sys_fanotify_init 344 + 321 common fanotify_mark sys_fanotify_mark 345 + 322 common process_vm_readv sys_process_vm_readv 346 + 323 common process_vm_writev sys_process_vm_writev 347 + 324 common name_to_handle_at sys_name_to_handle_at 348 + 325 common open_by_handle_at sys_open_by_handle_at 349 + 326 common sync_file_range2 sys_sync_file_range2 350 + 327 common perf_event_open sys_perf_event_open 351 + 328 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo 352 + 329 common clock_adjtime sys_clock_adjtime32 353 + 330 common prlimit64 sys_prlimit64 354 + 331 common kcmp sys_kcmp 355 + 332 common 
finit_module sys_finit_module 356 + 333 common accept4 sys_accept4 357 + 334 common sched_setattr sys_sched_setattr 358 + 335 common sched_getattr sys_sched_getattr 359 + 336 common renameat2 sys_renameat2 360 + 337 common seccomp sys_seccomp 361 + 338 common getrandom sys_getrandom 362 + 339 common memfd_create sys_memfd_create 363 + 340 common bpf sys_bpf 364 + 341 common execveat sys_execveat 365 + 342 common userfaultfd sys_userfaultfd 366 + 343 common membarrier sys_membarrier 367 + 344 common mlock2 sys_mlock2 368 + 345 common copy_file_range sys_copy_file_range 369 + 346 common preadv2 sys_preadv2 370 + 347 common pwritev2 sys_pwritev2 371 + 348 common pkey_mprotect sys_pkey_mprotect 372 + 349 common pkey_alloc sys_pkey_alloc 373 + 350 common pkey_free sys_pkey_free 374 + 351 common statx sys_statx 375 + 352 common rseq sys_rseq 376 + # 353 through 402 are unassigned to sync up with generic numbers 377 + 403 common clock_gettime64 sys_clock_gettime 378 + 404 common clock_settime64 sys_clock_settime 379 + 405 common clock_adjtime64 sys_clock_adjtime 380 + 406 common clock_getres_time64 sys_clock_getres 381 + 407 common clock_nanosleep_time64 sys_clock_nanosleep 382 + 408 common timer_gettime64 sys_timer_gettime 383 + 409 common timer_settime64 sys_timer_settime 384 + 410 common timerfd_gettime64 sys_timerfd_gettime 385 + 411 common timerfd_settime64 sys_timerfd_settime 386 + 412 common utimensat_time64 sys_utimensat 387 + 413 common pselect6_time64 sys_pselect6 388 + 414 common ppoll_time64 sys_ppoll 389 + 416 common io_pgetevents_time64 sys_io_pgetevents 390 + 417 common recvmmsg_time64 sys_recvmmsg 391 + 418 common mq_timedsend_time64 sys_mq_timedsend 392 + 419 common mq_timedreceive_time64 sys_mq_timedreceive 393 + 420 common semtimedop_time64 sys_semtimedop 394 + 421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait 395 + 422 common futex_time64 sys_futex 396 + 423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval 397 + 424 common 
pidfd_send_signal sys_pidfd_send_signal 398 + 425 common io_uring_setup sys_io_uring_setup 399 + 426 common io_uring_enter sys_io_uring_enter 400 + 427 common io_uring_register sys_io_uring_register 401 + 428 common open_tree sys_open_tree 402 + 429 common move_mount sys_move_mount 403 + 430 common fsopen sys_fsopen 404 + 431 common fsconfig sys_fsconfig 405 + 432 common fsmount sys_fsmount 406 + 433 common fspick sys_fspick 407 + 434 common pidfd_open sys_pidfd_open 408 + 435 common clone3 sys_clone3 409 + 436 common close_range sys_close_range 410 + 437 common openat2 sys_openat2 411 + 438 common pidfd_getfd sys_pidfd_getfd 412 + 439 common faccessat2 sys_faccessat2 413 + 440 common process_madvise sys_process_madvise 414 + 441 common epoll_pwait2 sys_epoll_pwait2 415 + 442 common mount_setattr sys_mount_setattr 416 + 443 common quotactl_fd sys_quotactl_fd 417 + 444 common landlock_create_ruleset sys_landlock_create_ruleset 418 + 445 common landlock_add_rule sys_landlock_add_rule 419 + 446 common landlock_restrict_self sys_landlock_restrict_self 420 + # 447 reserved for memfd_secret 421 + 448 common process_mrelease sys_process_mrelease 422 + 449 common futex_waitv sys_futex_waitv 423 + 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 424 + 451 common cachestat sys_cachestat 425 + 452 common fchmodat2 sys_fchmodat2 426 + 453 common map_shadow_stack sys_map_shadow_stack 427 + 454 common futex_wake sys_futex_wake 428 + 455 common futex_wait sys_futex_wait 429 + 456 common futex_requeue sys_futex_requeue 430 + 457 common statmount sys_statmount 431 + 458 common listmount sys_listmount 432 + 459 common lsm_get_self_attr sys_lsm_get_self_attr 433 + 460 common lsm_set_self_attr sys_lsm_set_self_attr 434 + 461 common lsm_list_modules sys_lsm_list_modules 435 + 462 common mseal sys_mseal 436 + 463 common setxattrat sys_setxattrat 437 + 464 common getxattrat sys_getxattrat 438 + 465 common listxattrat sys_listxattrat 439 + 466 common removexattrat 
sys_removexattrat
+2
tools/perf/arch/xtensa/include/syscall_table.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/syscalls_32.h>
+6 -1
tools/perf/bench/epoll-wait.c
···
 
 	struct worker *w1 = (struct worker *) p1;
 	struct worker *w2 = (struct worker *) p2;
-	return w1->tid > w2->tid;
+
+	if (w1->tid > w2->tid)
+		return 1;
+	if (w1->tid < w2->tid)
+		return -1;
+	return 0;
 }
 
 int bench_epoll_wait(int argc, const char **argv)
+7 -6
tools/perf/bench/inject-buildid.c
···
 static int nr_dsos;
 static struct bench_dso *dsos;
 
-extern int cmd_inject(int argc, const char *argv[]);
+extern int main(int argc, const char **argv);
 
 static const struct option options[] = {
 	OPT_UINTEGER('i', "iterations", &iterations,
···
 
 	if (data->pid == 0) {
 		const char **inject_argv;
-		int inject_argc = 2;
+		int inject_argc = 3;
 
 		close(data->input_pipe[1]);
 		close(data->output_pipe[0]);
···
 		if (inject_argv == NULL)
 			exit(1);
 
-		inject_argv[0] = strdup("inject");
-		inject_argv[1] = strdup("-b");
+		inject_argv[0] = strdup("perf");
+		inject_argv[1] = strdup("inject");
+		inject_argv[2] = strdup("-b");
 		if (build_id_all)
-			inject_argv[2] = strdup("--buildid-all");
+			inject_argv[3] = strdup("--buildid-all");
 
 		/* signal that we're ready to go */
 		close(ready_pipe[1]);
 
-		cmd_inject(inject_argc, inject_argv);
+		main(inject_argc, inject_argv);
 
 		exit(0);
 	}
+1
tools/perf/builtin-annotate.c
···
  * a histogram of results, along various sorting keys.
  */
 #include "builtin.h"
+#include "perf.h"
 
 #include "util/color.h"
 #include <linux/list.h>
-2
tools/perf/builtin-check.c
···
 	FEATURE_STATUS("dwarf_getlocations", HAVE_LIBDW_SUPPORT),
 	FEATURE_STATUS("dwarf-unwind", HAVE_DWARF_UNWIND_SUPPORT),
 	FEATURE_STATUS("auxtrace", HAVE_AUXTRACE_SUPPORT),
-	FEATURE_STATUS("libaudit", HAVE_LIBAUDIT_SUPPORT),
 	FEATURE_STATUS("libbfd", HAVE_LIBBFD_SUPPORT),
 	FEATURE_STATUS("libcapstone", HAVE_LIBCAPSTONE_SUPPORT),
 	FEATURE_STATUS("libcrypto", HAVE_LIBCRYPTO_SUPPORT),
···
 	FEATURE_STATUS("libunwind", HAVE_LIBUNWIND_SUPPORT),
 	FEATURE_STATUS("lzma", HAVE_LZMA_SUPPORT),
 	FEATURE_STATUS("numa_num_possible_cpus", HAVE_LIBNUMA_SUPPORT),
-	FEATURE_STATUS("syscall_table", HAVE_SYSCALL_TABLE_SUPPORT),
 	FEATURE_STATUS("zlib", HAVE_ZLIB_SUPPORT),
 	FEATURE_STATUS("zstd", HAVE_ZSTD_SUPPORT),
 
+38
tools/perf/builtin-config.c
···
 	return 0;
 }
 
+int perf_config__set_variable(const char *var, const char *value)
+{
+	char path[PATH_MAX];
+	char *user_config = mkpath(path, sizeof(path), "%s/.perfconfig", getenv("HOME"));
+	const char *config_filename;
+	struct perf_config_set *set;
+	int ret = -1;
+
+	if (use_system_config)
+		config_exclusive_filename = perf_etc_perfconfig();
+	else if (use_user_config)
+		config_exclusive_filename = user_config;
+
+	if (!config_exclusive_filename)
+		config_filename = user_config;
+	else
+		config_filename = config_exclusive_filename;
+
+	set = perf_config_set__new();
+	if (!set)
+		goto out_err;
+
+	if (perf_config_set__collect(set, config_filename, var, value) < 0) {
+		pr_err("Failed to add '%s=%s'\n", var, value);
+		goto out_err;
+	}
+
+	if (set_config(set, config_filename) < 0) {
+		pr_err("Failed to set the configs on %s\n", config_filename);
+		goto out_err;
+	}
+
+	ret = 0;
+out_err:
+	perf_config_set__delete(set);
+	return ret;
+}
+
 int cmd_config(int argc, const char **argv)
 {
 	int i, ret = -1;
+3 -2
tools/perf/builtin-diff.c
···
  * DSOs and symbol information, sort them and produce a diff.
  */
 #include "builtin.h"
+#include "perf.h"
 
 #include "util/debug.h"
 #include "util/event.h"
···
 			continue;
 
 		es_base = evsel_streams__entry(data_base->evlist_streams,
-					       evsel_base->core.idx);
+					       evsel_base);
 		if (!es_base)
 			return -1;
 
 		es_pair = evsel_streams__entry(data_pair->evlist_streams,
-					       evsel_pair->core.idx);
+					       evsel_pair);
 		if (!es_pair)
 			return -1;
 
+126 -23
tools/perf/builtin-ftrace.c
···
 static volatile sig_atomic_t workload_exec_errno;
 static volatile sig_atomic_t done;
 
+static struct stats latency_stats;  /* for tracepoints */
+
 static void sig_handler(int sig __maybe_unused)
 {
 	done = true;
···
 	return (done && !workload_exec_errno) ? 0 : -1;
 }
 
-static void make_histogram(int buckets[], char *buf, size_t len, char *linebuf,
-			   bool use_nsec)
+static void make_histogram(struct perf_ftrace *ftrace, int buckets[],
+			   char *buf, size_t len, char *linebuf)
 {
+	int min_latency = ftrace->min_latency;
+	int max_latency = ftrace->max_latency;
 	char *p, *q;
 	char *unit;
 	double num;
···
 	if (!unit || strncmp(unit, " us", 3))
 		goto next;
 
-	if (use_nsec)
+	if (ftrace->use_nsec)
 		num *= 1000;
 
-	i = log2(num);
-	if (i < 0)
-		i = 0;
+	i = 0;
+	if (num < min_latency)
+		goto do_inc;
+
+	num -= min_latency;
+
+	if (!ftrace->bucket_range) {
+		i = log2(num);
+		if (i < 0)
+			i = 0;
+	} else {
+		// Less than 1 unit (ms or ns), or, in the future,
+		// than the min latency desired.
+		if (num > 0) // 1st entry: [ 1 unit .. bucket_range units ]
+			i = num / ftrace->bucket_range + 1;
+		if (num >= max_latency - min_latency)
+			i = NUM_BUCKET - 1;
+	}
 	if (i >= NUM_BUCKET)
 		i = NUM_BUCKET - 1;
 
+	num += min_latency;
+do_inc:
 	buckets[i]++;
+	update_stats(&latency_stats, num);
 
 next:
 	/* empty the line buffer for the next output */
···
 	strcat(linebuf, p);
 }
 
-static void display_histogram(int buckets[], bool use_nsec)
+static void display_histogram(struct perf_ftrace *ftrace, int buckets[])
 {
+	int min_latency = ftrace->min_latency;
+	bool use_nsec = ftrace->use_nsec;
 	int i;
 	int total = 0;
 	int bar_total = 46;  /* to fit in 80 column */
···
 	       "  DURATION    ", "COUNT", bar_total, "GRAPH");
 
 	bar_len = buckets[0] * bar_total / total;
-	printf("  %4d - %-4d %s | %10d | %.*s%*s |\n",
-	       0, 1, use_nsec ? "ns" : "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
+
+	printf("  %4d - %4d %s | %10d | %.*s%*s |\n",
+	       0, min_latency ?: 1, use_nsec ? "ns" : "us",
+	       buckets[0], bar_len, bar, bar_total - bar_len, "");
 
 	for (i = 1; i < NUM_BUCKET - 1; i++) {
-		int start = (1 << (i - 1));
-		int stop = 1 << i;
+		unsigned int start, stop;
 		const char *unit = use_nsec ? "ns" : "us";
 
-		if (start >= 1024) {
-			start >>= 10;
-			stop >>= 10;
-			unit = use_nsec ? "us" : "ms";
+		if (!ftrace->bucket_range) {
+			start = (1 << (i - 1));
+			stop = 1 << i;
+
+			if (start >= 1024) {
+				start >>= 10;
+				stop >>= 10;
+				unit = use_nsec ? "us" : "ms";
+			}
+		} else {
+			start = (i - 1) * ftrace->bucket_range + min_latency;
+			stop = i * ftrace->bucket_range + min_latency;
+
+			if (start >= ftrace->max_latency)
+				break;
+			if (stop > ftrace->max_latency)
+				stop = ftrace->max_latency;
+
+			if (start >= 1000) {
+				double dstart = start / 1000.0,
+				       dstop = stop / 1000.0;
+				printf("  %4.2f - %-4.2f", dstart, dstop);
+				unit = use_nsec ? "us" : "ms";
+				goto print_bucket_info;
+			}
 		}
+
+		printf("  %4d - %4d", start, stop);
+print_bucket_info:
 		bar_len = buckets[i] * bar_total / total;
-		printf("  %4d - %-4d %s | %10d | %.*s%*s |\n",
-		       start, stop, unit, buckets[i], bar_len, bar,
+		printf(" %s | %10d | %.*s%*s |\n", unit, buckets[i], bar_len, bar,
 		       bar_total - bar_len, "");
 	}
 
 	bar_len = buckets[NUM_BUCKET - 1] * bar_total / total;
-	printf("  %4d - %-4s %s | %10d | %.*s%*s |\n",
-	       1, "...", use_nsec ? "ms" : " s", buckets[NUM_BUCKET - 1],
+	if (!ftrace->bucket_range) {
+		printf("  %4d - %-4s %s", 1, "...", use_nsec ? "ms" : "s ");
+	} else {
+		unsigned int upper_outlier = (NUM_BUCKET - 2) * ftrace->bucket_range + min_latency;
+		if (upper_outlier > ftrace->max_latency)
+			upper_outlier = ftrace->max_latency;
+
+		if (upper_outlier >= 1000) {
+			double dstart = upper_outlier / 1000.0;
+
+			printf("  %4.2f - %-4s %s", dstart, "...", use_nsec ? "us" : "ms");
+		} else {
+			printf("  %4d - %4s %s", upper_outlier, "...", use_nsec ? "ns" : "us");
+		}
+	}
+	printf(" | %10d | %.*s%*s |\n", buckets[NUM_BUCKET - 1],
 	       bar_len, bar, bar_total - bar_len, "");
 
+	printf("\n# statistics  (in %s)\n", ftrace->use_nsec ? "nsec" : "usec");
+	printf("  total time: %20.0f\n", latency_stats.mean * latency_stats.n);
+	printf("    avg time: %20.0f\n", latency_stats.mean);
+	printf("    max time: %20"PRIu64"\n", latency_stats.max);
+	printf("    min time: %20"PRIu64"\n", latency_stats.min);
+	printf("       count: %20.0f\n", latency_stats.n);
 }
 
 static int prepare_func_latency(struct perf_ftrace *ftrace)
···
 	if (fd < 0)
 		pr_err("failed to open trace_pipe\n");
 
+	init_stats(&latency_stats);
+
 	put_tracing_file(trace_file);
 	return fd;
 }
···
 static int read_func_latency(struct perf_ftrace *ftrace, int buckets[])
 {
 	if (ftrace->target.use_bpf)
-		return perf_ftrace__latency_read_bpf(ftrace, buckets);
+		return perf_ftrace__latency_read_bpf(ftrace, buckets, &latency_stats);
 
 	return 0;
 }
···
 		if (n < 0)
 			break;
 
-		make_histogram(buckets, buf, n, line, ftrace->use_nsec);
+		make_histogram(ftrace, buckets, buf, n, line);
 	}
 }
···
 		int n = read(trace_fd, buf, sizeof(buf) - 1);
 		if (n <= 0)
 			break;
-		make_histogram(buckets, buf, n, line, ftrace->use_nsec);
+		make_histogram(ftrace, buckets, buf, n, line);
 	}
 
 	read_func_latency(ftrace, buckets);
 
-	display_histogram(buckets, ftrace->use_nsec);
+	display_histogram(ftrace, buckets);
 
 out:
 	close(trace_fd);
···
 {
 	ftrace->tracer = "function_graph";
 	ftrace->graph_tail = 1;
+	ftrace->graph_verbose = 0;
 
 	ftrace->profile_hash = hashmap__new(profile_hash, profile_equal, NULL);
 	if (ftrace->profile_hash == NULL)
···
 #endif
 	OPT_BOOLEAN('n', "use-nsec", &ftrace.use_nsec,
 		    "Use nano-second histogram"),
+	OPT_UINTEGER(0, "bucket-range", &ftrace.bucket_range,
+		     "Bucket range in ms or ns (-n/--use-nsec), default is log2() mode"),
+	OPT_UINTEGER(0, "min-latency", &ftrace.min_latency,
+		     "Minimum latency (1st bucket). Works only with --bucket-range."),
+	OPT_UINTEGER(0, "max-latency", &ftrace.max_latency,
+		     "Maximum latency (last bucket). Works only with --bucket-range and total buckets less than 22."),
 	OPT_PARENT(common_options),
 	};
 	const struct option profile_options[] = {
···
 	OPT_CALLBACK('s', "sort", &profile_sort, "key",
 		     "Sort result by key: total (default), avg, max, count, name.",
 		     parse_sort_key),
+	OPT_CALLBACK(0, "graph-opts", &ftrace, "options",
+		     "Graph tracer options, available options: nosleep-time,noirqs,thresh=<n>,depth=<n>",
+		     parse_graph_tracer_opts),
 	OPT_PARENT(common_options),
 	};
 	const struct option *options = ftrace_options;
···
 			parse_options_usage(ftrace_usage, options, "T", 1);
 			ret = -EINVAL;
 			goto out_delete_filters;
 		}
+		if (!ftrace.bucket_range && ftrace.min_latency) {
+			pr_err("--min-latency works only with --bucket-range\n");
+			parse_options_usage(ftrace_usage, options,
+					    "min-latency", /*short_opt=*/false);
+			ret = -EINVAL;
+			goto out_delete_filters;
+		}
+		if (ftrace.bucket_range && !ftrace.min_latency) {
+			/* default min latency should be the bucket range */
+			ftrace.min_latency = ftrace.bucket_range;
+		}
+		if (!ftrace.bucket_range && ftrace.max_latency) {
+			pr_err("--max-latency works only with --bucket-range\n");
+			parse_options_usage(ftrace_usage, options,
+					    "max-latency", /*short_opt=*/false);
+			ret = -EINVAL;
+			goto out_delete_filters;
+		}
+		if (ftrace.bucket_range && !ftrace.max_latency) {
+			/* default max latency should depend on bucket range and num_buckets */
+			ftrace.max_latency = (NUM_BUCKET - 2) * ftrace.bucket_range +
+				ftrace.min_latency;
+		}
 		cmd_func = __cmd_latency;
 		break;
-2
tools/perf/builtin-help.c
···
 #ifdef HAVE_LIBELF_SUPPORT
 	"probe",
 #endif
-#if defined(HAVE_LIBAUDIT_SUPPORT) || defined(HAVE_SYSCALL_TABLE_SUPPORT)
 	"trace",
-#endif
 	NULL };
 	const char *builtin_help_usage[] = {
 	"perf help [--all] [--man|--web|--info] [command]",
+4 -4
tools/perf/builtin-inject.c
···
 	};
 	int ret;
 	const char *known_build_ids = NULL;
-	bool build_ids;
-	bool build_id_all;
-	bool mmap2_build_ids;
-	bool mmap2_build_id_all;
+	bool build_ids = false;
+	bool build_id_all = false;
+	bool mmap2_build_ids = false;
+	bool mmap2_build_id_all = false;
 
 	struct option options[] = {
 		OPT_BOOLEAN('b', "build-ids", &build_ids,
+7 -5
tools/perf/builtin-kmem.c
···
 	};
 	struct trace_seq seq;
 	char *str, *pos = NULL;
+	const struct tep_event *tp_format;
 
 	if (nr_gfps) {
 		struct gfp_flag key = {
···
 	}
 
 	trace_seq_init(&seq);
-	tep_print_event(evsel->tp_format->tep,
-			&seq, &record, "%s", TEP_PRINT_INFO);
+	tp_format = evsel__tp_format(evsel);
+	if (tp_format)
+		tep_print_event(tp_format->tep, &seq, &record, "%s", TEP_PRINT_INFO);
 
 	str = strtok_r(seq.buffer, " ", &pos);
 	while (str) {
···
 
 	if (kmem_page) {
 		struct evsel *evsel = evlist__find_tracepoint_by_name(session->evlist, "kmem:mm_page_alloc");
+		const struct tep_event *tp_format = evsel ? evsel__tp_format(evsel) : NULL;
 
-		if (evsel == NULL) {
+		if (tp_format == NULL) {
 			pr_err(errmsg, "page", "page");
 			goto out_delete;
 		}
-
-		kmem_page_size = tep_get_page_size(evsel->tp_format->tep);
+		kmem_page_size = tep_get_page_size(tp_format->tep);
 		symbol_conf.use_callchain = true;
 	}
 
-61
tools/perf/builtin-kvm.c
···
 
 #if defined(HAVE_KVM_STAT_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
 
-void exit_event_get_key(struct evsel *evsel,
-			struct perf_sample *sample,
-			struct event_key *key)
-{
-	key->info = 0;
-	key->key = evsel__intval(evsel, sample, kvm_exit_reason);
-}
-
-bool kvm_exit_event(struct evsel *evsel)
-{
-	return evsel__name_is(evsel, kvm_exit_trace);
-}
-
-bool exit_event_begin(struct evsel *evsel,
-		      struct perf_sample *sample, struct event_key *key)
-{
-	if (kvm_exit_event(evsel)) {
-		exit_event_get_key(evsel, sample, key);
-		return true;
-	}
-
-	return false;
-}
-
-bool kvm_entry_event(struct evsel *evsel)
-{
-	return evsel__name_is(evsel, kvm_entry_trace);
-}
-
-bool exit_event_end(struct evsel *evsel,
-		    struct perf_sample *sample __maybe_unused,
-		    struct event_key *key __maybe_unused)
-{
-	return kvm_entry_event(evsel);
-}
-
-static const char *get_exit_reason(struct perf_kvm_stat *kvm,
-				   struct exit_reasons_table *tbl,
-				   u64 exit_code)
-{
-	while (tbl->reason != NULL) {
-		if (tbl->exit_code == exit_code)
-			return tbl->reason;
-		tbl++;
-	}
-
-	pr_err("unknown kvm exit code:%lld on %s\n",
-	       (unsigned long long)exit_code, kvm->exit_reasons_isa);
-	return "UNKNOWN";
-}
-
-void exit_event_decode_key(struct perf_kvm_stat *kvm,
-			   struct event_key *key,
-			   char *decode)
-{
-	const char *exit_reason = get_exit_reason(kvm, key->exit_reasons,
-						  key->key);
-
-	scnprintf(decode, KVM_EVENT_NAME_LEN, "%s", exit_reason);
-}
-
 static bool register_kvm_events_ops(struct perf_kvm_stat *kvm)
 {
 	struct kvm_reg_events_ops *events_ops = kvm_reg_events_ops;
+5 -2
tools/perf/builtin-kwork.c
···
  */
 
 #include "builtin.h"
+#include "perf.h"
 
 #include "util/data.h"
 #include "util/evlist.h"
···
 	char *name = NULL;
 	bool found = false;
 	struct tep_print_flag_sym *sym = NULL;
-	struct tep_print_arg *args = evsel->tp_format->print_fmt.args;
+	const struct tep_event *tp_format = evsel__tp_format(evsel);
+	struct tep_print_arg *args = tp_format ? tp_format->print_fmt.args : NULL;
 
 	if ((args == NULL) || (args->next == NULL))
 		return NULL;
···
 	}
 }
 
-struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork,
+static struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork,
 				       struct kwork_class *class,
 				       struct kwork_work *key)
 {
···
 		.all_runtime = 0,
 		.all_count = 0,
 		.nr_skipped_events = { 0 },
+		.add_work = perf_kwork_add_work,
 	};
 	static const char default_report_sort_order[] = "runtime, max, count";
 	static const char default_latency_sort_order[] = "avg, max, count";
+105 -176
tools/perf/builtin-lock.c
···
 static struct perf_session *session;
 static struct target target;
 
-/* based on kernel/lockdep.c */
-#define LOCKHASH_BITS		12
-#define LOCKHASH_SIZE		(1UL << LOCKHASH_BITS)
-
-static struct hlist_head *lockhash_table;
-
-#define __lockhashfn(key)	hash_long((unsigned long)key, LOCKHASH_BITS)
-#define lockhashentry(key)	(lockhash_table + __lockhashfn((key)))
-
 static struct rb_root thread_stats;
 
 static bool combine_locks;
···
 static int max_stack_depth = CONTENTION_STACK_DEPTH;
 static int stack_skip = CONTENTION_STACK_SKIP;
 static int print_nr_entries = INT_MAX / 2;
-static LIST_HEAD(callstack_filters);
 static const char *output_name = NULL;
 static FILE *lock_output;
-
-struct callstack_filter {
-	struct list_head list;
-	char name[];
-};
 
 static struct lock_filter filters;
 
 static enum lock_aggr_mode aggr_mode = LOCK_AGGR_ADDR;
-
-static bool needs_callstack(void)
-{
-	return !list_empty(&callstack_filters);
-}
 
 static struct thread_stat *thread_stat_find(u32 tid)
 {
···
 
 	rb_erase(node, &result);
 	return container_of(node, struct lock_stat, rb);
-}
-
-struct lock_stat *lock_stat_find(u64 addr)
-{
-	struct hlist_head *entry = lockhashentry(addr);
-	struct lock_stat *ret;
-
-	hlist_for_each_entry(ret, entry, hash_entry) {
-		if (ret->addr == addr)
-			return ret;
-	}
-	return NULL;
-}
-
-struct lock_stat *lock_stat_findnew(u64 addr, const char *name, int flags)
-{
-	struct hlist_head *entry = lockhashentry(addr);
-	struct lock_stat *ret, *new;
-
-	hlist_for_each_entry(ret, entry, hash_entry) {
-		if (ret->addr == addr)
-			return ret;
-	}
-
-	new = zalloc(sizeof(struct lock_stat));
-	if (!new)
-		goto alloc_failed;
-
-	new->addr = addr;
-	new->name = strdup(name);
-	if (!new->name) {
-		free(new);
-		goto alloc_failed;
-	}
-
-	new->flags = flags;
-	new->wait_time_min = ULLONG_MAX;
-
-	hlist_add_head(&new->hash_entry, entry);
-	return new;
-
-alloc_failed:
-	pr_err("memory allocation failed\n");
-	return NULL;
-}
-
-bool match_callstack_filter(struct machine *machine, u64 *callstack)
-{
-	struct map *kmap;
-	struct symbol *sym;
-	u64 ip;
-	const char *arch = perf_env__arch(machine->env);
-
-	if (list_empty(&callstack_filters))
-		return true;
-
-	for (int i = 0; i < max_stack_depth; i++) {
-		struct callstack_filter *filter;
-
-		/*
-		 * In powerpc, the callchain saved by kernel always includes
-		 * first three entries as the NIP (next instruction pointer),
-		 * LR (link register), and the contents of LR save area in the
-		 * second stack frame. In certain scenarios its possible to have
-		 * invalid kernel instruction addresses in either LR or the second
-		 * stack frame's LR. In that case, kernel will store that address as
-		 * zero.
-		 *
-		 * The below check will continue to look into callstack,
-		 * incase first or second callstack index entry has 0
-		 * address for powerpc.
-		 */
-		if (!callstack || (!callstack[i] && (strcmp(arch, "powerpc") ||
-						     (i != 1 && i != 2))))
-			break;
-
-		ip = callstack[i];
-		sym = machine__find_kernel_symbol(machine, ip, &kmap);
-		if (sym == NULL)
-			continue;
-
-		list_for_each_entry(filter, &callstack_filters, list) {
-			if (strstr(sym->name, filter->name))
-				return true;
-		}
-	}
-	return false;
 }
 
 struct trace_lock_handler {
···
 	if (callstack == NULL)
 		return -ENOMEM;
 
-	if (!match_callstack_filter(machine, callstack)) {
+	if (!match_callstack_filter(machine, callstack, max_stack_depth)) {
 		free(callstack);
 		return 0;
 	}
···
 
 static const struct {
 	unsigned int flags;
-	const char *str;
-	const char *name;
+	/*
+	 * Name of the lock flags (access), with delimeter ':'.
+	 * For example, rwsem:R of rwsem:W.
+	 */
+	const char *flags_name;
+	/* Name of the lock (type), for example, rwlock or rwsem. */
+	const char *lock_name;
 } lock_type_table[] = {
 	{ 0,				"semaphore",	"semaphore" },
 	{ LCB_F_SPIN,			"spinlock",	"spinlock" },
···
 	{ LCB_F_PERCPU | LCB_F_WRITE,	"pcpu-sem:W",	"percpu-rwsem" },
 	{ LCB_F_MUTEX,			"mutex",	"mutex" },
 	{ LCB_F_MUTEX | LCB_F_SPIN,	"mutex",	"mutex" },
-	/* alias for get_type_flag() */
-	{ LCB_F_MUTEX | LCB_F_SPIN,	"mutex-spin",	"mutex" },
+	/* alias for optimistic spinning only */
+	{ LCB_F_MUTEX | LCB_F_SPIN,	"mutex:spin",	"mutex-spin" },
 };
 
-static const char *get_type_str(unsigned int flags)
+static const char *get_type_flags_name(unsigned int flags)
 {
-	flags &= LCB_F_MAX_FLAGS - 1;
+	flags &= LCB_F_TYPE_MASK;
 
 	for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
 		if (lock_type_table[i].flags == flags)
-			return lock_type_table[i].str;
+			return lock_type_table[i].flags_name;
 	}
 	return "unknown";
 }
 
-static const char *get_type_name(unsigned int flags)
+static const char *get_type_lock_name(unsigned int flags)
 {
-	flags &= LCB_F_MAX_FLAGS - 1;
+	flags &= LCB_F_TYPE_MASK;
 
 	for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
 		if (lock_type_table[i].flags == flags)
-			return lock_type_table[i].name;
+			return lock_type_table[i].lock_name;
 	}
 	return "unknown";
-}
-
-static unsigned int get_type_flag(const char *str)
-{
-	for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
-		if (!strcmp(lock_type_table[i].name, str))
-			return lock_type_table[i].flags;
-	}
-	for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
-		if (!strcmp(lock_type_table[i].str, str))
-			return lock_type_table[i].flags;
-	}
-	return UINT_MAX;
 }
 
1620 static void lock_filter_finish(void) ··· 1531 1646 1532 1647 zfree(&filters.cgrps); 1533 1648 filters.nr_cgrps = 0; 1649 + 1650 + for (int i = 0; i < filters.nr_slabs; i++) 1651 + free(filters.slabs[i]); 1652 + 1653 + zfree(&filters.slabs); 1654 + filters.nr_slabs = 0; 1534 1655 } 1535 1656 1536 1657 static void sort_contention_result(void) ··· 1623 1732 1624 1733 switch (aggr_mode) { 1625 1734 case LOCK_AGGR_CALLER: 1626 - fprintf(lock_output, " %10s %s\n", get_type_str(st->flags), st->name); 1735 + fprintf(lock_output, " %10s %s\n", get_type_flags_name(st->flags), st->name); 1627 1736 break; 1628 1737 case LOCK_AGGR_TASK: 1629 1738 pid = st->addr; ··· 1633 1742 break; 1634 1743 case LOCK_AGGR_ADDR: 1635 1744 fprintf(lock_output, " %016llx %s (%s)\n", (unsigned long long)st->addr, 1636 - st->name, get_type_name(st->flags)); 1745 + st->name, get_type_lock_name(st->flags)); 1637 1746 break; 1638 1747 case LOCK_AGGR_CGROUP: 1639 1748 fprintf(lock_output, " %s\n", st->name); ··· 1674 1783 1675 1784 switch (aggr_mode) { 1676 1785 case LOCK_AGGR_CALLER: 1677 - fprintf(lock_output, "%s%s %s", get_type_str(st->flags), sep, st->name); 1786 + fprintf(lock_output, "%s%s %s", get_type_flags_name(st->flags), sep, st->name); 1678 1787 if (verbose <= 0) 1679 1788 fprintf(lock_output, "\n"); 1680 1789 break; ··· 1686 1795 break; 1687 1796 case LOCK_AGGR_ADDR: 1688 1797 fprintf(lock_output, "%llx%s %s%s %s\n", (unsigned long long)st->addr, sep, 1689 - st->name, sep, get_type_name(st->flags)); 1798 + st->name, sep, get_type_lock_name(st->flags)); 1690 1799 break; 1691 1800 case LOCK_AGGR_CGROUP: 1692 1801 fprintf(lock_output, "%s\n",st->name); ··· 2041 2150 goto out_delete; 2042 2151 } 2043 2152 2044 - if (lock_contention_prepare(&con) < 0) { 2153 + err = lock_contention_prepare(&con); 2154 + if (err < 0) { 2045 2155 pr_err("lock contention BPF setup failed\n"); 2046 2156 goto out_delete; 2047 2157 } ··· 2063 2171 } 2064 2172 } 2065 2173 2066 - if (setup_output_field(true, 
output_fields)) 2174 + err = setup_output_field(true, output_fields); 2175 + if (err) { 2176 + pr_err("Failed to setup output field\n"); 2067 2177 goto out_delete; 2178 + } 2068 2179 2069 - if (select_key(true)) 2180 + err = select_key(true); 2181 + if (err) 2070 2182 goto out_delete; 2071 2183 2072 2184 if (symbol_conf.field_sep) { ··· 2246 2350 int unset __maybe_unused) 2247 2351 { 2248 2352 char *s, *tmp, *tok; 2249 - int ret = 0; 2250 2353 2251 2354 s = strdup(str); 2252 2355 if (s == NULL) 2253 2356 return -1; 2254 2357 2255 2358 for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) { 2256 - unsigned int flags = get_type_flag(tok); 2359 + bool found = false; 2257 2360 2258 - if (flags == -1U) { 2259 - pr_err("Unknown lock flags: %s\n", tok); 2260 - ret = -1; 2261 - break; 2361 + /* `tok` is a flags name if it contains ':'. */ 2362 + if (strchr(tok, ':')) { 2363 + for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) { 2364 + if (!strcmp(lock_type_table[i].flags_name, tok) && 2365 + add_lock_type(lock_type_table[i].flags)) { 2366 + found = true; 2367 + break; 2368 + } 2369 + } 2370 + 2371 + if (!found) { 2372 + pr_err("Unknown lock flags name: %s\n", tok); 2373 + free(s); 2374 + return -1; 2375 + } 2376 + 2377 + continue; 2262 2378 } 2263 2379 2264 - if (!add_lock_type(flags)) { 2265 - ret = -1; 2266 - break; 2380 + /* 2381 + * Otherwise `tok` is a lock name. 2382 + * Single lock name could contain multiple flags. 2383 + * Replace alias `pcpu-sem` with actual name `percpu-rwsem. 
2384 + */ 2385 + if (!strcmp(tok, "pcpu-sem")) 2386 + tok = (char *)"percpu-rwsem"; 2387 + for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) { 2388 + if (!strcmp(lock_type_table[i].lock_name, tok)) { 2389 + if (add_lock_type(lock_type_table[i].flags)) { 2390 + found = true; 2391 + } else { 2392 + free(s); 2393 + return -1; 2394 + } 2395 + } 2267 2396 } 2397 + 2398 + if (!found) { 2399 + pr_err("Unknown lock name: %s\n", tok); 2400 + free(s); 2401 + return -1; 2402 + } 2403 + 2268 2404 } 2269 2405 2270 2406 free(s); 2271 - return ret; 2407 + return 0; 2272 2408 } 2273 2409 2274 2410 static bool add_lock_addr(unsigned long addr) ··· 2340 2412 return true; 2341 2413 } 2342 2414 2415 + static bool add_lock_slab(char *name) 2416 + { 2417 + char **tmp; 2418 + char *sym = strdup(name); 2419 + 2420 + if (sym == NULL) { 2421 + pr_err("Memory allocation failure\n"); 2422 + return false; 2423 + } 2424 + 2425 + tmp = realloc(filters.slabs, (filters.nr_slabs + 1) * sizeof(*filters.slabs)); 2426 + if (tmp == NULL) { 2427 + pr_err("Memory allocation failure\n"); 2428 + return false; 2429 + } 2430 + 2431 + tmp[filters.nr_slabs++] = sym; 2432 + filters.slabs = tmp; 2433 + return true; 2434 + } 2435 + 2343 2436 static int parse_lock_addr(const struct option *opt __maybe_unused, const char *str, 2344 2437 int unset __maybe_unused) 2345 2438 { ··· 2384 2435 continue; 2385 2436 } 2386 2437 2438 + if (*tok == '&') { 2439 + if (!add_lock_slab(tok + 1)) { 2440 + ret = -1; 2441 + break; 2442 + } 2443 + continue; 2444 + } 2445 + 2387 2446 /* 2388 2447 * At this moment, we don't have kernel symbols. Save the symbols 2389 2448 * in a separate list and resolve them to addresses later. 
··· 2400 2443 ret = -1; 2401 2444 break; 2402 2445 } 2403 - } 2404 - 2405 - free(s); 2406 - return ret; 2407 - } 2408 - 2409 - static int parse_call_stack(const struct option *opt __maybe_unused, const char *str, 2410 - int unset __maybe_unused) 2411 - { 2412 - char *s, *tmp, *tok; 2413 - int ret = 0; 2414 - 2415 - s = strdup(str); 2416 - if (s == NULL) 2417 - return -1; 2418 - 2419 - for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) { 2420 - struct callstack_filter *entry; 2421 - 2422 - entry = malloc(sizeof(*entry) + strlen(tok) + 1); 2423 - if (entry == NULL) { 2424 - pr_err("Memory allocation failure\n"); 2425 - free(s); 2426 - return -1; 2427 - } 2428 - 2429 - strcpy(entry->name, tok); 2430 - list_add_tail(&entry->list, &callstack_filters); 2431 2446 } 2432 2447 2433 2448 free(s);
+1
tools/perf/builtin-mem.c
··· 4 4 #include <sys/stat.h> 5 5 #include <unistd.h> 6 6 #include "builtin.h" 7 + #include "perf.h" 7 8 8 9 #include <subcmd/parse-options.h> 9 10 #include "util/auxtrace.h"
+3 -3
tools/perf/builtin-record.c
··· 860 860 if (err) 861 861 return err; 862 862 863 - auxtrace_regroup_aux_output(rec->evlist); 863 + err = auxtrace_parse_aux_action(rec->evlist); 864 + if (err) 865 + return err; 864 866 865 867 return auxtrace_parse_filters(rec->evlist); 866 868 } ··· 1750 1748 if (rec->no_buildid) 1751 1749 perf_header__clear_feat(&session->header, HEADER_BUILD_ID); 1752 1750 1753 - #ifdef HAVE_LIBTRACEEVENT 1754 1751 if (!have_tracepoints(&rec->evlist->core.entries)) 1755 1752 perf_header__clear_feat(&session->header, HEADER_TRACING_DATA); 1756 - #endif 1757 1753 1758 1754 if (!rec->opts.branch_stack) 1759 1755 perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);
+2 -4
tools/perf/builtin-report.c
··· 348 348 struct report *rep = container_of(tool, struct report, tool); 349 349 350 350 if (rep->show_threads) { 351 - const char *name = evsel__name(evsel); 352 351 int err = perf_read_values_add_value(&rep->show_threads_values, 353 352 event->read.pid, event->read.tid, 354 - evsel->core.idx, 355 - name, 353 + evsel, 356 354 event->read.value); 357 355 358 356 if (err) ··· 1420 1422 OPT_STRING(0, "addr2line", &addr2line_path, "path", 1421 1423 "addr2line binary to use for line numbers"), 1422 1424 OPT_BOOLEAN(0, "demangle", &symbol_conf.demangle, 1423 - "Disable symbol demangling"), 1425 + "Symbol demangling. Enabled by default, use --no-demangle to disable."), 1424 1426 OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel, 1425 1427 "Enable kernel symbol demangling"), 1426 1428 OPT_BOOLEAN(0, "mem-mode", &report.mem_mode, "mem access profile"),
+1
tools/perf/builtin-sched.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include "builtin.h" 3 + #include "perf.h" 3 4 #include "perf-sys.h" 4 5 5 6 #include "util/cpumap.h"
+60 -344
tools/perf/builtin-script.c
··· 85 85 static bool print_flags; 86 86 static const char *cpu_list; 87 87 static DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS); 88 - static struct perf_stat_config stat_config; 89 88 static int max_blocks; 90 89 static bool native_arch; 91 90 static struct dlfilter *dlfilter; 92 91 static int dlargc; 93 92 static char **dlargv; 94 - 95 - unsigned int scripting_max_stack = PERF_MAX_STACK_DEPTH; 96 93 97 94 enum perf_output_field { 98 95 PERF_OUTPUT_COMM = 1ULL << 0, ··· 220 223 OUTPUT_TYPE_OTHER, 221 224 OUTPUT_TYPE_MAX 222 225 }; 226 + 227 + // We need to refactor the evsel->priv use in 'perf script' to allow for 228 + // using that area, that is being used only in some cases. 229 + #define OUTPUT_TYPE_UNSET -1 223 230 224 231 /* default set to maintain compatibility with current format */ 225 232 static struct { ··· 398 397 return OUTPUT_TYPE_OTHER; 399 398 } 400 399 400 + static inline int evsel__output_type(struct evsel *evsel) 401 + { 402 + if (evsel->script_output_type == OUTPUT_TYPE_UNSET) 403 + evsel->script_output_type = output_type(evsel->core.attr.type); 404 + 405 + return evsel->script_output_type; 406 + } 407 + 401 408 static bool output_set_by_user(void) 402 409 { 403 410 int j; ··· 430 421 return str; 431 422 } 432 423 433 - #define PRINT_FIELD(x) (output[output_type(attr->type)].fields & PERF_OUTPUT_##x) 424 + #define PRINT_FIELD(x) (output[evsel__output_type(evsel)].fields & PERF_OUTPUT_##x) 434 425 435 426 static int evsel__do_check_stype(struct evsel *evsel, u64 sample_type, const char *sample_msg, 436 427 enum perf_output_field field, bool allow_user_set) 437 428 { 438 429 struct perf_event_attr *attr = &evsel->core.attr; 439 - int type = output_type(attr->type); 430 + int type = evsel__output_type(evsel); 440 431 const char *evname; 441 432 442 433 if (attr->sample_type & sample_type) ··· 470 461 471 462 static int evsel__check_attr(struct evsel *evsel, struct perf_session *session) 472 463 { 473 - struct perf_event_attr *attr = 
&evsel->core.attr; 474 464 bool allow_user_set; 475 465 476 466 if (evsel__is_dummy_event(evsel)) ··· 586 578 return 0; 587 579 } 588 580 589 - static void set_print_ip_opts(struct perf_event_attr *attr) 581 + static void evsel__set_print_ip_opts(struct evsel *evsel) 590 582 { 591 - unsigned int type = output_type(attr->type); 583 + unsigned int type = evsel__output_type(evsel); 592 584 593 585 output[type].print_ip_opts = 0; 594 586 if (PRINT_FIELD(IP)) ··· 618 610 evlist__for_each_entry(evlist, evsel) { 619 611 if (evsel__is_dummy_event(evsel)) 620 612 continue; 621 - if (output_type(evsel->core.attr.type) == (int)type) 613 + if (evsel__output_type(evsel) == (int)type) 622 614 return evsel; 623 615 } 624 616 return NULL; ··· 660 652 if (output[j].fields & PERF_OUTPUT_DSOFF) 661 653 output[j].fields |= PERF_OUTPUT_DSO; 662 654 663 - set_print_ip_opts(&evsel->core.attr); 655 + evsel__set_print_ip_opts(evsel); 664 656 tod |= output[j].fields & PERF_OUTPUT_TOD; 665 657 } 666 658 ··· 696 688 output[j].fields |= PERF_OUTPUT_SYM; 697 689 output[j].fields |= PERF_OUTPUT_SYMOFFSET; 698 690 output[j].fields |= PERF_OUTPUT_DSO; 699 - set_print_ip_opts(&evsel->core.attr); 691 + evsel__set_print_ip_opts(evsel); 700 692 goto out; 701 693 } 702 694 } ··· 800 792 struct evsel *evsel, 801 793 u32 type, FILE *fp) 802 794 { 803 - struct perf_event_attr *attr = &evsel->core.attr; 804 795 unsigned long secs; 805 796 unsigned long long nsecs; 806 797 int printed = 0; ··· 951 944 952 945 static int perf_sample__fprintf_brstack(struct perf_sample *sample, 953 946 struct thread *thread, 954 - struct perf_event_attr *attr, FILE *fp) 947 + struct evsel *evsel, FILE *fp) 955 948 { 956 949 struct branch_stack *br = sample->branch_stack; 957 950 struct branch_entry *entries = perf_sample__branch_entries(sample); ··· 990 983 991 984 static int perf_sample__fprintf_brstacksym(struct perf_sample *sample, 992 985 struct thread *thread, 993 - struct perf_event_attr *attr, FILE *fp) 986 + struct 
evsel *evsel, FILE *fp) 994 987 { 995 988 struct branch_stack *br = sample->branch_stack; 996 989 struct branch_entry *entries = perf_sample__branch_entries(sample); ··· 1028 1021 1029 1022 static int perf_sample__fprintf_brstackoff(struct perf_sample *sample, 1030 1023 struct thread *thread, 1031 - struct perf_event_attr *attr, FILE *fp) 1024 + struct evsel *evsel, FILE *fp) 1032 1025 { 1033 1026 struct branch_stack *br = sample->branch_stack; 1034 1027 struct branch_entry *entries = perf_sample__branch_entries(sample); ··· 1195 1188 return ret; 1196 1189 } 1197 1190 1198 - static int any_dump_insn(struct perf_event_attr *attr __maybe_unused, 1191 + static int any_dump_insn(struct evsel *evsel __maybe_unused, 1199 1192 struct perf_insn *x, uint64_t ip, 1200 1193 u8 *inbuf, int inlen, int *lenp, 1201 1194 FILE *fp) ··· 1223 1216 static int ip__fprintf_jump(uint64_t ip, struct branch_entry *en, 1224 1217 struct perf_insn *x, u8 *inbuf, int len, 1225 1218 int insn, FILE *fp, int *total_cycles, 1226 - struct perf_event_attr *attr, 1227 - struct thread *thread, 1228 1219 struct evsel *evsel, 1220 + struct thread *thread, 1229 1221 u64 br_cntr) 1230 1222 { 1231 1223 int ilen = 0; 1232 1224 int printed = fprintf(fp, "\t%016" PRIx64 "\t", ip); 1233 1225 1234 - printed += add_padding(fp, any_dump_insn(attr, x, ip, inbuf, len, &ilen, fp), 30); 1226 + printed += add_padding(fp, any_dump_insn(evsel, x, ip, inbuf, len, &ilen, fp), 30); 1235 1227 printed += fprintf(fp, "\t"); 1236 1228 1237 1229 if (PRINT_FIELD(BRSTACKINSNLEN)) ··· 1286 1280 1287 1281 static int ip__fprintf_sym(uint64_t addr, struct thread *thread, 1288 1282 u8 cpumode, int cpu, struct symbol **lastsym, 1289 - struct perf_event_attr *attr, FILE *fp) 1283 + struct evsel *evsel, FILE *fp) 1290 1284 { 1291 1285 struct addr_location al; 1292 1286 int off, printed = 0, ret = 0; ··· 1362 1356 machine, thread, &x.is64bit, &x.cpumode, false); 1363 1357 if (len > 0) { 1364 1358 printed += ip__fprintf_sym(entries[nr - 
1].from, thread, 1365 - x.cpumode, x.cpu, &lastsym, attr, fp); 1359 + x.cpumode, x.cpu, &lastsym, evsel, fp); 1366 1360 printed += ip__fprintf_jump(entries[nr - 1].from, &entries[nr - 1], 1367 1361 &x, buffer, len, 0, fp, &total_cycles, 1368 - attr, thread, evsel, br_cntr); 1362 + evsel, thread, br_cntr); 1369 1363 if (PRINT_FIELD(SRCCODE)) 1370 1364 printed += print_srccode(thread, x.cpumode, entries[nr - 1].from); 1371 1365 } ··· 1393 1387 for (off = 0; off < (unsigned)len; off += ilen) { 1394 1388 uint64_t ip = start + off; 1395 1389 1396 - printed += ip__fprintf_sym(ip, thread, x.cpumode, x.cpu, &lastsym, attr, fp); 1390 + printed += ip__fprintf_sym(ip, thread, x.cpumode, x.cpu, &lastsym, evsel, fp); 1397 1391 if (ip == end) { 1398 1392 if (PRINT_FIELD(BRCNTR) && sample->branch_stack_cntr) 1399 1393 br_cntr = sample->branch_stack_cntr[i]; 1400 1394 printed += ip__fprintf_jump(ip, &entries[i], &x, buffer + off, len - off, ++insn, fp, 1401 - &total_cycles, attr, thread, evsel, br_cntr); 1395 + &total_cycles, evsel, thread, br_cntr); 1402 1396 if (PRINT_FIELD(SRCCODE)) 1403 1397 printed += print_srccode(thread, x.cpumode, ip); 1404 1398 break; 1405 1399 } else { 1406 1400 ilen = 0; 1407 1401 printed += fprintf(fp, "\t%016" PRIx64 "\t", ip); 1408 - printed += any_dump_insn(attr, &x, ip, buffer + off, len - off, &ilen, fp); 1402 + printed += any_dump_insn(evsel, &x, ip, buffer + off, len - off, &ilen, fp); 1409 1403 if (PRINT_FIELD(BRSTACKINSNLEN)) 1410 1404 printed += fprintf(fp, "\tilen: %d", ilen); 1411 1405 printed += fprintf(fp, "\n"); ··· 1444 1438 end = start + 128; 1445 1439 } 1446 1440 len = grab_bb(buffer, start, end, machine, thread, &x.is64bit, &x.cpumode, true); 1447 - printed += ip__fprintf_sym(start, thread, x.cpumode, x.cpu, &lastsym, attr, fp); 1441 + printed += ip__fprintf_sym(start, thread, x.cpumode, x.cpu, &lastsym, evsel, fp); 1448 1442 if (len <= 0) { 1449 1443 /* Print at least last IP if basic block did not work */ 1450 1444 len = 
grab_bb(buffer, sample->ip, sample->ip, ··· 1453 1447 goto out; 1454 1448 ilen = 0; 1455 1449 printed += fprintf(fp, "\t%016" PRIx64 "\t", sample->ip); 1456 - printed += any_dump_insn(attr, &x, sample->ip, buffer, len, &ilen, fp); 1450 + printed += any_dump_insn(evsel, &x, sample->ip, buffer, len, &ilen, fp); 1457 1451 if (PRINT_FIELD(BRSTACKINSNLEN)) 1458 1452 printed += fprintf(fp, "\tilen: %d", ilen); 1459 1453 printed += fprintf(fp, "\n"); ··· 1464 1458 for (off = 0; off <= end - start; off += ilen) { 1465 1459 ilen = 0; 1466 1460 printed += fprintf(fp, "\t%016" PRIx64 "\t", start + off); 1467 - printed += any_dump_insn(attr, &x, start + off, buffer + off, len - off, &ilen, fp); 1461 + printed += any_dump_insn(evsel, &x, start + off, buffer + off, len - off, &ilen, fp); 1468 1462 if (PRINT_FIELD(BRSTACKINSNLEN)) 1469 1463 printed += fprintf(fp, "\tilen: %d", ilen); 1470 1464 printed += fprintf(fp, "\n"); ··· 1488 1482 1489 1483 static int perf_sample__fprintf_addr(struct perf_sample *sample, 1490 1484 struct thread *thread, 1491 - struct perf_event_attr *attr, FILE *fp) 1485 + struct evsel *evsel, FILE *fp) 1492 1486 { 1493 1487 struct addr_location al; 1494 1488 int printed = fprintf(fp, "%16" PRIx64, sample->addr); 1495 1489 1496 1490 addr_location__init(&al); 1497 - if (!sample_addr_correlates_sym(attr)) 1491 + if (!sample_addr_correlates_sym(&evsel->core.attr)) 1498 1492 goto out; 1499 1493 1500 1494 thread__resolve(thread, &al, sample); ··· 1521 1515 struct addr_location *addr_al, 1522 1516 u64 *ip) 1523 1517 { 1524 - struct perf_event_attr *attr = &evsel->core.attr; 1525 1518 const char *name = NULL; 1526 1519 1527 1520 if (sample->flags & (PERF_IP_FLAG_CALL | PERF_IP_FLAG_TRACE_BEGIN)) { 1528 - if (sample_addr_correlates_sym(attr)) { 1521 + if (sample_addr_correlates_sym(&evsel->core.attr)) { 1529 1522 if (!addr_al->thread) 1530 1523 thread__resolve(thread, addr_al, sample); 1531 1524 if (addr_al->sym) ··· 1550 1545 struct addr_location *addr_al, 1551 
1546 FILE *fp) 1552 1547 { 1553 - struct perf_event_attr *attr = &evsel->core.attr; 1554 1548 size_t depth = thread_stack__depth(thread, sample->cpu); 1555 1549 const char *name = NULL; 1556 1550 static int spacing; ··· 1593 1589 return len + dlen; 1594 1590 } 1595 1591 1596 - __weak void arch_fetch_insn(struct perf_sample *sample __maybe_unused, 1597 - struct thread *thread __maybe_unused, 1598 - struct machine *machine __maybe_unused) 1599 - { 1600 - } 1601 - 1602 - void script_fetch_insn(struct perf_sample *sample, struct thread *thread, 1603 - struct machine *machine) 1604 - { 1605 - if (sample->insn_len == 0 && native_arch) 1606 - arch_fetch_insn(sample, thread, machine); 1607 - } 1608 - 1609 1592 static int perf_sample__fprintf_insn(struct perf_sample *sample, 1610 1593 struct evsel *evsel, 1611 1594 struct perf_event_attr *attr, ··· 1602 1611 { 1603 1612 int printed = 0; 1604 1613 1605 - script_fetch_insn(sample, thread, machine); 1614 + script_fetch_insn(sample, thread, machine, native_arch); 1606 1615 1607 1616 if (PRINT_FIELD(INSNLEN)) 1608 1617 printed += fprintf(fp, " ilen: %d", sample->insn_len); ··· 1621 1630 } 1622 1631 1623 1632 static int perf_sample__fprintf_ipc(struct perf_sample *sample, 1624 - struct perf_event_attr *attr, FILE *fp) 1633 + struct evsel *evsel, FILE *fp) 1625 1634 { 1626 1635 unsigned int ipc; 1627 1636 ··· 1642 1651 struct machine *machine, FILE *fp) 1643 1652 { 1644 1653 struct perf_event_attr *attr = &evsel->core.attr; 1645 - unsigned int type = output_type(attr->type); 1654 + unsigned int type = evsel__output_type(evsel); 1646 1655 bool print_srcline_last = false; 1647 1656 int printed = 0; 1648 1657 ··· 1679 1688 ((evsel->core.attr.sample_type & PERF_SAMPLE_ADDR) && 1680 1689 !output[type].user_set)) { 1681 1690 printed += fprintf(fp, " => "); 1682 - printed += perf_sample__fprintf_addr(sample, thread, attr, fp); 1691 + printed += perf_sample__fprintf_addr(sample, thread, evsel, fp); 1683 1692 } 1684 1693 1685 - printed += 
perf_sample__fprintf_ipc(sample, attr, fp); 1694 + printed += perf_sample__fprintf_ipc(sample, evsel, fp); 1686 1695 1687 1696 if (print_srcline_last) 1688 1697 printed += map__fprintf_srcline(al->map, al->addr, "\n ", fp); ··· 1698 1707 } 1699 1708 } 1700 1709 return printed; 1701 - } 1702 - 1703 - static struct { 1704 - u32 flags; 1705 - const char *name; 1706 - } sample_flags[] = { 1707 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL, "call"}, 1708 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN, "return"}, 1709 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CONDITIONAL, "jcc"}, 1710 - {PERF_IP_FLAG_BRANCH, "jmp"}, 1711 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_INTERRUPT, "int"}, 1712 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN | PERF_IP_FLAG_INTERRUPT, "iret"}, 1713 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_SYSCALLRET, "syscall"}, 1714 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN | PERF_IP_FLAG_SYSCALLRET, "sysret"}, 1715 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_ASYNC, "async"}, 1716 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_ASYNC | PERF_IP_FLAG_INTERRUPT, "hw int"}, 1717 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TX_ABORT, "tx abrt"}, 1718 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TRACE_BEGIN, "tr strt"}, 1719 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TRACE_END, "tr end"}, 1720 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_VMENTRY, "vmentry"}, 1721 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_VMEXIT, "vmexit"}, 1722 - {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_BRANCH_MISS, "br miss"}, 1723 - {0, NULL} 1724 - }; 1725 - 1726 - static const char *sample_flags_to_name(u32 flags) 1727 - { 1728 - int i; 1729 - 1730 - for (i = 0; sample_flags[i].name ; i++) { 1731 - if (sample_flags[i].flags == flags) 1732 - return sample_flags[i].name; 1733 - } 1734 - 1735 - return NULL; 1736 - } 1737 - 1738 - int perf_sample__sprintf_flags(u32 flags, char *str, size_t sz) 1739 - { 1740 - u32 xf = PERF_IP_FLAG_IN_TX | PERF_IP_FLAG_INTR_DISABLE | 
1741 - PERF_IP_FLAG_INTR_TOGGLE; 1742 - const char *chars = PERF_IP_FLAG_CHARS; 1743 - const size_t n = strlen(PERF_IP_FLAG_CHARS); 1744 - const char *name = NULL; 1745 - size_t i, pos = 0; 1746 - char xs[16] = {0}; 1747 - 1748 - if (flags & xf) 1749 - snprintf(xs, sizeof(xs), "(%s%s%s)", 1750 - flags & PERF_IP_FLAG_IN_TX ? "x" : "", 1751 - flags & PERF_IP_FLAG_INTR_DISABLE ? "D" : "", 1752 - flags & PERF_IP_FLAG_INTR_TOGGLE ? "t" : ""); 1753 - 1754 - name = sample_flags_to_name(flags & ~xf); 1755 - if (name) 1756 - return snprintf(str, sz, "%-15s%6s", name, xs); 1757 - 1758 - if (flags & PERF_IP_FLAG_TRACE_BEGIN) { 1759 - name = sample_flags_to_name(flags & ~(xf | PERF_IP_FLAG_TRACE_BEGIN)); 1760 - if (name) 1761 - return snprintf(str, sz, "tr strt %-7s%6s", name, xs); 1762 - } 1763 - 1764 - if (flags & PERF_IP_FLAG_TRACE_END) { 1765 - name = sample_flags_to_name(flags & ~(xf | PERF_IP_FLAG_TRACE_END)); 1766 - if (name) 1767 - return snprintf(str, sz, "tr end %-7s%6s", name, xs); 1768 - } 1769 - 1770 - for (i = 0; i < n; i++, flags >>= 1) { 1771 - if ((flags & 1) && pos < sz) 1772 - str[pos++] = chars[i]; 1773 - } 1774 - for (; i < 32; i++, flags >>= 1) { 1775 - if ((flags & 1) && pos < sz) 1776 - str[pos++] = '?'; 1777 - } 1778 - if (pos < sz) 1779 - str[pos] = 0; 1780 - 1781 - return pos; 1782 1710 } 1783 1711 1784 1712 static int perf_sample__fprintf_flags(u32 flags, FILE *fp) ··· 2164 2254 { 2165 2255 struct thread *thread = al->thread; 2166 2256 struct perf_event_attr *attr = &evsel->core.attr; 2167 - unsigned int type = output_type(attr->type); 2257 + unsigned int type = evsel__output_type(evsel); 2168 2258 struct evsel_script *es = evsel->priv; 2169 2259 FILE *fp = es->fp; 2170 2260 char str[PAGE_SIZE_NAME_LEN]; ··· 2199 2289 } 2200 2290 #ifdef HAVE_LIBTRACEEVENT 2201 2291 if (PRINT_FIELD(TRACE) && sample->raw_data) { 2202 - event_format__fprintf(evsel->tp_format, sample->cpu, 2203 - sample->raw_data, sample->raw_size, fp); 2292 + const struct tep_event 
*tp_format = evsel__tp_format(evsel); 2293 + 2294 + if (tp_format) { 2295 + event_format__fprintf(tp_format, sample->cpu, 2296 + sample->raw_data, sample->raw_size, 2297 + fp); 2298 + } 2204 2299 } 2205 2300 #endif 2206 2301 if (attr->type == PERF_TYPE_SYNTH && PRINT_FIELD(SYNTH)) 2207 2302 perf_sample__fprintf_synth(sample, evsel, fp); 2208 2303 2209 2304 if (PRINT_FIELD(ADDR)) 2210 - perf_sample__fprintf_addr(sample, thread, attr, fp); 2305 + perf_sample__fprintf_addr(sample, thread, evsel, fp); 2211 2306 2212 2307 if (PRINT_FIELD(DATA_SRC)) 2213 2308 data_src__fprintf(sample->data_src, fp); ··· 2262 2347 perf_sample__fprintf_uregs(sample, attr, arch, fp); 2263 2348 2264 2349 if (PRINT_FIELD(BRSTACK)) 2265 - perf_sample__fprintf_brstack(sample, thread, attr, fp); 2350 + perf_sample__fprintf_brstack(sample, thread, evsel, fp); 2266 2351 else if (PRINT_FIELD(BRSTACKSYM)) 2267 - perf_sample__fprintf_brstacksym(sample, thread, attr, fp); 2352 + perf_sample__fprintf_brstacksym(sample, thread, evsel, fp); 2268 2353 else if (PRINT_FIELD(BRSTACKOFF)) 2269 - perf_sample__fprintf_brstackoff(sample, thread, attr, fp); 2354 + perf_sample__fprintf_brstackoff(sample, thread, evsel, fp); 2270 2355 2271 2356 if (evsel__is_bpf_output(evsel) && PRINT_FIELD(BPF_OUTPUT)) 2272 2357 perf_sample__fprintf_bpf_output(sample, fp); ··· 2281 2366 if (PRINT_FIELD(CODE_PAGE_SIZE)) 2282 2367 fprintf(fp, " %s", get_page_size_name(sample->code_page_size, str)); 2283 2368 2284 - perf_sample__fprintf_ipc(sample, attr, fp); 2369 + perf_sample__fprintf_ipc(sample, evsel, fp); 2285 2370 2286 2371 fprintf(fp, "\n"); 2287 2372 ··· 2514 2599 sample_type & PERF_SAMPLE_BRANCH_STACK || 2515 2600 (sample_type & PERF_SAMPLE_REGS_USER && 2516 2601 sample_type & PERF_SAMPLE_STACK_USER))) { 2517 - int type = output_type(evsel->core.attr.type); 2602 + int type = evsel__output_type(evsel); 2518 2603 2519 2604 if (!(output[type].user_unset_fields & PERF_OUTPUT_IP)) 2520 2605 output[type].fields |= PERF_OUTPUT_IP; 
2521 2606 if (!(output[type].user_unset_fields & PERF_OUTPUT_SYM)) 2522 2607 output[type].fields |= PERF_OUTPUT_SYM; 2523 2608 } 2524 - set_print_ip_opts(&evsel->core.attr); 2609 + evsel__set_print_ip_opts(evsel); 2525 2610 return 0; 2526 2611 } 2527 2612 ··· 2874 2959 return ret; 2875 2960 } 2876 2961 2877 - struct script_spec { 2878 - struct list_head node; 2879 - struct scripting_ops *ops; 2880 - char spec[]; 2881 - }; 2882 - 2883 - static LIST_HEAD(script_specs); 2884 - 2885 - static struct script_spec *script_spec__new(const char *spec, 2886 - struct scripting_ops *ops) 2962 + static int list_available_languages_cb(struct scripting_ops *ops, const char *spec) 2887 2963 { 2888 - struct script_spec *s = malloc(sizeof(*s) + strlen(spec) + 1); 2889 - 2890 - if (s != NULL) { 2891 - strcpy(s->spec, spec); 2892 - s->ops = ops; 2893 - } 2894 - 2895 - return s; 2896 - } 2897 - 2898 - static void script_spec__add(struct script_spec *s) 2899 - { 2900 - list_add_tail(&s->node, &script_specs); 2901 - } 2902 - 2903 - static struct script_spec *script_spec__find(const char *spec) 2904 - { 2905 - struct script_spec *s; 2906 - 2907 - list_for_each_entry(s, &script_specs, node) 2908 - if (strcasecmp(s->spec, spec) == 0) 2909 - return s; 2910 - return NULL; 2911 - } 2912 - 2913 - int script_spec_register(const char *spec, struct scripting_ops *ops) 2914 - { 2915 - struct script_spec *s; 2916 - 2917 - s = script_spec__find(spec); 2918 - if (s) 2919 - return -1; 2920 - 2921 - s = script_spec__new(spec, ops); 2922 - if (!s) 2923 - return -1; 2924 - else 2925 - script_spec__add(s); 2926 - 2964 + fprintf(stderr, " %-42s [%s]\n", spec, ops->name); 2927 2965 return 0; 2928 - } 2929 - 2930 - static struct scripting_ops *script_spec__lookup(const char *spec) 2931 - { 2932 - struct script_spec *s = script_spec__find(spec); 2933 - if (!s) 2934 - return NULL; 2935 - 2936 - return s->ops; 2937 2966 } 2938 2967 2939 2968 static void list_available_languages(void) 2940 2969 { 2941 - struct 
	script_spec *s;
-
	fprintf(stderr, "\n");
	fprintf(stderr, "Scripting language extensions (used in "
		"perf script -s [spec:]script.[spec]):\n\n");
-
-	list_for_each_entry(s, &script_specs, node)
-		fprintf(stderr, "  %-42s [%s]\n", s->spec, s->ops->name);
-
+	script_spec__for_each(&list_available_languages_cb);
	fprintf(stderr, "\n");
}
···
	while (dlargc--)
		free(dlargv[dlargc]);
	free(dlargv);
-}
-
-/*
- * Some scripts specify the required events in their "xxx-record" file,
- * this function will check if the events in perf.data match those
- * mentioned in the "xxx-record".
- *
- * Fixme: All existing "xxx-record" are all in good formats "-e event ",
- * which is covered well now. And new parsing code should be added to
- * cover the future complex formats like event groups etc.
- */
-static int check_ev_match(char *dir_name, char *scriptname,
-			  struct perf_session *session)
-{
-	char filename[MAXPATHLEN], evname[128];
-	char line[BUFSIZ], *p;
-	struct evsel *pos;
-	int match, len;
-	FILE *fp;
-
-	scnprintf(filename, MAXPATHLEN, "%s/bin/%s-record", dir_name, scriptname);
-
-	fp = fopen(filename, "r");
-	if (!fp)
-		return -1;
-
-	while (fgets(line, sizeof(line), fp)) {
-		p = skip_spaces(line);
-		if (*p == '#')
-			continue;
-
-		while (strlen(p)) {
-			p = strstr(p, "-e");
-			if (!p)
-				break;
-
-			p += 2;
-			p = skip_spaces(p);
-			len = strcspn(p, " \t");
-			if (!len)
-				break;
-
-			snprintf(evname, len + 1, "%s", p);
-
-			match = 0;
-			evlist__for_each_entry(session->evlist, pos) {
-				if (evsel__name_is(pos, evname)) {
-					match = 1;
-					break;
-				}
-			}
-
-			if (!match) {
-				fclose(fp);
-				return -1;
-			}
-		}
-	}
-
-	fclose(fp);
-	return 0;
-}
-
-/*
- * Return -1 if none is found, otherwise the actual scripts number.
- *
- * Currently the only user of this function is the script browser, which
- * will list all statically runnable scripts, select one, execute it and
- * show the output in a perf browser.
- */
-int find_scripts(char **scripts_array, char **scripts_path_array, int num,
-		 int pathlen)
-{
-	struct dirent *script_dirent, *lang_dirent;
-	char scripts_path[MAXPATHLEN], lang_path[MAXPATHLEN];
-	DIR *scripts_dir, *lang_dir;
-	struct perf_session *session;
-	struct perf_data data = {
-		.path = input_name,
-		.mode = PERF_DATA_MODE_READ,
-	};
-	char *temp;
-	int i = 0;
-
-	session = perf_session__new(&data, NULL);
-	if (IS_ERR(session))
-		return PTR_ERR(session);
-
-	snprintf(scripts_path, MAXPATHLEN, "%s/scripts", get_argv_exec_path());
-
-	scripts_dir = opendir(scripts_path);
-	if (!scripts_dir) {
-		perf_session__delete(session);
-		return -1;
-	}
-
-	for_each_lang(scripts_path, scripts_dir, lang_dirent) {
-		scnprintf(lang_path, MAXPATHLEN, "%s/%s", scripts_path,
-			  lang_dirent->d_name);
-#ifndef HAVE_LIBPERL_SUPPORT
-		if (strstr(lang_path, "perl"))
-			continue;
-#endif
-#ifndef HAVE_LIBPYTHON_SUPPORT
-		if (strstr(lang_path, "python"))
-			continue;
-#endif
-
-		lang_dir = opendir(lang_path);
-		if (!lang_dir)
-			continue;
-
-		for_each_script(lang_path, lang_dir, script_dirent) {
-			/* Skip those real time scripts: xxxtop.p[yl] */
-			if (strstr(script_dirent->d_name, "top."))
-				continue;
-			if (i >= num)
-				break;
-			snprintf(scripts_path_array[i], pathlen, "%s/%s",
-				 lang_path,
-				 script_dirent->d_name);
-			temp = strchr(script_dirent->d_name, '.');
-			snprintf(scripts_array[i],
-				 (temp - script_dirent->d_name) + 1,
-				 "%s", script_dirent->d_name);
-
-			if (check_ev_match(lang_path,
-					   scripts_array[i], session))
-				continue;
-
-			i++;
-		}
-		closedir(lang_dir);
-	}
-
-	closedir(scripts_dir);
-	perf_session__delete(session);
-	return i;
}

static char *get_script_path(const char *script_root, const char *suffix)
-27
tools/perf/builtin-stat.c
···
	.uid = UINT_MAX,
};

-#define METRIC_ONLY_LEN 20
-
static volatile sig_atomic_t child_pid = -1;
static int detailed_run = 0;
static bool transaction_run;
···
#define STAT_RECORD perf_stat.record

static volatile sig_atomic_t done = 0;
-
-static struct perf_stat_config stat_config = {
-	.aggr_mode = AGGR_GLOBAL,
-	.aggr_level = MAX_CACHE_LVL + 1,
-	.scale = true,
-	.unit_width = 4, /* strlen("unit") */
-	.run_count = 1,
-	.metric_only_len = METRIC_ONLY_LEN,
-	.walltime_nsecs_stats = &walltime_nsecs_stats,
-	.ru_stats = &ru_stats,
-	.big_num = true,
-	.ctl_fd = -1,
-	.ctl_fd_ack = -1,
-	.iostat_run = false,
-};

/* Options set from the command line. */
struct opt_aggr_mode {
···
	signal(signr, SIG_DFL);
	kill(getpid(), signr);
-}
-
-void perf_stat__set_big_num(int set)
-{
-	stat_config.big_num = (set != 0);
-}
-
-void perf_stat__set_no_csv_summary(int set)
-{
-	stat_config.no_csv_summary = (set != 0);
}

static int stat__set_big_num(const struct option *opt __maybe_unused,
+3 -3
tools/perf/builtin-top.c
···

	if (top->evlist->enabled) {
		if (top->zero)
-			symbol__annotate_zero_histogram(symbol, top->sym_evsel->core.idx);
+			symbol__annotate_zero_histogram(symbol, top->sym_evsel);
		else
-			symbol__annotate_decay_histogram(symbol, top->sym_evsel->core.idx);
+			symbol__annotate_decay_histogram(symbol, top->sym_evsel);
	}
	if (more != 0)
		printf("%d lines not displayed, maybe increase display entries [e]\n", more);
···
	 * invalid --vmlinux ;-)
	 */
	if (!machine->kptr_restrict_warned && !top->vmlinux_warned &&
-	    __map__is_kernel(al.map) && map__has_symbols(al.map)) {
+	    __map__is_kernel(al.map) && !map__has_symbols(al.map)) {
		if (symbol_conf.vmlinux_name) {
			char serr[256];
+73 -58
tools/perf/builtin-trace.c
···
	}

	if (et->fmt == NULL) {
-		et->fmt = calloc(evsel->tp_format->format.nr_fields, sizeof(struct syscall_arg_fmt));
+		const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+		if (tp_format == NULL)
+			goto out_delete;
+
+		et->fmt = calloc(tp_format->format.nr_fields, sizeof(struct syscall_arg_fmt));
		if (et->fmt == NULL)
			goto out_delete;
	}
···
	  .strtoul = STUL_STRARRAY_FLAGS, \
	  .parm = &strarray__##array, }

-#include "trace/beauty/arch_errno_names.c"
#include "trace/beauty/eventfd.c"
#include "trace/beauty/futex_op.c"
#include "trace/beauty/futex_val3.c"
···
	const char *name = syscalltbl__name(trace->sctbl, id);
	int err;

-#ifdef HAVE_SYSCALL_TABLE_SUPPORT
	if (trace->syscalls.table == NULL) {
		trace->syscalls.table = calloc(trace->sctbl->syscalls.max_id + 1, sizeof(*sc));
		if (trace->syscalls.table == NULL)
			return -ENOMEM;
	}
-#else
-	if (id > trace->sctbl->syscalls.max_id || (id == 0 && trace->syscalls.table == NULL)) {
-		// When using libaudit we don't know beforehand what is the max syscall id
-		struct syscall *table = realloc(trace->syscalls.table, (id + 1) * sizeof(*sc));
-
-		if (table == NULL)
-			return -ENOMEM;
-
-		// Need to memset from offset 0 and +1 members if brand new
-		if (trace->syscalls.table == NULL)
-			memset(table, 0, (id + 1) * sizeof(*sc));
-		else
-			memset(table + trace->sctbl->syscalls.max_id + 1, 0, (id - trace->sctbl->syscalls.max_id) * sizeof(*sc));
-
-		trace->syscalls.table = table;
-		trace->sctbl->syscalls.max_id = id;
-	}
-#endif
	sc = trace->syscalls.table + id;
	if (sc->nonexistent)
		return -EEXIST;
···
	struct syscall_arg_fmt *fmt = evsel__syscall_arg_fmt(evsel);

	if (fmt != NULL) {
-		syscall_arg_fmt__init_array(fmt, evsel->tp_format->format.fields, use_btf);
-		return 0;
+		const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+		if (tp_format) {
+			syscall_arg_fmt__init_array(fmt, tp_format->format.fields, use_btf);
+			return 0;
+		}
	}

	return -ENOMEM;
···

	err = -EINVAL;

-#ifdef HAVE_SYSCALL_TABLE_SUPPORT
	if (id > trace->sctbl->syscalls.max_id) {
-#else
-	if (id >= trace->sctbl->syscalls.max_id) {
-		/*
-		 * With libaudit we don't know beforehand what is the max_id,
-		 * so we let trace__read_syscall_info() figure that out as we
-		 * go on reading syscalls.
-		 */
-		err = trace__read_syscall_info(trace, id);
-		if (err)
-#endif
		goto out_cant_read;
	}
···

static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size, int raw_augmented_args_size)
{
-	void *augmented_args = NULL;
	/*
	 * For now with BPF raw_augmented we hook into raw_syscalls:sys_enter
	 * and there we get all 6 syscall args plus the tracepoint common fields
···
	int args_size = raw_augmented_args_size ?: sc->args_size;

	*augmented_args_size = sample->raw_size - args_size;
-	if (*augmented_args_size > 0)
-		augmented_args = sample->raw_data + args_size;
+	if (*augmented_args_size > 0) {
+		static uintptr_t argbuf[1024]; /* assuming single-threaded */

-	return augmented_args;
+		if ((size_t)(*augmented_args_size) > sizeof(argbuf))
+			return NULL;
+
+		/*
+		 * The perf ring-buffer is 8-byte aligned but sample->raw_data
+		 * is not because it's preceded by u32 size.  Later, beautifier
+		 * will use the augmented args with stricter alignments like in
+		 * some struct.  To make sure it's aligned, let's copy the args
+		 * into a static buffer as it's single-threaded for now.
+		 */
+		memcpy(argbuf, sample->raw_data + args_size, *augmented_args_size);
+
+		return argbuf;
+	}
+	return NULL;
}

static void syscall__exit(struct syscall *sc)
···
{
	char bf[2048];
	size_t size = sizeof(bf);
-	struct tep_format_field *field = evsel->tp_format->format.fields;
+	const struct tep_event *tp_format = evsel__tp_format(evsel);
+	struct tep_format_field *field = tp_format ? tp_format->format.fields : NULL;
	struct syscall_arg_fmt *arg = __evsel__syscall_arg_fmt(evsel);
	size_t printed = 0, btf_printed;
	unsigned long val;
···

	if (evsel__is_bpf_output(evsel)) {
		bpf_output__fprintf(trace, sample);
-	} else if (evsel->tp_format) {
-		if (strncmp(evsel->tp_format->name, "sys_enter_", 10) ||
-		    trace__fprintf_sys_enter(trace, evsel, sample)) {
+	} else {
+		const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+		if (tp_format && (strncmp(tp_format->name, "sys_enter_", 10) ||
+				  trace__fprintf_sys_enter(trace, evsel, sample))) {
			if (trace->libtraceevent_print) {
-				event_format__fprintf(evsel->tp_format, sample->cpu,
+				event_format__fprintf(tp_format, sample->cpu,
						      sample->raw_data, sample->raw_size,
						      trace->output);
			} else {
···
static struct syscall_arg_fmt *evsel__find_syscall_arg_fmt_by_name(struct evsel *evsel, char *arg,
								   char **type)
{
-	struct tep_format_field *field;
	struct syscall_arg_fmt *fmt = __evsel__syscall_arg_fmt(evsel);
+	const struct tep_event *tp_format;

-	if (evsel->tp_format == NULL || fmt == NULL)
+	if (!fmt)
		return NULL;

-	for (field = evsel->tp_format->format.fields; field; field = field->next, ++fmt)
+	tp_format = evsel__tp_format(evsel);
+	if (!tp_format)
+		return NULL;
+
+	for (const struct tep_format_field *field = tp_format->format.fields; field;
+	     field = field->next, ++fmt) {
		if (strcmp(field->name, arg) == 0) {
			*type = field->type;
			return fmt;
		}
+	}

	return NULL;
}
···
	const struct syscall_fmt *scfmt = syscall_fmt__find(name);

	if (scfmt) {
-		int skip = 0;
+		const struct tep_event *tp_format = evsel__tp_format(evsel);

-		if (strcmp(evsel->tp_format->format.fields->name, "__syscall_nr") == 0 ||
-		    strcmp(evsel->tp_format->format.fields->name, "nr") == 0)
-			++skip;
+		if (tp_format) {
+			int skip = 0;

-		memcpy(fmt + skip, scfmt->arg, (evsel->tp_format->format.nr_fields - skip) * sizeof(*fmt));
+			if (strcmp(tp_format->format.fields->name, "__syscall_nr") == 0 ||
+			    strcmp(tp_format->format.fields->name, "nr") == 0)
+				++skip;
+
+			memcpy(fmt + skip, scfmt->arg,
+			       (tp_format->format.nr_fields - skip) * sizeof(*fmt));
+		}
	}
}
···
	struct evsel *evsel;

	evlist__for_each_entry(evlist, evsel) {
-		if (evsel->priv || !evsel->tp_format)
+		const struct tep_event *tp_format;
+
+		if (evsel->priv)
			continue;

-		if (strcmp(evsel->tp_format->system, "syscalls")) {
+		tp_format = evsel__tp_format(evsel);
+		if (!tp_format)
+			continue;
+
+		if (strcmp(tp_format->system, "syscalls")) {
			evsel__init_tp_arg_scnprintf(evsel, use_btf);
			continue;
		}
···
		if (evsel__init_syscall_tp(evsel))
			return -1;

-		if (!strncmp(evsel->tp_format->name, "sys_enter_", 10)) {
+		if (!strncmp(tp_format->name, "sys_enter_", 10)) {
			struct syscall_tp *sc = __evsel__syscall_tp(evsel);

			if (__tp_field__init_ptr(&sc->args, sc->id.offset + sizeof(u64)))
				return -1;

-			evsel__set_syscall_arg_fmt(evsel, evsel->tp_format->name + sizeof("sys_enter_") - 1);
-		} else if (!strncmp(evsel->tp_format->name, "sys_exit_", 9)) {
+			evsel__set_syscall_arg_fmt(evsel,
+						   tp_format->name + sizeof("sys_enter_") - 1);
+		} else if (!strncmp(tp_format->name, "sys_exit_", 9)) {
			struct syscall_tp *sc = __evsel__syscall_tp(evsel);

-			if (__tp_field__init_uint(&sc->ret, sizeof(u64), sc->id.offset + sizeof(u64), evsel->needs_swap))
+			if (__tp_field__init_uint(&sc->ret, sizeof(u64),
+						  sc->id.offset + sizeof(u64),
+						  evsel->needs_swap))
				return -1;

-			evsel__set_syscall_arg_fmt(evsel, evsel->tp_format->name + sizeof("sys_exit_") - 1);
+			evsel__set_syscall_arg_fmt(evsel,
+						   tp_format->name + sizeof("sys_exit_") - 1);
		}
	}
-6
tools/perf/builtin.h
···
#ifndef BUILTIN_H
#define BUILTIN_H

-#include <stddef.h>
-#include <linux/compiler.h>
-#include <tools/config.h>
-
struct feature_status {
	const char *name;
	const char *macro;
···
int cmd_daemon(int argc, const char **argv);
int cmd_kwork(int argc, const char **argv);

-int find_scripts(char **scripts_array, char **scripts_path_array, int num,
-		 int pathlen);
#endif
+9
tools/perf/check-headers.sh
···
  "include/uapi/asm-generic/ioctls.h"
  "include/uapi/asm-generic/mman-common.h"
  "include/uapi/asm-generic/unistd.h"
+ "scripts/syscall.tbl"
)

declare -a SYNC_CHECK_FILES
···
check_2 tools/perf/arch/powerpc/entry/syscalls/syscall.tbl arch/powerpc/kernel/syscalls/syscall.tbl
check_2 tools/perf/arch/s390/entry/syscalls/syscall.tbl arch/s390/kernel/syscalls/syscall.tbl
check_2 tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl arch/mips/kernel/syscalls/syscall_n64.tbl
+check_2 tools/perf/arch/arm/entry/syscalls/syscall.tbl arch/arm/tools/syscall.tbl
+check_2 tools/perf/arch/sh/entry/syscalls/syscall.tbl arch/sh/kernel/syscalls/syscall.tbl
+check_2 tools/perf/arch/sparc/entry/syscalls/syscall.tbl arch/sparc/kernel/syscalls/syscall.tbl
+check_2 tools/perf/arch/xtensa/entry/syscalls/syscall.tbl arch/xtensa/kernel/syscalls/syscall.tbl
+check_2 tools/perf/arch/alpha/entry/syscalls/syscall.tbl arch/alpha/entry/syscalls/syscall.tbl
+check_2 tools/perf/arch/parisc/entry/syscalls/syscall.tbl arch/parisc/entry/syscalls/syscall.tbl
+check_2 tools/perf/arch/arm64/entry/syscalls/syscall_32.tbl arch/arm64/entry/syscalls/syscall_32.tbl
+check_2 tools/perf/arch/arm64/entry/syscalls/syscall_64.tbl arch/arm64/entry/syscalls/syscall_64.tbl

for i in "${BEAUTY_FILES[@]}"
do
+1 -5
tools/perf/perf.c
···
#endif
	{ "kvm",	cmd_kvm,	0 },
	{ "test",	cmd_test,	0 },
-#if defined(HAVE_LIBTRACEEVENT) && (defined(HAVE_LIBAUDIT_SUPPORT) || defined(HAVE_SYSCALL_TABLE_SUPPORT))
+#if defined(HAVE_LIBTRACEEVENT)
	{ "trace",	cmd_trace,	0 },
#endif
	{ "inject",	cmd_inject,	0 },
···
#ifndef HAVE_LIBTRACEEVENT
		fprintf(stderr,
			"trace command not available: missing libtraceevent devel package at build time.\n");
		goto out;
-#elif !defined(HAVE_LIBAUDIT_SUPPORT) && !defined(HAVE_SYSCALL_TABLE_SUPPORT)
-		fprintf(stderr,
-			"trace command not available: missing audit-libs devel package at build time.\n");
-		goto out;
#else
		setup_path();
+1 -1
tools/perf/perf.h
···
#define _PERF_PERF_H

#ifndef MAX_NR_CPUS
-#define MAX_NR_CPUS			2048
+#define MAX_NR_CPUS			4096
#endif

enum perf_affinity {
+1 -1
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json
···
    },
    {
        "ArchStdEvent": "EXC_RETURN",
-        "PublicDescription": "Counts any architecturally executed exception return instructions. Eg: AArch64: ERET"
+        "PublicDescription": "Counts any architecturally executed exception return instructions. For example: AArch64: ERET"
    },
    {
        "ArchStdEvent": "EXC_UNDEF",
+1 -1
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json
···
    },
    {
        "ArchStdEvent": "CNT_CYCLES",
-        "PublicDescription": "Counts constant frequency cycles"
+        "PublicDescription": "Increments at a constant frequency equal to the rate of increment of the System Counter, CNTPCT_EL0."
    }
]
+3 -3
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json
···
[
    {
        "ArchStdEvent": "L1D_CACHE_REFILL",
-        "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line. This event does not count cache line allocations from preload instructions or from hardware cache prefetching."
+        "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line."
    },
    {
        "ArchStdEvent": "L1D_CACHE",
-        "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) count as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
+        "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) counts as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
    },
    {
        "ArchStdEvent": "L1D_CACHE_WB",
···
    },
    {
        "ArchStdEvent": "L1D_CACHE_RD",
-        "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches count as both a write access and read access."
+        "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches counts as both a write access and read access."
    },
    {
        "ArchStdEvent": "L1D_CACHE_WR",
+7 -7
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json
···
[
    {
        "ArchStdEvent": "L2D_CACHE",
-        "PublicDescription": "Counts level 2 cache accesses. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level caches or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
+        "PublicDescription": "Counts accesses to the level 2 cache due to data accesses. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level data cache or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
    },
    {
        "ArchStdEvent": "L2D_CACHE_REFILL",
-        "PublicDescription": "Counts cache line refills into the level 2 cache. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+        "PublicDescription": "Counts cache line refills into the level 2 cache. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L2D_CACHE_WB",
···
    },
    {
        "ArchStdEvent": "L2D_CACHE_ALLOCATE",
-        "PublicDescription": "TBD"
+        "PublicDescription": "Counts level 2 cache line allocates that do not fetch data from outside the level 2 data or unified cache."
    },
    {
        "ArchStdEvent": "L2D_CACHE_RD",
-        "PublicDescription": "Counts level 2 cache accesses due to memory read operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+        "PublicDescription": "Counts level 2 data cache accesses due to memory read operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L2D_CACHE_WR",
-        "PublicDescription": "Counts level 2 cache accesses due to memory write operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+        "PublicDescription": "Counts level 2 cache accesses due to memory write operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L2D_CACHE_REFILL_RD",
-        "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+        "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L2D_CACHE_REFILL_WR",
-        "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+        "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L2D_CACHE_WB_VICTIM",
+2 -2
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json
···
    },
    {
        "ArchStdEvent": "L3D_CACHE",
-        "PublicDescription": "Counts level 3 cache accesses. level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
+        "PublicDescription": "Counts level 3 cache accesses. Level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L3D_CACHE_RD",
-        "PublicDescription": "TBD"
+        "PublicDescription": "Counts level 3 cache accesses caused by any memory read operation. Level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
    },
    {
        "ArchStdEvent": "L3D_CACHE_LMISS_RD",
+2 -2
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json
···
[
    {
        "ArchStdEvent": "LL_CACHE_RD",
-        "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
+        "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for the L3 cache. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
    },
    {
        "ArchStdEvent": "LL_CACHE_MISS_RD",
-        "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
+        "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for L3 cache. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
    }
]
+1 -1
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json
···
    },
    {
        "ArchStdEvent": "MEM_ACCESS_CHECKED",
-        "PublicDescription": "Counts the number of memory read and write accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
+        "PublicDescription": "Counts the number of memory read and write accesses counted by MEM_ACCESS that are tag checked by the Memory Tagging Extension (MTE). This event is implemented as the sum of MEM_ACCESS_CHECKED_RD and MEM_ACCESS_CHECKED_WR"
    },
    {
        "ArchStdEvent": "MEM_ACCESS_CHECKED_RD",
+50 -43
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json
···
    },
    {
        "MetricName": "backend_stalled_cycles",
-        "MetricExpr": "((STALL_BACKEND / CPU_CYCLES) * 100)",
+        "MetricExpr": "STALL_BACKEND / CPU_CYCLES * 100",
        "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the backend unit of the processor.",
        "MetricGroup": "Cycle_Accounting",
        "ScaleUnit": "1percent of cycles"
···
    },
    {
        "MetricName": "branch_misprediction_ratio",
-        "MetricExpr": "(BR_MIS_PRED_RETIRED / BR_RETIRED)",
+        "MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED",
        "BriefDescription": "This metric measures the ratio of branches mispredicted to the total number of branches architecturally executed. This gives an indication of the effectiveness of the branch prediction unit.",
        "MetricGroup": "Miss_Ratio;Branch_Effectiveness",
-        "ScaleUnit": "1per branch"
+        "ScaleUnit": "100percent of branches"
    },
    {
        "MetricName": "branch_mpki",
-        "MetricExpr": "((BR_MIS_PRED_RETIRED / INST_RETIRED) * 1000)",
+        "MetricExpr": "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000",
        "BriefDescription": "This metric measures the number of branch mispredictions per thousand instructions executed.",
        "MetricGroup": "MPKI;Branch_Effectiveness",
        "ScaleUnit": "1MPKI"
    },
    {
        "MetricName": "branch_percentage",
-        "MetricExpr": "(((BR_IMMED_SPEC + BR_INDIRECT_SPEC) / INST_SPEC) * 100)",
+        "MetricExpr": "(BR_IMMED_SPEC + BR_INDIRECT_SPEC) / INST_SPEC * 100",
        "BriefDescription": "This metric measures branch operations as a percentage of operations speculatively executed.",
        "MetricGroup": "Operation_Mix",
        "ScaleUnit": "1percent of operations"
    },
    {
        "MetricName": "crypto_percentage",
-        "MetricExpr": "((CRYPTO_SPEC / INST_SPEC) * 100)",
+        "MetricExpr": "CRYPTO_SPEC / INST_SPEC * 100",
        "BriefDescription": "This metric measures crypto operations as a percentage of operations speculatively executed.",
        "MetricGroup": "Operation_Mix",
        "ScaleUnit": "1percent of operations"
    },
    {
        "MetricName": "dtlb_mpki",
-        "MetricExpr": "((DTLB_WALK / INST_RETIRED) * 1000)",
+        "MetricExpr": "DTLB_WALK / INST_RETIRED * 1000",
        "BriefDescription": "This metric measures the number of data TLB Walks per thousand instructions executed.",
        "MetricGroup": "MPKI;DTLB_Effectiveness",
        "ScaleUnit": "1MPKI"
    },
    {
        "MetricName": "dtlb_walk_ratio",
-        "MetricExpr": "(DTLB_WALK / L1D_TLB)",
+        "MetricExpr": "DTLB_WALK / L1D_TLB",
        "BriefDescription": "This metric measures the ratio of data TLB Walks to the total number of data TLB accesses. This gives an indication of the effectiveness of the data TLB accesses.",
        "MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
-        "ScaleUnit": "1per TLB access"
+        "ScaleUnit": "100percent of TLB accesses"
    },
    {
        "ArchStdEvent": "frontend_bound",
···
    },
    {
        "MetricName": "frontend_stalled_cycles",
-        "MetricExpr": "((STALL_FRONTEND / CPU_CYCLES) * 100)",
+        "MetricExpr": "STALL_FRONTEND / CPU_CYCLES * 100",
        "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the frontend unit of the processor.",
        "MetricGroup": "Cycle_Accounting",
        "ScaleUnit": "1percent of cycles"
    },
    {
        "MetricName": "integer_dp_percentage",
-        "MetricExpr": "((DP_SPEC / INST_SPEC) * 100)",
+        "MetricExpr": "DP_SPEC / INST_SPEC * 100",
        "BriefDescription": "This metric measures scalar integer operations as a percentage of operations speculatively executed.",
        "MetricGroup": "Operation_Mix",
        "ScaleUnit": "1percent of operations"
    },
    {
        "MetricName": "ipc",
-        "MetricExpr": "(INST_RETIRED / CPU_CYCLES)",
+        "MetricExpr": "INST_RETIRED / CPU_CYCLES",
        "BriefDescription": "This metric measures the number of instructions retired per cycle.",
        "MetricGroup": "General",
        "ScaleUnit": "1per cycle"
    },
    {
        "MetricName": "itlb_mpki",
-        "MetricExpr": "((ITLB_WALK / INST_RETIRED) * 1000)",
+        "MetricExpr": "ITLB_WALK / INST_RETIRED * 1000",
        "BriefDescription": "This metric measures the number of instruction TLB Walks per thousand instructions executed.",
        "MetricGroup": "MPKI;ITLB_Effectiveness",
        "ScaleUnit": "1MPKI"
    },
    {
        "MetricName": "itlb_walk_ratio",
-        "MetricExpr": "(ITLB_WALK / L1I_TLB)",
+        "MetricExpr": "ITLB_WALK / L1I_TLB",
        "BriefDescription": "This metric measures the ratio of instruction TLB Walks to the total number of instruction TLB accesses. This gives an indication of the effectiveness of the instruction TLB accesses.",
        "MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
-        "ScaleUnit": "1per TLB access"
+        "ScaleUnit": "100percent of TLB accesses"
    },
    {
        "MetricName": "l1d_cache_miss_ratio",
-        "MetricExpr": "(L1D_CACHE_REFILL / L1D_CACHE)",
+        "MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE",
        "BriefDescription": "This metric measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cache accesses. This gives an indication of the effectiveness of the level 1 data cache.",
        "MetricGroup": "Miss_Ratio;L1D_Cache_Effectiveness",
-        "ScaleUnit": "1per cache access"
+        "ScaleUnit": "100percent of cache accesses"
    },
    {
        "MetricName": "l1d_cache_mpki",
-        "MetricExpr": "((L1D_CACHE_REFILL / INST_RETIRED) * 1000)",
+        "MetricExpr": "L1D_CACHE_REFILL / INST_RETIRED * 1000",
        "BriefDescription": "This metric measures the number of level 1 data cache accesses missed per thousand instructions executed.",
        "MetricGroup": "MPKI;L1D_Cache_Effectiveness",
        "ScaleUnit": "1MPKI"
    },
    {
        "MetricName": "l1d_tlb_miss_ratio",
-        "MetricExpr": "(L1D_TLB_REFILL / L1D_TLB)",
+        "MetricExpr": "L1D_TLB_REFILL / L1D_TLB",
        "BriefDescription": "This metric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TLB accesses. This gives an indication of the effectiveness of the level 1 data TLB.",
        "MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
-        "ScaleUnit": "1per TLB access"
+        "ScaleUnit": "100percent of TLB accesses"
    },
    {
        "MetricName": "l1d_tlb_mpki",
-        "MetricExpr": "((L1D_TLB_REFILL / INST_RETIRED) * 1000)",
-        "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.",
+        "MetricExpr": "L1D_TLB_REFILL / INST_RETIRED * 1000",
+        "BriefDescription": "This metric measures the number of level 1 data TLB accesses missed per thousand instructions executed.",
        "MetricGroup": "MPKI;DTLB_Effectiveness",
        "ScaleUnit": "1MPKI"
    },
    {
        "MetricName": "l1i_cache_miss_ratio",
-        "MetricExpr": "(L1I_CACHE_REFILL / L1I_CACHE)",
+        "MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE",
        "BriefDescription": "This metric measures the ratio of level 1 instruction cache accesses missed to the total number of level
1 instruction cache accesses. This gives an indication of the effectiveness of the level 1 instruction cache.", 130 130 "MetricGroup": "Miss_Ratio;L1I_Cache_Effectiveness", 131 - "ScaleUnit": "1per cache access" 131 + "ScaleUnit": "100percent of cache accesses" 132 132 }, 133 133 { 134 134 "MetricName": "l1i_cache_mpki", 135 - "MetricExpr": "((L1I_CACHE_REFILL / INST_RETIRED) * 1000)", 135 + "MetricExpr": "L1I_CACHE_REFILL / INST_RETIRED * 1000", 136 136 "BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed per thousand instructions executed.", 137 137 "MetricGroup": "MPKI;L1I_Cache_Effectiveness", 138 138 "ScaleUnit": "1MPKI" 139 139 }, 140 140 { 141 141 "MetricName": "l1i_tlb_miss_ratio", 142 - "MetricExpr": "(L1I_TLB_REFILL / L1I_TLB)", 142 + "MetricExpr": "L1I_TLB_REFILL / L1I_TLB", 143 143 "BriefDescription": "This metric measures the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instruction TLB accesses. This gives an indication of the effectiveness of the level 1 instruction TLB.", 144 144 "MetricGroup": "Miss_Ratio;ITLB_Effectiveness", 145 - "ScaleUnit": "1per TLB access" 145 + "ScaleUnit": "100percent of TLB accesses" 146 146 }, 147 147 { 148 148 "MetricName": "l1i_tlb_mpki", 149 - "MetricExpr": "((L1I_TLB_REFILL / INST_RETIRED) * 1000)", 149 + "MetricExpr": "L1I_TLB_REFILL / INST_RETIRED * 1000", 150 150 "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.", 151 151 "MetricGroup": "MPKI;ITLB_Effectiveness", 152 152 "ScaleUnit": "1MPKI" 153 153 }, 154 154 { 155 155 "MetricName": "l2_cache_miss_ratio", 156 - "MetricExpr": "(L2D_CACHE_REFILL / L2D_CACHE)", 156 + "MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE", 157 157 "BriefDescription": "This metric measures the ratio of level 2 cache accesses missed to the total number of level 2 cache accesses. 
This gives an indication of the effectiveness of the level 2 cache, which is a unified cache that stores both data and instruction. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.", 158 158 "MetricGroup": "Miss_Ratio;L2_Cache_Effectiveness", 159 - "ScaleUnit": "1per cache access" 159 + "ScaleUnit": "100percent of cache accesses" 160 160 }, 161 161 { 162 162 "MetricName": "l2_cache_mpki", 163 - "MetricExpr": "((L2D_CACHE_REFILL / INST_RETIRED) * 1000)", 163 + "MetricExpr": "L2D_CACHE_REFILL / INST_RETIRED * 1000", 164 164 "BriefDescription": "This metric measures the number of level 2 unified cache accesses missed per thousand instructions executed. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.", 165 165 "MetricGroup": "MPKI;L2_Cache_Effectiveness", 166 166 "ScaleUnit": "1MPKI" 167 167 }, 168 168 { 169 169 "MetricName": "l2_tlb_miss_ratio", 170 - "MetricExpr": "(L2D_TLB_REFILL / L2D_TLB)", 170 + "MetricExpr": "L2D_TLB_REFILL / L2D_TLB", 171 171 "BriefDescription": "This metric measures the ratio of level 2 unified TLB accesses missed to the total number of level 2 unified TLB accesses. 
This gives an indication of the effectiveness of the level 2 TLB.", 172 172 "MetricGroup": "Miss_Ratio;ITLB_Effectiveness;DTLB_Effectiveness", 173 - "ScaleUnit": "1per TLB access" 173 + "ScaleUnit": "100percent of TLB accesses" 174 174 }, 175 175 { 176 176 "MetricName": "l2_tlb_mpki", 177 - "MetricExpr": "((L2D_TLB_REFILL / INST_RETIRED) * 1000)", 177 + "MetricExpr": "L2D_TLB_REFILL / INST_RETIRED * 1000", 178 178 "BriefDescription": "This metric measures the number of level 2 unified TLB accesses missed per thousand instructions executed.", 179 179 "MetricGroup": "MPKI;ITLB_Effectiveness;DTLB_Effectiveness", 180 180 "ScaleUnit": "1MPKI" 181 181 }, 182 182 { 183 183 "MetricName": "ll_cache_read_hit_ratio", 184 - "MetricExpr": "((LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD)", 184 + "MetricExpr": "(LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD", 185 185 "BriefDescription": "This metric measures the ratio of last level cache read accesses hit in the cache to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.", 186 186 "MetricGroup": "LL_Cache_Effectiveness", 187 - "ScaleUnit": "1per cache access" 187 + "ScaleUnit": "100percent of cache accesses" 188 188 }, 189 189 { 190 190 "MetricName": "ll_cache_read_miss_ratio", 191 - "MetricExpr": "(LL_CACHE_MISS_RD / LL_CACHE_RD)", 191 + "MetricExpr": "LL_CACHE_MISS_RD / LL_CACHE_RD", 192 192 "BriefDescription": "This metric measures the ratio of last level cache read accesses missed to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. 
Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.", 193 193 "MetricGroup": "Miss_Ratio;LL_Cache_Effectiveness", 194 - "ScaleUnit": "1per cache access" 194 + "ScaleUnit": "100percent of cache accesses" 195 195 }, 196 196 { 197 197 "MetricName": "ll_cache_read_mpki", 198 - "MetricExpr": "((LL_CACHE_MISS_RD / INST_RETIRED) * 1000)", 198 + "MetricExpr": "LL_CACHE_MISS_RD / INST_RETIRED * 1000", 199 199 "BriefDescription": "This metric measures the number of last level cache read accesses missed per thousand instructions executed.", 200 200 "MetricGroup": "MPKI;LL_Cache_Effectiveness", 201 201 "ScaleUnit": "1MPKI" 202 202 }, 203 203 { 204 204 "MetricName": "load_percentage", 205 - "MetricExpr": "((LD_SPEC / INST_SPEC) * 100)", 205 + "MetricExpr": "LD_SPEC / INST_SPEC * 100", 206 206 "BriefDescription": "This metric measures load operations as a percentage of operations speculatively executed.", 207 207 "MetricGroup": "Operation_Mix", 208 208 "ScaleUnit": "1percent of operations" ··· 213 213 }, 214 214 { 215 215 "MetricName": "scalar_fp_percentage", 216 - "MetricExpr": "((VFP_SPEC / INST_SPEC) * 100)", 216 + "MetricExpr": "VFP_SPEC / INST_SPEC * 100", 217 217 "BriefDescription": "This metric measures scalar floating point operations as a percentage of operations speculatively executed.", 218 218 "MetricGroup": "Operation_Mix", 219 219 "ScaleUnit": "1percent of operations" 220 220 }, 221 221 { 222 222 "MetricName": "simd_percentage", 223 - "MetricExpr": "((ASE_SPEC / INST_SPEC) * 100)", 223 + "MetricExpr": "ASE_SPEC / INST_SPEC * 100", 224 224 "BriefDescription": "This metric measures advanced SIMD operations as a percentage of total operations speculatively executed.", 225 225 "MetricGroup": "Operation_Mix", 226 226 "ScaleUnit": "1percent of operations" 227 227 }, 228 228 { 229 229 "MetricName": "store_percentage", 230 - "MetricExpr": "((ST_SPEC / INST_SPEC) * 100)", 230 + "MetricExpr": "ST_SPEC / 
INST_SPEC * 100", 231 231 "BriefDescription": "This metric measures store operations as a percentage of operations speculatively executed.", 232 232 "MetricGroup": "Operation_Mix", 233 233 "ScaleUnit": "1percent of operations" ··· 300 300 "MetricGroup": "Operation_Mix", 301 301 "MetricName": "branch_indirect_spec_rate", 302 302 "ScaleUnit": "100%" 303 + }, 304 + { 305 + "MetricName": "sve_all_percentage", 306 + "MetricExpr": "SVE_INST_SPEC / INST_SPEC * 100", 307 + "BriefDescription": "This metric measures scalable vector operations, including loads and stores, as a percentage of operations speculatively executed.", 308 + "MetricGroup": "Operation_Mix", 309 + "ScaleUnit": "1percent of operations" 303 310 } 304 311 ]
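The `MetricExpr` strings above are plain ratios of PMU counter values that `perf stat -M` evaluates and scales per the `ScaleUnit` field. A minimal sketch of that arithmetic, using made-up counter readings (not real measurements) for a few of the metrics changed in this diff:

```python
# Hypothetical PMU counter readings; real values come from `perf stat`
# on an arm64 Neoverse N2/V2 system.
counters = {
    "DTLB_WALK": 1_200,
    "L1D_TLB": 4_000_000,
    "INST_RETIRED": 9_000_000,
    "CPU_CYCLES": 6_000_000,
}

def dtlb_mpki(c):
    # "MetricExpr": "DTLB_WALK / INST_RETIRED * 1000", ScaleUnit "1MPKI"
    return c["DTLB_WALK"] / c["INST_RETIRED"] * 1000

def dtlb_walk_ratio(c):
    # "MetricExpr": "DTLB_WALK / L1D_TLB"; the new ScaleUnit
    # "100percent of TLB accesses" means the tool scales by 100 for display.
    return c["DTLB_WALK"] / c["L1D_TLB"]

def ipc(c):
    # "MetricExpr": "INST_RETIRED / CPU_CYCLES", ScaleUnit "1per cycle"
    return c["INST_RETIRED"] / c["CPU_CYCLES"]

print(f"dtlb_mpki       = {dtlb_mpki(counters):.3f} MPKI")
print(f"dtlb_walk_ratio = {dtlb_walk_ratio(counters) * 100:.2f}% of TLB accesses")
print(f"ipc             = {ipc(counters):.2f} per cycle")
```

The simplification from `((X / Y) * 1000)` to `X / Y * 1000` in the diff is purely cosmetic: division and multiplication are left-associative in the metric expression grammar, so both forms evaluate identically.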
+2 -2
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json
··· 9 9 }, 10 10 { 11 11 "ArchStdEvent": "CID_WRITE_RETIRED", 12 - "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR register, which usually contain the kernel PID and can be output with hardware trace." 12 + "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR_EL1 register, which usually contain the kernel PID and can be output with hardware trace." 13 13 }, 14 14 { 15 15 "ArchStdEvent": "TTBR_WRITE_RETIRED", ··· 17 17 }, 18 18 { 19 19 "ArchStdEvent": "BR_RETIRED", 20 - "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted." 20 + "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted. Note that exception generating instructions, exception return instructions and context synchronization instructions are not counted." 21 21 }, 22 22 { 23 23 "ArchStdEvent": "BR_MIS_PRED_RETIRED",
+7 -7
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json
··· 5 5 }, 6 6 { 7 7 "ArchStdEvent": "BR_PRED", 8 - "PublicDescription": "Counts branches speculatively executed and were predicted right." 8 + "PublicDescription": "Counts all speculatively executed branches." 9 9 }, 10 10 { 11 11 "ArchStdEvent": "INST_SPEC", ··· 29 29 }, 30 30 { 31 31 "ArchStdEvent": "LDREX_SPEC", 32 - "PublicDescription": "Counts Load-Exclusive operations that have been speculatively executed. Eg: LDREX, LDX" 32 + "PublicDescription": "Counts Load-Exclusive operations that have been speculatively executed. For example: LDREX, LDX" 33 33 }, 34 34 { 35 35 "ArchStdEvent": "STREX_PASS_SPEC", ··· 73 73 }, 74 74 { 75 75 "ArchStdEvent": "BR_IMMED_SPEC", 76 - "PublicDescription": "Counts immediate branch operations which are speculatively executed." 76 + "PublicDescription": "Counts direct branch operations which are speculatively executed." 77 77 }, 78 78 { 79 79 "ArchStdEvent": "BR_RETURN_SPEC", 80 - "PublicDescription": "Counts procedure return operations (RET) which are speculatively executed." 80 + "PublicDescription": "Counts procedure return operations (RET, RETAA and RETAB) which are speculatively executed." 81 81 }, 82 82 { 83 83 "ArchStdEvent": "BR_INDIRECT_SPEC", 84 - "PublicDescription": "Counts indirect branch operations including procedure returns, which are speculatively executed. This includes operations that force a software change of the PC, other than exception-generating operations. Eg: BR Xn, RET" 84 + "PublicDescription": "Counts indirect branch operations including procedure returns, which are speculatively executed. This includes operations that force a software change of the PC, other than exception-generating operations and direct branch instructions. Some examples of the instructions counted by this event include BR Xn, RET, etc..." 
85 85 }, 86 86 { 87 87 "ArchStdEvent": "ISB_SPEC", ··· 97 97 }, 98 98 { 99 99 "ArchStdEvent": "RC_LD_SPEC", 100 - "PublicDescription": "Counts any load acquire operations that are speculatively executed. Eg: LDAR, LDARH, LDARB" 100 + "PublicDescription": "Counts any load acquire operations that are speculatively executed. For example: LDAR, LDARH, LDARB" 101 101 }, 102 102 { 103 103 "ArchStdEvent": "RC_ST_SPEC", 104 - "PublicDescription": "Counts any store release operations that are speculatively executed. Eg: STLR, STLRH, STLRB'" 104 + "PublicDescription": "Counts any store release operations that are speculatively executed. For example: STLR, STLRH, STLRB" 105 105 }, 106 106 { 107 107 "ArchStdEvent": "ASE_INST_SPEC",
+4 -4
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json
··· 1 1 [ 2 2 { 3 3 "ArchStdEvent": "STALL_FRONTEND", 4 - "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. All the frontend slots were empty during the cycle when this event counts." 4 + "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. STALL_FRONTEND_SLOTS counts SLOTS during the cycle when this event counts." 5 5 }, 6 6 { 7 7 "ArchStdEvent": "STALL_BACKEND", ··· 9 9 }, 10 10 { 11 11 "ArchStdEvent": "STALL", 12 - "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall)." 12 + "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). This event is the sum of STALL_FRONTEND and STALL_BACKEND" 13 13 }, 14 14 { 15 15 "ArchStdEvent": "STALL_SLOT_BACKEND", 16 - "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints." 16 + "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints. STALL_BACKEND counts during the cycle when STALL_SLOT_BACKEND counts at least 1." 17 17 }, 18 18 { 19 19 "ArchStdEvent": "STALL_SLOT_FRONTEND", ··· 21 21 }, 22 22 { 23 23 "ArchStdEvent": "STALL_SLOT", 24 - "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall)." 
24 + "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). STALL_SLOT is the sum of STALL_SLOT_FRONTEND and STALL_SLOT_BACKEND." 25 25 }, 26 26 { 27 27 "ArchStdEvent": "STALL_BACKEND_MEM",
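The updated stall.json descriptions spell out two sum identities: STALL is the sum of STALL_FRONTEND and STALL_BACKEND, and STALL_SLOT is the sum of STALL_SLOT_FRONTEND and STALL_SLOT_BACKEND. A small sketch checking those identities against hypothetical counter readings (in practice, counters read in separate groups are multiplexed, so the sums may only hold approximately):

```python
# Hypothetical counter readings, constructed to satisfy the documented
# identities exactly; real measurements can deviate slightly due to
# counter multiplexing.
counters = {
    "STALL_FRONTEND": 150_000,
    "STALL_BACKEND": 250_000,
    "STALL": 400_000,
    "STALL_SLOT_FRONTEND": 700_000,
    "STALL_SLOT_BACKEND": 900_000,
    "STALL_SLOT": 1_600_000,
}

def check_stall_identities(c, tolerance=0):
    """Return True when both documented sum identities hold within tolerance."""
    cycles_ok = abs(c["STALL"] - (c["STALL_FRONTEND"] + c["STALL_BACKEND"])) <= tolerance
    slots_ok = abs(
        c["STALL_SLOT"] - (c["STALL_SLOT_FRONTEND"] + c["STALL_SLOT_BACKEND"])
    ) <= tolerance
    return cycles_ok and slots_ok

print(check_stall_identities(counters))
```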
+2 -2
tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json
··· 25 25 }, 26 26 { 27 27 "ArchStdEvent": "DTLB_WALK", 28 - "PublicDescription": "Counts data memory translation table walks caused by a miss in the L2 TLB driven by a memory access. Note that partial translations that also cause a table walk are counted. This event does not count table walks caused by TLB maintenance operations." 28 + "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations." 29 29 }, 30 30 { 31 31 "ArchStdEvent": "ITLB_WALK", 32 - "PublicDescription": "Counts instruction memory translation table walks caused by a miss in the L2 TLB driven by a memory access. Partial translations that also cause a table walk are counted. This event does not count table walks caused by TLB maintenance operations." 32 + "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations." 33 33 }, 34 34 { 35 35 "ArchStdEvent": "L1D_TLB_REFILL_RD",
+715
tools/perf/pmu-events/arch/arm64/common-and-microarch.json
··· 534 534 "BriefDescription": "SVE operations speculatively executed" 535 535 }, 536 536 { 537 + "EventCode": "0x8007", 538 + "EventName": "ASE_SVE_INST_SPEC", 539 + "BriefDescription": "Operation speculatively executed, Advanced SIMD or SVE." 540 + }, 541 + { 537 542 "PublicDescription": "Microarchitectural operation, Operations speculatively executed.", 538 543 "EventCode": "0x8008", 539 544 "EventName": "UOP_SPEC", ··· 557 552 "BriefDescription": "Floating-point Operations speculatively executed." 558 553 }, 559 554 { 555 + "EventCode": "0x8011", 556 + "EventName": "ASE_FP_SPEC", 557 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD." 558 + }, 559 + { 560 + "EventCode": "0x8012", 561 + "EventName": "SVE_FP_SPEC", 562 + "BriefDescription": "Floating-point Operation speculatively executed, SVE." 563 + }, 564 + { 565 + "EventCode": "0x8013", 566 + "EventName": "ASE_SVE_FP_SPEC", 567 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE." 568 + }, 569 + { 560 570 "PublicDescription": "Floating-point half-precision operations speculatively executed", 561 571 "EventCode": "0x8014", 562 572 "EventName": "FP_HP_SPEC", 563 573 "BriefDescription": "Floating-point half-precision operations speculatively executed" 574 + }, 575 + { 576 + "EventCode": "0x8015", 577 + "EventName": "ASE_FP_HP_SPEC", 578 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD half precision." 579 + }, 580 + { 581 + "EventCode": "0x8016", 582 + "EventName": "SVE_FP_HP_SPEC", 583 + "BriefDescription": "Floating-point Operation speculatively executed, SVE half precision." 584 + }, 585 + { 586 + "EventCode": "0x8017", 587 + "EventName": "ASE_SVE_FP_HP_SPEC", 588 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE half precision." 
564 589 }, 565 590 { 566 591 "PublicDescription": "Floating-point single-precision operations speculatively executed", ··· 599 564 "BriefDescription": "Floating-point single-precision operations speculatively executed" 600 565 }, 601 566 { 567 + "EventCode": "0x8019", 568 + "EventName": "ASE_FP_SP_SPEC", 569 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD single precision." 570 + }, 571 + { 572 + "EventCode": "0x801A", 573 + "EventName": "SVE_FP_SP_SPEC", 574 + "BriefDescription": "Floating-point Operation speculatively executed, SVE single precision." 575 + }, 576 + { 577 + "EventCode": "0x801B", 578 + "EventName": "ASE_SVE_FP_SP_SPEC", 579 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE single precision." 580 + }, 581 + { 602 582 "PublicDescription": "Floating-point double-precision operations speculatively executed", 603 583 "EventCode": "0x801C", 604 584 "EventName": "FP_DP_SPEC", 605 585 "BriefDescription": "Floating-point double-precision operations speculatively executed" 586 + }, 587 + { 588 + "EventCode": "0x801D", 589 + "EventName": "ASE_FP_DP_SPEC", 590 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD double precision." 591 + }, 592 + { 593 + "EventCode": "0x801E", 594 + "EventName": "SVE_FP_DP_SPEC", 595 + "BriefDescription": "Floating-point Operation speculatively executed, SVE double precision." 596 + }, 597 + { 598 + "EventCode": "0x801F", 599 + "EventName": "ASE_SVE_FP_DP_SPEC", 600 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE double precision." 601 + }, 602 + { 603 + "EventCode": "0x8020", 604 + "EventName": "FP_DIV_SPEC", 605 + "BriefDescription": "Floating-point Operation speculatively executed, divide." 
606 + }, 607 + { 608 + "EventCode": "0x8021", 609 + "EventName": "ASE_FP_DIV_SPEC", 610 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD divide." 611 + }, 612 + { 613 + "EventCode": "0x8022", 614 + "EventName": "SVE_FP_DIV_SPEC", 615 + "BriefDescription": "Floating-point Operation speculatively executed, SVE divide." 616 + }, 617 + { 618 + "EventCode": "0x8023", 619 + "EventName": "ASE_SVE_FP_DIV_SPEC", 620 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE divide." 621 + }, 622 + { 623 + "EventCode": "0x8024", 624 + "EventName": "FP_SQRT_SPEC", 625 + "BriefDescription": "Floating-point Operation speculatively executed, square root." 626 + }, 627 + { 628 + "EventCode": "0x8025", 629 + "EventName": "ASE_FP_SQRT_SPEC", 630 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD square root." 631 + }, 632 + { 633 + "EventCode": "0x8026", 634 + "EventName": "SVE_FP_SQRT_SPEC", 635 + "BriefDescription": "Floating-point Operation speculatively executed, SVE square root." 636 + }, 637 + { 638 + "EventCode": "0x8027", 639 + "EventName": "ASE_SVE_FP_SQRT_SPEC", 640 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE square-root." 606 641 }, 607 642 { 608 643 "PublicDescription": "Floating-point FMA Operations speculatively executed.", ··· 681 576 "BriefDescription": "Floating-point FMA Operations speculatively executed." 682 577 }, 683 578 { 579 + "EventCode": "0x8029", 580 + "EventName": "ASE_FP_FMA_SPEC", 581 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD FMA." 582 + }, 583 + { 584 + "EventCode": "0x802A", 585 + "EventName": "SVE_FP_FMA_SPEC", 586 + "BriefDescription": "Floating-point Operation speculatively executed, SVE FMA." 
587 + }, 588 + { 589 + "EventCode": "0x802B", 590 + "EventName": "ASE_SVE_FP_FMA_SPEC", 591 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE FMA." 592 + }, 593 + { 594 + "EventCode": "0x802C", 595 + "EventName": "FP_MUL_SPEC", 596 + "BriefDescription": "Floating-point Operation speculatively executed, multiply." 597 + }, 598 + { 599 + "EventCode": "0x802D", 600 + "EventName": "ASE_FP_MUL_SPEC", 601 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD multiply." 602 + }, 603 + { 604 + "EventCode": "0x802E", 605 + "EventName": "SVE_FP_MUL_SPEC", 606 + "BriefDescription": "Floating-point Operation speculatively executed, SVE multiply." 607 + }, 608 + { 609 + "EventCode": "0x802F", 610 + "EventName": "ASE_SVE_FP_MUL_SPEC", 611 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE multiply." 612 + }, 613 + { 614 + "EventCode": "0x8030", 615 + "EventName": "FP_ADDSUB_SPEC", 616 + "BriefDescription": "Floating-point Operation speculatively executed, add or subtract." 617 + }, 618 + { 619 + "EventCode": "0x8031", 620 + "EventName": "ASE_FP_ADDSUB_SPEC", 621 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD add or subtract." 622 + }, 623 + { 624 + "EventCode": "0x8032", 625 + "EventName": "SVE_FP_ADDSUB_SPEC", 626 + "BriefDescription": "Floating-point Operation speculatively executed, SVE add or subtract." 627 + }, 628 + { 629 + "EventCode": "0x8033", 630 + "EventName": "ASE_SVE_FP_ADDSUB_SPEC", 631 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE add or subtract." 632 + }, 633 + { 684 634 "PublicDescription": "Floating-point reciprocal estimate Operations speculatively executed.", 685 635 "EventCode": "0x8034", 686 636 "EventName": "FP_RECPE_SPEC", 687 637 "BriefDescription": "Floating-point reciprocal estimate Operations speculatively executed." 
638 + }, 639 + { 640 + "EventCode": "0x8035", 641 + "EventName": "ASE_FP_RECPE_SPEC", 642 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD reciprocal estimate." 643 + }, 644 + { 645 + "EventCode": "0x8036", 646 + "EventName": "SVE_FP_RECPE_SPEC", 647 + "BriefDescription": "Floating-point Operation speculatively executed, SVE reciprocal estimate." 648 + }, 649 + { 650 + "EventCode": "0x8037", 651 + "EventName": "ASE_SVE_FP_RECPE_SPEC", 652 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE reciprocal estimate." 688 653 }, 689 654 { 690 655 "PublicDescription": "floating-point convert Operations speculatively executed.", ··· 763 588 "BriefDescription": "floating-point convert Operations speculatively executed." 764 589 }, 765 590 { 591 + "EventCode": "0x8039", 592 + "EventName": "ASE_FP_CVT_SPEC", 593 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD convert." 594 + }, 595 + { 596 + "EventCode": "0x803A", 597 + "EventName": "SVE_FP_CVT_SPEC", 598 + "BriefDescription": "Floating-point Operation speculatively executed, SVE convert." 599 + }, 600 + { 601 + "EventCode": "0x803B", 602 + "EventName": "ASE_SVE_FP_CVT_SPEC", 603 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE convert." 604 + }, 605 + { 606 + "EventCode": "0x803C", 607 + "EventName": "SVE_FP_AREDUCE_SPEC", 608 + "BriefDescription": "Floating-point Operation speculatively executed, SVE accumulating reduction." 609 + }, 610 + { 611 + "EventCode": "0x803D", 612 + "EventName": "ASE_FP_PREDUCE_SPEC", 613 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD pairwise add step." 614 + }, 615 + { 616 + "EventCode": "0x803E", 617 + "EventName": "SVE_FP_VREDUCE_SPEC", 618 + "BriefDescription": "Floating-point Operation speculatively executed, SVE vector reduction." 
619 + }, 620 + { 621 + "EventCode": "0x803F", 622 + "EventName": "ASE_SVE_FP_VREDUCE_SPEC", 623 + "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE vector reduction." 624 + }, 625 + { 626 + "EventCode": "0x8040", 627 + "EventName": "INT_SPEC", 628 + "BriefDescription": "Integer Operation speculatively executed." 629 + }, 630 + { 631 + "EventCode": "0x8041", 632 + "EventName": "ASE_INT_SPEC", 633 + "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD." 634 + }, 635 + { 636 + "EventCode": "0x8042", 637 + "EventName": "SVE_INT_SPEC", 638 + "BriefDescription": "Integer Operation speculatively executed, SVE." 639 + }, 640 + { 766 641 "PublicDescription": "Advanced SIMD and SVE integer Operations speculatively executed.", 767 642 "EventCode": "0x8043", 768 643 "EventName": "ASE_SVE_INT_SPEC", 769 644 "BriefDescription": "Advanced SIMD and SVE integer Operations speculatively executed." 645 + }, 646 + { 647 + "EventCode": "0x8044", 648 + "EventName": "INT_DIV_SPEC", 649 + "BriefDescription": "Integer Operation speculatively executed, divide." 650 + }, 651 + { 652 + "EventCode": "0x8045", 653 + "EventName": "INT_DIV64_SPEC", 654 + "BriefDescription": "Integer Operation speculatively executed, 64-bit divide." 655 + }, 656 + { 657 + "EventCode": "0x8046", 658 + "EventName": "SVE_INT_DIV_SPEC", 659 + "BriefDescription": "Integer Operation speculatively executed, SVE divide." 660 + }, 661 + { 662 + "EventCode": "0x8047", 663 + "EventName": "SVE_INT_DIV64_SPEC", 664 + "BriefDescription": "Integer Operation speculatively executed, SVE 64-bit divide." 665 + }, 666 + { 667 + "EventCode": "0x8048", 668 + "EventName": "INT_MUL_SPEC", 669 + "BriefDescription": "Integer Operation speculatively executed, multiply." 670 + }, 671 + { 672 + "EventCode": "0x8049", 673 + "EventName": "ASE_INT_MUL_SPEC", 674 + "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD multiply." 
+    },
+    { "EventCode": "0x804A", "EventName": "SVE_INT_MUL_SPEC", "BriefDescription": "Integer Operation speculatively executed, SVE multiply." },
+    { "EventCode": "0x804B", "EventName": "ASE_SVE_INT_MUL_SPEC", "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE multiply." },
+    { "EventCode": "0x804C", "EventName": "INT_MUL64_SPEC", "BriefDescription": "Integer Operation speculatively executed, 64\u00d764 multiply." },
+    { "EventCode": "0x804D", "EventName": "SVE_INT_MUL64_SPEC", "BriefDescription": "Integer Operation speculatively executed, SVE 64\u00d764 multiply." },
+    { "EventCode": "0x804E", "EventName": "INT_MULH64_SPEC", "BriefDescription": "Integer Operation speculatively executed, 64\u00d764 multiply returning high part." },
+    { "EventCode": "0x804F", "EventName": "SVE_INT_MULH64_SPEC", "BriefDescription": "Integer Operation speculatively executed, SVE 64\u00d764 multiply high part." },
+    { "EventCode": "0x8058", "EventName": "NONFP_SPEC", "BriefDescription": "Non-floating-point Operation speculatively executed." },
+    { "EventCode": "0x8059", "EventName": "ASE_NONFP_SPEC", "BriefDescription": "Non-floating-point Operation speculatively executed, Advanced SIMD." },
+    { "EventCode": "0x805A", "EventName": "SVE_NONFP_SPEC", "BriefDescription": "Non-floating-point Operation speculatively executed, SVE." },
+    { "EventCode": "0x805B", "EventName": "ASE_SVE_NONFP_SPEC", "BriefDescription": "Non-floating-point Operation speculatively executed, Advanced SIMD or SVE." },
+    { "EventCode": "0x805D", "EventName": "ASE_INT_VREDUCE_SPEC", "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD reduction." },
+    { "EventCode": "0x805E", "EventName": "SVE_INT_VREDUCE_SPEC", "BriefDescription": "Integer Operation speculatively executed, SVE reduction." },
+    { "EventCode": "0x805F", "EventName": "ASE_SVE_INT_VREDUCE_SPEC", "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE reduction." },
+    { "EventCode": "0x8060", "EventName": "SVE_PERM_SPEC", "BriefDescription": "Operation speculatively executed, SVE permute." },
+    { "EventCode": "0x8065", "EventName": "SVE_XPIPE_Z2R_SPEC", "BriefDescription": "Operation speculatively executed, SVE vector to scalar cross-pipe." },
+    { "EventCode": "0x8066", "EventName": "SVE_XPIPE_R2Z_SPEC", "BriefDescription": "Operation speculatively executed, SVE scalar to vector cross-pipe." },
+    { "EventCode": "0x8068", "EventName": "SVE_PGEN_SPEC", "BriefDescription": "Operation speculatively executed, SVE predicate generating." },
+    { "EventCode": "0x8069", "EventName": "SVE_PGEN_FLG_SPEC", "BriefDescription": "Operation speculatively executed, SVE predicate flag setting." },
+    { "EventCode": "0x806D", "EventName": "SVE_PPERM_SPEC", "BriefDescription": "Operation speculatively executed, SVE predicate permute." },
     {
         "PublicDescription": "SVE predicated Operations speculatively executed.",
···
         "EventCode": "0x807C",
         "EventName": "SVE_MOVPRFX_SPEC",
         "BriefDescription": "SVE MOVPRFX Operations speculatively executed."
     },
+    { "EventCode": "0x807D", "EventName": "SVE_MOVPRFX_Z_SPEC", "BriefDescription": "Operation speculatively executed, SVE MOVPRFX zeroing predication." },
+    { "EventCode": "0x807E", "EventName": "SVE_MOVPRFX_M_SPEC", "BriefDescription": "Operation speculatively executed, SVE MOVPRFX merging predication." },
     {
         "PublicDescription": "SVE MOVPRFX unfused Operations speculatively executed.",
···
         "EventCode": "0x809F",
         "EventName": "SVE_PRF_CONTIG_SPEC",
         "BriefDescription": "SVE contiguous prefetch element Operations speculatively executed."
     },
+    { "EventCode": "0x80A1", "EventName": "SVE_LDNT_CONTIG_SPEC", "BriefDescription": "Operation speculatively executed, SVE non-temporal contiguous load element." },
+    { "EventCode": "0x80A2", "EventName": "SVE_STNT_CONTIG_SPEC", "BriefDescription": "Operation speculatively executed, SVE non-temporal contiguous store element." },
     {
         "PublicDescription": "Advanced SIMD and SVE contiguous load multiple vector Operations speculatively executed.",
···
         "BriefDescription": "Non-scalable double-precision floating-point element Operations speculatively executed."
     },
+    { "EventCode": "0x80C8", "EventName": "INT_SCALE_OPS_SPEC", "BriefDescription": "Scalable integer element arithmetic operations speculatively executed." },
+    { "EventCode": "0x80C9", "EventName": "INT_FIXED_OPS_SPEC", "BriefDescription": "Non-scalable integer element arithmetic operations speculatively executed." },
     {
         "PublicDescription": "Advanced SIMD and SVE 8-bit integer operations speculatively executed",
         "EventCode": "0x80E3",
         "EventName": "ASE_SVE_INT8_SPEC",
···
         "EventCode": "0x80EF",
         "EventName": "ASE_SVE_INT64_SPEC",
         "BriefDescription": "Advanced SIMD and SVE 64-bit integer operations speculatively executed"
     },
+    { "EventCode": "0x80F3", "EventName": "ASE_SVE_FP_DOT_SPEC", "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE dot-product." },
+    { "EventCode": "0x80F7", "EventName": "ASE_SVE_FP_MMLA_SPEC", "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE matrix multiply." },
+    { "EventCode": "0x80FB", "EventName": "ASE_SVE_INT_DOT_SPEC", "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE dot-product." },
+    { "EventCode": "0x80FF", "EventName": "ASE_SVE_INT_MMLA_SPEC", "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE matrix multiply." },
+    { "EventCode": "0x8128", "EventName": "DTLB_WALK_PERCYC", "BriefDescription": "Data translation table walks in progress." },
+    { "EventCode": "0x8129", "EventName": "ITLB_WALK_PERCYC", "BriefDescription": "Instruction translation table walks in progress." },
+    { "EventCode": "0x8136", "EventName": "DTLB_STEP", "BriefDescription": "Data TLB translation table walk, step." },
+    { "EventCode": "0x8137", "EventName": "ITLB_STEP", "BriefDescription": "Instruction TLB translation table walk, step." },
+    { "EventCode": "0x8138", "EventName": "DTLB_WALK_LARGE", "BriefDescription": "Data TLB large page translation table walk." },
+    { "EventCode": "0x8139", "EventName": "ITLB_WALK_LARGE", "BriefDescription": "Instruction TLB large page translation table walk." },
+    { "EventCode": "0x813A", "EventName": "DTLB_WALK_SMALL", "BriefDescription": "Data TLB small page translation table walk." },
+    { "EventCode": "0x813B", "EventName": "ITLB_WALK_SMALL", "BriefDescription": "Instruction TLB small page translation table walk." },
+    { "EventCode": "0x8144", "EventName": "L1D_CACHE_MISS", "BriefDescription": "Level 1 data cache demand access miss." },
+    { "EventCode": "0x8145", "EventName": "L1I_CACHE_HWPRF", "BriefDescription": "Level 1 instruction cache hardware prefetch." },
+    { "EventCode": "0x814C", "EventName": "L2D_CACHE_MISS", "BriefDescription": "Level 2 data cache demand access miss." },
+    { "EventCode": "0x8154", "EventName": "L1D_CACHE_HWPRF", "BriefDescription": "Level 1 data cache hardware prefetch." },
+    { "EventCode": "0x8155", "EventName": "L2D_CACHE_HWPRF", "BriefDescription": "Level 2 data cache hardware prefetch." },
+    { "EventCode": "0x8158", "EventName": "STALL_FRONTEND_MEMBOUND", "BriefDescription": "Frontend stall cycles, memory bound." },
+    { "EventCode": "0x8159", "EventName": "STALL_FRONTEND_L1I", "BriefDescription": "Frontend stall cycles, level 1 instruction cache." },
+    { "EventCode": "0x815A", "EventName": "STALL_FRONTEND_L2I", "BriefDescription": "Frontend stall cycles, level 2 instruction cache." },
+    { "EventCode": "0x815B", "EventName": "STALL_FRONTEND_MEM", "BriefDescription": "Frontend stall cycles, last level PE cache or memory." },
+    { "EventCode": "0x815C", "EventName": "STALL_FRONTEND_TLB", "BriefDescription": "Frontend stall cycles, TLB." },
+    { "EventCode": "0x8160", "EventName": "STALL_FRONTEND_CPUBOUND", "BriefDescription": "Frontend stall cycles, processor bound." },
+    { "EventCode": "0x8161", "EventName": "STALL_FRONTEND_FLOW", "BriefDescription": "Frontend stall cycles, flow control." },
+    { "EventCode": "0x8162", "EventName": "STALL_FRONTEND_FLUSH", "BriefDescription": "Frontend stall cycles, flush recovery." },
+    { "EventCode": "0x8163", "EventName": "STALL_FRONTEND_RENAME", "BriefDescription": "Frontend stall cycles, rename full." },
+    { "EventCode": "0x8164", "EventName": "STALL_BACKEND_MEMBOUND", "BriefDescription": "Backend stall cycles, memory bound." },
+    { "EventCode": "0x8165", "EventName": "STALL_BACKEND_L1D", "BriefDescription": "Backend stall cycles, level 1 data cache." },
+    { "EventCode": "0x8166", "EventName": "STALL_BACKEND_L2D", "BriefDescription": "Backend stall cycles, level 2 data cache." },
+    { "EventCode": "0x8167", "EventName": "STALL_BACKEND_TLB", "BriefDescription": "Backend stall cycles, TLB." },
+    { "EventCode": "0x8168", "EventName": "STALL_BACKEND_ST", "BriefDescription": "Backend stall cycles, store." },
+    { "EventCode": "0x816A", "EventName": "STALL_BACKEND_CPUBOUND", "BriefDescription": "Backend stall cycles, processor bound." },
+    { "EventCode": "0x816B", "EventName": "STALL_BACKEND_BUSY", "BriefDescription": "Backend stall cycles, backend busy." },
+    { "EventCode": "0x816C", "EventName": "STALL_BACKEND_ILOCK", "BriefDescription": "Backend stall cycles, input dependency." },
+    { "EventCode": "0x816D", "EventName": "STALL_BACKEND_RENAME", "BriefDescription": "Backend stall cycles, rename full." },
+    { "EventCode": "0x816E", "EventName": "STALL_BACKEND_ATOMIC", "BriefDescription": "Backend stall cycles, atomic operation." },
+    { "EventCode": "0x816F", "EventName": "STALL_BACKEND_MEMCPYSET", "BriefDescription": "Backend stall cycles, Memory Copy or Set operation." },
+    { "EventCode": "0x8186", "EventName": "UOP_RETIRED", "BriefDescription": "Micro-operation architecturally executed." },
+    { "EventCode": "0x8188", "EventName": "DTLB_WALK_BLOCK", "BriefDescription": "Data TLB block translation table walk." },
+    { "EventCode": "0x8189", "EventName": "ITLB_WALK_BLOCK", "BriefDescription": "Instruction TLB block translation table walk." },
+    { "EventCode": "0x818A", "EventName": "DTLB_WALK_PAGE", "BriefDescription": "Data TLB page translation table walk." },
+    { "EventCode": "0x818B", "EventName": "ITLB_WALK_PAGE", "BriefDescription": "Instruction TLB page translation table walk." },
+    { "EventCode": "0x81B8", "EventName": "L1I_CACHE_REFILL_HWPRF", "BriefDescription": "Level 1 instruction cache refill, hardware prefetch." },
+    { "EventCode": "0x81BC", "EventName": "L1D_CACHE_REFILL_HWPRF", "BriefDescription": "Level 1 data cache refill, hardware prefetch." },
+    { "EventCode": "0x81BD", "EventName": "L2D_CACHE_REFILL_HWPRF", "BriefDescription": "Level 2 data cache refill, hardware prefetch." },
+    { "EventCode": "0x81C0", "EventName": "L1I_CACHE_HIT_RD", "BriefDescription": "Level 1 instruction cache demand fetch hit." },
+    { "EventCode": "0x81C4", "EventName": "L1D_CACHE_HIT_RD", "BriefDescription": "Level 1 data cache demand access hit, read." },
+    { "EventCode": "0x81C5", "EventName": "L2D_CACHE_HIT_RD", "BriefDescription": "Level 2 data cache demand access hit, read." },
+    { "EventCode": "0x81C8", "EventName": "L1D_CACHE_HIT_WR", "BriefDescription": "Level 1 data cache demand access hit, write." },
+    { "EventCode": "0x81C9", "EventName": "L2D_CACHE_HIT_WR", "BriefDescription": "Level 2 data cache demand access hit, write." },
+    { "EventCode": "0x8200", "EventName": "L1I_CACHE_HIT", "BriefDescription": "Level 1 instruction cache hit." },
+    { "EventCode": "0x8204", "EventName": "L1D_CACHE_HIT", "BriefDescription": "Level 1 data cache hit." },
+    { "EventCode": "0x8205", "EventName": "L2D_CACHE_HIT", "BriefDescription": "Level 2 data cache hit." },
+    { "EventCode": "0x8240", "EventName": "L1I_LFB_HIT_RD", "BriefDescription": "Level 1 instruction cache demand fetch line-fill buffer hit." },
+    { "EventCode": "0x8244", "EventName": "L1D_LFB_HIT_RD", "BriefDescription": "Level 1 data cache demand access line-fill buffer hit, read." },
+    { "EventCode": "0x8245", "EventName": "L2D_LFB_HIT_RD", "BriefDescription": "Level 2 data cache demand access line-fill buffer hit, read." },
+    { "EventCode": "0x8248", "EventName": "L1D_LFB_HIT_WR", "BriefDescription": "Level 1 data cache demand access line-fill buffer hit, write." },
+    { "EventCode": "0x8249", "EventName": "L2D_LFB_HIT_WR", "BriefDescription": "Level 2 data cache demand access line-fill buffer hit, write." },
+    { "EventCode": "0x8280", "EventName": "L1I_CACHE_PRF", "BriefDescription": "Level 1 instruction cache, preload or prefetch hit." },
+    { "EventCode": "0x8284", "EventName": "L1D_CACHE_PRF", "BriefDescription": "Level 1 data cache, preload or prefetch hit." },
+    { "EventCode": "0x8285", "EventName": "L2D_CACHE_PRF", "BriefDescription": "Level 2 data cache, preload or prefetch hit." },
+    { "EventCode": "0x8288", "EventName": "L1I_CACHE_REFILL_PRF", "BriefDescription": "Level 1 instruction cache refill, preload or prefetch hit." },
+    { "EventCode": "0x828C", "EventName": "L1D_CACHE_REFILL_PRF", "BriefDescription": "Level 1 data cache refill, preload or prefetch hit." },
+    { "EventCode": "0x828D", "EventName": "L2D_CACHE_REFILL_PRF", "BriefDescription": "Level 2 data cache refill, preload or prefetch hit." },
+    { "EventCode": "0x8320", "EventName": "L1D_CACHE_REFILL_PERCYC", "BriefDescription": "Level 1 data or unified cache refills in progress." },
+    { "EventCode": "0x8321", "EventName": "L2D_CACHE_REFILL_PERCYC", "BriefDescription": "Level 2 data or unified cache refills in progress." },
+    { "EventCode": "0x8324", "EventName": "L1I_CACHE_REFILL_PERCYC", "BriefDescription": "Level 1 instruction or unified cache refills in progress." }
 ]
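Event records in these pmu-events files are plain JSON objects with `EventCode`, `EventName`, and `BriefDescription` fields, so a quick consistency check for duplicate codes or names can be scripted. A minimal sketch, assuming a small stand-in list rather than the full file (`find_duplicates` is a hypothetical helper, not part of the perf build):

```python
import json
from collections import Counter

# Stand-in for json.load() on a pmu-events file: two records in the
# same shape as the entries above.
events = json.loads("""[
    { "EventCode": "0x8144", "EventName": "L1D_CACHE_MISS",
      "BriefDescription": "Level 1 data cache demand access miss." },
    { "EventCode": "0x814C", "EventName": "L2D_CACHE_MISS",
      "BriefDescription": "Level 2 data cache demand access miss." }
]""")

def find_duplicates(events, key):
    """Return the values of `key` that appear in more than one record."""
    counts = Counter(e[key] for e in events if key in e)
    return sorted(v for v, n in counts.items() if n > 1)

dup_codes = find_duplicates(events, "EventCode")
dup_names = find_duplicates(events, "EventName")
```

In the real tree the perf build generates lookup tables from these files, so a duplicate `EventName` within one file would be a bug worth catching early.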
+6
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/core-imp-def.json
···
+[
+    { "ArchStdEvent": "L1I_CACHE_PRF", "BriefDescription": "This event counts fetches counted by either Level 1 instruction hardware prefetch or Level 1 instruction software prefetch." }
+]
+122
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/cycle_accounting.json
···
+[
+    { "EventCode": "0x0182", "EventName": "LD_COMP_WAIT_L1_MISS", "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted load/store/prefetch operation waits for L2 cache access." },
+    { "EventCode": "0x0183", "EventName": "LD_COMP_WAIT_L1_MISS_EX", "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted integer load operation waits for L2 cache access." },
+    { "EventCode": "0x0184", "EventName": "LD_COMP_WAIT", "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted load/store/prefetch operation waits for L1D cache, L2 cache and memory access." },
+    { "EventCode": "0x0185", "EventName": "LD_COMP_WAIT_EX", "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted integer load operation waits for L1D cache, L2 cache and memory access." },
+    { "EventCode": "0x0186", "EventName": "LD_COMP_WAIT_PFP_BUSY", "BriefDescription": "This event counts every cycle that no instruction was committed due to the lack of an available prefetch port." },
+    { "EventCode": "0x0187", "EventName": "LD_COMP_WAIT_PFP_BUSY_EX", "BriefDescription": "This event counts the LD_COMP_WAIT_PFP_BUSY caused by an integer load operation." },
+    { "EventCode": "0x0188", "EventName": "LD_COMP_WAIT_PFP_BUSY_SWPF", "BriefDescription": "This event counts the LD_COMP_WAIT_PFP_BUSY caused by a software prefetch instruction." },
+    { "EventCode": "0x0189", "EventName": "EU_COMP_WAIT", "BriefDescription": "This event counts every cycle that no instruction was committed and the oldest and uncommitted instruction is an integer or floating-point/SIMD instruction." },
+    { "EventCode": "0x018A", "EventName": "FL_COMP_WAIT", "BriefDescription": "This event counts every cycle that no instruction was committed and the oldest and uncommitted instruction is a floating-point/SIMD instruction." },
+    { "EventCode": "0x018B", "EventName": "BR_COMP_WAIT", "BriefDescription": "This event counts every cycle that no instruction was committed and the oldest and uncommitted instruction is a branch instruction." },
+    { "EventCode": "0x018C", "EventName": "ROB_EMPTY", "BriefDescription": "This event counts every cycle that no instruction was committed because the CSE is empty." },
+    { "EventCode": "0x018D", "EventName": "ROB_EMPTY_STQ_BUSY", "BriefDescription": "This event counts every cycle that no instruction was committed because the CSE is empty and the store port (SP) is full." },
+    { "EventCode": "0x018E", "EventName": "WFE_WFI_CYCLE", "BriefDescription": "This event counts every cycle that the instruction unit is halted by the WFE/WFI instruction." },
+    { "EventCode": "0x018F", "EventName": "RETENTION_CYCLE", "BriefDescription": "This event counts every cycle that the instruction unit is halted by the RETENTION state." },
+    { "EventCode": "0x0190", "EventName": "_0INST_COMMIT", "BriefDescription": "This event counts every cycle that no instruction was committed; cycles in which only MOVPRFX is committed are also counted." },
+    { "EventCode": "0x0191", "EventName": "_1INST_COMMIT", "BriefDescription": "This event counts every cycle that one instruction is committed." },
+    { "EventCode": "0x0192", "EventName": "_2INST_COMMIT", "BriefDescription": "This event counts every cycle that two instructions are committed." },
+    { "EventCode": "0x0193", "EventName": "_3INST_COMMIT", "BriefDescription": "This event counts every cycle that three instructions are committed." },
+    { "EventCode": "0x0194", "EventName": "_4INST_COMMIT", "BriefDescription": "This event counts every cycle that four instructions are committed." },
+    { "EventCode": "0x0195", "EventName": "_5INST_COMMIT", "BriefDescription": "This event counts every cycle that five instructions are committed." },
+    { "EventCode": "0x0198", "EventName": "UOP_ONLY_COMMIT", "BriefDescription": "This event counts every cycle that only micro-operations are committed." },
+    { "EventCode": "0x0199", "EventName": "SINGLE_MOVPRFX_COMMIT", "BriefDescription": "This event counts every cycle that only the MOVPRFX instruction is committed." },
+    { "EventCode": "0x019C", "EventName": "LD_COMP_WAIT_L2_MISS", "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted load/store/prefetch operation waits for an L2 cache miss." },
+    { "EventCode": "0x019D", "EventName": "LD_COMP_WAIT_L2_MISS_EX", "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted integer load operation waits for an L2 cache miss." }
+]
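The `_0INST_COMMIT` through `_5INST_COMMIT` events partition cycles by commit width, so a commit-width histogram directly yields average instructions committed per cycle. A sketch of that arithmetic, with made-up counter values (the counts below are illustrative, not measurements):

```python
# Hypothetical sample counts: cycles in which exactly n instructions
# were committed, as reported by the _nINST_COMMIT events above.
commit_cycles = {0: 400, 1: 300, 2: 150, 3: 100, 4: 30, 5: 20}

total_cycles = sum(commit_cycles.values())
# Total committed instructions: weight each bucket by its commit width.
committed = sum(n * c for n, c in commit_cycles.items())
ipc = committed / total_cycles  # average commits per cycle
```

With these values `committed` is 1120 over 1000 cycles, i.e. an IPC of 1.12; the `LD_COMP_WAIT_*` buckets then explain where the zero-commit cycles went.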
+17
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/energy.json
···
+[
+    { "EventCode": "0x01F0", "EventName": "EA_CORE", "BriefDescription": "This event counts energy consumption of the core." },
+    { "EventCode": "0x03F0", "EventName": "EA_L3", "BriefDescription": "This event counts energy consumption of the L3 cache." },
+    { "EventCode": "0x03F1", "EventName": "EA_LDO_LOSS", "BriefDescription": "This event counts energy consumption due to LDO loss." }
+]
+42
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/exception.json
···
+[
+    { "ArchStdEvent": "EXC_TAKEN", "BriefDescription": "This event counts each exception taken." },
+    { "ArchStdEvent": "EXC_RETURN", "BriefDescription": "This event counts each executed exception return instruction." },
+    { "ArchStdEvent": "EXC_UNDEF", "BriefDescription": "This event counts only other synchronous exceptions that are taken locally." },
+    { "ArchStdEvent": "EXC_SVC", "BriefDescription": "This event counts only Supervisor Call exceptions that are taken locally." },
+    { "ArchStdEvent": "EXC_PABORT", "BriefDescription": "This event counts only Instruction Abort exceptions that are taken locally." },
+    { "ArchStdEvent": "EXC_DABORT", "BriefDescription": "This event counts only Data Abort or SError interrupt exceptions that are taken locally." },
+    { "ArchStdEvent": "EXC_IRQ", "BriefDescription": "This event counts only IRQ exceptions that are taken locally, including Virtual IRQ exceptions." },
+    { "ArchStdEvent": "EXC_FIQ", "BriefDescription": "This event counts only FIQ exceptions that are taken locally, including Virtual FIQ exceptions." },
+    { "ArchStdEvent": "EXC_SMC", "BriefDescription": "This event counts only Secure Monitor Call exceptions. The counter does not increment on SMC instructions trapped as a Hyp Trap exception." },
+    { "ArchStdEvent": "EXC_HVC", "BriefDescription": "This event counts for both Hypervisor Call exceptions taken locally in the hypervisor and those taken as an exception from Non-secure EL1." }
+]
+209
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/fp_operation.json
···
+[
+    { "EventCode": "0x0105", "EventName": "FP_MV_SPEC", "BriefDescription": "This event counts architecturally executed floating-point move operations." },
+    { "EventCode": "0x0112", "EventName": "FP_LD_SPEC", "BriefDescription": "This event counts architecturally executed non-SIMD load operations that use SIMD&FP registers." },
+    { "EventCode": "0x0113", "EventName": "FP_ST_SPEC", "BriefDescription": "This event counts architecturally executed non-SIMD store operations that use SIMD&FP registers." },
+    { "ArchStdEvent": "ASE_FP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point operations." },
+    { "ArchStdEvent": "SVE_FP_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point operations." },
+    { "ArchStdEvent": "FP_HP_SPEC", "BriefDescription": "This event counts architecturally executed half-precision floating-point operations." },
+    { "ArchStdEvent": "ASE_FP_HP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD half-precision floating-point operations." },
+    { "ArchStdEvent": "SVE_FP_HP_SPEC", "BriefDescription": "This event counts architecturally executed SVE half-precision floating-point operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_HP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE half-precision floating-point operations." },
+    { "ArchStdEvent": "FP_SP_SPEC", "BriefDescription": "This event counts architecturally executed single-precision floating-point operations." },
+    { "ArchStdEvent": "ASE_FP_SP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD single-precision floating-point operations." },
+    { "ArchStdEvent": "SVE_FP_SP_SPEC", "BriefDescription": "This event counts architecturally executed SVE single-precision floating-point operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_SP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE single-precision floating-point operations." },
+    { "ArchStdEvent": "FP_DP_SPEC", "BriefDescription": "This event counts architecturally executed double-precision floating-point operations." },
+    { "ArchStdEvent": "ASE_FP_DP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD double-precision floating-point operations." },
+    { "ArchStdEvent": "SVE_FP_DP_SPEC", "BriefDescription": "This event counts architecturally executed SVE double-precision floating-point operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_DP_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE double-precision floating-point operations." },
+    { "ArchStdEvent": "FP_DIV_SPEC", "BriefDescription": "This event counts architecturally executed floating-point divide operations." },
+    { "ArchStdEvent": "ASE_FP_DIV_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point divide operations." },
+    { "ArchStdEvent": "SVE_FP_DIV_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point divide operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_DIV_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point divide operations." },
+    { "ArchStdEvent": "FP_SQRT_SPEC", "BriefDescription": "This event counts architecturally executed floating-point square root operations." },
+    { "ArchStdEvent": "ASE_FP_SQRT_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point square root operations." },
+    { "ArchStdEvent": "SVE_FP_SQRT_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point square root operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_SQRT_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point square root operations." },
+    { "ArchStdEvent": "ASE_FP_FMA_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point FMA operations." },
+    { "ArchStdEvent": "SVE_FP_FMA_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point FMA operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_FMA_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point FMA operations." },
+    { "ArchStdEvent": "FP_MUL_SPEC", "BriefDescription": "This event counts architecturally executed floating-point multiply operations." },
+    { "ArchStdEvent": "ASE_FP_MUL_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point multiply operations." },
+    { "ArchStdEvent": "SVE_FP_MUL_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point multiply operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_MUL_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point multiply operations." },
+    { "ArchStdEvent": "FP_ADDSUB_SPEC", "BriefDescription": "This event counts architecturally executed floating-point add or subtract operations." },
+    { "ArchStdEvent": "ASE_FP_ADDSUB_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point add or subtract operations." },
+    { "ArchStdEvent": "SVE_FP_ADDSUB_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point add or subtract operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_ADDSUB_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point add or subtract operations." },
+    { "ArchStdEvent": "ASE_FP_RECPE_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point reciprocal estimate operations." },
+    { "ArchStdEvent": "SVE_FP_RECPE_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point reciprocal estimate operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_RECPE_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point reciprocal estimate operations." },
+    { "ArchStdEvent": "ASE_FP_CVT_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point convert operations." },
+    { "ArchStdEvent": "SVE_FP_CVT_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point convert operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_CVT_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point convert operations." },
+    { "ArchStdEvent": "SVE_FP_AREDUCE_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point accumulating reduction operations." },
+    { "ArchStdEvent": "ASE_FP_PREDUCE_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point pairwise add step operations." },
+    { "ArchStdEvent": "SVE_FP_VREDUCE_SPEC", "BriefDescription": "This event counts architecturally executed SVE floating-point vector reduction operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_VREDUCE_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE floating-point vector reduction operations." },
+    { "ArchStdEvent": "FP_SCALE_OPS_SPEC", "BriefDescription": "This event counts architecturally executed SVE arithmetic operations. See FP_SCALE_OPS_SPEC of the ARMv9 Reference Manual for more information. This event counter is incremented by (128 / CSIZE) and by twice that amount for operations that would also be counted by SVE_FP_FMA_SPEC." },
+    { "ArchStdEvent": "FP_FIXED_OPS_SPEC", "BriefDescription": "This event counts architecturally executed v8SIMD&FP arithmetic operations. See FP_FIXED_OPS_SPEC of the ARMv9 Reference Manual for more information. The event counter is incremented by the specified number of elements for Advanced SIMD operations or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC." },
+    { "ArchStdEvent": "ASE_SVE_FP_DOT_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point dot-product operations." },
+    { "ArchStdEvent": "ASE_SVE_FP_MMLA_SPEC", "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point matrix multiply operations." }
+]
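Because FP_SCALE_OPS_SPEC increments by (128 / CSIZE) per operation, the raw count corresponds to 128-bit vectors' worth of work, and a FLOP estimate has to scale it by the implemented SVE vector length. A sketch of that arithmetic (the helper name and counter values are hypothetical, not part of perf):

```python
def sve_flops(fp_scale_ops_spec, vl_bits):
    """Estimate SVE floating-point element operations from FP_SCALE_OPS_SPEC.

    The counter increments as if the vector were 128 bits wide
    (128 / CSIZE per operation), so scale the raw count by the
    implemented vector length in bits divided by 128.
    """
    return fp_scale_ops_spec * vl_bits // 128

# e.g. a raw count of 1000 on a 512-bit SVE implementation
estimate = sve_flops(1000, 512)
```

Non-scalable (fixed-width) work counted by FP_FIXED_OPS_SPEC is already in element units and would be added on top without scaling.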
+97
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/gcycle.json
···
+[
+    { "EventCode": "0x0880", "EventName": "GCYCLES", "BriefDescription": "This event counts the number of cycles at 100 MHz." },
+    { "EventCode": "0x0890", "EventName": "FL0_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 0." },
+    { "EventCode": "0x0891", "EventName": "FL1_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 1." },
+    { "EventCode": "0x0892", "EventName": "FL2_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 2." },
+    { "EventCode": "0x0893", "EventName": "FL3_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 3." },
+    { "EventCode": "0x0894", "EventName": "FL4_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 4." },
+    { "EventCode": "0x0895", "EventName": "FL5_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 5." },
+    { "EventCode": "0x0896", "EventName": "FL6_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 6." },
+    { "EventCode": "0x0897", "EventName": "FL7_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 7." },
+    { "EventCode": "0x0898", "EventName": "FL8_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 8." },
+    { "EventCode": "0x0899", "EventName": "FL9_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 9." },
+    { "EventCode": "0x089A", "EventName": "FL10_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 10." },
+    { "EventCode": "0x089B", "EventName": "FL11_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 11." },
+    { "EventCode": "0x089C", "EventName": "FL12_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 12." },
+    { "EventCode": "0x089D", "EventName": "FL13_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 13." },
+    { "EventCode": "0x089E", "EventName": "FL14_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 14." },
+    { "EventCode": "0x089F", "EventName": "FL15_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in Frequency Level 15." },
+    { "EventCode": "0x08A0", "EventName": "RETENTION_GCYCLES", "BriefDescription": "This event counts the number of cycles the measured core spends in the RETENTION state." },
+    { "EventCode": "0x08A1", "EventName": "RETENTION_COUNT", "BriefDescription": "This event counts the number of changes from the normal state to the RETENTION state." }
+]
+10
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/general.json
[
  { "ArchStdEvent": "CPU_CYCLES", "BriefDescription": "This event counts every cycle." },
  { "ArchStdEvent": "CNT_CYCLES", "BriefDescription": "This event counts the constant frequency cycles counter increments at a constant frequency equal to the rate of increment of the System counter." }
]
+52
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/hwpf.json
[
  { "EventCode": "0x0230", "EventName": "L1HWPF_STREAM_PF", "BriefDescription": "This event counts streaming prefetch requests to L1D cache generated by hardware prefetcher." },
  { "EventCode": "0x0231", "EventName": "L1HWPF_STRIDE_PF", "BriefDescription": "This event counts stride prefetch requests to L1D cache generated by hardware prefetcher." },
  { "EventCode": "0x0232", "EventName": "L1HWPF_PFTGT_PF", "BriefDescription": "This event counts LDS prefetch requests to L1D cache generated by hardware prefetcher." },
  { "EventCode": "0x0234", "EventName": "L2HWPF_STREAM_PF", "BriefDescription": "This event counts streaming prefetch requests to L2 cache generated by hardware prefetcher." },
  { "EventCode": "0x0235", "EventName": "L2HWPF_STRIDE_PF", "BriefDescription": "This event counts stride prefetch requests to L2 cache generated by hardware prefetcher." },
  { "EventCode": "0x0237", "EventName": "L2HWPF_OTHER", "BriefDescription": "This event counts prefetch requests to L2 cache generated by the other causes." },
  { "EventCode": "0x0238", "EventName": "L3HWPF_STREAM_PF", "BriefDescription": "This event counts streaming prefetch requests to L3 cache generated by hardware prefetcher." },
  { "EventCode": "0x0239", "EventName": "L3HWPF_STRIDE_PF", "BriefDescription": "This event counts stride prefetch requests to L3 cache generated by hardware prefetcher." },
  { "EventCode": "0x023B", "EventName": "L3HWPF_OTHER", "BriefDescription": "This event counts prefetch requests to L3 cache generated by the other causes." },
  { "EventCode": "0x023C", "EventName": "L1IHWPF_NEXTLINE_PF", "BriefDescription": "This event counts next line's prefetch requests to L1I cache generated by hardware prefetcher." }
]
+113
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1d_cache.json
[
  { "ArchStdEvent": "L1D_CACHE_REFILL", "BriefDescription": "This event counts operations that cause a refill of the L1D cache. See L1D_CACHE_REFILL of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L1D_CACHE", "BriefDescription": "This event counts operations that cause a cache access to the L1D cache. See L1D_CACHE of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L1D_CACHE_WB", "BriefDescription": "This event counts every write-back of data from the L1D cache. See L1D_CACHE_WB of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L1D_CACHE_LMISS_RD", "BriefDescription": "This event counts operations that cause a refill of the L1D cache that incurs additional latency." },
  { "ArchStdEvent": "L1D_CACHE_RD", "BriefDescription": "This event counts L1D CACHE caused by read access." },
  { "ArchStdEvent": "L1D_CACHE_WR", "BriefDescription": "This event counts L1D CACHE caused by write access." },
  { "ArchStdEvent": "L1D_CACHE_REFILL_RD", "BriefDescription": "This event counts L1D_CACHE_REFILL caused by read access." },
  { "ArchStdEvent": "L1D_CACHE_REFILL_WR", "BriefDescription": "This event counts L1D_CACHE_REFILL caused by write access." },
  { "EventCode": "0x0200", "EventName": "L1D_CACHE_DM", "BriefDescription": "This event counts L1D_CACHE caused by demand access." },
  { "EventCode": "0x0201", "EventName": "L1D_CACHE_DM_RD", "BriefDescription": "This event counts L1D_CACHE caused by demand read access." },
  { "EventCode": "0x0202", "EventName": "L1D_CACHE_DM_WR", "BriefDescription": "This event counts L1D_CACHE caused by demand write access." },
  { "EventCode": "0x0208", "EventName": "L1D_CACHE_REFILL_DM", "BriefDescription": "This event counts L1D_CACHE_REFILL caused by demand access." },
  { "EventCode": "0x0209", "EventName": "L1D_CACHE_REFILL_DM_RD", "BriefDescription": "This event counts L1D_CACHE_REFILL caused by demand read access." },
  { "EventCode": "0x020A", "EventName": "L1D_CACHE_REFILL_DM_WR", "BriefDescription": "This event counts L1D_CACHE_REFILL caused by demand write access." },
  { "EventCode": "0x020D", "EventName": "L1D_CACHE_BTC", "BriefDescription": "This event counts demand access that hits cache line with shared status and requests exclusive access in the Level 1 data cache, causing a coherence access to outside of the Level 1 caches of this PE." },
  { "ArchStdEvent": "L1D_CACHE_MISS", "BriefDescription": "This event counts demand access that misses in the Level 1 data cache, causing an access to outside of the Level 1 caches of this PE." },
  { "ArchStdEvent": "L1D_CACHE_HWPRF", "BriefDescription": "This event counts access counted by L1D_CACHE that is due to a hardware prefetch." },
  { "ArchStdEvent": "L1D_CACHE_REFILL_HWPRF", "BriefDescription": "This event counts hardware prefetch counted by L1D_CACHE_HWPRF that causes a refill of the Level 1 data cache from outside of the Level 1 data cache." },
  { "ArchStdEvent": "L1D_CACHE_HIT_RD", "BriefDescription": "This event counts demand read counted by L1D_CACHE_RD that hits in the Level 1 data cache." },
  { "ArchStdEvent": "L1D_CACHE_HIT_WR", "BriefDescription": "This event counts demand write counted by L1D_CACHE_WR that hits in the Level 1 data cache." },
  { "ArchStdEvent": "L1D_CACHE_HIT", "BriefDescription": "This event counts access counted by L1D_CACHE that hits in the Level 1 data cache." },
  { "ArchStdEvent": "L1D_LFB_HIT_RD", "BriefDescription": "This event counts demand access counted by L1D_CACHE_HIT_RD that hits a cache line that is in the process of being loaded into the Level 1 data cache." },
  { "ArchStdEvent": "L1D_LFB_HIT_WR", "BriefDescription": "This event counts demand access counted by L1D_CACHE_HIT_WR that hits a cache line that is in the process of being loaded into the Level 1 data cache." },
  { "ArchStdEvent": "L1D_CACHE_PRF", "BriefDescription": "This event counts fetch counted by either Level 1 data hardware prefetch or Level 1 data software prefetch." },
  { "ArchStdEvent": "L1D_CACHE_REFILL_PRF", "BriefDescription": "This event counts hardware prefetch counted by L1D_CACHE_PRF that causes a refill of the Level 1 data cache from outside of the Level 1 data cache." },
  { "ArchStdEvent": "L1D_CACHE_REFILL_PERCYC", "BriefDescription": "The counter counts by the number of cache refills counted by L1D_CACHE_REFILL in progress on each Processor cycle." }
]
+52
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1i_cache.json
[
  { "ArchStdEvent": "L1I_CACHE_REFILL", "BriefDescription": "This event counts operations that cause a refill of the L1I cache. See L1I_CACHE_REFILL of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L1I_CACHE", "BriefDescription": "This event counts operations that cause a cache access to the L1I cache. See L1I_CACHE of ARMv9 Reference Manual for more information." },
  { "EventCode": "0x0207", "EventName": "L1I_CACHE_DM_RD", "BriefDescription": "This event counts L1I_CACHE caused by demand read access." },
  { "EventCode": "0x020F", "EventName": "L1I_CACHE_REFILL_DM_RD", "BriefDescription": "This event counts L1I_CACHE_REFILL caused by demand read access." },
  { "ArchStdEvent": "L1I_CACHE_LMISS", "BriefDescription": "This event counts operations that cause a refill of the L1I cache that incurs additional latency." },
  { "ArchStdEvent": "L1I_CACHE_HWPRF", "BriefDescription": "This event counts access counted by L1I_CACHE that is due to a hardware prefetch." },
  { "ArchStdEvent": "L1I_CACHE_REFILL_HWPRF", "BriefDescription": "This event counts hardware prefetch counted by L1I_CACHE_HWPRF that causes a refill of the Level 1 instruction cache from outside of the Level 1 instruction cache." },
  { "ArchStdEvent": "L1I_CACHE_HIT_RD", "BriefDescription": "This event counts demand fetch counted by L1I_CACHE_DM_RD that hits in the Level 1 instruction cache." },
  { "ArchStdEvent": "L1I_CACHE_HIT", "BriefDescription": "This event counts access counted by L1I_CACHE that hits in the Level 1 instruction cache." },
  { "ArchStdEvent": "L1I_LFB_HIT_RD", "BriefDescription": "This event counts demand access counted by L1I_CACHE_HIT_RD that hits a cache line that is in the process of being loaded into the Level 1 instruction cache." },
  { "ArchStdEvent": "L1I_CACHE_REFILL_PRF", "BriefDescription": "This event counts hardware prefetch counted by L1I_CACHE_PRF that causes a refill of the Level 1 instruction cache from outside of the Level 1 instruction cache." },
  { "ArchStdEvent": "L1I_CACHE_REFILL_PERCYC", "BriefDescription": "The counter counts by the number of cache refills counted by L1I_CACHE_REFILL in progress on each Processor cycle." }
]
+160
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l2_cache.json
[
  { "ArchStdEvent": "L2D_CACHE", "BriefDescription": "This event counts operations that cause a cache access to the L2 cache. See L2D_CACHE of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L2D_CACHE_REFILL", "BriefDescription": "This event counts operations that cause a refill of the L2 cache. See L2D_CACHE_REFILL of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L2D_CACHE_WB", "BriefDescription": "This event counts every write-back of data from the L2 cache caused by L2 replace, non-temporal-store and DC ZVA." },
  { "ArchStdEvent": "L2I_TLB_REFILL", "BriefDescription": "This event counts operations that cause a TLB refill of the L2I TLB. See L2I_TLB_REFILL of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L2I_TLB", "BriefDescription": "This event counts operations that cause a TLB access to the L2I TLB. See L2I_TLB of ARMv9 Reference Manual for more information." },
  { "ArchStdEvent": "L2D_CACHE_RD", "BriefDescription": "This event counts L2D CACHE caused by read access." },
  { "ArchStdEvent": "L2D_CACHE_WR", "BriefDescription": "This event counts L2D CACHE caused by write access." },
  { "ArchStdEvent": "L2D_CACHE_REFILL_RD", "BriefDescription": "This event counts L2D CACHE_REFILL caused by read access." },
  { "ArchStdEvent": "L2D_CACHE_REFILL_WR", "BriefDescription": "This event counts L2D CACHE_REFILL caused by write access." },
  { "ArchStdEvent": "L2D_CACHE_WB_VICTIM", "BriefDescription": "This event counts every write-back of data from the L2 cache caused by L2 replace." },
  { "EventCode": "0x0300", "EventName": "L2D_CACHE_DM", "BriefDescription": "This event counts L2D_CACHE caused by demand access." },
  { "EventCode": "0x0301", "EventName": "L2D_CACHE_DM_RD", "BriefDescription": "This event counts L2D_CACHE caused by demand read access." },
  { "EventCode": "0x0302", "EventName": "L2D_CACHE_DM_WR", "BriefDescription": "This event counts L2D_CACHE caused by demand write access." },
  { "EventCode": "0x0305", "EventName": "L2D_CACHE_HWPRF_ADJACENT", "BriefDescription": "This event counts L2D_CACHE caused by hardware adjacent prefetch access." },
  { "EventCode": "0x0308", "EventName": "L2D_CACHE_REFILL_DM", "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand access." },
  { "EventCode": "0x0309", "EventName": "L2D_CACHE_REFILL_DM_RD", "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand read access." },
  { "EventCode": "0x030A", "EventName": "L2D_CACHE_REFILL_DM_WR", "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand write access." },
  { "EventCode": "0x030B", "EventName": "L2D_CACHE_REFILL_DM_WR_EXCL", "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand write exclusive access." },
  { "EventCode": "0x030C", "EventName": "L2D_CACHE_REFILL_DM_WR_ATOM", "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand write atomic access." },
  { "EventCode": "0x030D", "EventName": "L2D_CACHE_BTC", "BriefDescription": "This event counts demand access that hits cache line with shared status and requests exclusive access in the Level 1 data and Level 2 caches, causing a coherence access to outside of the Level 1 and Level 2 caches of this PE." },
  { "EventCode": "0x03B0", "EventName": "L2D_CACHE_WB_VICTIM_CLEAN", "BriefDescription": "This event counts every write-back of data from the L2 cache caused by L2 replace where the data is clean. In this case, the data will usually be written to L3 cache." },
  { "EventCode": "0x03B1", "EventName": "L2D_CACHE_WB_NT", "BriefDescription": "This event counts every write-back of data from the L2 cache caused by non-temporal-store." },
  { "EventCode": "0x03B2", "EventName": "L2D_CACHE_WB_DCZVA", "BriefDescription": "This event counts every write-back of data from the L2 cache caused by DC ZVA." },
  { "EventCode": "0x03B3", "EventName": "L2D_CACHE_FB", "BriefDescription": "This event counts every flush-back (drop) of data from the L2 cache." },
  { "ArchStdEvent": "L2D_CACHE_LMISS_RD", "BriefDescription": "This event counts operations that cause a refill of the L2D cache that incurs additional latency." },
  { "ArchStdEvent": "L2D_CACHE_MISS", "BriefDescription": "This event counts demand access that misses in the Level 1 data and Level 2 caches, causing an access to outside of the Level 1 and Level 2 caches of this PE." },
  { "ArchStdEvent": "L2D_CACHE_HWPRF", "BriefDescription": "This event counts access counted by L2D_CACHE that is due to a hardware prefetch." },
  { "ArchStdEvent": "L2D_CACHE_REFILL_HWPRF", "BriefDescription": "This event counts hardware prefetch counted by L2D_CACHE_HWPRF that causes a refill of the Level 2 cache, or any Level 1 data and instruction cache of this PE, from outside of those caches." },
  { "ArchStdEvent": "L2D_CACHE_HIT_RD", "BriefDescription": "This event counts demand read counted by L2D_CACHE_RD that hits in the Level 2 data cache." },
  { "ArchStdEvent": "L2D_CACHE_HIT_WR", "BriefDescription": "This event counts demand write counted by L2D_CACHE_WR that hits in the Level 2 data cache." },
  { "ArchStdEvent": "L2D_CACHE_HIT", "BriefDescription": "This event counts access counted by L2D_CACHE that hits in the Level 2 data cache." },
  { "ArchStdEvent": "L2D_LFB_HIT_RD", "BriefDescription": "This event counts demand access counted by L2D_CACHE_HIT_RD that hits a recently fetched line in the Level 2 cache." },
  { "ArchStdEvent": "L2D_LFB_HIT_WR", "BriefDescription": "This event counts demand access counted by L2D_CACHE_HIT_WR that hits a recently fetched line in the Level 2 cache." },
  { "ArchStdEvent": "L2D_CACHE_PRF", "BriefDescription": "This event counts fetch counted by either Level 2 data hardware prefetch or Level 2 data software prefetch." },
  { "ArchStdEvent": "L2D_CACHE_REFILL_PRF", "BriefDescription": "This event counts hardware prefetch counted by L2D_CACHE_PRF that causes a refill of the Level 2 data cache from outside of the Level 1 data cache." },
  { "ArchStdEvent": "L2D_CACHE_REFILL_PERCYC", "BriefDescription": "The counter counts by the number of cache refills counted by L2D_CACHE_REFILL in progress on each Processor cycle." }
]
+159
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l3_cache.json
[
  { "ArchStdEvent": "L3D_CACHE", "BriefDescription": "This event counts operations that cause a cache access to the L3 cache, as defined by the sum of L2D_CACHE_REFILL_L3D_CACHE and L2D_CACHE_WB_VICTIM_CLEAN events." },
  { "ArchStdEvent": "L3D_CACHE_RD", "BriefDescription": "This event counts access counted by L3D_CACHE that is a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_CACHE events." },
  { "EventCode": "0x0390", "EventName": "L2D_CACHE_REFILL_L3D_CACHE", "BriefDescription": "This event counts operations that cause a cache access to the L3 cache." },
  { "EventCode": "0x0391", "EventName": "L2D_CACHE_REFILL_L3D_CACHE_DM", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by demand access." },
  { "EventCode": "0x0392", "EventName": "L2D_CACHE_REFILL_L3D_CACHE_DM_RD", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by demand read access." },
  { "EventCode": "0x0393", "EventName": "L2D_CACHE_REFILL_L3D_CACHE_DM_WR", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by demand write access." },
  { "EventCode": "0x0394", "EventName": "L2D_CACHE_REFILL_L3D_CACHE_PRF", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by prefetch access." },
  { "EventCode": "0x0395", "EventName": "L2D_CACHE_REFILL_L3D_CACHE_HWPRF", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by hardware prefetch access." },
  { "EventCode": "0x0396", "EventName": "L2D_CACHE_REFILL_L3D_MISS", "BriefDescription": "This event counts operations that cause a miss of the L3 cache." },
  { "EventCode": "0x0397", "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by demand access." },
  { "EventCode": "0x0398", "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_RD", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by demand read access." },
  { "EventCode": "0x0399", "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_WR", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by demand write access." },
  { "EventCode": "0x039A", "EventName": "L2D_CACHE_REFILL_L3D_MISS_PRF", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by prefetch access." },
  { "EventCode": "0x039B", "EventName": "L2D_CACHE_REFILL_L3D_MISS_HWPRF", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by hardware prefetch access." },
  { "EventCode": "0x039C", "EventName": "L2D_CACHE_REFILL_L3D_HIT", "BriefDescription": "This event counts operations that cause a hit of the L3 cache." },
  { "EventCode": "0x039D", "EventName": "L2D_CACHE_REFILL_L3D_HIT_DM", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by demand access." },
  { "EventCode": "0x039E", "EventName": "L2D_CACHE_REFILL_L3D_HIT_DM_RD", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by demand read access." },
  { "EventCode": "0x039F", "EventName": "L2D_CACHE_REFILL_L3D_HIT_DM_WR", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by demand write access." },
  { "EventCode": "0x03A0", "EventName": "L2D_CACHE_REFILL_L3D_HIT_PRF", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by prefetch access." },
  { "EventCode": "0x03A1", "EventName": "L2D_CACHE_REFILL_L3D_HIT_HWPRF", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by hardware prefetch access." },
  { "EventCode": "0x03A2", "EventName": "L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT", "BriefDescription": "This event counts the number of L3 cache misses where the requests hit the PFTGT buffer." },
  { "EventCode": "0x03A3", "EventName": "L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT_DM", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT caused by demand access." },
  { "EventCode": "0x03A4", "EventName": "L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT_DM_RD", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT caused by demand read access." },
  { "EventCode": "0x03A5", "EventName": "L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT_DM_WR", "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_PFTGT_HIT caused by demand write access." },
  { "EventCode": "0x03A6", "EventName": "L2D_CACHE_REFILL_L3D_MISS_L_MEM", "BriefDescription": "This event counts the number of L3 cache misses where the requests access the memory in the same socket as the requests." },
  { "EventCode": "0x03A7", "EventName": "L2D_CACHE_REFILL_L3D_MISS_FR_MEM", "BriefDescription": "This event counts the number of L3 cache misses where the requests access the memory in the different socket from the requests." },
  { "EventCode": "0x03A8", "EventName": "L2D_CACHE_REFILL_L3D_MISS_L_L2", "BriefDescription": "This event counts the number of L3 cache misses where the requests access the different L2 cache from the requests in the same Numa nodes as the requests." },
  { "EventCode": "0x03A9", "EventName": "L2D_CACHE_REFILL_L3D_MISS_NR_L2", "BriefDescription": "This event counts the number of L3 cache misses where the requests access L2 cache in the different Numa nodes from the requests in the same socket as the requests." },
  { "EventCode": "0x03AA", "EventName": "L2D_CACHE_REFILL_L3D_MISS_NR_L3", "BriefDescription": "This event counts the number of L3 cache misses where the requests access L3 cache in the different Numa nodes from the requests in the same socket as the requests." },
  { "EventCode": "0x03AB", "EventName": "L2D_CACHE_REFILL_L3D_MISS_FR_L2", "BriefDescription": "This event counts the number of L3 cache misses where the requests access L2 cache in the different socket from the requests." },
  { "EventCode": "0x03AC", "EventName": "L2D_CACHE_REFILL_L3D_MISS_FR_L3", "BriefDescription": "This event counts the number of L3 cache misses where the requests access L3 cache in the different socket from the requests." },
  { "ArchStdEvent": "L3D_CACHE_LMISS_RD", "BriefDescription": "This event counts access counted by L3D_CACHE that is not completed by the L3D cache, and a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_MISS events." }
]
+10
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/ll_cache.json
[
  { "ArchStdEvent": "LL_CACHE_RD", "BriefDescription": "This event counts access counted by L3D_CACHE that is a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_CACHE events." },
  { "ArchStdEvent": "LL_CACHE_MISS_RD", "BriefDescription": "This event counts access counted by L3D_CACHE that is not completed by the L3D cache, and a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_MISS events." }
]
+10
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/memory.json
[
  { "ArchStdEvent": "MEM_ACCESS", "BriefDescription": "This event counts architecturally executed memory-reading instructions and memory-writing instructions, as defined by the LDST_SPEC events." },
  { "ArchStdEvent": "MEM_ACCESS_RD", "BriefDescription": "This event counts architecturally executed memory-reading instructions, as defined by the LD_SPEC events." }
]
+208
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pipeline.json
[
  { "EventCode": "0x01A0", "EventName": "EAGA_VAL", "BriefDescription": "This event counts valid cycles of EAGA pipeline." },
  { "EventCode": "0x01A1", "EventName": "EAGB_VAL", "BriefDescription": "This event counts valid cycles of EAGB pipeline." },
  { "EventCode": "0x01A3", "EventName": "PRX_VAL", "BriefDescription": "This event counts valid cycles of PRX pipeline." },
  { "EventCode": "0x01A4", "EventName": "EXA_VAL", "BriefDescription": "This event counts valid cycles of EXA pipeline." },
  { "EventCode": "0x01A5", "EventName": "EXB_VAL", "BriefDescription": "This event counts valid cycles of EXB pipeline." },
  { "EventCode": "0x01A6", "EventName": "EXC_VAL", "BriefDescription": "This event counts valid cycles of EXC pipeline." },
  { "EventCode": "0x01A7", "EventName": "EXD_VAL", "BriefDescription": "This event counts valid cycles of EXD pipeline." },
  { "EventCode": "0x01A8", "EventName": "FLA_VAL", "BriefDescription": "This event counts valid cycles of FLA pipeline." },
  { "EventCode": "0x01A9", "EventName": "FLB_VAL", "BriefDescription": "This event counts valid cycles of FLB pipeline." },
  { "EventCode": "0x01AA", "EventName": "STEA_VAL", "BriefDescription": "This event counts valid cycles of STEA pipeline." },
  { "EventCode": "0x01AB", "EventName": "STEB_VAL", "BriefDescription": "This event counts valid cycles of STEB pipeline." },
  { "EventCode": "0x01AC", "EventName": "STFL_VAL", "BriefDescription": "This event counts valid cycles of STFL pipeline." },
  { "EventCode": "0x01AD", "EventName": "STPX_VAL", "BriefDescription": "This event counts valid cycles of STPX pipeline." },
  { "EventCode": "0x01B0", "EventName": "FLA_VAL_PRD_CNT", "BriefDescription": "This event counts the number of 1's in the predicate bits of request in FLA pipeline, where it is corrected so that it becomes 32 when all bits are 1." },
  { "EventCode": "0x01B1", "EventName": "FLB_VAL_PRD_CNT", "BriefDescription": "This event counts the number of 1's in the predicate bits of request in FLB pipeline, where it is corrected so that it becomes 32 when all bits are 1." },
  { "EventCode": "0x01B2", "EventName": "FLA_VAL_FOR_PRD", "BriefDescription": "This event counts valid cycles of FLA pipeline." },
  { "EventCode": "0x01B3", "EventName": "FLB_VAL_FOR_PRD", "BriefDescription": "This event counts valid cycles of FLB pipeline." },
  { "EventCode": "0x0240", "EventName": "L1_PIPE0_VAL", "BriefDescription": "This event counts valid cycles of L1D cache pipeline#0." },
  { "EventCode": "0x0241", "EventName": "L1_PIPE1_VAL", "BriefDescription": "This event counts valid cycles of L1D cache pipeline#1." },
  { "EventCode": "0x0242", "EventName": "L1_PIPE2_VAL", "BriefDescription": "This event counts valid cycles of L1D cache pipeline#2." },
  { "EventCode": "0x0250", "EventName": "L1_PIPE0_COMP", "BriefDescription": "This event counts completed requests in L1D cache pipeline#0." },
  { "EventCode": "0x0251", "EventName": "L1_PIPE1_COMP", "BriefDescription": "This event counts completed requests in L1D cache pipeline#1." },
  { "EventCode": "0x025A", "EventName": "L1_PIPE_ABORT_STLD_INTLK", "BriefDescription": "This event counts aborted requests in L1D pipelines that due to store-load interlock." },
  { "EventCode": "0x026C", "EventName": "L1I_PIPE_COMP", "BriefDescription": "This event counts completed requests in L1I cache pipeline." },
  { "EventCode": "0x026D", "EventName": "L1I_PIPE_VAL", "BriefDescription": "This event counts valid cycles of L1I cache pipeline." },
  { "EventCode": "0x0278", "EventName": "L1_PIPE0_VAL_IU_TAG_ADRS_SCE", "BriefDescription": "This event counts requests in L1D cache pipeline#0 that its sce bit of tagged address is 1." },
  { "EventCode": "0x0279", "EventName": "L1_PIPE1_VAL_IU_TAG_ADRS_SCE", "BriefDescription": "This event counts requests in L1D cache pipeline#1 that its sce bit of tagged address is 1." },
  { "EventCode": "0x02A0", "EventName": "L1_PIPE0_VAL_IU_NOT_SEC0", "BriefDescription": "This event counts requests in L1D cache pipeline#0 that its sector cache ID is not 0." },
  { "EventCode": "0x02A1", "EventName": "L1_PIPE1_VAL_IU_NOT_SEC0", "BriefDescription": "This event counts requests in L1D cache pipeline#1 that its sector cache ID is not 0." },
  { "EventCode": "0x02B0", "EventName": "L1_PIPE_COMP_GATHER_2FLOW", "BriefDescription": "This event counts the number of times where 2 elements of the gather instructions became 2 flows because 2 elements could not be combined." },
  { "EventCode": "0x02B1", "EventName": "L1_PIPE_COMP_GATHER_1FLOW", "BriefDescription": "This event counts the number of times where 2 elements of the gather instructions became 1 flow because 2 elements could be combined." },
  { "EventCode": "0x02B2", "EventName": "L1_PIPE_COMP_GATHER_0FLOW", "BriefDescription": "This event counts the number of times where 2 elements of the gather instructions became 0 flow because both predicate values are 0." },
  { "EventCode": "0x02B3", "EventName": "L1_PIPE_COMP_SCATTER_1FLOW", "BriefDescription": "This event counts the number of flows of the scatter instructions." },
  { "EventCode": "0x02B8", "EventName": "L1_PIPE0_COMP_PRD_CNT", "BriefDescription": "This event counts the number of 1's in the predicate bits of request in L1D cache pipeline#0, where it is corrected so that it becomes 64 when all bits are 1." },
  { "EventCode": "0x02B9", "EventName": "L1_PIPE1_COMP_PRD_CNT", "BriefDescription": "This event counts the number of 1's in the predicate bits of request in L1D cache pipeline#1, where it is corrected so that it becomes 64 when all bits are 1." },
  { "EventCode": "0x0330", "EventName": "L2_PIPE_VAL", "BriefDescription": "This event counts valid cycles of L2 cache pipeline." },
  { "EventCode": "0x0350", "EventName": "L2_PIPE_COMP_ALL", "BriefDescription": "This event counts completed requests in L2 cache pipeline." },
  { "EventCode": "0x0370", "EventName": "L2_PIPE_COMP_PF_L2MIB_MCH", "BriefDescription": "This event counts operations where software or hardware prefetch hits an L2 cache refill buffer allocated by demand access." },
  { "ArchStdEvent": "STALL_FRONTEND_TLB", "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the instruction TLB." },
  { "ArchStdEvent": "STALL_BACKEND_TLB", "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when there is a demand data miss in the data TLB." },
  { "ArchStdEvent": "STALL_BACKEND_ST", "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when the backend is stalled waiting for a store." },
  { "ArchStdEvent": "STALL_BACKEND_ILOCK", "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when operations are available from the frontend but at least one is not ready to be sent to the backend because of an input dependency." }
]
+10
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pmu.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "PMU_OVFS", 4 + "BriefDescription": "This event counts the event generated each time one of the conditions described in the Arm Architecture Reference Manual for A-profile architecture occurs. This event is only for output to the trace unit." 5 + }, 6 + { 7 + "ArchStdEvent": "PMU_HOVFS", 8 + "BriefDescription": "This event counts the event generated each time an event is counted by an event counter <n> and all of the conditions described in the Arm Architecture Reference Manual for A-profile architecture occur. This event is only for output to the trace unit." 9 + } 10 + ]
+30
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/retired.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "SW_INCR", 4 + "BriefDescription": "This event counts on writes to the PMSWINC register." 5 + }, 6 + { 7 + "ArchStdEvent": "INST_RETIRED", 8 + "BriefDescription": "This event counts every architecturally executed instruction." 9 + }, 10 + { 11 + "ArchStdEvent": "CID_WRITE_RETIRED", 12 + "BriefDescription": "This event counts every write to CONTEXTIDR." 13 + }, 14 + { 15 + "ArchStdEvent": "BR_RETIRED", 16 + "BriefDescription": "This event counts every architecturally executed branch instruction." 17 + }, 18 + { 19 + "ArchStdEvent": "BR_MIS_PRED_RETIRED", 20 + "BriefDescription": "This event counts every architecturally executed branch instruction that was mispredicted." 21 + }, 22 + { 23 + "ArchStdEvent": "OP_RETIRED", 24 + "BriefDescription": "This event counts every architecturally executed micro-operation." 25 + }, 26 + { 27 + "ArchStdEvent": "UOP_RETIRED", 28 + "BriefDescription": "This event counts micro-operations that would be executed in a simple sequential execution of the program." 29 + } 30 + ]
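The retired-branch events in this file can be combined into a misprediction ratio. A minimal sketch with invented counter values (in practice they would come from something like `perf stat -e BR_RETIRED,BR_MIS_PRED_RETIRED -- <cmd>`):

```python
# Sketch: deriving a branch misprediction ratio from the BR_RETIRED and
# BR_MIS_PRED_RETIRED events above. Counter values here are placeholders,
# not measurements.
counts = {
    "BR_RETIRED": 1_000_000,        # architecturally executed branches
    "BR_MIS_PRED_RETIRED": 25_000,  # of those, mispredicted
}

mispredict_ratio = counts["BR_MIS_PRED_RETIRED"] / counts["BR_RETIRED"]
print(f"branch mispredict ratio: {mispredict_ratio:.2%}")
```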
+171
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/spec_operation.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "BR_MIS_PRED", 4 + "BriefDescription": "This event counts each correction to the predicted program flow that occurs because of a misprediction from, or no prediction from, the branch prediction resources and that relates to instructions that the branch prediction resources are capable of predicting." 5 + }, 6 + { 7 + "ArchStdEvent": "BR_PRED", 8 + "BriefDescription": "This event counts every branch or other change in the program flow that the branch prediction resources are capable of predicting." 9 + }, 10 + { 11 + "ArchStdEvent": "INST_SPEC", 12 + "BriefDescription": "This event counts every architecturally executed instruction." 13 + }, 14 + { 15 + "ArchStdEvent": "OP_SPEC", 16 + "BriefDescription": "This event counts every speculatively executed micro-operation." 17 + }, 18 + { 19 + "ArchStdEvent": "LDREX_SPEC", 20 + "BriefDescription": "This event counts architecturally executed load-exclusive instructions." 21 + }, 22 + { 23 + "ArchStdEvent": "STREX_SPEC", 24 + "BriefDescription": "This event counts architecturally executed store-exclusive instructions." 25 + }, 26 + { 27 + "ArchStdEvent": "LD_SPEC", 28 + "BriefDescription": "This event counts architecturally executed memory-reading instructions, as defined by the LD_RETIRED event." 29 + }, 30 + { 31 + "ArchStdEvent": "ST_SPEC", 32 + "BriefDescription": "This event counts architecturally executed memory-writing instructions, as defined by the ST_RETIRED event. This event counts DCZVA as a store operation." 33 + }, 34 + { 35 + "ArchStdEvent": "LDST_SPEC", 36 + "BriefDescription": "This event counts architecturally executed memory-reading instructions and memory-writing instructions, as defined by the LD_RETIRED and ST_RETIRED events." 37 + }, 38 + { 39 + "ArchStdEvent": "DP_SPEC", 40 + "BriefDescription": "This event counts architecturally executed integer data-processing instructions. See DP_SPEC of ARMv9 Reference Manual for more information." 
41 + }, 42 + { 43 + "ArchStdEvent": "ASE_SPEC", 44 + "BriefDescription": "This event counts architecturally executed Advanced SIMD data-processing instructions." 45 + }, 46 + { 47 + "ArchStdEvent": "VFP_SPEC", 48 + "BriefDescription": "This event counts architecturally executed floating-point data-processing instructions." 49 + }, 50 + { 51 + "ArchStdEvent": "PC_WRITE_SPEC", 52 + "BriefDescription": "This event counts only software changes of the PC defined by the instruction architecturally executed, condition code check pass, software change of the PC event." 53 + }, 54 + { 55 + "ArchStdEvent": "CRYPTO_SPEC", 56 + "BriefDescription": "This event counts architecturally executed cryptographic instructions, except PMULL and VMULL." 57 + }, 58 + { 59 + "ArchStdEvent": "BR_IMMED_SPEC", 60 + "BriefDescription": "This event counts architecturally executed immediate branch instructions." 61 + }, 62 + { 63 + "ArchStdEvent": "BR_RETURN_SPEC", 64 + "BriefDescription": "This event counts architecturally executed procedure return operations defined by the BR_RETURN_RETIRED event." 65 + }, 66 + { 67 + "ArchStdEvent": "BR_INDIRECT_SPEC", 68 + "BriefDescription": "This event counts architecturally executed indirect branch instructions, including software changes of the PC other than exception-generating instructions and immediate branch instructions." 69 + }, 70 + { 71 + "ArchStdEvent": "ISB_SPEC", 72 + "BriefDescription": "This event counts architecturally executed Instruction Synchronization Barrier instructions." 73 + }, 74 + { 75 + "ArchStdEvent": "DSB_SPEC", 76 + "BriefDescription": "This event counts architecturally executed Data Synchronization Barrier instructions." 77 + }, 78 + { 79 + "ArchStdEvent": "DMB_SPEC", 80 + "BriefDescription": "This event counts architecturally executed Data Memory Barrier instructions, excluding the implied barrier operations of load/store operations with release consistency semantics." 
81 + }, 82 + { 83 + "ArchStdEvent": "CSDB_SPEC", 84 + "BriefDescription": "This event counts speculatively executed control speculation barrier instructions." 85 + }, 86 + { 87 + "EventCode": "0x0108", 88 + "EventName": "PRD_SPEC", 89 + "BriefDescription": "This event counts architecturally executed operations that use a predicate register." 90 + }, 91 + { 92 + "EventCode": "0x0109", 93 + "EventName": "IEL_SPEC", 94 + "BriefDescription": "This event counts architecturally executed inter-element manipulation operations." 95 + }, 96 + { 97 + "EventCode": "0x010A", 98 + "EventName": "IREG_SPEC", 99 + "BriefDescription": "This event counts architecturally executed inter-register manipulation operations." 100 + }, 101 + { 102 + "EventCode": "0x011A", 103 + "EventName": "BC_LD_SPEC", 104 + "BriefDescription": "This event counts architecturally executed SIMD broadcast floating-point load operations." 105 + }, 106 + { 107 + "EventCode": "0x011B", 108 + "EventName": "DCZVA_SPEC", 109 + "BriefDescription": "This event counts architecturally executed zero blocking operations due to the DC ZVA instruction." 110 + }, 111 + { 112 + "EventCode": "0x0121", 113 + "EventName": "EFFECTIVE_INST_SPEC", 114 + "BriefDescription": "This event counts architecturally executed instructions, excluding the MOVPRFX instruction." 115 + }, 116 + { 117 + "EventCode": "0x0123", 118 + "EventName": "PRE_INDEX_SPEC", 119 + "BriefDescription": "This event counts architecturally executed operations that use pre-index as their addressing mode." 120 + }, 121 + { 122 + "EventCode": "0x0124", 123 + "EventName": "POST_INDEX_SPEC", 124 + "BriefDescription": "This event counts architecturally executed operations that use post-index as their addressing mode." 125 + }, 126 + { 127 + "EventCode": "0x0139", 128 + "EventName": "UOP_SPLIT", 129 + "BriefDescription": "This event counts occurrences of micro-operation splits." 
130 + }, 131 + { 132 + "ArchStdEvent": "ASE_INST_SPEC", 133 + "BriefDescription": "This event counts architecturally executed Advanced SIMD operations." 134 + }, 135 + { 136 + "ArchStdEvent": "INT_SPEC", 137 + "BriefDescription": "This event counts architecturally executed operations due to scalar, Advanced SIMD, and SVE instructions listed in Integer instructions section of ARMv9 Reference Manual." 138 + }, 139 + { 140 + "ArchStdEvent": "INT_DIV_SPEC", 141 + "BriefDescription": "This event counts architecturally executed integer divide operation." 142 + }, 143 + { 144 + "ArchStdEvent": "INT_DIV64_SPEC", 145 + "BriefDescription": "This event counts architecturally executed 64-bit integer divide operation." 146 + }, 147 + { 148 + "ArchStdEvent": "INT_MUL_SPEC", 149 + "BriefDescription": "This event counts architecturally executed integer multiply operation." 150 + }, 151 + { 152 + "ArchStdEvent": "INT_MUL64_SPEC", 153 + "BriefDescription": "This event counts architecturally executed integer 64-bit x 64-bit multiply operation." 154 + }, 155 + { 156 + "ArchStdEvent": "INT_MULH64_SPEC", 157 + "BriefDescription": "This event counts architecturally executed integer 64-bit x 64-bit multiply returning high part operation." 158 + }, 159 + { 160 + "ArchStdEvent": "NONFP_SPEC", 161 + "BriefDescription": "This event counts architecturally executed non-floating-point operations." 162 + }, 163 + { 164 + "ArchStdEvent": "INT_SCALE_OPS_SPEC", 165 + "BriefDescription": "This event counts each integer ALU operation counted by SVE_INT_SPEC. See ALU operation counts section of ARMv9 Reference Manual for information on the counter increment for different types of instruction." 166 + }, 167 + { 168 + "ArchStdEvent": "INT_FIXED_OPS_SPEC", 169 + "BriefDescription": "This event counts each integer ALU operation counted by INT_SPEC that is not counted by SVE_INT_SPEC. 
See ALU operation counts section of ARMv9 Reference Manual for information on the counter increment for different types of instruction." 170 + } 171 + ]
+94
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/stall.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "STALL_FRONTEND", 4 + "BriefDescription": "This event counts every cycle counted by the CPU_CYCLES event in which no operation was issued because there are no operations available to issue for this PE from the frontend." 5 + }, 6 + { 7 + "ArchStdEvent": "STALL_BACKEND", 8 + "BriefDescription": "This event counts every cycle counted by the CPU_CYCLES event in which no operation was issued because the backend is unable to accept any operations." 9 + }, 10 + { 11 + "ArchStdEvent": "STALL", 12 + "BriefDescription": "This event counts every cycle in which no instruction was dispatched from the decode unit." 13 + }, 14 + { 15 + "ArchStdEvent": "STALL_SLOT_BACKEND", 16 + "BriefDescription": "This event counts every cycle in which no instruction was dispatched from the decode unit due to the backend." 17 + }, 18 + { 19 + "ArchStdEvent": "STALL_SLOT_FRONTEND", 20 + "BriefDescription": "This event counts every cycle in which no instruction was dispatched from the decode unit due to the frontend." 21 + }, 22 + { 23 + "ArchStdEvent": "STALL_SLOT", 24 + "BriefDescription": "This event counts every cycle in which no instruction or operation slot was dispatched from the decode unit." 25 + }, 26 + { 27 + "ArchStdEvent": "STALL_BACKEND_MEM", 28 + "BriefDescription": "This event counts every cycle in which no instruction was dispatched from the decode unit due to a memory stall." 29 + }, 30 + { 31 + "ArchStdEvent": "STALL_FRONTEND_MEMBOUND", 32 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND when no instructions are delivered from the memory system." 33 + }, 34 + { 35 + "ArchStdEvent": "STALL_FRONTEND_L1I", 36 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the first level of instruction cache." 
37 + }, 38 + { 39 + "ArchStdEvent": "STALL_FRONTEND_L2I", 40 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the second level of instruction cache." 41 + }, 42 + { 43 + "ArchStdEvent": "STALL_FRONTEND_MEM", 44 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the last level of instruction cache within the PE clock domain or a non-cacheable instruction fetch in progress." 45 + }, 46 + { 47 + "ArchStdEvent": "STALL_FRONTEND_CPUBOUND", 48 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND when the frontend is stalled on a frontend processor resource, not including memory." 49 + }, 50 + { 51 + "ArchStdEvent": "STALL_FRONTEND_FLOW", 52 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_CPUBOUND when the frontend is stalled on unavailability of prediction flow resources." 53 + }, 54 + { 55 + "ArchStdEvent": "STALL_FRONTEND_FLUSH", 56 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_CPUBOUND when the frontend is recovering from a pipeline flush." 57 + }, 58 + { 59 + "ArchStdEvent": "STALL_FRONTEND_RENAME", 60 + "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_CPUBOUND when operations are available from the frontend but at least one is not ready to be sent to the backend because no rename register is available." 61 + }, 62 + { 63 + "ArchStdEvent": "STALL_BACKEND_MEMBOUND", 64 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when the backend is waiting for a memory access to complete." 65 + }, 66 + { 67 + "ArchStdEvent": "STALL_BACKEND_L1D", 68 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when there is a demand data miss in L1D cache." 
69 + }, 70 + { 71 + "ArchStdEvent": "STALL_BACKEND_L2D", 72 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when there is a demand data miss in L2D cache." 73 + }, 74 + { 75 + "ArchStdEvent": "STALL_BACKEND_CPUBOUND", 76 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when the backend is stalled on a processor resource, not including memory." 77 + }, 78 + { 79 + "ArchStdEvent": "STALL_BACKEND_BUSY", 80 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when operations are available from the frontend but the backend is not able to accept an operation because an execution unit is busy." 81 + }, 82 + { 83 + "ArchStdEvent": "STALL_BACKEND_RENAME", 84 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_CPUBOUND when operations are available from the frontend but at least one is not ready to be sent to the backend because no rename register is available." 85 + }, 86 + { 87 + "ArchStdEvent": "STALL_BACKEND_ATOMIC", 88 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when the backend is processing an Atomic operation." 89 + }, 90 + { 91 + "ArchStdEvent": "STALL_BACKEND_MEMCPYSET", 92 + "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when the backend is processing a Memory Copy or Set instruction." 93 + } 94 + ]
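The stall events above form a hierarchy (STALL_FRONTEND/STALL_BACKEND, each refined into *_MEMBOUND and *_CPUBOUND variants), which supports a coarse bound-ness breakdown. A minimal sketch with invented counter values; CPU_CYCLES is the standard Arm cycle event, not part of this file:

```python
# Sketch: a coarse stall breakdown from the STALL_* hierarchy above.
# All counter values are illustrative placeholders, not measurements.
counts = {
    "CPU_CYCLES": 10_000_000,
    "STALL_FRONTEND": 2_000_000,
    "STALL_BACKEND": 3_500_000,
    "STALL_BACKEND_MEMBOUND": 2_800_000,
}

frontend_bound = counts["STALL_FRONTEND"] / counts["CPU_CYCLES"]
backend_bound = counts["STALL_BACKEND"] / counts["CPU_CYCLES"]
# Within backend stalls, the memory-bound share:
backend_mem_share = counts["STALL_BACKEND_MEMBOUND"] / counts["STALL_BACKEND"]

print(f"frontend-bound: {frontend_bound:.0%}, backend-bound: {backend_bound:.0%}")
print(f"memory share of backend stalls: {backend_mem_share:.0%}")
```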
+254
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/sve.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "SIMD_INST_RETIRED", 4 + "BriefDescription": "This event counts architecturally executed SIMD instructions, excluding the Advanced SIMD scalar instructions and the instructions listed in Non-SIMD SVE instructions section of ARMv9 Reference Manual." 5 + }, 6 + { 7 + "ArchStdEvent": "SVE_INST_RETIRED", 8 + "BriefDescription": "This event counts architecturally executed SVE instructions, including the instructions listed in Non-SIMD SVE instructions section of ARMv9 Reference Manual." 9 + }, 10 + { 11 + "ArchStdEvent": "SVE_INST_SPEC", 12 + "BriefDescription": "This event counts architecturally executed SVE instructions, including the instructions listed in Non-SIMD SVE instructions section of ARMv9 Reference Manual." 13 + }, 14 + { 15 + "ArchStdEvent": "ASE_SVE_INST_SPEC", 16 + "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE operations." 17 + }, 18 + { 19 + "ArchStdEvent": "UOP_SPEC", 20 + "BriefDescription": "This event counts all architecturally executed micro-operations." 21 + }, 22 + { 23 + "ArchStdEvent": "SVE_MATH_SPEC", 24 + "BriefDescription": "This event counts architecturally executed math function operations due to the SVE FTSMUL, FTMAD, FTSSEL, and FEXPA instructions." 25 + }, 26 + { 27 + "ArchStdEvent": "FP_SPEC", 28 + "BriefDescription": "This event counts architecturally executed operations due to scalar, Advanced SIMD, and SVE instructions listed in Floating-point instructions section of ARMv9 Reference Manual." 29 + }, 30 + { 31 + "ArchStdEvent": "FP_FMA_SPEC", 32 + "BriefDescription": "This event counts architecturally executed floating-point fused multiply-add and multiply-subtract operations." 33 + }, 34 + { 35 + "ArchStdEvent": "FP_RECPE_SPEC", 36 + "BriefDescription": "This event counts architecturally executed floating-point reciprocal estimate operations due to the Advanced SIMD scalar, Advanced SIMD vector, and SVE FRECPE and FRSQRTE instructions." 
37 + }, 38 + { 39 + "ArchStdEvent": "FP_CVT_SPEC", 40 + "BriefDescription": "This event counts architecturally executed floating-point convert operations due to the scalar, Advanced SIMD, and SVE floating-point conversion instructions listed in Floating-point conversions section of ARMv9 Reference Manual." 41 + }, 42 + { 43 + "ArchStdEvent": "ASE_INT_SPEC", 44 + "BriefDescription": "This event counts architecturally executed Advanced SIMD integer operations." 45 + }, 46 + { 47 + "ArchStdEvent": "SVE_INT_SPEC", 48 + "BriefDescription": "This event counts architecturally executed SVE integer operations." 49 + }, 50 + { 51 + "ArchStdEvent": "ASE_SVE_INT_SPEC", 52 + "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE integer operations." 53 + }, 54 + { 55 + "ArchStdEvent": "SVE_INT_DIV_SPEC", 56 + "BriefDescription": "This event counts architecturally executed SVE integer divide operation." 57 + }, 58 + { 59 + "ArchStdEvent": "SVE_INT_DIV64_SPEC", 60 + "BriefDescription": "This event counts architecturally executed SVE 64-bit integer divide operation." 61 + }, 62 + { 63 + "ArchStdEvent": "ASE_INT_MUL_SPEC", 64 + "BriefDescription": "This event counts architecturally executed Advanced SIMD integer multiply operation." 65 + }, 66 + { 67 + "ArchStdEvent": "SVE_INT_MUL_SPEC", 68 + "BriefDescription": "This event counts architecturally executed SVE integer multiply operation." 69 + }, 70 + { 71 + "ArchStdEvent": "ASE_SVE_INT_MUL_SPEC", 72 + "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE integer multiply operations." 73 + }, 74 + { 75 + "ArchStdEvent": "SVE_INT_MUL64_SPEC", 76 + "BriefDescription": "This event counts architecturally executed SVE integer 64-bit x 64-bit multiply operation." 77 + }, 78 + { 79 + "ArchStdEvent": "SVE_INT_MULH64_SPEC", 80 + "BriefDescription": "This event counts architecturally executed SVE integer 64-bit x 64-bit multiply returning high part operations." 
81 + }, 82 + { 83 + "ArchStdEvent": "ASE_NONFP_SPEC", 84 + "BriefDescription": "This event counts architecturally executed Advanced SIMD non-floating-point operations." 85 + }, 86 + { 87 + "ArchStdEvent": "SVE_NONFP_SPEC", 88 + "BriefDescription": "This event counts architecturally executed SVE non-floating-point operations." 89 + }, 90 + { 91 + "ArchStdEvent": "ASE_SVE_NONFP_SPEC", 92 + "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE non-floating-point operations." 93 + }, 94 + { 95 + "ArchStdEvent": "ASE_INT_VREDUCE_SPEC", 96 + "BriefDescription": "This event counts architecturally executed Advanced SIMD integer reduction operation." 97 + }, 98 + { 99 + "ArchStdEvent": "SVE_INT_VREDUCE_SPEC", 100 + "BriefDescription": "This event counts architecturally executed SVE integer reduction operation." 101 + }, 102 + { 103 + "ArchStdEvent": "ASE_SVE_INT_VREDUCE_SPEC", 104 + "BriefDescription": "This event counts architecturally executed Advanced SIMD and SVE integer reduction operations." 105 + }, 106 + { 107 + "ArchStdEvent": "SVE_PERM_SPEC", 108 + "BriefDescription": "This event counts architecturally executed vector or predicate permute operation." 109 + }, 110 + { 111 + "ArchStdEvent": "SVE_XPIPE_Z2R_SPEC", 112 + "BriefDescription": "This event counts architecturally executed vector to general-purpose scalar cross-pipeline transfer operation." 113 + }, 114 + { 115 + "ArchStdEvent": "SVE_XPIPE_R2Z_SPEC", 116 + "BriefDescription": "This event counts architecturally executed general-purpose scalar to vector cross-pipeline transfer operation." 117 + }, 118 + { 119 + "ArchStdEvent": "SVE_PGEN_SPEC", 120 + "BriefDescription": "This event counts architecturally executed predicate-generating operation." 121 + }, 122 + { 123 + "ArchStdEvent": "SVE_PGEN_FLG_SPEC", 124 + "BriefDescription": "This event counts architecturally executed predicate-generating operation that sets condition flags." 
125 + }, 126 + { 127 + "ArchStdEvent": "SVE_PPERM_SPEC", 128 + "BriefDescription": "This event counts architecturally executed predicate permute operation." 129 + }, 130 + { 131 + "ArchStdEvent": "SVE_PRED_SPEC", 132 + "BriefDescription": "This event counts architecturally executed SIMD data-processing and load/store operations due to SVE instructions with a Governing predicate operand that determines the Active elements." 133 + }, 134 + { 135 + "ArchStdEvent": "SVE_MOVPRFX_SPEC", 136 + "BriefDescription": "This event counts architecturally executed operations due to MOVPRFX instructions, whether or not they were fused with the prefixed instruction." 137 + }, 138 + { 139 + "ArchStdEvent": "SVE_MOVPRFX_Z_SPEC", 140 + "BriefDescription": "This event counts architecturally executed operation counted by SVE_MOVPRFX_SPEC where the operation uses zeroing predication." 141 + }, 142 + { 143 + "ArchStdEvent": "SVE_MOVPRFX_M_SPEC", 144 + "BriefDescription": "This event counts architecturally executed operation counted by SVE_MOVPRFX_SPEC where the operation uses merging predication." 145 + }, 146 + { 147 + "ArchStdEvent": "SVE_MOVPRFX_U_SPEC", 148 + "BriefDescription": "This event counts architecturally executed operations due to MOVPRFX instructions that were not fused with the prefixed instruction." 149 + }, 150 + { 151 + "ArchStdEvent": "ASE_SVE_LD_SPEC", 152 + "BriefDescription": "This event counts architecturally executed operations that read from memory due to SVE and Advanced SIMD load instructions." 153 + }, 154 + { 155 + "ArchStdEvent": "ASE_SVE_ST_SPEC", 156 + "BriefDescription": "This event counts architecturally executed operations that write to memory due to SVE and Advanced SIMD store instructions." 157 + }, 158 + { 159 + "ArchStdEvent": "PRF_SPEC", 160 + "BriefDescription": "This event counts architecturally executed prefetch operations due to scalar PRFM, PRFUM and SVE PRF instructions." 
161 + }, 162 + { 163 + "ArchStdEvent": "BASE_LD_REG_SPEC", 164 + "BriefDescription": "This event counts architecturally executed operations that read from memory due to an instruction that loads a general-purpose register." 165 + }, 166 + { 167 + "ArchStdEvent": "BASE_ST_REG_SPEC", 168 + "BriefDescription": "This event counts architecturally executed operations that write to memory due to an instruction that stores a general-purpose register, excluding the DC ZVA instruction." 169 + }, 170 + { 171 + "ArchStdEvent": "SVE_LDR_REG_SPEC", 172 + "BriefDescription": "This event counts architecturally executed operations that read from memory due to an SVE LDR instruction." 173 + }, 174 + { 175 + "ArchStdEvent": "SVE_STR_REG_SPEC", 176 + "BriefDescription": "This event counts architecturally executed operations that write to memory due to an SVE STR instruction." 177 + }, 178 + { 179 + "ArchStdEvent": "SVE_LDR_PREG_SPEC", 180 + "BriefDescription": "This event counts architecturally executed operations that read from memory due to an SVE LDR (predicate) instruction." 181 + }, 182 + { 183 + "ArchStdEvent": "SVE_STR_PREG_SPEC", 184 + "BriefDescription": "This event counts architecturally executed operations that write to memory due to an SVE STR (predicate) instruction." 185 + }, 186 + { 187 + "ArchStdEvent": "SVE_PRF_CONTIG_SPEC", 188 + "BriefDescription": "This event counts architecturally executed operations that prefetch memory due to an SVE predicated single contiguous element prefetch instruction." 189 + }, 190 + { 191 + "ArchStdEvent": "SVE_LDNT_CONTIG_SPEC", 192 + "BriefDescription": "This event counts architecturally executed operation that reads from memory with a non-temporal hint due to an SVE non-temporal contiguous element load instruction." 
193 + }, 194 + { 195 + "ArchStdEvent": "SVE_STNT_CONTIG_SPEC", 196 + "BriefDescription": "This event counts architecturally executed operation that writes to memory with a non-temporal hint due to an SVE non-temporal contiguous element store instruction." 197 + }, 198 + { 199 + "ArchStdEvent": "ASE_SVE_LD_MULTI_SPEC", 200 + "BriefDescription": "This event counts architecturally executed operations that read from memory due to SVE and Advanced SIMD multiple vector contiguous structure load instructions." 201 + }, 202 + { 203 + "ArchStdEvent": "ASE_SVE_ST_MULTI_SPEC", 204 + "BriefDescription": "This event counts architecturally executed operations that write to memory due to SVE and Advanced SIMD multiple vector contiguous structure store instructions." 205 + }, 206 + { 207 + "ArchStdEvent": "SVE_LD_GATHER_SPEC", 208 + "BriefDescription": "This event counts architecturally executed operations that read from memory due to SVE non-contiguous gather-load instructions." 209 + }, 210 + { 211 + "ArchStdEvent": "SVE_ST_SCATTER_SPEC", 212 + "BriefDescription": "This event counts architecturally executed operations that write to memory due to SVE non-contiguous scatter-store instructions." 213 + }, 214 + { 215 + "ArchStdEvent": "SVE_PRF_GATHER_SPEC", 216 + "BriefDescription": "This event counts architecturally executed operations that prefetch memory due to SVE non-contiguous gather-prefetch instructions." 217 + }, 218 + { 219 + "ArchStdEvent": "SVE_LDFF_SPEC", 220 + "BriefDescription": "This event counts architecturally executed memory read operations due to SVE First-fault and Non-fault load instructions." 221 + }, 222 + { 223 + "ArchStdEvent": "FP_HP_SCALE_OPS_SPEC", 224 + "BriefDescription": "This event counts architecturally executed SVE half-precision arithmetic operations. See FP_HP_SCALE_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 8, or by 16 for operations that would also be counted by SVE_FP_FMA_SPEC." 
225 + }, 226 + { 227 + "ArchStdEvent": "FP_HP_FIXED_OPS_SPEC", 228 + "BriefDescription": "This event counts architecturally executed v8SIMD&FP half-precision arithmetic operations. See FP_HP_FIXED_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by the number of 16-bit elements for Advanced SIMD operations, or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC." 229 + }, 230 + { 231 + "ArchStdEvent": "FP_SP_SCALE_OPS_SPEC", 232 + "BriefDescription": "This event counts architecturally executed SVE single-precision arithmetic operations. See FP_SP_SCALE_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 4, or by 8 for operations that would also be counted by SVE_FP_FMA_SPEC." 233 + }, 234 + { 235 + "ArchStdEvent": "FP_SP_FIXED_OPS_SPEC", 236 + "BriefDescription": "This event counts architecturally executed v8SIMD&FP single-precision arithmetic operations. See FP_SP_FIXED_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by the number of 32-bit elements for Advanced SIMD operations, or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC." 237 + }, 238 + { 239 + "ArchStdEvent": "FP_DP_SCALE_OPS_SPEC", 240 + "BriefDescription": "This event counts architecturally executed SVE double-precision arithmetic operations. See FP_DP_SCALE_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 2, or by 4 for operations that would also be counted by SVE_FP_FMA_SPEC." 241 + }, 242 + { 243 + "ArchStdEvent": "FP_DP_FIXED_OPS_SPEC", 244 + "BriefDescription": "This event counts architecturally executed v8SIMD&FP double-precision arithmetic operations. See FP_DP_FIXED_OPS_SPEC of ARMv9 Reference Manual for more information. 
This event counter is incremented by 2 for Advanced SIMD operations, or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC." 245 + }, 246 + { 247 + "ArchStdEvent": "ASE_SVE_INT_DOT_SPEC", 248 + "BriefDescription": "This event counts architecturally executed microarchitectural Advanced SIMD or SVE integer dot-product operation." 249 + }, 250 + { 251 + "ArchStdEvent": "ASE_SVE_INT_MMLA_SPEC", 252 + "BriefDescription": "This event counts architecturally executed microarchitectural Advanced SIMD or SVE integer matrix multiply operation." 253 + } 254 + ]
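The FP_*_SCALE_OPS_SPEC / FP_*_FIXED_OPS_SPEC descriptions above imply the usual Arm accounting: scale-ops count SVE work normalized to a 128-bit vector (hence the fixed increments of 8/4/2 for HP/SP/DP), so they are multiplied by VL/128, while fixed-ops count scalar and Advanced SIMD operations directly. A sketch of a single-precision FLOP estimate under that assumption, with invented counts and an assumed 256-bit vector length:

```python
# Sketch: estimating single-precision floating-point operations from the
# FP_SP_SCALE_OPS_SPEC and FP_SP_FIXED_OPS_SPEC events above. The vector
# length and counter values are illustrative assumptions.
VL_BITS = 256  # SVE vector length of the machine (assumption)

counts = {
    "FP_SP_SCALE_OPS_SPEC": 4_000_000,  # SVE SP ops, 128-bit normalized
    "FP_SP_FIXED_OPS_SPEC": 1_000_000,  # scalar + Advanced SIMD SP ops
}

# Scale-ops are normalized to 128 bits, so rescale by the actual VL.
sp_flops = counts["FP_SP_SCALE_OPS_SPEC"] * (VL_BITS / 128) \
    + counts["FP_SP_FIXED_OPS_SPEC"]
print(f"estimated SP floating-point ops: {sp_flops:,.0f}")
```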
+362
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/tlb.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "L1I_TLB_REFILL", 4 + "BriefDescription": "This event counts operations that cause a TLB refill of the L1I TLB. See L1I_TLB_REFILL of ARMv9 Reference Manual for more information." 5 + }, 6 + { 7 + "ArchStdEvent": "L1D_TLB_REFILL", 8 + "BriefDescription": "This event counts operations that cause a TLB refill of the L1D TLB. See L1D_TLB_REFILL of ARMv9 Reference Manual for more information." 9 + }, 10 + { 11 + "ArchStdEvent": "L1D_TLB", 12 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D TLB. See L1D_TLB of ARMv9 Reference Manual for more information." 13 + }, 14 + { 15 + "ArchStdEvent": "L1I_TLB", 16 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I TLB. See L1I_TLB of ARMv9 Reference Manual for more information." 17 + }, 18 + { 19 + "ArchStdEvent": "L2D_TLB_REFILL", 20 + "BriefDescription": "This event counts operations that cause a TLB refill of the L2D TLB. See L2D_TLB_REFILL of ARMv9 Reference Manual for more information." 21 + }, 22 + { 23 + "ArchStdEvent": "L2D_TLB", 24 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D TLB. See L2D_TLB of ARMv9 Reference Manual for more information." 25 + }, 26 + { 27 + "ArchStdEvent": "DTLB_WALK", 28 + "BriefDescription": "This event counts data TLB access with at least one translation table walk." 29 + }, 30 + { 31 + "ArchStdEvent": "ITLB_WALK", 32 + "BriefDescription": "This event counts instruction TLB access with at least one translation table walk." 33 + }, 34 + { 35 + "EventCode": "0x0C00", 36 + "EventName": "L1I_TLB_4K", 37 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 4KB page." 38 + }, 39 + { 40 + "EventCode": "0x0C01", 41 + "EventName": "L1I_TLB_64K", 42 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 64KB page." 
43 + }, 44 + { 45 + "EventCode": "0x0C02", 46 + "EventName": "L1I_TLB_2M", 47 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 2MB page." 48 + }, 49 + { 50 + "EventCode": "0x0C03", 51 + "EventName": "L1I_TLB_32M", 52 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 32MB page." 53 + }, 54 + { 55 + "EventCode": "0x0C04", 56 + "EventName": "L1I_TLB_512M", 57 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 512MB page." 58 + }, 59 + { 60 + "EventCode": "0x0C05", 61 + "EventName": "L1I_TLB_1G", 62 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 1GB page." 63 + }, 64 + { 65 + "EventCode": "0x0C06", 66 + "EventName": "L1I_TLB_16G", 67 + "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 16GB page." 68 + }, 69 + { 70 + "EventCode": "0x0C08", 71 + "EventName": "L1D_TLB_4K", 72 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 4KB page." 73 + }, 74 + { 75 + "EventCode": "0x0C09", 76 + "EventName": "L1D_TLB_64K", 77 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 64KB page." 78 + }, 79 + { 80 + "EventCode": "0x0C0A", 81 + "EventName": "L1D_TLB_2M", 82 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 2MB page." 83 + }, 84 + { 85 + "EventCode": "0x0C0B", 86 + "EventName": "L1D_TLB_32M", 87 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 32MB page." 88 + }, 89 + { 90 + "EventCode": "0x0C0C", 91 + "EventName": "L1D_TLB_512M", 92 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 512MB page." 93 + }, 94 + { 95 + "EventCode": "0x0C0D", 96 + "EventName": "L1D_TLB_1G", 97 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 1GB page." 
98 + }, 99 + { 100 + "EventCode": "0x0C0E", 101 + "EventName": "L1D_TLB_16G", 102 + "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 16GB page." 103 + }, 104 + { 105 + "EventCode": "0x0C10", 106 + "EventName": "L1I_TLB_REFILL_4K", 107 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 4KB page." 108 + }, 109 + { 110 + "EventCode": "0x0C11", 111 + "EventName": "L1I_TLB_REFILL_64K", 112 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 64KB page." 113 + }, 114 + { 115 + "EventCode": "0x0C12", 116 + "EventName": "L1I_TLB_REFILL_2M", 117 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 2MB page." 118 + }, 119 + { 120 + "EventCode": "0x0C13", 121 + "EventName": "L1I_TLB_REFILL_32M", 122 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 32MB page." 123 + }, 124 + { 125 + "EventCode": "0x0C14", 126 + "EventName": "L1I_TLB_REFILL_512M", 127 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 512MB page." 128 + }, 129 + { 130 + "EventCode": "0x0C15", 131 + "EventName": "L1I_TLB_REFILL_1G", 132 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 1GB page." 133 + }, 134 + { 135 + "EventCode": "0x0C16", 136 + "EventName": "L1I_TLB_REFILL_16G", 137 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1I in 16GB page." 138 + }, 139 + { 140 + "EventCode": "0x0C18", 141 + "EventName": "L1D_TLB_REFILL_4K", 142 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 4KB page." 143 + }, 144 + { 145 + "EventCode": "0x0C19", 146 + "EventName": "L1D_TLB_REFILL_64K", 147 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 64KB page." 
148 + }, 149 + { 150 + "EventCode": "0x0C1A", 151 + "EventName": "L1D_TLB_REFILL_2M", 152 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 2MB page." 153 + }, 154 + { 155 + "EventCode": "0x0C1B", 156 + "EventName": "L1D_TLB_REFILL_32M", 157 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 32MB page." 158 + }, 159 + { 160 + "EventCode": "0x0C1C", 161 + "EventName": "L1D_TLB_REFILL_512M", 162 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 512MB page." 163 + }, 164 + { 165 + "EventCode": "0x0C1D", 166 + "EventName": "L1D_TLB_REFILL_1G", 167 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 1GB page." 168 + }, 169 + { 170 + "EventCode": "0x0C1E", 171 + "EventName": "L1D_TLB_REFILL_16G", 172 + "BriefDescription": "This event counts operations that cause a TLB refill to the L1D in 16GB page." 173 + }, 174 + { 175 + "EventCode": "0x0C20", 176 + "EventName": "L2I_TLB_4K", 177 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 4KB page." 178 + }, 179 + { 180 + "EventCode": "0x0C21", 181 + "EventName": "L2I_TLB_64K", 182 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 64KB page." 183 + }, 184 + { 185 + "EventCode": "0x0C22", 186 + "EventName": "L2I_TLB_2M", 187 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 2MB page." 188 + }, 189 + { 190 + "EventCode": "0x0C23", 191 + "EventName": "L2I_TLB_32M", 192 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 32MB page." 193 + }, 194 + { 195 + "EventCode": "0x0C24", 196 + "EventName": "L2I_TLB_512M", 197 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 512MB page." 
198 + }, 199 + { 200 + "EventCode": "0x0C25", 201 + "EventName": "L2I_TLB_1G", 202 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 1GB page." 203 + }, 204 + { 205 + "EventCode": "0x0C26", 206 + "EventName": "L2I_TLB_16G", 207 + "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 16GB page." 208 + }, 209 + { 210 + "EventCode": "0x0C28", 211 + "EventName": "L2D_TLB_4K", 212 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 4KB page." 213 + }, 214 + { 215 + "EventCode": "0x0C29", 216 + "EventName": "L2D_TLB_64K", 217 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 64KB page." 218 + }, 219 + { 220 + "EventCode": "0x0C2A", 221 + "EventName": "L2D_TLB_2M", 222 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 2MB page." 223 + }, 224 + { 225 + "EventCode": "0x0C2B", 226 + "EventName": "L2D_TLB_32M", 227 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 32MB page." 228 + }, 229 + { 230 + "EventCode": "0x0C2C", 231 + "EventName": "L2D_TLB_512M", 232 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 512MB page." 233 + }, 234 + { 235 + "EventCode": "0x0C2D", 236 + "EventName": "L2D_TLB_1G", 237 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 1GB page." 238 + }, 239 + { 240 + "EventCode": "0x0C2E", 241 + "EventName": "L2D_TLB_16G", 242 + "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 16GB page." 243 + }, 244 + { 245 + "EventCode": "0x0C30", 246 + "EventName": "L2I_TLB_REFILL_4K", 247 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 4KB page." 
248 + }, 249 + { 250 + "EventCode": "0x0C31", 251 + "EventName": "L2I_TLB_REFILL_64K", 252 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 64KB page." 253 + }, 254 + { 255 + "EventCode": "0x0C32", 256 + "EventName": "L2I_TLB_REFILL_2M", 257 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 2MB page." 258 + }, 259 + { 260 + "EventCode": "0x0C33", 261 + "EventName": "L2I_TLB_REFILL_32M", 262 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 32MB page." 263 + }, 264 + { 265 + "EventCode": "0x0C34", 266 + "EventName": "L2I_TLB_REFILL_512M", 267 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 512MB page." 268 + }, 269 + { 270 + "EventCode": "0x0C35", 271 + "EventName": "L2I_TLB_REFILL_1G", 272 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 1GB page." 273 + }, 274 + { 275 + "EventCode": "0x0C36", 276 + "EventName": "L2I_TLB_REFILL_16G", 277 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2I in 16GB page." 278 + }, 279 + { 280 + "EventCode": "0x0C38", 281 + "EventName": "L2D_TLB_REFILL_4K", 282 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 4KB page." 283 + }, 284 + { 285 + "EventCode": "0x0C39", 286 + "EventName": "L2D_TLB_REFILL_64K", 287 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 64KB page." 288 + }, 289 + { 290 + "EventCode": "0x0C3A", 291 + "EventName": "L2D_TLB_REFILL_2M", 292 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 2MB page." 293 + }, 294 + { 295 + "EventCode": "0x0C3B", 296 + "EventName": "L2D_TLB_REFILL_32M", 297 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 32MB page." 
298 + }, 299 + { 300 + "EventCode": "0x0C3C", 301 + "EventName": "L2D_TLB_REFILL_512M", 302 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 512MB page." 303 + }, 304 + { 305 + "EventCode": "0x0C3D", 306 + "EventName": "L2D_TLB_REFILL_1G", 307 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 1GB page." 308 + }, 309 + { 310 + "EventCode": "0x0C3E", 311 + "EventName": "L2D_TLB_REFILL_16G", 312 + "BriefDescription": "This event counts operations that cause a TLB refill to the L2D in 16GB page." 313 + }, 314 + { 315 + "ArchStdEvent": "DTLB_WALK_PERCYC", 316 + "BriefDescription": "This event counts the number of DTLB_WALK events in progress on each Processor cycle." 317 + }, 318 + { 319 + "ArchStdEvent": "ITLB_WALK_PERCYC", 320 + "BriefDescription": "This event counts the number of ITLB_WALK events in progress on each Processor cycle." 321 + }, 322 + { 323 + "ArchStdEvent": "DTLB_STEP", 324 + "BriefDescription": "This event counts translation table walk access made by a refill of the data TLB." 325 + }, 326 + { 327 + "ArchStdEvent": "ITLB_STEP", 328 + "BriefDescription": "This event counts translation table walk access made by a refill of the instruction TLB." 329 + }, 330 + { 331 + "ArchStdEvent": "DTLB_WALK_LARGE", 332 + "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a large page size." 333 + }, 334 + { 335 + "ArchStdEvent": "ITLB_WALK_LARGE", 336 + "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a large page size." 337 + }, 338 + { 339 + "ArchStdEvent": "DTLB_WALK_SMALL", 340 + "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a small page size." 
341 + }, 342 + { 343 + "ArchStdEvent": "ITLB_WALK_SMALL", 344 + "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a small page size." 345 + }, 346 + { 347 + "ArchStdEvent": "DTLB_WALK_BLOCK", 348 + "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a Block." 349 + }, 350 + { 351 + "ArchStdEvent": "ITLB_WALK_BLOCK", 352 + "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a Block." 353 + }, 354 + { 355 + "ArchStdEvent": "DTLB_WALK_PAGE", 356 + "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a Page." 357 + }, 358 + { 359 + "ArchStdEvent": "ITLB_WALK_PAGE", 360 + "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a Page." 361 + } 362 + ]
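The MONAKA JSON files above follow the pmu-events convention: an entry either references an architecture-standard event via "ArchStdEvent", or defines a model-specific event with its own "EventCode" and "EventName". A minimal sketch of that shape check (this is an illustration, not jevents.py's actual validation logic):

```python
import json

def check_events(text: str) -> int:
    """Return the number of entries, asserting each is well-formed."""
    events = json.loads(text)
    for ev in events:
        # An entry either references an architecture-standard event by name,
        # or defines a model-specific event with its own code and name.
        arch_std = "ArchStdEvent" in ev
        model_specific = "EventCode" in ev and "EventName" in ev
        assert arch_std or model_specific, f"malformed entry: {ev}"
        assert "BriefDescription" in ev, f"missing description: {ev}"
    return len(events)

sample = """[
  { "ArchStdEvent": "DTLB_WALK",
    "BriefDescription": "This event counts data TLB access with at least one translation table walk." },
  { "EventCode": "0x0C00", "EventName": "L1I_TLB_4K",
    "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 4KB page." }
]"""
print(check_events(sample))  # 2
```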
+18
tools/perf/pmu-events/arch/arm64/fujitsu/monaka/trace.json
··· 1 + [ 2 + { 3 + "ArchStdEvent": "TRB_WRAP", 4 + "BriefDescription": "This event counts the event generated each time the current write pointer is wrapped to the base pointer." 5 + }, 6 + { 7 + "ArchStdEvent": "TRB_TRIG", 8 + "BriefDescription": "This event counts the event generated when a Trace Buffer Extension Trigger Event occurs." 9 + }, 10 + { 11 + "ArchStdEvent": "TRCEXTOUT0", 12 + "BriefDescription": "This event counts the event generated each time an event is signaled by the trace unit external event 0." 13 + }, 14 + { 15 + "ArchStdEvent": "CTI_TRIGOUT4", 16 + "BriefDescription": "This event counts the event generated each time an event is signaled on CTI output trigger 4." 17 + } 18 + ]
+1
tools/perf/pmu-events/arch/arm64/mapfile.csv
··· 39 39 0x00000000420f5160,v1,cavium/thunderx2,core 40 40 0x00000000430f0af0,v1,cavium/thunderx2,core 41 41 0x00000000460f0010,v1,fujitsu/a64fx,core 42 + 0x00000000460f0030,v1,fujitsu/monaka,core 42 43 0x00000000480fd010,v1,hisilicon/hip08,core 43 44 0x00000000500f0000,v1,ampere/emag,core 44 45 0x00000000c00fac30,v1,ampere/ampereone,core
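The new mapfile.csv row ties the MONAKA MIDR value to its event directory. The first column is matched against the CPU identification string read at runtime; a hedged sketch of that lookup, treating the column as a pattern (an exact hex string is a degenerate pattern — the rows here are copied from the hunk above for illustration):

```python
import csv
import io
import re

MAPFILE = """0x00000000460f0010,v1,fujitsu/a64fx,core
0x00000000460f0030,v1,fujitsu/monaka,core
0x00000000480fd010,v1,hisilicon/hip08,core"""

def lookup(cpuid: str):
    # Each row is: cpuid-pattern, version, JSON directory, event type.
    for row in csv.reader(io.StringIO(MAPFILE)):
        pattern, _version, path, _type = row
        if re.fullmatch(pattern, cpuid):
            return path
    return None

print(lookup("0x00000000460f0030"))  # fujitsu/monaka
```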
+5
tools/perf/pmu-events/arch/arm64/recommended.json
··· 318 318 "BriefDescription": "Barrier speculatively executed, DMB" 319 319 }, 320 320 { 321 + "EventCode": "0x7F", 322 + "EventName": "CSDB_SPEC", 323 + "BriefDescription": "Barrier speculatively executed, CSDB." 324 + }, 325 + { 321 326 "PublicDescription": "Exception taken, Other synchronous", 322 327 "EventCode": "0x81", 323 328 "EventName": "EXC_UNDEF",
+13 -3
tools/perf/pmu-events/jevents.py
··· 430 430 def to_c_string(self, metric: bool) -> str: 431 431 """Representation of the event as a C struct initializer.""" 432 432 433 + def fix_comment(s: str) -> str: 434 + return s.replace('*/', r'\*\/') 435 + 433 436 s = self.build_c_string(metric) 434 - return f'{{ { _bcs.offsets[s] } }}, /* {s} */\n' 437 + return f'{{ { _bcs.offsets[s] } }}, /* {fix_comment(s)} */\n' 435 438 436 439 437 440 @lru_cache(maxsize=None) ··· 464 461 """Read in all architecture standard events.""" 465 462 global _arch_std_events 466 463 for item in os.scandir(archpath): 467 - if item.is_file() and item.name.endswith('.json'): 464 + if not item.is_file() or not item.name.endswith('.json'): 465 + continue 466 + try: 468 467 for event in read_json_events(item.path, topic=''): 469 468 if event.name: 470 469 _arch_std_events[event.name.lower()] = event 471 470 if event.metric_name: 472 471 _arch_std_events[event.metric_name.lower()] = event 472 + except Exception as e: 473 + raise RuntimeError(f'Failure processing \'{item.name}\' in \'{archpath}\'') from e 473 474 474 475 475 476 def add_events_table_entries(item: os.DirEntry, topic: str) -> None: ··· 1259 1252 item_path = '/'.join(parents) + ('/' if len(parents) > 0 else '') + item.name 1260 1253 if 'test' not in item_path and 'common' not in item_path and item_path not in _args.model.split(','): 1261 1254 continue 1262 - action(parents, item) 1255 + try: 1256 + action(parents, item) 1257 + except Exception as e: 1258 + raise RuntimeError(f'Action failure for \'{item.name}\' in {parents}') from e 1263 1259 if item.is_dir(): 1264 1260 ftw(item.path, parents + [item.name], action) 1265 1261
+61
tools/perf/scripts/Makefile.syscalls
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # This Makefile generates headers in 3 + # tools/perf/arch/$(SRCARCH)/include/generated/asm from the architecture's 4 + # syscall table. This will either be from the generic syscall table, or from a 5 + # table that is specific to that architecture. 6 + 7 + PHONY := all 8 + all: 9 + 10 + obj := $(OUTPUT)arch/$(SRCARCH)/include/generated/asm 11 + 12 + syscall_abis_32 := common,32 13 + syscall_abis_64 := common,64 14 + syscalltbl := $(srctree)/tools/scripts/syscall.tbl 15 + 16 + # let architectures override $(syscall_abis_%) and $(syscalltbl) 17 + -include $(srctree)/tools/perf/arch/$(SRCARCH)/entry/syscalls/Makefile.syscalls 18 + include $(srctree)/tools/build/Build.include 19 + -include $(srctree)/tools/perf/arch/$(SRCARCH)/entry/syscalls/Kbuild 20 + 21 + systbl := $(srctree)/tools/perf/scripts/syscalltbl.sh 22 + 23 + syscall-y := $(addprefix $(obj)/, $(syscall-y)) 24 + 25 + # Remove stale wrappers when the corresponding files are removed from generic-y 26 + old-headers := $(wildcard $(obj)/*.h) 27 + unwanted := $(filter-out $(syscall-y),$(old-headers)) 28 + 29 + quiet_cmd_remove = REMOVE $(unwanted) 30 + cmd_remove = rm -f $(unwanted) 31 + 32 + quiet_cmd_systbl = SYSTBL $@ 33 + cmd_systbl = $(CONFIG_SHELL) $(systbl) \ 34 + $(if $(systbl-args-$*),$(systbl-args-$*),$(systbl-args)) \ 35 + --abis $(subst $(space),$(comma),$(strip $(syscall_abis_$*))) \ 36 + $< $@ 37 + 38 + all: $(syscall-y) 39 + $(if $(unwanted),$(call cmd,remove)) 40 + @: 41 + 42 + $(obj)/syscalls_%.h: $(syscalltbl) $(systbl) FORCE 43 + $(call if_changed,systbl) 44 + 45 + targets := $(syscall-y) 46 + 47 + # Create output directory. Skip it if at least one old header exists 48 + # since we know the output directory already exists. 
49 + ifeq ($(old-headers),) 50 + $(shell mkdir -p $(obj)) 51 + endif 52 + 53 + PHONY += FORCE 54 + 55 + FORCE: 56 + 57 + existing-targets := $(wildcard $(sort $(targets))) 58 + 59 + -include $(foreach f,$(existing-targets),$(dir $(f)).$(notdir $(f)).cmd) 60 + 61 + .PHONY: $(PHONY)
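The stale-wrapper pruning in Makefile.syscalls is a one-liner in Make (`unwanted := $(filter-out $(syscall-y),$(old-headers))`); the same set difference expressed in Python, with illustrative header paths:

```python
def stale_headers(existing, wanted):
    # Equivalent of: unwanted := $(filter-out $(syscall-y),$(old-headers))
    wanted = set(wanted)
    return [h for h in existing if h not in wanted]

old = ["gen/asm/syscalls_64.h", "gen/asm/syscalls_32.h", "gen/asm/unistd_old.h"]
keep = ["gen/asm/syscalls_64.h", "gen/asm/syscalls_32.h"]
print(stale_headers(old, keep))  # ['gen/asm/unistd_old.h']
```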
+1 -19
tools/perf/scripts/python/Perf-Trace-Util/Context.c
··· 24 24 #include "../../../util/srcline.h" 25 25 #include "../../../util/srccode.h" 26 26 27 - #if PY_MAJOR_VERSION < 3 28 - #define _PyCapsule_GetPointer(arg1, arg2) \ 29 - PyCObject_AsVoidPtr(arg1) 30 - #define _PyBytes_FromStringAndSize(arg1, arg2) \ 31 - PyString_FromStringAndSize((arg1), (arg2)) 32 - #define _PyUnicode_AsUTF8(arg) \ 33 - PyString_AsString(arg) 34 - 35 - PyMODINIT_FUNC initperf_trace_context(void); 36 - #else 37 27 #define _PyCapsule_GetPointer(arg1, arg2) \ 38 28 PyCapsule_GetPointer((arg1), (arg2)) 39 29 #define _PyBytes_FromStringAndSize(arg1, arg2) \ ··· 32 42 PyUnicode_AsUTF8(arg) 33 43 34 44 PyMODINIT_FUNC PyInit_perf_trace_context(void); 35 - #endif 36 45 37 46 static struct scripting_context *get_args(PyObject *args, const char *name, PyObject **arg2) 38 47 { ··· 93 104 if (c->sample->ip && !c->sample->insn_len && thread__maps(c->al->thread)) { 94 105 struct machine *machine = maps__machine(thread__maps(c->al->thread)); 95 106 96 - script_fetch_insn(c->sample, c->al->thread, machine); 107 + script_fetch_insn(c->sample, c->al->thread, machine, /*native_arch=*/true); 97 108 } 98 109 if (!c->sample->insn_len) 99 110 Py_RETURN_NONE; /* N.B. This is a return statement */ ··· 202 213 { NULL, NULL, 0, NULL} 203 214 }; 204 215 205 - #if PY_MAJOR_VERSION < 3 206 - PyMODINIT_FUNC initperf_trace_context(void) 207 - { 208 - (void) Py_InitModule("perf_trace_context", ContextMethods); 209 - } 210 - #else 211 216 PyMODINIT_FUNC PyInit_perf_trace_context(void) 212 217 { 213 218 static struct PyModuleDef moduledef = { ··· 223 240 224 241 return mod; 225 242 } 226 - #endif
+103 -76
tools/perf/scripts/python/mem-phys-addr.py
··· 3 3 # 4 4 # Copyright (c) 2018, Intel Corporation. 5 5 6 - from __future__ import division 7 - from __future__ import print_function 8 - 9 6 import os 10 7 import sys 11 - import struct 12 8 import re 13 9 import bisect 14 10 import collections 11 + from dataclasses import dataclass 12 + from typing import (Dict, Optional) 15 13 16 14 sys.path.append(os.environ['PERF_EXEC_PATH'] + \ 17 - '/scripts/python/Perf-Trace-Util/lib/Perf/Trace') 15 + '/scripts/python/Perf-Trace-Util/lib/Perf/Trace') 18 16 19 - #physical address ranges for System RAM 20 - system_ram = [] 21 - #physical address ranges for Persistent Memory 22 - pmem = [] 23 - #file object for proc iomem 24 - f = None 25 - #Count for each type of memory 26 - load_mem_type_cnt = collections.Counter() 27 - #perf event name 28 - event_name = None 17 + @dataclass(frozen=True) 18 + class IomemEntry: 19 + """Read from a line in /proc/iomem""" 20 + begin: int 21 + end: int 22 + indent: int 23 + label: str 24 + 25 + # Physical memory layout from /proc/iomem. Key is the indent and then 26 + # a list of ranges. 27 + iomem: Dict[int, list[IomemEntry]] = collections.defaultdict(list) 28 + # Child nodes from the iomem parent. 29 + children: Dict[IomemEntry, set[IomemEntry]] = collections.defaultdict(set) 30 + # Maximum indent seen before an entry in the iomem file. 31 + max_indent: int = 0 32 + # Count for each range of memory. 33 + load_mem_type_cnt: Dict[IomemEntry, int] = collections.Counter() 34 + # Perf event name set from the first sample in the data. 
35 + event_name: Optional[str] = None 29 36 30 37 def parse_iomem(): 31 - global f 32 - f = open('/proc/iomem', 'r') 33 - for i, j in enumerate(f): 34 - m = re.split('-|:',j,2) 35 - if m[2].strip() == 'System RAM': 36 - system_ram.append(int(m[0], 16)) 37 - system_ram.append(int(m[1], 16)) 38 - if m[2].strip() == 'Persistent Memory': 39 - pmem.append(int(m[0], 16)) 40 - pmem.append(int(m[1], 16)) 38 + """Populate iomem from /proc/iomem file""" 39 + global iomem 40 + global max_indent 41 + global children 42 + with open('/proc/iomem', 'r', encoding='ascii') as f: 43 + for line in f: 44 + indent = 0 45 + while line[indent] == ' ': 46 + indent += 1 47 + if indent > max_indent: 48 + max_indent = indent 49 + m = re.split('-|:', line, 2) 50 + begin = int(m[0], 16) 51 + end = int(m[1], 16) 52 + label = m[2].strip() 53 + entry = IomemEntry(begin, end, indent, label) 54 + # Before adding entry, search for a parent node using its begin. 55 + if indent > 0: 56 + parent = find_memory_type(begin) 57 + assert parent, f"Given indent expected a parent for {label}" 58 + children[parent].add(entry) 59 + iomem[indent].append(entry) 60 + 61 + def find_memory_type(phys_addr) -> Optional[IomemEntry]: 62 + """Search iomem for the range containing phys_addr with the maximum indent""" 63 + for i in range(max_indent, -1, -1): 64 + if i not in iomem: 65 + continue 66 + position = bisect.bisect_right(iomem[i], phys_addr, 67 + key=lambda entry: entry.begin) 68 + if position is None: 69 + continue 70 + iomem_entry = iomem[i][position-1] 71 + if iomem_entry.begin <= phys_addr <= iomem_entry.end: 72 + return iomem_entry 73 + print(f"Didn't find {phys_addr}") 74 + return None 41 75 42 76 def print_memory_type(): 43 - print("Event: %s" % (event_name)) 44 - print("%-40s %10s %10s\n" % ("Memory type", "count", "percentage"), end='') 45 - print("%-40s %10s %10s\n" % ("----------------------------------------", 46 - "-----------", "-----------"), 47 - end=''); 48 - total = 
sum(load_mem_type_cnt.values()) 49 - for mem_type, count in sorted(load_mem_type_cnt.most_common(), \ 50 - key = lambda kv: (kv[1], kv[0]), reverse = True): 51 - print("%-40s %10d %10.1f%%\n" % 52 - (mem_type, count, 100 * count / total), 53 - end='') 77 + print(f"Event: {event_name}") 78 + print(f"{'Memory type':<40} {'count':>10} {'percentage':>10}") 79 + print(f"{'-' * 40:<40} {'-' * 10:>10} {'-' * 10:>10}") 80 + total = sum(load_mem_type_cnt.values()) 81 + # Add count from children into the parent. 82 + for i in range(max_indent, -1, -1): 83 + if i not in iomem: 84 + continue 85 + for entry in iomem[i]: 86 + global children 87 + for child in children[entry]: 88 + if load_mem_type_cnt[child] > 0: 89 + load_mem_type_cnt[entry] += load_mem_type_cnt[child] 90 + 91 + def print_entries(entries): 92 + """Print counts from parents down to their children""" 93 + global children 94 + for entry in sorted(entries, 95 + key = lambda entry: load_mem_type_cnt[entry], 96 + reverse = True): 97 + count = load_mem_type_cnt[entry] 98 + if count > 0: 99 + mem_type = ' ' * entry.indent + f"{entry.begin:x}-{entry.end:x} : {entry.label}" 100 + percent = 100 * count / total 101 + print(f"{mem_type:<40} {count:>10} {percent:>10.1f}") 102 + print_entries(children[entry]) 103 + 104 + print_entries(iomem[0]) 54 105 55 106 def trace_begin(): 56 - parse_iomem() 107 + parse_iomem() 57 108 58 109 def trace_end(): 59 - print_memory_type() 60 - f.close() 61 - 62 - def is_system_ram(phys_addr): 63 - #/proc/iomem is sorted 64 - position = bisect.bisect(system_ram, phys_addr) 65 - if position % 2 == 0: 66 - return False 67 - return True 68 - 69 - def is_persistent_mem(phys_addr): 70 - position = bisect.bisect(pmem, phys_addr) 71 - if position % 2 == 0: 72 - return False 73 - return True 74 - 75 - def find_memory_type(phys_addr): 76 - if phys_addr == 0: 77 - return "N/A" 78 - if is_system_ram(phys_addr): 79 - return "System RAM" 80 - 81 - if is_persistent_mem(phys_addr): 82 - return "Persistent 
Memory" 83 - 84 - #slow path, search all 85 - f.seek(0, 0) 86 - for j in f: 87 - m = re.split('-|:',j,2) 88 - if int(m[0], 16) <= phys_addr <= int(m[1], 16): 89 - return m[2] 90 - return "N/A" 110 + print_memory_type() 91 111 92 112 def process_event(param_dict): 93 - name = param_dict["ev_name"] 94 - sample = param_dict["sample"] 95 - phys_addr = sample["phys_addr"] 113 + if "sample" not in param_dict: 114 + return 96 115 97 - global event_name 98 - if event_name == None: 99 - event_name = name 100 - load_mem_type_cnt[find_memory_type(phys_addr)] += 1 116 + sample = param_dict["sample"] 117 + if "phys_addr" not in sample: 118 + return 119 + 120 + phys_addr = sample["phys_addr"] 121 + entry = find_memory_type(phys_addr) 122 + if entry: 123 + load_mem_type_cnt[entry] += 1 124 + 125 + global event_name 126 + if event_name is None: 127 + event_name = param_dict["ev_name"]
+86
tools/perf/scripts/syscalltbl.sh
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Generate a syscall table header. 5 + # 6 + # Each line of the syscall table should have the following format: 7 + # 8 + # NR ABI NAME [NATIVE] [COMPAT] 9 + # 10 + # NR syscall number 11 + # ABI ABI name 12 + # NAME syscall name 13 + # NATIVE native entry point (optional) 14 + # COMPAT compat entry point (optional) 15 + 16 + set -e 17 + 18 + usage() { 19 + echo >&2 "usage: $0 [--abis ABIS] INFILE OUTFILE" >&2 20 + echo >&2 21 + echo >&2 " INFILE input syscall table" 22 + echo >&2 " OUTFILE output header file" 23 + echo >&2 24 + echo >&2 "options:" 25 + echo >&2 " --abis ABIS ABI(s) to handle (By default, all lines are handled)" 26 + exit 1 27 + } 28 + 29 + # default unless specified by options 30 + abis= 31 + 32 + while [ $# -gt 0 ] 33 + do 34 + case $1 in 35 + --abis) 36 + abis=$(echo "($2)" | tr ',' '|') 37 + shift 2;; 38 + -*) 39 + echo "$1: unknown option" >&2 40 + usage;; 41 + *) 42 + break;; 43 + esac 44 + done 45 + 46 + if [ $# -ne 2 ]; then 47 + usage 48 + fi 49 + 50 + infile="$1" 51 + outfile="$2" 52 + 53 + nxt=0 54 + 55 + syscall_macro() { 56 + nr="$1" 57 + name="$2" 58 + 59 + echo " [$nr] = \"$name\"," 60 + } 61 + 62 + emit() { 63 + nr="$1" 64 + entry="$2" 65 + 66 + syscall_macro "$nr" "$entry" 67 + } 68 + 69 + echo "static const char *const syscalltbl[] = {" > $outfile 70 + 71 + sorted_table=$(mktemp /tmp/syscalltbl.XXXXXX) 72 + grep -E "^[0-9]+[[:space:]]+$abis" "$infile" | sort -n > $sorted_table 73 + 74 + max_nr=0 75 + # the params are: nr abi name entry compat 76 + # use _ for intentionally unused variables according to SC2034 77 + while read nr _ name _ _; do 78 + emit "$nr" "$name" >> $outfile 79 + max_nr=$nr 80 + done < $sorted_table 81 + 82 + rm -f $sorted_table 83 + 84 + echo "};" >> $outfile 85 + 86 + echo "#define SYSCALLTBL_MAX_ID ${max_nr}" >> $outfile
+3 -3
tools/perf/tests/Build
··· 5 5 perf-test-y += parse-events.o 6 6 perf-test-y += dso-data.o 7 7 perf-test-y += vmlinux-kallsyms.o 8 - perf-test-$(CONFIG_LIBTRACEEVENT) += openat-syscall.o 9 - perf-test-$(CONFIG_LIBTRACEEVENT) += openat-syscall-all-cpus.o 8 + perf-test-y += openat-syscall.o 9 + perf-test-y += openat-syscall-all-cpus.o 10 10 perf-test-$(CONFIG_LIBTRACEEVENT) += openat-syscall-tp-fields.o 11 - perf-test-$(CONFIG_LIBTRACEEVENT) += mmap-basic.o 11 + perf-test-y += mmap-basic.o 12 12 perf-test-y += perf-record.o 13 13 perf-test-y += evsel-roundtrip-name.o 14 14 perf-test-$(CONFIG_LIBTRACEEVENT) += evsel-tp-sched.o
+106 -117
tools/perf/tests/builtin-test.c
··· 42 42 static bool dont_fork; 43 43 /* Fork the tests in parallel and wait for their completion. */ 44 44 static bool sequential; 45 + /* Number of times each test is run. */ 46 + static unsigned int runs_per_test = 1; 45 47 const char *dso_to_test; 46 48 const char *test_objdump_path = "objdump"; 47 49 ··· 62 60 63 61 static struct test_suite *generic_tests[] = { 64 62 &suite__vmlinux_matches_kallsyms, 65 - #ifdef HAVE_LIBTRACEEVENT 66 63 &suite__openat_syscall_event, 67 64 &suite__openat_syscall_event_on_all_cpus, 68 65 &suite__basic_mmap, 69 - #endif 70 66 &suite__mem, 71 67 &suite__parse_events, 72 68 &suite__expr, ··· 151 151 #define workloads__for_each(workload) \ 152 152 for (unsigned i = 0; i < ARRAY_SIZE(workloads) && ({ workload = workloads[i]; 1; }); i++) 153 153 154 - static int num_subtests(const struct test_suite *t) 154 + #define test_suite__for_each_test_case(suite, idx) \ 155 + for (idx = 0; (suite)->test_cases && (suite)->test_cases[idx].name != NULL; idx++) 156 + 157 + static int test_suite__num_test_cases(const struct test_suite *t) 155 158 { 156 159 int num; 157 160 158 - if (!t->test_cases) 159 - return 0; 160 - 161 - num = 0; 162 - while (t->test_cases[num].name) 163 - num++; 161 + test_suite__for_each_test_case(t, num); 164 162 165 163 return num; 166 164 } 167 165 168 - static bool has_subtests(const struct test_suite *t) 169 - { 170 - return num_subtests(t) > 1; 171 - } 172 - 173 - static const char *skip_reason(const struct test_suite *t, int subtest) 166 + static const char *skip_reason(const struct test_suite *t, int test_case) 174 167 { 175 168 if (!t->test_cases) 176 169 return NULL; 177 170 178 - return t->test_cases[subtest >= 0 ? subtest : 0].skip_reason; 171 + return t->test_cases[test_case >= 0 ? 
test_case : 0].skip_reason; 179 172 } 180 173 181 - static const char *test_description(const struct test_suite *t, int subtest) 174 + static const char *test_description(const struct test_suite *t, int test_case) 182 175 { 183 - if (t->test_cases && subtest >= 0) 184 - return t->test_cases[subtest].desc; 176 + if (t->test_cases && test_case >= 0) 177 + return t->test_cases[test_case].desc; 185 178 186 179 return t->desc; 187 180 } 188 181 189 - static test_fnptr test_function(const struct test_suite *t, int subtest) 182 + static test_fnptr test_function(const struct test_suite *t, int test_case) 190 183 { 191 - if (subtest <= 0) 184 + if (test_case <= 0) 192 185 return t->test_cases[0].run_case; 193 186 194 - return t->test_cases[subtest].run_case; 187 + return t->test_cases[test_case].run_case; 195 188 } 196 189 197 - static bool test_exclusive(const struct test_suite *t, int subtest) 190 + static bool test_exclusive(const struct test_suite *t, int test_case) 198 191 { 199 - if (subtest <= 0) 192 + if (test_case <= 0) 200 193 return t->test_cases[0].exclusive; 201 194 202 - return t->test_cases[subtest].exclusive; 195 + return t->test_cases[test_case].exclusive; 203 196 } 204 197 205 - static bool perf_test__matches(const char *desc, int curr, int argc, const char *argv[]) 198 + static bool perf_test__matches(const char *desc, int suite_num, int argc, const char *argv[]) 206 199 { 207 200 int i; 208 201 ··· 207 214 long nr = strtoul(argv[i], &end, 10); 208 215 209 216 if (*end == '\0') { 210 - if (nr == curr + 1) 217 + if (nr == suite_num + 1) 211 218 return true; 212 219 continue; 213 220 } ··· 222 229 struct child_test { 223 230 struct child_process process; 224 231 struct test_suite *test; 225 - int test_num; 226 - int subtest; 232 + int suite_num; 233 + int test_case_num; 227 234 }; 228 235 229 236 static jmp_buf run_test_jmp_buf; ··· 253 260 254 261 pr_debug("--- start ---\n"); 255 262 pr_debug("test child forked, pid %d\n", getpid()); 256 - err = 
test_function(child->test, child->subtest)(child->test, child->subtest); 263 + err = test_function(child->test, child->test_case_num)(child->test, child->test_case_num); 257 264 pr_debug("---- end(%d) ----\n", err); 258 265 259 266 err_out: ··· 265 272 266 273 #define TEST_RUNNING -3 267 274 268 - static int print_test_result(struct test_suite *t, int i, int subtest, int result, int width, 269 - int running) 275 + static int print_test_result(struct test_suite *t, int curr_suite, int curr_test_case, 276 + int result, int width, int running) 270 277 { 271 - if (has_subtests(t)) { 278 + if (test_suite__num_test_cases(t) > 1) { 272 279 int subw = width > 2 ? width - 2 : width; 273 280 274 - pr_info("%3d.%1d: %-*s:", i + 1, subtest + 1, subw, test_description(t, subtest)); 281 + pr_info("%3d.%1d: %-*s:", curr_suite + 1, curr_test_case + 1, subw, 282 + test_description(t, curr_test_case)); 275 283 } else 276 - pr_info("%3d: %-*s:", i + 1, width, test_description(t, subtest)); 284 + pr_info("%3d: %-*s:", curr_suite + 1, width, test_description(t, curr_test_case)); 277 285 278 286 switch (result) { 279 287 case TEST_RUNNING: ··· 284 290 pr_info(" Ok\n"); 285 291 break; 286 292 case TEST_SKIP: { 287 - const char *reason = skip_reason(t, subtest); 293 + const char *reason = skip_reason(t, curr_test_case); 288 294 289 295 if (reason) 290 296 color_fprintf(stderr, PERF_COLOR_YELLOW, " Skip (%s)\n", reason); ··· 306 312 { 307 313 struct child_test *child_test = child_tests[running_test]; 308 314 struct test_suite *t; 309 - int i, subi, err; 315 + int curr_suite, curr_test_case, err; 310 316 bool err_done = false; 311 317 struct strbuf err_output = STRBUF_INIT; 312 318 int last_running = -1; ··· 317 323 return; 318 324 } 319 325 t = child_test->test; 320 - i = child_test->test_num; 321 - subi = child_test->subtest; 326 + curr_suite = child_test->suite_num; 327 + curr_test_case = child_test->test_case_num; 322 328 err = child_test->process.err; 323 329 /* 324 330 * For test 
suites with subtests, display the suite name ahead of the 325 331 * sub test names. 326 332 */ 327 - if (has_subtests(t) && subi == 0) 328 - pr_info("%3d: %-*s:\n", i + 1, width, test_description(t, -1)); 333 + if (test_suite__num_test_cases(t) > 1 && curr_test_case == 0) 334 + pr_info("%3d: %-*s:\n", curr_suite + 1, width, test_description(t, -1)); 329 335 330 336 /* 331 337 * Busy loop reading from the child's stdout/stderr that are set to be ··· 334 340 if (err > 0) 335 341 fcntl(err, F_SETFL, O_NONBLOCK); 336 342 if (verbose > 1) { 337 - if (has_subtests(t)) 338 - pr_info("%3d.%1d: %s:\n", i + 1, subi + 1, test_description(t, subi)); 343 + if (test_suite__num_test_cases(t) > 1) 344 + pr_info("%3d.%1d: %s:\n", curr_suite + 1, curr_test_case + 1, 345 + test_description(t, curr_test_case)); 339 346 else 340 - pr_info("%3d: %s:\n", i + 1, test_description(t, -1)); 347 + pr_info("%3d: %s:\n", curr_suite + 1, test_description(t, -1)); 341 348 } 342 349 while (!err_done) { 343 350 struct pollfd pfds[1] = { ··· 363 368 */ 364 369 fprintf(debug_file(), PERF_COLOR_DELETE_LINE); 365 370 } 366 - print_test_result(t, i, subi, TEST_RUNNING, width, running); 371 + print_test_result(t, curr_suite, curr_test_case, TEST_RUNNING, 372 + width, running); 367 373 last_running = running; 368 374 } 369 375 } ··· 402 406 fprintf(stderr, "%s", err_output.buf); 403 407 404 408 strbuf_release(&err_output); 405 - print_test_result(t, i, subi, ret, width, /*running=*/0); 409 + print_test_result(t, curr_suite, curr_test_case, ret, width, /*running=*/0); 406 410 if (err > 0) 407 411 close(err); 408 412 zfree(&child_tests[running_test]); 409 413 } 410 414 411 - static int start_test(struct test_suite *test, int i, int subi, struct child_test **child, 412 - int width, int pass) 415 + static int start_test(struct test_suite *test, int curr_suite, int curr_test_case, 416 + struct child_test **child, int width, int pass) 413 417 { 414 418 int err; 415 419 ··· 417 421 if (dont_fork) { 418 422 if 
(pass == 1) { 419 423 pr_debug("--- start ---\n"); 420 - err = test_function(test, subi)(test, subi); 424 + err = test_function(test, curr_test_case)(test, curr_test_case); 421 425 pr_debug("---- end ----\n"); 422 - print_test_result(test, i, subi, err, width, /*running=*/0); 426 + print_test_result(test, curr_suite, curr_test_case, err, width, 427 + /*running=*/0); 423 428 } 424 429 return 0; 425 430 } 426 - if (pass == 1 && !sequential && test_exclusive(test, subi)) { 431 + if (pass == 1 && !sequential && test_exclusive(test, curr_test_case)) { 427 432 /* When parallel, skip exclusive tests on the first pass. */ 428 433 return 0; 429 434 } 430 - if (pass != 1 && (sequential || !test_exclusive(test, subi))) { 435 + if (pass != 1 && (sequential || !test_exclusive(test, curr_test_case))) { 431 436 /* Sequential and non-exclusive tests were run on the first pass. */ 432 437 return 0; 433 438 } ··· 437 440 return -ENOMEM; 438 441 439 442 (*child)->test = test; 440 - (*child)->test_num = i; 441 - (*child)->subtest = subi; 443 + (*child)->suite_num = curr_suite; 444 + (*child)->test_case_num = curr_test_case; 442 445 (*child)->process.pid = -1; 443 446 (*child)->process.no_stdin = 1; 444 447 if (verbose <= 0) { ··· 478 481 int err = 0; 479 482 480 483 for (struct test_suite **t = suites; *t; t++) { 481 - int len = strlen(test_description(*t, -1)); 484 + int i, len = strlen(test_description(*t, -1)); 482 485 483 486 if (width < len) 484 487 width = len; 485 488 486 - if (has_subtests(*t)) { 487 - for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) { 488 - len = strlen(test_description(*t, subi)); 489 - if (width < len) 490 - width = len; 491 - num_tests++; 492 - } 493 - } else { 494 - num_tests++; 489 + test_suite__for_each_test_case(*t, i) { 490 + len = strlen(test_description(*t, i)); 491 + if (width < len) 492 + width = len; 493 + num_tests += runs_per_test; 495 494 } 496 495 } 497 496 child_tests = calloc(num_tests, sizeof(*child_tests)); ··· 505 512 
continue; 506 513 507 514 pr_debug3("Killing %d pid %d\n", 508 - child_test->test_num + 1, 515 + child_test->suite_num + 1, 509 516 child_test->process.pid); 510 517 kill(child_test->process.pid, err); 511 518 } ··· 521 528 */ 522 529 for (int pass = 1; pass <= 2; pass++) { 523 530 int child_test_num = 0; 524 - int i = 0; 531 + int curr_suite = 0; 525 532 526 - for (struct test_suite **t = suites; *t; t++) { 527 - int curr = i++; 533 + for (struct test_suite **t = suites; *t; t++, curr_suite++) { 534 + int curr_test_case; 528 535 529 - if (!perf_test__matches(test_description(*t, -1), curr, argc, argv)) { 536 + if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) { 530 537 /* 531 538 * Test suite shouldn't be run based on 532 - * description. See if subtest should. 539 + * description. See if any test case should. 533 540 */ 534 541 bool skip = true; 535 542 536 - for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) { 537 - if (perf_test__matches(test_description(*t, subi), 538 - curr, argc, argv)) 543 + test_suite__for_each_test_case(*t, curr_test_case) { 544 + if (perf_test__matches(test_description(*t, curr_test_case), 545 + curr_suite, argc, argv)) { 539 546 skip = false; 547 + break; 548 + } 540 549 } 541 - 542 550 if (skip) 543 551 continue; 544 552 } 545 553 546 - if (intlist__find(skiplist, i)) { 547 - pr_info("%3d: %-*s:", curr + 1, width, test_description(*t, -1)); 554 + if (intlist__find(skiplist, curr_suite + 1)) { 555 + pr_info("%3d: %-*s:", curr_suite + 1, width, 556 + test_description(*t, -1)); 548 557 color_fprintf(stderr, PERF_COLOR_YELLOW, " Skip (user override)\n"); 549 558 continue; 550 559 } 551 560 552 - if (!has_subtests(*t)) { 553 - err = start_test(*t, curr, -1, &child_tests[child_test_num++], 554 - width, pass); 555 - if (err) 556 - goto err_out; 557 - continue; 558 - } 559 - for (int subi = 0, subn = num_subtests(*t); subi < subn; subi++) { 560 - if (!perf_test__matches(test_description(*t, subi), 561 - 
curr, argc, argv)) 562 - continue; 561 + for (unsigned int run = 0; run < runs_per_test; run++) { 562 + test_suite__for_each_test_case(*t, curr_test_case) { 563 + if (!perf_test__matches(test_description(*t, curr_test_case), 564 + curr_suite, argc, argv)) 565 + continue; 563 566 564 - err = start_test(*t, curr, subi, &child_tests[child_test_num++], 565 - width, pass); 566 - if (err) 567 - goto err_out; 567 + err = start_test(*t, curr_suite, curr_test_case, 568 + &child_tests[child_test_num++], 569 + width, pass); 570 + if (err) 571 + goto err_out; 572 + } 568 573 } 569 574 } 570 575 if (!sequential) { ··· 583 592 return err; 584 593 } 585 594 586 - static int perf_test__list(struct test_suite **suites, int argc, const char **argv) 595 + static int perf_test__list(FILE *fp, struct test_suite **suites, int argc, const char **argv) 587 596 { 588 - int i = 0; 597 + int curr_suite = 0; 589 598 590 - for (struct test_suite **t = suites; *t; t++) { 591 - int curr = i++; 599 + for (struct test_suite **t = suites; *t; t++, curr_suite++) { 600 + int curr_test_case; 592 601 593 - if (!perf_test__matches(test_description(*t, -1), curr, argc, argv)) 602 + if (!perf_test__matches(test_description(*t, -1), curr_suite, argc, argv)) 594 603 continue; 595 604 596 - pr_info("%3d: %s\n", i, test_description(*t, -1)); 605 + fprintf(fp, "%3d: %s\n", curr_suite + 1, test_description(*t, -1)); 597 606 598 - if (has_subtests(*t)) { 599 - int subn = num_subtests(*t); 600 - int subi; 607 + if (test_suite__num_test_cases(*t) <= 1) 608 + continue; 601 609 602 - for (subi = 0; subi < subn; subi++) 603 - pr_info("%3d:%1d: %s\n", i, subi + 1, 604 - test_description(*t, subi)); 610 + test_suite__for_each_test_case(*t, curr_test_case) { 611 + fprintf(fp, "%3d.%1d: %s\n", curr_suite + 1, curr_test_case + 1, 612 + test_description(*t, curr_test_case)); 605 613 } 606 614 } 607 615 return 0; ··· 657 667 if (suites[2] == NULL) 658 668 suites[2] = create_script_test_suites(); 659 669 660 - #define 
for_each_test(t) \ 670 + #define for_each_suite(suite) \ 661 671 for (size_t i = 0, j = 0; i < ARRAY_SIZE(suites); i++, j = 0) \ 662 - while ((t = suites[i][j++]) != NULL) 672 + while ((suite = suites[i][j++]) != NULL) 663 673 664 - for_each_test(t) 674 + for_each_suite(t) 665 675 num_suites++; 666 676 667 677 result = calloc(num_suites + 1, sizeof(struct test_suite *)); 668 678 669 679 for (int pass = 1; pass <= 2; pass++) { 670 - for_each_test(t) { 680 + for_each_suite(t) { 671 681 bool exclusive = false; 682 + int curr_test_case; 672 683 673 - if (!has_subtests(t)) { 674 - exclusive = test_exclusive(t, -1); 675 - } else { 676 - for (int subi = 0, subn = num_subtests(t); subi < subn; subi++) { 677 - if (test_exclusive(t, subi)) { 678 - exclusive = true; 679 - break; 680 - } 684 + test_suite__for_each_test_case(t, curr_test_case) { 685 + if (test_exclusive(t, curr_test_case)) { 686 + exclusive = true; 687 + break; 681 688 } 682 689 } 683 690 if ((!exclusive && pass == 1) || (exclusive && pass == 2)) ··· 682 695 } 683 696 } 684 697 return result; 685 - #undef for_each_test 698 + #undef for_each_suite 686 699 } 687 700 688 701 int cmd_test(int argc, const char **argv) ··· 702 715 "Do not fork for testcase"), 703 716 OPT_BOOLEAN('S', "sequential", &sequential, 704 717 "Run the tests one after another rather than in parallel"), 718 + OPT_UINTEGER('r', "runs-per-test", &runs_per_test, 719 + "Run each test the given number of times, default 1"), 705 720 OPT_STRING('w', "workload", &workload, "work", "workload to run for testing, use '--list-workloads' to list the available ones."), 706 721 OPT_BOOLEAN(0, "list-workloads", &list_workloads, "List the available builtin workloads to use with -w/--workload"), 707 722 OPT_STRING(0, "dso", &dso_to_test, "dso", "dso to test"), ··· 727 738 argc = parse_options_subcommand(argc, argv, test_options, test_subcommands, test_usage, 0); 728 739 if (argc >= 1 && !strcmp(argv[0], "list")) { 729 740 suites = build_suites(); 730 - ret = 
perf_test__list(suites, argc - 1, argv + 1); 741 + ret = perf_test__list(stdout, suites, argc - 1, argv + 1); 731 742 free(suites); 732 743 return ret; 733 744 }
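The builtin-test.c changes above schedule suites in two passes: in parallel mode, pass 1 starts the non-exclusive tests concurrently and pass 2 runs the exclusive ones alone; with --sequential everything runs in pass 1. A minimal Python sketch of that partitioning logic (function name illustrative, not from perf):

```python
def passes_for(tests, sequential: bool):
    """Mirror perf test's two-pass scheduling: sequential mode runs every
    test in pass 1; parallel mode defers exclusive tests to pass 2 so
    they never overlap with other running tests."""
    pass1, pass2 = [], []
    for name, exclusive in tests:
        if sequential or not exclusive:
            pass1.append(name)
        else:
            pass2.append(name)
    return pass1, pass2
```

With `-r/--runs-per-test`, each entry would simply appear `runs_per_test` times in its pass.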
+91 -1
tools/perf/tests/code-reading.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <errno.h> 3 + #include <linux/kconfig.h> 3 4 #include <linux/kernel.h> 4 5 #include <linux/types.h> 5 6 #include <inttypes.h> ··· 9 8 #include <stdio.h> 10 9 #include <string.h> 11 10 #include <sys/param.h> 11 + #include <sys/utsname.h> 12 12 #include <perf/cpumap.h> 13 13 #include <perf/evlist.h> 14 14 #include <perf/mmap.h> ··· 178 176 return err; 179 177 } 180 178 179 + /* 180 + * Only gets GNU objdump version. Returns 0 for llvm-objdump. 181 + */ 182 + static int objdump_version(void) 183 + { 184 + size_t line_len; 185 + char cmd[PATH_MAX * 2]; 186 + char *line = NULL; 187 + const char *fmt; 188 + FILE *f; 189 + int ret; 190 + 191 + int version_tmp, version_num = 0; 192 + char *version = 0, *token; 193 + 194 + fmt = "%s --version"; 195 + ret = snprintf(cmd, sizeof(cmd), fmt, test_objdump_path); 196 + if (ret <= 0 || (size_t)ret >= sizeof(cmd)) 197 + return -1; 198 + /* Ignore objdump errors */ 199 + strcat(cmd, " 2>/dev/null"); 200 + f = popen(cmd, "r"); 201 + if (!f) { 202 + pr_debug("popen failed\n"); 203 + return -1; 204 + } 205 + /* Get first line of objdump --version output */ 206 + ret = getline(&line, &line_len, f); 207 + pclose(f); 208 + if (ret < 0) { 209 + pr_debug("getline failed\n"); 210 + return -1; 211 + } 212 + 213 + token = strsep(&line, " "); 214 + if (token != NULL && !strcmp(token, "GNU")) { 215 + // version is last part of first line of objdump --version output. 
216 + while ((token = strsep(&line, " "))) 217 + version = token; 218 + 219 + // Convert version into a format we can compare with 220 + token = strsep(&version, "."); 221 + version_num = atoi(token); 222 + if (version_num) 223 + version_num *= 10000; 224 + 225 + token = strsep(&version, "."); 226 + version_tmp = atoi(token); 227 + if (token) 228 + version_num += version_tmp * 100; 229 + 230 + token = strsep(&version, "."); 231 + version_tmp = atoi(token); 232 + if (token) 233 + version_num += version_tmp; 234 + } 235 + 236 + return version_num; 237 + } 238 + 181 239 static int read_via_objdump(const char *filename, u64 addr, void *buf, 182 240 size_t len) 183 241 { 242 + u64 stop_address = addr + len; 243 + struct utsname uname_buf; 184 244 char cmd[PATH_MAX * 2]; 185 245 const char *fmt; 186 246 FILE *f; 187 247 int ret; 188 248 249 + ret = uname(&uname_buf); 250 + if (ret) { 251 + pr_debug("uname failed\n"); 252 + return -1; 253 + } 254 + 255 + if (!strncmp(uname_buf.machine, "riscv", 5)) { 256 + int version = objdump_version(); 257 + 258 + /* Default to this workaround if version parsing fails */ 259 + if (version < 0 || version > 24100) { 260 + /* 261 + * Starting at riscv objdump version 2.41, dumping in 262 + * the middle of an instruction is not supported. riscv 263 + * instructions are aligned along 2-byte intervals and 264 + * can be either 2-bytes or 4-bytes. This makes it 265 + * possible that the stop-address lands in the middle of 266 + * a 4-byte instruction. Increase the stop_address by 267 + * two to ensure an instruction is not cut in half, but 268 + * leave the len as-is so only the expected number of 269 + * bytes are collected. 
270 + */ 271 + stop_address += 2; 272 + } 273 + } 274 + 189 275 fmt = "%s -z -d --start-address=0x%"PRIx64" --stop-address=0x%"PRIx64" %s"; 190 - ret = snprintf(cmd, sizeof(cmd), fmt, test_objdump_path, addr, addr + len, 276 + ret = snprintf(cmd, sizeof(cmd), fmt, test_objdump_path, addr, stop_address, 191 277 filename); 192 278 if (ret <= 0 || (size_t)ret >= sizeof(cmd)) 193 279 return -1;
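The `objdump_version()` helper above encodes a GNU objdump version string into one comparable integer (major * 10000 + minor * 100 + patch), returning 0 for llvm-objdump, so the riscv workaround can trigger on versions newer than 2.41. A Python sketch of the same encoding (function name is illustrative, not from perf):

```python
def encode_objdump_version(first_line: str) -> int:
    """Encode the first line of `objdump --version` as major*10000 +
    minor*100 + patch; return 0 for non-GNU tools (e.g. llvm-objdump)."""
    tokens = first_line.split()
    if not tokens or tokens[0] != "GNU":
        return 0  # llvm-objdump does not report a GNU version
    parts = tokens[-1].split(".")  # version is the last token, e.g. "2.41"
    num = 0
    if len(parts) > 0 and parts[0].isdigit():
        num += int(parts[0]) * 10000
    if len(parts) > 1 and parts[1].isdigit():
        num += int(parts[1]) * 100
    if len(parts) > 2 and parts[2].isdigit():
        num += int(parts[2])
    return num
```

So "2.41" encodes to 24100, and the workaround path (`version < 0 || version > 24100`) also covers parse failures.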
+47 -15
tools/perf/tests/cpumap.c
··· 156 156 return 0; 157 157 } 158 158 159 - static int test__cpu_map_merge(struct test_suite *test __maybe_unused, int subtest __maybe_unused) 159 + static int __test__cpu_map_merge(const char *lhs, const char *rhs, int nr, const char *expected) 160 160 { 161 - struct perf_cpu_map *a = perf_cpu_map__new("4,2,1"); 162 - struct perf_cpu_map *b = perf_cpu_map__new("4,5,7"); 163 - struct perf_cpu_map *c = perf_cpu_map__merge(a, b); 161 + struct perf_cpu_map *a = perf_cpu_map__new(lhs); 162 + struct perf_cpu_map *b = perf_cpu_map__new(rhs); 164 163 char buf[100]; 165 164 166 - TEST_ASSERT_VAL("failed to merge map: bad nr", perf_cpu_map__nr(c) == 5); 167 - cpu_map__snprint(c, buf, sizeof(buf)); 168 - TEST_ASSERT_VAL("failed to merge map: bad result", !strcmp(buf, "1-2,4-5,7")); 165 + perf_cpu_map__merge(&a, b); 166 + TEST_ASSERT_VAL("failed to merge map: bad nr", perf_cpu_map__nr(a) == nr); 167 + cpu_map__snprint(a, buf, sizeof(buf)); 168 + TEST_ASSERT_VAL("failed to merge map: bad result", !strcmp(buf, expected)); 169 169 perf_cpu_map__put(b); 170 - perf_cpu_map__put(c); 170 + 171 + /* 172 + * If 'b' is a superset of 'a', 'a' points to the same map with the 173 + * map 'b'. In this case, the owner 'b' has released the resource above 174 + * but 'a' still keeps the ownership, the reference counter should be 1. 
175 + */ 176 + TEST_ASSERT_VAL("unexpected refcnt: bad result", 177 + refcount_read(perf_cpu_map__refcnt(a)) == 1); 178 + 179 + perf_cpu_map__put(a); 171 180 return 0; 181 + } 182 + 183 + static int test__cpu_map_merge(struct test_suite *test __maybe_unused, 184 + int subtest __maybe_unused) 185 + { 186 + int ret; 187 + 188 + ret = __test__cpu_map_merge("4,2,1", "4,5,7", 5, "1-2,4-5,7"); 189 + if (ret) 190 + return ret; 191 + ret = __test__cpu_map_merge("1-8", "6-9", 9, "1-9"); 192 + if (ret) 193 + return ret; 194 + ret = __test__cpu_map_merge("1-8,12-20", "6-9,15", 18, "1-9,12-20"); 195 + if (ret) 196 + return ret; 197 + ret = __test__cpu_map_merge("4,2,1", "1", 3, "1-2,4"); 198 + if (ret) 199 + return ret; 200 + ret = __test__cpu_map_merge("1", "4,2,1", 3, "1-2,4"); 201 + if (ret) 202 + return ret; 203 + ret = __test__cpu_map_merge("1", "1", 1, "1"); 204 + return ret; 172 205 } 173 206 174 207 static int __test__cpu_map_intersect(const char *lhs, const char *rhs, int nr, const char *expected) ··· 252 219 struct perf_cpu_map *empty = perf_cpu_map__intersect(one, two); 253 220 struct perf_cpu_map *pair = perf_cpu_map__new("1-2"); 254 221 struct perf_cpu_map *tmp; 255 - struct perf_cpu_map *maps[] = {empty, any, one, two, pair}; 222 + struct perf_cpu_map **maps[] = {&empty, &any, &one, &two, &pair}; 256 223 257 224 for (size_t i = 0; i < ARRAY_SIZE(maps); i++) { 258 225 /* Maps equal themself. */ 259 - TEST_ASSERT_VAL("equal", perf_cpu_map__equal(maps[i], maps[i])); 226 + TEST_ASSERT_VAL("equal", perf_cpu_map__equal(*maps[i], *maps[i])); 260 227 for (size_t j = 0; j < ARRAY_SIZE(maps); j++) { 261 228 /* Maps dont't equal each other. */ 262 229 if (i == j) 263 230 continue; 264 - TEST_ASSERT_VAL("not equal", !perf_cpu_map__equal(maps[i], maps[j])); 231 + TEST_ASSERT_VAL("not equal", !perf_cpu_map__equal(*maps[i], *maps[j])); 265 232 } 266 233 } 267 234 268 235 /* Maps equal made maps. 
*/ 269 - tmp = perf_cpu_map__merge(perf_cpu_map__get(one), two); 270 - TEST_ASSERT_VAL("pair", perf_cpu_map__equal(pair, tmp)); 271 - perf_cpu_map__put(tmp); 236 + perf_cpu_map__merge(&two, one); 237 + TEST_ASSERT_VAL("pair", perf_cpu_map__equal(pair, two)); 272 238 273 239 tmp = perf_cpu_map__intersect(pair, one); 274 240 TEST_ASSERT_VAL("one", perf_cpu_map__equal(one, tmp)); 275 241 perf_cpu_map__put(tmp); 276 242 277 243 for (size_t i = 0; i < ARRAY_SIZE(maps); i++) 278 - perf_cpu_map__put(maps[i]); 244 + perf_cpu_map__put(*maps[i]); 279 245 280 246 return TEST_OK; 281 247 }
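The new `__test__cpu_map_merge` cases above check that merging two CPU lists yields a sorted, deduplicated, range-compressed string (e.g. "4,2,1" + "4,5,7" → "1-2,4-5,7"). A hedged Python sketch of that merge-and-compress behavior, not the perf C implementation:

```python
def merge_and_snprint(lhs: str, rhs: str) -> str:
    """Merge two comma-separated CPU lists (ranges like '1-8' allowed)
    into a sorted, deduplicated, range-compressed string."""
    def parse(s):
        cpus = set()
        for part in s.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
        return cpus

    cpus = sorted(parse(lhs) | parse(rhs))
    out, i = [], 0
    while i < len(cpus):
        j = i
        # extend j to the end of the current consecutive run
        while j + 1 < len(cpus) and cpus[j + 1] == cpus[j] + 1:
            j += 1
        out.append(str(cpus[i]) if i == j else f"{cpus[i]}-{cpus[j]}")
        i = j + 1
    return ",".join(out)
```

The diff's expected strings and nr values follow directly from this set-union-then-compress view.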
+26 -5
tools/perf/tests/event_groups.c
··· 10 10 #include "header.h" 11 11 #include "../perf-sys.h" 12 12 13 - /* hw: cycles, sw: context-switch, uncore: [arch dependent] */ 13 + /* hw: cycles,instructions sw: context-switch, uncore: [arch dependent] */ 14 14 static int types[] = {0, 1, -1}; 15 15 static unsigned long configs[] = {0, 3, 0}; 16 + static unsigned long configs_hw[] = {1}; 16 17 17 18 #define NR_UNCORE_PMUS 5 18 19 ··· 94 93 return erroneous ? 0 : -1; 95 94 } 96 95 97 - sibling_fd2 = event_open(types[k], configs[k], group_fd); 96 + /* 97 + * if all three events (leader and two sibling events) 98 + * are hardware events, use instructions as one of the 99 + * sibling event. There is event constraint in powerpc that 100 + * events using same counter cannot be programmed in a group. 101 + * Since PERF_COUNT_HW_INSTRUCTIONS is a generic hardware 102 + * event and present in all platforms, lets use that. 103 + */ 104 + if (!i && !j && !k) 105 + sibling_fd2 = event_open(types[k], configs_hw[k], group_fd); 106 + else 107 + sibling_fd2 = event_open(types[k], configs[k], group_fd); 98 108 if (sibling_fd2 == -1) { 99 109 close(sibling_fd1); 100 110 close(group_fd); ··· 136 124 if (r) 137 125 ret = TEST_FAIL; 138 126 139 - pr_debug("0x%x 0x%lx, 0x%x 0x%lx, 0x%x 0x%lx: %s\n", 140 - types[i], configs[i], types[j], configs[j], 141 - types[k], configs[k], r ? "Fail" : "Pass"); 127 + /* 128 + * For all three events as HW events, second sibling 129 + * event is picked from configs_hw. So print accordingly 130 + */ 131 + if (!i && !j && !k) 132 + pr_debug("0x%x 0x%lx, 0x%x 0x%lx, 0x%x 0x%lx: %s\n", 133 + types[i], configs[i], types[j], configs[j], 134 + types[k], configs_hw[k], r ? "Fail" : "Pass"); 135 + else 136 + pr_debug("0x%x 0x%lx, 0x%x 0x%lx, 0x%x 0x%lx: %s\n", 137 + types[i], configs[i], types[j], configs[j], 138 + types[k], configs[k], r ? "Fail" : "Pass"); 142 139 } 143 140 } 144 141 }
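The comment in the diff explains the PowerPC constraint: two identical generic hardware events in one group would contend for the same counter, so when the leader and both siblings are all hardware events (i == j == k == 0) the second sibling uses PERF_COUNT_HW_INSTRUCTIONS instead. A sketch of that config selection (helper name illustrative):

```python
def sibling2_config(i: int, j: int, k: int, configs, configs_hw):
    """Pick the second sibling's config: when all three group members are
    hardware events, substitute the alternate hardware config
    (instructions) so two identical events don't need the same counter."""
    if i == 0 and j == 0 and k == 0:
        return configs_hw[k]
    return configs[k]
```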
+2 -5
tools/perf/tests/make
··· 86 86 make_no_backtrace := NO_BACKTRACE=1 87 87 make_no_libcapstone := NO_CAPSTONE=1 88 88 make_no_libnuma := NO_LIBNUMA=1 89 - make_no_libaudit := NO_LIBAUDIT=1 90 89 make_no_libbionic := NO_LIBBIONIC=1 91 90 make_no_auxtrace := NO_AUXTRACE=1 92 91 make_no_libbpf := NO_LIBBPF=1 ··· 96 97 make_with_babeltrace:= LIBBABELTRACE=1 97 98 make_with_coresight := CORESIGHT=1 98 99 make_no_sdt := NO_SDT=1 99 - make_no_syscall_tbl := NO_SYSCALL_TABLE=1 100 100 make_no_libpfm4 := NO_LIBPFM4=1 101 101 make_with_gtk2 := GTK2=1 102 102 make_refcnt_check := EXTRA_CFLAGS="-DREFCNT_CHECKING=1" ··· 120 122 # all the NO_* variable combined 121 123 make_minimal := NO_LIBPERL=1 NO_LIBPYTHON=1 NO_GTK2=1 122 124 make_minimal += NO_DEMANGLE=1 NO_LIBELF=1 NO_BACKTRACE=1 123 - make_minimal += NO_LIBNUMA=1 NO_LIBAUDIT=1 NO_LIBBIONIC=1 125 + make_minimal += NO_LIBNUMA=1 NO_LIBBIONIC=1 124 126 make_minimal += NO_LIBDW_DWARF_UNWIND=1 NO_AUXTRACE=1 NO_LIBBPF=1 125 127 make_minimal += NO_LIBCRYPTO=1 NO_SDT=1 NO_JVMTI=1 NO_LIBZSTD=1 126 - make_minimal += NO_LIBCAP=1 NO_SYSCALL_TABLE=1 NO_CAPSTONE=1 128 + make_minimal += NO_LIBCAP=1 NO_CAPSTONE=1 127 129 128 130 # $(run) contains all available tests 129 131 run := make_pure ··· 156 158 run += make_no_backtrace 157 159 run += make_no_libcapstone 158 160 run += make_no_libnuma 159 - run += make_no_libaudit 160 161 run += make_no_libbionic 161 162 run += make_no_auxtrace 162 163 run += make_no_libbpf
+1 -24
tools/perf/tests/parse-events.c
··· 54 54 return (evsel->attr.config & PERF_HW_EVENT_MASK) == expected_config; 55 55 } 56 56 57 - #ifdef HAVE_LIBTRACEEVENT 58 - 59 57 #if defined(__s390x__) 60 58 /* Return true if kvm module is available and loaded. Test this 61 59 * and return success when trace point kvm_s390_create_vm ··· 110 112 } 111 113 return TEST_OK; 112 114 } 113 - #endif /* HAVE_LIBTRACEEVENT */ 114 115 115 116 static int test__checkevent_raw(struct evlist *evlist) 116 117 { ··· 308 311 return TEST_OK; 309 312 } 310 313 311 - #ifdef HAVE_LIBTRACEEVENT 312 314 static int test__checkevent_tracepoint_modifier(struct evlist *evlist) 313 315 { 314 316 struct evsel *evsel = evlist__first(evlist); ··· 336 340 337 341 return test__checkevent_tracepoint_multi(evlist); 338 342 } 339 - #endif /* HAVE_LIBTRACEEVENT */ 340 343 341 344 static int test__checkevent_raw_modifier(struct evlist *evlist) 342 345 { ··· 624 629 return TEST_OK; 625 630 } 626 631 627 - #ifdef HAVE_LIBTRACEEVENT 628 632 static int test__checkevent_list(struct evlist *evlist) 629 633 { 630 634 struct evsel *evsel = evlist__first(evlist); ··· 665 671 666 672 return TEST_OK; 667 673 } 668 - #endif 669 674 670 675 static int test__checkevent_pmu_name(struct evlist *evlist) 671 676 { ··· 964 971 return TEST_OK; 965 972 } 966 973 967 - #ifdef HAVE_LIBTRACEEVENT 968 974 static int test__group3(struct evlist *evlist __maybe_unused) 969 975 { 970 976 struct evsel *evsel, *group1_leader = NULL, *group2_leader = NULL; ··· 1070 1078 } 1071 1079 return TEST_OK; 1072 1080 } 1073 - #endif 1074 1081 1075 1082 static int test__group4(struct evlist *evlist __maybe_unused) 1076 1083 { ··· 1804 1813 return TEST_OK; 1805 1814 } 1806 1815 1807 - #ifdef HAVE_LIBTRACEEVENT 1808 1816 static int count_tracepoints(void) 1809 1817 { 1810 1818 struct dirent *events_ent; ··· 1857 1867 1858 1868 return test__checkevent_tracepoint_multi(evlist); 1859 1869 } 1860 - #endif /* HAVE_LIBTRACEVENT */ 1861 1870 1862 1871 struct evlist_test { 1863 1872 const char 
*name; ··· 1865 1876 }; 1866 1877 1867 1878 static const struct evlist_test test__events[] = { 1868 - #ifdef HAVE_LIBTRACEEVENT 1869 1879 { 1870 1880 .name = "syscalls:sys_enter_openat", 1871 1881 .check = test__checkevent_tracepoint, ··· 1875 1887 .check = test__checkevent_tracepoint_multi, 1876 1888 /* 1 */ 1877 1889 }, 1878 - #endif 1879 1890 { 1880 1891 .name = "r1a", 1881 1892 .check = test__checkevent_raw, ··· 1925 1938 .check = test__checkevent_breakpoint_w, 1926 1939 /* 1 */ 1927 1940 }, 1928 - #ifdef HAVE_LIBTRACEEVENT 1929 1941 { 1930 1942 .name = "syscalls:sys_enter_openat:k", 1931 1943 .check = test__checkevent_tracepoint_modifier, ··· 1935 1949 .check = test__checkevent_tracepoint_multi_modifier, 1936 1950 /* 3 */ 1937 1951 }, 1938 - #endif 1939 1952 { 1940 1953 .name = "r1a:kp", 1941 1954 .check = test__checkevent_raw_modifier, ··· 1980 1995 .check = test__checkevent_breakpoint_w_modifier, 1981 1996 /* 2 */ 1982 1997 }, 1983 - #ifdef HAVE_LIBTRACEEVENT 1984 1998 { 1985 1999 .name = "r1,syscalls:sys_enter_openat:k,1:1:hp", 1986 2000 .check = test__checkevent_list, 1987 2001 /* 3 */ 1988 2002 }, 1989 - #endif 1990 2003 { 1991 2004 .name = "instructions:G", 1992 2005 .check = test__checkevent_exclude_host_modifier, ··· 2015 2032 .check = test__group2, 2016 2033 /* 9 */ 2017 2034 }, 2018 - #ifdef HAVE_LIBTRACEEVENT 2019 2035 { 2020 2036 .name = "group1{syscalls:sys_enter_openat:H,cycles:kppp},group2{cycles,1:3}:G,instructions:u", 2021 2037 .check = test__group3, 2022 2038 /* 0 */ 2023 2039 }, 2024 - #endif 2025 2040 { 2026 2041 .name = "{cycles:u,instructions:kp}:p", 2027 2042 .check = test__group4, ··· 2030 2049 .check = test__group5, 2031 2050 /* 2 */ 2032 2051 }, 2033 - #ifdef HAVE_LIBTRACEEVENT 2034 2052 { 2035 2053 .name = "*:*", 2036 2054 .check = test__all_tracepoints, 2037 2055 /* 3 */ 2038 2056 }, 2039 - #endif 2040 2057 { 2041 2058 .name = "{cycles,cache-misses:G}:H", 2042 2059 .check = test__group_gh1, ··· 2090 2111 .check = 
test__checkevent_breakpoint_len_rw_modifier, 2091 2112 /* 4 */ 2092 2113 }, 2093 - #if defined(__s390x__) && defined(HAVE_LIBTRACEEVENT) 2114 + #if defined(__s390x__) 2094 2115 { 2095 2116 .name = "kvm-s390:kvm_s390_create_vm", 2096 2117 .check = test__checkevent_tracepoint, ··· 2244 2265 .check = test__checkevent_breakpoint_2_events, 2245 2266 /* 3 */ 2246 2267 }, 2247 - #ifdef HAVE_LIBTRACEEVENT 2248 2268 { 2249 2269 .name = "9p:9p_client_req", 2250 2270 .check = test__checkevent_tracepoint, 2251 2271 /* 4 */ 2252 2272 }, 2253 - #endif 2254 2273 }; 2255 2274 2256 2275 static const struct evlist_test test__events_pmu[] = {
+2 -2
tools/perf/tests/shell/base_probe/test_adding_blacklisted.sh
··· 1 1 #!/bin/bash 2 - 2 + # perf_probe :: Reject blacklisted probes (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 # ··· 22 22 BLACKFUNC_LIST=`head -n 5 /sys/kernel/debug/kprobes/blacklist 2> /dev/null | cut -f2` 23 23 if [ -z "$BLACKFUNC_LIST" ]; then 24 24 print_overall_skipped 25 - exit 0 25 + exit 2 26 26 fi 27 27 28 28 # try to find vmlinux with DWARF debug info
+4 -4
tools/perf/tests/shell/base_probe/test_adding_kernel.sh
··· 1 1 #!/bin/bash 2 - # Add 'perf probe's, list and remove them 2 + # perf_probe :: Add probes, list and remove them (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 # ··· 33 33 check_kprobes_available 34 34 if [ $? -ne 0 ]; then 35 35 print_overall_skipped 36 - exit 0 36 + exit 2 37 37 fi 38 38 39 39 ··· 169 169 (( TEST_RESULT += $? )) 170 170 171 171 # adding existing probe with '--force' should pass 172 - NO_OF_PROBES=`$CMD_PERF probe -l | wc -l` 172 + NO_OF_PROBES=`$CMD_PERF probe -l $TEST_PROBE| wc -l` 173 173 $CMD_PERF probe --force --add $TEST_PROBE 2> $LOGS_DIR/adding_kernel_forceadd_03.err 174 174 PERF_EXIT_CODE=$? 175 175 ··· 205 205 $CMD_PERF probe --del \* 2> $LOGS_DIR/adding_kernel_removing_wildcard.err 206 206 PERF_EXIT_CODE=$? 207 207 208 - ../common/check_all_lines_matched.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err 208 + ../common/check_all_patterns_found.pl "Removed event: probe:$TEST_PROBE" "Removed event: probe:${TEST_PROBE}_1" < $LOGS_DIR/adding_kernel_removing_wildcard.err 209 209 CHECK_EXIT_CODE=$? 210 210 211 211 print_results $PERF_EXIT_CODE $CHECK_EXIT_CODE "removing multiple probes"
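The base_probe scripts above now `exit 2` instead of `exit 0` when skipping, because perf's shell-test runner reports exit status 2 as "Skip" rather than a false "Ok". A tiny sketch of that exit-code convention (mapping assumed from the perf shell-test convention, not copied from perf source):

```python
def shell_test_status(exit_code: int) -> str:
    """Map a shell test's exit status to perf's reported result:
    0 = Ok, 2 = Skip, anything else = FAILED."""
    if exit_code == 0:
        return "Ok"
    if exit_code == 2:
        return "Skip"
    return "FAILED"
```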
+2 -2
tools/perf/tests/shell/base_probe/test_basic.sh
··· 1 1 #!/bin/bash 2 - 2 + # perf_probe :: Basic perf probe functionality (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 # ··· 19 19 20 20 if ! check_kprobes_available; then 21 21 print_overall_skipped 22 - exit 0 22 + exit 2 23 23 fi 24 24 25 25
+6 -3
tools/perf/tests/shell/base_probe/test_invalid_options.sh
··· 1 1 #!/bin/bash 2 - 2 + # perf_probe :: Reject invalid options (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 # ··· 19 19 20 20 if ! check_kprobes_available; then 21 21 print_overall_skipped 22 - exit 0 22 + exit 2 23 23 fi 24 24 25 + # Check for presence of DWARF 26 + $CMD_PERF check feature -q dwarf 27 + [ $? -ne 0 ] && HINT_FAIL="Some of the tests need DWARF to run" 25 28 26 29 ### missing argument 27 30 ··· 78 75 79 76 80 77 # print overall results 81 - print_overall_results "$TEST_RESULT" 78 + print_overall_results "$TEST_RESULT" $HINT_FAIL 82 79 exit $?
+6 -3
tools/perf/tests/shell/base_probe/test_line_semantics.sh
··· 1 1 #!/bin/bash 2 - 2 + # perf_probe :: Check patterns for line semantics (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 # ··· 20 20 21 21 if ! check_kprobes_available; then 22 22 print_overall_skipped 23 - exit 0 23 + exit 2 24 24 fi 25 25 26 + # Check for presence of DWARF 27 + $CMD_PERF check feature -q dwarf 28 + [ $? -ne 0 ] && HINT_FAIL="Some of the tests need DWARF to run" 26 29 27 30 ### acceptable --line descriptions 28 31 ··· 54 51 55 52 56 53 # print overall results 57 - print_overall_results "$TEST_RESULT" 54 + print_overall_results "$TEST_RESULT" $HINT_FAIL 58 55 exit $?
+1 -1
tools/perf/tests/shell/base_report/setup.sh
··· 1 1 #!/bin/bash 2 - 2 + # perftool-testsuite :: perf_report 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 #
+1 -1
tools/perf/tests/shell/base_report/test_basic.sh
··· 1 1 #!/bin/bash 2 - 2 + # perf_report :: Basic perf report options (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 #
+5 -2
tools/perf/tests/shell/common/init.sh
··· 46 46 print_overall_results() 47 47 { 48 48 RETVAL="$1"; shift 49 + TASK_COMMENT="$*" 50 + test -n "$TASK_COMMENT" && TASK_COMMENT=":: $TASK_COMMENT" 51 + 49 52 if [ $RETVAL -eq 0 ]; then 50 53 _echo "$MALLPASS## [ PASS ] ##$MEND $TEST_NAME :: $THIS_TEST_NAME SUMMARY" 51 54 else 52 - _echo "$MALLFAIL## [ FAIL ] ##$MEND $TEST_NAME :: $THIS_TEST_NAME SUMMARY :: $RETVAL failures found" 55 + _echo "$MALLFAIL## [ FAIL ] ##$MEND $TEST_NAME :: $THIS_TEST_NAME SUMMARY :: $RETVAL failures found $TASK_COMMENT" 53 56 fi 54 57 return $RETVAL 55 58 } ··· 88 85 # the runmode of a testcase needs to be at least the current suite's runmode 89 86 if [ $PERFTOOL_TESTSUITE_RUNMODE -lt $TESTCASE_RUNMODE ]; then 90 87 print_overall_skipped 91 - exit 0 88 + exit 2 92 89 fi 93 90 } 94 91
+1 -1
tools/perf/tests/shell/coresight/Makefile
···
 clean: $(CLEANDIRS)
 $(CLEANDIRS):
-	$(call QUIET_CLEAN, test-$(@:clean-%=%)) $(Q)$(MAKE) -C $(@:clean-%=%) clean >/dev/null
+	$(call QUIET_CLEAN, test-$(@:clean-%=%)) $(MAKE) -C $(@:clean-%=%) clean >/dev/null

 .PHONY: all clean $(SUBDIRS) $(CLEANDIRS) $(INSTALLDIRS)
+1 -4
tools/perf/tests/shell/ftrace.sh
···
 test_ftrace_profile() {
     echo "perf ftrace profile test"
-    perf ftrace profile -m 16M sleep 0.1 > "${output}"
+    perf ftrace profile --graph-opts depth=5 sleep 0.1 > "${output}"
     grep ^# "${output}"
-    grep sleep "${output}"
-    grep schedule "${output}"
-    grep execve "${output}"
     time_re="[[:space:]]+1[[:digit:]]{5}\.[[:digit:]]{3}"
     # 100283.000 100283.000 100283.000 1 __x64_sys_clock_nanosleep
     # Check for one *clock_nanosleep line with a Count of just 1 that takes a bit more than 0.1 seconds
+7 -7
tools/perf/tests/shell/lib/perf_json_output_lint.py
···
 for item in json.loads(input):
   if expected_items != -1:
     count = len(item)
-    if count != expected_items and count >= 1 and count <= 7 and 'metric-value' in item:
+    if count not in expected_items and count >= 1 and count <= 7 and 'metric-value' in item:
       # Events that generate >1 metric may have isolated metric
       # values and possibly other prefixes like interval, core,
       # aggregate-number, or event-runtime/pcnt-running from multiplexing.
       pass
-    elif count != expected_items and count >= 1 and count <= 5 and 'metricgroup' in item:
+    elif count not in expected_items and count >= 1 and count <= 5 and 'metricgroup' in item:
       pass
-    elif count == expected_items + 1 and 'metric-threshold' in item:
+    elif count - 1 in expected_items and 'metric-threshold' in item:
       pass
-    elif count != expected_items:
+    elif count not in expected_items:
       raise RuntimeError(f'wrong number of fields. counted {count} expected {expected_items}'
                          f' in \'{item}\'')
   for key, value in item.items():
···

 try:
   if args.no_args or args.system_wide or args.event:
-    expected_items = 7
+    expected_items = [5, 7]
   elif args.interval or args.per_thread or args.system_wide_no_aggr:
-    expected_items = 8
+    expected_items = [6, 8]
   elif args.per_core or args.per_socket or args.per_node or args.per_die or args.per_cluster or args.per_cache:
-    expected_items = 9
+    expected_items = [7, 9]
   else:
     # If no option is specified, don't check the number of items.
     expected_items = -1
+1 -1
tools/perf/tests/shell/perftool-testsuite_probe.sh
···
 #!/bin/bash
-# perftool-testsuite_probe
+# perftool-testsuite_probe (exclusive)
 # SPDX-License-Identifier: GPL-2.0

 test -d "$(dirname "$0")/base_probe" || exit 2
+19 -17
tools/perf/tests/shell/record+probe_libc_inet_pton.sh
···
 #!/bin/sh
-# probe libc's inet_pton & backtrace it with ping
+# probe libc's inet_pton & backtrace it with ping (exclusive)

 # Installs a probe on libc's inet_pton function, that will use uprobes,
 # then use 'perf trace' on a ping to localhost asking for just one packet
···
     echo "((__GI_)?getaddrinfo|text_to_binary_address)\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
     echo "(gaih_inet|main)\+0x[[:xdigit:]]+[[:space:]]\(inlined|.*/bin/ping.*\)$" >> $expected
     ;;
-ppc64|ppc64le)
-    eventattr='max-stack=4'
-    # Add gaih_inet to expected backtrace only if it is part of libc.
-    if nm $libc | grep -F -q gaih_inet.; then
-        echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
-    fi
-    echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
-    echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
-    ;;
 *)
-    eventattr='max-stack=3'
+    eventattr='max-stack=4'
     echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
     ;;
 esac
···
     fi
     perf script -i $perf_data | tac | grep -m1 ^ping -B9 | tac > $perf_script

-    exec 3<$perf_script
     exec 4<$expected
-    while read line <&3 && read -r pattern <&4; do
+    while read -r pattern <&4; do
+        echo "Pattern: $pattern"
         [ -z "$pattern" ] && break
-        echo $line
-        echo "$line" | grep -E -q "$pattern"
-        if [ $? -ne 0 ] ; then
-            printf "FAIL: expected backtrace entry \"%s\" got \"%s\"\n" "$pattern" "$line"
+
+        found=0
+
+        # Search lines in the perf script result
+        exec 3<$perf_script
+        while read line <&3; do
+            [ -z "$line" ] && break
+            echo "  Matching: $line"
+            ! echo "$line" | grep -E -q "$pattern"
+            found=$?
+            [ $found -eq 1 ] && break
+        done
+
+        if [ $found -ne 1 ] ; then
+            printf "FAIL: Didn't find the expected backtrace entry \"%s\"\n" "$pattern"
             return 1
         fi
     done
+1 -1
tools/perf/tests/shell/stat+std_output.sh
···
 event_name=(cpu-clock task-clock context-switches cpu-migrations page-faults stalled-cycles-frontend stalled-cycles-backend cycles instructions branches branch-misses)
 event_metric=("CPUs utilized" "CPUs utilized" "/sec" "/sec" "/sec" "frontend cycles idle" "backend cycles idle" "GHz" "insn per cycle" "/sec" "of all branches")
-skip_metric=("stalled cycles per insn" "tma_" "retiring" "frontend_bound" "bad_speculation" "backend_bound")
+skip_metric=("stalled cycles per insn" "tma_" "retiring" "frontend_bound" "bad_speculation" "backend_bound" "TopdownL1" "percent of slots")

 cleanup() {
     rm -f "${stat_output}"
+5 -1
tools/perf/tests/shell/stat.sh
···
     # Run default Perf stat
     cycles_events=$(perf stat -- true 2>&1 | grep -E "/cycles/[uH]*| cycles[:uH]* " -c)

-    if [ "$pmus" -ne "$cycles_events" ]
+    # The expectation is that default output will have a cycles events on each
+    # hybrid PMU. In situations with no cycles PMU events, like virtualized, this
+    # can fall back to task-clock and so the end count may be 0. Fail if neither
+    # condition holds.
+    if [ "$pmus" -ne "$cycles_events" ] && [ "0" -ne "$cycles_events" ]
     then
         echo "hybrid test [Found $pmus PMUs but $cycles_events cycles events. Failed]"
         err=1
+30
tools/perf/tests/shell/test_arm_spe.sh
···
     arm_spe_report "SPE system-wide testing" $err
 }

+arm_spe_discard_test() {
+    echo "SPE discard mode"
+
+    for f in /sys/bus/event_source/devices/arm_spe_*; do
+        if [ -e "$f/format/discard" ]; then
+            cpu=$(cut -c -1 "$f/cpumask")
+            break
+        fi
+    done
+
+    if [ -z $cpu ]; then
+        arm_spe_report "SPE discard mode not present" 2
+        return
+    fi
+
+    # Test can use wildcard SPE instance and Perf will only open the event
+    # on instances that have that format flag. But make sure the target
+    # runs on an instance with discard mode otherwise we're not testing
+    # anything.
+    perf record -o ${perfdata} -e arm_spe/discard/ -N -B --no-bpf-event \
+        -- taskset --cpu-list $cpu true
+
+    if perf report -i ${perfdata} --stats | grep 'AUX events\|AUXTRACE events'; then
+        arm_spe_report "SPE discard mode found unexpected data" 1
+    else
+        arm_spe_report "SPE discard mode" 0
+    fi
+}
+
 arm_spe_snapshot_test
 arm_spe_system_wide_test
+arm_spe_discard_test

 exit $glb_err
+2 -2
tools/perf/tests/shell/test_brstack.sh
···
     echo "Testing user branch stack sampling"

     perf record -o $TMPDIR/perf.data --branch-filter any,save_type,u -- ${TESTPROG} > /dev/null 2>&1
-    perf script -i $TMPDIR/perf.data --fields brstacksym | xargs -n1 > $TMPDIR/perf.script
+    perf script -i $TMPDIR/perf.data --fields brstacksym | tr -s ' ' '\n' > $TMPDIR/perf.script

     # example of branch entries:
     # brstack_foo+0x14/brstack_bar+0x40/P/-/-/0/CALL
···
     echo "Testing branch stack filtering permutation ($test_filter_filter,$test_filter_expect)"

     perf record -o $TMPDIR/perf.data --branch-filter $test_filter_filter,save_type,u -- ${TESTPROG} > /dev/null 2>&1
-    perf script -i $TMPDIR/perf.data --fields brstack | xargs -n1 > $TMPDIR/perf.script
+    perf script -i $TMPDIR/perf.data --fields brstack | tr -s ' ' '\n' | grep '.' > $TMPDIR/perf.script

     # fail if we find any branch type that doesn't match any of the expected ones
     # also consider UNKNOWN branch types (-)
+28
tools/perf/tests/shell/test_intel_pt.sh
···
     return 0
 }

+test_pause_resume()
+{
+    echo "--- Test with pause / resume ---"
+    if ! perf_record_no_decode -o "${perfdatafile}" -e intel_pt/aux-action=start-paused/u uname ; then
+        echo "SKIP: pause / resume is not supported"
+        return 2
+    fi
+    if ! perf_record_no_bpf -o "${perfdatafile}" \
+        -e intel_pt/aux-action=start-paused/u \
+        -e instructions/period=50000,aux-action=resume,name=Resume/u \
+        -e instructions/period=100000,aux-action=pause,name=Pause/u uname ; then
+        echo "perf record with pause / resume failed"
+        return 1
+    fi
+    if ! perf script -i "${perfdatafile}" --itrace=b -Fperiod,event | \
+        awk 'BEGIN {paused=1;branches=0}
+             /Resume/ {paused=0}
+             /branches/ {if (paused) exit 1;branches=1}
+             /Pause/ {paused=1}
+             END {if (!branches) exit 1}' ; then
+        echo "perf record with pause / resume failed"
+        return 1
+    fi
+    echo OK
+    return 0
+}
+
 count_result()
 {
     if [ "$1" -eq 2 ] ; then
···
 test_no_tnt || ret=$? ; count_result $ret ; ret=0
 test_event_trace || ret=$? ; count_result $ret ; ret=0
 test_pipe || ret=$? ; count_result $ret ; ret=0
+test_pause_resume || ret=$? ; count_result $ret ; ret=0

 cleanup
+1 -1
tools/perf/tests/shell/test_task_analyzer.sh
···
 #!/bin/bash
-# perf script task-analyzer tests
+# perf script task-analyzer tests (exclusive)
 # SPDX-License-Identifier: GPL-2.0

 tmpdir=$(mktemp -d /tmp/perf-script-task-analyzer-XXXXX)
+94
tools/perf/tests/shell/trace_btf_general.sh
+#!/bin/bash
+# perf trace BTF general tests
+# SPDX-License-Identifier: GPL-2.0
+
+err=0
+set -e
+
+# shellcheck source=lib/probe.sh
+. "$(dirname $0)"/lib/probe.sh
+
+file1=$(mktemp /tmp/file1_XXXX)
+file2=$(echo $file1 | sed 's/file1/file2/g')
+
+buffer="buffer content"
+perf_config_tmp=$(mktemp /tmp/.perfconfig_XXXXX)
+
+trap cleanup EXIT TERM INT HUP
+
+check_vmlinux() {
+    echo "Checking if vmlinux BTF exists"
+    if [ ! -f /sys/kernel/btf/vmlinux ]
+    then
+        echo "Skipped due to missing vmlinux BTF"
+        return 2
+    fi
+    return 0
+}
+
+trace_test_string() {
+    echo "Testing perf trace's string augmentation"
+    if ! perf trace -e renameat* --max-events=1 -- mv ${file1} ${file2} 2>&1 | \
+        grep -q -E "^mv/[0-9]+ renameat(2)?\(.*, \"${file1}\", .*, \"${file2}\", .*\) += +[0-9]+$"
+    then
+        echo "String augmentation test failed"
+        err=1
+    fi
+}
+
+trace_test_buffer() {
+    echo "Testing perf trace's buffer augmentation"
+    # echo will insert a newline (\10) at the end of the buffer
+    if ! perf trace -e write --max-events=1 -- echo "${buffer}" 2>&1 | \
+        grep -q -E "^echo/[0-9]+ write\([0-9]+, ${buffer}.*, [0-9]+\) += +[0-9]+$"
+    then
+        echo "Buffer augmentation test failed"
+        err=1
+    fi
+}
+
+trace_test_struct_btf() {
+    echo "Testing perf trace's struct augmentation"
+    if ! perf trace -e clock_nanosleep --force-btf --max-events=1 -- sleep 1 2>&1 | \
+        grep -q -E "^sleep/[0-9]+ clock_nanosleep\(0, 0, \{1,\}, 0x[0-9a-f]+\) += +[0-9]+$"
+    then
+        echo "BTF struct augmentation test failed"
+        err=1
+    fi
+}
+
+cleanup() {
+    rm -rf ${file1} ${file2} ${perf_config_tmp}
+}
+
+trap_cleanup() {
+    echo "Unexpected signal in ${FUNCNAME[1]}"
+    cleanup
+    exit 1
+}
+
+# don't overwrite user's perf config
+trace_config() {
+    export PERF_CONFIG=${perf_config_tmp}
+    perf config trace.show_arg_names=false trace.show_duration=false \
+        trace.show_timestamp=false trace.args_alignment=0
+}
+
+skip_if_no_perf_trace || exit 2
+check_vmlinux || exit 2
+
+trace_config
+
+trace_test_string
+
+if [ $err = 0 ]; then
+    trace_test_buffer
+fi
+
+if [ $err = 0 ]; then
+    trace_test_struct_btf
+fi
+
+cleanup
+
+exit $err
+3 -17
tools/perf/tests/sigtrap.c
···
 #ifdef HAVE_BPF_SKEL
 #include <bpf/btf.h>
+#include <util/btf.h>

 static struct btf *btf;
···
     btf = NULL;
 }

-static const struct btf_member *__btf_type__find_member_by_name(int type_id, const char *member_name)
-{
-    const struct btf_type *t = btf__type_by_id(btf, type_id);
-    const struct btf_member *m;
-    int i;
-
-    for (i = 0, m = btf_members(t); i < btf_vlen(t); i++, m++) {
-        const char *current_member_name = btf__name_by_offset(btf, m->name_off);
-        if (!strcmp(current_member_name, member_name))
-            return m;
-    }
-
-    return NULL;
-}
-
 static bool attr_has_sigtrap(void)
 {
     int id;
···
     if (id < 0)
         return false;

-    return __btf_type__find_member_by_name(id, "sigtrap") != NULL;
+    return __btf_type__find_member_by_name(btf, id, "sigtrap") != NULL;
 }

 static bool kernel_with_sleepable_spinlocks(void)
···
         return false;

     // Only RT has a "lock" member for "struct spinlock"
-    member = __btf_type__find_member_by_name(id, "lock");
+    member = __btf_type__find_member_by_name(btf, id, "lock");
     if (member == NULL)
         return false;
+9 -7
tools/perf/tests/stat.c
···
                 struct machine *machine __maybe_unused)
 {
     struct perf_record_stat_config *config = &event->stat_config;
-    struct perf_stat_config stat_config = {};
+    struct perf_stat_config test_stat_config = {};

 #define HAS(term, val) \
     has_term(config, PERF_STAT_CONFIG_TERM__##term, val)
···

 #undef HAS

-    perf_event__read_stat_config(&stat_config, config);
+    perf_event__read_stat_config(&test_stat_config, config);

-    TEST_ASSERT_VAL("wrong aggr_mode", stat_config.aggr_mode == AGGR_CORE);
-    TEST_ASSERT_VAL("wrong scale", stat_config.scale == 1);
-    TEST_ASSERT_VAL("wrong interval", stat_config.interval == 1);
+    TEST_ASSERT_VAL("wrong aggr_mode", test_stat_config.aggr_mode == AGGR_CORE);
+    TEST_ASSERT_VAL("wrong scale", test_stat_config.scale == 1);
+    TEST_ASSERT_VAL("wrong interval", test_stat_config.interval == 1);
     return 0;
 }

 static int test__synthesize_stat_config(struct test_suite *test __maybe_unused,
                                         int subtest __maybe_unused)
 {
-    struct perf_stat_config stat_config = {
+    struct perf_stat_config test_stat_config = {
         .aggr_mode = AGGR_CORE,
         .scale = 1,
         .interval = 1,
     };

     TEST_ASSERT_VAL("failed to synthesize stat_config",
-            !perf_event__synthesize_stat_config(NULL, &stat_config, process_stat_config_event, NULL));
+            !perf_event__synthesize_stat_config(NULL, &test_stat_config,
+                                                process_stat_config_event,
+                                                NULL));

     return 0;
 }
+1 -1
tools/perf/tests/switch-tracking.c
···
     goto out;
 }

-DEFINE_SUITE("Track with sched_switch", switch_tracking);
+DEFINE_SUITE_EXCLUSIVE("Track with sched_switch", switch_tracking);
+1 -1
tools/perf/tests/tests-scripts.c
···
     char filename[PATH_MAX], link[128];
     struct test_suite *test_suite, **result_tmp;
     struct test_case *tests;
-    size_t len;
+    ssize_t len;
     char *exclusive;

     snprintf(link, sizeof(link), "/proc/%d/fd/%d", getpid(), dir_fd);
+10
tools/perf/tests/tests.h
···
     .test_cases = tests__##_name, \
     }

+#define DEFINE_SUITE_EXCLUSIVE(description, _name) \
+    struct test_case tests__##_name[] = { \
+        TEST_CASE_EXCLUSIVE(description, _name),\
+        { .name = NULL, } \
+    }; \
+    struct test_suite suite__##_name = { \
+        .desc = description, \
+        .test_cases = tests__##_name, \
+    }
+
 /* Tests */
 DECLARE_SUITE(vmlinux_matches_kallsyms);
 DECLARE_SUITE(openat_syscall_event);
+1 -1
tools/perf/tests/workloads/landlock.c
···
  * 'perf test' workload) we just add the required types and defines here instead
  * of including linux/landlock, that isn't available in older systems.
  *
- * We are not interested in the the result of the syscall, just in intercepting
+ * We are not interested in the result of the syscall, just in intercepting
  * its arguments.
  */
+2 -1
tools/perf/trace/beauty/arch_errno_names.sh
···
     archlist="$1"
     default="$2"

-    printf 'arch_syscalls__strerrno_t *arch_syscalls__strerrno_function(const char *arch)\n'
+    printf 'static arch_syscalls__strerrno_t *\n'
+    printf 'arch_syscalls__strerrno_function(const char *arch)\n'
     printf '{\n'
     for arch in $archlist; do
         arch_str=$(arch_string "$arch")
+1 -1
tools/perf/ui/browsers/annotate.c
···
         hbt->timer(hbt->arg);

     if (delay_secs != 0) {
-        symbol__annotate_decay_histogram(sym, evsel->core.idx);
+        symbol__annotate_decay_histogram(sym, evsel);
         hists__scnprintf_title(hists, title, sizeof(title));
         annotate_browser__show(&browser->b, title, help);
     }
+175 -2
tools/perf/ui/browsers/scripts.c
···
 // SPDX-License-Identifier: GPL-2.0
-#include "../../builtin.h"
-#include "../../perf.h"
 #include "../../util/util.h" // perf_exe()
 #include "../util.h"
+#include "../../util/evlist.h"
 #include "../../util/hist.h"
 #include "../../util/debug.h"
+#include "../../util/session.h"
 #include "../../util/symbol.h"
 #include "../browser.h"
 #include "../libslang.h"
 #include "config.h"
+#include <linux/err.h>
 #include <linux/string.h>
 #include <linux/zalloc.h>
+#include <subcmd/exec-cmd.h>
 #include <stdlib.h>

 #define SCRIPT_NAMELEN 128
···
         return -1;
     c->index++;
     return 0;
+}
+
+/*
+ * Some scripts specify the required events in their "xxx-record" file,
+ * this function will check if the events in perf.data match those
+ * mentioned in the "xxx-record".
+ *
+ * Fixme: All existing "xxx-record" are all in good formats "-e event ",
+ * which is covered well now. And new parsing code should be added to
+ * cover the future complex formats like event groups etc.
+ */
+static int check_ev_match(int dir_fd, const char *scriptname, struct perf_session *session)
+{
+    char line[BUFSIZ];
+    FILE *fp;
+
+    {
+        char filename[FILENAME_MAX + 5];
+        int fd;
+
+        scnprintf(filename, sizeof(filename), "bin/%s-record", scriptname);
+        fd = openat(dir_fd, filename, O_RDONLY);
+        if (fd == -1)
+            return -1;
+        fp = fdopen(fd, "r");
+        if (!fp)
+            return -1;
+    }
+
+    while (fgets(line, sizeof(line), fp)) {
+        char *p = skip_spaces(line);
+
+        if (*p == '#')
+            continue;
+
+        while (strlen(p)) {
+            int match, len;
+            struct evsel *pos;
+            char evname[128];
+
+            p = strstr(p, "-e");
+            if (!p)
+                break;
+
+            p += 2;
+            p = skip_spaces(p);
+            len = strcspn(p, " \t");
+            if (!len)
+                break;
+
+            snprintf(evname, len + 1, "%s", p);
+
+            match = 0;
+            evlist__for_each_entry(session->evlist, pos) {
+                if (evsel__name_is(pos, evname)) {
+                    match = 1;
+                    break;
+                }
+            }
+
+            if (!match) {
+                fclose(fp);
+                return -1;
+            }
+        }
+    }
+
+    fclose(fp);
+    return 0;
+}
+
+/*
+ * Return -1 if none is found, otherwise the actual scripts number.
+ *
+ * Currently the only user of this function is the script browser, which
+ * will list all statically runnable scripts, select one, execute it and
+ * show the output in a perf browser.
+ */
+static int find_scripts(char **scripts_array, char **scripts_path_array, int num,
+                        int pathlen)
+{
+    struct dirent *script_dirent, *lang_dirent;
+    int scripts_dir_fd, lang_dir_fd;
+    DIR *scripts_dir, *lang_dir;
+    struct perf_session *session;
+    struct perf_data data = {
+        .path = input_name,
+        .mode = PERF_DATA_MODE_READ,
+    };
+    char *temp;
+    int i = 0;
+    const char *exec_path = get_argv_exec_path();
+
+    session = perf_session__new(&data, NULL);
+    if (IS_ERR(session))
+        return PTR_ERR(session);
+
+    {
+        char scripts_path[PATH_MAX];
+
+        snprintf(scripts_path, sizeof(scripts_path), "%s/scripts", exec_path);
+        scripts_dir_fd = open(scripts_path, O_DIRECTORY);
+        pr_err("Failed to open directory '%s'", scripts_path);
+        if (scripts_dir_fd == -1) {
+            perf_session__delete(session);
+            return -1;
+        }
+    }
+    scripts_dir = fdopendir(scripts_dir_fd);
+    if (!scripts_dir) {
+        close(scripts_dir_fd);
+        perf_session__delete(session);
+        return -1;
+    }
+
+    while ((lang_dirent = readdir(scripts_dir)) != NULL) {
+        if (lang_dirent->d_type != DT_DIR &&
+            (lang_dirent->d_type == DT_UNKNOWN &&
+             !is_directory_at(scripts_dir_fd, lang_dirent->d_name)))
+            continue;
+        if (!strcmp(lang_dirent->d_name, ".") || !strcmp(lang_dirent->d_name, ".."))
+            continue;
+
+#ifndef HAVE_LIBPERL_SUPPORT
+        if (strstr(lang_dirent->d_name, "perl"))
+            continue;
+#endif
+#ifndef HAVE_LIBPYTHON_SUPPORT
+        if (strstr(lang_dirent->d_name, "python"))
+            continue;
+#endif
+
+        lang_dir_fd = openat(scripts_dir_fd, lang_dirent->d_name, O_DIRECTORY);
+        if (lang_dir_fd == -1)
+            continue;
+        lang_dir = fdopendir(lang_dir_fd);
+        if (!lang_dir) {
+            close(lang_dir_fd);
+            continue;
+        }
+        while ((script_dirent = readdir(lang_dir)) != NULL) {
+            if (script_dirent->d_type == DT_DIR)
+                continue;
+            if (script_dirent->d_type == DT_UNKNOWN &&
+                is_directory_at(lang_dir_fd, script_dirent->d_name))
+                continue;
+            /* Skip those real time scripts: xxxtop.p[yl] */
+            if (strstr(script_dirent->d_name, "top."))
+                continue;
+            if (i >= num)
+                break;
+            scnprintf(scripts_path_array[i], pathlen, "%s/scripts/%s/%s",
+                      exec_path,
+                      lang_dirent->d_name,
+                      script_dirent->d_name);
+            temp = strchr(script_dirent->d_name, '.');
+            snprintf(scripts_array[i],
+                     (temp - script_dirent->d_name) + 1,
+                     "%s", script_dirent->d_name);
+
+            if (check_ev_match(lang_dir_fd, scripts_array[i], session))
+                continue;
+
+            i++;
+        }
+        closedir(lang_dir);
+    }
+
+    closedir(scripts_dir);
+    perf_session__delete(session);
+    return i;
 }

 /*
+9 -7
tools/perf/ui/gtk/annotate.c
···
 #include "util/sort.h"
 #include "util/debug.h"
 #include "util/annotate.h"
+#include "util/evlist.h"
 #include "util/evsel.h"
 #include "util/map.h"
 #include "util/dso.h"
···
 };

 static int perf_gtk__get_percent(char *buf, size_t size, struct symbol *sym,
-                                 struct disasm_line *dl, int evidx)
+                                 struct disasm_line *dl, const struct evsel *evsel)
 {
     struct annotation *notes;
     struct sym_hist *symhist;
···
         return 0;

     notes = symbol__annotation(sym);
-    symhist = annotation__histogram(notes, evidx);
-    entry = annotated_source__hist_entry(notes->src, evidx, dl->al.offset);
+    symhist = annotation__histogram(notes, evsel);
+    entry = annotated_source__hist_entry(notes->src, evsel, dl->al.offset);
     if (entry)
         nr_samples = entry->nr_samples;
···
         gtk_list_store_append(store, &iter);

         if (evsel__is_group_event(evsel)) {
-            for (i = 0; i < evsel->core.nr_members; i++) {
+            struct evsel *cur_evsel;
+
+            for_each_group_evsel(cur_evsel, evsel__leader(evsel)) {
                 ret += perf_gtk__get_percent(s + ret,
                                              sizeof(s) - ret,
                                              sym, pos,
-                                             evsel->core.idx + i);
+                                             cur_evsel);
                 ret += scnprintf(s + ret, sizeof(s) - ret, " ");
             }
         } else {
-            ret = perf_gtk__get_percent(s, sizeof(s), sym, pos,
-                                        evsel->core.idx);
+            ret = perf_gtk__get_percent(s, sizeof(s), sym, pos, evsel);
         }

         if (ret)
+1 -1
tools/perf/ui/hist.c
···
             const char *fmtstr, hpp_snprint_fn print_fn,
             enum perf_hpp_fmt_type fmtype)
 {
-    int len = fmt->user_len ?: fmt->len;
+    int len = max(fmt->user_len ?: fmt->len, (int)strlen(fmt->name));

     if (symbol_conf.field_sep) {
         return __hpp__fmt(hpp, he, get_field, fmtstr, 1,
+5 -2
tools/perf/util/Build
···
 perf-util-y += hwmon_pmu.o
 perf-util-y += tool_pmu.o
 perf-util-y += svghelper.o
-perf-util-$(CONFIG_LIBTRACEEVENT) += trace-event-info.o
+perf-util-y += trace-event-info.o
 perf-util-y += trace-event-scripting.o
 perf-util-$(CONFIG_LIBTRACEEVENT) += trace-event.o
 perf-util-$(CONFIG_LIBTRACEEVENT) += trace-event-parse.o
···
 perf-util-y += topdown.o
 perf-util-y += iostat.o
 perf-util-y += stream.o
+perf-util-y += kvm-stat.o
+perf-util-y += lock-contention.o
 perf-util-$(CONFIG_AUXTRACE) += auxtrace.o
-perf-util-$(CONFIG_AUXTRACE) += intel-pt-decoder/
+perf-util-y += intel-pt-decoder/
 perf-util-$(CONFIG_AUXTRACE) += intel-pt.o
 perf-util-$(CONFIG_AUXTRACE) += intel-bts.o
 perf-util-$(CONFIG_AUXTRACE) += arm-spe.o
···
 perf-util-$(CONFIG_PERF_BPF_SKEL) += bpf-filter.o
 perf-util-$(CONFIG_PERF_BPF_SKEL) += bpf-filter-flex.o
 perf-util-$(CONFIG_PERF_BPF_SKEL) += bpf-filter-bison.o
+perf-util-$(CONFIG_PERF_BPF_SKEL) += btf.o

 ifeq ($(CONFIG_LIBTRACEEVENT),y)
 perf-util-$(CONFIG_PERF_BPF_SKEL) += bpf_lock_contention.o
+15 -17
tools/perf/util/annotate.c
···
 }

 static int __symbol__inc_addr_samples(struct map_symbol *ms,
-                                      struct annotated_source *src, int evidx, u64 addr,
+                                      struct annotated_source *src, struct evsel *evsel, u64 addr,
                                       struct perf_sample *sample)
 {
     struct symbol *sym = ms->sym;
···
     }

     offset = addr - sym->start;
-    h = annotated_source__histogram(src, evidx);
+    h = annotated_source__histogram(src, evsel);
     if (h == NULL) {
         pr_debug("%s(%d): ENOMEM! sym->name=%s, start=%#" PRIx64 ", addr=%#" PRIx64 ", end=%#" PRIx64 ", func: %d\n",
                  __func__, __LINE__, sym->name, sym->start, addr, sym->end, sym->type == STT_FUNC);
         return -ENOMEM;
     }

-    hash_key = offset << 16 | evidx;
+    hash_key = offset << 16 | evsel->core.idx;
     if (!hashmap__find(src->samples, hash_key, &entry)) {
         entry = zalloc(sizeof(*entry));
         if (entry == NULL)
···

     pr_debug3("%#" PRIx64 " %s: period++ [addr: %#" PRIx64 ", %#" PRIx64
               ", evidx=%d] => nr_samples: %" PRIu64 ", period: %" PRIu64 "\n",
-              sym->start, sym->name, addr, addr - sym->start, evidx,
+              sym->start, sym->name, addr, addr - sym->start, evsel->core.idx,
               entry->nr_samples, entry->period);
     return 0;
 }
···
     if (sym == NULL)
         return 0;
     src = symbol__hists(sym, evsel->evlist->core.nr_entries);
-    return src ? __symbol__inc_addr_samples(ms, src, evsel->core.idx, addr, sample) : 0;
+    return src ? __symbol__inc_addr_samples(ms, src, evsel, addr, sample) : 0;
 }

 static int symbol__account_br_cntr(struct annotated_branch *branch,
···
                   s64 offset, s64 end)
 {
     struct hists *hists = evsel__hists(evsel);
-    int evidx = evsel->core.idx;
-    struct sym_hist *sym_hist = annotation__histogram(notes, evidx);
+    struct sym_hist *sym_hist = annotation__histogram(notes, evsel);
     unsigned int hits = 0;
     u64 period = 0;

     while (offset < end) {
         struct sym_hist_entry *entry;

-        entry = annotated_source__hist_entry(notes->src, evidx, offset);
+        entry = annotated_source__hist_entry(notes->src, evsel, offset);
         if (entry) {
             hits += entry->nr_samples;
             period += entry->period;
···

 static void symbol__annotate_hits(struct symbol *sym, struct evsel *evsel)
 {
-    int evidx = evsel->core.idx;
     struct annotation *notes = symbol__annotation(sym);
-    struct sym_hist *h = annotation__histogram(notes, evidx);
+    struct sym_hist *h = annotation__histogram(notes, evsel);
     u64 len = symbol__size(sym), offset;

     for (offset = 0; offset < len; ++offset) {
         struct sym_hist_entry *entry;

-        entry = annotated_source__hist_entry(notes->src, evidx, offset);
+        entry = annotated_source__hist_entry(notes->src, evsel, offset);
         if (entry && entry->nr_samples != 0)
             printf("%*" PRIx64 ": %" PRIu64 "\n", BITS_PER_LONG / 2,
                    sym->start + offset, entry->nr_samples);
···
     const char *d_filename;
     const char *evsel_name = evsel__name(evsel);
     struct annotation *notes = symbol__annotation(sym);
-    struct sym_hist *h = annotation__histogram(notes, evsel->core.idx);
+    struct sym_hist *h = annotation__histogram(notes, evsel);
     struct annotation_line *pos, *queue = NULL;
     struct annotation_options *opts = &annotate_opts;
     u64 start = map__rip_2objdump(map, sym->start);
···
     return err;
 }

-void symbol__annotate_zero_histogram(struct symbol *sym, int evidx)
+void symbol__annotate_zero_histogram(struct symbol *sym, struct evsel *evsel)
 {
     struct annotation *notes = symbol__annotation(sym);
-    struct sym_hist *h = annotation__histogram(notes, evidx);
+    struct sym_hist *h = annotation__histogram(notes, evsel);

     memset(h, 0, sizeof(*notes->src->histograms) * notes->src->nr_histograms);
 }

-void symbol__annotate_decay_histogram(struct symbol *sym, int evidx)
+void symbol__annotate_decay_histogram(struct symbol *sym, struct evsel *evsel)
 {
     struct annotation *notes = symbol__annotation(sym);
-    struct sym_hist *h = annotation__histogram(notes, evidx);
+    struct sym_hist *h = annotation__histogram(notes, evsel);
     struct annotation_line *al;

     h->nr_samples = 0;
···
         if (al->offset == -1)
             continue;

-        entry = annotated_source__hist_entry(notes->src, evidx, al->offset);
+        entry = annotated_source__hist_entry(notes->src, evsel, al->offset);
         if (entry == NULL)
             continue;
+12 -9
tools/perf/util/annotate.h
··· 15 15 #include "hashmap.h" 16 16 #include "disasm.h" 17 17 #include "branch.h" 18 + #include "evsel.h" 18 19 19 20 struct hist_browser_timer; 20 21 struct hist_entry; ··· 24 23 struct addr_map_symbol; 25 24 struct option; 26 25 struct perf_sample; 27 - struct evsel; 28 26 struct symbol; 29 27 struct annotated_data_type; 30 28 ··· 373 373 void annotation__update_column_widths(struct annotation *notes); 374 374 void annotation__toggle_full_addr(struct annotation *notes, struct map_symbol *ms); 375 375 376 - static inline struct sym_hist *annotated_source__histogram(struct annotated_source *src, int idx) 376 + static inline struct sym_hist *annotated_source__histogram(struct annotated_source *src, 377 + const struct evsel *evsel) 377 378 { 378 - return &src->histograms[idx]; 379 + return &src->histograms[evsel->core.idx]; 379 380 } 380 381 381 - static inline struct sym_hist *annotation__histogram(struct annotation *notes, int idx) 382 + static inline struct sym_hist *annotation__histogram(struct annotation *notes, 383 + const struct evsel *evsel) 382 384 { 383 - return annotated_source__histogram(notes->src, idx); 385 + return annotated_source__histogram(notes->src, evsel); 384 386 } 385 387 386 388 static inline struct sym_hist_entry * 387 - annotated_source__hist_entry(struct annotated_source *src, int idx, u64 offset) 389 + annotated_source__hist_entry(struct annotated_source *src, const struct evsel *evsel, u64 offset) 388 390 { 389 391 struct sym_hist_entry *entry; 390 - long key = offset << 16 | idx; 392 + long key = offset << 16 | evsel->core.idx; 391 393 392 394 if (!hashmap__find(src->samples, key, &entry)) 393 395 return NULL; ··· 443 441 SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_REGEXP, 444 442 SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE, 445 443 SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF, 444 + SYMBOL_ANNOTATE_ERRNO__COULDNT_DETERMINE_FILE_TYPE, 446 445 447 446 __SYMBOL_ANNOTATE_ERRNO__END, 448 447 }; ··· 451 448 int symbol__strerror_disassemble(struct map_symbol *ms, 
int errnum, char *buf, size_t buflen); 452 449 453 450 int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel); 454 - void symbol__annotate_zero_histogram(struct symbol *sym, int evidx); 455 - void symbol__annotate_decay_histogram(struct symbol *sym, int evidx); 451 + void symbol__annotate_zero_histogram(struct symbol *sym, struct evsel *evsel); 452 + void symbol__annotate_decay_histogram(struct symbol *sym, struct evsel *evsel); 456 453 void annotated_source__purge(struct annotated_source *as); 457 454 458 455 int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel);
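The annotate.h hunk above switches the histogram accessors from a raw `int idx` to a `const struct evsel *`, with the sample hashmap keyed by `offset << 16 | evsel->core.idx`. A minimal standalone sketch of that key packing (the 16-bit split comes from the patch; the helper names and the explicit mask are hypothetical additions for illustration):

```c
#include <stdint.h>

/* Illustrative only: pack an instruction offset and an evsel index into
 * the single hashmap key used by annotated_source__hist_entry(). The
 * low 16 bits hold the event index, the remaining bits the offset. */
static long hist_key(uint64_t offset, int evsel_idx)
{
	return (long)((offset << 16) | ((unsigned int)evsel_idx & 0xffff));
}

static uint64_t hist_key_offset(long key)
{
	return (uint64_t)key >> 16;
}

static int hist_key_evsel_idx(long key)
{
	return (int)(key & 0xffff);
}
```

Passing the evsel instead of a bare index lets the accessors derive the key themselves and keeps callers from mixing up event indices.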
+9
tools/perf/util/arm-spe-decoder/arm-spe-decoder.h
··· 67 67 ARM_SPE_COMMON_DS_DRAM = 0xe, 68 68 }; 69 69 70 + enum arm_spe_ampereone_data_source { 71 + ARM_SPE_AMPEREONE_LOCAL_CHIP_CACHE_OR_DEVICE = 0x0, 72 + ARM_SPE_AMPEREONE_SLC = 0x3, 73 + ARM_SPE_AMPEREONE_REMOTE_CHIP_CACHE = 0x5, 74 + ARM_SPE_AMPEREONE_DDR = 0x7, 75 + ARM_SPE_AMPEREONE_L1D = 0x8, 76 + ARM_SPE_AMPEREONE_L2D = 0x9, 77 + }; 78 + 70 79 struct arm_spe_record { 71 80 enum arm_spe_sample_type type; 72 81 int err;
+74 -12
tools/perf/util/arm-spe.c
··· 103 103 u32 flags; 104 104 }; 105 105 106 + struct data_source_handle { 107 + const struct midr_range *midr_ranges; 108 + void (*ds_synth)(const struct arm_spe_record *record, 109 + union perf_mem_data_src *data_src); 110 + }; 111 + 112 + #define DS(range, func) \ 113 + { \ 114 + .midr_ranges = range, \ 115 + .ds_synth = arm_spe__synth_##func, \ 116 + } 117 + 106 118 static void arm_spe_dump(struct arm_spe *spe __maybe_unused, 107 119 unsigned char *buf, size_t len) 108 120 { ··· 455 443 {}, 456 444 }; 457 445 446 + static const struct midr_range ampereone_ds_encoding_cpus[] = { 447 + MIDR_ALL_VERSIONS(MIDR_AMPERE1A), 448 + {}, 449 + }; 450 + 458 451 static void arm_spe__sample_flags(struct arm_spe_queue *speq) 459 452 { 460 453 const struct arm_spe_record *record = &speq->decoder->record; ··· 549 532 } 550 533 } 551 534 535 + /* 536 + * Source is IMPDEF. Here we convert the source code used on AmpereOne cores 537 + * to the common (Neoverse, Cortex) to avoid duplicating the decoding code. 
538 + */ 539 + static void arm_spe__synth_data_source_ampereone(const struct arm_spe_record *record, 540 + union perf_mem_data_src *data_src) 541 + { 542 + struct arm_spe_record common_record; 543 + 544 + switch (record->source) { 545 + case ARM_SPE_AMPEREONE_LOCAL_CHIP_CACHE_OR_DEVICE: 546 + common_record.source = ARM_SPE_COMMON_DS_PEER_CORE; 547 + break; 548 + case ARM_SPE_AMPEREONE_SLC: 549 + common_record.source = ARM_SPE_COMMON_DS_SYS_CACHE; 550 + break; 551 + case ARM_SPE_AMPEREONE_REMOTE_CHIP_CACHE: 552 + common_record.source = ARM_SPE_COMMON_DS_REMOTE; 553 + break; 554 + case ARM_SPE_AMPEREONE_DDR: 555 + common_record.source = ARM_SPE_COMMON_DS_DRAM; 556 + break; 557 + case ARM_SPE_AMPEREONE_L1D: 558 + common_record.source = ARM_SPE_COMMON_DS_L1D; 559 + break; 560 + case ARM_SPE_AMPEREONE_L2D: 561 + common_record.source = ARM_SPE_COMMON_DS_L2; 562 + break; 563 + default: 564 + pr_warning_once("AmpereOne: Unknown data source (0x%x)\n", 565 + record->source); 566 + return; 567 + } 568 + 569 + common_record.op = record->op; 570 + arm_spe__synth_data_source_common(&common_record, data_src); 571 + } 572 + 573 + static const struct data_source_handle data_source_handles[] = { 574 + DS(common_ds_encoding_cpus, data_source_common), 575 + DS(ampereone_ds_encoding_cpus, data_source_ampereone), 576 + }; 577 + 552 578 static void arm_spe__synth_memory_level(const struct arm_spe_record *record, 553 579 union perf_mem_data_src *data_src) 554 580 { ··· 615 555 data_src->mem_lvl |= PERF_MEM_LVL_REM_CCE1; 616 556 } 617 557 618 - static bool arm_spe__is_common_ds_encoding(struct arm_spe_queue *speq) 558 + static bool arm_spe__synth_ds(struct arm_spe_queue *speq, 559 + const struct arm_spe_record *record, 560 + union perf_mem_data_src *data_src) 619 561 { 620 562 struct arm_spe *spe = speq->spe; 621 - bool is_in_cpu_list; 622 563 u64 *metadata = NULL; 623 - u64 midr = 0; 564 + u64 midr; 565 + unsigned int i; 624 566 625 567 /* Metadata version 1 assumes all CPUs are the same 
(old behavior) */ 626 568 if (spe->metadata_ver == 1) { ··· 654 592 midr = metadata[ARM_SPE_CPU_MIDR]; 655 593 } 656 594 657 - is_in_cpu_list = is_midr_in_range_list(midr, common_ds_encoding_cpus); 658 - if (is_in_cpu_list) 659 - return true; 660 - else 661 - return false; 595 + for (i = 0; i < ARRAY_SIZE(data_source_handles); i++) { 596 + if (is_midr_in_range_list(midr, data_source_handles[i].midr_ranges)) { 597 + data_source_handles[i].ds_synth(record, data_src); 598 + return true; 599 + } 600 + } 601 + 602 + return false; 662 603 } 663 604 664 605 static u64 arm_spe__synth_data_source(struct arm_spe_queue *speq, 665 606 const struct arm_spe_record *record) 666 607 { 667 608 union perf_mem_data_src data_src = { .mem_op = PERF_MEM_OP_NA }; 668 - bool is_common = arm_spe__is_common_ds_encoding(speq); 669 609 670 610 if (record->op & ARM_SPE_OP_LD) 671 611 data_src.mem_op = PERF_MEM_OP_LOAD; ··· 676 612 else 677 613 return 0; 678 614 679 - if (is_common) 680 - arm_spe__synth_data_source_common(record, &data_src); 681 - else 615 + if (!arm_spe__synth_ds(speq, record, &data_src)) 682 616 arm_spe__synth_memory_level(record, &data_src); 683 617 684 618 if (record->type & (ARM_SPE_TLB_ACCESS | ARM_SPE_TLB_MISS)) {
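The `data_source_handles[]` table above replaces a single common/non-common check with a match-and-dispatch loop over MIDR ranges, so new core families (here AmpereOne) only add a table row. A self-contained sketch of that pattern, with invented MIDR values and handlers standing in for `is_midr_in_range_list()` and the `arm_spe__synth_*` callbacks:

```c
#include <stddef.h>

/* All MIDR values and return codes below are made up for the sketch. */
typedef int (*midr_match_fn)(unsigned long midr);
typedef int (*ds_synth_fn)(int source);

static int match_common(unsigned long midr)    { return midr == 0x410fd490UL; }
static int match_ampereone(unsigned long midr) { return midr == 0xc00fac31UL; }

static int synth_common(int source)    { return source; }        /* identity */
static int synth_ampereone(int source) { return source + 100; }  /* remapped */

static const struct {
	midr_match_fn match;
	ds_synth_fn synth;
} handles[] = {
	{ match_common, synth_common },
	{ match_ampereone, synth_ampereone },
};

/* Mirrors arm_spe__synth_ds(): the first matching entry wins; -1 means
 * no entry matched, so the caller falls back to the generic
 * memory-level synthesis. */
static int synth_ds(unsigned long midr, int source)
{
	for (size_t i = 0; i < sizeof(handles) / sizeof(handles[0]); i++) {
		if (handles[i].match(midr))
			return handles[i].synth(source);
	}
	return -1;
}
```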
+62 -5
tools/perf/util/auxtrace.c
··· 810 810 return auxtrace_validate_aux_sample_size(evlist, opts); 811 811 } 812 812 813 - void auxtrace_regroup_aux_output(struct evlist *evlist) 813 + static struct aux_action_opt { 814 + const char *str; 815 + u32 aux_action; 816 + bool aux_event_opt; 817 + } aux_action_opts[] = { 818 + {"start-paused", BIT(0), true}, 819 + {"pause", BIT(1), false}, 820 + {"resume", BIT(2), false}, 821 + {.str = NULL}, 822 + }; 823 + 824 + static const struct aux_action_opt *auxtrace_parse_aux_action_str(const char *str) 814 825 { 815 - struct evsel *evsel, *aux_evsel = NULL; 826 + const struct aux_action_opt *opt; 827 + 828 + if (!str) 829 + return NULL; 830 + 831 + for (opt = aux_action_opts; opt->str; opt++) 832 + if (!strcmp(str, opt->str)) 833 + return opt; 834 + 835 + return NULL; 836 + } 837 + 838 + int auxtrace_parse_aux_action(struct evlist *evlist) 839 + { 816 840 struct evsel_config_term *term; 841 + struct evsel *aux_evsel = NULL; 842 + struct evsel *evsel; 817 843 818 844 evlist__for_each_entry(evlist, evsel) { 819 - if (evsel__is_aux_event(evsel)) 845 + bool is_aux_event = evsel__is_aux_event(evsel); 846 + const struct aux_action_opt *opt; 847 + 848 + if (is_aux_event) 820 849 aux_evsel = evsel; 821 - term = evsel__get_config_term(evsel, AUX_OUTPUT); 850 + term = evsel__get_config_term(evsel, AUX_ACTION); 851 + if (!term) { 852 + if (evsel__get_config_term(evsel, AUX_OUTPUT)) 853 + goto regroup; 854 + continue; 855 + } 856 + opt = auxtrace_parse_aux_action_str(term->val.str); 857 + if (!opt) { 858 + pr_err("Bad aux-action '%s'\n", term->val.str); 859 + return -EINVAL; 860 + } 861 + if (opt->aux_event_opt && !is_aux_event) { 862 + pr_err("aux-action '%s' can only be used with AUX area event\n", 863 + term->val.str); 864 + return -EINVAL; 865 + } 866 + if (!opt->aux_event_opt && is_aux_event) { 867 + pr_err("aux-action '%s' cannot be used for AUX area event itself\n", 868 + term->val.str); 869 + return -EINVAL; 870 + } 871 + evsel->core.attr.aux_action = 
opt->aux_action; 872 + regroup: 822 873 /* If possible, group with the AUX event */ 823 - if (term && aux_evsel) 874 + if (aux_evsel) 824 875 evlist__regroup(evlist, aux_evsel, evsel); 876 + if (!evsel__is_aux_event(evsel__leader(evsel))) { 877 + pr_err("Events with aux-action must have AUX area event group leader\n"); 878 + return -EINVAL; 879 + } 825 880 } 881 + 882 + return 0; 826 883 } 827 884 828 885 struct auxtrace_record *__weak
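The auxtrace.c hunk pairs a NULL-terminated option table with two validation rules: "start-paused" is only legal on the AUX area event itself, while "pause"/"resume" are only legal on the other (controlling) events. A standalone sketch of the lookup plus those rules (error reporting simplified to a zero return; upstream reports -EINVAL with a message):

```c
#include <string.h>

/* Bit values mirror the BIT(0..2) encoding in the patch. */
struct aux_opt {
	const char *str;
	unsigned int bit;
	int aux_event_only;	/* valid only on the AUX area event itself */
};

static const struct aux_opt aux_opts[] = {
	{ "start-paused", 1u << 0, 1 },
	{ "pause",        1u << 1, 0 },
	{ "resume",       1u << 2, 0 },
	{ NULL, 0, 0 },
};

/* Returns the action bit, or 0 when the name is unknown or used on the
 * wrong kind of event. */
static unsigned int parse_aux_action(const char *str, int is_aux_event)
{
	const struct aux_opt *o;

	for (o = aux_opts; o->str; o++) {
		if (strcmp(str, o->str))
			continue;
		if (o->aux_event_only != is_aux_event)
			return 0;
		return o->bit;
	}
	return 0;
}
```

The grouping step after parsing (regrouping each controlling event under the AUX event leader) is what lets the kernel pause/resume the right AUX buffer.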
+4 -2
tools/perf/util/auxtrace.h
··· 578 578 int auxtrace_parse_sample_options(struct auxtrace_record *itr, 579 579 struct evlist *evlist, 580 580 struct record_opts *opts, const char *str); 581 - void auxtrace_regroup_aux_output(struct evlist *evlist); 581 + int auxtrace_parse_aux_action(struct evlist *evlist); 582 582 int auxtrace_record__options(struct auxtrace_record *itr, 583 583 struct evlist *evlist, 584 584 struct record_opts *opts); ··· 799 799 } 800 800 801 801 static inline 802 - void auxtrace_regroup_aux_output(struct evlist *evlist __maybe_unused) 802 + int auxtrace_parse_aux_action(struct evlist *evlist __maybe_unused) 803 803 { 804 + pr_err("AUX area tracing not supported\n"); 805 + return -EINVAL; 804 806 } 805 807 806 808 static inline
+8 -2
tools/perf/util/bpf-event.c
··· 289 289 } 290 290 291 291 info_node->info_linear = info_linear; 292 - perf_env__insert_bpf_prog_info(env, info_node); 292 + if (!perf_env__insert_bpf_prog_info(env, info_node)) { 293 + free(info_linear); 294 + free(info_node); 295 + } 293 296 info_linear = NULL; 294 297 295 298 /* ··· 483 480 info_node = malloc(sizeof(struct bpf_prog_info_node)); 484 481 if (info_node) { 485 482 info_node->info_linear = info_linear; 486 - perf_env__insert_bpf_prog_info(env, info_node); 483 + if (!perf_env__insert_bpf_prog_info(env, info_node)) { 484 + free(info_linear); 485 + free(info_node); 486 + } 487 487 } else 488 488 free(info_linear); 489 489
+14 -1
tools/perf/util/bpf_ftrace.c
··· 11 11 #include "util/debug.h" 12 12 #include "util/evlist.h" 13 13 #include "util/bpf_counter.h" 14 + #include "util/stat.h" 14 15 15 16 #include "util/bpf_skel/func_latency.skel.h" 16 17 ··· 36 35 pr_err("Failed to open func latency skeleton\n"); 37 36 return -1; 38 37 } 38 + 39 + skel->rodata->bucket_range = ftrace->bucket_range; 40 + skel->rodata->min_latency = ftrace->min_latency; 39 41 40 42 /* don't need to set cpu filter for system-wide mode */ 41 43 if (ftrace->target.cpu_list) { ··· 87 83 } 88 84 } 89 85 86 + skel->bss->min = INT64_MAX; 87 + 90 88 skel->links.func_begin = bpf_program__attach_kprobe(skel->progs.func_begin, 91 89 false, func->name); 92 90 if (IS_ERR(skel->links.func_begin)) { ··· 125 119 } 126 120 127 121 int perf_ftrace__latency_read_bpf(struct perf_ftrace *ftrace __maybe_unused, 128 - int buckets[]) 122 + int buckets[], struct stats *stats) 129 123 { 130 124 int i, fd, err; 131 125 u32 idx; ··· 147 141 148 142 for (i = 0; i < ncpus; i++) 149 143 buckets[idx] += hist[i]; 144 + } 145 + 146 + if (skel->bss->count) { 147 + stats->mean = skel->bss->total / skel->bss->count; 148 + stats->n = skel->bss->count; 149 + stats->max = skel->bss->max; 150 + stats->min = skel->bss->min; 150 151 } 151 152 152 153 free(hist);
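The bpf_ftrace.c read path now folds four BPF-side globals into a `struct stats`: `min` is seeded with `INT64_MAX` before attach so the first sample always replaces it, and the mean is an integer division of `total` by `count`. A small sketch of those updates (helper names are invented; the struct fields only loosely follow perf's `struct stats`):

```c
#include <stdint.h>

/* Running min/max as kept by func_latency.bpf.c, and the mean as read
 * back by perf_ftrace__latency_read_bpf(). */
static int64_t lat_min_update(int64_t cur_min, int64_t delta)
{
	return delta < cur_min ? delta : cur_min;
}

static int64_t lat_max_update(int64_t cur_max, int64_t delta)
{
	return delta > cur_max ? delta : cur_max;
}

static int64_t lat_mean(int64_t total, int64_t count)
{
	/* integer division, as in the patch; guard mirrors the
	 * `if (skel->bss->count)` check */
	return count ? total / count : 0;
}
```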
+1 -1
tools/perf/util/bpf_kwork.c
··· 285 285 (bpf_trace->get_work_name(key, &tmp.name))) 286 286 return -1; 287 287 288 - work = perf_kwork_add_work(kwork, tmp.class, &tmp); 288 + work = kwork->add_work(kwork, tmp.class, &tmp); 289 289 if (work == NULL) 290 290 return -1; 291 291
+1 -1
tools/perf/util/bpf_kwork_top.c
··· 255 255 bpf_trace = kwork_class_bpf_supported_list[type]; 256 256 tmp.class = bpf_trace->class; 257 257 258 - work = perf_kwork_add_work(kwork, tmp.class, &tmp); 258 + work = kwork->add_work(kwork, tmp.class, &tmp); 259 259 if (!work) 260 260 return -1; 261 261
+140 -2
tools/perf/util/bpf_lock_contention.c
··· 2 2 #include "util/cgroup.h" 3 3 #include "util/debug.h" 4 4 #include "util/evlist.h" 5 + #include "util/hashmap.h" 5 6 #include "util/machine.h" 6 7 #include "util/map.h" 7 8 #include "util/symbol.h" ··· 13 12 #include <linux/zalloc.h> 14 13 #include <linux/string.h> 15 14 #include <bpf/bpf.h> 15 + #include <bpf/btf.h> 16 16 #include <inttypes.h> 17 17 18 18 #include "bpf_skel/lock_contention.skel.h" 19 19 #include "bpf_skel/lock_data.h" 20 20 21 21 static struct lock_contention_bpf *skel; 22 + static bool has_slab_iter; 23 + static struct hashmap slab_hash; 24 + 25 + static size_t slab_cache_hash(long key, void *ctx __maybe_unused) 26 + { 27 + return key; 28 + } 29 + 30 + static bool slab_cache_equal(long key1, long key2, void *ctx __maybe_unused) 31 + { 32 + return key1 == key2; 33 + } 34 + 35 + static void check_slab_cache_iter(struct lock_contention *con) 36 + { 37 + struct btf *btf = btf__load_vmlinux_btf(); 38 + s32 ret; 39 + 40 + hashmap__init(&slab_hash, slab_cache_hash, slab_cache_equal, /*ctx=*/NULL); 41 + 42 + if (btf == NULL) { 43 + pr_debug("BTF loading failed: %s\n", strerror(errno)); 44 + return; 45 + } 46 + 47 + ret = btf__find_by_name_kind(btf, "bpf_iter__kmem_cache", BTF_KIND_STRUCT); 48 + if (ret < 0) { 49 + bpf_program__set_autoload(skel->progs.slab_cache_iter, false); 50 + pr_debug("slab cache iterator is not available: %d\n", ret); 51 + goto out; 52 + } 53 + 54 + has_slab_iter = true; 55 + 56 + bpf_map__set_max_entries(skel->maps.slab_caches, con->map_nr_entries); 57 + out: 58 + btf__free(btf); 59 + } 60 + 61 + static void run_slab_cache_iter(void) 62 + { 63 + int fd; 64 + char buf[256]; 65 + long key, *prev_key; 66 + 67 + if (!has_slab_iter) 68 + return; 69 + 70 + fd = bpf_iter_create(bpf_link__fd(skel->links.slab_cache_iter)); 71 + if (fd < 0) { 72 + pr_debug("cannot create slab cache iter: %d\n", fd); 73 + return; 74 + } 75 + 76 + /* This will run the bpf program */ 77 + while (read(fd, buf, sizeof(buf)) > 0) 78 + continue; 79 + 80 + 
close(fd); 81 + 82 + /* Read the slab cache map and build a hash with IDs */ 83 + fd = bpf_map__fd(skel->maps.slab_caches); 84 + prev_key = NULL; 85 + while (!bpf_map_get_next_key(fd, prev_key, &key)) { 86 + struct slab_cache_data *data; 87 + 88 + data = malloc(sizeof(*data)); 89 + if (data == NULL) 90 + break; 91 + 92 + if (bpf_map_lookup_elem(fd, &key, data) < 0) 93 + break; 94 + 95 + hashmap__add(&slab_hash, data->id, data); 96 + prev_key = &key; 97 + } 98 + } 99 + 100 + static void exit_slab_cache_iter(void) 101 + { 102 + struct hashmap_entry *cur; 103 + unsigned bkt; 104 + 105 + hashmap__for_each_entry(&slab_hash, cur, bkt) 106 + free(cur->pvalue); 107 + 108 + hashmap__clear(&slab_hash); 109 + } 22 110 23 111 int lock_contention_prepare(struct lock_contention *con) 24 112 { 25 113 int i, fd; 26 - int ncpus = 1, ntasks = 1, ntypes = 1, naddrs = 1, ncgrps = 1; 114 + int ncpus = 1, ntasks = 1, ntypes = 1, naddrs = 1, ncgrps = 1, nslabs = 1; 27 115 struct evlist *evlist = con->evlist; 28 116 struct target *target = con->target; 29 117 ··· 199 109 skel->rodata->use_cgroup_v2 = 1; 200 110 } 201 111 112 + check_slab_cache_iter(con); 113 + 114 + if (con->filters->nr_slabs && has_slab_iter) { 115 + skel->rodata->has_slab = 1; 116 + nslabs = con->filters->nr_slabs; 117 + } 118 + 119 + bpf_map__set_max_entries(skel->maps.slab_filter, nslabs); 120 + 202 121 if (lock_contention_bpf__load(skel) < 0) { 203 122 pr_err("Failed to load lock-contention BPF skeleton\n"); 204 123 return -1; ··· 278 179 bpf_program__set_autoload(skel->progs.collect_lock_syms, false); 279 180 280 181 lock_contention_bpf__attach(skel); 182 + 183 + /* run the slab iterator after attaching */ 184 + run_slab_cache_iter(); 185 + 186 + if (con->filters->nr_slabs) { 187 + u8 val = 1; 188 + int cache_fd; 189 + long key, *prev_key; 190 + 191 + fd = bpf_map__fd(skel->maps.slab_filter); 192 + 193 + /* Read the slab cache map and build a hash with its address */ 194 + cache_fd = 
bpf_map__fd(skel->maps.slab_caches); 195 + prev_key = NULL; 196 + while (!bpf_map_get_next_key(cache_fd, prev_key, &key)) { 197 + struct slab_cache_data data; 198 + 199 + if (bpf_map_lookup_elem(cache_fd, &key, &data) < 0) 200 + break; 201 + 202 + for (i = 0; i < con->filters->nr_slabs; i++) { 203 + if (!strcmp(con->filters->slabs[i], data.name)) { 204 + bpf_map_update_elem(fd, &key, &val, BPF_ANY); 205 + break; 206 + } 207 + } 208 + prev_key = &key; 209 + } 210 + } 211 + 281 212 return 0; 282 213 } 283 214 ··· 476 347 477 348 if (con->aggr_mode == LOCK_AGGR_ADDR) { 478 349 int lock_fd = bpf_map__fd(skel->maps.lock_syms); 350 + struct slab_cache_data *slab_data; 479 351 480 352 /* per-process locks set upper bits of the flags */ 481 353 if (flags & LCD_F_MMAP_LOCK) ··· 493 363 if (!bpf_map_lookup_elem(lock_fd, &key->lock_addr_or_cgroup, &flags)) { 494 364 if (flags == LOCK_CLASS_RQLOCK) 495 365 return "rq_lock"; 366 + } 367 + 368 + /* look slab_hash for dynamic locks in a slab object */ 369 + if (hashmap__find(&slab_hash, flags & LCB_F_SLAB_ID_MASK, &slab_data)) { 370 + snprintf(name_buf, sizeof(name_buf), "&%s", slab_data->name); 371 + return name_buf; 496 372 } 497 373 498 374 return ""; ··· 594 458 if (con->save_callstack) { 595 459 bpf_map_lookup_elem(stack, &key.stack_id, stack_trace); 596 460 597 - if (!match_callstack_filter(machine, stack_trace)) { 461 + if (!match_callstack_filter(machine, stack_trace, con->max_stack)) { 598 462 con->nr_filtered += data.count; 599 463 goto next; 600 464 } ··· 674 538 rb_erase(node, &con->cgroups); 675 539 cgroup__put(cgrp); 676 540 } 541 + 542 + exit_slab_cache_iter(); 677 543 678 544 return 0; 679 545 }
+5
tools/perf/util/bpf_off_cpu.c
··· 100 100 const struct btf_type *t1, *t2, *t3; 101 101 u32 type_id; 102 102 103 + if (!btf) { 104 + pr_debug("Missing btf, check if CONFIG_DEBUG_INFO_BTF is enabled\n"); 105 + goto cleanup; 106 + } 107 + 103 108 type_id = btf__find_by_name_kind(btf, "btf_trace_sched_switch", 104 109 BTF_KIND_TYPEDEF); 105 110 if ((s32)type_id < 0)
+45 -1
tools/perf/util/bpf_skel/func_latency.bpf.c
··· 38 38 39 39 int enabled = 0; 40 40 41 + // stats 42 + __s64 total; 43 + __s64 count; 44 + __s64 max; 45 + __s64 min; 46 + 41 47 const volatile int has_cpu = 0; 42 48 const volatile int has_task = 0; 43 49 const volatile int use_nsec = 0; 50 + const volatile unsigned int bucket_range; 51 + const volatile unsigned int min_latency; 52 + const volatile unsigned int max_latency; 44 53 45 54 SEC("kprobe/func") 46 55 int BPF_PROG(func_begin) ··· 101 92 start = bpf_map_lookup_elem(&functime, &tid); 102 93 if (start) { 103 94 __s64 delta = bpf_ktime_get_ns() - *start; 104 - __u32 key; 95 + __u32 key = 0; 105 96 __u64 *hist; 106 97 107 98 bpf_map_delete_elem(&functime, &tid); ··· 109 100 if (delta < 0) 110 101 return 0; 111 102 103 + if (bucket_range != 0) { 104 + delta /= cmp_base; 105 + 106 + if (min_latency > 0) { 107 + if (delta > min_latency) 108 + delta -= min_latency; 109 + else 110 + goto do_lookup; 111 + } 112 + 113 + // Less than 1 unit (ms or ns), or, in the future, 114 + // than the min latency desired. 115 + if (delta > 0) { // 1st entry: [ 1 unit .. bucket_range units ) 116 + // clang 12 doesn't like s64 / u32 division 117 + key = (__u64)delta / bucket_range + 1; 118 + if (key >= NUM_BUCKET || 119 + delta >= max_latency - min_latency) 120 + key = NUM_BUCKET - 1; 121 + } 122 + 123 + delta += min_latency; 124 + goto do_lookup; 125 + } 112 126 // calculate index using delta 113 127 for (key = 0; key < (NUM_BUCKET - 1); key++) { 114 128 if (delta < (cmp_base << key)) 115 129 break; 116 130 } 117 131 132 + do_lookup: 118 133 hist = bpf_map_lookup_elem(&latency, &key); 119 134 if (!hist) 120 135 return 0; 121 136 122 137 *hist += 1; 138 + 139 + if (bucket_range == 0) 140 + delta /= cmp_base; 141 + 142 + __sync_fetch_and_add(&total, delta); 143 + __sync_fetch_and_add(&count, 1); 144 + 145 + if (delta > max) 146 + max = delta; 147 + if (delta < min) 148 + min = delta; 123 149 } 124 150 125 151 return 0;
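The new linear-bucket path in func_latency.bpf.c maps deltas below `min_latency` to bucket 0, everything at or above `max_latency` to the last bucket, and the rest into fixed-width buckets of `bucket_range` units. A userspace sketch of that index computation, assuming `delta` has already been scaled by `cmp_base` (and that `max_latency` is non-zero, which the tool side arranges when only a range is given):

```c
#define NUM_BUCKET 22	/* the skeleton's bucket count */

static unsigned int linear_bucket(long long delta, unsigned int bucket_range,
				  unsigned int min_latency,
				  unsigned int max_latency)
{
	unsigned int key = 0;

	if (min_latency > 0) {
		if (delta > min_latency)
			delta -= min_latency;
		else
			return 0;	/* below the range: first bucket */
	}

	if (delta > 0) {
		/* 1st entry: [ 1 unit .. bucket_range units ) */
		key = (unsigned long long)delta / bucket_range + 1;
		if (key >= NUM_BUCKET || delta >= max_latency - min_latency)
			key = NUM_BUCKET - 1;	/* overflow bucket */
	}

	return key;
}
```

With `--bucket-range=100` and no explicit bounds, a 250-unit delta lands in bucket 3 and anything past the configured maximum collapses into bucket 21.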
+3 -1
tools/perf/util/bpf_skel/kwork_top.bpf.c
··· 18 18 }; 19 19 20 20 #define MAX_ENTRIES 102400 21 - #define MAX_NR_CPUS 2048 21 + #ifndef MAX_NR_CPUS 22 + #define MAX_NR_CPUS 4096 23 + #endif 22 24 #define PF_KTHREAD 0x00200000 23 25 #define MAX_COMMAND_LEN 16 24 26
+92 -3
tools/perf/util/bpf_skel/lock_contention.bpf.c
··· 100 100 __uint(max_entries, 1); 101 101 } cgroup_filter SEC(".maps"); 102 102 103 + struct { 104 + __uint(type, BPF_MAP_TYPE_HASH); 105 + __uint(key_size, sizeof(long)); 106 + __uint(value_size, sizeof(__u8)); 107 + __uint(max_entries, 1); 108 + } slab_filter SEC(".maps"); 109 + 110 + struct { 111 + __uint(type, BPF_MAP_TYPE_HASH); 112 + __uint(key_size, sizeof(long)); 113 + __uint(value_size, sizeof(struct slab_cache_data)); 114 + __uint(max_entries, 1); 115 + } slab_caches SEC(".maps"); 116 + 103 117 struct rw_semaphore___old { 104 118 struct task_struct *owner; 105 119 } __attribute__((preserve_access_index)); ··· 130 116 struct rw_semaphore mmap_lock; 131 117 } __attribute__((preserve_access_index)); 132 118 119 + extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym __weak; 120 + 133 121 /* control flags */ 134 122 const volatile int has_cpu; 135 123 const volatile int has_task; 136 124 const volatile int has_type; 137 125 const volatile int has_addr; 138 126 const volatile int has_cgroup; 127 + const volatile int has_slab; 139 128 const volatile int needs_callstack; 140 129 const volatile int stack_skip; 141 130 const volatile int lock_owner; ··· 152 135 int perf_subsys_id = -1; 153 136 154 137 __u64 end_ts; 138 + 139 + __u32 slab_cache_id; 155 140 156 141 /* error stat */ 157 142 int task_fail; ··· 221 202 __u64 addr = ctx[0]; 222 203 223 204 ok = bpf_map_lookup_elem(&addr_filter, &addr); 224 - if (!ok) 205 + if (!ok && !has_slab) 225 206 return 0; 226 207 } 227 208 ··· 230 211 __u64 cgrp = get_current_cgroup_id(); 231 212 232 213 ok = bpf_map_lookup_elem(&cgroup_filter, &cgrp); 214 + if (!ok) 215 + return 0; 216 + } 217 + 218 + if (has_slab && bpf_get_kmem_cache) { 219 + __u8 *ok; 220 + __u64 addr = ctx[0]; 221 + long kmem_cache_addr; 222 + 223 + kmem_cache_addr = (long)bpf_get_kmem_cache(addr); 224 + ok = bpf_map_lookup_elem(&slab_filter, &kmem_cache_addr); 233 225 if (!ok) 234 226 return 0; 235 227 } ··· 517 487 }; 518 488 int err; 519 489 520 
- if (aggr_mode == LOCK_AGGR_ADDR) 521 - first.flags |= check_lock_type(pelem->lock, pelem->flags); 490 + if (aggr_mode == LOCK_AGGR_ADDR) { 491 + first.flags |= check_lock_type(pelem->lock, 492 + pelem->flags & LCB_F_TYPE_MASK); 493 + 494 + /* Check if it's from a slab object */ 495 + if (bpf_get_kmem_cache) { 496 + struct kmem_cache *s; 497 + struct slab_cache_data *d; 498 + 499 + s = bpf_get_kmem_cache(pelem->lock); 500 + if (s != NULL) { 501 + /* 502 + * Save the ID of the slab cache in the flags 503 + * (instead of full address) to reduce the 504 + * space in the contention_data. 505 + */ 506 + d = bpf_map_lookup_elem(&slab_caches, &s); 507 + if (d != NULL) 508 + first.flags |= d->id; 509 + } 510 + } 511 + } 522 512 523 513 err = bpf_map_update_elem(&lock_stat, &key, &first, BPF_NOEXIST); 524 514 if (err < 0) { ··· 610 560 int BPF_PROG(end_timestamp) 611 561 { 612 562 end_ts = bpf_ktime_get_ns(); 563 + return 0; 564 + } 565 + 566 + /* 567 + * bpf_iter__kmem_cache added recently so old kernels don't have it in the 568 + * vmlinux.h. But we cannot add it here since it will cause a compiler error 569 + * due to redefinition of the struct on later kernels. 570 + * 571 + * So it uses a CO-RE trick to access the member only if it has the type. 572 + * This will support both old and new kernels without compiler errors. 
573 + */ 574 + struct bpf_iter__kmem_cache___new { 575 + struct kmem_cache *s; 576 + } __attribute__((preserve_access_index)); 577 + 578 + SEC("iter/kmem_cache") 579 + int slab_cache_iter(void *ctx) 580 + { 581 + struct kmem_cache *s = NULL; 582 + struct slab_cache_data d; 583 + const char *nameptr; 584 + 585 + if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) { 586 + struct bpf_iter__kmem_cache___new *iter = ctx; 587 + 588 + s = iter->s; 589 + } 590 + 591 + if (s == NULL) 592 + return 0; 593 + 594 + nameptr = s->name; 595 + bpf_probe_read_kernel_str(d.name, sizeof(d.name), nameptr); 596 + 597 + d.id = ++slab_cache_id << LCB_F_SLAB_ID_SHIFT; 598 + if (d.id >= LCB_F_SLAB_ID_END) 599 + return 0; 600 + 601 + bpf_map_update_elem(&slab_caches, &s, &d, BPF_NOEXIST); 613 602 return 0; 614 603 } 615 604
+14 -1
tools/perf/util/bpf_skel/lock_data.h
··· 32 32 #define LCD_F_MMAP_LOCK (1U << 31) 33 33 #define LCD_F_SIGHAND_LOCK (1U << 30) 34 34 35 - #define LCB_F_MAX_FLAGS (1U << 7) 35 + #define LCB_F_SLAB_ID_SHIFT 16 36 + #define LCB_F_SLAB_ID_START (1U << 16) 37 + #define LCB_F_SLAB_ID_END (1U << 26) 38 + #define LCB_F_SLAB_ID_MASK 0x03FF0000U 39 + 40 + #define LCB_F_TYPE_MAX (1U << 7) 41 + #define LCB_F_TYPE_MASK 0x0000007FU 42 + 43 + #define SLAB_NAME_MAX 28 36 44 37 45 struct contention_data { 38 46 u64 total_time; ··· 60 52 enum lock_class_sym { 61 53 LOCK_CLASS_NONE, 62 54 LOCK_CLASS_RQLOCK, 55 + }; 56 + 57 + struct slab_cache_data { 58 + u32 id; 59 + char name[SLAB_NAME_MAX]; 63 60 }; 64 61 65 62 #endif /* UTIL_BPF_SKEL_LOCK_DATA_H */
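The lock_data.h hunk carves a 10-bit slab-cache ID (bits 16-25) out of the contention flags word, stored pre-shifted (as `slab_cache_iter()` does with `++slab_cache_id << LCB_F_SLAB_ID_SHIFT`) so it can be OR'd in next to the 7-bit lock type without collisions. A sketch of the layout, using the constants from the patch with hypothetical helper names:

```c
#define LCB_F_SLAB_ID_SHIFT 16
#define LCB_F_SLAB_ID_START (1U << 16)
#define LCB_F_SLAB_ID_END   (1U << 26)
#define LCB_F_SLAB_ID_MASK  0x03FF0000U
#define LCB_F_TYPE_MASK     0x0000007FU

/* nth slab cache -> the ID value kept in the slab_caches map and OR'd
 * into contention_data flags */
static unsigned int slab_id(unsigned int seq)
{
	return seq << LCB_F_SLAB_ID_SHIFT;
}

/* userspace side: recover the slab_hash key straight from the flags */
static unsigned int flags_slab_key(unsigned int flags)
{
	return flags & LCB_F_SLAB_ID_MASK;
}
```

Storing the ID pre-shifted means the userspace lookup in bpf_lock_contention.c can use `flags & LCB_F_SLAB_ID_MASK` directly as the hash key, with no shift on the read side.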
+8
tools/perf/util/bpf_skel/vmlinux/vmlinux.h
··· 195 195 */ 196 196 struct rq {}; 197 197 198 + struct kmem_cache { 199 + const char *name; 200 + } __attribute__((preserve_access_index)); 201 + 202 + struct bpf_iter__kmem_cache { 203 + struct kmem_cache *s; 204 + } __attribute__((preserve_access_index)); 205 + 198 206 #endif // __VMLINUX_H
+27
tools/perf/util/btf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Arnaldo Carvalho de Melo <acme@redhat.com> 4 + * 5 + * Copyright (C) 2024, Red Hat, Inc 6 + */ 7 + 8 + #include <bpf/btf.h> 9 + #include <util/btf.h> 10 + #include <string.h> 11 + 12 + const struct btf_member *__btf_type__find_member_by_name(struct btf *btf, 13 + int type_id, const char *member_name) 14 + { 15 + const struct btf_type *t = btf__type_by_id(btf, type_id); 16 + const struct btf_member *m; 17 + int i; 18 + 19 + for (i = 0, m = btf_members(t); i < btf_vlen(t); i++, m++) { 20 + const char *current_member_name = btf__name_by_offset(btf, m->name_off); 21 + 22 + if (!strcmp(current_member_name, member_name)) 23 + return m; 24 + } 25 + 26 + return NULL; 27 + }
+10
tools/perf/util/btf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __PERF_UTIL_BTF 3 + #define __PERF_UTIL_BTF 1 4 + 5 + struct btf; 6 + struct btf_member; 7 + 8 + const struct btf_member *__btf_type__find_member_by_name(struct btf *btf, 9 + int type_id, const char *member_name); 10 + #endif // __PERF_UTIL_BTF
+1 -1
tools/perf/util/cgroup.c
··· 473 473 474 474 leader = NULL; 475 475 evlist__for_each_entry(orig_list, pos) { 476 - evsel = evsel__clone(pos); 476 + evsel = evsel__clone(/*dest=*/NULL, pos); 477 477 if (evsel == NULL) 478 478 goto out_err; 479 479
+27
tools/perf/util/config.c
··· 13 13 #include <sys/param.h> 14 14 #include "cache.h" 15 15 #include "callchain.h" 16 + #include "header.h" 16 17 #include <subcmd/exec-cmd.h> 17 18 #include "util/event.h" /* proc_map_timeout */ 18 19 #include "util/hist.h" /* perf_hist_config */ ··· 35 34 36 35 #define DEBUG_CACHE_DIR ".debug" 37 36 37 + #define METRIC_ONLY_LEN 20 38 + 39 + struct perf_stat_config stat_config = { 40 + .aggr_mode = AGGR_GLOBAL, 41 + .aggr_level = MAX_CACHE_LVL + 1, 42 + .scale = true, 43 + .unit_width = 4, /* strlen("unit") */ 44 + .run_count = 1, 45 + .metric_only_len = METRIC_ONLY_LEN, 46 + .walltime_nsecs_stats = &walltime_nsecs_stats, 47 + .ru_stats = &ru_stats, 48 + .big_num = true, 49 + .ctl_fd = -1, 50 + .ctl_fd_ack = -1, 51 + .iostat_run = false, 52 + }; 38 53 39 54 char buildid_dir[MAXPATHLEN]; /* root dir for buildid, binary cache */ 40 55 ··· 470 453 symbol_conf.show_hist_headers = perf_config_bool(var, value); 471 454 472 455 return 0; 456 + } 457 + 458 + void perf_stat__set_big_num(int set) 459 + { 460 + stat_config.big_num = (set != 0); 461 + } 462 + 463 + static void perf_stat__set_no_csv_summary(int set) 464 + { 465 + stat_config.no_csv_summary = (set != 0); 473 466 } 474 467 475 468 static int perf_stat_config(const char *var, const char *value)
+1
tools/perf/util/config.h
··· 50 50 const char *var, const char *value); 51 51 void perf_config__exit(void); 52 52 void perf_config__refresh(void); 53 + int perf_config__set_variable(const char *var, const char *value); 53 54 54 55 /** 55 56 * perf_config_sections__for_each - iterate thru all the sections
+6 -4
tools/perf/util/data-convert-bt.c
··· 426 426 struct evsel *evsel, 427 427 struct perf_sample *sample) 428 428 { 429 - struct tep_format_field *common_fields = evsel->tp_format->format.common_fields; 430 - struct tep_format_field *fields = evsel->tp_format->format.fields; 429 + const struct tep_event *tp_format = evsel__tp_format(evsel); 430 + struct tep_format_field *common_fields = tp_format->format.common_fields; 431 + struct tep_format_field *fields = tp_format->format.fields; 431 432 int ret; 432 433 433 434 ret = add_tracepoint_fields_values(cw, event_class, event, ··· 1065 1064 struct evsel *evsel, 1066 1065 struct bt_ctf_event_class *class) 1067 1066 { 1068 - struct tep_format_field *common_fields = evsel->tp_format->format.common_fields; 1069 - struct tep_format_field *fields = evsel->tp_format->format.fields; 1067 + const struct tep_event *tp_format = evsel__tp_format(evsel); 1068 + struct tep_format_field *common_fields = tp_format ? tp_format->format.common_fields : NULL; 1069 + struct tep_format_field *fields = tp_format ? tp_format->format.fields : NULL; 1070 1070 int ret; 1071 1071 1072 1072 ret = add_tracepoint_fields_types(cw, common_fields, class);
+4 -4
tools/perf/util/data-convert-json.c
··· 230 230 231 231 #ifdef HAVE_LIBTRACEEVENT 232 232 if (sample->raw_data) { 233 - int i; 234 - struct tep_format_field **fields; 233 + struct tep_event *tp_format = evsel__tp_format(evsel); 234 + struct tep_format_field **fields = tp_format ? tep_event_fields(tp_format) : NULL; 235 235 236 - fields = tep_event_fields(evsel->tp_format); 237 236 if (fields) { 238 - i = 0; 237 + int i = 0; 238 + 239 239 while (fields[i]) { 240 240 struct trace_seq s; 241 241
+4 -1
tools/perf/util/disasm.c
··· 1245 1245 scnprintf(buf, buflen, "The %s BPF file has no BTF section, compile with -g or use pahole -J.", 1246 1246 dso__long_name(dso)); 1247 1247 break; 1248 + case SYMBOL_ANNOTATE_ERRNO__COULDNT_DETERMINE_FILE_TYPE: 1249 + scnprintf(buf, buflen, "Couldn't determine the file %s type.", dso__long_name(dso)); 1250 + break; 1248 1251 default: 1249 1252 scnprintf(buf, buflen, "Internal error: Invalid %d error code\n", errnum); 1250 1253 break; ··· 2292 2289 } else if (dso__binary_type(dso) == DSO_BINARY_TYPE__BPF_IMAGE) { 2293 2290 return symbol__disassemble_bpf_image(sym, args); 2294 2291 } else if (dso__binary_type(dso) == DSO_BINARY_TYPE__NOT_FOUND) { 2295 - return -1; 2292 + return SYMBOL_ANNOTATE_ERRNO__COULDNT_DETERMINE_FILE_TYPE; 2296 2293 } else if (dso__is_kcore(dso)) { 2297 2294 kce.addr = map__rip_2objdump(map, sym->start); 2298 2295 kce.kcore_filename = symfs_filename;
+2 -1
tools/perf/util/dlfilter.c
··· 234 234 struct machine *machine = maps__machine(thread__maps(al->thread)); 235 235 236 236 if (machine) 237 - script_fetch_insn(d->sample, al->thread, machine); 237 + script_fetch_insn(d->sample, al->thread, machine, 238 + /*native_arch=*/true); 238 239 } 239 240 } 240 241
+21 -9
tools/perf/util/env.c
··· 24 24 #include "bpf-utils.h" 25 25 #include <bpf/libbpf.h> 26 26 27 - void perf_env__insert_bpf_prog_info(struct perf_env *env, 27 + bool perf_env__insert_bpf_prog_info(struct perf_env *env, 28 28 struct bpf_prog_info_node *info_node) 29 29 { 30 + bool ret; 31 + 30 32 down_write(&env->bpf_progs.lock); 31 - __perf_env__insert_bpf_prog_info(env, info_node); 33 + ret = __perf_env__insert_bpf_prog_info(env, info_node); 32 34 up_write(&env->bpf_progs.lock); 35 + 36 + return ret; 33 37 } 34 38 35 - void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node) 39 + bool __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node) 36 40 { 37 41 __u32 prog_id = info_node->info_linear->info.id; 38 42 struct bpf_prog_info_node *node; ··· 54 50 p = &(*p)->rb_right; 55 51 } else { 56 52 pr_debug("duplicated bpf prog info %u\n", prog_id); 57 - return; 53 + return false; 58 54 } 59 55 } 60 56 61 57 rb_link_node(&info_node->rb_node, parent, p); 62 58 rb_insert_color(&info_node->rb_node, &env->bpf_progs.infos); 63 59 env->bpf_progs.infos_cnt++; 60 + return true; 64 61 } 65 62 66 63 struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env, ··· 331 326 332 327 for (idx = 0; idx < nr_cpus; ++idx) { 333 328 struct perf_cpu cpu = { .cpu = idx }; 329 + int core_id = cpu__get_core_id(cpu); 330 + int socket_id = cpu__get_socket_id(cpu); 331 + int die_id = cpu__get_die_id(cpu); 334 332 335 - env->cpu[idx].core_id = cpu__get_core_id(cpu); 336 - env->cpu[idx].socket_id = cpu__get_socket_id(cpu); 337 - env->cpu[idx].die_id = cpu__get_die_id(cpu); 333 + env->cpu[idx].core_id = core_id >= 0 ? core_id : -1; 334 + env->cpu[idx].socket_id = socket_id >= 0 ? socket_id : -1; 335 + env->cpu[idx].die_id = die_id >= 0 ? 
die_id : -1; 338 336 } 339 337 340 338 env->nr_cpus_avail = nr_cpus; ··· 480 472 return normalize_arch(arch_name); 481 473 } 482 474 475 + #if defined(HAVE_LIBTRACEEVENT) 476 + #include "trace/beauty/arch_errno_names.c" 477 + #endif 478 + 483 479 const char *perf_env__arch_strerrno(struct perf_env *env __maybe_unused, int err __maybe_unused) 484 480 { 485 - #if defined(HAVE_SYSCALL_TABLE_SUPPORT) && defined(HAVE_LIBTRACEEVENT) 481 + #if defined(HAVE_LIBTRACEEVENT) 486 482 if (env->arch_strerrno == NULL) 487 483 env->arch_strerrno = arch_syscalls__strerrno_function(perf_env__arch(env)); 488 484 489 485 return env->arch_strerrno ? env->arch_strerrno(err) : "no arch specific strerrno function"; 490 486 #else 491 - return "!(HAVE_SYSCALL_TABLE_SUPPORT && HAVE_LIBTRACEEVENT)"; 487 + return "!HAVE_LIBTRACEEVENT"; 492 488 #endif 493 489 } 494 490
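The env.c hunk changes `__perf_env__insert_bpf_prog_info()` to return `false` on a duplicated prog id instead of silently dropping the node, so callers (see the matching header.c hunk) can free it. A minimal sketch of that pattern, using a plain binary search tree instead of the kernel's rb-tree; the struct and function names are illustrative, not perf's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct prog_node {
	unsigned int id;	/* search key, like the BPF prog id */
	struct prog_node *left, *right;
};

/*
 * Insert @node into the tree rooted at *root. Returns false when a node
 * with the same id already exists; ownership then stays with the caller,
 * who is expected to free @node (mirroring the header.c fix).
 */
static bool prog_tree_insert(struct prog_node **root, struct prog_node *node)
{
	struct prog_node **p = root;

	while (*p) {
		if (node->id < (*p)->id)
			p = &(*p)->left;
		else if (node->id > (*p)->id)
			p = &(*p)->right;
		else
			return false;	/* duplicated id: reject */
	}
	node->left = node->right = NULL;
	*p = node;
	return true;
}
```

The caller-side idiom is then `if (!prog_tree_insert(&root, n)) free(n);`, which is exactly the leak the header.c hunk plugs.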
+2 -4
tools/perf/util/env.h
··· 56 56 57 57 typedef const char *(arch_syscalls__strerrno_t)(int err); 58 58 59 - arch_syscalls__strerrno_t *arch_syscalls__strerrno_function(const char *arch); 60 - 61 59 struct perf_env { 62 60 char *hostname; 63 61 char *os_release; ··· 174 176 int perf_env__nr_cpus_avail(struct perf_env *env); 175 177 176 178 void perf_env__init(struct perf_env *env); 177 - void __perf_env__insert_bpf_prog_info(struct perf_env *env, 179 + bool __perf_env__insert_bpf_prog_info(struct perf_env *env, 178 180 struct bpf_prog_info_node *info_node); 179 - void perf_env__insert_bpf_prog_info(struct perf_env *env, 181 + bool perf_env__insert_bpf_prog_info(struct perf_env *env, 180 182 struct bpf_prog_info_node *info_node); 181 183 struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env, 182 184 __u32 prog_id);
+265 -49
tools/perf/util/evsel.c
··· 395 395 evsel->group_pmu_name = NULL; 396 396 evsel->skippable = false; 397 397 evsel->alternate_hw_config = PERF_COUNT_HW_MAX; 398 + evsel->script_output_type = -1; // FIXME: OUTPUT_TYPE_UNSET, see builtin-script.c 398 399 } 399 400 400 401 struct evsel *evsel__new_idx(struct perf_event_attr *attr, int idx) ··· 455 454 * The assumption is that @orig is not configured nor opened yet. 456 455 * So we only care about the attributes that can be set while it's parsed. 457 456 */ 458 - struct evsel *evsel__clone(struct evsel *orig) 457 + struct evsel *evsel__clone(struct evsel *dest, struct evsel *orig) 459 458 { 460 459 struct evsel *evsel; 461 460 ··· 468 467 if (orig->bpf_obj) 469 468 return NULL; 470 469 471 - evsel = evsel__new(&orig->core.attr); 470 + if (dest) 471 + evsel = dest; 472 + else 473 + evsel = evsel__new(&orig->core.attr); 474 + 472 475 if (evsel == NULL) 473 476 return NULL; 474 477 ··· 517 512 evsel->core.leader = orig->core.leader; 518 513 519 514 evsel->max_events = orig->max_events; 520 - free((char *)evsel->unit); 521 - evsel->unit = strdup(orig->unit); 522 - if (evsel->unit == NULL) 523 - goto out_err; 524 - 515 + zfree(&evsel->unit); 516 + if (orig->unit) { 517 + evsel->unit = strdup(orig->unit); 518 + if (evsel->unit == NULL) 519 + goto out_err; 520 + } 525 521 evsel->scale = orig->scale; 526 522 evsel->snapshot = orig->snapshot; 527 523 evsel->per_pkg = orig->per_pkg; ··· 550 544 return NULL; 551 545 } 552 546 547 + static int trace_event__id(const char *sys, const char *name) 548 + { 549 + char *tp_dir = get_events_file(sys); 550 + char path[PATH_MAX]; 551 + int id, err; 552 + 553 + if (!tp_dir) 554 + return -1; 555 + 556 + scnprintf(path, PATH_MAX, "%s/%s/id", tp_dir, name); 557 + put_events_file(tp_dir); 558 + err = filename__read_int(path, &id); 559 + if (err) 560 + return err; 561 + 562 + return id; 563 + } 564 + 553 565 /* 554 566 * Returns pointer with encoded error via <linux/err.h> interface. 
555 567 */ 556 - #ifdef HAVE_LIBTRACEEVENT 557 568 struct evsel *evsel__newtp_idx(const char *sys, const char *name, int idx, bool format) 558 569 { 570 + struct perf_event_attr attr = { 571 + .type = PERF_TYPE_TRACEPOINT, 572 + .sample_type = (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME | 573 + PERF_SAMPLE_CPU | PERF_SAMPLE_PERIOD), 574 + }; 559 575 struct evsel *evsel = zalloc(perf_evsel__object.size); 560 - int err = -ENOMEM; 576 + int err = -ENOMEM, id = -1; 561 577 562 - if (evsel == NULL) { 578 + if (evsel == NULL) 563 579 goto out_err; 564 - } else { 565 - struct perf_event_attr attr = { 566 - .type = PERF_TYPE_TRACEPOINT, 567 - .sample_type = (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME | 568 - PERF_SAMPLE_CPU | PERF_SAMPLE_PERIOD), 569 - }; 570 580 571 - if (asprintf(&evsel->name, "%s:%s", sys, name) < 0) 581 + 582 + if (asprintf(&evsel->name, "%s:%s", sys, name) < 0) 583 + goto out_free; 584 + 585 + #ifdef HAVE_LIBTRACEEVENT 586 + evsel->tp_sys = strdup(sys); 587 + if (!evsel->tp_sys) 588 + goto out_free; 589 + 590 + evsel->tp_name = strdup(name); 591 + if (!evsel->tp_name) 592 + goto out_free; 593 + #endif 594 + 595 + event_attr_init(&attr); 596 + 597 + if (format) { 598 + id = trace_event__id(sys, name); 599 + if (id < 0) { 600 + err = id; 572 601 goto out_free; 573 - 574 - event_attr_init(&attr); 575 - 576 - if (format) { 577 - evsel->tp_format = trace_event__tp_format(sys, name); 578 - if (IS_ERR(evsel->tp_format)) { 579 - err = PTR_ERR(evsel->tp_format); 580 - goto out_free; 581 - } 582 - attr.config = evsel->tp_format->id; 583 - } else { 584 - attr.config = (__u64) -1; 585 602 } 586 - 587 - 588 - attr.sample_period = 1; 589 - evsel__init(evsel, &attr, idx); 590 603 } 591 - 604 + attr.config = (__u64)id; 605 + attr.sample_period = 1; 606 + evsel__init(evsel, &attr, idx); 592 607 return evsel; 593 608 594 609 out_free: 595 610 zfree(&evsel->name); 611 + #ifdef HAVE_LIBTRACEEVENT 612 + zfree(&evsel->tp_sys); 613 + zfree(&evsel->tp_name); 614 + #endif 596 615 
free(evsel); 597 616 out_err: 598 617 return ERR_PTR(err); 618 + } 619 + 620 + #ifdef HAVE_LIBTRACEEVENT 621 + struct tep_event *evsel__tp_format(struct evsel *evsel) 622 + { 623 + struct tep_event *tp_format = evsel->tp_format; 624 + 625 + if (tp_format) 626 + return tp_format; 627 + 628 + if (evsel->core.attr.type != PERF_TYPE_TRACEPOINT) 629 + return NULL; 630 + 631 + tp_format = trace_event__tp_format(evsel->tp_sys, evsel->tp_name); 632 + if (IS_ERR(tp_format)) { 633 + int err = -PTR_ERR(evsel->tp_format); 634 + 635 + pr_err("Error getting tracepoint format '%s' '%s'(%d)\n", 636 + evsel__name(evsel), strerror(err), err); 637 + return NULL; 638 + } 639 + evsel->tp_format = tp_format; 640 + return evsel->tp_format; 599 641 } 600 642 #endif 601 643 ··· 1157 1103 case EVSEL__CONFIG_TERM_AUX_OUTPUT: 1158 1104 attr->aux_output = term->val.aux_output ? 1 : 0; 1159 1105 break; 1106 + case EVSEL__CONFIG_TERM_AUX_ACTION: 1107 + /* Already applied by auxtrace */ 1108 + break; 1160 1109 case EVSEL__CONFIG_TERM_AUX_SAMPLE_SIZE: 1161 1110 /* Already applied by auxtrace */ 1162 1111 break; ··· 1644 1587 perf_thread_map__put(evsel->core.threads); 1645 1588 zfree(&evsel->group_name); 1646 1589 zfree(&evsel->name); 1590 + #ifdef HAVE_LIBTRACEEVENT 1591 + zfree(&evsel->tp_sys); 1592 + zfree(&evsel->tp_name); 1593 + #endif 1647 1594 zfree(&evsel->filter); 1648 1595 zfree(&evsel->group_pmu_name); 1649 1596 zfree(&evsel->unit); ··· 2151 2090 return err; 2152 2091 } 2153 2092 2154 - static bool has_attr_feature(struct perf_event_attr *attr, unsigned long flags) 2093 + static bool __has_attr_feature(struct perf_event_attr *attr, 2094 + struct perf_cpu cpu, unsigned long flags) 2155 2095 { 2156 - int fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, /*cpu=*/-1, 2096 + int fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, cpu.cpu, 2157 2097 /*group_fd=*/-1, flags); 2158 2098 close(fd); 2159 2099 2160 2100 if (fd < 0) { 2161 2101 attr->exclude_kernel = 1; 2162 2102 2163 - fd = 
syscall(SYS_perf_event_open, attr, /*pid=*/0, /*cpu=*/-1, 2103 + fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, cpu.cpu, 2164 2104 /*group_fd=*/-1, flags); 2165 2105 close(fd); 2166 2106 } ··· 2169 2107 if (fd < 0) { 2170 2108 attr->exclude_hv = 1; 2171 2109 2172 - fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, /*cpu=*/-1, 2110 + fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, cpu.cpu, 2173 2111 /*group_fd=*/-1, flags); 2174 2112 close(fd); 2175 2113 } ··· 2177 2115 if (fd < 0) { 2178 2116 attr->exclude_guest = 1; 2179 2117 2180 - fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, /*cpu=*/-1, 2118 + fd = syscall(SYS_perf_event_open, attr, /*pid=*/0, cpu.cpu, 2181 2119 /*group_fd=*/-1, flags); 2182 2120 close(fd); 2183 2121 } ··· 2187 2125 attr->exclude_hv = 0; 2188 2126 2189 2127 return fd >= 0; 2128 + } 2129 + 2130 + static bool has_attr_feature(struct perf_event_attr *attr, unsigned long flags) 2131 + { 2132 + struct perf_cpu cpu = {.cpu = -1}; 2133 + 2134 + return __has_attr_feature(attr, cpu, flags); 2190 2135 } 2191 2136 2192 2137 static void evsel__detect_missing_pmu_features(struct evsel *evsel) ··· 2284 2215 errno = old_errno; 2285 2216 } 2286 2217 2287 - static bool evsel__detect_missing_features(struct evsel *evsel) 2218 + static bool evsel__probe_aux_action(struct evsel *evsel, struct perf_cpu cpu) 2219 + { 2220 + struct perf_event_attr attr = evsel->core.attr; 2221 + int old_errno = errno; 2222 + 2223 + attr.disabled = 1; 2224 + attr.aux_start_paused = 1; 2225 + 2226 + if (__has_attr_feature(&attr, cpu, /*flags=*/0)) { 2227 + errno = old_errno; 2228 + return true; 2229 + } 2230 + 2231 + /* 2232 + * EOPNOTSUPP means the kernel supports the feature but the PMU does 2233 + * not, so keep that distinction if possible. 
2234 + */ 2235 + if (errno != EOPNOTSUPP) 2236 + errno = old_errno; 2237 + 2238 + return false; 2239 + } 2240 + 2241 + static void evsel__detect_missing_aux_action_feature(struct evsel *evsel, struct perf_cpu cpu) 2242 + { 2243 + static bool detection_done; 2244 + struct evsel *leader; 2245 + 2246 + /* 2247 + * Don't bother probing aux_action if it is not being used or has been 2248 + * probed before. 2249 + */ 2250 + if (!evsel->core.attr.aux_action || detection_done) 2251 + return; 2252 + 2253 + detection_done = true; 2254 + 2255 + /* 2256 + * The leader is an AUX area event. If it has failed, assume the feature 2257 + * is not supported. 2258 + */ 2259 + leader = evsel__leader(evsel); 2260 + if (evsel == leader) { 2261 + perf_missing_features.aux_action = true; 2262 + return; 2263 + } 2264 + 2265 + /* 2266 + * AUX area event with aux_action must have been opened successfully 2267 + * already, so feature is supported. 2268 + */ 2269 + if (leader->core.attr.aux_action) 2270 + return; 2271 + 2272 + if (!evsel__probe_aux_action(leader, cpu)) 2273 + perf_missing_features.aux_action = true; 2274 + } 2275 + 2276 + static bool evsel__detect_missing_features(struct evsel *evsel, struct perf_cpu cpu) 2288 2277 { 2289 2278 static bool detection_done = false; 2290 2279 struct perf_event_attr attr = { ··· 2351 2224 .disabled = 1, 2352 2225 }; 2353 2226 int old_errno; 2227 + 2228 + evsel__detect_missing_aux_action_feature(evsel, cpu); 2354 2229 2355 2230 evsel__detect_missing_pmu_features(evsel); 2356 2231 ··· 2568 2439 int idx, thread, nthreads; 2569 2440 int pid = -1, err, old_errno; 2570 2441 enum rlimit_action set_rlimit = NO_CHANGE; 2442 + struct perf_cpu cpu; 2571 2443 2572 2444 if (evsel__is_retire_lat(evsel)) 2573 2445 return tpebs_start(evsel->evlist); ··· 2606 2476 } 2607 2477 2608 2478 for (idx = start_cpu_map_idx; idx < end_cpu_map_idx; idx++) { 2479 + cpu = perf_cpu_map__cpu(cpus, idx); 2609 2480 2610 2481 for (thread = 0; thread < nthreads; thread++) { 2611 2482 
int fd, group_fd; ··· 2627 2496 2628 2497 /* Debug message used by test scripts */ 2629 2498 pr_debug2_peo("sys_perf_event_open: pid %d cpu %d group_fd %d flags %#lx", 2630 - pid, perf_cpu_map__cpu(cpus, idx).cpu, group_fd, evsel->open_flags); 2499 + pid, cpu.cpu, group_fd, evsel->open_flags); 2631 2500 2632 - fd = sys_perf_event_open(&evsel->core.attr, pid, 2633 - perf_cpu_map__cpu(cpus, idx).cpu, 2501 + fd = sys_perf_event_open(&evsel->core.attr, pid, cpu.cpu, 2634 2502 group_fd, evsel->open_flags); 2635 2503 2636 2504 FD(evsel, idx, thread) = fd; ··· 2645 2515 bpf_counter__install_pe(evsel, idx, fd); 2646 2516 2647 2517 if (unlikely(test_attr__enabled())) { 2648 - test_attr__open(&evsel->core.attr, pid, 2649 - perf_cpu_map__cpu(cpus, idx), 2518 + test_attr__open(&evsel->core.attr, pid, cpu, 2650 2519 fd, group_fd, evsel->open_flags); 2651 2520 } 2652 2521 ··· 2700 2571 if (err == -EMFILE && rlimit__increase_nofile(&set_rlimit)) 2701 2572 goto retry_open; 2702 2573 2703 - if (err == -EINVAL && evsel__detect_missing_features(evsel)) 2574 + if (err == -EINVAL && evsel__detect_missing_features(evsel, cpu)) 2704 2575 goto fallback_missing_features; 2705 2576 2706 2577 if (evsel__precise_ip_fallback(evsel)) ··· 3347 3218 #ifdef HAVE_LIBTRACEEVENT 3348 3219 struct tep_format_field *evsel__field(struct evsel *evsel, const char *name) 3349 3220 { 3350 - return tep_find_field(evsel->tp_format, name); 3221 + struct tep_event *tp_format = evsel__tp_format(evsel); 3222 + 3223 + return tp_format ? tep_find_field(tp_format, name) : NULL; 3351 3224 } 3352 3225 3353 3226 struct tep_format_field *evsel__common_field(struct evsel *evsel, const char *name) 3354 3227 { 3355 - return tep_find_common_field(evsel->tp_format, name); 3228 + struct tep_event *tp_format = evsel__tp_format(evsel); 3229 + 3230 + return tp_format ? 
tep_find_common_field(tp_format, name) : NULL; 3356 3231 } 3357 3232 3358 3233 void *evsel__rawptr(struct evsel *evsel, struct perf_sample *sample, const char *name) ··· 3581 3448 return ret ? false : true; 3582 3449 } 3583 3450 3451 + static int dump_perf_event_processes(char *msg, size_t size) 3452 + { 3453 + DIR *proc_dir; 3454 + struct dirent *proc_entry; 3455 + int printed = 0; 3456 + 3457 + proc_dir = opendir(procfs__mountpoint()); 3458 + if (!proc_dir) 3459 + return 0; 3460 + 3461 + /* Walk through the /proc directory. */ 3462 + while ((proc_entry = readdir(proc_dir)) != NULL) { 3463 + char buf[256]; 3464 + DIR *fd_dir; 3465 + struct dirent *fd_entry; 3466 + int fd_dir_fd; 3467 + 3468 + if (proc_entry->d_type != DT_DIR || 3469 + !isdigit(proc_entry->d_name[0]) || 3470 + strlen(proc_entry->d_name) > sizeof(buf) - 4) 3471 + continue; 3472 + 3473 + scnprintf(buf, sizeof(buf), "%s/fd", proc_entry->d_name); 3474 + fd_dir_fd = openat(dirfd(proc_dir), buf, O_DIRECTORY); 3475 + if (fd_dir_fd == -1) 3476 + continue; 3477 + fd_dir = fdopendir(fd_dir_fd); 3478 + if (!fd_dir) { 3479 + close(fd_dir_fd); 3480 + continue; 3481 + } 3482 + while ((fd_entry = readdir(fd_dir)) != NULL) { 3483 + ssize_t link_size; 3484 + 3485 + if (fd_entry->d_type != DT_LNK) 3486 + continue; 3487 + link_size = readlinkat(fd_dir_fd, fd_entry->d_name, buf, sizeof(buf)); 3488 + if (link_size < 0) 3489 + continue; 3490 + /* Take care as readlink doesn't null terminate the string. 
*/ 3491 + if (!strncmp(buf, "anon_inode:[perf_event]", link_size)) { 3492 + int cmdline_fd; 3493 + ssize_t cmdline_size; 3494 + 3495 + scnprintf(buf, sizeof(buf), "%s/cmdline", proc_entry->d_name); 3496 + cmdline_fd = openat(dirfd(proc_dir), buf, O_RDONLY); 3497 + if (cmdline_fd == -1) 3498 + continue; 3499 + cmdline_size = read(cmdline_fd, buf, sizeof(buf) - 1); 3500 + close(cmdline_fd); 3501 + if (cmdline_size < 0) 3502 + continue; 3503 + buf[cmdline_size] = '\0'; 3504 + for (ssize_t i = 0; i < cmdline_size; i++) { 3505 + if (buf[i] == '\0') 3506 + buf[i] = ' '; 3507 + } 3508 + 3509 + if (printed == 0) 3510 + printed += scnprintf(msg, size, "Possible processes:\n"); 3511 + 3512 + printed += scnprintf(msg + printed, size - printed, 3513 + "%s %s\n", proc_entry->d_name, buf); 3514 + break; 3515 + } 3516 + } 3517 + closedir(fd_dir); 3518 + } 3519 + closedir(proc_dir); 3520 + return printed; 3521 + } 3522 + 3584 3523 int __weak arch_evsel__open_strerror(struct evsel *evsel __maybe_unused, 3585 3524 char *msg __maybe_unused, 3586 3525 size_t size __maybe_unused) ··· 3686 3481 printed += scnprintf(msg, size, 3687 3482 "No permission to enable %s event.\n\n", evsel__name(evsel)); 3688 3483 3689 - return scnprintf(msg + printed, size - printed, 3484 + return printed + scnprintf(msg + printed, size - printed, 3690 3485 "Consider adjusting /proc/sys/kernel/perf_event_paranoid setting to open\n" 3691 3486 "access to performance monitoring and observability operations for processes\n" 3692 3487 "without CAP_PERFMON, CAP_SYS_PTRACE or CAP_SYS_ADMIN Linux capability.\n" ··· 3731 3526 return scnprintf(msg, size, 3732 3527 "%s: PMU Hardware doesn't support 'aux_output' feature", 3733 3528 evsel__name(evsel)); 3529 + if (evsel->core.attr.aux_action) 3530 + return scnprintf(msg, size, 3531 + "%s: PMU Hardware doesn't support 'aux_action' feature", 3532 + evsel__name(evsel)); 3734 3533 if (evsel->core.attr.sample_period != 0) 3735 3534 return scnprintf(msg, size, 3736 3535 "%s: PMU 
Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'", ··· 3753 3544 return scnprintf(msg, size, 3754 3545 "The PMU counters are busy/taken by another profiler.\n" 3755 3546 "We found oprofile daemon running, please stop it and try again."); 3547 + printed += scnprintf( 3548 + msg, size, 3549 + "The PMU %s counters are busy and in use by another process.\n", 3550 + evsel->pmu ? evsel->pmu->name : ""); 3551 + return printed + dump_perf_event_processes(msg + printed, size - printed); 3756 3552 break; 3757 3553 case EINVAL: 3758 3554 if (evsel->core.attr.sample_type & PERF_SAMPLE_CODE_PAGE_SIZE && perf_missing_features.code_page_size) ··· 3770 3556 return scnprintf(msg, size, "clockid feature not supported."); 3771 3557 if (perf_missing_features.clockid_wrong) 3772 3558 return scnprintf(msg, size, "wrong clockid (%d).", clockid); 3559 + if (perf_missing_features.aux_action) 3560 + return scnprintf(msg, size, "The 'aux_action' feature is not supported, update the kernel."); 3773 3561 if (perf_missing_features.aux_output) 3774 3562 return scnprintf(msg, size, "The 'aux_output' feature is not supported, update the kernel."); 3775 3563 if (!target__has_cpu(target))
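With this evsel.c change, `evsel__newtp_idx()` resolves the tracepoint config by reading the event's `id` file from tracefs rather than going through libtraceevent. A hedged sketch of the "read one integer from a pseudo-file" helper; perf's real code uses `get_events_file()` and `filename__read_int()`, so the name and error convention here are illustrative:

```c
#include <assert.h>
#include <stdio.h>

/*
 * Read a single decimal integer from @path, as trace_event__id() does
 * for .../events/<sys>/<name>/id. Returns the value, or -1 on error.
 */
static int read_int_file(const char *path)
{
	FILE *f = fopen(path, "r");
	int val;

	if (!f)
		return -1;
	if (fscanf(f, "%d", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}
```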
+7 -6
tools/perf/util/evsel.h
··· 59 59 char *group_name; 60 60 const char *group_pmu_name; 61 61 #ifdef HAVE_LIBTRACEEVENT 62 + char *tp_sys; 63 + char *tp_name; 62 64 struct tep_event *tp_format; 63 65 #endif 64 66 char *filter; ··· 121 119 bool default_metricgroup; /* A member of the Default metricgroup */ 122 120 struct hashmap *per_pkg_mask; 123 121 int err; 122 + int script_output_type; 124 123 struct { 125 124 evsel__sb_cb_t *cb; 126 125 void *data; ··· 208 205 bool weight_struct; 209 206 bool read_lost; 210 207 bool branch_counters; 208 + bool aux_action; 211 209 bool inherit_sample_read; 212 210 }; 213 211 ··· 245 241 return evsel__new_idx(attr, 0); 246 242 } 247 243 248 - struct evsel *evsel__clone(struct evsel *orig); 244 + struct evsel *evsel__clone(struct evsel *dest, struct evsel *orig); 249 245 250 246 int copy_config_terms(struct list_head *dst, struct list_head *src); 251 247 void free_config_terms(struct list_head *config_terms); 252 248 253 249 254 - #ifdef HAVE_LIBTRACEEVENT 255 - struct evsel *evsel__newtp_idx(const char *sys, const char *name, int idx, bool format); 256 - 257 250 /* 258 251 * Returns pointer with encoded error via <linux/err.h> interface. 259 252 */ 253 + struct evsel *evsel__newtp_idx(const char *sys, const char *name, int idx, bool format); 260 254 static inline struct evsel *evsel__newtp(const char *sys, const char *name) 261 255 { 262 256 return evsel__newtp_idx(sys, name, 0, true); 263 257 } 264 - #endif 265 258 266 259 #ifdef HAVE_LIBTRACEEVENT 267 - struct tep_event *event_format__new(const char *sys, const char *name); 260 + struct tep_event *evsel__tp_format(struct evsel *evsel); 268 261 #endif 269 262 270 263 void evsel__init(struct evsel *evsel, struct perf_event_attr *attr, int idx);
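The new `evsel__clone(dest, orig)` signature lets callers clone into an existing evsel instead of always allocating one. A reduced sketch of the "allocate unless the caller supplies a destination" convention; the struct here is made up for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

struct counter {
	int type;
	unsigned long long config;
};

/* Clone @orig into @dest if given, otherwise allocate a new counter. */
static struct counter *counter_clone(struct counter *dest,
				     const struct counter *orig)
{
	struct counter *c = dest ? dest : malloc(sizeof(*c));

	if (!c)
		return NULL;
	memcpy(c, orig, sizeof(*c));
	return c;
}
```

Passing NULL keeps the old allocate-and-clone behavior; passing a destination lets callers reuse preallocated storage.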
+1
tools/perf/util/evsel_config.h
··· 25 25 EVSEL__CONFIG_TERM_BRANCH, 26 26 EVSEL__CONFIG_TERM_PERCORE, 27 27 EVSEL__CONFIG_TERM_AUX_OUTPUT, 28 + EVSEL__CONFIG_TERM_AUX_ACTION, 28 29 EVSEL__CONFIG_TERM_AUX_SAMPLE_SIZE, 29 30 EVSEL__CONFIG_TERM_CFG_CHG, 30 31 };
+3 -1
tools/perf/util/evsel_fprintf.c
··· 81 81 #ifdef HAVE_LIBTRACEEVENT 82 82 if (details->trace_fields) { 83 83 struct tep_format_field *field; 84 + const struct tep_event *tp_format; 84 85 85 86 if (evsel->core.attr.type != PERF_TYPE_TRACEPOINT) { 86 87 printed += comma_fprintf(fp, &first, " (not a tracepoint)"); 87 88 goto out; 88 89 } 89 90 90 - field = evsel->tp_format->format.fields; 91 + tp_format = evsel__tp_format(evsel); 92 + field = tp_format ? tp_format->format.fields : NULL; 91 93 if (field == NULL) { 92 94 printed += comma_fprintf(fp, &first, " (no trace field)"); 93 95 goto out;
+1 -4
tools/perf/util/expr.c
··· 285 285 { 286 286 struct expr_parse_ctx *ctx; 287 287 288 - ctx = malloc(sizeof(struct expr_parse_ctx)); 288 + ctx = calloc(1, sizeof(struct expr_parse_ctx)); 289 289 if (!ctx) 290 290 return NULL; 291 291 ··· 294 294 free(ctx); 295 295 return NULL; 296 296 } 297 - ctx->sctx.user_requested_cpu_list = NULL; 298 - ctx->sctx.runtime = 0; 299 - ctx->sctx.system_wide = false; 300 297 301 298 return ctx; 302 299 }
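The expr.c hunk swaps `malloc()` plus field-by-field zeroing for `calloc()`, so any member later added to `expr_parse_ctx` starts out zeroed without further maintenance. A small illustration of why that matters; the struct below is invented for the example:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct parse_ctx {
	char *user_cpu_list;	/* would need an explicit = NULL after malloc() */
	int runtime;		/* would need an explicit = 0 */
	int system_wide;	/* would need an explicit = 0 */
};

static struct parse_ctx *parse_ctx_new(void)
{
	/* calloc() zeroes every member, present and future */
	return calloc(1, sizeof(struct parse_ctx));
}
```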
+7 -2
tools/perf/util/ftrace.h
··· 7 7 8 8 struct evlist; 9 9 struct hashamp; 10 + struct stats; 10 11 11 12 struct perf_ftrace { 12 13 struct evlist *evlist; ··· 21 20 unsigned long percpu_buffer_size; 22 21 bool inherit; 23 22 bool use_nsec; 23 + unsigned int bucket_range; 24 + unsigned int min_latency; 25 + unsigned int max_latency; 24 26 int graph_depth; 25 27 int func_stack_trace; 26 28 int func_irq_info; ··· 47 43 int perf_ftrace__latency_start_bpf(struct perf_ftrace *ftrace); 48 44 int perf_ftrace__latency_stop_bpf(struct perf_ftrace *ftrace); 49 45 int perf_ftrace__latency_read_bpf(struct perf_ftrace *ftrace, 50 - int buckets[]); 46 + int buckets[], struct stats *stats); 51 47 int perf_ftrace__latency_cleanup_bpf(struct perf_ftrace *ftrace); 52 48 53 49 #else /* !HAVE_BPF_SKEL */ ··· 72 68 73 69 static inline int 74 70 perf_ftrace__latency_read_bpf(struct perf_ftrace *ftrace __maybe_unused, 75 - int buckets[] __maybe_unused) 71 + int buckets[] __maybe_unused, 72 + struct stats *stats __maybe_unused) 76 73 { 77 74 return -1; 78 75 }
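The ftrace.h hunk adds `bucket_range`, `min_latency` and `max_latency` for the linear buckets of 'perf ftrace latency'. A sketch of how a latency sample could map to a linear bucket under those options; the clamping at both ends is an assumption based on the option names, not lifted from perf's BPF code:

```c
#include <assert.h>

/*
 * Map @lat into one of @nr_buckets linear buckets of width @range
 * starting at @min. Samples below @min fall into bucket 0; samples at
 * or above the top end go into the last bucket.
 */
static unsigned int linear_bucket(unsigned int lat, unsigned int min,
				  unsigned int range, unsigned int nr_buckets)
{
	unsigned int idx;

	if (lat < min)
		return 0;
	idx = (lat - min) / range;
	return idx >= nr_buckets ? nr_buckets - 1 : idx;
}
```

Compare with the default exponential buckets, where each bucket covers twice the range of the previous one.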
+2 -2
tools/perf/util/generate-cmdlist.sh
··· 38 38 done 39 39 echo "#endif /* HAVE_LIBELF_SUPPORT */" 40 40 41 - echo "#if defined(HAVE_LIBTRACEEVENT) && (defined(HAVE_LIBAUDIT_SUPPORT) || defined(HAVE_SYSCALL_TABLE_SUPPORT))" 41 + echo "#if defined(HAVE_LIBTRACEEVENT)" 42 42 sed -n -e 's/^perf-\([^ ]*\)[ ].* audit*/\1/p' command-list.txt | 43 43 sort | 44 44 while read cmd ··· 51 51 p 52 52 }' "Documentation/perf-$cmd.txt" 53 53 done 54 - echo "#endif /* HAVE_LIBTRACEEVENT && (HAVE_LIBAUDIT_SUPPORT || HAVE_SYSCALL_TABLE_SUPPORT) */" 54 + echo "#endif /* HAVE_LIBTRACEEVENT */" 55 55 56 56 echo "#ifdef HAVE_LIBTRACEEVENT" 57 57 sed -n -e 's/^perf-\([^ ]*\)[ ].* traceevent.*/\1/p' command-list.txt |
+6 -2
tools/perf/util/header.c
··· 3158 3158 /* after reading from file, translate offset to address */ 3159 3159 bpil_offs_to_addr(info_linear); 3160 3160 info_node->info_linear = info_linear; 3161 - __perf_env__insert_bpf_prog_info(env, info_node); 3161 + if (!__perf_env__insert_bpf_prog_info(env, info_node)) { 3162 + free(info_linear); 3163 + free(info_node); 3164 + } 3162 3165 } 3163 3166 3164 3167 up_write(&env->bpf_progs.lock); ··· 3208 3205 if (__do_read(ff, node->data, data_size)) 3209 3206 goto out; 3210 3207 3211 - __perf_env__insert_btf(env, node); 3208 + if (!__perf_env__insert_btf(env, node)) 3209 + free(node); 3212 3210 node = NULL; 3213 3211 } 3214 3212
+54 -62
tools/perf/util/hist.c
··· 32 32 #include <linux/time64.h> 33 33 #include <linux/zalloc.h> 34 34 35 + static int64_t hist_entry__cmp(struct hist_entry *left, struct hist_entry *right); 36 + static int64_t hist_entry__collapse(struct hist_entry *left, struct hist_entry *right); 37 + 35 38 static bool hists__filter_entry_by_dso(struct hists *hists, 36 39 struct hist_entry *he); 37 40 static bool hists__filter_entry_by_thread(struct hists *hists, ··· 1295 1292 return err; 1296 1293 } 1297 1294 1298 - int64_t 1299 - hist_entry__cmp(struct hist_entry *left, struct hist_entry *right) 1295 + static int64_t 1296 + hist_entry__cmp_impl(struct perf_hpp_list *hpp_list, struct hist_entry *left, 1297 + struct hist_entry *right, unsigned long fn_offset, 1298 + bool ignore_dynamic, bool ignore_skipped) 1300 1299 { 1301 1300 struct hists *hists = left->hists; 1302 1301 struct perf_hpp_fmt *fmt; 1303 - int64_t cmp = 0; 1302 + perf_hpp_fmt_cmp_t *fn; 1303 + int64_t cmp; 1304 1304 1305 - hists__for_each_sort_list(hists, fmt) { 1306 - if (perf_hpp__is_dynamic_entry(fmt) && 1305 + /* 1306 + * Never collapse filtered and non-filtered entries. 1307 + * Note this is not the same as having an extra (invisible) fmt 1308 + * that corresponds to the filtered status. 
1309 + */ 1310 + cmp = (int64_t)!!left->filtered - (int64_t)!!right->filtered; 1311 + if (cmp) 1312 + return cmp; 1313 + 1314 + perf_hpp_list__for_each_sort_list(hpp_list, fmt) { 1315 + if (ignore_dynamic && perf_hpp__is_dynamic_entry(fmt) && 1307 1316 !perf_hpp__defined_dynamic_entry(fmt, hists)) 1308 1317 continue; 1309 1318 1310 - cmp = fmt->cmp(fmt, left, right); 1319 + if (ignore_skipped && perf_hpp__should_skip(fmt, hists)) 1320 + continue; 1321 + 1322 + fn = (void *)fmt + fn_offset; 1323 + cmp = (*fn)(fmt, left, right); 1311 1324 if (cmp) 1312 1325 break; 1313 1326 } ··· 1332 1313 } 1333 1314 1334 1315 int64_t 1316 + hist_entry__cmp(struct hist_entry *left, struct hist_entry *right) 1317 + { 1318 + return hist_entry__cmp_impl(left->hists->hpp_list, left, right, 1319 + offsetof(struct perf_hpp_fmt, cmp), true, false); 1320 + } 1321 + 1322 + static int64_t 1323 + hist_entry__sort(struct hist_entry *left, struct hist_entry *right) 1324 + { 1325 + return hist_entry__cmp_impl(left->hists->hpp_list, left, right, 1326 + offsetof(struct perf_hpp_fmt, sort), false, true); 1327 + } 1328 + 1329 + int64_t 1335 1330 hist_entry__collapse(struct hist_entry *left, struct hist_entry *right) 1336 1331 { 1337 - struct hists *hists = left->hists; 1338 - struct perf_hpp_fmt *fmt; 1339 - int64_t cmp = 0; 1332 + return hist_entry__cmp_impl(left->hists->hpp_list, left, right, 1333 + offsetof(struct perf_hpp_fmt, collapse), true, false); 1334 + } 1340 1335 1341 - hists__for_each_sort_list(hists, fmt) { 1342 - if (perf_hpp__is_dynamic_entry(fmt) && 1343 - !perf_hpp__defined_dynamic_entry(fmt, hists)) 1344 - continue; 1345 - 1346 - cmp = fmt->collapse(fmt, left, right); 1347 - if (cmp) 1348 - break; 1349 - } 1350 - 1351 - return cmp; 1336 + static int64_t 1337 + hist_entry__collapse_hierarchy(struct perf_hpp_list *hpp_list, 1338 + struct hist_entry *left, 1339 + struct hist_entry *right) 1340 + { 1341 + return hist_entry__cmp_impl(hpp_list, left, right, 1342 + offsetof(struct 
perf_hpp_fmt, collapse), false, false); 1352 1343 } 1353 1344 1354 1345 void hist_entry__delete(struct hist_entry *he) ··· 1532 1503 while (*p != NULL) { 1533 1504 parent = *p; 1534 1505 iter = rb_entry(parent, struct hist_entry, rb_node_in); 1535 - 1536 - cmp = 0; 1537 - perf_hpp_list__for_each_sort_list(hpp_list, fmt) { 1538 - cmp = fmt->collapse(fmt, iter, he); 1539 - if (cmp) 1540 - break; 1541 - } 1542 - 1506 + cmp = hist_entry__collapse_hierarchy(hpp_list, iter, he); 1543 1507 if (!cmp) { 1544 1508 he_stat__add_stat(&iter->stat, &he->stat); 1545 1509 return iter; ··· 1750 1728 ui_progress__update(prog, 1); 1751 1729 } 1752 1730 return 0; 1753 - } 1754 - 1755 - static int64_t hist_entry__sort(struct hist_entry *a, struct hist_entry *b) 1756 - { 1757 - struct hists *hists = a->hists; 1758 - struct perf_hpp_fmt *fmt; 1759 - int64_t cmp = 0; 1760 - 1761 - hists__for_each_sort_list(hists, fmt) { 1762 - if (perf_hpp__should_skip(fmt, a->hists)) 1763 - continue; 1764 - 1765 - cmp = fmt->sort(fmt, a, b); 1766 - if (cmp) 1767 - break; 1768 - } 1769 - 1770 - return cmp; 1771 1731 } 1772 1732 1773 1733 static void hists__reset_filter_stats(struct hists *hists) ··· 2453 2449 struct rb_node **p; 2454 2450 struct rb_node *parent = NULL; 2455 2451 struct hist_entry *he; 2456 - struct perf_hpp_fmt *fmt; 2457 2452 bool leftmost = true; 2458 2453 2459 2454 p = &root->rb_root.rb_node; 2460 2455 while (*p != NULL) { 2461 - int64_t cmp = 0; 2456 + int64_t cmp; 2462 2457 2463 2458 parent = *p; 2464 2459 he = rb_entry(parent, struct hist_entry, rb_node_in); 2465 - 2466 - perf_hpp_list__for_each_sort_list(he->hpp_list, fmt) { 2467 - cmp = fmt->collapse(fmt, he, pair); 2468 - if (cmp) 2469 - break; 2470 - } 2460 + cmp = hist_entry__collapse_hierarchy(he->hpp_list, he, pair); 2471 2461 if (!cmp) 2472 2462 goto out; 2473 2463 ··· 2519 2521 2520 2522 while (n) { 2521 2523 struct hist_entry *iter; 2522 - struct perf_hpp_fmt *fmt; 2523 - int64_t cmp = 0; 2524 + int64_t cmp; 2524 2525 2525 
2526 iter = rb_entry(n, struct hist_entry, rb_node_in); 2526 - perf_hpp_list__for_each_sort_list(he->hpp_list, fmt) { 2527 - cmp = fmt->collapse(fmt, iter, he); 2528 - if (cmp) 2529 - break; 2530 - } 2531 - 2527 + cmp = hist_entry__collapse_hierarchy(he->hpp_list, iter, he); 2532 2528 if (cmp < 0) 2533 2529 n = n->rb_left; 2534 2530 else if (cmp > 0)
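The hist.c refactor folds the three near-identical cmp/collapse/sort loops into one `hist_entry__cmp_impl()` that selects the comparator member by its byte offset inside `struct perf_hpp_fmt` (`offsetof(struct perf_hpp_fmt, cmp)` and friends). A reduced sketch of that offset-based member dispatch; types and names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

struct item { int key; int weight; };

typedef int (*cmp_fn_t)(const struct item *, const struct item *);

struct fmt {
	cmp_fn_t cmp;	/* identity-style comparison */
	cmp_fn_t sort;	/* display-order comparison */
};

static int cmp_key(const struct item *a, const struct item *b)
{
	return a->key - b->key;
}

static int cmp_weight(const struct item *a, const struct item *b)
{
	return b->weight - a->weight;	/* heavier sorts first */
}

/* Fetch the comparator stored at @fn_offset inside @f and apply it. */
static int compare_by_offset(const struct fmt *f, size_t fn_offset,
			     const struct item *a, const struct item *b)
{
	cmp_fn_t fn = *(const cmp_fn_t *)((const char *)f + fn_offset);

	return fn(a, b);
}
```

One driver plus `offsetof()` replaces N copies of the loop, each hard-wired to a different member, which is exactly the duplication the hunk removes.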
+6 -8
tools/perf/util/hist.h
··· 342 342 struct perf_hpp; 343 343 struct perf_hpp_fmt; 344 344 345 - int64_t hist_entry__cmp(struct hist_entry *left, struct hist_entry *right); 346 - int64_t hist_entry__collapse(struct hist_entry *left, struct hist_entry *right); 347 345 int hist_entry__transaction_len(void); 348 346 int hist_entry__sort_snprintf(struct hist_entry *he, char *bf, size_t size, 349 347 struct hists *hists); ··· 450 452 bool skip; 451 453 }; 452 454 455 + typedef int64_t (*perf_hpp_fmt_cmp_t)( 456 + struct perf_hpp_fmt *, struct hist_entry *, struct hist_entry *); 457 + 453 458 struct perf_hpp_fmt { 454 459 const char *name; 455 460 int (*header)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, ··· 464 463 struct hist_entry *he); 465 464 int (*entry)(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp, 466 465 struct hist_entry *he); 467 - int64_t (*cmp)(struct perf_hpp_fmt *fmt, 468 - struct hist_entry *a, struct hist_entry *b); 469 - int64_t (*collapse)(struct perf_hpp_fmt *fmt, 470 - struct hist_entry *a, struct hist_entry *b); 471 - int64_t (*sort)(struct perf_hpp_fmt *fmt, 472 - struct hist_entry *a, struct hist_entry *b); 466 + perf_hpp_fmt_cmp_t cmp; 467 + perf_hpp_fmt_cmp_t collapse; 468 + perf_hpp_fmt_cmp_t sort; 473 469 bool (*equal)(struct perf_hpp_fmt *a, struct perf_hpp_fmt *b); 474 470 void (*free)(struct perf_hpp_fmt *fmt); 475 471
+13 -5
tools/perf/util/intel-pt-decoder/Build
··· 7 7 $(call rule_mkdir) 8 8 @$(call echo-cmd,gen)$(AWK) -f $(inat_tables_script) $(inat_tables_maps) > $@ || rm -f $@ 9 9 10 - # Busybox's diff doesn't have -I, avoid warning in the case 10 + ifeq ($(SRCARCH),x86) 11 + perf-util-y += inat.o insn.o 12 + else 13 + perf-util-$(CONFIG_AUXTRACE) += inat.o insn.o 14 + endif 11 15 12 - $(OUTPUT)util/intel-pt-decoder/intel-pt-insn-decoder.o: util/intel-pt-decoder/intel-pt-insn-decoder.c $(OUTPUT)util/intel-pt-decoder/inat-tables.c 16 + $(OUTPUT)util/intel-pt-decoder/inat.o: $(srctree)/tools/arch/x86/lib/inat.c $(OUTPUT)util/intel-pt-decoder/inat-tables.c 13 17 $(call rule_mkdir) 14 18 $(call if_changed_dep,cc_o_c) 15 19 16 - CFLAGS_intel-pt-insn-decoder.o += -I$(OUTPUT)util/intel-pt-decoder 20 + CFLAGS_inat.o += -I$(OUTPUT)util/intel-pt-decoder 21 + 22 + $(OUTPUT)util/intel-pt-decoder/insn.o: $(srctree)/tools/arch/x86/lib/insn.c 23 + $(call rule_mkdir) 24 + $(call if_changed_dep,cc_o_c) 17 25 18 26 ifeq ($(CC_NO_CLANG), 1) 19 - CFLAGS_intel-pt-insn-decoder.o += -Wno-override-init 27 + CFLAGS_insn.o += -Wno-override-init 20 28 endif 21 29 22 - CFLAGS_intel-pt-insn-decoder.o += -Wno-packed 30 + CFLAGS_insn.o += -Wno-packed
-3
tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.c
··· 11 11 #include <byteswap.h> 12 12 #include "../../../arch/x86/include/asm/insn.h" 13 13 14 - #include "../../../arch/x86/lib/inat.c" 15 - #include "../../../arch/x86/lib/insn.c" 16 - 17 14 #include "event.h" 18 15 19 16 #include "intel-pt-insn-decoder.h"
+12 -3
tools/perf/util/jitdump.c
··· 737 737 * as captured in the RECORD_MMAP record 738 738 */ 739 739 static int 740 - jit_detect(const char *mmap_name, pid_t pid, struct nsinfo *nsi) 740 + jit_detect(const char *mmap_name, pid_t pid, struct nsinfo *nsi, bool *in_pidns) 741 741 { 742 742 char *p; 743 743 char *end = NULL; ··· 773 773 if (!end) 774 774 return -1; 775 775 776 + *in_pidns = pid == nsinfo__nstgid(nsi); 776 777 /* 777 778 * pid does not match mmap pid 778 779 * pid==0 in system-wide mode (synthesized) 780 + * 781 + * If the pid in the file name is equal to the nstgid, then 782 + * the agent ran inside a container and perf outside the 783 + * container, so record it for further use in jit_inject(). 779 784 */ 780 - if (pid && pid2 != nsinfo__nstgid(nsi)) 785 + if (pid && !(pid2 == pid || *in_pidns)) 781 786 return -1; 782 787 /* 783 788 * validate suffix ··· 835 830 struct nsinfo *nsi; 836 831 struct evsel *first; 837 832 struct jit_buf_desc jd; 833 + bool in_pidns = false; 838 834 int ret; 839 835 840 836 thread = machine__findnew_thread(machine, pid, tid); ··· 850 844 /* 851 845 * first, detect marker mmap (i.e., the jitdump mmap) 852 846 */ 853 - if (jit_detect(filename, pid, nsi)) { 847 + if (jit_detect(filename, pid, nsi, &in_pidns)) { 854 848 nsinfo__put(nsi); 855 849 856 850 /* ··· 871 865 jd.output = output; 872 866 jd.machine = machine; 873 867 jd.nsi = nsi; 868 + 869 + if (in_pidns) 870 + nsinfo__set_in_pidns(nsi); 874 871 875 872 /* 876 873 * track sample_type to compute id_all layout
+70
tools/perf/util/kvm-stat.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include "debug.h" 3 + #include "evsel.h" 4 + #include "kvm-stat.h" 5 + 6 + #if defined(HAVE_KVM_STAT_SUPPORT) && defined(HAVE_LIBTRACEEVENT) 7 + 8 + bool kvm_exit_event(struct evsel *evsel) 9 + { 10 + return evsel__name_is(evsel, kvm_exit_trace); 11 + } 12 + 13 + void exit_event_get_key(struct evsel *evsel, 14 + struct perf_sample *sample, 15 + struct event_key *key) 16 + { 17 + key->info = 0; 18 + key->key = evsel__intval(evsel, sample, kvm_exit_reason); 19 + } 20 + 21 + 22 + bool exit_event_begin(struct evsel *evsel, 23 + struct perf_sample *sample, struct event_key *key) 24 + { 25 + if (kvm_exit_event(evsel)) { 26 + exit_event_get_key(evsel, sample, key); 27 + return true; 28 + } 29 + 30 + return false; 31 + } 32 + 33 + bool kvm_entry_event(struct evsel *evsel) 34 + { 35 + return evsel__name_is(evsel, kvm_entry_trace); 36 + } 37 + 38 + bool exit_event_end(struct evsel *evsel, 39 + struct perf_sample *sample __maybe_unused, 40 + struct event_key *key __maybe_unused) 41 + { 42 + return kvm_entry_event(evsel); 43 + } 44 + 45 + static const char *get_exit_reason(struct perf_kvm_stat *kvm, 46 + struct exit_reasons_table *tbl, 47 + u64 exit_code) 48 + { 49 + while (tbl->reason != NULL) { 50 + if (tbl->exit_code == exit_code) 51 + return tbl->reason; 52 + tbl++; 53 + } 54 + 55 + pr_err("unknown kvm exit code:%lld on %s\n", 56 + (unsigned long long)exit_code, kvm->exit_reasons_isa); 57 + return "UNKNOWN"; 58 + } 59 + 60 + void exit_event_decode_key(struct perf_kvm_stat *kvm, 61 + struct event_key *key, 62 + char *decode) 63 + { 64 + const char *exit_reason = get_exit_reason(kvm, key->exit_reasons, 65 + key->key); 66 + 67 + scnprintf(decode, KVM_EVENT_NAME_LEN, "%s", exit_reason); 68 + } 69 + 70 + #endif
+3
tools/perf/util/kvm-stat.h
··· 115 115 struct kvm_events_ops *ops; 116 116 }; 117 117 118 + #if defined(HAVE_KVM_STAT_SUPPORT) && defined(HAVE_LIBTRACEEVENT) 119 + 118 120 void exit_event_get_key(struct evsel *evsel, 119 121 struct perf_sample *sample, 120 122 struct event_key *key); ··· 129 127 void exit_event_decode_key(struct perf_kvm_stat *kvm, 130 128 struct event_key *key, 131 129 char *decode); 130 + #endif 132 131 133 132 bool kvm_exit_event(struct evsel *evsel); 134 133 bool kvm_entry_event(struct evsel *evsel);
+5 -2
tools/perf/util/kwork.h
··· 1 1 #ifndef PERF_UTIL_KWORK_H 2 2 #define PERF_UTIL_KWORK_H 3 3 4 + #include "perf.h" 4 5 #include "util/tool.h" 5 6 #include "util/time-utils.h" 6 7 ··· 252 251 * perf kwork top data 253 252 */ 254 253 struct kwork_top_stat top_stat; 255 - }; 256 254 257 - struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork, 255 + /* Add work callback. */ 256 + struct kwork_work *(*add_work)(struct perf_kwork *kwork, 258 257 struct kwork_class *class, 259 258 struct kwork_work *key); 259 + 260 + }; 260 261 261 262 #ifdef HAVE_BPF_SKEL 262 263
-1
tools/perf/util/llvm-c-helpers.cpp
··· 18 18 extern "C" { 19 19 #include <linux/zalloc.h> 20 20 } 21 - #include "symbol_conf.h" 22 21 #include "llvm-c-helpers.h" 23 22 24 23 extern "C"
+143
tools/perf/util/lock-contention.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include "debug.h" 3 + #include "env.h" 4 + #include "lock-contention.h" 5 + #include "machine.h" 6 + #include "symbol.h" 7 + 8 + #include <limits.h> 9 + #include <string.h> 10 + 11 + #include <linux/hash.h> 12 + #include <linux/zalloc.h> 13 + 14 + #define __lockhashfn(key) hash_long((unsigned long)key, LOCKHASH_BITS) 15 + #define lockhashentry(key) (lockhash_table + __lockhashfn((key))) 16 + 17 + struct callstack_filter { 18 + struct list_head list; 19 + char name[]; 20 + }; 21 + 22 + static LIST_HEAD(callstack_filters); 23 + struct hlist_head *lockhash_table; 24 + 25 + int parse_call_stack(const struct option *opt __maybe_unused, const char *str, 26 + int unset __maybe_unused) 27 + { 28 + char *s, *tmp, *tok; 29 + int ret = 0; 30 + 31 + s = strdup(str); 32 + if (s == NULL) 33 + return -1; 34 + 35 + for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) { 36 + struct callstack_filter *entry; 37 + 38 + entry = malloc(sizeof(*entry) + strlen(tok) + 1); 39 + if (entry == NULL) { 40 + pr_err("Memory allocation failure\n"); 41 + free(s); 42 + return -1; 43 + } 44 + 45 + strcpy(entry->name, tok); 46 + list_add_tail(&entry->list, &callstack_filters); 47 + } 48 + 49 + free(s); 50 + return ret; 51 + } 52 + 53 + bool needs_callstack(void) 54 + { 55 + return !list_empty(&callstack_filters); 56 + } 57 + 58 + struct lock_stat *lock_stat_find(u64 addr) 59 + { 60 + struct hlist_head *entry = lockhashentry(addr); 61 + struct lock_stat *ret; 62 + 63 + hlist_for_each_entry(ret, entry, hash_entry) { 64 + if (ret->addr == addr) 65 + return ret; 66 + } 67 + return NULL; 68 + } 69 + 70 + struct lock_stat *lock_stat_findnew(u64 addr, const char *name, int flags) 71 + { 72 + struct hlist_head *entry = lockhashentry(addr); 73 + struct lock_stat *ret, *new; 74 + 75 + hlist_for_each_entry(ret, entry, hash_entry) { 76 + if (ret->addr == addr) 77 + return ret; 78 + } 79 + 80 + new = zalloc(sizeof(struct lock_stat)); 81 + if 
(!new) 82 + goto alloc_failed; 83 + 84 + new->addr = addr; 85 + new->name = strdup(name); 86 + if (!new->name) { 87 + free(new); 88 + goto alloc_failed; 89 + } 90 + 91 + new->flags = flags; 92 + new->wait_time_min = ULLONG_MAX; 93 + 94 + hlist_add_head(&new->hash_entry, entry); 95 + return new; 96 + 97 + alloc_failed: 98 + pr_err("memory allocation failed\n"); 99 + return NULL; 100 + } 101 + 102 + bool match_callstack_filter(struct machine *machine, u64 *callstack, int max_stack_depth) 103 + { 104 + struct map *kmap; 105 + struct symbol *sym; 106 + u64 ip; 107 + const char *arch = perf_env__arch(machine->env); 108 + 109 + if (list_empty(&callstack_filters)) 110 + return true; 111 + 112 + for (int i = 0; i < max_stack_depth; i++) { 113 + struct callstack_filter *filter; 114 + 115 + /* 116 + * In powerpc, the callchain saved by kernel always includes 117 + * first three entries as the NIP (next instruction pointer), 118 + * LR (link register), and the contents of LR save area in the 119 + * second stack frame. In certain scenarios its possible to have 120 + * invalid kernel instruction addresses in either LR or the second 121 + * stack frame's LR. In that case, kernel will store that address as 122 + * zero. 123 + * 124 + * The below check will continue to look into callstack, 125 + * incase first or second callstack index entry has 0 126 + * address for powerpc. 127 + */ 128 + if (!callstack || (!callstack[i] && (strcmp(arch, "powerpc") || 129 + (i != 1 && i != 2)))) 130 + break; 131 + 132 + ip = callstack[i]; 133 + sym = machine__find_kernel_symbol(machine, ip, &kmap); 134 + if (sym == NULL) 135 + continue; 136 + 137 + list_for_each_entry(filter, &callstack_filters, list) { 138 + if (strstr(sym->name, filter->name)) 139 + return true; 140 + } 141 + } 142 + return false; 143 + }
+16 -4
tools/perf/util/lock-contention.h
··· 10 10 int nr_addrs; 11 11 int nr_syms; 12 12 int nr_cgrps; 13 + int nr_slabs; 13 14 unsigned int *types; 14 15 unsigned long *addrs; 15 16 char **syms; 16 17 u64 *cgrps; 18 + char **slabs; 17 19 }; 18 20 19 21 struct lock_stat { ··· 69 67 */ 70 68 #define MAX_LOCK_DEPTH 48 71 69 72 - struct lock_stat *lock_stat_find(u64 addr); 73 - struct lock_stat *lock_stat_findnew(u64 addr, const char *name, int flags); 70 + /* based on kernel/lockdep.c */ 71 + #define LOCKHASH_BITS 12 72 + #define LOCKHASH_SIZE (1UL << LOCKHASH_BITS) 74 73 75 - bool match_callstack_filter(struct machine *machine, u64 *callstack); 74 + extern struct hlist_head *lockhash_table; 76 75 77 76 /* 78 77 * struct lock_seq_stat: ··· 151 148 bool save_callstack; 152 149 }; 153 150 154 - #ifdef HAVE_BPF_SKEL 151 + struct option; 152 + int parse_call_stack(const struct option *opt, const char *str, int unset); 153 + bool needs_callstack(void); 155 154 155 + struct lock_stat *lock_stat_find(u64 addr); 156 + struct lock_stat *lock_stat_findnew(u64 addr, const char *name, int flags); 157 + 158 + bool match_callstack_filter(struct machine *machine, u64 *callstack, int max_stack_depth); 159 + 160 + 161 + #ifdef HAVE_BPF_SKEL 156 162 int lock_contention_prepare(struct lock_contention *con); 157 163 int lock_contention_start(void); 158 164 int lock_contention_stop(void);
+3 -1
tools/perf/util/machine.c
··· 1003 1003 1004 1004 err = kallsyms__get_symbol_start(filename, "_edata", &addr); 1005 1005 if (err) 1006 - err = kallsyms__get_function_start(filename, "_etext", &addr); 1006 + err = kallsyms__get_symbol_start(filename, "_etext", &addr); 1007 1007 if (!err) 1008 1008 *end = addr; 1009 1009 ··· 1467 1467 1468 1468 if (modules__parse(modules, machine, machine__create_module)) 1469 1469 return -1; 1470 + 1471 + maps__fixup_end(machine__kernel_maps(machine)); 1470 1472 1471 1473 if (!machine__set_modules_path(machine)) 1472 1474 return 0;
+6 -1
tools/perf/util/maps.c
··· 1136 1136 struct map *result = NULL; 1137 1137 1138 1138 down_read(maps__lock(maps)); 1139 + while (!maps__maps_by_address_sorted(maps)) { 1140 + up_read(maps__lock(maps)); 1141 + maps__sort_by_address(maps); 1142 + down_read(maps__lock(maps)); 1143 + } 1139 1144 i = maps__by_address_index(maps, map); 1140 - if (i < maps__nr_maps(maps)) 1145 + if (++i < maps__nr_maps(maps)) 1141 1146 result = map__get(maps__maps_by_address(maps)[i]); 1142 1147 1143 1148 up_read(maps__lock(maps));
+4 -1
tools/perf/util/mem-events.c
··· 258 258 const char *s; 259 259 char *copy; 260 260 struct perf_cpu_map *cpu_map = NULL; 261 + int ret; 261 262 262 263 while ((pmu = perf_pmus__scan_mem(pmu)) != NULL) { 263 264 for (int j = 0; j < PERF_MEM_EVENTS__MAX; j++) { ··· 284 283 rec_argv[i++] = "-e"; 285 284 rec_argv[i++] = copy; 286 285 287 - cpu_map = perf_cpu_map__merge(cpu_map, pmu->cpus); 286 + ret = perf_cpu_map__merge(&cpu_map, pmu->cpus); 287 + if (ret < 0) 288 + return ret; 288 289 } 289 290 } 290 291
+6 -1
tools/perf/util/namespaces.c
··· 266 266 return RC_CHK_ACCESS(nsi)->pid; 267 267 } 268 268 269 - pid_t nsinfo__in_pidns(const struct nsinfo *nsi) 269 + bool nsinfo__in_pidns(const struct nsinfo *nsi) 270 270 { 271 271 return RC_CHK_ACCESS(nsi)->in_pidns; 272 + } 273 + 274 + void nsinfo__set_in_pidns(struct nsinfo *nsi) 275 + { 276 + RC_CHK_ACCESS(nsi)->in_pidns = true; 272 277 } 273 278 274 279 void nsinfo__mountns_enter(struct nsinfo *nsi,
+2 -1
tools/perf/util/namespaces.h
··· 58 58 pid_t nsinfo__tgid(const struct nsinfo *nsi); 59 59 pid_t nsinfo__nstgid(const struct nsinfo *nsi); 60 60 pid_t nsinfo__pid(const struct nsinfo *nsi); 61 - pid_t nsinfo__in_pidns(const struct nsinfo *nsi); 61 + bool nsinfo__in_pidns(const struct nsinfo *nsi); 62 + void nsinfo__set_in_pidns(struct nsinfo *nsi); 62 63 63 64 void nsinfo__mountns_enter(struct nsinfo *nsi, struct nscookie *nc); 64 65 void nsinfo__mountns_exit(struct nscookie *nc);
+11 -15
tools/perf/util/parse-events.c
··· 489 489 return found_supported ? 0 : -EINVAL; 490 490 } 491 491 492 - #ifdef HAVE_LIBTRACEEVENT 493 492 static void tracepoint_error(struct parse_events_error *e, int err, 494 493 const char *sys, const char *name, int column) 495 494 { ··· 643 644 closedir(events_dir); 644 645 return ret; 645 646 } 646 - #endif /* HAVE_LIBTRACEEVENT */ 647 647 648 648 size_t default_breakpoint_len(void) 649 649 { ··· 793 795 [PARSE_EVENTS__TERM_TYPE_DRV_CFG] = "driver-config", 794 796 [PARSE_EVENTS__TERM_TYPE_PERCORE] = "percore", 795 797 [PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT] = "aux-output", 798 + [PARSE_EVENTS__TERM_TYPE_AUX_ACTION] = "aux-action", 796 799 [PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE] = "aux-sample-size", 797 800 [PARSE_EVENTS__TERM_TYPE_METRIC_ID] = "metric-id", 798 801 [PARSE_EVENTS__TERM_TYPE_RAW] = "raw", ··· 843 844 case PARSE_EVENTS__TERM_TYPE_OVERWRITE: 844 845 case PARSE_EVENTS__TERM_TYPE_DRV_CFG: 845 846 case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: 847 + case PARSE_EVENTS__TERM_TYPE_AUX_ACTION: 846 848 case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: 847 849 case PARSE_EVENTS__TERM_TYPE_RAW: 848 850 case PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE: ··· 963 963 case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: 964 964 CHECK_TYPE_VAL(NUM); 965 965 break; 966 + case PARSE_EVENTS__TERM_TYPE_AUX_ACTION: 967 + CHECK_TYPE_VAL(STR); 968 + break; 966 969 case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: 967 970 CHECK_TYPE_VAL(NUM); 968 971 if (term->val.num > UINT_MAX) { ··· 1069 1066 return config_term_common(attr, term, err); 1070 1067 } 1071 1068 1072 - #ifdef HAVE_LIBTRACEEVENT 1073 1069 static int config_term_tracepoint(struct perf_event_attr *attr, 1074 1070 struct parse_events_term *term, 1075 1071 struct parse_events_error *err) ··· 1083 1081 case PARSE_EVENTS__TERM_TYPE_OVERWRITE: 1084 1082 case PARSE_EVENTS__TERM_TYPE_NOOVERWRITE: 1085 1083 case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: 1084 + case PARSE_EVENTS__TERM_TYPE_AUX_ACTION: 1086 1085 case 
PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: 1087 1086 return config_term_common(attr, term, err); 1088 1087 case PARSE_EVENTS__TERM_TYPE_USER: ··· 1114 1111 1115 1112 return 0; 1116 1113 } 1117 - #endif 1118 1114 1119 1115 static int config_attr(struct perf_event_attr *attr, 1120 1116 const struct parse_events_terms *head, ··· 1219 1217 ADD_CONFIG_TERM_VAL(AUX_OUTPUT, aux_output, 1220 1218 term->val.num ? 1 : 0, term->weak); 1221 1219 break; 1220 + case PARSE_EVENTS__TERM_TYPE_AUX_ACTION: 1221 + ADD_CONFIG_TERM_STR(AUX_ACTION, term->val.str, term->weak); 1222 + break; 1222 1223 case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: 1223 1224 ADD_CONFIG_TERM_VAL(AUX_SAMPLE_SIZE, aux_sample_size, 1224 1225 term->val.num, term->weak); ··· 1284 1279 case PARSE_EVENTS__TERM_TYPE_DRV_CFG: 1285 1280 case PARSE_EVENTS__TERM_TYPE_PERCORE: 1286 1281 case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: 1282 + case PARSE_EVENTS__TERM_TYPE_AUX_ACTION: 1287 1283 case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: 1288 1284 case PARSE_EVENTS__TERM_TYPE_METRIC_ID: 1289 1285 case PARSE_EVENTS__TERM_TYPE_RAW: ··· 1309 1303 struct parse_events_terms *head_config, void *loc_) 1310 1304 { 1311 1305 YYLTYPE *loc = loc_; 1312 - #ifdef HAVE_LIBTRACEEVENT 1306 + 1313 1307 if (head_config) { 1314 1308 struct perf_event_attr attr; 1315 1309 ··· 1324 1318 else 1325 1319 return add_tracepoint_event(parse_state, list, sys, event, 1326 1320 err, head_config, loc); 1327 - #else 1328 - (void)parse_state; 1329 - (void)list; 1330 - (void)sys; 1331 - (void)event; 1332 - (void)head_config; 1333 - parse_events_error__handle(err, loc->first_column, strdup("unsupported tracepoint"), 1334 - strdup("libtraceevent is necessary for tracepoint support")); 1335 - return -1; 1336 - #endif 1337 1321 } 1338 1322 1339 1323 static int __parse_events_add_numeric(struct parse_events_state *parse_state,
+1
tools/perf/util/parse-events.h
··· 74 74 PARSE_EVENTS__TERM_TYPE_DRV_CFG, 75 75 PARSE_EVENTS__TERM_TYPE_PERCORE, 76 76 PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT, 77 + PARSE_EVENTS__TERM_TYPE_AUX_ACTION, 77 78 PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE, 78 79 PARSE_EVENTS__TERM_TYPE_METRIC_ID, 79 80 PARSE_EVENTS__TERM_TYPE_RAW,
+1
tools/perf/util/parse-events.l
··· 321 321 no-overwrite { return term(yyscanner, PARSE_EVENTS__TERM_TYPE_NOOVERWRITE); } 322 322 percore { return term(yyscanner, PARSE_EVENTS__TERM_TYPE_PERCORE); } 323 323 aux-output { return term(yyscanner, PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT); } 324 + aux-action { return term(yyscanner, PARSE_EVENTS__TERM_TYPE_AUX_ACTION); } 324 325 aux-sample-size { return term(yyscanner, PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE); } 325 326 metric-id { return term(yyscanner, PARSE_EVENTS__TERM_TYPE_METRIC_ID); } 326 327 cpu-cycles|cycles { return hw_term(yyscanner, PERF_COUNT_HW_CPU_CYCLES); }
+3 -5
tools/perf/util/path.c
··· 68 68 return S_ISDIR(st.st_mode); 69 69 } 70 70 71 - bool is_executable_file(const char *base_path, const struct dirent *dent) 71 + bool is_directory_at(int dir_fd, const char *path) 72 72 { 73 - char path[PATH_MAX]; 74 73 struct stat st; 75 74 76 - snprintf(path, sizeof(path), "%s/%s", base_path, dent->d_name); 77 - if (stat(path, &st)) 75 + if (fstatat(dir_fd, path, &st, /*flags=*/0)) 78 76 return false; 79 77 80 - return !S_ISDIR(st.st_mode) && (st.st_mode & S_IXUSR); 78 + return S_ISDIR(st.st_mode); 81 79 }
+1 -1
tools/perf/util/path.h
··· 12 12 13 13 bool is_regular_file(const char *file); 14 14 bool is_directory(const char *base_path, const struct dirent *dent); 15 - bool is_executable_file(const char *base_path, const struct dirent *dent); 15 + bool is_directory_at(int dir_fd, const char *path); 16 16 17 17 #endif /* _PERF_PATH_H */
+3 -4
tools/perf/util/perf_event_attr_fprintf.c
··· 212 212 } 213 213 } 214 214 215 - #ifdef HAVE_LIBTRACEEVENT 216 215 static void __p_config_tracepoint_id(char *buf, size_t size, u64 value) 217 216 { 218 217 char *str = tracepoint_id_to_name(value); ··· 219 220 print_id_hex(str); 220 221 free(str); 221 222 } 222 - #endif 223 223 224 224 static void __p_config_id(struct perf_pmu *pmu, char *buf, size_t size, u32 type, u64 value) 225 225 { ··· 236 238 case PERF_TYPE_HW_CACHE: 237 239 return __p_config_hw_cache_id(buf, size, value); 238 240 case PERF_TYPE_TRACEPOINT: 239 - #ifdef HAVE_LIBTRACEEVENT 240 241 return __p_config_tracepoint_id(buf, size, value); 241 - #endif 242 242 case PERF_TYPE_RAW: 243 243 case PERF_TYPE_BREAKPOINT: 244 244 default: ··· 331 335 PRINT_ATTRf(sample_max_stack, p_unsigned); 332 336 PRINT_ATTRf(aux_sample_size, p_unsigned); 333 337 PRINT_ATTRf(sig_data, p_unsigned); 338 + PRINT_ATTRf(aux_start_paused, p_unsigned); 339 + PRINT_ATTRf(aux_pause, p_unsigned); 340 + PRINT_ATTRf(aux_resume, p_unsigned); 334 341 335 342 return ret; 336 343 }
+21 -10
tools/perf/util/pmu.c
··· 12 12 #include <stdbool.h> 13 13 #include <dirent.h> 14 14 #include <api/fs/fs.h> 15 + #include <api/io.h> 15 16 #include <locale.h> 16 17 #include <fnmatch.h> 17 18 #include <math.h> ··· 749 748 * Uncore PMUs have a "cpumask" file under sysfs. CPU PMUs (e.g. on arm/arm64) 750 749 * may have a "cpus" file. 751 750 */ 752 - static struct perf_cpu_map *pmu_cpumask(int dirfd, const char *name, bool is_core) 751 + static struct perf_cpu_map *pmu_cpumask(int dirfd, const char *pmu_name, bool is_core) 753 752 { 754 - struct perf_cpu_map *cpus; 755 753 const char *templates[] = { 756 754 "cpumask", 757 755 "cpus", 758 756 NULL 759 757 }; 760 758 const char **template; 761 - char pmu_name[PATH_MAX]; 762 - struct perf_pmu pmu = {.name = pmu_name}; 763 - FILE *file; 764 759 765 - strlcpy(pmu_name, name, sizeof(pmu_name)); 766 760 for (template = templates; *template; template++) { 767 - file = perf_pmu__open_file_at(&pmu, dirfd, *template); 768 - if (!file) 761 + struct io io; 762 + char buf[128]; 763 + char *cpumask = NULL; 764 + size_t cpumask_len; 765 + ssize_t ret; 766 + struct perf_cpu_map *cpus; 767 + 768 + io.fd = perf_pmu__pathname_fd(dirfd, pmu_name, *template, O_RDONLY); 769 + if (io.fd < 0) 769 770 continue; 770 - cpus = perf_cpu_map__read(file); 771 - fclose(file); 771 + 772 + io__init(&io, io.fd, buf, sizeof(buf)); 773 + ret = io__getline(&io, &cpumask, &cpumask_len); 774 + close(io.fd); 775 + if (ret < 0) 776 + continue; 777 + 778 + cpus = perf_cpu_map__new(cpumask); 779 + free(cpumask); 772 780 if (cpus) 773 781 return cpus; 774 782 } ··· 1773 1763 "no-overwrite", 1774 1764 "percore", 1775 1765 "aux-output", 1766 + "aux-action=(pause|resume|start-paused)", 1776 1767 "aux-sample-size=number", 1777 1768 }; 1778 1769 struct perf_pmu_format *format;
+34 -16
tools/perf/util/probe-event.c
··· 1383 1383 if (p == buf) { 1384 1384 semantic_error("No file/function name in '%s'.\n", p); 1385 1385 err = -EINVAL; 1386 - goto err; 1386 + goto out; 1387 1387 } 1388 1388 *(p++) = '\0'; 1389 1389 1390 1390 err = parse_line_num(&p, &lr->start, "start line"); 1391 1391 if (err) 1392 - goto err; 1392 + goto out; 1393 1393 1394 1394 if (*p == '+' || *p == '-') { 1395 1395 const char c = *(p++); 1396 1396 1397 1397 err = parse_line_num(&p, &lr->end, "end line"); 1398 1398 if (err) 1399 - goto err; 1399 + goto out; 1400 1400 1401 1401 if (c == '+') { 1402 1402 lr->end += lr->start; ··· 1416 1416 if (lr->start > lr->end) { 1417 1417 semantic_error("Start line must be smaller" 1418 1418 " than end line.\n"); 1419 - goto err; 1419 + goto out; 1420 1420 } 1421 1421 if (*p != '\0') { 1422 1422 semantic_error("Tailing with invalid str '%s'.\n", p); 1423 - goto err; 1423 + goto out; 1424 1424 } 1425 1425 } 1426 1426 ··· 1431 1431 lr->file = strdup_esq(p); 1432 1432 if (lr->file == NULL) { 1433 1433 err = -ENOMEM; 1434 - goto err; 1434 + goto out; 1435 1435 } 1436 1436 } 1437 1437 if (*buf != '\0') ··· 1439 1439 if (!lr->function && !lr->file) { 1440 1440 semantic_error("Only '@*' is not allowed.\n"); 1441 1441 err = -EINVAL; 1442 - goto err; 1442 + goto out; 1443 1443 } 1444 1444 } else if (strpbrk_esq(buf, "/.")) 1445 1445 lr->file = strdup_esq(buf); ··· 1448 1448 else { /* Invalid name */ 1449 1449 semantic_error("'%s' is not a valid function name.\n", buf); 1450 1450 err = -EINVAL; 1451 - goto err; 1451 + goto out; 1452 1452 } 1453 1453 1454 - err: 1454 + out: 1455 1455 free(buf); 1456 1456 return err; 1457 1457 } ··· 2775 2775 2776 2776 static int get_new_event_name(char *buf, size_t len, const char *base, 2777 2777 struct strlist *namelist, bool ret_event, 2778 - bool allow_suffix) 2778 + bool allow_suffix, bool not_C_symname) 2779 2779 { 2780 2780 int i, ret; 2781 2781 char *p, *nbase; ··· 2786 2786 if (!nbase) 2787 2787 return -ENOMEM; 2788 2788 2789 - /* Cut off 
the dot suffixes (e.g. .const, .isra) and version suffixes */ 2790 - p = strpbrk(nbase, ".@"); 2791 - if (p && p != nbase) 2792 - *p = '\0'; 2789 + if (not_C_symname) { 2790 + /* Replace non-alnum with '_' */ 2791 + char *s, *d; 2792 + 2793 + s = d = nbase; 2794 + do { 2795 + if (*s && !isalnum(*s)) { 2796 + if (d != nbase && *(d - 1) != '_') 2797 + *d++ = '_'; 2798 + } else 2799 + *d++ = *s; 2800 + } while (*s++); 2801 + } else { 2802 + /* Cut off the dot suffixes (e.g. .const, .isra) and version suffixes */ 2803 + p = strpbrk(nbase, ".@"); 2804 + if (p && p != nbase) 2805 + *p = '\0'; 2806 + } 2793 2807 2794 2808 /* Try no suffix number */ 2795 2809 ret = e_snprintf(buf, len, "%s%s", nbase, ret_event ? "__return" : ""); ··· 2898 2884 bool allow_suffix) 2899 2885 { 2900 2886 const char *event, *group; 2887 + bool not_C_symname = true; 2901 2888 char buf[MAX_EVENT_NAME_LEN]; 2902 2889 int ret; 2903 2890 ··· 2913 2898 (strncmp(pev->point.function, "0x", 2) != 0) && 2914 2899 !strisglob(pev->point.function)) 2915 2900 event = pev->point.function; 2916 - else 2901 + else { 2917 2902 event = tev->point.realname; 2903 + not_C_symname = !is_known_C_lang(tev->lang); 2904 + } 2918 2905 } 2919 2906 if (pev->group && !pev->sdt) 2920 2907 group = pev->group; ··· 2933 2916 2934 2917 /* Get an unused new event name */ 2935 2918 ret = get_new_event_name(buf, sizeof(buf), event, namelist, 2936 - tev->point.retprobe, allow_suffix); 2919 + tev->point.retprobe, allow_suffix, 2920 + not_C_symname); 2937 2921 if (ret < 0) 2938 2922 return ret; 2939 2923
+1
tools/perf/util/probe-event.h
··· 58 58 char *group; /* Group name */ 59 59 struct probe_trace_point point; /* Trace point */ 60 60 int nargs; /* Number of args */ 61 + int lang; /* Dwarf language code */ 61 62 bool uprobes; /* uprobes only */ 62 63 struct probe_trace_arg *args; /* Arguments */ 63 64 };
+15
tools/perf/util/probe-finder.c
··· 35 35 /* Kprobe tracer basic type is up to u64 */ 36 36 #define MAX_BASIC_TYPE_BITS 64 37 37 38 + bool is_known_C_lang(int lang) 39 + { 40 + switch (lang) { 41 + case DW_LANG_C89: 42 + case DW_LANG_C: 43 + case DW_LANG_C99: 44 + case DW_LANG_C11: 45 + return true; 46 + default: 47 + return false; 48 + } 49 + } 50 + 38 51 /* 39 52 * Probe finder related functions 40 53 */ ··· 1282 1269 ret = -ENOMEM; 1283 1270 goto end; 1284 1271 } 1272 + 1273 + tev->lang = dwarf_srclang(dwarf_diecu(sc_die, &pf->cu_die, NULL, NULL)); 1285 1274 1286 1275 pr_debug("Probe point found: %s+%lu\n", tev->point.symbol, 1287 1276 tev->point.offset);
+5
tools/perf/util/probe-finder.h
··· 26 26 #include "dwarf-aux.h" 27 27 #include "debuginfo.h" 28 28 29 + /* Check the language code is known C */ 30 + bool is_known_C_lang(int lang); 31 + 29 32 /* Find probe_trace_events specified by perf_probe_event from debuginfo */ 30 33 int debuginfo__find_trace_events(struct debuginfo *dbg, 31 34 struct perf_probe_event *pev, ··· 106 103 int found; 107 104 }; 108 105 106 + #else 107 + #define is_known_C_lang(lang) (false) 109 108 #endif /* HAVE_LIBDW_SUPPORT */ 110 109 111 110 #endif /*_PROBE_FINDER_H */
+144 -197
tools/perf/util/python.c
··· 13 13 #include "evsel.h" 14 14 #include "event.h" 15 15 #include "print_binary.h" 16 + #include "strbuf.h" 16 17 #include "thread_map.h" 17 18 #include "trace-event.h" 18 19 #include "mmap.h" 19 - #include "util/bpf-filter.h" 20 - #include "util/env.h" 21 - #include "util/kvm-stat.h" 22 - #include "util/stat.h" 23 - #include "util/kwork.h" 24 20 #include "util/sample.h" 25 - #include "util/lock-contention.h" 26 21 #include <internal/lib.h> 27 - #include "../builtin.h" 28 - 29 - #if PY_MAJOR_VERSION < 3 30 - #define _PyUnicode_FromString(arg) \ 31 - PyString_FromString(arg) 32 - #define _PyUnicode_AsString(arg) \ 33 - PyString_AsString(arg) 34 - #define _PyUnicode_FromFormat(...) \ 35 - PyString_FromFormat(__VA_ARGS__) 36 - #define _PyLong_FromLong(arg) \ 37 - PyInt_FromLong(arg) 38 - 39 - #else 40 22 41 23 #define _PyUnicode_FromString(arg) \ 42 24 PyUnicode_FromString(arg) ··· 26 44 PyUnicode_FromFormat(__VA_ARGS__) 27 45 #define _PyLong_FromLong(arg) \ 28 46 PyLong_FromLong(arg) 29 - #endif 30 47 31 - #ifndef Py_TYPE 32 - #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) 33 - #endif 34 - 35 - /* Define PyVarObject_HEAD_INIT for python 2.5 */ 36 - #ifndef PyVarObject_HEAD_INIT 37 - # define PyVarObject_HEAD_INIT(type, size) PyObject_HEAD_INIT(type) size, 38 - #endif 39 - 40 - #if PY_MAJOR_VERSION < 3 41 - PyMODINIT_FUNC initperf(void); 42 - #else 43 48 PyMODINIT_FUNC PyInit_perf(void); 44 - #endif 45 49 46 50 #define member_def(type, member, ptype, help) \ 47 51 { #member, ptype, \ ··· 57 89 sample_member_def(sample_period, period, T_ULONGLONG, "event period"), \ 58 90 sample_member_def(sample_cpu, cpu, T_UINT, "event cpu"), 59 91 60 - static char pyrf_mmap_event__doc[] = PyDoc_STR("perf mmap event object."); 92 + static const char pyrf_mmap_event__doc[] = PyDoc_STR("perf mmap event object."); 61 93 62 94 static PyMemberDef pyrf_mmap_event__members[] = { 63 95 sample_members ··· 72 104 { .name = NULL, }, 73 105 }; 74 106 75 - static PyObject 
*pyrf_mmap_event__repr(struct pyrf_event *pevent) 107 + static PyObject *pyrf_mmap_event__repr(const struct pyrf_event *pevent) 76 108 { 77 109 PyObject *ret; 78 110 char *s; ··· 85 117 pevent->event.mmap.pgoff, pevent->event.mmap.filename) < 0) { 86 118 ret = PyErr_NoMemory(); 87 119 } else { 88 - ret = _PyUnicode_FromString(s); 120 + ret = PyUnicode_FromString(s); 89 121 free(s); 90 122 } 91 123 return ret; ··· 101 133 .tp_repr = (reprfunc)pyrf_mmap_event__repr, 102 134 }; 103 135 104 - static char pyrf_task_event__doc[] = PyDoc_STR("perf task (fork/exit) event object."); 136 + static const char pyrf_task_event__doc[] = PyDoc_STR("perf task (fork/exit) event object."); 105 137 106 138 static PyMemberDef pyrf_task_event__members[] = { 107 139 sample_members ··· 114 146 { .name = NULL, }, 115 147 }; 116 148 117 - static PyObject *pyrf_task_event__repr(struct pyrf_event *pevent) 149 + static PyObject *pyrf_task_event__repr(const struct pyrf_event *pevent) 118 150 { 119 - return _PyUnicode_FromFormat("{ type: %s, pid: %u, ppid: %u, tid: %u, " 151 + return PyUnicode_FromFormat("{ type: %s, pid: %u, ppid: %u, tid: %u, " 120 152 "ptid: %u, time: %" PRI_lu64 "}", 121 153 pevent->event.header.type == PERF_RECORD_FORK ? 
"fork" : "exit", 122 154 pevent->event.fork.pid, ··· 136 168 .tp_repr = (reprfunc)pyrf_task_event__repr, 137 169 }; 138 170 139 - static char pyrf_comm_event__doc[] = PyDoc_STR("perf comm event object."); 171 + static const char pyrf_comm_event__doc[] = PyDoc_STR("perf comm event object."); 140 172 141 173 static PyMemberDef pyrf_comm_event__members[] = { 142 174 sample_members ··· 147 179 { .name = NULL, }, 148 180 }; 149 181 150 - static PyObject *pyrf_comm_event__repr(struct pyrf_event *pevent) 182 + static PyObject *pyrf_comm_event__repr(const struct pyrf_event *pevent) 151 183 { 152 - return _PyUnicode_FromFormat("{ type: comm, pid: %u, tid: %u, comm: %s }", 184 + return PyUnicode_FromFormat("{ type: comm, pid: %u, tid: %u, comm: %s }", 153 185 pevent->event.comm.pid, 154 186 pevent->event.comm.tid, 155 187 pevent->event.comm.comm); ··· 165 197 .tp_repr = (reprfunc)pyrf_comm_event__repr, 166 198 }; 167 199 168 - static char pyrf_throttle_event__doc[] = PyDoc_STR("perf throttle event object."); 200 + static const char pyrf_throttle_event__doc[] = PyDoc_STR("perf throttle event object."); 169 201 170 202 static PyMemberDef pyrf_throttle_event__members[] = { 171 203 sample_members ··· 176 208 { .name = NULL, }, 177 209 }; 178 210 179 - static PyObject *pyrf_throttle_event__repr(struct pyrf_event *pevent) 211 + static PyObject *pyrf_throttle_event__repr(const struct pyrf_event *pevent) 180 212 { 181 - struct perf_record_throttle *te = (struct perf_record_throttle *)(&pevent->event.header + 1); 213 + const struct perf_record_throttle *te = (const struct perf_record_throttle *) 214 + (&pevent->event.header + 1); 182 215 183 - return _PyUnicode_FromFormat("{ type: %sthrottle, time: %" PRI_lu64 ", id: %" PRI_lu64 216 + return PyUnicode_FromFormat("{ type: %sthrottle, time: %" PRI_lu64 ", id: %" PRI_lu64 184 217 ", stream_id: %" PRI_lu64 " }", 185 218 pevent->event.header.type == PERF_RECORD_THROTTLE ? 
"" : "un", 186 219 te->time, te->id, te->stream_id); ··· 197 228 .tp_repr = (reprfunc)pyrf_throttle_event__repr, 198 229 }; 199 230 200 - static char pyrf_lost_event__doc[] = PyDoc_STR("perf lost event object."); 231 + static const char pyrf_lost_event__doc[] = PyDoc_STR("perf lost event object."); 201 232 202 233 static PyMemberDef pyrf_lost_event__members[] = { 203 234 sample_members ··· 206 237 { .name = NULL, }, 207 238 }; 208 239 209 - static PyObject *pyrf_lost_event__repr(struct pyrf_event *pevent) 240 + static PyObject *pyrf_lost_event__repr(const struct pyrf_event *pevent) 210 241 { 211 242 PyObject *ret; 212 243 char *s; ··· 216 247 pevent->event.lost.id, pevent->event.lost.lost) < 0) { 217 248 ret = PyErr_NoMemory(); 218 249 } else { 219 - ret = _PyUnicode_FromString(s); 250 + ret = PyUnicode_FromString(s); 220 251 free(s); 221 252 } 222 253 return ret; ··· 232 263 .tp_repr = (reprfunc)pyrf_lost_event__repr, 233 264 }; 234 265 235 - static char pyrf_read_event__doc[] = PyDoc_STR("perf read event object."); 266 + static const char pyrf_read_event__doc[] = PyDoc_STR("perf read event object."); 236 267 237 268 static PyMemberDef pyrf_read_event__members[] = { 238 269 sample_members ··· 241 272 { .name = NULL, }, 242 273 }; 243 274 244 - static PyObject *pyrf_read_event__repr(struct pyrf_event *pevent) 275 + static PyObject *pyrf_read_event__repr(const struct pyrf_event *pevent) 245 276 { 246 - return _PyUnicode_FromFormat("{ type: read, pid: %u, tid: %u }", 277 + return PyUnicode_FromFormat("{ type: read, pid: %u, tid: %u }", 247 278 pevent->event.read.pid, 248 279 pevent->event.read.tid); 249 280 /* ··· 262 293 .tp_repr = (reprfunc)pyrf_read_event__repr, 263 294 }; 264 295 265 - static char pyrf_sample_event__doc[] = PyDoc_STR("perf sample event object."); 296 + static const char pyrf_sample_event__doc[] = PyDoc_STR("perf sample event object."); 266 297 267 298 static PyMemberDef pyrf_sample_event__members[] = { 268 299 sample_members ··· 270 301 { .name = 
NULL, }, 271 302 }; 272 303 273 - static PyObject *pyrf_sample_event__repr(struct pyrf_event *pevent) 304 + static PyObject *pyrf_sample_event__repr(const struct pyrf_event *pevent) 274 305 { 275 306 PyObject *ret; 276 307 char *s; ··· 278 309 if (asprintf(&s, "{ type: sample }") < 0) { 279 310 ret = PyErr_NoMemory(); 280 311 } else { 281 - ret = _PyUnicode_FromString(s); 312 + ret = PyUnicode_FromString(s); 282 313 free(s); 283 314 } 284 315 return ret; 285 316 } 286 317 287 318 #ifdef HAVE_LIBTRACEEVENT 288 - static bool is_tracepoint(struct pyrf_event *pevent) 319 + static bool is_tracepoint(const struct pyrf_event *pevent) 289 320 { 290 321 return pevent->evsel->core.attr.type == PERF_TYPE_TRACEPOINT; 291 322 } 292 323 293 324 static PyObject* 294 - tracepoint_field(struct pyrf_event *pe, struct tep_format_field *field) 325 + tracepoint_field(const struct pyrf_event *pe, struct tep_format_field *field) 295 326 { 296 327 struct tep_handle *pevent = field->event->tep; 297 328 void *data = pe->sample.raw_data; ··· 312 343 } 313 344 if (field->flags & TEP_FIELD_IS_STRING && 314 345 is_printable_array(data + offset, len)) { 315 - ret = _PyUnicode_FromString((char *)data + offset); 346 + ret = PyUnicode_FromString((char *)data + offset); 316 347 } else { 317 348 ret = PyByteArray_FromStringAndSize((const char *) data + offset, len); 318 349 field->flags &= ~TEP_FIELD_IS_STRING; ··· 380 411 .tp_getattro = (getattrofunc) pyrf_sample_event__getattro, 381 412 }; 382 413 383 - static char pyrf_context_switch_event__doc[] = PyDoc_STR("perf context_switch event object."); 414 + static const char pyrf_context_switch_event__doc[] = PyDoc_STR("perf context_switch event object."); 384 415 385 416 static PyMemberDef pyrf_context_switch_event__members[] = { 386 417 sample_members ··· 390 421 { .name = NULL, }, 391 422 }; 392 423 393 - static PyObject *pyrf_context_switch_event__repr(struct pyrf_event *pevent) 424 + static PyObject *pyrf_context_switch_event__repr(const struct 
pyrf_event *pevent) 394 425 { 395 426 PyObject *ret; 396 427 char *s; ··· 401 432 !!(pevent->event.header.misc & PERF_RECORD_MISC_SWITCH_OUT)) < 0) { 402 433 ret = PyErr_NoMemory(); 403 434 } else { 404 - ret = _PyUnicode_FromString(s); 435 + ret = PyUnicode_FromString(s); 405 436 free(s); 406 437 } 407 438 return ret; ··· 470 501 [PERF_RECORD_SWITCH_CPU_WIDE] = &pyrf_context_switch_event__type, 471 502 }; 472 503 473 - static PyObject *pyrf_event__new(union perf_event *event) 504 + static PyObject *pyrf_event__new(const union perf_event *event) 474 505 { 475 506 struct pyrf_event *pevent; 476 507 PyTypeObject *ptype; ··· 538 569 .sq_item = pyrf_cpu_map__item, 539 570 }; 540 571 541 - static char pyrf_cpu_map__doc[] = PyDoc_STR("cpu map object."); 572 + static const char pyrf_cpu_map__doc[] = PyDoc_STR("cpu map object."); 542 573 543 574 static PyTypeObject pyrf_cpu_map__type = { 544 575 PyVarObject_HEAD_INIT(NULL, 0) ··· 607 638 .sq_item = pyrf_thread_map__item, 608 639 }; 609 640 610 - static char pyrf_thread_map__doc[] = PyDoc_STR("thread map object."); 641 + static const char pyrf_thread_map__doc[] = PyDoc_STR("thread map object."); 611 642 612 643 static PyTypeObject pyrf_thread_map__type = { 613 644 PyVarObject_HEAD_INIT(NULL, 0) ··· 781 812 return Py_None; 782 813 } 783 814 815 + static PyObject *pyrf_evsel__str(PyObject *self) 816 + { 817 + struct pyrf_evsel *pevsel = (void *)self; 818 + struct evsel *evsel = &pevsel->evsel; 819 + 820 + if (!evsel->pmu) 821 + return PyUnicode_FromFormat("evsel(%s)", evsel__name(evsel)); 822 + 823 + return PyUnicode_FromFormat("evsel(%s/%s/)", evsel->pmu->name, evsel__name(evsel)); 824 + } 825 + 784 826 static PyMethodDef pyrf_evsel__methods[] = { 785 827 { 786 828 .ml_name = "open", ··· 802 822 { .ml_name = NULL, } 803 823 }; 804 824 805 - static char pyrf_evsel__doc[] = PyDoc_STR("perf event selector list object."); 825 + static const char pyrf_evsel__doc[] = PyDoc_STR("perf event selector list object."); 806 826 807 827 
static PyTypeObject pyrf_evsel__type = { 808 828 PyVarObject_HEAD_INIT(NULL, 0) ··· 813 833 .tp_doc = pyrf_evsel__doc, 814 834 .tp_methods = pyrf_evsel__methods, 815 835 .tp_init = (initproc)pyrf_evsel__init, 836 + .tp_str = pyrf_evsel__str, 837 + .tp_repr = pyrf_evsel__str, 816 838 }; 817 839 818 840 static int pyrf_evsel__setup_types(void) ··· 900 918 901 919 for (i = 0; i < evlist->core.pollfd.nr; ++i) { 902 920 PyObject *file; 903 - #if PY_MAJOR_VERSION < 3 904 - FILE *fp = fdopen(evlist->core.pollfd.entries[i].fd, "r"); 905 - 906 - if (fp == NULL) 907 - goto free_list; 908 - 909 - file = PyFile_FromFile(fp, "perf", "r", NULL); 910 - #else 911 921 file = PyFile_FromFd(evlist->core.pollfd.entries[i].fd, "perf", "r", -1, 912 922 NULL, NULL, NULL, 0); 913 - #endif 914 923 if (file == NULL) 915 924 goto free_list; 916 925 ··· 1071 1098 struct pyrf_evlist *pevlist = (void *)obj; 1072 1099 struct evsel *pos; 1073 1100 1074 - if (i >= pevlist->evlist.core.nr_entries) 1101 + if (i >= pevlist->evlist.core.nr_entries) { 1102 + PyErr_SetString(PyExc_IndexError, "Index out of range"); 1075 1103 return NULL; 1104 + } 1076 1105 1077 1106 evlist__for_each_entry(&pevlist->evlist, pos) { 1078 1107 if (i-- == 0) ··· 1084 1109 return Py_BuildValue("O", container_of(pos, struct pyrf_evsel, evsel)); 1085 1110 } 1086 1111 1112 + static PyObject *pyrf_evlist__str(PyObject *self) 1113 + { 1114 + struct pyrf_evlist *pevlist = (void *)self; 1115 + struct evsel *pos; 1116 + struct strbuf sb = STRBUF_INIT; 1117 + bool first = true; 1118 + PyObject *result; 1119 + 1120 + strbuf_addstr(&sb, "evlist(["); 1121 + evlist__for_each_entry(&pevlist->evlist, pos) { 1122 + if (!first) 1123 + strbuf_addch(&sb, ','); 1124 + if (!pos->pmu) 1125 + strbuf_addstr(&sb, evsel__name(pos)); 1126 + else 1127 + strbuf_addf(&sb, "%s/%s/", pos->pmu->name, evsel__name(pos)); 1128 + first = false; 1129 + } 1130 + strbuf_addstr(&sb, "])"); 1131 + result = PyUnicode_FromString(sb.buf); 1132 + strbuf_release(&sb); 
1133 + return result; 1134 + } 1135 + 1087 1136 static PySequenceMethods pyrf_evlist__sequence_methods = { 1088 1137 .sq_length = pyrf_evlist__length, 1089 1138 .sq_item = pyrf_evlist__item, 1090 1139 }; 1091 1140 1092 - static char pyrf_evlist__doc[] = PyDoc_STR("perf event selector list object."); 1141 + static const char pyrf_evlist__doc[] = PyDoc_STR("perf event selector list object."); 1093 1142 1094 1143 static PyTypeObject pyrf_evlist__type = { 1095 1144 PyVarObject_HEAD_INIT(NULL, 0) ··· 1125 1126 .tp_doc = pyrf_evlist__doc, 1126 1127 .tp_methods = pyrf_evlist__methods, 1127 1128 .tp_init = (initproc)pyrf_evlist__init, 1129 + .tp_repr = pyrf_evlist__str, 1130 + .tp_str = pyrf_evlist__str, 1128 1131 }; 1129 1132 1130 1133 static int pyrf_evlist__setup_types(void) ··· 1137 1136 1138 1137 #define PERF_CONST(name) { #name, PERF_##name } 1139 1138 1140 - static struct { 1139 + struct perf_constant { 1141 1140 const char *name; 1142 1141 int value; 1143 - } perf__constants[] = { 1142 + }; 1143 + 1144 + static const struct perf_constant perf__constants[] = { 1144 1145 PERF_CONST(TYPE_HARDWARE), 1145 1146 PERF_CONST(TYPE_SOFTWARE), 1146 1147 PERF_CONST(TYPE_TRACEPOINT), ··· 1237 1234 1238 1235 tp_format = trace_event__tp_format(sys, name); 1239 1236 if (IS_ERR(tp_format)) 1240 - return _PyLong_FromLong(-1); 1237 + return PyLong_FromLong(-1); 1241 1238 1242 - return _PyLong_FromLong(tp_format->id); 1239 + return PyLong_FromLong(tp_format->id); 1243 1240 #endif // HAVE_LIBTRACEEVENT 1241 + } 1242 + 1243 + static PyObject *pyrf_evsel__from_evsel(struct evsel *evsel) 1244 + { 1245 + struct pyrf_evsel *pevsel = PyObject_New(struct pyrf_evsel, &pyrf_evsel__type); 1246 + 1247 + if (!pevsel) 1248 + return NULL; 1249 + 1250 + memset(&pevsel->evsel, 0, sizeof(pevsel->evsel)); 1251 + evsel__init(&pevsel->evsel, &evsel->core.attr, evsel->core.idx); 1252 + 1253 + evsel__clone(&pevsel->evsel, evsel); 1254 + return (PyObject *)pevsel; 1255 + } 1256 + 1257 + static PyObject 
*pyrf_evlist__from_evlist(struct evlist *evlist) 1258 + { 1259 + struct pyrf_evlist *pevlist = PyObject_New(struct pyrf_evlist, &pyrf_evlist__type); 1260 + struct evsel *pos; 1261 + 1262 + if (!pevlist) 1263 + return NULL; 1264 + 1265 + memset(&pevlist->evlist, 0, sizeof(pevlist->evlist)); 1266 + evlist__init(&pevlist->evlist, evlist->core.all_cpus, evlist->core.threads); 1267 + evlist__for_each_entry(evlist, pos) { 1268 + struct pyrf_evsel *pevsel = (void *)pyrf_evsel__from_evsel(pos); 1269 + 1270 + evlist__add(&pevlist->evlist, &pevsel->evsel); 1271 + } 1272 + return (PyObject *)pevlist; 1273 + } 1274 + 1275 + static PyObject *pyrf__parse_events(PyObject *self, PyObject *args) 1276 + { 1277 + const char *input; 1278 + struct evlist evlist = {}; 1279 + struct parse_events_error err; 1280 + PyObject *result; 1281 + 1282 + if (!PyArg_ParseTuple(args, "s", &input)) 1283 + return NULL; 1284 + 1285 + parse_events_error__init(&err); 1286 + evlist__init(&evlist, NULL, NULL); 1287 + if (parse_events(&evlist, input, &err)) { 1288 + parse_events_error__print(&err, input); 1289 + PyErr_SetFromErrno(PyExc_OSError); 1290 + return NULL; 1291 + } 1292 + result = pyrf_evlist__from_evlist(&evlist); 1293 + evlist__exit(&evlist); 1294 + return result; 1244 1295 } 1245 1296 1246 1297 static PyMethodDef perf__methods[] = { ··· 1304 1247 .ml_flags = METH_VARARGS | METH_KEYWORDS, 1305 1248 .ml_doc = PyDoc_STR("Get tracepoint config.") 1306 1249 }, 1250 + { 1251 + .ml_name = "parse_events", 1252 + .ml_meth = (PyCFunction) pyrf__parse_events, 1253 + .ml_flags = METH_VARARGS, 1254 + .ml_doc = PyDoc_STR("Parse a string of events and return an evlist.") 1255 + }, 1307 1256 { .ml_name = NULL, } 1308 1257 }; 1309 1258 1310 - #if PY_MAJOR_VERSION < 3 1311 - PyMODINIT_FUNC initperf(void) 1312 - #else 1313 1259 PyMODINIT_FUNC PyInit_perf(void) 1314 - #endif 1315 1260 { 1316 1261 PyObject *obj; 1317 1262 int i; 1318 1263 PyObject *dict; 1319 - #if PY_MAJOR_VERSION < 3 1320 - PyObject *module = 
Py_InitModule("perf", perf__methods); 1321 - #else 1322 1264 static struct PyModuleDef moduledef = { 1323 1265 PyModuleDef_HEAD_INIT, 1324 1266 "perf", /* m_name */ ··· 1330 1274 NULL, /* m_free */ 1331 1275 }; 1332 1276 PyObject *module = PyModule_Create(&moduledef); 1333 - #endif 1334 1277 1335 1278 if (module == NULL || 1336 1279 pyrf_event__setup_types() < 0 || ··· 1337 1282 pyrf_evsel__setup_types() < 0 || 1338 1283 pyrf_thread_map__setup_types() < 0 || 1339 1284 pyrf_cpu_map__setup_types() < 0) 1340 - #if PY_MAJOR_VERSION < 3 1341 - return; 1342 - #else 1343 1285 return module; 1344 - #endif 1345 1286 1346 1287 /* The page_size is placed in util object. */ 1347 1288 page_size = sysconf(_SC_PAGE_SIZE); ··· 1386 1335 goto error; 1387 1336 1388 1337 for (i = 0; perf__constants[i].name != NULL; i++) { 1389 - obj = _PyLong_FromLong(perf__constants[i].value); 1338 + obj = PyLong_FromLong(perf__constants[i].value); 1390 1339 if (obj == NULL) 1391 1340 goto error; 1392 1341 PyDict_SetItemString(dict, perf__constants[i].name, obj); ··· 1396 1345 error: 1397 1346 if (PyErr_Occurred()) 1398 1347 PyErr_SetString(PyExc_ImportError, "perf: Init failed!"); 1399 - #if PY_MAJOR_VERSION >= 3 1400 1348 return module; 1401 - #endif 1402 - } 1403 - 1404 - 1405 - /* The following are stubs to avoid dragging in builtin-* objects. */ 1406 - /* TODO: move the code out of the builtin-* file into util. 
*/ 1407 - 1408 - unsigned int scripting_max_stack = PERF_MAX_STACK_DEPTH; 1409 - 1410 - #ifdef HAVE_KVM_STAT_SUPPORT 1411 - bool kvm_entry_event(struct evsel *evsel __maybe_unused) 1412 - { 1413 - return false; 1414 - } 1415 - 1416 - bool kvm_exit_event(struct evsel *evsel __maybe_unused) 1417 - { 1418 - return false; 1419 - } 1420 - 1421 - bool exit_event_begin(struct evsel *evsel __maybe_unused, 1422 - struct perf_sample *sample __maybe_unused, 1423 - struct event_key *key __maybe_unused) 1424 - { 1425 - return false; 1426 - } 1427 - 1428 - bool exit_event_end(struct evsel *evsel __maybe_unused, 1429 - struct perf_sample *sample __maybe_unused, 1430 - struct event_key *key __maybe_unused) 1431 - { 1432 - return false; 1433 - } 1434 - 1435 - void exit_event_decode_key(struct perf_kvm_stat *kvm __maybe_unused, 1436 - struct event_key *key __maybe_unused, 1437 - char *decode __maybe_unused) 1438 - { 1439 - } 1440 - #endif // HAVE_KVM_STAT_SUPPORT 1441 - 1442 - int find_scripts(char **scripts_array __maybe_unused, char **scripts_path_array __maybe_unused, 1443 - int num __maybe_unused, int pathlen __maybe_unused) 1444 - { 1445 - return -1; 1446 - } 1447 - 1448 - void perf_stat__set_no_csv_summary(int set __maybe_unused) 1449 - { 1450 - } 1451 - 1452 - void perf_stat__set_big_num(int set __maybe_unused) 1453 - { 1454 - } 1455 - 1456 - int script_spec_register(const char *spec __maybe_unused, struct scripting_ops *ops __maybe_unused) 1457 - { 1458 - return -1; 1459 - } 1460 - 1461 - arch_syscalls__strerrno_t *arch_syscalls__strerrno_function(const char *arch __maybe_unused) 1462 - { 1463 - return NULL; 1464 - } 1465 - 1466 - struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork __maybe_unused, 1467 - struct kwork_class *class __maybe_unused, 1468 - struct kwork_work *key __maybe_unused) 1469 - { 1470 - return NULL; 1471 - } 1472 - 1473 - void script_fetch_insn(struct perf_sample *sample __maybe_unused, 1474 - struct thread *thread __maybe_unused, 1475 - 
struct machine *machine __maybe_unused) 1476 - { 1477 - } 1478 - 1479 - int perf_sample__sprintf_flags(u32 flags __maybe_unused, char *str __maybe_unused, 1480 - size_t sz __maybe_unused) 1481 - { 1482 - return -1; 1483 - } 1484 - 1485 - bool match_callstack_filter(struct machine *machine __maybe_unused, u64 *callstack __maybe_unused) 1486 - { 1487 - return false; 1488 - } 1489 - 1490 - struct lock_stat *lock_stat_find(u64 addr __maybe_unused) 1491 - { 1492 - return NULL; 1493 - } 1494 - 1495 - struct lock_stat *lock_stat_findnew(u64 addr __maybe_unused, const char *name __maybe_unused, 1496 - int flags __maybe_unused) 1497 - { 1498 - return NULL; 1499 - } 1500 - 1501 - int cmd_inject(int argc __maybe_unused, const char *argv[] __maybe_unused) 1502 - { 1503 - return -1; 1504 1349 }
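The new `parse_events` binding and the `tp_str`/`tp_repr` hooks in the hunks above make quick interactive checks possible from a perf build tree. A minimal sketch, assuming the in-tree `perf` extension module (built from tools/perf/util/python.c) is importable; it is not distributed on PyPI, so the import is guarded:

```python
# Illustrative only: "perf" here is the in-tree extension module from
# tools/perf/util/python.c, importable from a perf build tree.
try:
    import perf
except ImportError:
    perf = None  # not available outside a perf build

if perf is not None:
    # parse_events() returns an evlist; the new __str__/__repr__ hooks make
    # evlists and their evsels printable, e.g. "evlist([cycles])".
    evlist = perf.parse_events("cycles")
    print(evlist)
    print(evlist[0])
```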
+2 -1
tools/perf/util/scripting-engines/trace-event-perl.c
···
 				    struct addr_location *al)
 {
 	struct thread *thread = al->thread;
-	struct tep_event *event = evsel->tp_format;
+	struct tep_event *event;
 	struct tep_format_field *field;
 	static char handler[256];
 	unsigned long long val;
···
 	if (evsel->core.attr.type != PERF_TYPE_TRACEPOINT)
 		return;
 
+	event = evsel__tp_format(evsel);
 	if (!event) {
 		pr_debug("ug! no event found for type %" PRIu64, (u64)evsel->core.attr.config);
 		return;
+5 -61
tools/perf/util/scripting-engines/trace-event-python.c
···
 #include "mem-events.h"
 #include "util/perf_regs.h"
 
-#if PY_MAJOR_VERSION < 3
-#define _PyUnicode_FromString(arg) \
-	PyString_FromString(arg)
-#define _PyUnicode_FromStringAndSize(arg1, arg2) \
-	PyString_FromStringAndSize((arg1), (arg2))
-#define _PyBytes_FromStringAndSize(arg1, arg2) \
-	PyString_FromStringAndSize((arg1), (arg2))
-#define _PyLong_FromLong(arg) \
-	PyInt_FromLong(arg)
-#define _PyLong_AsLong(arg) \
-	PyInt_AsLong(arg)
-#define _PyCapsule_New(arg1, arg2, arg3) \
-	PyCObject_FromVoidPtr((arg1), (arg2))
-
-PyMODINIT_FUNC initperf_trace_context(void);
-#else
 #define _PyUnicode_FromString(arg) \
 	PyUnicode_FromString(arg)
 #define _PyUnicode_FromStringAndSize(arg1, arg2) \
···
 	PyCapsule_New((arg1), (arg2), (arg3))
 
 PyMODINIT_FUNC PyInit_perf_trace_context(void);
-#endif
 
 #ifdef HAVE_LIBTRACEEVENT
 #define TRACE_EVENT_TYPE_MAX \
···
 {
 	int arg_count = 0;
 
-	/*
-	 * The attribute for the code object is func_code in Python 2,
-	 * whereas it is __code__ in Python 3.0+.
-	 */
-	PyObject *code_obj = PyObject_GetAttrString(handler,
-		"func_code");
-	if (PyErr_Occurred()) {
-		PyErr_Clear();
-		code_obj = PyObject_GetAttrString(handler,
-			"__code__");
-	}
+	PyObject *code_obj = PyObject_GetAttrString(handler, "__code__");
 	PyErr_Clear();
 	if (code_obj) {
 		PyObject *arg_count_obj = PyObject_GetAttrString(code_obj,
···
 				      struct addr_location *al,
 				      struct addr_location *addr_al)
 {
-	struct tep_event *event = evsel->tp_format;
+	struct tep_event *event;
 	PyObject *handler, *context, *t, *obj = NULL, *callchain;
 	PyObject *dict = NULL, *all_entries_dict = NULL;
 	static char handler_name[256];
···
 
 	bitmap_zero(events_defined, TRACE_EVENT_TYPE_MAX);
 
+	event = evsel__tp_format(evsel);
 	if (!event) {
 		snprintf(handler_name, sizeof(handler_name),
 			 "ug! no event found for type %" PRIu64, (u64)evsel->core.attr.config);
···
 		tables->synth_handler = get_handler("synth_data");
 }
 
-#if PY_MAJOR_VERSION < 3
-static void _free_command_line(const char **command_line, int num)
-{
-	free(command_line);
-}
-#else
 static void _free_command_line(wchar_t **command_line, int num)
 {
 	int i;
···
 		PyMem_RawFree(command_line[i]);
 	free(command_line);
 }
-#endif
 
 
 /*
···
 			       struct perf_session *session)
 {
 	struct tables *tables = &tables_global;
-#if PY_MAJOR_VERSION < 3
-	const char **command_line;
-#else
 	wchar_t **command_line;
-#endif
-	/*
-	 * Use a non-const name variable to cope with python 2.6's
-	 * PyImport_AppendInittab prototype
-	 */
-	char buf[PATH_MAX], name[19] = "perf_trace_context";
+	char buf[PATH_MAX];
 	int i, err = 0;
 	FILE *fp;
 
 	scripting_context->session = session;
-#if PY_MAJOR_VERSION < 3
-	command_line = malloc((argc + 1) * sizeof(const char *));
-	if (!command_line)
-		return -1;
-
-	command_line[0] = script;
-	for (i = 1; i < argc + 1; i++)
-		command_line[i] = argv[i - 1];
-	PyImport_AppendInittab(name, initperf_trace_context);
-#else
 	command_line = malloc((argc + 1) * sizeof(wchar_t *));
 	if (!command_line)
 		return -1;
···
 	command_line[0] = Py_DecodeLocale(script, NULL);
 	for (i = 1; i < argc + 1; i++)
 		command_line[i] = Py_DecodeLocale(argv[i - 1], NULL);
-	PyImport_AppendInittab(name, PyInit_perf_trace_context);
-#endif
+	PyImport_AppendInittab("perf_trace_context", PyInit_perf_trace_context);
 	Py_Initialize();
 
-#if PY_MAJOR_VERSION < 3
-	PySys_SetArgv(argc + 1, (char **)command_line);
-#else
 	PySys_SetArgv(argc + 1, command_line);
-#endif
 
 	fp = fopen(script, "r");
 	if (!fp) {
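With Python 2 support removed, the argument-count probe above no longer needs the `func_code` fallback: on Python 3 a function's code object is always reachable as `__code__`. A small stand-alone illustration of the same lookup (hypothetical handler signature):

```python
# On Python 3, __code__ is always present (func_code was the Python 2 name),
# so a single attribute lookup replaces the try-one-then-the-other dance.
def handler(context, event_name, common_cpu):
    pass

arg_count = handler.__code__.co_argcount
print(arg_count)  # 3
```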
+1
tools/perf/util/session.c
···
 #include "arch/common.h"
 #include "units.h"
 #include "annotate.h"
+#include "perf.h"
 #include <internal/lib.h>
 
 static int perf_session__deliver_event(struct perf_session *session,
+21 -12
tools/perf/util/sort.c
··· 1038 1038 .data = he->raw_data, 1039 1039 .size = he->raw_size, 1040 1040 }; 1041 + struct tep_event *tp_format; 1041 1042 1042 1043 evsel = hists_to_evsel(he->hists); 1043 1044 1044 1045 trace_seq_init(&seq); 1045 - if (symbol_conf.raw_trace) { 1046 - tep_print_fields(&seq, he->raw_data, he->raw_size, 1047 - evsel->tp_format); 1048 - } else { 1049 - tep_print_event(evsel->tp_format->tep, 1050 - &seq, &rec, "%s", TEP_PRINT_INFO); 1046 + tp_format = evsel__tp_format(evsel); 1047 + if (tp_format) { 1048 + if (symbol_conf.raw_trace) 1049 + tep_print_fields(&seq, he->raw_data, he->raw_size, tp_format); 1050 + else 1051 + tep_print_event(tp_format->tep, &seq, &rec, "%s", TEP_PRINT_INFO); 1051 1052 } 1053 + 1052 1054 /* 1053 1055 * Trim the buffer, it starts at 4KB and we're not going to 1054 1056 * add anything more to this buffer. ··· 3295 3293 static int add_evsel_fields(struct evsel *evsel, bool raw_trace, int level) 3296 3294 { 3297 3295 int ret; 3298 - struct tep_format_field *field; 3299 - 3300 - field = evsel->tp_format->format.fields; 3296 + struct tep_event *tp_format = evsel__tp_format(evsel); 3297 + struct tep_format_field *field = tp_format ? 
tp_format->format.fields : NULL; 3301 3298 while (field) { 3302 3299 ret = __dynamic_dimension__add(evsel, field, raw_trace, level); 3303 3300 if (ret < 0) ··· 3329 3328 { 3330 3329 int ret = -ESRCH; 3331 3330 struct evsel *evsel; 3332 - struct tep_format_field *field; 3333 3331 3334 3332 evlist__for_each_entry(evlist, evsel) { 3333 + struct tep_event *tp_format; 3334 + struct tep_format_field *field; 3335 + 3335 3336 if (evsel->core.attr.type != PERF_TYPE_TRACEPOINT) 3336 3337 continue; 3337 3338 3338 - field = tep_find_any_field(evsel->tp_format, field_name); 3339 + tp_format = evsel__tp_format(evsel); 3340 + if (tp_format == NULL) 3341 + continue; 3342 + 3343 + field = tep_find_any_field(tp_format, field_name); 3339 3344 if (field == NULL) 3340 3345 continue; 3341 3346 ··· 3423 3416 if (!strcmp(field_name, "*")) { 3424 3417 ret = add_evsel_fields(evsel, raw_trace, level); 3425 3418 } else { 3426 - struct tep_format_field *field = tep_find_any_field(evsel->tp_format, field_name); 3419 + struct tep_event *tp_format = evsel__tp_format(evsel); 3420 + struct tep_format_field *field = 3421 + tp_format ? tep_find_any_field(tp_format, field_name) : NULL; 3427 3422 3428 3423 if (field == NULL) { 3429 3424 pr_debug("Cannot find event field for %s.%s\n",
+133 -109
tools/perf/util/stat-display.c
··· 114 114 fprintf(config->output, "%s%" PRIu64 "%s%.2f", 115 115 config->csv_sep, run, config->csv_sep, enabled_percent); 116 116 } 117 + struct outstate { 118 + /* Std mode: insert a newline before the next metric */ 119 + bool newline; 120 + /* JSON mode: track need for comma for a previous field or not */ 121 + bool first; 122 + /* Num CSV separators remaining to pad out when not all fields are printed */ 123 + int csv_col_pad; 117 124 118 - static void print_running_json(struct perf_stat_config *config, u64 run, u64 ena) 125 + /* 126 + * The following don't track state across fields, but are here as a shortcut to 127 + * pass data to the print functions. The alternative would be to update the 128 + * function signatures of the entire print stack to pass them through. 129 + */ 130 + /* Place to output to */ 131 + FILE * const fh; 132 + /* Lines are timestamped in --interval-print mode */ 133 + char timestamp[64]; 134 + /* Num items aggregated in current line. See struct perf_stat_aggr.nr */ 135 + int aggr_nr; 136 + /* Core/socket/die etc ID for the current line */ 137 + struct aggr_cpu_id id; 138 + /* Event for current line */ 139 + struct evsel *evsel; 140 + /* Cgroup for current line */ 141 + struct cgroup *cgrp; 142 + }; 143 + 144 + static const char *json_sep(struct outstate *os) 145 + { 146 + const char *sep = os->first ? "" : ", "; 147 + 148 + os->first = false; 149 + return sep; 150 + } 151 + 152 + #define json_out(os, format, ...) 
fprintf((os)->fh, "%s" format, json_sep(os), ##__VA_ARGS__) 153 + 154 + static void print_running_json(struct outstate *os, u64 run, u64 ena) 119 155 { 120 156 double enabled_percent = 100; 121 157 122 158 if (run != ena) 123 159 enabled_percent = 100 * run / ena; 124 - fprintf(config->output, "\"event-runtime\" : %" PRIu64 ", \"pcnt-running\" : %.2f, ", 125 - run, enabled_percent); 160 + json_out(os, "\"event-runtime\" : %" PRIu64 ", \"pcnt-running\" : %.2f", 161 + run, enabled_percent); 126 162 } 127 163 128 - static void print_running(struct perf_stat_config *config, 164 + static void print_running(struct perf_stat_config *config, struct outstate *os, 129 165 u64 run, u64 ena, bool before_metric) 130 166 { 131 167 if (config->json_output) { 132 168 if (before_metric) 133 - print_running_json(config, run, ena); 169 + print_running_json(os, run, ena); 134 170 } else if (config->csv_output) { 135 171 if (before_metric) 136 172 print_running_csv(config, run, ena); ··· 189 153 fprintf(config->output, "%s%.2f%%", config->csv_sep, pct); 190 154 } 191 155 192 - static void print_noise_pct_json(struct perf_stat_config *config, 156 + static void print_noise_pct_json(struct outstate *os, 193 157 double pct) 194 158 { 195 - fprintf(config->output, "\"variance\" : %.2f, ", pct); 159 + json_out(os, "\"variance\" : %.2f", pct); 196 160 } 197 161 198 - static void print_noise_pct(struct perf_stat_config *config, 162 + static void print_noise_pct(struct perf_stat_config *config, struct outstate *os, 199 163 double total, double avg, bool before_metric) 200 164 { 201 165 double pct = rel_stddev_stats(total, avg); 202 166 203 167 if (config->json_output) { 204 168 if (before_metric) 205 - print_noise_pct_json(config, pct); 169 + print_noise_pct_json(os, pct); 206 170 } else if (config->csv_output) { 207 171 if (before_metric) 208 172 print_noise_pct_csv(config, pct); ··· 212 176 } 213 177 } 214 178 215 - static void print_noise(struct perf_stat_config *config, 179 + static void 
print_noise(struct perf_stat_config *config, struct outstate *os, 216 180 struct evsel *evsel, double avg, bool before_metric) 217 181 { 218 182 struct perf_stat_evsel *ps; ··· 221 185 return; 222 186 223 187 ps = evsel->stats; 224 - print_noise_pct(config, stddev_stats(&ps->res_stats), avg, before_metric); 188 + print_noise_pct(config, os, stddev_stats(&ps->res_stats), avg, before_metric); 225 189 } 226 190 227 191 static void print_cgroup_std(struct perf_stat_config *config, const char *cgrp_name) ··· 234 198 fprintf(config->output, "%s%s", config->csv_sep, cgrp_name); 235 199 } 236 200 237 - static void print_cgroup_json(struct perf_stat_config *config, const char *cgrp_name) 201 + static void print_cgroup_json(struct outstate *os, const char *cgrp_name) 238 202 { 239 - fprintf(config->output, "\"cgroup\" : \"%s\", ", cgrp_name); 203 + json_out(os, "\"cgroup\" : \"%s\"", cgrp_name); 240 204 } 241 205 242 - static void print_cgroup(struct perf_stat_config *config, struct cgroup *cgrp) 206 + static void print_cgroup(struct perf_stat_config *config, struct outstate *os, 207 + struct cgroup *cgrp) 243 208 { 244 209 if (nr_cgroups || config->cgroup_list) { 245 210 const char *cgrp_name = cgrp ? 
cgrp->name : ""; 246 211 247 212 if (config->json_output) 248 - print_cgroup_json(config, cgrp_name); 213 + print_cgroup_json(os, cgrp_name); 249 214 else if (config->csv_output) 250 215 print_cgroup_csv(config, cgrp_name); 251 216 else ··· 361 324 } 362 325 } 363 326 364 - static void print_aggr_id_json(struct perf_stat_config *config, 327 + static void print_aggr_id_json(struct perf_stat_config *config, struct outstate *os, 365 328 struct evsel *evsel, struct aggr_cpu_id id, int aggr_nr) 366 329 { 367 - FILE *output = config->output; 368 - 369 330 switch (config->aggr_mode) { 370 331 case AGGR_CORE: 371 - fprintf(output, "\"core\" : \"S%d-D%d-C%d\", \"aggregate-number\" : %d, ", 332 + json_out(os, "\"core\" : \"S%d-D%d-C%d\", \"aggregate-number\" : %d", 372 333 id.socket, id.die, id.core, aggr_nr); 373 334 break; 374 335 case AGGR_CACHE: 375 - fprintf(output, "\"cache\" : \"S%d-D%d-L%d-ID%d\", \"aggregate-number\" : %d, ", 336 + json_out(os, "\"cache\" : \"S%d-D%d-L%d-ID%d\", \"aggregate-number\" : %d", 376 337 id.socket, id.die, id.cache_lvl, id.cache, aggr_nr); 377 338 break; 378 339 case AGGR_CLUSTER: 379 - fprintf(output, "\"cluster\" : \"S%d-D%d-CLS%d\", \"aggregate-number\" : %d, ", 340 + json_out(os, "\"cluster\" : \"S%d-D%d-CLS%d\", \"aggregate-number\" : %d", 380 341 id.socket, id.die, id.cluster, aggr_nr); 381 342 break; 382 343 case AGGR_DIE: 383 - fprintf(output, "\"die\" : \"S%d-D%d\", \"aggregate-number\" : %d, ", 344 + json_out(os, "\"die\" : \"S%d-D%d\", \"aggregate-number\" : %d", 384 345 id.socket, id.die, aggr_nr); 385 346 break; 386 347 case AGGR_SOCKET: 387 - fprintf(output, "\"socket\" : \"S%d\", \"aggregate-number\" : %d, ", 348 + json_out(os, "\"socket\" : \"S%d\", \"aggregate-number\" : %d", 388 349 id.socket, aggr_nr); 389 350 break; 390 351 case AGGR_NODE: 391 - fprintf(output, "\"node\" : \"N%d\", \"aggregate-number\" : %d, ", 352 + json_out(os, "\"node\" : \"N%d\", \"aggregate-number\" : %d", 392 353 id.node, aggr_nr); 393 354 break; 
394 355 case AGGR_NONE: 395 356 if (evsel->percore && !config->percore_show_thread) { 396 - fprintf(output, "\"core\" : \"S%d-D%d-C%d\"", 357 + json_out(os, "\"core\" : \"S%d-D%d-C%d\"", 397 358 id.socket, id.die, id.core); 398 359 } else if (id.cpu.cpu > -1) { 399 - fprintf(output, "\"cpu\" : \"%d\", ", 360 + json_out(os, "\"cpu\" : \"%d\"", 400 361 id.cpu.cpu); 401 362 } 402 363 break; 403 364 case AGGR_THREAD: 404 - fprintf(output, "\"thread\" : \"%s-%d\", ", 365 + json_out(os, "\"thread\" : \"%s-%d\"", 405 366 perf_thread_map__comm(evsel->core.threads, id.thread_idx), 406 367 perf_thread_map__pid(evsel->core.threads, id.thread_idx)); 407 368 break; ··· 411 376 } 412 377 } 413 378 414 - static void aggr_printout(struct perf_stat_config *config, 379 + static void aggr_printout(struct perf_stat_config *config, struct outstate *os, 415 380 struct evsel *evsel, struct aggr_cpu_id id, int aggr_nr) 416 381 { 417 382 if (config->json_output) 418 - print_aggr_id_json(config, evsel, id, aggr_nr); 383 + print_aggr_id_json(config, os, evsel, id, aggr_nr); 419 384 else if (config->csv_output) 420 385 print_aggr_id_csv(config, evsel, id, aggr_nr); 421 386 else 422 387 print_aggr_id_std(config, evsel, id, aggr_nr); 423 388 } 424 - 425 - struct outstate { 426 - FILE *fh; 427 - bool newline; 428 - bool first; 429 - const char *prefix; 430 - int nfields; 431 - int aggr_nr; 432 - struct aggr_cpu_id id; 433 - struct evsel *evsel; 434 - struct cgroup *cgrp; 435 - }; 436 389 437 390 static void new_line_std(struct perf_stat_config *config __maybe_unused, 438 391 void *ctx) ··· 434 411 struct outstate *os) 435 412 { 436 413 fputc('\n', os->fh); 437 - if (os->prefix) 438 - fputs(os->prefix, os->fh); 439 - aggr_printout(config, os->evsel, os->id, os->aggr_nr); 414 + if (config->interval) 415 + fputs(os->timestamp, os->fh); 416 + aggr_printout(config, os, os->evsel, os->id, os->aggr_nr); 440 417 } 441 418 442 419 static inline void __new_line_std(struct outstate *os) ··· 487 464 int i; 
488 465 489 466 __new_line_std_csv(config, os); 490 - for (i = 0; i < os->nfields; i++) 467 + for (i = 0; i < os->csv_col_pad; i++) 491 468 fputs(config->csv_sep, os->fh); 492 469 } 493 470 ··· 522 499 FILE *out = os->fh; 523 500 524 501 if (unit) { 525 - fprintf(out, "\"metric-value\" : \"%f\", \"metric-unit\" : \"%s\"", val, unit); 502 + json_out(os, "\"metric-value\" : \"%f\", \"metric-unit\" : \"%s\"", val, unit); 526 503 if (thresh != METRIC_THRESHOLD_UNKNOWN) { 527 - fprintf(out, ", \"metric-threshold\" : \"%s\"", 504 + json_out(os, "\"metric-threshold\" : \"%s\"", 528 505 metric_threshold_classify__str(thresh)); 529 506 } 530 507 } ··· 537 514 struct outstate *os = ctx; 538 515 539 516 fputs("\n{", os->fh); 540 - if (os->prefix) 541 - fprintf(os->fh, "%s", os->prefix); 542 - aggr_printout(config, os->evsel, os->id, os->aggr_nr); 517 + os->first = true; 518 + if (config->interval) 519 + json_out(os, "%s", os->timestamp); 520 + 521 + aggr_printout(config, os, os->evsel, os->id, os->aggr_nr); 543 522 } 544 523 545 524 static void print_metricgroup_header_json(struct perf_stat_config *config, ··· 551 526 if (!metricgroup_name) 552 527 return; 553 528 554 - fprintf(config->output, "\"metricgroup\" : \"%s\"}", metricgroup_name); 529 + json_out((struct outstate *) ctx, "\"metricgroup\" : \"%s\"}", metricgroup_name); 555 530 new_line_json(config, ctx); 556 531 } 557 532 ··· 564 539 565 540 if (!metricgroup_name) { 566 541 /* Leave space for running and enabling */ 567 - for (i = 0; i < os->nfields - 2; i++) 542 + for (i = 0; i < os->csv_col_pad - 2; i++) 568 543 fputs(config->csv_sep, os->fh); 569 544 return; 570 545 } 571 546 572 - for (i = 0; i < os->nfields; i++) 547 + for (i = 0; i < os->csv_col_pad; i++) 573 548 fputs(config->csv_sep, os->fh); 574 549 fprintf(config->output, "%s", metricgroup_name); 575 550 new_line_csv(config, ctx); ··· 669 644 const char *unit, double val) 670 645 { 671 646 struct outstate *os = ctx; 672 - FILE *out = os->fh; 673 647 char 
buf[64], *ends; 674 648 char tbuf[1024]; 675 649 const char *vals; ··· 685 661 *ends = 0; 686 662 if (!vals[0]) 687 663 vals = "none"; 688 - fprintf(out, "%s\"%s\" : \"%s\"", os->first ? "" : ", ", unit, vals); 689 - os->first = false; 690 - } 691 - 692 - static void new_line_metric(struct perf_stat_config *config __maybe_unused, 693 - void *ctx __maybe_unused) 694 - { 664 + json_out(os, "\"%s\" : \"%s\"", unit, vals); 695 665 } 696 666 697 667 static void print_metric_header(struct perf_stat_config *config, ··· 761 743 fprintf(output, "%s", evsel__name(evsel)); 762 744 } 763 745 764 - static void print_counter_value_json(struct perf_stat_config *config, 746 + static void print_counter_value_json(struct outstate *os, 765 747 struct evsel *evsel, double avg, bool ok) 766 748 { 767 - FILE *output = config->output; 768 749 const char *bad_count = evsel->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED; 769 750 770 751 if (ok) 771 - fprintf(output, "\"counter-value\" : \"%f\", ", avg); 752 + json_out(os, "\"counter-value\" : \"%f\"", avg); 772 753 else 773 - fprintf(output, "\"counter-value\" : \"%s\", ", bad_count); 754 + json_out(os, "\"counter-value\" : \"%s\"", bad_count); 774 755 775 756 if (evsel->unit) 776 - fprintf(output, "\"unit\" : \"%s\", ", evsel->unit); 757 + json_out(os, "\"unit\" : \"%s\"", evsel->unit); 777 758 778 - fprintf(output, "\"event\" : \"%s\", ", evsel__name(evsel)); 759 + json_out(os, "\"event\" : \"%s\"", evsel__name(evsel)); 779 760 } 780 761 781 - static void print_counter_value(struct perf_stat_config *config, 762 + static void print_counter_value(struct perf_stat_config *config, struct outstate *os, 782 763 struct evsel *evsel, double avg, bool ok) 783 764 { 784 765 if (config->json_output) 785 - print_counter_value_json(config, evsel, avg, ok); 766 + print_counter_value_json(os, evsel, avg, ok); 786 767 else if (config->csv_output) 787 768 print_counter_value_csv(config, evsel, avg, ok); 788 769 else ··· 789 772 } 790 773 791 774 
static void abs_printout(struct perf_stat_config *config, 775 + struct outstate *os, 792 776 struct aggr_cpu_id id, int aggr_nr, 793 777 struct evsel *evsel, double avg, bool ok) 794 778 { 795 - aggr_printout(config, evsel, id, aggr_nr); 796 - print_counter_value(config, evsel, avg, ok); 797 - print_cgroup(config, evsel->cgrp); 779 + aggr_printout(config, os, evsel, id, aggr_nr); 780 + print_counter_value(config, os, evsel, avg, ok); 781 + print_cgroup(config, os, evsel->cgrp); 798 782 } 799 783 800 784 static bool is_mixed_hw_group(struct evsel *counter) ··· 849 831 850 832 if (config->csv_output) { 851 833 pm = config->metric_only ? print_metric_only_csv : print_metric_csv; 852 - nl = config->metric_only ? new_line_metric : new_line_csv; 834 + nl = config->metric_only ? NULL : new_line_csv; 853 835 pmh = print_metricgroup_header_csv; 854 - os->nfields = 4 + (counter->cgrp ? 1 : 0); 836 + os->csv_col_pad = 4 + (counter->cgrp ? 1 : 0); 855 837 } else if (config->json_output) { 856 838 pm = config->metric_only ? print_metric_only_json : print_metric_json; 857 - nl = config->metric_only ? new_line_metric : new_line_json; 839 + nl = config->metric_only ? NULL : new_line_json; 858 840 pmh = print_metricgroup_header_json; 859 841 } else { 860 842 pm = config->metric_only ? print_metric_only : print_metric_std; 861 - nl = config->metric_only ? new_line_metric : new_line_std; 843 + nl = config->metric_only ? 
NULL : new_line_std; 862 844 pmh = print_metricgroup_header_std; 863 845 } 864 846 865 847 if (run == 0 || ena == 0 || counter->counts->scaled == -1) { 866 848 if (config->metric_only) { 867 - pm(config, os, METRIC_THRESHOLD_UNKNOWN, "", "", 0); 849 + pm(config, os, METRIC_THRESHOLD_UNKNOWN, /*format=*/NULL, 850 + /*unit=*/NULL, /*val=*/0); 868 851 return; 869 852 } 870 853 ··· 887 868 out.force_header = false; 888 869 889 870 if (!config->metric_only && !counter->default_metricgroup) { 890 - abs_printout(config, os->id, os->aggr_nr, counter, uval, ok); 871 + abs_printout(config, os, os->id, os->aggr_nr, counter, uval, ok); 891 872 892 - print_noise(config, counter, noise, /*before_metric=*/true); 893 - print_running(config, run, ena, /*before_metric=*/true); 873 + print_noise(config, os, counter, noise, /*before_metric=*/true); 874 + print_running(config, os, run, ena, /*before_metric=*/true); 894 875 } 895 876 896 877 if (ok) { 897 878 if (!config->metric_only && counter->default_metricgroup) { 898 879 void *from = NULL; 899 880 900 - aggr_printout(config, os->evsel, os->id, os->aggr_nr); 881 + aggr_printout(config, os, os->evsel, os->id, os->aggr_nr); 901 882 /* Print out all the metricgroup with the same metric event. 
*/ 902 883 do { 903 884 int num = 0; ··· 910 891 __new_line_std_csv(config, os); 911 892 } 912 893 913 - print_noise(config, counter, noise, /*before_metric=*/true); 914 - print_running(config, run, ena, /*before_metric=*/true); 894 + print_noise(config, os, counter, noise, /*before_metric=*/true); 895 + print_running(config, os, run, ena, /*before_metric=*/true); 915 896 from = perf_stat__print_shadow_stats_metricgroup(config, counter, aggr_idx, 916 897 &num, from, &out, 917 898 &config->metric_events); ··· 920 901 perf_stat__print_shadow_stats(config, counter, uval, aggr_idx, 921 902 &out, &config->metric_events); 922 903 } else { 923 - pm(config, os, METRIC_THRESHOLD_UNKNOWN, /*format=*/NULL, /*unit=*/"", /*val=*/0); 904 + pm(config, os, METRIC_THRESHOLD_UNKNOWN, /*format=*/NULL, /*unit=*/NULL, /*val=*/0); 924 905 } 925 906 926 907 if (!config->metric_only) { 927 - print_noise(config, counter, noise, /*before_metric=*/false); 928 - print_running(config, run, ena, /*before_metric=*/false); 908 + print_noise(config, os, counter, noise, /*before_metric=*/false); 909 + print_running(config, os, run, ena, /*before_metric=*/false); 929 910 } 930 911 } 931 912 ··· 1102 1083 return; 1103 1084 1104 1085 if (!metric_only) { 1105 - if (config->json_output) 1086 + if (config->json_output) { 1087 + os->first = true; 1106 1088 fputc('{', output); 1107 - if (os->prefix) 1108 - fprintf(output, "%s", os->prefix); 1109 - else if (config->summary && config->csv_output && 1110 - !config->no_csv_summary && !config->interval) 1089 + } 1090 + if (config->interval) { 1091 + if (config->json_output) 1092 + json_out(os, "%s", os->timestamp); 1093 + else 1094 + fprintf(output, "%s", os->timestamp); 1095 + } else if (config->summary && config->csv_output && 1096 + !config->no_csv_summary) 1111 1097 fprintf(output, "%s%s", "summary", config->csv_sep); 1112 1098 } 1113 1099 ··· 1138 1114 1139 1115 if (config->json_output) 1140 1116 fputc('{', config->output); 1141 - if (os->prefix) 1142 - 
fprintf(config->output, "%s", os->prefix); 1143 1117 1118 + if (config->interval) { 1119 + if (config->json_output) 1120 + json_out(os, "%s", os->timestamp); 1121 + else 1122 + fprintf(config->output, "%s", os->timestamp); 1123 + } 1144 1124 evsel = evlist__first(evlist); 1145 1125 id = config->aggr_map->map[aggr_idx]; 1146 1126 aggr = &evsel->stats->aggr[aggr_idx]; 1147 - aggr_printout(config, evsel, id, aggr->nr); 1127 + aggr_printout(config, os, evsel, id, aggr->nr); 1148 1128 1149 - print_cgroup(config, os->cgrp ? : evsel->cgrp); 1129 + print_cgroup(config, os, os->cgrp ? : evsel->cgrp); 1150 1130 } 1151 1131 1152 1132 static void print_metric_end(struct perf_stat_config *config, struct outstate *os) ··· 1329 1301 struct perf_stat_output_ctx out = { 1330 1302 .ctx = &os, 1331 1303 .print_metric = print_metric_header, 1332 - .new_line = new_line_metric, 1304 + .new_line = NULL, 1333 1305 .force_header = true, 1334 1306 }; 1335 1307 ··· 1364 1336 fputc('\n', config->output); 1365 1337 } 1366 1338 1367 - static void prepare_interval(struct perf_stat_config *config, 1368 - char *prefix, size_t len, struct timespec *ts) 1339 + static void prepare_timestamp(struct perf_stat_config *config, 1340 + struct outstate *os, struct timespec *ts) 1369 1341 { 1370 1342 if (config->iostat_run) 1371 1343 return; 1372 1344 1373 1345 if (config->json_output) 1374 - scnprintf(prefix, len, "\"interval\" : %lu.%09lu, ", 1346 + scnprintf(os->timestamp, sizeof(os->timestamp), "\"interval\" : %lu.%09lu", 1375 1347 (unsigned long) ts->tv_sec, ts->tv_nsec); 1376 1348 else if (config->csv_output) 1377 - scnprintf(prefix, len, "%lu.%09lu%s", 1349 + scnprintf(os->timestamp, sizeof(os->timestamp), "%lu.%09lu%s", 1378 1350 (unsigned long) ts->tv_sec, ts->tv_nsec, config->csv_sep); 1379 1351 else 1380 - scnprintf(prefix, len, "%6lu.%09lu ", 1352 + scnprintf(os->timestamp, sizeof(os->timestamp), "%6lu.%09lu ", 1381 1353 (unsigned long) ts->tv_sec, ts->tv_nsec); 1382 1354 } 1383 1355 ··· 1585 
1557 fprintf(output, " %17.*f +- %.*f seconds time elapsed", 1586 1558 precision, avg, precision, sd); 1587 1559 1588 - print_noise_pct(config, sd, avg, /*before_metric=*/false); 1560 + print_noise_pct(config, NULL, sd, avg, /*before_metric=*/false); 1589 1561 } 1590 1562 fprintf(output, "\n\n"); 1591 1563 ··· 1700 1672 int argc, const char **argv) 1701 1673 { 1702 1674 bool metric_only = config->metric_only; 1703 - int interval = config->interval; 1704 1675 struct evsel *counter; 1705 - char buf[64]; 1706 1676 struct outstate os = { 1707 1677 .fh = config->output, 1708 1678 .first = true, ··· 1711 1685 if (config->iostat_run) 1712 1686 evlist->selected = evlist__first(evlist); 1713 1687 1714 - if (interval) { 1715 - os.prefix = buf; 1716 - prepare_interval(config, buf, sizeof(buf), ts); 1717 - } 1688 + if (config->interval) 1689 + prepare_timestamp(config, &os, ts); 1718 1690 1719 1691 print_header(config, _target, evlist, argc, argv); 1720 1692 ··· 1731 1707 case AGGR_THREAD: 1732 1708 case AGGR_GLOBAL: 1733 1709 if (config->iostat_run) { 1734 - iostat_print_counters(evlist, config, ts, buf, 1710 + iostat_print_counters(evlist, config, ts, os.timestamp, 1735 1711 (iostat_print_counter_t)print_counter, &os); 1736 1712 } else if (config->cgroup_list) { 1737 1713 print_cgroup_counter(config, evlist, &os);
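The pattern repeated throughout this file replaces direct fprintf() calls with json_out(), which centralizes the ", " field-separator handling in one place via a "first" flag on the output state, instead of every call site hard-coding trailing commas. A minimal standalone sketch of that idea (the struct and function below are simplified stand-ins, not perf's actual definitions):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Simplified stand-in for perf's struct outstate. */
struct outstate {
	char buf[256];   /* accumulate output here for demonstration */
	size_t len;
	int first;       /* no comma before the first field of an object */
};

/* Emit one JSON fragment, prefixing ", " for every field after the first. */
static void json_out(struct outstate *os, const char *fmt, ...)
{
	va_list ap;

	if (!os->first)
		os->len += snprintf(os->buf + os->len, sizeof(os->buf) - os->len, ", ");
	os->first = 0;

	va_start(ap, fmt);
	os->len += vsnprintf(os->buf + os->len, sizeof(os->buf) - os->len, fmt, ap);
	va_end(ap);
}
```

Resetting `first` to true at the start of each object (as new_line_json does above) is what makes the per-call-site comma bookkeeping unnecessary.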
+3 -2
tools/perf/util/stat-shadow.c
···
 				      "insn per cycle", 0);
 	}
 	if (max_stalled && instructions) {
-		out->new_line(config, ctxp);
+		if (out->new_line)
+			out->new_line(config, ctxp);
 		print_metric(config, ctxp, METRIC_THRESHOLD_UNKNOWN, "%7.2f ",
 			     "stalled cycles per insn", max_stalled / instructions);
 	}
···
 		}
 	}

-	if ((*num)++ > 0)
+	if ((*num)++ > 0 && out->new_line)
 		out->new_line(config, ctxp);
 	generic_metric(config, mexp, evsel, aggr_idx, out);
 }
+2 -1
tools/perf/util/stat.h
···
 	unsigned int topdown_level;
 };

+extern struct perf_stat_config stat_config;
+
 void perf_stat__set_big_num(int set);
-void perf_stat__set_no_csv_summary(int set);

 void update_stats(struct stats *stats, u64 val);
 double avg_stats(struct stats *stats);
+3 -4
tools/perf/util/stream.c
···
 			goto err;

 		s->nr_streams_max = nr_streams_max;
-		s->evsel_idx = -1;
 	}

 	els->ev_streams = es;
···
 		hists__output_resort(hists, NULL);
 		init_hot_callchain(hists, &es[i]);
-		es[i].evsel_idx = pos->core.idx;
+		es[i].evsel = pos;
 		i++;
 	}
···
 }

 struct evsel_streams *evsel_streams__entry(struct evlist_streams *els,
-					   int evsel_idx)
+					   const struct evsel *evsel)
 {
 	struct evsel_streams *es = els->ev_streams;

 	for (int i = 0; i < els->nr_evsel; i++) {
-		if (es[i].evsel_idx == evsel_idx)
+		if (es[i].evsel == evsel)
 			return &es[i];
 	}
+5 -5
tools/perf/util/stream.h
···
 #ifndef __PERF_STREAM_H
 #define __PERF_STREAM_H

-#include "callchain.h"
+struct callchain_node;
+struct evlist;
+struct evsel;

 struct stream {
 	struct callchain_node *cnode;
···

 struct evsel_streams {
 	struct stream *streams;
+	const struct evsel *evsel;
 	int nr_streams_max;
 	int nr_streams;
-	int evsel_idx;
 	u64 streams_hits;
 };
···
 	int nr_evsel;
 };

-struct evlist;
-
 void evlist_streams__delete(struct evlist_streams *els);

 struct evlist_streams *evlist__create_streams(struct evlist *evlist,
 					      int nr_streams_max);

 struct evsel_streams *evsel_streams__entry(struct evlist_streams *els,
-					   int evsel_idx);
+					   const struct evsel *evsel);

 void evsel_streams__match(struct evsel_streams *es_base,
 			  struct evsel_streams *es_pair);
+12 -3
tools/perf/util/string.c
···

 	do {
 		ptr = strpbrk(str, stopset);
-		if (ptr == str ||
-		    (ptr == str + 1 && *(ptr - 1) != '\\'))
+		if (!ptr) {
+			/* stopset not in str. */
 			break;
+		}
+		if (ptr == str) {
+			/* stopset character is first in str. */
+			break;
+		}
+		if (ptr == str + 1 && str[0] != '\\') {
+			/* stopset character is second and wasn't preceded by a '\'. */
+			break;
+		}
 		str = ptr + 1;
-	} while (ptr && *(ptr - 1) == '\\' && *(ptr - 2) != '\\');
+	} while (ptr[-1] == '\\' && ptr[-2] != '\\');

 	return ptr;
 }
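The rewrite above splits one hard-to-read compound condition into three commented early exits. A standalone sketch of the same scan, assuming the helper's job is to find the first stopset character that is not escaped by a single backslash (the function name here is hypothetical, not perf's):

```c
#include <string.h>
#include <assert.h>

/*
 * Find the first character from @stopset in @str that is not preceded by
 * a single backslash escape. Mirrors the rewritten exit conditions: no
 * match at all, a match in the first position (cannot be escaped), or a
 * match in the second position with no '\' before it.
 */
static char *find_unescaped(char *str, const char *stopset)
{
	char *ptr;

	do {
		ptr = strpbrk(str, stopset);
		if (!ptr)                 /* no stopset character at all */
			break;
		if (ptr == str)           /* first character: cannot be escaped */
			break;
		if (ptr == str + 1 && str[0] != '\\')
			break;            /* second character, no '\' before it */
		str = ptr + 1;            /* escaped: keep scanning past it */
	} while (ptr[-1] == '\\' && ptr[-2] != '\\');

	return ptr;
}
```

For example, scanning "a\\,b,c" for "," skips the escaped comma and returns a pointer to the second one.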
+1
tools/perf/util/svghelper.c
···
 #include <perf/cpumap.h>

 #include "env.h"
+#include "perf.h"
 #include "svghelper.h"

 static u64 first_time, last_time;
+4 -2
tools/perf/util/symbol-elf.c
···
  * Demangle C++ function signature, typically replaced by demangle-cxx.cpp
  * version.
  */
-__weak char *cxx_demangle_sym(const char *str __maybe_unused, bool params __maybe_unused,
-			      bool modifiers __maybe_unused)
+#ifndef HAVE_CXA_DEMANGLE_SUPPORT
+char *cxx_demangle_sym(const char *str __maybe_unused, bool params __maybe_unused,
+		       bool modifiers __maybe_unused)
 {
 #ifdef HAVE_LIBBFD_SUPPORT
 	int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0);
···
 	return NULL;
 #endif
 }
+#endif /* !HAVE_CXA_DEMANGLE_SUPPORT */

 static char *demangle_sym(struct dso *dso, int kmodule, const char *elf_name)
 {
+8 -1
tools/perf/util/symbol.c
···
 	else if ((a == 0) && (b > 0))
 		return SYMBOL_B;

+	if (syma->type != symb->type) {
+		if (syma->type == STT_NOTYPE)
+			return SYMBOL_B;
+		if (symb->type == STT_NOTYPE)
+			return SYMBOL_A;
+	}
+
 	/* Prefer a non weak symbol over a weak one */
 	a = syma->binding == STB_WEAK;
 	b = symb->binding == STB_WEAK;
···
 	 * like in:
 	 *  ffffffffc1937000 T hdmi_driver_init  [snd_hda_codec_hdmi]
 	 */
-	if (prev->end == prev->start && prev->type != STT_NOTYPE) {
+	if (prev->end == prev->start) {
 		const char *prev_mod;
 		const char *curr_mod;
+9 -5
tools/perf/util/synthetic-events.c
···
 	}

 	if (type & PERF_SAMPLE_RAW) {
-		u.val32[0] = sample->raw_size;
-		*array = u.val64;
-		array = (void *)array + sizeof(u32);
+		u32 *array32 = (void *)array;

-		memcpy(array, sample->raw_data, sample->raw_size);
-		array = (void *)array + sample->raw_size;
+		*array32 = sample->raw_size;
+		array32++;
+
+		memcpy(array32, sample->raw_data, sample->raw_size);
+		array = (void *)(array32 + (sample->raw_size / sizeof(u32)));
+
+		/* make sure the array is 64-bit aligned */
+		BUG_ON(((long)array) % sizeof(u64));
 	}

 	if (type & PERF_SAMPLE_BRANCH_STACK) {
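The rewrite above replaces void-pointer arithmetic with an explicit u32 cursor and asserts that the cursor lands back on a 64-bit boundary, which holds because the perf ABI pads raw_size so that the u32 length field plus the payload fill whole u64 words. A simplified, self-contained sketch of that layout (stdint types stand in for the kernel's u32/u64; the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/*
 * Pack a PERF_SAMPLE_RAW-style record: a leading u32 size followed by
 * raw_size bytes of payload. Returns the number of bytes advanced.
 */
static size_t pack_raw(uint64_t *array, const void *raw_data, uint32_t raw_size)
{
	uint32_t *array32 = (uint32_t *)array;

	*array32 = raw_size;            /* leading u32 size field */
	array32++;

	memcpy(array32, raw_data, raw_size);
	array32 += raw_size / sizeof(uint32_t);

	/* with ABI-conformant padding, the cursor is back on a u64 boundary */
	assert(((uintptr_t)array32 % sizeof(uint64_t)) == 0);

	return (uint8_t *)array32 - (uint8_t *)array;
}
```

For instance, a 12-byte payload advances the cursor by 4 + 12 = 16 bytes, keeping the next sample field u64-aligned.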
+3 -87
tools/perf/util/syscalltbl.c
···
 #include <linux/compiler.h>
 #include <linux/zalloc.h>

-#ifdef HAVE_SYSCALL_TABLE_SUPPORT
 #include <string.h>
 #include "string2.h"

-#if defined(__x86_64__)
-#include <asm/syscalls_64.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_x86_64_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_x86_64;
-#elif defined(__i386__)
-#include <asm/syscalls_32.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_x86_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_x86;
-#elif defined(__s390x__)
-#include <asm/syscalls_64.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_S390_64_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_s390_64;
-#elif defined(__powerpc64__)
-#include <asm/syscalls_64.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_POWERPC_64_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_powerpc_64;
-#elif defined(__powerpc__)
-#include <asm/syscalls_32.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_POWERPC_32_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_powerpc_32;
-#elif defined(__aarch64__)
-#include <asm/syscalls.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_ARM64_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_arm64;
-#elif defined(__mips__)
-#include <asm/syscalls_n64.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_MIPS_N64_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_mips_n64;
-#elif defined(__loongarch__)
-#include <asm/syscalls.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_LOONGARCH_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_loongarch;
-#elif defined(__riscv)
-#include <asm/syscalls.c>
-const int syscalltbl_native_max_id = SYSCALLTBL_RISCV_MAX_ID;
-static const char *const *syscalltbl_native = syscalltbl_riscv;
-#else
-const int syscalltbl_native_max_id = 0;
-static const char *const syscalltbl_native[] = {
-	[0] = "unknown",
-};
-#endif
+#include <syscall_table.h>
+const int syscalltbl_native_max_id = SYSCALLTBL_MAX_ID;
+static const char *const *syscalltbl_native = syscalltbl;

 struct syscall {
 	int id;
···
 	*idx = -1;
 	return syscalltbl__strglobmatch_next(tbl, syscall_glob, idx);
 }
-
-#else /* HAVE_SYSCALL_TABLE_SUPPORT */
-
-#include <libaudit.h>
-
-struct syscalltbl *syscalltbl__new(void)
-{
-	struct syscalltbl *tbl = zalloc(sizeof(*tbl));
-	if (tbl)
-		tbl->audit_machine = audit_detect_machine();
-	return tbl;
-}
-
-void syscalltbl__delete(struct syscalltbl *tbl)
-{
-	free(tbl);
-}
-
-const char *syscalltbl__name(const struct syscalltbl *tbl, int id)
-{
-	return audit_syscall_to_name(id, tbl->audit_machine);
-}
-
-int syscalltbl__id(struct syscalltbl *tbl, const char *name)
-{
-	return audit_name_to_syscall(name, tbl->audit_machine);
-}
-
-int syscalltbl__id_at_idx(struct syscalltbl *tbl __maybe_unused, int idx)
-{
-	return idx;
-}
-
-int syscalltbl__strglobmatch_next(struct syscalltbl *tbl __maybe_unused,
-				  const char *syscall_glob __maybe_unused, int *idx __maybe_unused)
-{
-	return -1;
-}
-
-int syscalltbl__strglobmatch_first(struct syscalltbl *tbl, const char *syscall_glob, int *idx)
-{
-	return syscalltbl__strglobmatch_next(tbl, syscall_glob, idx);
-}
-#endif /* HAVE_SYSCALL_TABLE_SUPPORT */
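With the generated <syscall_table.h>, one architecture-independent array replaces both the per-arch #if ladder and the libaudit fallback. A toy sketch of how such a table serves both directions of the mapping (the three entries and the helper signatures below are illustrative, not the generated table or perf's API):

```c
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Stand-in for the generated syscall table: id -> name. */
static const char *const demo_syscalltbl[] = {
	[0] = "read",
	[1] = "write",
	[2] = "open",
};
static const int demo_syscalltbl_max_id = 2;

/* id -> name is a direct, bounds-checked index. */
static const char *demo_syscall_name(int id)
{
	if (id < 0 || id > demo_syscalltbl_max_id)
		return NULL;
	return demo_syscalltbl[id];
}

/* name -> id is a linear search over the same table. */
static int demo_syscall_id(const char *name)
{
	for (int id = 0; id <= demo_syscalltbl_max_id; id++) {
		if (demo_syscalltbl[id] && !strcmp(demo_syscalltbl[id], name))
			return id;
	}
	return -1;
}
```

Since the table is generated per architecture at build time, the lookup code itself no longer needs any arch-specific conditionals.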
-1
tools/perf/util/syscalltbl.h
···
 #define __PERF_SYSCALLTBL_H

 struct syscalltbl {
-	int audit_machine;
 	struct {
 		int max_id;
 		int nr_entries;
+1 -1
tools/perf/util/trace-event-parse.c
···
 	return tep_read_number(event->tep, ptr, size);
 }

-void event_format__fprintf(struct tep_event *event,
+void event_format__fprintf(const struct tep_event *event,
 			   int cpu, void *data, int size, FILE *fp)
 {
 	struct tep_record record;
+183 -4
tools/perf/util/trace-event-scripting.c
···
 #include <event-parse.h>
 #endif

+#include "archinsn.h"
 #include "debug.h"
+#include "event.h"
 #include "trace-event.h"
 #include "evsel.h"
+#include <linux/perf_event.h>
 #include <linux/zalloc.h>
 #include "util/sample.h"

+unsigned int scripting_max_stack = PERF_MAX_STACK_DEPTH;
+
 struct scripting_context *scripting_context;
+
+struct script_spec {
+	struct list_head node;
+	struct scripting_ops *ops;
+	char spec[];
+};
+
+static LIST_HEAD(script_specs);
+
+static struct script_spec *script_spec__new(const char *spec,
+					    struct scripting_ops *ops)
+{
+	struct script_spec *s = malloc(sizeof(*s) + strlen(spec) + 1);
+
+	if (s != NULL) {
+		strcpy(s->spec, spec);
+		s->ops = ops;
+	}
+
+	return s;
+}
+
+static void script_spec__add(struct script_spec *s)
+{
+	list_add_tail(&s->node, &script_specs);
+}
+
+static struct script_spec *script_spec__find(const char *spec)
+{
+	struct script_spec *s;
+
+	list_for_each_entry(s, &script_specs, node)
+		if (strcasecmp(s->spec, spec) == 0)
+			return s;
+	return NULL;
+}
+
+static int script_spec_register(const char *spec, struct scripting_ops *ops)
+{
+	struct script_spec *s;
+
+	s = script_spec__find(spec);
+	if (s)
+		return -1;
+
+	s = script_spec__new(spec, ops);
+	if (!s)
+		return -1;
+
+	script_spec__add(s);
+	return 0;
+}
+
+struct scripting_ops *script_spec__lookup(const char *spec)
+{
+	struct script_spec *s = script_spec__find(spec);
+
+	if (!s)
+		return NULL;
+
+	return s->ops;
+}
+
+int script_spec__for_each(int (*cb)(struct scripting_ops *ops, const char *spec))
+{
+	struct script_spec *s;
+	int ret = 0;
+
+	list_for_each_entry(s, &script_specs, node) {
+		ret = cb(s->ops, s->spec);
+		if (ret)
+			break;
+	}
+	return ret;
+}

 void scripting_context__update(struct scripting_context *c,
 			       union perf_event *event,
···
 			       struct addr_location *al,
 			       struct addr_location *addr_al)
 {
-	c->event_data = sample->raw_data;
-	c->pevent = NULL;
 #ifdef HAVE_LIBTRACEEVENT
-	if (evsel->tp_format)
-		c->pevent = evsel->tp_format->tep;
+	const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+	c->pevent = tp_format ? tp_format->tep : NULL;
+#else
+	c->pevent = NULL;
 #endif
+	c->event_data = sample->raw_data;
 	c->event = event;
 	c->sample = sample;
 	c->evsel = evsel;
···
 }
 #endif
 #endif
+
+#if !defined(__i386__) && !defined(__x86_64__)
+void arch_fetch_insn(struct perf_sample *sample __maybe_unused,
+		     struct thread *thread __maybe_unused,
+		     struct machine *machine __maybe_unused)
+{
+}
+#endif
+
+void script_fetch_insn(struct perf_sample *sample, struct thread *thread,
+		       struct machine *machine, bool native_arch)
+{
+	if (sample->insn_len == 0 && native_arch)
+		arch_fetch_insn(sample, thread, machine);
+}
+
+static const struct {
+	u32 flags;
+	const char *name;
+} sample_flags[] = {
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL, "call"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN, "return"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CONDITIONAL, "jcc"},
+	{PERF_IP_FLAG_BRANCH, "jmp"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_INTERRUPT, "int"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN | PERF_IP_FLAG_INTERRUPT, "iret"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_SYSCALLRET, "syscall"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN | PERF_IP_FLAG_SYSCALLRET, "sysret"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_ASYNC, "async"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_ASYNC | PERF_IP_FLAG_INTERRUPT,
+	 "hw int"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TX_ABORT, "tx abrt"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TRACE_BEGIN, "tr strt"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TRACE_END, "tr end"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_VMENTRY, "vmentry"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_VMEXIT, "vmexit"},
+	{PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_BRANCH_MISS, "br miss"},
+	{0, NULL}
+};
+
+static const char *sample_flags_to_name(u32 flags)
+{
+	int i;
+
+	for (i = 0; sample_flags[i].name; i++) {
+		if (sample_flags[i].flags == flags)
+			return sample_flags[i].name;
+	}
+
+	return NULL;
+}
+
+int perf_sample__sprintf_flags(u32 flags, char *str, size_t sz)
+{
+	u32 xf = PERF_IP_FLAG_IN_TX | PERF_IP_FLAG_INTR_DISABLE |
+		 PERF_IP_FLAG_INTR_TOGGLE;
+	const char *chars = PERF_IP_FLAG_CHARS;
+	const size_t n = strlen(PERF_IP_FLAG_CHARS);
+	const char *name = NULL;
+	size_t i, pos = 0;
+	char xs[16] = {0};
+
+	if (flags & xf)
+		snprintf(xs, sizeof(xs), "(%s%s%s)",
+			 flags & PERF_IP_FLAG_IN_TX ? "x" : "",
+			 flags & PERF_IP_FLAG_INTR_DISABLE ? "D" : "",
+			 flags & PERF_IP_FLAG_INTR_TOGGLE ? "t" : "");
+
+	name = sample_flags_to_name(flags & ~xf);
+	if (name)
+		return snprintf(str, sz, "%-15s%6s", name, xs);
+
+	if (flags & PERF_IP_FLAG_TRACE_BEGIN) {
+		name = sample_flags_to_name(flags & ~(xf | PERF_IP_FLAG_TRACE_BEGIN));
+		if (name)
+			return snprintf(str, sz, "tr strt %-7s%6s", name, xs);
+	}
+
+	if (flags & PERF_IP_FLAG_TRACE_END) {
+		name = sample_flags_to_name(flags & ~(xf | PERF_IP_FLAG_TRACE_END));
+		if (name)
+			return snprintf(str, sz, "tr end  %-7s%6s", name, xs);
+	}
+
+	for (i = 0; i < n; i++, flags >>= 1) {
+		if ((flags & 1) && pos < sz)
+			str[pos++] = chars[i];
+	}
+	for (; i < 32; i++, flags >>= 1) {
+		if ((flags & 1) && pos < sz)
+			str[pos++] = '?';
+	}
+	if (pos < sz)
+		str[pos] = 0;
+
+	return pos;
+}
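perf_sample__sprintf_flags() resolves a branch-flag combination by exact match against a name table before falling back to per-bit characters. A reduced sketch of the exact-match stage (the flag bits and names below are made up for illustration; they are not the PERF_IP_FLAG_* values):

```c
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Made-up flag bits standing in for PERF_IP_FLAG_*. */
#define DEMO_FLAG_BRANCH 0x1u
#define DEMO_FLAG_CALL   0x2u
#define DEMO_FLAG_RETURN 0x4u

/* Exact flag combinations map to short names, most specific first. */
static const struct {
	unsigned int flags;
	const char *name;
} demo_flag_names[] = {
	{ DEMO_FLAG_BRANCH | DEMO_FLAG_CALL,   "call" },
	{ DEMO_FLAG_BRANCH | DEMO_FLAG_RETURN, "return" },
	{ DEMO_FLAG_BRANCH,                    "jmp" },
	{ 0, NULL }
};

/* Return the name for an exact flag combination, or NULL if none matches. */
static const char *demo_flags_to_name(unsigned int flags)
{
	for (int i = 0; demo_flag_names[i].name; i++) {
		if (demo_flag_names[i].flags == flags)
			return demo_flag_names[i].name;
	}
	return NULL;
}
```

Because the table requires an exact match rather than a subset test, the real function strips modifier bits (the in-tx/interrupt-toggle mask) before the lookup and retries without the trace-begin/end bits when the first lookup fails.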
+4 -3
tools/perf/util/trace-event.h
···

 struct tep_event *trace_event__tp_format_id(int id);

-void event_format__fprintf(struct tep_event *event,
+void event_format__fprintf(const struct tep_event *event,
 			   int cpu, void *data, int size, FILE *fp);

 int parse_ftrace_file(struct tep_handle *pevent, char *buf, unsigned long size);
···

 extern unsigned int scripting_max_stack;

-int script_spec_register(const char *spec, struct scripting_ops *ops);
+struct scripting_ops *script_spec__lookup(const char *spec);
+int script_spec__for_each(int (*cb)(struct scripting_ops *ops, const char *spec));

 void script_fetch_insn(struct perf_sample *sample, struct thread *thread,
-		       struct machine *machine);
+		       struct machine *machine, bool native_arch);

 void setup_perl_scripting(void);
 void setup_python_scripting(void);
+45 -61
tools/perf/util/values.c
···

 #include "values.h"
 #include "debug.h"
+#include "evsel.h"

 int perf_read_values_init(struct perf_read_values *values)
 {
···
 	values->threads = 0;

 	values->counters_max = 16;
-	values->counterrawid = malloc(values->counters_max
-				      * sizeof(*values->counterrawid));
-	values->countername = malloc(values->counters_max
-				     * sizeof(*values->countername));
-	if (!values->counterrawid || !values->countername) {
-		pr_debug("failed to allocate read_values counters arrays");
+	values->counters = malloc(values->counters_max * sizeof(*values->counters));
+	if (!values->counters) {
+		pr_debug("failed to allocate read_values counters array");
 		goto out_free_counter;
 	}
-	values->counters = 0;
+	values->num_counters = 0;

 	return 0;

 out_free_counter:
-	zfree(&values->counterrawid);
-	zfree(&values->countername);
+	zfree(&values->counters);
 out_free_pid:
 	zfree(&values->pid);
 	zfree(&values->tid);
···
 	zfree(&values->value);
 	zfree(&values->pid);
 	zfree(&values->tid);
-	zfree(&values->counterrawid);
-	for (i = 0; i < values->counters; i++)
-		zfree(&values->countername[i]);
-	zfree(&values->countername);
+	zfree(&values->counters);
 }

 static int perf_read_values__enlarge_threads(struct perf_read_values *values)
···

 static int perf_read_values__enlarge_counters(struct perf_read_values *values)
 {
-	char **countername;
-	int i, counters_max = values->counters_max * 2;
-	u64 *counterrawid = realloc(values->counterrawid, counters_max * sizeof(*values->counterrawid));
+	int counters_max = values->counters_max * 2;
+	struct evsel **new_counters = realloc(values->counters,
+					      counters_max * sizeof(*values->counters));

-	if (!counterrawid) {
-		pr_debug("failed to enlarge read_values rawid array");
+	if (!new_counters) {
+		pr_debug("failed to enlarge read_values counters array");
 		goto out_enomem;
 	}

-	countername = realloc(values->countername, counters_max * sizeof(*values->countername));
-	if (!countername) {
-		pr_debug("failed to enlarge read_values rawid array");
-		goto out_free_rawid;
-	}
-
-	for (i = 0; i < values->threads; i++) {
+	for (int i = 0; i < values->threads; i++) {
 		u64 *value = realloc(values->value[i], counters_max * sizeof(**values->value));
-		int j;

 		if (!value) {
 			pr_debug("failed to enlarge read_values ->values array");
-			goto out_free_name;
+			goto out_free_counters;
 		}

-		for (j = values->counters_max; j < counters_max; j++)
+		for (int j = values->counters_max; j < counters_max; j++)
 			value[j] = 0;

 		values->value[i] = value;
 	}

 	values->counters_max = counters_max;
-	values->counterrawid = counterrawid;
-	values->countername = countername;
+	values->counters = new_counters;

 	return 0;
-out_free_name:
-	free(countername);
-out_free_rawid:
-	free(counterrawid);
+out_free_counters:
+	free(new_counters);
 out_enomem:
 	return -ENOMEM;
 }

 static int perf_read_values__findnew_counter(struct perf_read_values *values,
-					     u64 rawid, const char *name)
+					     struct evsel *evsel)
 {
 	int i;

-	for (i = 0; i < values->counters; i++)
-		if (values->counterrawid[i] == rawid)
+	for (i = 0; i < values->num_counters; i++)
+		if (values->counters[i] == evsel)
 			return i;

-	if (values->counters == values->counters_max) {
-		i = perf_read_values__enlarge_counters(values);
-		if (i)
-			return i;
+	if (values->num_counters == values->counters_max) {
+		int err = perf_read_values__enlarge_counters(values);
+
+		if (err)
+			return err;
 	}

-	i = values->counters++;
-	values->counterrawid[i] = rawid;
-	values->countername[i] = strdup(name);
+	i = values->num_counters++;
+	values->counters[i] = evsel;

 	return i;
 }

 int perf_read_values_add_value(struct perf_read_values *values,
 			       u32 pid, u32 tid,
-			       u64 rawid, const char *name, u64 value)
+			       struct evsel *evsel, u64 value)
 {
 	int tindex, cindex;

 	tindex = perf_read_values__findnew_thread(values, pid, tid);
 	if (tindex < 0)
 		return tindex;
-	cindex = perf_read_values__findnew_counter(values, rawid, name);
+	cindex = perf_read_values__findnew_counter(values, evsel);
 	if (cindex < 0)
 		return cindex;
···
 	int pidwidth, tidwidth;
 	int *counterwidth;

-	counterwidth = malloc(values->counters * sizeof(*counterwidth));
+	counterwidth = malloc(values->num_counters * sizeof(*counterwidth));
 	if (!counterwidth) {
 		fprintf(fp, "INTERNAL ERROR: Failed to allocate counterwidth array\n");
 		return;
 	}
 	tidwidth = 3;
 	pidwidth = 3;
-	for (j = 0; j < values->counters; j++)
-		counterwidth[j] = strlen(values->countername[j]);
+	for (j = 0; j < values->num_counters; j++)
+		counterwidth[j] = strlen(evsel__name(values->counters[j]));
 	for (i = 0; i < values->threads; i++) {
 		int width;

···
 		width = snprintf(NULL, 0, "%d", values->tid[i]);
 		if (width > tidwidth)
 			tidwidth = width;
-		for (j = 0; j < values->counters; j++) {
+		for (j = 0; j < values->num_counters; j++) {
 			width = snprintf(NULL, 0, "%" PRIu64, values->value[i][j]);
 			if (width > counterwidth[j])
 				counterwidth[j] = width;
···
 	}

 	fprintf(fp, "# %*s %*s", pidwidth, "PID", tidwidth, "TID");
-	for (j = 0; j < values->counters; j++)
-		fprintf(fp, " %*s", counterwidth[j], values->countername[j]);
+	for (j = 0; j < values->num_counters; j++)
+		fprintf(fp, " %*s", counterwidth[j], evsel__name(values->counters[j]));
 	fprintf(fp, "\n");

 	for (i = 0; i < values->threads; i++) {
 		fprintf(fp, " %*d %*d", pidwidth, values->pid[i],
 			tidwidth, values->tid[i]);
-		for (j = 0; j < values->counters; j++)
+		for (j = 0; j < values->num_counters; j++)
 			fprintf(fp, " %*" PRIu64,
 				counterwidth[j], values->value[i][j]);
 		fprintf(fp, "\n");
···
 		if (width > tidwidth)
 			tidwidth = width;
 	}
-	for (j = 0; j < values->counters; j++) {
-		width = strlen(values->countername[j]);
+	for (j = 0; j < values->num_counters; j++) {
+		width = strlen(evsel__name(values->counters[j]));
 		if (width > namewidth)
 			namewidth = width;
-		width = snprintf(NULL, 0, "%" PRIx64, values->counterrawid[j]);
+		width = snprintf(NULL, 0, "%x", values->counters[j]->core.idx);
 		if (width > rawwidth)
 			rawwidth = width;
 	}
 	for (i = 0; i < values->threads; i++) {
-		for (j = 0; j < values->counters; j++) {
+		for (j = 0; j < values->num_counters; j++) {
 			width = snprintf(NULL, 0, "%" PRIu64, values->value[i][j]);
 			if (width > countwidth)
 				countwidth = width;
···
 		namewidth, "Name", rawwidth, "Raw",
 		countwidth, "Count");
 	for (i = 0; i < values->threads; i++)
-		for (j = 0; j < values->counters; j++)
-			fprintf(fp, "  %*d  %*d  %*s  %*" PRIx64 "  %*" PRIu64,
+		for (j = 0; j < values->num_counters; j++)
+			fprintf(fp, "  %*d  %*d  %*s  %*x  %*" PRIu64,
 				pidwidth, values->pid[i],
 				tidwidth, values->tid[i],
-				namewidth, values->countername[j],
-				rawwidth, values->counterrawid[j],
+				namewidth, evsel__name(values->counters[j]),
+				rawwidth, values->counters[j]->core.idx,
 				countwidth, values->value[i][j]);
 }
+5 -4
tools/perf/util/values.h
···
 #include <stdio.h>
 #include <linux/types.h>
 
+struct evsel;
+
 struct perf_read_values {
 	int threads;
 	int threads_max;
 	u32 *pid, *tid;
-	int counters;
+	int num_counters;
 	int counters_max;
-	u64 *counterrawid;
-	char **countername;
+	struct evsel **counters;
 	u64 **value;
 };
···
 
 int perf_read_values_add_value(struct perf_read_values *values,
 			       u32 pid, u32 tid,
-			       u64 rawid, const char *name, u64 value);
+			       struct evsel *evsel, u64 value);
 
 void perf_read_values_display(FILE *fp, struct perf_read_values *values,
 			      int raw);
+409
tools/scripts/syscall.tbl
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# This file contains the system call numbers for all of the
+# more recently added architectures.
+#
+# As a basic principle, no duplication of functionality
+# should be added, e.g. we don't use lseek when llseek
+# is present. New architectures should use this file
+# and implement the less feature-full calls in user space.
+#
+0	common	io_setup	sys_io_setup	compat_sys_io_setup
+1	common	io_destroy	sys_io_destroy
+2	common	io_submit	sys_io_submit	compat_sys_io_submit
+3	common	io_cancel	sys_io_cancel
+4	time32	io_getevents	sys_io_getevents_time32
+4	64	io_getevents	sys_io_getevents
+5	common	setxattr	sys_setxattr
+6	common	lsetxattr	sys_lsetxattr
+7	common	fsetxattr	sys_fsetxattr
+8	common	getxattr	sys_getxattr
+9	common	lgetxattr	sys_lgetxattr
+10	common	fgetxattr	sys_fgetxattr
+11	common	listxattr	sys_listxattr
+12	common	llistxattr	sys_llistxattr
+13	common	flistxattr	sys_flistxattr
+14	common	removexattr	sys_removexattr
+15	common	lremovexattr	sys_lremovexattr
+16	common	fremovexattr	sys_fremovexattr
+17	common	getcwd	sys_getcwd
+18	common	lookup_dcookie	sys_ni_syscall
+19	common	eventfd2	sys_eventfd2
+20	common	epoll_create1	sys_epoll_create1
+21	common	epoll_ctl	sys_epoll_ctl
+22	common	epoll_pwait	sys_epoll_pwait	compat_sys_epoll_pwait
+23	common	dup	sys_dup
+24	common	dup3	sys_dup3
+25	32	fcntl64	sys_fcntl64	compat_sys_fcntl64
+25	64	fcntl	sys_fcntl
+26	common	inotify_init1	sys_inotify_init1
+27	common	inotify_add_watch	sys_inotify_add_watch
+28	common	inotify_rm_watch	sys_inotify_rm_watch
+29	common	ioctl	sys_ioctl	compat_sys_ioctl
+30	common	ioprio_set	sys_ioprio_set
+31	common	ioprio_get	sys_ioprio_get
+32	common	flock	sys_flock
+33	common	mknodat	sys_mknodat
+34	common	mkdirat	sys_mkdirat
+35	common	unlinkat	sys_unlinkat
+36	common	symlinkat	sys_symlinkat
+37	common	linkat	sys_linkat
+# renameat is superseded with flags by renameat2
+38	renameat	renameat	sys_renameat
+39	common	umount2	sys_umount
+40	common	mount	sys_mount
+41	common	pivot_root	sys_pivot_root
+42	common	nfsservctl	sys_ni_syscall
+43	32	statfs64	sys_statfs64	compat_sys_statfs64
+43	64	statfs	sys_statfs
+44	32	fstatfs64	sys_fstatfs64	compat_sys_fstatfs64
+44	64	fstatfs	sys_fstatfs
+45	32	truncate64	sys_truncate64	compat_sys_truncate64
+45	64	truncate	sys_truncate
+46	32	ftruncate64	sys_ftruncate64	compat_sys_ftruncate64
+46	64	ftruncate	sys_ftruncate
+47	common	fallocate	sys_fallocate	compat_sys_fallocate
+48	common	faccessat	sys_faccessat
+49	common	chdir	sys_chdir
+50	common	fchdir	sys_fchdir
+51	common	chroot	sys_chroot
+52	common	fchmod	sys_fchmod
+53	common	fchmodat	sys_fchmodat
+54	common	fchownat	sys_fchownat
+55	common	fchown	sys_fchown
+56	common	openat	sys_openat
+57	common	close	sys_close
+58	common	vhangup	sys_vhangup
+59	common	pipe2	sys_pipe2
+60	common	quotactl	sys_quotactl
+61	common	getdents64	sys_getdents64
+62	32	llseek	sys_llseek
+62	64	lseek	sys_lseek
+63	common	read	sys_read
+64	common	write	sys_write
+65	common	readv	sys_readv	sys_readv
+66	common	writev	sys_writev	sys_writev
+67	common	pread64	sys_pread64	compat_sys_pread64
+68	common	pwrite64	sys_pwrite64	compat_sys_pwrite64
+69	common	preadv	sys_preadv	compat_sys_preadv
+70	common	pwritev	sys_pwritev	compat_sys_pwritev
+71	32	sendfile64	sys_sendfile64
+71	64	sendfile	sys_sendfile64
+72	time32	pselect6	sys_pselect6_time32	compat_sys_pselect6_time32
+72	64	pselect6	sys_pselect6
+73	time32	ppoll	sys_ppoll_time32	compat_sys_ppoll_time32
+73	64	ppoll	sys_ppoll
+74	common	signalfd4	sys_signalfd4	compat_sys_signalfd4
+75	common	vmsplice	sys_vmsplice
+76	common	splice	sys_splice
+77	common	tee	sys_tee
+78	common	readlinkat	sys_readlinkat
+79	stat64	fstatat64	sys_fstatat64
+79	64	newfstatat	sys_newfstatat
+80	stat64	fstat64	sys_fstat64
+80	64	fstat	sys_newfstat
+81	common	sync	sys_sync
+82	common	fsync	sys_fsync
+83	common	fdatasync	sys_fdatasync
+84	common	sync_file_range	sys_sync_file_range	compat_sys_sync_file_range
+85	common	timerfd_create	sys_timerfd_create
+86	time32	timerfd_settime	sys_timerfd_settime32
+86	64	timerfd_settime	sys_timerfd_settime
+87	time32	timerfd_gettime	sys_timerfd_gettime32
+87	64	timerfd_gettime	sys_timerfd_gettime
+88	time32	utimensat	sys_utimensat_time32
+88	64	utimensat	sys_utimensat
+89	common	acct	sys_acct
+90	common	capget	sys_capget
+91	common	capset	sys_capset
+92	common	personality	sys_personality
+93	common	exit	sys_exit
+94	common	exit_group	sys_exit_group
+95	common	waitid	sys_waitid	compat_sys_waitid
+96	common	set_tid_address	sys_set_tid_address
+97	common	unshare	sys_unshare
+98	time32	futex	sys_futex_time32
+98	64	futex	sys_futex
+99	common	set_robust_list	sys_set_robust_list	compat_sys_set_robust_list
+100	common	get_robust_list	sys_get_robust_list	compat_sys_get_robust_list
+101	time32	nanosleep	sys_nanosleep_time32
+101	64	nanosleep	sys_nanosleep
+102	common	getitimer	sys_getitimer	compat_sys_getitimer
+103	common	setitimer	sys_setitimer	compat_sys_setitimer
+104	common	kexec_load	sys_kexec_load	compat_sys_kexec_load
+105	common	init_module	sys_init_module
+106	common	delete_module	sys_delete_module
+107	common	timer_create	sys_timer_create	compat_sys_timer_create
+108	time32	timer_gettime	sys_timer_gettime32
+108	64	timer_gettime	sys_timer_gettime
+109	common	timer_getoverrun	sys_timer_getoverrun
+110	time32	timer_settime	sys_timer_settime32
+110	64	timer_settime	sys_timer_settime
+111	common	timer_delete	sys_timer_delete
+112	time32	clock_settime	sys_clock_settime32
+112	64	clock_settime	sys_clock_settime
+113	time32	clock_gettime	sys_clock_gettime32
+113	64	clock_gettime	sys_clock_gettime
+114	time32	clock_getres	sys_clock_getres_time32
+114	64	clock_getres	sys_clock_getres
+115	time32	clock_nanosleep	sys_clock_nanosleep_time32
+115	64	clock_nanosleep	sys_clock_nanosleep
+116	common	syslog	sys_syslog
+117	common	ptrace	sys_ptrace	compat_sys_ptrace
+118	common	sched_setparam	sys_sched_setparam
+119	common	sched_setscheduler	sys_sched_setscheduler
+120	common	sched_getscheduler	sys_sched_getscheduler
+121	common	sched_getparam	sys_sched_getparam
+122	common	sched_setaffinity	sys_sched_setaffinity	compat_sys_sched_setaffinity
+123	common	sched_getaffinity	sys_sched_getaffinity	compat_sys_sched_getaffinity
+124	common	sched_yield	sys_sched_yield
+125	common	sched_get_priority_max	sys_sched_get_priority_max
+126	common	sched_get_priority_min	sys_sched_get_priority_min
+127	time32	sched_rr_get_interval	sys_sched_rr_get_interval_time32
+127	64	sched_rr_get_interval	sys_sched_rr_get_interval
+128	common	restart_syscall	sys_restart_syscall
+129	common	kill	sys_kill
+130	common	tkill	sys_tkill
+131	common	tgkill	sys_tgkill
+132	common	sigaltstack	sys_sigaltstack	compat_sys_sigaltstack
+133	common	rt_sigsuspend	sys_rt_sigsuspend	compat_sys_rt_sigsuspend
+134	common	rt_sigaction	sys_rt_sigaction	compat_sys_rt_sigaction
+135	common	rt_sigprocmask	sys_rt_sigprocmask	compat_sys_rt_sigprocmask
+136	common	rt_sigpending	sys_rt_sigpending	compat_sys_rt_sigpending
+137	time32	rt_sigtimedwait	sys_rt_sigtimedwait_time32	compat_sys_rt_sigtimedwait_time32
+137	64	rt_sigtimedwait	sys_rt_sigtimedwait
+138	common	rt_sigqueueinfo	sys_rt_sigqueueinfo	compat_sys_rt_sigqueueinfo
+139	common	rt_sigreturn	sys_rt_sigreturn	compat_sys_rt_sigreturn
+140	common	setpriority	sys_setpriority
+141	common	getpriority	sys_getpriority
+142	common	reboot	sys_reboot
+143	common	setregid	sys_setregid
+144	common	setgid	sys_setgid
+145	common	setreuid	sys_setreuid
+146	common	setuid	sys_setuid
+147	common	setresuid	sys_setresuid
+148	common	getresuid	sys_getresuid
+149	common	setresgid	sys_setresgid
+150	common	getresgid	sys_getresgid
+151	common	setfsuid	sys_setfsuid
+152	common	setfsgid	sys_setfsgid
+153	common	times	sys_times	compat_sys_times
+154	common	setpgid	sys_setpgid
+155	common	getpgid	sys_getpgid
+156	common	getsid	sys_getsid
+157	common	setsid	sys_setsid
+158	common	getgroups	sys_getgroups
+159	common	setgroups	sys_setgroups
+160	common	uname	sys_newuname
+161	common	sethostname	sys_sethostname
+162	common	setdomainname	sys_setdomainname
+# getrlimit and setrlimit are superseded with prlimit64
+163	rlimit	getrlimit	sys_getrlimit	compat_sys_getrlimit
+164	rlimit	setrlimit	sys_setrlimit	compat_sys_setrlimit
+165	common	getrusage	sys_getrusage	compat_sys_getrusage
+166	common	umask	sys_umask
+167	common	prctl	sys_prctl
+168	common	getcpu	sys_getcpu
+169	time32	gettimeofday	sys_gettimeofday	compat_sys_gettimeofday
+169	64	gettimeofday	sys_gettimeofday
+170	time32	settimeofday	sys_settimeofday	compat_sys_settimeofday
+170	64	settimeofday	sys_settimeofday
+171	time32	adjtimex	sys_adjtimex_time32
+171	64	adjtimex	sys_adjtimex
+172	common	getpid	sys_getpid
+173	common	getppid	sys_getppid
+174	common	getuid	sys_getuid
+175	common	geteuid	sys_geteuid
+176	common	getgid	sys_getgid
+177	common	getegid	sys_getegid
+178	common	gettid	sys_gettid
+179	common	sysinfo	sys_sysinfo	compat_sys_sysinfo
+180	common	mq_open	sys_mq_open	compat_sys_mq_open
+181	common	mq_unlink	sys_mq_unlink
+182	time32	mq_timedsend	sys_mq_timedsend_time32
+182	64	mq_timedsend	sys_mq_timedsend
+183	time32	mq_timedreceive	sys_mq_timedreceive_time32
+183	64	mq_timedreceive	sys_mq_timedreceive
+184	common	mq_notify	sys_mq_notify	compat_sys_mq_notify
+185	common	mq_getsetattr	sys_mq_getsetattr	compat_sys_mq_getsetattr
+186	common	msgget	sys_msgget
+187	common	msgctl	sys_msgctl	compat_sys_msgctl
+188	common	msgrcv	sys_msgrcv	compat_sys_msgrcv
+189	common	msgsnd	sys_msgsnd	compat_sys_msgsnd
+190	common	semget	sys_semget
+191	common	semctl	sys_semctl	compat_sys_semctl
+192	time32	semtimedop	sys_semtimedop_time32
+192	64	semtimedop	sys_semtimedop
+193	common	semop	sys_semop
+194	common	shmget	sys_shmget
+195	common	shmctl	sys_shmctl	compat_sys_shmctl
+196	common	shmat	sys_shmat	compat_sys_shmat
+197	common	shmdt	sys_shmdt
+198	common	socket	sys_socket
+199	common	socketpair	sys_socketpair
+200	common	bind	sys_bind
+201	common	listen	sys_listen
+202	common	accept	sys_accept
+203	common	connect	sys_connect
+204	common	getsockname	sys_getsockname
+205	common	getpeername	sys_getpeername
+206	common	sendto	sys_sendto
+207	common	recvfrom	sys_recvfrom	compat_sys_recvfrom
+208	common	setsockopt	sys_setsockopt	sys_setsockopt
+209	common	getsockopt	sys_getsockopt	sys_getsockopt
+210	common	shutdown	sys_shutdown
+211	common	sendmsg	sys_sendmsg	compat_sys_sendmsg
+212	common	recvmsg	sys_recvmsg	compat_sys_recvmsg
+213	common	readahead	sys_readahead	compat_sys_readahead
+214	common	brk	sys_brk
+215	common	munmap	sys_munmap
+216	common	mremap	sys_mremap
+217	common	add_key	sys_add_key
+218	common	request_key	sys_request_key
+219	common	keyctl	sys_keyctl	compat_sys_keyctl
+220	common	clone	sys_clone
+221	common	execve	sys_execve	compat_sys_execve
+222	32	mmap2	sys_mmap2
+222	64	mmap	sys_mmap
+223	32	fadvise64_64	sys_fadvise64_64	compat_sys_fadvise64_64
+223	64	fadvise64	sys_fadvise64_64
+224	common	swapon	sys_swapon
+225	common	swapoff	sys_swapoff
+226	common	mprotect	sys_mprotect
+227	common	msync	sys_msync
+228	common	mlock	sys_mlock
+229	common	munlock	sys_munlock
+230	common	mlockall	sys_mlockall
+231	common	munlockall	sys_munlockall
+232	common	mincore	sys_mincore
+233	common	madvise	sys_madvise
+234	common	remap_file_pages	sys_remap_file_pages
+235	common	mbind	sys_mbind
+236	common	get_mempolicy	sys_get_mempolicy
+237	common	set_mempolicy	sys_set_mempolicy
+238	common	migrate_pages	sys_migrate_pages
+239	common	move_pages	sys_move_pages
+240	common	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo	compat_sys_rt_tgsigqueueinfo
+241	common	perf_event_open	sys_perf_event_open
+242	common	accept4	sys_accept4
+243	time32	recvmmsg	sys_recvmmsg_time32	compat_sys_recvmmsg_time32
+243	64	recvmmsg	sys_recvmmsg
+# Architectures may provide up to 16 syscalls of their own between 244 and 259
+244	arc	cacheflush	sys_cacheflush
+245	arc	arc_settls	sys_arc_settls
+246	arc	arc_gettls	sys_arc_gettls
+247	arc	sysfs	sys_sysfs
+248	arc	arc_usr_cmpxchg	sys_arc_usr_cmpxchg
+
+244	csky	set_thread_area	sys_set_thread_area
+245	csky	cacheflush	sys_cacheflush
+
+244	nios2	cacheflush	sys_cacheflush
+
+244	or1k	or1k_atomic	sys_or1k_atomic
+
+258	riscv	riscv_hwprobe	sys_riscv_hwprobe
+259	riscv	riscv_flush_icache	sys_riscv_flush_icache
+
+260	time32	wait4	sys_wait4	compat_sys_wait4
+260	64	wait4	sys_wait4
+261	common	prlimit64	sys_prlimit64
+262	common	fanotify_init	sys_fanotify_init
+263	common	fanotify_mark	sys_fanotify_mark
+264	common	name_to_handle_at	sys_name_to_handle_at
+265	common	open_by_handle_at	sys_open_by_handle_at
+266	time32	clock_adjtime	sys_clock_adjtime32
+266	64	clock_adjtime	sys_clock_adjtime
+267	common	syncfs	sys_syncfs
+268	common	setns	sys_setns
+269	common	sendmmsg	sys_sendmmsg	compat_sys_sendmmsg
+270	common	process_vm_readv	sys_process_vm_readv
+271	common	process_vm_writev	sys_process_vm_writev
+272	common	kcmp	sys_kcmp
+273	common	finit_module	sys_finit_module
+274	common	sched_setattr	sys_sched_setattr
+275	common	sched_getattr	sys_sched_getattr
+276	common	renameat2	sys_renameat2
+277	common	seccomp	sys_seccomp
+278	common	getrandom	sys_getrandom
+279	common	memfd_create	sys_memfd_create
+280	common	bpf	sys_bpf
+281	common	execveat	sys_execveat	compat_sys_execveat
+282	common	userfaultfd	sys_userfaultfd
+283	common	membarrier	sys_membarrier
+284	common	mlock2	sys_mlock2
+285	common	copy_file_range	sys_copy_file_range
+286	common	preadv2	sys_preadv2	compat_sys_preadv2
+287	common	pwritev2	sys_pwritev2	compat_sys_pwritev2
+288	common	pkey_mprotect	sys_pkey_mprotect
+289	common	pkey_alloc	sys_pkey_alloc
+290	common	pkey_free	sys_pkey_free
+291	common	statx	sys_statx
+292	time32	io_pgetevents	sys_io_pgetevents_time32	compat_sys_io_pgetevents
+292	64	io_pgetevents	sys_io_pgetevents
+293	common	rseq	sys_rseq
+294	common	kexec_file_load	sys_kexec_file_load
+# 295 through 402 are unassigned to sync up with generic numbers, don't use
+403	32	clock_gettime64	sys_clock_gettime
+404	32	clock_settime64	sys_clock_settime
+405	32	clock_adjtime64	sys_clock_adjtime
+406	32	clock_getres_time64	sys_clock_getres
+407	32	clock_nanosleep_time64	sys_clock_nanosleep
+408	32	timer_gettime64	sys_timer_gettime
+409	32	timer_settime64	sys_timer_settime
+410	32	timerfd_gettime64	sys_timerfd_gettime
+411	32	timerfd_settime64	sys_timerfd_settime
+412	32	utimensat_time64	sys_utimensat
+413	32	pselect6_time64	sys_pselect6	compat_sys_pselect6_time64
+414	32	ppoll_time64	sys_ppoll	compat_sys_ppoll_time64
+416	32	io_pgetevents_time64	sys_io_pgetevents	compat_sys_io_pgetevents_time64
+417	32	recvmmsg_time64	sys_recvmmsg	compat_sys_recvmmsg_time64
+418	32	mq_timedsend_time64	sys_mq_timedsend
+419	32	mq_timedreceive_time64	sys_mq_timedreceive
+420	32	semtimedop_time64	sys_semtimedop
+421	32	rt_sigtimedwait_time64	sys_rt_sigtimedwait	compat_sys_rt_sigtimedwait_time64
+422	32	futex_time64	sys_futex
+423	32	sched_rr_get_interval_time64	sys_sched_rr_get_interval
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+425	common	io_uring_setup	sys_io_uring_setup
+426	common	io_uring_enter	sys_io_uring_enter
+427	common	io_uring_register	sys_io_uring_register
+428	common	open_tree	sys_open_tree
+429	common	move_mount	sys_move_mount
+430	common	fsopen	sys_fsopen
+431	common	fsconfig	sys_fsconfig
+432	common	fsmount	sys_fsmount
+433	common	fspick	sys_fspick
+434	common	pidfd_open	sys_pidfd_open
+435	common	clone3	sys_clone3
+436	common	close_range	sys_close_range
+437	common	openat2	sys_openat2
+438	common	pidfd_getfd	sys_pidfd_getfd
+439	common	faccessat2	sys_faccessat2
+440	common	process_madvise	sys_process_madvise
+441	common	epoll_pwait2	sys_epoll_pwait2	compat_sys_epoll_pwait2
+442	common	mount_setattr	sys_mount_setattr
+443	common	quotactl_fd	sys_quotactl_fd
+444	common	landlock_create_ruleset	sys_landlock_create_ruleset
+445	common	landlock_add_rule	sys_landlock_add_rule
+446	common	landlock_restrict_self	sys_landlock_restrict_self
+447	memfd_secret	memfd_secret	sys_memfd_secret
+448	common	process_mrelease	sys_process_mrelease
+449	common	futex_waitv	sys_futex_waitv
+450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	cachestat	sys_cachestat
+452	common	fchmodat2	sys_fchmodat2
+453	common	map_shadow_stack	sys_map_shadow_stack
+454	common	futex_wake	sys_futex_wake
+455	common	futex_wait	sys_futex_wait
+456	common	futex_requeue	sys_futex_requeue
+457	common	statmount	sys_statmount
+458	common	listmount	sys_listmount
+459	common	lsm_get_self_attr	sys_lsm_get_self_attr
+460	common	lsm_set_self_attr	sys_lsm_set_self_attr
+461	common	lsm_list_modules	sys_lsm_list_modules
+462	common	mseal	sys_mseal
+463	common	setxattrat	sys_setxattrat
+464	common	getxattrat	sys_getxattrat
+465	common	listxattrat	sys_listxattrat
+466	common	removexattrat	sys_removexattrat