Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf tooling updates from Arnaldo Carvalho de Melo:

New features:

* perf record: Add --initial-delay option (Andi Kleen)

* Column colouring improvements in 'diff' (Ramkumar Ramachandra)

Fixes:

* Don't show counter information when workload fails (Arnaldo Carvalho de Melo)

* Fix up leak on error path in the 'parse events' test (Arnaldo Carvalho de Melo)

* Fix --delay option in 'stat' man page (Andi Kleen)

* Use the DWARF unwind info only if loaded (Jean Pihet)

Developer stuff:

* Improve forked workload error reporting by sending the errno in the signal's
integer payload field using sigqueue(), and by doing the signal setup in
the evlist methods, removing open-coded equivalents in various tools (Arnaldo Carvalho de Melo)

* Do more automatic exit cleanup chores in the 'evlist' destructor, so that the tools
don't all have to repeat that sequence (Arnaldo Carvalho de Melo)

* Pack 'struct perf_session_env' and 'struct trace' (Arnaldo Carvalho de Melo)

* Include tools/lib/api/ in MANIFEST, fixing detached tarballs (Arnaldo Carvalho de Melo)

* Add test for building detached source tarballs (Arnaldo Carvalho de Melo)

* Shut up libtraceevent plugins make message (Jiri Olsa)

* Fix installation tests path setup (Jiri Olsa)

* Fix id_hdr_size initialization (Jiri Olsa)

* Move some header files from tools/perf/ to tools/include/ to make them available to
other tools/ dwelling codebases (Namhyung Kim)

* Fix 'probe' build when DWARF support libraries not present (Arnaldo Carvalho de Melo)

Refactorings:

* Move logic to warn about kptr_restrict'ed kernels to separate
function in 'report' (Arnaldo Carvalho de Melo)

* Move hist browser selection code to separate function (Arnaldo Carvalho de Melo)

* Move histogram entries collapsing to separate function (Arnaldo Carvalho de Melo)

* Introduce evlist__for_each() & friends (Arnaldo Carvalho de Melo)

* Automate setup of FEATURE_CHECK_(C|LD)FLAGS-all variables (Jiri Olsa)

* Move arch setup into a separate Makefile (Jiri Olsa)

Trivial stuff:

* Remove misplaced __maybe_unused in 'stat' (Arnaldo Carvalho de Melo)

* Remove old evsel_list usage in 'record' (Arnaldo Carvalho de Melo)

* Comment typo fix (Cody P Schafer)

* Remove unused test-volatile-register-var.c (Yann Droneaud)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>

+725 -437
+31 -28
tools/lib/traceevent/Makefile
··· 86 86 ifneq ($(OUTPUT),) 87 87 88 88 define build_output 89 - $(if $(VERBOSE:1=),@)+$(MAKE) -C $(OUTPUT) \ 90 - BUILD_SRC=$(CURDIR)/ -f $(CURDIR)/Makefile $1 89 + $(if $(VERBOSE:1=),@)+$(MAKE) -C $(OUTPUT) \ 90 + BUILD_SRC=$(CURDIR)/ -f $(CURDIR)/Makefile $1 91 91 endef 92 92 93 93 all: sub-make ··· 221 221 $(QUIET_LINK)$(CC) $(CFLAGS) -shared -nostartfiles -o $@ $< 222 222 223 223 define make_version.h 224 - (echo '/* This file is automatically generated. Do not modify. */'; \ 225 - echo \#define VERSION_CODE $(shell \ 226 - expr $(VERSION) \* 256 + $(PATCHLEVEL)); \ 227 - echo '#define EXTRAVERSION ' $(EXTRAVERSION); \ 228 - echo '#define VERSION_STRING "'$(VERSION).$(PATCHLEVEL).$(EXTRAVERSION)'"'; \ 229 - echo '#define FILE_VERSION '$(FILE_VERSION); \ 230 - ) > $1 224 + (echo '/* This file is automatically generated. Do not modify. */'; \ 225 + echo \#define VERSION_CODE $(shell \ 226 + expr $(VERSION) \* 256 + $(PATCHLEVEL)); \ 227 + echo '#define EXTRAVERSION ' $(EXTRAVERSION); \ 228 + echo '#define VERSION_STRING "'$(VERSION).$(PATCHLEVEL).$(EXTRAVERSION)'"'; \ 229 + echo '#define FILE_VERSION '$(FILE_VERSION); \ 230 + ) > $1 231 231 endef 232 232 233 233 define update_version.h 234 - ($(call make_version.h, $@.tmp); \ 235 - if [ -r $@ ] && cmp -s $@ $@.tmp; then \ 236 - rm -f $@.tmp; \ 237 - else \ 238 - echo ' UPDATE $@'; \ 239 - mv -f $@.tmp $@; \ 240 - fi); 234 + ($(call make_version.h, $@.tmp); \ 235 + if [ -r $@ ] && cmp -s $@ $@.tmp; then \ 236 + rm -f $@.tmp; \ 237 + else \ 238 + echo ' UPDATE $@'; \ 239 + mv -f $@.tmp $@; \ 240 + fi); 241 241 endef 242 242 243 243 ep_version.h: force ··· 246 246 VERSION_FILES = ep_version.h 247 247 248 248 define update_dir 249 - (echo $1 > $@.tmp; \ 250 - if [ -r $@ ] && cmp -s $@ $@.tmp; then \ 251 - rm -f $@.tmp; \ 252 - else \ 253 - echo ' UPDATE $@'; \ 254 - mv -f $@.tmp $@; \ 255 - fi); 249 + (echo $1 > $@.tmp; \ 250 + if [ -r $@ ] && cmp -s $@ $@.tmp; then \ 251 + rm -f $@.tmp; \ 252 + else \ 253 + echo ' 
UPDATE $@'; \ 254 + mv -f $@.tmp $@; \ 255 + fi); 256 256 endef 257 257 258 258 ## make deps ··· 262 262 263 263 # let .d file also depends on the source and header files 264 264 define check_deps 265 - @set -e; $(RM) $@; \ 266 - $(CC) -MM $(CFLAGS) $< > $@.$$$$; \ 267 - sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \ 268 - $(RM) $@.$$$$ 265 + @set -e; $(RM) $@; \ 266 + $(CC) -MM $(CFLAGS) $< > $@.$$$$; \ 267 + sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \ 268 + $(RM) $@.$$$$ 269 269 endef 270 270 271 271 $(all_deps): .%.d: $(src)/%.c ··· 329 329 330 330 endif # skip-makefile 331 331 332 - PHONY += force 332 + PHONY += force plugins 333 333 force: 334 + 335 + plugins: 336 + @echo > /dev/null 334 337 335 338 # Declare the contents of the .PHONY variable as phony. We keep that 336 339 # information in a variable so we can use it in if_changed and friends.
+4
tools/perf/Documentation/perf-record.txt
··· 209 209 inheritance is automatically disabled. --per-thread is ignored with a warning 210 210 if combined with -a or -C options. 211 211 212 + --initial-delay msecs:: 213 + After starting the program, wait msecs before measuring. This is useful to 214 + filter out the startup phase of the program, which is often very different. 215 + 212 216 SEE ALSO 213 217 -------- 214 218 linkperf:perf-stat[1], linkperf:perf-list[1]
+1 -1
tools/perf/Documentation/perf-stat.txt
··· 133 133 core number and the number of online logical processors on that physical processor. 134 134 135 135 -D msecs:: 136 - --initial-delay msecs:: 136 + --delay msecs:: 137 137 After starting the program, wait msecs before measuring. This is useful to 138 138 filter out the startup phase of the program, which is often very different. 139 139
+3 -1
tools/perf/MANIFEST
··· 1 1 tools/perf 2 2 tools/scripts 3 3 tools/lib/traceevent 4 - tools/lib/lk 4 + tools/lib/api 5 5 tools/lib/symbol/kallsyms.c 6 6 tools/lib/symbol/kallsyms.h 7 + tools/include/asm/bug.h 8 + tools/include/linux/compiler.h 7 9 include/linux/const.h 8 10 include/linux/perf_event.h 9 11 include/linux/rbtree.h
+2 -2
tools/perf/Makefile.perf
··· 211 211 LIB_H += ../../include/linux/stringify.h 212 212 LIB_H += util/include/linux/bitmap.h 213 213 LIB_H += util/include/linux/bitops.h 214 - LIB_H += util/include/linux/compiler.h 214 + LIB_H += ../include/linux/compiler.h 215 215 LIB_H += util/include/linux/const.h 216 216 LIB_H += util/include/linux/ctype.h 217 217 LIB_H += util/include/linux/kernel.h ··· 226 226 LIB_H += util/include/linux/types.h 227 227 LIB_H += util/include/linux/linkage.h 228 228 LIB_H += util/include/asm/asm-offsets.h 229 - LIB_H += util/include/asm/bug.h 229 + LIB_H += ../include/asm/bug.h 230 230 LIB_H += util/include/asm/byteorder.h 231 231 LIB_H += util/include/asm/hweight.h 232 232 LIB_H += util/include/asm/swab.h
+1 -1
tools/perf/builtin-annotate.c
··· 232 232 perf_session__fprintf_dsos(session, stdout); 233 233 234 234 total_nr_samples = 0; 235 - list_for_each_entry(pos, &session->evlist->entries, node) { 235 + evlist__for_each(session->evlist, pos) { 236 236 struct hists *hists = &pos->hists; 237 237 u32 nr_samples = hists->stats.nr_events[PERF_RECORD_SAMPLE]; 238 238
+94 -4
tools/perf/builtin-diff.c
··· 356 356 { 357 357 struct perf_evsel *e; 358 358 359 - list_for_each_entry(e, &evlist->entries, node) 359 + evlist__for_each(evlist, e) { 360 360 if (perf_evsel__match2(evsel, e)) 361 361 return e; 362 + } 362 363 363 364 return NULL; 364 365 } ··· 368 367 { 369 368 struct perf_evsel *evsel; 370 369 371 - list_for_each_entry(evsel, &evlist->entries, node) { 370 + evlist__for_each(evlist, evsel) { 372 371 struct hists *hists = &evsel->hists; 373 372 374 373 hists__collapse_resort(hists, NULL); ··· 615 614 struct perf_evsel *evsel_base; 616 615 bool first = true; 617 616 618 - list_for_each_entry(evsel_base, &evlist_base->entries, node) { 617 + evlist__for_each(evlist_base, evsel_base) { 619 618 struct data__file *d; 620 619 int i; 621 620 ··· 768 767 ret = scnprintf(buf, size, fmt, percent); 769 768 770 769 return ret; 770 + } 771 + 772 + static int __hpp__color_compare(struct perf_hpp_fmt *fmt, 773 + struct perf_hpp *hpp, struct hist_entry *he, 774 + int comparison_method) 775 + { 776 + struct diff_hpp_fmt *dfmt = 777 + container_of(fmt, struct diff_hpp_fmt, fmt); 778 + struct hist_entry *pair = get_pair_fmt(he, dfmt); 779 + double diff; 780 + s64 wdiff; 781 + char pfmt[20] = " "; 782 + 783 + if (!pair) 784 + goto dummy_print; 785 + 786 + switch (comparison_method) { 787 + case COMPUTE_DELTA: 788 + if (pair->diff.computed) 789 + diff = pair->diff.period_ratio_delta; 790 + else 791 + diff = compute_delta(he, pair); 792 + 793 + if (fabs(diff) < 0.01) 794 + goto dummy_print; 795 + scnprintf(pfmt, 20, "%%%+d.2f%%%%", dfmt->header_width - 1); 796 + return percent_color_snprintf(hpp->buf, hpp->size, 797 + pfmt, diff); 798 + case COMPUTE_RATIO: 799 + if (he->dummy) 800 + goto dummy_print; 801 + if (pair->diff.computed) 802 + diff = pair->diff.period_ratio; 803 + else 804 + diff = compute_ratio(he, pair); 805 + 806 + scnprintf(pfmt, 20, "%%%d.6f", dfmt->header_width); 807 + return value_color_snprintf(hpp->buf, hpp->size, 808 + pfmt, diff); 809 + case 
COMPUTE_WEIGHTED_DIFF: 810 + if (he->dummy) 811 + goto dummy_print; 812 + if (pair->diff.computed) 813 + wdiff = pair->diff.wdiff; 814 + else 815 + wdiff = compute_wdiff(he, pair); 816 + 817 + scnprintf(pfmt, 20, "%%14ld", dfmt->header_width); 818 + return color_snprintf(hpp->buf, hpp->size, 819 + get_percent_color(wdiff), 820 + pfmt, wdiff); 821 + default: 822 + BUG_ON(1); 823 + } 824 + dummy_print: 825 + return scnprintf(hpp->buf, hpp->size, "%*s", 826 + dfmt->header_width, pfmt); 827 + } 828 + 829 + static int hpp__color_delta(struct perf_hpp_fmt *fmt, 830 + struct perf_hpp *hpp, struct hist_entry *he) 831 + { 832 + return __hpp__color_compare(fmt, hpp, he, COMPUTE_DELTA); 833 + } 834 + 835 + static int hpp__color_ratio(struct perf_hpp_fmt *fmt, 836 + struct perf_hpp *hpp, struct hist_entry *he) 837 + { 838 + return __hpp__color_compare(fmt, hpp, he, COMPUTE_RATIO); 839 + } 840 + 841 + static int hpp__color_wdiff(struct perf_hpp_fmt *fmt, 842 + struct perf_hpp *hpp, struct hist_entry *he) 843 + { 844 + return __hpp__color_compare(fmt, hpp, he, COMPUTE_WEIGHTED_DIFF); 771 845 } 772 846 773 847 static void ··· 1016 940 fmt->entry = hpp__entry_global; 1017 941 1018 942 /* TODO more colors */ 1019 - if (idx == PERF_HPP_DIFF__BASELINE) 943 + switch (idx) { 944 + case PERF_HPP_DIFF__BASELINE: 1020 945 fmt->color = hpp__color_baseline; 946 + break; 947 + case PERF_HPP_DIFF__DELTA: 948 + fmt->color = hpp__color_delta; 949 + break; 950 + case PERF_HPP_DIFF__RATIO: 951 + fmt->color = hpp__color_ratio; 952 + break; 953 + case PERF_HPP_DIFF__WEIGHTED_DIFF: 954 + fmt->color = hpp__color_wdiff; 955 + break; 956 + default: 957 + break; 958 + } 1021 959 1022 960 init_header(d, dfmt); 1023 961 perf_hpp__column_register(fmt);
+1 -1
tools/perf/builtin-evlist.c
··· 29 29 if (session == NULL) 30 30 return -ENOMEM; 31 31 32 - list_for_each_entry(pos, &session->evlist->entries, node) 32 + evlist__for_each(session->evlist, pos) 33 33 perf_evsel__fprintf(pos, details, stdout); 34 34 35 35 perf_session__delete(session);
+1 -1
tools/perf/builtin-inject.c
··· 369 369 370 370 inject->tool.ordered_samples = true; 371 371 372 - list_for_each_entry(evsel, &session->evlist->entries, node) { 372 + evlist__for_each(session->evlist, evsel) { 373 373 const char *name = perf_evsel__name(evsel); 374 374 375 375 if (!strcmp(name, "sched:sched_switch")) {
+2 -4
tools/perf/builtin-kvm.c
··· 1174 1174 * Note: exclude_{guest,host} do not apply here. 1175 1175 * This command processes KVM tracepoints from host only 1176 1176 */ 1177 - list_for_each_entry(pos, &evlist->entries, node) { 1177 + evlist__for_each(evlist, pos) { 1178 1178 struct perf_event_attr *attr = &pos->attr; 1179 1179 1180 1180 /* make sure these *are* set */ ··· 1556 1556 if (kvm->session) 1557 1557 perf_session__delete(kvm->session); 1558 1558 kvm->session = NULL; 1559 - if (kvm->evlist) { 1560 - perf_evlist__delete_maps(kvm->evlist); 1559 + if (kvm->evlist) 1561 1560 perf_evlist__delete(kvm->evlist); 1562 - } 1563 1561 1564 1562 return err; 1565 1563 }
+51 -31
tools/perf/builtin-record.c
··· 183 183 184 184 perf_evlist__config(evlist, opts); 185 185 186 - list_for_each_entry(pos, &evlist->entries, node) { 186 + evlist__for_each(evlist, pos) { 187 187 try_again: 188 188 if (perf_evsel__open(pos, evlist->cpus, evlist->threads) < 0) { 189 189 if (perf_evsel__fallback(pos, errno, msg, sizeof(msg))) { ··· 324 324 325 325 static void record__init_features(struct record *rec) 326 326 { 327 - struct perf_evlist *evsel_list = rec->evlist; 328 327 struct perf_session *session = rec->session; 329 328 int feat; 330 329 ··· 333 334 if (rec->no_buildid) 334 335 perf_header__clear_feat(&session->header, HEADER_BUILD_ID); 335 336 336 - if (!have_tracepoints(&evsel_list->entries)) 337 + if (!have_tracepoints(&rec->evlist->entries)) 337 338 perf_header__clear_feat(&session->header, HEADER_TRACING_DATA); 338 339 339 340 if (!rec->opts.branch_stack) 340 341 perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK); 342 + } 343 + 344 + static volatile int workload_exec_errno; 345 + 346 + /* 347 + * perf_evlist__prepare_workload will send a SIGUSR1 348 + * if the fork fails, since we asked by setting its 349 + * want_signal to true. 
350 + */ 351 + static void workload_exec_failed_signal(int signo, siginfo_t *info, 352 + void *ucontext __maybe_unused) 353 + { 354 + workload_exec_errno = info->si_value.sival_int; 355 + done = 1; 356 + signr = signo; 357 + child_finished = 1; 341 358 } 342 359 343 360 static int __cmd_record(struct record *rec, int argc, const char **argv) ··· 364 349 struct machine *machine; 365 350 struct perf_tool *tool = &rec->tool; 366 351 struct record_opts *opts = &rec->opts; 367 - struct perf_evlist *evsel_list = rec->evlist; 368 352 struct perf_data_file *file = &rec->file; 369 353 struct perf_session *session; 370 354 bool disabled = false; ··· 373 359 on_exit(record__sig_exit, rec); 374 360 signal(SIGCHLD, sig_handler); 375 361 signal(SIGINT, sig_handler); 376 - signal(SIGUSR1, sig_handler); 377 362 signal(SIGTERM, sig_handler); 378 363 379 364 session = perf_session__new(file, false, NULL); ··· 386 373 record__init_features(rec); 387 374 388 375 if (forks) { 389 - err = perf_evlist__prepare_workload(evsel_list, &opts->target, 376 + err = perf_evlist__prepare_workload(rec->evlist, &opts->target, 390 377 argv, file->is_pipe, 391 - true); 378 + workload_exec_failed_signal); 392 379 if (err < 0) { 393 380 pr_err("Couldn't run the workload!\n"); 394 381 goto out_delete_session; ··· 400 387 goto out_delete_session; 401 388 } 402 389 403 - if (!evsel_list->nr_groups) 390 + if (!rec->evlist->nr_groups) 404 391 perf_header__clear_feat(&session->header, HEADER_GROUP_DESC); 405 392 406 393 /* ··· 413 400 if (err < 0) 414 401 goto out_delete_session; 415 402 } else { 416 - err = perf_session__write_header(session, evsel_list, 403 + err = perf_session__write_header(session, rec->evlist, 417 404 file->fd, false); 418 405 if (err < 0) 419 406 goto out_delete_session; ··· 437 424 goto out_delete_session; 438 425 } 439 426 440 - if (have_tracepoints(&evsel_list->entries)) { 427 + if (have_tracepoints(&rec->evlist->entries)) { 441 428 /* 442 429 * FIXME err <= 0 here actually means 
that 443 430 * there were no tracepoints so its not really ··· 446 433 * return this more properly and also 447 434 * propagate errors that now are calling die() 448 435 */ 449 - err = perf_event__synthesize_tracing_data(tool, file->fd, evsel_list, 436 + err = perf_event__synthesize_tracing_data(tool, file->fd, rec->evlist, 450 437 process_synthesized_event); 451 438 if (err <= 0) { 452 439 pr_err("Couldn't record tracing data.\n"); ··· 478 465 perf_event__synthesize_guest_os, tool); 479 466 } 480 467 481 - err = __machine__synthesize_threads(machine, tool, &opts->target, evsel_list->threads, 468 + err = __machine__synthesize_threads(machine, tool, &opts->target, rec->evlist->threads, 482 469 process_synthesized_event, opts->sample_address); 483 470 if (err != 0) 484 471 goto out_delete_session; ··· 499 486 * (apart from group members) have enable_on_exec=1 set, 500 487 * so don't spoil it by prematurely enabling them. 501 488 */ 502 - if (!target__none(&opts->target)) 503 - perf_evlist__enable(evsel_list); 489 + if (!target__none(&opts->target) && !opts->initial_delay) 490 + perf_evlist__enable(rec->evlist); 504 491 505 492 /* 506 493 * Let the child rip 507 494 */ 508 495 if (forks) 509 - perf_evlist__start_workload(evsel_list); 496 + perf_evlist__start_workload(rec->evlist); 497 + 498 + if (opts->initial_delay) { 499 + usleep(opts->initial_delay * 1000); 500 + perf_evlist__enable(rec->evlist); 501 + } 510 502 511 503 for (;;) { 512 504 int hits = rec->samples; ··· 524 506 if (hits == rec->samples) { 525 507 if (done) 526 508 break; 527 - err = poll(evsel_list->pollfd, evsel_list->nr_fds, -1); 509 + err = poll(rec->evlist->pollfd, rec->evlist->nr_fds, -1); 528 510 waking++; 529 511 } 530 512 ··· 534 516 * disable events in this case. 
535 517 */ 536 518 if (done && !disabled && !target__none(&opts->target)) { 537 - perf_evlist__disable(evsel_list); 519 + perf_evlist__disable(rec->evlist); 538 520 disabled = true; 539 521 } 522 + } 523 + 524 + if (forks && workload_exec_errno) { 525 + char msg[512]; 526 + const char *emsg = strerror_r(workload_exec_errno, msg, sizeof(msg)); 527 + pr_err("Workload failed: %s\n", emsg); 528 + err = -1; 529 + goto out_delete_session; 540 530 } 541 531 542 532 if (quiet || signr == SIGUSR1) ··· 882 856 OPT_CALLBACK('G', "cgroup", &record.evlist, "name", 883 857 "monitor event in cgroup name only", 884 858 parse_cgroups), 859 + OPT_UINTEGER(0, "initial-delay", &record.opts.initial_delay, 860 + "ms to wait before starting measurement after program start"), 885 861 OPT_STRING('u', "uid", &record.opts.target.uid_str, "user", 886 862 "user to profile"), 887 863 ··· 906 878 int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused) 907 879 { 908 880 int err = -ENOMEM; 909 - struct perf_evlist *evsel_list; 910 881 struct record *rec = &record; 911 882 char errbuf[BUFSIZ]; 912 883 913 - evsel_list = perf_evlist__new(); 914 - if (evsel_list == NULL) 884 + rec->evlist = perf_evlist__new(); 885 + if (rec->evlist == NULL) 915 886 return -ENOMEM; 916 - 917 - rec->evlist = evsel_list; 918 887 919 888 argc = parse_options(argc, argv, record_options, record_usage, 920 889 PARSE_OPT_STOP_AT_NON_OPTION); ··· 939 914 if (rec->no_buildid_cache || rec->no_buildid) 940 915 disable_buildid_cache(); 941 916 942 - if (evsel_list->nr_entries == 0 && 943 - perf_evlist__add_default(evsel_list) < 0) { 917 + if (rec->evlist->nr_entries == 0 && 918 + perf_evlist__add_default(rec->evlist) < 0) { 944 919 pr_err("Not enough memory for event selector list\n"); 945 920 goto out_symbol_exit; 946 921 } ··· 966 941 } 967 942 968 943 err = -ENOMEM; 969 - if (perf_evlist__create_maps(evsel_list, &rec->opts.target) < 0) 944 + if (perf_evlist__create_maps(rec->evlist, &rec->opts.target) < 
0) 970 945 usage_with_options(record_usage, record_options); 971 946 972 947 if (record_opts__config(&rec->opts)) { 973 948 err = -EINVAL; 974 - goto out_free_fd; 949 + goto out_symbol_exit; 975 950 } 976 951 977 952 err = __cmd_record(&record, argc, argv); 978 - 979 - perf_evlist__munmap(evsel_list); 980 - perf_evlist__close(evsel_list); 981 - out_free_fd: 982 - perf_evlist__delete_maps(evsel_list); 983 953 out_symbol_exit: 984 954 symbol__exit(); 985 955 return err;
+113 -79
tools/perf/builtin-report.c
··· 384 384 { 385 385 struct perf_evsel *pos; 386 386 387 - list_for_each_entry(pos, &evlist->entries, node) { 387 + evlist__for_each(evlist, pos) { 388 388 struct hists *hists = &pos->hists; 389 389 const char *evname = perf_evsel__name(pos); 390 390 ··· 412 412 return 0; 413 413 } 414 414 415 - static int __cmd_report(struct report *rep) 415 + static void report__warn_kptr_restrict(const struct report *rep) 416 416 { 417 - int ret = -EINVAL; 418 - u64 nr_samples; 419 - struct perf_session *session = rep->session; 420 - struct perf_evsel *pos; 421 - struct map *kernel_map; 422 - struct kmap *kernel_kmap; 423 - const char *help = "For a higher level overview, try: perf report --sort comm,dso"; 424 - struct ui_progress prog; 425 - struct perf_data_file *file = session->file; 417 + struct map *kernel_map = rep->session->machines.host.vmlinux_maps[MAP__FUNCTION]; 418 + struct kmap *kernel_kmap = map__kmap(kernel_map); 426 419 427 - signal(SIGINT, sig_handler); 428 - 429 - if (rep->cpu_list) { 430 - ret = perf_session__cpu_bitmap(session, rep->cpu_list, 431 - rep->cpu_bitmap); 432 - if (ret) 433 - return ret; 434 - } 435 - 436 - if (rep->show_threads) 437 - perf_read_values_init(&rep->show_threads_values); 438 - 439 - ret = report__setup_sample_type(rep); 440 - if (ret) 441 - return ret; 442 - 443 - ret = perf_session__process_events(session, &rep->tool); 444 - if (ret) 445 - return ret; 446 - 447 - kernel_map = session->machines.host.vmlinux_maps[MAP__FUNCTION]; 448 - kernel_kmap = map__kmap(kernel_map); 449 420 if (kernel_map == NULL || 450 421 (kernel_map->dso->hit && 451 422 (kernel_kmap->ref_reloc_sym == NULL || ··· 439 468 "Samples in kernel modules can't be resolved as well.\n\n", 440 469 desc); 441 470 } 471 + } 442 472 443 - if (use_browser == 0) { 444 - if (verbose > 3) 445 - perf_session__fprintf(session, stdout); 473 + static int report__gtk_browse_hists(struct report *rep, const char *help) 474 + { 475 + int (*hist_browser)(struct perf_evlist *evlist, 
const char *help, 476 + struct hist_browser_timer *timer, float min_pcnt); 446 477 447 - if (verbose > 2) 448 - perf_session__fprintf_dsos(session, stdout); 478 + hist_browser = dlsym(perf_gtk_handle, "perf_evlist__gtk_browse_hists"); 449 479 450 - if (dump_trace) { 451 - perf_session__fprintf_nr_events(session, stdout); 452 - return 0; 453 - } 480 + if (hist_browser == NULL) { 481 + ui__error("GTK browser not found!\n"); 482 + return -1; 454 483 } 455 484 456 - nr_samples = 0; 457 - list_for_each_entry(pos, &session->evlist->entries, node) 485 + return hist_browser(rep->session->evlist, help, NULL, rep->min_percent); 486 + } 487 + 488 + static int report__browse_hists(struct report *rep) 489 + { 490 + int ret; 491 + struct perf_session *session = rep->session; 492 + struct perf_evlist *evlist = session->evlist; 493 + const char *help = "For a higher level overview, try: perf report --sort comm,dso"; 494 + 495 + switch (use_browser) { 496 + case 1: 497 + ret = perf_evlist__tui_browse_hists(evlist, help, NULL, 498 + rep->min_percent, 499 + &session->header.env); 500 + /* 501 + * Usually "ret" is the last pressed key, and we only 502 + * care if the key notifies us to switch data file. 503 + */ 504 + if (ret != K_SWITCH_INPUT_DATA) 505 + ret = 0; 506 + break; 507 + case 2: 508 + ret = report__gtk_browse_hists(rep, help); 509 + break; 510 + default: 511 + ret = perf_evlist__tty_browse_hists(evlist, rep, help); 512 + break; 513 + } 514 + 515 + return ret; 516 + } 517 + 518 + static u64 report__collapse_hists(struct report *rep) 519 + { 520 + struct ui_progress prog; 521 + struct perf_evsel *pos; 522 + u64 nr_samples = 0; 523 + /* 524 + * Count number of histogram entries to use when showing progress, 525 + * reusing nr_samples variable. 
526 + */ 527 + evlist__for_each(rep->session->evlist, pos) 458 528 nr_samples += pos->hists.nr_entries; 459 529 460 530 ui_progress__init(&prog, nr_samples, "Merging related events..."); 461 - 531 + /* 532 + * Count total number of samples, will be used to check if this 533 + * session had any. 534 + */ 462 535 nr_samples = 0; 463 - list_for_each_entry(pos, &session->evlist->entries, node) { 536 + 537 + evlist__for_each(rep->session->evlist, pos) { 464 538 struct hists *hists = &pos->hists; 465 539 466 540 if (pos->idx == 0) ··· 523 507 hists__link(leader_hists, hists); 524 508 } 525 509 } 510 + 526 511 ui_progress__finish(); 512 + 513 + return nr_samples; 514 + } 515 + 516 + static int __cmd_report(struct report *rep) 517 + { 518 + int ret; 519 + u64 nr_samples; 520 + struct perf_session *session = rep->session; 521 + struct perf_evsel *pos; 522 + struct perf_data_file *file = session->file; 523 + 524 + signal(SIGINT, sig_handler); 525 + 526 + if (rep->cpu_list) { 527 + ret = perf_session__cpu_bitmap(session, rep->cpu_list, 528 + rep->cpu_bitmap); 529 + if (ret) 530 + return ret; 531 + } 532 + 533 + if (rep->show_threads) 534 + perf_read_values_init(&rep->show_threads_values); 535 + 536 + ret = report__setup_sample_type(rep); 537 + if (ret) 538 + return ret; 539 + 540 + ret = perf_session__process_events(session, &rep->tool); 541 + if (ret) 542 + return ret; 543 + 544 + report__warn_kptr_restrict(rep); 545 + 546 + if (use_browser == 0) { 547 + if (verbose > 3) 548 + perf_session__fprintf(session, stdout); 549 + 550 + if (verbose > 2) 551 + perf_session__fprintf_dsos(session, stdout); 552 + 553 + if (dump_trace) { 554 + perf_session__fprintf_nr_events(session, stdout); 555 + return 0; 556 + } 557 + } 558 + 559 + nr_samples = report__collapse_hists(rep); 527 560 528 561 if (session_done()) 529 562 return 0; ··· 582 517 return 0; 583 518 } 584 519 585 - list_for_each_entry(pos, &session->evlist->entries, node) 520 + evlist__for_each(session->evlist, pos) 586 521 
hists__output_resort(&pos->hists); 587 522 588 - if (use_browser > 0) { 589 - if (use_browser == 1) { 590 - ret = perf_evlist__tui_browse_hists(session->evlist, 591 - help, NULL, 592 - rep->min_percent, 593 - &session->header.env); 594 - /* 595 - * Usually "ret" is the last pressed key, and we only 596 - * care if the key notifies us to switch data file. 597 - */ 598 - if (ret != K_SWITCH_INPUT_DATA) 599 - ret = 0; 600 - 601 - } else if (use_browser == 2) { 602 - int (*hist_browser)(struct perf_evlist *, 603 - const char *, 604 - struct hist_browser_timer *, 605 - float min_pcnt); 606 - 607 - hist_browser = dlsym(perf_gtk_handle, 608 - "perf_evlist__gtk_browse_hists"); 609 - if (hist_browser == NULL) { 610 - ui__error("GTK browser not found!\n"); 611 - return ret; 612 - } 613 - hist_browser(session->evlist, help, NULL, 614 - rep->min_percent); 615 - } 616 - } else 617 - perf_evlist__tty_browse_hists(session->evlist, rep, help); 618 - 619 - return ret; 523 + return report__browse_hists(rep); 620 524 } 621 525 622 526 static int
+2 -3
tools/perf/builtin-script.c
··· 603 603 if (evsel->attr.type >= PERF_TYPE_MAX) 604 604 return 0; 605 605 606 - list_for_each_entry(pos, &evlist->entries, node) { 606 + evlist__for_each(evlist, pos) { 607 607 if (pos->attr.type == evsel->attr.type && pos != evsel) 608 608 return 0; 609 609 } ··· 1309 1309 snprintf(evname, len + 1, "%s", p); 1310 1310 1311 1311 match = 0; 1312 - list_for_each_entry(pos, 1313 - &session->evlist->entries, node) { 1312 + evlist__for_each(session->evlist, pos) { 1314 1313 if (!strcmp(perf_evsel__name(pos), evname)) { 1315 1314 match = 1; 1316 1315 break;
+42 -24
tools/perf/builtin-stat.c
··· 214 214 { 215 215 struct perf_evsel *evsel; 216 216 217 - list_for_each_entry(evsel, &evlist->entries, node) { 217 + evlist__for_each(evlist, evsel) { 218 218 perf_evsel__free_stat_priv(evsel); 219 219 perf_evsel__free_counts(evsel); 220 220 perf_evsel__free_prev_raw_counts(evsel); ··· 225 225 { 226 226 struct perf_evsel *evsel; 227 227 228 - list_for_each_entry(evsel, &evlist->entries, node) { 228 + evlist__for_each(evlist, evsel) { 229 229 if (perf_evsel__alloc_stat_priv(evsel) < 0 || 230 230 perf_evsel__alloc_counts(evsel, perf_evsel__nr_cpus(evsel)) < 0 || 231 231 (alloc_raw && perf_evsel__alloc_prev_raw_counts(evsel) < 0)) ··· 259 259 { 260 260 struct perf_evsel *evsel; 261 261 262 - list_for_each_entry(evsel, &evlist->entries, node) { 262 + evlist__for_each(evlist, evsel) { 263 263 perf_evsel__reset_stat_priv(evsel); 264 264 perf_evsel__reset_counts(evsel, perf_evsel__nr_cpus(evsel)); 265 265 } ··· 326 326 327 327 /* Assumes this only called when evsel_list does not change anymore. 
*/ 328 328 if (!array) { 329 - list_for_each_entry(ev, &evsel_list->entries, node) 329 + evlist__for_each(evsel_list, ev) 330 330 array_len++; 331 331 array = malloc(array_len * sizeof(void *)); 332 332 if (!array) 333 333 exit(ENOMEM); 334 334 j = 0; 335 - list_for_each_entry(ev, &evsel_list->entries, node) 335 + evlist__for_each(evsel_list, ev) 336 336 array[j++] = ev; 337 337 } 338 338 if (n < array_len) ··· 440 440 char prefix[64]; 441 441 442 442 if (aggr_mode == AGGR_GLOBAL) { 443 - list_for_each_entry(counter, &evsel_list->entries, node) { 443 + evlist__for_each(evsel_list, counter) { 444 444 ps = counter->priv; 445 445 memset(ps->res_stats, 0, sizeof(ps->res_stats)); 446 446 read_counter_aggr(counter); 447 447 } 448 448 } else { 449 - list_for_each_entry(counter, &evsel_list->entries, node) { 449 + evlist__for_each(evsel_list, counter) { 450 450 ps = counter->priv; 451 451 memset(ps->res_stats, 0, sizeof(ps->res_stats)); 452 452 read_counter(counter); ··· 483 483 print_aggr(prefix); 484 484 break; 485 485 case AGGR_NONE: 486 - list_for_each_entry(counter, &evsel_list->entries, node) 486 + evlist__for_each(evsel_list, counter) 487 487 print_counter(counter, prefix); 488 488 break; 489 489 case AGGR_GLOBAL: 490 490 default: 491 - list_for_each_entry(counter, &evsel_list->entries, node) 491 + evlist__for_each(evsel_list, counter) 492 492 print_counter_aggr(counter, prefix); 493 493 } 494 494 ··· 504 504 nthreads = thread_map__nr(evsel_list->threads); 505 505 506 506 usleep(initial_delay * 1000); 507 - list_for_each_entry(counter, &evsel_list->entries, node) 507 + evlist__for_each(evsel_list, counter) 508 508 perf_evsel__enable(counter, ncpus, nthreads); 509 509 } 510 + } 511 + 512 + static volatile int workload_exec_errno; 513 + 514 + /* 515 + * perf_evlist__prepare_workload will send a SIGUSR1 516 + * if the fork fails, since we asked by setting its 517 + * want_signal to true. 
518 + */ 519 + static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *info, 520 + void *ucontext __maybe_unused) 521 + { 522 + workload_exec_errno = info->si_value.sival_int; 510 523 } 511 524 512 525 static int __run_perf_stat(int argc, const char **argv) ··· 541 528 } 542 529 543 530 if (forks) { 544 - if (perf_evlist__prepare_workload(evsel_list, &target, argv, 545 - false, false) < 0) { 531 + if (perf_evlist__prepare_workload(evsel_list, &target, argv, false, 532 + workload_exec_failed_signal) < 0) { 546 533 perror("failed to prepare workload"); 547 534 return -1; 548 535 } ··· 552 539 if (group) 553 540 perf_evlist__set_leader(evsel_list); 554 541 555 - list_for_each_entry(counter, &evsel_list->entries, node) { 542 + evlist__for_each(evsel_list, counter) { 556 543 if (create_perf_stat_counter(counter) < 0) { 557 544 /* 558 545 * PPC returns ENXIO for HW counters until 2.6.37 ··· 607 594 } 608 595 } 609 596 wait(&status); 597 + 598 + if (workload_exec_errno) { 599 + const char *emsg = strerror_r(workload_exec_errno, msg, sizeof(msg)); 600 + pr_err("Workload failed: %s\n", emsg); 601 + return -1; 602 + } 603 + 610 604 if (WIFSIGNALED(status)) 611 605 psignal(WTERMSIG(status), argv[0]); 612 606 } else { ··· 630 610 update_stats(&walltime_nsecs_stats, t1 - t0); 631 611 632 612 if (aggr_mode == AGGR_GLOBAL) { 633 - list_for_each_entry(counter, &evsel_list->entries, node) { 613 + evlist__for_each(evsel_list, counter) { 634 614 read_counter_aggr(counter); 635 615 perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter), 636 616 thread_map__nr(evsel_list->threads)); 637 617 } 638 618 } else { 639 - list_for_each_entry(counter, &evsel_list->entries, node) { 619 + evlist__for_each(evsel_list, counter) { 640 620 read_counter(counter); 641 621 perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter), 1); 642 622 } ··· 645 625 return WEXITSTATUS(status); 646 626 } 647 627 648 - static int run_perf_stat(int argc __maybe_unused, const char **argv) 
628 + static int run_perf_stat(int argc, const char **argv) 649 629 { 650 630 int ret; 651 631 ··· 1117 1097 1118 1098 for (s = 0; s < aggr_map->nr; s++) { 1119 1099 id = aggr_map->map[s]; 1120 - list_for_each_entry(counter, &evsel_list->entries, node) { 1100 + evlist__for_each(evsel_list, counter) { 1121 1101 val = ena = run = 0; 1122 1102 nr = 0; 1123 1103 for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) { ··· 1328 1308 print_aggr(NULL); 1329 1309 break; 1330 1310 case AGGR_GLOBAL: 1331 - list_for_each_entry(counter, &evsel_list->entries, node) 1311 + evlist__for_each(evsel_list, counter) 1332 1312 print_counter_aggr(counter, NULL); 1333 1313 break; 1334 1314 case AGGR_NONE: 1335 - list_for_each_entry(counter, &evsel_list->entries, node) 1315 + evlist__for_each(evsel_list, counter) 1336 1316 print_counter(counter, NULL); 1337 1317 break; 1338 1318 default: ··· 1782 1762 if (interval && interval < 100) { 1783 1763 pr_err("print interval must be >= 100ms\n"); 1784 1764 parse_options_usage(stat_usage, options, "I", 1); 1785 - goto out_free_maps; 1765 + goto out; 1786 1766 } 1787 1767 1788 1768 if (perf_evlist__alloc_stats(evsel_list, interval)) 1789 - goto out_free_maps; 1769 + goto out; 1790 1770 1791 1771 if (perf_stat_init_aggr_mode()) 1792 - goto out_free_maps; 1772 + goto out; 1793 1773 1794 1774 /* 1795 1775 * We dont want to block the signals - that would cause ··· 1821 1801 print_stat(argc, argv); 1822 1802 1823 1803 perf_evlist__free_stats(evsel_list); 1824 - out_free_maps: 1825 - perf_evlist__delete_maps(evsel_list); 1826 1804 out: 1827 1805 perf_evlist__delete(evsel_list); 1828 1806 return status;
+6 -8
tools/perf/builtin-top.c
··· 482 482 483 483 fprintf(stderr, "\nAvailable events:"); 484 484 485 - list_for_each_entry(top->sym_evsel, &top->evlist->entries, node) 485 + evlist__for_each(top->evlist, top->sym_evsel) 486 486 fprintf(stderr, "\n\t%d %s", top->sym_evsel->idx, perf_evsel__name(top->sym_evsel)); 487 487 488 488 prompt_integer(&counter, "Enter details event counter"); ··· 493 493 sleep(1); 494 494 break; 495 495 } 496 - list_for_each_entry(top->sym_evsel, &top->evlist->entries, node) 496 + evlist__for_each(top->evlist, top->sym_evsel) 497 497 if (top->sym_evsel->idx == counter) 498 498 break; 499 499 } else ··· 575 575 * Zooming in/out UIDs. For now juse use whatever the user passed 576 576 * via --uid. 577 577 */ 578 - list_for_each_entry(pos, &top->evlist->entries, node) 578 + evlist__for_each(top->evlist, pos) 579 579 pos->hists.uid_filter_str = top->record_opts.target.uid_str; 580 580 581 581 perf_evlist__tui_browse_hists(top->evlist, help, &hbt, top->min_percent, ··· 858 858 859 859 perf_evlist__config(evlist, opts); 860 860 861 - list_for_each_entry(counter, &evlist->entries, node) { 861 + evlist__for_each(evlist, counter) { 862 862 try_again: 863 863 if (perf_evsel__open(counter, top->evlist->cpus, 864 864 top->evlist->threads) < 0) { ··· 1171 1171 if (!top.evlist->nr_entries && 1172 1172 perf_evlist__add_default(top.evlist) < 0) { 1173 1173 ui__error("Not enough memory for event selector list\n"); 1174 - goto out_delete_maps; 1174 + goto out_delete_evlist; 1175 1175 } 1176 1176 1177 1177 symbol_conf.nr_events = top.evlist->nr_entries; ··· 1181 1181 1182 1182 if (record_opts__config(opts)) { 1183 1183 status = -EINVAL; 1184 - goto out_delete_maps; 1184 + goto out_delete_evlist; 1185 1185 } 1186 1186 1187 1187 top.sym_evsel = perf_evlist__first(top.evlist); ··· 1206 1206 1207 1207 status = __cmd_top(&top); 1208 1208 1209 - out_delete_maps: 1210 - perf_evlist__delete_maps(top.evlist); 1211 1209 out_delete_evlist: 1212 1210 perf_evlist__delete(top.evlist); 1213 1211
+13 -17
tools/perf/builtin-trace.c
··· 1160 1160 struct record_opts opts; 1161 1161 struct machine *host; 1162 1162 u64 base_time; 1163 - bool full_time; 1164 1163 FILE *output; 1165 1164 unsigned long nr_events; 1166 1165 struct strlist *ev_qualifier; 1167 - bool not_ev_qualifier; 1168 - bool live; 1169 1166 const char *last_vfs_getname; 1170 1167 struct intlist *tid_list; 1171 1168 struct intlist *pid_list; 1169 + double duration_filter; 1170 + double runtime_ms; 1171 + struct { 1172 + u64 vfs_getname, 1173 + proc_getname; 1174 + } stats; 1175 + bool not_ev_qualifier; 1176 + bool live; 1177 + bool full_time; 1172 1178 bool sched; 1173 1179 bool multiple_threads; 1174 1180 bool summary; 1175 1181 bool summary_only; 1176 1182 bool show_comm; 1177 1183 bool show_tool_stats; 1178 - double duration_filter; 1179 - double runtime_ms; 1180 - struct { 1181 - u64 vfs_getname, proc_getname; 1182 - } stats; 1183 1184 }; 1184 1185 1185 1186 static int trace__set_fd_pathname(struct thread *thread, int fd, const char *pathname) ··· 1886 1885 err = trace__symbols_init(trace, evlist); 1887 1886 if (err < 0) { 1888 1887 fprintf(trace->output, "Problems initializing symbol libraries!\n"); 1889 - goto out_delete_maps; 1888 + goto out_delete_evlist; 1890 1889 } 1891 1890 1892 1891 perf_evlist__config(evlist, &trace->opts); ··· 1896 1895 1897 1896 if (forks) { 1898 1897 err = perf_evlist__prepare_workload(evlist, &trace->opts.target, 1899 - argv, false, false); 1898 + argv, false, NULL); 1900 1899 if (err < 0) { 1901 1900 fprintf(trace->output, "Couldn't run the workload!\n"); 1902 - goto out_delete_maps; 1901 + goto out_delete_evlist; 1903 1902 } 1904 1903 } 1905 1904 ··· 1910 1909 err = perf_evlist__mmap(evlist, trace->opts.mmap_pages, false); 1911 1910 if (err < 0) { 1912 1911 fprintf(trace->output, "Couldn't mmap the events: %s\n", strerror(errno)); 1913 - goto out_close_evlist; 1912 + goto out_delete_evlist; 1914 1913 } 1915 1914 1916 1915 perf_evlist__enable(evlist); ··· 1994 1993 } 1995 1994 } 1996 1995 1997 - perf_evlist__munmap(evlist); 1998 - out_close_evlist: 1999 - perf_evlist__close(evlist); 2000 - out_delete_maps: 2001 - perf_evlist__delete_maps(evlist); 2002 1996 out_delete_evlist: 2003 1997 perf_evlist__delete(evlist); 2004 1998 out:
+31 -38
tools/perf/config/Makefile
··· 1 - uname_M := $(shell uname -m 2>/dev/null || echo not) 2 1 3 - ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ \ 4 - -e s/arm.*/arm/ -e s/sa110/arm/ \ 5 - -e s/s390x/s390/ -e s/parisc64/parisc/ \ 6 - -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ 7 - -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) 8 - NO_PERF_REGS := 1 9 - CFLAGS := $(EXTRA_CFLAGS) $(EXTRA_WARNINGS) 10 - 11 - # Additional ARCH settings for x86 12 - ifeq ($(ARCH),i386) 13 - override ARCH := x86 14 - NO_PERF_REGS := 0 15 - LIBUNWIND_LIBS = -lunwind -lunwind-x86 2 + ifeq ($(src-perf),) 3 + src-perf := $(srctree)/tools/perf 16 4 endif 17 5 18 - ifeq ($(ARCH),x86_64) 19 - override ARCH := x86 20 - IS_X86_64 := 0 21 - ifeq (, $(findstring m32,$(CFLAGS))) 22 - IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -x c - | tail -n 1) 23 - endif 6 + ifeq ($(obj-perf),) 7 + obj-perf := $(OUTPUT) 8 + endif 9 + 10 + ifneq ($(obj-perf),) 11 + obj-perf := $(abspath $(obj-perf))/ 12 + endif 13 + 14 + LIB_INCLUDE := $(srctree)/tools/lib/ 15 + CFLAGS := $(EXTRA_CFLAGS) $(EXTRA_WARNINGS) 16 + 17 + include $(src-perf)/config/Makefile.arch 18 + 19 + NO_PERF_REGS := 1 20 + 21 + # Additional ARCH settings for x86 22 + ifeq ($(ARCH),x86) 24 23 ifeq (${IS_X86_64}, 1) 25 - RAW_ARCH := x86_64 26 24 CFLAGS += -DHAVE_ARCH_X86_64_SUPPORT 27 25 ARCH_INCLUDE = ../../arch/x86/lib/memcpy_64.S ../../arch/x86/lib/memset_64.S 28 26 LIBUNWIND_LIBS = -lunwind -lunwind-x86_64 ··· 53 55 FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) 54 56 FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS) 55 57 FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) 56 - # and the flags for the test-all case 57 - FEATURE_CHECK_CFLAGS-all += $(LIBUNWIND_CFLAGS) 58 - FEATURE_CHECK_LDFLAGS-all += $(LIBUNWIND_LDFLAGS) 59 58 endif 60 59 61 60 ifeq ($(NO_PERF_REGS),0) 62 61 CFLAGS += -DHAVE_PERF_REGS_SUPPORT 63 62 endif 64 - 65 - ifeq ($(src-perf),) 66 - src-perf := $(srctree)/tools/perf 67 - endif 68 - 69 - ifeq ($(obj-perf),) 70 - obj-perf := $(OUTPUT) 71 - endif 72 - 73 - ifneq ($(obj-perf),) 74 - obj-perf := $(abspath $(obj-perf))/ 75 - endif 76 - 77 - LIB_INCLUDE := $(srctree)/tools/lib/ 78 63 79 64 # include ARCH specific config 80 65 -include $(src-perf)/arch/$(ARCH)/Makefile ··· 149 168 stackprotector-all \ 150 169 timerfd 151 170 171 + # Set FEATURE_CHECK_(C|LD)FLAGS-all for all CORE_FEATURE_TESTS features. 172 + # If in the future we need per-feature checks/flags for features not 173 + # mentioned in this list we need to refactor this ;-). 174 + set_test_all_flags = $(eval $(set_test_all_flags_code)) 175 + define set_test_all_flags_code 176 + FEATURE_CHECK_CFLAGS-all += $(FEATURE_CHECK_CFLAGS-$(1)) 177 + FEATURE_CHECK_LDFLAGS-all += $(FEATURE_CHECK_LDFLAGS-$(1)) 178 + endef 179 + 180 + $(foreach feat,$(CORE_FEATURE_TESTS),$(call set_test_all_flags,$(feat))) 181 + 152 182 # 153 183 # So here we detect whether test-all was rebuilt, to be able 154 184 # to skip the print-out of the long features list if the file ··· 232 240 233 241 CFLAGS += -I$(src-perf)/util/include 234 242 CFLAGS += -I$(src-perf)/arch/$(ARCH)/include 243 + CFLAGS += -I$(srctree)/tools/include/ 235 244 CFLAGS += -I$(srctree)/arch/$(ARCH)/include/uapi 236 245 CFLAGS += -I$(srctree)/arch/$(ARCH)/include 237 246 CFLAGS += -I$(srctree)/include/uapi
+22
tools/perf/config/Makefile.arch
··· 1 + 2 + uname_M := $(shell uname -m 2>/dev/null || echo not) 3 + 4 + ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ \ 5 + -e s/arm.*/arm/ -e s/sa110/arm/ \ 6 + -e s/s390x/s390/ -e s/parisc64/parisc/ \ 7 + -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ 8 + -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) 9 + 10 + # Additional ARCH settings for x86 11 + ifeq ($(ARCH),i386) 12 + override ARCH := x86 13 + endif 14 + 15 + ifeq ($(ARCH),x86_64) 16 + override ARCH := x86 17 + IS_X86_64 := 0 18 + ifeq (, $(findstring m32,$(CFLAGS))) 19 + IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -x c - | tail -n 1) 20 + RAW_ARCH := x86_64 21 + endif 22 + endif
-6
tools/perf/config/feature-checks/test-volatile-register-var.c
··· 1 - #include <stdio.h> 2 - 3 - int main(void) 4 - { 5 - return puts("hi"); 6 - }
+1
tools/perf/perf.h
··· 269 269 u64 user_interval; 270 270 u16 stack_dump_size; 271 271 bool sample_transaction; 272 + unsigned initial_delay; 272 273 }; 273 274 274 275 #endif
+2 -5
tools/perf/tests/code-reading.c
··· 540 540 err = TEST_CODE_READING_OK; 541 541 out_err: 542 542 if (evlist) { 543 - perf_evlist__munmap(evlist); 544 - perf_evlist__close(evlist); 545 543 perf_evlist__delete(evlist); 546 - } 547 - if (cpus) 544 + } else { 548 545 cpu_map__delete(cpus); 549 - if (threads) 550 546 thread_map__delete(threads); 547 + } 551 548 machines__destroy_kernel_maps(&machines); 552 549 machine__delete_threads(machine); 553 550 machines__exit(&machines);
+1 -1
tools/perf/tests/evsel-roundtrip-name.c
··· 79 79 } 80 80 81 81 err = 0; 82 - list_for_each_entry(evsel, &evlist->entries, node) { 82 + evlist__for_each(evlist, evsel) { 83 83 if (strcmp(perf_evsel__name(evsel), names[evsel->idx])) { 84 84 --err; 85 85 pr_debug("%s != %s\n", perf_evsel__name(evsel), names[evsel->idx]);
+2 -2
tools/perf/tests/hists_link.c
··· 208 208 * However the second evsel also has a collapsed entry for 209 209 * "bash [libc] malloc" so total 9 entries will be in the tree. 210 210 */ 211 - list_for_each_entry(evsel, &evlist->entries, node) { 211 + evlist__for_each(evlist, evsel) { 212 212 for (k = 0; k < ARRAY_SIZE(fake_common_samples); k++) { 213 213 const union perf_event event = { 214 214 .header = { ··· 466 466 if (err < 0) 467 467 goto out; 468 468 469 - list_for_each_entry(evsel, &evlist->entries, node) { 469 + evlist__for_each(evlist, evsel) { 470 470 hists__collapse_resort(&evsel->hists, NULL); 471 471 472 472 if (verbose > 2)
+2 -5
tools/perf/tests/keep-tracking.c
··· 142 142 out_err: 143 143 if (evlist) { 144 144 perf_evlist__disable(evlist); 145 - perf_evlist__munmap(evlist); 146 - perf_evlist__close(evlist); 147 145 perf_evlist__delete(evlist); 148 - } 149 - if (cpus) 146 + } else { 150 147 cpu_map__delete(cpus); 151 - if (threads) 152 148 thread_map__delete(threads); 149 + } 153 150 154 151 return err; 155 152 }
+28 -12
tools/perf/tests/make
··· 1 1 PERF := . 2 2 MK := Makefile 3 3 4 + include config/Makefile.arch 5 + 6 + # FIXME looks like x86 is the only arch running tests ;-) 7 + # we need some IS_(32/64) flag to make this generic 8 + ifeq ($(IS_X86_64),1) 9 + lib = lib64 10 + else 11 + lib = lib 12 + endif 13 + 4 14 has = $(shell which $1 2>/dev/null) 5 15 6 16 # standard single make variable specified ··· 128 118 installed_files_bin += etc/bash_completion.d/perf 129 119 installed_files_bin += libexec/perf-core/perf-archive 130 120 131 - installed_files_plugins := lib64/traceevent/plugins/plugin_cfg80211.so 132 - installed_files_plugins += lib64/traceevent/plugins/plugin_scsi.so 133 - installed_files_plugins += lib64/traceevent/plugins/plugin_xen.so 134 - installed_files_plugins += lib64/traceevent/plugins/plugin_function.so 135 - installed_files_plugins += lib64/traceevent/plugins/plugin_sched_switch.so 136 - installed_files_plugins += lib64/traceevent/plugins/plugin_mac80211.so 137 - installed_files_plugins += lib64/traceevent/plugins/plugin_kvm.so 138 - installed_files_plugins += lib64/traceevent/plugins/plugin_kmem.so 139 - installed_files_plugins += lib64/traceevent/plugins/plugin_hrtimer.so 140 - installed_files_plugins += lib64/traceevent/plugins/plugin_jbd2.so 121 + installed_files_plugins := $(lib)/traceevent/plugins/plugin_cfg80211.so 122 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_scsi.so 123 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_xen.so 124 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_function.so 125 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_sched_switch.so 126 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_mac80211.so 127 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_kvm.so 128 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_kmem.so 129 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_hrtimer.so 130 + installed_files_plugins += $(lib)/traceevent/plugins/plugin_jbd2.so 141 131 142 132 installed_files_all := $(installed_files_bin) 143 133 installed_files_all += $(installed_files_plugins) ··· 216 206 rm -rf $$TMP_O \ 217 207 rm -rf $$TMP_DEST 218 208 219 - all: $(run) $(run_O) 209 + tarpkg: 210 + @cmd="$(PERF)/tests/perf-targz-src-pkg $(PERF)"; \ 211 + echo "- $@: $$cmd" && echo $$cmd > $@ && \ 212 + ( eval $$cmd ) >> $@ 2>&1 213 + 214 + 215 + all: $(run) $(run_O) tarpkg 220 216 @echo OK 221 217 222 218 out: $(run_O) 223 219 @echo OK 224 220 225 - .PHONY: all $(run) $(run_O) clean 221 + .PHONY: all $(run) $(run_O) tarpkg clean
+11 -14
tools/perf/tests/mmap-basic.c
··· 68 68 evsels[i] = perf_evsel__newtp("syscalls", name); 69 69 if (evsels[i] == NULL) { 70 70 pr_debug("perf_evsel__new\n"); 71 - goto out_free_evlist; 71 + goto out_delete_evlist; 72 72 } 73 73 74 74 evsels[i]->attr.wakeup_events = 1; ··· 80 80 pr_debug("failed to open counter: %s, " 81 81 "tweak /proc/sys/kernel/perf_event_paranoid?\n", 82 82 strerror(errno)); 83 - goto out_close_fd; 83 + goto out_delete_evlist; 84 84 } 85 85 86 86 nr_events[i] = 0; ··· 90 90 if (perf_evlist__mmap(evlist, 128, true) < 0) { 91 91 pr_debug("failed to mmap events: %d (%s)\n", errno, 92 92 strerror(errno)); 93 - goto out_close_fd; 93 + goto out_delete_evlist; 94 94 } 95 95 96 96 for (i = 0; i < nsyscalls; ++i) ··· 105 105 if (event->header.type != PERF_RECORD_SAMPLE) { 106 106 pr_debug("unexpected %s event\n", 107 107 perf_event__name(event->header.type)); 108 - goto out_munmap; 108 + goto out_delete_evlist; 109 109 } 110 110 111 111 err = perf_evlist__parse_sample(evlist, event, &sample); 112 112 if (err) { 113 113 pr_err("Can't parse sample, err = %d\n", err); 114 - goto out_munmap; 114 + goto out_delete_evlist; 115 115 } 116 116 117 117 err = -1; ··· 119 119 if (evsel == NULL) { 120 120 pr_debug("event with id %" PRIu64 121 121 " doesn't map to an evsel\n", sample.id); 122 - goto out_munmap; 122 + goto out_delete_evlist; 123 123 } 124 124 nr_events[evsel->idx]++; 125 125 perf_evlist__mmap_consume(evlist, 0); 126 126 } 127 127 128 128 err = 0; 129 - list_for_each_entry(evsel, &evlist->entries, node) { 129 + evlist__for_each(evlist, evsel) { 130 130 if (nr_events[evsel->idx] != expected_nr_events[evsel->idx]) { 131 131 pr_debug("expected %d %s events, got %d\n", 132 132 expected_nr_events[evsel->idx], 133 133 perf_evsel__name(evsel), nr_events[evsel->idx]); 134 134 err = -1; 135 - goto out_munmap; 135 + goto out_delete_evlist; 136 136 } 137 137 } 138 138 139 - out_munmap: 140 - perf_evlist__munmap(evlist); 141 - out_close_fd: 142 - for (i = 0; i < nsyscalls; ++i) 143 - perf_evsel__close_fd(evsels[i], 1, threads->nr); 144 - out_free_evlist: 139 + out_delete_evlist: 145 140 perf_evlist__delete(evlist); 141 + cpus = NULL; 142 + threads = NULL; 146 143 out_free_cpus: 147 144 cpu_map__delete(cpus); 148 145 out_free_threads:
+5 -11
tools/perf/tests/open-syscall-tp-fields.c
··· 48 48 err = perf_evlist__open(evlist); 49 49 if (err < 0) { 50 50 pr_debug("perf_evlist__open: %s\n", strerror(errno)); 51 - goto out_delete_maps; 51 + goto out_delete_evlist; 52 52 } 53 53 54 54 err = perf_evlist__mmap(evlist, UINT_MAX, false); 55 55 if (err < 0) { 56 56 pr_debug("perf_evlist__mmap: %s\n", strerror(errno)); 57 - goto out_close_evlist; 57 + goto out_delete_evlist; 58 58 } 59 59 60 60 perf_evlist__enable(evlist); ··· 85 85 err = perf_evsel__parse_sample(evsel, event, &sample); 86 86 if (err) { 87 87 pr_err("Can't parse sample, err = %d\n", err); 88 - goto out_munmap; 88 + goto out_delete_evlist; 89 89 } 90 90 91 91 tp_flags = perf_evsel__intval(evsel, &sample, "flags"); ··· 93 93 if (flags != tp_flags) { 94 94 pr_debug("%s: Expected flags=%#x, got %#x\n", 95 95 __func__, flags, tp_flags); 96 - goto out_munmap; 96 + goto out_delete_evlist; 97 97 } 98 98 99 99 goto out_ok; ··· 105 105 106 106 if (++nr_polls > 5) { 107 107 pr_debug("%s: no events!\n", __func__); 108 - goto out_munmap; 108 + goto out_delete_evlist; 109 109 } 110 110 } 111 111 out_ok: 112 112 err = 0; 113 - out_munmap: 114 - perf_evlist__munmap(evlist); 115 - out_close_evlist: 116 - perf_evlist__close(evlist); 117 - out_delete_maps: 118 - perf_evlist__delete_maps(evlist); 119 113 out_delete_evlist: 120 114 perf_evlist__delete(evlist); 121 115 out:
+5 -5
tools/perf/tests/parse-events.c
··· 30 30 TEST_ASSERT_VAL("wrong number of entries", evlist->nr_entries > 1); 31 31 TEST_ASSERT_VAL("wrong number of groups", 0 == evlist->nr_groups); 32 32 33 - list_for_each_entry(evsel, &evlist->entries, node) { 33 + evlist__for_each(evlist, evsel) { 34 34 TEST_ASSERT_VAL("wrong type", 35 35 PERF_TYPE_TRACEPOINT == evsel->attr.type); 36 36 TEST_ASSERT_VAL("wrong sample_type", ··· 201 201 202 202 TEST_ASSERT_VAL("wrong number of entries", evlist->nr_entries > 1); 203 203 204 - list_for_each_entry(evsel, &evlist->entries, node) { 204 + evlist__for_each(evlist, evsel) { 205 205 TEST_ASSERT_VAL("wrong exclude_user", 206 206 !evsel->attr.exclude_user); 207 207 TEST_ASSERT_VAL("wrong exclude_kernel", ··· 1385 1385 if (ret) { 1386 1386 pr_debug("failed to parse event '%s', err %d\n", 1387 1387 e->name, ret); 1388 - return ret; 1388 + } else { 1389 + ret = e->check(evlist); 1389 1390 } 1390 - 1391 - ret = e->check(evlist); 1391 + 1392 1392 perf_evlist__delete(evlist); 1393 1393 1394 1394 return ret;
+7 -14
tools/perf/tests/perf-record.c
··· 83 83 * so that we have time to open the evlist (calling sys_perf_event_open 84 84 * on all the fds) and then mmap them. 85 85 */ 86 - err = perf_evlist__prepare_workload(evlist, &opts.target, argv, 87 - false, false); 86 + err = perf_evlist__prepare_workload(evlist, &opts.target, argv, false, NULL); 88 87 if (err < 0) { 89 88 pr_debug("Couldn't run the workload!\n"); 90 - goto out_delete_maps; 89 + goto out_delete_evlist; 91 90 } 92 91 93 92 /* ··· 101 102 err = sched__get_first_possible_cpu(evlist->workload.pid, &cpu_mask); 102 103 if (err < 0) { 103 104 pr_debug("sched__get_first_possible_cpu: %s\n", strerror(errno)); 104 - goto out_delete_maps; 105 + goto out_delete_evlist; 105 106 } 106 107 107 108 cpu = err; ··· 111 112 */ 112 113 if (sched_setaffinity(evlist->workload.pid, cpu_mask_size, &cpu_mask) < 0) { 113 114 pr_debug("sched_setaffinity: %s\n", strerror(errno)); 114 - goto out_delete_maps; 115 + goto out_delete_evlist; 115 116 } 116 117 117 118 /* ··· 121 122 err = perf_evlist__open(evlist); 122 123 if (err < 0) { 123 124 pr_debug("perf_evlist__open: %s\n", strerror(errno)); 124 - goto out_delete_maps; 125 + goto out_delete_evlist; 125 126 } 126 127 127 128 /* ··· 132 133 err = perf_evlist__mmap(evlist, opts.mmap_pages, false); 133 134 if (err < 0) { 134 135 pr_debug("perf_evlist__mmap: %s\n", strerror(errno)); 135 - goto out_close_evlist; 136 + goto out_delete_evlist; 136 137 } 137 138 138 139 /* ··· 165 166 if (verbose) 166 167 perf_event__fprintf(event, stderr); 167 168 pr_debug("Couldn't parse sample\n"); 168 - goto out_err; 169 + goto out_delete_evlist; 169 170 } 170 171 171 172 if (verbose) { ··· 302 303 pr_debug("PERF_RECORD_MMAP for %s missing!\n", "[vdso]"); 303 304 ++errs; 304 305 } 305 - out_err: 306 - perf_evlist__munmap(evlist); 307 - out_close_evlist: 308 - perf_evlist__close(evlist); 309 - out_delete_maps: 310 - perf_evlist__delete_maps(evlist); 311 306 out_delete_evlist: 312 307 perf_evlist__delete(evlist); 313 308 out:
+21
tools/perf/tests/perf-targz-src-pkg
··· 1 + #!/bin/sh 2 + # Test one of the main kernel Makefile targets to generate a perf sources tarball 3 + # suitable for build outside the full kernel sources. 4 + # 5 + # This is to test that the tools/perf/MANIFEST file lists all the files needed to 6 + # be in such tarball, which sometimes gets broken when we move files around, 7 + # like when we made some files that were in tools/perf/ available to other tools/ 8 + # codebases by moving it to tools/include/, etc. 9 + 10 + PERF=$1 11 + cd ${PERF}/../.. 12 + make perf-targz-src-pkg > /dev/null 13 + TARBALL=$(ls -rt perf-*.tar.gz) 14 + TMP_DEST=$(mktemp -d) 15 + tar xf ${TARBALL} -C $TMP_DEST 16 + rm -f ${TARBALL} 17 + cd - > /dev/null 18 + make -C $TMP_DEST/perf*/tools/perf > /dev/null 2>&1 19 + RC=$? 20 + rm -rf ${TMP_DEST} 21 + exit $RC
-6
tools/perf/tests/perf-time-to-tsc.c
··· 166 166 out_err: 167 167 if (evlist) { 168 168 perf_evlist__disable(evlist); 169 - perf_evlist__munmap(evlist); 170 - perf_evlist__close(evlist); 171 169 perf_evlist__delete(evlist); 172 170 } 173 - if (cpus) 174 - cpu_map__delete(cpus); 175 - if (threads) 176 - thread_map__delete(threads); 177 171 178 172 return err; 179 173 }
+6 -12
tools/perf/tests/sw-clock.c
··· 45 45 evsel = perf_evsel__new(&attr); 46 46 if (evsel == NULL) { 47 47 pr_debug("perf_evsel__new\n"); 48 - goto out_free_evlist; 48 + goto out_delete_evlist; 49 49 } 50 50 perf_evlist__add(evlist, evsel); 51 51 ··· 54 54 if (!evlist->cpus || !evlist->threads) { 55 55 err = -ENOMEM; 56 56 pr_debug("Not enough memory to create thread/cpu maps\n"); 57 - goto out_delete_maps; 57 + goto out_delete_evlist; 58 58 } 59 59 60 60 if (perf_evlist__open(evlist)) { ··· 63 63 err = -errno; 64 64 pr_debug("Couldn't open evlist: %s\nHint: check %s, using %" PRIu64 " in this test.\n", 65 65 strerror(errno), knob, (u64)attr.sample_freq); 66 - goto out_delete_maps; 66 + goto out_delete_evlist; 67 67 } 68 68 69 69 err = perf_evlist__mmap(evlist, 128, true); 70 70 if (err < 0) { 71 71 pr_debug("failed to mmap event: %d (%s)\n", errno, 72 72 strerror(errno)); 73 - goto out_close_evlist; 73 + goto out_delete_evlist; 74 74 } 75 75 76 76 perf_evlist__enable(evlist); ··· 90 90 err = perf_evlist__parse_sample(evlist, event, &sample); 91 91 if (err < 0) { 92 92 pr_debug("Error during parse sample\n"); 93 - goto out_unmap_evlist; 93 + goto out_delete_evlist; 94 94 } 95 95 96 96 total_periods += sample.period; ··· 105 105 err = -1; 106 106 } 107 107 108 - out_unmap_evlist: 109 - perf_evlist__munmap(evlist); 110 - out_close_evlist: 111 - perf_evlist__close(evlist); 112 - out_delete_maps: 113 - perf_evlist__delete_maps(evlist); 114 - out_free_evlist: 108 + out_delete_evlist: 115 109 perf_evlist__delete(evlist); 116 110 return err; 117 111 }
+19 -14
tools/perf/tests/task-exit.c
··· 9 9 static int exited; 10 10 static int nr_exit; 11 11 12 - static void sig_handler(int sig) 12 + static void sig_handler(int sig __maybe_unused) 13 13 { 14 14 exited = 1; 15 + } 15 16 16 - if (sig == SIGUSR1) 17 - nr_exit = -1; 17 + /* 18 + * perf_evlist__prepare_workload will send a SIGUSR1 if the fork fails, since 19 + * we asked by setting its exec_error to this handler. 20 + */ 21 + static void workload_exec_failed_signal(int signo __maybe_unused, 22 + siginfo_t *info __maybe_unused, 23 + void *ucontext __maybe_unused) 24 + { 25 + exited = 1; 26 + nr_exit = -1; 18 27 } 19 28 20 29 /* ··· 44 35 const char *argv[] = { "true", NULL }; 45 36 46 37 signal(SIGCHLD, sig_handler); 47 - signal(SIGUSR1, sig_handler); 48 38 49 39 evlist = perf_evlist__new_default(); 50 40 if (evlist == NULL) { ··· 62 54 if (!evlist->cpus || !evlist->threads) { 63 55 err = -ENOMEM; 64 56 pr_debug("Not enough memory to create thread/cpu maps\n"); 65 - goto out_delete_maps; 57 + goto out_delete_evlist; 66 58 } 67 59 68 - err = perf_evlist__prepare_workload(evlist, &target, argv, false, true); 60 + err = perf_evlist__prepare_workload(evlist, &target, argv, false, 61 + workload_exec_failed_signal); 69 62 if (err < 0) { 70 63 pr_debug("Couldn't run the workload!\n"); 71 - goto out_delete_maps; 64 + goto out_delete_evlist; 72 65 } 73 66 74 67 evsel = perf_evlist__first(evlist); ··· 83 74 err = perf_evlist__open(evlist); 84 75 if (err < 0) { 85 76 pr_debug("Couldn't open the evlist: %s\n", strerror(-err)); 86 - goto out_delete_maps; 77 + goto out_delete_evlist; 87 78 } 88 79 89 80 if (perf_evlist__mmap(evlist, 128, true) < 0) { 90 81 pr_debug("failed to mmap events: %d (%s)\n", errno, 91 82 strerror(errno)); 92 - goto out_close_evlist; 83 + goto out_delete_evlist; 93 84 } 94 85 95 86 perf_evlist__start_workload(evlist); ··· 112 103 err = -1; 113 104 } 114 105 115 - perf_evlist__munmap(evlist); 116 - out_close_evlist: 117 - perf_evlist__close(evlist); 118 - out_delete_maps: 119 - perf_evlist__delete_maps(evlist); 106 + out_delete_evlist: 120 107 perf_evlist__delete(evlist); 121 108 return err; 122 109 }
+3 -2
tools/perf/ui/browsers/hists.c
··· 1938 1938 1939 1939 ui_helpline__push("Press ESC to exit"); 1940 1940 1941 - list_for_each_entry(pos, &evlist->entries, node) { 1941 + evlist__for_each(evlist, pos) { 1942 1942 const char *ev_name = perf_evsel__name(pos); 1943 1943 size_t line_len = strlen(ev_name) + 7; 1944 1944 ··· 1970 1970 struct perf_evsel *pos; 1971 1971 1972 1972 nr_entries = 0; 1973 - list_for_each_entry(pos, &evlist->entries, node) 1973 + evlist__for_each(evlist, pos) { 1974 1974 if (perf_evsel__is_group_leader(pos)) 1975 1975 nr_entries++; 1976 + } 1976 1977 1977 1978 if (nr_entries == 1) 1978 1979 goto single_entry;
+1 -1
tools/perf/ui/gtk/hists.c
··· 375 375 376 376 gtk_container_add(GTK_CONTAINER(window), vbox); 377 377 378 - list_for_each_entry(pos, &evlist->entries, node) { 378 + evlist__for_each(evlist, pos) { 379 379 struct hists *hists = &pos->hists; 380 380 const char *evname = perf_evsel__name(pos); 381 381 GtkWidget *scrolled_window;
+2 -2
tools/perf/util/cgroup.c
··· 81 81 /* 82 82 * check if cgrp is already defined, if so we reuse it 83 83 */ 84 - list_for_each_entry(counter, &evlist->entries, node) { 84 + evlist__for_each(evlist, counter) { 85 85 cgrp = counter->cgrp; 86 86 if (!cgrp) 87 87 continue; ··· 110 110 * if add cgroup N, then need to find event N 111 111 */ 112 112 n = 0; 113 - list_for_each_entry(counter, &evlist->entries, node) { 113 + evlist__for_each(evlist, counter) { 114 114 if (n == nr_cgroups) 115 115 goto found; 116 116 n++;
+10 -5
tools/perf/util/color.c
··· 1 1 #include <linux/kernel.h> 2 2 #include "cache.h" 3 3 #include "color.h" 4 + #include <math.h> 4 5 5 6 int perf_use_color_default = -1; 6 7 ··· 299 298 * entries in green - and keep the low overhead places 300 299 * normal: 301 300 */ 302 - if (percent >= MIN_RED) 301 + if (fabs(percent) >= MIN_RED) 303 302 color = PERF_COLOR_RED; 304 303 else { 305 - if (percent > MIN_GREEN) 304 + if (fabs(percent) > MIN_GREEN) 306 305 color = PERF_COLOR_GREEN; 307 306 } 308 307 return color; ··· 319 318 return r; 320 319 } 321 320 321 + int value_color_snprintf(char *bf, size_t size, const char *fmt, double value) 322 + { 323 + const char *color = get_percent_color(value); 324 + return color_snprintf(bf, size, color, fmt, value); 325 + } 326 + 322 327 int percent_color_snprintf(char *bf, size_t size, const char *fmt, ...) 323 328 { 324 329 va_list args; 325 330 double percent; 326 - const char *color; 327 331 328 332 va_start(args, fmt); 329 333 percent = va_arg(args, double); 330 334 va_end(args); 331 - color = get_percent_color(percent); 332 - return color_snprintf(bf, size, color, fmt, percent); 335 + return value_color_snprintf(bf, size, fmt, percent); 333 336 }
+1
tools/perf/util/color.h
··· 39 39 int color_snprintf(char *bf, size_t size, const char *color, const char *fmt, ...); 40 40 int color_fprintf_ln(FILE *fp, const char *color, const char *fmt, ...); 41 41 int color_fwrite_lines(FILE *fp, const char *color, size_t count, const char *buf); 42 + int value_color_snprintf(char *bf, size_t size, const char *fmt, double value); 42 43 int percent_color_snprintf(char *bf, size_t size, const char *fmt, ...); 43 44 int percent_color_fprintf(FILE *fp, const char *fmt, double percent); 44 45 const char *get_percent_color(double percent);
+6 -6
tools/perf/util/event.c
··· 175 175 return tgid; 176 176 } 177 177 178 - static int perf_event__synthesize_mmap_events(struct perf_tool *tool, 179 - union perf_event *event, 180 - pid_t pid, pid_t tgid, 181 - perf_event__handler_t process, 182 - struct machine *machine, 183 - bool mmap_data) 178 + int perf_event__synthesize_mmap_events(struct perf_tool *tool, 179 + union perf_event *event, 180 + pid_t pid, pid_t tgid, 181 + perf_event__handler_t process, 182 + struct machine *machine, 183 + bool mmap_data) 184 184 { 185 185 char filename[PATH_MAX]; 186 186 FILE *fp;
+7
tools/perf/util/event.h
··· 266 266 const struct perf_sample *sample, 267 267 bool swapped); 268 268 269 + int perf_event__synthesize_mmap_events(struct perf_tool *tool, 270 + union perf_event *event, 271 + pid_t pid, pid_t tgid, 272 + perf_event__handler_t process, 273 + struct machine *machine, 274 + bool mmap_data); 275 + 269 276 size_t perf_event__fprintf_comm(union perf_event *event, FILE *fp); 270 277 size_t perf_event__fprintf_mmap(union perf_event *event, FILE *fp); 271 278 size_t perf_event__fprintf_mmap2(union perf_event *event, FILE *fp);
+46 -32
tools/perf/util/evlist.c
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, &evlist->entries, node)
+	evlist__for_each(evlist, evsel)
 		perf_evsel__calc_id_pos(evsel);
 
 	perf_evlist__set_id_pos(evlist);
···
 {
 	struct perf_evsel *pos, *n;
 
-	list_for_each_entry_safe(pos, n, &evlist->entries, node) {
+	evlist__for_each_safe(evlist, n, pos) {
 		list_del_init(&pos->node);
 		perf_evsel__delete(pos);
 	}
···
 
 void perf_evlist__delete(struct perf_evlist *evlist)
 {
+	perf_evlist__munmap(evlist);
+	perf_evlist__close(evlist);
+	cpu_map__delete(evlist->cpus);
+	thread_map__delete(evlist->threads);
+	evlist->cpus = NULL;
+	evlist->threads = NULL;
 	perf_evlist__purge(evlist);
 	perf_evlist__exit(evlist);
 	free(evlist);
···
 
 	leader->nr_members = evsel->idx - leader->idx + 1;
 
-	list_for_each_entry(evsel, list, node) {
+	__evlist__for_each(list, evsel) {
 		evsel->leader = leader;
 	}
 }
···
 	return 0;
 
 out_delete_partial_list:
-	list_for_each_entry_safe(evsel, n, &head, node)
+	__evlist__for_each_safe(&head, n, evsel)
 		perf_evsel__delete(evsel);
 	return -1;
 }
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		if (evsel->attr.type == PERF_TYPE_TRACEPOINT &&
 		    (int)evsel->attr.config == id)
 			return evsel;
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		if ((evsel->attr.type == PERF_TYPE_TRACEPOINT) &&
 		    (strcmp(evsel->name, name) == 0))
 			return evsel;
···
 	int nr_threads = thread_map__nr(evlist->threads);
 
 	for (cpu = 0; cpu < nr_cpus; cpu++) {
-		list_for_each_entry(pos, &evlist->entries, node) {
+		evlist__for_each(evlist, pos) {
 			if (!perf_evsel__is_group_leader(pos) || !pos->fd)
 				continue;
 			for (thread = 0; thread < nr_threads; thread++)
···
 	int nr_threads = thread_map__nr(evlist->threads);
 
 	for (cpu = 0; cpu < nr_cpus; cpu++) {
-		list_for_each_entry(pos, &evlist->entries, node) {
+		evlist__for_each(evlist, pos) {
 			if (!perf_evsel__is_group_leader(pos) || !pos->fd)
 				continue;
 			for (thread = 0; thread < nr_threads; thread++)
···
 {
 	int i;
 
+	if (evlist->mmap == NULL)
+		return;
+
 	for (i = 0; i < evlist->nr_mmaps; i++)
 		__perf_evlist__munmap(evlist, i);
 
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		int fd = FD(evsel, cpu, thread);
 
 		if (*output == -1) {
···
 	pr_debug("mmap size %zuB\n", evlist->mmap_len);
 	mask = evlist->mmap_len - page_size - 1;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		if ((evsel->attr.read_format & PERF_FORMAT_ID) &&
 		    evsel->sample_id == NULL &&
 		    perf_evsel__alloc_id(evsel, cpu_map__nr(cpus), threads->nr) < 0)
···
 	return -1;
 }
 
-void perf_evlist__delete_maps(struct perf_evlist *evlist)
-{
-	cpu_map__delete(evlist->cpus);
-	thread_map__delete(evlist->threads);
-	evlist->cpus = NULL;
-	evlist->threads = NULL;
-}
-
 int perf_evlist__apply_filters(struct perf_evlist *evlist)
 {
 	struct perf_evsel *evsel;
···
 	const int ncpus = cpu_map__nr(evlist->cpus),
 		  nthreads = thread_map__nr(evlist->threads);
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		if (evsel->filter == NULL)
 			continue;
 
···
 	const int ncpus = cpu_map__nr(evlist->cpus),
 		  nthreads = thread_map__nr(evlist->threads);
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		err = perf_evsel__set_filter(evsel, ncpus, nthreads, filter);
 		if (err)
 			break;
···
 	if (evlist->id_pos < 0 || evlist->is_pos < 0)
 		return false;
 
-	list_for_each_entry(pos, &evlist->entries, node) {
+	evlist__for_each(evlist, pos) {
 		if (pos->id_pos != evlist->id_pos ||
 		    pos->is_pos != evlist->is_pos)
 			return false;
···
 	if (evlist->combined_sample_type)
 		return evlist->combined_sample_type;
 
-	list_for_each_entry(evsel, &evlist->entries, node)
+	evlist__for_each(evlist, evsel)
 		evlist->combined_sample_type |= evsel->attr.sample_type;
 
 	return evlist->combined_sample_type;
···
 	u64 read_format = first->attr.read_format;
 	u64 sample_type = first->attr.sample_type;
 
-	list_for_each_entry_continue(pos, &evlist->entries, node) {
+	evlist__for_each(evlist, pos) {
 		if (read_format != pos->attr.read_format)
 			return false;
 	}
···
 {
 	struct perf_evsel *first = perf_evlist__first(evlist), *pos = first;
 
-	list_for_each_entry_continue(pos, &evlist->entries, node) {
+	evlist__for_each_continue(evlist, pos) {
 		if (first->attr.sample_id_all != pos->attr.sample_id_all)
 			return false;
 	}
···
 	int ncpus = cpu_map__nr(evlist->cpus);
 	int nthreads = thread_map__nr(evlist->threads);
 
-	list_for_each_entry_reverse(evsel, &evlist->entries, node)
+	evlist__for_each_reverse(evlist, evsel)
 		perf_evsel__close(evsel, ncpus, nthreads);
 }
···
 
 	perf_evlist__update_id_pos(evlist);
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		err = perf_evsel__open(evsel, evlist->cpus, evlist->threads);
 		if (err < 0)
 			goto out_err;
···
 
 int perf_evlist__prepare_workload(struct perf_evlist *evlist, struct target *target,
 				  const char *argv[], bool pipe_output,
-				  bool want_signal)
+				  void (*exec_error)(int signo, siginfo_t *info, void *ucontext))
 {
 	int child_ready_pipe[2], go_pipe[2];
 	char bf;
···
 
 		execvp(argv[0], (char **)argv);
 
-		perror(argv[0]);
-		if (want_signal)
-			kill(getppid(), SIGUSR1);
+		if (exec_error) {
+			union sigval val;
+
+			val.sival_int = errno;
+			if (sigqueue(getppid(), SIGUSR1, val))
+				perror(argv[0]);
+		} else
+			perror(argv[0]);
 		exit(-1);
+	}
+
+	if (exec_error) {
+		struct sigaction act = {
+			.sa_flags     = SA_SIGINFO,
+			.sa_sigaction = exec_error,
+		};
+		sigaction(SIGUSR1, &act, NULL);
 	}
 
 	if (target__none(target))
···
 	struct perf_evsel *evsel;
 	size_t printed = 0;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		printed += fprintf(fp, "%s%s", evsel->idx ? ", " : "",
 				   perf_evsel__name(evsel));
 	}
···
 	if (move_evsel == perf_evlist__first(evlist))
 		return;
 
-	list_for_each_entry_safe(evsel, n, &evlist->entries, node) {
+	evlist__for_each_safe(evlist, n, evsel) {
 		if (evsel->leader == move_evsel->leader)
 			list_move_tail(&evsel->node, &move);
 	}
tools/perf/util/evlist.h (+67 -2)
···
 int perf_evlist__prepare_workload(struct perf_evlist *evlist,
 				  struct target *target,
 				  const char *argv[], bool pipe_output,
-				  bool want_signal);
+				  void (*exec_error)(int signo, siginfo_t *info,
+						     void *ucontext));
 int perf_evlist__start_workload(struct perf_evlist *evlist);
 
 int perf_evlist__parse_mmap_pages(const struct option *opt,
···
 }
 
 int perf_evlist__create_maps(struct perf_evlist *evlist, struct target *target);
-void perf_evlist__delete_maps(struct perf_evlist *evlist);
 int perf_evlist__apply_filters(struct perf_evlist *evlist);
 
 void __perf_evlist__set_leader(struct list_head *list);
···
 void perf_evlist__to_front(struct perf_evlist *evlist,
 			   struct perf_evsel *move_evsel);
 
+/**
+ * __evlist__for_each - iterate thru all the evsels
+ * @list: list_head instance to iterate
+ * @evsel: struct evsel iterator
+ */
+#define __evlist__for_each(list, evsel) \
+	list_for_each_entry(evsel, list, node)
+
+/**
+ * evlist__for_each - iterate thru all the evsels
+ * @evlist: evlist instance to iterate
+ * @evsel: struct evsel iterator
+ */
+#define evlist__for_each(evlist, evsel) \
+	__evlist__for_each(&(evlist)->entries, evsel)
+
+/**
+ * __evlist__for_each_continue - continue iteration thru all the evsels
+ * @list: list_head instance to iterate
+ * @evsel: struct evsel iterator
+ */
+#define __evlist__for_each_continue(list, evsel) \
+	list_for_each_entry_continue(evsel, list, node)
+
+/**
+ * evlist__for_each_continue - continue iteration thru all the evsels
+ * @evlist: evlist instance to iterate
+ * @evsel: struct evsel iterator
+ */
+#define evlist__for_each_continue(evlist, evsel) \
+	__evlist__for_each_continue(&(evlist)->entries, evsel)
+
+/**
+ * __evlist__for_each_reverse - iterate thru all the evsels in reverse order
+ * @list: list_head instance to iterate
+ * @evsel: struct evsel iterator
+ */
+#define __evlist__for_each_reverse(list, evsel) \
+	list_for_each_entry_reverse(evsel, list, node)
+
+/**
+ * evlist__for_each_reverse - iterate thru all the evsels in reverse order
+ * @evlist: evlist instance to iterate
+ * @evsel: struct evsel iterator
+ */
+#define evlist__for_each_reverse(evlist, evsel) \
+	__evlist__for_each_reverse(&(evlist)->entries, evsel)
+
+/**
+ * __evlist__for_each_safe - safely iterate thru all the evsels
+ * @list: list_head instance to iterate
+ * @tmp: struct evsel temp iterator
+ * @evsel: struct evsel iterator
+ */
+#define __evlist__for_each_safe(list, tmp, evsel) \
+	list_for_each_entry_safe(evsel, tmp, list, node)
+
+/**
+ * evlist__for_each_safe - safely iterate thru all the evsels
+ * @evlist: evlist instance to iterate
+ * @evsel: struct evsel iterator
+ * @tmp: struct evsel temp iterator
+ */
+#define evlist__for_each_safe(evlist, tmp, evsel) \
+	__evlist__for_each_safe(&(evlist)->entries, tmp, evsel)
 
 #endif /* __PERF_EVLIST_H */
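The macros above exist so call sites stop repeating the `&evlist->entries, node` boilerplate of list_for_each_entry(). The same wrapping pattern can be sketched over a toy intrusive list (all names below are made up for illustration, not perf's or the kernel's):

```c
#include <stddef.h>

/* Toy intrusive singly linked list, standing in for the kernel's list.h */
struct node { struct node *next; };
struct item { int val; struct node link; };
struct bag  { struct node head; };	/* plays the role of struct perf_evlist */

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Raw iteration over a list head, analogous to __evlist__for_each(list, evsel) */
#define __bag__for_each(listp, pos)					\
	for (struct node *__n = (listp)->next;				\
	     __n && ((pos) = container_of(__n, struct item, link), 1);	\
	     __n = __n->next)

/* Convenience wrapper, analogous to evlist__for_each(evlist, evsel):
 * the container macro hides which member holds the list head. */
#define bag__for_each(bag, pos) __bag__for_each(&(bag)->head, pos)
```

The two-layer split mirrors the patch: the `__`-prefixed form takes a bare list head (useful while evsels are still on a temporary list, as in parse-events.c), while the plain form takes the container.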
tools/perf/util/evsel.c (+2 -1)
···
 	 * Setting enable_on_exec for independent events and
 	 * group leaders for traced executed by perf.
 	 */
-	if (target__none(&opts->target) && perf_evsel__is_group_leader(evsel))
+	if (target__none(&opts->target) && perf_evsel__is_group_leader(evsel) &&
+	    !opts->initial_delay)
 		attr->enable_on_exec = 1;
 }
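This hunk is the core of the new --initial-delay option: when a delay is requested, the counters must not be auto-enabled at exec time, because perf record enables them itself once the delay expires. A standalone helper mirroring just that decision (illustrative only; this is not a perf API):

```c
#include <stdbool.h>

/* Hypothetical mirror of the enable_on_exec condition in perf_evsel__config():
 * enable at exec only for a forked workload (no pid/cpu target), only on
 * group leaders, and only when no --initial-delay was requested. */
static bool want_enable_on_exec(bool target_none, bool is_group_leader,
				int initial_delay_ms)
{
	return target_none && is_group_leader && !initial_delay_ms;
}
```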
tools/perf/util/header.c (+9 -10)
···
 	if (ret < 0)
 		return ret;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
-
+	evlist__for_each(evlist, evsel) {
 		ret = do_write(fd, &evsel->attr, sz);
 		if (ret < 0)
 			return ret;
···
 	if (ret < 0)
 		return ret;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		if (perf_evsel__is_group_leader(evsel) &&
 		    evsel->nr_members > 1) {
 			const char *name = evsel->group_name ?: "{anon_group}";
···
 
 	session = container_of(ph, struct perf_session, header);
 
-	list_for_each_entry(evsel, &session->evlist->entries, node) {
+	evlist__for_each(session->evlist, evsel) {
 		if (perf_evsel__is_group_leader(evsel) &&
 		    evsel->nr_members > 1) {
 			fprintf(fp, "# group: %s{%s", evsel->group_name ?: "",
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		if (evsel->idx == idx)
 			return evsel;
 	}
···
 	session->evlist->nr_groups = nr_groups;
 
 	i = nr = 0;
-	list_for_each_entry(evsel, &session->evlist->entries, node) {
+	evlist__for_each(session->evlist, evsel) {
 		if (evsel->idx == (int) desc[i].leader_idx) {
 			evsel->leader = evsel;
 			/* {anon_group} is a dummy name */
···
 
 	lseek(fd, sizeof(f_header), SEEK_SET);
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(session->evlist, evsel) {
 		evsel->id_offset = lseek(fd, 0, SEEK_CUR);
 		err = do_write(fd, evsel->id, evsel->ids * sizeof(u64));
 		if (err < 0) {
···
 
 	attr_offset = lseek(fd, 0, SEEK_CUR);
 
-	list_for_each_entry(evsel, &evlist->entries, node) {
+	evlist__for_each(evlist, evsel) {
 		f_attr = (struct perf_file_attr){
 			.attr = evsel->attr,
 			.ids  = {
···
 {
 	struct perf_evsel *pos;
 
-	list_for_each_entry(pos, &evlist->entries, node) {
+	evlist__for_each(evlist, pos) {
 		if (pos->attr.type == PERF_TYPE_TRACEPOINT &&
 		    perf_evsel__prepare_tracepoint_event(pos, pevent))
 			return -1;
···
 	struct perf_evsel *evsel;
 	int err = 0;
 
-	list_for_each_entry(evsel, &session->evlist->entries, node) {
+	evlist__for_each(session->evlist, evsel) {
 		err = perf_event__synthesize_attr(tool, &evsel->attr, evsel->ids,
 						  evsel->id, process);
 		if (err) {
tools/perf/util/header.h (+5 -5)
···
 	unsigned long long	total_mem;
 
 	int			nr_cmdline;
-	char			*cmdline;
 	int			nr_sibling_cores;
-	char			*sibling_cores;
 	int			nr_sibling_threads;
-	char			*sibling_threads;
 	int			nr_numa_nodes;
-	char			*numa_nodes;
 	int			nr_pmu_mappings;
-	char			*pmu_mappings;
 	int			nr_groups;
+	char			*cmdline;
+	char			*sibling_cores;
+	char			*sibling_threads;
+	char			*numa_nodes;
+	char			*pmu_mappings;
 };
 
 struct perf_header {
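This is the 'struct perf_session_env' packing mentioned in the log: interleaving 4-byte ints with 8-byte pointers forces alignment padding after each int on LP64, while grouping the ints together lets them pack tightly. A self-contained illustration of the effect (field names hypothetical):

```c
/* Interleaved int/pointer pairs: on LP64 each 4-byte int is padded to 8
 * so the following pointer is aligned, wasting 4 bytes per pair. */
struct interleaved {
	int   nr_a; char *a;
	int   nr_b; char *b;
	int   nr_c; char *c;
};

/* Same fields, grouped by size: the ints pack tightly, fewer holes. */
struct grouped {
	int   nr_a, nr_b, nr_c;
	char *a, *b, *c;
};
```

On a typical LP64 target `sizeof(struct interleaved)` is 48 while `sizeof(struct grouped)` is 40; on ILP32 the two coincide, so the reorder never costs anything.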
tools/perf/util/include/asm/bug.h → tools/include/asm/bug.h (+6 -3)
···
-#ifndef _PERF_ASM_GENERIC_BUG_H
-#define _PERF_ASM_GENERIC_BUG_H
+#ifndef _TOOLS_ASM_BUG_H
+#define _TOOLS_ASM_BUG_H
+
+#include <linux/compiler.h>
 
 #define __WARN_printf(arg...)	do { fprintf(stderr, arg); } while (0)
 
···
 	__warned = 1;			\
 	unlikely(__ret_warn_once);	\
 })
-#endif
+
+#endif /* _TOOLS_ASM_BUG_H */
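The `__warned` flag visible in this header is what makes WARN_ONCE fire only on the first occurrence. The idea can be sketched with a function-scope static (note the real header uses a macro, so each call site gets its own flag; this function version collapses them into one, and the names are hypothetical):

```c
#include <stdio.h>

static int warn_once_counter;	/* exposed only so the behaviour is observable */

/* One-shot warning in the spirit of WARN_ONCE(): a static flag remembers
 * whether we already complained. Returns 1 iff the warning fired. */
static int warn_once(int condition, const char *msg)
{
	static int warned;

	if (condition && !warned) {
		warned = 1;
		warn_once_counter++;
		fprintf(stderr, "WARNING: %s\n", msg);
		return 1;
	}
	return 0;
}
```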
tools/perf/util/include/linux/compiler.h → tools/include/linux/compiler.h (+10 -2)
···
-#ifndef _PERF_LINUX_COMPILER_H_
-#define _PERF_LINUX_COMPILER_H_
+#ifndef _TOOLS_LINUX_COMPILER_H_
+#define _TOOLS_LINUX_COMPILER_H_
 
 #ifndef __always_inline
 # define __always_inline	inline __attribute__((always_inline))
···
 # define __weak		__attribute__((weak))
 #endif
 
+#ifndef likely
+# define likely(x)	__builtin_expect(!!(x), 1)
 #endif
+
+#ifndef unlikely
+# define unlikely(x)	__builtin_expect(!!(x), 0)
+#endif
+
+#endif /* _TOOLS_LINUX_COMPILER_H */
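The likely()/unlikely() definitions added here wrap __builtin_expect; the `!!(x)` normalizes any truthy value to exactly 0 or 1 before handing the expectation to the compiler. The hints steer branch layout but never change the value of the expression, which a small example makes concrete:

```c
/* Same definitions the moved header now provides (guarded by #ifndef there) */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Typical use: tag the error path as cold; the function's result is
 * identical to the unannotated version. */
static int clamp_positive(int v)
{
	if (unlikely(v < 0))	/* expected to be rare */
		return 0;
	return v;
}
```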
tools/perf/util/machine.c (+1)
···
 	machine->pid = pid;
 
 	machine->symbol_filter = NULL;
+	machine->id_hdr_size = 0;
 
 	machine->root_dir = strdup(root_dir);
 	if (machine->root_dir == NULL)
tools/perf/util/parse-events.c (+2 -3)
···
 	if (!add && get_event_modifier(&mod, str, NULL))
 		return -EINVAL;
 
-	list_for_each_entry(evsel, list, node) {
-
+	__evlist__for_each(list, evsel) {
 		if (add && get_event_modifier(&mod, str, evsel))
 			return -EINVAL;
 
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, list, node) {
+	__evlist__for_each(list, evsel) {
 		if (!evsel->name)
 			evsel->name = strdup(name);
 	}
tools/perf/util/pmu.c (+1 -1)
···
 
 /*
  * Setup one of config[12] attr members based on the
- * user input data - temr parameter.
+ * user input data - term parameter.
  */
 static int pmu_config_term(struct list_head *formats,
 			   struct perf_event_attr *attr,
tools/perf/util/probe-event.c (+4 -1)
···
 	return (dso) ? dso->long_name : NULL;
 }
 
+#ifdef HAVE_DWARF_SUPPORT
 /* Copied from unwind.c */
 static Elf_Scn *elf_section_by_name(Elf *elf, GElf_Ehdr *ep,
 				    GElf_Shdr *shp, const char *name)
···
 	elf_end(elf);
 	return ret;
 }
+#endif
 
 static int init_user_exec(void)
 {
···
 
 static int try_to_find_probe_trace_events(struct perf_probe_event *pev,
 					  struct probe_trace_event **tevs __maybe_unused,
-					  int max_tevs __maybe_unused,
-					  const char *target)
+					  int max_tevs __maybe_unused,
+					  const char *target __maybe_unused)
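This is the fix for building 'probe' without the DWARF support libraries: helpers that need libelf/libdw are compiled only behind the feature macro, and the fallback stub's parameters are tagged __maybe_unused so strict builds stay warning-free. The general pattern, with a hypothetical feature macro and function names:

```c
/* __maybe_unused in the spirit of the tools headers */
#ifndef __maybe_unused
# define __maybe_unused __attribute__((unused))
#endif

#ifdef HAVE_FANCY_SUPPORT		/* hypothetical feature-test macro */
static int analyze(const char *target)
{
	return do_fancy_analysis(target);	/* would call into the library */
}
#else
/* Fallback stub: the parameter exists only so call sites are identical
 * in both builds; __maybe_unused silences -Wunused-parameter. */
static int analyze(const char *target __maybe_unused)
{
	return -1;				/* feature not built in */
}
#endif
```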
tools/perf/util/python.c (+2 -1)
···
 	if (i >= pevlist->evlist.nr_entries)
 		return NULL;
 
-	list_for_each_entry(pos, &pevlist->evlist.entries, node)
+	evlist__for_each(&pevlist->evlist, pos) {
 		if (i-- == 0)
 			break;
+	}
 
 	return Py_BuildValue("O", container_of(pos, struct pyrf_evsel, evsel));
 }
tools/perf/util/record.c (+3 -3)
···
 	if (evlist->cpus->map[0] < 0)
 		opts->no_inherit = true;
 
-	list_for_each_entry(evsel, &evlist->entries, node)
+	evlist__for_each(evlist, evsel)
 		perf_evsel__config(evsel, opts);
 
 	if (evlist->nr_entries > 1) {
 		struct perf_evsel *first = perf_evlist__first(evlist);
 
-		list_for_each_entry(evsel, &evlist->entries, node) {
+		evlist__for_each(evlist, evsel) {
 			if (evsel->attr.sample_type == first->attr.sample_type)
 				continue;
 			use_sample_identifier = perf_can_sample_identifier();
 			break;
 		}
-		list_for_each_entry(evsel, &evlist->entries, node)
+		evlist__for_each(evlist, evsel)
 			perf_evsel__set_sample_id(evsel, use_sample_identifier);
 	}
 
tools/perf/util/session.c (+3 -3)
···
 {
 	struct perf_evsel *evsel;
 
-	list_for_each_entry(evsel, &session->evlist->entries, node) {
+	evlist__for_each(session->evlist, evsel) {
 		if (evsel->attr.type == PERF_TYPE_TRACEPOINT)
 			return true;
 	}
···
 
 	ret += events_stats__fprintf(&session->stats, fp);
 
-	list_for_each_entry(pos, &session->evlist->entries, node) {
+	evlist__for_each(session->evlist, pos) {
 		ret += fprintf(fp, "%s stats:\n", perf_evsel__name(pos));
 		ret += events_stats__fprintf(&pos->hists.stats, fp);
 	}
···
 {
 	struct perf_evsel *pos;
 
-	list_for_each_entry(pos, &session->evlist->entries, node) {
+	evlist__for_each(session->evlist, pos) {
 		if (pos->attr.type == type)
 			return pos;
 	}
tools/perf/util/unwind.c (+4 -4)
···
 	/* Check the .debug_frame section for unwinding info */
 	if (!read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
 		memset(&di, 0, sizeof(di));
-		dwarf_find_debug_frame(0, &di, ip, 0, map->dso->name,
-				       map->start, map->end);
-		return dwarf_search_unwind_table(as, ip, &di, pi,
-						 need_unwind_info, arg);
+		if (dwarf_find_debug_frame(0, &di, ip, 0, map->dso->name,
+					   map->start, map->end))
+			return dwarf_search_unwind_table(as, ip, &di, pi,
+							 need_unwind_info, arg);
 	}
 #endif
 