Linux kernel mirror: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

perf stat: Basic support for TopDown in perf stat

Add basic plumbing for TopDown in perf stat

TopDown is intended to replace the frontend cycles idle / backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out-of-order effects.

This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can all be done with 5 counters
(one of them a fixed counter).

The result is four metrics:

FrontendBound, BackendBound, BadSpeculation, Retiring

that describe the CPU pipeline behavior on a high level.

The full TopDown methodology has many hierarchical metrics. This
implementation only supports level 1, which can be collected without
multiplexing. A full implementation of TopDown on top of perf is
available in pmu-tools' toplev (http://github.com/andikleen/pmu-tools).

The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.

TopDown level 1 uses a set of abstracted metrics which are generic to
out-of-order CPU cores (although some CPUs may not implement all of
them):

topdown-total-slots       Available slots in the pipeline
topdown-slots-issued      Slots issued into the pipeline
topdown-slots-retired     Slots successfully retired
topdown-fetch-bubbles     Pipeline gaps in the frontend
topdown-recovery-bubbles  Pipeline gaps during recovery
                          from misspeculation

These raw metrics then allow computing four useful derived metrics:

FrontendBound, BackendBound, Retiring, BadSpeculation.
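
The actual computation lands in follow-on patches; as a rough sketch,
the standard TopDown level 1 formulas derive the four metrics from the
five raw events above as fractions of total pipeline slots. The struct
and function names below are illustrative, not from perf:

```c
#include <assert.h>

/* Sketch of the standard TopDown level 1 breakdown. All results are
 * fractions of total pipeline slots, so they sum to 1.0. */
struct topdown_l1 {
	double frontend_bound, bad_speculation, retiring, backend_bound;
};

static struct topdown_l1 topdown_level1(double total_slots,
					double slots_issued,
					double slots_retired,
					double fetch_bubbles,
					double recovery_bubbles)
{
	struct topdown_l1 m;

	/* Slots the frontend failed to fill with work. */
	m.frontend_bound  = fetch_bubbles / total_slots;
	/* Issued but not retired (wasted by misspeculation), plus
	 * slots lost while recovering from it. */
	m.bad_speculation = (slots_issued - slots_retired +
			     recovery_bubbles) / total_slots;
	/* Slots that retired useful work. */
	m.retiring        = slots_retired / total_slots;
	/* Whatever is left over stalled in the backend. */
	m.backend_bound   = 1.0 - m.frontend_bound -
			    m.bad_speculation - m.retiring;
	return m;
}
```

For example, 100 total slots with 60 issued, 50 retired, 20 fetch
bubbles and 10 recovery bubbles give 20% frontend bound, 20% bad
speculation, 50% retiring and 10% backend bound.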

Add a new --topdown option to enable the events. When --topdown is
specified, set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all event names containing a '-'.

The actual code to compute the metrics is in follow-on patches.

v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Authored by Andi Kleen, committed by Arnaldo Carvalho de Melo
44b1e60a 17a2634b

+184 -3
+32
tools/perf/Documentation/perf-stat.txt
···
 --no-aggr::
 Do not aggregate counts across all monitored CPUs.

+--topdown::
+Print top down level 1 metrics if supported by the CPU. This allows to
+determine bottle necks in the CPU pipeline for CPU bound workloads,
+by breaking the cycles consumed down into frontend bound, backend bound,
+bad speculation and retiring.
+
+Frontend bound means that the CPU cannot fetch and decode instructions fast
+enough. Backend bound means that computation or memory access is the bottle
+neck. Bad Speculation means that the CPU wasted cycles due to branch
+mispredictions and similar issues. Retiring means that the CPU computed without
+an apparently bottleneck. The bottleneck is only the real bottleneck
+if the workload is actually bound by the CPU and not by something else.
+
+For best results it is usually a good idea to use it with interval
+mode like -I 1000, as the bottleneck of workloads can change often.
+
+The top down metrics are collected per core instead of per
+CPU thread. Per core mode is automatically enabled
+and -a (global monitoring) is needed, requiring root rights or
+perf.perf_event_paranoid=-1.
+
+Topdown uses the full Performance Monitoring Unit, and needs
+disabling of the NMI watchdog (as root):
+echo 0 > /proc/sys/kernel/nmi_watchdog
+for best results. Otherwise the bottlenecks may be inconsistent
+on workload with changing phases.
+
+This enables --metric-only, unless overriden with --no-metric-only.
+
+To interpret the results it is usually needed to know on which
+CPUs the workload runs on. If needed the CPUs can be forced using
+taskset.

 EXAMPLES
 --------
+1
tools/perf/arch/x86/util/Build
···
 libperf-y += pmu.o
 libperf-y += kvm-stat.o
 libperf-y += perf_regs.o
+libperf-y += group.o

 libperf-$(CONFIG_DWARF) += dwarf-regs.o
 libperf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
+27
tools/perf/arch/x86/util/group.c
+#include <stdio.h>
+#include "api/fs/fs.h"
+#include "util/group.h"
+
+/*
+ * Check whether we can use a group for top down.
+ * Without a group may get bad results due to multiplexing.
+ */
+bool arch_topdown_check_group(bool *warn)
+{
+	int n;
+
+	if (sysctl__read_int("kernel/nmi_watchdog", &n) < 0)
+		return false;
+	if (n > 0) {
+		*warn = true;
+		return false;
+	}
+	return true;
+}
+
+void arch_topdown_group_warn(void)
+{
+	fprintf(stderr,
+		"nmi_watchdog enabled with topdown. May give wrong results.\n"
+		"Disable with echo 0 > /proc/sys/kernel/nmi_watchdog\n");
+}
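sysctl__read_int() is a perf-internal helper; outside the perf tree the
same check can be sketched with plain stdio. The function names below
(read_sysctl_int, topdown_can_group) and the path parameter are
hypothetical stand-ins, not perf APIs:

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical standalone equivalent of sysctl__read_int():
 * parse one integer from a /proc-style file. */
static int read_sysctl_int(const char *path, int *val)
{
	FILE *f = fopen(path, "r");
	int ok;

	if (!f)
		return -1;
	ok = (fscanf(f, "%d", val) == 1) ? 0 : -1;
	fclose(f);
	return ok;
}

/* Mirror of the patch's arch_topdown_check_group() logic: grouping
 * is only safe when the NMI watchdog is off, because the watchdog
 * occupies a counter and would break group scheduling. */
static bool topdown_can_group(const char *nmi_path, bool *warn)
{
	int n;

	*warn = false;
	if (read_sysctl_int(nmi_path, &n) < 0)
		return false;	/* unknown state: play it safe */
	if (n > 0) {
		*warn = true;	/* watchdog steals a counter */
		return false;
	}
	return true;
}
```

On a real system the path would be "/proc/sys/kernel/nmi_watchdog";
taking it as a parameter just makes the sketch testable.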
+116 -3
tools/perf/builtin-stat.c
···
 #include "util/thread.h"
 #include "util/thread_map.h"
 #include "util/counts.h"
+#include "util/group.h"
 #include "util/session.h"
 #include "util/tool.h"
+#include "util/group.h"
 #include "asm/bug.h"

+#include <api/fs/fs.h>
 #include <stdlib.h>
 #include <sys/prctl.h>
 #include <locale.h>
···
 	"}"
 };

+static const char * topdown_attrs[] = {
+	"topdown-total-slots",
+	"topdown-slots-retired",
+	"topdown-recovery-bubbles",
+	"topdown-fetch-bubbles",
+	"topdown-slots-issued",
+	NULL,
+};
+
 static struct perf_evlist *evsel_list;

 static struct target target = {
···
 static bool null_run = false;
 static int detailed_run = 0;
 static bool transaction_run;
+static bool topdown_run = false;
 static bool big_num = true;
 static int big_num_opt = -1;
 static const char *csv_sep = NULL;
···
 static unsigned int unit_width = 4; /* strlen("unit") */
 static bool forever = false;
 static bool metric_only = false;
+static bool force_metric_only = false;
 static struct timespec ref_time;
 static struct cpu_map *aggr_map;
 static aggr_get_id_t aggr_get_id;
···
 	return 0;
 }

+static int enable_metric_only(const struct option *opt __maybe_unused,
+			      const char *s __maybe_unused, int unset)
+{
+	force_metric_only = true;
+	metric_only = !unset;
+	return 0;
+}
+
 static const struct option stat_options[] = {
 	OPT_BOOLEAN('T', "transaction", &transaction_run,
 		    "hardware transaction statistics"),
···
 		     "aggregate counts per thread", AGGR_THREAD),
 	OPT_UINTEGER('D', "delay", &initial_delay,
 		     "ms to wait before starting measurement after program start"),
-	OPT_BOOLEAN(0, "metric-only", &metric_only,
-		    "Only print computed metrics. No raw values"),
+	OPT_CALLBACK_NOOPT(0, "metric-only", &metric_only, NULL,
+			   "Only print computed metrics. No raw values", enable_metric_only),
+	OPT_BOOLEAN(0, "topdown", &topdown_run,
+		    "measure topdown level 1 statistics"),
 	OPT_END()
 };
···
 	return 0;
 }

+static int topdown_filter_events(const char **attr, char **str, bool use_group)
+{
+	int off = 0;
+	int i;
+	int len = 0;
+	char *s;
+
+	for (i = 0; attr[i]; i++) {
+		if (pmu_have_event("cpu", attr[i])) {
+			len += strlen(attr[i]) + 1;
+			attr[i - off] = attr[i];
+		} else
+			off++;
+	}
+	attr[i - off] = NULL;
+
+	*str = malloc(len + 1 + 2);
+	if (!*str)
+		return -1;
+	s = *str;
+	if (i - off == 0) {
+		*s = 0;
+		return 0;
+	}
+	if (use_group)
+		*s++ = '{';
+	for (i = 0; attr[i]; i++) {
+		strcpy(s, attr[i]);
+		s += strlen(s);
+		*s++ = ',';
+	}
+	if (use_group) {
+		s[-1] = '}';
+		*s = 0;
+	} else
+		s[-1] = 0;
+	return 0;
+}
+
+__weak bool arch_topdown_check_group(bool *warn)
+{
+	*warn = false;
+	return false;
+}
+
+__weak void arch_topdown_group_warn(void)
+{
+}
+
 /*
  * Add default attributes, if there were no attributes specified or
  * if -d/--detailed, -d -d or -d -d -d is used:
  */
 static int add_default_attributes(void)
 {
+	int err;
 	struct perf_event_attr default_attrs0[] = {

 	{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
···
 		return 0;

 	if (transaction_run) {
-		int err;
 		if (pmu_have_event("cpu", "cycles-ct") &&
 		    pmu_have_event("cpu", "el-start"))
 			err = parse_events(evsel_list, transaction_attrs, NULL);
···
 			return -1;
 		}
 		return 0;
+	}
+
+	if (topdown_run) {
+		char *str = NULL;
+		bool warn = false;
+
+		if (stat_config.aggr_mode != AGGR_GLOBAL &&
+		    stat_config.aggr_mode != AGGR_CORE) {
+			pr_err("top down event configuration requires --per-core mode\n");
+			return -1;
+		}
+		stat_config.aggr_mode = AGGR_CORE;
+		if (nr_cgroups || !target__has_cpu(&target)) {
+			pr_err("top down event configuration requires system-wide mode (-a)\n");
+			return -1;
+		}
+
+		if (!force_metric_only)
+			metric_only = true;
+		if (topdown_filter_events(topdown_attrs, &str,
+				arch_topdown_check_group(&warn)) < 0) {
+			pr_err("Out of memory\n");
+			return -1;
+		}
+		if (topdown_attrs[0] && str) {
+			if (warn)
+				arch_topdown_group_warn();
+			err = parse_events(evsel_list, str, NULL);
+			if (err) {
+				fprintf(stderr,
+					"Cannot set up top down events %s: %d\n",
+					str, err);
+				free(str);
+				return -1;
+			}
+		} else {
+			fprintf(stderr, "System does not support topdown\n");
+			return -1;
+		}
+		free(str);
 	}

 	if (!evsel_list->nr_entries) {
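The string built by topdown_filter_events() is handed straight to
parse_events(): supported event names are joined with commas and, when
the NMI watchdog check permits grouping, wrapped in {} so perf schedules
them as one group without multiplexing. A self-contained sketch of that
string building (have_event and build_topdown_string are hypothetical
stand-ins for pmu_have_event() and the patch's function):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for pmu_have_event(): pretend every
 * event name is supported. */
static bool have_event(const char *name)
{
	(void)name;
	return true;
}

/* Simplified version of the patch's topdown_filter_events():
 * compact the supported names to the front of attr[], then join
 * them into one comma-separated string, wrapped in {...} when the
 * events should form a scheduling group. */
static int build_topdown_string(const char **attr, char **str, bool use_group)
{
	int off = 0, i, len = 0;
	char *s;

	for (i = 0; attr[i]; i++) {
		if (have_event(attr[i])) {
			len += strlen(attr[i]) + 1;	/* name + ',' */
			attr[i - off] = attr[i];
		} else
			off++;
	}
	attr[i - off] = NULL;

	*str = malloc(len + 1 + 2);			/* + NUL + "{}" */
	if (!*str)
		return -1;
	s = *str;
	if (i - off == 0) {
		*s = 0;
		return 0;
	}
	if (use_group)
		*s++ = '{';
	for (i = 0; attr[i]; i++) {
		strcpy(s, attr[i]);
		s += strlen(s);
		*s++ = ',';			/* overwritten for last name */
	}
	if (use_group) {
		s[-1] = '}';
		*s = 0;
	} else
		s[-1] = 0;
	return 0;
}
```

With grouping enabled and two supported events this yields a string of
the form "{topdown-total-slots,topdown-slots-retired}".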
+7
tools/perf/util/group.h
+#ifndef GROUP_H
+#define GROUP_H 1
+
+bool arch_topdown_check_group(bool *warn);
+void arch_topdown_group_warn(void);
+
+#endif
+1
tools/perf/util/parse-events.l
···
 cycles-t	{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
 mem-loads	{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
 mem-stores	{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
+topdown-[a-z-]+	{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }

 L1-dcache|l1-d|l1d|L1-data |
 L1-icache|l1-i|l1i|L1-instruction |