perf-lock(1)
============

NAME
----
perf-lock - Analyze lock events

SYNOPSIS
--------
[verse]
'perf lock' {record|report|script|info|contention}

DESCRIPTION
-----------
You can analyze various lock behaviours
and statistics with this 'perf lock' command.

  'perf lock record <command>' records lock events
  between the start and end of <command>, and produces
  the file "perf.data", which contains the tracing
  results of the lock events.

  'perf lock report' reports statistical data.

  'perf lock script' shows raw lock events.

  'perf lock info' shows metadata like threads or addresses
  of lock instances.

  'perf lock contention' shows contention statistics.

COMMON OPTIONS
--------------

-i::
--input=<file>::
        Input file name. (default: perf.data unless stdin is a fifo)

--output=<file>::
        Output file name for perf lock contention and report.

-v::
--verbose::
        Be more verbose (show symbol address, etc).

-q::
--quiet::
        Do not show any warnings or messages. (Suppress -v)

-D::
--dump-raw-trace::
        Dump raw trace in ASCII.

-f::
--force::
        Don't complain, do it.

--vmlinux=<file>::
        vmlinux pathname

--kallsyms=<file>::
        kallsyms pathname


REPORT OPTIONS
--------------

-k::
--key=<value>::
        Sorting key. Possible values: acquired (default), contended,
        avg_wait, wait_total, wait_max, wait_min.

-F::
--field=<value>::
        Output fields. By default it shows all the fields but users can
        customize that using this. Possible values: acquired, contended,
        avg_wait, wait_total, wait_max, wait_min.

-c::
--combine-locks::
        Merge lock instances in the same class (based on name).

-t::
--threads::
        Show per-thread lock statistics like below:

          $ perf lock report -t -F acquired,contended,avg_wait

                        Name   acquired   contended   avg wait (ns)

                        perf     240569           9            5784
                     swapper     106610          19             543
                      :15789      17370           2           14538
                ContainerMgr       8981           6             874
                       sleep       5275           1           11281
             ContainerThread       4416           4             944
             RootPressureThr       3215           5            1215
                 rcu_preempt       2954           0               0
                ContainerMgr       2560           0               0
                     unnamed       1873           0               0
             EventManager_De       1845           1             636
             futex-default-S       1609           0               0

-E::
--entries=<value>::
        Display this many entries.


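The effect of -k/--key can also be reproduced when post-processing report
output. A minimal Python sketch, using hypothetical rows rather than real
perf output (the field names mirror the sort keys above):

```python
# Illustrative only: sort per-thread stats by a chosen key, the way
# 'perf lock report -k <key>' orders its output. Rows are made-up samples.
rows = [
    {"name": "perf",    "acquired": 240569, "contended": 9,  "avg_wait": 5784},
    {"name": "swapper", "acquired": 106610, "contended": 19, "avg_wait": 543},
    {"name": "sleep",   "acquired": 5275,   "contended": 1,  "avg_wait": 11281},
]

def sort_by_key(rows, key="acquired"):
    # perf lock report lists entries in descending order of the sort key
    return sorted(rows, key=lambda r: r[key], reverse=True)

print(sort_by_key(rows, "avg_wait")[0]["name"])  # sleep (largest avg_wait)
```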
INFO OPTIONS
------------

-t::
--threads::
        dump only the thread list in perf.data

-m::
--map::
        dump only the map of lock instances (address:name table)


CONTENTION OPTIONS
------------------

-k::
--key=<value>::
        Sorting key. Possible values: contended, wait_total (default),
        wait_max, wait_min, avg_wait.

-F::
--field=<value>::
        Output fields. By default it shows all but the wait_min fields
        and users can customize that using this. Possible values:
        contended, wait_total, wait_max, wait_min, avg_wait.

-t::
--threads::
        Show per-thread lock contention statistics.

-b::
--use-bpf::
        Use a BPF program to collect lock contention stats instead of
        using the input data.

-a::
--all-cpus::
        System-wide collection from all CPUs.

-C::
--cpu=<value>::
        Collect samples only on the list of CPUs provided. Multiple CPUs can be
        provided as a comma-separated list with no space: 0,1. Ranges of CPUs
        are specified with -: 0-2. Default is to monitor all CPUs.

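The -C list syntax can be sketched as a small parser. This is an
illustration of the accepted format only, not perf's own parsing code:

```python
# Illustrative parser for the -C/--cpu syntax: comma-separated CPUs ("0,1")
# and ranges with a dash ("0-2"). Not perf's implementation.
def parse_cpu_list(spec):
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return sorted(cpus)

print(parse_cpu_list("0-2,5"))  # [0, 1, 2, 5]
```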
-p::
--pid=<value>::
        Record events on existing process ID (comma separated list).

--tid=<value>::
        Record events on existing thread ID (comma separated list).

-M::
--map-nr-entries=<value>::
        Maximum number of BPF map entries (default: 16384).
        This will be aligned to a power of 2.

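The power-of-2 alignment can be illustrated with a short sketch; exactly
how perf rounds internally is an assumption here (this version rounds up):

```python
# Illustrative sketch of aligning -M/--map-nr-entries to a power of 2
# by rounding up. How perf aligns internally is assumed, not quoted.
def round_up_pow2(n):
    p = 1
    while p < n:
        p <<= 1
    return p

print(round_up_pow2(16384))  # 16384, already a power of 2
print(round_up_pow2(20000))  # 32768
```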
--max-stack=<value>::
        Maximum stack depth when collecting lock contention (default: 8).

--stack-skip=<value>::
        Number of stack entries to skip when finding a lock caller (default: 3).

-E::
--entries=<value>::
        Display this many entries.

-l::
--lock-addr::
        Show lock contention statistics by address.

-o::
--lock-owner::
        Show lock contention statistics by owner. This option can be combined
        with -t, which shows the owner's per-thread lock stats, or -v, which
        shows the owner's stacktrace. Requires --use-bpf.

-Y::
--type-filter=<value>::
        Show lock contention only for given lock types (comma separated list).
        Available values are:
        semaphore, spinlock, rwlock, rwlock:R, rwlock:W, rwsem, rwsem:R, rwsem:W,
        rtmutex, rwlock-rt, rwlock-rt:R, rwlock-rt:W, percpu-rwmem, pcpu-sem,
        pcpu-sem:R, pcpu-sem:W, mutex

        Note that the RW variants of locks have :R and :W suffixes. Names
        without a suffix are shortcuts for both variants, e.g. rwsem =
        rwsem:R + rwsem:W.

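The shortcut rule above can be sketched as a filter expansion. Both the
helper and the set of RW-variant names here are illustrative, not taken
from perf's code:

```python
# Illustrative sketch: expand -Y shortcut names so that a bare RW type
# covers both its :R and :W variants (e.g. rwsem = rwsem:R + rwsem:W).
RW_TYPES = {"rwlock", "rwsem", "rwlock-rt", "pcpu-sem"}  # assumed subset

def expand_type_filter(spec):
    types = set()
    for name in spec.split(","):
        if name in RW_TYPES:
            types.update({name + ":R", name + ":W"})
        else:
            types.add(name)
    return sorted(types)

print(expand_type_filter("rwsem,mutex"))  # ['mutex', 'rwsem:R', 'rwsem:W']
```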
-L::
--lock-filter=<value>::
        Show lock contention only for given lock addresses or names (comma separated list).

-S::
--callstack-filter=<value>::
        Show lock contention only if the callstack contains the given string.
        Note that it matches a substring, so 'rq' would match both
        'raw_spin_rq_lock' and 'irq_enter_rcu'.

-x::
--field-separator=<SEP>::
        Show results using a CSV-style output to make it easy to import directly
        into spreadsheets. Columns are separated by the string specified in SEP.

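Output produced with -x is straightforward to consume by splitting each
line on SEP. A minimal sketch; the sample line is made up, not captured
perf output:

```python
# Illustrative consumer of 'perf lock contention -x <SEP>' style output:
# split each line on the separator string. The sample line is hypothetical.
def split_fields(line, sep=","):
    return [f.strip() for f in line.split(sep)]

line = "12, 4.56 ms, 0.78 ms, spinlock, rq_lock"
print(split_fields(line))  # ['12', '4.56 ms', '0.78 ms', 'spinlock', 'rq_lock']
```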
--lock-cgroup::
        Show lock contention stat by cgroup. Requires --use-bpf.

-G::
--cgroup-filter=<value>::
        Show lock contention only in the given cgroups (comma separated list).

-J::
--inject-delay=<time@function>::
        Add delays to the given lock. The delay is added at the contention-end
        part so that the (new) owner of the lock will be delayed. But by
        slowing down the owner, the waiters will also be delayed. This works
        only with -b/--use-bpf.

        The 'time' is specified in nsec but it can have a unit suffix.
        Available units are "ms", "us" and "ns". Currently it accepts up to
        10ms of delay for safety reasons.

        Note that it will busy-wait after it gets the lock. Delaying locks can
        have significant consequences including potential kernel crashes.
        Please use it at your own risk.


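The 'time' format taken by -J can be sketched as follows. The parsing
details are an assumption for illustration, not perf's own code; only the
units (ns/us/ms), the nsec default, and the 10ms cap come from the text
above:

```python
# Illustrative parser for the -J/--inject-delay time value: a number of
# nsec, optionally suffixed with "ms", "us" or "ns", capped at 10ms.
def parse_delay_ns(spec):
    units = {"ms": 1_000_000, "us": 1_000, "ns": 1}
    for suffix, mult in units.items():
        if spec.endswith(suffix):
            value = int(spec[:-len(suffix)]) * mult
            break
    else:
        value = int(spec)  # bare numbers are nsec
    if value > 10_000_000:  # at most 10ms is accepted for safety
        raise ValueError("delay too large")
    return value

print(parse_delay_ns("2us"))  # 2000
```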
SEE ALSO
--------
linkperf:perf[1]