.. SPDX-License-Identifier: GPL-2.0

==============================
Using RCU's CPU Stall Detector
==============================

This document first discusses what sorts of issues RCU's CPU stall
detector can locate, and then discusses kernel parameters and Kconfig
options that can be used to fine-tune the detector's operation. Finally,
this document explains the stall detector's "splat" format.


What Causes RCU CPU Stall Warnings?
===================================

So your kernel printed an RCU CPU stall warning. The next question is
"What caused it?" The following problems can result in RCU CPU stall
warnings:

- A CPU looping in an RCU read-side critical section.

- A CPU looping with interrupts disabled.

- A CPU looping with preemption disabled.

- A CPU looping with bottom halves disabled.

- For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the
  kernel without potentially invoking schedule(). If the looping
  in the kernel is really expected and desirable behavior, you
  might need to add some calls to cond_resched(), as shown in the
  sketch following this list.

- Booting Linux using a console connection that is too slow to
  keep up with the boot-time console-message rate. For example,
  a 115Kbaud serial console can be *way* too slow to keep up
  with boot-time message rates, and will frequently result in
  RCU CPU stall warning messages. Especially if you have added
  debug printk()s.

- Anything that prevents RCU's grace-period kthreads from running.
  This can result in the "All QSes seen" console-log message.
  This message will include information on when the kthread last
  ran and how often it should be expected to run. It can also
  result in the ``rcu_.*kthread starved for`` console-log message,
  which will include additional debugging information.

- A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
  happen to preempt a low-priority task in the middle of an RCU
  read-side critical section. This is especially damaging if
  that low-priority task is not permitted to run on any other CPU,
  in which case the next RCU grace period can never complete, which
  will eventually cause the system to run out of memory and hang.
  While the system is in the process of running itself out of
  memory, you might see stall-warning messages.

- A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
  is running at a higher priority than the RCU softirq threads.
  This will prevent RCU callbacks from ever being invoked,
  and in a CONFIG_PREEMPT_RCU kernel will further prevent
  RCU grace periods from ever completing. Either way, the
  system will eventually run out of memory and hang. In the
  CONFIG_PREEMPT_RCU case, you might see stall-warning
  messages.

  You can use the rcutree.kthread_prio kernel boot parameter to
  increase the scheduling priority of RCU's kthreads, which can
  help avoid this problem. However, please note that doing this
  can increase your system's context-switch rate and thus degrade
  performance.

- A periodic interrupt whose handler takes longer than the time
  interval between successive pairs of interrupts. This can
  prevent RCU's kthreads and softirq handlers from running.
  Note that certain high-overhead debugging options, for example
  the function_graph tracer, can result in interrupt handlers taking
  considerably longer than normal, which can in turn result in
  RCU CPU stall warnings.

- Testing a workload on a fast system, tuning the stall-warning
  timeout down to just barely avoid RCU CPU stall warnings, and then
  running the same workload with the same stall-warning timeout on a
  slow system. Note that thermal throttling and on-demand governors
  can cause a single system to be sometimes fast and sometimes slow!

- A hardware or software issue shuts off the scheduler-clock
  interrupt on a CPU that is not in dyntick-idle mode. This
  problem really has happened, and seems to be most likely to
  result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.

- A hardware or software issue that prevents time-based wakeups
  from occurring. These issues can range from misconfigured or
  buggy timer hardware through bugs in the interrupt or exception
  path (whether hardware, firmware, or software) through bugs
  in Linux's timer subsystem through bugs in the scheduler, and,
  yes, even including bugs in RCU itself. It can also result in
  the ``rcu_.*timer wakeup didn't happen for`` console-log message,
  which will include additional debugging information.

- A timer issue causes time to appear to jump forward, so that RCU
  believes that the RCU CPU stall-warning timeout has been exceeded
  when in fact much less time has passed. This could be due to
  timer hardware bugs, timer driver bugs, or even corruption of
  the "jiffies" global variable. These sorts of timer hardware
  and driver bugs are not uncommon when testing new hardware.

- A low-level kernel issue that either fails to invoke one of the
  variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
  ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one
  hand, or that invokes one of them too many times on the other.
  Historically, the most frequent issue has been an omission
  of either irq_enter() or irq_exit(), which in turn invoke
  ct_irq_enter() or ct_irq_exit(), respectively. Building your
  kernel with CONFIG_RCU_EQS_DEBUG=y can help track down these types
  of issues, which sometimes arise in architecture-specific code.

- A bug in the RCU implementation.

- A hardware failure. This is quite unlikely on any single system,
  but is not at all uncommon in large datacenters. In one memorable
  case some decades back, a CPU failed in a running system, becoming
  unresponsive, but not causing an immediate crash. This resulted in
  a series of RCU CPU stall warnings, eventually leading to the
  realization that the CPU had failed.

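As a concrete illustration of the cond_resched() advice above, here is a
minimal sketch (not taken from the kernel source) of a long-running loop
in kernel context on a !CONFIG_PREEMPTION kernel. The struct item and
process_one_item() names are hypothetical placeholders for whatever
per-iteration work is actually being done::

    #include <linux/list.h>
    #include <linux/sched.h>

    /* Hypothetical per-item structure, for illustration only. */
    struct item {
        struct list_head list;
        /* ... payload ... */
    };

    void process_one_item(struct item *ip); /* hypothetical helper, defined elsewhere */

    static void process_all_items(struct list_head *head)
    {
        struct item *ip;

        list_for_each_entry(ip, head, list) {
            process_one_item(ip);

            /*
             * Without this call, a !CONFIG_PREEMPTION kernel could
             * spin here past the stall-warning timeout on a long
             * enough list, producing an RCU CPU stall warning.
             */
            cond_resched();
        }
    }
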
The RCU, RCU-sched, RCU-tasks, and RCU-tasks-trace implementations have
CPU stall warnings. Note that SRCU does *not* have CPU stall warnings.
Please note that RCU only detects CPU stalls when there is a grace period
in progress. No grace period, no CPU stall warnings.

To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
If you have a series of stall warnings from a single extended stall,
comparing the stack traces can often help determine where the stall
is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.

RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
and with RCU's event tracing. For information on RCU's event tracing,
see include/trace/events/rcu.h.

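If you want to capture RCU's event tracing while reproducing a stall,
one way is to enable the rcu trace events through tracefs. The sketch
below is illustrative only; it assumes that tracefs is mounted at the
usual /sys/kernel/tracing location and that the program runs as root::

    #include <stdio.h>
    #include <stdlib.h>

    /* Write a single value to a tracefs (or sysfs) control file. */
    static void write_file(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            exit(EXIT_FAILURE);
        }
        fprintf(f, "%s\n", val);
        fclose(f);
    }

    int main(void)
    {
        write_file("/sys/kernel/tracing/events/rcu/enable", "1");
        write_file("/sys/kernel/tracing/tracing_on", "1");
        return 0;
    }

The collected events can then be read from /sys/kernel/tracing/trace
after the stall has been reproduced.
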

Fine-Tuning the RCU CPU Stall Detector
======================================

The rcupdate.rcu_cpu_stall_suppress module parameter disables RCU's
CPU stall detector, which detects conditions that unduly delay RCU grace
periods. This module parameter enables CPU stall detection by default,
but may be overridden via boot-time parameter or at runtime via sysfs.
The stall detector's idea of what constitutes "unduly delayed" is
controlled by a set of kernel configuration variables and cpp macros:

CONFIG_RCU_CPU_STALL_TIMEOUT
----------------------------

    This kernel configuration parameter defines the period of time
    that RCU will wait from the beginning of a grace period until it
    issues an RCU CPU stall warning. This time period is normally
    21 seconds.

    This configuration parameter may be changed at runtime via the
    /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout, however
    this parameter is checked only at the beginning of a cycle.
    So if you are 10 seconds into a 40-second stall, setting this
    sysfs parameter to (say) five will shorten the timeout for the
    *next* stall, or the following warning for the current stall
    (assuming the stall lasts long enough). It will not affect the
    timing of the next warning for the current stall.

    Stall-warning messages may be enabled and disabled completely via
    /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

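    For example, the following user-space sketch writes a new value to
    the sysfs file named above. This is illustrative only and assumes
    root privileges; as noted, the new value is picked up only at the
    beginning of a subsequent stall-warning cycle::

        #include <stdio.h>

        int main(void)
        {
            /* Raise the normal-grace-period stall timeout to 60 seconds. */
            FILE *f = fopen("/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout", "w");

            if (!f) {
                perror("rcu_cpu_stall_timeout");
                return 1;
            }
            fprintf(f, "%d\n", 60);
            fclose(f);
            return 0;
        }
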
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT
--------------------------------

    Same as the CONFIG_RCU_CPU_STALL_TIMEOUT parameter but only for
    the expedited grace period. This parameter defines the period
    of time that RCU will wait from the beginning of an expedited
    grace period until it issues an RCU CPU stall warning. This time
    period is normally 20 milliseconds on Android devices. A zero
    value causes the CONFIG_RCU_CPU_STALL_TIMEOUT value to be used,
    after conversion to milliseconds.

    This configuration parameter may be changed at runtime via the
    /sys/module/rcupdate/parameters/rcu_exp_cpu_stall_timeout, however
    this parameter is checked only at the beginning of a cycle. If you
    are in a current stall cycle, setting it to a new value will change
    the timeout for the *next* stall.

    Stall-warning messages may be enabled and disabled completely via
    /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.

RCU_STALL_DELAY_DELTA
---------------------

    Although the lockdep facility is extremely useful, it does add
    some overhead. Therefore, under CONFIG_PROVE_RCU, the
    RCU_STALL_DELAY_DELTA macro allows five extra seconds before
    giving an RCU CPU stall warning message. (This is a cpp
    macro, not a kernel configuration parameter.)

RCU_STALL_RAT_DELAY
-------------------

    The CPU stall detector tries to make the offending CPU print its
    own warnings, as this often gives better-quality stack traces.
    However, if the offending CPU does not detect its own stall in
    the number of jiffies specified by RCU_STALL_RAT_DELAY, then
    some other CPU will complain. This delay is normally set to
    two jiffies. (This is a cpp macro, not a kernel configuration
    parameter.)

rcupdate.rcu_task_stall_timeout
-------------------------------

    This boot/sysfs parameter controls the RCU-tasks and
    RCU-tasks-trace stall warning intervals. A value of zero or less
    suppresses RCU-tasks stall warnings. A positive value sets the
    stall-warning interval in seconds. An RCU-tasks stall warning
    starts with the line:

        INFO: rcu_tasks detected stalls on tasks:

    And continues with the output of sched_show_task() for each
    task stalling the current RCU-tasks grace period.

    An RCU-tasks-trace stall warning starts (and continues) similarly:

        INFO: rcu_tasks_trace detected stalls on tasks

Interpreting RCU's CPU Stall-Detector "Splats"
==============================================

For non-RCU-tasks flavors of RCU, when a CPU detects that some other
CPU is stalling, it will print a message similar to the following::

    INFO: rcu_sched detected stalls on CPUs/tasks:
    2-...: (3 GPs behind) idle=06c/0/0 softirq=1453/1455 fqs=0
    16-...: (0 ticks this GP) idle=81c/0/0 softirq=764/764 fqs=0
    (detected by 32, t=2603 jiffies, g=7075, q=625)

This message indicates that CPU 32 detected that CPUs 2 and 16 were both
causing stalls, and that the stall was affecting RCU-sched. This message
will normally be followed by stack dumps for each CPU. Please note that
PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
the tasks will be indicated by PID, for example, "P3421". It is even
possible for an rcu_state stall to be caused by both CPUs *and* tasks,
in which case the offending CPUs and tasks will all be called out in
the list. In some cases, CPUs will detect themselves stalling, which
will result in a self-detected stall.

CPU 2's "(3 GPs behind)" indicates that this CPU has not interacted with
the RCU core for the past three grace periods. In contrast, CPU 16's "(0
ticks this GP)" indicates that this CPU has not taken any scheduling-clock
interrupts during the current stalled grace period.

The "idle=" portion of the message prints the dyntick-idle state.
The hex number before the first "/" is the low-order 16 bits of the
dynticks counter, which will have an even-numbered value if the CPU
is in dyntick-idle mode and an odd-numbered value otherwise. The hex
number between the two "/"s is the value of the nesting, which will be
a small non-negative number if in the idle loop (as shown above) and a
very large positive number otherwise. The number following the final
"/" is the NMI nesting, which will be a small non-negative number.

The "softirq=" portion of the message tracks the number of RCU softirq
handlers that the stalled CPU has executed. The number before the "/"
is the number that had executed since boot at the time that this CPU
last noted the beginning of a grace period, which might be the current
(stalled) grace period, or it might be some earlier grace period (for
example, if the CPU might have been in dyntick-idle mode for an extended
time period). The number after the "/" is the number that have executed
since boot until the current time. If this latter number stays constant
across repeated stall-warning messages, it is possible that RCU's softirq
handlers are no longer able to execute on this CPU. This can happen if
the stalled CPU is spinning with interrupts disabled, or, in -rt
kernels, if a high-priority process is starving RCU's softirq handler.

The "fqs=" shows the number of force-quiescent-state idle/offline
detection passes that the grace-period kthread has made across this
CPU since the last time that this CPU noted the beginning of a grace
period.

The "detected by" line indicates which CPU detected the stall (in this
case, CPU 32), how many jiffies have elapsed since the start of the grace
period (in this case 2603), the grace-period sequence number (7075), and
an estimate of the total number of RCU callbacks queued across all CPUs
(625 in this case).

If the grace period ends just as the stall warning starts printing,
there will be a spurious stall-warning message, which will include
the following::

    INFO: Stall ended before state dump start

This is rare, but does happen from time to time in real life. It is also
possible for a zero-jiffy stall to be flagged in this case, depending
on how the stall warning and the grace-period initialization happen to
interact. Please note that it is not possible to entirely eliminate this
sort of false positive without resorting to things like stop_machine(),
which is overkill for this sort of problem.

If all CPUs and tasks have passed through quiescent states, but the
grace period has nevertheless failed to end, the stall-warning splat
will include something like the following::

    All QSes seen, last rcu_preempt kthread activity 23807 (4297905177-4297881370), jiffies_till_next_fqs=3, root ->qsmask 0x0

The "23807" indicates that it has been more than 23 thousand jiffies
since the grace-period kthread ran. The "jiffies_till_next_fqs"
indicates how frequently that kthread should run, giving the number
of jiffies between force-quiescent-state scans, in this case three,
which is way less than 23807. Finally, the root rcu_node structure's
->qsmask field is printed, which will normally be zero.

If the relevant grace-period kthread has been unable to run prior to
the stall warning, as was the case in the "All QSes seen" line above,
the following additional line is printed::

    rcu_sched kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5
    Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.

Starving the grace-period kthreads of CPU time can of course result
in RCU CPU stall warnings even when all CPUs and tasks have passed
through the required quiescent states. The "g" number shows the current
grace-period sequence number, the "f" precedes the ->gp_flags command
to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the
kthread is waiting for a short timeout, the "state" precedes the value
of the task_struct ->state field, and the "cpu" indicates that the
grace-period kthread last ran on CPU 5.

If the relevant grace-period kthread does not wake from FQS wait in a
reasonable time, then the following additional line is printed::

    kthread timer wakeup didn't happen for 23804 jiffies! g7076 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402

The "23804" indicates that kthread's timer expired more than 23 thousand
jiffies ago. The rest of the line has meaning similar to the kthread
starvation case.

Additionally, the following line is printed::

    Possible timer handling issue on cpu=4 timer-softirq=11142

Here "cpu" indicates that the grace-period kthread last ran on CPU 4,
where it queued the fqs timer. The number following the "timer-softirq"
is the current ``TIMER_SOFTIRQ`` count on cpu 4. If this value does not
change on successive RCU CPU stall warnings, there is further reason to
suspect a timer problem.

These messages are usually followed by stack dumps of the CPUs and tasks
involved in the stall. These stack traces can help you locate the cause
of the stall, keeping in mind that the CPU detecting the stall will have
an interrupt frame that is mainly devoted to detecting the stall.


Multiple Warnings From One Stall
================================

If a stall lasts long enough, multiple stall-warning messages will
be printed for it. The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message. For example, with the default
21-second timeout, the first warning would appear roughly 21 seconds
into the stall and the second roughly 63 seconds after the first.
It can be helpful to compare the stack dumps for the different messages
for the same stalled grace period.


Stall Warnings for Expedited Grace Periods
==========================================

If an expedited grace period detects a stall, it will place a message
like the following in dmesg::

    INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-... } 21119 jiffies s: 73 root: 0x2/.

This indicates that CPU 7 has failed to respond to a reschedule IPI.
The three periods (".") following the CPU number indicate that the CPU
is online (otherwise the first period would instead have been "O"),
that the CPU was online at the beginning of the expedited grace period
(otherwise the second period would have instead been "o"), and that
the CPU has been online at least once since boot (otherwise, the third
period would instead have been "N"). The number before the "jiffies"
indicates that the expedited grace period has been going on for 21,119
jiffies. The number following the "s:" indicates that the expedited
grace-period sequence counter is 73. The fact that this last value is
odd indicates that an expedited grace period is in flight. The number
following "root:" is a bitmask that indicates which children of the root
rcu_node structure correspond to CPUs and/or tasks that are blocking the
current expedited grace period. If the tree had more than one level,
additional hex numbers would be printed for the states of the other
rcu_node structures in the tree.

As with normal grace periods, PREEMPT_RCU builds can be stalled by
tasks as well as by CPUs, and the tasks will be indicated by PID,
for example, "P3421".

It is entirely possible to see stall warnings from normal and from
expedited grace periods at about the same time during the same run.

RCU_CPU_STALL_CPUTIME
=====================

In kernels built with CONFIG_RCU_CPU_STALL_CPUTIME=y or booted with
rcupdate.rcu_cpu_stall_cputime=1, the following additional information
is supplied with each RCU CPU stall warning::

    rcu:          hardirqs   softirqs   csw/system
    rcu:  number:      624         45            0
    rcu: cputime:       69          1         2425   ==> 2500(ms)

These statistics are collected during the sampling period. The values
in row "number:" are the number of hard interrupts, number of soft
interrupts, and number of context switches on the stalled CPU. The
first three values in row "cputime:" indicate the CPU time in
milliseconds consumed by hard interrupts, soft interrupts, and tasks
on the stalled CPU. The last number is the measurement interval, again
in milliseconds. Because user-mode tasks normally do not cause RCU CPU
stalls, these tasks are typically kernel tasks, which is why only the
system CPU time is considered.

The sampling period is shown as follows::

    |<------------first timeout---------->|<-----second timeout----->|
    |<--half timeout-->|<--half timeout-->|                          |
    |                  |<--first period-->|                          |
    |                  |<-----------second sampling period---------->|
    |                  |                  |                          |
              snapshot time point     1st-stall                  2nd-stall

The following describes four typical scenarios:

1. A CPU looping with interrupts disabled.

   ::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:        0          0            0
      rcu: cputime:        0          0            0   ==> 2500(ms)

   Because interrupts have been disabled throughout the measurement
   interval, there are no interrupts and no context switches.
   Furthermore, because CPU time consumption was measured using interrupt
   handlers, the system CPU consumption is misleadingly measured as zero.
   This scenario will normally also have "(0 ticks this GP)" printed on
   this CPU's summary line. A test-only sketch that reproduces this
   signature appears just below.

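   If you need to reproduce this signature on a disposable test kernel,
   a deliberately broken loop along the following lines will do it.
   This is an illustrative, test-only sketch (not from the kernel
   source); the 30-second delay is chosen simply to exceed the default
   21-second timeout, and spinning this long with interrupts off will
   likely upset other watchdogs as well::

      #include <linux/delay.h>
      #include <linux/irqflags.h>

      /*
       * Test-only: spin with interrupts disabled for longer than the
       * RCU CPU stall timeout.  Substituting local_bh_disable()/
       * local_bh_enable() or preempt_disable()/preempt_enable() for
       * the local_irq_*() calls produces scenarios 2 and 3 instead.
       */
      static void provoke_rcu_stall_irqs_off(void)
      {
          local_irq_disable();
          mdelay(30 * 1000);    /* busy-wait well past the stall timeout */
          local_irq_enable();
      }
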
2. A CPU looping with bottom halves disabled.

   This is similar to the previous example, but with a non-zero number
   of hard interrupts, non-zero CPU time consumed by them, and non-zero
   CPU time consumed by in-kernel execution::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:      624          0            0
      rcu: cputime:       49          0         2446   ==> 2500(ms)

   The fact that there are zero softirqs gives a hint that these were
   disabled, perhaps via local_bh_disable(). It is of course possible
   that there were no softirqs, perhaps because all events that would
   result in softirq execution are confined to other CPUs. In this case,
   the diagnosis should continue as shown in the next example.

3. A CPU looping with preemption disabled.

   Here, only the number of context switches is zero::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:      624         45            0
      rcu: cputime:       69          1         2425   ==> 2500(ms)

   This situation hints that the stalled CPU was looping with preemption
   disabled.

4. No looping, but massive hard and soft interrupts.

   ::

      rcu:          hardirqs   softirqs   csw/system
      rcu:  number:       xx         xx            0
      rcu: cputime:       xx         xx            0   ==> 2500(ms)

   Here, the number and CPU time of hard interrupts are all non-zero,
   but the number of context switches and the in-kernel CPU time consumed
   are zero. The number and cputime of soft interrupts will usually be
   non-zero, but could be zero, for example, if the CPU was spinning
   within a single hard interrupt handler.

   If this type of RCU CPU stall warning can be reproduced, you can
   narrow it down by looking at /proc/interrupts or by writing code to
   trace each interrupt, for example, by referring to show_interrupts().