.. SPDX-License-Identifier: GPL-2.0

================================
Review Checklist for RCU Patches
================================


This document contains a checklist for producing and reviewing patches
that make use of RCU. Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause. This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.      Is RCU being applied to a read-mostly situation? If the data
        structure is updated more than about 10% of the time, then you
        should strongly consider some other approach, unless detailed
        performance measurements show that RCU is nonetheless the right
        tool for the job. Yes, RCU does reduce read-side overhead by
        increasing write-side overhead, which is exactly why normal uses
        of RCU will do much more reading than updating.

        Another exception is where performance is not an issue, and RCU
        provides a simpler implementation. An example of this situation
        is the dynamic NMI code in the Linux 2.6 kernel, at least on
        architectures where NMIs are rare.

        Yet another exception is where the low real-time latency of RCU's
        read-side primitives is critically important.

        One final exception is where RCU readers are used to prevent
        the ABA problem (https://en.wikipedia.org/wiki/ABA_problem)
        for lockless updates. This does result in the mildly
        counter-intuitive situation where rcu_read_lock() and
        rcu_read_unlock() are used to protect updates. However, this
        approach can provide the same simplifications to certain types
        of lockless algorithms that garbage collectors do.

1.      Does the update code have proper mutual exclusion?

        RCU does allow *readers* to run (almost) naked, but *writers* must
        still use some sort of mutual exclusion, such as:

        a.      locking,
        b.      atomic operations, or
        c.      restricting updates to a single task.

        If you choose #b, be prepared to describe how you have handled
        memory barriers on weakly ordered machines (pretty much all of
        them -- even x86 allows later loads to be reordered to precede
        earlier stores), and be prepared to explain why this added
        complexity is worthwhile. If you choose #c, be prepared to
        explain how this single task does not become a major bottleneck
        on large systems (for example, if the task is updating information
        relating to itself that other tasks can read, there can, by
        definition, be no bottleneck). Note that the definition of
        "large" has changed significantly: eight CPUs was "large" in the
        year 2000, but a hundred CPUs was unremarkable in 2017.
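
        For example, here is a minimal sketch of option (a), using a
        hypothetical struct foo protected by an equally hypothetical
        foo_lock::

                struct foo {
                        int val;
                        struct rcu_head rh;
                };

                static DEFINE_SPINLOCK(foo_lock);       /* Serializes updaters. */
                static struct foo __rcu *global_foo;

                void update_foo(int new_val)
                {
                        struct foo *newp, *oldp;

                        newp = kmalloc(sizeof(*newp), GFP_KERNEL);
                        if (!newp)
                                return;
                        newp->val = new_val;

                        spin_lock(&foo_lock);   /* Update-side mutual exclusion. */
                        oldp = rcu_dereference_protected(global_foo,
                                                lockdep_is_held(&foo_lock));
                        rcu_assign_pointer(global_foo, newp);
                        spin_unlock(&foo_lock);

                        if (oldp)
                                kfree_rcu(oldp, rh);    /* Free after a grace period. */
                }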

2.      Do the RCU read-side critical sections make proper use of
        rcu_read_lock() and friends? These primitives are needed
        to prevent grace periods from ending prematurely, which
        could result in data being unceremoniously freed out from
        under your read-side code, which can greatly increase the
        actuarial risk of your kernel.

        As a rough rule of thumb, any dereference of an RCU-protected
        pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
        rcu_read_lock_sched(), or by the appropriate update-side lock.
        Explicit disabling of preemption (preempt_disable(), for example)
        can serve as rcu_read_lock_sched(), but is less readable and
        prevents lockdep from detecting locking issues.

        Please note that you *cannot* rely on code known to be built
        only in non-preemptible kernels. Such code can and will break,
        especially in kernels built with CONFIG_PREEMPT_COUNT=y.

        Letting RCU-protected pointers "leak" out of an RCU read-side
        critical section is every bit as bad as letting them leak out
        from under a lock, unless, of course, you have arranged some
        other means of protection, such as a lock or a reference count
        acquired *before* letting them out of the RCU read-side critical
        section.
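
        For example, here is a reader for the hypothetical global_foo
        pointer from the sketch in the previous item, which copies out
        the needed value rather than letting the pointer leak::

                int read_foo_val(void)
                {
                        struct foo *p;
                        int val = -1;

                        rcu_read_lock();
                        p = rcu_dereference(global_foo);
                        if (p)
                                val = p->val;   /* Use p only in the critical section. */
                        rcu_read_unlock();
                        /* p must not be dereferenced here without other protection. */
                        return val;
                }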

3.      Does the update code tolerate concurrent accesses?

        The whole point of RCU is to permit readers to run without
        any locks or atomic operations. This means that readers will
        be running while updates are in progress. There are a number
        of ways to handle this concurrency, depending on the situation:

        a.      Use the RCU variants of the list and hlist update
                primitives to add, remove, and replace elements on
                an RCU-protected list. Alternatively, use the other
                RCU-protected data structures that have been added to
                the Linux kernel.

                This is almost always the best approach, and is
                illustrated by the sketch at the end of this item.

        b.      Proceed as in (a) above, but also maintain per-element
                locks (that are acquired by both readers and writers)
                that guard per-element state. Fields that the readers
                refrain from accessing can be guarded by some other lock
                acquired only by updaters, if desired.

                This also works quite well.

        c.      Make updates appear atomic to readers. For example,
                pointer updates to properly aligned fields will
                appear atomic, as will individual atomic primitives.
                Sequences of operations performed under a lock will *not*
                appear to be atomic to RCU readers, nor will sequences
                of multiple atomic primitives. One alternative is to
                move multiple individual fields to a separate structure,
                thus solving the multiple-field problem by imposing an
                additional level of indirection.

                This can work, but is starting to get a bit tricky.

        d.      Carefully order the updates and the reads so that readers
                see valid data at all phases of the update. This is often
                more difficult than it sounds, especially given modern
                CPUs' tendency to reorder memory references. One must
                usually liberally sprinkle memory-ordering operations
                through the code, making it difficult to understand and
                to test. Where it works, it is better to use things
                like smp_store_release() and smp_load_acquire(), but in
                some cases the smp_mb() full memory barrier is required.

                As noted earlier, it is usually better to group the
                changing data into a separate structure, so that the
                change may be made to appear atomic by updating a pointer
                to reference a new structure containing updated values.
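
        As a sketch of option (a), consider a hypothetical RCU-protected
        list of equally hypothetical struct entry elements, with updaters
        serialized by entries_lock::

                struct entry {
                        struct list_head node;
                        int key;
                        struct rcu_head rh;
                };

                static LIST_HEAD(entries);
                static DEFINE_SPINLOCK(entries_lock);   /* Serializes updaters. */

                void add_entry(struct entry *e)
                {
                        spin_lock(&entries_lock);
                        list_add_rcu(&e->node, &entries);
                        spin_unlock(&entries_lock);
                }

                void del_entry(struct entry *e)
                {
                        spin_lock(&entries_lock);
                        list_del_rcu(&e->node);
                        spin_unlock(&entries_lock);
                        kfree_rcu(e, rh);       /* Free only after a grace period. */
                }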

4.      Weakly ordered CPUs pose special challenges. Almost all CPUs
        are weakly ordered -- even x86 CPUs allow later loads to be
        reordered to precede earlier stores. RCU code must take all of
        the following measures to prevent memory-corruption problems:

        a.      Readers must maintain proper ordering of their memory
                accesses. The rcu_dereference() primitive ensures that
                the CPU picks up the pointer before it picks up the data
                that the pointer points to. This really is necessary
                on Alpha CPUs.

                The rcu_dereference() primitive is also an excellent
                documentation aid, letting the person reading the
                code know exactly which pointers are protected by RCU.
                Please note that compilers can also reorder code, and
                they are becoming increasingly aggressive about doing
                just that. The rcu_dereference() primitive therefore also
                prevents destructive compiler optimizations. However,
                with a bit of devious creativity, it is possible to
                mishandle the return value from rcu_dereference().
                Please see rcu_dereference.rst for more information.

                The rcu_dereference() primitive is used by the various
                "_rcu()" list-traversal primitives, such as
                list_for_each_entry_rcu() (see the sketch at the end
                of this item). Note that it is perfectly legal (if
                redundant) for update-side code to use rcu_dereference()
                and the "_rcu()" list-traversal primitives. This is
                particularly useful in code that is common to readers
                and updaters. However, lockdep will complain if you use
                rcu_dereference() outside of an RCU read-side critical
                section. See lockdep.rst to learn what to do about this.

                Of course, neither rcu_dereference() nor the "_rcu()"
                list-traversal primitives can substitute for a good
                concurrency design coordinating among multiple updaters.

        b.      If the list macros are being used, the list_add_tail_rcu()
                and list_add_rcu() primitives must be used in order
                to prevent weakly ordered machines from misordering
                structure initialization and pointer planting.
                Similarly, if the hlist macros are being used, the
                hlist_add_head_rcu() primitive is required.

        c.      If the list macros are being used, the list_del_rcu()
                primitive must be used to keep list_del()'s pointer
                poisoning from inflicting toxic effects on concurrent
                readers. Similarly, if the hlist macros are being used,
                the hlist_del_rcu() primitive is required.

                The list_replace_rcu() and hlist_replace_rcu() primitives
                may be used to replace an old structure with a new one
                in their respective types of RCU-protected lists.

        d.      Rules similar to (4b) and (4c) apply to the "hlist_nulls"
                type of RCU-protected linked lists.

        e.      Updates must ensure that initialization of a given
                structure happens before pointers to that structure are
                publicized. Use the rcu_assign_pointer() primitive
                when publicizing a pointer to a structure that can
                be traversed by an RCU read-side critical section.
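
        For example, here is the "_rcu()" list-traversal sketch promised
        in (4a), reading the hypothetical entries list from the item 3
        sketch::

                bool entry_exists(int key)
                {
                        struct entry *e;
                        bool found = false;

                        rcu_read_lock();
                        list_for_each_entry_rcu(e, &entries, node) {
                                /* rcu_dereference() is implied by the macro. */
                                if (e->key == key) {
                                        found = true;
                                        break;
                                }
                        }
                        rcu_read_unlock();
                        return found;
                }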

5.      If any of call_rcu(), call_srcu(), call_rcu_tasks(),
        call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
        the callback function may be invoked from softirq context,
        and in any case with bottom halves disabled. In particular,
        this callback function cannot block. If you need the callback
        to block, run that code in a workqueue handler scheduled from
        the callback. The queue_rcu_work() function does this for you
        in the case of call_rcu().
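
        For example, here is a sketch of queue_rcu_work() usage, assuming
        a hypothetical blocking_obj whose teardown must be able to block::

                struct blocking_obj {
                        struct rcu_work rwork;
                        /* ... */
                };

                static void obj_teardown_workfn(struct work_struct *work)
                {
                        struct blocking_obj *obj =
                                container_of(to_rcu_work(work),
                                             struct blocking_obj, rwork);

                        /* Process context, after a grace period: may block. */
                        kfree(obj);
                }

                void obj_release(struct blocking_obj *obj)
                {
                        INIT_RCU_WORK(&obj->rwork, obj_teardown_workfn);
                        queue_rcu_work(system_wq, &obj->rwork);
                }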

6.      Since synchronize_rcu() can block, it cannot be called
        from any sort of irq context. The same rule applies
        for synchronize_srcu(), synchronize_rcu_expedited(),
        synchronize_srcu_expedited(), synchronize_rcu_tasks(),
        synchronize_rcu_tasks_rude(), and synchronize_rcu_tasks_trace().

        The expedited forms of these primitives have the same semantics
        as the non-expedited forms, but expediting is more CPU intensive.
        Use of the expedited primitives should be restricted to rare
        configuration-change operations that would not normally be
        undertaken while a real-time workload is running. Note that
        IPI-sensitive real-time workloads can use the rcupdate.rcu_normal
        kernel boot parameter to completely disable expedited grace
        periods, though this might have performance implications.

        In particular, if you find yourself invoking one of the expedited
        primitives repeatedly in a loop, please do everyone a favor:
        Restructure your code so that it batches the updates, allowing
        a single non-expedited primitive to cover the entire batch.
        This will very likely be faster than the loop containing the
        expedited primitive, and will be much, much easier on the rest
        of the system, especially on any real-time workloads running
        there. Alternatively, instead use asynchronous primitives such
        as call_rcu().
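
        For example, contrast the following two patterns, where
        remove_item() is a hypothetical updater helper::

                /* Antipattern: one expedited grace period per update. */
                for (i = 0; i < n; i++) {
                        remove_item(i);
                        synchronize_rcu_expedited();
                }

                /* Better: do all the updates, then wait just once. */
                for (i = 0; i < n; i++)
                        remove_item(i);
                synchronize_rcu();      /* One grace period covers the batch. */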

7.      As of v4.20, a given kernel implements only one RCU flavor, which
        is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
        If the updater uses call_rcu() or synchronize_rcu(), then
        the corresponding readers may use: (1) rcu_read_lock() and
        rcu_read_unlock(), (2) any pair of primitives that disables
        and re-enables softirq, for example, rcu_read_lock_bh() and
        rcu_read_unlock_bh(), or (3) any pair of primitives that disables
        and re-enables preemption, for example, rcu_read_lock_sched() and
        rcu_read_unlock_sched(). If the updater uses synchronize_srcu()
        or call_srcu(), then the corresponding readers must use
        srcu_read_lock() and srcu_read_unlock(), with the same
        srcu_struct. The rules for the expedited RCU grace-period-wait
        primitives are the same as for their non-expedited counterparts.

        Similarly, it is necessary to correctly use the RCU Tasks flavors:

        a.      If the updater uses synchronize_rcu_tasks() or
                call_rcu_tasks(), then the readers must refrain from
                executing voluntary context switches, that is, from
                blocking.

        b.      If the updater uses call_rcu_tasks_trace()
                or synchronize_rcu_tasks_trace(), then the
                corresponding readers must use rcu_read_lock_trace()
                and rcu_read_unlock_trace().

        c.      If an updater uses call_rcu_tasks_rude() or
                synchronize_rcu_tasks_rude(), then the corresponding
                readers must use anything that disables preemption,
                for example, preempt_disable() and preempt_enable().

        Mixing things up will result in confusion and broken kernels, and
        has even resulted in an exploitable security issue. Therefore,
        when using non-obvious pairs of primitives, commenting is
        of course a must. One example of non-obvious pairing is
        the XDP feature in networking, which calls BPF programs from
        network-driver NAPI (softirq) context. BPF relies heavily on RCU
        protection for its data structures, but because the BPF program
        invocation happens entirely within a single local_bh_disable()
        section in a NAPI poll cycle, this usage is safe. It is safe
        because readers may use anything that disables BH when updaters
        use call_rcu() or synchronize_rcu().
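
        For example, here is a sketch of such a pairing: because the
        updater uses call_rcu(), a BH-disabled region serves as the
        read-side critical section (global_foo and do_something() are
        hypothetical)::

                local_bh_disable();
                p = rcu_dereference_bh(global_foo);
                if (p)
                        do_something(p->val);
                local_bh_enable();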

8.      Although synchronize_rcu() is slower than is call_rcu(),
        it usually results in simpler code. So, unless update
        performance is critically important, the updaters cannot block,
        or the latency of synchronize_rcu() is visible from userspace,
        synchronize_rcu() should be used in preference to call_rcu().
        Furthermore, kfree_rcu() and kvfree_rcu() usually result
        in even simpler code than does synchronize_rcu() without
        synchronize_rcu()'s multi-millisecond latency. So please take
        advantage of kfree_rcu()'s and kvfree_rcu()'s "fire and forget"
        memory-freeing capabilities wherever they apply.
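
        For example, contrast call_rcu() with kfree_rcu() for the
        hypothetical struct entry from the item 3 sketch::

                /* With call_rcu(), a callback function is needed... */
                static void entry_free_cb(struct rcu_head *rh)
                {
                        kfree(container_of(rh, struct entry, rh));
                }

                call_rcu(&e->rh, entry_free_cb);

                /* ...but kfree_rcu() makes that callback unnecessary. */
                kfree_rcu(e, rh);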

        An especially important property of the synchronize_rcu()
        primitive is that it automatically self-limits: if grace periods
        are delayed for whatever reason, then the synchronize_rcu()
        primitive will correspondingly delay updates. In contrast,
        code using call_rcu() should explicitly limit update rate in
        cases where grace periods are delayed, as failing to do so can
        result in excessive realtime latencies or even OOM conditions.

        Ways of gaining this self-limiting property when using call_rcu(),
        kfree_rcu(), or kvfree_rcu() include:

        a.      Keeping a count of the number of data-structure elements
                used by the RCU-protected data structure, including
                those waiting for a grace period to elapse. Enforce a
                limit on this number, stalling updates as needed to allow
                previously deferred frees to complete. Alternatively,
                limit only the number awaiting deferred free rather than
                the total number of elements. (A sketch of this approach
                appears at the end of this item.)

                One way to stall the updates is to acquire the update-side
                mutex. (Don't try this with a spinlock -- other CPUs
                spinning on the lock could prevent the grace period
                from ever ending.) Another way to stall the updates
                is for the updates to use a wrapper function around
                the memory allocator, so that this wrapper function
                simulates OOM when there is too much memory awaiting an
                RCU grace period. There are of course many other
                variations on this theme.

        b.      Limiting update rate. For example, if updates occur only
                once per hour, then no explicit rate limiting is
                required, unless your system is already badly broken.
                Older versions of the dcache subsystem take this approach,
                guarding updates with a global lock, limiting their rate.

        c.      Trusted update -- if updates can only be done manually by
                superuser or some other trusted user, then it might not
                be necessary to automatically limit them. The theory
                here is that superuser already has lots of ways to crash
                the machine.

        d.      Periodically invoke rcu_barrier(), permitting a limited
                number of updates per grace period.

        The same cautions apply to call_srcu(), call_rcu_tasks(),
        call_rcu_tasks_rude(), and call_rcu_tasks_trace(). This is
        why there is an srcu_barrier(), rcu_barrier_tasks(),
        rcu_barrier_tasks_rude(), and rcu_barrier_tasks_trace(),
        respectively.

        Note that although these primitives do take action to avoid
        memory exhaustion when any given CPU has too many callbacks,
        a determined user or administrator can still exhaust memory.
        This is especially the case if a system with a large number of
        CPUs has been configured to offload all of its RCU callbacks onto
        a single CPU, or if the system has relatively little free memory.
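
        As promised in option (a) above, here is a sketch of a limiter
        that stalls updaters when too many hypothetical struct entry
        elements await deferred free (the limit of 10000 is arbitrary)::

                static atomic_t nr_deferred;    /* Elements awaiting free. */
                #define NR_DEFERRED_MAX 10000

                static void entry_reclaim_cb(struct rcu_head *rh)
                {
                        kfree(container_of(rh, struct entry, rh));
                        atomic_dec(&nr_deferred);
                }

                void del_entry_limited(struct entry *e)
                {
                        /* Stall updates until deferred frees drain. */
                        while (atomic_read(&nr_deferred) >= NR_DEFERRED_MAX)
                                rcu_barrier();  /* Waits for pending callbacks. */

                        spin_lock(&entries_lock);
                        list_del_rcu(&e->node);
                        spin_unlock(&entries_lock);

                        atomic_inc(&nr_deferred);
                        call_rcu(&e->rh, entry_reclaim_cb);
                }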

9.      All RCU list-traversal primitives, which include
        rcu_dereference(), list_for_each_entry_rcu(), and
        list_for_each_safe_rcu(), must be either within an RCU read-side
        critical section or must be protected by appropriate update-side
        locks. RCU read-side critical sections are delimited by
        rcu_read_lock() and rcu_read_unlock(), or by similar primitives
        such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
        case the matching rcu_dereference() primitive must be used in
        order to keep lockdep happy; in this case, rcu_dereference_bh().

        The reason that it is permissible to use RCU list-traversal
        primitives when the update-side lock is held is that doing so
        can be quite helpful in reducing code bloat when common code is
        shared between readers and updaters. Additional primitives
        are provided for this case, as discussed in lockdep.rst.
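
        For example, here is a sketch of a lookup shared with updaters,
        which informs lockdep that the hypothetical entries_lock is held::

                /* Caller must hold entries_lock. */
                static struct entry *find_entry_locked(int key)
                {
                        struct entry *e;

                        list_for_each_entry_rcu(e, &entries, node,
                                        lockdep_is_held(&entries_lock))
                                if (e->key == key)
                                        return e;
                        return NULL;
                }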

        One exception to this rule is when data is only ever added to
        the linked data structure, and is never removed during any
        time that readers might be accessing that structure. In such
        cases, READ_ONCE() may be used in place of rcu_dereference()
        and the read-side markers (rcu_read_lock() and rcu_read_unlock(),
        for example) may be omitted.

10.     Conversely, if you are in an RCU read-side critical section,
        and you don't hold the appropriate update-side lock, you *must*
        use the "_rcu()" variants of the list macros. Failing to do so
        will break Alpha, cause aggressive compilers to generate bad code,
        and confuse people trying to understand your code.

11.     Any lock acquired by an RCU callback must be acquired elsewhere
        with softirq disabled, e.g., via spin_lock_bh(). Failing to
        disable softirq on a given acquisition of that lock will result
        in deadlock as soon as the RCU softirq handler happens to run
        your RCU callback while interrupting that acquisition's critical
        section.
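
        For example, here is a sketch of the required discipline for a
        hypothetical cb_lock shared with an RCU callback::

                static DEFINE_SPINLOCK(cb_lock);

                /* Process context: softirq must be blocked while cb_lock
                 * is held, lest the callback below deadlock on it. */
                void update_shared_state(void)
                {
                        spin_lock_bh(&cb_lock);
                        /* ... update state also used by the callback ... */
                        spin_unlock_bh(&cb_lock);
                }

                /* RCU callback, possibly running from softirq context. */
                static void shared_state_cb(struct rcu_head *rh)
                {
                        spin_lock(&cb_lock);
                        /* ... */
                        spin_unlock(&cb_lock);
                }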

12.     RCU callbacks can be and are executed in parallel. In many
        cases, the callback code is simply a wrapper around kfree(),
        so that this is not an issue (or, more accurately, to the
        extent that it is an issue, the memory-allocator locking
        handles it). However, if the callbacks do manipulate a shared
        data structure, they must use whatever locking or other
        synchronization is required to safely access and/or modify
        that data structure.

        Do not assume that RCU callbacks will be executed on the same
        CPU that executed the corresponding call_rcu() or call_srcu().
        For example, if a given CPU goes offline while having an RCU
        callback pending, then that RCU callback will execute on some
        surviving CPU. (If this was not the case, a self-spawning RCU
        callback would prevent the victim CPU from ever going offline.)
        Furthermore, CPUs designated by rcu_nocbs= might well *always*
        have their RCU callbacks executed on some other CPUs. In fact,
        for some real-time workloads, this is the whole point of using
        the rcu_nocbs= kernel boot parameter.

        In addition, do not assume that callbacks queued in a given order
        will be invoked in that order, even if they all are queued on the
        same CPU. Furthermore, do not assume that same-CPU callbacks will
        be invoked serially. For example, in recent kernels, CPUs can be
        switched between offloaded and de-offloaded callback invocation,
        and while a given CPU is undergoing such a switch, its callbacks
        might be concurrently invoked by that CPU's softirq handler and
        that CPU's rcuo kthread. At such times, that CPU's callbacks
        might be executed both concurrently and out of order.

13.     Unlike most flavors of RCU, it *is* permissible to block in an
        SRCU read-side critical section (demarked by srcu_read_lock()
        and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
        Please note that if you don't need to sleep in read-side critical
        sections, you should be using RCU rather than SRCU, because RCU
        is almost always faster and easier to use than is SRCU.

        Also unlike other forms of RCU, explicit initialization and
        cleanup is required either at build time via DEFINE_SRCU()
        or DEFINE_STATIC_SRCU() or at runtime via init_srcu_struct()
        and cleanup_srcu_struct(). These last two are passed a
        "struct srcu_struct" that defines the scope of a given
        SRCU domain. Once initialized, the srcu_struct is passed
        to srcu_read_lock(), srcu_read_unlock(), synchronize_srcu(),
        synchronize_srcu_expedited(), and call_srcu(). A given
        synchronize_srcu() waits only for SRCU read-side critical
        sections governed by srcu_read_lock() and srcu_read_unlock()
        calls that have been passed the same srcu_struct. This property
        is what makes sleeping read-side critical sections tolerable --
        a given subsystem delays only its own updates, not those of other
        subsystems using SRCU. Therefore, SRCU is less prone to OOM the
        system than RCU would be if RCU's read-side critical sections
        were permitted to sleep.
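
        For example, here is a minimal sketch of an SRCU domain protecting
        a hypothetical my_foo pointer (update-side mutual exclusion is
        assumed, as per item 1)::

                DEFINE_STATIC_SRCU(my_srcu);
                static struct foo __rcu *my_foo;

                int read_my_foo_val(void)
                {
                        struct foo *p;
                        int idx, val = -1;

                        idx = srcu_read_lock(&my_srcu);
                        p = srcu_dereference(my_foo, &my_srcu);
                        if (p)
                                val = p->val;   /* Sleeping is permitted here. */
                        srcu_read_unlock(&my_srcu, idx);
                        return val;
                }

                void update_my_foo(struct foo *newp)
                {
                        struct foo *oldp;

                        oldp = rcu_replace_pointer(my_foo, newp, 1);
                        synchronize_srcu(&my_srcu);     /* Only my_srcu readers. */
                        kfree(oldp);
                }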

        The ability to sleep in read-side critical sections does not
        come for free. First, corresponding srcu_read_lock() and
        srcu_read_unlock() calls must be passed the same srcu_struct.
        Second, grace-period-detection overhead is amortized only
        over those updates sharing a given srcu_struct, rather than
        being globally amortized as they are for other forms of RCU.
        Therefore, SRCU should be used in preference to rw_semaphore
        only in extremely read-intensive situations, or in situations
        requiring SRCU's read-side deadlock immunity or low read-side
        realtime latency. You should also consider percpu_rw_semaphore
        when you need lightweight readers.

        SRCU's expedited primitive (synchronize_srcu_expedited())
        never sends IPIs to other CPUs, so it is easier on
        real-time workloads than is synchronize_rcu_expedited().

        It is also permissible to sleep in RCU Tasks Trace read-side
        critical sections, which are delimited by rcu_read_lock_trace()
        and rcu_read_unlock_trace(). However, this is a specialized
        flavor of RCU, and you should not use it without first checking
        with its current users. In most cases, you should instead use
        SRCU.

        Note that rcu_assign_pointer() relates to SRCU just as it does to
        other forms of RCU, but instead of rcu_dereference() you should
        use srcu_dereference() in order to avoid lockdep splats.

14.     The whole point of call_rcu(), synchronize_rcu(), and friends
        is to wait until all pre-existing readers have finished before
        carrying out some otherwise-destructive operation. It is
        therefore critically important to *first* remove any path
        that readers can follow that could be affected by the
        destructive operation, and *only then* invoke call_rcu(),
        synchronize_rcu(), or friends.

        Because these primitives only wait for pre-existing readers, it
        is the caller's responsibility to guarantee that any subsequent
        readers will execute safely.
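
        For example, here is the classic removal sequence, shown as a
        sketch against the hypothetical list from item 3::

                /* First, make the element unreachable to new readers. */
                spin_lock(&entries_lock);
                list_del_rcu(&e->node);
                spin_unlock(&entries_lock);

                /* Next, wait for pre-existing readers to finish. */
                synchronize_rcu();

                /* Only now is the destructive operation safe. */
                kfree(e);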

15.     The various RCU read-side primitives do *not* necessarily contain
        memory barriers. You should therefore plan for the CPU
        and the compiler to freely reorder code into and out of RCU
        read-side critical sections. It is the responsibility of the
        RCU update-side primitives to deal with this.

        For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
        immediately after an srcu_read_unlock() to get a full barrier.
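
        For example, here is a sketch using the my_srcu domain from
        item 13::

                idx = srcu_read_lock(&my_srcu);
                /* ... read-side accesses ... */
                srcu_read_unlock(&my_srcu, idx);
                smp_mb__after_srcu_read_unlock();
                /* Later accesses cannot be reordered before the
                 * srcu_read_unlock() above. */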

16.     Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
        __rcu sparse checks to validate your RCU code. These can help
        find problems as follows:

        CONFIG_PROVE_LOCKING:
                check that accesses to RCU-protected data structures
                are carried out under the proper RCU read-side critical
                section, while holding the right combination of locks,
                or whatever other conditions are appropriate.

        CONFIG_DEBUG_OBJECTS_RCU_HEAD:
                check that you don't pass the same object to call_rcu()
                (or friends) before an RCU grace period has elapsed
                since the last time that you passed that same object to
                call_rcu() (or friends).

        __rcu sparse checks:
                tag the pointer to the RCU-protected data structure
                with __rcu, and sparse will warn you if you access that
                pointer without the services of one of the variants
                of rcu_dereference().

        These debugging aids can help you find problems that are
        otherwise extremely difficult to spot.
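
        For example, here is a sketch of the __rcu annotation on the
        hypothetical global_foo pointer::

                static struct foo __rcu *global_foo;    /* Tagged for sparse. */

                static struct foo *get_foo(void)
                {
                        /* A plain load ("return global_foo;") would make
                         * sparse warn about the address-space mismatch. */
                        return rcu_dereference(global_foo);
                }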

17.     If you pass a callback function defined within a module to one of
        call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
        or call_rcu_tasks_trace(), then it is necessary to wait for all
        pending callbacks to be invoked before unloading that module.
        Note that it is absolutely *not* sufficient to wait for a grace
        period! For example, the synchronize_rcu() implementation is
        *not* guaranteed to wait for callbacks registered on other CPUs
        via call_rcu(), or even on the current CPU if that CPU recently
        went offline and came back online.

        You instead need to use one of the barrier functions:

        -       call_rcu() -> rcu_barrier()
        -       call_srcu() -> srcu_barrier()
        -       call_rcu_tasks() -> rcu_barrier_tasks()
        -       call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
        -       call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()

        However, these barrier functions are absolutely *not* guaranteed
        to wait for a grace period. For example, if there are no
        call_rcu() callbacks queued anywhere in the system, rcu_barrier()
        can and will return immediately.

        So if you need to wait for both a grace period and for all
        pre-existing callbacks, you will need to invoke both functions,
        with the pair depending on the flavor of RCU:

        -       Either synchronize_rcu() or synchronize_rcu_expedited(),
                together with rcu_barrier()
        -       Either synchronize_srcu() or synchronize_srcu_expedited(),
                together with srcu_barrier()
        -       synchronize_rcu_tasks() and rcu_barrier_tasks()
        -       synchronize_rcu_tasks_rude() and rcu_barrier_tasks_rude()
        -       synchronize_rcu_tasks_trace() and rcu_barrier_tasks_trace()

        If necessary, you can use something like workqueues to execute
        the requisite pair of functions concurrently.
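
        For example, here is a sketch of a module exit handler for a
        hypothetical module whose call_rcu() callbacks are defined in
        module text::

                static void __exit my_module_exit(void)
                {
                        /* Hypothetical: prevents new call_rcu() invocations. */
                        remove_all_entries();

                        /* Wait for all pending callbacks to be invoked. */
                        rcu_barrier();
                }
                module_exit(my_module_exit);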

        See rcubarrier.rst for more information.