.. SPDX-License-Identifier: GPL-2.0

================================
Review Checklist for RCU Patches
================================


This document contains a checklist for producing and reviewing patches
that make use of RCU.  Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause.  This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.	Is RCU being applied to a read-mostly situation?  If the data
	structure is updated more than about 10% of the time, then you
	should strongly consider some other approach, unless detailed
	performance measurements show that RCU is nonetheless the right
	tool for the job.  Yes, RCU does reduce read-side overhead by
	increasing write-side overhead, which is exactly why normal uses
	of RCU will do much more reading than updating.

	Another exception is where performance is not an issue, and RCU
	provides a simpler implementation.  An example of this situation
	is the dynamic NMI code in the Linux 2.6 kernel, at least on
	architectures where NMIs are rare.

	Yet another exception is where the low real-time latency of RCU's
	read-side primitives is critically important.

	One final exception is where RCU readers are used to prevent
	the ABA problem (https://en.wikipedia.org/wiki/ABA_problem)
	for lockless updates.  This does result in the mildly
	counter-intuitive situation where rcu_read_lock() and
	rcu_read_unlock() are used to protect updates.	However, this
	approach can provide the same simplifications to certain types
	of lockless algorithms that garbage collectors do.

1.	Does the update code have proper mutual exclusion?

	RCU does allow *readers* to run (almost) naked, but *writers* must
	still use some sort of mutual exclusion, such as:

	a.	locking,
	b.	atomic operations, or
	c.	restricting updates to a single task.

	If you choose #b, be prepared to describe how you have handled
	memory barriers on weakly ordered machines (pretty much all of
	them -- even x86 allows later loads to be reordered to precede
	earlier stores), and be prepared to explain why this added
	complexity is worthwhile.  If you choose #c, be prepared to
	explain how this single task does not become a major bottleneck
	on large systems (for example, if the task is updating information
	relating to itself that other tasks can read, there by definition
	can be no bottleneck).	Note that the definition of "large" has
	changed significantly:	Eight CPUs was "large" in the year 2000,
	but a hundred CPUs was unremarkable in 2017.
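
	As a minimal sketch of option (a), the following hypothetical
	add_foo() serializes updaters with a spinlock while readers run
	lockless.  The names struct foo, foo_head, and foo_lock are
	illustrative, not taken from any real subsystem::

		struct foo {
			struct list_head list;
			struct rcu_head rcu;
			int key;
			int data;
		};

		static LIST_HEAD(foo_head);
		static DEFINE_SPINLOCK(foo_lock); /* Serializes all updaters. */

		static void add_foo(struct foo *fp)
		{
			spin_lock(&foo_lock);	/* Excludes other updaters... */
			list_add_rcu(&fp->list, &foo_head);
			spin_unlock(&foo_lock);	/* ...but never excludes readers. */
		}

	Later sketches in this document reuse these illustrative names.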

2.	Do the RCU read-side critical sections make proper use of
	rcu_read_lock() and friends?  These primitives are needed
	to prevent grace periods from ending prematurely, which
	could result in data being unceremoniously freed out from
	under your read-side code, which can greatly increase the
	actuarial risk of your kernel.

	As a rough rule of thumb, any dereference of an RCU-protected
	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
	rcu_read_lock_sched(), or by the appropriate update-side lock.
	Explicit disabling of preemption (preempt_disable(), for example)
	can serve as rcu_read_lock_sched(), but is less readable and
	prevents lockdep from detecting locking issues.  Acquiring a
	raw spinlock also enters an RCU read-side critical section.

	The guard(rcu)() and scoped_guard(rcu) primitives designate
	the remainder of the current scope or the next statement,
	respectively, as the RCU read-side critical section.  Use of
	these guards can be less error-prone than rcu_read_lock(),
	rcu_read_unlock(), and friends.
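
	For example, the following two illustrative readers are
	equivalent, with the guard form ending the critical section
	automatically at the end of the scope (do_something() stands in
	for the real read-side work)::

		static void reader_explicit(void)
		{
			rcu_read_lock();
			do_something();	/* Inside the critical section. */
			rcu_read_unlock();
		}

		static void reader_guarded(void)
		{
			guard(rcu)();	/* Critical section runs to end of scope. */
			do_something();
		}	/* The RCU read-side critical section ends here. */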

	Please note that you *cannot* rely on code known to be built
	only in non-preemptible kernels.  Such code can and will break,
	especially in kernels built with CONFIG_PREEMPT_COUNT=y.

	Letting RCU-protected pointers "leak" out of an RCU read-side
	critical section is every bit as bad as letting them leak out
	from under a lock, unless, of course, you have arranged some
	other means of protection, such as a lock or a reference count,
	*before* letting them out of the RCU read-side critical section.

3.	Does the update code tolerate concurrent accesses?

	The whole point of RCU is to permit readers to run without
	any locks or atomic operations.  This means that readers will
	be running while updates are in progress.  There are a number
	of ways to handle this concurrency, depending on the situation:

	a.	Use the RCU variants of the list and hlist update
		primitives to add, remove, and replace elements on
		an RCU-protected list.	Alternatively, use the other
		RCU-protected data structures that have been added to
		the Linux kernel.

		This is almost always the best approach.

	b.	Proceed as in (a) above, but also maintain per-element
		locks (that are acquired by both readers and writers)
		that guard per-element state.  Fields that the readers
		refrain from accessing can be guarded by some other lock
		acquired only by updaters, if desired.

		This also works quite well.

	c.	Make updates appear atomic to readers.	For example,
		pointer updates to properly aligned fields will
		appear atomic, as will individual atomic primitives.
		Sequences of operations performed under a lock will *not*
		appear to be atomic to RCU readers, nor will sequences
		of multiple atomic primitives.	One alternative is to
		move multiple individual fields to a separate structure,
		thus solving the multiple-field problem by imposing an
		additional level of indirection.

		This can work, but is starting to get a bit tricky.

	d.	Carefully order the updates and the reads so that readers
		see valid data at all phases of the update.  This is often
		more difficult than it sounds, especially given modern
		CPUs' tendency to reorder memory references.  One must
		usually liberally sprinkle memory-ordering operations
		through the code, making it difficult to understand and
		to test.  Where it works, it is better to use things
		like smp_store_release() and smp_load_acquire(), but in
		some cases the smp_mb() full memory barrier is required.

		As noted earlier, it is usually better to group the
		changing data into a separate structure, so that the
		change may be made to appear atomic by updating a pointer
		to reference a new structure containing updated values,
		as in the sketch below.
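
	Here is one such sketch, in which the hypothetical fields a and
	b must always change together; all names are illustrative::

		struct foo_pair {
			struct rcu_head rcu;
			int a;
			int b;
		};

		static struct foo_pair __rcu *cur_pair;
		static DEFINE_SPINLOCK(pair_lock);	/* Serializes updaters. */

		static int update_pair(int a, int b)
		{
			struct foo_pair *newp, *oldp;

			newp = kmalloc(sizeof(*newp), GFP_KERNEL);
			if (!newp)
				return -ENOMEM;
			newp->a = a;
			newp->b = b;
			spin_lock(&pair_lock);
			oldp = rcu_dereference_protected(cur_pair,
					lockdep_is_held(&pair_lock));
			rcu_assign_pointer(cur_pair, newp); /* Atomic to readers. */
			spin_unlock(&pair_lock);
			kfree_rcu(oldp, rcu); /* Freed only after a grace period. */
			return 0;
		}

	Readers doing rcu_dereference(cur_pair) then see either the old
	values or the new ones, never a mixture.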

4.	Weakly ordered CPUs pose special challenges.  Almost all CPUs
	are weakly ordered -- even x86 CPUs allow later loads to be
	reordered to precede earlier stores.  RCU code must take all of
	the following measures to prevent memory-corruption problems:

	a.	Readers must maintain proper ordering of their memory
		accesses.  The rcu_dereference() primitive ensures that
		the CPU picks up the pointer before it picks up the data
		that the pointer points to.  This really is necessary
		on Alpha CPUs.

		The rcu_dereference() primitive is also an excellent
		documentation aid, letting the person reading the
		code know exactly which pointers are protected by RCU.
		Please note that compilers can also reorder code, and
		they are becoming increasingly aggressive about doing
		just that.  The rcu_dereference() primitive therefore also
		prevents destructive compiler optimizations.  However,
		with a bit of devious creativity, it is possible to
		mishandle the return value from rcu_dereference().
		Please see rcu_dereference.rst for more information.

		The rcu_dereference() primitive is used by the
		various "_rcu()" list-traversal primitives, such
		as list_for_each_entry_rcu().  Note that it is
		perfectly legal (if redundant) for update-side code to
		use rcu_dereference() and the "_rcu()" list-traversal
		primitives.  This is particularly useful in code that
		is common to readers and updaters.  However, lockdep
		will complain if you invoke rcu_dereference() outside
		of an RCU read-side critical section.  See lockdep.rst
		to learn what to do about this.

		Of course, neither rcu_dereference() nor the "_rcu()"
		list-traversal primitives can substitute for a good
		concurrency design coordinating among multiple updaters.

	b.	If the list macros are being used, the list_add_tail_rcu()
		and list_add_rcu() primitives must be used in order
		to prevent weakly ordered machines from misordering
		structure initialization and pointer planting.
		Similarly, if the hlist macros are being used, the
		hlist_add_head_rcu() primitive is required.

	c.	If the list macros are being used, the list_del_rcu()
		primitive must be used to keep list_del()'s pointer
		poisoning from inflicting toxic effects on concurrent
		readers.  Similarly, if the hlist macros are being used,
		the hlist_del_rcu() primitive is required.

		The list_replace_rcu() and hlist_replace_rcu() primitives
		may be used to replace an old structure with a new one
		in their respective types of RCU-protected lists.

	d.	Rules similar to (4b) and (4c) apply to the "hlist_nulls"
		type of RCU-protected linked lists.

	e.	Updates must ensure that initialization of a given
		structure happens before pointers to that structure are
		publicized.  Use the rcu_assign_pointer() primitive
		when publicizing a pointer to a structure that can be
		traversed by an RCU read-side critical section, as in
		the sketch following this list.
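
	Reusing the illustrative struct foo, foo_head, and foo_lock from
	item 1, the sketch below fully initializes the structure before
	planting the pointer; list_add_rcu() supplies the needed
	rcu_assign_pointer() internally::

		static int create_foo(int key, int data)
		{
			struct foo *fp = kmalloc(sizeof(*fp), GFP_KERNEL);

			if (!fp)
				return -ENOMEM;
			fp->key = key;	/* Fully initialize the structure... */
			fp->data = data;
			spin_lock(&foo_lock);
			list_add_rcu(&fp->list, &foo_head); /* ...then publish it. */
			spin_unlock(&foo_lock);
			return 0;
		}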

5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(), or
	call_rcu_tasks_trace() is used, the callback function may be
	invoked from softirq context, and in any case with bottom halves
	disabled.  In particular, this callback function cannot block.
	If you need the callback to block, run that code in a workqueue
	handler scheduled from the callback.  The queue_rcu_work()
	function does this for you in the case of call_rcu().
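
	For example, here is one way to hand off blocking work, sketched
	with the illustrative struct defer_free; queue_rcu_work() invokes
	the work handler only after a grace period has elapsed::

		struct defer_free {
			struct rcu_work rwork;
			/* ... state whose teardown needs to block ... */
		};

		static void defer_free_workfn(struct work_struct *work)
		{
			struct defer_free *d = container_of(to_rcu_work(work),
							struct defer_free, rwork);

			/* Workqueue (process) context: blocking is permitted. */
			kfree(d);
		}

		static void defer_free(struct defer_free *d)
		{
			INIT_RCU_WORK(&d->rwork, defer_free_workfn);
			queue_rcu_work(system_wq, &d->rwork);
		}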

6.	Since synchronize_rcu() can block, it cannot be called
	from any sort of irq context.  The same rule applies
	for synchronize_srcu(), synchronize_rcu_expedited(),
	synchronize_srcu_expedited(), synchronize_rcu_tasks(),
	synchronize_rcu_tasks_rude(), and synchronize_rcu_tasks_trace().

	The expedited forms of these primitives have the same semantics
	as the non-expedited forms, but expediting is more CPU intensive.
	Use of the expedited primitives should be restricted to rare
	configuration-change operations that would not normally be
	undertaken while a real-time workload is running.  Note that
	IPI-sensitive real-time workloads can use the rcupdate.rcu_normal
	kernel boot parameter to completely disable expedited grace
	periods, though this might have performance implications.

	In particular, if you find yourself invoking one of the expedited
	primitives repeatedly in a loop, please do everyone a favor:
	Restructure your code so that it batches the updates, allowing
	a single non-expedited primitive to cover the entire batch.
	This will very likely be faster than the loop containing the
	expedited primitive, and will be much easier on the rest of the
	system, especially on any real-time workloads running there.
	Alternatively, instead use asynchronous primitives such as
	call_rcu().
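
	For example, assuming elements have already been moved onto a
	private "doomed" list under the update-side lock, and reusing the
	illustrative struct foo from item 1, the transformation looks
	like this sketch::

		/* Antipattern: one expedited grace period per element. */
		static void free_doomed_slowly(struct list_head *doomed)
		{
			struct foo *fp, *tmp;

			list_for_each_entry_safe(fp, tmp, doomed, list) {
				list_del(&fp->list);
				synchronize_rcu_expedited(); /* IPIs, per element! */
				kfree(fp);
			}
		}

		/* Better: one non-expedited grace period covers the batch. */
		static void free_doomed_batched(struct list_head *doomed)
		{
			struct foo *fp, *tmp;

			synchronize_rcu();
			list_for_each_entry_safe(fp, tmp, doomed, list) {
				list_del(&fp->list);
				kfree(fp);
			}
		}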

7.	As of v4.20, a given kernel implements only one RCU flavor, which
	is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
	If the updater uses call_rcu() or synchronize_rcu(), then
	the corresponding readers may use:  (1) rcu_read_lock() and
	rcu_read_unlock(), (2) any pair of primitives that disables
	and re-enables softirq, for example, rcu_read_lock_bh() and
	rcu_read_unlock_bh(), or (3) any pair of primitives that disables
	and re-enables preemption, for example, rcu_read_lock_sched() and
	rcu_read_unlock_sched().  If the updater uses synchronize_srcu()
	or call_srcu(), then the corresponding readers must use
	srcu_read_lock() and srcu_read_unlock() with the same
	srcu_struct.  The rules for the expedited RCU grace-period-wait
	primitives are the same as for their non-expedited counterparts.

	Similarly, it is necessary to correctly use the RCU Tasks flavors:

	a.	If the updater uses synchronize_rcu_tasks() or
		call_rcu_tasks(), then the readers must refrain from
		executing voluntary context switches, that is, from
		blocking.

	b.	If the updater uses call_rcu_tasks_trace()
		or synchronize_rcu_tasks_trace(), then the
		corresponding readers must use rcu_read_lock_trace()
		and rcu_read_unlock_trace().

	c.	If an updater uses synchronize_rcu_tasks_rude(),
		then the corresponding readers must use anything that
		disables preemption, for example, preempt_disable()
		and preempt_enable().

	Mixing things up will result in confusion and broken kernels, and
	has even resulted in an exploitable security issue.  Therefore,
	when using non-obvious pairs of primitives, commenting is
	of course a must.  One example of non-obvious pairing is
	the XDP feature in networking, which calls BPF programs from
	network-driver NAPI (softirq) context.	BPF relies heavily on RCU
	protection for its data structures, but because the BPF program
	invocation happens entirely within a single local_bh_disable()
	section in a NAPI poll cycle, this usage is safe.  The reason
	that this usage is safe is that readers can use anything that
	disables BH when updaters use call_rcu() or synchronize_rcu().
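
	The following sketch shows this style of pairing with
	illustrative names; the update-side locking that a real updater
	would need is omitted for brevity, as signified by the "true"
	lockdep condition::

		struct blob {
			int data;
		};
		static struct blob __rcu *gp;	/* Assumed never NULL here. */

		/* Reader: a BH-disabled region pairs with call_rcu()
		 * and synchronize_rcu() updaters. */
		static int blob_read(void)
		{
			int val;

			local_bh_disable();
			val = rcu_dereference_bh(gp)->data;
			local_bh_enable();
			return val;
		}

		/* Updater: synchronize_rcu() also waits for BH-disabled
		 * readers. */
		static void blob_update(struct blob *newp)
		{
			struct blob *oldp = rcu_replace_pointer(gp, newp, true);

			synchronize_rcu();
			kfree(oldp);
		}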

8.	Although synchronize_rcu() is slower than is call_rcu(),
	it usually results in simpler code.  So, unless update
	performance is critically important, the updaters cannot block,
	or the latency of synchronize_rcu() is visible from userspace,
	synchronize_rcu() should be used in preference to call_rcu().
	Furthermore, kfree_rcu() and kvfree_rcu() usually result
	in even simpler code than does synchronize_rcu() without
	synchronize_rcu()'s multi-millisecond latency.	So please take
	advantage of kfree_rcu()'s and kvfree_rcu()'s "fire and forget"
	memory-freeing capabilities where they apply.
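
	For example, given the illustrative struct foo from item 1,
	whose struct rcu_head field is named rcu, removal becomes fire
	and forget::

		static void remove_foo(struct foo *fp)
		{
			spin_lock(&foo_lock);
			list_del_rcu(&fp->list); /* Unpublish the element... */
			spin_unlock(&foo_lock);
			kfree_rcu(fp, rcu); /* ...free it after a grace period. */
		}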

	An especially important property of the synchronize_rcu()
	primitive is that it automatically self-limits: if grace periods
	are delayed for whatever reason, then the synchronize_rcu()
	primitive will correspondingly delay updates.  In contrast,
	code using call_rcu() should explicitly limit update rate in
	cases where grace periods are delayed, as failing to do so can
	result in excessive realtime latencies or even OOM conditions.

	Ways of gaining this self-limiting property when using call_rcu(),
	kfree_rcu(), or kvfree_rcu() include:

	a.	Keeping a count of the number of data-structure elements
		used by the RCU-protected data structure, including
		those waiting for a grace period to elapse.  Enforce a
		limit on this number, stalling updates as needed to allow
		previously deferred frees to complete.	Alternatively,
		limit only the number awaiting deferred free rather than
		the total number of elements.

		One way to stall the updates is to acquire the update-side
		mutex.	(Don't try this with a spinlock -- other CPUs
		spinning on the lock could prevent the grace period
		from ever ending.)  Another way to stall the updates
		is for the updates to use a wrapper function around
		the memory allocator, so that this wrapper function
		simulates OOM when there is too much memory awaiting an
		RCU grace period.  There are of course many other
		variations on this theme, one of which is sketched at
		the end of this item.

	b.	Limiting update rate.  For example, if updates occur only
		once per hour, then no explicit rate limiting is
		required, unless your system is already badly broken.
		Older versions of the dcache subsystem take this approach,
		guarding updates with a global lock, limiting their rate.

	c.	Trusted update -- if updates can only be done manually by
		superuser or some other trusted user, then it might not
		be necessary to automatically limit them.  The theory
		here is that superuser already has lots of ways to crash
		the machine.

	d.	Periodically invoke rcu_barrier(), permitting a limited
		number of updates per grace period.

	The same cautions apply to call_srcu(), call_rcu_tasks(), and
	call_rcu_tasks_trace().  This is why there is an srcu_barrier(),
	rcu_barrier_tasks(), and rcu_barrier_tasks_trace(), respectively.

	Note that although these primitives do take action to avoid
	memory exhaustion when any given CPU has too many callbacks,
	a determined user or administrator can still exhaust memory.
	This is especially the case if a system with a large number of
	CPUs has been configured to offload all of its RCU callbacks onto
	a single CPU, or if the system has relatively little free memory.
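
	Here is one sketch of approach (a): hypothetical updaters stall
	whenever too many frees await a grace period.  The limit, the
	names, and the approximate (racy but bounded) counting are all
	illustrative, and struct foo is again borrowed from item 1::

		#define MAX_PENDING_FREES	1000	/* Illustrative limit. */

		static atomic_t pending_frees;
		static DECLARE_WAIT_QUEUE_HEAD(pending_wq);

		static void throttled_free_cb(struct rcu_head *rhp)
		{
			kfree(container_of(rhp, struct foo, rcu));
			if (atomic_dec_return(&pending_frees) < MAX_PENDING_FREES)
				wake_up(&pending_wq);	/* Legal from softirq. */
		}

		/* Process context only: wait_event() can block. */
		static void throttled_free(struct foo *fp)
		{
			wait_event(pending_wq,
				   atomic_read(&pending_frees) < MAX_PENDING_FREES);
			atomic_inc(&pending_frees);
			call_rcu(&fp->rcu, throttled_free_cb);
		}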

9.	All RCU list-traversal primitives, which include
	rcu_dereference(), list_for_each_entry_rcu(), and
	list_for_each_safe_rcu(), must be either within an RCU read-side
	critical section or must be protected by appropriate update-side
	locks.	RCU read-side critical sections are delimited by
	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
	case the matching rcu_dereference() primitive must be used in
	order to keep lockdep happy, in this case, rcu_dereference_bh().

	The reason that it is permissible to use RCU list-traversal
	primitives when the update-side lock is held is that doing so
	can be quite helpful in reducing code bloat when common code is
	shared between readers and updaters.  Additional primitives
	are provided for this case, as discussed in lockdep.rst.

	One exception to this rule is when data is only ever added to
	the linked data structure, and is never removed during any
	time that readers might be accessing that structure.  In such
	cases, READ_ONCE() may be used in place of rcu_dereference()
	and the read-side markers (rcu_read_lock() and rcu_read_unlock(),
	for example) may be omitted.
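
	For example, again using the illustrative names from item 1,
	common code can tell lockdep that either form of protection is
	acceptable via the optional condition argument of
	list_for_each_entry_rcu()::

		/* Legal under rcu_read_lock() *or* while holding foo_lock.
		 * The caller's protection must also cover any use of the
		 * returned pointer. */
		static struct foo *find_foo(int key)
		{
			struct foo *fp;

			list_for_each_entry_rcu(fp, &foo_head, list,
						lockdep_is_held(&foo_lock))
				if (fp->key == key)
					return fp;
			return NULL;
		}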

10.	Conversely, if you are in an RCU read-side critical section,
	and you don't hold the appropriate update-side lock, you *must*
	use the "_rcu()" variants of the list macros.  Failing to do so
	will break Alpha, cause aggressive compilers to generate bad code,
	and confuse people trying to understand your code.

11.	Any lock acquired by an RCU callback must be acquired elsewhere
	with softirq disabled, e.g., via spin_lock_bh().  Failing to
	disable softirq on a given acquisition of that lock will result
	in deadlock as soon as the RCU softirq handler happens to run
	your RCU callback while interrupting that acquisition's critical
	section.
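
	For example, with an illustrative lock shared between a callback
	and process-level code::

		static DEFINE_SPINLOCK(cb_lock); /* Shared with the callback. */

		static void my_rcu_cb(struct rcu_head *rhp)
		{
			spin_lock(&cb_lock);	/* BH is already disabled here. */
			/* ... update the shared state ... */
			spin_unlock(&cb_lock);
		}

		static void process_level_update(void)
		{
			spin_lock_bh(&cb_lock);	/* _bh avoids deadlock with
						 * the callback. */
			/* ... update the shared state ... */
			spin_unlock_bh(&cb_lock);
		}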

12.	RCU callbacks can be and are executed in parallel.  In many cases,
	the callback code simply wraps kfree(), so that this
	is not an issue (or, more accurately, to the extent that it is
	an issue, the memory-allocator locking handles it).  However,
	if the callbacks do manipulate a shared data structure, they
	must use whatever locking or other synchronization is required
	to safely access and/or modify that data structure.

	Do not assume that RCU callbacks will be executed on the same
	CPU that executed the corresponding call_rcu(), call_srcu(),
	call_rcu_tasks(), or call_rcu_tasks_trace().  For example, if
	a given CPU goes offline while having an RCU callback pending,
	then that RCU callback will execute on some surviving CPU.
	(If this was not the case, a self-spawning RCU callback would
	prevent the victim CPU from ever going offline.)  Furthermore,
	CPUs designated by rcu_nocbs= might well *always* have their
	RCU callbacks executed on some other CPUs, in fact, for some
	real-time workloads, this is the whole point of using the
	rcu_nocbs= kernel boot parameter.

	In addition, do not assume that callbacks queued in a given order
	will be invoked in that order, even if they all are queued on the
	same CPU.  Furthermore, do not assume that same-CPU callbacks will
	be invoked serially.  For example, in recent kernels, CPUs can be
	switched between offloaded and de-offloaded callback invocation,
	and while a given CPU is undergoing such a switch, its callbacks
	might be concurrently invoked by that CPU's softirq handler and
	that CPU's rcuo kthread.  At such times, that CPU's callbacks
	might be executed both concurrently and out of order.

13.	Unlike most flavors of RCU, it *is* permissible to block in an
	SRCU read-side critical section (demarked by srcu_read_lock()
	and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
	As with RCU, guard(srcu)() and scoped_guard(srcu) forms are
	available, and often provide greater ease of use.  Please note
	that if you don't need to sleep in read-side critical sections,
	you should be using RCU rather than SRCU, because RCU is almost
	always faster and easier to use than is SRCU.

	Also unlike other forms of RCU, explicit initialization
	and cleanup is required either at build time via
	DEFINE_SRCU(), DEFINE_STATIC_SRCU(), DEFINE_SRCU_FAST(),
	or DEFINE_STATIC_SRCU_FAST() or at runtime via either
	init_srcu_struct() or init_srcu_struct_fast() and
	cleanup_srcu_struct().	These last three are passed a
	`struct srcu_struct` that defines the scope of a given
	SRCU domain.  Once initialized, the srcu_struct is passed
	to srcu_read_lock(), srcu_read_unlock(), synchronize_srcu(),
	synchronize_srcu_expedited(), and call_srcu().	A given
	synchronize_srcu() waits only for SRCU read-side critical
	sections governed by srcu_read_lock() and srcu_read_unlock()
	calls that have been passed the same srcu_struct.  This property
	is what makes sleeping read-side critical sections tolerable --
	a given subsystem delays only its own updates, not those of other
	subsystems using SRCU.	Therefore, SRCU is less prone to OOM the
	system than RCU would be if RCU's read-side critical sections
	were permitted to sleep.
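
	For example, here is a minimal SRCU sketch with illustrative
	names, using build-time initialization; as in item 7, the "true"
	lockdep condition stands in for real update-side locking::

		struct my_data {
			int val;
		};

		DEFINE_STATIC_SRCU(my_srcu);
		static struct my_data __rcu *my_ptr; /* Assumed never NULL. */

		static int my_read(void)
		{
			int idx, val;

			idx = srcu_read_lock(&my_srcu);
			/* Blocking would be legal in this region. */
			val = srcu_dereference(my_ptr, &my_srcu)->val;
			srcu_read_unlock(&my_srcu, idx);
			return val;
		}

		static void my_update(struct my_data *newp)
		{
			struct my_data *oldp =
				rcu_replace_pointer(my_ptr, newp, true);

			synchronize_srcu(&my_srcu); /* Waits only on my_srcu
						     * readers. */
			kfree(oldp);
		}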

	The ability to sleep in read-side critical sections does not
	come for free.	First, corresponding srcu_read_lock() and
	srcu_read_unlock() calls must be passed the same srcu_struct.
	Second, grace-period-detection overhead is amortized only
	over those updates sharing a given srcu_struct, rather than
	being globally amortized as they are for other forms of RCU.
	Therefore, SRCU should be used in preference to rw_semaphore
	only in extremely read-intensive situations, or in situations
	requiring SRCU's read-side deadlock immunity or low read-side
	realtime latency.  You should also consider percpu_rw_semaphore
	when you need lightweight readers.

	SRCU's expedited primitive (synchronize_srcu_expedited())
	never sends IPIs to other CPUs, so it is easier on
	real-time workloads than is synchronize_rcu_expedited().

	It is also permissible to sleep in RCU Tasks Trace read-side
	critical sections, which are delimited by rcu_read_lock_trace()
	and rcu_read_unlock_trace().  However, this is a specialized
	flavor of RCU, and you should not use it without first checking
	with its current users.  In most cases, you should instead
	use SRCU.  As with RCU and SRCU, guard(rcu_tasks_trace)() and
	scoped_guard(rcu_tasks_trace) are available, and often provide
	greater ease of use.

	Note that rcu_assign_pointer() relates to SRCU just as it does to
	other forms of RCU, but instead of rcu_dereference() you should
	use srcu_dereference() in order to avoid lockdep splats.

14.	The whole point of call_rcu(), synchronize_rcu(), and friends
	is to wait until all pre-existing readers have finished before
	carrying out some otherwise-destructive operation.  It is
	therefore critically important to *first* remove any path
	that readers can follow that could be affected by the
	destructive operation, and *only then* invoke call_rcu(),
	synchronize_rcu(), or friends.

	Because these primitives only wait for pre-existing readers, it
	is the caller's responsibility to guarantee that any subsequent
	readers will execute safely.
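
	For example, again with the illustrative names from item 1, note
	the strict ordering: unpublish first, wait second, destroy last::

		static void delete_foo(struct foo *fp)
		{
			spin_lock(&foo_lock);
			list_del_rcu(&fp->list); /* 1. Remove all reader paths. */
			spin_unlock(&foo_lock);
			synchronize_rcu();	/* 2. Wait for pre-existing
						 * readers. */
			kfree(fp);		/* 3. Only now destroy. */
		}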

15.	The various RCU read-side primitives do *not* necessarily contain
	memory barriers.  You should therefore plan for the CPU
	and the compiler to freely reorder code into and out of RCU
	read-side critical sections.  It is the responsibility of the
	RCU update-side primitives to deal with this.

	For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
	immediately after an srcu_read_unlock() to get a full barrier.

16.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
	__rcu sparse checks to validate your RCU code.	These can help
	find problems as follows:

	CONFIG_PROVE_LOCKING:
		check that accesses to RCU-protected data structures
		are carried out under the proper RCU read-side critical
		section, while holding the right combination of locks,
		or whatever other conditions are appropriate.

	CONFIG_DEBUG_OBJECTS_RCU_HEAD:
		check that you don't pass the same object to call_rcu()
		(or friends) before an RCU grace period has elapsed
		since the last time that you passed that same object to
		call_rcu() (or friends).

	CONFIG_RCU_STRICT_GRACE_PERIOD:
		combine with KASAN to check for pointers leaked out
		of RCU read-side critical sections.  This Kconfig
		option is tough on both performance and scalability,
		and so is limited to four-CPU systems.

	__rcu sparse checks:
		tag the pointer to the RCU-protected data structure
		with __rcu, and sparse will warn you if you access that
		pointer without the services of one of the variants
		of rcu_dereference().

	These debugging aids can help you find problems that are
	otherwise extremely difficult to spot.

17.	If you pass a callback function defined within a module
	to one of call_rcu(), call_srcu(), call_rcu_tasks(), or
	call_rcu_tasks_trace(), then it is necessary to wait for all
	pending callbacks to be invoked before unloading that module.
	Note that it is absolutely *not* sufficient to wait for a grace
	period!  For example, the synchronize_rcu() implementation is
	*not* guaranteed to wait for callbacks registered on other CPUs
	via call_rcu(), nor even for callbacks on the current CPU if
	that CPU recently went offline and came back online.

	You instead need to use one of the barrier functions:

	-	call_rcu() -> rcu_barrier()
	-	call_srcu() -> srcu_barrier()
	-	call_rcu_tasks() -> rcu_barrier_tasks()
	-	call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()

	However, these barrier functions are absolutely *not* guaranteed
	to wait for a grace period.  For example, if there are no
	call_rcu() callbacks queued anywhere in the system, rcu_barrier()
	can and will return immediately.

	So if you need to wait for both a grace period and for all
	pre-existing callbacks, you will need to invoke both functions,
	with the pair depending on the flavor of RCU:

	-	Either synchronize_rcu() or synchronize_rcu_expedited(),
		together with rcu_barrier()
	-	Either synchronize_srcu() or synchronize_srcu_expedited(),
		together with srcu_barrier()
	-	synchronize_rcu_tasks() and rcu_barrier_tasks()
	-	synchronize_rcu_tasks_trace() and rcu_barrier_tasks_trace()

	If necessary, you can use something like workqueues to execute
	the requisite pair of functions concurrently.
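
	For example, a module's exit handler might pair the two as in
	this sketch, where stop_queueing_callbacks() stands in for
	whatever mechanism prevents new callbacks from being posted::

		static void __exit my_module_exit(void)
		{
			stop_queueing_callbacks();	/* Illustrative. */
			synchronize_rcu();	/* Wait for pre-existing
						 * readers. */
			rcu_barrier();		/* Wait for all queued
						 * callbacks to finish. */
		}
		module_exit(my_module_exit);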

	See rcubarrier.rst for more information.