Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

doc: Remove obsolete RCU update functions from RCU documentation

Now that synchronize_rcu_bh, synchronize_rcu_bh_expedited, call_rcu_bh,
rcu_barrier_bh, synchronize_sched, synchronize_sched_expedited,
call_rcu_sched, rcu_barrier_sched, get_state_synchronize_sched,
and cond_synchronize_sched are obsolete, let's remove them from the
documentation aside from a small historical section.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>

+66 -76
+1 -2
Documentation/RCU/Design/Data-Structures/Data-Structures.html
···
155 155   of the level of loading on the system.
156 156
157 157   </p><p>RCU updaters wait for normal grace periods by registering
158     - RCU callbacks, either directly via <tt>call_rcu()</tt> and
159     - friends (namely <tt>call_rcu_bh()</tt> and <tt>call_rcu_sched()</tt>),
    158 + RCU callbacks, either directly via <tt>call_rcu()</tt>
160 159   or indirectly via <tt>synchronize_rcu()</tt> and friends.
161 160   RCU callbacks are represented by <tt>rcu_head</tt> structures,
162 161   which are queued on <tt>rcu_data</tt> structures while they are
+3 -1
Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.html
···
 56  56   RCU-preempt Expedited Grace Periods</a></h2>
 57  57
 58  58   <p>
     59 + <tt>CONFIG_PREEMPT=y</tt> kernels implement RCU-preempt.
 59  60   The overall flow of the handling of a given CPU by an RCU-preempt
 60  61   expedited grace period is shown in the following diagram:
 61  62
···
140 139   RCU-sched Expedited Grace Periods</a></h2>
141 140
142 141   <p>
    142 + <tt>CONFIG_PREEMPT=n</tt> kernels implement RCU-sched.
143 143   The overall flow of the handling of a given CPU by an RCU-sched
144 144   expedited grace period is shown in the following diagram:
145 145
···
148 146
149 147   <p>
150 148   As with RCU-preempt, RCU-sched's
151     - <tt>synchronize_sched_expedited()</tt> ignores offline and
    149 + <tt>synchronize_rcu_expedited()</tt> ignores offline and
152 150   idle CPUs, again because they are in remotely detectable
153 151   quiescent states.
154 152   However, because the
+2 -3
Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
···
 34  34   period is guaranteed to see the effects of all accesses following the end
 35  35   of that grace period that are within RCU read-side critical sections.
 36  36
 37      - <p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>,
 38      - for which RCU-sched read-side critical sections include any region
     37 + <p>Note well that RCU-sched read-side critical sections include any region
 39  38   of code for which preemption is disabled.
 40  39   Given that each individual machine instruction can be thought of as
 41  40   an extremely small region of preemption-disabled code, one can think of
 42      - <tt>synchronize_sched()</tt> as <tt>smp_mb()</tt> on steroids.
     41 + <tt>synchronize_rcu()</tt> as <tt>smp_mb()</tt> on steroids.
 43  42
 44  43   <p>RCU updaters use this guarantee by splitting their updates into
 45  44   two phases, one of which is executed before the grace period and
+7 -6
Documentation/RCU/NMI-RCU.txt
···
 81  81   up any data structures used by the old NMI handler until execution
 82  82   of it completes on all other CPUs.
 83  83
 84      - One way to accomplish this is via synchronize_sched(), perhaps as
     84 + One way to accomplish this is via synchronize_rcu(), perhaps as
 85  85   follows:
 86  86
 87  87           unset_nmi_callback();
 88      -         synchronize_sched();
     88 +         synchronize_rcu();
 89  89           kfree(my_nmi_data);
 90  90
 91      - This works because synchronize_sched() blocks until all CPUs complete
 92      - any preemption-disabled segments of code that they were executing.
 93      - Since NMI handlers disable preemption, synchronize_sched() is guaranteed
     91 + This works because (as of v4.20) synchronize_rcu() blocks until all
     92 + CPUs complete any preemption-disabled segments of code that they were
     93 + executing.
     94 + Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
 94  95   not to return until all ongoing NMI handlers exit.  It is therefore safe
 95      - to free up the handler's data as soon as synchronize_sched() returns.
     96 + to free up the handler's data as soon as synchronize_rcu() returns.
 96  97
 97  98   Important note: for this to work, the architecture in question must
 98  99   invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
+2 -4
Documentation/RCU/UP.txt
···
 86  86   infrastructure -must- respect grace periods, and -must- invoke callbacks
 87  87   from a known environment in which no locks are held.
 88  88
 89      - It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return
 90      - immediately on an UP system.  It is also safe for synchronize_rcu()
 91      - to return immediately on UP systems, except when running preemptable
 92      - RCU.
     89 + Note that it -is- safe for synchronize_rcu() to return immediately on
     90 + UP systems, including !PREEMPT SMP builds running on UP systems.
 93  91
 94  92   Quick Quiz #3: Why can't synchronize_rcu() return immediately on
 95  93           UP systems running preemptable RCU?
+34 -42
Documentation/RCU/checklist.txt
···
182 182           when publicizing a pointer to a structure that can
183 183           be traversed by an RCU read-side critical section.
184 184
185     - 5.      If call_rcu(), or a related primitive such as call_rcu_bh(),
186     -         call_rcu_sched(), or call_srcu() is used, the callback function
187     -         will be called from softirq context.  In particular, it cannot
188     -         block.
    185 + 5.      If call_rcu() or call_srcu() is used, the callback function will
    186 +         be called from softirq context.  In particular, it cannot block.
189 187
190     - 6.      Since synchronize_rcu() can block, it cannot be called from
191     -         any sort of irq context.  The same rule applies for
192     -         synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
193     -         synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
194     -         synchronize_sched_expedite(), and synchronize_srcu_expedited().
    188 + 6.      Since synchronize_rcu() can block, it cannot be called
    189 +         from any sort of irq context.  The same rule applies
    190 +         for synchronize_srcu(), synchronize_rcu_expedited(), and
    191 +         synchronize_srcu_expedited().
195 192
196 193           The expedited forms of these primitives have the same semantics
197 194           as the non-expedited forms, but expediting is both expensive and
···
209 212           of the system, especially to real-time workloads running on
210 213           the rest of the system.
211 214
212     - 7.      If the updater uses call_rcu() or synchronize_rcu(), then the
213     -         corresponding readers must use rcu_read_lock() and
214     -         rcu_read_unlock().  If the updater uses call_rcu_bh() or
215     -         synchronize_rcu_bh(), then the corresponding readers must
216     -         use rcu_read_lock_bh() and rcu_read_unlock_bh().  If the
217     -         updater uses call_rcu_sched() or synchronize_sched(), then
218     -         the corresponding readers must disable preemption, possibly
219     -         by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
220     -         If the updater uses synchronize_srcu() or call_srcu(), then
221     -         the corresponding readers must use srcu_read_lock() and
    215 + 7.      As of v4.20, a given kernel implements only one RCU flavor,
    216 +         which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
    217 +         If the updater uses call_rcu() or synchronize_rcu(),
    218 +         then the corresponding readers may use rcu_read_lock() and
    219 +         rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
    220 +         or any pair of primitives that disables and re-enables preemption,
    221 +         for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
    222 +         If the updater uses synchronize_srcu() or call_srcu(),
    223 +         then the corresponding readers must use srcu_read_lock() and
222 224           srcu_read_unlock(), and with the same srcu_struct.  The rules for
223 225           the expedited primitives are the same as for their non-expedited
224 226           counterparts.  Mixing things up will result in confusion and
225     -         broken kernels.
    227 +         broken kernels, and has even resulted in an exploitable security
    228 +         issue.
226 229
227 230           One exception to this rule: rcu_read_lock() and rcu_read_unlock()
228 231           may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
···
285 288           d.      Periodically invoke synchronize_rcu(), permitting a limited
286 289                   number of updates per grace period.
287 290
288     -         The same cautions apply to call_rcu_bh(), call_rcu_sched(),
289     -         call_srcu(), and kfree_rcu().
    291 +         The same cautions apply to call_srcu() and kfree_rcu().
290 292
291 293           Note that although these primitives do take action to avoid memory
292 294           exhaustion when any given CPU has too many callbacks, a determined
···
332 336           to safely access and/or modify that data structure.
333 337
334 338           RCU callbacks are -usually- executed on the same CPU that executed
335     -         the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
336     -         but are by -no- means guaranteed to be.  For example, if a given
337     -         CPU goes offline while having an RCU callback pending, then that
338     -         RCU callback will execute on some surviving CPU.  (If this was
339     -         not the case, a self-spawning RCU callback would prevent the
340     -         victim CPU from ever going offline.)
    339 +         the corresponding call_rcu() or call_srcu(), but are by -no-
    340 +         means guaranteed to be.  For example, if a given CPU goes offline
    341 +         while having an RCU callback pending, then that RCU callback
    342 +         will execute on some surviving CPU.  (If this was not the case,
    343 +         a self-spawning RCU callback would prevent the victim CPU from
    344 +         ever going offline.)
341 345
342 346   13.     Unlike other forms of RCU, it -is- permissible to block in an
343 347           SRCU read-side critical section (demarked by srcu_read_lock()
···
377 381
378 382           SRCU's expedited primitive (synchronize_srcu_expedited())
379 383           never sends IPIs to other CPUs, so it is easier on
380     -         real-time workloads than is synchronize_rcu_expedited(),
381     -         synchronize_rcu_bh_expedited() or synchronize_sched_expedited().
    384 +         real-time workloads than is synchronize_rcu_expedited().
382 385
383 386           Note that rcu_dereference() and rcu_assign_pointer() relate to
384 387           SRCU just as they do to other forms of RCU.
···
423 428           These debugging aids can help you find problems that are
424 429           otherwise extremely difficult to spot.
425 430
426     - 17.     If you register a callback using call_rcu(), call_rcu_bh(),
427     -         call_rcu_sched(), or call_srcu(), and pass in a function defined
428     -         within a loadable module, then it in necessary to wait for
429     -         all pending callbacks to be invoked after the last invocation
430     -         and before unloading that module.  Note that it is absolutely
431     -         -not- sufficient to wait for a grace period!  The current (say)
432     -         synchronize_rcu() implementation waits only for all previous
433     -         callbacks registered on the CPU that synchronize_rcu() is running
434     -         on, but it is -not- guaranteed to wait for callbacks registered
435     -         on other CPUs.
    431 + 17.     If you register a callback using call_rcu() or call_srcu(),
    432 +         and pass in a function defined within a loadable module,
    433 +         then it is necessary to wait for all pending callbacks to
    434 +         be invoked after the last invocation and before unloading
    435 +         that module.  Note that it is absolutely -not- sufficient to
    436 +         wait for a grace period!  The current (say) synchronize_rcu()
    437 +         implementation waits only for all previous callbacks registered
    438 +         on the CPU that synchronize_rcu() is running on, but it is -not-
    439 +         guaranteed to wait for callbacks registered on other CPUs.
436 440
437 441           You instead need to use one of the barrier functions:
438 442
439 443           o       call_rcu() -> rcu_barrier()
440     -         o       call_rcu_bh() -> rcu_barrier()
441     -         o       call_rcu_sched() -> rcu_barrier()
442 444           o       call_srcu() -> srcu_barrier()
443 445
444 446           However, these barrier functions are absolutely -not- guaranteed
+4 -4
Documentation/RCU/rcu.txt
···
 52  52   o How can I see where RCU is currently used in the Linux kernel?
 53  53
 54  54           Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu",
 55      -         "rcu_read_lock_bh", "rcu_read_unlock_bh", "call_rcu_bh",
 56      -         "srcu_read_lock", "srcu_read_unlock", "synchronize_rcu",
 57      -         "synchronize_net", "synchronize_srcu", and the other RCU
 58      -         primitives.  Or grab one of the cscope databases from:
     55 +         "rcu_read_lock_bh", "rcu_read_unlock_bh", "srcu_read_lock",
     56 +         "srcu_read_unlock", "synchronize_rcu", "synchronize_net",
     57 +         "synchronize_srcu", and the other RCU primitives.  Or grab one
     58 +         of the cscope databases from:
 59  59
 60  60   http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html
 61  61
+13 -14
Documentation/RCU/rcubarrier.txt
···
 83  83   2. Execute rcu_barrier().
 84  84   3. Allow the module to be unloaded.
 85  85
 86      - There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
 87      - functions for the other flavors of RCU, and you of course must match
 88      - the flavor of rcu_barrier() with that of call_rcu().  If your module
 89      - uses multiple flavors of call_rcu(), then it must also use multiple
     86 + There is also an srcu_barrier() function for SRCU, and you of course
     87 + must match the flavor of rcu_barrier() with that of call_rcu().  If your
     88 + module uses multiple flavors of call_rcu(), then it must also use multiple
 90  89   flavors of rcu_barrier() when unloading that module.  For example, if
 91      - it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
     90 + it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
 92  91   srcu_struct_2(), then the following three lines of code will be required
 93  92   when unloading:
 94  93
 95      -  1 rcu_barrier_bh();
     94 +  1 rcu_barrier();
 96  95    2 srcu_barrier(&srcu_struct_1);
 97  96    3 srcu_barrier(&srcu_struct_2);
 98  97
···
184 185   the timers, and only then invoke rcu_barrier() to wait for any remaining
185 186   RCU callbacks to complete.
186 187
187     - Of course, if you module uses call_rcu_bh(), you will need to invoke
188     - rcu_barrier_bh() before unloading.  Similarly, if your module uses
189     - call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
190     - unloading.  If your module uses call_rcu(), call_rcu_bh(), -and-
191     - call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
192     - rcu_barrier_bh(), and rcu_barrier_sched().
    188 + Of course, if your module uses call_rcu(), you will need to invoke
    189 + rcu_barrier() before unloading.  Similarly, if your module uses
    190 + call_srcu(), you will need to invoke srcu_barrier() before unloading,
    191 + and on the same srcu_struct structure.  If your module uses call_rcu()
    192 + -and- call_srcu(), then you will need to invoke rcu_barrier() -and-
    193 + srcu_barrier().
193 194
194 195
195 196   Implementing rcu_barrier()
···
222 223   ensures that all the calls to rcu_barrier_func() will have completed
223 224   before on_each_cpu() returns.  Line 9 then waits for the completion.
224 225
225     - This code was rewritten in 2008 to support rcu_barrier_bh() and
226     - rcu_barrier_sched() in addition to the original rcu_barrier().
    226 + This code was rewritten in 2008 and several times thereafter, but this
    227 + still gives the general idea.
227 228
228 229   The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
229 230   to post an RCU callback, as follows:
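The per-CPU scheme that the last rcubarrier.txt hunk describes (post one callback behind every CPU's queue, then wait for all of them) can be sketched as a toy single-process model. Every toy_* name below is ours, purely illustrative of the idea, not a kernel API.

```c
/* Toy single-process model of the rcu_barrier() scheme described
 * above: one FIFO callback queue per simulated "CPU"; the barrier
 * posts a counting callback behind the pending callbacks on every
 * queue, then waits until each of those callbacks has run. */
#include <stddef.h>

#define NCPUS 4
#define QLEN  16

struct toy_rcu_head {
	void (*func)(struct toy_rcu_head *head);
};

static struct toy_rcu_head *queue[NCPUS][QLEN];
static int qlen[NCPUS];
static int barrier_count;       /* barrier callbacks still pending */
static int invoked;             /* ordinary callbacks invoked so far */

static void toy_call_rcu(int cpu, struct toy_rcu_head *head,
			 void (*func)(struct toy_rcu_head *head))
{
	head->func = func;
	queue[cpu][qlen[cpu]++] = head; /* FIFO per-CPU, like the kernel */
}

static void count_cb(struct toy_rcu_head *head)
{
	(void)head;
	invoked++;              /* stands in for a module's callback */
}

static void barrier_cb(struct toy_rcu_head *head)
{
	(void)head;
	barrier_count--;        /* this CPU's queue is drained up to us */
}

/* Stand-in for grace-period-driven callback invocation on one CPU. */
static void toy_invoke_callbacks(int cpu)
{
	for (int i = 0; i < qlen[cpu]; i++)
		queue[cpu][i]->func(queue[cpu][i]);
	qlen[cpu] = 0;
}

static struct toy_rcu_head barrier_heads[NCPUS];

static int toy_rcu_barrier(void)
{
	barrier_count = NCPUS;
	/* Like on_each_cpu(rcu_barrier_func, ...): one callback per
	 * CPU, queued *behind* everything already posted there. */
	for (int cpu = 0; cpu < NCPUS; cpu++)
		toy_call_rcu(cpu, &barrier_heads[cpu], barrier_cb);
	/* "Wait for completion": drive every queue to empty. */
	for (int cpu = 0; cpu < NCPUS; cpu++)
		toy_invoke_callbacks(cpu);
	return barrier_count;   /* 0 once all barrier callbacks ran */
}
```

Because each barrier callback sits behind any earlier callbacks on its queue, the count can only reach zero after every previously posted callback has been invoked, which is exactly the property module-unload code relies on and why a plain grace-period wait is not enough.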