Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

rcu: Remove references to old grace-period-wait primitives

The rcu_barrier_sched(), synchronize_sched(), and synchronize_rcu_bh()
RCU API members have been gone for many years. This commit therefore
removes non-historical instances of them.

Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
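
As background for the text being updated: the v4.20/v5.0 flavor consolidation folded each of the removed primitives into its plain-RCU counterpart — synchronize_sched() and synchronize_rcu_bh() into synchronize_rcu(), and rcu_barrier_sched() into rcu_barrier(). A minimal kernel-style sketch of the consolidated update-side usage (not a standalone compilable unit; struct foo, gp_ptr, foo_lock, and update_foo() are hypothetical names for illustration only):

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical RCU-protected pointer and its update-side lock. */
struct foo {
	int data;
};
static struct foo __rcu *gp_ptr;
static DEFINE_SPINLOCK(foo_lock);

static void update_foo(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&foo_lock);
	oldp = rcu_dereference_protected(gp_ptr,
					 lockdep_is_held(&foo_lock));
	rcu_assign_pointer(gp_ptr, newp);
	spin_unlock(&foo_lock);

	/*
	 * Formerly synchronize_sched() or synchronize_rcu_bh():
	 * since the consolidation, synchronize_rcu() also waits for
	 * preempt-, irq-, and bh-disabled regions of code, not just
	 * rcu_read_lock()/rcu_read_unlock() sections.
	 */
	synchronize_rcu();
	kfree(oldp);
}
```

Similarly, code that once invoked rcu_barrier_sched() (for example, before module unload) now calls rcu_barrier(), which waits for all outstanding call_rcu() callbacks regardless of which former flavor queued them.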

Authored by Paul E. McKenney, committed by Boqun Feng
73298c7c 81a208c5

2 files changed, 8 insertions(+), 14 deletions(-)

Documentation/RCU/rcubarrier.rst | +1 -4

@@ -329,10 +329,7 @@
 was first added back in 2005. This is because on_each_cpu()
 disables preemption, which acted as an RCU read-side critical
 section, thus preventing CPU 0's grace period from completing
-until on_each_cpu() had dealt with all of the CPUs. However,
-with the advent of preemptible RCU, rcu_barrier() no longer
-waited on nonpreemptible regions of code in preemptible kernels,
-that being the job of the new rcu_barrier_sched() function.
+until on_each_cpu() had dealt with all of the CPUs.
 
 However, with the RCU flavor consolidation around v4.20, this
 possibility was once again ruled out, because the consolidated
include/linux/rcupdate.h | +7 -10

@@ -806,11 +806,9 @@
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
- * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
- * wait for regions of code with preemption disabled, including regions of
- * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
- * define synchronize_sched(), only code enclosed within rcu_read_lock()
- * and rcu_read_unlock() are guaranteed to be waited for.
+ * Both synchronize_rcu() and call_rcu() also wait for regions of code
+ * with preemption disabled, including regions of code with interrupts or
+ * softirqs disabled.
  *
  * Note, however, that RCU callbacks are permitted to run concurrently
  * with new RCU read-side critical sections. One way that this can happen
@@ -863,11 +865,10 @@
  * rcu_read_unlock() - marks the end of an RCU read-side critical section.
  *
  * In almost all situations, rcu_read_unlock() is immune from deadlock.
- * In recent kernels that have consolidated synchronize_sched() and
- * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
- * also extends to the scheduler's runqueue and priority-inheritance
- * spinlocks, courtesy of the quiescent-state deferral that is carried
- * out when rcu_read_unlock() is invoked with interrupts disabled.
+ * This deadlock immunity also extends to the scheduler's runqueue
+ * and priority-inheritance spinlocks, courtesy of the quiescent-state
+ * deferral that is carried out when rcu_read_unlock() is invoked with
+ * interrupts disabled.
  *
  * See rcu_read_lock() for more information.
  */
··· 806 806 * sections, invocation of the corresponding RCU callback is deferred 807 807 * until after the all the other CPUs exit their critical sections. 808 808 * 809 - * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also 810 - * wait for regions of code with preemption disabled, including regions of 811 - * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which 812 - * define synchronize_sched(), only code enclosed within rcu_read_lock() 813 - * and rcu_read_unlock() are guaranteed to be waited for. 809 + * Both synchronize_rcu() and call_rcu() also wait for regions of code 810 + * with preemption disabled, including regions of code with interrupts or 811 + * softirqs disabled. 814 812 * 815 813 * Note, however, that RCU callbacks are permitted to run concurrently 816 814 * with new RCU read-side critical sections. One way that this can happen ··· 863 865 * rcu_read_unlock() - marks the end of an RCU read-side critical section. 864 866 * 865 867 * In almost all situations, rcu_read_unlock() is immune from deadlock. 866 - * In recent kernels that have consolidated synchronize_sched() and 867 - * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity 868 - * also extends to the scheduler's runqueue and priority-inheritance 869 - * spinlocks, courtesy of the quiescent-state deferral that is carried 870 - * out when rcu_read_unlock() is invoked with interrupts disabled. 868 + * This deadlock immunity also extends to the scheduler's runqueue 869 + * and priority-inheritance spinlocks, courtesy of the quiescent-state 870 + * deferral that is carried out when rcu_read_unlock() is invoked with 871 + * interrupts disabled. 871 872 * 872 873 * See rcu_read_lock() for more information. 873 874 */