
doc: Use CONFIG_PREEMPTION

CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
Both PREEMPT and PREEMPT_RT require the same functionality which today
depends on CONFIG_PREEMPT.

Update the documentation to mention CONFIG_PREEMPTION. Spell out
CONFIG_PREEMPT_RT (instead of PREEMPT_RT) since it is an option now.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

Authored by Sebastian Andrzej Siewior, committed by Paul E. McKenney
81ad58be 361c0f3d

+24 -24
+2 -2
Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.rst
···
  38   38   RCU-preempt Expedited Grace Periods
  39   39   ===================================
  40   40
  41      - ``CONFIG_PREEMPT=y`` kernels implement RCU-preempt.
       41 + ``CONFIG_PREEMPTION=y`` kernels implement RCU-preempt.
  42   42   The overall flow of the handling of a given CPU by an RCU-preempt
  43   43   expedited grace period is shown in the following diagram:
  44   44
···
 112  112   RCU-sched Expedited Grace Periods
 113  113   ---------------------------------
 114  114
 115      - ``CONFIG_PREEMPT=n`` kernels implement RCU-sched. The overall flow of
      115 + ``CONFIG_PREEMPTION=n`` kernels implement RCU-sched. The overall flow of
 116  116   the handling of a given CPU by an RCU-sched expedited grace period is
 117  117   shown in the following diagram:
 118  118
+11 -11
Documentation/RCU/Design/Requirements/Requirements.rst
···
   78    78   Production-quality implementations of rcu_read_lock() and
   79    79   rcu_read_unlock() are extremely lightweight, and in fact have
   80    80   exactly zero overhead in Linux kernels built for production use with
   81       - ``CONFIG_PREEMPT=n``.
        81 + ``CONFIG_PREEMPTION=n``.
   82    82
   83    83   This guarantee allows ordering to be enforced with extremely low
   84    84   overhead to readers, for example:
···
 1181  1181   costs have plummeted. However, as I learned from Matt Mackall's
 1182  1182   `bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory
 1183  1183   footprint is critically important on single-CPU systems with
 1184       - non-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny
       1184 + non-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny
 1185  1185   RCU <https://lore.kernel.org/r/20090113221724.GA15307@linux.vnet.ibm.com>`__
 1186  1186   was born. Josh Triplett has since taken over the small-memory banner
 1187  1187   with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
···
 1497  1497
 1498  1498   Implementations of RCU for which rcu_read_lock() and
 1499  1499   rcu_read_unlock() generate no code, such as Linux-kernel RCU when
 1500       - ``CONFIG_PREEMPT=n``, can be nested arbitrarily deeply. After all, there
       1500 + ``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there
 1501  1501   is no overhead. Except that if all these instances of
 1502  1502   rcu_read_lock() and rcu_read_unlock() are visible to the
 1503  1503   compiler, compilation will eventually fail due to exhausting memory,
···
 1769  1769
 1770  1770   However, once the scheduler has spawned its first kthread, this early
 1771  1771   boot trick fails for synchronize_rcu() (as well as for
 1772       - synchronize_rcu_expedited()) in ``CONFIG_PREEMPT=y`` kernels. The
       1772 + synchronize_rcu_expedited()) in ``CONFIG_PREEMPTION=y`` kernels. The
 1773  1773   reason is that an RCU read-side critical section might be preempted,
 1774  1774   which means that a subsequent synchronize_rcu() really does have to
 1775  1775   wait for something, as opposed to simply returning immediately.
···
 2038  2038   5 rcu_read_unlock();
 2039  2039   6 do_something_with(v, user_v);
 2040  2040
 2041       - If the compiler did make this transformation in a ``CONFIG_PREEMPT=n`` kernel
       2041 + If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
 2042  2042   build, and if get_user() did page fault, the result would be a quiescent
 2043  2043   state in the middle of an RCU read-side critical section. This misplaced
 2044  2044   quiescent state could result in line 4 being a use-after-free access,
···
 2320  2320   patchset <https://wiki.linuxfoundation.org/realtime/>`__. The
 2321  2321   real-time-latency response requirements are such that the traditional
 2322  2322   approach of disabling preemption across RCU read-side critical sections
 2323       - is inappropriate. Kernels built with ``CONFIG_PREEMPT=y`` therefore use
       2323 + is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
 2324  2324   an RCU implementation that allows RCU read-side critical sections to be
 2325  2325   preempted. This requirement made its presence known after users made it
 2326  2326   clear that an earlier `real-time
···
 2460  2460   RCU read-side critical section can be a quiescent state. Therefore,
 2461  2461   *RCU-sched* was created, which follows “classic” RCU in that an
 2462  2462   RCU-sched grace period waits for pre-existing interrupt and NMI
 2463       - handlers. In kernels built with ``CONFIG_PREEMPT=n``, the RCU and
       2463 + handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
 2464  2464   RCU-sched APIs have identical implementations, while kernels built with
 2465       - ``CONFIG_PREEMPT=y`` provide a separate implementation for each.
       2465 + ``CONFIG_PREEMPTION=y`` provide a separate implementation for each.
 2466  2466
 2467       - Note well that in ``CONFIG_PREEMPT=y`` kernels,
       2467 + Note well that in ``CONFIG_PREEMPTION=y`` kernels,
 2468  2468   rcu_read_lock_sched() and rcu_read_unlock_sched() disable and
 2469  2469   re-enable preemption, respectively. This means that if there was a
 2470  2470   preemption attempt during the RCU-sched read-side critical section,
···
 2627  2627
 2628  2628   The tasks-RCU API is quite compact, consisting only of
 2629  2629   call_rcu_tasks(), synchronize_rcu_tasks(), and
 2630       - rcu_barrier_tasks(). In ``CONFIG_PREEMPT=n`` kernels, trampolines
       2630 + rcu_barrier_tasks(). In ``CONFIG_PREEMPTION=n`` kernels, trampolines
 2631  2631   cannot be preempted, so these APIs map to call_rcu(),
 2632  2632   synchronize_rcu(), and rcu_barrier(), respectively. In
 2633       - ``CONFIG_PREEMPT=y`` kernels, trampolines can be preempted, and these
       2633 + ``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
 2634  2634   three APIs are therefore implemented by separate functions that check
 2635  2635   for voluntary context switches.
 2636  2636
+1 -1
Documentation/RCU/checklist.rst
···
 212  212   the rest of the system.
 213  213
 214  214   7. As of v4.20, a given kernel implements only one RCU flavor,
 215      - which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
      215 + which is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
 216  216   If the updater uses call_rcu() or synchronize_rcu(),
 217  217   then the corresponding readers may use rcu_read_lock() and
 218  218   rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
+3 -3
Documentation/RCU/rcubarrier.rst
···
   9    9   of as a replacement for read-writer locking (among other things), but with
  10   10   very low-overhead readers that are immune to deadlock, priority inversion,
  11   11   and unbounded latency. RCU read-side critical sections are delimited
  12      - by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
       12 + by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
  13   13   kernels, generate no code whatsoever.
  14   14
  15   15   This means that RCU writers are unaware of the presence of concurrent
···
 329  329   to smp_call_function() and further to smp_call_function_on_cpu(),
 330  330   causing this latter to spin until the cross-CPU invocation of
 331  331   rcu_barrier_func() has completed. This by itself would prevent
 332      - a grace period from completing on non-CONFIG_PREEMPT kernels,
      332 + a grace period from completing on non-CONFIG_PREEMPTION kernels,
 333  333   since each CPU must undergo a context switch (or other quiescent
 334  334   state) before the grace period can complete. However, this is
 335      - of no use in CONFIG_PREEMPT kernels.
      335 + of no use in CONFIG_PREEMPTION kernels.
 336  336
 337  337   Therefore, on_each_cpu() disables preemption across its call
 338  338   to smp_call_function() and also across the local call to
+2 -2
Documentation/RCU/stallwarn.rst
···
  25   25
  26   26   - A CPU looping with bottom halves disabled.
  27   27
  28      - - For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
       28 + - For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the kernel
  29   29   without invoking schedule(). If the looping in the kernel is
  30   30   really expected and desirable behavior, you might need to add
  31   31   some calls to cond_resched().
···
  44   44   result in the ``rcu_.*kthread starved for`` console-log message,
  45   45   which will include additional debugging information.
  46   46
  47      - - A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
       47 + - A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
  48   48   happen to preempt a low-priority task in the middle of an RCU
  49   49   read-side critical section. This is especially damaging if
  50   50   that low-priority task is not permitted to run on any other CPU,
+5 -5
Documentation/RCU/whatisRCU.rst
···
  683   683   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  684   684   This section presents a "toy" RCU implementation that is based on
  685   685   "classic RCU". It is also short on performance (but only for updates) and
  686       - on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
        686 + on features such as hotplug CPU and the ability to run in CONFIG_PREEMPTION
  687   687   kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
  688   688   are the same as those shown in the preceding section, so they are omitted.
  689   689   ::
···
  739   739   Quick Quiz #3:
  740   740   If it is illegal to block in an RCU read-side
  741   741   critical section, what the heck do you do in
  742       - PREEMPT_RT, where normal spinlocks can block???
        742 + CONFIG_PREEMPT_RT, where normal spinlocks can block???
  743   743
  744   744   :ref:`Answers to Quick Quiz <8_whatisRCU>`
  745   745
···
 1093  1093   overhead is **negative**.
 1094  1094
 1095  1095   Answer:
 1096       - Imagine a single-CPU system with a non-CONFIG_PREEMPT
       1096 + Imagine a single-CPU system with a non-CONFIG_PREEMPTION
 1097  1097   kernel where a routing table is used by process-context
 1098  1098   code, but can be updated by irq-context code (for example,
 1099  1099   by an "ICMP REDIRECT" packet). The usual way of handling
···
 1120  1120   Quick Quiz #3:
 1121  1121   If it is illegal to block in an RCU read-side
 1122  1122   critical section, what the heck do you do in
 1123       - PREEMPT_RT, where normal spinlocks can block???
       1123 + CONFIG_PREEMPT_RT, where normal spinlocks can block???
 1124  1124
 1125  1125   Answer:
 1126       - Just as PREEMPT_RT permits preemption of spinlock
       1126 + Just as CONFIG_PREEMPT_RT permits preemption of spinlock
 1127  1127   critical sections, it permits preemption of RCU
 1128  1128   read-side critical sections. It also permits
 1129  1129   spinlocks blocking while in RCU read-side critical