[!] Note:
        This file expands on the "Locking" section of recipes.txt,
        focusing on locklessly accessing shared variables that are
        otherwise protected by a lock.

Locking
=======

Locking is well-known and the common use cases are straightforward: Any
CPU holding a given lock sees any changes previously seen or made by any
CPU before it previously released that same lock. This last sentence
is the only part of this document that most developers will need to read.

However, developers who would like to also access lock-protected shared
variables outside of their corresponding locks should continue reading.


Locking and Prior Accesses
--------------------------

The basic rule of locking is worth repeating:

        Any CPU holding a given lock sees any changes previously seen
        or made by any CPU before it previously released that same lock.

Note that this statement is a bit stronger than "Any CPU holding a
given lock sees all changes made by any CPU during the time that CPU was
previously holding this same lock". For example, consider the following
pair of code fragments:

        /* See MP+polocks.litmus. */
        void CPU0(void)
        {
                WRITE_ONCE(x, 1);
                spin_lock(&mylock);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                r0 = READ_ONCE(y);
                spin_unlock(&mylock);
                r1 = READ_ONCE(x);
        }

The basic rule guarantees that if CPU0() acquires mylock before CPU1(),
then both r0 and r1 must be set to the value 1. This also has the
consequence that if the final value of r0 is equal to 1, then the final
value of r1 must also be equal to 1. In contrast, the weaker rule would
say nothing about the final value of r1.
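
In litmus-test terms, the outcome ruled out by this consequence can be
expressed with an "exists" clause along the following lines (a sketch;
the clause in the actual MP+polocks.litmus file may be worded
differently):

        exists (1:r0=1 /\ 1:r1=0)

Given the basic rule, a herd7 run should report that this outcome never
occurs.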


Locking and Subsequent Accesses
-------------------------------

The converse to the basic rule also holds: Any CPU holding a given
lock will not see any changes that will be made by any CPU after it
subsequently acquires this same lock. This converse statement is
illustrated by the following litmus test:

        /* See MP+porevlocks.litmus. */
        void CPU0(void)
        {
                r0 = READ_ONCE(y);
                spin_lock(&mylock);
                r1 = READ_ONCE(x);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                spin_unlock(&mylock);
                WRITE_ONCE(y, 1);
        }

This converse to the basic rule guarantees that if CPU0() acquires
mylock before CPU1(), then both r0 and r1 must be set to the value 0.
This also has the consequence that if the final value of r1 is equal
to 0, then the final value of r0 must also be equal to 0. In contrast,
the weaker rule would say nothing about the final value of r0.

These examples show only a single pair of CPUs, but the effects of the
locking basic rule extend across multiple acquisitions of a given lock
across multiple CPUs.
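
As a hedged illustration (a sketch in the style of the tests above, not
one of the in-tree litmus tests), consider a third CPU added to the
chain of lock acquisitions:

        void CPU0(void)
        {
                WRITE_ONCE(x, 1);
                spin_lock(&mylock);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                r0 = READ_ONCE(y);
                WRITE_ONCE(y, 2);
                spin_unlock(&mylock);
        }

        void CPU2(void)
        {
                spin_lock(&mylock);
                r1 = READ_ONCE(y);
                spin_unlock(&mylock);
                r2 = READ_ONCE(x);
        }

If r0 is 1 and r1 is 2, then CPU0(), CPU1(), and CPU2() acquired mylock
in that order, and the basic rule applied along the chain guarantees
that r2 must be 1.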


Double-Checked Locking
----------------------

It is well known that more than just a lock is required to make
double-checked locking work correctly. This litmus test illustrates
one incorrect approach:

        /* See Documentation/litmus-tests/locking/DCL-broken.litmus. */
        void CPU0(void)
        {
                r0 = READ_ONCE(flag);
                if (r0 == 0) {
                        spin_lock(&lck);
                        r1 = READ_ONCE(flag);
                        if (r1 == 0) {
                                WRITE_ONCE(data, 1);
                                WRITE_ONCE(flag, 1);
                        }
                        spin_unlock(&lck);
                }
                r2 = READ_ONCE(data);
        }
        /* CPU1() is exactly the same as CPU0(). */

There are two problems. First, there is no ordering between the first
READ_ONCE() of "flag" and the READ_ONCE() of "data". Second, there is
no ordering between the two WRITE_ONCE() calls. It should therefore be
no surprise that "r2" can be zero, and a quick herd7 run confirms this.
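
To see the first problem more concretely, note that READ_ONCE() keeps
the compiler from reordering the two marked loads with respect to each
other, but does nothing to stop the CPU from executing them out of
order. On a weakly ordered CPU, the effect can be as if the lockless
path had been written like this (a sketch of one problematic execution,
not a transformation made by any particular compiler):

        r2 = READ_ONCE(data); /* CPU executes the load of "data" early... */
        r0 = READ_ONCE(flag); /* ...and only then loads "flag". */

If "flag" is then observed to be 1, the earlier load of "data" may still
have missed the other CPU's store, leaving "r2" equal to zero.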

One way to fix this is to use smp_load_acquire() and smp_store_release()
as shown in this corrected version:

        /* See Documentation/litmus-tests/locking/DCL-fixed.litmus. */
        void CPU0(void)
        {
                r0 = smp_load_acquire(&flag);
                if (r0 == 0) {
                        spin_lock(&lck);
                        r1 = READ_ONCE(flag);
                        if (r1 == 0) {
                                WRITE_ONCE(data, 1);
                                smp_store_release(&flag, 1);
                        }
                        spin_unlock(&lck);
                }
                r2 = READ_ONCE(data);
        }
        /* CPU1() is exactly the same as CPU0(). */

The smp_load_acquire() guarantees that its load from "flag" will
be ordered before the READ_ONCE() from "data", thus solving the first
problem. The smp_store_release() guarantees that its store will be
ordered after the WRITE_ONCE() to "data", solving the second problem.
The smp_store_release() pairs with the smp_load_acquire(), thus ensuring
that the ordering provided by each actually takes effect. Again, a
quick herd7 run confirms this.

In short, if you access a lock-protected variable without holding the
corresponding lock, you will need to provide additional ordering, in
this case, via the smp_load_acquire() and the smp_store_release().
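
The same acquire/release technique applies beyond double-checked
locking. As a rough sketch (illustrative only, reusing the variable
names from the example above), an updater holding the lock publishes
with smp_store_release() while a lockless reader uses smp_load_acquire():

        /* Updater, holding the lock: */
        spin_lock(&lck);
        WRITE_ONCE(data, 1);
        smp_store_release(&flag, 1); /* Ordered after the store to "data". */
        spin_unlock(&lck);

        /* Lockless reader, not holding the lock: */
        r0 = smp_load_acquire(&flag); /* Pairs with the smp_store_release(). */
        if (r0 == 1)
                r1 = READ_ONCE(data); /* Must observe the value 1. */

The release/acquire pair supplies the ordering that the lock by itself
cannot supply to a reader that never acquires it.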


Ordering Provided by a Lock to CPUs Not Holding That Lock
---------------------------------------------------------

It is not necessarily the case that accesses ordered by locking will be
seen as ordered by CPUs not holding that lock. Consider this example:

        /* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
        void CPU0(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                r0 = READ_ONCE(y);
                WRITE_ONCE(z, 1);
                spin_unlock(&mylock);
        }

        void CPU2(void)
        {
                WRITE_ONCE(z, 2);
                smp_mb();
                r1 = READ_ONCE(x);
        }

Counter-intuitive though it might be, it is quite possible to have
the final value of r0 be 1, the final value of z be 2, and the final
value of r1 be 0. The reason for this surprising outcome is that CPU2()
never acquired the lock, and thus did not fully benefit from the lock's
ordering properties.
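
In litmus-test terms, this surprising outcome corresponds to an "exists"
clause along the following lines (a sketch; the clause in the actual
.litmus file may be written somewhat differently):

        exists (1:r0=1 /\ z=2 /\ 2:r1=0)

A herd7 run shows this outcome to be allowed when smp_mb__after_spinlock()
is absent.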

Ordering can be extended to CPUs not holding the lock by careful use
of smp_mb__after_spinlock():

        /* See Z6.0+pooncelock+poonceLock+pombonce.litmus. */
        void CPU0(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                smp_mb__after_spinlock();
                r0 = READ_ONCE(y);
                WRITE_ONCE(z, 1);
                spin_unlock(&mylock);
        }

        void CPU2(void)
        {
                WRITE_ONCE(z, 2);
                smp_mb();
                r1 = READ_ONCE(x);
        }

This addition of smp_mb__after_spinlock() strengthens the lock
acquisition sufficiently to rule out the counter-intuitive outcome.
In other words, the addition of the smp_mb__after_spinlock() prohibits
the counter-intuitive result where the final value of r0 is 1, the final
value of z is 2, and the final value of r1 is 0.


No Roach-Motel Locking!
-----------------------

This example requires familiarity with the herd7 "filter" clause, so
please read up on that topic in litmus-tests.txt.

It is tempting to allow memory-reference instructions to be pulled
into a critical section, but this cannot be allowed in the general case.
For example, consider a spin loop preceding a lock-based critical section.
Now, herd7 does not model spin loops, but we can emulate one with two
loads, with a "filter" clause to constrain the first to return the
initial value and the second to return the updated value, as shown below:

        /* See Documentation/litmus-tests/locking/RM-fixed.litmus. */
        void CPU0(void)
        {
                spin_lock(&lck);
                r2 = atomic_inc_return(&y);
                WRITE_ONCE(x, 1);
                spin_unlock(&lck);
        }

        void CPU1(void)
        {
                r0 = READ_ONCE(x);
                r1 = READ_ONCE(x);
                spin_lock(&lck);
                r2 = atomic_inc_return(&y);
                spin_unlock(&lck);
        }

        filter (1:r0=0 /\ 1:r1=1)
        exists (1:r2=1)

The variable "x" is the control variable for the emulated spin loop.
CPU0() sets it to "1" while holding the lock, and CPU1() emulates the
spin loop by reading it twice, first into "1:r0" (which should get the
initial value "0") and then into "1:r1" (which should get the updated
value "1").

The "filter" clause takes this into account, constraining "1:r0" to
equal "0" and "1:r1" to equal "1".

Then the "exists" clause checks to see if CPU1() acquired its lock first,
which should not happen given the filter clause because CPU0() updates
"x" while holding the lock. And herd7 confirms this.

But suppose that the compiler was permitted to reorder the spin loop
into CPU1()'s critical section, like this:

        /* See Documentation/litmus-tests/locking/RM-broken.litmus. */
        void CPU0(void)
        {
                int r2;

                spin_lock(&lck);
                r2 = atomic_inc_return(&y);
                WRITE_ONCE(x, 1);
                spin_unlock(&lck);
        }

        void CPU1(void)
        {
                spin_lock(&lck);
                r0 = READ_ONCE(x);
                r1 = READ_ONCE(x);
                r2 = atomic_inc_return(&y);
                spin_unlock(&lck);
        }

        filter (1:r0=0 /\ 1:r1=1)
        exists (1:r2=1)

If "1:r0" is equal to "0", "1:r1" can never equal "1" because CPU0()
cannot update "x" while CPU1() holds the lock. And herd7 confirms this,
showing zero executions matching the "filter" criteria.

And this is why Linux-kernel lock and unlock primitives must prevent
code from entering critical sections. It is not sufficient to only
prevent code from leaving them.