What is RCU?

RCU is a synchronization mechanism, added to the Linux kernel during
the 2.5 development effort, that is optimized for read-mostly
situations.  Although RCU is actually quite simple once you understand
it, getting there can sometimes be a challenge.  Part of the problem is
that most of the past descriptions of RCU have been written with the
mistaken assumption that there is "one true way" to describe RCU.
Instead, the experience has been that different people must take
different paths to arrive at an understanding of RCU.  This document
provides several different paths, as follows:

1.      RCU OVERVIEW
2.      WHAT IS RCU'S CORE API?
3.      WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.      WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.      WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.      ANALOGY WITH READER-WRITER LOCKING
7.      FULL LIST OF RCU APIs
8.      ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt
any readers concurrently referencing those data items, the reclamation
phase must not start until readers no longer hold references to those
data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.      Remove pointers to a data structure, so that subsequent
        readers cannot gain a reference to it.

b.      Wait for all previous readers to complete their RCU read-side
        critical sections.

c.      At this point, there cannot be any readers who hold references
        to the data structure, so it now may safely be reclaimed
        (e.g., kfree()d).

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.      rcu_read_lock()
b.      rcu_read_unlock()
c.      synchronize_rcu() / call_rcu()
d.      rcu_assign_pointer()
e.      rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the other 18 will be
enumerated later.  See the kernel docbook documentation for more info,
or look directly at the function header comments.

rcu_read_lock()

        void rcu_read_lock(void);

        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section.  It is illegal
        to block while in an RCU read-side critical section, though
        kernels built with CONFIG_PREEMPT_RCU can preempt RCU read-side
        critical sections.  Any RCU-protected data structure accessed
        during an RCU read-side critical section is guaranteed to remain
        unreclaimed for the full duration of that critical section.
        Reference counts may be used in conjunction with RCU to maintain
        longer-term references to data structures.
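
        For example, a reader might use a reference count to hang
        onto a data item after exiting the critical section.  The
        following is a minimal sketch, not part of the core API: it
        assumes an RCU-protected global pointer gbl_foo (like the one
        used in Section 3) whose hypothetical struct foo contains an
        atomic_t field named "refcount", and it omits handling of the
        final reference drop:

                struct foo *foo_get(void)
                {
                        struct foo *p;

                        rcu_read_lock();
                        p = rcu_dereference(gbl_foo);
                        if (p != NULL)
                                atomic_inc(&p->refcount);  /* Hold p past the critical section. */
                        rcu_read_unlock();
                        return p;  /* Caller must eventually release the reference. */
                }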

rcu_read_unlock()

        void rcu_read_unlock(void);

        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section.  Note that RCU
        read-side critical sections may be nested and/or overlapping.

synchronize_rcu()

        void synchronize_rcu(void);

        Marks the end of updater code and the beginning of reclaimer
        code.  It does this by blocking until all pre-existing RCU
        read-side critical sections on all CPUs have completed.
        Note that synchronize_rcu() will -not- necessarily wait for
        any subsequent RCU read-side critical sections to complete.
        For example, consider the following sequence of events:

                  CPU 0                  CPU 1                 CPU 2
              -----------------  -------------------------  ---------------
          1.  rcu_read_lock()
          2.                     enters synchronize_rcu()
          3.                                                 rcu_read_lock()
          4.  rcu_read_unlock()
          5.                     exits synchronize_rcu()
          6.                                                 rcu_read_unlock()

        To reiterate, synchronize_rcu() waits only for ongoing RCU
        read-side critical sections to complete, not necessarily for
        any that begin after synchronize_rcu() is invoked.

        Of course, synchronize_rcu() does not necessarily return
        -immediately- after the last pre-existing RCU read-side critical
        section completes.  For one thing, there might well be scheduling
        delays.  For another thing, many RCU implementations process
        requests in batches in order to improve efficiencies, which can
        further delay synchronize_rcu().

        Since synchronize_rcu() is the API that must figure out when
        readers are done, its implementation is key to RCU.  For RCU
        to be useful in all but the most read-intensive situations,
        synchronize_rcu()'s overhead must also be quite small.

        The call_rcu() API is a callback form of synchronize_rcu(),
        and is described in more detail in a later section.  Instead of
        blocking, it registers a function and argument which are invoked
        after all ongoing RCU read-side critical sections have completed.
        This callback variant is particularly useful in situations where
        it is illegal to block.

rcu_assign_pointer()

        typeof(p) rcu_assign_pointer(p, typeof(p) v);

        Yes, rcu_assign_pointer() -is- implemented as a macro, though it
        would be cool to be able to declare a function in this manner.
        (Compiler experts will no doubt disagree.)

        The updater uses this function to assign a new value to an
        RCU-protected pointer, in order to safely communicate the change
        in value from the updater to the reader.  This function returns
        the new value, and also executes any memory-barrier instructions
        required for a given CPU architecture.

        Perhaps more important, it serves to document which pointers
        are protected by RCU.  That said, rcu_assign_pointer() is most
        frequently used indirectly, via the _rcu list-manipulation
        primitives such as list_add_rcu().

rcu_dereference()

        typeof(p) rcu_dereference(p);

        Like rcu_assign_pointer(), rcu_dereference() must be implemented
        as a macro.

        The reader uses rcu_dereference() to fetch an RCU-protected
        pointer, which returns a value that may then be safely
        dereferenced.  Note that rcu_dereference() does not actually
        dereference the pointer; instead, it protects the pointer for
        later dereferencing.  It also executes any needed memory-barrier
        instructions for a given CPU architecture.
        Currently, only Alpha needs memory barriers within
        rcu_dereference() -- on other CPUs, it compiles to nothing,
        not even a compiler directive.

        Common coding practice uses rcu_dereference() to copy an
        RCU-protected pointer to a local variable, then dereferences
        this local variable, for example as follows:

                p = rcu_dereference(head.next);
                return p->data;

        However, in this case, one could just as easily combine these
        into one statement:

                return rcu_dereference(head.next)->data;

        If you are going to be fetching multiple fields from the
        RCU-protected structure, using the local variable is of
        course preferred.  Repeated rcu_dereference() calls look
        ugly and incur unnecessary overhead on Alpha CPUs.

        Note that the value returned by rcu_dereference() is valid
        only within the enclosing RCU read-side critical section.
        For example, the following is -not- legal:

                rcu_read_lock();
                p = rcu_dereference(head.next);
                rcu_read_unlock();
                x = p->address;
                rcu_read_lock();
                y = p->data;
                rcu_read_unlock();

        Holding a reference from one RCU read-side critical section
        to another is just as illegal as holding a reference from
        one lock-based critical section to another!  Similarly,
        using a reference outside of the critical section in which
        it was acquired is just as illegal as doing so with normal
        locking.

        As with rcu_assign_pointer(), an important function of
        rcu_dereference() is to document which pointers are protected
        by RCU.  And, again like rcu_assign_pointer(), rcu_dereference()
        is typically used indirectly, via the _rcu list-manipulation
        primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.

            rcu_assign_pointer()
                                     +--------+
            +----------------------->| reader |---------+
            |                        +--------+         |
            |                            |              |
            |                            |              | Protect:
            |                            |              | rcu_read_lock()
            |                            |              | rcu_read_unlock()
            |        rcu_dereference()   |              |
        +---------+                      |              |
        | updater |<---------------------+              |
        +---------+                                     V
            |                                     +-----------+
            +------------------------------------>| reclaimer |
                                                  +-----------+
              Defer:
              synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

        Defer                   Protect

a.      synchronize_rcu()       rcu_read_lock() / rcu_read_unlock()
        call_rcu()

b.      call_rcu_bh()           rcu_read_lock_bh() / rcu_read_unlock_bh()

c.      synchronize_sched()     preempt_disable() / preempt_enable()
                                local_irq_save() / local_irq_restore()
                                hardirq enter / hardirq exit
                                NMI enter / NMI exit

These three mechanisms are used as follows:

a.      RCU applied to normal data structures.

b.      RCU applied to networking data structures that may be subjected
        to remote denial-of-service attacks.

c.      RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.
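
As a concrete illustration of case (b), a reader of an RCU-protected
networking pointer might look like the following sketch.  The pointer
gbl_net_conf, its struct net_conf type, and handle_packet() are
hypothetical names used only for illustration; a matching updater
would use call_rcu_bh() rather than call_rcu() to defer reclamation:

        void process_conf(struct sk_buff *skb)
        {
                struct net_conf *p;

                rcu_read_lock_bh();
                p = rcu_dereference(gbl_net_conf);
                if (p != NULL)
                        handle_packet(skb, p);  /* Hypothetical consumer. */
                rcu_read_unlock_bh();
        }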


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

        struct foo {
                int a;
                char b;
                long c;
        };
        DEFINE_SPINLOCK(foo_mutex);

        struct foo *gbl_foo;

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a".  Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses synchronize_rcu() to ensure that any readers that might
         * have references to the old structure complete before freeing
         * the old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = gbl_foo;
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                synchronize_rcu();
                kfree(old_fp);
        }

        /*
         * Return the value of field "a" of the current gbl_foo
         * structure.  Use rcu_read_lock() and rcu_read_unlock()
         * to ensure that the structure does not get deleted out
         * from under us, and use rcu_dereference() to ensure that
         * we see the initialized version of the structure (important
         * for DEC Alpha and for people reading the code).
         */
        int foo_get_a(void)
        {
                int retval;

                rcu_read_lock();
                retval = rcu_dereference(gbl_foo)->a;
                rcu_read_unlock();
                return retval;
        }
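
Of course, gbl_foo must be initialized before any readers or updaters
run.  The following foo_init() is a hypothetical sketch, not part of
the original example; error handling is omitted:

        void foo_init(void)
        {
                struct foo *new_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                new_fp->a = 0;
                new_fp->b = 0;
                new_fp->c = 0;
                rcu_assign_pointer(gbl_foo, new_fp);  /* Publish to readers. */
        }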

So, to sum up:

o       Use rcu_read_lock() and rcu_read_unlock() to guard RCU
        read-side critical sections.

o       Within an RCU read-side critical section, use rcu_dereference()
        to dereference RCU-protected pointers.

o       Use some solid scheme (such as locks or semaphores) to
        keep concurrent updates from interfering with each other.

o       Use rcu_assign_pointer() to update an RCU-protected pointer.
        This primitive protects concurrent readers from the updater,
        -not- concurrent updates from each other!  You therefore still
        need to use locking (or something similar) to keep concurrent
        rcu_assign_pointer() primitives from interfering with each other.

o       Use synchronize_rcu() -after- removing a data element from an
        RCU-protected data structure, but -before- reclaiming/freeing
        the data element, in order to wait for the completion of all
        RCU read-side critical sections that might be referencing that
        data item.

See checklist.txt for additional rules to follow when using RCU.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

        void call_rcu(struct rcu_head *head,
                      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

        struct foo {
                int a;
                char b;
                long c;
                struct rcu_head rcu;
        };

The foo_update_a() function might then be written as follows:

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a".  Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses call_rcu() to ensure that any readers that might have
         * references to the old structure complete before freeing the
         * old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = gbl_foo;
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                call_rcu(&old_fp->rcu, foo_reclaim);
        }

The foo_reclaim() function might appear as follows:

        void foo_reclaim(struct rcu_head *rp)
        {
                struct foo *fp = container_of(rp, struct foo, rcu);

                kfree(fp);
        }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.
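
For illustration, a simplified version of this macro might look as
follows; the actual kernel definition in include/linux/kernel.h is
slightly more elaborate:

        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))

In foo_reclaim() above, container_of() thus subtracts the offset of
the rcu field from the address passed in rp in order to recover the
address of the enclosing struct foo.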

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o       Use call_rcu() -after- removing a data element from an
        RCU-protected data structure in order to register a callback
        function that will be invoked after the completion of all RCU
        read-side critical sections that might be referencing that
        data item.

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcupdate.c for a
production-quality implementation, and see:

        http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.

However, it is probably the easiest implementation to relate to, so it
is a good starting point.

It is extremely simple:

        static DEFINE_RWLOCK(rcu_gp_mutex);

        void rcu_read_lock(void)
        {
                read_lock(&rcu_gp_mutex);
        }

        void rcu_read_unlock(void)
        {
                read_unlock(&rcu_gp_mutex);
        }

        void synchronize_rcu(void)
        {
                write_lock(&rcu_gp_mutex);
                write_unlock(&rcu_gp_mutex);
        }

[You can ignore rcu_assign_pointer() and rcu_dereference() without
missing much.  But here they are anyway.  And whatever you do, don't
forget about them when submitting patches making use of RCU!]

        #define rcu_assign_pointer(p, v)        ({ \
                                                        smp_wmb(); \
                                                        (p) = (v); \
                                                })

        #define rcu_dereference(p)      ({ \
                                        typeof(p) _________p1 = p; \
                                        smp_read_barrier_depends(); \
                                        (_________p1); \
                                        })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:  Why is this argument naive?  How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel?  How could this deadlock be avoided?


5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

        void rcu_read_lock(void) { }

        void rcu_read_unlock(void) { }

        void synchronize_rcu(void)
        {
                int cpu;

                for_each_cpu(cpu)
                        run_on(cpu);
        }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!
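
For example, run_on() might be sketched as follows.  This sketch
assumes the in-kernel sched_setaffinity() interface of 2.6-era
kernels, and, as noted above, makes no attempt to save and restore
the original affinity:

        void run_on(int cpu)
        {
                cpumask_t mask = cpumask_of_cpu(cpu);

                sched_setaffinity(current->pid, mask);  /* Migrate to "cpu". */
        }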

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

        @@ -13,15 +14,15 @@
                struct list_head *lp;
                struct el *p;

        -       read_lock();
        -       list_for_each_entry(p, head, lp) {
        +       rcu_read_lock();
        +       list_for_each_entry_rcu(p, head, lp) {
                        if (p->key == key) {
                                *result = p->data;
        -                       read_unlock();
        +                       rcu_read_unlock();
                                return 1;
                        }
                }
        -       read_unlock();
        +       rcu_read_unlock();
                return 0;
        }

        @@ -29,15 +30,16 @@
        {
                struct el *p;

        -       write_lock(&listmutex);
        +       spin_lock(&listmutex);
                list_for_each_entry(p, head, lp) {
                        if (p->key == key) {
                                list_del(&p->list);
        -                       write_unlock(&listmutex);
        +                       spin_unlock(&listmutex);
        +                       synchronize_rcu();
                                kfree(p);
                                return 1;
                        }
                }
        -       write_unlock(&listmutex);
        +       spin_unlock(&listmutex);
                return 0;
        }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 spinlock_t listmutex;                8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock();                       6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock();                10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock();                    14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves
from a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu(), that can be used in
place of synchronize_rcu().


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

Markers for RCU read-side critical sections:

        rcu_read_lock
        rcu_read_unlock
        rcu_read_lock_bh
        rcu_read_unlock_bh

RCU pointer/list traversal:

        rcu_dereference
        list_for_each_rcu               (to be deprecated in favor of
                                         list_for_each_entry_rcu)
        list_for_each_safe_rcu          (deprecated, not used)
        list_for_each_entry_rcu
        list_for_each_continue_rcu      (to be deprecated in favor of new
                                         list_for_each_entry_continue_rcu)
        hlist_for_each_rcu              (to be deprecated in favor of
                                         hlist_for_each_entry_rcu)
        hlist_for_each_entry_rcu

RCU pointer update:

        rcu_assign_pointer
        list_add_rcu
        list_add_tail_rcu
        list_del_rcu
        list_replace_rcu
        hlist_del_rcu
        hlist_add_head_rcu

RCU grace period:

        synchronize_kernel (deprecated)
        synchronize_net
        synchronize_sched
        synchronize_rcu
        call_rcu
        call_rcu_bh

See the comment headers in the source code (or the docbook generated
from them) for more information.
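
As a brief example of how the update-side list primitives fit
together, the following sketch uses list_replace_rcu() to atomically
replace an element of the list from Section 6.  The function name
replace() is hypothetical, struct el and listmutex are those of
Section 6, and error handling (including a failed kmalloc()) is
omitted:

        int replace(long key, int new_data)
        {
                struct el *p;
                struct el *q = kmalloc(sizeof(*q), GFP_KERNEL);

                spin_lock(&listmutex);
                list_for_each_entry(p, &head.list, list) {
                        if (p->key == key) {
                                *q = *p;                /* Copy the old element... */
                                q->data = new_data;     /* ...then update the copy. */
                                list_replace_rcu(&p->list, &q->list);
                                spin_unlock(&listmutex);
                                synchronize_rcu();      /* Wait for readers... */
                                kfree(p);               /* ...then reclaim the old element. */
                                return 1;
                        }
                }
                spin_unlock(&listmutex);
                kfree(q);       /* Key not found, so discard the unused copy. */
                return 0;
        }

Readers traversing the list with list_for_each_entry_rcu() then see
either the old element or the new one, never a partially updated
element.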


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:  Why is this argument naive?  How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel?  [Referring to the lock-based "toy" RCU
                algorithm.]

Answer:         Consider the following sequence of events:

                1.      CPU 0 acquires some unrelated lock, call it
                        "problematic_lock".

                2.      CPU 1 enters synchronize_rcu(), write-acquiring
                        rcu_gp_mutex.

                3.      CPU 0 enters rcu_read_lock(), but must wait
                        because CPU 1 holds rcu_gp_mutex.

                4.      CPU 1 is interrupted, and the irq handler
                        attempts to acquire problematic_lock.

                The system is now deadlocked.

                One way to avoid this deadlock is to use an approach like
                that of CONFIG_PREEMPT_RT, where all normal spinlocks
                become blocking locks, and all irq handlers execute in
                the context of special tasks.  In this case, in step 4
                above, the irq handler would block, allowing CPU 1 to
                release rcu_gp_mutex, avoiding the deadlock.

                Even in the absence of deadlock, this RCU implementation
                allows latency to "bleed" from readers to other
                readers through synchronize_rcu().  To see this,
                consider task A in an RCU read-side critical section
                (thus read-holding rcu_gp_mutex), task B blocked
                attempting to write-acquire rcu_gp_mutex, and
                task C blocked in rcu_read_lock() attempting to
                read-acquire rcu_gp_mutex.  Task A's RCU read-side
                latency is holding up task C, albeit indirectly via
                task B.

                Realtime RCU implementations therefore use a counter-based
                approach where tasks in RCU read-side critical sections
                cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Answer:         Imagine a single-CPU system with a non-CONFIG_PREEMPT
                kernel where a routing table is used by process-context
                code, but can be updated by irq-context code (for example,
                by an "ICMP REDIRECT" packet).  The usual way of handling
                this would be to have the process-context code disable
                interrupts while searching the routing table.  Use of
                RCU allows such interrupt-disabling to be dispensed with.
                Thus, without RCU, you pay the cost of disabling interrupts,
                and with RCU you don't.

                One can argue that the overhead of RCU in this
                case is negative with respect to the single-CPU
                interrupt-disabling approach.  Others might argue that
                the overhead of RCU is merely zero, and that replacing
                the positive overhead of the interrupt-disabling scheme
                with the zero-overhead RCU scheme does not constitute
                negative overhead.

                In real life, of course, things are more complex.  But
                even the theoretical possibility of negative overhead for
                a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???

Answer:         Just as PREEMPT_RT permits preemption of spinlock
                critical sections, it permits preemption of RCU
                read-side critical sections.  It also permits
                spinlocks blocking while in RCU read-side critical
                sections.

                Why the apparent inconsistency?  Because it is
                possible to use priority boosting to keep the RCU
                grace periods short if need be (for example, if running
                short of memory).  In contrast, if blocking waiting
                for (say) network reception, there is no way to know
                what should be boosted.  Especially given that the
                process we need to boost might well be a human being
                who just went out for a pizza or something.  And although
                a computer-operated cattle prod might arouse serious
                interest, it might also provoke serious objections.
                Besides, how does the computer know what pizza parlor
                the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, and Suzanne Wood.


For more information, see http://www.rdrop.com/users/paulmck/RCU.