<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
	"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>

<book id="LKLockingGuide">
 <bookinfo>
  <title>Unreliable Guide To Locking</title>

  <authorgroup>
   <author>
    <firstname>Rusty</firstname>
    <surname>Russell</surname>
    <affiliation>
     <address>
      <email>rusty@rustcorp.com.au</email>
     </address>
    </affiliation>
   </author>
  </authorgroup>

  <copyright>
   <year>2003</year>
   <holder>Rusty Russell</holder>
  </copyright>

  <legalnotice>
   <para>
     This documentation is free software; you can redistribute
     it and/or modify it under the terms of the GNU General Public
     License as published by the Free Software Foundation; either
     version 2 of the License, or (at your option) any later
     version.
   </para>

   <para>
     This program is distributed in the hope that it will be
     useful, but WITHOUT ANY WARRANTY; without even the implied
     warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
     See the GNU General Public License for more details.
   </para>

   <para>
     You should have received a copy of the GNU General Public
     License along with this program; if not, write to the Free
     Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
     MA 02111-1307 USA
   </para>

   <para>
     For more details see the file COPYING in the source
     distribution of Linux.
   </para>
  </legalnotice>
 </bookinfo>

 <toc></toc>
  <chapter id="intro">
   <title>Introduction</title>
   <para>
     Welcome to Rusty's Remarkably Unreliable Guide to Kernel
     Locking issues.  This document describes the locking systems in
     the Linux Kernel in 2.6.
   </para>
   <para>
     With the wide availability of HyperThreading, and <firstterm
     linkend="gloss-preemption">preemption</firstterm> in the Linux
     Kernel, everyone hacking on the kernel needs to know the
     fundamentals of concurrency and locking for
     <firstterm linkend="gloss-smp"><acronym>SMP</acronym></firstterm>.
   </para>
  </chapter>

  <chapter id="races">
   <title>The Problem With Concurrency</title>
   <para>
     (Skip this if you know what a Race Condition is).
   </para>
   <para>
     In a normal program, you can increment a counter like so:
   </para>
   <programlisting>
very_important_count++;
   </programlisting>

   <para>
     This is what you would expect to happen:
   </para>

   <table>
    <title>Expected Results</title>

    <tgroup cols="2" align="left">

     <thead>
      <row>
       <entry>Instance 1</entry>
       <entry>Instance 2</entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry>read very_important_count (5)</entry>
       <entry></entry>
      </row>
      <row>
       <entry>add 1 (6)</entry>
       <entry></entry>
      </row>
      <row>
       <entry>write very_important_count (6)</entry>
       <entry></entry>
      </row>
      <row>
       <entry></entry>
       <entry>read very_important_count (6)</entry>
      </row>
      <row>
       <entry></entry>
       <entry>add 1 (7)</entry>
      </row>
      <row>
       <entry></entry>
       <entry>write very_important_count (7)</entry>
      </row>
     </tbody>

    </tgroup>
   </table>

   <para>
     This is what might happen:
   </para>

   <table>
    <title>Possible Results</title>

    <tgroup cols="2" align="left">
     <thead>
      <row>
       <entry>Instance 1</entry>
       <entry>Instance 2</entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry>read very_important_count (5)</entry>
       <entry></entry>
      </row>
      <row>
       <entry></entry>
       <entry>read very_important_count (5)</entry>
      </row>
      <row>
       <entry>add 1 (6)</entry>
       <entry></entry>
      </row>
      <row>
       <entry></entry>
       <entry>add 1 (6)</entry>
      </row>
      <row>
       <entry>write very_important_count (6)</entry>
       <entry></entry>
      </row>
      <row>
       <entry></entry>
       <entry>write very_important_count (6)</entry>
      </row>
     </tbody>
    </tgroup>
   </table>

  <sect1 id="race-condition">
   <title>Race Conditions and Critical Regions</title>
   <para>
     This overlap, where the result depends on the
     relative timing of multiple tasks, is called a <firstterm>race condition</firstterm>.
     The piece of code containing the concurrency issue is called a
     <firstterm>critical region</firstterm>.  Especially since Linux started running
     on SMP machines, race conditions became one of the major issues in kernel
     design and implementation.
   </para>
   <para>
     Preemption can have the same effect, even if there is only one
     CPU: by preempting one task during the critical region, we have
     exactly the same race condition.  In this case the thread which
     preempts might run the critical region itself.
   </para>
   <para>
     The solution is to recognize when these simultaneous accesses
     occur, and use locks to make sure that only one instance can
     enter the critical region at any time.  There are many
     friendly primitives in the Linux kernel to help you do this.
     And then there are the unfriendly primitives, but I'll pretend
     they don't exist.
   </para>
  </sect1>
 </chapter>

 <chapter id="locks">
  <title>Locking in the Linux Kernel</title>

  <para>
    If I could give you one piece of advice: never sleep with anyone
    crazier than yourself.  But if I had to give you advice on
    locking: <emphasis>keep it simple</emphasis>.
  </para>

  <para>
    Be reluctant to introduce new locks.
  </para>

  <para>
    Strangely enough, this last one is the exact reverse of my advice when
    you <emphasis>have</emphasis> slept with someone crazier than yourself.
    And you should think about getting a big dog.
  </para>

  <sect1 id="lock-intro">
   <title>Three Main Types of Kernel Locks: Spinlocks, Mutexes and Semaphores</title>

   <para>
     There are three main types of kernel locks.  The fundamental type
     is the spinlock
     (<filename class="headerfile">include/asm/spinlock.h</filename>),
     which is a very simple single-holder lock: if you can't get the
     spinlock, you keep trying (spinning) until you can.  Spinlocks are
     very small and fast, and can be used anywhere.
   </para>
   <para>
     The second type is a mutex
     (<filename class="headerfile">include/linux/mutex.h</filename>): it
     is like a spinlock, but you may block holding a mutex.
     If you can't lock a mutex, your task will suspend itself, and be woken
     up when the mutex is released.  This means the CPU can do something
     else while you are waiting.  There are many cases when you simply
     can't sleep (see <xref linkend="sleeping-things"/>), and so have to
     use a spinlock instead.
   </para>
   <para>
     The third type is a semaphore
     (<filename class="headerfile">include/linux/semaphore.h</filename>): it
     can have more than one holder at any time (the number decided at
     initialization time), although it is most commonly used as a
     single-holder lock (a mutex).  If you can't get a semaphore, your
     task will be suspended and later on woken up - just like for mutexes.
   </para>
   <para>
     None of these locks is recursive: see
     <xref linkend="deadlock"/>.
   </para>
  </sect1>

  <sect1 id="uniprocessor">
   <title>Locks and Uniprocessor Kernels</title>

   <para>
     For kernels compiled without <symbol>CONFIG_SMP</symbol>, and
     without <symbol>CONFIG_PREEMPT</symbol>, spinlocks do not exist at
     all.  This is an excellent design decision: when no-one else can
     run at the same time, there is no reason to have a lock.
   </para>

   <para>
     If the kernel is compiled without <symbol>CONFIG_SMP</symbol>,
     but <symbol>CONFIG_PREEMPT</symbol> is set, then spinlocks
     simply disable preemption, which is sufficient to prevent any
     races.  For most purposes, we can think of preemption as
     equivalent to SMP, and not worry about it separately.
   </para>

   <para>
     You should always test your locking code with <symbol>CONFIG_SMP</symbol>
     and <symbol>CONFIG_PREEMPT</symbol> enabled, even if you don't have an SMP test box, because it
     will still catch some kinds of locking bugs.
   </para>

   <para>
     Semaphores still exist, because they are required for
     synchronization between <firstterm linkend="gloss-usercontext">user
     contexts</firstterm>, as we will see below.
   </para>
  </sect1>

  <sect1 id="usercontextlocking">
   <title>Locking Only In User Context</title>

   <para>
     If you have a data structure which is only ever accessed from
     user context, then you can use a simple semaphore
     (<filename>include/linux/semaphore.h</filename>) to protect it.  This
     is the most trivial case: you initialize the semaphore to the number
     of resources available (usually 1), and call
     <function>down_interruptible()</function> to grab the semaphore, and
     <function>up()</function> to release it.  There is also a
     <function>down()</function>, which should be avoided, because it
     will not return if a signal is received.
   </para>
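   <para>
     A minimal sketch of the pattern (the names
     <symbol>config_sem</symbol>, <structname>struct foo_config</structname>
     and <function>foo_set_config()</function> are invented for
     illustration; the primitives are the real ones):
   </para>
   <programlisting>
static DECLARE_MUTEX(config_sem);       /* a semaphore initialized to 1 */
static struct foo_config config;

int foo_set_config(const struct foo_config *new)
{
        /* Returns non-zero if we were interrupted by a signal. */
        if (down_interruptible(&amp;config_sem))
                return -ERESTARTSYS;
        config = *new;
        up(&amp;config_sem);
        return 0;
}
   </programlisting>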
   <para>
     Example: <filename>linux/net/core/netfilter.c</filename> allows
     registration of new <function>setsockopt()</function> and
     <function>getsockopt()</function> calls, with
     <function>nf_register_sockopt()</function>.  Registration and
     de-registration are only done on module load and unload (and boot
     time, where there is no concurrency), and the list of registrations
     is only consulted for an unknown <function>setsockopt()</function>
     or <function>getsockopt()</function> system call.  The
     <varname>nf_sockopt_mutex</varname> is perfect to protect this,
     especially since the setsockopt and getsockopt calls may well
     sleep.
   </para>
  </sect1>

  <sect1 id="lock-user-bh">
   <title>Locking Between User Context and Softirqs</title>

   <para>
     If a <firstterm linkend="gloss-softirq">softirq</firstterm> shares
     data with user context, you have two problems.  Firstly, the current
     user context can be interrupted by a softirq, and secondly, the
     critical region could be entered from another CPU.  This is where
     <function>spin_lock_bh()</function>
     (<filename class="headerfile">include/linux/spinlock.h</filename>) is
     used.  It disables softirqs on that CPU, then grabs the lock.
     <function>spin_unlock_bh()</function> does the reverse.  (The
     '_bh' suffix is a historical reference to "Bottom Halves", the
     old name for software interrupts.  It should really be
     called 'spin_lock_softirq()' in a perfect world).
   </para>

   <para>
     Note that you can also use <function>spin_lock_irq()</function>
     or <function>spin_lock_irqsave()</function> here, which stop
     hardware interrupts as well: see <xref linkend="hardirq-context"/>.
   </para>

   <para>
     This works perfectly for <firstterm linkend="gloss-up"><acronym>UP
     </acronym></firstterm> as well: the spin lock vanishes, and this macro
     simply becomes <function>local_bh_disable()</function>
     (<filename class="headerfile">include/linux/interrupt.h</filename>), which
     protects you from the softirq being run.
   </para>
  </sect1>

  <sect1 id="lock-user-tasklet">
   <title>Locking Between User Context and Tasklets</title>

   <para>
     This is exactly the same as above, because <firstterm
     linkend="gloss-tasklet">tasklets</firstterm> are actually run
     from a softirq.
   </para>
  </sect1>

  <sect1 id="lock-user-timers">
   <title>Locking Between User Context and Timers</title>

   <para>
     This, too, is exactly the same as above, because <firstterm
     linkend="gloss-timers">timers</firstterm> are actually run from
     a softirq.  From a locking point of view, tasklets and timers
     are identical.
   </para>
  </sect1>

  <sect1 id="lock-tasklets">
   <title>Locking Between Tasklets/Timers</title>

   <para>
     Sometimes a tasklet or timer might want to share data with
     another tasklet or timer.
   </para>

   <sect2 id="lock-tasklets-same">
    <title>The Same Tasklet/Timer</title>
    <para>
      Since a tasklet is never run on two CPUs at once, you don't
      need to worry about your tasklet being reentrant (running
      twice at once), even on SMP.
    </para>
   </sect2>

   <sect2 id="lock-tasklets-different">
    <title>Different Tasklets/Timers</title>
    <para>
      If another tasklet/timer wants
      to share data with your tasklet or timer, you will both need to use
      <function>spin_lock()</function> and
      <function>spin_unlock()</function> calls.
      <function>spin_lock_bh()</function> is
      unnecessary here, as you are already in a tasklet, and
      no other one will be run on the same CPU.
    </para>
   </sect2>
  </sect1>

  <sect1 id="lock-softirqs">
   <title>Locking Between Softirqs</title>

   <para>
     Often a softirq might
     want to share data with itself or a tasklet/timer.
   </para>

   <sect2 id="lock-softirqs-same">
    <title>The Same Softirq</title>

    <para>
      The same softirq can run on the other CPUs: you can use a
      per-CPU array (see <xref linkend="per-cpu"/>) for better
      performance.  If you're going so far as to use a softirq,
      you probably care about scalable performance enough
      to justify the extra complexity.
    </para>

    <para>
      You'll need to use <function>spin_lock()</function> and
      <function>spin_unlock()</function> for shared data.
    </para>
   </sect2>

   <sect2 id="lock-softirqs-different">
    <title>Different Softirqs</title>

    <para>
      You'll need to use <function>spin_lock()</function> and
      <function>spin_unlock()</function> for shared data, whether it
      be a timer, tasklet, different softirq or the same or another
      softirq: any of them could be running on a different CPU.
    </para>
   </sect2>
  </sect1>
 </chapter>

 <chapter id="hardirq-context">
  <title>Hard IRQ Context</title>

  <para>
    Hardware interrupts usually communicate with a
    tasklet or softirq.  Frequently this involves putting work in a
    queue, which the softirq will take out.
  </para>

  <sect1 id="hardirq-softirq">
   <title>Locking Between Hard IRQ and Softirqs/Tasklets</title>

   <para>
     If a hardware irq handler shares data with a softirq, you have
     two concerns.  Firstly, the softirq processing can be
     interrupted by a hardware interrupt, and secondly, the
     critical region could be entered by a hardware interrupt on
     another CPU.  This is where <function>spin_lock_irq()</function> is
     used.  It is defined to disable interrupts on that cpu, then grab
     the lock.  <function>spin_unlock_irq()</function> does the reverse.
   </para>

   <para>
     The irq handler does not need to use
     <function>spin_lock_irq()</function>, because the softirq cannot
     run while the irq handler is running: it can use
     <function>spin_lock()</function>, which is slightly faster.  The
     only exception would be if a different hardware irq handler uses
     the same lock: <function>spin_lock_irq()</function> will stop
     that from interrupting us.
   </para>

   <para>
     This works perfectly for UP as well: the spin lock vanishes,
     and this macro simply becomes <function>local_irq_disable()</function>
     (<filename class="headerfile">include/asm/smp.h</filename>), which
     protects you from the softirq/tasklet/BH being run.
   </para>

   <para>
     <function>spin_lock_irqsave()</function>
     (<filename>include/linux/spinlock.h</filename>) is a variant
     which saves whether interrupts were on or off in a flags word,
     which is passed to <function>spin_unlock_irqrestore()</function>.  This
     means that the same code can be used inside a hard irq handler (where
     interrupts are already off) and in softirqs (where the irq
     disabling is required).
   </para>

   <para>
     Note that softirqs (and hence tasklets and timers) are run on
     return from hardware interrupts, so
     <function>spin_lock_irq()</function> also stops these.  In that
     sense, <function>spin_lock_irqsave()</function> is the most
     general and powerful locking function.
   </para>
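   <para>
     A hedged sketch of the usual queue pattern (the
     <structname>struct foo_work</structname> type and the helpers
     <function>fetch_work()</function> and <function>do_work()</function>
     are invented for illustration):
   </para>
   <programlisting>
static DEFINE_SPINLOCK(queue_lock);
static LIST_HEAD(work_queue);          /* shared by irq handler and tasklet */

/* Hard irq handler: the tasklet cannot interrupt us, so a plain
 * spin_lock() is enough (see above). */
irqreturn_t foo_interrupt(int irq, void *dev_id)
{
        struct foo_work *w = fetch_work(dev_id);    /* hypothetical */

        spin_lock(&amp;queue_lock);
        list_add_tail(&amp;w-&gt;list, &amp;work_queue);
        spin_unlock(&amp;queue_lock);
        return IRQ_HANDLED;
}

/* Tasklet: must stop the irq handler from interrupting us while
 * we hold the lock. */
void foo_tasklet_fn(unsigned long unused)
{
        unsigned long flags;
        struct foo_work *w;

        spin_lock_irqsave(&amp;queue_lock, flags);
        while (!list_empty(&amp;work_queue)) {
                w = list_first_entry(&amp;work_queue, struct foo_work, list);
                list_del(&amp;w-&gt;list);
                do_work(w);            /* hypothetical: must not sleep */
        }
        spin_unlock_irqrestore(&amp;queue_lock, flags);
}
   </programlisting>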
  </sect1>
  <sect1 id="hardirq-hardirq">
   <title>Locking Between Two Hard IRQ Handlers</title>
   <para>
     It is rare to have to share data between two IRQ handlers, but
     if you do, <function>spin_lock_irqsave()</function> should be
     used: it is architecture-specific whether all interrupts are
     disabled inside irq handlers themselves.
   </para>
  </sect1>

 </chapter>

 <chapter id="cheatsheet">
  <title>Cheat Sheet For Locking</title>
  <para>
    Pete Zaitcev gives the following summary:
  </para>
  <itemizedlist>
   <listitem>
    <para>
      If you are in a process context (any syscall) and want to
      lock other processes out, use a semaphore.  You can take a semaphore
      and sleep (<function>copy_from_user()</function> or
      <function>kmalloc(x,GFP_KERNEL)</function>).
    </para>
   </listitem>
   <listitem>
    <para>
      Otherwise (== data can be touched in an interrupt), use
      <function>spin_lock_irqsave()</function> and
      <function>spin_unlock_irqrestore()</function>.
    </para>
   </listitem>
   <listitem>
    <para>
      Avoid holding a spinlock for more than 5 lines of code and
      across any function call (except accessors like
      <function>readb</function>).
    </para>
   </listitem>
  </itemizedlist>

  <sect1 id="minimum-lock-reqirements">
   <title>Table of Minimum Requirements</title>

   <para> The following table lists the <emphasis>minimum</emphasis>
     locking requirements between various contexts.  In some cases,
     the same context can only be running on one CPU at a time, so
     no locking is required for that context (eg. a particular
     thread can only run on one CPU at a time, but if it needs to
     share data with another thread, locking is required).
   </para>
   <para>
     Remember the advice above: you can always use
     <function>spin_lock_irqsave()</function>, which is a superset
     of all other spinlock primitives.
   </para>

   <table>
<title>Table of Locking Requirements</title>
<tgroup cols="11">
<tbody>

<row>
<entry></entry>
<entry>IRQ Handler A</entry>
<entry>IRQ Handler B</entry>
<entry>Softirq A</entry>
<entry>Softirq B</entry>
<entry>Tasklet A</entry>
<entry>Tasklet B</entry>
<entry>Timer A</entry>
<entry>Timer B</entry>
<entry>User Context A</entry>
<entry>User Context B</entry>
</row>

<row>
<entry>IRQ Handler A</entry>
<entry>None</entry>
</row>

<row>
<entry>IRQ Handler B</entry>
<entry>SLIS</entry>
<entry>None</entry>
</row>

<row>
<entry>Softirq A</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SL</entry>
</row>

<row>
<entry>Softirq B</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SL</entry>
<entry>SL</entry>
</row>

<row>
<entry>Tasklet A</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>None</entry>
</row>

<row>
<entry>Tasklet B</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>None</entry>
</row>

<row>
<entry>Timer A</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>None</entry>
</row>

<row>
<entry>Timer B</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>SL</entry>
<entry>None</entry>
</row>

<row>
<entry>User Context A</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>None</entry>
</row>

<row>
<entry>User Context B</entry>
<entry>SLI</entry>
<entry>SLI</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>SLBH</entry>
<entry>DI</entry>
<entry>None</entry>
</row>

</tbody>
</tgroup>
</table>

   <table>
<title>Legend for Locking Requirements Table</title>
<tgroup cols="2">
<tbody>

<row>
<entry>SLIS</entry>
<entry>spin_lock_irqsave</entry>
</row>
<row>
<entry>SLI</entry>
<entry>spin_lock_irq</entry>
</row>
<row>
<entry>SL</entry>
<entry>spin_lock</entry>
</row>
<row>
<entry>SLBH</entry>
<entry>spin_lock_bh</entry>
</row>
<row>
<entry>DI</entry>
<entry>down_interruptible</entry>
</row>

</tbody>
</tgroup>
</table>

</sect1>
</chapter>

<chapter id="trylock-functions">
 <title>The trylock Functions</title>
 <para>
   There are functions that try to acquire a lock only once and immediately
   return a value telling you whether they succeeded.
   They can be used if you need no access to the data protected with the lock
   when some other thread is holding the lock.  You can acquire the lock
   later if you then need access to the data protected with the lock.
 </para>

 <para>
   <function>spin_trylock()</function> does not spin but returns non-zero if
   it acquires the spinlock on the first try or 0 if not.  This function can
   be used in all contexts where <function>spin_lock</function> can: you must
   have disabled the contexts that might interrupt you and acquire the spin
   lock.
 </para>
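 <para>
   For example, a hedged sketch of opportunistic work (the stats and
   the <function>global_stats_add()</function> helper are invented):
 </para>
 <programlisting>
static DEFINE_SPINLOCK(stats_lock);

/* Fold our local counters into the global stats, but don't wait
 * around if someone else is busy with them: try again next time. */
static void flush_stats(struct foo_stats *local)
{
        if (!spin_trylock(&amp;stats_lock))
                return;                 /* contended: keep counting locally */
        global_stats_add(local);        /* hypothetical helper */
        spin_unlock(&amp;stats_lock);
}
 </programlisting>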
 <para>
   <function>mutex_trylock()</function> does not suspend your task
   but returns non-zero if it could lock the mutex on the first try
   or 0 if not.  This function cannot be safely used in hardware or software
   interrupt contexts despite not sleeping.
 </para>
</chapter>

  <chapter id="Examples">
   <title>Common Examples</title>
    <para>
Let's step through a simple example: a cache of number to name
mappings.  The cache keeps a count of how often each of the objects is
used, and when it gets full, throws out the least used one.
    </para>

   <sect1 id="examples-usercontext">
    <title>All In User Context</title>
    <para>
For our first example, we assume that all operations are in user
context (ie. from system calls), so we can sleep.  This means we can
use a mutex to protect the cache and all the objects within
it.  Here's the code:
    </para>

    <programlisting>
#include &lt;linux/list.h&gt;
#include &lt;linux/slab.h&gt;
#include &lt;linux/string.h&gt;
#include &lt;linux/mutex.h&gt;
#include &lt;asm/errno.h&gt;

struct object
{
        struct list_head list;
        int id;
        char name[32];
        int popularity;
};

/* Protects the cache, cache_num, and the objects within it */
static DEFINE_MUTEX(cache_lock);
static LIST_HEAD(cache);
static unsigned int cache_num = 0;
#define MAX_CACHE_SIZE 10

/* Must be holding cache_lock */
static struct object *__cache_find(int id)
{
        struct object *i;

        list_for_each_entry(i, &amp;cache, list)
                if (i-&gt;id == id) {
                        i-&gt;popularity++;
                        return i;
                }
        return NULL;
}

/* Must be holding cache_lock */
static void __cache_delete(struct object *obj)
{
        BUG_ON(!obj);
        list_del(&amp;obj-&gt;list);
        kfree(obj);
        cache_num--;
}

/* Must be holding cache_lock */
static void __cache_add(struct object *obj)
{
        list_add(&amp;obj-&gt;list, &amp;cache);
        if (++cache_num > MAX_CACHE_SIZE) {
                struct object *i, *outcast = NULL;
                list_for_each_entry(i, &amp;cache, list) {
                        if (!outcast || i-&gt;popularity &lt; outcast-&gt;popularity)
                                outcast = i;
                }
                __cache_delete(outcast);
        }
}

int cache_add(int id, const char *name)
{
        struct object *obj;

        if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                return -ENOMEM;

        strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
        obj-&gt;id = id;
        obj-&gt;popularity = 0;

        mutex_lock(&amp;cache_lock);
        __cache_add(obj);
        mutex_unlock(&amp;cache_lock);
        return 0;
}

void cache_delete(int id)
{
        mutex_lock(&amp;cache_lock);
        __cache_delete(__cache_find(id));
        mutex_unlock(&amp;cache_lock);
}

int cache_find(int id, char *name)
{
        struct object *obj;
        int ret = -ENOENT;

        mutex_lock(&amp;cache_lock);
        obj = __cache_find(id);
        if (obj) {
                ret = 0;
                strcpy(name, obj-&gt;name);
        }
        mutex_unlock(&amp;cache_lock);
        return ret;
}
</programlisting>

    <para>
Note that we always make sure we have the cache_lock when we add,
delete, or look up the cache: both the cache infrastructure itself and
the contents of the objects are protected by the lock.  In this case
it's easy, since we copy the data for the user, and never let them
access the objects directly.
    </para>
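    <para>
A caller never sees the lock at all.  A hedged sketch of use from some
other file (the caller function is invented, error handling trimmed):
    </para>
    <programlisting>
void example_caller(void)
{
        char name[32];

        if (cache_add(27, "robin") != 0)
                return;
        if (cache_find(27, name) == 0)
                printk(KERN_DEBUG "id 27 is called %s\n", name);
        cache_delete(27);
}
    </programlisting>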
    <para>
There is a slight (and common) optimization here: in
<function>cache_add</function> we set up the fields of the object
before grabbing the lock.  This is safe, as no-one else can access it
until we put it in cache.
    </para>
    </sect1>

   <sect1 id="examples-interrupt">
    <title>Accessing From Interrupt Context</title>
    <para>
Now consider the case where <function>cache_find</function> can be
called from interrupt context: either a hardware interrupt or a
softirq.  An example would be a timer which deletes objects from the
cache.
    </para>
    <para>
The change is shown below, in standard patch format: the
<symbol>-</symbol> are lines which are taken away, and the
<symbol>+</symbol> are lines which are added.
    </para>
<programlisting>
--- cache.c.usercontext	2003-12-09 13:58:54.000000000 +1100
+++ cache.c.interrupt	2003-12-09 14:07:49.000000000 +1100
@@ -12,7 +12,7 @@
         int popularity;
 };

-static DEFINE_MUTEX(cache_lock);
+static DEFINE_SPINLOCK(cache_lock);
 static LIST_HEAD(cache);
 static unsigned int cache_num = 0;
 #define MAX_CACHE_SIZE 10
@@ -55,6 +55,7 @@
 int cache_add(int id, const char *name)
 {
         struct object *obj;
+        unsigned long flags;

         if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                 return -ENOMEM;
@@ -63,30 +64,33 @@
         obj-&gt;id = id;
         obj-&gt;popularity = 0;

-        mutex_lock(&amp;cache_lock);
+        spin_lock_irqsave(&amp;cache_lock, flags);
         __cache_add(obj);
-        mutex_unlock(&amp;cache_lock);
+        spin_unlock_irqrestore(&amp;cache_lock, flags);
         return 0;
 }

 void cache_delete(int id)
 {
-        mutex_lock(&amp;cache_lock);
+        unsigned long flags;
+
+        spin_lock_irqsave(&amp;cache_lock, flags);
         __cache_delete(__cache_find(id));
-        mutex_unlock(&amp;cache_lock);
+        spin_unlock_irqrestore(&amp;cache_lock, flags);
 }

 int cache_find(int id, char *name)
 {
         struct object *obj;
         int ret = -ENOENT;
+        unsigned long flags;

-        mutex_lock(&amp;cache_lock);
+        spin_lock_irqsave(&amp;cache_lock, flags);
         obj = __cache_find(id);
         if (obj) {
                 ret = 0;
                 strcpy(name, obj-&gt;name);
         }
-        mutex_unlock(&amp;cache_lock);
+        spin_unlock_irqrestore(&amp;cache_lock, flags);
         return ret;
 }
</programlisting>

    <para>
Note that the <function>spin_lock_irqsave</function> will turn off
interrupts if they are on; if we are already in an interrupt handler
it does nothing, so these functions are safe to call from
any context.
    </para>
    <para>
Unfortunately, <function>cache_add</function> calls
<function>kmalloc</function> with the <symbol>GFP_KERNEL</symbol>
flag, which is only legal in user context.  I have assumed that
<function>cache_add</function> is still only called in user context,
otherwise this should become a parameter to
<function>cache_add</function>.
    </para>
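    <para>
If we did want to allow callers in interrupt context, that parameter
might look like this (a sketch only, not part of the running example):
    </para>
<programlisting>
int cache_add(int id, const char *name, gfp_t gfp)
{
        struct object *obj;
        unsigned long flags;

        /* Interrupt-context callers pass GFP_ATOMIC, which never sleeps. */
        obj = kmalloc(sizeof(*obj), gfp);
        if (!obj)
                return -ENOMEM;

        strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
        obj-&gt;id = id;
        obj-&gt;popularity = 0;

        spin_lock_irqsave(&amp;cache_lock, flags);
        __cache_add(obj);
        spin_unlock_irqrestore(&amp;cache_lock, flags);
        return 0;
}
</programlisting>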
   </sect1>
   <sect1 id="examples-refcnt">
    <title>Exposing Objects Outside This File</title>
    <para>
If our objects contained more information, it might not be sufficient
to copy the information in and out: other parts of the code might want
to keep pointers to these objects, for example, rather than looking up
the id every time.  This produces two problems.
    </para>
    <para>
The first problem is that we use the <symbol>cache_lock</symbol> to
protect objects: we'd need to make this non-static so the rest of the
code can use it.  This makes locking trickier, as it is no longer all
in one place.
    </para>
    <para>
The second problem is the lifetime problem: if another structure keeps
a pointer to an object, it presumably expects that pointer to remain
valid.  Unfortunately, this is only guaranteed while you hold the
lock, otherwise someone might call <function>cache_delete</function>
and even worse, add another object, re-using the same address.
    </para>
    <para>
As there is only one lock, you can't hold it forever: no-one else would
get any work done.
    </para>
    <para>
The solution to this problem is to use a reference count: everyone who
has a pointer to the object increases it when they first get the
object, and drops the reference count when they're finished with it.
Whoever drops it to zero knows it is unused, and can actually delete it.
    </para>
    <para>
Here is the code:
    </para>

<programlisting>
--- cache.c.interrupt	2003-12-09 14:25:43.000000000 +1100
+++ cache.c.refcnt	2003-12-09 14:33:05.000000000 +1100
@@ -7,6 +7,7 @@
 struct object
 {
         struct list_head list;
+        unsigned int refcnt;
         int id;
         char name[32];
         int popularity;
@@ -17,6 +18,35 @@
 static unsigned int cache_num = 0;
 #define MAX_CACHE_SIZE 10

+static void __object_put(struct object *obj)
+{
+        if (--obj-&gt;refcnt == 0)
+                kfree(obj);
+}
+
+static void __object_get(struct object *obj)
+{
+        obj-&gt;refcnt++;
+}
+
+void object_put(struct object *obj)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&amp;cache_lock, flags);
+        __object_put(obj);
+        spin_unlock_irqrestore(&amp;cache_lock, flags);
+}
+
+void object_get(struct object *obj)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&amp;cache_lock, flags);
+        __object_get(obj);
+        spin_unlock_irqrestore(&amp;cache_lock, flags);
+}
+
 /* Must be holding cache_lock */
 static struct object *__cache_find(int id)
 {
@@ -35,6 +65,7 @@
 {
         BUG_ON(!obj);
         list_del(&amp;obj-&gt;list);
+        __object_put(obj);
         cache_num--;
 }

@@ -63,6 +94,7 @@
         strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
         obj-&gt;id = id;
         obj-&gt;popularity = 0;
+        obj-&gt;refcnt = 1; /* The cache holds a reference */

         spin_lock_irqsave(&amp;cache_lock, flags);
         __cache_add(obj);
@@ -79,18 +111,15 @@
         spin_unlock_irqrestore(&amp;cache_lock, flags);
 }

-int cache_find(int id, char *name)
+struct object *cache_find(int id)
 {
         struct object *obj;
-        int ret = -ENOENT;
         unsigned long flags;

         spin_lock_irqsave(&amp;cache_lock, flags);
         obj = __cache_find(id);
-        if (obj) {
-                ret = 0;
-                strcpy(name, obj-&gt;name);
-        }
+        if (obj)
+                __object_get(obj);
         spin_unlock_irqrestore(&amp;cache_lock, flags);
-        return ret;
+        return obj;
 }
</programlisting>

<para>
We encapsulate the reference counting in the standard 'get' and 'put'
functions.  Now we can return the object itself from
<function>cache_find</function> which has the advantage that the user
can now sleep holding the object (eg. to
<function>copy_to_user</function> the name to userspace).
</para>
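<para>
A hedged sketch of what a caller now looks like
(<function>use_object</function> is invented):
</para>
<programlisting>
struct object *obj = cache_find(27);

if (obj) {
        /* We hold a reference: the object cannot vanish under us,
         * even if cache_delete(27) runs right now. */
        use_object(obj);                /* hypothetical, may sleep */
        object_put(obj);                /* drop our reference */
}
</programlisting>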
<para>
The other point to note is that I said a reference should be held for
every pointer to the object: thus the reference count is 1 when first
inserted into the cache.  In some versions of this pattern the cache
itself does not hold a reference count, but they are more complicated.
</para>

    <sect2 id="examples-refcnt-atomic">
     <title>Using Atomic Operations For The Reference Count</title>
<para>
In practice, <type>atomic_t</type> would usually be used for
<structfield>refcnt</structfield>.  There are a number of atomic
operations defined in
<filename class="headerfile">include/asm/atomic.h</filename>: these are
guaranteed to be seen atomically from all CPUs in the system, so no
lock is required.  In this case, it is simpler than using spinlocks,
although for anything non-trivial using spinlocks is clearer.  The
<function>atomic_inc</function> and
<function>atomic_dec_and_test</function> are used instead of the
standard increment and decrement operators, and the lock is no longer
used to protect the reference count itself.
</para>

<programlisting>
--- cache.c.refcnt	2003-12-09 15:00:35.000000000 +1100
+++ cache.c.refcnt-atomic	2003-12-11 15:49:42.000000000 +1100
@@ -7,7 +7,7 @@
 struct object
 {
         struct list_head list;
-        unsigned int refcnt;
+        atomic_t refcnt;
         int id;
         char name[32];
         int popularity;
@@ -18,33 +18,15 @@
 static unsigned int cache_num = 0;
 #define MAX_CACHE_SIZE 10

-static void __object_put(struct object *obj)
-{
-        if (--obj-&gt;refcnt == 0)
-                kfree(obj);
-}
-
-static void __object_get(struct object *obj)
-{
-        obj-&gt;refcnt++;
-}
-
 void object_put(struct object *obj)
 {
-        unsigned long flags;
-
-        spin_lock_irqsave(&amp;cache_lock, flags);
-        __object_put(obj);
-        spin_unlock_irqrestore(&amp;cache_lock, flags);
+        if (atomic_dec_and_test(&amp;obj-&gt;refcnt))
+                kfree(obj);
 }

 void object_get(struct object *obj)
 {
-        unsigned long flags;
-
-        spin_lock_irqsave(&amp;cache_lock, flags);
-        __object_get(obj);
-        spin_unlock_irqrestore(&amp;cache_lock, flags);
+        atomic_inc(&amp;obj-&gt;refcnt);
 }

 /* Must be holding cache_lock */
@@ -65,7 +47,7 @@
 {
         BUG_ON(!obj);
         list_del(&amp;obj-&gt;list);
-        __object_put(obj);
+        object_put(obj);
         cache_num--;
 }

@@ -94,7 +76,7 @@
         strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
         obj-&gt;id = id;
         obj-&gt;popularity = 0;
-        obj-&gt;refcnt = 1; /* The cache holds a reference */
+        atomic_set(&amp;obj-&gt;refcnt, 1); /* The cache holds a reference */

         spin_lock_irqsave(&amp;cache_lock, flags);
         __cache_add(obj);
@@ -119,7 +101,7 @@
         spin_lock_irqsave(&amp;cache_lock, flags);
         obj = __cache_find(id);
         if (obj)
-                __object_get(obj);
+                object_get(obj);
         spin_unlock_irqrestore(&amp;cache_lock, flags);
         return obj;
 }
</programlisting>
</sect2>
</sect1>

   <sect1 id="examples-lock-per-obj">
    <title>Protecting The Objects Themselves</title>
    <para>
In these examples, we assumed that the objects (except the reference
counts) never changed once they are created.
If we wanted to allow
the name to change, there are three possibilities:
    </para>
    <itemizedlist>
     <listitem>
      <para>
You can make <symbol>cache_lock</symbol> non-static, and tell people
to grab that lock before changing the name in any object.
      </para>
     </listitem>
     <listitem>
      <para>
You can provide a <function>cache_obj_rename</function> which grabs
this lock and changes the name for the caller, and tell everyone to
use that function (see the sketch at the end of this section).
      </para>
     </listitem>
     <listitem>
      <para>
You can make the <symbol>cache_lock</symbol> protect only the cache
itself, and use another lock to protect the name.
      </para>
     </listitem>
    </itemizedlist>

    <para>
Theoretically, you can make the locks as fine-grained as one lock for
every field, for every object.  In practice, the most common variants
are:
    </para>
    <itemizedlist>
     <listitem>
      <para>
One lock which protects the infrastructure (the <symbol>cache</symbol>
list in this example) and all the objects.  This is what we have done
so far.
      </para>
     </listitem>
     <listitem>
      <para>
One lock which protects the infrastructure (including the list
pointers inside the objects), and one lock inside the object which
protects the rest of that object.
      </para>
     </listitem>
     <listitem>
      <para>
Multiple locks to protect the infrastructure (eg. one lock per hash
chain), possibly with a separate per-object lock.
      </para>
     </listitem>
    </itemizedlist>

<para>
Here is the "lock-per-object" implementation:
</para>
<programlisting>
--- cache.c.refcnt-atomic	2003-12-11 15:50:54.000000000 +1100
+++ cache.c.perobjectlock	2003-12-11 17:15:03.000000000 +1100
@@ -6,11 +6,17 @@

 struct object
 {
+        /* These two protected by cache_lock. */
         struct list_head list;
+        int popularity;
+
         atomic_t refcnt;
+
+        /* Doesn't change once created. */
         int id;
+
+        spinlock_t lock; /* Protects the name */
         char name[32];
-        int popularity;
 };

 static DEFINE_SPINLOCK(cache_lock);
@@ -77,6 +84,7 @@
         obj-&gt;id = id;
         obj-&gt;popularity = 0;
         atomic_set(&amp;obj-&gt;refcnt, 1); /* The cache holds a reference */
+        spin_lock_init(&amp;obj-&gt;lock);

         spin_lock_irqsave(&amp;cache_lock, flags);
         __cache_add(obj);
</programlisting>

<para>
Note that I decided that the <structfield>popularity</structfield>
count should be protected by the <symbol>cache_lock</symbol> rather
than the per-object lock: this is because it (like the
<structname>struct list_head</structname> inside the object) is
logically part of the infrastructure.  This way, I don't need to grab
the lock of every object in <function>__cache_add</function> when
seeking the least popular.
</para>

<para>
I also decided that the <structfield>id</structfield> member is
unchangeable, so I don't need to grab each object lock in
<function>__cache_find()</function> to examine the
<structfield>id</structfield>: the object lock is only used by a
caller who wants to read or write the <structfield>name</structfield>
field.
</para>

<para>
Note also that I added a comment describing what data was protected by
which locks.  This is extremely important, as it describes the runtime
behavior of the code, and can be hard to deduce from just reading the
code.  And as Alan Cox says, <quote>Lock data, not code</quote>.
</para>
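<para>
A hedged sketch of the <function>cache_obj_rename</function> wrapper
suggested above, built on the per-object lock (it uses
<function>spin_lock_irqsave</function> so it is safe even if some
callers are in interrupt context; the reader function is invented too):
</para>
<programlisting>
void cache_obj_rename(struct object *obj, const char *name)
{
        unsigned long flags;

        spin_lock_irqsave(&amp;obj-&gt;lock, flags);
        strlcpy(obj-&gt;name, name, sizeof(obj-&gt;name));
        spin_unlock_irqrestore(&amp;obj-&gt;lock, flags);
}

/* Readers of the name need the same lock, or they may see a
 * half-written name. */
void cache_obj_get_name(struct object *obj, char *name, size_t len)
{
        unsigned long flags;

        spin_lock_irqsave(&amp;obj-&gt;lock, flags);
        strlcpy(name, obj-&gt;name, len);
        spin_unlock_irqrestore(&amp;obj-&gt;lock, flags);
}
</programlisting>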
</sect1>
</chapter>

  <chapter id="common-problems">
   <title>Common Problems</title>
   <sect1 id="deadlock">
    <title>Deadlock: Simple and Advanced</title>

    <para>
      There is a coding bug where a piece of code tries to grab a
      spinlock twice: it will spin forever, waiting for the lock to
      be released (spinlocks, rwlocks and semaphores are not
      recursive in Linux).  This is trivial to diagnose: not a
      stay-up-five-nights-talk-to-fluffy-code-bunnies kind of
      problem.
    </para>

    <para>
      For a slightly more complex case, imagine you have a region
      shared by a softirq and user context.  If you use a
      <function>spin_lock()</function> call to protect it, it is
      possible that the user context will be interrupted by the softirq
      while it holds the lock, and the softirq will then spin
      forever trying to get the same lock.
    </para>

    <para>
      Both of these are called deadlock, and as shown above, it can
      occur even with a single CPU (although not on UP compiles,
      since spinlocks vanish on kernel compiles with
      <symbol>CONFIG_SMP</symbol>=n. You'll still get data corruption
      in the second example).
    </para>

    <para>
      This complete lockup is easy to diagnose: on SMP boxes the
      watchdog timer or compiling with <symbol>DEBUG_SPINLOCK</symbol> set
      (<filename>include/linux/spinlock.h</filename>) will show this up
      immediately when it happens.
    </para>

    <para>
      A more complex problem is the so-called 'deadly embrace',
      involving two or more locks.  Say you have a hash table: each
      entry in the table is a spinlock, and a chain of hashed
      objects.  Inside a softirq handler, you sometimes want to
      alter an object from one place in the hash to another: you
      grab the spinlock of the old hash chain and the spinlock of
      the new hash chain, and delete the object from the old one,
      and insert it in the new one.
    </para>

    <para>
      There are two problems here.  First, if your code ever
      tries to move the object to the same chain, it will deadlock
      with itself as it tries to lock it twice.  Secondly, if the
      same softirq on another CPU is trying to move another object
      in the reverse direction, the following could happen:
    </para>

    <table>
     <title>Consequences</title>

     <tgroup cols="2" align="left">

      <thead>
       <row>
        <entry>CPU 1</entry>
        <entry>CPU 2</entry>
       </row>
      </thead>

      <tbody>
       <row>
        <entry>Grab lock A -&gt; OK</entry>
        <entry>Grab lock B -&gt; OK</entry>
       </row>
       <row>
        <entry>Grab lock B -&gt; spin</entry>
        <entry>Grab lock A -&gt; spin</entry>
       </row>
      </tbody>
     </tgroup>
    </table>

    <para>
      The two CPUs will spin forever, waiting for the other to give up
      their lock.  It will look, smell, and feel like a crash.
    </para>
   </sect1>

   <sect1 id="techs-deadlock-prevent">
    <title>Preventing Deadlock</title>

    <para>
      Textbooks will tell you that if you always lock in the same
      order, you will never get this kind of deadlock.  Practice
      will tell you that this approach doesn't scale: when I
      create a new lock, I don't understand enough of the kernel
      to figure out where in the 5000 lock hierarchy it will fit.
    </para>
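    <para>
      For the hash-table example above, a consistent order can be had
      without a global hierarchy by comparing lock addresses (a hedged
      sketch; <structname>struct bucket</structname> is invented):
    </para>
    <programlisting>
static void move_object(struct bucket *old, struct bucket *new,
                        struct object *obj)
{
        if (old == new) {
                /* Same chain: lock only once, or we deadlock. */
                spin_lock(&amp;old-&gt;lock);
        } else if (old &lt; new) {
                /* Always lock the lower-addressed bucket first. */
                spin_lock(&amp;old-&gt;lock);
                spin_lock(&amp;new-&gt;lock);
        } else {
                spin_lock(&amp;new-&gt;lock);
                spin_lock(&amp;old-&gt;lock);
        }

        list_del(&amp;obj-&gt;list);
        list_add(&amp;obj-&gt;list, &amp;new-&gt;chain);

        spin_unlock(&amp;old-&gt;lock);
        if (old != new)
                spin_unlock(&amp;new-&gt;lock);
}
    </programlisting>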
    <para>
      The best locks are encapsulated: they never get exposed in
      headers, and are never held around calls to non-trivial
      functions outside the same file.  You can read through this
      code and see that it will never deadlock, because it never
      tries to grab another lock while it has that one.  People
      using your code don't even need to know you are using a
      lock.
    </para>

    <para>
      A classic problem here is when you provide callbacks or
      hooks: if you call these with the lock held, you risk simple
      deadlock, or a deadly embrace (who knows what the callback
      will do?).  Remember, the other programmers are out to get
      you, so don't do this.
    </para>

    <sect2 id="techs-deadlock-overprevent">
     <title>Overzealous Prevention Of Deadlocks</title>

     <para>
       Deadlocks are problematic, but not as bad as data
       corruption.  Code which grabs a read lock, searches a list,
       fails to find what it wants, drops the read lock, grabs a
       write lock and inserts the object has a race condition.
     </para>

     <para>
       If you don't see why, please stay the fuck away from my code.
     </para>
    </sect2>
   </sect1>

   <sect1 id="racing-timers">
    <title>Racing Timers: A Kernel Pastime</title>

    <para>
      Timers can produce their own special problems with races.
      Consider a collection of objects (list, hash, etc) where each
      object has a timer which is due to destroy it.
    </para>

    <para>
      If you want to destroy the entire collection (say on module
      removal), you might do the following:
    </para>

    <programlisting>
        /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
           HUNGARIAN NOTATION */
        spin_lock_bh(&amp;list_lock);

        while (list) {
                struct foo *next = list-&gt;next;
                del_timer(&amp;list-&gt;timer);
                kfree(list);
                list = next;
        }

        spin_unlock_bh(&amp;list_lock);
    </programlisting>

    <para>
      Sooner or later, this will crash on SMP, because a timer can
      have just gone off before the <function>spin_lock_bh()</function>,
      and it will only get the lock after we
      <function>spin_unlock_bh()</function>, and then try to free
      the element (which has already been freed!).
    </para>

    <para>
      This can be avoided by checking the result of
      <function>del_timer()</function>: if it returns
      <returnvalue>1</returnvalue>, the timer has been deleted.
      If <returnvalue>0</returnvalue>, it means (in this
      case) that it is currently running, so we can do:
    </para>

    <programlisting>
        retry:
                spin_lock_bh(&amp;list_lock);

                while (list) {
                        struct foo *next = list-&gt;next;
                        if (!del_timer(&amp;list-&gt;timer)) {
                                /* Give timer a chance to delete this */
                                spin_unlock_bh(&amp;list_lock);
                                goto retry;
                        }
                        kfree(list);
                        list = next;
                }

                spin_unlock_bh(&amp;list_lock);
    </programlisting>

    <para>
      Another common problem is deleting timers which restart
      themselves (by calling <function>add_timer()</function> at the end
      of their timer function).  Because this is a fairly common case
      which is prone to races, you should use <function>del_timer_sync()</function>
      (<filename class="headerfile">include/linux/timer.h</filename>)
      to handle this case: it deletes the timer, and if the timer
      function is currently running, waits and retries until it has
      finally stopped the timer from adding itself back in.
    </para>
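    <para>
      A hedged sketch of such a self-restarting timer, and how a
      shutdown path might stop it (the <structfield>dying</structfield>
      flag and the names are invented):
    </para>
    <programlisting>
static void foo_timer_fn(unsigned long data)
{
        struct foo *f = (struct foo *)data;

        do_periodic_work(f);                    /* hypothetical */
        if (!f-&gt;dying)
                mod_timer(&amp;f-&gt;timer, jiffies + HZ);  /* restart ourselves */
}

static void foo_shutdown(struct foo *f)
{
        f-&gt;dying = 1;                 /* stop the self-restart */
        del_timer_sync(&amp;f-&gt;timer);    /* wait out a running handler, and
                                       * delete the timer if it re-armed
                                       * before seeing 'dying' */
        kfree(f);                     /* now safe: the timer is gone */
}
    </programlisting>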
   </sect1>

  </chapter>

  <chapter id="Efficiency">
   <title>Locking Speed</title>

   <para>
There are three main things to worry about when considering speed of
some code which does locking.  First is concurrency: how many things
are going to be waiting while someone else is holding a lock.  Second
is the time taken to actually acquire and release an uncontended lock.
Third is using fewer, or smarter locks.  I'm assuming that the lock is
used fairly often: otherwise, you wouldn't be concerned about
efficiency.
</para>
   <para>
Concurrency depends on how long the lock is usually held: you should
hold the lock for as long as needed, but no longer.  In the cache
example, we always create the object without the lock held, and then
grab the lock only when we are ready to insert it in the list.
</para>
   <para>
Acquisition times depend on how much damage the lock operations do to
the pipeline (pipeline stalls) and how likely it is that this CPU was
the last one to grab the lock (ie. is the lock cache-hot for this
CPU): on a machine with more CPUs, this likelihood drops fast.
Consider a 700MHz Intel Pentium III: an instruction takes about 0.7ns,
an atomic increment takes about 58ns, a lock which is cache-hot on
this CPU takes 160ns, and a cacheline transfer from another CPU takes
an additional 170 to 360ns.  (These figures are from Paul McKenney's
<ulink url="http://www.linuxjournal.com/article.php?sid=6993"> Linux
Journal RCU article</ulink>).
</para>
   <para>
These two aims conflict: holding a lock for a short time might be done
by splitting locks into parts (such as in our final per-object-lock
example), but this increases the number of lock acquisitions, and the
results are often slower than having a single lock.  This is another
reason to advocate locking simplicity.
</para>
   <para>
The third concern is addressed below: there are some methods to reduce
the amount of locking which needs to be done.
</para>

   <sect1 id="efficiency-rwlocks">
    <title>Read/Write Lock Variants</title>

    <para>
      Both spinlocks and semaphores have read/write variants:
      <type>rwlock_t</type> and <structname>struct rw_semaphore</structname>.
      These divide users into two classes: the readers and the writers.  If
      you are only reading the data, you can get a read lock, but to write to
      the data you need the write lock.  Many people can hold a read lock,
      but a writer must be the sole holder.
    </para>

    <para>
      If your code divides neatly along reader/writer lines (as our
      cache code does), and the lock is held by readers for
      significant lengths of time, using these locks can help.  They
      are slightly slower than the normal locks though, so in practice
      <type>rwlock_t</type> is not usually worthwhile.
    </para>
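    <para>
      For completeness, a hedged sketch of the spinlock variant (the
      table and its names are invented):
    </para>
    <programlisting>
static DEFINE_RWLOCK(map_lock);
static int map[64];                  /* hypothetical shared table */

int map_lookup(unsigned int idx)
{
        int val;

        read_lock(&amp;map_lock);        /* many readers may hold this */
        val = map[idx];
        read_unlock(&amp;map_lock);
        return val;
}

void map_update(unsigned int idx, int val)
{
        write_lock(&amp;map_lock);       /* excludes readers and writers */
        map[idx] = val;
        write_unlock(&amp;map_lock);
}
    </programlisting>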
   </sect1>

   <sect1 id="efficiency-read-copy-update">
    <title>Avoiding Locks: Read Copy Update</title>

    <para>
      There is a special method of read/write locking called Read Copy
      Update.  Using RCU, the readers can avoid taking a lock
      altogether: as we expect our cache to be read more often than
      updated (otherwise the cache is a waste of time), it is a
      candidate for this optimization.
    </para>

    <para>
      How do we get rid of read locks?  Getting rid of read locks
      means that writers may be changing the list underneath the
      readers.  That is actually quite simple: we can read a linked
      list while an element is being added if the writer adds the
      element very carefully.  For example, adding
      <symbol>new</symbol> to a single linked list called
      <symbol>list</symbol>:
    </para>

    <programlisting>
        new-&gt;next = list-&gt;next;
        wmb();
        list-&gt;next = new;
    </programlisting>

    <para>
      The <function>wmb()</function> is a write memory barrier.  It
      ensures that the first operation (setting the new element's
      <symbol>next</symbol> pointer) is complete and will be seen by
      all CPUs, before the second operation is (putting the new
      element into the list).  This is important, since modern
      compilers and modern CPUs can both reorder instructions unless
      told otherwise: we want a reader to either not see the new
      element at all, or see the new element with the
      <symbol>next</symbol> pointer correctly pointing at the rest of
      the list.
    </para>
    <para>
      Fortunately, there is a function to do this for standard
      <structname>struct list_head</structname> lists:
      <function>list_add_rcu()</function>
      (<filename>include/linux/list.h</filename>).
    </para>
    <para>
      Removing an element from the list is even simpler: we replace
      the pointer to the old element with a pointer to its successor,
      and readers will either see it, or skip over it.
    </para>
    <programlisting>
        list-&gt;next = old-&gt;next;
    </programlisting>
    <para>
      There is <function>list_del_rcu()</function>
      (<filename>include/linux/list.h</filename>) which does this (the
      normal version poisons the old object, which we don't want).
    </para>
    <para>
      The reader must also be careful: some CPUs can look through the
      <symbol>next</symbol> pointer to start reading the contents of
      the next element early, but don't realize that the pre-fetched
      contents are wrong when the <symbol>next</symbol> pointer changes
      underneath them.  Once again, there is a
      <function>list_for_each_entry_rcu()</function>
      (<filename>include/linux/list.h</filename>) to help you.  Of
      course, writers can just use
      <function>list_for_each_entry()</function>, since there cannot
      be two simultaneous writers.
    </para>
    <para>
      Our final dilemma is this: when can we actually destroy the
      removed element?  Remember, a reader might be stepping through
      this element in the list right now: if we free this element and
      the <symbol>next</symbol> pointer changes, the reader will jump
      off into garbage and crash.  We need to wait until we know that
      all the readers who were traversing the list when we deleted the
      element are finished.  We use <function>call_rcu()</function> to
      register a callback which will actually destroy the object once
      the readers are finished.
    </para>
    <para>
      But how does Read Copy Update know when the readers are
      finished?
      The method is this: firstly, the readers always
      traverse the list inside
      <function>rcu_read_lock()</function>/<function>rcu_read_unlock()</function>
      pairs: these simply disable preemption so the reader won't go to
      sleep while reading the list.
    </para>
    <para>
      RCU then waits until every other CPU has slept at least once:
      since readers cannot sleep, we know that any readers which were
      traversing the list during the deletion are finished, and the
      callback is triggered.  The real Read Copy Update code is a
      little more optimized than this, but this is the fundamental
      idea.
    </para>

<programlisting>
--- cache.c.perobjectlock	2003-12-11 17:15:03.000000000 +1100
+++ cache.c.rcupdate	2003-12-11 17:55:14.000000000 +1100
@@ -1,15 +1,18 @@
 #include &lt;linux/list.h&gt;
 #include &lt;linux/slab.h&gt;
 #include &lt;linux/string.h&gt;
+#include &lt;linux/rcupdate.h&gt;
 #include &lt;linux/semaphore.h&gt;
 #include &lt;asm/errno.h&gt;

 struct object
 {
-        /* These two protected by cache_lock. */
+        /* This is protected by RCU */
         struct list_head list;
         int popularity;

+        struct rcu_head rcu;
+
         atomic_t refcnt;

         /* Doesn't change once created. */
@@ -40,7 +43,7 @@
 {
         struct object *i;

-        list_for_each_entry(i, &amp;cache, list) {
+        list_for_each_entry_rcu(i, &amp;cache, list) {
                 if (i-&gt;id == id) {
                         i-&gt;popularity++;
                         return i;
@@ -49,19 +52,25 @@
         return NULL;
 }

+/* Final discard done once we know no readers are looking. */
+static void cache_delete_rcu(struct rcu_head *rcu)
+{
+        object_put(container_of(rcu, struct object, rcu));
+}
+
 /* Must be holding cache_lock */
 static void __cache_delete(struct object *obj)
 {
         BUG_ON(!obj);
-        list_del(&amp;obj-&gt;list);
-        object_put(obj);
+        list_del_rcu(&amp;obj-&gt;list);
         cache_num--;
+        call_rcu(&amp;obj-&gt;rcu, cache_delete_rcu);
 }

 /* Must be holding cache_lock */
 static void __cache_add(struct object *obj)
 {
-        list_add(&amp;obj-&gt;list, &amp;cache);
+        list_add_rcu(&amp;obj-&gt;list, &amp;cache);
         if (++cache_num > MAX_CACHE_SIZE) {
                 struct object *i, *outcast = NULL;
                 list_for_each_entry(i, &amp;cache, list) {
@@ -85,6 +94,7 @@
         obj-&gt;popularity = 0;
         atomic_set(&amp;obj-&gt;refcnt, 1); /* The cache holds a reference */
         spin_lock_init(&amp;obj-&gt;lock);
+        INIT_RCU_HEAD(&amp;obj-&gt;rcu);

         spin_lock_irqsave(&amp;cache_lock, flags);
         __cache_add(obj);
@@ -104,12 +114,11 @@
 struct object *cache_find(int id)
 {
         struct object *obj;
-        unsigned long flags;

-        spin_lock_irqsave(&amp;cache_lock, flags);
+        rcu_read_lock();
         obj = __cache_find(id);
         if (obj)
                 object_get(obj);
-        spin_unlock_irqrestore(&amp;cache_lock, flags);
+        rcu_read_unlock();
         return obj;
 }
</programlisting>

<para>
Note that the reader will alter the
<structfield>popularity</structfield> member in
<function>__cache_find()</function>, and now it doesn't hold a lock.
One solution would be to make it an <type>atomic_t</type>, but for
this usage, we don't really care about races: an approximate result is
good enough, so I didn't change it.
</para>
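<para>
A hedged sketch of what a completely lock-free reader could look like
(the function name is invented; it relies on
<function>__cache_find</function> being exposed, as discussed below).
Note that the object may not be touched after
<function>rcu_read_unlock()</function> unless a reference is held:
</para>
<programlisting>
int cache_id_exists(int id)
{
        int ret;

        rcu_read_lock();                  /* just disables preemption */
        ret = (__cache_find(id) != NULL);
        rcu_read_unlock();
        return ret;
}
</programlisting>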
<para>
The result is that <function>cache_find()</function> requires no
synchronization with any other functions, so is almost as fast on SMP
as it would be on UP.
</para>

<para>
There is a further optimization possible here: remember our original
cache code, where there were no reference counts and the caller simply
held the lock whenever using the object?  This is still possible: if
you hold the lock, no-one can delete the object, so you don't need to
get and put the reference count.
</para>

<para>
Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling
<function>cache_find()</function> and
<function>object_put()</function> does not need to actually get and
put the reference count: we could expose
<function>__cache_find()</function> by making it non-static, and
such callers could simply call that.
</para>
<para>
The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP
machines due to caching.
</para>
   </sect1>

   <sect1 id="per-cpu">
    <title>Per-CPU Data</title>

    <para>
      Another technique for avoiding locking which is used fairly
      widely is to duplicate information for each CPU.  For example,
      if you wanted to keep a count of a common condition, you could
      use a spin lock and a single counter.  Nice and simple.
    </para>

    <para>
      If that was too slow (it's usually not, but if you've got a
      really big machine to test on and can show that it is), you
      could instead use a counter for each CPU, then none of them need
      an exclusive lock.  See <function>DEFINE_PER_CPU()</function>,
      <function>get_cpu_var()</function> and
      <function>put_cpu_var()</function>
      (<filename class="headerfile">include/linux/percpu.h</filename>).
    </para>

    <para>
      Of particular use for simple per-cpu counters is the
      <type>local_t</type> type, and the
      <function>cpu_local_inc()</function> and related functions,
      which are more efficient than simple code on some architectures
      (<filename class="headerfile">include/asm/local.h</filename>).
    </para>

    <para>
      Note that there is no simple, reliable way of getting an exact
      value of such a counter, without introducing more locks.  This
      is not a problem for some uses.
    </para>
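    <para>
      A hedged sketch of such a per-CPU counter (the names are
      invented; an exact total is impossible, as noted above, but the
      sum is a good approximation):
    </para>
    <programlisting>
static DEFINE_PER_CPU(unsigned long, foo_count);

/* Fast path: bump our own CPU's counter, no lock needed. */
static void count_foo(void)
{
        get_cpu_var(foo_count)++;     /* disables preemption ... */
        put_cpu_var(foo_count);       /* ... and re-enables it */
}

/* Slow path: an approximate total. */
static unsigned long read_foo_count(void)
{
        unsigned long sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                sum += per_cpu(foo_count, cpu);
        return sum;
}
    </programlisting>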
   </sect1>
 </chapter>

 <chapter id="sleeping-things">
  <title>What Functions Are Safe To Call From Interrupts?</title>

  <para>
    Many functions in the kernel sleep (ie. call schedule())
    directly or indirectly: you can never call them while holding a
    spinlock, or with preemption disabled.  This also means you need
    to be in user context: calling them from an interrupt is illegal.
  </para>

   <sect1 id="sleeping">
    <title>Some Functions Which Sleep</title>

    <para>
      The most common ones are listed below, but you usually have to
      read the code to find out if other calls are safe.  If everyone
      else who calls it can sleep, you probably need to be able to
      sleep, too.  In particular, registration and deregistration
      functions usually expect to be called from user context, and can
      sleep.
    </para>

    <itemizedlist>
     <listitem>
      <para>
        Accesses to
        <firstterm linkend="gloss-userspace">userspace</firstterm>:
      </para>
      <itemizedlist>
       <listitem>
        <para>
          <function>copy_from_user()</function>
        </para>
       </listitem>
       <listitem>
        <para>
          <function>copy_to_user()</function>
        </para>
       </listitem>
       <listitem>
        <para>
          <function>get_user()</function>
        </para>
       </listitem>
       <listitem>
        <para>
          <function>put_user()</function>
        </para>
       </listitem>
      </itemizedlist>
     </listitem>

     <listitem>
      <para>
        <function>kmalloc(GFP_KERNEL)</function>
      </para>
     </listitem>

     <listitem>
      <para>
        <function>down_interruptible()</function> and
        <function>down()</function>
      </para>
      <para>
        There is a <function>down_trylock()</function> which can be
        used inside interrupt context, as it will not sleep.
        <function>up()</function> will also never sleep.  (See the
        sketch after this list.)
      </para>
     </listitem>
    </itemizedlist>
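    <para>
      As an illustration of the difference, here is a minimal sketch
      (the semaphore and function names are invented): user context
      may sleep on the semaphore, but interrupt context may only try
      to grab it.
    </para>

<programlisting>
static DECLARE_MUTEX(my_sem);

/* User context: sleeping is fine. */
int setup_something(void)
{
	if (down_interruptible(&amp;my_sem))
		return -EINTR;	/* a signal interrupted us */
	/* ... can also do things like kmalloc(size, GFP_KERNEL) here ... */
	up(&amp;my_sem);
	return 0;
}

/* Interrupt context: must not sleep, so only try to get it. */
void poke_something(void)
{
	if (down_trylock(&amp;my_sem))
		return;		/* didn't get it: give up */
	/* ... */
	up(&amp;my_sem);
}
</programlisting>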
   </sect1>

   <sect1 id="dont-sleep">
    <title>Some Functions Which Don't Sleep</title>

    <para>
      Some functions are safe to call from any context, or while
      holding almost any lock.
    </para>

    <itemizedlist>
     <listitem>
      <para>
        <function>printk()</function>
      </para>
     </listitem>
     <listitem>
      <para>
        <function>kfree()</function>
      </para>
     </listitem>
     <listitem>
      <para>
        <function>add_timer()</function> and <function>del_timer()</function>
        (see the timer sketch after this list)
      </para>
     </listitem>
    </itemizedlist>
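    <para>
      For example, a timer can safely be set up and deleted from
      almost anywhere.  This is only a sketch: the timer, its handler
      and the surrounding functions are invented names.
    </para>

<programlisting>
static struct timer_list my_timer;

/* Runs in software interrupt context: must not sleep. */
static void my_timer_fn(unsigned long data)
{
	printk(KERN_DEBUG "my_timer fired: data %lu\n", data);
}

void start_polling(void)
{
	setup_timer(&amp;my_timer, my_timer_fn, 0);
	my_timer.expires = jiffies + HZ;	/* roughly a second from now */
	add_timer(&amp;my_timer);
}

void stop_polling(void)
{
	del_timer(&amp;my_timer);	/* note: doesn't wait for a running handler */
}
</programlisting>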
   </sect1>
 </chapter>

 <chapter id="references">
  <title>Further reading</title>

  <itemizedlist>
   <listitem>
    <para>
      <filename>Documentation/spinlocks.txt</filename>:
      Linus Torvalds' spinlocking tutorial in the kernel sources.
    </para>
   </listitem>

   <listitem>
    <para>
      Unix Systems for Modern Architectures: Symmetric
      Multiprocessing and Caching for Kernel Programmers:
    </para>

    <para>
      Curt Schimmel's very good introduction to kernel level
      locking (not written for Linux, but nearly everything
      applies).  The book is expensive, but really worth every
      penny to understand SMP locking.  [ISBN: 0201633388]
    </para>
   </listitem>
  </itemizedlist>
 </chapter>

 <chapter id="thanks">
  <title>Thanks</title>

  <para>
    Thanks to Telsa Gwynne for DocBooking, neatening and adding
    style.
  </para>

  <para>
    Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul
    Mackerras, Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim
    Waugh, Pete Zaitcev, James Morris, Robert Love, Paul McKenney and
    John Ashby for proofreading, correcting, flaming and commenting.
  </para>

  <para>
    Thanks to the cabal for having no influence on this document.
  </para>
 </chapter>

 <glossary id="glossary">
  <title>Glossary</title>

  <glossentry id="gloss-preemption">
   <glossterm>preemption</glossterm>
   <glossdef>
    <para>
      Prior to 2.5, or when <symbol>CONFIG_PREEMPT</symbol> is
      unset, processes in user context inside the kernel would not
      preempt each other (ie. you had that CPU until you gave it up,
      except for interrupts).  With the addition of
      <symbol>CONFIG_PREEMPT</symbol> in 2.5.4, this changed: when
      in user context, higher priority tasks can "cut in": spinlocks
      were changed to disable preemption, even on UP.
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-bh">
   <glossterm>bh</glossterm>
   <glossdef>
    <para>
      Bottom Half: for historical reasons, functions with
      '_bh' in them often now refer to any software interrupt, e.g.
      <function>spin_lock_bh()</function> blocks any software interrupt
      on the current CPU.  Bottom halves are deprecated, and will
      eventually be replaced by tasklets.  Only one bottom half will be
      running at any time.
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-hwinterrupt">
   <glossterm>Hardware Interrupt / Hardware IRQ</glossterm>
   <glossdef>
    <para>
      Hardware interrupt request.  <function>in_irq()</function> returns
      <returnvalue>true</returnvalue> in a hardware interrupt handler.
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-interruptcontext">
   <glossterm>Interrupt Context</glossterm>
   <glossdef>
    <para>
      Not user context: processing a hardware irq or software irq.
      Indicated by the <function>in_interrupt()</function> macro
      returning <returnvalue>true</returnvalue>.
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-smp">
   <glossterm><acronym>SMP</acronym></glossterm>
   <glossdef>
    <para>
      Symmetric Multi-Processor: kernels compiled for multiple-CPU
      machines (CONFIG_SMP=y).
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-softirq">
   <glossterm>Software Interrupt / softirq</glossterm>
   <glossdef>
    <para>
      Software interrupt handler.  <function>in_irq()</function> returns
      <returnvalue>false</returnvalue>; <function>in_softirq()</function>
      returns <returnvalue>true</returnvalue>.  Tasklets and softirqs
      both fall into the category of 'software interrupts'.
    </para>
    <para>
      Strictly speaking a softirq is one of up to 32 enumerated software
      interrupts which can run on multiple CPUs at once.
      Sometimes used to refer to tasklets as
      well (ie. all software interrupts).
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-tasklet">
   <glossterm>tasklet</glossterm>
   <glossdef>
    <para>
      A dynamically-registrable software interrupt,
      which is guaranteed to only run on one CPU at a time.
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-timers">
   <glossterm>timer</glossterm>
   <glossdef>
    <para>
      A dynamically-registrable software interrupt, which is run at
      (or close to) a given time.  When running, it is just like a
      tasklet (in fact, they are called from the TIMER_SOFTIRQ).
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-up">
   <glossterm><acronym>UP</acronym></glossterm>
   <glossdef>
    <para>
      Uni-Processor: Non-SMP (CONFIG_SMP=n).
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-usercontext">
   <glossterm>User Context</glossterm>
   <glossdef>
    <para>
      The kernel executing on behalf of a particular process (ie. a
      system call or trap) or kernel thread.  You can tell which
      process with the <symbol>current</symbol> macro.  Not to
      be confused with userspace.  Can be interrupted by software or
      hardware interrupts.
    </para>
   </glossdef>
  </glossentry>

  <glossentry id="gloss-userspace">
   <glossterm>Userspace</glossterm>
   <glossdef>
    <para>
      A process executing its own code outside the kernel.
    </para>
   </glossdef>
  </glossentry>

 </glossary>
</book>