Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

doc/rcuref: Document real world examples in kernel

Document similar real world examples in the kernel corresponding to the
second and third code snippets. Also correct an issue in
release_referenced() in the code snippet example.

Cc: oleg@redhat.com
Cc: jannh@google.com
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
[ paulmck: Do a bit of wordsmithing. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
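
The release_referenced() correction mentioned above (dropping the
reference with atomic_dec_and_test() and freeing the element only when
the count reaches zero) can be sketched as a userspace analogue.  This
is an illustrative assumption, not kernel code: struct element,
element_alloc(), and free() in place of kfree() are all made up here,
and C11 <stdatomic.h> stands in for the kernel's atomic_t API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative userspace analogue: the element carries its own
 * reference count, and free() stands in for kfree(). */
struct element {
	atomic_int rc;
	int data;
};

static struct element *element_alloc(int data)
{
	struct element *el = malloc(sizeof(*el));

	atomic_init(&el->rc, 1);	/* caller holds the initial reference */
	el->data = data;
	return el;
}

/* The corrected pattern: only the caller that drops the count to zero
 * frees the element.  atomic_fetch_sub() returning 1 plays the role of
 * the kernel's atomic_dec_and_test().  Returns 1 if this call freed
 * the element, 0 otherwise. */
static int release_referenced(struct element *el)
{
	if (atomic_fetch_sub(&el->rc, 1) == 1) {
		free(el);
		return 1;
	}
	return 0;
}
```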

Authored by Joel Fernandes (Google), committed by Paul E. McKenney
de1dbcee a188339c

+20 -1
Documentation/RCU/rcuref.txt
···
 Reference counting on elements of lists which are protected by traditional
 reader/writer spinlocks or semaphores are straightforward:
 
+CODE LISTING A:
 1.					2.
 add()					search_and_reference()
 {					{
···
 release_referenced()			delete()
 {					{
     ...					    write_lock(&list_lock);
-    atomic_dec(&el->rc, relfunc)	    ...
+    if(atomic_dec_and_test(&el->rc))	    ...
+        kfree(el);
     ...					    remove_element
 }					    write_unlock(&list_lock);
 					    ...
···
 has already been deleted from the list/array.  Use atomic_inc_not_zero()
 in this scenario as follows:
 
+CODE LISTING B:
 1.					2.
 add()					search_and_reference()
 {					{
···
 atomic_dec_and_test() may be moved from delete() to el_free()
 as follows:
 
+CODE LISTING C:
 1.					2.
 add()					search_and_reference()
 {					{
···
 any reader finds the element, that reader may safely acquire a reference
 without checking the value of the reference counter.
 
+A clear advantage of the RCU-based pattern in listing C over the one
+in listing B is that any call to search_and_reference() that locates
+a given object will succeed in obtaining a reference to that object,
+even given a concurrent invocation of delete() for that same object.
+Similarly, a clear advantage of both listings B and C over listing A is
+that a call to delete() is not delayed even if there are an arbitrarily
+large number of calls to search_and_reference() searching for the same
+object that delete() was invoked on.  Instead, all that is delayed is
+the eventual invocation of kfree(), which is usually not a problem on
+modern computer systems, even the small ones.
+
 In cases where delete() can sleep, synchronize_rcu() can be called from
 delete(), so that el_free() can be subsumed into delete as follows:
···
 	kfree(el);
     ...
 }
+
+As additional examples in the kernel, the pattern in listing C is used by
+reference counting of struct pid, while the pattern in listing B is used by
+struct posix_acl.
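
The atomic_inc_not_zero() step that listing B depends on can be sketched
with plain C11 atomics.  This is a userspace illustration under stated
assumptions: the function name inc_not_zero() and the use of
<stdatomic.h> are inventions for the sketch, not the kernel's
implementation.  The key property is the one the listing needs: a
searcher takes a reference only while the count is still nonzero, so it
can never resurrect an element whose last reference has already been
dropped.

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace sketch of atomic_inc_not_zero() semantics: increment the
 * count only if it is currently nonzero.  Returns 1 if a reference was
 * obtained, 0 if the element is already dying and the caller must
 * treat the lookup as a miss. */
static int inc_not_zero(atomic_int *rc)
{
	int old = atomic_load(rc);

	while (old != 0) {
		/* On failure the CAS reloads 'old', so the loop
		 * re-checks the zero case before retrying. */
		if (atomic_compare_exchange_weak(rc, &old, old + 1))
			return 1;	/* reference obtained */
	}
	return 0;	/* count already zero; do not touch the element */
}
```

The compare-and-swap loop is what distinguishes this from a plain
atomic_inc(): an unconditional increment could move the count from 0
back to 1 and hand the searcher a pointer to memory that is about to be
(or already has been) freed.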