Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED defeat lockdep state tracking and
are hence deprecated.

Please use DEFINE_SPINLOCK()/DEFINE_RWLOCK() or
__SPIN_LOCK_UNLOCKED()/__RW_LOCK_UNLOCKED() as appropriate for static
initialization.

Dynamic initialization, when necessary, may be performed as
demonstrated below.

   spinlock_t xxx_lock;
   rwlock_t xxx_rw_lock;

   static int __init xxx_init(void)
   {
	spin_lock_init(&xxx_lock);
	rwlock_init(&xxx_rw_lock);
	...
   }

   module_init(xxx_init);

The following discussion is still valid, however, with the dynamic
initialization of spinlocks or with DEFINE_SPINLOCK, etc., used
instead of SPIN_LOCK_UNLOCKED.

-----------------------

On Fri, 2 Jan 1998, Doug Ledford wrote:
>
> I'm working on making the aic7xxx driver more SMP friendly (as well as
> importing the latest FreeBSD sequencer code to have 7895 support) and wanted
> to get some info from you. The goal here is to make the various routines
> SMP safe as well as UP safe during interrupts and other manipulating
> routines. So far, I've added a spin_lock variable to things like my queue
> structs. Now, from what I recall, there are some spin lock functions I can
> use to lock these spin locks from other use as opposed to a (nasty)
> save_flags(); cli(); stuff; restore_flags(); construct. Where do I find
> these routines and go about making use of them? Do they only lock on a
> per-processor basis or can they also lock say an interrupt routine from
> mucking with a queue if the queue routine was manipulating it when the
> interrupt occurred, or should I still use a cli(); based construct on that
> one?

See <asm/spinlock.h>. The basic version is:

   spinlock_t xxx_lock = SPIN_LOCK_UNLOCKED;


	unsigned long flags;

	spin_lock_irqsave(&xxx_lock, flags);
	... critical section here ...
	spin_unlock_irqrestore(&xxx_lock, flags);

and the above is always safe.
It will disable interrupts _locally_, but the
spinlock itself will guarantee the global lock, so it will guarantee that
there is only one thread-of-control within the region(s) protected by that
lock.

Note that it works well even under UP - the above sequence under UP
essentially is just the same as doing a

	unsigned long flags;

	save_flags(flags); cli();
	... critical section ...
	restore_flags(flags);

so the code does _not_ need to worry about UP vs SMP issues: the spinlocks
work correctly under both (and spinlocks are actually more efficient on
architectures that allow doing the "save_flags + cli" in one go because I
don't export that interface normally).

NOTE NOTE NOTE! The reason the spinlock is so much faster than a global
interrupt lock under SMP is exactly because it disables interrupts only on
the local CPU. The spin-lock is safe only when you _also_ use the lock
itself to do locking across CPU's, which implies that EVERYTHING that
touches a shared variable has to agree about the spinlock they want to
use.

The above is usually pretty simple (you usually need and want only one
spinlock for most things - using more than one spinlock can make things a
lot more complex and even slower and is usually worth it only for
sequences that you _know_ need to be split up: avoid it at all cost if you
aren't sure). HOWEVER, it _does_ mean that if you have some code that does

	cli();
	.. critical section ..
	sti();

and another sequence that does

	spin_lock_irqsave(&lock, flags);
	.. critical section ..
	spin_unlock_irqrestore(&lock, flags);

then they are NOT mutually exclusive, and the critical regions can happen
at the same time on two different CPU's. That's fine per se, but the
critical regions had better be critical for different things (ie they
can't stomp on each other).
The above is a problem mainly if you end up mixing code - for example the
routines in ll_rw_block() tend to use cli/sti to protect the atomicity of
their actions, and if a driver uses spinlocks instead then you should
think about issues like the above.

This is really the only hard part about spinlocks: once you start
using spinlocks they tend to expand to areas you might not have noticed
before, because you have to make sure the spinlocks correctly protect the
shared data structures _everywhere_ they are used. The spinlocks are most
easily added to places that are completely independent of other code (ie
internal driver data structures that nobody else ever touches, for
example).

----

Lesson 2: reader-writer spinlocks.

If your data accesses have a very natural pattern where you usually tend
to mostly read from the shared variables, the reader-writer lock
(rw_lock) versions of the spinlocks are often nicer. They allow multiple
readers to be in the same critical region at once, but if somebody wants
to change the variables it has to get an exclusive write lock. The
routines look the same as above:

   rwlock_t xxx_lock = RW_LOCK_UNLOCKED;


	unsigned long flags;

	read_lock_irqsave(&xxx_lock, flags);
	.. critical section that only reads the info ...
	read_unlock_irqrestore(&xxx_lock, flags);

	write_lock_irqsave(&xxx_lock, flags);
	.. read and write exclusive access to the info ...
	write_unlock_irqrestore(&xxx_lock, flags);

The above kind of lock is useful for complex data structures like linked
lists etc, especially when you know that most of the work is to just
traverse the list searching for entries without changing the list itself,
for example. Then you can use the read lock for that kind of list
traversal, which allows many concurrent readers. Anything that _changes_
the list will have to get the write lock.
Note: you cannot "upgrade" a read-lock to a write-lock, so if you at _any_
time need to do any changes (even if you don't do it every time), you have
to get the write-lock at the very beginning. I could fairly easily add a
primitive to create an "upgradeable" read-lock, but it hasn't been an issue
yet. Tell me if you'd want one.

----

Lesson 3: spinlocks revisited.

The single spin-lock primitives above are by no means the only ones. They
are the most safe ones, and the ones that work under all circumstances,
but partly _because_ they are safe they are also fairly slow. They are
much faster than a generic global cli/sti pair, but slower than they'd
need to be, because they do have to disable interrupts (which is just a
single instruction on a x86, but it's an expensive one - and on other
architectures it can be worse).

If you have a case where you have to protect a data structure across
several CPU's and you want to use spinlocks you can potentially use
cheaper versions of the spinlocks. IFF you know that the spinlocks are
never used in interrupt handlers, you can use the non-irq versions:

	spin_lock(&lock);
	...
	spin_unlock(&lock);

(and the equivalent read-write versions too, of course). The spinlock will
guarantee the same kind of exclusive access, and it will be much faster.
This is useful if you know that the data in question is only ever
manipulated from a "process context", ie no interrupts involved.

The reason you mustn't use these versions if you have interrupts that
play with the spinlock is that you can get deadlocks:

	spin_lock(&lock);
	...
	<- interrupt comes in:
		spin_lock(&lock);

where an interrupt tries to lock an already locked variable.
This is ok if
the other interrupt happens on another CPU, but it is _not_ ok if the
interrupt happens on the same CPU that already holds the lock, because the
lock will obviously never be released (because the interrupt is waiting
for the lock, and the lock-holder is interrupted by the interrupt and will
not continue until the interrupt has been processed).

(This is also the reason why the irq-versions of the spinlocks only need
to disable the _local_ interrupts - it's ok to use spinlocks in interrupts
on other CPU's, because an interrupt on another CPU doesn't interrupt the
CPU that holds the lock, so the lock-holder can continue and eventually
releases the lock).

Note that you can be clever with read-write locks and interrupts. For
example, if you know that the interrupt only ever gets a read-lock, then
you can use a non-irq version of read locks everywhere - because they
don't block on each other (and thus there is no dead-lock wrt interrupts).
But when you do the write-lock, you have to use the irq-safe version.

For an example of being clever with rw-locks, see the "waitqueue_lock"
handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
within an interrupt, they only read the queue in order to know whom to
wake up. So read-locks are safe (which is good: they are very common
indeed), while write-locks need to protect themselves against interrupts.

		Linus