Merge master.kernel.org:/pub/scm/linux/kernel/git/mingo/mutex-2.6

+2754 -649
+14 -8
Documentation/DocBook/kernel-locking.tmpl
··· 222 <title>Two Main Types of Kernel Locks: Spinlocks and Semaphores</title> 223 224 <para> 225 - There are two main types of kernel locks. The fundamental type 226 is the spinlock 227 (<filename class="headerfile">include/asm/spinlock.h</filename>), 228 which is a very simple single-holder lock: if you can't get the ··· 230 very small and fast, and can be used anywhere. 231 </para> 232 <para> 233 - The second type is a semaphore 234 (<filename class="headerfile">include/asm/semaphore.h</filename>): it 235 can have more than one holder at any time (the number decided at 236 initialization time), although it is most commonly used as a 237 - single-holder lock (a mutex). If you can't get a semaphore, 238 - your task will put itself on the queue, and be woken up when the 239 - semaphore is released. This means the CPU will do something 240 - else while you are waiting, but there are many cases when you 241 - simply can't sleep (see <xref linkend="sleeping-things"/>), and so 242 - have to use a spinlock instead. 243 </para> 244 <para> 245 Neither type of lock is recursive: see
··· 222 <title>Two Main Types of Kernel Locks: Spinlocks and Semaphores</title> 223 224 <para> 225 + There are three main types of kernel locks. The fundamental type 226 is the spinlock 227 (<filename class="headerfile">include/asm/spinlock.h</filename>), 228 which is a very simple single-holder lock: if you can't get the ··· 230 very small and fast, and can be used anywhere. 231 </para> 232 <para> 233 + The second type is a mutex 234 + (<filename class="headerfile">include/linux/mutex.h</filename>): it 235 + is like a spinlock, but you may block holding a mutex. 236 + If you can't lock a mutex, your task will suspend itself, and be woken 237 + up when the mutex is released. This means the CPU can do something 238 + else while you are waiting. There are many cases when you simply 239 + can't sleep (see <xref linkend="sleeping-things"/>), and so have to 240 + use a spinlock instead. 241 + </para> 242 + <para> 243 + The third type is a semaphore 244 (<filename class="headerfile">include/asm/semaphore.h</filename>): it 245 can have more than one holder at any time (the number decided at 246 initialization time), although it is most commonly used as a 247 + single-holder lock (a mutex). If you can't get a semaphore, your 248 + task will be suspended and later on woken up - just like for mutexes. 249 </para> 250 <para> 251 Neither type of lock is recursive: see
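A minimal sketch of the three lock types described above, for orientation; the data and function names here are invented for the example and are not part of this merge:

	#include <linux/spinlock.h>
	#include <linux/mutex.h>
	#include <asm/semaphore.h>

	static DEFINE_SPINLOCK(stats_lock);	/* never sleeps, usable anywhere */
	static DEFINE_MUTEX(config_mutex);	/* single holder, may sleep while held */
	static DECLARE_MUTEX(table_sem);	/* semaphore initialized to 1 (single-holder use) */

	static unsigned long nr_events;

	void count_event(void)			/* safe even from atomic context */
	{
		spin_lock(&stats_lock);
		nr_events++;
		spin_unlock(&stats_lock);
	}

	void update_config(void)		/* process context only: may block */
	{
		mutex_lock(&config_mutex);
		/* ... allowed to sleep here, e.g. kmalloc(GFP_KERNEL) ... */
		mutex_unlock(&config_mutex);
	}

	void update_table(void)			/* the semaphore is used the same way */
	{
		down(&table_sem);
		/* ... may sleep here as well ... */
		up(&table_sem);
	}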
+135
Documentation/mutex-design.txt
···
··· 1 + Generic Mutex Subsystem 2 + 3 + started by Ingo Molnar <mingo@redhat.com> 4 + 5 + "Why on earth do we need a new mutex subsystem, and what's wrong 6 + with semaphores?" 7 + 8 + Firstly, there's nothing wrong with semaphores. But if the simpler 9 + mutex semantics are sufficient for your code, then there are a couple 10 + of advantages of mutexes: 11 + 12 + - 'struct mutex' is smaller on most architectures: e.g. on x86, 13 + 'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes. 14 + A smaller structure size means less RAM footprint, and better 15 + CPU-cache utilization. 16 + 17 + - tighter code. On x86 I get the following .text sizes when 18 + switching all mutex-alike semaphores in the kernel to the mutex 19 + subsystem: 20 + 21 + text data bss dec hex filename 22 + 3280380 868188 396860 4545428 455b94 vmlinux-semaphore 23 + 3255329 865296 396732 4517357 44eded vmlinux-mutex 24 + 25 + That's 25051 bytes of code saved, or a 0.76% win - off the hottest 26 + codepaths of the kernel. (The .data savings are 2892 bytes, or 0.33%) 27 + Smaller code means better icache footprint, which is one of the 28 + major optimization goals in the Linux kernel currently. 29 + 30 + - the mutex subsystem is slightly faster and has better scalability for 31 + contended workloads. On an 8-way x86 system, running a mutex-based 32 + kernel and testing creat+unlink+close (of separate, per-task files) 33 + in /tmp with 16 parallel tasks, the average number of ops/sec is: 34 + 35 + Semaphores: Mutexes: 36 + 37 + $ ./test-mutex V 16 10 $ ./test-mutex V 16 10 38 + 8 CPUs, running 16 tasks. 8 CPUs, running 16 tasks. 39 + checking VFS performance. checking VFS performance. 40 + avg loops/sec: 34713 avg loops/sec: 84153 41 + CPU utilization: 63% CPU utilization: 22% 42 + 43 + i.e. in this workload, the mutex based kernel was 2.4 times faster 44 + than the semaphore based kernel, _and_ it also had 2.8 times less CPU 45 + utilization. (In terms of 'ops per CPU cycle', the semaphore kernel 46 + performed 551 ops/sec per 1% of CPU time used, while the mutex kernel 47 + performed 3825 ops/sec per 1% of CPU time used - it was 6.9 times 48 + more efficient.) 49 + 50 + The scalability difference is visible even on a 2-way P4 HT box: 51 + 52 + Semaphores: Mutexes: 53 + 54 + $ ./test-mutex V 16 10 $ ./test-mutex V 16 10 55 + 4 CPUs, running 16 tasks. 4 CPUs, running 16 tasks. 56 + checking VFS performance. checking VFS performance. 57 + avg loops/sec: 127659 avg loops/sec: 181082 58 + CPU utilization: 100% CPU utilization: 34% 59 + 60 + (the straight performance advantage of mutexes is 41%, the per-cycle 61 + efficiency of mutexes is 4.1 times better.) 62 + 63 + - there are no fastpath tradeoffs, the mutex fastpath is just as tight 64 + as the semaphore fastpath. On x86, the locking fastpath is 2 65 + instructions: 66 + 67 + c0377ccb <mutex_lock>: 68 + c0377ccb: f0 ff 08 lock decl (%eax) 69 + c0377cce: 78 0e js c0377cde <.text.lock.mutex> 70 + c0377cd0: c3 ret 71 + 72 + The unlocking fastpath is equally tight: 73 + 74 + c0377cd1 <mutex_unlock>: 75 + c0377cd1: f0 ff 00 lock incl (%eax) 76 + c0377cd4: 7e 0f jle c0377ce5 <.text.lock.mutex+0x7> 77 + c0377cd6: c3 ret 78 + 79 + - 'struct mutex' semantics are well-defined and are enforced if 80 + CONFIG_DEBUG_MUTEXES is turned on. Semaphores on the other hand have 81 + virtually no debugging code or instrumentation. The mutex subsystem 82 + checks and enforces the following rules: 83 + 84 + * - only one task can hold the mutex at a time 85 + * - only the owner can unlock the mutex 86 + * - multiple unlocks are not permitted 87 + * - recursive locking is not permitted 88 + * - a mutex object must be initialized via the API 89 + * - a mutex object must not be initialized via memset or copying 90 + * - task may not exit with mutex held 91 + * - memory areas where held locks reside must not be freed 92 + * - held mutexes must not be reinitialized 93 + * - mutexes may not be used in irq contexts 94 + 95 + Furthermore, there are also convenience features in the debugging 96 + code: 97 + 98 + * - uses symbolic names of mutexes, whenever they are printed in debug output 99 + * - point-of-acquire tracking, symbolic lookup of function names 100 + * - list of all locks held in the system, printout of them 101 + * - owner tracking 102 + * - detects self-recursing locks and prints out all relevant info 103 + * - detects multi-task circular deadlocks and prints out all affected 104 + * locks and tasks (and only those tasks) 105 + 106 + Disadvantages 107 + ------------- 108 + 109 + The stricter mutex API means you cannot use mutexes the same way you 110 + can use semaphores: e.g. they cannot be used from an interrupt context, 111 + nor can they be unlocked from a different context than that which acquired 112 + it. [ I'm not aware of any other (e.g. performance) disadvantages from 113 + using mutexes at the moment, please let me know if you find any. ] 114 + 115 + Implementation of mutexes 116 + ------------------------- 117 + 118 + 'struct mutex' is the new mutex type, defined in include/linux/mutex.h 119 + and implemented in kernel/mutex.c. It is a counter-based mutex with a 120 + spinlock and a wait-list. The counter has 3 states: 1 for "unlocked", 121 + 0 for "locked" and negative numbers (usually -1) for "locked, potential 122 + waiters queued". 123 + 124 + The APIs of 'struct mutex' have been streamlined: 125 + 126 + DEFINE_MUTEX(name); 127 + 128 + mutex_init(mutex); 129 + 130 + void mutex_lock(struct mutex *lock); 131 + int mutex_lock_interruptible(struct mutex *lock); 132 + int mutex_trylock(struct mutex *lock); 133 + void mutex_unlock(struct mutex *lock); 134 + int mutex_is_locked(struct mutex *lock); 135 +
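To make the streamlined API listed above concrete, here is a rough usage sketch; the device parameter and function names are made up for illustration:

	#include <linux/mutex.h>
	#include <linux/errno.h>

	static DEFINE_MUTEX(dev_mutex);		/* statically initialized mutex */
	static int dev_param;

	int dev_set_param(int val)
	{
		/* sleeps until the mutex is acquired; a signal aborts the wait */
		if (mutex_lock_interruptible(&dev_mutex))
			return -EINTR;
		dev_param = val;
		mutex_unlock(&dev_mutex);	/* only the owner may unlock */
		return 0;
	}

	int dev_try_set_param(int val)
	{
		if (!mutex_trylock(&dev_mutex))	/* returns 1 on success, 0 if contended */
			return -EBUSY;
		dev_param = val;
		mutex_unlock(&dev_mutex);
		return 0;
	}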
+4
arch/i386/mm/pageattr.c
··· 222 { 223 if (PageHighMem(page)) 224 return; 225 /* the return value is ignored - the calls cannot fail, 226 * large pages are disabled at boot time. 227 */
··· 222 { 223 if (PageHighMem(page)) 224 return; 225 + if (!enable) 226 + mutex_debug_check_no_locks_freed(page_address(page), 227 + page_address(page+numpages)); 228 + 229 /* the return value is ignored - the calls cannot fail, 230 * large pages are disabled at boot time. 231 */
+6 -6
arch/powerpc/platforms/cell/spufs/inode.c
··· 137 static void spufs_prune_dir(struct dentry *dir) 138 { 139 struct dentry *dentry, *tmp; 140 - down(&dir->d_inode->i_sem); 141 list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_child) { 142 spin_lock(&dcache_lock); 143 spin_lock(&dentry->d_lock); ··· 154 } 155 } 156 shrink_dcache_parent(dir); 157 - up(&dir->d_inode->i_sem); 158 } 159 160 static int spufs_rmdir(struct inode *root, struct dentry *dir_dentry) ··· 162 struct spu_context *ctx; 163 164 /* remove all entries */ 165 - down(&root->i_sem); 166 spufs_prune_dir(dir_dentry); 167 - up(&root->i_sem); 168 169 /* We have to give up the mm_struct */ 170 ctx = SPUFS_I(dir_dentry->d_inode)->i_ctx; 171 spu_forget(ctx); 172 173 - /* XXX Do we need to hold i_sem here ? */ 174 return simple_rmdir(root, dir_dentry); 175 } 176 ··· 330 out_dput: 331 dput(dentry); 332 out_dir: 333 - up(&nd->dentry->d_inode->i_sem); 334 out: 335 return ret; 336 }
··· 137 static void spufs_prune_dir(struct dentry *dir) 138 { 139 struct dentry *dentry, *tmp; 140 + mutex_lock(&dir->d_inode->i_mutex); 141 list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_child) { 142 spin_lock(&dcache_lock); 143 spin_lock(&dentry->d_lock); ··· 154 } 155 } 156 shrink_dcache_parent(dir); 157 + mutex_unlock(&dir->d_inode->i_mutex); 158 } 159 160 static int spufs_rmdir(struct inode *root, struct dentry *dir_dentry) ··· 162 struct spu_context *ctx; 163 164 /* remove all entries */ 165 + mutex_lock(&root->i_mutex); 166 spufs_prune_dir(dir_dentry); 167 + mutex_unlock(&root->i_mutex); 168 169 /* We have to give up the mm_struct */ 170 ctx = SPUFS_I(dir_dentry->d_inode)->i_ctx; 171 spu_forget(ctx); 172 173 + /* XXX Do we need to hold i_mutex here ? */ 174 return simple_rmdir(root, dir_dentry); 175 } 176 ··· 330 out_dput: 331 dput(dentry); 332 out_dir: 333 + mutex_unlock(&nd->dentry->d_inode->i_mutex); 334 out: 335 return ret; 336 }
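The spufs hunk above is the first of many purely mechanical conversions in this merge: the inode's i_sem semaphore becomes i_mutex, and down()/up() become mutex_lock()/mutex_unlock(). Schematically, with an invented helper name (the same pattern recurs in the dm.c and md.c hunks below):

	#include <linux/fs.h>
	#include <linux/mutex.h>

	/* illustrative only: serialize an inode size update against other
	 * i_mutex holders; this used to be down()/up() on inode->i_sem */
	static void set_size_locked(struct inode *inode, loff_t size)
	{
		mutex_lock(&inode->i_mutex);
		i_size_write(inode, size);
		mutex_unlock(&inode->i_mutex);
	}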
+14 -17
drivers/block/loop.c
··· 215 unsigned offset, bv_offs; 216 int len, ret; 217 218 - down(&mapping->host->i_sem); 219 index = pos >> PAGE_CACHE_SHIFT; 220 offset = pos & ((pgoff_t)PAGE_CACHE_SIZE - 1); 221 bv_offs = bvec->bv_offset; ··· 278 } 279 ret = 0; 280 out: 281 - up(&mapping->host->i_sem); 282 return ret; 283 unlock: 284 unlock_page(page); ··· 527 lo->lo_pending++; 528 loop_add_bio(lo, old_bio); 529 spin_unlock_irq(&lo->lo_lock); 530 - up(&lo->lo_bh_mutex); 531 return 0; 532 533 out: 534 if (lo->lo_pending == 0) 535 - up(&lo->lo_bh_mutex); 536 spin_unlock_irq(&lo->lo_lock); 537 bio_io_error(old_bio, old_bio->bi_size); 538 return 0; ··· 593 lo->lo_pending = 1; 594 595 /* 596 - * up sem, we are running 597 */ 598 - up(&lo->lo_sem); 599 600 for (;;) { 601 int pending; 602 603 - /* 604 - * interruptible just to not contribute to load avg 605 - */ 606 - if (down_interruptible(&lo->lo_bh_mutex)) 607 continue; 608 609 spin_lock_irq(&lo->lo_lock); 610 611 /* 612 - * could be upped because of tear-down, not pending work 613 */ 614 if (unlikely(!lo->lo_pending)) { 615 spin_unlock_irq(&lo->lo_lock); ··· 629 break; 630 } 631 632 - up(&lo->lo_sem); 633 return 0; 634 } 635 ··· 840 set_blocksize(bdev, lo_blocksize); 841 842 kernel_thread(loop_thread, lo, CLONE_KERNEL); 843 - down(&lo->lo_sem); 844 return 0; 845 846 out_putf: ··· 906 lo->lo_state = Lo_rundown; 907 lo->lo_pending--; 908 if (!lo->lo_pending) 909 - up(&lo->lo_bh_mutex); 910 spin_unlock_irq(&lo->lo_lock); 911 912 - down(&lo->lo_sem); 913 914 lo->lo_backing_file = NULL; 915 ··· 1286 if (!lo->lo_queue) 1287 goto out_mem4; 1288 init_MUTEX(&lo->lo_ctl_mutex); 1289 - init_MUTEX_LOCKED(&lo->lo_sem); 1290 - init_MUTEX_LOCKED(&lo->lo_bh_mutex); 1291 lo->lo_number = i; 1292 spin_lock_init(&lo->lo_lock); 1293 disk->major = LOOP_MAJOR;
··· 215 unsigned offset, bv_offs; 216 int len, ret; 217 218 + mutex_lock(&mapping->host->i_mutex); 219 index = pos >> PAGE_CACHE_SHIFT; 220 offset = pos & ((pgoff_t)PAGE_CACHE_SIZE - 1); 221 bv_offs = bvec->bv_offset; ··· 278 } 279 ret = 0; 280 out: 281 + mutex_unlock(&mapping->host->i_mutex); 282 return ret; 283 unlock: 284 unlock_page(page); ··· 527 lo->lo_pending++; 528 loop_add_bio(lo, old_bio); 529 spin_unlock_irq(&lo->lo_lock); 530 + complete(&lo->lo_bh_done); 531 return 0; 532 533 out: 534 if (lo->lo_pending == 0) 535 + complete(&lo->lo_bh_done); 536 spin_unlock_irq(&lo->lo_lock); 537 bio_io_error(old_bio, old_bio->bi_size); 538 return 0; ··· 593 lo->lo_pending = 1; 594 595 /* 596 + * complete it, we are running 597 */ 598 + complete(&lo->lo_done); 599 600 for (;;) { 601 int pending; 602 603 + if (wait_for_completion_interruptible(&lo->lo_bh_done)) 604 continue; 605 606 spin_lock_irq(&lo->lo_lock); 607 608 /* 609 + * could be completed because of tear-down, not pending work 610 */ 611 if (unlikely(!lo->lo_pending)) { 612 spin_unlock_irq(&lo->lo_lock); ··· 632 break; 633 } 634 635 + complete(&lo->lo_done); 636 return 0; 637 } 638 ··· 843 set_blocksize(bdev, lo_blocksize); 844 845 kernel_thread(loop_thread, lo, CLONE_KERNEL); 846 + wait_for_completion(&lo->lo_done); 847 return 0; 848 849 out_putf: ··· 909 lo->lo_state = Lo_rundown; 910 lo->lo_pending--; 911 if (!lo->lo_pending) 912 + complete(&lo->lo_bh_done); 913 spin_unlock_irq(&lo->lo_lock); 914 915 + wait_for_completion(&lo->lo_done); 916 917 lo->lo_backing_file = NULL; 918 ··· 1289 if (!lo->lo_queue) 1290 goto out_mem4; 1291 init_MUTEX(&lo->lo_ctl_mutex); 1292 + init_completion(&lo->lo_done); 1293 + init_completion(&lo->lo_bh_done); 1294 lo->lo_number = i; 1295 spin_lock_init(&lo->lo_lock); 1296 disk->major = LOOP_MAJOR;
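Note that loop.c is handled differently: lo_sem and lo_bh_mutex were permanently-locked semaphores used only to signal events between threads, so they become completions rather than mutexes. A rough sketch of that startup-handshake pattern, with invented names:

	#include <linux/completion.h>
	#include <linux/sched.h>

	static DECLARE_COMPLETION(thread_up);	/* starts "not done", like init_MUTEX_LOCKED() */

	static int worker(void *data)
	{
		complete(&thread_up);		/* was: up(&sem) */
		/* ... service requests ... */
		return 0;
	}

	static int start_worker(void)
	{
		kernel_thread(worker, NULL, CLONE_KERNEL);
		wait_for_completion(&thread_up);	/* was: down(&sem) */
		return 0;
	}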
+6 -6
drivers/block/sx8.c
··· 27 #include <linux/time.h> 28 #include <linux/hdreg.h> 29 #include <linux/dma-mapping.h> 30 #include <asm/io.h> 31 - #include <asm/semaphore.h> 32 #include <asm/uaccess.h> 33 34 #if 0 ··· 303 304 struct work_struct fsm_task; 305 306 - struct semaphore probe_sem; 307 }; 308 309 struct carm_response { ··· 1346 } 1347 1348 case HST_PROBE_FINISHED: 1349 - up(&host->probe_sem); 1350 break; 1351 1352 case HST_ERROR: ··· 1622 host->flags = pci_dac ? FL_DAC : 0; 1623 spin_lock_init(&host->lock); 1624 INIT_WORK(&host->fsm_task, carm_fsm_task, host); 1625 - init_MUTEX_LOCKED(&host->probe_sem); 1626 1627 for (i = 0; i < ARRAY_SIZE(host->req); i++) 1628 host->req[i].tag = i; ··· 1691 if (rc) 1692 goto err_out_free_irq; 1693 1694 - DPRINTK("waiting for probe_sem\n"); 1695 - down(&host->probe_sem); 1696 1697 printk(KERN_INFO "%s: pci %s, ports %d, io %lx, irq %u, major %d\n", 1698 host->name, pci_name(pdev), (int) CARM_MAX_PORTS,
··· 27 #include <linux/time.h> 28 #include <linux/hdreg.h> 29 #include <linux/dma-mapping.h> 30 + #include <linux/completion.h> 31 #include <asm/io.h> 32 #include <asm/uaccess.h> 33 34 #if 0 ··· 303 304 struct work_struct fsm_task; 305 306 + struct completion probe_comp; 307 }; 308 309 struct carm_response { ··· 1346 } 1347 1348 case HST_PROBE_FINISHED: 1349 + complete(&host->probe_comp); 1350 break; 1351 1352 case HST_ERROR: ··· 1622 host->flags = pci_dac ? FL_DAC : 0; 1623 spin_lock_init(&host->lock); 1624 INIT_WORK(&host->fsm_task, carm_fsm_task, host); 1625 + init_completion(&host->probe_comp); 1626 1627 for (i = 0; i < ARRAY_SIZE(host->req); i++) 1628 host->req[i].tag = i; ··· 1691 if (rc) 1692 goto err_out_free_irq; 1693 1694 + DPRINTK("waiting for probe_comp\n"); 1695 + wait_for_completion(&host->probe_comp); 1696 1697 printk(KERN_INFO "%s: pci %s, ports %d, io %lx, irq %u, major %d\n", 1698 host->name, pci_name(pdev), (int) CARM_MAX_PORTS,
+2 -2
drivers/char/mem.c
··· 741 { 742 loff_t ret; 743 744 - down(&file->f_dentry->d_inode->i_sem); 745 switch (orig) { 746 case 0: 747 file->f_pos = offset; ··· 756 default: 757 ret = -EINVAL; 758 } 759 - up(&file->f_dentry->d_inode->i_sem); 760 return ret; 761 } 762
··· 741 { 742 loff_t ret; 743 744 + mutex_lock(&file->f_dentry->d_inode->i_mutex); 745 switch (orig) { 746 case 0: 747 file->f_pos = offset; ··· 756 default: 757 ret = -EINVAL; 758 } 759 + mutex_unlock(&file->f_dentry->d_inode->i_mutex); 760 return ret; 761 } 762
+19
drivers/char/sysrq.c
··· 153 154 /* END SYNC SYSRQ HANDLERS BLOCK */ 155 156 157 /* SHOW SYSRQ HANDLERS BLOCK */ 158 ··· 309 #else 310 /* c */ NULL, 311 #endif 312 /* d */ NULL, 313 /* e */ &sysrq_term_op, 314 /* f */ &sysrq_moom_op, 315 /* g */ NULL,
··· 153 154 /* END SYNC SYSRQ HANDLERS BLOCK */ 155 156 + #ifdef CONFIG_DEBUG_MUTEXES 157 + 158 + static void 159 + sysrq_handle_showlocks(int key, struct pt_regs *pt_regs, struct tty_struct *tty) 160 + { 161 + mutex_debug_show_all_locks(); 162 + } 163 + 164 + static struct sysrq_key_op sysrq_showlocks_op = { 165 + .handler = sysrq_handle_showlocks, 166 + .help_msg = "show-all-locks(D)", 167 + .action_msg = "Show Locks Held", 168 + }; 169 + 170 + #endif 171 172 /* SHOW SYSRQ HANDLERS BLOCK */ 173 ··· 294 #else 295 /* c */ NULL, 296 #endif 297 + #ifdef CONFIG_DEBUG_MUTEXES 298 + /* d */ &sysrq_showlocks_op, 299 + #else 300 /* d */ NULL, 301 + #endif 302 /* e */ &sysrq_term_op, 303 /* f */ &sysrq_moom_op, 304 /* g */ NULL,
+5 -4
drivers/char/watchdog/cpu5wdt.c
··· 28 #include <linux/init.h> 29 #include <linux/ioport.h> 30 #include <linux/timer.h> 31 #include <linux/jiffies.h> 32 #include <asm/io.h> 33 #include <asm/uaccess.h> ··· 58 /* some device data */ 59 60 static struct { 61 - struct semaphore stop; 62 volatile int running; 63 struct timer_list timer; 64 volatile int queue; ··· 86 } 87 else { 88 /* ticks doesn't matter anyway */ 89 - up(&cpu5wdt_device.stop); 90 } 91 92 } ··· 240 if ( !val ) 241 printk(KERN_INFO PFX "sorry, was my fault\n"); 242 243 - init_MUTEX_LOCKED(&cpu5wdt_device.stop); 244 cpu5wdt_device.queue = 0; 245 246 clear_bit(0, &cpu5wdt_device.inuse); ··· 270 { 271 if ( cpu5wdt_device.queue ) { 272 cpu5wdt_device.queue = 0; 273 - down(&cpu5wdt_device.stop); 274 } 275 276 misc_deregister(&cpu5wdt_misc);
··· 28 #include <linux/init.h> 29 #include <linux/ioport.h> 30 #include <linux/timer.h> 31 + #include <linux/completion.h> 32 #include <linux/jiffies.h> 33 #include <asm/io.h> 34 #include <asm/uaccess.h> ··· 57 /* some device data */ 58 59 static struct { 60 + struct completion stop; 61 volatile int running; 62 struct timer_list timer; 63 volatile int queue; ··· 85 } 86 else { 87 /* ticks doesn't matter anyway */ 88 + complete(&cpu5wdt_device.stop); 89 } 90 91 } ··· 239 if ( !val ) 240 printk(KERN_INFO PFX "sorry, was my fault\n"); 241 242 + init_completion(&cpu5wdt_device.stop); 243 cpu5wdt_device.queue = 0; 244 245 clear_bit(0, &cpu5wdt_device.inuse); ··· 269 { 270 if ( cpu5wdt_device.queue ) { 271 cpu5wdt_device.queue = 0; 272 + wait_for_completion(&cpu5wdt_device.stop); 273 } 274 275 misc_deregister(&cpu5wdt_misc);
+2 -2
drivers/ide/ide-probe.c
··· 655 { 656 ide_hwif_t *hwif = container_of(dev, ide_hwif_t, gendev); 657 658 - up(&hwif->gendev_rel_sem); 659 } 660 661 static void hwif_register (ide_hwif_t *hwif) ··· 1327 drive->queue = NULL; 1328 spin_unlock_irq(&ide_lock); 1329 1330 - up(&drive->gendev_rel_sem); 1331 } 1332 1333 /*
··· 655 { 656 ide_hwif_t *hwif = container_of(dev, ide_hwif_t, gendev); 657 658 + complete(&hwif->gendev_rel_comp); 659 } 660 661 static void hwif_register (ide_hwif_t *hwif) ··· 1327 drive->queue = NULL; 1328 spin_unlock_irq(&ide_lock); 1329 1330 + complete(&drive->gendev_rel_comp); 1331 } 1332 1333 /*
+4 -4
drivers/ide/ide.c
··· 222 hwif->mwdma_mask = 0x80; /* disable all mwdma */ 223 hwif->swdma_mask = 0x80; /* disable all swdma */ 224 225 - sema_init(&hwif->gendev_rel_sem, 0); 226 227 default_hwif_iops(hwif); 228 default_hwif_transport(hwif); ··· 245 drive->is_flash = 0; 246 drive->vdma = 0; 247 INIT_LIST_HEAD(&drive->list); 248 - sema_init(&drive->gendev_rel_sem, 0); 249 } 250 } 251 ··· 602 } 603 spin_unlock_irq(&ide_lock); 604 device_unregister(&drive->gendev); 605 - down(&drive->gendev_rel_sem); 606 spin_lock_irq(&ide_lock); 607 } 608 hwif->present = 0; ··· 662 /* More messed up locking ... */ 663 spin_unlock_irq(&ide_lock); 664 device_unregister(&hwif->gendev); 665 - down(&hwif->gendev_rel_sem); 666 667 /* 668 * Remove us from the kernel's knowledge
··· 222 hwif->mwdma_mask = 0x80; /* disable all mwdma */ 223 hwif->swdma_mask = 0x80; /* disable all swdma */ 224 225 + init_completion(&hwif->gendev_rel_comp); 226 227 default_hwif_iops(hwif); 228 default_hwif_transport(hwif); ··· 245 drive->is_flash = 0; 246 drive->vdma = 0; 247 INIT_LIST_HEAD(&drive->list); 248 + init_completion(&drive->gendev_rel_comp); 249 } 250 } 251 ··· 602 } 603 spin_unlock_irq(&ide_lock); 604 device_unregister(&drive->gendev); 605 + wait_for_completion(&drive->gendev_rel_comp); 606 spin_lock_irq(&ide_lock); 607 } 608 hwif->present = 0; ··· 662 /* More messed up locking ... */ 663 spin_unlock_irq(&ide_lock); 664 device_unregister(&hwif->gendev); 665 + wait_for_completion(&hwif->gendev_rel_comp); 666 667 /* 668 * Remove us from the kernel's knowledge
+3 -3
drivers/isdn/capi/capifs.c
··· 138 { 139 char s[10]; 140 struct dentry *root = capifs_root; 141 - down(&root->d_inode->i_sem); 142 return lookup_one_len(s, root, sprintf(s, "%d", num)); 143 } 144 ··· 159 dentry = get_node(number); 160 if (!IS_ERR(dentry) && !dentry->d_inode) 161 d_instantiate(dentry, inode); 162 - up(&capifs_root->d_inode->i_sem); 163 } 164 165 void capifs_free_ncci(unsigned int number) ··· 175 } 176 dput(dentry); 177 } 178 - up(&capifs_root->d_inode->i_sem); 179 } 180 181 static int __init capifs_init(void)
··· 138 { 139 char s[10]; 140 struct dentry *root = capifs_root; 141 + mutex_lock(&root->d_inode->i_mutex); 142 return lookup_one_len(s, root, sprintf(s, "%d", num)); 143 } 144 ··· 159 dentry = get_node(number); 160 if (!IS_ERR(dentry) && !dentry->d_inode) 161 d_instantiate(dentry, inode); 162 + mutex_unlock(&capifs_root->d_inode->i_mutex); 163 } 164 165 void capifs_free_ncci(unsigned int number) ··· 175 } 176 dput(dentry); 177 } 178 + mutex_unlock(&capifs_root->d_inode->i_mutex); 179 } 180 181 static int __init capifs_init(void)
+2 -2
drivers/md/dm.c
··· 837 { 838 set_capacity(md->disk, size); 839 840 - down(&md->suspended_bdev->bd_inode->i_sem); 841 i_size_write(md->suspended_bdev->bd_inode, (loff_t)size << SECTOR_SHIFT); 842 - up(&md->suspended_bdev->bd_inode->i_sem); 843 } 844 845 static int __bind(struct mapped_device *md, struct dm_table *t)
··· 837 { 838 set_capacity(md->disk, size); 839 840 + mutex_lock(&md->suspended_bdev->bd_inode->i_mutex); 841 i_size_write(md->suspended_bdev->bd_inode, (loff_t)size << SECTOR_SHIFT); 842 + mutex_unlock(&md->suspended_bdev->bd_inode->i_mutex); 843 } 844 845 static int __bind(struct mapped_device *md, struct dm_table *t)
+4 -4
drivers/md/md.c
··· 3460 3461 bdev = bdget_disk(mddev->gendisk, 0); 3462 if (bdev) { 3463 - down(&bdev->bd_inode->i_sem); 3464 i_size_write(bdev->bd_inode, mddev->array_size << 10); 3465 - up(&bdev->bd_inode->i_sem); 3466 bdput(bdev); 3467 } 3468 } ··· 3486 3487 bdev = bdget_disk(mddev->gendisk, 0); 3488 if (bdev) { 3489 - down(&bdev->bd_inode->i_sem); 3490 i_size_write(bdev->bd_inode, mddev->array_size << 10); 3491 - up(&bdev->bd_inode->i_sem); 3492 bdput(bdev); 3493 } 3494 }
··· 3460 3461 bdev = bdget_disk(mddev->gendisk, 0); 3462 if (bdev) { 3463 + mutex_lock(&bdev->bd_inode->i_mutex); 3464 i_size_write(bdev->bd_inode, mddev->array_size << 10); 3465 + mutex_unlock(&bdev->bd_inode->i_mutex); 3466 bdput(bdev); 3467 } 3468 } ··· 3486 3487 bdev = bdget_disk(mddev->gendisk, 0); 3488 if (bdev) { 3489 + mutex_lock(&bdev->bd_inode->i_mutex); 3490 i_size_write(bdev->bd_inode, mddev->array_size << 10); 3491 + mutex_unlock(&bdev->bd_inode->i_mutex); 3492 bdput(bdev); 3493 } 3494 }
+2 -2
drivers/pci/proc.c
··· 25 loff_t new = -1; 26 struct inode *inode = file->f_dentry->d_inode; 27 28 - down(&inode->i_sem); 29 switch (whence) { 30 case 0: 31 new = off; ··· 41 new = -EINVAL; 42 else 43 file->f_pos = new; 44 - up(&inode->i_sem); 45 return new; 46 } 47
··· 25 loff_t new = -1; 26 struct inode *inode = file->f_dentry->d_inode; 27 28 + mutex_lock(&inode->i_mutex); 29 switch (whence) { 30 case 0: 31 new = off; ··· 41 new = -EINVAL; 42 else 43 file->f_pos = new; 44 + mutex_unlock(&inode->i_mutex); 45 return new; 46 } 47
+14 -14
drivers/usb/core/inode.c
··· 184 bus->d_inode->i_gid = busgid; 185 bus->d_inode->i_mode = S_IFDIR | busmode; 186 187 - down(&bus->d_inode->i_sem); 188 189 list_for_each_entry(dev, &bus->d_subdirs, d_u.d_child) 190 if (dev->d_inode) 191 update_dev(dev); 192 193 - up(&bus->d_inode->i_sem); 194 } 195 196 static void update_sb(struct super_block *sb) ··· 201 if (!root) 202 return; 203 204 - down(&root->d_inode->i_sem); 205 206 list_for_each_entry(bus, &root->d_subdirs, d_u.d_child) { 207 if (bus->d_inode) { ··· 219 } 220 } 221 222 - up(&root->d_inode->i_sem); 223 } 224 225 static int remount(struct super_block *sb, int *flags, char *data) ··· 333 static int usbfs_unlink (struct inode *dir, struct dentry *dentry) 334 { 335 struct inode *inode = dentry->d_inode; 336 - down(&inode->i_sem); 337 dentry->d_inode->i_nlink--; 338 dput(dentry); 339 - up(&inode->i_sem); 340 d_delete(dentry); 341 return 0; 342 } ··· 346 int error = -ENOTEMPTY; 347 struct inode * inode = dentry->d_inode; 348 349 - down(&inode->i_sem); 350 dentry_unhash(dentry); 351 if (usbfs_empty(dentry)) { 352 dentry->d_inode->i_nlink -= 2; ··· 355 dir->i_nlink--; 356 error = 0; 357 } 358 - up(&inode->i_sem); 359 if (!error) 360 d_delete(dentry); 361 dput(dentry); ··· 380 { 381 loff_t retval = -EINVAL; 382 383 - down(&file->f_dentry->d_inode->i_sem); 384 switch(orig) { 385 case 0: 386 if (offset > 0) { ··· 397 default: 398 break; 399 } 400 - up(&file->f_dentry->d_inode->i_sem); 401 return retval; 402 } 403 ··· 480 } 481 482 *dentry = NULL; 483 - down(&parent->d_inode->i_sem); 484 *dentry = lookup_one_len(name, parent, strlen(name)); 485 if (!IS_ERR(dentry)) { 486 if ((mode & S_IFMT) == S_IFDIR) ··· 489 error = usbfs_create (parent->d_inode, *dentry, mode); 490 } else 491 error = PTR_ERR(dentry); 492 - up(&parent->d_inode->i_sem); 493 494 return error; 495 } ··· 528 if (!parent || !parent->d_inode) 529 return; 530 531 - down(&parent->d_inode->i_sem); 532 if (usbfs_positive(dentry)) { 533 if (dentry->d_inode) { 534 if (S_ISDIR(dentry->d_inode->i_mode)) ··· 538 dput(dentry); 539 } 540 } 541 - up(&parent->d_inode->i_sem); 542 } 543 544 /* --------------------------------------------------------------------- */
··· 184 bus->d_inode->i_gid = busgid; 185 bus->d_inode->i_mode = S_IFDIR | busmode; 186 187 + mutex_lock(&bus->d_inode->i_mutex); 188 189 list_for_each_entry(dev, &bus->d_subdirs, d_u.d_child) 190 if (dev->d_inode) 191 update_dev(dev); 192 193 + mutex_unlock(&bus->d_inode->i_mutex); 194 } 195 196 static void update_sb(struct super_block *sb) ··· 201 if (!root) 202 return; 203 204 + mutex_lock(&root->d_inode->i_mutex); 205 206 list_for_each_entry(bus, &root->d_subdirs, d_u.d_child) { 207 if (bus->d_inode) { ··· 219 } 220 } 221 222 + mutex_unlock(&root->d_inode->i_mutex); 223 } 224 225 static int remount(struct super_block *sb, int *flags, char *data) ··· 333 static int usbfs_unlink (struct inode *dir, struct dentry *dentry) 334 { 335 struct inode *inode = dentry->d_inode; 336 + mutex_lock(&inode->i_mutex); 337 dentry->d_inode->i_nlink--; 338 dput(dentry); 339 + mutex_unlock(&inode->i_mutex); 340 d_delete(dentry); 341 return 0; 342 } ··· 346 int error = -ENOTEMPTY; 347 struct inode * inode = dentry->d_inode; 348 349 + mutex_lock(&inode->i_mutex); 350 dentry_unhash(dentry); 351 if (usbfs_empty(dentry)) { 352 dentry->d_inode->i_nlink -= 2; ··· 355 dir->i_nlink--; 356 error = 0; 357 } 358 + mutex_unlock(&inode->i_mutex); 359 if (!error) 360 d_delete(dentry); 361 dput(dentry); ··· 380 { 381 loff_t retval = -EINVAL; 382 383 + mutex_lock(&file->f_dentry->d_inode->i_mutex); 384 switch(orig) { 385 case 0: 386 if (offset > 0) { ··· 397 default: 398 break; 399 } 400 + mutex_unlock(&file->f_dentry->d_inode->i_mutex); 401 return retval; 402 } 403 ··· 480 } 481 482 *dentry = NULL; 483 + mutex_lock(&parent->d_inode->i_mutex); 484 *dentry = lookup_one_len(name, parent, strlen(name)); 485 if (!IS_ERR(dentry)) { 486 if ((mode & S_IFMT) == S_IFDIR) ··· 489 error = usbfs_create (parent->d_inode, *dentry, mode); 490 } else 491 error = PTR_ERR(dentry); 492 + mutex_unlock(&parent->d_inode->i_mutex); 493 494 return error; 495 } ··· 528 if (!parent || !parent->d_inode) 529 return; 530 531 + mutex_lock(&parent->d_inode->i_mutex); 532 if (usbfs_positive(dentry)) { 533 if (dentry->d_inode) { 534 if (S_ISDIR(dentry->d_inode->i_mode)) ··· 538 dput(dentry); 539 } 540 } 541 + mutex_unlock(&parent->d_inode->i_mutex); 542 } 543 544 /* --------------------------------------------------------------------- */
+2 -2
drivers/usb/gadget/file_storage.c
··· 1891 return -EINVAL; 1892 1893 inode = filp->f_dentry->d_inode; 1894 - down(&inode->i_sem); 1895 current->flags |= PF_SYNCWRITE; 1896 rc = filemap_fdatawrite(inode->i_mapping); 1897 err = filp->f_op->fsync(filp, filp->f_dentry, 1); ··· 1901 if (!rc) 1902 rc = err; 1903 current->flags &= ~PF_SYNCWRITE; 1904 - up(&inode->i_sem); 1905 VLDBG(curlun, "fdatasync -> %d\n", rc); 1906 return rc; 1907 }
··· 1891 return -EINVAL; 1892 1893 inode = filp->f_dentry->d_inode; 1894 + mutex_lock(&inode->i_mutex); 1895 current->flags |= PF_SYNCWRITE; 1896 rc = filemap_fdatawrite(inode->i_mapping); 1897 err = filp->f_op->fsync(filp, filp->f_dentry, 1); ··· 1901 if (!rc) 1902 rc = err; 1903 current->flags &= ~PF_SYNCWRITE; 1904 + mutex_unlock(&inode->i_mutex); 1905 VLDBG(curlun, "fdatasync -> %d\n", rc); 1906 return rc; 1907 }
+2 -2
drivers/usb/gadget/inode.c
··· 1562 spin_unlock_irq (&dev->lock); 1563 1564 /* break link to dcache */ 1565 - down (&parent->i_sem); 1566 d_delete (dentry); 1567 dput (dentry); 1568 - up (&parent->i_sem); 1569 1570 /* fds may still be open */ 1571 goto restart;
··· 1562 spin_unlock_irq (&dev->lock); 1563 1564 /* break link to dcache */ 1565 + mutex_lock (&parent->i_mutex); 1566 d_delete (dentry); 1567 dput (dentry); 1568 + mutex_unlock (&parent->i_mutex); 1569 1570 /* fds may still be open */ 1571 goto restart;
+2 -2
fs/affs/inode.c
··· 244 pr_debug("AFFS: put_inode(ino=%lu, nlink=%u)\n", inode->i_ino, inode->i_nlink); 245 affs_free_prealloc(inode); 246 if (atomic_read(&inode->i_count) == 1) { 247 - down(&inode->i_sem); 248 if (inode->i_size != AFFS_I(inode)->mmu_private) 249 affs_truncate(inode); 250 - up(&inode->i_sem); 251 } 252 } 253
··· 244 pr_debug("AFFS: put_inode(ino=%lu, nlink=%u)\n", inode->i_ino, inode->i_nlink); 245 affs_free_prealloc(inode); 246 if (atomic_read(&inode->i_count) == 1) { 247 + mutex_lock(&inode->i_mutex); 248 if (inode->i_size != AFFS_I(inode)->mmu_private) 249 affs_truncate(inode); 250 + mutex_unlock(&inode->i_mutex); 251 } 252 } 253
+2 -2
fs/autofs/root.c
··· 229 dentry->d_flags |= DCACHE_AUTOFS_PENDING; 230 d_add(dentry, NULL); 231 232 - up(&dir->i_sem); 233 autofs_revalidate(dentry, nd); 234 - down(&dir->i_sem); 235 236 /* 237 * If we are still pending, check if we had to handle
··· 229 dentry->d_flags |= DCACHE_AUTOFS_PENDING; 230 d_add(dentry, NULL); 231 232 + mutex_unlock(&dir->i_mutex); 233 autofs_revalidate(dentry, nd); 234 + mutex_lock(&dir->i_mutex); 235 236 /* 237 * If we are still pending, check if we had to handle
+2 -2
fs/autofs4/root.c
··· 489 d_add(dentry, NULL); 490 491 if (dentry->d_op && dentry->d_op->d_revalidate) { 492 - up(&dir->i_sem); 493 (dentry->d_op->d_revalidate)(dentry, nd); 494 - down(&dir->i_sem); 495 } 496 497 /*
··· 489 d_add(dentry, NULL); 490 491 if (dentry->d_op && dentry->d_op->d_revalidate) { 492 + mutex_unlock(&dir->i_mutex); 493 (dentry->d_op->d_revalidate)(dentry, nd); 494 + mutex_lock(&dir->i_mutex); 495 } 496 497 /*
+6 -6
fs/binfmt_misc.c
··· 588 case 2: set_bit(Enabled, &e->flags); 589 break; 590 case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root); 591 - down(&root->d_inode->i_sem); 592 593 kill_node(e); 594 595 - up(&root->d_inode->i_sem); 596 dput(root); 597 break; 598 default: return res; ··· 622 return PTR_ERR(e); 623 624 root = dget(sb->s_root); 625 - down(&root->d_inode->i_sem); 626 dentry = lookup_one_len(e->name, root, strlen(e->name)); 627 err = PTR_ERR(dentry); 628 if (IS_ERR(dentry)) ··· 658 out2: 659 dput(dentry); 660 out: 661 - up(&root->d_inode->i_sem); 662 dput(root); 663 664 if (err) { ··· 703 case 1: enabled = 0; break; 704 case 2: enabled = 1; break; 705 case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root); 706 - down(&root->d_inode->i_sem); 707 708 while (!list_empty(&entries)) 709 kill_node(list_entry(entries.next, Node, list)); 710 711 - up(&root->d_inode->i_sem); 712 dput(root); 713 default: return res; 714 }
··· 588 case 2: set_bit(Enabled, &e->flags); 589 break; 590 case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root); 591 + mutex_lock(&root->d_inode->i_mutex); 592 593 kill_node(e); 594 595 + mutex_unlock(&root->d_inode->i_mutex); 596 dput(root); 597 break; 598 default: return res; ··· 622 return PTR_ERR(e); 623 624 root = dget(sb->s_root); 625 + mutex_lock(&root->d_inode->i_mutex); 626 dentry = lookup_one_len(e->name, root, strlen(e->name)); 627 err = PTR_ERR(dentry); 628 if (IS_ERR(dentry)) ··· 658 out2: 659 dput(dentry); 660 out: 661 + mutex_unlock(&root->d_inode->i_mutex); 662 dput(root); 663 664 if (err) { ··· 703 case 1: enabled = 0; break; 704 case 2: enabled = 1; break; 705 case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root); 706 + mutex_lock(&root->d_inode->i_mutex); 707 708 while (!list_empty(&entries)) 709 kill_node(list_entry(entries.next, Node, list)); 710 711 + mutex_unlock(&root->d_inode->i_mutex); 712 dput(root); 713 default: return res; 714 }
+2 -2
fs/block_dev.c
··· 202 loff_t size; 203 loff_t retval; 204 205 - down(&bd_inode->i_sem); 206 size = i_size_read(bd_inode); 207 208 switch (origin) { ··· 219 } 220 retval = offset; 221 } 222 - up(&bd_inode->i_sem); 223 return retval; 224 } 225
··· 202 loff_t size; 203 loff_t retval; 204 205 + mutex_lock(&bd_inode->i_mutex); 206 size = i_size_read(bd_inode); 207 208 switch (origin) { ··· 219 } 220 retval = offset; 221 } 222 + mutex_unlock(&bd_inode->i_mutex); 223 return retval; 224 } 225
+3 -3
fs/buffer.c
··· 352 * We need to protect against concurrent writers, 353 * which could cause livelocks in fsync_buffers_list 354 */ 355 - down(&mapping->host->i_sem); 356 err = file->f_op->fsync(file, file->f_dentry, datasync); 357 if (!ret) 358 ret = err; 359 - up(&mapping->host->i_sem); 360 err = filemap_fdatawait(mapping); 361 if (!ret) 362 ret = err; ··· 2338 __block_commit_write(inode,page,from,to); 2339 /* 2340 * No need to use i_size_read() here, the i_size 2341 - * cannot change under us because we hold i_sem. 2342 */ 2343 if (pos > inode->i_size) { 2344 i_size_write(inode, pos);
··· 352 * We need to protect against concurrent writers, 353 * which could cause livelocks in fsync_buffers_list 354 */ 355 + mutex_lock(&mapping->host->i_mutex); 356 err = file->f_op->fsync(file, file->f_dentry, datasync); 357 if (!ret) 358 ret = err; 359 + mutex_unlock(&mapping->host->i_mutex); 360 err = filemap_fdatawait(mapping); 361 if (!ret) 362 ret = err; ··· 2338 __block_commit_write(inode,page,from,to); 2339 /* 2340 * No need to use i_size_read() here, the i_size 2341 + * cannot change under us because we hold i_mutex. 2342 */ 2343 if (pos > inode->i_size) { 2344 i_size_write(inode, pos);
+3 -3
fs/cifs/cifsfs.c
··· 860 DeleteOplockQEntry(oplock_item); 861 /* can not grab inode sem here since it would 862 deadlock when oplock received on delete 863 - since vfs_unlink holds the i_sem across 864 the call */ 865 - /* down(&inode->i_sem);*/ 866 if (S_ISREG(inode->i_mode)) { 867 rc = filemap_fdatawrite(inode->i_mapping); 868 if(CIFS_I(inode)->clientCanCacheRead == 0) { ··· 871 } 872 } else 873 rc = 0; 874 - /* up(&inode->i_sem);*/ 875 if (rc) 876 CIFS_I(inode)->write_behind_rc = rc; 877 cFYI(1,("Oplock flush inode %p rc %d",inode,rc));
··· 860 DeleteOplockQEntry(oplock_item); 861 /* can not grab inode sem here since it would 862 deadlock when oplock received on delete 863 + since vfs_unlink holds the i_mutex across 864 the call */ 865 + /* mutex_lock(&inode->i_mutex);*/ 866 if (S_ISREG(inode->i_mode)) { 867 rc = filemap_fdatawrite(inode->i_mapping); 868 if(CIFS_I(inode)->clientCanCacheRead == 0) { ··· 871 } 872 } else 873 rc = 0; 874 + /* mutex_unlock(&inode->i_mutex);*/ 875 if (rc) 876 CIFS_I(inode)->write_behind_rc = rc; 877 cFYI(1,("Oplock flush inode %p rc %d",inode,rc));
+4 -4
fs/cifs/inode.c
··· 1040 } 1041 1042 /* can not grab this sem since kernel filesys locking documentation 1043 - indicates i_sem may be taken by the kernel on lookup and rename 1044 - which could deadlock if we grab the i_sem here as well */ 1045 - /* down(&direntry->d_inode->i_sem);*/ 1046 /* need to write out dirty pages here */ 1047 if (direntry->d_inode->i_mapping) { 1048 /* do we need to lock inode until after invalidate completes ··· 1066 } 1067 } 1068 } 1069 - /* up(&direntry->d_inode->i_sem); */ 1070 1071 kfree(full_path); 1072 FreeXid(xid);
··· 1040 } 1041 1042 /* can not grab this sem since kernel filesys locking documentation 1043 + indicates i_mutex may be taken by the kernel on lookup and rename 1044 + which could deadlock if we grab the i_mutex here as well */ 1045 + /* mutex_lock(&direntry->d_inode->i_mutex);*/ 1046 /* need to write out dirty pages here */ 1047 if (direntry->d_inode->i_mapping) { 1048 /* do we need to lock inode until after invalidate completes ··· 1066 } 1067 } 1068 } 1069 + /* mutex_unlock(&direntry->d_inode->i_mutex); */ 1070 1071 kfree(full_path); 1072 FreeXid(xid);
+2 -2
fs/coda/dir.c
··· 453 coda_vfs_stat.readdir++; 454 455 host_inode = host_file->f_dentry->d_inode; 456 - down(&host_inode->i_sem); 457 host_file->f_pos = coda_file->f_pos; 458 459 if (!host_file->f_op->readdir) { ··· 475 } 476 out: 477 coda_file->f_pos = host_file->f_pos; 478 - up(&host_inode->i_sem); 479 480 return ret; 481 }
··· 453 coda_vfs_stat.readdir++; 454 455 host_inode = host_file->f_dentry->d_inode; 456 + mutex_lock(&host_inode->i_mutex); 457 host_file->f_pos = coda_file->f_pos; 458 459 if (!host_file->f_op->readdir) { ··· 475 } 476 out: 477 coda_file->f_pos = host_file->f_pos; 478 + mutex_unlock(&host_inode->i_mutex); 479 480 return ret; 481 }
+4 -4
fs/coda/file.c
··· 77 return -EINVAL; 78 79 host_inode = host_file->f_dentry->d_inode; 80 - down(&coda_inode->i_sem); 81 82 ret = host_file->f_op->write(host_file, buf, count, ppos); 83 84 coda_inode->i_size = host_inode->i_size; 85 coda_inode->i_blocks = (coda_inode->i_size + 511) >> 9; 86 coda_inode->i_mtime = coda_inode->i_ctime = CURRENT_TIME_SEC; 87 - up(&coda_inode->i_sem); 88 89 return ret; 90 } ··· 272 if (host_file->f_op && host_file->f_op->fsync) { 273 host_dentry = host_file->f_dentry; 274 host_inode = host_dentry->d_inode; 275 - down(&host_inode->i_sem); 276 err = host_file->f_op->fsync(host_file, host_dentry, datasync); 277 - up(&host_inode->i_sem); 278 } 279 280 if ( !err && !datasync ) {
··· 77 return -EINVAL; 78 79 host_inode = host_file->f_dentry->d_inode; 80 + mutex_lock(&coda_inode->i_mutex); 81 82 ret = host_file->f_op->write(host_file, buf, count, ppos); 83 84 coda_inode->i_size = host_inode->i_size; 85 coda_inode->i_blocks = (coda_inode->i_size + 511) >> 9; 86 coda_inode->i_mtime = coda_inode->i_ctime = CURRENT_TIME_SEC; 87 + mutex_unlock(&coda_inode->i_mutex); 88 89 return ret; 90 } ··· 272 if (host_file->f_op && host_file->f_op->fsync) { 273 host_dentry = host_file->f_dentry; 274 host_inode = host_dentry->d_inode; 275 + mutex_lock(&host_inode->i_mutex); 276 err = host_file->f_op->fsync(host_file, host_dentry, datasync); 277 + mutex_unlock(&host_inode->i_mutex); 278 } 279 280 if ( !err && !datasync ) {
+27 -27
fs/configfs/dir.c
··· 288 289 /* 290 * Only subdirectories count here. Files (CONFIGFS_NOT_PINNED) are 291 - * attributes and are removed by rmdir(). We recurse, taking i_sem 292 * on all children that are candidates for default detach. If the 293 * result is clean, then configfs_detach_group() will handle dropping 294 - * i_sem. If there is an error, the caller will clean up the i_sem 295 * holders via configfs_detach_rollback(). 296 */ 297 static int configfs_detach_prep(struct dentry *dentry) ··· 309 if (sd->s_type & CONFIGFS_NOT_PINNED) 310 continue; 311 if (sd->s_type & CONFIGFS_USET_DEFAULT) { 312 - down(&sd->s_dentry->d_inode->i_sem); 313 - /* Mark that we've taken i_sem */ 314 sd->s_type |= CONFIGFS_USET_DROPPING; 315 316 ret = configfs_detach_prep(sd->s_dentry); ··· 327 } 328 329 /* 330 - * Walk the tree, dropping i_sem wherever CONFIGFS_USET_DROPPING is 331 * set. 332 */ 333 static void configfs_detach_rollback(struct dentry *dentry) ··· 341 342 if (sd->s_type & CONFIGFS_USET_DROPPING) { 343 sd->s_type &= ~CONFIGFS_USET_DROPPING; 344 - up(&sd->s_dentry->d_inode->i_sem); 345 } 346 } 347 } ··· 424 425 /* 426 * From rmdir/unregister, a configfs_detach_prep() pass 427 - * has taken our i_sem for us. Drop it. 428 * From mkdir/register cleanup, there is no sem held. 429 */ 430 if (sd->s_type & CONFIGFS_USET_DROPPING) 431 - up(&child->d_inode->i_sem); 432 433 d_delete(child); 434 dput(child); ··· 493 /* FYI, we're faking mkdir here 494 * I'm not sure we need this semaphore, as we're called 495 * from our parent's mkdir. That holds our parent's 496 - * i_sem, so afaik lookup cannot continue through our 497 * parent to find us, let alone mess with our tree. 498 - * That said, taking our i_sem is closer to mkdir 499 * emulation, and shouldn't hurt. */ 500 - down(&dentry->d_inode->i_sem); 501 502 for (i = 0; group->default_groups[i]; i++) { 503 new_group = group->default_groups[i]; ··· 507 break; 508 } 509 510 - up(&dentry->d_inode->i_sem); 511 } 512 513 if (ret) ··· 856 down_write(&configfs_rename_sem); 857 parent = item->parent->dentry; 858 859 - down(&parent->d_inode->i_sem); 860 861 new_dentry = lookup_one_len(new_name, parent, strlen(new_name)); 862 if (!IS_ERR(new_dentry)) { ··· 872 error = -EEXIST; 873 dput(new_dentry); 874 } 875 - up(&parent->d_inode->i_sem); 876 up_write(&configfs_rename_sem); 877 878 return error; ··· 884 struct dentry * dentry = file->f_dentry; 885 struct configfs_dirent * parent_sd = dentry->d_fsdata; 886 887 - down(&dentry->d_inode->i_sem); 888 file->private_data = configfs_new_dirent(parent_sd, NULL); 889 - up(&dentry->d_inode->i_sem); 890 891 return file->private_data ? 0 : -ENOMEM; 892 ··· 897 struct dentry * dentry = file->f_dentry; 898 struct configfs_dirent * cursor = file->private_data; 899 900 - down(&dentry->d_inode->i_sem); 901 list_del_init(&cursor->s_sibling); 902 - up(&dentry->d_inode->i_sem); 903 904 release_configfs_dirent(cursor); 905 ··· 975 { 976 struct dentry * dentry = file->f_dentry; 977 978 - down(&dentry->d_inode->i_sem); 979 switch (origin) { 980 case 1: 981 offset += file->f_pos; ··· 983 if (offset >= 0) 984 break; 985 default: 986 - up(&file->f_dentry->d_inode->i_sem); 987 return -EINVAL; 988 } 989 if (offset != file->f_pos) { ··· 1007 list_add_tail(&cursor->s_sibling, p); 1008 } 1009 } 1010 - up(&dentry->d_inode->i_sem); 1011 return offset; 1012 } 1013 ··· 1037 sd = configfs_sb->s_root->d_fsdata; 1038 link_group(to_config_group(sd->s_element), group); 1039 1040 - down(&configfs_sb->s_root->d_inode->i_sem); 1041 1042 name.name = group->cg_item.ci_name; 1043 name.len = strlen(name.name); ··· 1057 else 1058 d_delete(dentry); 1059 1060 - up(&configfs_sb->s_root->d_inode->i_sem); 1061 1062 if (dentry) { 1063 dput(dentry); ··· 1079 return; 1080 } 1081 1082 - down(&configfs_sb->s_root->d_inode->i_sem); 1083 - down(&dentry->d_inode->i_sem); 1084 if (configfs_detach_prep(dentry)) { 1085 printk(KERN_ERR "configfs: Tried to unregister non-empty subsystem!\n"); 1086 } 1087 configfs_detach_group(&group->cg_item); 1088 dentry->d_inode->i_flags |= S_DEAD; 1089 - up(&dentry->d_inode->i_sem); 1090 1091 d_delete(dentry); 1092 1093 - up(&configfs_sb->s_root->d_inode->i_sem); 1094 1095 dput(dentry); 1096
··· 288 289 /* 290 * Only subdirectories count here. Files (CONFIGFS_NOT_PINNED) are 291 + * attributes and are removed by rmdir(). We recurse, taking i_mutex 292 * on all children that are candidates for default detach. If the 293 * result is clean, then configfs_detach_group() will handle dropping 294 + * i_mutex. If there is an error, the caller will clean up the i_mutex 295 * holders via configfs_detach_rollback(). 296 */ 297 static int configfs_detach_prep(struct dentry *dentry) ··· 309 if (sd->s_type & CONFIGFS_NOT_PINNED) 310 continue; 311 if (sd->s_type & CONFIGFS_USET_DEFAULT) { 312 + mutex_lock(&sd->s_dentry->d_inode->i_mutex); 313 + /* Mark that we've taken i_mutex */ 314 sd->s_type |= CONFIGFS_USET_DROPPING; 315 316 ret = configfs_detach_prep(sd->s_dentry); ··· 327 } 328 329 /* 330 + * Walk the tree, dropping i_mutex wherever CONFIGFS_USET_DROPPING is 331 * set. 332 */ 333 static void configfs_detach_rollback(struct dentry *dentry) ··· 341 342 if (sd->s_type & CONFIGFS_USET_DROPPING) { 343 sd->s_type &= ~CONFIGFS_USET_DROPPING; 344 + mutex_unlock(&sd->s_dentry->d_inode->i_mutex); 345 } 346 } 347 } ··· 424 425 /* 426 * From rmdir/unregister, a configfs_detach_prep() pass 427 + * has taken our i_mutex for us. Drop it. 428 * From mkdir/register cleanup, there is no sem held. 429 */ 430 if (sd->s_type & CONFIGFS_USET_DROPPING) 431 + mutex_unlock(&child->d_inode->i_mutex); 432 433 d_delete(child); 434 dput(child); ··· 493 /* FYI, we're faking mkdir here 494 * I'm not sure we need this semaphore, as we're called 495 * from our parent's mkdir. That holds our parent's 496 + * i_mutex, so afaik lookup cannot continue through our 497 * parent to find us, let alone mess with our tree. 498 + * That said, taking our i_mutex is closer to mkdir 499 * emulation, and shouldn't hurt. */ 500 + mutex_lock(&dentry->d_inode->i_mutex); 501 502 for (i = 0; group->default_groups[i]; i++) { 503 new_group = group->default_groups[i]; ··· 507 break; 508 } 509 510 + mutex_unlock(&dentry->d_inode->i_mutex); 511 } 512 513 if (ret) ··· 856 down_write(&configfs_rename_sem); 857 parent = item->parent->dentry; 858 859 + mutex_lock(&parent->d_inode->i_mutex); 860 861 new_dentry = lookup_one_len(new_name, parent, strlen(new_name)); 862 if (!IS_ERR(new_dentry)) { ··· 872 error = -EEXIST; 873 dput(new_dentry); 874 } 875 + mutex_unlock(&parent->d_inode->i_mutex); 876 up_write(&configfs_rename_sem); 877 878 return error; ··· 884 struct dentry * dentry = file->f_dentry; 885 struct configfs_dirent * parent_sd = dentry->d_fsdata; 886 887 + mutex_lock(&dentry->d_inode->i_mutex); 888 file->private_data = configfs_new_dirent(parent_sd, NULL); 889 + mutex_unlock(&dentry->d_inode->i_mutex); 890 891 return file->private_data ? 0 : -ENOMEM; 892 ··· 897 struct dentry * dentry = file->f_dentry; 898 struct configfs_dirent * cursor = file->private_data; 899 900 + mutex_lock(&dentry->d_inode->i_mutex); 901 list_del_init(&cursor->s_sibling); 902 + mutex_unlock(&dentry->d_inode->i_mutex); 903 904 release_configfs_dirent(cursor); 905 ··· 975 { 976 struct dentry * dentry = file->f_dentry; 977 978 + mutex_lock(&dentry->d_inode->i_mutex); 979 switch (origin) { 980 case 1: 981 offset += file->f_pos; ··· 983 if (offset >= 0) 984 break; 985 default: 986 + mutex_unlock(&file->f_dentry->d_inode->i_mutex); 987 return -EINVAL; 988 } 989 if (offset != file->f_pos) { ··· 1007 list_add_tail(&cursor->s_sibling, p); 1008 } 1009 } 1010 + mutex_unlock(&dentry->d_inode->i_mutex); 1011 return offset; 1012 } 1013 ··· 1037 sd = configfs_sb->s_root->d_fsdata; 1038 link_group(to_config_group(sd->s_element), group); 1039 1040 + mutex_lock(&configfs_sb->s_root->d_inode->i_mutex); 1041 1042 name.name = group->cg_item.ci_name; 1043 name.len = strlen(name.name); ··· 1057 else 1058 d_delete(dentry); 1059 1060 + mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex); 1061 1062 if (dentry) { 1063 dput(dentry); ··· 1079 return; 1080 } 1081 1082 + mutex_lock(&configfs_sb->s_root->d_inode->i_mutex); 1083 + mutex_lock(&dentry->d_inode->i_mutex); 1084 if (configfs_detach_prep(dentry)) { 1085 printk(KERN_ERR "configfs: Tried to unregister non-empty subsystem!\n"); 1086 } 1087 configfs_detach_group(&group->cg_item); 1088 dentry->d_inode->i_flags |= S_DEAD; 1089 + mutex_unlock(&dentry->d_inode->i_mutex); 1090 1091 d_delete(dentry); 1092 1093 + mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex); 1094 1095 dput(dentry); 1096
+2 -2
fs/configfs/file.c
··· 336 umode_t mode = (attr->ca_mode & S_IALLUGO) | S_IFREG; 337 int error = 0; 338 339 - down(&dir->d_inode->i_sem); 340 error = configfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type); 341 - up(&dir->d_inode->i_sem); 342 343 return error; 344 }
··· 336 umode_t mode = (attr->ca_mode & S_IALLUGO) | S_IFREG; 337 int error = 0; 338 339 + mutex_lock(&dir->d_inode->i_mutex); 340 error = configfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type); 341 + mutex_unlock(&dir->d_inode->i_mutex); 342 343 return error; 344 }
+3 -3
fs/configfs/inode.c
··· 122 123 /* 124 * Unhashes the dentry corresponding to given configfs_dirent 125 - * Called with parent inode's i_sem held. 126 */ 127 void configfs_drop_dentry(struct configfs_dirent * sd, struct dentry * parent) 128 { ··· 145 struct configfs_dirent * sd; 146 struct configfs_dirent * parent_sd = dir->d_fsdata; 147 148 - down(&dir->d_inode->i_sem); 149 list_for_each_entry(sd, &parent_sd->s_children, s_sibling) { 150 if (!sd->s_element) 151 continue; ··· 156 break; 157 } 158 } 159 - up(&dir->d_inode->i_sem); 160 } 161 162
··· 122 123 /* 124 * Unhashes the dentry corresponding to given configfs_dirent 125 + * Called with parent inode's i_mutex held. 126 */ 127 void configfs_drop_dentry(struct configfs_dirent * sd, struct dentry * parent) 128 { ··· 145 struct configfs_dirent * sd; 146 struct configfs_dirent * parent_sd = dir->d_fsdata; 147 148 + mutex_lock(&dir->d_inode->i_mutex); 149 list_for_each_entry(sd, &parent_sd->s_children, s_sibling) { 150 if (!sd->s_element) 151 continue; ··· 156 break; 157 } 158 } 159 + mutex_unlock(&dir->d_inode->i_mutex); 160 } 161 162
+4 -4
fs/debugfs/inode.c
··· 146 } 147 148 *dentry = NULL; 149 - down(&parent->d_inode->i_sem); 150 *dentry = lookup_one_len(name, parent, strlen(name)); 151 if (!IS_ERR(dentry)) { 152 if ((mode & S_IFMT) == S_IFDIR) ··· 155 error = debugfs_create(parent->d_inode, *dentry, mode); 156 } else 157 error = PTR_ERR(dentry); 158 - up(&parent->d_inode->i_sem); 159 160 return error; 161 } ··· 273 if (!parent || !parent->d_inode) 274 return; 275 276 - down(&parent->d_inode->i_sem); 277 if (debugfs_positive(dentry)) { 278 if (dentry->d_inode) { 279 if (S_ISDIR(dentry->d_inode->i_mode)) ··· 283 dput(dentry); 284 } 285 } 286 - up(&parent->d_inode->i_sem); 287 simple_release_fs(&debugfs_mount, &debugfs_mount_count); 288 } 289 EXPORT_SYMBOL_GPL(debugfs_remove);
··· 146 } 147 148 *dentry = NULL; 149 + mutex_lock(&parent->d_inode->i_mutex); 150 *dentry = lookup_one_len(name, parent, strlen(name)); 151 if (!IS_ERR(dentry)) { 152 if ((mode & S_IFMT) == S_IFDIR) ··· 155 error = debugfs_create(parent->d_inode, *dentry, mode); 156 } else 157 error = PTR_ERR(dentry); 158 + mutex_unlock(&parent->d_inode->i_mutex); 159 160 return error; 161 } ··· 273 if (!parent || !parent->d_inode) 274 return; 275 276 + mutex_lock(&parent->d_inode->i_mutex); 277 if (debugfs_positive(dentry)) { 278 if (dentry->d_inode) { 279 if (S_ISDIR(dentry->d_inode->i_mode)) ··· 283 dput(dentry); 284 } 285 } 286 + mutex_unlock(&parent->d_inode->i_mutex); 287 simple_release_fs(&debugfs_mount, &debugfs_mount_count); 288 } 289 EXPORT_SYMBOL_GPL(debugfs_remove);
+11 -11
fs/devfs/base.c
··· 2162 * 2163 * make sure that 2164 * d_instantiate always runs under lock 2165 - * we release i_sem lock before going to sleep 2166 * 2167 * unfortunately sometimes d_revalidate is called with 2168 - * and sometimes without i_sem lock held. The following checks 2169 * attempt to deduce when we need to add (and drop resp.) lock 2170 * here. This relies on current (2.6.2) calling conventions: 2171 * 2172 - * lookup_hash is always run under i_sem and is passing NULL 2173 * as nd 2174 * 2175 - * open(...,O_CREATE,...) calls _lookup_hash under i_sem 2176 * and sets flags to LOOKUP_OPEN|LOOKUP_CREATE 2177 * 2178 * all other invocations of ->d_revalidate seem to happen 2179 - * outside of i_sem 2180 */ 2181 need_lock = nd && 2182 (!(nd->flags & LOOKUP_CREATE) || (nd->flags & LOOKUP_PARENT)); 2183 2184 if (need_lock) 2185 - down(&dir->i_sem); 2186 2187 if (is_devfsd_or_child(fs_info)) { 2188 devfs_handle_t de = lookup_info->de; ··· 2221 add_wait_queue(&lookup_info->wait_queue, &wait); 2222 read_unlock(&parent->u.dir.lock); 2223 /* at this point it is always (hopefully) locked */ 2224 - up(&dir->i_sem); 2225 schedule(); 2226 - down(&dir->i_sem); 2227 /* 2228 * This does not need nor should remove wait from wait_queue. 2229 * Wait queue head is never reused - nothing is ever added to it ··· 2238 2239 out: 2240 if (need_lock) 2241 - up(&dir->i_sem); 2242 return 1; 2243 } /* End Function devfs_d_revalidate_wait */ 2244 ··· 2284 /* Unlock directory semaphore, which will release any waiters. They 2285 will get the hashed dentry, and may be forced to wait for 2286 revalidation */ 2287 - up(&dir->i_sem); 2288 wait_for_devfsd_finished(fs_info); /* If I'm not devfsd, must wait */ 2289 - down(&dir->i_sem); /* Grab it again because them's the rules */ 2290 de = lookup_info.de; 2291 /* If someone else has been so kind as to make the inode, we go home 2292 early */
··· 2162 * 2163 * make sure that 2164 * d_instantiate always runs under lock 2165 + * we release i_mutex lock before going to sleep 2166 * 2167 * unfortunately sometimes d_revalidate is called with 2168 + * and sometimes without i_mutex lock held. The following checks 2169 * attempt to deduce when we need to add (and drop resp.) lock 2170 * here. This relies on current (2.6.2) calling conventions: 2171 * 2172 + * lookup_hash is always run under i_mutex and is passing NULL 2173 * as nd 2174 * 2175 + * open(...,O_CREATE,...) calls _lookup_hash under i_mutex 2176 * and sets flags to LOOKUP_OPEN|LOOKUP_CREATE 2177 * 2178 * all other invocations of ->d_revalidate seem to happen 2179 + * outside of i_mutex 2180 */ 2181 need_lock = nd && 2182 (!(nd->flags & LOOKUP_CREATE) || (nd->flags & LOOKUP_PARENT)); 2183 2184 if (need_lock) 2185 + mutex_lock(&dir->i_mutex); 2186 2187 if (is_devfsd_or_child(fs_info)) { 2188 devfs_handle_t de = lookup_info->de; ··· 2221 add_wait_queue(&lookup_info->wait_queue, &wait); 2222 read_unlock(&parent->u.dir.lock); 2223 /* at this point it is always (hopefully) locked */ 2224 + mutex_unlock(&dir->i_mutex); 2225 schedule(); 2226 + mutex_lock(&dir->i_mutex); 2227 /* 2228 * This does not need nor should remove wait from wait_queue. 2229 * Wait queue head is never reused - nothing is ever added to it ··· 2238 2239 out: 2240 if (need_lock) 2241 + mutex_unlock(&dir->i_mutex); 2242 return 1; 2243 } /* End Function devfs_d_revalidate_wait */ 2244 ··· 2284 /* Unlock directory semaphore, which will release any waiters. They 2285 will get the hashed dentry, and may be forced to wait for 2286 revalidation */ 2287 + mutex_unlock(&dir->i_mutex); 2288 wait_for_devfsd_finished(fs_info); /* If I'm not devfsd, must wait */ 2289 + mutex_lock(&dir->i_mutex); /* Grab it again because them's the rules */ 2290 de = lookup_info.de; 2291 /* If someone else has been so kind as to make the inode, we go home 2292 early */
+4 -4
fs/devpts/inode.c
··· 130 { 131 char s[12]; 132 struct dentry *root = devpts_root; 133 - down(&root->d_inode->i_sem); 134 return lookup_one_len(s, root, sprintf(s, "%d", num)); 135 } 136 ··· 161 if (!IS_ERR(dentry) && !dentry->d_inode) 162 d_instantiate(dentry, inode); 163 164 - up(&devpts_root->d_inode->i_sem); 165 166 return 0; 167 } ··· 178 dput(dentry); 179 } 180 181 - up(&devpts_root->d_inode->i_sem); 182 183 return tty; 184 } ··· 196 } 197 dput(dentry); 198 } 199 - up(&devpts_root->d_inode->i_sem); 200 } 201 202 static int __init init_devpts_fs(void)
··· 130 { 131 char s[12]; 132 struct dentry *root = devpts_root; 133 + mutex_lock(&root->d_inode->i_mutex); 134 return lookup_one_len(s, root, sprintf(s, "%d", num)); 135 } 136 ··· 161 if (!IS_ERR(dentry) && !dentry->d_inode) 162 d_instantiate(dentry, inode); 163 164 + mutex_unlock(&devpts_root->d_inode->i_mutex); 165 166 return 0; 167 } ··· 178 dput(dentry); 179 } 180 181 + mutex_unlock(&devpts_root->d_inode->i_mutex); 182 183 return tty; 184 } ··· 196 } 197 dput(dentry); 198 } 199 + mutex_unlock(&devpts_root->d_inode->i_mutex); 200 } 201 202 static int __init init_devpts_fs(void)
+15 -15
fs/direct-io.c
··· 56 * lock_type is DIO_LOCKING for regular files on direct-IO-naive filesystems. 57 * This determines whether we need to do the fancy locking which prevents 58 * direct-IO from being able to read uninitialised disk blocks. If its zero 59 - * (blockdev) this locking is not done, and if it is DIO_OWN_LOCKING i_sem is 60 * not held for the entire direct write (taken briefly, initially, during a 61 * direct read though, but its never held for the duration of a direct-IO). 62 */ ··· 930 } 931 932 /* 933 - * Releases both i_sem and i_alloc_sem 934 */ 935 static ssize_t 936 direct_io_worker(int rw, struct kiocb *iocb, struct inode *inode, ··· 1062 1063 /* 1064 * All block lookups have been performed. For READ requests 1065 - * we can let i_sem go now that its achieved its purpose 1066 * of protecting us from looking up uninitialized blocks. 1067 */ 1068 if ((rw == READ) && (dio->lock_type == DIO_LOCKING)) 1069 - up(&dio->inode->i_sem); 1070 1071 /* 1072 * OK, all BIOs are submitted, so we can decrement bio_count to truly ··· 1145 * The locking rules are governed by the dio_lock_type parameter. 1146 * 1147 * DIO_NO_LOCKING (no locking, for raw block device access) 1148 - * For writes, i_sem is not held on entry; it is never taken. 1149 * 1150 * DIO_LOCKING (simple locking for regular files) 1151 - * For writes we are called under i_sem and return with i_sem held, even though 1152 * it is internally dropped. 1153 - * For reads, i_sem is not held on entry, but it is taken and dropped before 1154 * returning. 1155 * 1156 * DIO_OWN_LOCKING (filesystem provides synchronisation and handling of 1157 * uninitialised data, allowing parallel direct readers and writers) 1158 - * For writes we are called without i_sem, return without it, never touch it. 1159 - * For reads, i_sem is held on entry and will be released before returning. 1160 * 1161 * Additional i_alloc_sem locking requirements described inline below. 1162 */ ··· 1214 * For block device access DIO_NO_LOCKING is used, 1215 * neither readers nor writers do any locking at all 1216 * For regular files using DIO_LOCKING, 1217 - * readers need to grab i_sem and i_alloc_sem 1218 - * writers need to grab i_alloc_sem only (i_sem is already held) 1219 * For regular files using DIO_OWN_LOCKING, 1220 * neither readers nor writers take any locks here 1221 - * (i_sem is already held and release for writers here) 1222 */ 1223 dio->lock_type = dio_lock_type; 1224 if (dio_lock_type != DIO_NO_LOCKING) { ··· 1228 1229 mapping = iocb->ki_filp->f_mapping; 1230 if (dio_lock_type != DIO_OWN_LOCKING) { 1231 - down(&inode->i_sem); 1232 reader_with_isem = 1; 1233 } 1234 ··· 1240 } 1241 1242 if (dio_lock_type == DIO_OWN_LOCKING) { 1243 - up(&inode->i_sem); 1244 reader_with_isem = 0; 1245 } 1246 } ··· 1266 1267 out: 1268 if (reader_with_isem) 1269 - up(&inode->i_sem); 1270 if (rw & WRITE) 1271 current->flags &= ~PF_SYNCWRITE; 1272 return retval;
··· 56 * lock_type is DIO_LOCKING for regular files on direct-IO-naive filesystems. 57 * This determines whether we need to do the fancy locking which prevents 58 * direct-IO from being able to read uninitialised disk blocks. If it's zero 59 + * (blockdev) this locking is not done, and if it is DIO_OWN_LOCKING i_mutex is 60 * not held for the entire direct write (taken briefly, initially, during a 61 * direct read though, but it's never held for the duration of a direct-IO). 62 */ ··· 930 } 931 932 /* 933 + * Releases both i_mutex and i_alloc_sem 934 */ 935 static ssize_t 936 direct_io_worker(int rw, struct kiocb *iocb, struct inode *inode, ··· 1062 1063 /* 1064 * All block lookups have been performed. For READ requests 1065 + * we can let i_mutex go now that it's achieved its purpose 1066 * of protecting us from looking up uninitialized blocks. 1067 */ 1068 if ((rw == READ) && (dio->lock_type == DIO_LOCKING)) 1069 + mutex_unlock(&dio->inode->i_mutex); 1070 1071 /* 1072 * OK, all BIOs are submitted, so we can decrement bio_count to truly ··· 1145 * The locking rules are governed by the dio_lock_type parameter. 1146 * 1147 * DIO_NO_LOCKING (no locking, for raw block device access) 1148 + * For writes, i_mutex is not held on entry; it is never taken. 1149 * 1150 * DIO_LOCKING (simple locking for regular files) 1151 + * For writes we are called under i_mutex and return with i_mutex held, even though 1152 * it is internally dropped. 1153 + * For reads, i_mutex is not held on entry, but it is taken and dropped before 1154 * returning. 1155 * 1156 * DIO_OWN_LOCKING (filesystem provides synchronisation and handling of 1157 * uninitialised data, allowing parallel direct readers and writers) 1158 + * For writes we are called without i_mutex, return without it, never touch it. 1159 + * For reads, i_mutex is held on entry and will be released before returning. 1160 * 1161 * Additional i_alloc_sem locking requirements described inline below. 1162 */ ··· 1214 * For block device access DIO_NO_LOCKING is used, 1215 * neither readers nor writers do any locking at all 1216 * For regular files using DIO_LOCKING, 1217 + * readers need to grab i_mutex and i_alloc_sem 1218 + * writers need to grab i_alloc_sem only (i_mutex is already held) 1219 * For regular files using DIO_OWN_LOCKING, 1220 * neither readers nor writers take any locks here 1221 + * (i_mutex is already held and released for writers here) 1222 */ 1223 dio->lock_type = dio_lock_type; 1224 if (dio_lock_type != DIO_NO_LOCKING) { ··· 1228 1229 mapping = iocb->ki_filp->f_mapping; 1230 if (dio_lock_type != DIO_OWN_LOCKING) { 1231 + mutex_lock(&inode->i_mutex); 1232 reader_with_isem = 1; 1233 } 1234 ··· 1240 } 1241 1242 if (dio_lock_type == DIO_OWN_LOCKING) { 1243 + mutex_unlock(&inode->i_mutex); 1244 reader_with_isem = 0; 1245 } 1246 } ··· 1266 1267 out: 1268 if (reader_with_isem) 1269 + mutex_unlock(&inode->i_mutex); 1270 if (rw & WRITE) 1271 current->flags &= ~PF_SYNCWRITE; 1272 return retval;
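For DIO_LOCKING reads, the rules in the comment block above boil down to "hold i_mutex only while blocks are being looked up". A sketch of that early-drop shape; do_block_lookups() and submit_bios() are hypothetical stand-ins for the real worker, stubbed out so the sketch is self-contained:

#include <linux/fs.h>
#include <linux/mutex.h>

static ssize_t do_block_lookups(struct inode *inode) { return 0; }  /* stub */
static ssize_t submit_bios(struct inode *inode) { return 0; }       /* stub */

static ssize_t dio_read_sketch(struct inode *inode)
{
        ssize_t ret;

        mutex_lock(&inode->i_mutex);    /* keep writers from racing the lookups */
        ret = do_block_lookups(inode);
        mutex_unlock(&inode->i_mutex);  /* lookups done: drop before the I/O */

        if (ret >= 0)
                ret = submit_bios(inode);       /* runs without i_mutex */
        return ret;
}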
+8 -8
fs/dquot.c
··· 100 * operation is just reading pointers from inode (or not using them at all) the 101 * read lock is enough. If pointers are altered function must hold write lock 102 * (these locking rules also apply for S_NOQUOTA flag in the inode - note that 103 - * for altering the flag i_sem is also needed). If operation is holding 104 * reference to dquot in other way (e.g. quotactl ops) it must be guarded by 105 * dqonoff_sem. 106 * This locking assures that: ··· 117 * spinlock to internal buffers before writing. 118 * 119 * Lock ordering (including related VFS locks) is the following: 120 - * i_sem > dqonoff_sem > iprune_sem > journal_lock > dqptr_sem > 121 * > dquot->dq_lock > dqio_sem 122 - * i_sem on quota files is special (it's below dqio_sem) 123 */ 124 125 static DEFINE_SPINLOCK(dq_list_lock); ··· 1369 /* If quota was reenabled in the meantime, we have 1370 * nothing to do */ 1371 if (!sb_has_quota_enabled(sb, cnt)) { 1372 - down(&toputinode[cnt]->i_sem); 1373 toputinode[cnt]->i_flags &= ~(S_IMMUTABLE | 1374 S_NOATIME | S_NOQUOTA); 1375 truncate_inode_pages(&toputinode[cnt]->i_data, 0); 1376 - up(&toputinode[cnt]->i_sem); 1377 mark_inode_dirty(toputinode[cnt]); 1378 iput(toputinode[cnt]); 1379 } ··· 1417 write_inode_now(inode, 1); 1418 /* And now flush the block cache so that kernel sees the changes */ 1419 invalidate_bdev(sb->s_bdev, 0); 1420 - down(&inode->i_sem); 1421 down(&dqopt->dqonoff_sem); 1422 if (sb_has_quota_enabled(sb, type)) { 1423 error = -EBUSY; ··· 1449 goto out_file_init; 1450 } 1451 up(&dqopt->dqio_sem); 1452 - up(&inode->i_sem); 1453 set_enable_flags(dqopt, type); 1454 1455 add_dquot_ref(sb, type); ··· 1470 inode->i_flags |= oldflags; 1471 up_write(&dqopt->dqptr_sem); 1472 } 1473 - up(&inode->i_sem); 1474 out_fmt: 1475 put_quota_format(fmt); 1476
··· 100 * operation is just reading pointers from inode (or not using them at all) the 101 * read lock is enough. If pointers are altered function must hold write lock 102 * (these locking rules also apply for S_NOQUOTA flag in the inode - note that 103 + * for altering the flag i_mutex is also needed). If operation is holding 104 * reference to dquot in other way (e.g. quotactl ops) it must be guarded by 105 * dqonoff_sem. 106 * This locking assures that: ··· 117 * spinlock to internal buffers before writing. 118 * 119 * Lock ordering (including related VFS locks) is the following: 120 + * i_mutex > dqonoff_sem > iprune_sem > journal_lock > dqptr_sem > 121 * > dquot->dq_lock > dqio_sem 122 + * i_mutex on quota files is special (it's below dqio_sem) 123 */ 124 125 static DEFINE_SPINLOCK(dq_list_lock); ··· 1369 /* If quota was reenabled in the meantime, we have 1370 * nothing to do */ 1371 if (!sb_has_quota_enabled(sb, cnt)) { 1372 + mutex_lock(&toputinode[cnt]->i_mutex); 1373 toputinode[cnt]->i_flags &= ~(S_IMMUTABLE | 1374 S_NOATIME | S_NOQUOTA); 1375 truncate_inode_pages(&toputinode[cnt]->i_data, 0); 1376 + mutex_unlock(&toputinode[cnt]->i_mutex); 1377 mark_inode_dirty(toputinode[cnt]); 1378 iput(toputinode[cnt]); 1379 } ··· 1417 write_inode_now(inode, 1); 1418 /* And now flush the block cache so that kernel sees the changes */ 1419 invalidate_bdev(sb->s_bdev, 0); 1420 + mutex_lock(&inode->i_mutex); 1421 down(&dqopt->dqonoff_sem); 1422 if (sb_has_quota_enabled(sb, type)) { 1423 error = -EBUSY; ··· 1449 goto out_file_init; 1450 } 1451 up(&dqopt->dqio_sem); 1452 + mutex_unlock(&inode->i_mutex); 1453 set_enable_flags(dqopt, type); 1454 1455 add_dquot_ref(sb, type); ··· 1470 inode->i_flags |= oldflags; 1471 up_write(&dqopt->dqptr_sem); 1472 } 1473 + mutex_unlock(&inode->i_mutex); 1474 out_fmt: 1475 put_quota_format(fmt); 1476
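The ordering comment is the load-bearing part of this file: i_mutex ranks above dqonoff_sem, and the quota-on path above honours that by taking i_mutex first and releasing it last. A sketch of respecting a documented lock order (the function name is hypothetical; the lock types are the ones used above):

#include <linux/fs.h>
#include <linux/mutex.h>
#include <asm/semaphore.h>

static void quota_state_change_sketch(struct inode *inode,
                                      struct semaphore *dqonoff_sem)
{
        mutex_lock(&inode->i_mutex);    /* higher-ranked lock first */
        down(dqonoff_sem);

        /* ... flip quota state for this inode ... */

        up(dqonoff_sem);                /* release in reverse order */
        mutex_unlock(&inode->i_mutex);
}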
+6 -6
fs/exportfs/expfs.c
··· 177 struct dentry *ppd; 178 struct dentry *npd; 179 180 - down(&pd->d_inode->i_sem); 181 ppd = CALL(nops,get_parent)(pd); 182 - up(&pd->d_inode->i_sem); 183 184 if (IS_ERR(ppd)) { 185 err = PTR_ERR(ppd); ··· 201 break; 202 } 203 dprintk("find_exported_dentry: found name: %s\n", nbuf); 204 - down(&ppd->d_inode->i_sem); 205 npd = lookup_one_len(nbuf, ppd, strlen(nbuf)); 206 - up(&ppd->d_inode->i_sem); 207 if (IS_ERR(npd)) { 208 err = PTR_ERR(npd); 209 dprintk("find_exported_dentry: lookup failed: %d\n", err); ··· 242 struct dentry *nresult; 243 err = CALL(nops,get_name)(target_dir, nbuf, result); 244 if (!err) { 245 - down(&target_dir->d_inode->i_sem); 246 nresult = lookup_one_len(nbuf, target_dir, strlen(nbuf)); 247 - up(&target_dir->d_inode->i_sem); 248 if (!IS_ERR(nresult)) { 249 if (nresult->d_inode) { 250 dput(result);
··· 177 struct dentry *ppd; 178 struct dentry *npd; 179 180 + mutex_lock(&pd->d_inode->i_mutex); 181 ppd = CALL(nops,get_parent)(pd); 182 + mutex_unlock(&pd->d_inode->i_mutex); 183 184 if (IS_ERR(ppd)) { 185 err = PTR_ERR(ppd); ··· 201 break; 202 } 203 dprintk("find_exported_dentry: found name: %s\n", nbuf); 204 + mutex_lock(&ppd->d_inode->i_mutex); 205 npd = lookup_one_len(nbuf, ppd, strlen(nbuf)); 206 + mutex_unlock(&ppd->d_inode->i_mutex); 207 if (IS_ERR(npd)) { 208 err = PTR_ERR(npd); 209 dprintk("find_exported_dentry: lookup failed: %d\n", err); ··· 242 struct dentry *nresult; 243 err = CALL(nops,get_name)(target_dir, nbuf, result); 244 if (!err) { 245 + mutex_lock(&target_dir->d_inode->i_mutex); 246 nresult = lookup_one_len(nbuf, target_dir, strlen(nbuf)); 247 + mutex_unlock(&target_dir->d_inode->i_mutex); 248 if (!IS_ERR(nresult)) { 249 if (nresult->d_inode) { 250 dput(result);
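Every lookup_one_len() call in this hunk is bracketed by the parent's i_mutex, because a directory lookup may instantiate dentries and must be serialized against concurrent modification of that directory. A minimal wrapper making the rule explicit (locked_lookup is a hypothetical name):

#include <linux/fs.h>
#include <linux/namei.h>
#include <linux/mutex.h>

static struct dentry *locked_lookup(struct dentry *parent,
                                    const char *name, int len)
{
        struct dentry *d;

        mutex_lock(&parent->d_inode->i_mutex);
        d = lookup_one_len(name, parent, len);
        mutex_unlock(&parent->d_inode->i_mutex);
        return d;       /* may be ERR_PTR(); caller checks */
}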
+5 -5
fs/ext2/acl.c
··· 149 } 150 151 /* 152 - * inode->i_sem: don't care 153 */ 154 static struct posix_acl * 155 ext2_get_acl(struct inode *inode, int type) ··· 211 } 212 213 /* 214 - * inode->i_sem: down 215 */ 216 static int 217 ext2_set_acl(struct inode *inode, int type, struct posix_acl *acl) ··· 301 /* 302 * Initialize the ACLs of a new inode. Called from ext2_new_inode. 303 * 304 - * dir->i_sem: down 305 - * inode->i_sem: up (access to inode is still exclusive) 306 */ 307 int 308 ext2_init_acl(struct inode *inode, struct inode *dir) ··· 361 * for directories) are added. There are no more bits available in the 362 * file mode. 363 * 364 - * inode->i_sem: down 365 */ 366 int 367 ext2_acl_chmod(struct inode *inode)
··· 149 } 150 151 /* 152 + * inode->i_mutex: don't care 153 */ 154 static struct posix_acl * 155 ext2_get_acl(struct inode *inode, int type) ··· 211 } 212 213 /* 214 + * inode->i_mutex: down 215 */ 216 static int 217 ext2_set_acl(struct inode *inode, int type, struct posix_acl *acl) ··· 301 /* 302 * Initialize the ACLs of a new inode. Called from ext2_new_inode. 303 * 304 + * dir->i_mutex: down 305 + * inode->i_mutex: up (access to inode is still exclusive) 306 */ 307 int 308 ext2_init_acl(struct inode *inode, struct inode *dir) ··· 361 * for directories) are added. There are no more bits available in the 362 * file mode. 363 * 364 + * inode->i_mutex: down 365 */ 366 int 367 ext2_acl_chmod(struct inode *inode)
+1 -1
fs/ext2/ext2.h
··· 53 #ifdef CONFIG_EXT2_FS_XATTR 54 /* 55 * Extended attributes can be read independently of the main file 56 - * data. Taking i_sem even when reading would cause contention 57 * between readers of EAs and writers of regular file data, so 58 * instead we synchronize on xattr_sem when reading or changing 59 * EAs.
··· 53 #ifdef CONFIG_EXT2_FS_XATTR 54 /* 55 * Extended attributes can be read independently of the main file 56 + * data. Taking i_mutex even when reading would cause contention 57 * between readers of EAs and writers of regular file data, so 58 * instead we synchronize on xattr_sem when reading or changing 59 * EAs.
+2 -2
fs/ext2/super.c
··· 1152 struct buffer_head tmp_bh; 1153 struct buffer_head *bh; 1154 1155 - down(&inode->i_sem); 1156 while (towrite > 0) { 1157 tocopy = sb->s_blocksize - offset < towrite ? 1158 sb->s_blocksize - offset : towrite; ··· 1189 inode->i_version++; 1190 inode->i_mtime = inode->i_ctime = CURRENT_TIME; 1191 mark_inode_dirty(inode); 1192 - up(&inode->i_sem); 1193 return len - towrite; 1194 } 1195
··· 1152 struct buffer_head tmp_bh; 1153 struct buffer_head *bh; 1154 1155 + mutex_lock(&inode->i_mutex); 1156 while (towrite > 0) { 1157 tocopy = sb->s_blocksize - offset < towrite ? 1158 sb->s_blocksize - offset : towrite; ··· 1189 inode->i_version++; 1190 inode->i_mtime = inode->i_ctime = CURRENT_TIME; 1191 mark_inode_dirty(inode); 1192 + mutex_unlock(&inode->i_mutex); 1193 return len - towrite; 1194 } 1195
+1 -1
fs/ext2/xattr.c
··· 325 /* 326 * Inode operation listxattr() 327 * 328 - * dentry->d_inode->i_sem: don't care 329 */ 330 ssize_t 331 ext2_listxattr(struct dentry *dentry, char *buffer, size_t size)
··· 325 /* 326 * Inode operation listxattr() 327 * 328 + * dentry->d_inode->i_mutex: don't care 329 */ 330 ssize_t 331 ext2_listxattr(struct dentry *dentry, char *buffer, size_t size)
+5 -5
fs/ext3/acl.c
··· 152 /* 153 * Inode operation get_posix_acl(). 154 * 155 - * inode->i_sem: don't care 156 */ 157 static struct posix_acl * 158 ext3_get_acl(struct inode *inode, int type) ··· 216 /* 217 * Set the access or default ACL of an inode. 218 * 219 - * inode->i_sem: down unless called from ext3_new_inode 220 */ 221 static int 222 ext3_set_acl(handle_t *handle, struct inode *inode, int type, ··· 306 /* 307 * Initialize the ACLs of a new inode. Called from ext3_new_inode. 308 * 309 - * dir->i_sem: down 310 - * inode->i_sem: up (access to inode is still exclusive) 311 */ 312 int 313 ext3_init_acl(handle_t *handle, struct inode *inode, struct inode *dir) ··· 368 * for directories) are added. There are no more bits available in the 369 * file mode. 370 * 371 - * inode->i_sem: down 372 */ 373 int 374 ext3_acl_chmod(struct inode *inode)
··· 152 /* 153 * Inode operation get_posix_acl(). 154 * 155 + * inode->i_mutex: don't care 156 */ 157 static struct posix_acl * 158 ext3_get_acl(struct inode *inode, int type) ··· 216 /* 217 * Set the access or default ACL of an inode. 218 * 219 + * inode->i_mutex: down unless called from ext3_new_inode 220 */ 221 static int 222 ext3_set_acl(handle_t *handle, struct inode *inode, int type, ··· 306 /* 307 * Initialize the ACLs of a new inode. Called from ext3_new_inode. 308 * 309 + * dir->i_mutex: down 310 + * inode->i_mutex: up (access to inode is still exclusive) 311 */ 312 int 313 ext3_init_acl(handle_t *handle, struct inode *inode, struct inode *dir) ··· 368 * for directories) are added. There are no more bits available in the 369 * file mode. 370 * 371 + * inode->i_mutex: down 372 */ 373 int 374 ext3_acl_chmod(struct inode *inode)
+3 -3
fs/ext3/super.c
··· 2150 2151 static void ext3_write_super (struct super_block * sb) 2152 { 2153 - if (down_trylock(&sb->s_lock) == 0) 2154 BUG(); 2155 sb->s_dirt = 0; 2156 } ··· 2601 struct buffer_head *bh; 2602 handle_t *handle = journal_current_handle(); 2603 2604 - down(&inode->i_sem); 2605 while (towrite > 0) { 2606 tocopy = sb->s_blocksize - offset < towrite ? 2607 sb->s_blocksize - offset : towrite; ··· 2644 inode->i_version++; 2645 inode->i_mtime = inode->i_ctime = CURRENT_TIME; 2646 ext3_mark_inode_dirty(handle, inode); 2647 - up(&inode->i_sem); 2648 return len - towrite; 2649 } 2650
··· 2150 2151 static void ext3_write_super (struct super_block * sb) 2152 { 2153 + if (mutex_trylock(&sb->s_lock) != 0) 2154 BUG(); 2155 sb->s_dirt = 0; 2156 } ··· 2601 struct buffer_head *bh; 2602 handle_t *handle = journal_current_handle(); 2603 2604 + mutex_lock(&inode->i_mutex); 2605 while (towrite > 0) { 2606 tocopy = sb->s_blocksize - offset < towrite ? 2607 sb->s_blocksize - offset : towrite; ··· 2644 inode->i_version++; 2645 inode->i_mtime = inode->i_ctime = CURRENT_TIME; 2646 ext3_mark_inode_dirty(handle, inode); 2647 + mutex_unlock(&inode->i_mutex); 2648 return len - towrite; 2649 } 2650
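The ext3_write_super() change is the one spot in this series where the test flips rather than just the function name: down_trylock() returns 0 when it takes the lock, while mutex_trylock() returns 1 when it takes the lock. A sketch of asserting "this lock is already held" in both idioms (assertion helpers with hypothetical names):

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <asm/semaphore.h>

static void assert_sem_held(struct semaphore *sem)
{
        if (down_trylock(sem) == 0)     /* 0 == acquired == nobody held it */
                BUG();
}

static void assert_mutex_held(struct mutex *lock)
{
        if (mutex_trylock(lock) != 0)   /* 1 == acquired == nobody held it */
                BUG();
}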
+1 -1
fs/ext3/xattr.c
··· 140 /* 141 * Inode operation listxattr() 142 * 143 - * dentry->d_inode->i_sem: don't care 144 */ 145 ssize_t 146 ext3_listxattr(struct dentry *dentry, char *buffer, size_t size)
··· 140 /* 141 * Inode operation listxattr() 142 * 143 + * dentry->d_inode->i_mutex: don't care 144 */ 145 ssize_t 146 ext3_listxattr(struct dentry *dentry, char *buffer, size_t size)
+2 -2
fs/fat/dir.c
··· 729 730 buf.dirent = d1; 731 buf.result = 0; 732 - down(&inode->i_sem); 733 ret = -ENOENT; 734 if (!IS_DEADDIR(inode)) { 735 ret = __fat_readdir(inode, filp, &buf, fat_ioctl_filldir, 736 short_only, both); 737 } 738 - up(&inode->i_sem); 739 if (ret >= 0) 740 ret = buf.result; 741 return ret;
··· 729 730 buf.dirent = d1; 731 buf.result = 0; 732 + mutex_lock(&inode->i_mutex); 733 ret = -ENOENT; 734 if (!IS_DEADDIR(inode)) { 735 ret = __fat_readdir(inode, filp, &buf, fat_ioctl_filldir, 736 short_only, both); 737 } 738 + mutex_unlock(&inode->i_mutex); 739 if (ret >= 0) 740 ret = buf.result; 741 return ret;
+2 -2
fs/fat/file.c
··· 41 if (err) 42 return err; 43 44 - down(&inode->i_sem); 45 46 if (IS_RDONLY(inode)) { 47 err = -EROFS; ··· 103 MSDOS_I(inode)->i_attrs = attr & ATTR_UNUSED; 104 mark_inode_dirty(inode); 105 up: 106 - up(&inode->i_sem); 107 return err; 108 } 109 default:
··· 41 if (err) 42 return err; 43 44 + mutex_lock(&inode->i_mutex); 45 46 if (IS_RDONLY(inode)) { 47 err = -EROFS; ··· 103 MSDOS_I(inode)->i_attrs = attr & ATTR_UNUSED; 104 mark_inode_dirty(inode); 105 up: 106 + mutex_unlock(&inode->i_mutex); 107 return err; 108 } 109 default:
+3 -3
fs/fifo.c
··· 35 int ret; 36 37 ret = -ERESTARTSYS; 38 - if (down_interruptible(PIPE_SEM(*inode))) 39 goto err_nolock_nocleanup; 40 41 if (!inode->i_pipe) { ··· 119 } 120 121 /* Ok! */ 122 - up(PIPE_SEM(*inode)); 123 return 0; 124 125 err_rd: ··· 139 free_pipe_info(inode); 140 141 err_nocleanup: 142 - up(PIPE_SEM(*inode)); 143 144 err_nolock_nocleanup: 145 return ret;
··· 35 int ret; 36 37 ret = -ERESTARTSYS; 38 + if (mutex_lock_interruptible(PIPE_MUTEX(*inode))) 39 goto err_nolock_nocleanup; 40 41 if (!inode->i_pipe) { ··· 119 } 120 121 /* Ok! */ 122 + mutex_unlock(PIPE_MUTEX(*inode)); 123 return 0; 124 125 err_rd: ··· 139 free_pipe_info(inode); 140 141 err_nocleanup: 142 + mutex_unlock(PIPE_MUTEX(*inode)); 143 144 err_nolock_nocleanup: 145 return ret;
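mutex_lock_interruptible() keeps down_interruptible()'s calling convention, returning 0 on success and nonzero when a signal cuts the wait short, so the fifo open path's error handling carries over unchanged. The usual shape:

#include <linux/errno.h>
#include <linux/mutex.h>

static int open_sketch(struct mutex *lock)
{
        if (mutex_lock_interruptible(lock))
                return -ERESTARTSYS;    /* let the syscall be restarted */

        /* ... critical section ... */

        mutex_unlock(lock);
        return 0;
}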
+2 -2
fs/fuse/file.c
··· 560 struct inode *inode = file->f_dentry->d_inode; 561 ssize_t res; 562 /* Don't allow parallel writes to the same file */ 563 - down(&inode->i_sem); 564 res = fuse_direct_io(file, buf, count, ppos, 1); 565 - up(&inode->i_sem); 566 return res; 567 } 568
··· 560 struct inode *inode = file->f_dentry->d_inode; 561 ssize_t res; 562 /* Don't allow parallel writes to the same file */ 563 + mutex_lock(&inode->i_mutex); 564 res = fuse_direct_io(file, buf, count, ppos, 1); 565 + mutex_unlock(&inode->i_mutex); 566 return res; 567 } 568
+2 -2
fs/hfs/inode.c
··· 547 if (atomic_read(&file->f_count) != 0) 548 return 0; 549 if (atomic_dec_and_test(&HFS_I(inode)->opencnt)) { 550 - down(&inode->i_sem); 551 hfs_file_truncate(inode); 552 //if (inode->i_flags & S_DEAD) { 553 // hfs_delete_cat(inode->i_ino, HFSPLUS_SB(sb).hidden_dir, NULL); 554 // hfs_delete_inode(inode); 555 //} 556 - up(&inode->i_sem); 557 } 558 return 0; 559 }
··· 547 if (atomic_read(&file->f_count) != 0) 548 return 0; 549 if (atomic_dec_and_test(&HFS_I(inode)->opencnt)) { 550 + mutex_lock(&inode->i_mutex); 551 hfs_file_truncate(inode); 552 //if (inode->i_flags & S_DEAD) { 553 // hfs_delete_cat(inode->i_ino, HFSPLUS_SB(sb).hidden_dir, NULL); 554 // hfs_delete_inode(inode); 555 //} 556 + mutex_unlock(&inode->i_mutex); 557 } 558 return 0; 559 }
+4 -4
fs/hfsplus/bitmap.c
··· 29 return size; 30 31 dprint(DBG_BITMAP, "block_allocate: %u,%u,%u\n", size, offset, len); 32 - down(&HFSPLUS_SB(sb).alloc_file->i_sem); 33 mapping = HFSPLUS_SB(sb).alloc_file->i_mapping; 34 page = read_cache_page(mapping, offset / PAGE_CACHE_BITS, 35 (filler_t *)mapping->a_ops->readpage, NULL); ··· 143 sb->s_dirt = 1; 144 dprint(DBG_BITMAP, "-> %u,%u\n", start, *max); 145 out: 146 - up(&HFSPLUS_SB(sb).alloc_file->i_sem); 147 return start; 148 } 149 ··· 164 if ((offset + count) > HFSPLUS_SB(sb).total_blocks) 165 return -2; 166 167 - down(&HFSPLUS_SB(sb).alloc_file->i_sem); 168 mapping = HFSPLUS_SB(sb).alloc_file->i_mapping; 169 pnr = offset / PAGE_CACHE_BITS; 170 page = read_cache_page(mapping, pnr, (filler_t *)mapping->a_ops->readpage, NULL); ··· 215 kunmap(page); 216 HFSPLUS_SB(sb).free_blocks += len; 217 sb->s_dirt = 1; 218 - up(&HFSPLUS_SB(sb).alloc_file->i_sem); 219 220 return 0; 221 }
··· 29 return size; 30 31 dprint(DBG_BITMAP, "block_allocate: %u,%u,%u\n", size, offset, len); 32 + mutex_lock(&HFSPLUS_SB(sb).alloc_file->i_mutex); 33 mapping = HFSPLUS_SB(sb).alloc_file->i_mapping; 34 page = read_cache_page(mapping, offset / PAGE_CACHE_BITS, 35 (filler_t *)mapping->a_ops->readpage, NULL); ··· 143 sb->s_dirt = 1; 144 dprint(DBG_BITMAP, "-> %u,%u\n", start, *max); 145 out: 146 + mutex_unlock(&HFSPLUS_SB(sb).alloc_file->i_mutex); 147 return start; 148 } 149 ··· 164 if ((offset + count) > HFSPLUS_SB(sb).total_blocks) 165 return -2; 166 167 + mutex_lock(&HFSPLUS_SB(sb).alloc_file->i_mutex); 168 mapping = HFSPLUS_SB(sb).alloc_file->i_mapping; 169 pnr = offset / PAGE_CACHE_BITS; 170 page = read_cache_page(mapping, pnr, (filler_t *)mapping->a_ops->readpage, NULL); ··· 215 kunmap(page); 216 HFSPLUS_SB(sb).free_blocks += len; 217 sb->s_dirt = 1; 218 + mutex_unlock(&HFSPLUS_SB(sb).alloc_file->i_mutex); 219 220 return 0; 221 }
+2 -2
fs/hfsplus/inode.c
··· 276 if (atomic_read(&file->f_count) != 0) 277 return 0; 278 if (atomic_dec_and_test(&HFSPLUS_I(inode).opencnt)) { 279 - down(&inode->i_sem); 280 hfsplus_file_truncate(inode); 281 if (inode->i_flags & S_DEAD) { 282 hfsplus_delete_cat(inode->i_ino, HFSPLUS_SB(sb).hidden_dir, NULL); 283 hfsplus_delete_inode(inode); 284 } 285 - up(&inode->i_sem); 286 } 287 return 0; 288 }
··· 276 if (atomic_read(&file->f_count) != 0) 277 return 0; 278 if (atomic_dec_and_test(&HFSPLUS_I(inode).opencnt)) { 279 + mutex_lock(&inode->i_mutex); 280 hfsplus_file_truncate(inode); 281 if (inode->i_flags & S_DEAD) { 282 hfsplus_delete_cat(inode->i_ino, HFSPLUS_SB(sb).hidden_dir, NULL); 283 hfsplus_delete_inode(inode); 284 } 285 + mutex_unlock(&inode->i_mutex); 286 } 287 return 0; 288 }
+3 -3
fs/hpfs/dir.c
··· 32 33 /*printk("dir lseek\n");*/ 34 if (new_off == 0 || new_off == 1 || new_off == 11 || new_off == 12 || new_off == 13) goto ok; 35 - down(&i->i_sem); 36 pos = ((loff_t) hpfs_de_as_down_as_possible(s, hpfs_inode->i_dno) << 4) + 1; 37 while (pos != new_off) { 38 if (map_pos_dirent(i, &pos, &qbh)) hpfs_brelse4(&qbh); 39 else goto fail; 40 if (pos == 12) goto fail; 41 } 42 - up(&i->i_sem); 43 ok: 44 unlock_kernel(); 45 return filp->f_pos = new_off; 46 fail: 47 - up(&i->i_sem); 48 /*printk("illegal lseek: %016llx\n", new_off);*/ 49 unlock_kernel(); 50 return -ESPIPE;
··· 32 33 /*printk("dir lseek\n");*/ 34 if (new_off == 0 || new_off == 1 || new_off == 11 || new_off == 12 || new_off == 13) goto ok; 35 + mutex_lock(&i->i_mutex); 36 pos = ((loff_t) hpfs_de_as_down_as_possible(s, hpfs_inode->i_dno) << 4) + 1; 37 while (pos != new_off) { 38 if (map_pos_dirent(i, &pos, &qbh)) hpfs_brelse4(&qbh); 39 else goto fail; 40 if (pos == 12) goto fail; 41 } 42 + mutex_unlock(&i->i_mutex); 43 ok: 44 unlock_kernel(); 45 return filp->f_pos = new_off; 46 fail: 47 + mutex_unlock(&i->i_mutex); 48 /*printk("illegal lseek: %016llx\n", new_off);*/ 49 unlock_kernel(); 50 return -ESPIPE;
+3 -3
fs/hppfs/hppfs_kern.c
··· 171 172 err = -ENOMEM; 173 parent = HPPFS_I(ino)->proc_dentry; 174 - down(&parent->d_inode->i_sem); 175 proc_dentry = d_lookup(parent, &dentry->d_name); 176 if(proc_dentry == NULL){ 177 proc_dentry = d_alloc(parent, &dentry->d_name); 178 if(proc_dentry == NULL){ 179 - up(&parent->d_inode->i_sem); 180 goto out; 181 } 182 new = (*parent->d_inode->i_op->lookup)(parent->d_inode, ··· 186 proc_dentry = new; 187 } 188 } 189 - up(&parent->d_inode->i_sem); 190 191 if(IS_ERR(proc_dentry)) 192 return(proc_dentry);
··· 171 172 err = -ENOMEM; 173 parent = HPPFS_I(ino)->proc_dentry; 174 + mutex_lock(&parent->d_inode->i_mutex); 175 proc_dentry = d_lookup(parent, &dentry->d_name); 176 if(proc_dentry == NULL){ 177 proc_dentry = d_alloc(parent, &dentry->d_name); 178 if(proc_dentry == NULL){ 179 + mutex_unlock(&parent->d_inode->i_mutex); 180 goto out; 181 } 182 new = (*parent->d_inode->i_op->lookup)(parent->d_inode, ··· 186 proc_dentry = new; 187 } 188 } 189 + mutex_unlock(&parent->d_inode->i_mutex); 190 191 if(IS_ERR(proc_dentry)) 192 return(proc_dentry);
+2 -2
fs/hugetlbfs/inode.c
··· 118 119 vma_len = (loff_t)(vma->vm_end - vma->vm_start); 120 121 - down(&inode->i_sem); 122 file_accessed(file); 123 vma->vm_flags |= VM_HUGETLB | VM_RESERVED; 124 vma->vm_ops = &hugetlb_vm_ops; ··· 133 if (inode->i_size < len) 134 inode->i_size = len; 135 out: 136 - up(&inode->i_sem); 137 138 return ret; 139 }
··· 118 119 vma_len = (loff_t)(vma->vm_end - vma->vm_start); 120 121 + mutex_lock(&inode->i_mutex); 122 file_accessed(file); 123 vma->vm_flags |= VM_HUGETLB | VM_RESERVED; 124 vma->vm_ops = &hugetlb_vm_ops; ··· 133 if (inode->i_size < len) 134 inode->i_size = len; 135 out: 136 + mutex_unlock(&inode->i_mutex); 137 138 return ret; 139 }
+1 -1
fs/inode.c
··· 192 INIT_HLIST_NODE(&inode->i_hash); 193 INIT_LIST_HEAD(&inode->i_dentry); 194 INIT_LIST_HEAD(&inode->i_devices); 195 - sema_init(&inode->i_sem, 1); 196 init_rwsem(&inode->i_alloc_sem); 197 INIT_RADIX_TREE(&inode->i_data.page_tree, GFP_ATOMIC); 198 rwlock_init(&inode->i_data.tree_lock);
··· 192 INIT_HLIST_NODE(&inode->i_hash); 193 INIT_LIST_HEAD(&inode->i_dentry); 194 INIT_LIST_HEAD(&inode->i_devices); 195 + mutex_init(&inode->i_mutex); 196 init_rwsem(&inode->i_alloc_sem); 197 INIT_RADIX_TREE(&inode->i_data.page_tree, GFP_ATOMIC); 198 rwlock_init(&inode->i_data.tree_lock);
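The fs/inode.c hunk is where every i_sem used to be born as a counting semaphore with a count of one; mutex_init() replaces sema_init(..., 1) for dynamically allocated locks, and statically allocated ones use DEFINE_MUTEX(). A sketch of both initialization styles (my_object is a hypothetical type):

#include <linux/mutex.h>

static DEFINE_MUTEX(global_lock);       /* static initialization */

struct my_object {
        struct mutex lock;
};

static void my_object_init(struct my_object *obj)
{
        mutex_init(&obj->lock);         /* replaces sema_init(&obj->sem, 1) */
}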
+1 -1
fs/jffs/inode-v23.c
··· 1415 * This will never trigger with sane page sizes. leave it in 1416 * anyway, since I'm thinking about how to merge larger writes 1417 * (the current idea is to poke a thread that does the actual 1418 - * I/O and starts by doing a down(&inode->i_sem). then we 1419 * would need to get the page cache pages and have a list of 1420 * I/O requests and do write-merging here. 1421 * -- prumpf
··· 1415 * This will never trigger with sane page sizes. leave it in 1416 * anyway, since I'm thinking about how to merge larger writes 1417 * (the current idea is to poke a thread that does the actual 1418 + * I/O and starts by doing a mutex_lock(&inode->i_mutex). then we 1419 * would need to get the page cache pages and have a list of 1420 * I/O requests and do write-merging here. 1421 * -- prumpf
+2 -2
fs/jfs/jfs_incore.h
··· 58 /* 59 * rdwrlock serializes xtree between reads & writes and synchronizes 60 * changes to special inodes. It's use would be redundant on 61 - * directories since the i_sem taken in the VFS is sufficient. 62 */ 63 struct rw_semaphore rdwrlock; 64 /* ··· 68 * inode is blocked in txBegin or TxBeginAnon 69 */ 70 struct semaphore commit_sem; 71 - /* xattr_sem allows us to access the xattrs without taking i_sem */ 72 struct rw_semaphore xattr_sem; 73 lid_t xtlid; /* lid of xtree lock on directory */ 74 #ifdef CONFIG_JFS_POSIX_ACL
··· 58 /* 59 * rdwrlock serializes xtree between reads & writes and synchronizes 60 * changes to special inodes. Its use would be redundant on 61 + * directories since the i_mutex taken in the VFS is sufficient. 62 */ 63 struct rw_semaphore rdwrlock; 64 /* ··· 68 * inode is blocked in txBegin or TxBeginAnon 69 */ 70 struct semaphore commit_sem; 71 + /* xattr_sem allows us to access the xattrs without taking i_mutex */ 72 struct rw_semaphore xattr_sem; 73 lid_t xtlid; /* lid of xtree lock on directory */ 74 #ifdef CONFIG_JFS_POSIX_ACL
+4 -4
fs/libfs.c
··· 74 75 loff_t dcache_dir_lseek(struct file *file, loff_t offset, int origin) 76 { 77 - down(&file->f_dentry->d_inode->i_sem); 78 switch (origin) { 79 case 1: 80 offset += file->f_pos; ··· 82 if (offset >= 0) 83 break; 84 default: 85 - up(&file->f_dentry->d_inode->i_sem); 86 return -EINVAL; 87 } 88 if (offset != file->f_pos) { ··· 106 spin_unlock(&dcache_lock); 107 } 108 } 109 - up(&file->f_dentry->d_inode->i_sem); 110 return offset; 111 } 112 ··· 356 357 /* 358 * No need to use i_size_read() here, the i_size 359 - * cannot change under us because we hold the i_sem. 360 */ 361 if (pos > inode->i_size) 362 i_size_write(inode, pos);
··· 74 75 loff_t dcache_dir_lseek(struct file *file, loff_t offset, int origin) 76 { 77 + mutex_lock(&file->f_dentry->d_inode->i_mutex); 78 switch (origin) { 79 case 1: 80 offset += file->f_pos; ··· 82 if (offset >= 0) 83 break; 84 default: 85 + mutex_unlock(&file->f_dentry->d_inode->i_mutex); 86 return -EINVAL; 87 } 88 if (offset != file->f_pos) { ··· 106 spin_unlock(&dcache_lock); 107 } 108 } 109 + mutex_unlock(&file->f_dentry->d_inode->i_mutex); 110 return offset; 111 } 112 ··· 356 357 /* 358 * No need to use i_size_read() here, the i_size 359 + * cannot change under us because we hold the i_mutex. 360 */ 361 if (pos > inode->i_size) 362 i_size_write(inode, pos);
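dcache_dir_lseek() takes i_mutex so that updates to f_pos and the cursor dentry cannot race other lseek/readdir calls on the same open directory, and the write-path comment above relies on the same lock to read i_size without i_size_read(). A stripped-down sketch of an llseek that leans on i_mutex (lseek_sketch is hypothetical and handles only absolute offsets):

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/mutex.h>

static loff_t lseek_sketch(struct file *file, loff_t offset)
{
        struct inode *inode = file->f_dentry->d_inode;

        mutex_lock(&inode->i_mutex);
        if (offset >= 0)
                file->f_pos = offset;   /* no concurrent update under i_mutex */
        else
                offset = -EINVAL;
        mutex_unlock(&inode->i_mutex);
        return offset;
}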
+41 -41
fs/namei.c
··· 438 struct dentry * result; 439 struct inode *dir = parent->d_inode; 440 441 - down(&dir->i_sem); 442 /* 443 * First re-do the cached lookup just in case it was created 444 * while we waited for the directory semaphore.. ··· 464 else 465 result = dentry; 466 } 467 - up(&dir->i_sem); 468 return result; 469 } 470 ··· 472 * Uhhuh! Nasty case: the cache was re-populated while 473 * we waited on the semaphore. Need to revalidate. 474 */ 475 - up(&dir->i_sem); 476 if (result->d_op && result->d_op->d_revalidate) { 477 if (!result->d_op->d_revalidate(result, nd) && !d_invalidate(result)) { 478 dput(result); ··· 1366 struct dentry *p; 1367 1368 if (p1 == p2) { 1369 - down(&p1->d_inode->i_sem); 1370 return NULL; 1371 } 1372 ··· 1374 1375 for (p = p1; p->d_parent != p; p = p->d_parent) { 1376 if (p->d_parent == p2) { 1377 - down(&p2->d_inode->i_sem); 1378 - down(&p1->d_inode->i_sem); 1379 return p; 1380 } 1381 } 1382 1383 for (p = p2; p->d_parent != p; p = p->d_parent) { 1384 if (p->d_parent == p1) { 1385 - down(&p1->d_inode->i_sem); 1386 - down(&p2->d_inode->i_sem); 1387 return p; 1388 } 1389 } 1390 1391 - down(&p1->d_inode->i_sem); 1392 - down(&p2->d_inode->i_sem); 1393 return NULL; 1394 } 1395 1396 void unlock_rename(struct dentry *p1, struct dentry *p2) 1397 { 1398 - up(&p1->d_inode->i_sem); 1399 if (p1 != p2) { 1400 - up(&p2->d_inode->i_sem); 1401 up(&p1->d_inode->i_sb->s_vfs_rename_sem); 1402 } 1403 } ··· 1563 1564 dir = nd->dentry; 1565 nd->flags &= ~LOOKUP_PARENT; 1566 - down(&dir->d_inode->i_sem); 1567 path.dentry = lookup_hash(nd); 1568 path.mnt = nd->mnt; 1569 1570 do_last: 1571 error = PTR_ERR(path.dentry); 1572 if (IS_ERR(path.dentry)) { 1573 - up(&dir->d_inode->i_sem); 1574 goto exit; 1575 } 1576 ··· 1579 if (!IS_POSIXACL(dir->d_inode)) 1580 mode &= ~current->fs->umask; 1581 error = vfs_create(dir->d_inode, path.dentry, mode, nd); 1582 - up(&dir->d_inode->i_sem); 1583 dput(nd->dentry); 1584 nd->dentry = path.dentry; 1585 if (error) ··· 1593 /* 1594 * It already exists. 1595 */ 1596 - up(&dir->d_inode->i_sem); 1597 1598 error = -EEXIST; 1599 if (flag & O_EXCL) ··· 1665 goto exit; 1666 } 1667 dir = nd->dentry; 1668 - down(&dir->d_inode->i_sem); 1669 path.dentry = lookup_hash(nd); 1670 path.mnt = nd->mnt; 1671 __putname(nd->last.name); ··· 1680 * Simple function to lookup and return a dentry and create it 1681 * if it doesn't exist. Is SMP-safe. 1682 * 1683 - * Returns with nd->dentry->d_inode->i_sem locked. 1684 */ 1685 struct dentry *lookup_create(struct nameidata *nd, int is_dir) 1686 { 1687 struct dentry *dentry = ERR_PTR(-EEXIST); 1688 1689 - down(&nd->dentry->d_inode->i_sem); 1690 /* 1691 * Yucky last component or no last component at all? 
1692 * (foo/., foo/.., /////) ··· 1784 } 1785 dput(dentry); 1786 } 1787 - up(&nd.dentry->d_inode->i_sem); 1788 path_release(&nd); 1789 out: 1790 putname(tmp); ··· 1836 error = vfs_mkdir(nd.dentry->d_inode, dentry, mode); 1837 dput(dentry); 1838 } 1839 - up(&nd.dentry->d_inode->i_sem); 1840 path_release(&nd); 1841 out: 1842 putname(tmp); ··· 1885 1886 DQUOT_INIT(dir); 1887 1888 - down(&dentry->d_inode->i_sem); 1889 dentry_unhash(dentry); 1890 if (d_mountpoint(dentry)) 1891 error = -EBUSY; ··· 1897 dentry->d_inode->i_flags |= S_DEAD; 1898 } 1899 } 1900 - up(&dentry->d_inode->i_sem); 1901 if (!error) { 1902 d_delete(dentry); 1903 } ··· 1932 error = -EBUSY; 1933 goto exit1; 1934 } 1935 - down(&nd.dentry->d_inode->i_sem); 1936 dentry = lookup_hash(&nd); 1937 error = PTR_ERR(dentry); 1938 if (!IS_ERR(dentry)) { 1939 error = vfs_rmdir(nd.dentry->d_inode, dentry); 1940 dput(dentry); 1941 } 1942 - up(&nd.dentry->d_inode->i_sem); 1943 exit1: 1944 path_release(&nd); 1945 exit: ··· 1959 1960 DQUOT_INIT(dir); 1961 1962 - down(&dentry->d_inode->i_sem); 1963 if (d_mountpoint(dentry)) 1964 error = -EBUSY; 1965 else { ··· 1967 if (!error) 1968 error = dir->i_op->unlink(dir, dentry); 1969 } 1970 - up(&dentry->d_inode->i_sem); 1971 1972 /* We don't d_delete() NFS sillyrenamed files--they still exist. */ 1973 if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) { ··· 1979 1980 /* 1981 * Make sure that the actual truncation of the file will occur outside its 1982 - * directory's i_sem. Truncate can take a long time if there is a lot of 1983 * writeout happening, and we don't want to prevent access to the directory 1984 * while waiting on the I/O. 1985 */ ··· 2001 error = -EISDIR; 2002 if (nd.last_type != LAST_NORM) 2003 goto exit1; 2004 - down(&nd.dentry->d_inode->i_sem); 2005 dentry = lookup_hash(&nd); 2006 error = PTR_ERR(dentry); 2007 if (!IS_ERR(dentry)) { ··· 2015 exit2: 2016 dput(dentry); 2017 } 2018 - up(&nd.dentry->d_inode->i_sem); 2019 if (inode) 2020 iput(inode); /* truncate the inode here */ 2021 exit1: ··· 2075 error = vfs_symlink(nd.dentry->d_inode, dentry, from, S_IALLUGO); 2076 dput(dentry); 2077 } 2078 - up(&nd.dentry->d_inode->i_sem); 2079 path_release(&nd); 2080 out: 2081 putname(to); ··· 2113 if (error) 2114 return error; 2115 2116 - down(&old_dentry->d_inode->i_sem); 2117 DQUOT_INIT(dir); 2118 error = dir->i_op->link(old_dentry, dir, new_dentry); 2119 - up(&old_dentry->d_inode->i_sem); 2120 if (!error) 2121 fsnotify_create(dir, new_dentry->d_name.name); 2122 return error; ··· 2157 error = vfs_link(old_nd.dentry, nd.dentry->d_inode, new_dentry); 2158 dput(new_dentry); 2159 } 2160 - up(&nd.dentry->d_inode->i_sem); 2161 out_release: 2162 path_release(&nd); 2163 out: ··· 2178 * sb->s_vfs_rename_sem. We might be more accurate, but that's another 2179 * story. 2180 * c) we have to lock _three_ objects - parents and victim (if it exists). 2181 - * And that - after we got ->i_sem on parents (until then we don't know 2182 * whether the target exists). Solution: try to be smart with locking 2183 * order for inodes. We rely on the fact that tree topology may change 2184 * only under ->s_vfs_rename_sem _and_ that parent of the object we ··· 2195 * stuff into VFS), but the former is not going away. Solution: the same 2196 * trick as in rmdir(). 2197 * e) conversion from fhandle to dentry may come in the wrong moment - when 2198 - * we are removing the target. Solution: we will have to grab ->i_sem 2199 * in the fhandle_to_dentry code. 
[FIXME - current nfsfh.c relies on 2200 - * ->i_sem on parents, which works but leads to some truely excessive 2201 * locking]. 2202 */ 2203 static int vfs_rename_dir(struct inode *old_dir, struct dentry *old_dentry, ··· 2222 2223 target = new_dentry->d_inode; 2224 if (target) { 2225 - down(&target->i_sem); 2226 dentry_unhash(new_dentry); 2227 } 2228 if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry)) ··· 2232 if (target) { 2233 if (!error) 2234 target->i_flags |= S_DEAD; 2235 - up(&target->i_sem); 2236 if (d_unhashed(new_dentry)) 2237 d_rehash(new_dentry); 2238 dput(new_dentry); ··· 2255 dget(new_dentry); 2256 target = new_dentry->d_inode; 2257 if (target) 2258 - down(&target->i_sem); 2259 if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry)) 2260 error = -EBUSY; 2261 else ··· 2266 d_move(old_dentry, new_dentry); 2267 } 2268 if (target) 2269 - up(&target->i_sem); 2270 dput(new_dentry); 2271 return error; 2272 }
··· 438 struct dentry * result; 439 struct inode *dir = parent->d_inode; 440 441 + mutex_lock(&dir->i_mutex); 442 /* 443 * First re-do the cached lookup just in case it was created 444 * while we waited for the directory semaphore.. ··· 464 else 465 result = dentry; 466 } 467 + mutex_unlock(&dir->i_mutex); 468 return result; 469 } 470 ··· 472 * Uhhuh! Nasty case: the cache was re-populated while 473 * we waited on the semaphore. Need to revalidate. 474 */ 475 + mutex_unlock(&dir->i_mutex); 476 if (result->d_op && result->d_op->d_revalidate) { 477 if (!result->d_op->d_revalidate(result, nd) && !d_invalidate(result)) { 478 dput(result); ··· 1366 struct dentry *p; 1367 1368 if (p1 == p2) { 1369 + mutex_lock(&p1->d_inode->i_mutex); 1370 return NULL; 1371 } 1372 ··· 1374 1375 for (p = p1; p->d_parent != p; p = p->d_parent) { 1376 if (p->d_parent == p2) { 1377 + mutex_lock(&p2->d_inode->i_mutex); 1378 + mutex_lock(&p1->d_inode->i_mutex); 1379 return p; 1380 } 1381 } 1382 1383 for (p = p2; p->d_parent != p; p = p->d_parent) { 1384 if (p->d_parent == p1) { 1385 + mutex_lock(&p1->d_inode->i_mutex); 1386 + mutex_lock(&p2->d_inode->i_mutex); 1387 return p; 1388 } 1389 } 1390 1391 + mutex_lock(&p1->d_inode->i_mutex); 1392 + mutex_lock(&p2->d_inode->i_mutex); 1393 return NULL; 1394 } 1395 1396 void unlock_rename(struct dentry *p1, struct dentry *p2) 1397 { 1398 + mutex_unlock(&p1->d_inode->i_mutex); 1399 if (p1 != p2) { 1400 + mutex_unlock(&p2->d_inode->i_mutex); 1401 up(&p1->d_inode->i_sb->s_vfs_rename_sem); 1402 } 1403 } ··· 1563 1564 dir = nd->dentry; 1565 nd->flags &= ~LOOKUP_PARENT; 1566 + mutex_lock(&dir->d_inode->i_mutex); 1567 path.dentry = lookup_hash(nd); 1568 path.mnt = nd->mnt; 1569 1570 do_last: 1571 error = PTR_ERR(path.dentry); 1572 if (IS_ERR(path.dentry)) { 1573 + mutex_unlock(&dir->d_inode->i_mutex); 1574 goto exit; 1575 } 1576 ··· 1579 if (!IS_POSIXACL(dir->d_inode)) 1580 mode &= ~current->fs->umask; 1581 error = vfs_create(dir->d_inode, path.dentry, mode, nd); 1582 + mutex_unlock(&dir->d_inode->i_mutex); 1583 dput(nd->dentry); 1584 nd->dentry = path.dentry; 1585 if (error) ··· 1593 /* 1594 * It already exists. 1595 */ 1596 + mutex_unlock(&dir->d_inode->i_mutex); 1597 1598 error = -EEXIST; 1599 if (flag & O_EXCL) ··· 1665 goto exit; 1666 } 1667 dir = nd->dentry; 1668 + mutex_lock(&dir->d_inode->i_mutex); 1669 path.dentry = lookup_hash(nd); 1670 path.mnt = nd->mnt; 1671 __putname(nd->last.name); ··· 1680 * Simple function to lookup and return a dentry and create it 1681 * if it doesn't exist. Is SMP-safe. 1682 * 1683 + * Returns with nd->dentry->d_inode->i_mutex locked. 1684 */ 1685 struct dentry *lookup_create(struct nameidata *nd, int is_dir) 1686 { 1687 struct dentry *dentry = ERR_PTR(-EEXIST); 1688 1689 + mutex_lock(&nd->dentry->d_inode->i_mutex); 1690 /* 1691 * Yucky last component or no last component at all? 
1692 * (foo/., foo/.., /////) ··· 1784 } 1785 dput(dentry); 1786 } 1787 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 1788 path_release(&nd); 1789 out: 1790 putname(tmp); ··· 1836 error = vfs_mkdir(nd.dentry->d_inode, dentry, mode); 1837 dput(dentry); 1838 } 1839 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 1840 path_release(&nd); 1841 out: 1842 putname(tmp); ··· 1885 1886 DQUOT_INIT(dir); 1887 1888 + mutex_lock(&dentry->d_inode->i_mutex); 1889 dentry_unhash(dentry); 1890 if (d_mountpoint(dentry)) 1891 error = -EBUSY; ··· 1897 dentry->d_inode->i_flags |= S_DEAD; 1898 } 1899 } 1900 + mutex_unlock(&dentry->d_inode->i_mutex); 1901 if (!error) { 1902 d_delete(dentry); 1903 } ··· 1932 error = -EBUSY; 1933 goto exit1; 1934 } 1935 + mutex_lock(&nd.dentry->d_inode->i_mutex); 1936 dentry = lookup_hash(&nd); 1937 error = PTR_ERR(dentry); 1938 if (!IS_ERR(dentry)) { 1939 error = vfs_rmdir(nd.dentry->d_inode, dentry); 1940 dput(dentry); 1941 } 1942 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 1943 exit1: 1944 path_release(&nd); 1945 exit: ··· 1959 1960 DQUOT_INIT(dir); 1961 1962 + mutex_lock(&dentry->d_inode->i_mutex); 1963 if (d_mountpoint(dentry)) 1964 error = -EBUSY; 1965 else { ··· 1967 if (!error) 1968 error = dir->i_op->unlink(dir, dentry); 1969 } 1970 + mutex_unlock(&dentry->d_inode->i_mutex); 1971 1972 /* We don't d_delete() NFS sillyrenamed files--they still exist. */ 1973 if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) { ··· 1979 1980 /* 1981 * Make sure that the actual truncation of the file will occur outside its 1982 + * directory's i_mutex. Truncate can take a long time if there is a lot of 1983 * writeout happening, and we don't want to prevent access to the directory 1984 * while waiting on the I/O. 1985 */ ··· 2001 error = -EISDIR; 2002 if (nd.last_type != LAST_NORM) 2003 goto exit1; 2004 + mutex_lock(&nd.dentry->d_inode->i_mutex); 2005 dentry = lookup_hash(&nd); 2006 error = PTR_ERR(dentry); 2007 if (!IS_ERR(dentry)) { ··· 2015 exit2: 2016 dput(dentry); 2017 } 2018 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 2019 if (inode) 2020 iput(inode); /* truncate the inode here */ 2021 exit1: ··· 2075 error = vfs_symlink(nd.dentry->d_inode, dentry, from, S_IALLUGO); 2076 dput(dentry); 2077 } 2078 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 2079 path_release(&nd); 2080 out: 2081 putname(to); ··· 2113 if (error) 2114 return error; 2115 2116 + mutex_lock(&old_dentry->d_inode->i_mutex); 2117 DQUOT_INIT(dir); 2118 error = dir->i_op->link(old_dentry, dir, new_dentry); 2119 + mutex_unlock(&old_dentry->d_inode->i_mutex); 2120 if (!error) 2121 fsnotify_create(dir, new_dentry->d_name.name); 2122 return error; ··· 2157 error = vfs_link(old_nd.dentry, nd.dentry->d_inode, new_dentry); 2158 dput(new_dentry); 2159 } 2160 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 2161 out_release: 2162 path_release(&nd); 2163 out: ··· 2178 * sb->s_vfs_rename_sem. We might be more accurate, but that's another 2179 * story. 2180 * c) we have to lock _three_ objects - parents and victim (if it exists). 2181 + * And that - after we got ->i_mutex on parents (until then we don't know 2182 * whether the target exists). Solution: try to be smart with locking 2183 * order for inodes. We rely on the fact that tree topology may change 2184 * only under ->s_vfs_rename_sem _and_ that parent of the object we ··· 2195 * stuff into VFS), but the former is not going away. Solution: the same 2196 * trick as in rmdir(). 
2197 * e) conversion from fhandle to dentry may come in the wrong moment - when 2198 + * we are removing the target. Solution: we will have to grab ->i_mutex 2199 * in the fhandle_to_dentry code. [FIXME - current nfsfh.c relies on 2200 + * ->i_mutex on parents, which works but leads to some truly excessive 2201 * locking]. 2202 */ 2203 static int vfs_rename_dir(struct inode *old_dir, struct dentry *old_dentry, ··· 2222 2223 target = new_dentry->d_inode; 2224 if (target) { 2225 + mutex_lock(&target->i_mutex); 2226 dentry_unhash(new_dentry); 2227 } 2228 if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry)) ··· 2232 if (target) { 2233 if (!error) 2234 target->i_flags |= S_DEAD; 2235 + mutex_unlock(&target->i_mutex); 2236 if (d_unhashed(new_dentry)) 2237 d_rehash(new_dentry); 2238 dput(new_dentry); ··· 2255 dget(new_dentry); 2256 target = new_dentry->d_inode; 2257 if (target) 2258 + mutex_lock(&target->i_mutex); 2259 if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry)) 2260 error = -EBUSY; 2261 else ··· 2266 d_move(old_dentry, new_dentry); 2267 } 2268 if (target) 2269 + mutex_unlock(&target->i_mutex); 2270 dput(new_dentry); 2271 return error; 2272 }
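lock_rename() above is the canonical ABBA-avoidance pattern: when one directory is an ancestor of the other, the ancestor's i_mutex is taken first; unrelated pairs are already serialized by s_vfs_rename_sem, so a fixed order is then safe. The core of the idea in isolation (function names hypothetical):

#include <linux/fs.h>
#include <linux/mutex.h>

/* Caller guarantees "outer" is an ancestor of "inner", or that some
 * higher-level lock (s_vfs_rename_sem above) already serializes us. */
static void lock_dir_pair(struct dentry *outer, struct dentry *inner)
{
        mutex_lock(&outer->d_inode->i_mutex);
        mutex_lock(&inner->d_inode->i_mutex);
}

static void unlock_dir_pair(struct dentry *outer, struct dentry *inner)
{
        mutex_unlock(&inner->d_inode->i_mutex);
        mutex_unlock(&outer->d_inode->i_mutex);
}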
+6 -6
fs/namespace.c
··· 814 return -ENOTDIR; 815 816 err = -ENOENT; 817 - down(&nd->dentry->d_inode->i_sem); 818 if (IS_DEADDIR(nd->dentry->d_inode)) 819 goto out_unlock; 820 ··· 826 if (IS_ROOT(nd->dentry) || !d_unhashed(nd->dentry)) 827 err = attach_recursive_mnt(mnt, nd, NULL); 828 out_unlock: 829 - up(&nd->dentry->d_inode->i_sem); 830 if (!err) 831 security_sb_post_addmount(mnt, nd); 832 return err; ··· 962 goto out; 963 964 err = -ENOENT; 965 - down(&nd->dentry->d_inode->i_sem); 966 if (IS_DEADDIR(nd->dentry->d_inode)) 967 goto out1; 968 ··· 1004 list_del_init(&old_nd.mnt->mnt_expire); 1005 spin_unlock(&vfsmount_lock); 1006 out1: 1007 - up(&nd->dentry->d_inode->i_sem); 1008 out: 1009 up_write(&namespace_sem); 1010 if (!err) ··· 1573 user_nd.dentry = dget(current->fs->root); 1574 read_unlock(&current->fs->lock); 1575 down_write(&namespace_sem); 1576 - down(&old_nd.dentry->d_inode->i_sem); 1577 error = -EINVAL; 1578 if (IS_MNT_SHARED(old_nd.mnt) || 1579 IS_MNT_SHARED(new_nd.mnt->mnt_parent) || ··· 1626 path_release(&root_parent); 1627 path_release(&parent_nd); 1628 out2: 1629 - up(&old_nd.dentry->d_inode->i_sem); 1630 up_write(&namespace_sem); 1631 path_release(&user_nd); 1632 path_release(&old_nd);
··· 814 return -ENOTDIR; 815 816 err = -ENOENT; 817 + mutex_lock(&nd->dentry->d_inode->i_mutex); 818 if (IS_DEADDIR(nd->dentry->d_inode)) 819 goto out_unlock; 820 ··· 826 if (IS_ROOT(nd->dentry) || !d_unhashed(nd->dentry)) 827 err = attach_recursive_mnt(mnt, nd, NULL); 828 out_unlock: 829 + mutex_unlock(&nd->dentry->d_inode->i_mutex); 830 if (!err) 831 security_sb_post_addmount(mnt, nd); 832 return err; ··· 962 goto out; 963 964 err = -ENOENT; 965 + mutex_lock(&nd->dentry->d_inode->i_mutex); 966 if (IS_DEADDIR(nd->dentry->d_inode)) 967 goto out1; 968 ··· 1004 list_del_init(&old_nd.mnt->mnt_expire); 1005 spin_unlock(&vfsmount_lock); 1006 out1: 1007 + mutex_unlock(&nd->dentry->d_inode->i_mutex); 1008 out: 1009 up_write(&namespace_sem); 1010 if (!err) ··· 1573 user_nd.dentry = dget(current->fs->root); 1574 read_unlock(&current->fs->lock); 1575 down_write(&namespace_sem); 1576 + mutex_lock(&old_nd.dentry->d_inode->i_mutex); 1577 error = -EINVAL; 1578 if (IS_MNT_SHARED(old_nd.mnt) || 1579 IS_MNT_SHARED(new_nd.mnt->mnt_parent) || ··· 1626 path_release(&root_parent); 1627 path_release(&parent_nd); 1628 out2: 1629 + mutex_unlock(&old_nd.dentry->d_inode->i_mutex); 1630 up_write(&namespace_sem); 1631 path_release(&user_nd); 1632 path_release(&old_nd);
+5 -5
fs/nfs/dir.c
··· 194 spin_unlock(&inode->i_lock); 195 /* Ensure consistent page alignment of the data. 196 * Note: assumes we have exclusive access to this mapping either 197 - * through inode->i_sem or some other mechanism. 198 */ 199 if (page->index == 0) 200 invalidate_inode_pages2_range(inode->i_mapping, PAGE_CACHE_SIZE, -1); ··· 573 574 loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin) 575 { 576 - down(&filp->f_dentry->d_inode->i_sem); 577 switch (origin) { 578 case 1: 579 offset += filp->f_pos; ··· 589 ((struct nfs_open_context *)filp->private_data)->dir_cookie = 0; 590 } 591 out: 592 - up(&filp->f_dentry->d_inode->i_sem); 593 return offset; 594 } 595 ··· 1001 openflags &= ~(O_CREAT|O_TRUNC); 1002 1003 /* 1004 - * Note: we're not holding inode->i_sem and so may be racing with 1005 * operations that change the directory. We therefore save the 1006 * change attribute *before* we do the RPC call. 1007 */ ··· 1051 return dentry; 1052 if (!desc->plus || !(entry->fattr->valid & NFS_ATTR_FATTR)) 1053 return NULL; 1054 - /* Note: caller is already holding the dir->i_sem! */ 1055 dentry = d_alloc(parent, &name); 1056 if (dentry == NULL) 1057 return NULL;
··· 194 spin_unlock(&inode->i_lock); 195 /* Ensure consistent page alignment of the data. 196 * Note: assumes we have exclusive access to this mapping either 197 + * through inode->i_mutex or some other mechanism. 198 */ 199 if (page->index == 0) 200 invalidate_inode_pages2_range(inode->i_mapping, PAGE_CACHE_SIZE, -1); ··· 573 574 loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin) 575 { 576 + mutex_lock(&filp->f_dentry->d_inode->i_mutex); 577 switch (origin) { 578 case 1: 579 offset += filp->f_pos; ··· 589 ((struct nfs_open_context *)filp->private_data)->dir_cookie = 0; 590 } 591 out: 592 + mutex_unlock(&filp->f_dentry->d_inode->i_mutex); 593 return offset; 594 } 595 ··· 1001 openflags &= ~(O_CREAT|O_TRUNC); 1002 1003 /* 1004 + * Note: we're not holding inode->i_mutex and so may be racing with 1005 * operations that change the directory. We therefore save the 1006 * change attribute *before* we do the RPC call. 1007 */ ··· 1051 return dentry; 1052 if (!desc->plus || !(entry->fattr->valid & NFS_ATTR_FATTR)) 1053 return NULL; 1054 + /* Note: caller is already holding the dir->i_mutex! */ 1055 dentry = d_alloc(parent, &name); 1056 if (dentry == NULL) 1057 return NULL;
+10 -10
fs/nfsd/nfs4recover.c
··· 121 static void 122 nfsd4_sync_rec_dir(void) 123 { 124 - down(&rec_dir.dentry->d_inode->i_sem); 125 nfsd_sync_dir(rec_dir.dentry); 126 - up(&rec_dir.dentry->d_inode->i_sem); 127 } 128 129 int ··· 143 nfs4_save_user(&uid, &gid); 144 145 /* lock the parent */ 146 - down(&rec_dir.dentry->d_inode->i_sem); 147 148 dentry = lookup_one_len(dname, rec_dir.dentry, HEXDIR_LEN-1); 149 if (IS_ERR(dentry)) { ··· 159 out_put: 160 dput(dentry); 161 out_unlock: 162 - up(&rec_dir.dentry->d_inode->i_sem); 163 if (status == 0) { 164 clp->cl_firststate = 1; 165 nfsd4_sync_rec_dir(); ··· 259 printk("nfsd4: non-file found in client recovery directory\n"); 260 return -EINVAL; 261 } 262 - down(&dir->d_inode->i_sem); 263 status = vfs_unlink(dir->d_inode, dentry); 264 - up(&dir->d_inode->i_sem); 265 return status; 266 } 267 ··· 274 * any regular files anyway, just in case the directory was created by 275 * a kernel from the future.... */ 276 nfsd4_list_rec_dir(dentry, nfsd4_remove_clid_file); 277 - down(&dir->d_inode->i_sem); 278 status = vfs_rmdir(dir->d_inode, dentry); 279 - up(&dir->d_inode->i_sem); 280 return status; 281 } 282 ··· 288 289 dprintk("NFSD: nfsd4_unlink_clid_dir. name %.*s\n", namlen, name); 290 291 - down(&rec_dir.dentry->d_inode->i_sem); 292 dentry = lookup_one_len(name, rec_dir.dentry, namlen); 293 - up(&rec_dir.dentry->d_inode->i_sem); 294 if (IS_ERR(dentry)) { 295 status = PTR_ERR(dentry); 296 return status;
··· 121 static void 122 nfsd4_sync_rec_dir(void) 123 { 124 + mutex_lock(&rec_dir.dentry->d_inode->i_mutex); 125 nfsd_sync_dir(rec_dir.dentry); 126 + mutex_unlock(&rec_dir.dentry->d_inode->i_mutex); 127 } 128 129 int ··· 143 nfs4_save_user(&uid, &gid); 144 145 /* lock the parent */ 146 + mutex_lock(&rec_dir.dentry->d_inode->i_mutex); 147 148 dentry = lookup_one_len(dname, rec_dir.dentry, HEXDIR_LEN-1); 149 if (IS_ERR(dentry)) { ··· 159 out_put: 160 dput(dentry); 161 out_unlock: 162 + mutex_unlock(&rec_dir.dentry->d_inode->i_mutex); 163 if (status == 0) { 164 clp->cl_firststate = 1; 165 nfsd4_sync_rec_dir(); ··· 259 printk("nfsd4: non-file found in client recovery directory\n"); 260 return -EINVAL; 261 } 262 + mutex_lock(&dir->d_inode->i_mutex); 263 status = vfs_unlink(dir->d_inode, dentry); 264 + mutex_unlock(&dir->d_inode->i_mutex); 265 return status; 266 } 267 ··· 274 * any regular files anyway, just in case the directory was created by 275 * a kernel from the future.... */ 276 nfsd4_list_rec_dir(dentry, nfsd4_remove_clid_file); 277 + mutex_lock(&dir->d_inode->i_mutex); 278 status = vfs_rmdir(dir->d_inode, dentry); 279 + mutex_unlock(&dir->d_inode->i_mutex); 280 return status; 281 } 282 ··· 288 289 dprintk("NFSD: nfsd4_unlink_clid_dir. name %.*s\n", namlen, name); 290 291 + mutex_lock(&rec_dir.dentry->d_inode->i_mutex); 292 dentry = lookup_one_len(name, rec_dir.dentry, namlen); 293 + mutex_unlock(&rec_dir.dentry->d_inode->i_mutex); 294 if (IS_ERR(dentry)) { 295 status = PTR_ERR(dentry); 296 return status;
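The recovery-directory helpers take the parent's i_mutex around vfs_unlink()/vfs_rmdir(), which is the locking those VFS entry points expect from their callers. The call shape in isolation (unlink_in_dir is a hypothetical name):

#include <linux/fs.h>
#include <linux/mutex.h>

static int unlink_in_dir(struct dentry *dir, struct dentry *victim)
{
        int status;

        mutex_lock(&dir->d_inode->i_mutex);
        status = vfs_unlink(dir->d_inode, victim);
        mutex_unlock(&dir->d_inode->i_mutex);
        return status;
}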
+6 -6
fs/nfsd/vfs.c
··· 390 391 error = -EOPNOTSUPP; 392 if (inode->i_op && inode->i_op->setxattr) { 393 - down(&inode->i_sem); 394 security_inode_setxattr(dentry, key, buf, len, 0); 395 error = inode->i_op->setxattr(dentry, key, buf, len, 0); 396 if (!error) 397 security_inode_post_setxattr(dentry, key, buf, len, 0); 398 - up(&inode->i_sem); 399 } 400 out: 401 kfree(buf); ··· 739 int err; 740 struct inode *inode = filp->f_dentry->d_inode; 741 dprintk("nfsd: sync file %s\n", filp->f_dentry->d_name.name); 742 - down(&inode->i_sem); 743 err=nfsd_dosync(filp, filp->f_dentry, filp->f_op); 744 - up(&inode->i_sem); 745 746 return err; 747 } ··· 885 struct iattr ia; 886 ia.ia_valid = ATTR_KILL_SUID | ATTR_KILL_SGID; 887 888 - down(&dentry->d_inode->i_sem); 889 notify_change(dentry, &ia); 890 - up(&dentry->d_inode->i_sem); 891 } 892 893 static inline int
··· 390 391 error = -EOPNOTSUPP; 392 if (inode->i_op && inode->i_op->setxattr) { 393 + mutex_lock(&inode->i_mutex); 394 security_inode_setxattr(dentry, key, buf, len, 0); 395 error = inode->i_op->setxattr(dentry, key, buf, len, 0); 396 if (!error) 397 security_inode_post_setxattr(dentry, key, buf, len, 0); 398 + mutex_unlock(&inode->i_mutex); 399 } 400 out: 401 kfree(buf); ··· 739 int err; 740 struct inode *inode = filp->f_dentry->d_inode; 741 dprintk("nfsd: sync file %s\n", filp->f_dentry->d_name.name); 742 + mutex_lock(&inode->i_mutex); 743 err=nfsd_dosync(filp, filp->f_dentry, filp->f_op); 744 + mutex_unlock(&inode->i_mutex); 745 746 return err; 747 } ··· 885 struct iattr ia; 886 ia.ia_valid = ATTR_KILL_SUID | ATTR_KILL_SGID; 887 888 + mutex_lock(&dentry->d_inode->i_mutex); 889 notify_change(dentry, &ia); 890 + mutex_unlock(&dentry->d_inode->i_mutex); 891 } 892 893 static inline int
+2 -2
fs/ntfs/attrib.c
··· 1532 * NOTE to self: No changes in the attribute list are required to move from 1533 * a resident to a non-resident attribute. 1534 * 1535 - * Locking: - The caller must hold i_sem on the inode. 1536 */ 1537 int ntfs_attr_make_non_resident(ntfs_inode *ni, const u32 data_size) 1538 { ··· 1728 /* 1729 * This needs to be last since the address space operations ->readpage 1730 * and ->writepage can run concurrently with us as they are not 1731 - * serialized on i_sem. Note, we are not allowed to fail once we flip 1732 * this switch, which is another reason to do this last. 1733 */ 1734 NInoSetNonResident(ni);
··· 1532 * NOTE to self: No changes in the attribute list are required to move from 1533 * a resident to a non-resident attribute. 1534 * 1535 + * Locking: - The caller must hold i_mutex on the inode. 1536 */ 1537 int ntfs_attr_make_non_resident(ntfs_inode *ni, const u32 data_size) 1538 { ··· 1728 /* 1729 * This needs to be last since the address space operations ->readpage 1730 * and ->writepage can run concurrently with us as they are not 1731 + * serialized on i_mutex. Note, we are not allowed to fail once we flip 1732 * this switch, which is another reason to do this last. 1733 */ 1734 NInoSetNonResident(ni);
+4 -4
fs/ntfs/dir.c
··· 69 * work but we don't care for how quickly one can access them. This also fixes 70 * the dcache aliasing issues. 71 * 72 - * Locking: - Caller must hold i_sem on the directory. 73 * - Each page cache page in the index allocation mapping must be 74 * locked whilst being accessed otherwise we may find a corrupt 75 * page due to it being under ->writepage at the moment which ··· 1085 * While this will return the names in random order this doesn't matter for 1086 * ->readdir but OTOH results in a faster ->readdir. 1087 * 1088 - * VFS calls ->readdir without BKL but with i_sem held. This protects the VFS 1089 * parts (e.g. ->f_pos and ->i_size, and it also protects against directory 1090 * modifications). 1091 * 1092 - * Locking: - Caller must hold i_sem on the directory. 1093 * - Each page cache page in the index allocation mapping must be 1094 * locked whilst being accessed otherwise we may find a corrupt 1095 * page due to it being under ->writepage at the moment which ··· 1520 * Note: In the past @filp could be NULL so we ignore it as we don't need it 1521 * anyway. 1522 * 1523 - * Locking: Caller must hold i_sem on the inode. 1524 * 1525 * TODO: We should probably also write all attribute/index inodes associated 1526 * with this inode but since we have no simple way of getting to them we ignore
··· 69 * work but we don't care for how quickly one can access them. This also fixes 70 * the dcache aliasing issues. 71 * 72 + * Locking: - Caller must hold i_mutex on the directory. 73 * - Each page cache page in the index allocation mapping must be 74 * locked whilst being accessed otherwise we may find a corrupt 75 * page due to it being under ->writepage at the moment which ··· 1085 * While this will return the names in random order this doesn't matter for 1086 * ->readdir but OTOH results in a faster ->readdir. 1087 * 1088 + * VFS calls ->readdir without BKL but with i_mutex held. This protects the VFS 1089 * parts (e.g. ->f_pos and ->i_size, and it also protects against directory 1090 * modifications). 1091 * 1092 + * Locking: - Caller must hold i_mutex on the directory. 1093 * - Each page cache page in the index allocation mapping must be 1094 * locked whilst being accessed otherwise we may find a corrupt 1095 * page due to it being under ->writepage at the moment which ··· 1520 * Note: In the past @filp could be NULL so we ignore it as we don't need it 1521 * anyway. 1522 * 1523 + * Locking: Caller must hold i_mutex on the inode. 1524 * 1525 * TODO: We should probably also write all attribute/index inodes associated 1526 * with this inode but since we have no simple way of getting to them we ignore
+9 -9
fs/ntfs/file.c
··· 106 * this is the case, the necessary zeroing will also have happened and that all
107 * metadata is self-consistent.
108 *
109 - * Locking: i_sem on the vfs inode corresponding to the ntfs inode @ni must be
110 * held by the caller.
111 */
112 static int ntfs_attr_extend_initialized(ntfs_inode *ni, const s64 new_init_size,
··· 473 * @bytes: number of bytes to be written
474 *
475 * This is called for non-resident attributes from ntfs_file_buffered_write()
476 - * with i_sem held on the inode (@pages[0]->mapping->host). There are
477 * @nr_pages pages in @pages which are locked but not kmap()ped. The source
478 * data has not yet been copied into the @pages.
479 *
··· 1637 * @pos: byte position in file at which the write begins
1638 * @bytes: number of bytes to be written
1639 *
1640 - * This is called from ntfs_file_buffered_write() with i_sem held on the inode
1641 * (@pages[0]->mapping->host). There are @nr_pages pages in @pages which are
1642 * locked but not kmap()ped. The source data has already been copied into the
1643 * @page. ntfs_prepare_pages_for_non_resident_write() has been called before
··· 1814 /**
1815 * ntfs_file_buffered_write -
1816 *
1817 - * Locking: The vfs is holding ->i_sem on the inode.
1818 */
1819 static ssize_t ntfs_file_buffered_write(struct kiocb *iocb,
1820 const struct iovec *iov, unsigned long nr_segs,
··· 2196
2197 BUG_ON(iocb->ki_pos != pos);
2198
2199 - down(&inode->i_sem);
2200 ret = ntfs_file_aio_write_nolock(iocb, &local_iov, 1, &iocb->ki_pos);
2201 - up(&inode->i_sem);
2202 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) {
2203 int err = sync_page_range(inode, mapping, pos, ret);
2204 if (err < 0)
··· 2221 struct kiocb kiocb;
2222 ssize_t ret;
2223
2224 - down(&inode->i_sem);
2225 init_sync_kiocb(&kiocb, file);
2226 ret = ntfs_file_aio_write_nolock(&kiocb, iov, nr_segs, ppos);
2227 if (ret == -EIOCBQUEUED)
2228 ret = wait_on_sync_kiocb(&kiocb);
2229 - up(&inode->i_sem);
2230 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) {
2231 int err = sync_page_range(inode, mapping, *ppos - ret, ret);
2232 if (err < 0)
··· 2269 * Note: In the past @filp could be NULL so we ignore it as we don't need it
2270 * anyway.
2271 *
2272 - * Locking: Caller must hold i_sem on the inode.
2273 *
2274 * TODO: We should probably also write all attribute/index inodes associated
2275 * with this inode but since we have no simple way of getting to them we ignore
··· 106 * this is the case, the necessary zeroing will also have happened and that all
107 * metadata is self-consistent.
108 *
109 + * Locking: i_mutex on the vfs inode corresponding to the ntfs inode @ni must be
110 * held by the caller.
111 */
112 static int ntfs_attr_extend_initialized(ntfs_inode *ni, const s64 new_init_size,
··· 473 * @bytes: number of bytes to be written
474 *
475 * This is called for non-resident attributes from ntfs_file_buffered_write()
476 + * with i_mutex held on the inode (@pages[0]->mapping->host). There are
477 * @nr_pages pages in @pages which are locked but not kmap()ped. The source
478 * data has not yet been copied into the @pages.
479 *
··· 1637 * @pos: byte position in file at which the write begins
1638 * @bytes: number of bytes to be written
1639 *
1640 + * This is called from ntfs_file_buffered_write() with i_mutex held on the inode
1641 * (@pages[0]->mapping->host). There are @nr_pages pages in @pages which are
1642 * locked but not kmap()ped. The source data has already been copied into the
1643 * @page. ntfs_prepare_pages_for_non_resident_write() has been called before
··· 1814 /**
1815 * ntfs_file_buffered_write -
1816 *
1817 + * Locking: The vfs is holding ->i_mutex on the inode.
1818 */
1819 static ssize_t ntfs_file_buffered_write(struct kiocb *iocb,
1820 const struct iovec *iov, unsigned long nr_segs,
··· 2196
2197 BUG_ON(iocb->ki_pos != pos);
2198
2199 + mutex_lock(&inode->i_mutex);
2200 ret = ntfs_file_aio_write_nolock(iocb, &local_iov, 1, &iocb->ki_pos);
2201 + mutex_unlock(&inode->i_mutex);
2202 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) {
2203 int err = sync_page_range(inode, mapping, pos, ret);
2204 if (err < 0)
··· 2221 struct kiocb kiocb;
2222 ssize_t ret;
2223
2224 + mutex_lock(&inode->i_mutex);
2225 init_sync_kiocb(&kiocb, file);
2226 ret = ntfs_file_aio_write_nolock(&kiocb, iov, nr_segs, ppos);
2227 if (ret == -EIOCBQUEUED)
2228 ret = wait_on_sync_kiocb(&kiocb);
2229 + mutex_unlock(&inode->i_mutex);
2230 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) {
2231 int err = sync_page_range(inode, mapping, *ppos - ret, ret);
2232 if (err < 0)
··· 2269 * Note: In the past @filp could be NULL so we ignore it as we don't need it
2270 * anyway.
2271 *
2272 + * Locking: Caller must hold i_mutex on the inode.
2273 *
2274 * TODO: We should probably also write all attribute/index inodes associated
2275 * with this inode but since we have no simple way of getting to them we ignore
+3 -3
fs/ntfs/index.c
··· 32 * Allocate a new index context, initialize it with @idx_ni and return it. 33 * Return NULL if allocation failed. 34 * 35 - * Locking: Caller must hold i_sem on the index inode. 36 */ 37 ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni) 38 { ··· 50 * 51 * Release the index context @ictx, releasing all associated resources. 52 * 53 - * Locking: Caller must hold i_sem on the index inode. 54 */ 55 void ntfs_index_ctx_put(ntfs_index_context *ictx) 56 { ··· 106 * or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to 107 * ensure that the changes are written to disk. 108 * 109 - * Locking: - Caller must hold i_sem on the index inode. 110 * - Each page cache page in the index allocation mapping must be 111 * locked whilst being accessed otherwise we may find a corrupt 112 * page due to it being under ->writepage at the moment which
··· 32 * Allocate a new index context, initialize it with @idx_ni and return it. 33 * Return NULL if allocation failed. 34 * 35 + * Locking: Caller must hold i_mutex on the index inode. 36 */ 37 ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni) 38 { ··· 50 * 51 * Release the index context @ictx, releasing all associated resources. 52 * 53 + * Locking: Caller must hold i_mutex on the index inode. 54 */ 55 void ntfs_index_ctx_put(ntfs_index_context *ictx) 56 { ··· 106 * or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to 107 * ensure that the changes are written to disk. 108 * 109 + * Locking: - Caller must hold i_mutex on the index inode. 110 * - Each page cache page in the index allocation mapping must be 111 * locked whilst being accessed otherwise we may find a corrupt 112 * page due to it being under ->writepage at the moment which
+4 -4
fs/ntfs/inode.c
··· 2125 ntfs_inode *ni = NTFS_I(vi); 2126 if (NInoIndexAllocPresent(ni)) { 2127 struct inode *bvi = NULL; 2128 - down(&vi->i_sem); 2129 if (atomic_read(&vi->i_count) == 2) { 2130 bvi = ni->itype.index.bmp_ino; 2131 if (bvi) 2132 ni->itype.index.bmp_ino = NULL; 2133 } 2134 - up(&vi->i_sem); 2135 if (bvi) 2136 iput(bvi); 2137 } ··· 2311 * 2312 * Returns 0 on success or -errno on error. 2313 * 2314 - * Called with ->i_sem held. In all but one case ->i_alloc_sem is held for 2315 * writing. The only case in the kernel where ->i_alloc_sem is not held is 2316 * mm/filemap.c::generic_file_buffered_write() where vmtruncate() is called 2317 * with the current i_size as the offset. The analogous place in NTFS is in ··· 2831 * We also abort all changes of user, group, and mode as we do not implement 2832 * the NTFS ACLs yet. 2833 * 2834 - * Called with ->i_sem held. For the ATTR_SIZE (i.e. ->truncate) case, also 2835 * called with ->i_alloc_sem held for writing. 2836 * 2837 * Basically this is a copy of generic notify_change() and inode_setattr()
··· 2125 ntfs_inode *ni = NTFS_I(vi); 2126 if (NInoIndexAllocPresent(ni)) { 2127 struct inode *bvi = NULL; 2128 + mutex_lock(&vi->i_mutex); 2129 if (atomic_read(&vi->i_count) == 2) { 2130 bvi = ni->itype.index.bmp_ino; 2131 if (bvi) 2132 ni->itype.index.bmp_ino = NULL; 2133 } 2134 + mutex_unlock(&vi->i_mutex); 2135 if (bvi) 2136 iput(bvi); 2137 } ··· 2311 * 2312 * Returns 0 on success or -errno on error. 2313 * 2314 + * Called with ->i_mutex held. In all but one case ->i_alloc_sem is held for 2315 * writing. The only case in the kernel where ->i_alloc_sem is not held is 2316 * mm/filemap.c::generic_file_buffered_write() where vmtruncate() is called 2317 * with the current i_size as the offset. The analogous place in NTFS is in ··· 2831 * We also abort all changes of user, group, and mode as we do not implement 2832 * the NTFS ACLs yet. 2833 * 2834 + * Called with ->i_mutex held. For the ATTR_SIZE (i.e. ->truncate) case, also 2835 * called with ->i_alloc_sem held for writing. 2836 * 2837 * Basically this is a copy of generic notify_change() and inode_setattr()
+3 -3
fs/ntfs/namei.c
··· 96 * name. We then convert the name to the current NLS code page, and proceed 97 * searching for a dentry with this name, etc, as in case 2), above. 98 * 99 - * Locking: Caller must hold i_sem on the directory. 100 */ 101 static struct dentry *ntfs_lookup(struct inode *dir_ino, struct dentry *dent, 102 struct nameidata *nd) ··· 254 nls_name.hash = full_name_hash(nls_name.name, nls_name.len); 255 256 /* 257 - * Note: No need for dent->d_lock lock as i_sem is held on the 258 * parent inode. 259 */ 260 ··· 374 * The code is based on the ext3 ->get_parent() implementation found in 375 * fs/ext3/namei.c::ext3_get_parent(). 376 * 377 - * Note: ntfs_get_parent() is called with @child_dent->d_inode->i_sem down. 378 * 379 * Return the dentry of the parent directory on success or the error code on 380 * error (IS_ERR() is true).
··· 96 * name. We then convert the name to the current NLS code page, and proceed 97 * searching for a dentry with this name, etc, as in case 2), above. 98 * 99 + * Locking: Caller must hold i_mutex on the directory. 100 */ 101 static struct dentry *ntfs_lookup(struct inode *dir_ino, struct dentry *dent, 102 struct nameidata *nd) ··· 254 nls_name.hash = full_name_hash(nls_name.name, nls_name.len); 255 256 /* 257 + * Note: No need for dent->d_lock lock as i_mutex is held on the 258 * parent inode. 259 */ 260 ··· 374 * The code is based on the ext3 ->get_parent() implementation found in 375 * fs/ext3/namei.c::ext3_get_parent(). 376 * 377 + * Note: ntfs_get_parent() is called with @child_dent->d_inode->i_mutex down. 378 * 379 * Return the dentry of the parent directory on success or the error code on 380 * error (IS_ERR() is true).
+3 -3
fs/ntfs/quota.c
··· 48 ntfs_error(vol->sb, "Quota inodes are not open."); 49 return FALSE; 50 } 51 - down(&vol->quota_q_ino->i_sem); 52 ictx = ntfs_index_ctx_get(NTFS_I(vol->quota_q_ino)); 53 if (!ictx) { 54 ntfs_error(vol->sb, "Failed to get index context."); ··· 98 ntfs_index_entry_mark_dirty(ictx); 99 set_done: 100 ntfs_index_ctx_put(ictx); 101 - up(&vol->quota_q_ino->i_sem); 102 /* 103 * We set the flag so we do not try to mark the quotas out of date 104 * again on remount. ··· 110 err_out: 111 if (ictx) 112 ntfs_index_ctx_put(ictx); 113 - up(&vol->quota_q_ino->i_sem); 114 return FALSE; 115 } 116
··· 48 ntfs_error(vol->sb, "Quota inodes are not open."); 49 return FALSE; 50 } 51 + mutex_lock(&vol->quota_q_ino->i_mutex); 52 ictx = ntfs_index_ctx_get(NTFS_I(vol->quota_q_ino)); 53 if (!ictx) { 54 ntfs_error(vol->sb, "Failed to get index context."); ··· 98 ntfs_index_entry_mark_dirty(ictx); 99 set_done: 100 ntfs_index_ctx_put(ictx); 101 + mutex_unlock(&vol->quota_q_ino->i_mutex); 102 /* 103 * We set the flag so we do not try to mark the quotas out of date 104 * again on remount. ··· 110 err_out: 111 if (ictx) 112 ntfs_index_ctx_put(ictx); 113 + mutex_unlock(&vol->quota_q_ino->i_mutex); 114 return FALSE; 115 } 116
+8 -8
fs/ntfs/super.c
··· 1213 * Find the inode number for the hibernation file by looking up the 1214 * filename hiberfil.sys in the root directory. 1215 */ 1216 - down(&vol->root_ino->i_sem); 1217 mref = ntfs_lookup_inode_by_name(NTFS_I(vol->root_ino), hiberfil, 12, 1218 &name); 1219 - up(&vol->root_ino->i_sem); 1220 if (IS_ERR_MREF(mref)) { 1221 ret = MREF_ERR(mref); 1222 /* If the file does not exist, Windows is not hibernated. */ ··· 1307 * Find the inode number for the quota file by looking up the filename 1308 * $Quota in the extended system files directory $Extend. 1309 */ 1310 - down(&vol->extend_ino->i_sem); 1311 mref = ntfs_lookup_inode_by_name(NTFS_I(vol->extend_ino), Quota, 6, 1312 &name); 1313 - up(&vol->extend_ino->i_sem); 1314 if (IS_ERR_MREF(mref)) { 1315 /* 1316 * If the file does not exist, quotas are disabled and have ··· 1390 * Find the inode number for the transaction log file by looking up the 1391 * filename $UsnJrnl in the extended system files directory $Extend. 1392 */ 1393 - down(&vol->extend_ino->i_sem); 1394 mref = ntfs_lookup_inode_by_name(NTFS_I(vol->extend_ino), UsnJrnl, 8, 1395 &name); 1396 - up(&vol->extend_ino->i_sem); 1397 if (IS_ERR_MREF(mref)) { 1398 /* 1399 * If the file does not exist, transaction logging is disabled, ··· 2312 if (!list_empty(&sb->s_dirty)) { 2313 const char *s1, *s2; 2314 2315 - down(&vol->mft_ino->i_sem); 2316 truncate_inode_pages(vol->mft_ino->i_mapping, 0); 2317 - up(&vol->mft_ino->i_sem); 2318 write_inode_now(vol->mft_ino, 1); 2319 if (!list_empty(&sb->s_dirty)) { 2320 static const char *_s1 = "inodes";
··· 1213 * Find the inode number for the hibernation file by looking up the 1214 * filename hiberfil.sys in the root directory. 1215 */ 1216 + mutex_lock(&vol->root_ino->i_mutex); 1217 mref = ntfs_lookup_inode_by_name(NTFS_I(vol->root_ino), hiberfil, 12, 1218 &name); 1219 + mutex_unlock(&vol->root_ino->i_mutex); 1220 if (IS_ERR_MREF(mref)) { 1221 ret = MREF_ERR(mref); 1222 /* If the file does not exist, Windows is not hibernated. */ ··· 1307 * Find the inode number for the quota file by looking up the filename 1308 * $Quota in the extended system files directory $Extend. 1309 */ 1310 + mutex_lock(&vol->extend_ino->i_mutex); 1311 mref = ntfs_lookup_inode_by_name(NTFS_I(vol->extend_ino), Quota, 6, 1312 &name); 1313 + mutex_unlock(&vol->extend_ino->i_mutex); 1314 if (IS_ERR_MREF(mref)) { 1315 /* 1316 * If the file does not exist, quotas are disabled and have ··· 1390 * Find the inode number for the transaction log file by looking up the 1391 * filename $UsnJrnl in the extended system files directory $Extend. 1392 */ 1393 + mutex_lock(&vol->extend_ino->i_mutex); 1394 mref = ntfs_lookup_inode_by_name(NTFS_I(vol->extend_ino), UsnJrnl, 8, 1395 &name); 1396 + mutex_unlock(&vol->extend_ino->i_mutex); 1397 if (IS_ERR_MREF(mref)) { 1398 /* 1399 * If the file does not exist, transaction logging is disabled, ··· 2312 if (!list_empty(&sb->s_dirty)) { 2313 const char *s1, *s2; 2314 2315 + mutex_lock(&vol->mft_ino->i_mutex); 2316 truncate_inode_pages(vol->mft_ino->i_mapping, 0); 2317 + mutex_unlock(&vol->mft_ino->i_mutex); 2318 write_inode_now(vol->mft_ino, 1); 2319 if (!list_empty(&sb->s_dirty)) { 2320 static const char *_s1 = "inodes";
+12 -12
fs/ocfs2/alloc.c
··· 966 mlog_entry("start_blk = %"MLFu64", num_clusters = %u\n", start_blk, 967 num_clusters); 968 969 - BUG_ON(!down_trylock(&tl_inode->i_sem)); 970 971 start_cluster = ocfs2_blocks_to_clusters(osb->sb, start_blk); 972 ··· 1108 return status; 1109 } 1110 1111 - /* Expects you to already be holding tl_inode->i_sem */ 1112 static int __ocfs2_flush_truncate_log(struct ocfs2_super *osb) 1113 { 1114 int status; ··· 1123 1124 mlog_entry_void(); 1125 1126 - BUG_ON(!down_trylock(&tl_inode->i_sem)); 1127 1128 di = (struct ocfs2_dinode *) tl_bh->b_data; 1129 tl = &di->id2.i_dealloc; ··· 1198 int status; 1199 struct inode *tl_inode = osb->osb_tl_inode; 1200 1201 - down(&tl_inode->i_sem); 1202 status = __ocfs2_flush_truncate_log(osb); 1203 - up(&tl_inode->i_sem); 1204 1205 return status; 1206 } ··· 1363 mlog(0, "cleanup %u records from %"MLFu64"\n", num_recs, 1364 tl_copy->i_blkno); 1365 1366 - down(&tl_inode->i_sem); 1367 for(i = 0; i < num_recs; i++) { 1368 if (ocfs2_truncate_log_needs_flush(osb)) { 1369 status = __ocfs2_flush_truncate_log(osb); ··· 1395 } 1396 1397 bail_up: 1398 - up(&tl_inode->i_sem); 1399 1400 mlog_exit(status); 1401 return status; ··· 1840 1841 mlog(0, "clusters_to_del = %u in this pass\n", clusters_to_del); 1842 1843 - down(&tl_inode->i_sem); 1844 tl_sem = 1; 1845 /* ocfs2_truncate_log_needs_flush guarantees us at least one 1846 * record is free for use. If there isn't any, we flush to get ··· 1875 goto bail; 1876 } 1877 1878 - up(&tl_inode->i_sem); 1879 tl_sem = 0; 1880 1881 ocfs2_commit_trans(handle); ··· 1890 ocfs2_schedule_truncate_log_flush(osb, 1); 1891 1892 if (tl_sem) 1893 - up(&tl_inode->i_sem); 1894 1895 if (handle) 1896 ocfs2_commit_trans(handle); ··· 1994 goto bail; 1995 } 1996 1997 - down(&ext_alloc_inode->i_sem); 1998 (*tc)->tc_ext_alloc_inode = ext_alloc_inode; 1999 2000 status = ocfs2_meta_lock(ext_alloc_inode, ··· 2026 if (tc->tc_ext_alloc_locked) 2027 ocfs2_meta_unlock(tc->tc_ext_alloc_inode, 1); 2028 2029 - up(&tc->tc_ext_alloc_inode->i_sem); 2030 iput(tc->tc_ext_alloc_inode); 2031 } 2032
··· 966 mlog_entry("start_blk = %"MLFu64", num_clusters = %u\n", start_blk, 967 num_clusters); 968 969 + BUG_ON(mutex_trylock(&tl_inode->i_mutex)); 970 971 start_cluster = ocfs2_blocks_to_clusters(osb->sb, start_blk); 972 ··· 1108 return status; 1109 } 1110 1111 + /* Expects you to already be holding tl_inode->i_mutex */ 1112 static int __ocfs2_flush_truncate_log(struct ocfs2_super *osb) 1113 { 1114 int status; ··· 1123 1124 mlog_entry_void(); 1125 1126 + BUG_ON(mutex_trylock(&tl_inode->i_mutex)); 1127 1128 di = (struct ocfs2_dinode *) tl_bh->b_data; 1129 tl = &di->id2.i_dealloc; ··· 1198 int status; 1199 struct inode *tl_inode = osb->osb_tl_inode; 1200 1201 + mutex_lock(&tl_inode->i_mutex); 1202 status = __ocfs2_flush_truncate_log(osb); 1203 + mutex_unlock(&tl_inode->i_mutex); 1204 1205 return status; 1206 } ··· 1363 mlog(0, "cleanup %u records from %"MLFu64"\n", num_recs, 1364 tl_copy->i_blkno); 1365 1366 + mutex_lock(&tl_inode->i_mutex); 1367 for(i = 0; i < num_recs; i++) { 1368 if (ocfs2_truncate_log_needs_flush(osb)) { 1369 status = __ocfs2_flush_truncate_log(osb); ··· 1395 } 1396 1397 bail_up: 1398 + mutex_unlock(&tl_inode->i_mutex); 1399 1400 mlog_exit(status); 1401 return status; ··· 1840 1841 mlog(0, "clusters_to_del = %u in this pass\n", clusters_to_del); 1842 1843 + mutex_lock(&tl_inode->i_mutex); 1844 tl_sem = 1; 1845 /* ocfs2_truncate_log_needs_flush guarantees us at least one 1846 * record is free for use. If there isn't any, we flush to get ··· 1875 goto bail; 1876 } 1877 1878 + mutex_unlock(&tl_inode->i_mutex); 1879 tl_sem = 0; 1880 1881 ocfs2_commit_trans(handle); ··· 1890 ocfs2_schedule_truncate_log_flush(osb, 1); 1891 1892 if (tl_sem) 1893 + mutex_unlock(&tl_inode->i_mutex); 1894 1895 if (handle) 1896 ocfs2_commit_trans(handle); ··· 1994 goto bail; 1995 } 1996 1997 + mutex_lock(&ext_alloc_inode->i_mutex); 1998 (*tc)->tc_ext_alloc_inode = ext_alloc_inode; 1999 2000 status = ocfs2_meta_lock(ext_alloc_inode, ··· 2026 if (tc->tc_ext_alloc_locked) 2027 ocfs2_meta_unlock(tc->tc_ext_alloc_inode, 1); 2028 2029 + mutex_unlock(&tc->tc_ext_alloc_inode->i_mutex); 2030 iput(tc->tc_ext_alloc_inode); 2031 } 2032
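Note the dropped negation in the two BUG_ON() conversions above: down_trylock() returns 0 when it takes the semaphore and nonzero when the semaphore is busy, while mutex_trylock() returns 1 on success and 0 when the mutex is busy. An assertion that the lock is already held therefore flips its test. A standalone sketch of the two conventions (illustrative only):

#include <linux/mutex.h>
#include <asm/semaphore.h>

static void trylock_conventions(struct semaphore *sem, struct mutex *lock)
{
	/* Semaphore: a zero return means we now hold it. */
	if (down_trylock(sem) == 0)
		up(sem);		/* got it by accident; hand it back */

	/* Mutex: a nonzero return means we now hold it. */
	if (mutex_trylock(lock))
		mutex_unlock(lock);	/* got it by accident; hand it back */
}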
+1 -1
fs/ocfs2/cluster/nodemanager.c
··· 653 struct config_group *o2hb_group = NULL, *ret = NULL; 654 void *defs = NULL; 655 656 - /* this runs under the parent dir's i_sem; there can be only 657 * one caller in here at a time */ 658 if (o2nm_single_cluster) 659 goto out; /* ENOSPC */
··· 653 struct config_group *o2hb_group = NULL, *ret = NULL; 654 void *defs = NULL; 655 656 + /* this runs under the parent dir's i_mutex; there can be only 657 * one caller in here at a time */ 658 if (o2nm_single_cluster) 659 goto out; /* ENOSPC */
+2 -2
fs/ocfs2/dir.c
··· 202 } 203 204 /* 205 - * NOTE: this should always be called with parent dir i_sem taken. 206 */ 207 int ocfs2_find_files_on_disk(const char *name, 208 int namelen, ··· 245 * Return 0 if the name does not exist 246 * Return -EEXIST if the directory contains the name 247 * 248 - * Callers should have i_sem + a cluster lock on dir 249 */ 250 int ocfs2_check_dir_for_entry(struct inode *dir, 251 const char *name,
··· 202 } 203 204 /* 205 + * NOTE: this should always be called with parent dir i_mutex taken. 206 */ 207 int ocfs2_find_files_on_disk(const char *name, 208 int namelen, ··· 245 * Return 0 if the name does not exist 246 * Return -EEXIST if the directory contains the name 247 * 248 + * Callers should have i_mutex + a cluster lock on dir 249 */ 250 int ocfs2_check_dir_for_entry(struct inode *dir, 251 const char *name,
+4 -4
fs/ocfs2/file.c
··· 492 }
493
494 /* blocks people in read/write from reading our allocation
495 - * until we're done changing it. We depend on i_sem to block
496 * other extend/truncate calls while we're here. Ordering wrt
497 * start_trans is important here -- always do it before! */
498 down_write(&OCFS2_I(inode)->ip_alloc_sem);
··· 958 filp->f_flags &= ~O_DIRECT;
959 #endif
960
961 - down(&inode->i_sem);
962 - /* to match setattr's i_sem -> i_alloc_sem -> rw_lock ordering */
963 if (filp->f_flags & O_DIRECT) {
964 have_alloc_sem = 1;
965 down_read(&inode->i_alloc_sem);
··· 1123 up_read(&inode->i_alloc_sem);
1124 if (rw_level != -1)
1125 ocfs2_rw_unlock(inode, rw_level);
1126 - up(&inode->i_sem);
1127
1128 mlog_exit(ret);
1129 return ret;
··· 492 }
493
494 /* blocks people in read/write from reading our allocation
495 + * until we're done changing it. We depend on i_mutex to block
496 * other extend/truncate calls while we're here. Ordering wrt
497 * start_trans is important here -- always do it before! */
498 down_write(&OCFS2_I(inode)->ip_alloc_sem);
··· 958 filp->f_flags &= ~O_DIRECT;
959 #endif
960
961 + mutex_lock(&inode->i_mutex);
962 + /* to match setattr's i_mutex -> i_alloc_sem -> rw_lock ordering */
963 if (filp->f_flags & O_DIRECT) {
964 have_alloc_sem = 1;
965 down_read(&inode->i_alloc_sem);
··· 1123 up_read(&inode->i_alloc_sem);
1124 if (rw_level != -1)
1125 ocfs2_rw_unlock(inode, rw_level);
1126 + mutex_unlock(&inode->i_mutex);
1127
1128 mlog_exit(ret);
1129 return ret;
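The rewritten comment above records a lock-ordering rule, i_mutex -> i_alloc_sem -> rw_lock: every path needing more than one of these locks must take them in that order, or two tasks can deadlock ABBA-style. A sketch of honoring the first two levels of the order (illustrative, not from the patch):

#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>

static void ordered_inode_locking(struct inode *inode)
{
	/* Documented order: i_mutex first, then i_alloc_sem. */
	mutex_lock(&inode->i_mutex);
	down_read(&inode->i_alloc_sem);

	/* ... work that needs both locks ... */

	/* Release in reverse order of acquisition. */
	up_read(&inode->i_alloc_sem);
	mutex_unlock(&inode->i_mutex);
}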
+6 -6
fs/ocfs2/inode.c
··· 485 goto bail; 486 } 487 488 - down(&inode_alloc_inode->i_sem); 489 status = ocfs2_meta_lock(inode_alloc_inode, NULL, &inode_alloc_bh, 1); 490 if (status < 0) { 491 - up(&inode_alloc_inode->i_sem); 492 493 mlog_errno(status); 494 goto bail; ··· 536 ocfs2_commit_trans(handle); 537 bail_unlock: 538 ocfs2_meta_unlock(inode_alloc_inode, 1); 539 - up(&inode_alloc_inode->i_sem); 540 brelse(inode_alloc_bh); 541 bail: 542 iput(inode_alloc_inode); ··· 567 /* Lock the orphan dir. The lock will be held for the entire 568 * delete_inode operation. We do this now to avoid races with 569 * recovery completion on other nodes. */ 570 - down(&orphan_dir_inode->i_sem); 571 status = ocfs2_meta_lock(orphan_dir_inode, NULL, &orphan_dir_bh, 1); 572 if (status < 0) { 573 - up(&orphan_dir_inode->i_sem); 574 575 mlog_errno(status); 576 goto bail; ··· 593 594 bail_unlock_dir: 595 ocfs2_meta_unlock(orphan_dir_inode, 1); 596 - up(&orphan_dir_inode->i_sem); 597 brelse(orphan_dir_bh); 598 bail: 599 iput(orphan_dir_inode);
··· 485 goto bail; 486 } 487 488 + mutex_lock(&inode_alloc_inode->i_mutex); 489 status = ocfs2_meta_lock(inode_alloc_inode, NULL, &inode_alloc_bh, 1); 490 if (status < 0) { 491 + mutex_unlock(&inode_alloc_inode->i_mutex); 492 493 mlog_errno(status); 494 goto bail; ··· 536 ocfs2_commit_trans(handle); 537 bail_unlock: 538 ocfs2_meta_unlock(inode_alloc_inode, 1); 539 + mutex_unlock(&inode_alloc_inode->i_mutex); 540 brelse(inode_alloc_bh); 541 bail: 542 iput(inode_alloc_inode); ··· 567 /* Lock the orphan dir. The lock will be held for the entire 568 * delete_inode operation. We do this now to avoid races with 569 * recovery completion on other nodes. */ 570 + mutex_lock(&orphan_dir_inode->i_mutex); 571 status = ocfs2_meta_lock(orphan_dir_inode, NULL, &orphan_dir_bh, 1); 572 if (status < 0) { 573 + mutex_unlock(&orphan_dir_inode->i_mutex); 574 575 mlog_errno(status); 576 goto bail; ··· 593 594 bail_unlock_dir: 595 ocfs2_meta_unlock(orphan_dir_inode, 1); 596 + mutex_unlock(&orphan_dir_inode->i_mutex); 597 brelse(orphan_dir_bh); 598 bail: 599 iput(orphan_dir_inode);
+7 -7
fs/ocfs2/journal.c
··· 216 atomic_inc(&inode->i_count); 217 218 /* we're obviously changing it... */ 219 - down(&inode->i_sem); 220 221 /* sanity check */ 222 BUG_ON(OCFS2_I(inode)->ip_handle); ··· 241 OCFS2_I(inode)->ip_handle = NULL; 242 list_del_init(&OCFS2_I(inode)->ip_handle_list); 243 244 - up(&inode->i_sem); 245 iput(inode); 246 } 247 } ··· 1433 goto out; 1434 } 1435 1436 - down(&orphan_dir_inode->i_sem); 1437 status = ocfs2_meta_lock(orphan_dir_inode, NULL, NULL, 0); 1438 if (status < 0) { 1439 - up(&orphan_dir_inode->i_sem); 1440 mlog_errno(status); 1441 goto out; 1442 } ··· 1451 if (!bh) 1452 status = -EINVAL; 1453 if (status < 0) { 1454 - up(&orphan_dir_inode->i_sem); 1455 if (bh) 1456 brelse(bh); 1457 mlog_errno(status); ··· 1465 1466 if (!ocfs2_check_dir_entry(orphan_dir_inode, 1467 de, bh, local)) { 1468 - up(&orphan_dir_inode->i_sem); 1469 status = -EINVAL; 1470 mlog_errno(status); 1471 brelse(bh); ··· 1509 } 1510 brelse(bh); 1511 } 1512 - up(&orphan_dir_inode->i_sem); 1513 1514 ocfs2_meta_unlock(orphan_dir_inode, 0); 1515 have_disk_lock = 0;
··· 216 atomic_inc(&inode->i_count); 217 218 /* we're obviously changing it... */ 219 + mutex_lock(&inode->i_mutex); 220 221 /* sanity check */ 222 BUG_ON(OCFS2_I(inode)->ip_handle); ··· 241 OCFS2_I(inode)->ip_handle = NULL; 242 list_del_init(&OCFS2_I(inode)->ip_handle_list); 243 244 + mutex_unlock(&inode->i_mutex); 245 iput(inode); 246 } 247 } ··· 1433 goto out; 1434 } 1435 1436 + mutex_lock(&orphan_dir_inode->i_mutex); 1437 status = ocfs2_meta_lock(orphan_dir_inode, NULL, NULL, 0); 1438 if (status < 0) { 1439 + mutex_unlock(&orphan_dir_inode->i_mutex); 1440 mlog_errno(status); 1441 goto out; 1442 } ··· 1451 if (!bh) 1452 status = -EINVAL; 1453 if (status < 0) { 1454 + mutex_unlock(&orphan_dir_inode->i_mutex); 1455 if (bh) 1456 brelse(bh); 1457 mlog_errno(status); ··· 1465 1466 if (!ocfs2_check_dir_entry(orphan_dir_inode, 1467 de, bh, local)) { 1468 + mutex_unlock(&orphan_dir_inode->i_mutex); 1469 status = -EINVAL; 1470 mlog_errno(status); 1471 brelse(bh); ··· 1509 } 1510 brelse(bh); 1511 } 1512 + mutex_unlock(&orphan_dir_inode->i_mutex); 1513 1514 ocfs2_meta_unlock(orphan_dir_inode, 0); 1515 have_disk_lock = 0;
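The recovery code above has many early exits, and every one of them must drop orphan_dir_inode->i_mutex exactly once; the hunks unlock at each failure site individually. An alternative shape often used in the kernel funnels all failures through a single unlock label; a minimal sketch with an invented helper:

#include <linux/fs.h>
#include <linux/mutex.h>

/* Hypothetical: scan a directory under i_mutex, with one unlock
 * site shared by every error path. */
static int scan_dir_locked(struct inode *dir)
{
	int status = -ENOENT;

	mutex_lock(&dir->i_mutex);
	if (dir->i_size == 0)		/* illustrative failure case */
		goto out_unlock;

	/* ... walk directory blocks, 'goto out_unlock' on error ... */
	status = 0;

out_unlock:
	mutex_unlock(&dir->i_mutex);
	return status;
}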
+3 -3
fs/ocfs2/localalloc.c
··· 334 goto bail; 335 } 336 337 - down(&inode->i_sem); 338 339 status = ocfs2_read_block(osb, OCFS2_I(inode)->ip_blkno, 340 &alloc_bh, 0, inode); ··· 367 brelse(alloc_bh); 368 369 if (inode) { 370 - up(&inode->i_sem); 371 iput(inode); 372 } 373 ··· 446 447 /* 448 * make sure we've got at least bitswanted contiguous bits in the 449 - * local alloc. You lose them when you drop i_sem. 450 * 451 * We will add ourselves to the transaction passed in, but may start 452 * our own in order to shift windows.
··· 334 goto bail; 335 } 336 337 + mutex_lock(&inode->i_mutex); 338 339 status = ocfs2_read_block(osb, OCFS2_I(inode)->ip_blkno, 340 &alloc_bh, 0, inode); ··· 367 brelse(alloc_bh); 368 369 if (inode) { 370 + mutex_unlock(&inode->i_mutex); 371 iput(inode); 372 } 373 ··· 446 447 /* 448 * make sure we've got at least bitswanted contiguous bits in the 449 + * local alloc. You lose them when you drop i_mutex. 450 * 451 * We will add ourselves to the transaction passed in, but may start 452 * our own in order to shift windows.
+1 -1
fs/ocfs2/super.c
··· 169 */ 170 static void ocfs2_write_super(struct super_block *sb) 171 { 172 - if (down_trylock(&sb->s_lock) == 0) 173 BUG(); 174 sb->s_dirt = 0; 175 }
··· 169 */ 170 static void ocfs2_write_super(struct super_block *sb) 171 { 172 + if (mutex_trylock(&sb->s_lock) != 0) 173 BUG(); 174 sb->s_dirt = 0; 175 }
+12 -12
fs/open.c
··· 211 newattrs.ia_valid |= ATTR_FILE; 212 } 213 214 - down(&dentry->d_inode->i_sem); 215 err = notify_change(dentry, &newattrs); 216 - up(&dentry->d_inode->i_sem); 217 return err; 218 } 219 ··· 398 (error = vfs_permission(&nd, MAY_WRITE)) != 0) 399 goto dput_and_out; 400 } 401 - down(&inode->i_sem); 402 error = notify_change(nd.dentry, &newattrs); 403 - up(&inode->i_sem); 404 dput_and_out: 405 path_release(&nd); 406 out: ··· 451 (error = vfs_permission(&nd, MAY_WRITE)) != 0) 452 goto dput_and_out; 453 } 454 - down(&inode->i_sem); 455 error = notify_change(nd.dentry, &newattrs); 456 - up(&inode->i_sem); 457 dput_and_out: 458 path_release(&nd); 459 out: ··· 620 err = -EPERM; 621 if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) 622 goto out_putf; 623 - down(&inode->i_sem); 624 if (mode == (mode_t) -1) 625 mode = inode->i_mode; 626 newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO); 627 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 628 err = notify_change(dentry, &newattrs); 629 - up(&inode->i_sem); 630 631 out_putf: 632 fput(file); ··· 654 if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) 655 goto dput_and_out; 656 657 - down(&inode->i_sem); 658 if (mode == (mode_t) -1) 659 mode = inode->i_mode; 660 newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO); 661 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 662 error = notify_change(nd.dentry, &newattrs); 663 - up(&inode->i_sem); 664 665 dput_and_out: 666 path_release(&nd); ··· 696 } 697 if (!S_ISDIR(inode->i_mode)) 698 newattrs.ia_valid |= ATTR_KILL_SUID|ATTR_KILL_SGID; 699 - down(&inode->i_sem); 700 error = notify_change(dentry, &newattrs); 701 - up(&inode->i_sem); 702 out: 703 return error; 704 }
··· 211 newattrs.ia_valid |= ATTR_FILE; 212 } 213 214 + mutex_lock(&dentry->d_inode->i_mutex); 215 err = notify_change(dentry, &newattrs); 216 + mutex_unlock(&dentry->d_inode->i_mutex); 217 return err; 218 } 219 ··· 398 (error = vfs_permission(&nd, MAY_WRITE)) != 0) 399 goto dput_and_out; 400 } 401 + mutex_lock(&inode->i_mutex); 402 error = notify_change(nd.dentry, &newattrs); 403 + mutex_unlock(&inode->i_mutex); 404 dput_and_out: 405 path_release(&nd); 406 out: ··· 451 (error = vfs_permission(&nd, MAY_WRITE)) != 0) 452 goto dput_and_out; 453 } 454 + mutex_lock(&inode->i_mutex); 455 error = notify_change(nd.dentry, &newattrs); 456 + mutex_unlock(&inode->i_mutex); 457 dput_and_out: 458 path_release(&nd); 459 out: ··· 620 err = -EPERM; 621 if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) 622 goto out_putf; 623 + mutex_lock(&inode->i_mutex); 624 if (mode == (mode_t) -1) 625 mode = inode->i_mode; 626 newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO); 627 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 628 err = notify_change(dentry, &newattrs); 629 + mutex_unlock(&inode->i_mutex); 630 631 out_putf: 632 fput(file); ··· 654 if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) 655 goto dput_and_out; 656 657 + mutex_lock(&inode->i_mutex); 658 if (mode == (mode_t) -1) 659 mode = inode->i_mode; 660 newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO); 661 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 662 error = notify_change(nd.dentry, &newattrs); 663 + mutex_unlock(&inode->i_mutex); 664 665 dput_and_out: 666 path_release(&nd); ··· 696 } 697 if (!S_ISDIR(inode->i_mode)) 698 newattrs.ia_valid |= ATTR_KILL_SUID|ATTR_KILL_SGID; 699 + mutex_lock(&inode->i_mutex); 700 error = notify_change(dentry, &newattrs); 701 + mutex_unlock(&inode->i_mutex); 702 out: 703 return error; 704 }
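Every fs/open.c hunk above enforces the same VFS rule: notify_change() must be called with the target inode's i_mutex held. Factored into one place, the contract looks roughly like this (the wrapper is illustrative, not something the patch introduces):

#include <linux/fs.h>
#include <linux/mutex.h>

/* Hypothetical wrapper showing the contract the hunks above follow. */
static int notify_change_locked(struct dentry *dentry, struct iattr *attrs)
{
	struct inode *inode = dentry->d_inode;
	int err;

	mutex_lock(&inode->i_mutex);
	err = notify_change(dentry, attrs);
	mutex_unlock(&inode->i_mutex);
	return err;
}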
+22 -22
fs/pipe.c
··· 44 * is considered a noninteractive wait: 45 */ 46 prepare_to_wait(PIPE_WAIT(*inode), &wait, TASK_INTERRUPTIBLE|TASK_NONINTERACTIVE); 47 - up(PIPE_SEM(*inode)); 48 schedule(); 49 finish_wait(PIPE_WAIT(*inode), &wait); 50 - down(PIPE_SEM(*inode)); 51 } 52 53 static inline int ··· 136 137 do_wakeup = 0; 138 ret = 0; 139 - down(PIPE_SEM(*inode)); 140 info = inode->i_pipe; 141 for (;;) { 142 int bufs = info->nrbufs; ··· 200 } 201 pipe_wait(inode); 202 } 203 - up(PIPE_SEM(*inode)); 204 /* Signal writers asynchronously that there is more room. */ 205 if (do_wakeup) { 206 wake_up_interruptible(PIPE_WAIT(*inode)); ··· 237 238 do_wakeup = 0; 239 ret = 0; 240 - down(PIPE_SEM(*inode)); 241 info = inode->i_pipe; 242 243 if (!PIPE_READERS(*inode)) { ··· 341 PIPE_WAITING_WRITERS(*inode)--; 342 } 343 out: 344 - up(PIPE_SEM(*inode)); 345 if (do_wakeup) { 346 wake_up_interruptible(PIPE_WAIT(*inode)); 347 kill_fasync(PIPE_FASYNC_READERS(*inode), SIGIO, POLL_IN); ··· 381 382 switch (cmd) { 383 case FIONREAD: 384 - down(PIPE_SEM(*inode)); 385 info = inode->i_pipe; 386 count = 0; 387 buf = info->curbuf; ··· 390 count += info->bufs[buf].len; 391 buf = (buf+1) & (PIPE_BUFFERS-1); 392 } 393 - up(PIPE_SEM(*inode)); 394 return put_user(count, (int __user *)arg); 395 default: 396 return -EINVAL; ··· 433 static int 434 pipe_release(struct inode *inode, int decr, int decw) 435 { 436 - down(PIPE_SEM(*inode)); 437 PIPE_READERS(*inode) -= decr; 438 PIPE_WRITERS(*inode) -= decw; 439 if (!PIPE_READERS(*inode) && !PIPE_WRITERS(*inode)) { ··· 443 kill_fasync(PIPE_FASYNC_READERS(*inode), SIGIO, POLL_IN); 444 kill_fasync(PIPE_FASYNC_WRITERS(*inode), SIGIO, POLL_OUT); 445 } 446 - up(PIPE_SEM(*inode)); 447 448 return 0; 449 } ··· 454 struct inode *inode = filp->f_dentry->d_inode; 455 int retval; 456 457 - down(PIPE_SEM(*inode)); 458 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_READERS(*inode)); 459 - up(PIPE_SEM(*inode)); 460 461 if (retval < 0) 462 return retval; ··· 471 struct inode *inode = filp->f_dentry->d_inode; 472 int retval; 473 474 - down(PIPE_SEM(*inode)); 475 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_WRITERS(*inode)); 476 - up(PIPE_SEM(*inode)); 477 478 if (retval < 0) 479 return retval; ··· 488 struct inode *inode = filp->f_dentry->d_inode; 489 int retval; 490 491 - down(PIPE_SEM(*inode)); 492 493 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_READERS(*inode)); 494 495 if (retval >= 0) 496 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_WRITERS(*inode)); 497 498 - up(PIPE_SEM(*inode)); 499 500 if (retval < 0) 501 return retval; ··· 534 { 535 /* We could have perhaps used atomic_t, but this and friends 536 below are the only places. So it doesn't seem worthwhile. */ 537 - down(PIPE_SEM(*inode)); 538 PIPE_READERS(*inode)++; 539 - up(PIPE_SEM(*inode)); 540 541 return 0; 542 } ··· 544 static int 545 pipe_write_open(struct inode *inode, struct file *filp) 546 { 547 - down(PIPE_SEM(*inode)); 548 PIPE_WRITERS(*inode)++; 549 - up(PIPE_SEM(*inode)); 550 551 return 0; 552 } ··· 554 static int 555 pipe_rdwr_open(struct inode *inode, struct file *filp) 556 { 557 - down(PIPE_SEM(*inode)); 558 if (filp->f_mode & FMODE_READ) 559 PIPE_READERS(*inode)++; 560 if (filp->f_mode & FMODE_WRITE) 561 PIPE_WRITERS(*inode)++; 562 - up(PIPE_SEM(*inode)); 563 564 return 0; 565 }
··· 44 * is considered a noninteractive wait: 45 */ 46 prepare_to_wait(PIPE_WAIT(*inode), &wait, TASK_INTERRUPTIBLE|TASK_NONINTERACTIVE); 47 + mutex_unlock(PIPE_MUTEX(*inode)); 48 schedule(); 49 finish_wait(PIPE_WAIT(*inode), &wait); 50 + mutex_lock(PIPE_MUTEX(*inode)); 51 } 52 53 static inline int ··· 136 137 do_wakeup = 0; 138 ret = 0; 139 + mutex_lock(PIPE_MUTEX(*inode)); 140 info = inode->i_pipe; 141 for (;;) { 142 int bufs = info->nrbufs; ··· 200 } 201 pipe_wait(inode); 202 } 203 + mutex_unlock(PIPE_MUTEX(*inode)); 204 /* Signal writers asynchronously that there is more room. */ 205 if (do_wakeup) { 206 wake_up_interruptible(PIPE_WAIT(*inode)); ··· 237 238 do_wakeup = 0; 239 ret = 0; 240 + mutex_lock(PIPE_MUTEX(*inode)); 241 info = inode->i_pipe; 242 243 if (!PIPE_READERS(*inode)) { ··· 341 PIPE_WAITING_WRITERS(*inode)--; 342 } 343 out: 344 + mutex_unlock(PIPE_MUTEX(*inode)); 345 if (do_wakeup) { 346 wake_up_interruptible(PIPE_WAIT(*inode)); 347 kill_fasync(PIPE_FASYNC_READERS(*inode), SIGIO, POLL_IN); ··· 381 382 switch (cmd) { 383 case FIONREAD: 384 + mutex_lock(PIPE_MUTEX(*inode)); 385 info = inode->i_pipe; 386 count = 0; 387 buf = info->curbuf; ··· 390 count += info->bufs[buf].len; 391 buf = (buf+1) & (PIPE_BUFFERS-1); 392 } 393 + mutex_unlock(PIPE_MUTEX(*inode)); 394 return put_user(count, (int __user *)arg); 395 default: 396 return -EINVAL; ··· 433 static int 434 pipe_release(struct inode *inode, int decr, int decw) 435 { 436 + mutex_lock(PIPE_MUTEX(*inode)); 437 PIPE_READERS(*inode) -= decr; 438 PIPE_WRITERS(*inode) -= decw; 439 if (!PIPE_READERS(*inode) && !PIPE_WRITERS(*inode)) { ··· 443 kill_fasync(PIPE_FASYNC_READERS(*inode), SIGIO, POLL_IN); 444 kill_fasync(PIPE_FASYNC_WRITERS(*inode), SIGIO, POLL_OUT); 445 } 446 + mutex_unlock(PIPE_MUTEX(*inode)); 447 448 return 0; 449 } ··· 454 struct inode *inode = filp->f_dentry->d_inode; 455 int retval; 456 457 + mutex_lock(PIPE_MUTEX(*inode)); 458 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_READERS(*inode)); 459 + mutex_unlock(PIPE_MUTEX(*inode)); 460 461 if (retval < 0) 462 return retval; ··· 471 struct inode *inode = filp->f_dentry->d_inode; 472 int retval; 473 474 + mutex_lock(PIPE_MUTEX(*inode)); 475 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_WRITERS(*inode)); 476 + mutex_unlock(PIPE_MUTEX(*inode)); 477 478 if (retval < 0) 479 return retval; ··· 488 struct inode *inode = filp->f_dentry->d_inode; 489 int retval; 490 491 + mutex_lock(PIPE_MUTEX(*inode)); 492 493 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_READERS(*inode)); 494 495 if (retval >= 0) 496 retval = fasync_helper(fd, filp, on, PIPE_FASYNC_WRITERS(*inode)); 497 498 + mutex_unlock(PIPE_MUTEX(*inode)); 499 500 if (retval < 0) 501 return retval; ··· 534 { 535 /* We could have perhaps used atomic_t, but this and friends 536 below are the only places. So it doesn't seem worthwhile. */ 537 + mutex_lock(PIPE_MUTEX(*inode)); 538 PIPE_READERS(*inode)++; 539 + mutex_unlock(PIPE_MUTEX(*inode)); 540 541 return 0; 542 } ··· 544 static int 545 pipe_write_open(struct inode *inode, struct file *filp) 546 { 547 + mutex_lock(PIPE_MUTEX(*inode)); 548 PIPE_WRITERS(*inode)++; 549 + mutex_unlock(PIPE_MUTEX(*inode)); 550 551 return 0; 552 } ··· 554 static int 555 pipe_rdwr_open(struct inode *inode, struct file *filp) 556 { 557 + mutex_lock(PIPE_MUTEX(*inode)); 558 if (filp->f_mode & FMODE_READ) 559 PIPE_READERS(*inode)++; 560 if (filp->f_mode & FMODE_WRITE) 561 PIPE_WRITERS(*inode)++; 562 + mutex_unlock(PIPE_MUTEX(*inode)); 563 564 return 0; 565 }
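pipe_wait(), in the first hunk above, is the one spot in this file where the conversion is more than renaming: the mutex has to be released across schedule() so the task at the other end of the pipe can make progress, then re-acquired, after which any protected state must be re-checked. A condensed sketch of that pattern with an invented helper name:

#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/mutex.h>

/* Sleep until woken, releasing 'lock' while asleep. The caller holds
 * 'lock' on entry and on return, and must revalidate any protected
 * state afterwards, since other tasks ran while we slept. */
static void sleep_dropping_lock(struct mutex *lock, wait_queue_head_t *wq)
{
	DEFINE_WAIT(wait);

	prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);
	mutex_unlock(lock);
	schedule();
	finish_wait(wq, &wait);
	mutex_lock(lock);
}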
+3 -3
fs/quota.c
··· 168 sync_blockdev(sb->s_bdev);
169
170 /* Now that everything is written we can discard the pagecache so
171 - * that userspace sees the changes. We need i_sem and so we could
172 * not do it inside dqonoff_sem. Moreover we need to be careful
173 * about races with quotaoff() (that is the reason why we have our own
174 * reference to the inode). */
··· 184 up(&sb_dqopt(sb)->dqonoff_sem);
185 for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
186 if (discard[cnt]) {
187 - down(&discard[cnt]->i_sem);
188 truncate_inode_pages(&discard[cnt]->i_data, 0);
189 - up(&discard[cnt]->i_sem);
190 iput(discard[cnt]);
191 }
192 }
··· 168 sync_blockdev(sb->s_bdev);
169
170 /* Now that everything is written we can discard the pagecache so
171 + * that userspace sees the changes. We need i_mutex and so we could
172 * not do it inside dqonoff_sem. Moreover we need to be careful
173 * about races with quotaoff() (that is the reason why we have our own
174 * reference to the inode). */
··· 184 up(&sb_dqopt(sb)->dqonoff_sem);
185 for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
186 if (discard[cnt]) {
187 + mutex_lock(&discard[cnt]->i_mutex);
188 truncate_inode_pages(&discard[cnt]->i_data, 0);
189 + mutex_unlock(&discard[cnt]->i_mutex);
190 iput(discard[cnt]);
191 }
192 }
+2 -2
fs/read_write.c
··· 33 long long retval; 34 struct inode *inode = file->f_mapping->host; 35 36 - down(&inode->i_sem); 37 switch (origin) { 38 case 2: 39 offset += inode->i_size; ··· 49 } 50 retval = offset; 51 } 52 - up(&inode->i_sem); 53 return retval; 54 } 55
··· 33 long long retval; 34 struct inode *inode = file->f_mapping->host; 35 36 + mutex_lock(&inode->i_mutex); 37 switch (origin) { 38 case 2: 39 offset += inode->i_size; ··· 49 } 50 retval = offset; 51 } 52 + mutex_unlock(&inode->i_mutex); 53 return retval; 54 } 55
+2 -2
fs/readdir.c
··· 30 if (res) 31 goto out; 32 33 - down(&inode->i_sem); 34 res = -ENOENT; 35 if (!IS_DEADDIR(inode)) { 36 res = file->f_op->readdir(file, buf, filler); 37 file_accessed(file); 38 } 39 - up(&inode->i_sem); 40 out: 41 return res; 42 }
··· 30 if (res) 31 goto out; 32 33 + mutex_lock(&inode->i_mutex); 34 res = -ENOENT; 35 if (!IS_DEADDIR(inode)) { 36 res = file->f_op->readdir(file, buf, filler); 37 file_accessed(file); 38 } 39 + mutex_unlock(&inode->i_mutex); 40 out: 41 return res; 42 }
+5 -5
fs/reiserfs/file.c
··· 49 } 50 51 reiserfs_write_lock(inode->i_sb); 52 - down(&inode->i_sem); 53 /* freeing preallocation only involves relogging blocks that 54 * are already in the current transaction. preallocation gets 55 * freed at the end of each transaction, so it is impossible for ··· 100 err = reiserfs_truncate_file(inode, 0); 101 } 102 out: 103 - up(&inode->i_sem); 104 reiserfs_write_unlock(inode->i_sb); 105 return err; 106 } ··· 1342 if (unlikely(!access_ok(VERIFY_READ, buf, count))) 1343 return -EFAULT; 1344 1345 - down(&inode->i_sem); // locks the entire file for just us 1346 1347 pos = *ppos; 1348 ··· 1532 generic_osync_inode(inode, file->f_mapping, 1533 OSYNC_METADATA | OSYNC_DATA); 1534 1535 - up(&inode->i_sem); 1536 reiserfs_async_progress_wait(inode->i_sb); 1537 return (already_written != 0) ? already_written : res; 1538 1539 out: 1540 - up(&inode->i_sem); // unlock the file on exit. 1541 return res; 1542 } 1543
··· 49 } 50 51 reiserfs_write_lock(inode->i_sb); 52 + mutex_lock(&inode->i_mutex); 53 /* freeing preallocation only involves relogging blocks that 54 * are already in the current transaction. preallocation gets 55 * freed at the end of each transaction, so it is impossible for ··· 100 err = reiserfs_truncate_file(inode, 0); 101 } 102 out: 103 + mutex_unlock(&inode->i_mutex); 104 reiserfs_write_unlock(inode->i_sb); 105 return err; 106 } ··· 1342 if (unlikely(!access_ok(VERIFY_READ, buf, count))) 1343 return -EFAULT; 1344 1345 + mutex_lock(&inode->i_mutex); // locks the entire file for just us 1346 1347 pos = *ppos; 1348 ··· 1532 generic_osync_inode(inode, file->f_mapping, 1533 OSYNC_METADATA | OSYNC_DATA); 1534 1535 + mutex_unlock(&inode->i_mutex); 1536 reiserfs_async_progress_wait(inode->i_sb); 1537 return (already_written != 0) ? already_written : res; 1538 1539 out: 1540 + mutex_unlock(&inode->i_mutex); // unlock the file on exit. 1541 return res; 1542 } 1543
+7 -7
fs/reiserfs/inode.c
··· 40 41 /* The = 0 happens when we abort creating a new inode for some reason like lack of space.. */ 42 if (!(inode->i_state & I_NEW) && INODE_PKEY(inode)->k_objectid != 0) { /* also handles bad_inode case */ 43 - down(&inode->i_sem); 44 45 reiserfs_delete_xattrs(inode); 46 47 if (journal_begin(&th, inode->i_sb, jbegin_count)) { 48 - up(&inode->i_sem); 49 goto out; 50 } 51 reiserfs_update_inode_transaction(inode); ··· 59 DQUOT_FREE_INODE(inode); 60 61 if (journal_end(&th, inode->i_sb, jbegin_count)) { 62 - up(&inode->i_sem); 63 goto out; 64 } 65 66 - up(&inode->i_sem); 67 68 /* check return value from reiserfs_delete_object after 69 * ending the transaction ··· 551 552 /* we don't have to make sure the conversion did not happen while 553 ** we were locking the page because anyone that could convert 554 - ** must first take i_sem. 555 ** 556 ** We must fix the tail page for writing because it might have buffers 557 ** that are mapped, but have a block number of 0. This indicates tail ··· 586 BUG_ON(!th->t_trans_id); 587 588 #ifdef REISERFS_PREALLOCATE 589 - if (!(flags & GET_BLOCK_NO_ISEM)) { 590 return reiserfs_new_unf_blocknrs2(th, inode, allocated_block_nr, 591 path, block); 592 } ··· 2318 /* this is where we fill in holes in the file. */ 2319 if (use_get_block) { 2320 retval = reiserfs_get_block(inode, block, bh_result, 2321 - GET_BLOCK_CREATE | GET_BLOCK_NO_ISEM 2322 | GET_BLOCK_NO_DANGLE); 2323 if (!retval) { 2324 if (!buffer_mapped(bh_result)
··· 40 41 /* The = 0 happens when we abort creating a new inode for some reason like lack of space.. */ 42 if (!(inode->i_state & I_NEW) && INODE_PKEY(inode)->k_objectid != 0) { /* also handles bad_inode case */ 43 + mutex_lock(&inode->i_mutex); 44 45 reiserfs_delete_xattrs(inode); 46 47 if (journal_begin(&th, inode->i_sb, jbegin_count)) { 48 + mutex_unlock(&inode->i_mutex); 49 goto out; 50 } 51 reiserfs_update_inode_transaction(inode); ··· 59 DQUOT_FREE_INODE(inode); 60 61 if (journal_end(&th, inode->i_sb, jbegin_count)) { 62 + mutex_unlock(&inode->i_mutex); 63 goto out; 64 } 65 66 + mutex_unlock(&inode->i_mutex); 67 68 /* check return value from reiserfs_delete_object after 69 * ending the transaction ··· 551 552 /* we don't have to make sure the conversion did not happen while 553 ** we were locking the page because anyone that could convert 554 + ** must first take i_mutex. 555 ** 556 ** We must fix the tail page for writing because it might have buffers 557 ** that are mapped, but have a block number of 0. This indicates tail ··· 586 BUG_ON(!th->t_trans_id); 587 588 #ifdef REISERFS_PREALLOCATE 589 + if (!(flags & GET_BLOCK_NO_IMUX)) { 590 return reiserfs_new_unf_blocknrs2(th, inode, allocated_block_nr, 591 path, block); 592 } ··· 2318 /* this is where we fill in holes in the file. */ 2319 if (use_get_block) { 2320 retval = reiserfs_get_block(inode, block, bh_result, 2321 + GET_BLOCK_CREATE | GET_BLOCK_NO_IMUX 2322 | GET_BLOCK_NO_DANGLE); 2323 if (!retval) { 2324 if (!buffer_mapped(bh_result)
+2 -2
fs/reiserfs/ioctl.c
··· 120 /* we need to make sure nobody is changing the file size beneath 121 ** us 122 */ 123 - down(&inode->i_sem); 124 125 write_from = inode->i_size & (blocksize - 1); 126 /* if we are on a block boundary, we are already unpacked. */ ··· 156 page_cache_release(page); 157 158 out: 159 - up(&inode->i_sem); 160 reiserfs_write_unlock(inode->i_sb); 161 return retval; 162 }
··· 120 /* we need to make sure nobody is changing the file size beneath 121 ** us 122 */ 123 + mutex_lock(&inode->i_mutex); 124 125 write_from = inode->i_size & (blocksize - 1); 126 /* if we are on a block boundary, we are already unpacked. */ ··· 156 page_cache_release(page); 157 158 out: 159 + mutex_unlock(&inode->i_mutex); 160 reiserfs_write_unlock(inode->i_sb); 161 return retval; 162 }
+2 -2
fs/reiserfs/super.c
··· 2211 size_t towrite = len; 2212 struct buffer_head tmp_bh, *bh; 2213 2214 - down(&inode->i_sem); 2215 while (towrite > 0) { 2216 tocopy = sb->s_blocksize - offset < towrite ? 2217 sb->s_blocksize - offset : towrite; ··· 2250 inode->i_version++; 2251 inode->i_mtime = inode->i_ctime = CURRENT_TIME; 2252 mark_inode_dirty(inode); 2253 - up(&inode->i_sem); 2254 return len - towrite; 2255 } 2256
··· 2211 size_t towrite = len; 2212 struct buffer_head tmp_bh, *bh; 2213 2214 + mutex_lock(&inode->i_mutex); 2215 while (towrite > 0) { 2216 tocopy = sb->s_blocksize - offset < towrite ? 2217 sb->s_blocksize - offset : towrite; ··· 2250 inode->i_version++; 2251 inode->i_mtime = inode->i_ctime = CURRENT_TIME; 2252 mark_inode_dirty(inode); 2253 + mutex_unlock(&inode->i_mutex); 2254 return len - towrite; 2255 } 2256
+1 -1
fs/reiserfs/tail_conversion.c
··· 205 1) * p_s_sb->s_blocksize;
206 pos1 = pos;
207
208 - // we are protected by i_sem. The tail cannot disappear, nor
209 // can an append be done either;
210 // we are in truncate or packing tail in file_release
211
··· 205 1) * p_s_sb->s_blocksize;
206 pos1 = pos;
207
208 + // we are protected by i_mutex. The tail cannot disappear, nor
209 // can an append be done either;
210 // we are in truncate or packing tail in file_release
211
+17 -17
fs/reiserfs/xattr.c
··· 67 goto out; 68 } else if (!xaroot->d_inode) { 69 int err; 70 - down(&privroot->d_inode->i_sem); 71 err = 72 privroot->d_inode->i_op->mkdir(privroot->d_inode, xaroot, 73 0700); 74 - up(&privroot->d_inode->i_sem); 75 76 if (err) { 77 dput(xaroot); ··· 219 } else if (flags & XATTR_REPLACE || flags & FL_READONLY) { 220 goto out; 221 } else { 222 - /* inode->i_sem is down, so nothing else can try to create 223 * the same xattr */ 224 err = xadir->d_inode->i_op->create(xadir->d_inode, xafile, 225 0700 | S_IFREG, NULL); ··· 268 * and don't mess with f->f_pos, but the idea is the same. Do some 269 * action on each and every entry in the directory. 270 * 271 - * we're called with i_sem held, so there are no worries about the directory 272 * changing underneath us. 273 */ 274 static int __xattr_readdir(struct file *filp, void *dirent, filldir_t filldir) ··· 426 int res = -ENOTDIR; 427 if (!file->f_op || !file->f_op->readdir) 428 goto out; 429 - down(&inode->i_sem); 430 // down(&inode->i_zombie); 431 res = -ENOENT; 432 if (!IS_DEADDIR(inode)) { ··· 435 unlock_kernel(); 436 } 437 // up(&inode->i_zombie); 438 - up(&inode->i_sem); 439 out: 440 return res; 441 } ··· 480 /* Generic extended attribute operations that can be used by xa plugins */ 481 482 /* 483 - * inode->i_sem: down 484 */ 485 int 486 reiserfs_xattr_set(struct inode *inode, const char *name, const void *buffer, ··· 535 /* Resize it so we're ok to write there */ 536 newattrs.ia_size = buffer_size; 537 newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME; 538 - down(&xinode->i_sem); 539 err = notify_change(fp->f_dentry, &newattrs); 540 if (err) 541 goto out_filp; ··· 598 } 599 600 out_filp: 601 - up(&xinode->i_sem); 602 fput(fp); 603 604 out: ··· 606 } 607 608 /* 609 - * inode->i_sem: down 610 */ 611 int 612 reiserfs_xattr_get(const struct inode *inode, const char *name, void *buffer, ··· 793 794 } 795 796 - /* This is called w/ inode->i_sem downed */ 797 int reiserfs_delete_xattrs(struct inode *inode) 798 { 799 struct file *fp; ··· 946 947 /* 948 * Inode operation getxattr() 949 - * Preliminary locking: we down dentry->d_inode->i_sem 950 */ 951 ssize_t 952 reiserfs_getxattr(struct dentry * dentry, const char *name, void *buffer, ··· 970 /* 971 * Inode operation setxattr() 972 * 973 - * dentry->d_inode->i_sem down 974 */ 975 int 976 reiserfs_setxattr(struct dentry *dentry, const char *name, const void *value, ··· 1008 /* 1009 * Inode operation removexattr() 1010 * 1011 - * dentry->d_inode->i_sem down 1012 */ 1013 int reiserfs_removexattr(struct dentry *dentry, const char *name) 1014 { ··· 1091 /* 1092 * Inode operation listxattr() 1093 * 1094 - * Preliminary locking: we down dentry->d_inode->i_sem 1095 */ 1096 ssize_t reiserfs_listxattr(struct dentry * dentry, char *buffer, size_t size) 1097 { ··· 1289 if (!IS_ERR(dentry)) { 1290 if (!(mount_flags & MS_RDONLY) && !dentry->d_inode) { 1291 struct inode *inode = dentry->d_parent->d_inode; 1292 - down(&inode->i_sem); 1293 err = inode->i_op->mkdir(inode, dentry, 0700); 1294 - up(&inode->i_sem); 1295 if (err) { 1296 dput(dentry); 1297 dentry = NULL;
··· 67 goto out; 68 } else if (!xaroot->d_inode) { 69 int err; 70 + mutex_lock(&privroot->d_inode->i_mutex); 71 err = 72 privroot->d_inode->i_op->mkdir(privroot->d_inode, xaroot, 73 0700); 74 + mutex_unlock(&privroot->d_inode->i_mutex); 75 76 if (err) { 77 dput(xaroot); ··· 219 } else if (flags & XATTR_REPLACE || flags & FL_READONLY) { 220 goto out; 221 } else { 222 + /* inode->i_mutex is down, so nothing else can try to create 223 * the same xattr */ 224 err = xadir->d_inode->i_op->create(xadir->d_inode, xafile, 225 0700 | S_IFREG, NULL); ··· 268 * and don't mess with f->f_pos, but the idea is the same. Do some 269 * action on each and every entry in the directory. 270 * 271 + * we're called with i_mutex held, so there are no worries about the directory 272 * changing underneath us. 273 */ 274 static int __xattr_readdir(struct file *filp, void *dirent, filldir_t filldir) ··· 426 int res = -ENOTDIR; 427 if (!file->f_op || !file->f_op->readdir) 428 goto out; 429 + mutex_lock(&inode->i_mutex); 430 // down(&inode->i_zombie); 431 res = -ENOENT; 432 if (!IS_DEADDIR(inode)) { ··· 435 unlock_kernel(); 436 } 437 // up(&inode->i_zombie); 438 + mutex_unlock(&inode->i_mutex); 439 out: 440 return res; 441 } ··· 480 /* Generic extended attribute operations that can be used by xa plugins */ 481 482 /* 483 + * inode->i_mutex: down 484 */ 485 int 486 reiserfs_xattr_set(struct inode *inode, const char *name, const void *buffer, ··· 535 /* Resize it so we're ok to write there */ 536 newattrs.ia_size = buffer_size; 537 newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME; 538 + mutex_lock(&xinode->i_mutex); 539 err = notify_change(fp->f_dentry, &newattrs); 540 if (err) 541 goto out_filp; ··· 598 } 599 600 out_filp: 601 + mutex_unlock(&xinode->i_mutex); 602 fput(fp); 603 604 out: ··· 606 } 607 608 /* 609 + * inode->i_mutex: down 610 */ 611 int 612 reiserfs_xattr_get(const struct inode *inode, const char *name, void *buffer, ··· 793 794 } 795 796 + /* This is called w/ inode->i_mutex downed */ 797 int reiserfs_delete_xattrs(struct inode *inode) 798 { 799 struct file *fp; ··· 946 947 /* 948 * Inode operation getxattr() 949 + * Preliminary locking: we down dentry->d_inode->i_mutex 950 */ 951 ssize_t 952 reiserfs_getxattr(struct dentry * dentry, const char *name, void *buffer, ··· 970 /* 971 * Inode operation setxattr() 972 * 973 + * dentry->d_inode->i_mutex down 974 */ 975 int 976 reiserfs_setxattr(struct dentry *dentry, const char *name, const void *value, ··· 1008 /* 1009 * Inode operation removexattr() 1010 * 1011 + * dentry->d_inode->i_mutex down 1012 */ 1013 int reiserfs_removexattr(struct dentry *dentry, const char *name) 1014 { ··· 1091 /* 1092 * Inode operation listxattr() 1093 * 1094 + * Preliminary locking: we down dentry->d_inode->i_mutex 1095 */ 1096 ssize_t reiserfs_listxattr(struct dentry * dentry, char *buffer, size_t size) 1097 { ··· 1289 if (!IS_ERR(dentry)) { 1290 if (!(mount_flags & MS_RDONLY) && !dentry->d_inode) { 1291 struct inode *inode = dentry->d_parent->d_inode; 1292 + mutex_lock(&inode->i_mutex); 1293 err = inode->i_op->mkdir(inode, dentry, 0700); 1294 + mutex_unlock(&inode->i_mutex); 1295 if (err) { 1296 dput(dentry); 1297 dentry = NULL;
+3 -3
fs/reiserfs/xattr_acl.c
··· 174 /* 175 * Inode operation get_posix_acl(). 176 * 177 - * inode->i_sem: down 178 * BKL held [before 2.5.x] 179 */ 180 struct posix_acl *reiserfs_get_acl(struct inode *inode, int type) ··· 237 /* 238 * Inode operation set_posix_acl(). 239 * 240 - * inode->i_sem: down 241 * BKL held [before 2.5.x] 242 */ 243 static int ··· 312 return error; 313 } 314 315 - /* dir->i_sem: down, 316 * inode is new and not released into the wild yet */ 317 int 318 reiserfs_inherit_default_acl(struct inode *dir, struct dentry *dentry,
··· 174 /* 175 * Inode operation get_posix_acl(). 176 * 177 + * inode->i_mutex: down 178 * BKL held [before 2.5.x] 179 */ 180 struct posix_acl *reiserfs_get_acl(struct inode *inode, int type) ··· 237 /* 238 * Inode operation set_posix_acl(). 239 * 240 + * inode->i_mutex: down 241 * BKL held [before 2.5.x] 242 */ 243 static int ··· 312 return error; 313 } 314 315 + /* dir->i_mutex: locked, 316 * inode is new and not released into the wild yet */ 317 int 318 reiserfs_inherit_default_acl(struct inode *dir, struct dentry *dentry,
+6 -6
fs/relayfs/inode.c
··· 109 } 110 111 parent = dget(parent); 112 - down(&parent->d_inode->i_sem); 113 d = lookup_one_len(name, parent, strlen(name)); 114 if (IS_ERR(d)) { 115 d = NULL; ··· 139 simple_release_fs(&relayfs_mount, &relayfs_mount_count); 140 141 exit: 142 - up(&parent->d_inode->i_sem); 143 dput(parent); 144 return d; 145 } ··· 204 return -EINVAL; 205 206 parent = dget(parent); 207 - down(&parent->d_inode->i_sem); 208 if (dentry->d_inode) { 209 if (S_ISDIR(dentry->d_inode->i_mode)) 210 error = simple_rmdir(parent->d_inode, dentry); ··· 215 } 216 if (!error) 217 dput(dentry); 218 - up(&parent->d_inode->i_sem); 219 dput(parent); 220 221 if (!error) ··· 476 ssize_t ret = 0; 477 void *from; 478 479 - down(&inode->i_sem); 480 if(!relay_file_read_avail(buf, *ppos)) 481 goto out; 482 ··· 494 relay_file_read_consume(buf, read_start, count); 495 *ppos = relay_file_read_end_pos(buf, read_start, count); 496 out: 497 - up(&inode->i_sem); 498 return ret; 499 } 500
··· 109 } 110 111 parent = dget(parent); 112 + mutex_lock(&parent->d_inode->i_mutex); 113 d = lookup_one_len(name, parent, strlen(name)); 114 if (IS_ERR(d)) { 115 d = NULL; ··· 139 simple_release_fs(&relayfs_mount, &relayfs_mount_count); 140 141 exit: 142 + mutex_unlock(&parent->d_inode->i_mutex); 143 dput(parent); 144 return d; 145 } ··· 204 return -EINVAL; 205 206 parent = dget(parent); 207 + mutex_lock(&parent->d_inode->i_mutex); 208 if (dentry->d_inode) { 209 if (S_ISDIR(dentry->d_inode->i_mode)) 210 error = simple_rmdir(parent->d_inode, dentry); ··· 215 } 216 if (!error) 217 dput(dentry); 218 + mutex_unlock(&parent->d_inode->i_mutex); 219 dput(parent); 220 221 if (!error) ··· 476 ssize_t ret = 0; 477 void *from; 478 479 + mutex_lock(&inode->i_mutex); 480 if(!relay_file_read_avail(buf, *ppos)) 481 goto out; 482 ··· 494 relay_file_read_consume(buf, read_start, count); 495 *ppos = relay_file_read_end_pos(buf, read_start, count); 496 out: 497 + mutex_unlock(&inode->i_mutex); 498 return ret; 499 } 500
+1 -1
fs/super.c
··· 72 INIT_HLIST_HEAD(&s->s_anon); 73 INIT_LIST_HEAD(&s->s_inodes); 74 init_rwsem(&s->s_umount); 75 - sema_init(&s->s_lock, 1); 76 down_write(&s->s_umount); 77 s->s_count = S_BIAS; 78 atomic_set(&s->s_active, 1);
··· 72 INIT_HLIST_HEAD(&s->s_anon); 73 INIT_LIST_HEAD(&s->s_inodes); 74 init_rwsem(&s->s_umount); 75 + mutex_init(&s->s_lock); 76 down_write(&s->s_umount); 77 s->s_count = S_BIAS; 78 atomic_set(&s->s_active, 1);
+15 -16
fs/sysfs/dir.c
··· 99 int error; 100 umode_t mode = S_IFDIR| S_IRWXU | S_IRUGO | S_IXUGO; 101 102 - down(&p->d_inode->i_sem); 103 *d = lookup_one_len(n, p, strlen(n)); 104 if (!IS_ERR(*d)) { 105 error = sysfs_make_dirent(p->d_fsdata, *d, k, mode, SYSFS_DIR); ··· 122 dput(*d); 123 } else 124 error = PTR_ERR(*d); 125 - up(&p->d_inode->i_sem); 126 return error; 127 } 128 ··· 246 struct dentry * parent = dget(d->d_parent); 247 struct sysfs_dirent * sd; 248 249 - down(&parent->d_inode->i_sem); 250 d_delete(d); 251 sd = d->d_fsdata; 252 list_del_init(&sd->s_sibling); ··· 257 pr_debug(" o %s removing done (%d)\n",d->d_name.name, 258 atomic_read(&d->d_count)); 259 260 - up(&parent->d_inode->i_sem); 261 dput(parent); 262 } 263 ··· 286 return; 287 288 pr_debug("sysfs %s: removing dir\n",dentry->d_name.name); 289 - down(&dentry->d_inode->i_sem); 290 parent_sd = dentry->d_fsdata; 291 list_for_each_entry_safe(sd, tmp, &parent_sd->s_children, s_sibling) { 292 if (!sd->s_element || !(sd->s_type & SYSFS_NOT_PINNED)) ··· 295 sysfs_drop_dentry(sd, dentry); 296 sysfs_put(sd); 297 } 298 - up(&dentry->d_inode->i_sem); 299 300 remove_dir(dentry); 301 /** ··· 318 down_write(&sysfs_rename_sem); 319 parent = kobj->parent->dentry; 320 321 - down(&parent->d_inode->i_sem); 322 323 new_dentry = lookup_one_len(new_name, parent, strlen(new_name)); 324 if (!IS_ERR(new_dentry)) { ··· 334 error = -EEXIST; 335 dput(new_dentry); 336 } 337 - up(&parent->d_inode->i_sem); 338 up_write(&sysfs_rename_sem); 339 340 return error; ··· 345 struct dentry * dentry = file->f_dentry; 346 struct sysfs_dirent * parent_sd = dentry->d_fsdata; 347 348 - down(&dentry->d_inode->i_sem); 349 file->private_data = sysfs_new_dirent(parent_sd, NULL); 350 - up(&dentry->d_inode->i_sem); 351 352 return file->private_data ? 0 : -ENOMEM; 353 ··· 358 struct dentry * dentry = file->f_dentry; 359 struct sysfs_dirent * cursor = file->private_data; 360 361 - down(&dentry->d_inode->i_sem); 362 list_del_init(&cursor->s_sibling); 363 - up(&dentry->d_inode->i_sem); 364 365 release_sysfs_dirent(cursor); 366 ··· 436 { 437 struct dentry * dentry = file->f_dentry; 438 439 - down(&dentry->d_inode->i_sem); 440 switch (origin) { 441 case 1: 442 offset += file->f_pos; ··· 444 if (offset >= 0) 445 break; 446 default: 447 - up(&file->f_dentry->d_inode->i_sem); 448 return -EINVAL; 449 } 450 if (offset != file->f_pos) { ··· 468 list_add_tail(&cursor->s_sibling, p); 469 } 470 } 471 - up(&dentry->d_inode->i_sem); 472 return offset; 473 } 474 ··· 483 EXPORT_SYMBOL_GPL(sysfs_create_dir); 484 EXPORT_SYMBOL_GPL(sysfs_remove_dir); 485 EXPORT_SYMBOL_GPL(sysfs_rename_dir); 486 -
··· 99 int error; 100 umode_t mode = S_IFDIR| S_IRWXU | S_IRUGO | S_IXUGO; 101 102 + mutex_lock(&p->d_inode->i_mutex); 103 *d = lookup_one_len(n, p, strlen(n)); 104 if (!IS_ERR(*d)) { 105 error = sysfs_make_dirent(p->d_fsdata, *d, k, mode, SYSFS_DIR); ··· 122 dput(*d); 123 } else 124 error = PTR_ERR(*d); 125 + mutex_unlock(&p->d_inode->i_mutex); 126 return error; 127 } 128 ··· 246 struct dentry * parent = dget(d->d_parent); 247 struct sysfs_dirent * sd; 248 249 + mutex_lock(&parent->d_inode->i_mutex); 250 d_delete(d); 251 sd = d->d_fsdata; 252 list_del_init(&sd->s_sibling); ··· 257 pr_debug(" o %s removing done (%d)\n",d->d_name.name, 258 atomic_read(&d->d_count)); 259 260 + mutex_unlock(&parent->d_inode->i_mutex); 261 dput(parent); 262 } 263 ··· 286 return; 287 288 pr_debug("sysfs %s: removing dir\n",dentry->d_name.name); 289 + mutex_lock(&dentry->d_inode->i_mutex); 290 parent_sd = dentry->d_fsdata; 291 list_for_each_entry_safe(sd, tmp, &parent_sd->s_children, s_sibling) { 292 if (!sd->s_element || !(sd->s_type & SYSFS_NOT_PINNED)) ··· 295 sysfs_drop_dentry(sd, dentry); 296 sysfs_put(sd); 297 } 298 + mutex_unlock(&dentry->d_inode->i_mutex); 299 300 remove_dir(dentry); 301 /** ··· 318 down_write(&sysfs_rename_sem); 319 parent = kobj->parent->dentry; 320 321 + mutex_lock(&parent->d_inode->i_mutex); 322 323 new_dentry = lookup_one_len(new_name, parent, strlen(new_name)); 324 if (!IS_ERR(new_dentry)) { ··· 334 error = -EEXIST; 335 dput(new_dentry); 336 } 337 + mutex_unlock(&parent->d_inode->i_mutex); 338 up_write(&sysfs_rename_sem); 339 340 return error; ··· 345 struct dentry * dentry = file->f_dentry; 346 struct sysfs_dirent * parent_sd = dentry->d_fsdata; 347 348 + mutex_lock(&dentry->d_inode->i_mutex); 349 file->private_data = sysfs_new_dirent(parent_sd, NULL); 350 + mutex_unlock(&dentry->d_inode->i_mutex); 351 352 return file->private_data ? 0 : -ENOMEM; 353 ··· 358 struct dentry * dentry = file->f_dentry; 359 struct sysfs_dirent * cursor = file->private_data; 360 361 + mutex_lock(&dentry->d_inode->i_mutex); 362 list_del_init(&cursor->s_sibling); 363 + mutex_unlock(&dentry->d_inode->i_mutex); 364 365 release_sysfs_dirent(cursor); 366 ··· 436 { 437 struct dentry * dentry = file->f_dentry; 438 439 + mutex_lock(&dentry->d_inode->i_mutex); 440 switch (origin) { 441 case 1: 442 offset += file->f_pos; ··· 444 if (offset >= 0) 445 break; 446 default: 447 + mutex_unlock(&file->f_dentry->d_inode->i_mutex); 448 return -EINVAL; 449 } 450 if (offset != file->f_pos) { ··· 468 list_add_tail(&cursor->s_sibling, p); 469 } 470 } 471 + mutex_unlock(&dentry->d_inode->i_mutex); 472 return offset; 473 } 474 ··· 483 EXPORT_SYMBOL_GPL(sysfs_create_dir); 484 EXPORT_SYMBOL_GPL(sysfs_remove_dir); 485 EXPORT_SYMBOL_GPL(sysfs_rename_dir);
+8 -9
fs/sysfs/file.c
··· 364 umode_t mode = (attr->mode & S_IALLUGO) | S_IFREG; 365 int error = 0; 366 367 - down(&dir->d_inode->i_sem); 368 error = sysfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type); 369 - up(&dir->d_inode->i_sem); 370 371 return error; 372 } ··· 398 struct dentry * victim; 399 int res = -ENOENT; 400 401 - down(&dir->d_inode->i_sem); 402 victim = lookup_one_len(attr->name, dir, strlen(attr->name)); 403 if (!IS_ERR(victim)) { 404 /* make sure dentry is really there */ ··· 420 */ 421 dput(victim); 422 } 423 - up(&dir->d_inode->i_sem); 424 425 return res; 426 } ··· 441 struct iattr newattrs; 442 int res = -ENOENT; 443 444 - down(&dir->d_inode->i_sem); 445 victim = lookup_one_len(attr->name, dir, strlen(attr->name)); 446 if (!IS_ERR(victim)) { 447 if (victim->d_inode && 448 (victim->d_parent->d_inode == dir->d_inode)) { 449 inode = victim->d_inode; 450 - down(&inode->i_sem); 451 newattrs.ia_mode = (mode & S_IALLUGO) | 452 (inode->i_mode & ~S_IALLUGO); 453 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 454 res = notify_change(victim, &newattrs); 455 - up(&inode->i_sem); 456 } 457 dput(victim); 458 } 459 - up(&dir->d_inode->i_sem); 460 461 return res; 462 } ··· 480 EXPORT_SYMBOL_GPL(sysfs_create_file); 481 EXPORT_SYMBOL_GPL(sysfs_remove_file); 482 EXPORT_SYMBOL_GPL(sysfs_update_file); 483 -
··· 364 umode_t mode = (attr->mode & S_IALLUGO) | S_IFREG; 365 int error = 0; 366 367 + mutex_lock(&dir->d_inode->i_mutex); 368 error = sysfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type); 369 + mutex_unlock(&dir->d_inode->i_mutex); 370 371 return error; 372 } ··· 398 struct dentry * victim; 399 int res = -ENOENT; 400 401 + mutex_lock(&dir->d_inode->i_mutex); 402 victim = lookup_one_len(attr->name, dir, strlen(attr->name)); 403 if (!IS_ERR(victim)) { 404 /* make sure dentry is really there */ ··· 420 */ 421 dput(victim); 422 } 423 + mutex_unlock(&dir->d_inode->i_mutex); 424 425 return res; 426 } ··· 441 struct iattr newattrs; 442 int res = -ENOENT; 443 444 + mutex_lock(&dir->d_inode->i_mutex); 445 victim = lookup_one_len(attr->name, dir, strlen(attr->name)); 446 if (!IS_ERR(victim)) { 447 if (victim->d_inode && 448 (victim->d_parent->d_inode == dir->d_inode)) { 449 inode = victim->d_inode; 450 + mutex_lock(&inode->i_mutex); 451 newattrs.ia_mode = (mode & S_IALLUGO) | 452 (inode->i_mode & ~S_IALLUGO); 453 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 454 res = notify_change(victim, &newattrs); 455 + mutex_unlock(&inode->i_mutex); 456 } 457 dput(victim); 458 } 459 + mutex_unlock(&dir->d_inode->i_mutex); 460 461 return res; 462 } ··· 480 EXPORT_SYMBOL_GPL(sysfs_create_file); 481 EXPORT_SYMBOL_GPL(sysfs_remove_file); 482 EXPORT_SYMBOL_GPL(sysfs_update_file);
+3 -5
fs/sysfs/inode.c
··· 201 202 /* 203 * Unhashes the dentry corresponding to given sysfs_dirent 204 - * Called with parent inode's i_sem held. 205 */ 206 void sysfs_drop_dentry(struct sysfs_dirent * sd, struct dentry * parent) 207 { ··· 232 /* no inode means this hasn't been made visible yet */ 233 return; 234 235 - down(&dir->d_inode->i_sem); 236 list_for_each_entry(sd, &parent_sd->s_children, s_sibling) { 237 if (!sd->s_element) 238 continue; ··· 243 break; 244 } 245 } 246 - up(&dir->d_inode->i_sem); 247 } 248 - 249 -
··· 201 202 /* 203 * Unhashes the dentry corresponding to given sysfs_dirent 204 + * Called with parent inode's i_mutex held. 205 */ 206 void sysfs_drop_dentry(struct sysfs_dirent * sd, struct dentry * parent) 207 { ··· 232 /* no inode means this hasn't been made visible yet */ 233 return; 234 235 + mutex_lock(&dir->d_inode->i_mutex); 236 list_for_each_entry(sd, &parent_sd->s_children, s_sibling) { 237 if (!sd->s_element) 238 continue; ··· 243 break; 244 } 245 } 246 + mutex_unlock(&dir->d_inode->i_mutex); 247 }
+2 -3
fs/sysfs/symlink.c
··· 86 87 BUG_ON(!kobj || !kobj->dentry || !name); 88 89 - down(&dentry->d_inode->i_sem); 90 error = sysfs_add_link(dentry, name, target); 91 - up(&dentry->d_inode->i_sem); 92 return error; 93 } 94 ··· 177 178 EXPORT_SYMBOL_GPL(sysfs_create_link); 179 EXPORT_SYMBOL_GPL(sysfs_remove_link); 180 -
··· 86 87 BUG_ON(!kobj || !kobj->dentry || !name); 88 89 + mutex_lock(&dentry->d_inode->i_mutex); 90 error = sysfs_add_link(dentry, name, target); 91 + mutex_unlock(&dentry->d_inode->i_mutex); 92 return error; 93 } 94 ··· 177 178 EXPORT_SYMBOL_GPL(sysfs_create_link); 179 EXPORT_SYMBOL_GPL(sysfs_remove_link);
+3 -3
fs/ufs/super.c
··· 1275 size_t towrite = len; 1276 struct buffer_head *bh; 1277 1278 - down(&inode->i_sem); 1279 while (towrite > 0) { 1280 tocopy = sb->s_blocksize - offset < towrite ? 1281 sb->s_blocksize - offset : towrite; ··· 1297 } 1298 out: 1299 if (len == towrite) { 1300 - up(&inode->i_sem); 1301 return err; 1302 } 1303 if (inode->i_size < off+len-towrite) ··· 1305 inode->i_version++; 1306 inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC; 1307 mark_inode_dirty(inode); 1308 - up(&inode->i_sem); 1309 return len - towrite; 1310 } 1311
··· 1275 size_t towrite = len; 1276 struct buffer_head *bh; 1277 1278 + mutex_lock(&inode->i_mutex); 1279 while (towrite > 0) { 1280 tocopy = sb->s_blocksize - offset < towrite ? 1281 sb->s_blocksize - offset : towrite; ··· 1297 } 1298 out: 1299 if (len == towrite) { 1300 + mutex_unlock(&inode->i_mutex); 1301 return err; 1302 } 1303 if (inode->i_size < off+len-towrite) ··· 1305 inode->i_version++; 1306 inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC; 1307 mark_inode_dirty(inode); 1308 + mutex_unlock(&inode->i_mutex); 1309 return len - towrite; 1310 } 1311
+4 -4
fs/xattr.c
··· 51 } 52 } 53 54 - down(&d->d_inode->i_sem); 55 error = security_inode_setxattr(d, kname, kvalue, size, flags); 56 if (error) 57 goto out; ··· 73 fsnotify_xattr(d); 74 } 75 out: 76 - up(&d->d_inode->i_sem); 77 kfree(kvalue); 78 return error; 79 } ··· 323 error = security_inode_removexattr(d, kname); 324 if (error) 325 goto out; 326 - down(&d->d_inode->i_sem); 327 error = d->d_inode->i_op->removexattr(d, kname); 328 - up(&d->d_inode->i_sem); 329 if (!error) 330 fsnotify_xattr(d); 331 }
··· 51 } 52 } 53 54 + mutex_lock(&d->d_inode->i_mutex); 55 error = security_inode_setxattr(d, kname, kvalue, size, flags); 56 if (error) 57 goto out; ··· 73 fsnotify_xattr(d); 74 } 75 out: 76 + mutex_unlock(&d->d_inode->i_mutex); 77 kfree(kvalue); 78 return error; 79 } ··· 323 error = security_inode_removexattr(d, kname); 324 if (error) 325 goto out; 326 + mutex_lock(&d->d_inode->i_mutex); 327 error = d->d_inode->i_op->removexattr(d, kname); 328 + mutex_unlock(&d->d_inode->i_mutex); 329 if (!error) 330 fsnotify_xattr(d); 331 }
+3 -7
fs/xfs/linux-2.6/mutex.h
··· 19 #define __XFS_SUPPORT_MUTEX_H__ 20 21 #include <linux/spinlock.h> 22 - #include <asm/semaphore.h> 23 24 /* 25 * Map the mutex'es from IRIX to Linux semaphores. ··· 28 * callers. 29 */ 30 #define MUTEX_DEFAULT 0x0 31 - typedef struct semaphore mutex_t; 32 33 - #define mutex_init(lock, type, name) sema_init(lock, 1) 34 - #define mutex_destroy(lock) sema_init(lock, -99) 35 - #define mutex_lock(lock, num) down(lock) 36 - #define mutex_trylock(lock) (down_trylock(lock) ? 0 : 1) 37 - #define mutex_unlock(lock) up(lock) 38 39 #endif /* __XFS_SUPPORT_MUTEX_H__ */
··· 19 #define __XFS_SUPPORT_MUTEX_H__ 20 21 #include <linux/spinlock.h> 22 + #include <linux/mutex.h> 23 24 /* 25 * Map the mutex'es from IRIX to Linux semaphores. ··· 28 * callers. 29 */ 30 #define MUTEX_DEFAULT 0x0 31 32 + typedef struct mutex mutex_t; 33 + //#define mutex_destroy(lock) do{}while(0) 34 35 #endif /* __XFS_SUPPORT_MUTEX_H__ */
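With mutex_t now simply struct mutex, the IRIX-style wrappers that carried extra type and name arguments disappear, and XFS call sites switch to the generic mutex API directly, as the quota and mount hunks below show. The call-site shape of the change:

	mutex_init(&dqp->q_qlock, MUTEX_DEFAULT, "xdq");	/* old */
	mutex_init(&dqp->q_qlock);				/* new */

	mutex_lock(&(dqp->q_qlock), PINOD);			/* old */
	mutex_lock(&(dqp->q_qlock));				/* new */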
+1 -1
fs/xfs/linux-2.6/xfs_iops.c
··· 203 ip->i_nlink = va.va_nlink; 204 ip->i_blocks = va.va_nblocks; 205 206 - /* we're under i_sem so i_size can't change under us */ 207 if (i_size_read(ip) != va.va_size) 208 i_size_write(ip, va.va_size); 209 }
··· 203 ip->i_nlink = va.va_nlink; 204 ip->i_blocks = va.va_nblocks; 205 206 + /* we're under i_mutex so i_size can't change under us */ 207 if (i_size_read(ip) != va.va_size) 208 i_size_write(ip, va.va_size); 209 }
+9 -9
fs/xfs/linux-2.6/xfs_lrw.c
··· 254 } 255 256 if (unlikely(ioflags & IO_ISDIRECT)) 257 - down(&inode->i_sem); 258 xfs_ilock(ip, XFS_IOLOCK_SHARED); 259 260 if (DM_EVENT_ENABLED(vp->v_vfsp, ip, DM_EVENT_READ) && ··· 286 287 unlock_isem: 288 if (unlikely(ioflags & IO_ISDIRECT)) 289 - up(&inode->i_sem); 290 return ret; 291 } 292 ··· 655 iolock = XFS_IOLOCK_EXCL; 656 locktype = VRWLOCK_WRITE; 657 658 - down(&inode->i_sem); 659 } else { 660 iolock = XFS_IOLOCK_SHARED; 661 locktype = VRWLOCK_WRITE_DIRECT; ··· 686 int dmflags = FILP_DELAY_FLAG(file); 687 688 if (need_isem) 689 - dmflags |= DM_FLAGS_ISEM; 690 691 xfs_iunlock(xip, XFS_ILOCK_EXCL); 692 error = XFS_SEND_DATA(xip->i_mount, DM_EVENT_WRITE, vp, ··· 772 if (need_isem) { 773 /* demote the lock now the cached pages are gone */ 774 XFS_ILOCK_DEMOTE(mp, io, XFS_IOLOCK_EXCL); 775 - up(&inode->i_sem); 776 777 iolock = XFS_IOLOCK_SHARED; 778 locktype = VRWLOCK_WRITE_DIRECT; ··· 817 818 xfs_rwunlock(bdp, locktype); 819 if (need_isem) 820 - up(&inode->i_sem); 821 error = XFS_SEND_NAMESP(xip->i_mount, DM_EVENT_NOSPACE, vp, 822 DM_RIGHT_NULL, vp, DM_RIGHT_NULL, NULL, NULL, 823 0, 0, 0); /* Delay flag intentionally unused */ 824 if (error) 825 goto out_nounlocks; 826 if (need_isem) 827 - down(&inode->i_sem); 828 xfs_rwlock(bdp, locktype); 829 pos = xip->i_d.di_size; 830 ret = 0; ··· 926 927 xfs_rwunlock(bdp, locktype); 928 if (need_isem) 929 - up(&inode->i_sem); 930 931 error = sync_page_range(inode, mapping, pos, ret); 932 if (!error) ··· 938 xfs_rwunlock(bdp, locktype); 939 out_unlock_isem: 940 if (need_isem) 941 - up(&inode->i_sem); 942 out_nounlocks: 943 return -error; 944 }
··· 254 } 255 256 if (unlikely(ioflags & IO_ISDIRECT)) 257 + mutex_lock(&inode->i_mutex); 258 xfs_ilock(ip, XFS_IOLOCK_SHARED); 259 260 if (DM_EVENT_ENABLED(vp->v_vfsp, ip, DM_EVENT_READ) && ··· 286 287 unlock_isem: 288 if (unlikely(ioflags & IO_ISDIRECT)) 289 + mutex_unlock(&inode->i_mutex); 290 return ret; 291 } 292 ··· 655 iolock = XFS_IOLOCK_EXCL; 656 locktype = VRWLOCK_WRITE; 657 658 + mutex_lock(&inode->i_mutex); 659 } else { 660 iolock = XFS_IOLOCK_SHARED; 661 locktype = VRWLOCK_WRITE_DIRECT; ··· 686 int dmflags = FILP_DELAY_FLAG(file); 687 688 if (need_isem) 689 + dmflags |= DM_FLAGS_IMUX; 690 691 xfs_iunlock(xip, XFS_ILOCK_EXCL); 692 error = XFS_SEND_DATA(xip->i_mount, DM_EVENT_WRITE, vp, ··· 772 if (need_isem) { 773 /* demote the lock now the cached pages are gone */ 774 XFS_ILOCK_DEMOTE(mp, io, XFS_IOLOCK_EXCL); 775 + mutex_unlock(&inode->i_mutex); 776 777 iolock = XFS_IOLOCK_SHARED; 778 locktype = VRWLOCK_WRITE_DIRECT; ··· 817 818 xfs_rwunlock(bdp, locktype); 819 if (need_isem) 820 + mutex_unlock(&inode->i_mutex); 821 error = XFS_SEND_NAMESP(xip->i_mount, DM_EVENT_NOSPACE, vp, 822 DM_RIGHT_NULL, vp, DM_RIGHT_NULL, NULL, NULL, 823 0, 0, 0); /* Delay flag intentionally unused */ 824 if (error) 825 goto out_nounlocks; 826 if (need_isem) 827 + mutex_lock(&inode->i_mutex); 828 xfs_rwlock(bdp, locktype); 829 pos = xip->i_d.di_size; 830 ret = 0; ··· 926 927 xfs_rwunlock(bdp, locktype); 928 if (need_isem) 929 + mutex_unlock(&inode->i_mutex); 930 931 error = sync_page_range(inode, mapping, pos, ret); 932 if (!error) ··· 938 xfs_rwunlock(bdp, locktype); 939 out_unlock_isem: 940 if (need_isem) 941 + mutex_unlock(&inode->i_mutex); 942 out_nounlocks: 943 return -error; 944 }
+2 -2
fs/xfs/quota/xfs_dquot.c
··· 104 */ 105 if (brandnewdquot) { 106 dqp->dq_flnext = dqp->dq_flprev = dqp; 107 - mutex_init(&dqp->q_qlock, MUTEX_DEFAULT, "xdq"); 108 initnsema(&dqp->q_flock, 1, "fdq"); 109 sv_init(&dqp->q_pinwait, SV_DEFAULT, "pdq"); 110 ··· 1382 xfs_dqlock( 1383 xfs_dquot_t *dqp) 1384 { 1385 - mutex_lock(&(dqp->q_qlock), PINOD); 1386 } 1387 1388 void
··· 104 */ 105 if (brandnewdquot) { 106 dqp->dq_flnext = dqp->dq_flprev = dqp; 107 + mutex_init(&dqp->q_qlock); 108 initnsema(&dqp->q_flock, 1, "fdq"); 109 sv_init(&dqp->q_pinwait, SV_DEFAULT, "pdq"); 110 ··· 1382 xfs_dqlock( 1383 xfs_dquot_t *dqp) 1384 { 1385 + mutex_lock(&(dqp->q_qlock)); 1386 } 1387 1388 void
+5 -5
fs/xfs/quota/xfs_qm.c
··· 167 xqm->qm_dqfree_ratio = XFS_QM_DQFREE_RATIO; 168 xqm->qm_nrefs = 0; 169 #ifdef DEBUG 170 - mutex_init(&qcheck_lock, MUTEX_DEFAULT, "qchk"); 171 #endif 172 return xqm; 173 } ··· 1166 qinf->qi_dqreclaims = 0; 1167 1168 /* mutex used to serialize quotaoffs */ 1169 - mutex_init(&qinf->qi_quotaofflock, MUTEX_DEFAULT, "qoff"); 1170 1171 /* Precalc some constants */ 1172 qinf->qi_dqchunklen = XFS_FSB_TO_BB(mp, XFS_DQUOT_CLUSTER_SIZE_FSB); ··· 1285 char *str, 1286 int n) 1287 { 1288 - mutex_init(&list->qh_lock, MUTEX_DEFAULT, str); 1289 list->qh_next = NULL; 1290 list->qh_version = 0; 1291 list->qh_nelems = 0; ··· 2762 xfs_qm_freelist_init(xfs_frlist_t *ql) 2763 { 2764 ql->qh_next = ql->qh_prev = (xfs_dquot_t *) ql; 2765 - mutex_init(&ql->qh_lock, MUTEX_DEFAULT, "dqf"); 2766 ql->qh_version = 0; 2767 ql->qh_nelems = 0; 2768 } ··· 2772 { 2773 xfs_dquot_t *dqp, *nextdqp; 2774 2775 - mutex_lock(&ql->qh_lock, PINOD); 2776 for (dqp = ql->qh_next; 2777 dqp != (xfs_dquot_t *)ql; ) { 2778 xfs_dqlock(dqp);
··· 167 xqm->qm_dqfree_ratio = XFS_QM_DQFREE_RATIO; 168 xqm->qm_nrefs = 0; 169 #ifdef DEBUG 170 + xfs_mutex_init(&qcheck_lock, MUTEX_DEFAULT, "qchk"); 171 #endif 172 return xqm; 173 } ··· 1166 qinf->qi_dqreclaims = 0; 1167 1168 /* mutex used to serialize quotaoffs */ 1169 + mutex_init(&qinf->qi_quotaofflock); 1170 1171 /* Precalc some constants */ 1172 qinf->qi_dqchunklen = XFS_FSB_TO_BB(mp, XFS_DQUOT_CLUSTER_SIZE_FSB); ··· 1285 char *str, 1286 int n) 1287 { 1288 + mutex_init(&list->qh_lock); 1289 list->qh_next = NULL; 1290 list->qh_version = 0; 1291 list->qh_nelems = 0; ··· 2762 xfs_qm_freelist_init(xfs_frlist_t *ql) 2763 { 2764 ql->qh_next = ql->qh_prev = (xfs_dquot_t *) ql; 2765 + mutex_init(&ql->qh_lock); 2766 ql->qh_version = 0; 2767 ql->qh_nelems = 0; 2768 } ··· 2772 { 2773 xfs_dquot_t *dqp, *nextdqp; 2774 2775 + mutex_lock(&ql->qh_lock); 2776 for (dqp = ql->qh_next; 2777 dqp != (xfs_dquot_t *)ql; ) { 2778 xfs_dqlock(dqp);
+1 -1
fs/xfs/quota/xfs_qm.h
··· 165 #define XFS_QM_IWARNLIMIT 5 166 #define XFS_QM_RTBWARNLIMIT 5 167 168 - #define XFS_QM_LOCK(xqm) (mutex_lock(&xqm##_lock, PINOD)) 169 #define XFS_QM_UNLOCK(xqm) (mutex_unlock(&xqm##_lock)) 170 #define XFS_QM_HOLD(xqm) ((xqm)->qm_nrefs++) 171 #define XFS_QM_RELE(xqm) ((xqm)->qm_nrefs--)
··· 165 #define XFS_QM_IWARNLIMIT 5 166 #define XFS_QM_RTBWARNLIMIT 5 167 168 + #define XFS_QM_LOCK(xqm) (mutex_lock(&xqm##_lock)) 169 #define XFS_QM_UNLOCK(xqm) (mutex_unlock(&xqm##_lock)) 170 #define XFS_QM_HOLD(xqm) ((xqm)->qm_nrefs++) 171 #define XFS_QM_RELE(xqm) ((xqm)->qm_nrefs--)
+1 -1
fs/xfs/quota/xfs_qm_bhv.c
··· 363 KERN_INFO "SGI XFS Quota Management subsystem\n"; 364 365 printk(message); 366 - mutex_init(&xfs_Gqm_lock, MUTEX_DEFAULT, "xfs_qmlock"); 367 vfs_bhv_set_custom(&xfs_qmops, &xfs_qmcore_xfs); 368 xfs_qm_init_procfs(); 369 }
··· 363 KERN_INFO "SGI XFS Quota Management subsystem\n"; 364 365 printk(message); 366 + mutex_init(&xfs_Gqm_lock); 367 vfs_bhv_set_custom(&xfs_qmops, &xfs_qmcore_xfs); 368 xfs_qm_init_procfs(); 369 }
+4 -4
fs/xfs/quota/xfs_qm_syscalls.c
··· 233 */ 234 ASSERT(mp->m_quotainfo); 235 if (mp->m_quotainfo) 236 - mutex_lock(&(XFS_QI_QOFFLOCK(mp)), PINOD); 237 238 ASSERT(mp->m_quotainfo); 239 ··· 508 /* 509 * Switch on quota enforcement in core. 510 */ 511 - mutex_lock(&(XFS_QI_QOFFLOCK(mp)), PINOD); 512 mp->m_qflags |= (flags & XFS_ALL_QUOTA_ENFD); 513 mutex_unlock(&(XFS_QI_QOFFLOCK(mp))); 514 ··· 617 * a quotaoff from happening). (XXXThis doesn't currently happen 618 * because we take the vfslock before calling xfs_qm_sysent). 619 */ 620 - mutex_lock(&(XFS_QI_QOFFLOCK(mp)), PINOD); 621 622 /* 623 * Get the dquot (locked), and join it to the transaction. ··· 1426 xfs_log_force(mp, (xfs_lsn_t)0, XFS_LOG_FORCE | XFS_LOG_SYNC); 1427 XFS_bflush(mp->m_ddev_targp); 1428 1429 - mutex_lock(&qcheck_lock, PINOD); 1430 /* There should be absolutely no quota activity while this 1431 is going on. */ 1432 qmtest_udqtab = kmem_zalloc(qmtest_hashmask *
··· 233 */ 234 ASSERT(mp->m_quotainfo); 235 if (mp->m_quotainfo) 236 + mutex_lock(&(XFS_QI_QOFFLOCK(mp))); 237 238 ASSERT(mp->m_quotainfo); 239 ··· 508 /* 509 * Switch on quota enforcement in core. 510 */ 511 + mutex_lock(&(XFS_QI_QOFFLOCK(mp))); 512 mp->m_qflags |= (flags & XFS_ALL_QUOTA_ENFD); 513 mutex_unlock(&(XFS_QI_QOFFLOCK(mp))); 514 ··· 617 * a quotaoff from happening). (XXXThis doesn't currently happen 618 * because we take the vfslock before calling xfs_qm_sysent). 619 */ 620 + mutex_lock(&(XFS_QI_QOFFLOCK(mp))); 621 622 /* 623 * Get the dquot (locked), and join it to the transaction. ··· 1426 xfs_log_force(mp, (xfs_lsn_t)0, XFS_LOG_FORCE | XFS_LOG_SYNC); 1427 XFS_bflush(mp->m_ddev_targp); 1428 1429 + mutex_lock(&qcheck_lock); 1430 /* There should be absolutely no quota activity while this 1431 is going on. */ 1432 qmtest_udqtab = kmem_zalloc(qmtest_hashmask *
+1 -1
fs/xfs/quota/xfs_quota_priv.h
··· 51 #define XFS_QI_MPLNEXT(mp) ((mp)->m_quotainfo->qi_dqlist.qh_next) 52 #define XFS_QI_MPLNDQUOTS(mp) ((mp)->m_quotainfo->qi_dqlist.qh_nelems) 53 54 - #define XQMLCK(h) (mutex_lock(&((h)->qh_lock), PINOD)) 55 #define XQMUNLCK(h) (mutex_unlock(&((h)->qh_lock))) 56 #ifdef DEBUG 57 struct xfs_dqhash;
··· 51 #define XFS_QI_MPLNEXT(mp) ((mp)->m_quotainfo->qi_dqlist.qh_next) 52 #define XFS_QI_MPLNDQUOTS(mp) ((mp)->m_quotainfo->qi_dqlist.qh_nelems) 53 54 + #define XQMLCK(h) (mutex_lock(&((h)->qh_lock))) 55 #define XQMUNLCK(h) (mutex_unlock(&((h)->qh_lock))) 56 #ifdef DEBUG 57 struct xfs_dqhash;
+3 -3
fs/xfs/support/uuid.c
··· 24 void 25 uuid_init(void) 26 { 27 - mutex_init(&uuid_monitor, MUTEX_DEFAULT, "uuid_monitor"); 28 } 29 30 /* ··· 94 { 95 int i, hole; 96 97 - mutex_lock(&uuid_monitor, PVFS); 98 for (i = 0, hole = -1; i < uuid_table_size; i++) { 99 if (uuid_is_nil(&uuid_table[i])) { 100 hole = i; ··· 122 { 123 int i; 124 125 - mutex_lock(&uuid_monitor, PVFS); 126 for (i = 0; i < uuid_table_size; i++) { 127 if (uuid_is_nil(&uuid_table[i])) 128 continue;
··· 24 void 25 uuid_init(void) 26 { 27 + mutex_init(&uuid_monitor); 28 } 29 30 /* ··· 94 { 95 int i, hole; 96 97 + mutex_lock(&uuid_monitor); 98 for (i = 0, hole = -1; i < uuid_table_size; i++) { 99 if (uuid_is_nil(&uuid_table[i])) { 100 hole = i; ··· 122 { 123 int i; 124 125 + mutex_lock(&uuid_monitor); 126 for (i = 0; i < uuid_table_size; i++) { 127 if (uuid_is_nil(&uuid_table[i])) 128 continue;
+7 -7
fs/xfs/xfs_dmapi.h
··· 152 153 #define DM_FLAGS_NDELAY 0x001 /* return EAGAIN after dm_pending() */ 154 #define DM_FLAGS_UNWANTED 0x002 /* event not in fsys dm_eventset_t */ 155 - #define DM_FLAGS_ISEM 0x004 /* thread holds i_sem */ 156 #define DM_FLAGS_IALLOCSEM_RD 0x010 /* thread holds i_alloc_sem rd */ 157 #define DM_FLAGS_IALLOCSEM_WR 0x020 /* thread holds i_alloc_sem wr */ 158 ··· 161 */ 162 #if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0) 163 #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \ 164 - DM_FLAGS_ISEM : 0) 165 - #define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_ISEM) 166 #endif 167 168 #if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,0)) && \ 169 (LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,22)) 170 #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \ 171 - DM_FLAGS_IALLOCSEM_RD : DM_FLAGS_ISEM) 172 - #define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_ISEM) 173 #endif 174 175 #if LINUX_VERSION_CODE <= KERNEL_VERSION(2,4,21) 176 #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \ 177 - 0 : DM_FLAGS_ISEM) 178 - #define DM_SEM_FLAG_WR (DM_FLAGS_ISEM) 179 #endif 180 181
··· 152 153 #define DM_FLAGS_NDELAY 0x001 /* return EAGAIN after dm_pending() */ 154 #define DM_FLAGS_UNWANTED 0x002 /* event not in fsys dm_eventset_t */ 155 + #define DM_FLAGS_IMUX 0x004 /* thread holds i_mutex */ 156 #define DM_FLAGS_IALLOCSEM_RD 0x010 /* thread holds i_alloc_sem rd */ 157 #define DM_FLAGS_IALLOCSEM_WR 0x020 /* thread holds i_alloc_sem wr */ 158 ··· 161 */ 162 #if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0) 163 #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \ 164 + DM_FLAGS_IMUX : 0) 165 + #define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_IMUX) 166 #endif 167 168 #if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,0)) && \ 169 (LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,22)) 170 #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \ 171 + DM_FLAGS_IALLOCSEM_RD : DM_FLAGS_IMUX) 172 + #define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_IMUX) 173 #endif 174 175 #if LINUX_VERSION_CODE <= KERNEL_VERSION(2,4,21) 176 #define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \ 177 + 0 : DM_FLAGS_IMUX) 178 + #define DM_SEM_FLAG_WR (DM_FLAGS_IMUX) 179 #endif 180 181
+1 -1
fs/xfs/xfs_mount.c
··· 117 118 AIL_LOCKINIT(&mp->m_ail_lock, "xfs_ail"); 119 spinlock_init(&mp->m_sb_lock, "xfs_sb"); 120 - mutex_init(&mp->m_ilock, MUTEX_DEFAULT, "xfs_ilock"); 121 initnsema(&mp->m_growlock, 1, "xfs_grow"); 122 /* 123 * Initialize the AIL.
··· 117 118 AIL_LOCKINIT(&mp->m_ail_lock, "xfs_ail"); 119 spinlock_init(&mp->m_sb_lock, "xfs_sb"); 120 + mutex_init(&mp->m_ilock); 121 initnsema(&mp->m_growlock, 1, "xfs_grow"); 122 /* 123 * Initialize the AIL.
+1 -1
fs/xfs/xfs_mount.h
··· 533 int msb_delta; /* Change to make to specified field */ 534 } xfs_mod_sb_t; 535 536 - #define XFS_MOUNT_ILOCK(mp) mutex_lock(&((mp)->m_ilock), PINOD) 537 #define XFS_MOUNT_IUNLOCK(mp) mutex_unlock(&((mp)->m_ilock)) 538 #define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock) 539 #define XFS_SB_UNLOCK(mp,s) mutex_spinunlock(&(mp)->m_sb_lock,(s))
··· 533 int msb_delta; /* Change to make to specified field */ 534 } xfs_mod_sb_t; 535 536 + #define XFS_MOUNT_ILOCK(mp) mutex_lock(&((mp)->m_ilock)) 537 #define XFS_MOUNT_IUNLOCK(mp) mutex_unlock(&((mp)->m_ilock)) 538 #define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock) 539 #define XFS_SB_UNLOCK(mp,s) mutex_spinunlock(&(mp)->m_sb_lock,(s))
+1
include/asm-alpha/atomic.h
··· 176 } 177 178 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 179 180 #define atomic_add_unless(v, a, u) \ 181 ({ \
··· 176 } 177 178 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 179 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 180 181 #define atomic_add_unless(v, a, u) \ 182 ({ \
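atomic_xchg() is added on every architecture because the new generic mutex fastpath headers (see asm-generic/mutex-xchg.h below) are built on it. A short demo of the semantics the macro provides (example code, not part of the patch):

	atomic_t count = ATOMIC_INIT(1);

	/* atomically: old = count; count = 0; */
	int old = atomic_xchg(&count, 0);

	if (old == 1) {
		/* count moved 1 -> 0: the xchg-based lock fastpath succeeded */
	}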
+9
include/asm-alpha/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+2
include/asm-arm/atomic.h
··· 175 176 #endif /* __LINUX_ARM_ARCH__ */ 177 178 static inline int atomic_add_unless(atomic_t *v, int a, int u) 179 { 180 int c, old;
··· 175 176 #endif /* __LINUX_ARM_ARCH__ */ 177 178 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 179 + 180 static inline int atomic_add_unless(atomic_t *v, int a, int u) 181 { 182 int c, old;
+128
include/asm-arm/mutex.h
···
··· 1 + /* 2 + * include/asm-arm/mutex.h 3 + * 4 + * ARM optimized mutex locking primitives 5 + * 6 + * Please look into asm-generic/mutex-xchg.h for a formal definition. 7 + */ 8 + #ifndef _ASM_MUTEX_H 9 + #define _ASM_MUTEX_H 10 + 11 + #if __LINUX_ARM_ARCH__ < 6 12 + /* On pre-ARMv6 hardware the swp based implementation is the most efficient. */ 13 + # include <asm-generic/mutex-xchg.h> 14 + #else 15 + 16 + /* 17 + * Attempting to lock a mutex on ARMv6+ can be done with a bastardized 18 + * atomic decrement (it is not a reliable atomic decrement but it satisfies 19 + * the defined semantics for our purpose, while being smaller and faster 20 + * than a real atomic decrement or atomic swap. The idea is to attempt 21 + * decrementing the lock value only once. If once decremented it isn't zero, 22 + * or if its store-back fails due to a dispute on the exclusive store, we 23 + * simply bail out immediately through the slow path where the lock will be 24 + * reattempted until it succeeds. 25 + */ 26 + #define __mutex_fastpath_lock(count, fail_fn) \ 27 + do { \ 28 + int __ex_flag, __res; \ 29 + \ 30 + typecheck(atomic_t *, count); \ 31 + typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \ 32 + \ 33 + __asm__ ( \ 34 + "ldrex %0, [%2] \n" \ 35 + "sub %0, %0, #1 \n" \ 36 + "strex %1, %0, [%2] \n" \ 37 + \ 38 + : "=&r" (__res), "=&r" (__ex_flag) \ 39 + : "r" (&(count)->counter) \ 40 + : "cc","memory" ); \ 41 + \ 42 + if (unlikely(__res || __ex_flag)) \ 43 + fail_fn(count); \ 44 + } while (0) 45 + 46 + #define __mutex_fastpath_lock_retval(count, fail_fn) \ 47 + ({ \ 48 + int __ex_flag, __res; \ 49 + \ 50 + typecheck(atomic_t *, count); \ 51 + typecheck_fn(fastcall int (*)(atomic_t *), fail_fn); \ 52 + \ 53 + __asm__ ( \ 54 + "ldrex %0, [%2] \n" \ 55 + "sub %0, %0, #1 \n" \ 56 + "strex %1, %0, [%2] \n" \ 57 + \ 58 + : "=&r" (__res), "=&r" (__ex_flag) \ 59 + : "r" (&(count)->counter) \ 60 + : "cc","memory" ); \ 61 + \ 62 + __res |= __ex_flag; \ 63 + if (unlikely(__res != 0)) \ 64 + __res = fail_fn(count); \ 65 + __res; \ 66 + }) 67 + 68 + /* 69 + * Same trick is used for the unlock fast path. However the original value, 70 + * rather than the result, is used to test for success in order to have 71 + * better generated assembly. 72 + */ 73 + #define __mutex_fastpath_unlock(count, fail_fn) \ 74 + do { \ 75 + int __ex_flag, __res, __orig; \ 76 + \ 77 + typecheck(atomic_t *, count); \ 78 + typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \ 79 + \ 80 + __asm__ ( \ 81 + "ldrex %0, [%3] \n" \ 82 + "add %1, %0, #1 \n" \ 83 + "strex %2, %1, [%3] \n" \ 84 + \ 85 + : "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag) \ 86 + : "r" (&(count)->counter) \ 87 + : "cc","memory" ); \ 88 + \ 89 + if (unlikely(__orig || __ex_flag)) \ 90 + fail_fn(count); \ 91 + } while (0) 92 + 93 + /* 94 + * If the unlock was done on a contended lock, or if the unlock simply fails 95 + * then the mutex remains locked. 96 + */ 97 + #define __mutex_slowpath_needs_to_unlock() 1 98 + 99 + /* 100 + * For __mutex_fastpath_trylock we use another construct which could be 101 + * described as a "single value cmpxchg". 102 + * 103 + * This provides the needed trylock semantics like cmpxchg would, but it is 104 + * lighter and less generic than a true cmpxchg implementation. 
105 + */ 106 + static inline int 107 + __mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *)) 108 + { 109 + int __ex_flag, __res, __orig; 110 + 111 + __asm__ ( 112 + 113 + "1: ldrex %0, [%3] \n" 114 + "subs %1, %0, #1 \n" 115 + "strexeq %2, %1, [%3] \n" 116 + "movlt %0, #0 \n" 117 + "cmpeq %2, #0 \n" 118 + "bgt 1b \n" 119 + 120 + : "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag) 121 + : "r" (&count->counter) 122 + : "cc", "memory" ); 123 + 124 + return __orig; 125 + } 126 + 127 + #endif 128 + #endif
+2
include/asm-arm26/atomic.h
··· 76 return ret; 77 } 78 79 static inline int atomic_add_unless(atomic_t *v, int a, int u) 80 { 81 int ret;
··· 76 return ret; 77 } 78 79 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 80 + 81 static inline int atomic_add_unless(atomic_t *v, int a, int u) 82 { 83 int ret;
+2
include/asm-cris/atomic.h
··· 136 return ret; 137 } 138 139 static inline int atomic_add_unless(atomic_t *v, int a, int u) 140 { 141 int ret;
··· 136 return ret; 137 } 138 139 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 140 + 141 static inline int atomic_add_unless(atomic_t *v, int a, int u) 142 { 143 int ret;
+9
include/asm-cris/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-frv/atomic.h
··· 328 #endif 329 330 #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new)) 331 332 #define atomic_add_unless(v, a, u) \ 333 ({ \
··· 328 #endif 329 330 #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new)) 331 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 332 333 #define atomic_add_unless(v, a, u) \ 334 ({ \
+9
include/asm-frv/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+110
include/asm-generic/mutex-dec.h
···
··· 1 + /* 2 + * asm-generic/mutex-dec.h 3 + * 4 + * Generic implementation of the mutex fastpath, based on atomic 5 + * decrement/increment. 6 + */ 7 + #ifndef _ASM_GENERIC_MUTEX_DEC_H 8 + #define _ASM_GENERIC_MUTEX_DEC_H 9 + 10 + /** 11 + * __mutex_fastpath_lock - try to take the lock by moving the count 12 + * from 1 to a 0 value 13 + * @count: pointer of type atomic_t 14 + * @fail_fn: function to call if the original value was not 1 15 + * 16 + * Change the count from 1 to a value lower than 1, and call <fail_fn> if 17 + * it wasn't 1 originally. This function MUST leave the value lower than 18 + * 1 even when the "1" assertion wasn't true. 19 + */ 20 + #define __mutex_fastpath_lock(count, fail_fn) \ 21 + do { \ 22 + if (unlikely(atomic_dec_return(count) < 0)) \ 23 + fail_fn(count); \ 24 + else \ 25 + smp_mb(); \ 26 + } while (0) 27 + 28 + /** 29 + * __mutex_fastpath_lock_retval - try to take the lock by moving the count 30 + * from 1 to a 0 value 31 + * @count: pointer of type atomic_t 32 + * @fail_fn: function to call if the original value was not 1 33 + * 34 + * Change the count from 1 to a value lower than 1, and call <fail_fn> if 35 + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds, 36 + * or anything the slow path function returns. 37 + */ 38 + static inline int 39 + __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *)) 40 + { 41 + if (unlikely(atomic_dec_return(count) < 0)) 42 + return fail_fn(count); 43 + else { 44 + smp_mb(); 45 + return 0; 46 + } 47 + } 48 + 49 + /** 50 + * __mutex_fastpath_unlock - try to promote the count from 0 to 1 51 + * @count: pointer of type atomic_t 52 + * @fail_fn: function to call if the original value was not 0 53 + * 54 + * Try to promote the count from 0 to 1. If it wasn't 0, call <fail_fn>. 55 + * In the failure case, this function is allowed to either set the value to 56 + * 1, or to set it to a value lower than 1. 57 + * 58 + * If the implementation sets it to a value of lower than 1, then the 59 + * __mutex_slowpath_needs_to_unlock() macro needs to return 1, it needs 60 + * to return 0 otherwise. 61 + */ 62 + #define __mutex_fastpath_unlock(count, fail_fn) \ 63 + do { \ 64 + smp_mb(); \ 65 + if (unlikely(atomic_inc_return(count) <= 0)) \ 66 + fail_fn(count); \ 67 + } while (0) 68 + 69 + #define __mutex_slowpath_needs_to_unlock() 1 70 + 71 + /** 72 + * __mutex_fastpath_trylock - try to acquire the mutex, without waiting 73 + * 74 + * @count: pointer of type atomic_t 75 + * @fail_fn: fallback function 76 + * 77 + * Change the count from 1 to a value lower than 1, and return 0 (failure) 78 + * if it wasn't 1 originally, or return 1 (success) otherwise. This function 79 + * MUST leave the value lower than 1 even when the "1" assertion wasn't true. 80 + * Additionally, if the value was < 0 originally, this function must not leave 81 + * it to 0 on failure. 82 + * 83 + * If the architecture has no effective trylock variant, it should call the 84 + * <fail_fn> spinlock-based trylock variant unconditionally. 85 + */ 86 + static inline int 87 + __mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *)) 88 + { 89 + /* 90 + * We have two variants here. The cmpxchg based one is the best one 91 + * because it never induce a false contention state. It is included 92 + * here because architectures using the inc/dec algorithms over the 93 + * xchg ones are much more likely to support cmpxchg natively. 
94 + *
95 + * If not we fall back to the spinlock based variant - that is
96 + * just as efficient (and simpler) as a 'destructive' probing of
97 + * the mutex state would be.
98 + */
99 + #ifdef __HAVE_ARCH_CMPXCHG
100 + if (likely(atomic_cmpxchg(count, 1, 0) == 1)) {
101 + smp_mb();
102 + return 1;
103 + }
104 + return 0;
105 + #else
106 + return fail_fn(count);
107 + #endif
108 + }
109 +
110 + #endif
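These hooks are wired into the public API by the new kernel/mutex.c (not shown in this section). Roughly, and assuming the slowpath names used there:

	void fastcall __sched mutex_lock(struct mutex *lock)
	{
		might_sleep();
		/* 1 -> 0 transition; contention falls through to the slowpath */
		__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
	}

	void fastcall __sched mutex_unlock(struct mutex *lock)
	{
		/* 0 -> 1 transition; waiters are woken from the slowpath */
		__mutex_fastpath_unlock(&lock->count, __mutex_unlock_slowpath);
	}

	int fastcall __sched mutex_trylock(struct mutex *lock)
	{
		return __mutex_fastpath_trylock(&lock->count,
						__mutex_trylock_slowpath);
	}

The slowpath functions only run on contention, so the uncontended case never leaves the inlined fastpath.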
+24
include/asm-generic/mutex-null.h
···
··· 1 + /* 2 + * asm-generic/mutex-null.h 3 + * 4 + * Generic implementation of the mutex fastpath, based on NOP :-) 5 + * 6 + * This is used by the mutex-debugging infrastructure, but it can also 7 + * be used by architectures that (for whatever reason) want to use the 8 + * spinlock based slowpath. 9 + */ 10 + #ifndef _ASM_GENERIC_MUTEX_NULL_H 11 + #define _ASM_GENERIC_MUTEX_NULL_H 12 + 13 + /* extra parameter only needed for mutex debugging: */ 14 + #ifndef __IP__ 15 + # define __IP__ 16 + #endif 17 + 18 + #define __mutex_fastpath_lock(count, fail_fn) fail_fn(count __RET_IP__) 19 + #define __mutex_fastpath_lock_retval(count, fail_fn) fail_fn(count __RET_IP__) 20 + #define __mutex_fastpath_unlock(count, fail_fn) fail_fn(count __RET_IP__) 21 + #define __mutex_fastpath_trylock(count, fail_fn) fail_fn(count) 22 + #define __mutex_slowpath_needs_to_unlock() 1 23 + 24 + #endif
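The null fastpath exists so that a CONFIG_DEBUG_MUTEXES build funnels every operation into the spinlock-based slowpath, where the debugging checks live. A sketch of the intended selection (the exact include site is an assumption, not shown in this hunk):

	#ifdef CONFIG_DEBUG_MUTEXES
	# include <asm-generic/mutex-null.h>	/* everything takes the slowpath */
	#else
	# include <asm/mutex.h>			/* arch-optimized fastpath */
	#endif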
+117
include/asm-generic/mutex-xchg.h
···
··· 1 + /* 2 + * asm-generic/mutex-xchg.h 3 + * 4 + * Generic implementation of the mutex fastpath, based on xchg(). 5 + * 6 + * NOTE: An xchg based implementation is less optimal than an atomic 7 + * decrement/increment based implementation. If your architecture 8 + * has a reasonable atomic dec/inc then you should probably use 9 + * asm-generic/mutex-dec.h instead, or you could open-code an 10 + * optimized version in asm/mutex.h. 11 + */ 12 + #ifndef _ASM_GENERIC_MUTEX_XCHG_H 13 + #define _ASM_GENERIC_MUTEX_XCHG_H 14 + 15 + /** 16 + * __mutex_fastpath_lock - try to take the lock by moving the count 17 + * from 1 to a 0 value 18 + * @count: pointer of type atomic_t 19 + * @fail_fn: function to call if the original value was not 1 20 + * 21 + * Change the count from 1 to a value lower than 1, and call <fail_fn> if it 22 + * wasn't 1 originally. This function MUST leave the value lower than 1 23 + * even when the "1" assertion wasn't true. 24 + */ 25 + #define __mutex_fastpath_lock(count, fail_fn) \ 26 + do { \ 27 + if (unlikely(atomic_xchg(count, 0) != 1)) \ 28 + fail_fn(count); \ 29 + else \ 30 + smp_mb(); \ 31 + } while (0) 32 + 33 + 34 + /** 35 + * __mutex_fastpath_lock_retval - try to take the lock by moving the count 36 + * from 1 to a 0 value 37 + * @count: pointer of type atomic_t 38 + * @fail_fn: function to call if the original value was not 1 39 + * 40 + * Change the count from 1 to a value lower than 1, and call <fail_fn> if it 41 + * wasn't 1 originally. This function returns 0 if the fastpath succeeds, 42 + * or anything the slow path function returns 43 + */ 44 + static inline int 45 + __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *)) 46 + { 47 + if (unlikely(atomic_xchg(count, 0) != 1)) 48 + return fail_fn(count); 49 + else { 50 + smp_mb(); 51 + return 0; 52 + } 53 + } 54 + 55 + /** 56 + * __mutex_fastpath_unlock - try to promote the mutex from 0 to 1 57 + * @count: pointer of type atomic_t 58 + * @fail_fn: function to call if the original value was not 0 59 + * 60 + * try to promote the mutex from 0 to 1. if it wasn't 0, call <function> 61 + * In the failure case, this function is allowed to either set the value to 62 + * 1, or to set it to a value lower than one. 63 + * If the implementation sets it to a value of lower than one, the 64 + * __mutex_slowpath_needs_to_unlock() macro needs to return 1, it needs 65 + * to return 0 otherwise. 66 + */ 67 + #define __mutex_fastpath_unlock(count, fail_fn) \ 68 + do { \ 69 + smp_mb(); \ 70 + if (unlikely(atomic_xchg(count, 1) != 0)) \ 71 + fail_fn(count); \ 72 + } while (0) 73 + 74 + #define __mutex_slowpath_needs_to_unlock() 0 75 + 76 + /** 77 + * __mutex_fastpath_trylock - try to acquire the mutex, without waiting 78 + * 79 + * @count: pointer of type atomic_t 80 + * @fail_fn: spinlock based trylock implementation 81 + * 82 + * Change the count from 1 to a value lower than 1, and return 0 (failure) 83 + * if it wasn't 1 originally, or return 1 (success) otherwise. This function 84 + * MUST leave the value lower than 1 even when the "1" assertion wasn't true. 85 + * Additionally, if the value was < 0 originally, this function must not leave 86 + * it to 0 on failure. 87 + * 88 + * If the architecture has no effective trylock variant, it should call the 89 + * <fail_fn> spinlock-based trylock variant unconditionally. 
90 + */ 91 + static inline int 92 + __mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *)) 93 + { 94 + int prev = atomic_xchg(count, 0); 95 + 96 + if (unlikely(prev < 0)) { 97 + /* 98 + * The lock was marked contended so we must restore that 99 + * state. If while doing so we get back a prev value of 1 100 + * then we just own it. 101 + * 102 + * [ In the rare case of the mutex going to 1, to 0, to -1 103 + * and then back to 0 in this few-instructions window, 104 + * this has the potential to trigger the slowpath for the 105 + * owner's unlock path needlessly, but that's not a problem 106 + * in practice. ] 107 + */ 108 + prev = atomic_xchg(count, prev); 109 + if (prev < 0) 110 + prev = 0; 111 + } 112 + smp_mb(); 113 + 114 + return prev; 115 + } 116 + 117 + #endif
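Both generic variants encode the same lock states in the counter, as the comments above spell out:

	count == 1	unlocked
	count == 0	locked, no known waiters
	count  < 0	locked, possibly contended

Where they differ is the unlock contract: the dec/inc variant can enter its failure path with the counter still at or below zero, so its __mutex_slowpath_needs_to_unlock() is 1 and the slowpath must store the final 1 itself; the xchg variant has already stored 1 before fail_fn runs, hence it returns 0.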
+2
include/asm-h8300/atomic.h
··· 95 return ret; 96 } 97 98 static inline int atomic_add_unless(atomic_t *v, int a, int u) 99 { 100 int ret;
··· 95 return ret; 96 } 97 98 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 99 + 100 static inline int atomic_add_unless(atomic_t *v, int a, int u) 101 { 102 int ret;
+9
include/asm-h8300/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-i386/atomic.h
··· 216 } 217 218 #define atomic_cmpxchg(v, old, new) ((int)cmpxchg(&((v)->counter), old, new)) 219 220 /** 221 * atomic_add_unless - add unless the number is a given value
··· 216 } 217 218 #define atomic_cmpxchg(v, old, new) ((int)cmpxchg(&((v)->counter), old, new)) 219 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 220 221 /** 222 * atomic_add_unless - add unless the number is a given value
+124
include/asm-i386/mutex.h
···
··· 1 + /* 2 + * Assembly implementation of the mutex fastpath, based on atomic 3 + * decrement/increment. 4 + * 5 + * started by Ingo Molnar: 6 + * 7 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 8 + */ 9 + #ifndef _ASM_MUTEX_H 10 + #define _ASM_MUTEX_H 11 + 12 + /** 13 + * __mutex_fastpath_lock - try to take the lock by moving the count 14 + * from 1 to a 0 value 15 + * @count: pointer of type atomic_t 16 + * @fn: function to call if the original value was not 1 17 + * 18 + * Change the count from 1 to a value lower than 1, and call <fn> if it 19 + * wasn't 1 originally. This function MUST leave the value lower than 1 20 + * even when the "1" assertion wasn't true. 21 + */ 22 + #define __mutex_fastpath_lock(count, fail_fn) \ 23 + do { \ 24 + unsigned int dummy; \ 25 + \ 26 + typecheck(atomic_t *, count); \ 27 + typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \ 28 + \ 29 + __asm__ __volatile__( \ 30 + LOCK " decl (%%eax) \n" \ 31 + " js "#fail_fn" \n" \ 32 + \ 33 + :"=a" (dummy) \ 34 + : "a" (count) \ 35 + : "memory", "ecx", "edx"); \ 36 + } while (0) 37 + 38 + 39 + /** 40 + * __mutex_fastpath_lock_retval - try to take the lock by moving the count 41 + * from 1 to a 0 value 42 + * @count: pointer of type atomic_t 43 + * @fail_fn: function to call if the original value was not 1 44 + * 45 + * Change the count from 1 to a value lower than 1, and call <fail_fn> if it 46 + * wasn't 1 originally. This function returns 0 if the fastpath succeeds, 47 + * or anything the slow path function returns 48 + */ 49 + static inline int 50 + __mutex_fastpath_lock_retval(atomic_t *count, 51 + int fastcall (*fail_fn)(atomic_t *)) 52 + { 53 + if (unlikely(atomic_dec_return(count) < 0)) 54 + return fail_fn(count); 55 + else 56 + return 0; 57 + } 58 + 59 + /** 60 + * __mutex_fastpath_unlock - try to promote the mutex from 0 to 1 61 + * @count: pointer of type atomic_t 62 + * @fail_fn: function to call if the original value was not 0 63 + * 64 + * try to promote the mutex from 0 to 1. if it wasn't 0, call <fail_fn>. 65 + * In the failure case, this function is allowed to either set the value 66 + * to 1, or to set it to a value lower than 1. 67 + * 68 + * If the implementation sets it to a value of lower than 1, the 69 + * __mutex_slowpath_needs_to_unlock() macro needs to return 1, it needs 70 + * to return 0 otherwise. 71 + */ 72 + #define __mutex_fastpath_unlock(count, fail_fn) \ 73 + do { \ 74 + unsigned int dummy; \ 75 + \ 76 + typecheck(atomic_t *, count); \ 77 + typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \ 78 + \ 79 + __asm__ __volatile__( \ 80 + LOCK " incl (%%eax) \n" \ 81 + " jle "#fail_fn" \n" \ 82 + \ 83 + :"=a" (dummy) \ 84 + : "a" (count) \ 85 + : "memory", "ecx", "edx"); \ 86 + } while (0) 87 + 88 + #define __mutex_slowpath_needs_to_unlock() 1 89 + 90 + /** 91 + * __mutex_fastpath_trylock - try to acquire the mutex, without waiting 92 + * 93 + * @count: pointer of type atomic_t 94 + * @fail_fn: fallback function 95 + * 96 + * Change the count from 1 to a value lower than 1, and return 0 (failure) 97 + * if it wasn't 1 originally, or return 1 (success) otherwise. This function 98 + * MUST leave the value lower than 1 even when the "1" assertion wasn't true. 99 + * Additionally, if the value was < 0 originally, this function must not leave 100 + * it to 0 on failure. 101 + */ 102 + static inline int 103 + __mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *)) 104 + { 105 + /* 106 + * We have two variants here. 
The cmpxchg based one is the best one
107 + * because it never induces a false contention state. It is included
108 + * here because architectures using the inc/dec algorithms over the
109 + * xchg ones are much more likely to support cmpxchg natively.
110 + *
111 + * If not we fall back to the spinlock based variant - that is
112 + * just as efficient (and simpler) as a 'destructive' probing of
113 + * the mutex state would be.
114 + */
115 + #ifdef __HAVE_ARCH_CMPXCHG
116 + if (likely(atomic_cmpxchg(count, 1, 0) == 1))
117 + return 1;
118 + return 0;
119 + #else
120 + return fail_fn(count);
121 + #endif
122 + }
123 +
124 + #endif
+1
include/asm-ia64/atomic.h
··· 89 } 90 91 #define atomic_cmpxchg(v, old, new) ((int)cmpxchg(&((v)->counter), old, new)) 92 93 #define atomic_add_unless(v, a, u) \ 94 ({ \
··· 89 } 90 91 #define atomic_cmpxchg(v, old, new) ((int)cmpxchg(&((v)->counter), old, new)) 92 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 93 94 #define atomic_add_unless(v, a, u) \ 95 ({ \
+9
include/asm-ia64/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-m32r/atomic.h
··· 243 #define atomic_add_negative(i,v) (atomic_add_return((i), (v)) < 0) 244 245 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 246 247 /** 248 * atomic_add_unless - add unless the number is a given value
··· 243 #define atomic_add_negative(i,v) (atomic_add_return((i), (v)) < 0) 244 245 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 246 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 247 248 /** 249 * atomic_add_unless - add unless the number is a given value
+9
include/asm-m32r/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-m68k/atomic.h
··· 140 } 141 142 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 143 144 #define atomic_add_unless(v, a, u) \ 145 ({ \
··· 140 } 141 142 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 143 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 144 145 #define atomic_add_unless(v, a, u) \ 146 ({ \
+9
include/asm-m68k/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-m68knommu/atomic.h
··· 129 } 130 131 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 132 133 #define atomic_add_unless(v, a, u) \ 134 ({ \
··· 129 } 130 131 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 132 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 133 134 #define atomic_add_unless(v, a, u) \ 135 ({ \
+9
include/asm-m68knommu/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-mips/atomic.h
··· 289 } 290 291 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 292 293 /** 294 * atomic_add_unless - add unless the number is a given value
··· 289 } 290 291 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 292 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 293 294 /** 295 * atomic_add_unless - add unless the number is a given value
+9
include/asm-mips/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-parisc/atomic.h
··· 165 166 /* exported interface */ 167 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 168 169 /** 170 * atomic_add_unless - add unless the number is a given value
··· 165 166 /* exported interface */ 167 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 168 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 169 170 /** 171 * atomic_add_unless - add unless the number is a given value
+9
include/asm-parisc/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-powerpc/atomic.h
··· 165 } 166 167 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 168 169 /** 170 * atomic_add_unless - add unless the number is a given value
··· 165 } 166 167 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 168 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 169 170 /** 171 * atomic_add_unless - add unless the number is a given value
+9
include/asm-powerpc/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+2
include/asm-s390/atomic.h
··· 75 __CS_LOOP(v, mask, "or"); 76 } 77 78 static __inline__ int atomic_cmpxchg(atomic_t *v, int old, int new) 79 { 80 __asm__ __volatile__(" cs %0,%3,0(%2)\n"
··· 75 __CS_LOOP(v, mask, "or"); 76 } 77 78 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 79 + 80 static __inline__ int atomic_cmpxchg(atomic_t *v, int old, int new) 81 { 82 __asm__ __volatile__(" cs %0,%3,0(%2)\n"
+9
include/asm-s390/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+2
include/asm-sh/atomic.h
··· 101 return ret; 102 } 103 104 static inline int atomic_add_unless(atomic_t *v, int a, int u) 105 { 106 int ret;
··· 101 return ret; 102 } 103 104 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 105 + 106 static inline int atomic_add_unless(atomic_t *v, int a, int u) 107 { 108 int ret;
+9
include/asm-sh/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+2
include/asm-sh64/atomic.h
··· 113 return ret; 114 } 115 116 static inline int atomic_add_unless(atomic_t *v, int a, int u) 117 { 118 int ret;
··· 113 return ret; 114 } 115 116 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 117 + 118 static inline int atomic_add_unless(atomic_t *v, int a, int u) 119 { 120 int ret;
+9
include/asm-sh64/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-sparc/atomic.h
··· 20 21 extern int __atomic_add_return(int, atomic_t *); 22 extern int atomic_cmpxchg(atomic_t *, int, int); 23 extern int atomic_add_unless(atomic_t *, int, int); 24 extern void atomic_set(atomic_t *, int); 25
··· 20 21 extern int __atomic_add_return(int, atomic_t *); 22 extern int atomic_cmpxchg(atomic_t *, int, int); 23 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 24 extern int atomic_add_unless(atomic_t *, int, int); 25 extern void atomic_set(atomic_t *, int); 26
+9
include/asm-sparc/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-sparc64/atomic.h
··· 72 #define atomic64_add_negative(i, v) (atomic64_add_ret(i, v) < 0) 73 74 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 75 76 #define atomic_add_unless(v, a, u) \ 77 ({ \
··· 72 #define atomic64_add_negative(i, v) (atomic64_add_ret(i, v) < 0) 73 74 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 75 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 76 77 #define atomic_add_unless(v, a, u) \ 78 ({ \
+9
include/asm-sparc64/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+9
include/asm-um/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+2
include/asm-v850/atomic.h
··· 104 return ret; 105 } 106 107 static inline int atomic_add_unless(atomic_t *v, int a, int u) 108 { 109 int ret;
··· 104 return ret; 105 } 106 107 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 108 + 109 static inline int atomic_add_unless(atomic_t *v, int a, int u) 110 { 111 int ret;
+9
include/asm-v850/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1
include/asm-x86_64/atomic.h
··· 389 #define atomic64_dec_return(v) (atomic64_sub_return(1,v)) 390 391 #define atomic_cmpxchg(v, old, new) ((int)cmpxchg(&((v)->counter), old, new)) 392 393 /** 394 * atomic_add_unless - add unless the number is a given value
··· 389 #define atomic64_dec_return(v) (atomic64_sub_return(1,v)) 390 391 #define atomic_cmpxchg(v, old, new) ((int)cmpxchg(&((v)->counter), old, new)) 392 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 393 394 /** 395 * atomic_add_unless - add unless the number is a given value
+113
include/asm-x86_64/mutex.h
···
···
1 + /*
2 + * Assembly implementation of the mutex fastpath, based on atomic
3 + * decrement/increment.
4 + *
5 + * started by Ingo Molnar:
6 + *
7 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
8 + */
9 + #ifndef _ASM_MUTEX_H
10 + #define _ASM_MUTEX_H
11 +
12 + /**
13 + * __mutex_fastpath_lock - decrement and call function if negative
14 + * @v: pointer of type atomic_t
15 + * @fail_fn: function to call if the result is negative
16 + *
17 + * Atomically decrements @v and calls <fail_fn> if the result is negative.
18 + */
19 + #define __mutex_fastpath_lock(v, fail_fn) \
20 + do { \
21 + unsigned long dummy; \
22 + \
23 + typecheck(atomic_t *, v); \
24 + typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \
25 + \
26 + __asm__ __volatile__( \
27 + LOCK " decl (%%rdi) \n" \
28 + " js 2f \n" \
29 + "1: \n" \
30 + \
31 + LOCK_SECTION_START("") \
32 + "2: call "#fail_fn" \n" \
33 + " jmp 1b \n" \
34 + LOCK_SECTION_END \
35 + \
36 + :"=D" (dummy) \
37 + : "D" (v) \
38 + : "rax", "rsi", "rdx", "rcx", \
39 + "r8", "r9", "r10", "r11", "memory"); \
40 + } while (0)
41 +
42 + /**
43 + * __mutex_fastpath_lock_retval - try to take the lock by moving the count
44 + * from 1 to a 0 value
45 + * @count: pointer of type atomic_t
46 + * @fail_fn: function to call if the original value was not 1
47 + *
48 + * Change the count from 1 to a value lower than 1, and call <fail_fn> if
49 + * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
50 + * or anything the slow path function returns
51 + */
52 + static inline int
53 + __mutex_fastpath_lock_retval(atomic_t *count,
54 + int fastcall (*fail_fn)(atomic_t *))
55 + {
56 + if (unlikely(atomic_dec_return(count) < 0))
57 + return fail_fn(count);
58 + else
59 + return 0;
60 + }
61 +
62 + /**
63 + * __mutex_fastpath_unlock - increment and call function if nonpositive
64 + * @v: pointer of type atomic_t
65 + * @fail_fn: function to call if the result is nonpositive
66 + *
67 + * Atomically increments @v and calls <fail_fn> if the result is nonpositive.
68 + */
69 + #define __mutex_fastpath_unlock(v, fail_fn) \
70 + do { \
71 + unsigned long dummy; \
72 + \
73 + typecheck(atomic_t *, v); \
74 + typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \
75 + \
76 + __asm__ __volatile__( \
77 + LOCK " incl (%%rdi) \n" \
78 + " jle 2f \n" \
79 + "1: \n" \
80 + \
81 + LOCK_SECTION_START("") \
82 + "2: call "#fail_fn" \n" \
83 + " jmp 1b \n" \
84 + LOCK_SECTION_END \
85 + \
86 + :"=D" (dummy) \
87 + : "D" (v) \
88 + : "rax", "rsi", "rdx", "rcx", \
89 + "r8", "r9", "r10", "r11", "memory"); \
90 + } while (0)
91 +
92 + #define __mutex_slowpath_needs_to_unlock() 1
93 +
94 + /**
95 + * __mutex_fastpath_trylock - try to acquire the mutex, without waiting
96 + *
97 + * @count: pointer of type atomic_t
98 + * @fail_fn: fallback function
99 + *
100 + * Change the count from 1 to 0 and return 1 (success), or return 0 (failure)
101 + * if it wasn't 1 originally. [the fallback function is never used on
102 + * x86_64, because all x86_64 CPUs have a CMPXCHG instruction.]
103 + */
104 + static inline int
105 + __mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
106 + {
107 + if (likely(atomic_cmpxchg(count, 1, 0) == 1))
108 + return 1;
109 + else
110 + return 0;
111 + }
112 +
113 + #endif
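Setting the out-of-line .text.lock section trick aside, the assembly lock fastpath above behaves like this C sketch; the long clobber list in the real macro exists because <fail_fn> is called straight from the asm, so every caller-saved register has to be declared clobbered:

/* C-level equivalent of the 'lock decl' + 'js' pair (sketch only) */
static inline void
mutex_fastpath_lock_sketch(atomic_t *v, fastcall void (*fail_fn)(atomic_t *))
{
	if (atomic_dec_return(v) < 0)	/* went negative: contended */
		fail_fn(v);		/* slowpath, kept out of line */
}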
+1
include/asm-xtensa/atomic.h
··· 224 #define atomic_add_negative(i,v) (atomic_add_return((i),(v)) < 0) 225 226 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 227 228 /** 229 * atomic_add_unless - add unless the number is a given value
··· 224 #define atomic_add_negative(i,v) (atomic_add_return((i),(v)) < 0) 225 226 #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) 227 + #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 228 229 /** 230 * atomic_add_unless - add unless the number is a given value
+9
include/asm-xtensa/mutex.h
···
··· 1 + /* 2 + * Pull in the generic implementation for the mutex fastpath. 3 + * 4 + * TODO: implement optimized primitives instead, or leave the generic 5 + * implementation in place, or pick the atomic_xchg() based generic 6 + * implementation. (see asm-generic/mutex-xchg.h for details) 7 + */ 8 + 9 + #include <asm-generic/mutex-dec.h>
+1 -1
include/linux/ext3_fs_i.h
··· 87 #ifdef CONFIG_EXT3_FS_XATTR 88 /* 89 * Extended attributes can be read independently of the main file 90 - * data. Taking i_sem even when reading would cause contention 91 * between readers of EAs and writers of regular file data, so 92 * instead we synchronize on xattr_sem when reading or changing 93 * EAs.
··· 87 #ifdef CONFIG_EXT3_FS_XATTR 88 /* 89 * Extended attributes can be read independently of the main file 90 + * data. Taking i_mutex even when reading would cause contention 91 * between readers of EAs and writers of regular file data, so 92 * instead we synchronize on xattr_sem when reading or changing 93 * EAs.
+7 -6
include/linux/fs.h
··· 219 #include <linux/prio_tree.h> 220 #include <linux/init.h> 221 #include <linux/sched.h> 222 223 #include <asm/atomic.h> 224 #include <asm/semaphore.h> ··· 485 unsigned long i_blocks; 486 unsigned short i_bytes; 487 spinlock_t i_lock; /* i_blocks, i_bytes, maybe i_size */ 488 - struct semaphore i_sem; 489 struct rw_semaphore i_alloc_sem; 490 struct inode_operations *i_op; 491 struct file_operations *i_fop; /* former ->i_op->default_file_ops */ ··· 821 unsigned long s_magic; 822 struct dentry *s_root; 823 struct rw_semaphore s_umount; 824 - struct semaphore s_lock; 825 int s_count; 826 int s_syncing; 827 int s_need_sync_fs; ··· 893 static inline void lock_super(struct super_block * sb) 894 { 895 get_fs_excl(); 896 - down(&sb->s_lock); 897 } 898 899 static inline void unlock_super(struct super_block * sb) 900 { 901 put_fs_excl(); 902 - up(&sb->s_lock); 903 } 904 905 /* ··· 1192 * directory. The name should be stored in the @name (with the 1193 * understanding that it is already pointing to a a %NAME_MAX+1 sized 1194 * buffer. get_name() should return %0 on success, a negative error code 1195 - * or error. @get_name will be called without @parent->i_sem held. 1196 * 1197 * get_parent: 1198 * @get_parent should find the parent directory for the given @child which ··· 1214 * nfsd_find_fh_dentry() in either the @obj or @parent parameters. 1215 * 1216 * Locking rules: 1217 - * get_parent is called with child->d_inode->i_sem down 1218 * get_name is not (which is possibly inconsistent) 1219 */ 1220
··· 219 #include <linux/prio_tree.h> 220 #include <linux/init.h> 221 #include <linux/sched.h> 222 + #include <linux/mutex.h> 223 224 #include <asm/atomic.h> 225 #include <asm/semaphore.h> ··· 484 unsigned long i_blocks; 485 unsigned short i_bytes; 486 spinlock_t i_lock; /* i_blocks, i_bytes, maybe i_size */ 487 + struct mutex i_mutex; 488 struct rw_semaphore i_alloc_sem; 489 struct inode_operations *i_op; 490 struct file_operations *i_fop; /* former ->i_op->default_file_ops */ ··· 820 unsigned long s_magic; 821 struct dentry *s_root; 822 struct rw_semaphore s_umount; 823 + struct mutex s_lock; 824 int s_count; 825 int s_syncing; 826 int s_need_sync_fs; ··· 892 static inline void lock_super(struct super_block * sb) 893 { 894 get_fs_excl(); 895 + mutex_lock(&sb->s_lock); 896 } 897 898 static inline void unlock_super(struct super_block * sb) 899 { 900 put_fs_excl(); 901 + mutex_unlock(&sb->s_lock); 902 } 903 904 /* ··· 1191 * directory. The name should be stored in the @name (with the 1192 * understanding that it is already pointing to a a %NAME_MAX+1 sized 1193 * buffer. get_name() should return %0 on success, a negative error code 1194 + * or error. @get_name will be called without @parent->i_mutex held. 1195 * 1196 * get_parent: 1197 * @get_parent should find the parent directory for the given @child which ··· 1213 * nfsd_find_fh_dentry() in either the @obj or @parent parameters. 1214 * 1215 * Locking rules: 1216 + * get_parent is called with child->d_inode->i_mutex down 1217 * get_name is not (which is possibly inconsistent) 1218 */ 1219
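The i_sem to i_mutex switch in struct inode above sets the mechanical pattern that the rest of this merge applies file by file; a sketch, with a hypothetical helper:

static void set_size_locked(struct inode *inode, loff_t size)
{
	mutex_lock(&inode->i_mutex);	/* was: down(&inode->i_sem); */
	i_size_write(inode, size);
	mutex_unlock(&inode->i_mutex);	/* was: up(&inode->i_sem); */
}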
+3 -2
include/linux/ide.h
··· 18 #include <linux/bio.h> 19 #include <linux/device.h> 20 #include <linux/pci.h> 21 #include <asm/byteorder.h> 22 #include <asm/system.h> 23 #include <asm/io.h> ··· 639 int crc_count; /* crc counter to reduce drive speed */ 640 struct list_head list; 641 struct device gendev; 642 - struct semaphore gendev_rel_sem; /* to deal with device release() */ 643 } ide_drive_t; 644 645 #define to_ide_device(dev)container_of(dev, ide_drive_t, gendev) ··· 795 unsigned sg_mapped : 1; /* sg_table and sg_nents are ready */ 796 797 struct device gendev; 798 - struct semaphore gendev_rel_sem; /* To deal with device release() */ 799 800 void *hwif_data; /* extra hwif data */ 801
··· 18 #include <linux/bio.h> 19 #include <linux/device.h> 20 #include <linux/pci.h> 21 + #include <linux/completion.h> 22 #include <asm/byteorder.h> 23 #include <asm/system.h> 24 #include <asm/io.h> ··· 638 int crc_count; /* crc counter to reduce drive speed */ 639 struct list_head list; 640 struct device gendev; 641 + struct completion gendev_rel_comp; /* to deal with device release() */ 642 } ide_drive_t; 643 644 #define to_ide_device(dev)container_of(dev, ide_drive_t, gendev) ··· 794 unsigned sg_mapped : 1; /* sg_table and sg_nents are ready */ 795 796 struct device gendev; 797 + struct completion gendev_rel_comp; /* To deal with device release() */ 798 799 void *hwif_data; /* extra hwif data */ 800
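gendev_rel_sem was only ever used as a one-shot "device released" event, which is what a completion expresses directly. Roughly, under the assumption that the pairing looks like this (function names illustrative):

static void drive_dev_release(struct device *dev)
{
	ide_drive_t *drive = to_ide_device(dev);

	complete(&drive->gendev_rel_comp);	/* was: up(&drive->gendev_rel_sem); */
}

static void drive_teardown_wait(ide_drive_t *drive)
{
	/* was: down(&drive->gendev_rel_sem); */
	wait_for_completion(&drive->gendev_rel_comp);
}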
+2 -2
include/linux/jffs2_fs_i.h
··· 8 #include <asm/semaphore.h> 9 10 struct jffs2_inode_info { 11 - /* We need an internal semaphore similar to inode->i_sem. 12 Unfortunately, we can't used the existing one, because 13 either the GC would deadlock, or we'd have to release it 14 before letting GC proceed. Or we'd have to put ugliness 15 - into the GC code so it didn't attempt to obtain the i_sem 16 for the inode(s) which are already locked */ 17 struct semaphore sem; 18
··· 8 #include <asm/semaphore.h> 9 10 struct jffs2_inode_info { 11 + /* We need an internal mutex similar to inode->i_mutex. 12 Unfortunately, we can't used the existing one, because 13 either the GC would deadlock, or we'd have to release it 14 before letting GC proceed. Or we'd have to put ugliness 15 + into the GC code so it didn't attempt to obtain the i_mutex 16 for the inode(s) which are already locked */ 17 struct semaphore sem; 18
+9
include/linux/kernel.h
··· 286 1; \ 287 }) 288 289 #endif /* __KERNEL__ */ 290 291 #define SI_LOAD_SHIFT 16
··· 286 1; \ 287 }) 288 289 + /* 290 + * Check at compile time that 'function' is a certain type, or is a pointer 291 + * to that type (needs to use typedef for the function type.) 292 + */ 293 + #define typecheck_fn(type,function) \ 294 + ({ typeof(type) __tmp = function; \ 295 + (void)__tmp; \ 296 + }) 297 + 298 #endif /* __KERNEL__ */ 299 300 #define SI_LOAD_SHIFT 16
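typecheck_fn() is how the new assembly fastpath macros (see the asm-x86_64/mutex.h hunk above) reject a slowpath function with the wrong prototype at compile time, since the asm call itself bypasses C type checking. A sketch with hypothetical names, honoring the comment's note that the function type is given via a typedef:

typedef fastcall void (*mutex_fail_fn_t)(atomic_t *);

static void check_example(void)
{
	/* zero runtime cost: the dummy initialization is optimized away,
	 * but a mismatched prototype fails to compile */
	typecheck_fn(mutex_fail_fn_t, some_slowpath_fn);
}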
+2 -2
include/linux/loop.h
··· 58 struct bio *lo_bio; 59 struct bio *lo_biotail; 60 int lo_state; 61 - struct semaphore lo_sem; 62 struct semaphore lo_ctl_mutex; 63 - struct semaphore lo_bh_mutex; 64 int lo_pending; 65 66 request_queue_t *lo_queue;
··· 58 struct bio *lo_bio; 59 struct bio *lo_biotail; 60 int lo_state; 61 + struct completion lo_done; 62 + struct completion lo_bh_done; 63 struct semaphore lo_ctl_mutex; 64 int lo_pending; 65 66 request_queue_t *lo_queue;
+4
include/linux/mm.h
··· 13 #include <linux/rbtree.h> 14 #include <linux/prio_tree.h> 15 #include <linux/fs.h> 16 17 struct mempolicy; 18 struct anon_vma; ··· 1025 static inline void 1026 kernel_map_pages(struct page *page, int numpages, int enable) 1027 { 1028 } 1029 #endif 1030
··· 13 #include <linux/rbtree.h> 14 #include <linux/prio_tree.h> 15 #include <linux/fs.h> 16 + #include <linux/mutex.h> 17 18 struct mempolicy; 19 struct anon_vma; ··· 1024 static inline void 1025 kernel_map_pages(struct page *page, int numpages, int enable) 1026 { 1027 + if (!PageHighMem(page) && !enable) 1028 + mutex_debug_check_no_locks_freed(page_address(page), 1029 + page_address(page + numpages)); 1030 } 1031 #endif 1032
+21
include/linux/mutex-debug.h
···
··· 1 + #ifndef __LINUX_MUTEX_DEBUG_H 2 + #define __LINUX_MUTEX_DEBUG_H 3 + 4 + /* 5 + * Mutexes - debugging helpers: 6 + */ 7 + 8 + #define __DEBUG_MUTEX_INITIALIZER(lockname) \ 9 + , .held_list = LIST_HEAD_INIT(lockname.held_list), \ 10 + .name = #lockname , .magic = &lockname 11 + 12 + #define mutex_init(sem) __mutex_init(sem, __FUNCTION__) 13 + 14 + extern void FASTCALL(mutex_destroy(struct mutex *lock)); 15 + 16 + extern void mutex_debug_show_all_locks(void); 17 + extern void mutex_debug_show_held_locks(struct task_struct *filter); 18 + extern void mutex_debug_check_no_locks_held(struct task_struct *task); 19 + extern void mutex_debug_check_no_locks_freed(const void *from, const void *to); 20 + 21 + #endif
+119
include/linux/mutex.h
···
··· 1 + /* 2 + * Mutexes: blocking mutual exclusion locks 3 + * 4 + * started by Ingo Molnar: 5 + * 6 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 7 + * 8 + * This file contains the main data structure and API definitions. 9 + */ 10 + #ifndef __LINUX_MUTEX_H 11 + #define __LINUX_MUTEX_H 12 + 13 + #include <linux/list.h> 14 + #include <linux/spinlock_types.h> 15 + 16 + #include <asm/atomic.h> 17 + 18 + /* 19 + * Simple, straightforward mutexes with strict semantics: 20 + * 21 + * - only one task can hold the mutex at a time 22 + * - only the owner can unlock the mutex 23 + * - multiple unlocks are not permitted 24 + * - recursive locking is not permitted 25 + * - a mutex object must be initialized via the API 26 + * - a mutex object must not be initialized via memset or copying 27 + * - task may not exit with mutex held 28 + * - memory areas where held locks reside must not be freed 29 + * - held mutexes must not be reinitialized 30 + * - mutexes may not be used in irq contexts 31 + * 32 + * These semantics are fully enforced when DEBUG_MUTEXES is 33 + * enabled. Furthermore, besides enforcing the above rules, the mutex 34 + * debugging code also implements a number of additional features 35 + * that make lock debugging easier and faster: 36 + * 37 + * - uses symbolic names of mutexes, whenever they are printed in debug output 38 + * - point-of-acquire tracking, symbolic lookup of function names 39 + * - list of all locks held in the system, printout of them 40 + * - owner tracking 41 + * - detects self-recursing locks and prints out all relevant info 42 + * - detects multi-task circular deadlocks and prints out all affected 43 + * locks and tasks (and only those tasks) 44 + */ 45 + struct mutex { 46 + /* 1: unlocked, 0: locked, negative: locked, possible waiters */ 47 + atomic_t count; 48 + spinlock_t wait_lock; 49 + struct list_head wait_list; 50 + #ifdef CONFIG_DEBUG_MUTEXES 51 + struct thread_info *owner; 52 + struct list_head held_list; 53 + unsigned long acquire_ip; 54 + const char *name; 55 + void *magic; 56 + #endif 57 + }; 58 + 59 + /* 60 + * This is the control structure for tasks blocked on mutex, 61 + * which resides on the blocked task's kernel stack: 62 + */ 63 + struct mutex_waiter { 64 + struct list_head list; 65 + struct task_struct *task; 66 + #ifdef CONFIG_DEBUG_MUTEXES 67 + struct mutex *lock; 68 + void *magic; 69 + #endif 70 + }; 71 + 72 + #ifdef CONFIG_DEBUG_MUTEXES 73 + # include <linux/mutex-debug.h> 74 + #else 75 + # define __DEBUG_MUTEX_INITIALIZER(lockname) 76 + # define mutex_init(mutex) __mutex_init(mutex, NULL) 77 + # define mutex_destroy(mutex) do { } while (0) 78 + # define mutex_debug_show_all_locks() do { } while (0) 79 + # define mutex_debug_show_held_locks(p) do { } while (0) 80 + # define mutex_debug_check_no_locks_held(task) do { } while (0) 81 + # define mutex_debug_check_no_locks_freed(from, to) do { } while (0) 82 + #endif 83 + 84 + #define __MUTEX_INITIALIZER(lockname) \ 85 + { .count = ATOMIC_INIT(1) \ 86 + , .wait_lock = SPIN_LOCK_UNLOCKED \ 87 + , .wait_list = LIST_HEAD_INIT(lockname.wait_list) \ 88 + __DEBUG_MUTEX_INITIALIZER(lockname) } 89 + 90 + #define DEFINE_MUTEX(mutexname) \ 91 + struct mutex mutexname = __MUTEX_INITIALIZER(mutexname) 92 + 93 + extern void fastcall __mutex_init(struct mutex *lock, const char *name); 94 + 95 + /*** 96 + * mutex_is_locked - is the mutex locked 97 + * @lock: the mutex to be queried 98 + * 99 + * Returns 1 if the mutex is locked, 0 if unlocked. 
100 + */ 101 + static inline int fastcall mutex_is_locked(struct mutex *lock) 102 + { 103 + return atomic_read(&lock->count) != 1; 104 + } 105 + 106 + /* 107 + * See kernel/mutex.c for detailed documentation of these APIs. 108 + * Also see Documentation/mutex-design.txt. 109 + */ 110 + extern void fastcall mutex_lock(struct mutex *lock); 111 + extern int fastcall mutex_lock_interruptible(struct mutex *lock); 112 + /* 113 + * NOTE: mutex_trylock() follows the spin_trylock() convention, 114 + * not the down_trylock() convention! 115 + */ 116 + extern int fastcall mutex_trylock(struct mutex *lock); 117 + extern void fastcall mutex_unlock(struct mutex *lock); 118 + 119 + #endif
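Put together, the API declared above is used like this minimal sketch (names hypothetical):

static DEFINE_MUTEX(cache_mutex);	/* statically initialized, unlocked */

static int cache_update(void)
{
	if (mutex_lock_interruptible(&cache_mutex))
		return -EINTR;		/* a signal ended the wait */

	/* ... the critical section; sleeping here is allowed ... */

	mutex_unlock(&cache_mutex);	/* must be done by the same task */
	return 0;
}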
+3 -3
include/linux/nfsd/nfsfh.h
··· 294 /* 295 * Lock a file handle/inode 296 * NOTE: both fh_lock and fh_unlock are done "by hand" in 297 - * vfs.c:nfsd_rename as it needs to grab 2 i_sem's at once 298 * so, any changes here should be reflected there. 299 */ 300 static inline void ··· 317 } 318 319 inode = dentry->d_inode; 320 - down(&inode->i_sem); 321 fill_pre_wcc(fhp); 322 fhp->fh_locked = 1; 323 } ··· 333 334 if (fhp->fh_locked) { 335 fill_post_wcc(fhp); 336 - up(&fhp->fh_dentry->d_inode->i_sem); 337 fhp->fh_locked = 0; 338 } 339 }
··· 294 /* 295 * Lock a file handle/inode 296 * NOTE: both fh_lock and fh_unlock are done "by hand" in 297 + * vfs.c:nfsd_rename as it needs to grab 2 i_mutex's at once 298 * so, any changes here should be reflected there. 299 */ 300 static inline void ··· 317 } 318 319 inode = dentry->d_inode; 320 + mutex_lock(&inode->i_mutex); 321 fill_pre_wcc(fhp); 322 fhp->fh_locked = 1; 323 } ··· 333 334 if (fhp->fh_locked) { 335 fill_post_wcc(fhp); 336 + mutex_unlock(&fhp->fh_dentry->d_inode->i_mutex); 337 fhp->fh_locked = 0; 338 } 339 }
+1 -1
include/linux/pipe_fs_i.h
··· 37 memory allocation, whereas PIPE_BUF makes atomicity guarantees. */ 38 #define PIPE_SIZE PAGE_SIZE 39 40 - #define PIPE_SEM(inode) (&(inode).i_sem) 41 #define PIPE_WAIT(inode) (&(inode).i_pipe->wait) 42 #define PIPE_READERS(inode) ((inode).i_pipe->readers) 43 #define PIPE_WRITERS(inode) ((inode).i_pipe->writers)
··· 37 memory allocation, whereas PIPE_BUF makes atomicity guarantees. */ 38 #define PIPE_SIZE PAGE_SIZE 39 40 + #define PIPE_MUTEX(inode) (&(inode).i_mutex) 41 #define PIPE_WAIT(inode) (&(inode).i_pipe->wait) 42 #define PIPE_READERS(inode) ((inode).i_pipe->readers) 43 #define PIPE_WRITERS(inode) ((inode).i_pipe->writers)
+1 -1
include/linux/reiserfs_fs.h
··· 1857 #define GET_BLOCK_CREATE 1 /* add anything you need to find block */ 1858 #define GET_BLOCK_NO_HOLE 2 /* return -ENOENT for file holes */ 1859 #define GET_BLOCK_READ_DIRECT 4 /* read the tail if indirect item not found */ 1860 - #define GET_BLOCK_NO_ISEM 8 /* i_sem is not held, don't preallocate */ 1861 #define GET_BLOCK_NO_DANGLE 16 /* don't leave any transactions running */ 1862 1863 int restart_transaction(struct reiserfs_transaction_handle *th,
··· 1857 #define GET_BLOCK_CREATE 1 /* add anything you need to find block */ 1858 #define GET_BLOCK_NO_HOLE 2 /* return -ENOENT for file holes */ 1859 #define GET_BLOCK_READ_DIRECT 4 /* read the tail if indirect item not found */ 1860 + #define GET_BLOCK_NO_IMUX 8 /* i_mutex is not held, don't preallocate */ 1861 #define GET_BLOCK_NO_DANGLE 16 /* don't leave any transactions running */ 1862 1863 int restart_transaction(struct reiserfs_transaction_handle *th,
+5
include/linux/sched.h
··· 817 /* Protection of proc_dentry: nesting proc_lock, dcache_lock, write_lock_irq(&tasklist_lock); */ 818 spinlock_t proc_lock; 819 820 /* journalling filesystem info */ 821 void *journal_info; 822
··· 817 /* Protection of proc_dentry: nesting proc_lock, dcache_lock, write_lock_irq(&tasklist_lock); */ 818 spinlock_t proc_lock; 819 820 + #ifdef CONFIG_DEBUG_MUTEXES 821 + /* mutex deadlock detection */ 822 + struct mutex_waiter *blocked_on; 823 + #endif 824 + 825 /* journalling filesystem info */ 826 void *journal_info; 827
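blocked_on is the one pointer that makes the deadlock detector work: from any sleeping task it reaches the mutex being waited on, from that mutex its owner, and so on along the chain. Roughly - and not the kernel's exact code, which is check_deadlock() in kernel/mutex-debug.c below:

/* uses the CONFIG_DEBUG_MUTEXES-only owner and blocked_on fields */
static int owner_chain_contains(struct task_struct *task, struct mutex *target)
{
	int depth;

	/* the same depth cap the real code uses against corrupted chains */
	for (depth = 0; task && task->blocked_on && depth < 20; depth++) {
		struct mutex *lock = task->blocked_on->lock;

		if (lock == target)
			return 1;	/* chain loops back: deadlock */
		task = lock->owner ? lock->owner->task : NULL;
	}
	return 0;
}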
+4 -4
ipc/mqueue.c
··· 660 if (fd < 0) 661 goto out_putname; 662 663 - down(&mqueue_mnt->mnt_root->d_inode->i_sem); 664 dentry = lookup_one_len(name, mqueue_mnt->mnt_root, strlen(name)); 665 if (IS_ERR(dentry)) { 666 error = PTR_ERR(dentry); ··· 697 out_err: 698 fd = error; 699 out_upsem: 700 - up(&mqueue_mnt->mnt_root->d_inode->i_sem); 701 out_putname: 702 putname(name); 703 return fd; ··· 714 if (IS_ERR(name)) 715 return PTR_ERR(name); 716 717 - down(&mqueue_mnt->mnt_root->d_inode->i_sem); 718 dentry = lookup_one_len(name, mqueue_mnt->mnt_root, strlen(name)); 719 if (IS_ERR(dentry)) { 720 err = PTR_ERR(dentry); ··· 735 dput(dentry); 736 737 out_unlock: 738 - up(&mqueue_mnt->mnt_root->d_inode->i_sem); 739 putname(name); 740 if (inode) 741 iput(inode);
··· 660 if (fd < 0) 661 goto out_putname; 662 663 + mutex_lock(&mqueue_mnt->mnt_root->d_inode->i_mutex); 664 dentry = lookup_one_len(name, mqueue_mnt->mnt_root, strlen(name)); 665 if (IS_ERR(dentry)) { 666 error = PTR_ERR(dentry); ··· 697 out_err: 698 fd = error; 699 out_upsem: 700 + mutex_unlock(&mqueue_mnt->mnt_root->d_inode->i_mutex); 701 out_putname: 702 putname(name); 703 return fd; ··· 714 if (IS_ERR(name)) 715 return PTR_ERR(name); 716 717 + mutex_lock(&mqueue_mnt->mnt_root->d_inode->i_mutex); 718 dentry = lookup_one_len(name, mqueue_mnt->mnt_root, strlen(name)); 719 if (IS_ERR(dentry)) { 720 err = PTR_ERR(dentry); ··· 735 dput(dentry); 736 737 out_unlock: 738 + mutex_unlock(&mqueue_mnt->mnt_root->d_inode->i_mutex); 739 putname(name); 740 if (inode) 741 iput(inode);
+2 -1
kernel/Makefile
··· 7 sysctl.o capability.o ptrace.o timer.o user.o \ 8 signal.o sys.o kmod.o workqueue.o pid.o \ 9 rcupdate.o intermodule.o extable.o params.o posix-timers.o \ 10 - kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o 11 12 obj-$(CONFIG_FUTEX) += futex.o 13 obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o 14 obj-$(CONFIG_SMP) += cpu.o spinlock.o
··· 7 sysctl.o capability.o ptrace.o timer.o user.o \ 8 signal.o sys.o kmod.o workqueue.o pid.o \ 9 rcupdate.o intermodule.o extable.o params.o posix-timers.o \ 10 + kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o mutex.o 11 12 + obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o 13 obj-$(CONFIG_FUTEX) += futex.o 14 obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o 15 obj-$(CONFIG_SMP) += cpu.o spinlock.o
+5 -5
kernel/cpuset.c
··· 1513 struct dentry *dentry; 1514 int error; 1515 1516 - down(&dir->d_inode->i_sem); 1517 dentry = cpuset_get_dentry(dir, cft->name); 1518 if (!IS_ERR(dentry)) { 1519 error = cpuset_create_file(dentry, 0644 | S_IFREG); ··· 1522 dput(dentry); 1523 } else 1524 error = PTR_ERR(dentry); 1525 - up(&dir->d_inode->i_sem); 1526 return error; 1527 } 1528 ··· 1793 1794 /* 1795 * Release manage_sem before cpuset_populate_dir() because it 1796 - * will down() this new directory's i_sem and if we race with 1797 * another mkdir, we might deadlock. 1798 */ 1799 up(&manage_sem); ··· 1812 { 1813 struct cpuset *c_parent = dentry->d_parent->d_fsdata; 1814 1815 - /* the vfs holds inode->i_sem already */ 1816 return cpuset_create(c_parent, dentry->d_name.name, mode | S_IFDIR); 1817 } 1818 ··· 1823 struct cpuset *parent; 1824 char *pathbuf = NULL; 1825 1826 - /* the vfs holds both inode->i_sem already */ 1827 1828 down(&manage_sem); 1829 cpuset_update_task_memory_state();
··· 1513 struct dentry *dentry; 1514 int error; 1515 1516 + mutex_lock(&dir->d_inode->i_mutex); 1517 dentry = cpuset_get_dentry(dir, cft->name); 1518 if (!IS_ERR(dentry)) { 1519 error = cpuset_create_file(dentry, 0644 | S_IFREG); ··· 1522 dput(dentry); 1523 } else 1524 error = PTR_ERR(dentry); 1525 + mutex_unlock(&dir->d_inode->i_mutex); 1526 return error; 1527 } 1528 ··· 1793 1794 /* 1795 * Release manage_sem before cpuset_populate_dir() because it 1796 + * will down() this new directory's i_mutex and if we race with 1797 * another mkdir, we might deadlock. 1798 */ 1799 up(&manage_sem); ··· 1812 { 1813 struct cpuset *c_parent = dentry->d_parent->d_fsdata; 1814 1815 + /* the vfs holds inode->i_mutex already */ 1816 return cpuset_create(c_parent, dentry->d_name.name, mode | S_IFDIR); 1817 } 1818 ··· 1823 struct cpuset *parent; 1824 char *pathbuf = NULL; 1825 1826 + /* the vfs holds both inode->i_mutex already */ 1827 1828 down(&manage_sem); 1829 cpuset_update_task_memory_state();
+5
kernel/exit.c
··· 29 #include <linux/syscalls.h> 30 #include <linux/signal.h> 31 #include <linux/cn_proc.h> 32 33 #include <asm/uaccess.h> 34 #include <asm/unistd.h> ··· 870 mpol_free(tsk->mempolicy); 871 tsk->mempolicy = NULL; 872 #endif 873 874 /* PF_DEAD causes final put_task_struct after we schedule. */ 875 preempt_disable();
··· 29 #include <linux/syscalls.h> 30 #include <linux/signal.h> 31 #include <linux/cn_proc.h> 32 + #include <linux/mutex.h> 33 34 #include <asm/uaccess.h> 35 #include <asm/unistd.h> ··· 869 mpol_free(tsk->mempolicy); 870 tsk->mempolicy = NULL; 871 #endif 872 + /* 873 + * If DEBUG_MUTEXES is on, make sure we are holding no locks: 874 + */ 875 + mutex_debug_check_no_locks_held(tsk); 876 877 /* PF_DEAD causes final put_task_struct after we schedule. */ 878 preempt_disable();
+4
kernel/fork.c
··· 979 } 980 #endif 981 982 p->tgid = p->pid; 983 if (clone_flags & CLONE_THREAD) 984 p->tgid = current->tgid;
··· 979 } 980 #endif 981 982 + #ifdef CONFIG_DEBUG_MUTEXES 983 + p->blocked_on = NULL; /* not blocked yet */ 984 + #endif 985 + 986 p->tgid = p->pid; 987 if (clone_flags & CLONE_THREAD) 988 p->tgid = current->tgid;
+464
kernel/mutex-debug.c
···
··· 1 + /* 2 + * kernel/mutex-debug.c 3 + * 4 + * Debugging code for mutexes 5 + * 6 + * Started by Ingo Molnar: 7 + * 8 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 9 + * 10 + * lock debugging, locking tree, deadlock detection started by: 11 + * 12 + * Copyright (C) 2004, LynuxWorks, Inc., Igor Manyilov, Bill Huey 13 + * Released under the General Public License (GPL). 14 + */ 15 + #include <linux/mutex.h> 16 + #include <linux/sched.h> 17 + #include <linux/delay.h> 18 + #include <linux/module.h> 19 + #include <linux/spinlock.h> 20 + #include <linux/kallsyms.h> 21 + #include <linux/interrupt.h> 22 + 23 + #include <asm/mutex.h> 24 + 25 + #include "mutex-debug.h" 26 + 27 + /* 28 + * We need a global lock when we walk through the multi-process 29 + * lock tree. Only used in the deadlock-debugging case. 30 + */ 31 + DEFINE_SPINLOCK(debug_mutex_lock); 32 + 33 + /* 34 + * All locks held by all tasks, in a single global list: 35 + */ 36 + LIST_HEAD(debug_mutex_held_locks); 37 + 38 + /* 39 + * In the debug case we carry the caller's instruction pointer into 40 + * other functions, but we dont want the function argument overhead 41 + * in the nondebug case - hence these macros: 42 + */ 43 + #define __IP_DECL__ , unsigned long ip 44 + #define __IP__ , ip 45 + #define __RET_IP__ , (unsigned long)__builtin_return_address(0) 46 + 47 + /* 48 + * "mutex debugging enabled" flag. We turn it off when we detect 49 + * the first problem because we dont want to recurse back 50 + * into the tracing code when doing error printk or 51 + * executing a BUG(): 52 + */ 53 + int debug_mutex_on = 1; 54 + 55 + static void printk_task(struct task_struct *p) 56 + { 57 + if (p) 58 + printk("%16s:%5d [%p, %3d]", p->comm, p->pid, p, p->prio); 59 + else 60 + printk("<none>"); 61 + } 62 + 63 + static void printk_ti(struct thread_info *ti) 64 + { 65 + if (ti) 66 + printk_task(ti->task); 67 + else 68 + printk("<none>"); 69 + } 70 + 71 + static void printk_task_short(struct task_struct *p) 72 + { 73 + if (p) 74 + printk("%s/%d [%p, %3d]", p->comm, p->pid, p, p->prio); 75 + else 76 + printk("<none>"); 77 + } 78 + 79 + static void printk_lock(struct mutex *lock, int print_owner) 80 + { 81 + printk(" [%p] {%s}\n", lock, lock->name); 82 + 83 + if (print_owner && lock->owner) { 84 + printk(".. held by: "); 85 + printk_ti(lock->owner); 86 + printk("\n"); 87 + } 88 + if (lock->owner) { 89 + printk("... 
acquired at: "); 90 + print_symbol("%s\n", lock->acquire_ip); 91 + } 92 + } 93 + 94 + /* 95 + * printk locks held by a task: 96 + */ 97 + static void show_task_locks(struct task_struct *p) 98 + { 99 + switch (p->state) { 100 + case TASK_RUNNING: printk("R"); break; 101 + case TASK_INTERRUPTIBLE: printk("S"); break; 102 + case TASK_UNINTERRUPTIBLE: printk("D"); break; 103 + case TASK_STOPPED: printk("T"); break; 104 + case EXIT_ZOMBIE: printk("Z"); break; 105 + case EXIT_DEAD: printk("X"); break; 106 + default: printk("?"); break; 107 + } 108 + printk_task(p); 109 + if (p->blocked_on) { 110 + struct mutex *lock = p->blocked_on->lock; 111 + 112 + printk(" blocked on mutex:"); 113 + printk_lock(lock, 1); 114 + } else 115 + printk(" (not blocked on mutex)\n"); 116 + } 117 + 118 + /* 119 + * printk all locks held in the system (if filter == NULL), 120 + * or all locks belonging to a single task (if filter != NULL): 121 + */ 122 + void show_held_locks(struct task_struct *filter) 123 + { 124 + struct list_head *curr, *cursor = NULL; 125 + struct mutex *lock; 126 + struct thread_info *t; 127 + unsigned long flags; 128 + int count = 0; 129 + 130 + if (filter) { 131 + printk("------------------------------\n"); 132 + printk("| showing all locks held by: | ("); 133 + printk_task_short(filter); 134 + printk("):\n"); 135 + printk("------------------------------\n"); 136 + } else { 137 + printk("---------------------------\n"); 138 + printk("| showing all locks held: |\n"); 139 + printk("---------------------------\n"); 140 + } 141 + 142 + /* 143 + * Play safe and acquire the global trace lock. We 144 + * cannot printk with that lock held so we iterate 145 + * very carefully: 146 + */ 147 + next: 148 + debug_spin_lock_save(&debug_mutex_lock, flags); 149 + list_for_each(curr, &debug_mutex_held_locks) { 150 + if (cursor && curr != cursor) 151 + continue; 152 + lock = list_entry(curr, struct mutex, held_list); 153 + t = lock->owner; 154 + if (filter && (t != filter->thread_info)) 155 + continue; 156 + count++; 157 + cursor = curr->next; 158 + debug_spin_lock_restore(&debug_mutex_lock, flags); 159 + 160 + printk("\n#%03d: ", count); 161 + printk_lock(lock, filter ? 0 : 1); 162 + goto next; 163 + } 164 + debug_spin_lock_restore(&debug_mutex_lock, flags); 165 + printk("\n"); 166 + } 167 + 168 + void mutex_debug_show_all_locks(void) 169 + { 170 + struct task_struct *g, *p; 171 + int count = 10; 172 + int unlock = 1; 173 + 174 + printk("\nShowing all blocking locks in the system:\n"); 175 + 176 + /* 177 + * Here we try to get the tasklist_lock as hard as possible, 178 + * if not successful after 2 seconds we ignore it (but keep 179 + * trying). This is to enable a debug printout even if a 180 + * tasklist_lock-holding task deadlocks or crashes. 181 + */ 182 + retry: 183 + if (!read_trylock(&tasklist_lock)) { 184 + if (count == 10) 185 + printk("hm, tasklist_lock locked, retrying... 
"); 186 + if (count) { 187 + count--; 188 + printk(" #%d", 10-count); 189 + mdelay(200); 190 + goto retry; 191 + } 192 + printk(" ignoring it.\n"); 193 + unlock = 0; 194 + } 195 + if (count != 10) 196 + printk(" locked it.\n"); 197 + 198 + do_each_thread(g, p) { 199 + show_task_locks(p); 200 + if (!unlock) 201 + if (read_trylock(&tasklist_lock)) 202 + unlock = 1; 203 + } while_each_thread(g, p); 204 + 205 + printk("\n"); 206 + show_held_locks(NULL); 207 + printk("=============================================\n\n"); 208 + 209 + if (unlock) 210 + read_unlock(&tasklist_lock); 211 + } 212 + 213 + static void report_deadlock(struct task_struct *task, struct mutex *lock, 214 + struct mutex *lockblk, unsigned long ip) 215 + { 216 + printk("\n%s/%d is trying to acquire this lock:\n", 217 + current->comm, current->pid); 218 + printk_lock(lock, 1); 219 + printk("... trying at: "); 220 + print_symbol("%s\n", ip); 221 + show_held_locks(current); 222 + 223 + if (lockblk) { 224 + printk("but %s/%d is deadlocking current task %s/%d!\n\n", 225 + task->comm, task->pid, current->comm, current->pid); 226 + printk("\n%s/%d is blocked on this lock:\n", 227 + task->comm, task->pid); 228 + printk_lock(lockblk, 1); 229 + 230 + show_held_locks(task); 231 + 232 + printk("\n%s/%d's [blocked] stackdump:\n\n", 233 + task->comm, task->pid); 234 + show_stack(task, NULL); 235 + } 236 + 237 + printk("\n%s/%d's [current] stackdump:\n\n", 238 + current->comm, current->pid); 239 + dump_stack(); 240 + mutex_debug_show_all_locks(); 241 + printk("[ turning off deadlock detection. Please report this. ]\n\n"); 242 + local_irq_disable(); 243 + } 244 + 245 + /* 246 + * Recursively check for mutex deadlocks: 247 + */ 248 + static int check_deadlock(struct mutex *lock, int depth, 249 + struct thread_info *ti, unsigned long ip) 250 + { 251 + struct mutex *lockblk; 252 + struct task_struct *task; 253 + 254 + if (!debug_mutex_on) 255 + return 0; 256 + 257 + ti = lock->owner; 258 + if (!ti) 259 + return 0; 260 + 261 + task = ti->task; 262 + lockblk = NULL; 263 + if (task->blocked_on) 264 + lockblk = task->blocked_on->lock; 265 + 266 + /* Self-deadlock: */ 267 + if (current == task) { 268 + DEBUG_OFF(); 269 + if (depth) 270 + return 1; 271 + printk("\n==========================================\n"); 272 + printk( "[ BUG: lock recursion deadlock detected! |\n"); 273 + printk( "------------------------------------------\n"); 274 + report_deadlock(task, lock, NULL, ip); 275 + return 0; 276 + } 277 + 278 + /* Ugh, something corrupted the lock data structure? */ 279 + if (depth > 20) { 280 + DEBUG_OFF(); 281 + printk("\n===========================================\n"); 282 + printk( "[ BUG: infinite lock dependency detected!? |\n"); 283 + printk( "-------------------------------------------\n"); 284 + report_deadlock(task, lock, lockblk, ip); 285 + return 0; 286 + } 287 + 288 + /* Recursively check for dependencies: */ 289 + if (lockblk && check_deadlock(lockblk, depth+1, ti, ip)) { 290 + printk("\n============================================\n"); 291 + printk( "[ BUG: circular locking deadlock detected! 
]\n"); 292 + printk( "--------------------------------------------\n"); 293 + report_deadlock(task, lock, lockblk, ip); 294 + return 0; 295 + } 296 + return 0; 297 + } 298 + 299 + /* 300 + * Called when a task exits, this function checks whether the 301 + * task is holding any locks, and reports the first one if so: 302 + */ 303 + void mutex_debug_check_no_locks_held(struct task_struct *task) 304 + { 305 + struct list_head *curr, *next; 306 + struct thread_info *t; 307 + unsigned long flags; 308 + struct mutex *lock; 309 + 310 + if (!debug_mutex_on) 311 + return; 312 + 313 + debug_spin_lock_save(&debug_mutex_lock, flags); 314 + list_for_each_safe(curr, next, &debug_mutex_held_locks) { 315 + lock = list_entry(curr, struct mutex, held_list); 316 + t = lock->owner; 317 + if (t != task->thread_info) 318 + continue; 319 + list_del_init(curr); 320 + DEBUG_OFF(); 321 + debug_spin_lock_restore(&debug_mutex_lock, flags); 322 + 323 + printk("BUG: %s/%d, lock held at task exit time!\n", 324 + task->comm, task->pid); 325 + printk_lock(lock, 1); 326 + if (lock->owner != task->thread_info) 327 + printk("exiting task is not even the owner??\n"); 328 + return; 329 + } 330 + debug_spin_lock_restore(&debug_mutex_lock, flags); 331 + } 332 + 333 + /* 334 + * Called when kernel memory is freed (or unmapped), or if a mutex 335 + * is destroyed or reinitialized - this code checks whether there is 336 + * any held lock in the memory range of <from> to <to>: 337 + */ 338 + void mutex_debug_check_no_locks_freed(const void *from, const void *to) 339 + { 340 + struct list_head *curr, *next; 341 + unsigned long flags; 342 + struct mutex *lock; 343 + void *lock_addr; 344 + 345 + if (!debug_mutex_on) 346 + return; 347 + 348 + debug_spin_lock_save(&debug_mutex_lock, flags); 349 + list_for_each_safe(curr, next, &debug_mutex_held_locks) { 350 + lock = list_entry(curr, struct mutex, held_list); 351 + lock_addr = lock; 352 + if (lock_addr < from || lock_addr >= to) 353 + continue; 354 + list_del_init(curr); 355 + DEBUG_OFF(); 356 + debug_spin_lock_restore(&debug_mutex_lock, flags); 357 + 358 + printk("BUG: %s/%d, active lock [%p(%p-%p)] freed!\n", 359 + current->comm, current->pid, lock, from, to); 360 + dump_stack(); 361 + printk_lock(lock, 1); 362 + if (lock->owner != current_thread_info()) 363 + printk("freeing task is not even the owner??\n"); 364 + return; 365 + } 366 + debug_spin_lock_restore(&debug_mutex_lock, flags); 367 + } 368 + 369 + /* 370 + * Must be called with lock->wait_lock held. 
371 + */ 372 + void debug_mutex_set_owner(struct mutex *lock, 373 + struct thread_info *new_owner __IP_DECL__) 374 + { 375 + lock->owner = new_owner; 376 + DEBUG_WARN_ON(!list_empty(&lock->held_list)); 377 + if (debug_mutex_on) { 378 + list_add_tail(&lock->held_list, &debug_mutex_held_locks); 379 + lock->acquire_ip = ip; 380 + } 381 + } 382 + 383 + void debug_mutex_init_waiter(struct mutex_waiter *waiter) 384 + { 385 + memset(waiter, 0x11, sizeof(*waiter)); 386 + waiter->magic = waiter; 387 + INIT_LIST_HEAD(&waiter->list); 388 + } 389 + 390 + void debug_mutex_wake_waiter(struct mutex *lock, struct mutex_waiter *waiter) 391 + { 392 + SMP_DEBUG_WARN_ON(!spin_is_locked(&lock->wait_lock)); 393 + DEBUG_WARN_ON(list_empty(&lock->wait_list)); 394 + DEBUG_WARN_ON(waiter->magic != waiter); 395 + DEBUG_WARN_ON(list_empty(&waiter->list)); 396 + } 397 + 398 + void debug_mutex_free_waiter(struct mutex_waiter *waiter) 399 + { 400 + DEBUG_WARN_ON(!list_empty(&waiter->list)); 401 + memset(waiter, 0x22, sizeof(*waiter)); 402 + } 403 + 404 + void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter, 405 + struct thread_info *ti __IP_DECL__) 406 + { 407 + SMP_DEBUG_WARN_ON(!spin_is_locked(&lock->wait_lock)); 408 + check_deadlock(lock, 0, ti, ip); 409 + /* Mark the current thread as blocked on the lock: */ 410 + ti->task->blocked_on = waiter; 411 + waiter->lock = lock; 412 + } 413 + 414 + void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, 415 + struct thread_info *ti) 416 + { 417 + DEBUG_WARN_ON(list_empty(&waiter->list)); 418 + DEBUG_WARN_ON(waiter->task != ti->task); 419 + DEBUG_WARN_ON(ti->task->blocked_on != waiter); 420 + ti->task->blocked_on = NULL; 421 + 422 + list_del_init(&waiter->list); 423 + waiter->task = NULL; 424 + } 425 + 426 + void debug_mutex_unlock(struct mutex *lock) 427 + { 428 + DEBUG_WARN_ON(lock->magic != lock); 429 + DEBUG_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next); 430 + DEBUG_WARN_ON(lock->owner != current_thread_info()); 431 + if (debug_mutex_on) { 432 + DEBUG_WARN_ON(list_empty(&lock->held_list)); 433 + list_del_init(&lock->held_list); 434 + } 435 + } 436 + 437 + void debug_mutex_init(struct mutex *lock, const char *name) 438 + { 439 + /* 440 + * Make sure we are not reinitializing a held lock: 441 + */ 442 + mutex_debug_check_no_locks_freed((void *)lock, (void *)(lock + 1)); 443 + lock->owner = NULL; 444 + INIT_LIST_HEAD(&lock->held_list); 445 + lock->name = name; 446 + lock->magic = lock; 447 + } 448 + 449 + /*** 450 + * mutex_destroy - mark a mutex unusable 451 + * @lock: the mutex to be destroyed 452 + * 453 + * This function marks the mutex uninitialized, and any subsequent 454 + * use of the mutex is forbidden. The mutex must not be locked when 455 + * this function is called. 456 + */ 457 + void fastcall mutex_destroy(struct mutex *lock) 458 + { 459 + DEBUG_WARN_ON(mutex_is_locked(lock)); 460 + lock->magic = NULL; 461 + } 462 + 463 + EXPORT_SYMBOL_GPL(mutex_destroy); 464 +
+134
kernel/mutex-debug.h
···
··· 1 + /* 2 + * Mutexes: blocking mutual exclusion locks 3 + * 4 + * started by Ingo Molnar: 5 + * 6 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 7 + * 8 + * This file contains mutex debugging related internal declarations, 9 + * prototypes and inline functions, for the CONFIG_DEBUG_MUTEXES case. 10 + * More details are in kernel/mutex-debug.c. 11 + */ 12 + 13 + extern spinlock_t debug_mutex_lock; 14 + extern struct list_head debug_mutex_held_locks; 15 + extern int debug_mutex_on; 16 + 17 + /* 18 + * In the debug case we carry the caller's instruction pointer into 19 + * other functions, but we dont want the function argument overhead 20 + * in the nondebug case - hence these macros: 21 + */ 22 + #define __IP_DECL__ , unsigned long ip 23 + #define __IP__ , ip 24 + #define __RET_IP__ , (unsigned long)__builtin_return_address(0) 25 + 26 + /* 27 + * This must be called with lock->wait_lock held. 28 + */ 29 + extern void debug_mutex_set_owner(struct mutex *lock, 30 + struct thread_info *new_owner __IP_DECL__); 31 + 32 + static inline void debug_mutex_clear_owner(struct mutex *lock) 33 + { 34 + lock->owner = NULL; 35 + } 36 + 37 + extern void debug_mutex_init_waiter(struct mutex_waiter *waiter); 38 + extern void debug_mutex_wake_waiter(struct mutex *lock, 39 + struct mutex_waiter *waiter); 40 + extern void debug_mutex_free_waiter(struct mutex_waiter *waiter); 41 + extern void debug_mutex_add_waiter(struct mutex *lock, 42 + struct mutex_waiter *waiter, 43 + struct thread_info *ti __IP_DECL__); 44 + extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, 45 + struct thread_info *ti); 46 + extern void debug_mutex_unlock(struct mutex *lock); 47 + extern void debug_mutex_init(struct mutex *lock, const char *name); 48 + 49 + #define debug_spin_lock(lock) \ 50 + do { \ 51 + local_irq_disable(); \ 52 + if (debug_mutex_on) \ 53 + spin_lock(lock); \ 54 + } while (0) 55 + 56 + #define debug_spin_unlock(lock) \ 57 + do { \ 58 + if (debug_mutex_on) \ 59 + spin_unlock(lock); \ 60 + local_irq_enable(); \ 61 + preempt_check_resched(); \ 62 + } while (0) 63 + 64 + #define debug_spin_lock_save(lock, flags) \ 65 + do { \ 66 + local_irq_save(flags); \ 67 + if (debug_mutex_on) \ 68 + spin_lock(lock); \ 69 + } while (0) 70 + 71 + #define debug_spin_lock_restore(lock, flags) \ 72 + do { \ 73 + if (debug_mutex_on) \ 74 + spin_unlock(lock); \ 75 + local_irq_restore(flags); \ 76 + preempt_check_resched(); \ 77 + } while (0) 78 + 79 + #define spin_lock_mutex(lock) \ 80 + do { \ 81 + struct mutex *l = container_of(lock, struct mutex, wait_lock); \ 82 + \ 83 + DEBUG_WARN_ON(in_interrupt()); \ 84 + debug_spin_lock(&debug_mutex_lock); \ 85 + spin_lock(lock); \ 86 + DEBUG_WARN_ON(l->magic != l); \ 87 + } while (0) 88 + 89 + #define spin_unlock_mutex(lock) \ 90 + do { \ 91 + spin_unlock(lock); \ 92 + debug_spin_unlock(&debug_mutex_lock); \ 93 + } while (0) 94 + 95 + #define DEBUG_OFF() \ 96 + do { \ 97 + if (debug_mutex_on) { \ 98 + debug_mutex_on = 0; \ 99 + console_verbose(); \ 100 + if (spin_is_locked(&debug_mutex_lock)) \ 101 + spin_unlock(&debug_mutex_lock); \ 102 + } \ 103 + } while (0) 104 + 105 + #define DEBUG_BUG() \ 106 + do { \ 107 + if (debug_mutex_on) { \ 108 + DEBUG_OFF(); \ 109 + BUG(); \ 110 + } \ 111 + } while (0) 112 + 113 + #define DEBUG_WARN_ON(c) \ 114 + do { \ 115 + if (unlikely(c && debug_mutex_on)) { \ 116 + DEBUG_OFF(); \ 117 + WARN_ON(1); \ 118 + } \ 119 + } while (0) 120 + 121 + # define DEBUG_BUG_ON(c) \ 122 + do { \ 123 + if 
(unlikely(c)) \ 124 + DEBUG_BUG(); \ 125 + } while (0) 126 + 127 + #ifdef CONFIG_SMP 128 + # define SMP_DEBUG_WARN_ON(c) DEBUG_WARN_ON(c) 129 + # define SMP_DEBUG_BUG_ON(c) DEBUG_BUG_ON(c) 130 + #else 131 + # define SMP_DEBUG_WARN_ON(c) do { } while (0) 132 + # define SMP_DEBUG_BUG_ON(c) do { } while (0) 133 + #endif 134 +
+325
kernel/mutex.c
···
···
1 + /*
2 + * kernel/mutex.c
3 + *
4 + * Mutexes: blocking mutual exclusion locks
5 + *
6 + * Started by Ingo Molnar:
7 + *
8 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
9 + *
10 + * Many thanks to Arjan van de Ven, Thomas Gleixner, Steven Rostedt and
11 + * David Howells for suggestions and improvements.
12 + *
13 + * Also see Documentation/mutex-design.txt.
14 + */
15 + #include <linux/mutex.h>
16 + #include <linux/sched.h>
17 + #include <linux/module.h>
18 + #include <linux/spinlock.h>
19 + #include <linux/interrupt.h>
20 +
21 + /*
22 + * In the DEBUG case we are using the "NULL fastpath" for mutexes,
23 + * which forces all calls into the slowpath:
24 + */
25 + #ifdef CONFIG_DEBUG_MUTEXES
26 + # include "mutex-debug.h"
27 + # include <asm-generic/mutex-null.h>
28 + #else
29 + # include "mutex.h"
30 + # include <asm/mutex.h>
31 + #endif
32 +
33 + /***
34 + * mutex_init - initialize the mutex
35 + * @lock: the mutex to be initialized
36 + *
37 + * Initialize the mutex to unlocked state.
38 + *
39 + * It is not allowed to initialize an already locked mutex.
40 + */
41 + void fastcall __mutex_init(struct mutex *lock, const char *name)
42 + {
43 + atomic_set(&lock->count, 1);
44 + spin_lock_init(&lock->wait_lock);
45 + INIT_LIST_HEAD(&lock->wait_list);
46 +
47 + debug_mutex_init(lock, name);
48 + }
49 +
50 + EXPORT_SYMBOL(__mutex_init);
51 +
52 + /*
53 + * We split the mutex lock/unlock logic into separate fastpath and
54 + * slowpath functions, to reduce the register pressure on the fastpath.
55 + * We also put the fastpath first in the kernel image, to make sure the
56 + * branch is predicted by the CPU as default-untaken.
57 + */
58 + static void fastcall noinline __sched
59 + __mutex_lock_slowpath(atomic_t *lock_count __IP_DECL__);
60 +
61 + /***
62 + * mutex_lock - acquire the mutex
63 + * @lock: the mutex to be acquired
64 + *
65 + * Lock the mutex exclusively for this task. If the mutex is not
66 + * available right now, it will sleep until it can get it.
67 + *
68 + * The mutex must later on be released by the same task that
69 + * acquired it. Recursive locking is not allowed. The task
70 + * may not exit without first unlocking the mutex. Also, kernel
71 + * memory where the mutex resides must not be freed with
72 + * the mutex still locked. The mutex must first be initialized
73 + * (or statically defined) before it can be locked. memset()-ing
74 + * the mutex to 0 is not allowed.
75 + *
76 + * ( The CONFIG_DEBUG_MUTEXES .config option turns on debugging
77 + * checks that will enforce the restrictions and will also do
78 + * deadlock debugging. )
79 + *
80 + * This function is similar to (but not equivalent to) down().
81 + */
82 + void fastcall __sched mutex_lock(struct mutex *lock)
83 + {
84 + /*
85 + * The locking fastpath is the 1->0 transition from
86 + * 'unlocked' into 'locked' state.
87 + *
88 + * NOTE: if asm/mutex.h is included, then some architectures
89 + * rely on mutex_lock() having _no other code_ here but this
90 + * fastpath. That allows the assembly fastpath to do
91 + * tail-merging optimizations. (If you want to put testcode
92 + * here, do it under #ifndef CONFIG_DEBUG_MUTEXES.)
93 + */
94 + __mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
95 + }
96 +
97 + EXPORT_SYMBOL(mutex_lock);
98 +
99 + static void fastcall noinline __sched
100 + __mutex_unlock_slowpath(atomic_t *lock_count __IP_DECL__);
101 +
102 + /***
103 + * mutex_unlock - release the mutex
104 + * @lock: the mutex to be released
105 + *
106 + * Unlock a mutex that has been locked by this task previously.
107 + *
108 + * This function must not be used in interrupt context. Unlocking
109 + * a mutex that is not locked is not allowed.
110 + *
111 + * This function is similar to (but not equivalent to) up().
112 + */
113 + void fastcall __sched mutex_unlock(struct mutex *lock)
114 + {
115 + /*
116 + * The unlocking fastpath is the 0->1 transition from 'locked'
117 + * into 'unlocked' state:
118 + *
119 + * NOTE: no other code must be here - see mutex_lock().
120 + */
121 + __mutex_fastpath_unlock(&lock->count, __mutex_unlock_slowpath);
122 + }
123 +
124 + EXPORT_SYMBOL(mutex_unlock);
125 +
126 + /*
127 + * Lock a mutex (possibly interruptible), slowpath:
128 + */
129 + static inline int __sched
130 + __mutex_lock_common(struct mutex *lock, long state __IP_DECL__)
131 + {
132 + struct task_struct *task = current;
133 + struct mutex_waiter waiter;
134 + unsigned int old_val;
135 +
136 + debug_mutex_init_waiter(&waiter);
137 +
138 + spin_lock_mutex(&lock->wait_lock);
139 +
140 + debug_mutex_add_waiter(lock, &waiter, task->thread_info, ip);
141 +
142 + /* add waiting tasks to the end of the waitqueue (FIFO): */
143 + list_add_tail(&waiter.list, &lock->wait_list);
144 + waiter.task = task;
145 +
146 + for (;;) {
147 + /*
148 + * Let's try to take the lock again - this is needed even if
149 + * we get here for the first time (shortly after failing to
150 + * acquire the lock), to make sure that we get a wakeup once
151 + * it's unlocked. Later on, if we sleep, this is the
152 + * operation that gives us the lock. We xchg it to -1, so
153 + * that when we release the lock, we properly wake up the
154 + * other waiters:
155 + */
156 + old_val = atomic_xchg(&lock->count, -1);
157 + if (old_val == 1)
158 + break;
159 +
160 + /*
161 + * got a signal? (This code gets eliminated in the
162 + * TASK_UNINTERRUPTIBLE case.)
163 + */
164 + if (unlikely(state == TASK_INTERRUPTIBLE &&
165 + signal_pending(task))) {
166 + mutex_remove_waiter(lock, &waiter, task->thread_info);
167 + spin_unlock_mutex(&lock->wait_lock);
168 +
169 + debug_mutex_free_waiter(&waiter);
170 + return -EINTR;
171 + }
172 + __set_task_state(task, state);
173 +
174 + /* didn't get the lock, go to sleep: */
175 + spin_unlock_mutex(&lock->wait_lock);
176 + schedule();
177 + spin_lock_mutex(&lock->wait_lock);
178 + }
179 +
180 + /* got the lock - rejoice! */
181 + mutex_remove_waiter(lock, &waiter, task->thread_info);
182 + debug_mutex_set_owner(lock, task->thread_info __IP__);
183 +
184 + /* set it to 0 if there are no waiters left: */
185 + if (likely(list_empty(&lock->wait_list)))
186 + atomic_set(&lock->count, 0);
187 +
188 + spin_unlock_mutex(&lock->wait_lock);
189 +
190 + debug_mutex_free_waiter(&waiter);
191 +
192 + DEBUG_WARN_ON(list_empty(&lock->held_list));
193 + DEBUG_WARN_ON(lock->owner != task->thread_info);
194 +
195 + return 0;
196 + }
197 +
198 + static void fastcall noinline __sched
199 + __mutex_lock_slowpath(atomic_t *lock_count __IP_DECL__)
200 + {
201 + struct mutex *lock = container_of(lock_count, struct mutex, count);
202 +
203 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE __IP__);
204 + }
205 +
206 + /*
207 + * Release the lock, slowpath:
208 + */
209 + static fastcall noinline void
210 + __mutex_unlock_slowpath(atomic_t *lock_count __IP_DECL__)
211 + {
212 + struct mutex *lock = container_of(lock_count, struct mutex, count);
213 +
214 + DEBUG_WARN_ON(lock->owner != current_thread_info());
215 +
216 + spin_lock_mutex(&lock->wait_lock);
217 +
218 + /*
219 + * some architectures leave the lock unlocked in the fastpath failure
220 + * case, others need to leave it locked. In the latter case we have to
221 + * unlock it here
222 + */
223 + if (__mutex_slowpath_needs_to_unlock())
224 + atomic_set(&lock->count, 1);
225 +
226 + debug_mutex_unlock(lock);
227 +
228 + if (!list_empty(&lock->wait_list)) {
229 + /* get the first entry from the wait-list: */
230 + struct mutex_waiter *waiter =
231 + list_entry(lock->wait_list.next,
232 + struct mutex_waiter, list);
233 +
234 + debug_mutex_wake_waiter(lock, waiter);
235 +
236 + wake_up_process(waiter->task);
237 + }
238 +
239 + debug_mutex_clear_owner(lock);
240 +
241 + spin_unlock_mutex(&lock->wait_lock);
242 + }
243 +
244 + /*
245 + * Here come the less common (and hence less performance-critical) APIs:
246 + * mutex_lock_interruptible() and mutex_trylock().
247 + */
248 + static int fastcall noinline __sched
249 + __mutex_lock_interruptible_slowpath(atomic_t *lock_count __IP_DECL__);
250 +
251 + /***
252 + * mutex_lock_interruptible - acquire the mutex, interruptible
253 + * @lock: the mutex to be acquired
254 + *
255 + * Lock the mutex like mutex_lock(), and return 0 if the mutex has
256 + * been acquired or sleep until the mutex becomes available. If a
257 + * signal arrives while waiting for the lock then this function
258 + * returns -EINTR.
259 + *
260 + * This function is similar to (but not equivalent to) down_interruptible().
261 + */
262 + int fastcall __sched mutex_lock_interruptible(struct mutex *lock)
263 + {
264 + /* NOTE: no other code must be here - see mutex_lock() */
265 + return __mutex_fastpath_lock_retval
266 + (&lock->count, __mutex_lock_interruptible_slowpath);
267 + }
268 +
269 + EXPORT_SYMBOL(mutex_lock_interruptible);
270 +
271 + static int fastcall noinline __sched
272 + __mutex_lock_interruptible_slowpath(atomic_t *lock_count __IP_DECL__)
273 + {
274 + struct mutex *lock = container_of(lock_count, struct mutex, count);
275 +
276 + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE __IP__);
277 + }
278 +
279 + /*
280 + * Spinlock based trylock: we take the spinlock and check whether we
281 + * can get the lock:
282 + */
283 + static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
284 + {
285 + struct mutex *lock = container_of(lock_count, struct mutex, count);
286 + int prev;
287 +
288 + spin_lock_mutex(&lock->wait_lock);
289 +
290 + prev = atomic_xchg(&lock->count, -1);
291 + if (likely(prev == 1))
292 + debug_mutex_set_owner(lock, current_thread_info() __RET_IP__);
293 + /* Set it back to 0 if there are no waiters: */
294 + if (likely(list_empty(&lock->wait_list)))
295 + atomic_set(&lock->count, 0);
296 +
297 + spin_unlock_mutex(&lock->wait_lock);
298 +
299 + return prev == 1;
300 + }
301 +
302 + /***
303 + * mutex_trylock - try to acquire the mutex, without waiting
304 + * @lock: the mutex to be acquired
305 + *
306 + * Try to acquire the mutex atomically. Returns 1 if the mutex
307 + * has been acquired successfully, and 0 on contention.
308 + *
309 + * NOTE: this function follows the spin_trylock() convention, so
310 + * it is negated to the down_trylock() return values! Be careful
311 + * about this when converting semaphore users to mutexes.
312 + *
313 + * This function must not be used in interrupt context. The
314 + * mutex must be released by the same task that acquired it.
315 + */
316 + int fastcall mutex_trylock(struct mutex *lock)
317 + {
318 + return __mutex_fastpath_trylock(&lock->count,
319 + __mutex_trylock_slowpath);
320 + }
321 +
322 + EXPORT_SYMBOL(mutex_trylock);
323 +
324 +
325 +
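The inverted trylock convention that the comment above warns about is the classic hazard when converting semaphore users; side by side (sketch, lock names hypothetical):

	/* semaphore style: nonzero return means the lock was NOT taken */
	if (down_trylock(&some_sem))
		return -EBUSY;

	/* mutex style: the sense is inverted, 1 means taken */
	if (!mutex_trylock(&some_mutex))
		return -EBUSY;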
+35
kernel/mutex.h
···
··· 1 + /* 2 + * Mutexes: blocking mutual exclusion locks 3 + * 4 + * started by Ingo Molnar: 5 + * 6 + * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 7 + * 8 + * This file contains mutex debugging-related internal prototypes, for the 9 + * !CONFIG_DEBUG_MUTEXES case. Most of them are NOPs: 10 + */ 11 + 12 + #define spin_lock_mutex(lock) spin_lock(lock) 13 + #define spin_unlock_mutex(lock) spin_unlock(lock) 14 + #define mutex_remove_waiter(lock, waiter, ti) \ 15 + __list_del((waiter)->list.prev, (waiter)->list.next) 16 + 17 + #define DEBUG_WARN_ON(c) do { } while (0) 18 + #define debug_mutex_set_owner(lock, new_owner) do { } while (0) 19 + #define debug_mutex_clear_owner(lock) do { } while (0) 20 + #define debug_mutex_init_waiter(waiter) do { } while (0) 21 + #define debug_mutex_wake_waiter(lock, waiter) do { } while (0) 22 + #define debug_mutex_free_waiter(waiter) do { } while (0) 23 + #define debug_mutex_add_waiter(lock, waiter, ti, ip) do { } while (0) 24 + #define debug_mutex_unlock(lock) do { } while (0) 25 + #define debug_mutex_init(lock, name) do { } while (0) 26 + 27 + /* 28 + * Return-address parameters/declarations. They are very useful for 29 + * debugging, but add overhead in the !DEBUG case - so we go to the 30 + * trouble of using this not too elegant but zero-cost solution: 31 + */ 32 + #define __IP_DECL__ 33 + #define __IP__ 34 + #define __RET_IP__ 35 +
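
The three empty macros at the end are the whole trick behind the zero-cost return-address passing: in the !DEBUG case they expand to nothing, so the extra parameter simply does not exist anywhere. A sketch of what the CONFIG_DEBUG_MUTEXES counterparts plausibly look like - an assumption inferred from the NOP definitions above, not a verbatim copy of the debug header:

	/* sketch: debug-side expansions (assumed, for illustration) */
	#define __IP_DECL__	, unsigned long ip
	#define __IP__		, ip
	#define __RET_IP__	, (unsigned long)__builtin_return_address(0)

With definitions like these, every slowpath gains a caller-IP argument in debug kernels, while non-debug kernels compile the very same call sites with no extra argument at all.
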
+1
kernel/sched.c
··· 4386 } while_each_thread(g, p); 4387 4388 read_unlock(&tasklist_lock); 4389 } 4390 4391 /**
··· 4386 } while_each_thread(g, p); 4387 4388 read_unlock(&tasklist_lock); 4389 + mutex_debug_show_all_locks(); 4390 } 4391 4392 /**
+8
lib/Kconfig.debug
··· 95 if kernel code uses it in a preemption-unsafe way. Also, the kernel 96 will detect preemption count underflows. 97 98 config DEBUG_SPINLOCK 99 bool "Spinlock debugging" 100 depends on DEBUG_KERNEL
··· 95 if kernel code uses it in a preemption-unsafe way. Also, the kernel 96 will detect preemption count underflows. 97 98 + config DEBUG_MUTEXES 99 + bool "Mutex debugging, deadlock detection" 100 + default y 101 + depends on DEBUG_KERNEL 102 + help 103 + This allows mutex semantics violations and mutex-related deadlocks 104 + (lockups) to be detected and reported automatically. 105 + 106 config DEBUG_SPINLOCK 107 bool "Spinlock debugging" 108 depends on DEBUG_KERNEL
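
Since the new option defaults to y whenever DEBUG_KERNEL is set, most debug builds pick it up automatically; an explicit .config fragment would be:

	CONFIG_DEBUG_KERNEL=y
	CONFIG_DEBUG_MUTEXES=y

With it enabled, the debug_mutex_*() and DEBUG_WARN_ON() hooks that kernel/mutex.h stubs out above become real checks.
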
+15 -15
mm/filemap.c
··· 61 * ->swap_lock (exclusive_swap_page, others) 62 * ->mapping->tree_lock 63 * 64 - * ->i_sem 65 * ->i_mmap_lock (truncate->unmap_mapping_range) 66 * 67 * ->mmap_sem ··· 73 * ->lock_page (access_process_vm) 74 * 75 * ->mmap_sem 76 - * ->i_sem (msync) 77 * 78 - * ->i_sem 79 * ->i_alloc_sem (various) 80 * 81 * ->inode_lock ··· 276 * integrity" operation. It waits upon in-flight writeout before starting and 277 * waiting upon new writeout. If there was an IO error, return it. 278 * 279 - * We need to re-take i_sem during the generic_osync_inode list walk because 280 * it is otherwise livelockable. 281 */ 282 int sync_page_range(struct inode *inode, struct address_space *mapping, ··· 290 return 0; 291 ret = filemap_fdatawrite_range(mapping, pos, pos + count - 1); 292 if (ret == 0) { 293 - down(&inode->i_sem); 294 ret = generic_osync_inode(inode, mapping, OSYNC_METADATA); 295 - up(&inode->i_sem); 296 } 297 if (ret == 0) 298 ret = wait_on_page_writeback_range(mapping, start, end); ··· 301 EXPORT_SYMBOL(sync_page_range); 302 303 /* 304 - * Note: Holding i_sem across sync_page_range_nolock is not a good idea 305 * as it forces O_SYNC writers to different parts of the same file 306 * to be serialised right until io completion. 307 */ ··· 1892 /* 1893 * Sync the fs metadata but not the minor inode changes and 1894 * of course not the data as we did direct DMA for the IO. 1895 - * i_sem is held, which protects generic_osync_inode() from 1896 * livelocking. 1897 */ 1898 if (written >= 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { ··· 2195 2196 BUG_ON(iocb->ki_pos != pos); 2197 2198 - down(&inode->i_sem); 2199 ret = __generic_file_aio_write_nolock(iocb, &local_iov, 1, 2200 &iocb->ki_pos); 2201 - up(&inode->i_sem); 2202 2203 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { 2204 ssize_t err; ··· 2220 struct iovec local_iov = { .iov_base = (void __user *)buf, 2221 .iov_len = count }; 2222 2223 - down(&inode->i_sem); 2224 ret = __generic_file_write_nolock(file, &local_iov, 1, ppos); 2225 - up(&inode->i_sem); 2226 2227 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { 2228 ssize_t err; ··· 2256 struct inode *inode = mapping->host; 2257 ssize_t ret; 2258 2259 - down(&inode->i_sem); 2260 ret = __generic_file_write_nolock(file, iov, nr_segs, ppos); 2261 - up(&inode->i_sem); 2262 2263 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { 2264 int err; ··· 2272 EXPORT_SYMBOL(generic_file_writev); 2273 2274 /* 2275 - * Called under i_sem for writes to S_ISREG files. Returns -EIO if something 2276 * went wrong during pagecache shootdown. 2277 */ 2278 static ssize_t
··· 61 * ->swap_lock (exclusive_swap_page, others) 62 * ->mapping->tree_lock 63 * 64 + * ->i_mutex 65 * ->i_mmap_lock (truncate->unmap_mapping_range) 66 * 67 * ->mmap_sem ··· 73 * ->lock_page (access_process_vm) 74 * 75 * ->mmap_sem 76 + * ->i_mutex (msync) 77 * 78 + * ->i_mutex 79 * ->i_alloc_sem (various) 80 * 81 * ->inode_lock ··· 276 * integrity" operation. It waits upon in-flight writeout before starting and 277 * waiting upon new writeout. If there was an IO error, return it. 278 * 279 + * We need to re-take i_mutex during the generic_osync_inode list walk because 280 * it is otherwise livelockable. 281 */ 282 int sync_page_range(struct inode *inode, struct address_space *mapping, ··· 290 return 0; 291 ret = filemap_fdatawrite_range(mapping, pos, pos + count - 1); 292 if (ret == 0) { 293 + mutex_lock(&inode->i_mutex); 294 ret = generic_osync_inode(inode, mapping, OSYNC_METADATA); 295 + mutex_unlock(&inode->i_mutex); 296 } 297 if (ret == 0) 298 ret = wait_on_page_writeback_range(mapping, start, end); ··· 301 EXPORT_SYMBOL(sync_page_range); 302 303 /* 304 + * Note: Holding i_mutex across sync_page_range_nolock is not a good idea 305 * as it forces O_SYNC writers to different parts of the same file 306 * to be serialised right until io completion. 307 */ ··· 1892 /* 1893 * Sync the fs metadata but not the minor inode changes and 1894 * of course not the data as we did direct DMA for the IO. 1895 + * i_mutex is held, which protects generic_osync_inode() from 1896 * livelocking. 1897 */ 1898 if (written >= 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { ··· 2195 2196 BUG_ON(iocb->ki_pos != pos); 2197 2198 + mutex_lock(&inode->i_mutex); 2199 ret = __generic_file_aio_write_nolock(iocb, &local_iov, 1, 2200 &iocb->ki_pos); 2201 + mutex_unlock(&inode->i_mutex); 2202 2203 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { 2204 ssize_t err; ··· 2220 struct iovec local_iov = { .iov_base = (void __user *)buf, 2221 .iov_len = count }; 2222 2223 + mutex_lock(&inode->i_mutex); 2224 ret = __generic_file_write_nolock(file, &local_iov, 1, ppos); 2225 + mutex_unlock(&inode->i_mutex); 2226 2227 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { 2228 ssize_t err; ··· 2256 struct inode *inode = mapping->host; 2257 ssize_t ret; 2258 2259 + mutex_lock(&inode->i_mutex); 2260 ret = __generic_file_write_nolock(file, iov, nr_segs, ppos); 2261 + mutex_unlock(&inode->i_mutex); 2262 2263 if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) { 2264 int err; ··· 2272 EXPORT_SYMBOL(generic_file_writev); 2273 2274 /* 2275 + * Called under i_mutex for writes to S_ISREG files. Returns -EIO if something 2276 * went wrong during pagecache shootdown. 2277 */ 2278 static ssize_t
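
The filemap.c hunks above, and the remaining mm/, net/ and security/ hunks below, all perform one mechanical substitution. The pattern, sketched on an invented helper:

	/* before: the inode semaphore used as a single-holder lock */
	down(&inode->i_sem);
	touch_inode(inode);		/* invented placeholder */
	up(&inode->i_sem);

	/* after: i_mutex, with the same blocking semantics */
	mutex_lock(&inode->i_mutex);
	touch_inode(inode);
	mutex_unlock(&inode->i_mutex);

Only the lock/unlock calls and the comments naming the lock change; the critical sections themselves are untouched.
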
+3 -3
mm/filemap_xip.c
··· 338 *ppos = pos; 339 /* 340 * No need to use i_size_read() here, the i_size 341 - * cannot change under us because we hold i_sem. 342 */ 343 if (pos > inode->i_size) { 344 i_size_write(inode, pos); ··· 358 loff_t pos; 359 ssize_t ret; 360 361 - down(&inode->i_sem); 362 363 if (!access_ok(VERIFY_READ, buf, len)) { 364 ret=-EFAULT; ··· 390 out_backing: 391 current->backing_dev_info = NULL; 392 out_up: 393 - up(&inode->i_sem); 394 return ret; 395 } 396 EXPORT_SYMBOL_GPL(xip_file_write);
··· 338 *ppos = pos; 339 /* 340 * No need to use i_size_read() here, the i_size 341 + * cannot change under us because we hold i_mutex. 342 */ 343 if (pos > inode->i_size) { 344 i_size_write(inode, pos); ··· 358 loff_t pos; 359 ssize_t ret; 360 361 + mutex_lock(&inode->i_mutex); 362 363 if (!access_ok(VERIFY_READ, buf, len)) { 364 ret=-EFAULT; ··· 390 out_backing: 391 current->backing_dev_info = NULL; 392 out_up: 393 + mutex_unlock(&inode->i_mutex); 394 return ret; 395 } 396 EXPORT_SYMBOL_GPL(xip_file_write);
+2 -2
mm/memory.c
··· 1784 if (!inode->i_op || !inode->i_op->truncate_range) 1785 return -ENOSYS; 1786 1787 - down(&inode->i_sem); 1788 down_write(&inode->i_alloc_sem); 1789 unmap_mapping_range(mapping, offset, (end - offset), 1); 1790 truncate_inode_pages_range(mapping, offset, end); 1791 inode->i_op->truncate_range(inode, offset, end); 1792 up_write(&inode->i_alloc_sem); 1793 - up(&inode->i_sem); 1794 1795 return 0; 1796 }
··· 1784 if (!inode->i_op || !inode->i_op->truncate_range) 1785 return -ENOSYS; 1786 1787 + mutex_lock(&inode->i_mutex); 1788 down_write(&inode->i_alloc_sem); 1789 unmap_mapping_range(mapping, offset, (end - offset), 1); 1790 truncate_inode_pages_range(mapping, offset, end); 1791 inode->i_op->truncate_range(inode, offset, end); 1792 up_write(&inode->i_alloc_sem); 1793 + mutex_unlock(&inode->i_mutex); 1794 1795 return 0; 1796 }
+1 -1
mm/msync.c
··· 137 ret = filemap_fdatawrite(mapping); 138 if (file->f_op && file->f_op->fsync) { 139 /* 140 - * We don't take i_sem here because mmap_sem 141 * is already held. 142 */ 143 err = file->f_op->fsync(file,file->f_dentry,1);
··· 137 ret = filemap_fdatawrite(mapping); 138 if (file->f_op && file->f_op->fsync) { 139 /* 140 + * We don't take i_mutex here because mmap_sem 141 * is already held. 142 */ 143 err = file->f_op->fsync(file,file->f_dentry,1);
+3
mm/page_alloc.c
··· 415 int reserved = 0; 416 417 arch_free_page(page, order); 418 419 #ifndef CONFIG_MMU 420 for (i = 1 ; i < (1 << order) ; ++i)
··· 415 int reserved = 0; 416 417 arch_free_page(page, order); 418 + if (!PageHighMem(page)) 419 + mutex_debug_check_no_locks_freed(page_address(page), 420 + page_address(page+(1<<order))); 421 422 #ifndef CONFIG_MMU 423 for (i = 1 ; i < (1 << order) ; ++i)
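
The new hook (mm/slab.c below adds the same check for kfree()) lets the mutex debug code catch memory being returned to the allocator while a mutex embedded in it is still held. An invented example of the bug class this targets:

	struct obj {				/* invented structure */
		struct mutex lock;
		/* ... */
	};
	struct obj *p = (struct obj *)__get_free_page(GFP_KERNEL);

	mutex_init(&p->lock);
	mutex_lock(&p->lock);
	free_page((unsigned long)p);		/* held lock inside the freed
						   range - the debug check fires */

Without the check, the page could be reused while the stale mutex still has an owner (or waiters) pointing into it.
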
+4 -4
mm/rmap.c
··· 20 /* 21 * Lock ordering in mm: 22 * 23 - * inode->i_sem (while writing or truncating, not reading or faulting) 24 * inode->i_alloc_sem 25 * 26 * When a page fault occurs in writing from user to file, down_read 27 - * of mmap_sem nests within i_sem; in sys_msync, i_sem nests within 28 - * down_read of mmap_sem; i_sem and down_write of mmap_sem are never 29 - * taken together; in truncation, i_sem is taken outermost. 30 * 31 * mm->mmap_sem 32 * page->flags PG_locked (lock_page)
··· 20 /* 21 * Lock ordering in mm: 22 * 23 + * inode->i_mutex (while writing or truncating, not reading or faulting) 24 * inode->i_alloc_sem 25 * 26 * When a page fault occurs in writing from user to file, down_read 27 + * of mmap_sem nests within i_mutex; in sys_msync, i_mutex nests within 28 + * down_read of mmap_sem; i_mutex and down_write of mmap_sem are never 29 + * taken together; in truncation, i_mutex is taken outermost. 30 * 31 * mm->mmap_sem 32 * page->flags PG_locked (lock_page)
+3 -3
mm/shmem.c
··· 1370 if (!access_ok(VERIFY_READ, buf, count)) 1371 return -EFAULT; 1372 1373 - down(&inode->i_sem); 1374 1375 pos = *ppos; 1376 written = 0; ··· 1455 if (written) 1456 err = written; 1457 out: 1458 - up(&inode->i_sem); 1459 return err; 1460 } 1461 ··· 1491 1492 /* 1493 * We must evaluate after, since reads (unlike writes) 1494 - * are called without i_sem protection against truncate 1495 */ 1496 nr = PAGE_CACHE_SIZE; 1497 i_size = i_size_read(inode);
··· 1370 if (!access_ok(VERIFY_READ, buf, count)) 1371 return -EFAULT; 1372 1373 + mutex_lock(&inode->i_mutex); 1374 1375 pos = *ppos; 1376 written = 0; ··· 1455 if (written) 1456 err = written; 1457 out: 1458 + mutex_unlock(&inode->i_mutex); 1459 return err; 1460 } 1461 ··· 1491 1492 /* 1493 * We must evaluate after, since reads (unlike writes) 1494 + * are called without i_mutex protection against truncate 1495 */ 1496 nr = PAGE_CACHE_SIZE; 1497 i_size = i_size_read(inode);
+1
mm/slab.c
··· 3071 local_irq_save(flags); 3072 kfree_debugcheck(objp); 3073 c = page_get_cache(virt_to_page(objp)); 3074 __cache_free(c, (void *)objp); 3075 local_irq_restore(flags); 3076 }
··· 3071 local_irq_save(flags); 3072 kfree_debugcheck(objp); 3073 c = page_get_cache(virt_to_page(objp)); 3074 + mutex_debug_check_no_locks_freed(objp, objp+obj_reallen(c)); 3075 __cache_free(c, (void *)objp); 3076 local_irq_restore(flags); 3077 }
+4 -4
mm/swapfile.c
··· 1187 set_blocksize(bdev, p->old_block_size); 1188 bd_release(bdev); 1189 } else { 1190 - down(&inode->i_sem); 1191 inode->i_flags &= ~S_SWAPFILE; 1192 - up(&inode->i_sem); 1193 } 1194 filp_close(swap_file, NULL); 1195 err = 0; ··· 1406 p->bdev = bdev; 1407 } else if (S_ISREG(inode->i_mode)) { 1408 p->bdev = inode->i_sb->s_bdev; 1409 - down(&inode->i_sem); 1410 did_down = 1; 1411 if (IS_SWAPFILE(inode)) { 1412 error = -EBUSY; ··· 1596 if (did_down) { 1597 if (!error) 1598 inode->i_flags |= S_SWAPFILE; 1599 - up(&inode->i_sem); 1600 } 1601 return error; 1602 }
··· 1187 set_blocksize(bdev, p->old_block_size); 1188 bd_release(bdev); 1189 } else { 1190 + mutex_lock(&inode->i_mutex); 1191 inode->i_flags &= ~S_SWAPFILE; 1192 + mutex_unlock(&inode->i_mutex); 1193 } 1194 filp_close(swap_file, NULL); 1195 err = 0; ··· 1406 p->bdev = bdev; 1407 } else if (S_ISREG(inode->i_mode)) { 1408 p->bdev = inode->i_sb->s_bdev; 1409 + mutex_lock(&inode->i_mutex); 1410 did_down = 1; 1411 if (IS_SWAPFILE(inode)) { 1412 error = -EBUSY; ··· 1596 if (did_down) { 1597 if (!error) 1598 inode->i_flags |= S_SWAPFILE; 1599 + mutex_unlock(&inode->i_mutex); 1600 } 1601 return error; 1602 }
+1 -1
mm/truncate.c
··· 196 * @mapping: mapping to truncate 197 * @lstart: offset from which to truncate 198 * 199 - * Called under (and serialised by) inode->i_sem. 200 */ 201 void truncate_inode_pages(struct address_space *mapping, loff_t lstart) 202 {
··· 196 * @mapping: mapping to truncate 197 * @lstart: offset from which to truncate 198 * 199 + * Called under (and serialised by) inode->i_mutex. 200 */ 201 void truncate_inode_pages(struct address_space *mapping, loff_t lstart) 202 {
+29 -29
net/sunrpc/rpc_pipe.c
··· 69 struct rpc_inode *rpci = (struct rpc_inode *)data; 70 struct inode *inode = &rpci->vfs_inode; 71 72 - down(&inode->i_sem); 73 if (rpci->ops == NULL) 74 goto out; 75 if (rpci->nreaders == 0 && !list_empty(&rpci->pipe)) 76 __rpc_purge_upcall(inode, -ETIMEDOUT); 77 out: 78 - up(&inode->i_sem); 79 } 80 81 int ··· 84 struct rpc_inode *rpci = RPC_I(inode); 85 int res = -EPIPE; 86 87 - down(&inode->i_sem); 88 if (rpci->ops == NULL) 89 goto out; 90 if (rpci->nreaders) { ··· 100 res = 0; 101 } 102 out: 103 - up(&inode->i_sem); 104 wake_up(&rpci->waitq); 105 return res; 106 } ··· 116 { 117 struct rpc_inode *rpci = RPC_I(inode); 118 119 - down(&inode->i_sem); 120 if (rpci->ops != NULL) { 121 rpci->nreaders = 0; 122 __rpc_purge_list(rpci, &rpci->in_upcall, -EPIPE); ··· 127 rpci->ops = NULL; 128 } 129 rpc_inode_setowner(inode, NULL); 130 - up(&inode->i_sem); 131 cancel_delayed_work(&rpci->queue_timeout); 132 flush_scheduled_work(); 133 } ··· 154 struct rpc_inode *rpci = RPC_I(inode); 155 int res = -ENXIO; 156 157 - down(&inode->i_sem); 158 if (rpci->ops != NULL) { 159 if (filp->f_mode & FMODE_READ) 160 rpci->nreaders ++; ··· 162 rpci->nwriters ++; 163 res = 0; 164 } 165 - up(&inode->i_sem); 166 return res; 167 } ··· 172 struct rpc_inode *rpci = RPC_I(inode); 173 struct rpc_pipe_msg *msg; 174 175 - down(&inode->i_sem); 176 if (rpci->ops == NULL) 177 goto out; 178 msg = (struct rpc_pipe_msg *)filp->private_data; ··· 190 if (rpci->ops->release_pipe) 191 rpci->ops->release_pipe(inode); 192 out: 193 - up(&inode->i_sem); 194 return 0; 195 } ··· 202 struct rpc_pipe_msg *msg; 203 int res = 0; 204 205 - down(&inode->i_sem); 206 if (rpci->ops == NULL) { 207 res = -EPIPE; 208 goto out_unlock; ··· 229 rpci->ops->destroy_msg(msg); 230 } 231 out_unlock: 232 - up(&inode->i_sem); 233 return res; 234 } ··· 240 struct rpc_inode *rpci = RPC_I(inode); 241 int res; 242 243 - down(&inode->i_sem); 244 res = -EPIPE; 245 if (rpci->ops != NULL) 246 res = rpci->ops->downcall(filp, buf, len); 247 - up(&inode->i_sem); 248 return res; 249 } ··· 322 323 if (!ret) { 324 struct seq_file *m = file->private_data; 325 - down(&inode->i_sem); 326 clnt = RPC_I(inode)->private; 327 if (clnt) { 328 atomic_inc(&clnt->cl_users); ··· 331 single_release(inode, file); 332 ret = -EINVAL; 333 } 334 - up(&inode->i_sem); 335 } 336 return ret; 337 } ··· 491 struct dentry *dentry, *dvec[10]; 492 int n = 0; 493 494 - down(&dir->i_sem); 495 repeat: 496 spin_lock(&dcache_lock); 497 list_for_each_safe(pos, next, &parent->d_subdirs) { ··· 519 } while (n); 520 goto repeat; 521 } 522 - up(&dir->i_sem); 523 } 524 525 static int ··· 532 struct dentry *dentry; 533 int mode, i; 534 535 - down(&dir->i_sem); 536 for (i = start; i < eof; i++) { 537 dentry = d_alloc_name(parent, files[i].name); 538 if (!dentry) ··· 552 dir->i_nlink++; 553 d_add(dentry, inode); 554 } 555 - up(&dir->i_sem); 556 return 0; 557 out_bad: 558 - up(&dir->i_sem); 559 printk(KERN_WARNING "%s: %s failed to populate directory %s\n", 560 __FILE__, __FUNCTION__, parent->d_name.name); 561 return -ENOMEM; ··· 609 if ((error = rpc_lookup_parent(path, nd)) != 0) 610 return ERR_PTR(error); 611 dir = nd->dentry->d_inode; 612 - down(&dir->i_sem); 613 dentry = lookup_hash(nd); 614 if (IS_ERR(dentry)) 615 goto out_err; ··· 620 } 621 return dentry; 622 out_err: 623 - up(&dir->i_sem); 624 rpc_release_path(nd); 625 return dentry; 626 } ··· 646 if (error) 647 goto err_depopulate; 648 out: 649 - up(&dir->i_sem); 650 rpc_release_path(&nd); 651 return dentry; 652 err_depopulate: ··· 671 if ((error = rpc_lookup_parent(path, &nd)) != 0) 672 return error; 673 dir = nd.dentry->d_inode; 674 - down(&dir->i_sem); 675 dentry = lookup_hash(&nd); 676 if (IS_ERR(dentry)) { 677 error = PTR_ERR(dentry); ··· 681 error = __rpc_rmdir(dir, dentry); 682 dput(dentry); 683 out_release: 684 - up(&dir->i_sem); 685 rpc_release_path(&nd); 686 return error; 687 } ··· 710 rpci->ops = ops; 711 inode_dir_notify(dir, DN_CREATE); 712 out: 713 - up(&dir->i_sem); 714 rpc_release_path(&nd); 715 return dentry; 716 err_dput: ··· 732 if ((error = rpc_lookup_parent(path, &nd)) != 0) 733 return error; 734 dir = nd.dentry->d_inode; 735 - down(&dir->i_sem); 736 dentry = lookup_hash(&nd); 737 if (IS_ERR(dentry)) { 738 error = PTR_ERR(dentry); ··· 746 dput(dentry); 747 inode_dir_notify(dir, DN_DELETE); 748 out_release: 749 - up(&dir->i_sem); 750 rpc_release_path(&nd); 751 return error; 752 }
··· 69 struct rpc_inode *rpci = (struct rpc_inode *)data; 70 struct inode *inode = &rpci->vfs_inode; 71 72 + mutex_lock(&inode->i_mutex); 73 if (rpci->ops == NULL) 74 goto out; 75 if (rpci->nreaders == 0 && !list_empty(&rpci->pipe)) 76 __rpc_purge_upcall(inode, -ETIMEDOUT); 77 out: 78 + mutex_unlock(&inode->i_mutex); 79 } 80 81 int ··· 84 struct rpc_inode *rpci = RPC_I(inode); 85 int res = -EPIPE; 86 87 + mutex_lock(&inode->i_mutex); 88 if (rpci->ops == NULL) 89 goto out; 90 if (rpci->nreaders) { ··· 100 res = 0; 101 } 102 out: 103 + mutex_unlock(&inode->i_mutex); 104 wake_up(&rpci->waitq); 105 return res; 106 } ··· 116 { 117 struct rpc_inode *rpci = RPC_I(inode); 118 119 + mutex_lock(&inode->i_mutex); 120 if (rpci->ops != NULL) { 121 rpci->nreaders = 0; 122 __rpc_purge_list(rpci, &rpci->in_upcall, -EPIPE); ··· 127 rpci->ops = NULL; 128 } 129 rpc_inode_setowner(inode, NULL); 130 + mutex_unlock(&inode->i_mutex); 131 cancel_delayed_work(&rpci->queue_timeout); 132 flush_scheduled_work(); 133 } ··· 154 struct rpc_inode *rpci = RPC_I(inode); 155 int res = -ENXIO; 156 157 + mutex_lock(&inode->i_mutex); 158 if (rpci->ops != NULL) { 159 if (filp->f_mode & FMODE_READ) 160 rpci->nreaders ++; ··· 162 rpci->nwriters ++; 163 res = 0; 164 } 165 + mutex_unlock(&inode->i_mutex); 166 return res; 167 } ··· 172 struct rpc_inode *rpci = RPC_I(inode); 173 struct rpc_pipe_msg *msg; 174 175 + mutex_lock(&inode->i_mutex); 176 if (rpci->ops == NULL) 177 goto out; 178 msg = (struct rpc_pipe_msg *)filp->private_data; ··· 190 if (rpci->ops->release_pipe) 191 rpci->ops->release_pipe(inode); 192 out: 193 + mutex_unlock(&inode->i_mutex); 194 return 0; 195 } ··· 202 struct rpc_pipe_msg *msg; 203 int res = 0; 204 205 + mutex_lock(&inode->i_mutex); 206 if (rpci->ops == NULL) { 207 res = -EPIPE; 208 goto out_unlock; ··· 229 rpci->ops->destroy_msg(msg); 230 } 231 out_unlock: 232 + mutex_unlock(&inode->i_mutex); 233 return res; 234 } ··· 240 struct rpc_inode *rpci = RPC_I(inode); 241 int res; 242 243 + mutex_lock(&inode->i_mutex); 244 res = -EPIPE; 245 if (rpci->ops != NULL) 246 res = rpci->ops->downcall(filp, buf, len); 247 + mutex_unlock(&inode->i_mutex); 248 return res; 249 } ··· 322 323 if (!ret) { 324 struct seq_file *m = file->private_data; 325 + mutex_lock(&inode->i_mutex); 326 clnt = RPC_I(inode)->private; 327 if (clnt) { 328 atomic_inc(&clnt->cl_users); ··· 331 single_release(inode, file); 332 ret = -EINVAL; 333 } 334 + mutex_unlock(&inode->i_mutex); 335 } 336 return ret; 337 } ··· 491 struct dentry *dentry, *dvec[10]; 492 int n = 0; 493 494 + mutex_lock(&dir->i_mutex); 495 repeat: 496 spin_lock(&dcache_lock); 497 list_for_each_safe(pos, next, &parent->d_subdirs) { ··· 519 } while (n); 520 goto repeat; 521 } 522 + mutex_unlock(&dir->i_mutex); 523 } 524 525 static int ··· 532 struct dentry *dentry; 533 int mode, i; 534 535 + mutex_lock(&dir->i_mutex); 536 for (i = start; i < eof; i++) { 537 dentry = d_alloc_name(parent, files[i].name); 538 if (!dentry) ··· 552 dir->i_nlink++; 553 d_add(dentry, inode); 554 } 555 + mutex_unlock(&dir->i_mutex); 556 return 0; 557 out_bad: 558 + mutex_unlock(&dir->i_mutex); 559 printk(KERN_WARNING "%s: %s failed to populate directory %s\n", 560 __FILE__, __FUNCTION__, parent->d_name.name); 561 return -ENOMEM; ··· 609 if ((error = rpc_lookup_parent(path, nd)) != 0) 610 return ERR_PTR(error); 611 dir = nd->dentry->d_inode; 612 + mutex_lock(&dir->i_mutex); 613 dentry = lookup_hash(nd); 614 if (IS_ERR(dentry)) 615 goto out_err; ··· 620 } 621 return dentry; 622 out_err: 623 + mutex_unlock(&dir->i_mutex); 624 rpc_release_path(nd); 625 return dentry; 626 } ··· 646 if (error) 647 goto err_depopulate; 648 out: 649 + mutex_unlock(&dir->i_mutex); 650 rpc_release_path(&nd); 651 return dentry; 652 err_depopulate: ··· 671 if ((error = rpc_lookup_parent(path, &nd)) != 0) 672 return error; 673 dir = nd.dentry->d_inode; 674 + mutex_lock(&dir->i_mutex); 675 dentry = lookup_hash(&nd); 676 if (IS_ERR(dentry)) { 677 error = PTR_ERR(dentry); ··· 681 error = __rpc_rmdir(dir, dentry); 682 dput(dentry); 683 out_release: 684 + mutex_unlock(&dir->i_mutex); 685 rpc_release_path(&nd); 686 return error; 687 } ··· 710 rpci->ops = ops; 711 inode_dir_notify(dir, DN_CREATE); 712 out: 713 + mutex_unlock(&dir->i_mutex); 714 rpc_release_path(&nd); 715 return dentry; 716 err_dput: ··· 732 if ((error = rpc_lookup_parent(path, &nd)) != 0) 733 return error; 734 dir = nd.dentry->d_inode; 735 + mutex_lock(&dir->i_mutex); 736 dentry = lookup_hash(&nd); 737 if (IS_ERR(dentry)) { 738 error = PTR_ERR(dentry); ··· 746 dput(dentry); 747 inode_dir_notify(dir, DN_DELETE); 748 out_release: 749 + mutex_unlock(&dir->i_mutex); 750 rpc_release_path(&nd); 751 return error; 752 }
+2 -2
net/unix/af_unix.c
··· 784 err = vfs_mknod(nd.dentry->d_inode, dentry, mode, 0); 785 if (err) 786 goto out_mknod_dput; 787 - up(&nd.dentry->d_inode->i_sem); 788 dput(nd.dentry); 789 nd.dentry = dentry; 790 ··· 823 out_mknod_dput: 824 dput(dentry); 825 out_mknod_unlock: 826 - up(&nd.dentry->d_inode->i_sem); 827 path_release(&nd); 828 out_mknod_parent: 829 if (err==-EEXIST)
··· 784 err = vfs_mknod(nd.dentry->d_inode, dentry, mode, 0); 785 if (err) 786 goto out_mknod_dput; 787 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 788 dput(nd.dentry); 789 nd.dentry = dentry; 790 ··· 823 out_mknod_dput: 824 dput(dentry); 825 out_mknod_unlock: 826 + mutex_unlock(&nd.dentry->d_inode->i_mutex); 827 path_release(&nd); 828 out_mknod_parent: 829 if (err==-EEXIST)
+4 -4
security/inode.c
··· 172 return -EFAULT; 173 } 174 175 - down(&parent->d_inode->i_sem); 176 *dentry = lookup_one_len(name, parent, strlen(name)); 177 if (!IS_ERR(dentry)) { 178 if ((mode & S_IFMT) == S_IFDIR) ··· 181 error = create(parent->d_inode, *dentry, mode); 182 } else 183 error = PTR_ERR(dentry); 184 - up(&parent->d_inode->i_sem); 185 186 return error; 187 } ··· 302 if (!parent || !parent->d_inode) 303 return; 304 305 - down(&parent->d_inode->i_sem); 306 if (positive(dentry)) { 307 if (dentry->d_inode) { 308 if (S_ISDIR(dentry->d_inode->i_mode)) ··· 312 dput(dentry); 313 } 314 } 315 - up(&parent->d_inode->i_sem); 316 simple_release_fs(&mount, &mount_count); 317 } 318 EXPORT_SYMBOL_GPL(securityfs_remove);
··· 172 return -EFAULT; 173 } 174 175 + mutex_lock(&parent->d_inode->i_mutex); 176 *dentry = lookup_one_len(name, parent, strlen(name)); 177 if (!IS_ERR(dentry)) { 178 if ((mode & S_IFMT) == S_IFDIR) ··· 181 error = create(parent->d_inode, *dentry, mode); 182 } else 183 error = PTR_ERR(dentry); 184 + mutex_unlock(&parent->d_inode->i_mutex); 185 186 return error; 187 } ··· 302 if (!parent || !parent->d_inode) 303 return; 304 305 + mutex_lock(&parent->d_inode->i_mutex); 306 if (positive(dentry)) { 307 if (dentry->d_inode) { 308 if (S_ISDIR(dentry->d_inode->i_mode)) ··· 312 dput(dentry); 313 } 314 } 315 + mutex_unlock(&parent->d_inode->i_mutex); 316 simple_release_fs(&mount, &mount_count); 317 } 318 EXPORT_SYMBOL_GPL(securityfs_remove);
-2
sound/core/oss/pcm_oss.c
··· 2135 substream = pcm_oss_file->streams[SNDRV_PCM_STREAM_PLAYBACK]; 2136 if (substream == NULL) 2137 return -ENXIO; 2138 - up(&file->f_dentry->d_inode->i_sem); 2139 result = snd_pcm_oss_write1(substream, buf, count); 2140 - down(&file->f_dentry->d_inode->i_sem); 2141 #ifdef OSS_DEBUG 2142 printk("pcm_oss: write %li bytes (wrote %li bytes)\n", (long)count, (long)result); 2143 #endif
··· 2135 substream = pcm_oss_file->streams[SNDRV_PCM_STREAM_PLAYBACK]; 2136 if (substream == NULL) 2137 return -ENXIO; 2138 result = snd_pcm_oss_write1(substream, buf, count); 2139 #ifdef OSS_DEBUG 2140 printk("pcm_oss: write %li bytes (wrote %li bytes)\n", (long)count, (long)result); 2141 #endif
-4
sound/core/seq/seq_memory.c
··· 32 #include "seq_info.h" 33 #include "seq_lock.h" 34 35 - /* semaphore in struct file record */ 36 - #define semaphore_of(fp) ((fp)->f_dentry->d_inode->i_sem) 37 - 38 - 39 static inline int snd_seq_pool_available(struct snd_seq_pool *pool) 40 { 41 return pool->total_elements - atomic_read(&pool->counter);
··· 32 #include "seq_info.h" 33 #include "seq_lock.h" 34 35 static inline int snd_seq_pool_available(struct snd_seq_pool *pool) 36 { 37 return pool->total_elements - atomic_read(&pool->counter);