Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

fs: dcache remove dcache_lock

dcache_lock no longer protects anything. Remove it.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>

+108 -306
+8 -8
Documentation/filesystems/Locking
··· 21 21 char *(*d_dname)((struct dentry *dentry, char *buffer, int buflen); 22 22 23 23 locking rules: 24 - dcache_lock rename_lock ->d_lock may block 25 - d_revalidate: no no no yes 26 - d_hash no no no no 27 - d_compare: no yes no no 28 - d_delete: yes no yes no 29 - d_release: no no no yes 30 - d_iput: no no no yes 31 - d_dname: no no no no 24 + rename_lock ->d_lock may block 25 + d_revalidate: no no yes 26 + d_hash no no no 27 + d_compare: yes no no 28 + d_delete: no yes no 29 + d_release: no no yes 30 + d_iput: no no yes 31 + d_dname: no no no 32 32 33 33 --------------------------- inode_operations --------------------------- 34 34 prototypes:
+18 -20
Documentation/filesystems/dentry-locking.txt
··· 31 31 doesn't acquire the dcache_lock for this and rely on RCU to ensure 32 32 that the dentry has not been *freed*. 33 33 34 + dcache_lock no longer exists, dentry locking is explained in fs/dcache.c 34 35 35 36 Dcache locking details 36 37 ====================== ··· 51 50 52 51 Dcache is a complex data structure with the hash table entries also 53 52 linked together in other lists. In 2.4 kernel, dcache_lock protected 54 - all the lists. We applied RCU only on hash chain walking. The rest of 55 - the lists are still protected by dcache_lock. Some of the important 56 - changes are : 53 + all the lists. RCU dentry hash walking works like this: 57 54 58 55 1. The deletion from hash chain is done using hlist_del_rcu() macro 59 56 which doesn't initialize next pointer of the deleted dentry and 60 57 this allows us to walk safely lock-free while a deletion is 61 - happening. 58 + happening. This is a standard hlist_rcu iteration. 62 59 63 60 2. Insertion of a dentry into the hash table is done using 64 61 hlist_add_head_rcu() which take care of ordering the writes - the ··· 65 66 which has since been replaced by hlist_for_each_entry_rcu(), while 66 67 walking the hash chain. The only requirement is that all 67 68 initialization to the dentry must be done before 68 - hlist_add_head_rcu() since we don't have dcache_lock protection 69 - while traversing the hash chain. This isn't different from the 70 - existing code. 69 + hlist_add_head_rcu() since we don't have lock protection 70 + while traversing the hash chain. 71 71 72 - 3. The dentry looked up without holding dcache_lock by cannot be 73 - returned for walking if it is unhashed. It then may have a NULL 74 - d_inode or other bogosity since RCU doesn't protect the other 75 - fields in the dentry. We therefore use a flag DCACHE_UNHASHED to 76 - indicate unhashed dentries and use this in conjunction with a 77 - per-dentry lock (d_lock). 
Once looked up without the dcache_lock, 78 - we acquire the per-dentry lock (d_lock) and check if the dentry is 79 - unhashed. If so, the look-up is failed. If not, the reference count 80 - of the dentry is increased and the dentry is returned. 72 + 3. The dentry looked up without holding locks cannot be returned for 73 + walking if it is unhashed. It then may have a NULL d_inode or other 74 + bogosity since RCU doesn't protect the other fields in the dentry. We 75 + therefore use a flag DCACHE_UNHASHED to indicate unhashed dentries 76 + and use this in conjunction with a per-dentry lock (d_lock). Once 77 + looked up without locks, we acquire the per-dentry lock (d_lock) and 78 + check if the dentry is unhashed. If so, the look-up is failed. If not, 79 + the reference count of the dentry is increased and the dentry is 80 + returned. 81 81 82 82 4. Once a dentry is looked up, it must be ensured during the path walk 83 83 for that component it doesn't go away. In pre-2.5.10 code, this was ··· 84 86 In some sense, dcache_rcu path walking looks like the pre-2.5.10 85 87 version. 86 88 87 - 5. All dentry hash chain updates must take the dcache_lock as well as 88 - the per-dentry lock in that order. dput() does this to ensure that 89 - a dentry that has just been looked up in another CPU doesn't get 90 - deleted before dget() can be done on it. 89 + 5. All dentry hash chain updates must take the per-dentry lock (see 90 + fs/dcache.c). This excludes dput() to ensure that a dentry that has 91 + been looked up concurrently does not get deleted before dget() can 92 + take a ref. 91 93 92 94 6. There are several ways to do reference counting of RCU protected 93 95 objects. One such example is in ipv4 route cache where deferred
+7 -1
Documentation/filesystems/porting
··· 216 216 ->d_parent changes are not protected by BKL anymore. Read access is safe 217 217 if at least one of the following is true: 218 218 * filesystem has no cross-directory rename() 219 - * dcache_lock is held 220 219 * we know that parent had been locked (e.g. we are looking at 221 220 ->d_parent of ->lookup() argument). 222 221 * we are called from ->rename(). ··· 339 340 .d_hash() calling convention and locking rules are significantly 340 341 changed. Read updated documentation in Documentation/filesystems/vfs.txt (and 341 342 look at examples of other filesystems) for guidance. 343 + 344 + --- 345 + [mandatory] 346 + dcache_lock is gone, replaced by fine grained locks. See fs/dcache.c 347 + for details of what locks to replace dcache_lock with in order to protect 348 + particular things. Most of the time, a filesystem only needs ->d_lock, which 349 + protects *all* the dcache state of a given dentry.
+1 -4
arch/powerpc/platforms/cell/spufs/inode.c
··· 159 159 160 160 mutex_lock(&dir->d_inode->i_mutex); 161 161 list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_u.d_child) { 162 - spin_lock(&dcache_lock); 163 162 spin_lock(&dentry->d_lock); 164 163 if (!(d_unhashed(dentry)) && dentry->d_inode) { 165 164 dget_locked_dlock(dentry); 166 165 __d_drop(dentry); 167 166 spin_unlock(&dentry->d_lock); 168 167 simple_unlink(dir->d_inode, dentry); 169 - /* XXX: what is dcache_lock protecting here? Other 168 + /* XXX: what was dcache_lock protecting here? Other 170 169 * filesystems (IB, configfs) release dcache_lock 171 170 * before unlink */ 172 - spin_unlock(&dcache_lock); 173 171 dput(dentry); 174 172 } else { 175 173 spin_unlock(&dentry->d_lock); 176 - spin_unlock(&dcache_lock); 177 174 } 178 175 } 179 176 shrink_dcache_parent(dir);
+1 -5
drivers/infiniband/hw/ipath/ipath_fs.c
··· 277 277 goto bail; 278 278 } 279 279 280 - spin_lock(&dcache_lock); 281 280 spin_lock(&tmp->d_lock); 282 281 if (!(d_unhashed(tmp) && tmp->d_inode)) { 283 282 dget_locked_dlock(tmp); 284 283 __d_drop(tmp); 285 284 spin_unlock(&tmp->d_lock); 286 - spin_unlock(&dcache_lock); 287 285 simple_unlink(parent->d_inode, tmp); 288 - } else { 286 + } else 289 287 spin_unlock(&tmp->d_lock); 290 - spin_unlock(&dcache_lock); 291 - } 292 288 293 289 ret = 0; 294 290 bail:
-3
drivers/infiniband/hw/qib/qib_fs.c
··· 453 453 goto bail; 454 454 } 455 455 456 - spin_lock(&dcache_lock); 457 456 spin_lock(&tmp->d_lock); 458 457 if (!(d_unhashed(tmp) && tmp->d_inode)) { 459 458 dget_locked_dlock(tmp); 460 459 __d_drop(tmp); 461 460 spin_unlock(&tmp->d_lock); 462 - spin_unlock(&dcache_lock); 463 461 simple_unlink(parent->d_inode, tmp); 464 462 } else { 465 463 spin_unlock(&tmp->d_lock); 466 - spin_unlock(&dcache_lock); 467 464 } 468 465 469 466 ret = 0;
-2
drivers/staging/pohmelfs/path_entry.c
··· 101 101 d = first; 102 102 seq = read_seqbegin(&rename_lock); 103 103 rcu_read_lock(); 104 - spin_lock(&dcache_lock); 105 104 106 105 if (!IS_ROOT(d) && d_unhashed(d)) 107 106 len += UNHASHED_OBSCURE_STRING_SIZE; /* Obscure " (deleted)" string */ ··· 109 110 len += d->d_name.len + 1; /* Plus slash */ 110 111 d = d->d_parent; 111 112 } 112 - spin_unlock(&dcache_lock); 113 113 rcu_read_unlock(); 114 114 if (read_seqretry(&rename_lock, seq)) 115 115 goto rename_retry;
-4
drivers/staging/smbfs/cache.c
··· 62 62 struct list_head *next; 63 63 struct dentry *dentry; 64 64 65 - spin_lock(&dcache_lock); 66 65 spin_lock(&parent->d_lock); 67 66 next = parent->d_subdirs.next; 68 67 while (next != &parent->d_subdirs) { ··· 71 72 next = next->next; 72 73 } 73 74 spin_unlock(&parent->d_lock); 74 - spin_unlock(&dcache_lock); 75 75 } 76 76 77 77 /* ··· 96 98 } 97 99 98 100 /* If a pointer is invalid, we search the dentry. */ 99 - spin_lock(&dcache_lock); 100 101 spin_lock(&parent->d_lock); 101 102 next = parent->d_subdirs.next; 102 103 while (next != &parent->d_subdirs) { ··· 112 115 dent = NULL; 113 116 out_unlock: 114 117 spin_unlock(&parent->d_lock); 115 - spin_unlock(&dcache_lock); 116 118 return dent; 117 119 } 118 120
-3
drivers/usb/core/inode.c
··· 343 343 { 344 344 struct list_head *list; 345 345 346 - spin_lock(&dcache_lock); 347 346 spin_lock(&dentry->d_lock); 348 347 list_for_each(list, &dentry->d_subdirs) { 349 348 struct dentry *de = list_entry(list, struct dentry, d_u.d_child); ··· 351 352 if (usbfs_positive(de)) { 352 353 spin_unlock(&de->d_lock); 353 354 spin_unlock(&dentry->d_lock); 354 - spin_unlock(&dcache_lock); 355 355 return 0; 356 356 } 357 357 spin_unlock(&de->d_lock); 358 358 } 359 359 spin_unlock(&dentry->d_lock); 360 - spin_unlock(&dcache_lock); 361 360 return 1; 362 361 } 363 362
-2
fs/9p/vfs_inode.c
··· 270 270 { 271 271 struct dentry *dentry; 272 272 273 - spin_lock(&dcache_lock); 274 273 spin_lock(&dcache_inode_lock); 275 274 /* Directory should have only one entry. */ 276 275 BUG_ON(S_ISDIR(inode->i_mode) && !list_is_singular(&inode->i_dentry)); 277 276 dentry = list_entry(inode->i_dentry.next, struct dentry, d_alias); 278 277 spin_unlock(&dcache_inode_lock); 279 - spin_unlock(&dcache_lock); 280 278 return dentry; 281 279 } 282 280
-2
fs/affs/amigaffs.c
··· 128 128 void *data = dentry->d_fsdata; 129 129 struct list_head *head, *next; 130 130 131 - spin_lock(&dcache_lock); 132 131 spin_lock(&dcache_inode_lock); 133 132 head = &inode->i_dentry; 134 133 next = head->next; ··· 140 141 next = next->next; 141 142 } 142 143 spin_unlock(&dcache_inode_lock); 143 - spin_unlock(&dcache_lock); 144 144 } 145 145 146 146
+3
fs/autofs4/autofs_i.h
··· 16 16 #include <linux/auto_fs4.h> 17 17 #include <linux/auto_dev-ioctl.h> 18 18 #include <linux/mutex.h> 19 + #include <linux/spinlock.h> 19 20 #include <linux/list.h> 20 21 21 22 /* This is the range of ioctl() numbers we claim as ours */ ··· 60 59 printk(KERN_ERR "pid %d: %s: " fmt "\n", \ 61 60 current->pid, __func__, ##args); \ 62 61 } while (0) 62 + 63 + extern spinlock_t autofs4_lock; 63 64 64 65 /* Unified info structure. This is pointed to by both the dentry and 65 66 inode structures. Each file in the filesystem has an instance of this
+5 -5
fs/autofs4/expire.c
··· 102 102 if (prev == NULL) 103 103 return dget(prev); 104 104 105 - spin_lock(&dcache_lock); 105 + spin_lock(&autofs4_lock); 106 106 relock: 107 107 p = prev; 108 108 spin_lock(&p->d_lock); ··· 114 114 115 115 if (p == root) { 116 116 spin_unlock(&p->d_lock); 117 - spin_unlock(&dcache_lock); 117 + spin_unlock(&autofs4_lock); 118 118 dput(prev); 119 119 return NULL; 120 120 } ··· 144 144 dget_dlock(ret); 145 145 spin_unlock(&ret->d_lock); 146 146 spin_unlock(&p->d_lock); 147 - spin_unlock(&dcache_lock); 147 + spin_unlock(&autofs4_lock); 148 148 149 149 dput(prev); 150 150 ··· 408 408 ino->flags |= AUTOFS_INF_EXPIRING; 409 409 init_completion(&ino->expire_complete); 410 410 spin_unlock(&sbi->fs_lock); 411 - spin_lock(&dcache_lock); 411 + spin_lock(&autofs4_lock); 412 412 spin_lock(&expired->d_parent->d_lock); 413 413 spin_lock_nested(&expired->d_lock, DENTRY_D_LOCK_NESTED); 414 414 list_move(&expired->d_parent->d_subdirs, &expired->d_u.d_child); 415 415 spin_unlock(&expired->d_lock); 416 416 spin_unlock(&expired->d_parent->d_lock); 417 - spin_unlock(&dcache_lock); 417 + spin_unlock(&autofs4_lock); 418 418 return expired; 419 419 } 420 420
+23 -21
fs/autofs4/root.c
··· 23 23 24 24 #include "autofs_i.h" 25 25 26 + DEFINE_SPINLOCK(autofs4_lock); 27 + 26 28 static int autofs4_dir_symlink(struct inode *,struct dentry *,const char *); 27 29 static int autofs4_dir_unlink(struct inode *,struct dentry *); 28 30 static int autofs4_dir_rmdir(struct inode *,struct dentry *); ··· 144 142 * autofs file system so just let the libfs routines handle 145 143 * it. 146 144 */ 147 - spin_lock(&dcache_lock); 145 + spin_lock(&autofs4_lock); 148 146 spin_lock(&dentry->d_lock); 149 147 if (!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs)) { 150 148 spin_unlock(&dentry->d_lock); 151 - spin_unlock(&dcache_lock); 149 + spin_unlock(&autofs4_lock); 152 150 return -ENOENT; 153 151 } 154 152 spin_unlock(&dentry->d_lock); 155 - spin_unlock(&dcache_lock); 153 + spin_unlock(&autofs4_lock); 156 154 157 155 out: 158 156 return dcache_dir_open(inode, file); ··· 257 255 /* We trigger a mount for almost all flags */ 258 256 lookup_type = autofs4_need_mount(nd->flags); 259 257 spin_lock(&sbi->fs_lock); 260 - spin_lock(&dcache_lock); 258 + spin_lock(&autofs4_lock); 261 259 spin_lock(&dentry->d_lock); 262 260 if (!(lookup_type || ino->flags & AUTOFS_INF_PENDING)) { 263 261 spin_unlock(&dentry->d_lock); 264 - spin_unlock(&dcache_lock); 262 + spin_unlock(&autofs4_lock); 265 263 spin_unlock(&sbi->fs_lock); 266 264 goto follow; 267 265 } ··· 274 272 if (ino->flags & AUTOFS_INF_PENDING || 275 273 (!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs))) { 276 274 spin_unlock(&dentry->d_lock); 277 - spin_unlock(&dcache_lock); 275 + spin_unlock(&autofs4_lock); 278 276 spin_unlock(&sbi->fs_lock); 279 277 280 278 status = try_to_fill_dentry(dentry, nd->flags); ··· 284 282 goto follow; 285 283 } 286 284 spin_unlock(&dentry->d_lock); 287 - spin_unlock(&dcache_lock); 285 + spin_unlock(&autofs4_lock); 288 286 spin_unlock(&sbi->fs_lock); 289 287 follow: 290 288 /* ··· 355 353 return 0; 356 354 357 355 /* Check for a non-mountpoint directory with no contents */ 358 - 
spin_lock(&dcache_lock); 356 + spin_lock(&autofs4_lock); 359 357 spin_lock(&dentry->d_lock); 360 358 if (S_ISDIR(dentry->d_inode->i_mode) && 361 359 !d_mountpoint(dentry) && list_empty(&dentry->d_subdirs)) { 362 360 DPRINTK("dentry=%p %.*s, emptydir", 363 361 dentry, dentry->d_name.len, dentry->d_name.name); 364 362 spin_unlock(&dentry->d_lock); 365 - spin_unlock(&dcache_lock); 363 + spin_unlock(&autofs4_lock); 366 364 367 365 /* The daemon never causes a mount to trigger */ 368 366 if (oz_mode) ··· 379 377 return status; 380 378 } 381 379 spin_unlock(&dentry->d_lock); 382 - spin_unlock(&dcache_lock); 380 + spin_unlock(&autofs4_lock); 383 381 384 382 return 1; 385 383 } ··· 434 432 const unsigned char *str = name->name; 435 433 struct list_head *p, *head; 436 434 437 - spin_lock(&dcache_lock); 435 + spin_lock(&autofs4_lock); 438 436 spin_lock(&sbi->lookup_lock); 439 437 head = &sbi->active_list; 440 438 list_for_each(p, head) { ··· 467 465 dget_dlock(active); 468 466 spin_unlock(&active->d_lock); 469 467 spin_unlock(&sbi->lookup_lock); 470 - spin_unlock(&dcache_lock); 468 + spin_unlock(&autofs4_lock); 471 469 return active; 472 470 } 473 471 next: 474 472 spin_unlock(&active->d_lock); 475 473 } 476 474 spin_unlock(&sbi->lookup_lock); 477 - spin_unlock(&dcache_lock); 475 + spin_unlock(&autofs4_lock); 478 476 479 477 return NULL; 480 478 } ··· 489 487 const unsigned char *str = name->name; 490 488 struct list_head *p, *head; 491 489 492 - spin_lock(&dcache_lock); 490 + spin_lock(&autofs4_lock); 493 491 spin_lock(&sbi->lookup_lock); 494 492 head = &sbi->expiring_list; 495 493 list_for_each(p, head) { ··· 522 520 dget_dlock(expiring); 523 521 spin_unlock(&expiring->d_lock); 524 522 spin_unlock(&sbi->lookup_lock); 525 - spin_unlock(&dcache_lock); 523 + spin_unlock(&autofs4_lock); 526 524 return expiring; 527 525 } 528 526 next: 529 527 spin_unlock(&expiring->d_lock); 530 528 } 531 529 spin_unlock(&sbi->lookup_lock); 532 - spin_unlock(&dcache_lock); 530 + 
spin_unlock(&autofs4_lock); 533 531 534 532 return NULL; 535 533 } ··· 765 763 766 764 dir->i_mtime = CURRENT_TIME; 767 765 768 - spin_lock(&dcache_lock); 766 + spin_lock(&autofs4_lock); 769 767 autofs4_add_expiring(dentry); 770 768 spin_lock(&dentry->d_lock); 771 769 __d_drop(dentry); 772 770 spin_unlock(&dentry->d_lock); 773 - spin_unlock(&dcache_lock); 771 + spin_unlock(&autofs4_lock); 774 772 775 773 return 0; 776 774 } ··· 787 785 if (!autofs4_oz_mode(sbi)) 788 786 return -EACCES; 789 787 790 - spin_lock(&dcache_lock); 788 + spin_lock(&autofs4_lock); 791 789 spin_lock(&sbi->lookup_lock); 792 790 spin_lock(&dentry->d_lock); 793 791 if (!list_empty(&dentry->d_subdirs)) { 794 792 spin_unlock(&dentry->d_lock); 795 793 spin_unlock(&sbi->lookup_lock); 796 - spin_unlock(&dcache_lock); 794 + spin_unlock(&autofs4_lock); 797 795 return -ENOTEMPTY; 798 796 } 799 797 __autofs4_add_expiring(dentry); 800 798 spin_unlock(&sbi->lookup_lock); 801 799 __d_drop(dentry); 802 800 spin_unlock(&dentry->d_lock); 803 - spin_unlock(&dcache_lock); 801 + spin_unlock(&autofs4_lock); 804 802 805 803 if (atomic_dec_and_test(&ino->count)) { 806 804 p_ino = autofs4_dentry_ino(dentry->d_parent);
+4 -3
fs/autofs4/waitq.c
··· 194 194 rename_retry: 195 195 buf = *name; 196 196 len = 0; 197 + 197 198 seq = read_seqbegin(&rename_lock); 198 199 rcu_read_lock(); 199 - spin_lock(&dcache_lock); 200 + spin_lock(&autofs4_lock); 200 201 for (tmp = dentry ; tmp != root ; tmp = tmp->d_parent) 201 202 len += tmp->d_name.len + 1; 202 203 203 204 if (!len || --len > NAME_MAX) { 204 - spin_unlock(&dcache_lock); 205 + spin_unlock(&autofs4_lock); 205 206 rcu_read_unlock(); 206 207 if (read_seqretry(&rename_lock, seq)) 207 208 goto rename_retry; ··· 218 217 p -= tmp->d_name.len; 219 218 strncpy(p, tmp->d_name.name, tmp->d_name.len); 220 219 } 221 - spin_unlock(&dcache_lock); 220 + spin_unlock(&autofs4_lock); 222 221 rcu_read_unlock(); 223 222 if (read_seqretry(&rename_lock, seq)) 224 223 goto rename_retry;
+1 -5
fs/ceph/dir.c
··· 112 112 dout("__dcache_readdir %p at %llu (last %p)\n", dir, filp->f_pos, 113 113 last); 114 114 115 - spin_lock(&dcache_lock); 116 115 spin_lock(&parent->d_lock); 117 116 118 117 /* start at beginning? */ ··· 155 156 dget_dlock(dentry); 156 157 spin_unlock(&dentry->d_lock); 157 158 spin_unlock(&parent->d_lock); 158 - spin_unlock(&dcache_lock); 159 159 160 160 dout(" %llu (%llu) dentry %p %.*s %p\n", di->offset, filp->f_pos, 161 161 dentry, dentry->d_name.len, dentry->d_name.name, dentry->d_inode); ··· 180 182 181 183 filp->f_pos++; 182 184 183 - /* make sure a dentry wasn't dropped while we didn't have dcache_lock */ 185 + /* make sure a dentry wasn't dropped while we didn't have parent lock */ 184 186 if (!ceph_i_test(dir, CEPH_I_COMPLETE)) { 185 187 dout(" lost I_COMPLETE on %p; falling back to mds\n", dir); 186 188 err = -EAGAIN; 187 189 goto out; 188 190 } 189 191 190 - spin_lock(&dcache_lock); 191 192 spin_lock(&parent->d_lock); 192 193 p = p->prev; /* advance to next dentry */ 193 194 goto more; 194 195 195 196 out_unlock: 196 197 spin_unlock(&parent->d_lock); 197 - spin_unlock(&dcache_lock); 198 198 out: 199 199 if (last) 200 200 dput(last);
-4
fs/ceph/inode.c
··· 841 841 di->offset = ceph_inode(inode)->i_max_offset++; 842 842 spin_unlock(&inode->i_lock); 843 843 844 - spin_lock(&dcache_lock); 845 844 spin_lock(&dir->d_lock); 846 845 spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED); 847 846 list_move(&dn->d_u.d_child, &dir->d_subdirs); ··· 848 849 dn->d_u.d_child.prev, dn->d_u.d_child.next); 849 850 spin_unlock(&dn->d_lock); 850 851 spin_unlock(&dir->d_lock); 851 - spin_unlock(&dcache_lock); 852 852 } 853 853 854 854 /* ··· 1231 1233 goto retry_lookup; 1232 1234 } else { 1233 1235 /* reorder parent's d_subdirs */ 1234 - spin_lock(&dcache_lock); 1235 1236 spin_lock(&parent->d_lock); 1236 1237 spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED); 1237 1238 list_move(&dn->d_u.d_child, &parent->d_subdirs); 1238 1239 spin_unlock(&dn->d_lock); 1239 1240 spin_unlock(&parent->d_lock); 1240 - spin_unlock(&dcache_lock); 1241 1241 } 1242 1242 1243 1243 di = dn->d_fsdata;
-3
fs/cifs/inode.c
··· 809 809 { 810 810 struct dentry *dentry; 811 811 812 - spin_lock(&dcache_lock); 813 812 spin_lock(&dcache_inode_lock); 814 813 list_for_each_entry(dentry, &inode->i_dentry, d_alias) { 815 814 if (!d_unhashed(dentry) || IS_ROOT(dentry)) { 816 815 spin_unlock(&dcache_inode_lock); 817 - spin_unlock(&dcache_lock); 818 816 return true; 819 817 } 820 818 } 821 819 spin_unlock(&dcache_inode_lock); 822 - spin_unlock(&dcache_lock); 823 820 return false; 824 821 } 825 822
-2
fs/coda/cache.c
··· 93 93 struct list_head *child; 94 94 struct dentry *de; 95 95 96 - spin_lock(&dcache_lock); 97 96 spin_lock(&parent->d_lock); 98 97 list_for_each(child, &parent->d_subdirs) 99 98 { ··· 103 104 coda_flag_inode(de->d_inode, flag); 104 105 } 105 106 spin_unlock(&parent->d_lock); 106 - spin_unlock(&dcache_lock); 107 107 return; 108 108 } 109 109
-2
fs/configfs/configfs_internal.h
··· 120 120 { 121 121 struct config_item * item = NULL; 122 122 123 - spin_lock(&dcache_lock); 124 123 spin_lock(&dentry->d_lock); 125 124 if (!d_unhashed(dentry)) { 126 125 struct configfs_dirent * sd = dentry->d_fsdata; ··· 130 131 item = config_item_get(sd->s_element); 131 132 } 132 133 spin_unlock(&dentry->d_lock); 133 - spin_unlock(&dcache_lock); 134 134 135 135 return item; 136 136 }
+1 -5
fs/configfs/inode.c
··· 250 250 struct dentry * dentry = sd->s_dentry; 251 251 252 252 if (dentry) { 253 - spin_lock(&dcache_lock); 254 253 spin_lock(&dentry->d_lock); 255 254 if (!(d_unhashed(dentry) && dentry->d_inode)) { 256 255 dget_locked_dlock(dentry); 257 256 __d_drop(dentry); 258 257 spin_unlock(&dentry->d_lock); 259 - spin_unlock(&dcache_lock); 260 258 simple_unlink(parent->d_inode, dentry); 261 - } else { 259 + } else 262 260 spin_unlock(&dentry->d_lock); 263 - spin_unlock(&dcache_lock); 264 - } 265 261 } 266 262 } 267 263
+20 -140
fs/dcache.c
··· 54 54 * - d_alias, d_inode 55 55 * 56 56 * Ordering: 57 - * dcache_lock 58 - * dcache_inode_lock 59 - * dentry->d_lock 60 - * dcache_lru_lock 61 - * dcache_hash_lock 57 + * dcache_inode_lock 58 + * dentry->d_lock 59 + * dcache_lru_lock 60 + * dcache_hash_lock 62 61 * 63 62 * If there is an ancestor relationship: 64 63 * dentry->d_parent->...->d_parent->d_lock ··· 76 77 __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_inode_lock); 77 78 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_hash_lock); 78 79 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_lru_lock); 79 - __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_lock); 80 80 __cacheline_aligned_in_smp DEFINE_SEQLOCK(rename_lock); 81 81 82 82 EXPORT_SYMBOL(rename_lock); 83 83 EXPORT_SYMBOL(dcache_inode_lock); 84 - EXPORT_SYMBOL(dcache_lock); 85 84 86 85 static struct kmem_cache *dentry_cache __read_mostly; 87 86 ··· 136 139 } 137 140 138 141 /* 139 - * no dcache_lock, please. 142 + * no locks, please. 140 143 */ 141 144 static void d_free(struct dentry *dentry) 142 145 { ··· 159 162 static void dentry_iput(struct dentry * dentry) 160 163 __releases(dentry->d_lock) 161 164 __releases(dcache_inode_lock) 162 - __releases(dcache_lock) 163 165 { 164 166 struct inode *inode = dentry->d_inode; 165 167 if (inode) { ··· 166 170 list_del_init(&dentry->d_alias); 167 171 spin_unlock(&dentry->d_lock); 168 172 spin_unlock(&dcache_inode_lock); 169 - spin_unlock(&dcache_lock); 170 173 if (!inode->i_nlink) 171 174 fsnotify_inoderemove(inode); 172 175 if (dentry->d_op && dentry->d_op->d_iput) ··· 175 180 } else { 176 181 spin_unlock(&dentry->d_lock); 177 182 spin_unlock(&dcache_inode_lock); 178 - spin_unlock(&dcache_lock); 179 183 } 180 184 } 181 185 ··· 229 235 * 230 236 * If this is the root of the dentry tree, return NULL. 231 237 * 232 - * dcache_lock and d_lock and d_parent->d_lock must be held by caller, and 233 - * are dropped by d_kill. 
238 + * dentry->d_lock and parent->d_lock must be held by caller, and are dropped by 239 + * d_kill. 234 240 */ 235 241 static struct dentry *d_kill(struct dentry *dentry, struct dentry *parent) 236 242 __releases(dentry->d_lock) 237 243 __releases(parent->d_lock) 238 244 __releases(dcache_inode_lock) 239 - __releases(dcache_lock) 240 245 { 241 246 dentry->d_parent = NULL; 242 247 list_del(&dentry->d_u.d_child); ··· 278 285 279 286 void d_drop(struct dentry *dentry) 280 287 { 281 - spin_lock(&dcache_lock); 282 288 spin_lock(&dentry->d_lock); 283 289 __d_drop(dentry); 284 290 spin_unlock(&dentry->d_lock); 285 - spin_unlock(&dcache_lock); 286 291 } 287 292 EXPORT_SYMBOL(d_drop); 288 293 ··· 328 337 else 329 338 parent = dentry->d_parent; 330 339 if (dentry->d_count == 1) { 331 - if (!spin_trylock(&dcache_lock)) { 332 - /* 333 - * Something of a livelock possibility we could avoid 334 - * by taking dcache_lock and trying again, but we 335 - * want to reduce dcache_lock anyway so this will 336 - * get improved. 337 - */ 338 - drop1: 339 - spin_unlock(&dentry->d_lock); 340 - goto repeat; 341 - } 342 340 if (!spin_trylock(&dcache_inode_lock)) { 343 341 drop2: 344 - spin_unlock(&dcache_lock); 345 - goto drop1; 342 + spin_unlock(&dentry->d_lock); 343 + goto repeat; 346 344 } 347 345 if (parent && !spin_trylock(&parent->d_lock)) { 348 346 spin_unlock(&dcache_inode_lock); ··· 343 363 spin_unlock(&dentry->d_lock); 344 364 if (parent) 345 365 spin_unlock(&parent->d_lock); 346 - spin_unlock(&dcache_lock); 347 366 return; 348 367 } 349 368 ··· 366 387 if (parent) 367 388 spin_unlock(&parent->d_lock); 368 389 spin_unlock(&dcache_inode_lock); 369 - spin_unlock(&dcache_lock); 370 390 return; 371 391 372 392 unhash_it: ··· 396 418 /* 397 419 * If it's already been dropped, return OK. 
398 420 */ 399 - spin_lock(&dcache_lock); 400 421 spin_lock(&dentry->d_lock); 401 422 if (d_unhashed(dentry)) { 402 423 spin_unlock(&dentry->d_lock); 403 - spin_unlock(&dcache_lock); 404 424 return 0; 405 425 } 406 426 /* ··· 407 431 */ 408 432 if (!list_empty(&dentry->d_subdirs)) { 409 433 spin_unlock(&dentry->d_lock); 410 - spin_unlock(&dcache_lock); 411 434 shrink_dcache_parent(dentry); 412 - spin_lock(&dcache_lock); 413 435 spin_lock(&dentry->d_lock); 414 436 } 415 437 ··· 424 450 if (dentry->d_count > 1) { 425 451 if (dentry->d_inode && S_ISDIR(dentry->d_inode->i_mode)) { 426 452 spin_unlock(&dentry->d_lock); 427 - spin_unlock(&dcache_lock); 428 453 return -EBUSY; 429 454 } 430 455 } 431 456 432 457 __d_drop(dentry); 433 458 spin_unlock(&dentry->d_lock); 434 - spin_unlock(&dcache_lock); 435 459 return 0; 436 460 } 437 461 EXPORT_SYMBOL(d_invalidate); 438 462 439 - /* This must be called with dcache_lock and d_lock held */ 463 + /* This must be called with d_lock held */ 440 464 static inline struct dentry * __dget_locked_dlock(struct dentry *dentry) 441 465 { 442 466 dentry->d_count++; ··· 442 470 return dentry; 443 471 } 444 472 445 - /* This should be called _only_ with dcache_lock held */ 473 + /* This must be called with d_lock held */ 446 474 static inline struct dentry * __dget_locked(struct dentry *dentry) 447 475 { 448 476 spin_lock(&dentry->d_lock); ··· 547 575 struct dentry *de = NULL; 548 576 549 577 if (!list_empty(&inode->i_dentry)) { 550 - spin_lock(&dcache_lock); 551 578 spin_lock(&dcache_inode_lock); 552 579 de = __d_find_alias(inode, 0); 553 580 spin_unlock(&dcache_inode_lock); 554 - spin_unlock(&dcache_lock); 555 581 } 556 582 return de; 557 583 } ··· 563 593 { 564 594 struct dentry *dentry; 565 595 restart: 566 - spin_lock(&dcache_lock); 567 596 spin_lock(&dcache_inode_lock); 568 597 list_for_each_entry(dentry, &inode->i_dentry, d_alias) { 569 598 spin_lock(&dentry->d_lock); ··· 571 602 __d_drop(dentry); 572 603 spin_unlock(&dentry->d_lock); 
573 604 spin_unlock(&dcache_inode_lock); 574 - spin_unlock(&dcache_lock); 575 605 dput(dentry); 576 606 goto restart; 577 607 } 578 608 spin_unlock(&dentry->d_lock); 579 609 } 580 610 spin_unlock(&dcache_inode_lock); 581 - spin_unlock(&dcache_lock); 582 611 } 583 612 EXPORT_SYMBOL(d_prune_aliases); 584 613 ··· 592 625 __releases(dentry->d_lock) 593 626 __releases(parent->d_lock) 594 627 __releases(dcache_inode_lock) 595 - __releases(dcache_lock) 596 628 { 597 629 __d_drop(dentry); 598 630 dentry = d_kill(dentry, parent); 599 631 600 632 /* 601 - * Prune ancestors. Locking is simpler than in dput(), 602 - * because dcache_lock needs to be taken anyway. 633 + * Prune ancestors. 603 634 */ 604 635 while (dentry) { 605 - spin_lock(&dcache_lock); 606 636 spin_lock(&dcache_inode_lock); 607 637 again: 608 638 spin_lock(&dentry->d_lock); ··· 617 653 spin_unlock(&parent->d_lock); 618 654 spin_unlock(&dentry->d_lock); 619 655 spin_unlock(&dcache_inode_lock); 620 - spin_unlock(&dcache_lock); 621 656 return; 622 657 } 623 658 ··· 665 702 spin_unlock(&dcache_lru_lock); 666 703 667 704 prune_one_dentry(dentry, parent); 668 - /* dcache_lock, dcache_inode_lock and dentry->d_lock dropped */ 669 - spin_lock(&dcache_lock); 705 + /* dcache_inode_lock and dentry->d_lock dropped */ 670 706 spin_lock(&dcache_inode_lock); 671 707 spin_lock(&dcache_lru_lock); 672 708 } ··· 687 725 LIST_HEAD(tmp); 688 726 int cnt = *count; 689 727 690 - spin_lock(&dcache_lock); 691 728 spin_lock(&dcache_inode_lock); 692 729 relock: 693 730 spin_lock(&dcache_lru_lock); ··· 727 766 list_splice(&referenced, &sb->s_dentry_lru); 728 767 spin_unlock(&dcache_lru_lock); 729 768 spin_unlock(&dcache_inode_lock); 730 - spin_unlock(&dcache_lock); 731 769 } 732 770 733 771 /** ··· 748 788 749 789 if (unused == 0 || count == 0) 750 790 return; 751 - spin_lock(&dcache_lock); 752 791 if (count >= unused) 753 792 prune_ratio = 1; 754 793 else ··· 784 825 if (down_read_trylock(&sb->s_umount)) { 785 826 if ((sb->s_root != 
NULL) && 786 827 (!list_empty(&sb->s_dentry_lru))) { 787 - spin_unlock(&dcache_lock); 788 828 __shrink_dcache_sb(sb, &w_count, 789 829 DCACHE_REFERENCED); 790 830 pruned -= w_count; 791 - spin_lock(&dcache_lock); 792 831 } 793 832 up_read(&sb->s_umount); 794 833 } ··· 802 845 if (p) 803 846 __put_super(p); 804 847 spin_unlock(&sb_lock); 805 - spin_unlock(&dcache_lock); 806 848 } 807 849 808 850 /** ··· 815 859 { 816 860 LIST_HEAD(tmp); 817 861 818 - spin_lock(&dcache_lock); 819 862 spin_lock(&dcache_inode_lock); 820 863 spin_lock(&dcache_lru_lock); 821 864 while (!list_empty(&sb->s_dentry_lru)) { ··· 823 868 } 824 869 spin_unlock(&dcache_lru_lock); 825 870 spin_unlock(&dcache_inode_lock); 826 - spin_unlock(&dcache_lock); 827 871 } 828 872 EXPORT_SYMBOL(shrink_dcache_sb); 829 873 ··· 839 885 BUG_ON(!IS_ROOT(dentry)); 840 886 841 887 /* detach this root from the system */ 842 - spin_lock(&dcache_lock); 843 888 spin_lock(&dentry->d_lock); 844 889 dentry_lru_del(dentry); 845 890 __d_drop(dentry); 846 891 spin_unlock(&dentry->d_lock); 847 - spin_unlock(&dcache_lock); 848 892 849 893 for (;;) { 850 894 /* descend to the first leaf in the current subtree */ ··· 851 899 852 900 /* this is a branch with children - detach all of them 853 901 * from the system in one go */ 854 - spin_lock(&dcache_lock); 855 902 spin_lock(&dentry->d_lock); 856 903 list_for_each_entry(loop, &dentry->d_subdirs, 857 904 d_u.d_child) { ··· 861 910 spin_unlock(&loop->d_lock); 862 911 } 863 912 spin_unlock(&dentry->d_lock); 864 - spin_unlock(&dcache_lock); 865 913 866 914 /* move to the first child */ 867 915 dentry = list_entry(dentry->d_subdirs.next, ··· 927 977 928 978 /* 929 979 * destroy the dentries attached to a superblock on unmounting 930 - * - we don't need to use dentry->d_lock, and only need dcache_lock when 931 - * removing the dentry from the system lists and hashes because: 980 + * - we don't need to use dentry->d_lock because: 932 981 * - the superblock is detached from all mountings 
and open files, so the
933 982 * dentry trees will not be rearranged by the VFS
934 983 * - s_umount is write-locked, so the memory pressure shrinker will ignore
···
978 1029 this_parent = parent;
979 1030 seq = read_seqbegin(&rename_lock);
980 1031
981 - spin_lock(&dcache_lock);
982 1032 if (d_mountpoint(parent))
983 1033 goto positive;
984 1034 spin_lock(&this_parent->d_lock);
···
1023 1075 if (this_parent != child->d_parent ||
1024 1076 read_seqretry(&rename_lock, seq)) {
1025 1077 spin_unlock(&this_parent->d_lock);
1026 - spin_unlock(&dcache_lock);
1027 1078 rcu_read_unlock();
1028 1079 goto rename_retry;
1029 1080 }
···
1031 1084 goto resume;
1032 1085 }
1033 1086 spin_unlock(&this_parent->d_lock);
1034 - spin_unlock(&dcache_lock);
1035 1087 if (read_seqretry(&rename_lock, seq))
1036 1088 goto rename_retry;
1037 1089 return 0; /* No mount points found in tree */
1038 1090 positive:
1039 - spin_unlock(&dcache_lock);
1040 1091 if (read_seqretry(&rename_lock, seq))
1041 1092 goto rename_retry;
1042 1093 return 1;
···
1066 1121 this_parent = parent;
1067 1122 seq = read_seqbegin(&rename_lock);
1068 1123
1069 - spin_lock(&dcache_lock);
1070 1124 spin_lock(&this_parent->d_lock);
1071 1125 repeat:
1072 1126 next = this_parent->d_subdirs.next;
···
1129 1185 if (this_parent != child->d_parent ||
1130 1186 read_seqretry(&rename_lock, seq)) {
1131 1187 spin_unlock(&this_parent->d_lock);
1132 - spin_unlock(&dcache_lock);
1133 1188 rcu_read_unlock();
1134 1189 goto rename_retry;
1135 1190 }
···
1138 1195 }
1139 1196 out:
1140 1197 spin_unlock(&this_parent->d_lock);
1141 - spin_unlock(&dcache_lock);
1142 1198 if (read_seqretry(&rename_lock, seq))
1143 1199 goto rename_retry;
1144 1200 return found;
···
1239 1297 INIT_LIST_HEAD(&dentry->d_u.d_child);
1240 1298
1241 1299 if (parent) {
1242 - spin_lock(&dcache_lock);
1243 1300 spin_lock(&parent->d_lock);
1244 1301 spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
1245 1302 dentry->d_parent = dget_dlock(parent);
···
1246 1305 list_add(&dentry->d_u.d_child, &parent->d_subdirs);
1247 1306 spin_unlock(&dentry->d_lock);
1248 1307 spin_unlock(&parent->d_lock);
1249 - spin_unlock(&dcache_lock);
1250 1308 }
1251 1309
1252 1310 this_cpu_inc(nr_dentry);
···
1265 1325 }
1266 1326 EXPORT_SYMBOL(d_alloc_name);
1267 1327
1268 - /* the caller must hold dcache_lock */
1269 1328 static void __d_instantiate(struct dentry *dentry, struct inode *inode)
1270 1329 {
1271 1330 spin_lock(&dentry->d_lock);
···
1293 1354 void d_instantiate(struct dentry *entry, struct inode * inode)
1294 1355 {
1295 1356 BUG_ON(!list_empty(&entry->d_alias));
1296 - spin_lock(&dcache_lock);
1297 1357 spin_lock(&dcache_inode_lock);
1298 1358 __d_instantiate(entry, inode);
1299 1359 spin_unlock(&dcache_inode_lock);
1300 - spin_unlock(&dcache_lock);
1301 1360 security_d_instantiate(entry, inode);
1302 1361 }
1303 1362 EXPORT_SYMBOL(d_instantiate);
···
1359 1422
1360 1423 BUG_ON(!list_empty(&entry->d_alias));
1361 1424
1362 - spin_lock(&dcache_lock);
1363 1425 spin_lock(&dcache_inode_lock);
1364 1426 result = __d_instantiate_unique(entry, inode);
1365 1427 spin_unlock(&dcache_inode_lock);
1366 - spin_unlock(&dcache_lock);
1367 1428
1368 1429 if (!result) {
1369 1430 security_d_instantiate(entry, inode);
···
1450 1515 }
1451 1516 tmp->d_parent = tmp; /* make sure dput doesn't croak */
1452 1517
1453 - spin_lock(&dcache_lock);
1518 +
1454 1519 spin_lock(&dcache_inode_lock);
1455 1520 res = __d_find_alias(inode, 0);
1456 1521 if (res) {
1457 1522 spin_unlock(&dcache_inode_lock);
1458 - spin_unlock(&dcache_lock);
1459 1523 dput(tmp);
1460 1524 goto out_iput;
1461 1525 }
···
1472 1538 spin_unlock(&tmp->d_lock);
1473 1539 spin_unlock(&dcache_inode_lock);
1474 1540
1475 - spin_unlock(&dcache_lock);
1476 1541 return tmp;
1477 1542
1478 1543 out_iput:
···
1501 1568 struct dentry *new = NULL;
1502 1569
1503 1570 if (inode && S_ISDIR(inode->i_mode)) {
1504 - spin_lock(&dcache_lock);
1505 1571 spin_lock(&dcache_inode_lock);
1506 1572 new = __d_find_alias(inode, 1);
1507 1573 if (new) {
1508 1574 BUG_ON(!(new->d_flags & DCACHE_DISCONNECTED));
1509 1575 spin_unlock(&dcache_inode_lock);
1510 - spin_unlock(&dcache_lock);
1511 1576 security_d_instantiate(new, inode);
1512 1577 d_move(new, dentry);
1513 1578 iput(inode);
1514 1579 } else {
1515 - /* already taking dcache_lock, so d_add() by hand */
1580 + /* already taking dcache_inode_lock, so d_add() by hand */
1516 1581 __d_instantiate(dentry, inode);
1517 1582 spin_unlock(&dcache_inode_lock);
1518 - spin_unlock(&dcache_lock);
1519 1583 security_d_instantiate(dentry, inode);
1520 1584 d_rehash(dentry);
1521 1585 }
···
1585 1655 * Negative dentry: instantiate it unless the inode is a directory and
1586 1656 * already has a dentry.
1587 1657 */
1588 - spin_lock(&dcache_lock);
1589 1658 spin_lock(&dcache_inode_lock);
1590 1659 if (!S_ISDIR(inode->i_mode) || list_empty(&inode->i_dentry)) {
1591 1660 __d_instantiate(found, inode);
1592 1661 spin_unlock(&dcache_inode_lock);
1593 - spin_unlock(&dcache_lock);
1594 1662 security_d_instantiate(found, inode);
1595 1663 return found;
1596 1664 }
···
1600 1672 new = list_entry(inode->i_dentry.next, struct dentry, d_alias);
1601 1673 dget_locked(new);
1602 1674 spin_unlock(&dcache_inode_lock);
1603 - spin_unlock(&dcache_lock);
1604 1675 security_d_instantiate(found, inode);
1605 1676 d_move(new, found);
1606 1677 iput(inode);
···
1770 1843 {
1771 1844 struct dentry *child;
1772 1845
1773 - spin_lock(&dcache_lock);
1774 1846 spin_lock(&dparent->d_lock);
1775 1847 list_for_each_entry(child, &dparent->d_subdirs, d_u.d_child) {
1776 1848 if (dentry == child) {
···
1777 1851 __dget_locked_dlock(dentry);
1778 1852 spin_unlock(&dentry->d_lock);
1779 1853 spin_unlock(&dparent->d_lock);
1780 - spin_unlock(&dcache_lock);
1781 1854 return 1;
1782 1855 }
1783 1856 }
1784 1857 spin_unlock(&dparent->d_lock);
1785 - spin_unlock(&dcache_lock);
1786 1858
1787 1859 return 0;
1788 1860 }
···
1813 1889 /*
1814 1890 * Are we the only user?
1815 1891 */
1816 - spin_lock(&dcache_lock);
1817 1892 spin_lock(&dcache_inode_lock);
1818 1893 spin_lock(&dentry->d_lock);
1819 1894 isdir = S_ISDIR(dentry->d_inode->i_mode);
···
1828 1905
1829 1906 spin_unlock(&dentry->d_lock);
1830 1907 spin_unlock(&dcache_inode_lock);
1831 - spin_unlock(&dcache_lock);
1832 1908
1833 1909 fsnotify_nameremove(dentry, isdir);
1834 1910 }
···
1854 1932
1855 1933 void d_rehash(struct dentry * entry)
1856 1934 {
1857 - spin_lock(&dcache_lock);
1858 1935 spin_lock(&entry->d_lock);
1859 1936 spin_lock(&dcache_hash_lock);
1860 1937 _d_rehash(entry);
1861 1938 spin_unlock(&dcache_hash_lock);
1862 1939 spin_unlock(&entry->d_lock);
1863 - spin_unlock(&dcache_lock);
1864 1940 }
1865 1941 EXPORT_SYMBOL(d_rehash);
···
1881 1961 BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex));
1882 1962 BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */
1883 1963
1884 - spin_lock(&dcache_lock);
1885 1964 spin_lock(&dentry->d_lock);
1886 1965 memcpy((unsigned char *)dentry->d_name.name, name->name, name->len);
1887 1966 spin_unlock(&dentry->d_lock);
1888 - spin_unlock(&dcache_lock);
1889 1967 }
1890 1968 EXPORT_SYMBOL(dentry_update_name_case);
···
1976 2058 * The hash value has to match the hash queue that the dentry is on..
1977 2059 */
1978 2060 /*
1979 - * d_move_locked - move a dentry
2061 + * d_move - move a dentry
1980 2062 * @dentry: entry to move
1981 2063 * @target: new dentry
1982 2064 *
1983 2065 * Update the dcache to reflect the move of a file name. Negative
1984 2066 * dcache entries should not be moved in this way.
1985 2067 */
1986 - static void d_move_locked(struct dentry * dentry, struct dentry * target)
2068 + void d_move(struct dentry * dentry, struct dentry * target)
1987 2069 {
1988 2070 if (!dentry->d_inode)
1989 2071 printk(KERN_WARNING "VFS: moving negative dcache entry\n");
···
2032 2114 spin_unlock(&dentry->d_lock);
2033 2115 write_sequnlock(&rename_lock);
2034 2116 }
2035 -
2036 - /**
2037 - * d_move - move a dentry
2038 - * @dentry: entry to move
2039 - * @target: new dentry
2040 - *
2041 - * Update the dcache to reflect the move of a file name. Negative
2042 - * dcache entries should not be moved in this way.
2043 - */
2044 -
2045 - void d_move(struct dentry * dentry, struct dentry * target)
2046 - {
2047 - spin_lock(&dcache_lock);
2048 - d_move_locked(dentry, target);
2049 - spin_unlock(&dcache_lock);
2050 - }
2051 2117 EXPORT_SYMBOL(d_move);
2052 2118
2053 2119 /**
···
2057 2155 * This helper attempts to cope with remotely renamed directories
2058 2156 *
2059 2157 * It assumes that the caller is already holding
2060 - * dentry->d_parent->d_inode->i_mutex and the dcache_lock
2158 + * dentry->d_parent->d_inode->i_mutex and the dcache_inode_lock
2061 2159 *
2062 2160 * Note: If ever the locking in lock_rename() changes, then please
2063 2161 * remember to update this too...
2064 2162 */
2065 2163 static struct dentry *__d_unalias(struct dentry *dentry, struct dentry *alias)
2066 - __releases(dcache_lock)
2067 2164 __releases(dcache_inode_lock)
2068 2165 {
2069 2166 struct mutex *m1 = NULL, *m2 = NULL;
···
2086 2185 goto out_err;
2087 2186 m2 = &alias->d_parent->d_inode->i_mutex;
2088 2187 out_unalias:
2089 - d_move_locked(alias, dentry);
2188 + d_move(alias, dentry);
2090 2189 ret = alias;
2091 2190 out_err:
2092 2191 spin_unlock(&dcache_inode_lock);
2093 - spin_unlock(&dcache_lock);
2094 2192 if (m2)
2095 2193 mutex_unlock(m2);
2096 2194 if (m1)
···
2149 2249
2150 2250 BUG_ON(!d_unhashed(dentry));
2151 2251
2152 - spin_lock(&dcache_lock);
2153 2252 spin_lock(&dcache_inode_lock);
2154 2253
2155 2254 if (!inode) {
···
2194 2295 spin_unlock(&dcache_hash_lock);
2195 2296 spin_unlock(&actual->d_lock);
2196 2297 spin_unlock(&dcache_inode_lock);
2197 - spin_unlock(&dcache_lock);
2198 2298 out_nolock:
2199 2299 if (actual == dentry) {
2200 2300 security_d_instantiate(dentry, inode);
···
2205 2307
2206 2308 shouldnt_be_hashed:
2207 2309 spin_unlock(&dcache_inode_lock);
2208 - spin_unlock(&dcache_lock);
2209 2310 BUG();
2210 2311 }
2211 2312 EXPORT_SYMBOL_GPL(d_materialise_unique);
···
2318 2421 int error;
2319 2422
2320 2423 prepend(&res, &buflen, "\0", 1);
2321 - spin_lock(&dcache_lock);
2322 2424 write_seqlock(&rename_lock);
2323 2425 error = prepend_path(path, root, &res, &buflen);
2324 2426 write_sequnlock(&rename_lock);
2325 - spin_unlock(&dcache_lock);
2326 2427
2327 2428 if (error)
2328 2429 return ERR_PTR(error);
···
2382 2487 return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
2383 2488
2384 2489 get_fs_root(current->fs, &root);
2385 - spin_lock(&dcache_lock);
2386 2490 write_seqlock(&rename_lock);
2387 2491 tmp = root;
2388 2492 error = path_with_deleted(path, &tmp, &res, &buflen);
2389 2493 if (error)
2390 2494 res = ERR_PTR(error);
2391 2495 write_sequnlock(&rename_lock);
2392 - spin_unlock(&dcache_lock);
2393 2496 path_put(&root);
2394 2497 return res;
2395 2498 }
···
2413 2520 return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
2414 2521
2415 2522 get_fs_root(current->fs, &root);
2416 - spin_lock(&dcache_lock);
2417 2523 write_seqlock(&rename_lock);
2418 2524 tmp = root;
2419 2525 error = path_with_deleted(path, &tmp, &res, &buflen);
2420 2526 if (!error && !path_equal(&tmp, &root))
2421 2527 error = prepend_unreachable(&res, &buflen);
2422 2528 write_sequnlock(&rename_lock);
2423 - spin_unlock(&dcache_lock);
2424 2529 path_put(&root);
2425 2530 if (error)
2426 2531 res = ERR_PTR(error);
···
2485 2594 {
2486 2595 char *retval;
2487 2596
2488 - spin_lock(&dcache_lock);
2489 2597 write_seqlock(&rename_lock);
2490 2598 retval = __dentry_path(dentry, buf, buflen);
2491 2599 write_sequnlock(&rename_lock);
2492 - spin_unlock(&dcache_lock);
2493 2600
2494 2601 return retval;
2495 2602 }
···
2498 2609 char *p = NULL;
2499 2610 char *retval;
2500 2611
2501 - spin_lock(&dcache_lock);
2502 2612 write_seqlock(&rename_lock);
2503 2613 if (d_unlinked(dentry)) {
2504 2614 p = buf + buflen;
···
2507 2619 }
2508 2620 retval = __dentry_path(dentry, buf, buflen);
2509 2621 write_sequnlock(&rename_lock);
2510 - spin_unlock(&dcache_lock);
2511 2622 if (!IS_ERR(retval) && p)
2512 2623 *p = '/'; /* restore '/' overriden with '\0' */
2513 2624 return retval;
2514 2625 Elong:
2515 - spin_unlock(&dcache_lock);
2516 2626 return ERR_PTR(-ENAMETOOLONG);
2517 2627 }
···
2544 2658 get_fs_root_and_pwd(current->fs, &root, &pwd);
2545 2659
2546 2660 error = -ENOENT;
2547 - spin_lock(&dcache_lock);
2548 2661 write_seqlock(&rename_lock);
2549 2662 if (!d_unlinked(pwd.dentry)) {
2550 2663 unsigned long len;
···
2554 2669 prepend(&cwd, &buflen, "\0", 1);
2555 2670 error = prepend_path(&pwd, &tmp, &cwd, &buflen);
2556 2671 write_sequnlock(&rename_lock);
2557 - spin_unlock(&dcache_lock);
2558 2672
2559 2673 if (error)
2560 2674 goto out;
···
2574 2690 }
2575 2691 } else {
2576 2692 write_sequnlock(&rename_lock);
2577 - spin_unlock(&dcache_lock);
2578 2693 }
2579 2694
2580 2695 out:
···
2659 2776 rename_retry:
2660 2777 this_parent = root;
2661 2778 seq = read_seqbegin(&rename_lock);
2662 - spin_lock(&dcache_lock);
2663 2779 spin_lock(&this_parent->d_lock);
2664 2780 repeat:
2665 2781 next = this_parent->d_subdirs.next;
···
2705 2823 if (this_parent != child->d_parent ||
2706 2824 read_seqretry(&rename_lock, seq)) {
2707 2825 spin_unlock(&this_parent->d_lock);
2708 - spin_unlock(&dcache_lock);
2709 2826 rcu_read_unlock();
2710 2827 goto rename_retry;
2711 2828 }
···
2713 2832 goto resume;
2714 2833 }
2715 2834 spin_unlock(&this_parent->d_lock);
2716 - spin_unlock(&dcache_lock);
2717 2835 if (read_seqretry(&rename_lock, seq))
2718 2836 goto rename_retry;
2719 2837 }
-4
fs/exportfs/expfs.c
··· 47 47 if (acceptable(context, result))
48 48 return result;
49 49
50 - spin_lock(&dcache_lock);
51 50 spin_lock(&dcache_inode_lock);
52 51 list_for_each_entry(dentry, &result->d_inode->i_dentry, d_alias) {
53 52 dget_locked(dentry);
54 53 spin_unlock(&dcache_inode_lock);
55 - spin_unlock(&dcache_lock);
56 54 if (toput)
57 55 dput(toput);
58 56 if (dentry != result && acceptable(context, dentry)) {
59 57 dput(result);
60 58 return dentry;
61 59 }
62 - spin_lock(&dcache_lock);
63 60 spin_lock(&dcache_inode_lock);
64 61 toput = dentry;
65 62 }
66 63 spin_unlock(&dcache_inode_lock);
67 - spin_unlock(&dcache_lock);
68 64
69 65 if (toput)
70 66 dput(toput);
-8
fs/libfs.c
··· 100 100 struct dentry *cursor = file->private_data;
101 101 loff_t n = file->f_pos - 2;
102 102
103 - spin_lock(&dcache_lock);
104 103 spin_lock(&dentry->d_lock);
105 104 /* d_lock not required for cursor */
106 105 list_del(&cursor->d_u.d_child);
···
115 116 }
116 117 list_add_tail(&cursor->d_u.d_child, p);
117 118 spin_unlock(&dentry->d_lock);
118 - spin_unlock(&dcache_lock);
119 119 }
120 120 }
121 121 mutex_unlock(&dentry->d_inode->i_mutex);
···
157 159 i++;
158 160 /* fallthrough */
159 161 default:
160 - spin_lock(&dcache_lock);
161 162 spin_lock(&dentry->d_lock);
162 163 if (filp->f_pos == 2)
163 164 list_move(q, &dentry->d_subdirs);
···
172 175
173 176 spin_unlock(&next->d_lock);
174 177 spin_unlock(&dentry->d_lock);
175 - spin_unlock(&dcache_lock);
176 178 if (filldir(dirent, next->d_name.name,
177 179 next->d_name.len, filp->f_pos,
178 180 next->d_inode->i_ino,
179 181 dt_type(next->d_inode)) < 0)
180 182 return 0;
181 - spin_lock(&dcache_lock);
182 183 spin_lock(&dentry->d_lock);
183 184 spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
184 185 /* next is still alive */
···
186 191 filp->f_pos++;
187 192 }
188 193 spin_unlock(&dentry->d_lock);
189 - spin_unlock(&dcache_lock);
190 194 }
191 195 return 0;
192 196 }
···
279 285 struct dentry *child;
280 286 int ret = 0;
281 287
282 - spin_lock(&dcache_lock);
283 288 spin_lock(&dentry->d_lock);
284 289 list_for_each_entry(child, &dentry->d_subdirs, d_u.d_child) {
285 290 spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
···
291 298 ret = 1;
292 299 out:
293 300 spin_unlock(&dentry->d_lock);
294 - spin_unlock(&dcache_lock);
295 301 return ret;
296 302 }
297 303
+2 -7
fs/namei.c
··· 612 612 return 1;
613 613 }
614 614
615 - /* no need for dcache_lock, as serialization is taken care in
616 - * namespace.c
615 + /*
616 + * serialization is taken care of in namespace.c
617 617 */
618 618 static int __follow_mount(struct path *path)
619 619 {
···
645 645 }
646 646 }
647 647
648 - /* no need for dcache_lock, as serialization is taken care in
649 - * namespace.c
650 - */
651 648 int follow_down(struct path *path)
652 649 {
653 650 struct vfsmount *mounted;
···
2128 2131 {
2129 2132 dget(dentry);
2130 2133 shrink_dcache_parent(dentry);
2131 - spin_lock(&dcache_lock);
2132 2134 spin_lock(&dentry->d_lock);
2133 2135 if (dentry->d_count == 2)
2134 2136 __d_drop(dentry);
2135 2137 spin_unlock(&dentry->d_lock);
2136 - spin_unlock(&dcache_lock);
2137 2138 }
2138 2139
2139 2140 int vfs_rmdir(struct inode *dir, struct dentry *dentry)
-3
fs/ncpfs/dir.c
··· 391 391 }
392 392
393 393 /* If a pointer is invalid, we search the dentry. */
394 - spin_lock(&dcache_lock);
395 394 spin_lock(&parent->d_lock);
396 395 next = parent->d_subdirs.next;
397 396 while (next != &parent->d_subdirs) {
···
401 402 else
402 403 dent = NULL;
403 404 spin_unlock(&parent->d_lock);
404 - spin_unlock(&dcache_lock);
405 405 goto out;
406 406 }
407 407 next = next->next;
408 408 }
409 409 spin_unlock(&parent->d_lock);
410 - spin_unlock(&dcache_lock);
411 410 return NULL;
412 411
413 412 out:
-4
fs/ncpfs/ncplib_kernel.h
··· 193 193 struct list_head *next;
194 194 struct dentry *dentry;
195 195
196 - spin_lock(&dcache_lock);
197 196 spin_lock(&parent->d_lock);
198 197 next = parent->d_subdirs.next;
199 198 while (next != &parent->d_subdirs) {
···
206 207 next = next->next;
207 208 }
208 209 spin_unlock(&parent->d_lock);
209 - spin_unlock(&dcache_lock);
210 210 }
211 211
212 212 static inline void
···
215 217 struct list_head *next;
216 218 struct dentry *dentry;
217 219
218 - spin_lock(&dcache_lock);
219 220 spin_lock(&parent->d_lock);
220 221 next = parent->d_subdirs.next;
221 222 while (next != &parent->d_subdirs) {
···
224 227 next = next->next;
225 228 }
226 229 spin_unlock(&parent->d_lock);
227 - spin_unlock(&dcache_lock);
228 230 }
229 231
230 232 struct ncp_cache_head {
-3
fs/nfs/dir.c
··· 1718 1718 dfprintk(VFS, "NFS: unlink(%s/%ld, %s)\n", dir->i_sb->s_id,
1719 1719 dir->i_ino, dentry->d_name.name);
1720 1720
1721 - spin_lock(&dcache_lock);
1722 1721 spin_lock(&dentry->d_lock);
1723 1722 if (dentry->d_count > 1) {
1724 1723 spin_unlock(&dentry->d_lock);
1725 - spin_unlock(&dcache_lock);
1726 1724 /* Start asynchronous writeout of the inode */
1727 1725 write_inode_now(dentry->d_inode, 0);
1728 1726 error = nfs_sillyrename(dir, dentry);
···
1731 1733 need_rehash = 1;
1732 1734 }
1733 1735 spin_unlock(&dentry->d_lock);
1734 - spin_unlock(&dcache_lock);
1735 1736 error = nfs_safe_remove(dentry);
1736 1737 if (!error || error == -ENOENT) {
1737 1738 nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
-2
fs/nfs/getroot.c
··· 63 63 * This again causes shrink_dcache_for_umount_subtree() to
64 64 * Oops, since the test for IS_ROOT() will fail.
65 65 */
66 - spin_lock(&dcache_lock);
67 66 spin_lock(&dcache_inode_lock);
68 67 spin_lock(&sb->s_root->d_lock);
69 68 list_del_init(&sb->s_root->d_alias);
70 69 spin_unlock(&sb->s_root->d_lock);
71 70 spin_unlock(&dcache_inode_lock);
72 - spin_unlock(&dcache_lock);
73 71 }
74 72 return 0;
75 73 }
-3
fs/nfs/namespace.c
··· 60 60
61 61 seq = read_seqbegin(&rename_lock);
62 62 rcu_read_lock();
63 - spin_lock(&dcache_lock);
64 63 while (!IS_ROOT(dentry) && dentry != droot) {
65 64 namelen = dentry->d_name.len;
66 65 buflen -= namelen + 1;
···
70 71 *--end = '/';
71 72 dentry = dentry->d_parent;
72 73 }
73 - spin_unlock(&dcache_lock);
74 74 rcu_read_unlock();
75 75 if (read_seqretry(&rename_lock, seq))
76 76 goto rename_retry;
···
89 91 memcpy(end, base, namelen);
90 92 return end;
91 93 Elong_unlock:
92 - spin_unlock(&dcache_lock);
93 94 rcu_read_unlock();
94 95 if (read_seqretry(&rename_lock, seq))
95 96 goto rename_retry;
-2
fs/notify/fsnotify.c
··· 59 59 /* determine if the children should tell inode about their events */
60 60 watched = fsnotify_inode_watches_children(inode);
61 61
62 - spin_lock(&dcache_lock);
63 62 spin_lock(&dcache_inode_lock);
64 63 /* run all of the dentries associated with this inode. Since this is a
65 64 * directory, there damn well better only be one item on this list */
···
83 84 spin_unlock(&alias->d_lock);
84 85 }
85 86 spin_unlock(&dcache_inode_lock);
86 - spin_unlock(&dcache_lock);
87 87 }
88 88
89 89 /* Notify this dentry's parent about a child's events. */
-2
fs/ocfs2/dcache.c
··· 169 169 struct list_head *p;
170 170 struct dentry *dentry = NULL;
171 171
172 - spin_lock(&dcache_lock);
173 172 spin_lock(&dcache_inode_lock);
174 173 list_for_each(p, &inode->i_dentry) {
175 174 dentry = list_entry(p, struct dentry, d_alias);
···
188 189 }
189 190
190 191 spin_unlock(&dcache_inode_lock);
191 - spin_unlock(&dcache_lock);
192 192
193 193 return dentry;
194 194 }
+2 -3
include/linux/dcache.h
··· 183 183 #define DCACHE_GENOCIDE 0x0200
184 184
185 185 extern spinlock_t dcache_inode_lock;
186 - extern spinlock_t dcache_lock;
187 186 extern seqlock_t rename_lock;
188 187
189 188 static inline int dname_external(struct dentry *dentry)
···
295 296 * destroyed when it has references. dget() should never be
296 297 * called for dentries with zero reference counter. For these cases
297 298 * (preferably none, functions in dcache.c are sufficient for normal
298 - * needs and they take necessary precautions) you should hold dcache_lock
299 - * and call dget_locked() instead of dget().
299 + * needs and they take necessary precautions) you should hold d_lock
300 + * and call dget_dlock() instead of dget().
300 301 */
301 302 static inline struct dentry *dget_dlock(struct dentry *dentry)
302 303 {
+5 -1
include/linux/fs.h
··· 1378 1378 #else
1379 1379 struct list_head s_files;
1380 1380 #endif
1381 - /* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
1381 + /* s_dentry_lru, s_nr_dentry_unused protected by dcache.c lru locks */
1382 1382 struct list_head s_dentry_lru; /* unused dentry lru */
1383 1383 int s_nr_dentry_unused; /* # of dentry on lru */
1384 1384
···
2446 2446 {
2447 2447 ino_t res;
2448 2448
2449 + /*
2450 + * Don't strictly need d_lock here? If the parent ino could change
2451 + * then surely we'd have a deeper race in the caller?
2452 + */
2449 2453 spin_lock(&dentry->d_lock);
2450 2454 res = dentry->d_parent->d_inode->i_ino;
2451 2455 spin_unlock(&dentry->d_lock);
-2
include/linux/fsnotify.h
··· 17 17
18 18 /*
19 19 * fsnotify_d_instantiate - instantiate a dentry for inode
20 - * Called with dcache_lock held.
21 20 */
22 21 static inline void fsnotify_d_instantiate(struct dentry *dentry,
23 22 struct inode *inode)
···
61 62
62 63 /*
63 64 * fsnotify_d_move - dentry has been moved
64 - * Called with dcache_lock and dentry->d_lock held.
65 65 */
66 66 static inline void fsnotify_d_move(struct dentry *dentry)
67 67 {
+7 -4
include/linux/fsnotify_backend.h
··· 329 329 {
330 330 struct dentry *parent;
331 331
332 - assert_spin_locked(&dcache_lock);
333 332 assert_spin_locked(&dentry->d_lock);
334 333
334 + /*
335 + * Serialisation of setting PARENT_WATCHED on the dentries is provided
336 + * by d_lock. If inotify_inode_watched changes after we have taken
337 + * d_lock, the following __fsnotify_update_child_dentry_flags call will
338 + * find our entry, so it will spin until we complete here, and update
339 + * us with the new state.
340 + */
335 341 parent = dentry->d_parent;
336 342 if (parent->d_inode && fsnotify_inode_watches_children(parent->d_inode))
337 343 dentry->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
···
347 341
348 342 /*
349 343 * fsnotify_d_instantiate - instantiate a dentry for inode
350 - * Called with dcache_lock held.
351 344 */
352 345 static inline void __fsnotify_d_instantiate(struct dentry *dentry, struct inode *inode)
353 346 {
354 347 if (!inode)
355 348 return;
356 -
357 349
358 - assert_spin_locked(&dcache_lock);
359 350 spin_lock(&dentry->d_lock);
360 351 __fsnotify_update_dcache_flags(dentry);
-1
include/linux/namei.h
··· 41 41 * - require a directory
42 42 * - ending slashes ok even for nonexistent files
43 43 * - internal "there are more path components" flag
44 - * - locked when lookup done with dcache_lock held
45 44 * - dentry cache is untrusted; force a real lookup
46 45 */
47 46 #define LOOKUP_FOLLOW 1
-6
kernel/cgroup.c
··· 876 876 struct list_head *node;
877 877
878 878 BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex));
879 - spin_lock(&dcache_lock);
880 879 spin_lock(&dentry->d_lock);
881 880 node = dentry->d_subdirs.next;
882 881 while (node != &dentry->d_subdirs) {
···
890 891 dget_locked_dlock(d);
891 892 spin_unlock(&d->d_lock);
892 893 spin_unlock(&dentry->d_lock);
893 - spin_unlock(&dcache_lock);
894 894 d_delete(d);
895 895 simple_unlink(dentry->d_inode, d);
896 896 dput(d);
897 - spin_lock(&dcache_lock);
898 897 spin_lock(&dentry->d_lock);
899 898 } else
900 899 spin_unlock(&d->d_lock);
901 900 node = dentry->d_subdirs.next;
902 901 }
903 902 spin_unlock(&dentry->d_lock);
904 - spin_unlock(&dcache_lock);
905 903 }
906 904
907 905 /*
···
910 914
911 915 cgroup_clear_directory(dentry);
912 916
913 - spin_lock(&dcache_lock);
914 917 parent = dentry->d_parent;
915 918 spin_lock(&parent->d_lock);
916 919 spin_lock(&dentry->d_lock);
917 920 list_del_init(&dentry->d_u.d_child);
918 921 spin_unlock(&dentry->d_lock);
919 922 spin_unlock(&parent->d_lock);
920 - spin_unlock(&dcache_lock);
921 923 remove_dir(dentry);
922 924 }
-3
mm/filemap.c
··· 102 102 * ->inode_lock (zap_pte_range->set_page_dirty)
103 103 * ->private_lock (zap_pte_range->__set_page_dirty_buffers)
104 104 *
105 - * ->task->proc_lock
106 - * ->dcache_lock (proc_pid_lookup)
107 - *
108 105 * (code doesn't rely on that order, so you could switch it around)
109 106 * ->tasklist_lock (memory_failure, collect_procs_ao)
110 107 * ->i_mmap_lock
-4
security/selinux/selinuxfs.c
··· 1145 1145 {
1146 1146 struct list_head *node;
1147 1147
1148 - spin_lock(&dcache_lock);
1149 1148 spin_lock(&de->d_lock);
1150 1149 node = de->d_subdirs.next;
1151 1150 while (node != &de->d_subdirs) {
···
1157 1158 dget_locked_dlock(d);
1158 1159 spin_unlock(&de->d_lock);
1159 1160 spin_unlock(&d->d_lock);
1160 - spin_unlock(&dcache_lock);
1161 1161 d_delete(d);
1162 1162 simple_unlink(de->d_inode, d);
1163 1163 dput(d);
1164 - spin_lock(&dcache_lock);
1165 1164 spin_lock(&de->d_lock);
1166 1165 } else
1167 1166 spin_unlock(&d->d_lock);
···
1167 1170 }
1168 1171
1169 1172 spin_unlock(&de->d_lock);
1170 - spin_unlock(&dcache_lock);
1171 1173 }
1172 1174
1173 1175 #define BOOL_DIR_NAME "booleans"