Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull vfs updates from Al Viro:
"All kinds of stuff this time around; some more notable parts:

- RCU'd vfsmounts handling
- new primitives for coredump handling
- files_lock is gone
- Bruce's delegations handling series
- exportfs fixes

plus misc stuff all over the place"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (101 commits)
ecryptfs: ->f_op is never NULL
locks: break delegations on any attribute modification
locks: break delegations on link
locks: break delegations on rename
locks: helper functions for delegation breaking
locks: break delegations on unlink
namei: minor vfs_unlink cleanup
locks: implement delegations
locks: introduce new FL_DELEG lock flag
vfs: take i_mutex on renamed file
vfs: rename I_MUTEX_QUOTA now that it's not used for quotas
vfs: don't use PARENT/CHILD lock classes for non-directories
vfs: pull ext4's double-i_mutex-locking into common code
exportfs: fix quadratic behavior in filehandle lookup
exportfs: better variable name
exportfs: move most of reconnect_path to helper function
exportfs: eliminate unused "noprogress" counter
exportfs: stop retrying once we race with rename/remove
exportfs: clear DISCONNECTED on all parents sooner
exportfs: more detailed comment for path_reconnect
...

+2114 -2506
+22 -9
Documentation/filesystems/directory-locking
···
  kinds of locks - per-inode (->i_mutex) and per-filesystem
  (->s_vfs_rename_mutex).
 
+ When taking the i_mutex on multiple non-directory objects, we
+ always acquire the locks in order by increasing address.  We'll call
+ that "inode pointer" order in the following.
+
  For our purposes all operations fall in 5 classes:
 
  1) read access. Locking rules: caller locks directory we are accessing.
···
  locks victim and calls the method.
 
  4) rename() that is _not_ cross-directory.  Locking rules: caller locks
- the parent, finds source and target, if target already exists - locks it
- and then calls the method.
+ the parent and finds source and target.  If target already exists, lock
+ it.  If source is a non-directory, lock it.  If that means we need to
+ lock both, lock them in inode pointer order.
 
  5) link creation.  Locking rules:
  * lock parent
···
  fail with -ENOTEMPTY
  * if new parent is equal to or is a descendent of source
  fail with -ELOOP
- * if target exists - lock it.
+ * If target exists, lock it.  If source is a non-directory, lock
+ it.  In case that means we need to lock both source and target,
+ do so in inode pointer order.
  * call the method.
 
···
  renames will be blocked on filesystem lock and we don't start changing
  the order until we had acquired all locks).
 
- (3) any operation holds at most one lock on non-directory object and
- that lock is acquired after all other locks.  (Proof: see descriptions
- of operations).
+ (3) locks on non-directory objects are acquired only after locks on
+ directory objects, and are acquired in inode pointer order.
+ (Proof: all operations but renames take lock on at most one
+ non-directory object, except renames, which take locks on source and
+ target in inode pointer order in the case they are not directories.)
 
  Now consider the minimal deadlock.  Each process is blocked on
  attempt to acquire some lock and already holds at least one lock.  Let's
···
  not contended, since any process blocked on it is not holding any locks.
  Thus all processes are blocked on ->i_mutex.
 
- Non-directory objects are not contended due to (3).  Thus link
- creation can't be a part of deadlock - it can't be blocked on source
- and it means that it doesn't hold any locks.
+ By (3), any process holding a non-directory lock can only be
+ waiting on another non-directory lock with a larger address.  Therefore
+ the process holding the "largest" such lock can always make progress, and
+ non-directory objects are not included in the set of contended locks.
+
+ Thus link creation can't be a part of deadlock - it can't be
+ blocked on source and it means that it doesn't hold any locks.
 
  Any contended object is either held by cross-directory rename or
  has a child that is also contended.  Indeed, suppose that it is held by
+8
Documentation/filesystems/porting
···
  vfs_follow_link has been removed.  Filesystems must use nd_set_link
  from ->follow_link for normal symlinks, or nd_jump_link for magic
  /proc/<pid> style links.
+ --
+ [mandatory]
+ iget5_locked()/ilookup5()/ilookup5_nowait() test() callback used to be
+ called with both ->i_lock and inode_hash_lock held; the former is *not*
+ taken anymore, so verify that your callbacks do not rely on it (none
+ of the in-tree instances did).  inode_hash_lock is still held,
+ of course, so they are still serialized wrt removal from inode hash,
+ as well as wrt set() callback of iget5_locked().
+1 -1
arch/arm64/kernel/signal32.c
···
 	return 0;
  }
 
- int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
  {
 	int err;
 
+4 -8
arch/ia64/kernel/elfcore.c
···
 	return GATE_EHDR->e_phnum;
  }
 
- int elf_core_write_extra_phdrs(struct file *file, loff_t offset, size_t *size,
- 			       unsigned long limit)
+ int elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset)
  {
 	const struct elf_phdr *const gate_phdrs =
 		(const struct elf_phdr *) (GATE_ADDR + GATE_EHDR->e_phoff);
···
 			phdr.p_offset += ofs;
 		}
 		phdr.p_paddr = 0; /* match other core phdrs */
- 		*size += sizeof(phdr);
- 		if (*size > limit || !dump_write(file, &phdr, sizeof(phdr)))
+ 		if (!dump_emit(cprm, &phdr, sizeof(phdr)))
 			return 0;
 	}
 	return 1;
  }
 
- int elf_core_write_extra_data(struct file *file, size_t *size,
- 			      unsigned long limit)
+ int elf_core_write_extra_data(struct coredump_params *cprm)
  {
 	const struct elf_phdr *const gate_phdrs =
 		(const struct elf_phdr *) (GATE_ADDR + GATE_EHDR->e_phoff);
···
 		void *addr = (void *)gate_phdrs[i].p_vaddr;
 		size_t memsz = PAGE_ALIGN(gate_phdrs[i].p_memsz);
 
- 		*size += memsz;
- 		if (*size > limit || !dump_write(file, addr, memsz))
+ 		if (!dump_emit(cprm, addr, memsz))
 			return 0;
 		break;
 	}
+1 -1
arch/ia64/kernel/signal.c
···
  }
 
  int
- copy_siginfo_to_user (siginfo_t __user *to, siginfo_t *from)
+ copy_siginfo_to_user (siginfo_t __user *to, const siginfo_t *from)
  {
 	if (!access_ok(VERIFY_WRITE, to, sizeof(siginfo_t)))
 		return -EFAULT;
+1 -1
arch/mips/kernel/signal32.c
···
 	return ret;
  }
 
- int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
  {
 	int err;
 
+1 -1
arch/parisc/kernel/signal32.c
···
  }
 
  int
- copy_siginfo_to_user32 (compat_siginfo_t __user *to, siginfo_t *from)
+ copy_siginfo_to_user32 (compat_siginfo_t __user *to, const siginfo_t *from)
  {
 	compat_uptr_t addr;
 	compat_int_t val;
+1 -1
arch/parisc/kernel/signal32.h
···
 
  /* ELF32 signal handling */
 
- int copy_siginfo_to_user32 (compat_siginfo_t __user *to, siginfo_t *from);
+ int copy_siginfo_to_user32 (compat_siginfo_t __user *to, const siginfo_t *from);
  int copy_siginfo_from_user32 (siginfo_t *to, compat_siginfo_t __user *from);
 
  /* In a deft move of uber-hackery, we decide to carry the top half of all
+2 -1
arch/powerpc/include/asm/spu.h
···
 
  /* syscalls implemented in spufs */
  struct file;
+ struct coredump_params;
  struct spufs_calls {
 	long (*create_thread)(const char __user *name,
 			unsigned int flags, umode_t mode,
···
 	long (*spu_run)(struct file *filp, __u32 __user *unpc,
 			__u32 __user *ustatus);
 	int (*coredump_extra_notes_size)(void);
- 	int (*coredump_extra_notes_write)(struct file *file, loff_t *foffset);
+ 	int (*coredump_extra_notes_write)(struct coredump_params *cprm);
 	void (*notify_spus_active)(void);
 	struct module *owner;
  };
+1 -1
arch/powerpc/kernel/signal_32.c
···
  #endif
 
  #ifdef CONFIG_PPC64
- int copy_siginfo_to_user32(struct compat_siginfo __user *d, siginfo_t *s)
+ int copy_siginfo_to_user32(struct compat_siginfo __user *d, const siginfo_t *s)
  {
 	int err;
 
+3 -2
arch/powerpc/platforms/cell/spu_syscalls.c
···
  #include <linux/module.h>
  #include <linux/syscalls.h>
  #include <linux/rcupdate.h>
+ #include <linux/binfmts.h>
 
  #include <asm/spu.h>
 
···
 	return ret;
  }
 
- int elf_coredump_extra_notes_write(struct file *file, loff_t *foffset)
+ int elf_coredump_extra_notes_write(struct coredump_params *cprm)
  {
 	struct spufs_calls *calls;
 	int ret;
···
 	if (!calls)
 		return 0;
 
- 	ret = calls->coredump_extra_notes_write(file, foffset);
+ 	ret = calls->coredump_extra_notes_write(cprm);
 
 	spufs_calls_put(calls);
 
+25 -64
arch/powerpc/platforms/cell/spufs/coredump.c
···
  #include <linux/gfp.h>
  #include <linux/list.h>
  #include <linux/syscalls.h>
+ #include <linux/coredump.h>
+ #include <linux/binfmts.h>
 
  #include <asm/uaccess.h>
 
···
 	if (ret >= size)
 		return size;
 	return ++ret; /* count trailing NULL */
  }
-
- /*
-  * These are the only things you should do on a core-file: use only these
-  * functions to write out all the necessary info.
-  */
- static int spufs_dump_write(struct file *file, const void *addr, int nr, loff_t *foffset)
- {
- 	unsigned long limit = rlimit(RLIMIT_CORE);
- 	ssize_t written;
-
- 	if (*foffset + nr > limit)
- 		return -EIO;
-
- 	written = file->f_op->write(file, addr, nr, &file->f_pos);
- 	*foffset += written;
-
- 	if (written != nr)
- 		return -EIO;
-
- 	return 0;
- }
-
- static int spufs_dump_align(struct file *file, char *buf, loff_t new_off,
- 			    loff_t *foffset)
- {
- 	int rc, size;
-
- 	size = min((loff_t)PAGE_SIZE, new_off - *foffset);
- 	memset(buf, 0, size);
-
- 	rc = 0;
- 	while (rc == 0 && new_off > *foffset) {
- 		size = min((loff_t)PAGE_SIZE, new_off - *foffset);
- 		rc = spufs_dump_write(file, buf, size, foffset);
- 	}
-
- 	return rc;
- }
 
  static int spufs_ctx_note_size(struct spu_context *ctx, int dfd)
···
  }
 
  static int spufs_arch_write_note(struct spu_context *ctx, int i,
- 				 struct file *file, int dfd, loff_t *foffset)
+ 				 struct coredump_params *cprm, int dfd)
  {
 	loff_t pos = 0;
- 	int sz, rc, nread, total = 0;
+ 	int sz, rc, total = 0;
 	const int bufsz = PAGE_SIZE;
 	char *name;
 	char fullname[80], *buf;
···
 	en.n_descsz = sz;
 	en.n_type = NT_SPU;
 
- 	rc = spufs_dump_write(file, &en, sizeof(en), foffset);
- 	if (rc)
- 		goto out;
+ 	if (!dump_emit(cprm, &en, sizeof(en)))
+ 		goto Eio;
 
- 	rc = spufs_dump_write(file, fullname, en.n_namesz, foffset);
- 	if (rc)
- 		goto out;
+ 	if (!dump_emit(cprm, fullname, en.n_namesz))
+ 		goto Eio;
 
- 	rc = spufs_dump_align(file, buf, roundup(*foffset, 4), foffset);
- 	if (rc)
- 		goto out;
+ 	if (!dump_align(cprm, 4))
+ 		goto Eio;
 
 	do {
- 		nread = do_coredump_read(i, ctx, buf, bufsz, &pos);
- 		if (nread > 0) {
- 			rc = spufs_dump_write(file, buf, nread, foffset);
- 			if (rc)
- 				goto out;
- 			total += nread;
+ 		rc = do_coredump_read(i, ctx, buf, bufsz, &pos);
+ 		if (rc > 0) {
+ 			if (!dump_emit(cprm, buf, rc))
+ 				goto Eio;
+ 			total += rc;
 		}
- 	} while (nread == bufsz && total < sz);
+ 	} while (rc == bufsz && total < sz);
 
- 	if (nread < 0) {
- 		rc = nread;
+ 	if (rc < 0)
 		goto out;
- 	}
 
- 	rc = spufs_dump_align(file, buf, roundup(*foffset - total + sz, 4),
- 			      foffset);
-
+ 	if (!dump_skip(cprm,
+ 		       roundup(cprm->written - total + sz, 4) - cprm->written))
+ 		goto Eio;
  out:
 	free_page((unsigned long)buf);
 	return rc;
+ Eio:
+ 	free_page((unsigned long)buf);
+ 	return -EIO;
  }
 
- int spufs_coredump_extra_notes_write(struct file *file, loff_t *foffset)
+ int spufs_coredump_extra_notes_write(struct coredump_params *cprm)
  {
 	struct spu_context *ctx;
 	int fd, j, rc;
···
 		return rc;
 
 	for (j = 0; spufs_coredump_read[j].name != NULL; j++) {
- 		rc = spufs_arch_write_note(ctx, j, file, fd, foffset);
+ 		rc = spufs_arch_write_note(ctx, j, cprm, fd);
 		if (rc) {
 			spu_release_saved(ctx);
 			return rc;
+2 -1
arch/powerpc/platforms/cell/spufs/spufs.h
···
 
  /* system call implementation */
  extern struct spufs_calls spufs_calls;
+ struct coredump_params;
  long spufs_run_spu(struct spu_context *ctx, u32 *npc, u32 *status);
  long spufs_create(struct path *nd, struct dentry *dentry, unsigned int flags,
 		umode_t mode, struct file *filp);
  /* ELF coredump callbacks for writing SPU ELF notes */
  extern int spufs_coredump_extra_notes_size(void);
- extern int spufs_coredump_extra_notes_write(struct file *file, loff_t *foffset);
+ extern int spufs_coredump_extra_notes_write(struct coredump_params *cprm);
 
  extern const struct file_operations spufs_context_fops;
 
+1 -1
arch/s390/kernel/compat_signal.c
···
 	__u32 gprs_high[NUM_GPRS];
  } rt_sigframe32;
 
- int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
  {
 	int err;
 
+1 -1
arch/sparc/kernel/signal32.c
···
 	/* __siginfo_rwin_t * */u32 rwin_save;
  } __attribute__((aligned(8)));
 
- int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
  {
 	int err;
 
+1 -1
arch/tile/kernel/compat_signal.c
···
 	struct compat_ucontext uc;
  };
 
- int copy_siginfo_to_user32(struct compat_siginfo __user *to, siginfo_t *from)
+ int copy_siginfo_to_user32(struct compat_siginfo __user *to, const siginfo_t *from)
  {
 	int err;
 
+42 -44
arch/x86/ia32/ia32_aout.c
···
  #include <linux/personality.h>
  #include <linux/init.h>
  #include <linux/jiffies.h>
+ #include <linux/perf_event.h>
 
  #include <asm/uaccess.h>
  #include <asm/pgalloc.h>
···
  #include <asm/ia32.h>
 
  #undef WARN_OLD
- #undef CORE_DUMP /* definitely broken */
 
  static int load_aout_binary(struct linux_binprm *);
  static int load_aout_library(struct file *);
 
- #ifdef CORE_DUMP
- static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file,
- 			  unsigned long limit);
+ #ifdef CONFIG_COREDUMP
+ static int aout_core_dump(struct coredump_params *);
+
+ static unsigned long get_dr(int n)
+ {
+ 	struct perf_event *bp = current->thread.ptrace_bps[n];
+ 	return bp ? bp->hw.info.address : 0;
+ }
 
  /*
   * fill in the user structure for a core dump..
···
  static void dump_thread32(struct pt_regs *regs, struct user32 *dump)
  {
 	u32 fs, gs;
+ 	memset(dump, 0, sizeof(*dump));
 
  /* changed the size calculations - should hopefully work better. lbt */
 	dump->magic = CMAGIC;
···
 	dump->u_dsize = ((unsigned long)
 			 (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
 	dump->u_dsize -= dump->u_tsize;
- 	dump->u_ssize = 0;
- 	dump->u_debugreg[0] = current->thread.debugreg0;
- 	dump->u_debugreg[1] = current->thread.debugreg1;
- 	dump->u_debugreg[2] = current->thread.debugreg2;
- 	dump->u_debugreg[3] = current->thread.debugreg3;
- 	dump->u_debugreg[4] = 0;
- 	dump->u_debugreg[5] = 0;
+ 	dump->u_debugreg[0] = get_dr(0);
+ 	dump->u_debugreg[1] = get_dr(1);
+ 	dump->u_debugreg[2] = get_dr(2);
+ 	dump->u_debugreg[3] = get_dr(3);
 	dump->u_debugreg[6] = current->thread.debugreg6;
- 	dump->u_debugreg[7] = current->thread.debugreg7;
+ 	dump->u_debugreg[7] = current->thread.ptrace_dr7;
 
 	if (dump->start_stack < 0xc0000000) {
 		unsigned long tmp;
···
 		dump->u_ssize = tmp >> PAGE_SHIFT;
 	}
 
- 	dump->regs.bx = regs->bx;
- 	dump->regs.cx = regs->cx;
- 	dump->regs.dx = regs->dx;
- 	dump->regs.si = regs->si;
- 	dump->regs.di = regs->di;
- 	dump->regs.bp = regs->bp;
- 	dump->regs.ax = regs->ax;
+ 	dump->regs.ebx = regs->bx;
+ 	dump->regs.ecx = regs->cx;
+ 	dump->regs.edx = regs->dx;
+ 	dump->regs.esi = regs->si;
+ 	dump->regs.edi = regs->di;
+ 	dump->regs.ebp = regs->bp;
+ 	dump->regs.eax = regs->ax;
 	dump->regs.ds = current->thread.ds;
 	dump->regs.es = current->thread.es;
 	savesegment(fs, fs);
 	dump->regs.fs = fs;
 	savesegment(gs, gs);
 	dump->regs.gs = gs;
- 	dump->regs.orig_ax = regs->orig_ax;
- 	dump->regs.ip = regs->ip;
+ 	dump->regs.orig_eax = regs->orig_ax;
+ 	dump->regs.eip = regs->ip;
 	dump->regs.cs = regs->cs;
- 	dump->regs.flags = regs->flags;
- 	dump->regs.sp = regs->sp;
+ 	dump->regs.eflags = regs->flags;
+ 	dump->regs.esp = regs->sp;
 	dump->regs.ss = regs->ss;
 
  #if 1 /* FIXME */
···
 	.module		= THIS_MODULE,
 	.load_binary	= load_aout_binary,
 	.load_shlib	= load_aout_library,
- #ifdef CORE_DUMP
+ #ifdef CONFIG_COREDUMP
 	.core_dump	= aout_core_dump,
  #endif
 	.min_coredump	= PAGE_SIZE
···
 	vm_brk(start, end - start);
  }
 
- #ifdef CORE_DUMP
+ #ifdef CONFIG_COREDUMP
  /*
   * These are the only things you should do on a core-file: use only these
   * macros to write out all the necessary info.
···
 
  #include <linux/coredump.h>
 
- #define DUMP_WRITE(addr, nr)			     \
- 	if (!dump_write(file, (void *)(addr), (nr))) \
- 		goto end_coredump;
-
- #define DUMP_SEEK(offset)		\
- 	if (!dump_seek(file, offset))	\
- 		goto end_coredump;
-
- #define START_DATA()	(u.u_tsize << PAGE_SHIFT)
+ #define START_DATA(u)	(u.u_tsize << PAGE_SHIFT)
  #define START_STACK(u)	(u.start_stack)
 
  /*
···
   * dumping of the process results in another error..
   */
 
- static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file,
- 			  unsigned long limit)
+ static int aout_core_dump(struct coredump_params *cprm)
  {
 	mm_segment_t fs;
 	int has_dumped = 0;
···
 	has_dumped = 1;
 	strncpy(dump.u_comm, current->comm, sizeof(current->comm));
 	dump.u_ar0 = offsetof(struct user32, regs);
- 	dump.signal = signr;
- 	dump_thread32(regs, &dump);
+ 	dump.signal = cprm->siginfo->si_signo;
+ 	dump_thread32(cprm->regs, &dump);
 
 	/*
 	 * If the size of the dump file exceeds the rlimit, then see
 	 * what would happen if we wrote the stack, but not the data
 	 * area.
 	 */
- 	if ((dump.u_dsize + dump.u_ssize + 1) * PAGE_SIZE > limit)
+ 	if ((dump.u_dsize + dump.u_ssize + 1) * PAGE_SIZE > cprm->limit)
 		dump.u_dsize = 0;
 
 	/* Make sure we have enough room to write the stack and data areas. */
- 	if ((dump.u_ssize + 1) * PAGE_SIZE > limit)
+ 	if ((dump.u_ssize + 1) * PAGE_SIZE > cprm->limit)
 		dump.u_ssize = 0;
 
 	/* make sure we actually have a data and stack area to dump */
···
 
 	set_fs(KERNEL_DS);
 	/* struct user */
- 	DUMP_WRITE(&dump, sizeof(dump));
+ 	if (!dump_emit(cprm, &dump, sizeof(dump)))
+ 		goto end_coredump;
 	/* Now dump all of the user data.  Include malloced stuff as well */
- 	DUMP_SEEK(PAGE_SIZE - sizeof(dump));
+ 	if (!dump_skip(cprm, PAGE_SIZE - sizeof(dump)))
+ 		goto end_coredump;
 	/* now we start writing out the user space info */
 	set_fs(USER_DS);
 	/* Dump the data area */
 	if (dump.u_dsize != 0) {
 		dump_start = START_DATA(dump);
 		dump_size = dump.u_dsize << PAGE_SHIFT;
- 		DUMP_WRITE(dump_start, dump_size);
+ 		if (!dump_emit(cprm, (void *)dump_start, dump_size))
+ 			goto end_coredump;
 	}
 	/* Now prepare to dump the stack area */
 	if (dump.u_ssize != 0) {
 		dump_start = START_STACK(dump);
 		dump_size = dump.u_ssize << PAGE_SHIFT;
- 		DUMP_WRITE(dump_start, dump_size);
+ 		if (!dump_emit(cprm, (void *)dump_start, dump_size))
+ 			goto end_coredump;
 	}
  end_coredump:
 	set_fs(fs);
+1 -1
arch/x86/ia32/ia32_signal.c
···
  #include <asm/sys_ia32.h>
  #include <asm/smap.h>
 
- int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
  {
 	int err = 0;
 	bool ia32 = test_thread_flag(TIF_IA32);
+4 -11
arch/x86/um/elfcore.c
···
 	return vsyscall_ehdr ? (((struct elfhdr *)vsyscall_ehdr)->e_phnum) : 0;
  }
 
- int elf_core_write_extra_phdrs(struct file *file, loff_t offset, size_t *size,
- 			       unsigned long limit)
+ int elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset)
  {
 	if ( vsyscall_ehdr ) {
 		const struct elfhdr *const ehdrp =
···
 			phdr.p_offset += ofs;
 		}
 		phdr.p_paddr = 0; /* match other core phdrs */
- 		*size += sizeof(phdr);
- 		if (*size > limit
- 		    || !dump_write(file, &phdr, sizeof(phdr)))
+ 		if (!dump_emit(cprm, &phdr, sizeof(phdr)))
 			return 0;
 		}
 	}
 	return 1;
  }
 
- int elf_core_write_extra_data(struct file *file, size_t *size,
- 			      unsigned long limit)
+ int elf_core_write_extra_data(struct coredump_params *cprm)
  {
 	if ( vsyscall_ehdr ) {
 		const struct elfhdr *const ehdrp =
···
 		if (phdrp[i].p_type == PT_LOAD) {
 			void *addr = (void *) phdrp[i].p_vaddr;
 			size_t filesz = phdrp[i].p_filesz;
-
- 			*size += filesz;
- 			if (*size > limit
- 			    || !dump_write(file, addr, filesz))
+ 			if (!dump_emit(cprm, addr, filesz))
 				return 0;
 		}
 	}
+3 -3
drivers/base/devtmpfs.c
···
 	newattrs.ia_gid = gid;
 	newattrs.ia_valid = ATTR_MODE|ATTR_UID|ATTR_GID;
 	mutex_lock(&dentry->d_inode->i_mutex);
- 	notify_change(dentry, &newattrs);
+ 	notify_change(dentry, &newattrs, NULL);
 	mutex_unlock(&dentry->d_inode->i_mutex);
 
 	/* mark as kernel-created inode */
···
 		newattrs.ia_valid =
 			ATTR_UID|ATTR_GID|ATTR_MODE;
 		mutex_lock(&dentry->d_inode->i_mutex);
- 		notify_change(dentry, &newattrs);
+ 		notify_change(dentry, &newattrs, NULL);
 		mutex_unlock(&dentry->d_inode->i_mutex);
- 		err = vfs_unlink(parent.dentry->d_inode, dentry);
+ 		err = vfs_unlink(parent.dentry->d_inode, dentry, NULL);
 		if (!err || err == -ENOENT)
 			deleted = 1;
 	}
+3 -9
drivers/char/misc.c
···
 	int minor = iminor(inode);
 	struct miscdevice *c;
 	int err = -ENODEV;
- 	const struct file_operations *old_fops, *new_fops = NULL;
+ 	const struct file_operations *new_fops = NULL;
 
 	mutex_lock(&misc_mtx);
 
···
 	}
 
 	err = 0;
- 	old_fops = file->f_op;
- 	file->f_op = new_fops;
+ 	replace_fops(file, new_fops);
 	if (file->f_op->open) {
 		file->private_data = c;
- 		err=file->f_op->open(inode,file);
- 		if (err) {
- 			fops_put(file->f_op);
- 			file->f_op = fops_get(old_fops);
- 		}
+ 		err = file->f_op->open(inode,file);
 	}
- 	fops_put(old_fops);
  fail:
 	mutex_unlock(&misc_mtx);
 	return err;
+6 -11
drivers/gpu/drm/drm_fops.c
···
 	struct drm_minor *minor;
 	int minor_id = iminor(inode);
 	int err = -ENODEV;
- 	const struct file_operations *old_fops;
+ 	const struct file_operations *new_fops;
 
 	DRM_DEBUG("\n");
 
···
 	if (drm_device_is_unplugged(dev))
 		goto out;
 
- 	old_fops = filp->f_op;
- 	filp->f_op = fops_get(dev->driver->fops);
- 	if (filp->f_op == NULL) {
- 		filp->f_op = old_fops;
+ 	new_fops = fops_get(dev->driver->fops);
+ 	if (!new_fops)
 		goto out;
- 	}
- 	if (filp->f_op->open && (err = filp->f_op->open(inode, filp))) {
- 		fops_put(filp->f_op);
- 		filp->f_op = fops_get(old_fops);
- 	}
- 	fops_put(old_fops);
 
+ 	replace_fops(filp, new_fops);
+ 	if (filp->f_op->open)
+ 		err = filp->f_op->open(inode, filp);
  out:
 	mutex_unlock(&drm_global_mutex);
 	return err;
-4
drivers/media/dvb-core/dmxdev.c
···
 	/* TODO */
 	dvbdev->users--;
 	if (dvbdev->users == 1 && dmxdev->exit == 1) {
- 		fops_put(file->f_op);
- 		file->f_op = NULL;
 		mutex_unlock(&dmxdev->mutex);
 		wake_up(&dvbdev->wait_queue);
 	} else
···
 	mutex_lock(&dmxdev->mutex);
 	dmxdev->dvbdev->users--;
 	if(dmxdev->dvbdev->users==1 && dmxdev->exit==1) {
- 		fops_put(file->f_op);
- 		file->f_op = NULL;
 		mutex_unlock(&dmxdev->mutex);
 		wake_up(&dmxdev->dvbdev->wait_queue);
 	} else
+6 -13
drivers/media/dvb-core/dvbdev.c
···
 
 	if (dvbdev && dvbdev->fops) {
 		int err = 0;
- 		const struct file_operations *old_fops;
+ 		const struct file_operations *new_fops;
 
- 		file->private_data = dvbdev;
- 		old_fops = file->f_op;
- 		file->f_op = fops_get(dvbdev->fops);
- 		if (file->f_op == NULL) {
- 			file->f_op = old_fops;
+ 		new_fops = fops_get(dvbdev->fops);
+ 		if (!new_fops)
 			goto fail;
- 		}
- 		if(file->f_op->open)
+ 		file->private_data = dvbdev;
+ 		replace_fops(file, new_fops);
+ 		if (file->f_op->open)
 			err = file->f_op->open(inode,file);
- 		if (err) {
- 			fops_put(file->f_op);
- 			file->f_op = fops_get(old_fops);
- 		}
- 		fops_put(old_fops);
 		up_read(&minor_rwsem);
 		mutex_unlock(&dvbdev_mutex);
 		return err;
+1 -1
drivers/mtd/nand/nandsim.c
···
 	cfile = filp_open(cache_file, O_CREAT | O_RDWR | O_LARGEFILE, 0600);
 	if (IS_ERR(cfile))
 		return PTR_ERR(cfile);
- 	if (!cfile->f_op || (!cfile->f_op->read && !cfile->f_op->aio_read)) {
+ 	if (!cfile->f_op->read && !cfile->f_op->aio_read) {
 		NS_ERR("alloc_device: cache file not readable\n");
 		err = -EINVAL;
 		goto err_close;
-3
drivers/staging/comedi/comedi_compat32.c
···
  static int translated_ioctl(struct file *file, unsigned int cmd,
 		unsigned long arg)
  {
- 	if (!file->f_op)
- 		return -ENOTTY;
-
 	if (file->f_op->unlocked_ioctl)
 		return file->f_op->unlocked_ioctl(file, cmd, arg);
 
+2 -2
drivers/staging/lustre/lustre/include/linux/lustre_compat25.h
···
  #define ll_vfs_unlink(inode,entry,mnt) vfs_unlink(inode,entry)
  #define ll_vfs_mknod(dir,entry,mnt,mode,dev) vfs_mknod(dir,entry,mode,dev)
  #define ll_security_inode_unlink(dir,entry,mnt) security_inode_unlink(dir,entry)
- #define ll_vfs_rename(old,old_dir,mnt,new,new_dir,mnt1) \
- 	vfs_rename(old,old_dir,new,new_dir)
+ #define ll_vfs_rename(old,old_dir,mnt,new,new_dir,mnt1,delegated_inode) \
+ 	vfs_rename(old,old_dir,new,new_dir,delegated_inode)
 
  #define cfs_bio_io_error(a,b) bio_io_error((a))
  #define cfs_bio_endio(a,b,c) bio_endio((a),(c))
+1 -1
drivers/staging/lustre/lustre/llite/namei.c
···
  }
 
 
- /* called from iget5_locked->find_inode() under inode_lock spinlock */
+ /* called from iget5_locked->find_inode() under inode_hash_lock spinlock */
  static int ll_test_inode(struct inode *inode, void *opaque)
  {
 	struct ll_inode_info *lli = ll_i2info(inode);
+1 -1
drivers/staging/lustre/lustre/lvfs/lvfs_linux.c
···
 		GOTO(put_old, err = PTR_ERR(dchild_new));
 
 	err = ll_vfs_rename(dir->d_inode, dchild_old, mnt,
- 			    dir->d_inode, dchild_new, mnt);
+ 			    dir->d_inode, dchild_new, mnt, NULL);
 
 	dput(dchild_new);
  put_old:
-5
drivers/staging/rtl8188eu/include/osdep_service.h
···
  int ATOMIC_INC_RETURN(ATOMIC_T *v);
  int ATOMIC_DEC_RETURN(ATOMIC_T *v);
 
- /* File operation APIs, just for linux now */
- int rtw_is_file_readable(char *path);
- int rtw_retrive_from_file(char *path, u8 __user *buf, u32 sz);
- int rtw_store_to_file(char *path, u8 __user *buf, u32 sz);
-
  struct rtw_netdev_priv_indicator {
 	void *priv;
 	u32 sizeof_priv;
-208
drivers/staging/rtl8188eu/os_dep/osdep_service.c
···
 	return atomic_dec_return(v);
  }
 
- /* Open a file with the specific @param path, @param flag, @param mode
-  * @param fpp the pointer of struct file pointer to get struct file pointer while file opening is success
-  * @param path the path of the file to open
-  * @param flag file operation flags, please refer to linux document
-  * @param mode please refer to linux document
-  * @return Linux specific error code
-  */
- static int openfile(struct file **fpp, char *path, int flag, int mode)
- {
- 	struct file *fp;
-
- 	fp = filp_open(path, flag, mode);
- 	if (IS_ERR(fp)) {
- 		*fpp = NULL;
- 		return PTR_ERR(fp);
- 	} else {
- 		*fpp = fp;
- 		return 0;
- 	}
- }
-
- /* Close the file with the specific @param fp
-  * @param fp the pointer of struct file to close
-  * @return always 0
-  */
- static int closefile(struct file *fp)
- {
- 	filp_close(fp, NULL);
- 	return 0;
- }
-
- static int readfile(struct file *fp, char __user *buf, int len)
- {
- 	int rlen = 0, sum = 0;
-
- 	if (!fp->f_op || !fp->f_op->read)
- 		return -EPERM;
-
- 	while (sum < len) {
- 		rlen = fp->f_op->read(fp, buf+sum, len-sum, &fp->f_pos);
- 		if (rlen > 0)
- 			sum += rlen;
- 		else if (0 != rlen)
- 			return rlen;
- 		else
- 			break;
- 	}
- 	return sum;
- }
-
- static int writefile(struct file *fp, char __user *buf, int len)
- {
- 	int wlen = 0, sum = 0;
-
- 	if (!fp->f_op || !fp->f_op->write)
- 		return -EPERM;
-
- 	while (sum < len) {
- 		wlen = fp->f_op->write(fp, buf+sum, len-sum, &fp->f_pos);
- 		if (wlen > 0)
- 			sum += wlen;
- 		else if (0 != wlen)
- 			return wlen;
- 		else
- 			break;
- 	}
- 	return sum;
- }
-
- /* Test if the specifi @param path is a file and readable
-  * @param path the path of the file to test
-  * @return Linux specific error code
-  */
- static int isfilereadable(char *path)
- {
- 	struct file *fp;
- 	int ret = 0;
- 	mm_segment_t oldfs;
- 	char __user buf;
-
- 	fp = filp_open(path, O_RDONLY, 0);
- 	if (IS_ERR(fp)) {
- 		ret = PTR_ERR(fp);
- 	} else {
- 		oldfs = get_fs(); set_fs(get_ds());
-
- 		if (1 != readfile(fp, &buf, 1))
- 			ret = PTR_ERR(fp);
-
- 		set_fs(oldfs);
- 		filp_close(fp, NULL);
- 	}
- 	return ret;
- }
-
- /* Open the file with @param path and retrive the file content into
-  * memory starting from @param buf for @param sz at most
-  * @param path the path of the file to open and read
-  * @param buf the starting address of the buffer to store file content
-  * @param sz how many bytes to read at most
-  * @return the byte we've read, or Linux specific error code
-  */
- static int retrievefromfile(char *path, u8 __user *buf, u32 sz)
- {
- 	int ret = -1;
- 	mm_segment_t oldfs;
- 	struct file *fp;
-
- 	if (path && buf) {
- 		ret = openfile(&fp, path, O_RDONLY, 0);
- 		if (0 == ret) {
- 			DBG_88E("%s openfile path:%s fp =%p\n", __func__,
- 				path, fp);
-
- 			oldfs = get_fs(); set_fs(get_ds());
- 			ret = readfile(fp, buf, sz);
- 			set_fs(oldfs);
- 			closefile(fp);
-
- 			DBG_88E("%s readfile, ret:%d\n", __func__, ret);
-
- 		} else {
- 			DBG_88E("%s openfile path:%s Fail, ret:%d\n", __func__,
- 				path, ret);
- 		}
- 	} else {
- 		DBG_88E("%s NULL pointer\n", __func__);
- 		ret = -EINVAL;
- 	}
- 	return ret;
- }
-
- /*
-  * Open the file with @param path and wirte @param sz byte of data starting from @param buf into the file
-  * @param path the path of the file to open and write
-  * @param buf the starting address of the data to write into file
-  * @param sz how many bytes to write at most
-  * @return the byte we've written, or Linux specific error code
-  */
- static int storetofile(char *path, u8 __user *buf, u32 sz)
- {
- 	int ret = 0;
- 	mm_segment_t oldfs;
- 	struct file *fp;
-
- 	if (path && buf) {
- 		ret = openfile(&fp, path, O_CREAT|O_WRONLY, 0666);
- 		if (0 == ret) {
- 			DBG_88E("%s openfile path:%s fp =%p\n", __func__, path, fp);
-
- 			oldfs = get_fs(); set_fs(get_ds());
- 			ret = writefile(fp, buf, sz);
- 			set_fs(oldfs);
- 			closefile(fp);
-
- 			DBG_88E("%s writefile, ret:%d\n", __func__, ret);
-
- 		} else {
- 			DBG_88E("%s openfile path:%s Fail, ret:%d\n", __func__, path, ret);
- 		}
- 	} else {
- 		DBG_88E("%s NULL pointer\n", __func__);
- 		ret = -EINVAL;
- 	}
- 	return ret;
- }
-
- /*
-  * Test if the specifi @param path is a file and readable
-  * @param path the path of the file to test
-  * @return true or false
-  */
- int rtw_is_file_readable(char *path)
- {
- 	if (isfilereadable(path) == 0)
- 		return true;
- 	else
- 		return false;
- }
-
- /*
-  * Open the file with @param path and retrive the file content into memory starting from @param buf for @param sz at most
-  * @param path the path of the file to open and read
-  * @param buf the starting address of the buffer to store file content
-  * @param sz how many bytes to read at most
-  * @return the byte we've read
-  */
- int rtw_retrive_from_file(char *path, u8 __user *buf, u32 sz)
- {
- 	int ret = retrievefromfile(path, buf, sz);
-
- 	return ret >= 0 ? ret : 0;
- }
-
- /*
-  * Open the file with @param path and wirte @param sz byte of data
-  * starting from @param buf into the file
-  * @param path the path of the file to open and write
-  * @param buf the starting address of the data to write into file
-  * @param sz how many bytes to write at most
-  * @return the byte we've written
-  */
- int rtw_store_to_file(char *path, u8 __user *buf, u32 sz)
- {
- 	int ret = storetofile(path, buf, sz);
- 	return ret >= 0 ? ret : 0;
- }
-
  struct net_device *rtw_alloc_etherdev_with_old_priv(int sizeof_priv,
 							void *old_priv)
  {
+4 -12
drivers/usb/core/file.c
··· 29 29 30 30 static int usb_open(struct inode *inode, struct file *file) 31 31 { 32 - int minor = iminor(inode); 33 - const struct file_operations *c; 34 32 int err = -ENODEV; 35 - const struct file_operations *old_fops, *new_fops = NULL; 33 + const struct file_operations *new_fops; 36 34 37 35 down_read(&minor_rwsem); 38 - c = usb_minors[minor]; 36 + new_fops = fops_get(usb_minors[iminor(inode)]); 39 37 40 - if (!c || !(new_fops = fops_get(c))) 38 + if (!new_fops) 41 39 goto done; 42 40 43 - old_fops = file->f_op; 44 - file->f_op = new_fops; 41 + replace_fops(file, new_fops); 45 42 /* Curiouser and curiouser... NULL ->open() as "no device" ? */ 46 43 if (file->f_op->open) 47 44 err = file->f_op->open(inode, file); 48 - if (err) { 49 - fops_put(file->f_op); 50 - file->f_op = fops_get(old_fops); 51 - } 52 - fops_put(old_fops); 53 45 done: 54 46 up_read(&minor_rwsem); 55 47 return err;
+12
fs/9p/cache.h
··· 101 101 102 102 #else /* CONFIG_9P_FSCACHE */ 103 103 104 + static inline void v9fs_cache_inode_get_cookie(struct inode *inode) 105 + { 106 + } 107 + 108 + static inline void v9fs_cache_inode_put_cookie(struct inode *inode) 109 + { 110 + } 111 + 112 + static inline void v9fs_cache_inode_set_cookie(struct inode *inode, struct file *file) 113 + { 114 + } 115 + 104 116 static inline int v9fs_fscache_release_page(struct page *page, 105 117 gfp_t gfp) { 106 118 return 1;
-2
fs/9p/vfs_file.c
··· 105 105 v9inode->writeback_fid = (void *) fid; 106 106 } 107 107 mutex_unlock(&v9inode->v_mutex); 108 - #ifdef CONFIG_9P_FSCACHE 109 108 if (v9ses->cache) 110 109 v9fs_cache_inode_set_cookie(inode, file); 111 - #endif 112 110 return 0; 113 111 out_error: 114 112 p9_client_clunk(file->private_data);
-6
fs/9p/vfs_inode.c
··· 448 448 clear_inode(inode); 449 449 filemap_fdatawrite(inode->i_mapping); 450 450 451 - #ifdef CONFIG_9P_FSCACHE 452 451 v9fs_cache_inode_put_cookie(inode); 453 - #endif 454 452 /* clunk the fid stashed in writeback_fid */ 455 453 if (v9inode->writeback_fid) { 456 454 p9_client_clunk(v9inode->writeback_fid); ··· 529 531 goto error; 530 532 531 533 v9fs_stat2inode(st, inode, sb); 532 - #ifdef CONFIG_9P_FSCACHE 533 534 v9fs_cache_inode_get_cookie(inode); 534 - #endif 535 535 unlock_new_inode(inode); 536 536 return inode; 537 537 error: ··· 901 905 goto error; 902 906 903 907 file->private_data = fid; 904 - #ifdef CONFIG_9P_FSCACHE 905 908 if (v9ses->cache) 906 909 v9fs_cache_inode_set_cookie(dentry->d_inode, file); 907 - #endif 908 910 909 911 *opened |= FILE_CREATED; 910 912 out:
-4
fs/9p/vfs_inode_dotl.c
··· 141 141 goto error; 142 142 143 143 v9fs_stat2inode_dotl(st, inode); 144 - #ifdef CONFIG_9P_FSCACHE 145 144 v9fs_cache_inode_get_cookie(inode); 146 - #endif 147 145 retval = v9fs_get_acl(inode, fid); 148 146 if (retval) 149 147 goto error; ··· 353 355 if (err) 354 356 goto err_clunk_old_fid; 355 357 file->private_data = ofid; 356 - #ifdef CONFIG_9P_FSCACHE 357 358 if (v9ses->cache) 358 359 v9fs_cache_inode_set_cookie(inode, file); 359 - #endif 360 360 *opened |= FILE_CREATED; 361 361 out: 362 362 v9fs_put_acl(dacl, pacl);
+6 -3
fs/adfs/adfs.h
··· 43 43 * ADFS file system superblock data in memory 44 44 */ 45 45 struct adfs_sb_info { 46 - struct adfs_discmap *s_map; /* bh list containing map */ 47 - struct adfs_dir_ops *s_dir; /* directory operations */ 48 - 46 + union { struct { 47 + struct adfs_discmap *s_map; /* bh list containing map */ 48 + struct adfs_dir_ops *s_dir; /* directory operations */ 49 + }; 50 + struct rcu_head rcu; /* used only at shutdown time */ 51 + }; 49 52 kuid_t s_uid; /* owner uid */ 50 53 kgid_t s_gid; /* owner gid */ 51 54 umode_t s_owner_mask; /* ADFS owner perm -> unix perm */
+1 -2
fs/adfs/super.c
··· 123 123 for (i = 0; i < asb->s_map_size; i++) 124 124 brelse(asb->s_map[i].dm_bh); 125 125 kfree(asb->s_map); 126 - kfree(asb); 127 - sb->s_fs_info = NULL; 126 + kfree_rcu(asb, rcu); 128 127 } 129 128 130 129 static int adfs_show_options(struct seq_file *seq, struct dentry *root)
+57 -6
fs/aio.c
··· 36 36 #include <linux/eventfd.h> 37 37 #include <linux/blkdev.h> 38 38 #include <linux/compat.h> 39 - #include <linux/anon_inodes.h> 40 39 #include <linux/migrate.h> 41 40 #include <linux/ramfs.h> 42 41 #include <linux/percpu-refcount.h> 42 + #include <linux/mount.h> 43 43 44 44 #include <asm/kmap_types.h> 45 45 #include <asm/uaccess.h> ··· 152 152 static struct kmem_cache *kiocb_cachep; 153 153 static struct kmem_cache *kioctx_cachep; 154 154 155 + static struct vfsmount *aio_mnt; 156 + 157 + static const struct file_operations aio_ring_fops; 158 + static const struct address_space_operations aio_ctx_aops; 159 + 160 + static struct file *aio_private_file(struct kioctx *ctx, loff_t nr_pages) 161 + { 162 + struct qstr this = QSTR_INIT("[aio]", 5); 163 + struct file *file; 164 + struct path path; 165 + struct inode *inode = alloc_anon_inode(aio_mnt->mnt_sb); 166 + if (!inode) 167 + return ERR_PTR(-ENOMEM); 168 + 169 + inode->i_mapping->a_ops = &aio_ctx_aops; 170 + inode->i_mapping->private_data = ctx; 171 + inode->i_size = PAGE_SIZE * nr_pages; 172 + 173 + path.dentry = d_alloc_pseudo(aio_mnt->mnt_sb, &this); 174 + if (!path.dentry) { 175 + iput(inode); 176 + return ERR_PTR(-ENOMEM); 177 + } 178 + path.mnt = mntget(aio_mnt); 179 + 180 + d_instantiate(path.dentry, inode); 181 + file = alloc_file(&path, FMODE_READ | FMODE_WRITE, &aio_ring_fops); 182 + if (IS_ERR(file)) { 183 + path_put(&path); 184 + return file; 185 + } 186 + 187 + file->f_flags = O_RDWR; 188 + file->private_data = ctx; 189 + return file; 190 + } 191 + 192 + static struct dentry *aio_mount(struct file_system_type *fs_type, 193 + int flags, const char *dev_name, void *data) 194 + { 195 + static const struct dentry_operations ops = { 196 + .d_dname = simple_dname, 197 + }; 198 + return mount_pseudo(fs_type, "aio:", NULL, &ops, 0xa10a10a1); 199 + } 200 + 155 201 /* aio_setup 156 202 * Creates the slab caches used by the aio routines, panic on 157 203 * failure as this is done early during the boot 
sequence. 158 204 */ 159 205 static int __init aio_setup(void) 160 206 { 207 + static struct file_system_type aio_fs = { 208 + .name = "aio", 209 + .mount = aio_mount, 210 + .kill_sb = kill_anon_super, 211 + }; 212 + aio_mnt = kern_mount(&aio_fs); 213 + if (IS_ERR(aio_mnt)) 214 + panic("Failed to create aio fs mount."); 215 + 161 216 kiocb_cachep = KMEM_CACHE(kiocb, SLAB_HWCACHE_ALIGN|SLAB_PANIC); 162 217 kioctx_cachep = KMEM_CACHE(kioctx,SLAB_HWCACHE_ALIGN|SLAB_PANIC); 163 218 ··· 338 283 if (nr_pages < 0) 339 284 return -EINVAL; 340 285 341 - file = anon_inode_getfile_private("[aio]", &aio_ring_fops, ctx, O_RDWR); 286 + file = aio_private_file(ctx, nr_pages); 342 287 if (IS_ERR(file)) { 343 288 ctx->aio_ring_file = NULL; 344 289 return -EAGAIN; 345 290 } 346 - 347 - file->f_inode->i_mapping->a_ops = &aio_ctx_aops; 348 - file->f_inode->i_mapping->private_data = ctx; 349 - file->f_inode->i_size = PAGE_SIZE * (loff_t)nr_pages; 350 291 351 292 for (i = 0; i < nr_pages; i++) { 352 293 struct page *page;
+1 -113
fs/anon_inodes.c
··· 24 24 25 25 static struct vfsmount *anon_inode_mnt __read_mostly; 26 26 static struct inode *anon_inode_inode; 27 - static const struct file_operations anon_inode_fops; 28 27 29 28 /* 30 29 * anon_inodefs_dname() is called from d_path(). ··· 38 39 .d_dname = anon_inodefs_dname, 39 40 }; 40 41 41 - /* 42 - * nop .set_page_dirty method so that people can use .page_mkwrite on 43 - * anon inodes. 44 - */ 45 - static int anon_set_page_dirty(struct page *page) 46 - { 47 - return 0; 48 - }; 49 - 50 - static const struct address_space_operations anon_aops = { 51 - .set_page_dirty = anon_set_page_dirty, 52 - }; 53 - 54 - /* 55 - * A single inode exists for all anon_inode files. Contrary to pipes, 56 - * anon_inode inodes have no associated per-instance data, so we need 57 - * only allocate one of them. 58 - */ 59 - static struct inode *anon_inode_mkinode(struct super_block *s) 60 - { 61 - struct inode *inode = new_inode_pseudo(s); 62 - 63 - if (!inode) 64 - return ERR_PTR(-ENOMEM); 65 - 66 - inode->i_ino = get_next_ino(); 67 - inode->i_fop = &anon_inode_fops; 68 - 69 - inode->i_mapping->a_ops = &anon_aops; 70 - 71 - /* 72 - * Mark the inode dirty from the very beginning, 73 - * that way it will never be moved to the dirty 74 - * list because mark_inode_dirty() will think 75 - * that it already _is_ on the dirty list. 
76 - */ 77 - inode->i_state = I_DIRTY; 78 - inode->i_mode = S_IRUSR | S_IWUSR; 79 - inode->i_uid = current_fsuid(); 80 - inode->i_gid = current_fsgid(); 81 - inode->i_flags |= S_PRIVATE; 82 - inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; 83 - return inode; 84 - } 85 - 86 42 static struct dentry *anon_inodefs_mount(struct file_system_type *fs_type, 87 43 int flags, const char *dev_name, void *data) 88 44 { ··· 46 92 &anon_inodefs_dentry_operations, ANON_INODE_FS_MAGIC); 47 93 if (!IS_ERR(root)) { 48 94 struct super_block *s = root->d_sb; 49 - anon_inode_inode = anon_inode_mkinode(s); 95 + anon_inode_inode = alloc_anon_inode(s); 50 96 if (IS_ERR(anon_inode_inode)) { 51 97 dput(root); 52 98 deactivate_locked_super(s); ··· 61 107 .mount = anon_inodefs_mount, 62 108 .kill_sb = kill_anon_super, 63 109 }; 64 - 65 - /** 66 - * anon_inode_getfile_private - creates a new file instance by hooking it up to an 67 - * anonymous inode, and a dentry that describe the "class" 68 - * of the file 69 - * 70 - * @name: [in] name of the "class" of the new file 71 - * @fops: [in] file operations for the new file 72 - * @priv: [in] private data for the new file (will be file's private_data) 73 - * @flags: [in] flags 74 - * 75 - * 76 - * Similar to anon_inode_getfile, but each file holds a single inode. 77 - * 78 - */ 79 - struct file *anon_inode_getfile_private(const char *name, 80 - const struct file_operations *fops, 81 - void *priv, int flags) 82 - { 83 - struct qstr this; 84 - struct path path; 85 - struct file *file; 86 - struct inode *inode; 87 - 88 - if (fops->owner && !try_module_get(fops->owner)) 89 - return ERR_PTR(-ENOENT); 90 - 91 - inode = anon_inode_mkinode(anon_inode_mnt->mnt_sb); 92 - if (IS_ERR(inode)) { 93 - file = ERR_PTR(-ENOMEM); 94 - goto err_module; 95 - } 96 - 97 - /* 98 - * Link the inode to a directory entry by creating a unique name 99 - * using the inode sequence number. 
100 - */ 101 - file = ERR_PTR(-ENOMEM); 102 - this.name = name; 103 - this.len = strlen(name); 104 - this.hash = 0; 105 - path.dentry = d_alloc_pseudo(anon_inode_mnt->mnt_sb, &this); 106 - if (!path.dentry) 107 - goto err_module; 108 - 109 - path.mnt = mntget(anon_inode_mnt); 110 - 111 - d_instantiate(path.dentry, inode); 112 - 113 - file = alloc_file(&path, OPEN_FMODE(flags), fops); 114 - if (IS_ERR(file)) 115 - goto err_dput; 116 - 117 - file->f_mapping = inode->i_mapping; 118 - file->f_flags = flags & (O_ACCMODE | O_NONBLOCK); 119 - file->private_data = priv; 120 - 121 - return file; 122 - 123 - err_dput: 124 - path_put(&path); 125 - err_module: 126 - module_put(fops->owner); 127 - return file; 128 - } 129 - EXPORT_SYMBOL_GPL(anon_inode_getfile_private); 130 110 131 111 /** 132 112 * anon_inode_getfile - creates a new file instance by hooking it up to an
+24 -1
fs/attr.c
··· 167 167 }
168 168 EXPORT_SYMBOL(setattr_copy);
169 169 
170 - int notify_change(struct dentry * dentry, struct iattr * attr)
170 + /**
171 + * notify_change - modify attributes of a filesystem object
172 + * @dentry: object affected
173 + * @iattr: new attributes
174 + * @delegated_inode: returns inode, if the inode is delegated
175 + *
176 + * The caller must hold the i_mutex on the affected object.
177 + *
178 + * If notify_change discovers a delegation in need of breaking,
179 + * it will return -EWOULDBLOCK and return a reference to the inode in
180 + * delegated_inode. The caller should then break the delegation and
181 + * retry. Because breaking a delegation may take a long time, the
182 + * caller should drop the i_mutex before doing so.
183 + *
184 + * Alternatively, a caller may pass NULL for delegated_inode. This may
185 + * be appropriate for callers that expect the underlying filesystem not
186 + * to be NFS exported. Also, passing NULL is fine for callers holding
187 + * the file open for write, as there can be no conflicting delegation in
188 + * that case.
189 + */
190 + int notify_change(struct dentry * dentry, struct iattr * attr, struct inode **delegated_inode)
171 191 {
172 192 struct inode *inode = dentry->d_inode;
173 193 umode_t mode = inode->i_mode;
··· 261 241 return 0;
262 242 
263 243 error = security_inode_setattr(dentry, attr);
244 + if (error)
245 + return error;
246 + error = try_break_deleg(inode, delegated_inode);
264 247 if (error)
265 248 return error;
266 249
+2 -1
fs/autofs4/autofs_i.h
··· 122 122 spinlock_t lookup_lock; 123 123 struct list_head active_list; 124 124 struct list_head expiring_list; 125 + struct rcu_head rcu; 125 126 }; 126 127 127 128 static inline struct autofs_sb_info *autofs4_sbi(struct super_block *sb) ··· 272 271 273 272 static inline int autofs_prepare_pipe(struct file *pipe) 274 273 { 275 - if (!pipe->f_op || !pipe->f_op->write) 274 + if (!pipe->f_op->write) 276 275 return -EINVAL; 277 276 if (!S_ISFIFO(file_inode(pipe)->i_mode)) 278 277 return -EINVAL;
-6
fs/autofs4/dev-ioctl.c
··· 658 658 goto out; 659 659 } 660 660 661 - if (!fp->f_op) { 662 - err = -ENOTTY; 663 - fput(fp); 664 - goto out; 665 - } 666 - 667 661 sbi = autofs_dev_ioctl_sbi(fp); 668 662 if (!sbi || sbi->magic != AUTOFS_SBI_MAGIC) { 669 663 err = -EINVAL;
+4 -9
fs/autofs4/inode.c
··· 56 56 * just call kill_anon_super when we are called from 57 57 * deactivate_super. 58 58 */ 59 - if (!sbi) 60 - goto out_kill_sb; 59 + if (sbi) /* Free wait queues, close pipe */ 60 + autofs4_catatonic_mode(sbi); 61 61 62 - /* Free wait queues, close pipe */ 63 - autofs4_catatonic_mode(sbi); 64 - 65 - sb->s_fs_info = NULL; 66 - kfree(sbi); 67 - 68 - out_kill_sb: 69 62 DPRINTK("shutting down"); 70 63 kill_litter_super(sb); 64 + if (sbi) 65 + kfree_rcu(sbi, rcu); 71 66 } 72 67 73 68 static int autofs4_show_options(struct seq_file *m, struct dentry *root)
+31 -30
fs/befs/linuxvfs.c
··· 42 42 static int befs_init_inodecache(void); 43 43 static void befs_destroy_inodecache(void); 44 44 static void *befs_follow_link(struct dentry *, struct nameidata *); 45 - static void befs_put_link(struct dentry *, struct nameidata *, void *); 45 + static void *befs_fast_follow_link(struct dentry *, struct nameidata *); 46 46 static int befs_utf2nls(struct super_block *sb, const char *in, int in_len, 47 47 char **out, int *out_len); 48 48 static int befs_nls2utf(struct super_block *sb, const char *in, int in_len, ··· 79 79 .bmap = befs_bmap, 80 80 }; 81 81 82 + static const struct inode_operations befs_fast_symlink_inode_operations = { 83 + .readlink = generic_readlink, 84 + .follow_link = befs_fast_follow_link, 85 + }; 86 + 82 87 static const struct inode_operations befs_symlink_inode_operations = { 83 88 .readlink = generic_readlink, 84 89 .follow_link = befs_follow_link, 85 - .put_link = befs_put_link, 90 + .put_link = kfree_put_link, 86 91 }; 87 92 88 93 /* ··· 416 411 inode->i_op = &befs_dir_inode_operations; 417 412 inode->i_fop = &befs_dir_operations; 418 413 } else if (S_ISLNK(inode->i_mode)) { 419 - inode->i_op = &befs_symlink_inode_operations; 414 + if (befs_ino->i_flags & BEFS_LONG_SYMLINK) 415 + inode->i_op = &befs_symlink_inode_operations; 416 + else 417 + inode->i_op = &befs_fast_symlink_inode_operations; 420 418 } else { 421 419 befs_error(sb, "Inode %lu is not a regular file, " 422 420 "directory or symlink. THAT IS WRONG! 
BeFS has no " ··· 485 477 static void * 486 478 befs_follow_link(struct dentry *dentry, struct nameidata *nd) 487 479 { 480 + struct super_block *sb = dentry->d_sb; 488 481 befs_inode_info *befs_ino = BEFS_I(dentry->d_inode); 482 + befs_data_stream *data = &befs_ino->i_data.ds; 483 + befs_off_t len = data->size; 489 484 char *link; 490 485 491 - if (befs_ino->i_flags & BEFS_LONG_SYMLINK) { 492 - struct super_block *sb = dentry->d_sb; 493 - befs_data_stream *data = &befs_ino->i_data.ds; 494 - befs_off_t len = data->size; 486 + if (len == 0) { 487 + befs_error(sb, "Long symlink with illegal length"); 488 + link = ERR_PTR(-EIO); 489 + } else { 490 + befs_debug(sb, "Follow long symlink"); 495 491 496 - if (len == 0) { 497 - befs_error(sb, "Long symlink with illegal length"); 492 + link = kmalloc(len, GFP_NOFS); 493 + if (!link) { 494 + link = ERR_PTR(-ENOMEM); 495 + } else if (befs_read_lsymlink(sb, data, link, len) != len) { 496 + kfree(link); 497 + befs_error(sb, "Failed to read entire long symlink"); 498 498 link = ERR_PTR(-EIO); 499 499 } else { 500 - befs_debug(sb, "Follow long symlink"); 501 - 502 - link = kmalloc(len, GFP_NOFS); 503 - if (!link) { 504 - link = ERR_PTR(-ENOMEM); 505 - } else if (befs_read_lsymlink(sb, data, link, len) != len) { 506 - kfree(link); 507 - befs_error(sb, "Failed to read entire long symlink"); 508 - link = ERR_PTR(-EIO); 509 - } else { 510 - link[len - 1] = '\0'; 511 - } 500 + link[len - 1] = '\0'; 512 501 } 513 - } else { 514 - link = befs_ino->i_data.symlink; 515 502 } 516 - 517 503 nd_set_link(nd, link); 518 504 return NULL; 519 505 } 520 506 521 - static void befs_put_link(struct dentry *dentry, struct nameidata *nd, void *p) 507 + 508 + static void * 509 + befs_fast_follow_link(struct dentry *dentry, struct nameidata *nd) 522 510 { 523 511 befs_inode_info *befs_ino = BEFS_I(dentry->d_inode); 524 - if (befs_ino->i_flags & BEFS_LONG_SYMLINK) { 525 - char *link = nd_get_link(nd); 526 - if (!IS_ERR(link)) 527 - kfree(link); 528 - } 
512 + nd_set_link(nd, befs_ino->i_data.symlink); 513 + return NULL; 529 514 } 530 515 531 516 /*
+6 -7
fs/binfmt_aout.c
··· 45 45 */ 46 46 static int aout_core_dump(struct coredump_params *cprm) 47 47 { 48 - struct file *file = cprm->file; 49 48 mm_segment_t fs; 50 49 int has_dumped = 0; 51 50 void __user *dump_start; ··· 84 85 85 86 set_fs(KERNEL_DS); 86 87 /* struct user */ 87 - if (!dump_write(file, &dump, sizeof(dump))) 88 + if (!dump_emit(cprm, &dump, sizeof(dump))) 88 89 goto end_coredump; 89 90 /* Now dump all of the user data. Include malloced stuff as well */ 90 - if (!dump_seek(cprm->file, PAGE_SIZE - sizeof(dump))) 91 + if (!dump_skip(cprm, PAGE_SIZE - sizeof(dump))) 91 92 goto end_coredump; 92 93 /* now we start writing out the user space info */ 93 94 set_fs(USER_DS); ··· 95 96 if (dump.u_dsize != 0) { 96 97 dump_start = START_DATA(dump); 97 98 dump_size = dump.u_dsize << PAGE_SHIFT; 98 - if (!dump_write(file, dump_start, dump_size)) 99 + if (!dump_emit(cprm, dump_start, dump_size)) 99 100 goto end_coredump; 100 101 } 101 102 /* Now prepare to dump the stack area */ 102 103 if (dump.u_ssize != 0) { 103 104 dump_start = START_STACK(dump); 104 105 dump_size = dump.u_ssize << PAGE_SHIFT; 105 - if (!dump_write(file, dump_start, dump_size)) 106 + if (!dump_emit(cprm, dump_start, dump_size)) 106 107 goto end_coredump; 107 108 } 108 109 end_coredump: ··· 220 221 * Requires a mmap handler. This prevents people from using a.out 221 222 * as part of an exploit attack against /proc-related vulnerabilities. 222 223 */ 223 - if (!bprm->file->f_op || !bprm->file->f_op->mmap) 224 + if (!bprm->file->f_op->mmap) 224 225 return -ENOEXEC; 225 226 226 227 fd_offset = N_TXTOFF(ex); ··· 373 374 * Requires a mmap handler. This prevents people from using a.out 374 375 * as part of an exploit attack against /proc-related vulnerabilities. 375 376 */ 376 - if (!file->f_op || !file->f_op->mmap) 377 + if (!file->f_op->mmap) 377 378 goto out; 378 379 379 380 if (N_FLAGS(ex))
+47 -80
fs/binfmt_elf.c
··· 406 406 goto out; 407 407 if (!elf_check_arch(interp_elf_ex)) 408 408 goto out; 409 - if (!interpreter->f_op || !interpreter->f_op->mmap) 409 + if (!interpreter->f_op->mmap) 410 410 goto out; 411 411 412 412 /* ··· 607 607 goto out; 608 608 if (!elf_check_arch(&loc->elf_ex)) 609 609 goto out; 610 - if (!bprm->file->f_op || !bprm->file->f_op->mmap) 610 + if (!bprm->file->f_op->mmap) 611 611 goto out; 612 612 613 613 /* Now read in all of the header information */ ··· 1028 1028 1029 1029 /* First of all, some simple consistency checks */ 1030 1030 if (elf_ex.e_type != ET_EXEC || elf_ex.e_phnum > 2 || 1031 - !elf_check_arch(&elf_ex) || !file->f_op || !file->f_op->mmap) 1031 + !elf_check_arch(&elf_ex) || !file->f_op->mmap) 1032 1032 goto out; 1033 1033 1034 1034 /* Now read in all of the header information */ ··· 1225 1225 return sz; 1226 1226 } 1227 1227 1228 - #define DUMP_WRITE(addr, nr, foffset) \ 1229 - do { if (!dump_write(file, (addr), (nr))) return 0; *foffset += (nr); } while(0) 1230 - 1231 - static int alignfile(struct file *file, loff_t *foffset) 1232 - { 1233 - static const char buf[4] = { 0, }; 1234 - DUMP_WRITE(buf, roundup(*foffset, 4) - *foffset, foffset); 1235 - return 1; 1236 - } 1237 - 1238 - static int writenote(struct memelfnote *men, struct file *file, 1239 - loff_t *foffset) 1228 + static int writenote(struct memelfnote *men, struct coredump_params *cprm) 1240 1229 { 1241 1230 struct elf_note en; 1242 1231 en.n_namesz = strlen(men->name) + 1; 1243 1232 en.n_descsz = men->datasz; 1244 1233 en.n_type = men->type; 1245 1234 1246 - DUMP_WRITE(&en, sizeof(en), foffset); 1247 - DUMP_WRITE(men->name, en.n_namesz, foffset); 1248 - if (!alignfile(file, foffset)) 1249 - return 0; 1250 - DUMP_WRITE(men->data, men->datasz, foffset); 1251 - if (!alignfile(file, foffset)) 1252 - return 0; 1253 - 1254 - return 1; 1235 + return dump_emit(cprm, &en, sizeof(en)) && 1236 + dump_emit(cprm, men->name, en.n_namesz) && dump_align(cprm, 4) && 1237 + dump_emit(cprm, 
men->data, men->datasz) && dump_align(cprm, 4); 1255 1238 } 1256 - #undef DUMP_WRITE 1257 1239 1258 1240 static void fill_elf_header(struct elfhdr *elf, int segs, 1259 1241 u16 machine, u32 flags) ··· 1374 1392 } 1375 1393 1376 1394 static void fill_siginfo_note(struct memelfnote *note, user_siginfo_t *csigdata, 1377 - siginfo_t *siginfo) 1395 + const siginfo_t *siginfo) 1378 1396 { 1379 1397 mm_segment_t old_fs = get_fs(); 1380 1398 set_fs(KERNEL_DS); ··· 1581 1599 1582 1600 static int fill_note_info(struct elfhdr *elf, int phdrs, 1583 1601 struct elf_note_info *info, 1584 - siginfo_t *siginfo, struct pt_regs *regs) 1602 + const siginfo_t *siginfo, struct pt_regs *regs) 1585 1603 { 1586 1604 struct task_struct *dump_task = current; 1587 1605 const struct user_regset_view *view = task_user_regset_view(dump_task); ··· 1684 1702 * process-wide notes are interleaved after the first thread-specific note. 1685 1703 */ 1686 1704 static int write_note_info(struct elf_note_info *info, 1687 - struct file *file, loff_t *foffset) 1705 + struct coredump_params *cprm) 1688 1706 { 1689 1707 bool first = 1; 1690 1708 struct elf_thread_core_info *t = info->thread; ··· 1692 1710 do { 1693 1711 int i; 1694 1712 1695 - if (!writenote(&t->notes[0], file, foffset)) 1713 + if (!writenote(&t->notes[0], cprm)) 1696 1714 return 0; 1697 1715 1698 - if (first && !writenote(&info->psinfo, file, foffset)) 1716 + if (first && !writenote(&info->psinfo, cprm)) 1699 1717 return 0; 1700 - if (first && !writenote(&info->signote, file, foffset)) 1718 + if (first && !writenote(&info->signote, cprm)) 1701 1719 return 0; 1702 - if (first && !writenote(&info->auxv, file, foffset)) 1720 + if (first && !writenote(&info->auxv, cprm)) 1703 1721 return 0; 1704 1722 if (first && info->files.data && 1705 - !writenote(&info->files, file, foffset)) 1723 + !writenote(&info->files, cprm)) 1706 1724 return 0; 1707 1725 1708 1726 for (i = 1; i < info->thread_notes; ++i) 1709 1727 if (t->notes[i].data && 1710 - 
!writenote(&t->notes[i], file, foffset)) 1728 + !writenote(&t->notes[i], cprm)) 1711 1729 return 0; 1712 1730 1713 1731 first = 0; ··· 1830 1848 1831 1849 static int fill_note_info(struct elfhdr *elf, int phdrs, 1832 1850 struct elf_note_info *info, 1833 - siginfo_t *siginfo, struct pt_regs *regs) 1851 + const siginfo_t *siginfo, struct pt_regs *regs) 1834 1852 { 1835 1853 struct list_head *t; 1854 + struct core_thread *ct; 1855 + struct elf_thread_status *ets; 1836 1856 1837 1857 if (!elf_note_info_init(info)) 1838 1858 return 0; 1839 1859 1840 - if (siginfo->si_signo) { 1841 - struct core_thread *ct; 1842 - struct elf_thread_status *ets; 1860 + for (ct = current->mm->core_state->dumper.next; 1861 + ct; ct = ct->next) { 1862 + ets = kzalloc(sizeof(*ets), GFP_KERNEL); 1863 + if (!ets) 1864 + return 0; 1843 1865 1844 - for (ct = current->mm->core_state->dumper.next; 1845 - ct; ct = ct->next) { 1846 - ets = kzalloc(sizeof(*ets), GFP_KERNEL); 1847 - if (!ets) 1848 - return 0; 1866 + ets->thread = ct->task; 1867 + list_add(&ets->list, &info->thread_list); 1868 + } 1849 1869 1850 - ets->thread = ct->task; 1851 - list_add(&ets->list, &info->thread_list); 1852 - } 1870 + list_for_each(t, &info->thread_list) { 1871 + int sz; 1853 1872 1854 - list_for_each(t, &info->thread_list) { 1855 - int sz; 1856 - 1857 - ets = list_entry(t, struct elf_thread_status, list); 1858 - sz = elf_dump_thread_status(siginfo->si_signo, ets); 1859 - info->thread_status_size += sz; 1860 - } 1873 + ets = list_entry(t, struct elf_thread_status, list); 1874 + sz = elf_dump_thread_status(siginfo->si_signo, ets); 1875 + info->thread_status_size += sz; 1861 1876 } 1862 1877 /* now collect the dump for the current */ 1863 1878 memset(info->prstatus, 0, sizeof(*info->prstatus)); ··· 1914 1935 } 1915 1936 1916 1937 static int write_note_info(struct elf_note_info *info, 1917 - struct file *file, loff_t *foffset) 1938 + struct coredump_params *cprm) 1918 1939 { 1919 1940 int i; 1920 1941 struct list_head *t; 
1921 1942 1922 1943 for (i = 0; i < info->numnote; i++) 1923 - if (!writenote(info->notes + i, file, foffset)) 1944 + if (!writenote(info->notes + i, cprm)) 1924 1945 return 0; 1925 1946 1926 1947 /* write out the thread status notes section */ ··· 1929 1950 list_entry(t, struct elf_thread_status, list); 1930 1951 1931 1952 for (i = 0; i < tmp->num_notes; i++) 1932 - if (!writenote(&tmp->notes[i], file, foffset)) 1953 + if (!writenote(&tmp->notes[i], cprm)) 1933 1954 return 0; 1934 1955 } 1935 1956 ··· 2025 2046 int has_dumped = 0; 2026 2047 mm_segment_t fs; 2027 2048 int segs; 2028 - size_t size = 0; 2029 2049 struct vm_area_struct *vma, *gate_vma; 2030 2050 struct elfhdr *elf = NULL; 2031 - loff_t offset = 0, dataoff, foffset; 2051 + loff_t offset = 0, dataoff; 2032 2052 struct elf_note_info info = { }; 2033 2053 struct elf_phdr *phdr4note = NULL; 2034 2054 struct elf_shdr *shdr4extnum = NULL; ··· 2083 2105 2084 2106 offset += sizeof(*elf); /* Elf header */ 2085 2107 offset += segs * sizeof(struct elf_phdr); /* Program headers */ 2086 - foffset = offset; 2087 2108 2088 2109 /* Write notes phdr entry */ 2089 2110 { ··· 2113 2136 2114 2137 offset = dataoff; 2115 2138 2116 - size += sizeof(*elf); 2117 - if (size > cprm->limit || !dump_write(cprm->file, elf, sizeof(*elf))) 2139 + if (!dump_emit(cprm, elf, sizeof(*elf))) 2118 2140 goto end_coredump; 2119 2141 2120 - size += sizeof(*phdr4note); 2121 - if (size > cprm->limit 2122 - || !dump_write(cprm->file, phdr4note, sizeof(*phdr4note))) 2142 + if (!dump_emit(cprm, phdr4note, sizeof(*phdr4note))) 2123 2143 goto end_coredump; 2124 2144 2125 2145 /* Write program headers for segments dump */ ··· 2138 2164 phdr.p_flags |= PF_X; 2139 2165 phdr.p_align = ELF_EXEC_PAGESIZE; 2140 2166 2141 - size += sizeof(phdr); 2142 - if (size > cprm->limit 2143 - || !dump_write(cprm->file, &phdr, sizeof(phdr))) 2167 + if (!dump_emit(cprm, &phdr, sizeof(phdr))) 2144 2168 goto end_coredump; 2145 2169 } 2146 2170 2147 - if 
(!elf_core_write_extra_phdrs(cprm->file, offset, &size, cprm->limit)) 2171 + if (!elf_core_write_extra_phdrs(cprm, offset)) 2148 2172 goto end_coredump; 2149 2173 2150 2174 /* write out the notes section */ 2151 - if (!write_note_info(&info, cprm->file, &foffset)) 2175 + if (!write_note_info(&info, cprm)) 2152 2176 goto end_coredump; 2153 2177 2154 - if (elf_coredump_extra_notes_write(cprm->file, &foffset)) 2178 + if (elf_coredump_extra_notes_write(cprm)) 2155 2179 goto end_coredump; 2156 2180 2157 2181 /* Align to page */ 2158 - if (!dump_seek(cprm->file, dataoff - foffset)) 2182 + if (!dump_skip(cprm, dataoff - cprm->written)) 2159 2183 goto end_coredump; 2160 2184 2161 2185 for (vma = first_vma(current, gate_vma); vma != NULL; ··· 2170 2198 page = get_dump_page(addr); 2171 2199 if (page) { 2172 2200 void *kaddr = kmap(page); 2173 - stop = ((size += PAGE_SIZE) > cprm->limit) || 2174 - !dump_write(cprm->file, kaddr, 2175 - PAGE_SIZE); 2201 + stop = !dump_emit(cprm, kaddr, PAGE_SIZE); 2176 2202 kunmap(page); 2177 2203 page_cache_release(page); 2178 2204 } else 2179 - stop = !dump_seek(cprm->file, PAGE_SIZE); 2205 + stop = !dump_skip(cprm, PAGE_SIZE); 2180 2206 if (stop) 2181 2207 goto end_coredump; 2182 2208 } 2183 2209 } 2184 2210 2185 - if (!elf_core_write_extra_data(cprm->file, &size, cprm->limit)) 2211 + if (!elf_core_write_extra_data(cprm)) 2186 2212 goto end_coredump; 2187 2213 2188 2214 if (e_phnum == PN_XNUM) { 2189 - size += sizeof(*shdr4extnum); 2190 - if (size > cprm->limit 2191 - || !dump_write(cprm->file, shdr4extnum, 2192 - sizeof(*shdr4extnum))) 2215 + if (!dump_emit(cprm, shdr4extnum, sizeof(*shdr4extnum))) 2193 2216 goto end_coredump; 2194 2217 } 2195 2218
+49 -107
fs/binfmt_elf_fdpic.c
··· 111 111 return 0; 112 112 if (!elf_check_arch(hdr) || !elf_check_fdpic(hdr)) 113 113 return 0; 114 - if (!file->f_op || !file->f_op->mmap) 114 + if (!file->f_op->mmap) 115 115 return 0; 116 116 return 1; 117 117 } ··· 1267 1267 1268 1268 /* #define DEBUG */ 1269 1269 1270 - #define DUMP_WRITE(addr, nr, foffset) \ 1271 - do { if (!dump_write(file, (addr), (nr))) return 0; *foffset += (nr); } while(0) 1272 - 1273 - static int alignfile(struct file *file, loff_t *foffset) 1274 - { 1275 - static const char buf[4] = { 0, }; 1276 - DUMP_WRITE(buf, roundup(*foffset, 4) - *foffset, foffset); 1277 - return 1; 1278 - } 1279 - 1280 - static int writenote(struct memelfnote *men, struct file *file, 1281 - loff_t *foffset) 1270 + static int writenote(struct memelfnote *men, struct coredump_params *cprm) 1282 1271 { 1283 1272 struct elf_note en; 1284 1273 en.n_namesz = strlen(men->name) + 1; 1285 1274 en.n_descsz = men->datasz; 1286 1275 en.n_type = men->type; 1287 1276 1288 - DUMP_WRITE(&en, sizeof(en), foffset); 1289 - DUMP_WRITE(men->name, en.n_namesz, foffset); 1290 - if (!alignfile(file, foffset)) 1291 - return 0; 1292 - DUMP_WRITE(men->data, men->datasz, foffset); 1293 - if (!alignfile(file, foffset)) 1294 - return 0; 1295 - 1296 - return 1; 1277 + return dump_emit(cprm, &en, sizeof(en)) && 1278 + dump_emit(cprm, men->name, en.n_namesz) && dump_align(cprm, 4) && 1279 + dump_emit(cprm, men->data, men->datasz) && dump_align(cprm, 4); 1297 1280 } 1298 - #undef DUMP_WRITE 1299 1281 1300 1282 static inline void fill_elf_fdpic_header(struct elfhdr *elf, int segs) 1301 1283 { ··· 1482 1500 /* 1483 1501 * dump the segments for an MMU process 1484 1502 */ 1485 - #ifdef CONFIG_MMU 1486 - static int elf_fdpic_dump_segments(struct file *file, size_t *size, 1487 - unsigned long *limit, unsigned long mm_flags) 1503 + static bool elf_fdpic_dump_segments(struct coredump_params *cprm) 1488 1504 { 1489 1505 struct vm_area_struct *vma; 1490 - int err = 0; 1491 1506 1492 1507 for (vma = 
current->mm->mmap; vma; vma = vma->vm_next) { 1493 1508 unsigned long addr; 1494 1509 1495 - if (!maydump(vma, mm_flags)) 1510 + if (!maydump(vma, cprm->mm_flags)) 1496 1511 continue; 1497 1512 1513 + #ifdef CONFIG_MMU 1498 1514 for (addr = vma->vm_start; addr < vma->vm_end; 1499 1515 addr += PAGE_SIZE) { 1516 + bool res; 1500 1517 struct page *page = get_dump_page(addr); 1501 1518 if (page) { 1502 1519 void *kaddr = kmap(page); 1503 - *size += PAGE_SIZE; 1504 - if (*size > *limit) 1505 - err = -EFBIG; 1506 - else if (!dump_write(file, kaddr, PAGE_SIZE)) 1507 - err = -EIO; 1520 + res = dump_emit(cprm, kaddr, PAGE_SIZE); 1508 1521 kunmap(page); 1509 1522 page_cache_release(page); 1510 - } else if (!dump_seek(file, PAGE_SIZE)) 1511 - err = -EFBIG; 1512 - if (err) 1513 - goto out; 1523 + } else { 1524 + res = dump_skip(cprm, PAGE_SIZE); 1525 + } 1526 + if (!res) 1527 + return false; 1514 1528 } 1515 - } 1516 - out: 1517 - return err; 1518 - } 1519 - #endif 1520 - 1521 - /* 1522 - * dump the segments for a NOMMU process 1523 - */ 1524 - #ifndef CONFIG_MMU 1525 - static int elf_fdpic_dump_segments(struct file *file, size_t *size, 1526 - unsigned long *limit, unsigned long mm_flags) 1527 - { 1528 - struct vm_area_struct *vma; 1529 - 1530 - for (vma = current->mm->mmap; vma; vma = vma->vm_next) { 1531 - if (!maydump(vma, mm_flags)) 1532 - continue; 1533 - 1534 - if ((*size += PAGE_SIZE) > *limit) 1535 - return -EFBIG; 1536 - 1537 - if (!dump_write(file, (void *) vma->vm_start, 1529 + #else 1530 + if (!dump_emit(cprm, (void *) vma->vm_start, 1538 1531 vma->vm_end - vma->vm_start)) 1539 - return -EIO; 1540 - } 1541 - 1542 - return 0; 1543 - } 1532 + return false; 1544 1533 #endif 1534 + } 1535 + return true; 1536 + } 1545 1537 1546 1538 static size_t elf_core_vma_data_size(unsigned long mm_flags) 1547 1539 { ··· 1541 1585 int has_dumped = 0; 1542 1586 mm_segment_t fs; 1543 1587 int segs; 1544 - size_t size = 0; 1545 1588 int i; 1546 1589 struct vm_area_struct *vma; 1547 
1590 struct elfhdr *elf = NULL; 1548 - loff_t offset = 0, dataoff, foffset; 1591 + loff_t offset = 0, dataoff; 1549 1592 int numnote; 1550 1593 struct memelfnote *notes = NULL; 1551 1594 struct elf_prstatus *prstatus = NULL; /* NT_PRSTATUS */ ··· 1561 1606 struct elf_shdr *shdr4extnum = NULL; 1562 1607 Elf_Half e_phnum; 1563 1608 elf_addr_t e_shoff; 1609 + struct core_thread *ct; 1610 + struct elf_thread_status *tmp; 1564 1611 1565 1612 /* 1566 1613 * We no longer stop all VM operations. ··· 1598 1641 goto cleanup; 1599 1642 #endif 1600 1643 1601 - if (cprm->siginfo->si_signo) { 1602 - struct core_thread *ct; 1644 + for (ct = current->mm->core_state->dumper.next; 1645 + ct; ct = ct->next) { 1646 + tmp = kzalloc(sizeof(*tmp), GFP_KERNEL); 1647 + if (!tmp) 1648 + goto cleanup; 1649 + 1650 + tmp->thread = ct->task; 1651 + list_add(&tmp->list, &thread_list); 1652 + } 1653 + 1654 + list_for_each(t, &thread_list) { 1603 1655 struct elf_thread_status *tmp; 1656 + int sz; 1604 1657 1605 - for (ct = current->mm->core_state->dumper.next; 1606 - ct; ct = ct->next) { 1607 - tmp = kzalloc(sizeof(*tmp), GFP_KERNEL); 1608 - if (!tmp) 1609 - goto cleanup; 1610 - 1611 - tmp->thread = ct->task; 1612 - list_add(&tmp->list, &thread_list); 1613 - } 1614 - 1615 - list_for_each(t, &thread_list) { 1616 - struct elf_thread_status *tmp; 1617 - int sz; 1618 - 1619 - tmp = list_entry(t, struct elf_thread_status, list); 1620 - sz = elf_dump_thread_status(cprm->siginfo->si_signo, tmp); 1621 - thread_status_size += sz; 1622 - } 1658 + tmp = list_entry(t, struct elf_thread_status, list); 1659 + sz = elf_dump_thread_status(cprm->siginfo->si_signo, tmp); 1660 + thread_status_size += sz; 1623 1661 } 1624 1662 1625 1663 /* now collect the dump for the current */ ··· 1672 1720 1673 1721 offset += sizeof(*elf); /* Elf header */ 1674 1722 offset += segs * sizeof(struct elf_phdr); /* Program headers */ 1675 - foffset = offset; 1676 1723 1677 1724 /* Write notes phdr entry */ 1678 1725 { ··· 1706 1755 
1707 1756 offset = dataoff; 1708 1757 1709 - size += sizeof(*elf); 1710 - if (size > cprm->limit || !dump_write(cprm->file, elf, sizeof(*elf))) 1758 + if (!dump_emit(cprm, elf, sizeof(*elf))) 1711 1759 goto end_coredump; 1712 1760 1713 - size += sizeof(*phdr4note); 1714 - if (size > cprm->limit 1715 - || !dump_write(cprm->file, phdr4note, sizeof(*phdr4note))) 1761 + if (!dump_emit(cprm, phdr4note, sizeof(*phdr4note))) 1716 1762 goto end_coredump; 1717 1763 1718 1764 /* write program headers for segments dump */ ··· 1733 1785 phdr.p_flags |= PF_X; 1734 1786 phdr.p_align = ELF_EXEC_PAGESIZE; 1735 1787 1736 - size += sizeof(phdr); 1737 - if (size > cprm->limit 1738 - || !dump_write(cprm->file, &phdr, sizeof(phdr))) 1788 + if (!dump_emit(cprm, &phdr, sizeof(phdr))) 1739 1789 goto end_coredump; 1740 1790 } 1741 1791 1742 - if (!elf_core_write_extra_phdrs(cprm->file, offset, &size, cprm->limit)) 1792 + if (!elf_core_write_extra_phdrs(cprm, offset)) 1743 1793 goto end_coredump; 1744 1794 1745 1795 /* write out the notes section */ 1746 1796 for (i = 0; i < numnote; i++) 1747 - if (!writenote(notes + i, cprm->file, &foffset)) 1797 + if (!writenote(notes + i, cprm)) 1748 1798 goto end_coredump; 1749 1799 1750 1800 /* write out the thread status notes section */ ··· 1751 1805 list_entry(t, struct elf_thread_status, list); 1752 1806 1753 1807 for (i = 0; i < tmp->num_notes; i++) 1754 - if (!writenote(&tmp->notes[i], cprm->file, &foffset)) 1808 + if (!writenote(&tmp->notes[i], cprm)) 1755 1809 goto end_coredump; 1756 1810 } 1757 1811 1758 - if (!dump_seek(cprm->file, dataoff - foffset)) 1812 + if (!dump_skip(cprm, dataoff - cprm->written)) 1759 1813 goto end_coredump; 1760 1814 1761 - if (elf_fdpic_dump_segments(cprm->file, &size, &cprm->limit, 1762 - cprm->mm_flags) < 0) 1815 + if (!elf_fdpic_dump_segments(cprm)) 1763 1816 goto end_coredump; 1764 1817 1765 - if (!elf_core_write_extra_data(cprm->file, &size, cprm->limit)) 1818 + if (!elf_core_write_extra_data(cprm)) 1766 1819 
goto end_coredump; 1767 1820 1768 1821 if (e_phnum == PN_XNUM) { 1769 - size += sizeof(*shdr4extnum); 1770 - if (size > cprm->limit 1771 - || !dump_write(cprm->file, shdr4extnum, 1772 - sizeof(*shdr4extnum))) 1822 + if (!dump_emit(cprm, shdr4extnum, sizeof(*shdr4extnum))) 1773 1823 goto end_coredump; 1774 1824 } 1775 1825
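The rewritten `writenote()` above replaces the old `DUMP_WRITE` macro with a chain of `dump_emit()`/`dump_align()` calls joined by `&&`, so the first failing write aborts the whole note. A minimal userspace sketch of that pattern (names and the 64-byte buffer are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Each step returns true on success, so steps chain with && and the
 * first failure short-circuits the rest, like writenote() now does. */
struct emitter {
    char buf[64];
    size_t pos;    /* analogous to cprm->written */
    size_t limit;  /* analogous to cprm->limit */
};

static bool emit(struct emitter *e, const void *data, size_t n)
{
    if (e->pos + n > e->limit)
        return false;
    memcpy(e->buf + e->pos, data, n);
    e->pos += n;
    return true;
}

static bool align4(struct emitter *e)
{
    static const char zeroes[4];
    size_t mod = e->pos & 3;
    return mod ? emit(e, zeroes, 4 - mod) : true;
}

/* Mirrors the shape of writenote(): name, pad, payload, pad. */
static bool write_note(struct emitter *e, const char *name,
                       const void *data, size_t n)
{
    return emit(e, name, strlen(name) + 1) && align4(e) &&
           emit(e, data, n) && align4(e);
}
```

The payoff is that every error path collapses into a single `return 0`/`return false`, which is what let the diff delete the `DUMP_WRITE` macro and `alignfile()` helper.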
+1 -1
fs/binfmt_em86.c
··· 38 38 /* First of all, some simple consistency checks */ 39 39 if ((elf_ex.e_type != ET_EXEC && elf_ex.e_type != ET_DYN) || 40 40 (!((elf_ex.e_machine == EM_386) || (elf_ex.e_machine == EM_486))) || 41 - (!bprm->file->f_op || !bprm->file->f_op->mmap)) { 41 + !bprm->file->f_op->mmap) { 42 42 return -ENOEXEC; 43 43 } 44 44
+2 -2
fs/cachefiles/interface.c
··· 449 449 _debug("discard tail %llx", oi_size); 450 450 newattrs.ia_valid = ATTR_SIZE; 451 451 newattrs.ia_size = oi_size & PAGE_MASK; 452 - ret = notify_change(object->backer, &newattrs); 452 + ret = notify_change(object->backer, &newattrs, NULL); 453 453 if (ret < 0) 454 454 goto truncate_failed; 455 455 } 456 456 457 457 newattrs.ia_valid = ATTR_SIZE; 458 458 newattrs.ia_size = ni_size; 459 - ret = notify_change(object->backer, &newattrs); 459 + ret = notify_change(object->backer, &newattrs, NULL); 460 460 461 461 truncate_failed: 462 462 mutex_unlock(&object->backer->d_inode->i_mutex);
+2 -2
fs/cachefiles/namei.c
··· 294 294 if (ret < 0) { 295 295 cachefiles_io_error(cache, "Unlink security error"); 296 296 } else { 297 - ret = vfs_unlink(dir->d_inode, rep); 297 + ret = vfs_unlink(dir->d_inode, rep, NULL); 298 298 299 299 if (preemptive) 300 300 cachefiles_mark_object_buried(cache, rep); ··· 396 396 cachefiles_io_error(cache, "Rename security error %d", ret); 397 397 } else { 398 398 ret = vfs_rename(dir->d_inode, rep, 399 - cache->graveyard->d_inode, grave); 399 + cache->graveyard->d_inode, grave, NULL); 400 400 if (ret != 0 && ret != -ENOMEM) 401 401 cachefiles_io_error(cache, 402 402 "Rename failed with error %d", ret);
+4 -2
fs/char_dev.c
··· 368 368 */ 369 369 static int chrdev_open(struct inode *inode, struct file *filp) 370 370 { 371 + const struct file_operations *fops; 371 372 struct cdev *p; 372 373 struct cdev *new = NULL; 373 374 int ret = 0; ··· 401 400 return ret; 402 401 403 402 ret = -ENXIO; 404 - filp->f_op = fops_get(p->ops); 405 - if (!filp->f_op) 403 + fops = fops_get(p->ops); 404 + if (!fops) 406 405 goto out_cdev_put; 407 406 407 + replace_fops(filp, fops); 408 408 if (filp->f_op->open) { 409 409 ret = filp->f_op->open(inode, filp); 410 410 if (ret)
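The `chrdev_open()` change fetches the result of `fops_get()` into a local before installing it, because the rest of this series assumes `file->f_op` is never NULL: a failed lookup must not be stored into the file, even transiently. A hypothetical userspace sketch of that ordering (all names here are stand-ins):

```c
#include <assert.h>
#include <stddef.h>

struct ops { int (*open)(void); };
struct filedesc { const struct ops *f_op; };

static const struct ops default_ops = { NULL };

/* Stand-in for fops_get(): may fail and return NULL, e.g. if the
 * owning module is going away. */
static const struct ops *try_get_ops(const struct ops *candidate)
{
    return candidate;
}

static int open_with(struct filedesc *f, const struct ops *candidate)
{
    const struct ops *ops = try_get_ops(candidate);
    if (!ops)
        return -1;   /* f->f_op still points at valid ops */
    f->f_op = ops;   /* analogous to replace_fops() */
    return 0;
}
```

On failure the file keeps its previous, valid ops table, which is the invariant the later `!file->f_op` check removals in this pull depend on.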
+1
fs/cifs/cifs_fs_sb.h
··· 65 65 char *mountdata; /* options received at mount time or via DFS refs */ 66 66 struct backing_dev_info bdi; 67 67 struct delayed_work prune_tlinks; 68 + struct rcu_head rcu; 68 69 }; 69 70 #endif /* _CIFS_FS_SB_H */
+1 -1
fs/cifs/cifsfs.c
··· 862 862 const struct inode_operations cifs_symlink_inode_ops = { 863 863 .readlink = generic_readlink, 864 864 .follow_link = cifs_follow_link, 865 - .put_link = cifs_put_link, 865 + .put_link = kfree_put_link, 866 866 .permission = cifs_permission, 867 867 /* BB add the following two eventually */ 868 868 /* revalidate: cifs_revalidate,
-2
fs/cifs/cifsfs.h
··· 115 115 116 116 /* Functions related to symlinks */ 117 117 extern void *cifs_follow_link(struct dentry *direntry, struct nameidata *nd); 118 - extern void cifs_put_link(struct dentry *direntry, 119 - struct nameidata *nd, void *); 120 118 extern int cifs_readlink(struct dentry *direntry, char __user *buffer, 121 119 int buflen); 122 120 extern int cifs_symlink(struct inode *inode, struct dentry *direntry,
+8 -2
fs/cifs/connect.c
··· 3770 3770 return rc; 3771 3771 } 3772 3772 3773 + static void delayed_free(struct rcu_head *p) 3774 + { 3775 + struct cifs_sb_info *sbi = container_of(p, struct cifs_sb_info, rcu); 3776 + unload_nls(sbi->local_nls); 3777 + kfree(sbi); 3778 + } 3779 + 3773 3780 void 3774 3781 cifs_umount(struct cifs_sb_info *cifs_sb) 3775 3782 { ··· 3801 3794 3802 3795 bdi_destroy(&cifs_sb->bdi); 3803 3796 kfree(cifs_sb->mountdata); 3804 - unload_nls(cifs_sb->local_nls); 3805 - kfree(cifs_sb); 3797 + call_rcu(&cifs_sb->rcu, delayed_free); 3806 3798 } 3807 3799 3808 3800 int
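The new `delayed_free()` above receives only a pointer to the embedded `rcu_head`, so it uses `container_of()` to recover the enclosing `cifs_sb_info` before freeing it. A userspace sketch of that recovery step (struct layout is illustrative; a real callback would `kfree()` instead of recording the pointer):

```c
#include <assert.h>
#include <stddef.h>

struct rcu_head { void (*func)(struct rcu_head *); };

struct sb_info {
    int mounted;
    struct rcu_head rcu;   /* embedded, like cifs_sb_info::rcu */
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static struct sb_info *freed;

/* The RCU callback sees only &sbi->rcu; walk back to the container. */
static void delayed_free(struct rcu_head *p)
{
    struct sb_info *sbi = container_of(p, struct sb_info, rcu);
    freed = sbi;
}
```

Deferring the free through `call_rcu()` is what lets RCU-walk path lookups touch the superblock info without taking a reference.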
-7
fs/cifs/link.c
··· 621 621 free_xid(xid); 622 622 return rc; 623 623 } 624 - 625 - void cifs_put_link(struct dentry *direntry, struct nameidata *nd, void *cookie) 626 - { 627 - char *p = nd_get_link(nd); 628 - if (!IS_ERR(p)) 629 - kfree(p); 630 - }
+1 -1
fs/coda/coda_linux.h
··· 40 40 int coda_open(struct inode *i, struct file *f); 41 41 int coda_release(struct inode *i, struct file *f); 42 42 int coda_permission(struct inode *inode, int mask); 43 - int coda_revalidate_inode(struct dentry *); 43 + int coda_revalidate_inode(struct inode *); 44 44 int coda_getattr(struct vfsmount *, struct dentry *, struct kstat *); 45 45 int coda_setattr(struct dentry *, struct iattr *); 46 46
+1 -5
fs/coda/dir.c
··· 387 387 BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); 388 388 host_file = cfi->cfi_container; 389 389 390 - if (!host_file->f_op) 391 - return -ENOTDIR; 392 - 393 390 if (host_file->f_op->iterate) { 394 391 struct inode *host_inode = file_inode(host_file); 395 392 mutex_lock(&host_inode->i_mutex); ··· 563 566 * cache manager Venus issues a downcall to the kernel when this 564 567 * happens 565 568 */ 566 - int coda_revalidate_inode(struct dentry *dentry) 569 + int coda_revalidate_inode(struct inode *inode) 567 570 { 568 571 struct coda_vattr attr; 569 572 int error; 570 573 int old_mode; 571 574 ino_t old_ino; 572 - struct inode *inode = dentry->d_inode; 573 575 struct coda_inode_info *cii = ITOC(inode); 574 576 575 577 if (!cii->c_flags)
+3 -3
fs/coda/file.c
··· 36 36 BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); 37 37 host_file = cfi->cfi_container; 38 38 39 - if (!host_file->f_op || !host_file->f_op->read) 39 + if (!host_file->f_op->read) 40 40 return -EINVAL; 41 41 42 42 return host_file->f_op->read(host_file, buf, count, ppos); ··· 75 75 BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); 76 76 host_file = cfi->cfi_container; 77 77 78 - if (!host_file->f_op || !host_file->f_op->write) 78 + if (!host_file->f_op->write) 79 79 return -EINVAL; 80 80 81 81 host_inode = file_inode(host_file); ··· 105 105 BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); 106 106 host_file = cfi->cfi_container; 107 107 108 - if (!host_file->f_op || !host_file->f_op->mmap) 108 + if (!host_file->f_op->mmap) 109 109 return -ENODEV; 110 110 111 111 coda_inode = file_inode(coda_file);
+1 -1
fs/coda/inode.c
··· 257 257 258 258 int coda_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat) 259 259 { 260 - int err = coda_revalidate_inode(dentry); 260 + int err = coda_revalidate_inode(dentry->d_inode); 261 261 if (!err) 262 262 generic_fillattr(dentry->d_inode, stat); 263 263 return err;
+2 -2
fs/compat_ioctl.c
··· 1583 1583 /*FALL THROUGH*/ 1584 1584 1585 1585 default: 1586 - if (f.file->f_op && f.file->f_op->compat_ioctl) { 1586 + if (f.file->f_op->compat_ioctl) { 1587 1587 error = f.file->f_op->compat_ioctl(f.file, cmd, arg); 1588 1588 if (error != -ENOIOCTLCMD) 1589 1589 goto out_fput; 1590 1590 } 1591 1591 1592 - if (!f.file->f_op || !f.file->f_op->unlocked_ioctl) 1592 + if (!f.file->f_op->unlocked_ioctl) 1593 1593 goto do_ioctl; 1594 1594 break; 1595 1595 }
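The `compat_ioctl.c` hunk keeps the dispatch order intact while dropping the `f_op` NULL checks: prefer the driver's `->compat_ioctl`, and fall back only when it reports `ENOIOCTLCMD` or is absent. A simplified sketch of that dispatch, assuming a direct fallback to `->unlocked_ioctl` (the real code routes the fallback through the generic ioctl path):

```c
#include <assert.h>
#include <stddef.h>

#define ENOIOCTLCMD 515   /* kernel-internal "not handled" value */

struct fops {
    long (*compat_ioctl)(unsigned cmd);
    long (*unlocked_ioctl)(unsigned cmd);
};

static long do_ioctl(const struct fops *f, unsigned cmd)
{
    if (f->compat_ioctl) {
        long err = f->compat_ioctl(cmd);
        if (err != -ENOIOCTLCMD)
            return err;       /* compat handler dealt with it */
    }
    if (!f->unlocked_ioctl)
        return -25;           /* -ENOTTY: nothing can handle it */
    return f->unlocked_ioctl(cmd);
}

static long compat_only(unsigned cmd) { return cmd == 1 ? 0 : -ENOIOCTLCMD; }
static long generic(unsigned cmd)     { (void)cmd; return 42; }
```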
+49 -34
fs/coredump.c
··· 485 485 return err; 486 486 } 487 487 488 - void do_coredump(siginfo_t *siginfo) 488 + void do_coredump(const siginfo_t *siginfo) 489 489 { 490 490 struct core_state core_state; 491 491 struct core_name cn; ··· 645 645 */ 646 646 if (!uid_eq(inode->i_uid, current_fsuid())) 647 647 goto close_fail; 648 - if (!cprm.file->f_op || !cprm.file->f_op->write) 648 + if (!cprm.file->f_op->write) 649 649 goto close_fail; 650 650 if (do_truncate(cprm.file->f_path.dentry, 0, 0, cprm.file)) 651 651 goto close_fail; ··· 685 685 * do on a core-file: use only these functions to write out all the 686 686 * necessary info. 687 687 */ 688 - int dump_write(struct file *file, const void *addr, int nr) 688 + int dump_emit(struct coredump_params *cprm, const void *addr, int nr) 689 689 { 690 - return !dump_interrupted() && 691 - access_ok(VERIFY_READ, addr, nr) && 692 - file->f_op->write(file, addr, nr, &file->f_pos) == nr; 693 - } 694 - EXPORT_SYMBOL(dump_write); 695 - 696 - int dump_seek(struct file *file, loff_t off) 697 - { 698 - int ret = 1; 699 - 700 - if (file->f_op->llseek && file->f_op->llseek != no_llseek) { 701 - if (dump_interrupted() || 702 - file->f_op->llseek(file, off, SEEK_CUR) < 0) 690 + struct file *file = cprm->file; 691 + loff_t pos = file->f_pos; 692 + ssize_t n; 693 + if (cprm->written + nr > cprm->limit) 694 + return 0; 695 + while (nr) { 696 + if (dump_interrupted()) 703 697 return 0; 704 - } else { 705 - char *buf = (char *)get_zeroed_page(GFP_KERNEL); 706 - 707 - if (!buf) 698 + n = vfs_write(file, addr, nr, &pos); 699 + if (n <= 0) 708 700 return 0; 709 - while (off > 0) { 710 - unsigned long n = off; 711 - 712 - if (n > PAGE_SIZE) 713 - n = PAGE_SIZE; 714 - if (!dump_write(file, buf, n)) { 715 - ret = 0; 716 - break; 717 - } 718 - off -= n; 719 - } 720 - free_page((unsigned long)buf); 701 + file->f_pos = pos; 702 + cprm->written += n; 703 + nr -= n; 721 704 } 722 - return ret; 705 + return 1; 723 706 } 724 - EXPORT_SYMBOL(dump_seek); 707 + 
EXPORT_SYMBOL(dump_emit); 708 + 709 + int dump_skip(struct coredump_params *cprm, size_t nr) 710 + { 711 + static char zeroes[PAGE_SIZE]; 712 + struct file *file = cprm->file; 713 + if (file->f_op->llseek && file->f_op->llseek != no_llseek) { 714 + if (cprm->written + nr > cprm->limit) 715 + return 0; 716 + if (dump_interrupted() || 717 + file->f_op->llseek(file, nr, SEEK_CUR) < 0) 718 + return 0; 719 + cprm->written += nr; 720 + return 1; 721 + } else { 722 + while (nr > PAGE_SIZE) { 723 + if (!dump_emit(cprm, zeroes, PAGE_SIZE)) 724 + return 0; 725 + nr -= PAGE_SIZE; 726 + } 727 + return dump_emit(cprm, zeroes, nr); 728 + } 729 + } 730 + EXPORT_SYMBOL(dump_skip); 731 + 732 + int dump_align(struct coredump_params *cprm, int align) 733 + { 734 + unsigned mod = cprm->written & (align - 1); 735 + if (align & (align - 1)) 736 + return -EINVAL; 737 + return mod ? dump_skip(cprm, align - mod) : 0; 738 + } 739 + EXPORT_SYMBOL(dump_align);
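The arithmetic in `dump_align()` above relies on two power-of-two identities: for a power-of-two `align`, `written & (align - 1)` is the offset past the last aligned boundary, and `align & (align - 1)` is nonzero exactly when `align` is *not* a power of two. A standalone sketch of just that math (the helper name is illustrative):

```c
#include <assert.h>

/* Returns how many padding bytes are needed to reach the next
 * align-byte boundary, or -1 if align is not a power of two
 * (where dump_align() returns -EINVAL). */
static int bytes_to_align(unsigned written, unsigned align)
{
    unsigned mod = written & (align - 1);
    if (align & (align - 1))
        return -1;
    return mod ? (int)(align - mod) : 0;
}
```

This is why `dump_align()` can avoid a division: the mask trick only works because ELF note alignment is always a power of two.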
+187 -155
fs/dcache.c
··· 343 343 __releases(dentry->d_inode->i_lock) 344 344 { 345 345 struct inode *inode = dentry->d_inode; 346 + __d_clear_type(dentry); 346 347 dentry->d_inode = NULL; 347 348 hlist_del_init(&dentry->d_alias); 348 349 dentry_rcuwalk_barrier(dentry); ··· 484 483 return parent; 485 484 } 486 485 487 - /* 488 - * Unhash a dentry without inserting an RCU walk barrier or checking that 489 - * dentry->d_lock is locked. The caller must take care of that, if 490 - * appropriate. 491 - */ 492 - static void __d_shrink(struct dentry *dentry) 493 - { 494 - if (!d_unhashed(dentry)) { 495 - struct hlist_bl_head *b; 496 - if (unlikely(dentry->d_flags & DCACHE_DISCONNECTED)) 497 - b = &dentry->d_sb->s_anon; 498 - else 499 - b = d_hash(dentry->d_parent, dentry->d_name.hash); 500 - 501 - hlist_bl_lock(b); 502 - __hlist_bl_del(&dentry->d_hash); 503 - dentry->d_hash.pprev = NULL; 504 - hlist_bl_unlock(b); 505 - } 506 - } 507 - 508 486 /** 509 487 * d_drop - drop a dentry 510 488 * @dentry: dentry to drop ··· 502 522 void __d_drop(struct dentry *dentry) 503 523 { 504 524 if (!d_unhashed(dentry)) { 505 - __d_shrink(dentry); 525 + struct hlist_bl_head *b; 526 + /* 527 + * Hashed dentries are normally on the dentry hashtable, 528 + * with the exception of those newly allocated by 529 + * d_obtain_alias, which are always IS_ROOT: 530 + */ 531 + if (unlikely(IS_ROOT(dentry))) 532 + b = &dentry->d_sb->s_anon; 533 + else 534 + b = d_hash(dentry->d_parent, dentry->d_name.hash); 535 + 536 + hlist_bl_lock(b); 537 + __hlist_bl_del(&dentry->d_hash); 538 + dentry->d_hash.pprev = NULL; 539 + hlist_bl_unlock(b); 506 540 dentry_rcuwalk_barrier(dentry); 507 541 } 508 542 } ··· 1070 1076 EXPORT_SYMBOL(shrink_dcache_sb); 1071 1077 1072 1078 /* 1073 - * destroy a single subtree of dentries for unmount 1074 - * - see the comments on shrink_dcache_for_umount() for a description of the 1075 - * locking 1076 - */ 1077 - static void shrink_dcache_for_umount_subtree(struct dentry *dentry) 1078 - { 1079 - struct 
dentry *parent; 1080 - 1081 - BUG_ON(!IS_ROOT(dentry)); 1082 - 1083 - for (;;) { 1084 - /* descend to the first leaf in the current subtree */ 1085 - while (!list_empty(&dentry->d_subdirs)) 1086 - dentry = list_entry(dentry->d_subdirs.next, 1087 - struct dentry, d_u.d_child); 1088 - 1089 - /* consume the dentries from this leaf up through its parents 1090 - * until we find one with children or run out altogether */ 1091 - do { 1092 - struct inode *inode; 1093 - 1094 - /* 1095 - * inform the fs that this dentry is about to be 1096 - * unhashed and destroyed. 1097 - */ 1098 - if ((dentry->d_flags & DCACHE_OP_PRUNE) && 1099 - !d_unhashed(dentry)) 1100 - dentry->d_op->d_prune(dentry); 1101 - 1102 - dentry_lru_del(dentry); 1103 - __d_shrink(dentry); 1104 - 1105 - if (dentry->d_lockref.count != 0) { 1106 - printk(KERN_ERR 1107 - "BUG: Dentry %p{i=%lx,n=%s}" 1108 - " still in use (%d)" 1109 - " [unmount of %s %s]\n", 1110 - dentry, 1111 - dentry->d_inode ? 1112 - dentry->d_inode->i_ino : 0UL, 1113 - dentry->d_name.name, 1114 - dentry->d_lockref.count, 1115 - dentry->d_sb->s_type->name, 1116 - dentry->d_sb->s_id); 1117 - BUG(); 1118 - } 1119 - 1120 - if (IS_ROOT(dentry)) { 1121 - parent = NULL; 1122 - list_del(&dentry->d_u.d_child); 1123 - } else { 1124 - parent = dentry->d_parent; 1125 - parent->d_lockref.count--; 1126 - list_del(&dentry->d_u.d_child); 1127 - } 1128 - 1129 - inode = dentry->d_inode; 1130 - if (inode) { 1131 - dentry->d_inode = NULL; 1132 - hlist_del_init(&dentry->d_alias); 1133 - if (dentry->d_op && dentry->d_op->d_iput) 1134 - dentry->d_op->d_iput(dentry, inode); 1135 - else 1136 - iput(inode); 1137 - } 1138 - 1139 - d_free(dentry); 1140 - 1141 - /* finished when we fall off the top of the tree, 1142 - * otherwise we ascend to the parent and move to the 1143 - * next sibling if there is one */ 1144 - if (!parent) 1145 - return; 1146 - dentry = parent; 1147 - } while (list_empty(&dentry->d_subdirs)); 1148 - 1149 - dentry = 
list_entry(dentry->d_subdirs.next, 1150 - struct dentry, d_u.d_child); 1151 - } 1152 - } 1153 - 1154 - /* 1155 - * destroy the dentries attached to a superblock on unmounting 1156 - * - we don't need to use dentry->d_lock because: 1157 - * - the superblock is detached from all mountings and open files, so the 1158 - * dentry trees will not be rearranged by the VFS 1159 - * - s_umount is write-locked, so the memory pressure shrinker will ignore 1160 - * any dentries belonging to this superblock that it comes across 1161 - * - the filesystem itself is no longer permitted to rearrange the dentries 1162 - * in this superblock 1163 - */ 1164 - void shrink_dcache_for_umount(struct super_block *sb) 1165 - { 1166 - struct dentry *dentry; 1167 - 1168 - if (down_read_trylock(&sb->s_umount)) 1169 - BUG(); 1170 - 1171 - dentry = sb->s_root; 1172 - sb->s_root = NULL; 1173 - dentry->d_lockref.count--; 1174 - shrink_dcache_for_umount_subtree(dentry); 1175 - 1176 - while (!hlist_bl_empty(&sb->s_anon)) { 1177 - dentry = hlist_bl_entry(hlist_bl_first(&sb->s_anon), struct dentry, d_hash); 1178 - shrink_dcache_for_umount_subtree(dentry); 1179 - } 1180 - } 1181 - 1182 - /* 1183 1079 * This tries to ascend one level of parenthood, but 1184 1080 * we can race with renaming, so we need to re-check 1185 1081 * the parenthood after dropping the lock and check ··· 1362 1478 } 1363 1479 EXPORT_SYMBOL(shrink_dcache_parent); 1364 1480 1481 + static enum d_walk_ret umount_collect(void *_data, struct dentry *dentry) 1482 + { 1483 + struct select_data *data = _data; 1484 + enum d_walk_ret ret = D_WALK_CONTINUE; 1485 + 1486 + if (dentry->d_lockref.count) { 1487 + dentry_lru_del(dentry); 1488 + if (likely(!list_empty(&dentry->d_subdirs))) 1489 + goto out; 1490 + if (dentry == data->start && dentry->d_lockref.count == 1) 1491 + goto out; 1492 + printk(KERN_ERR 1493 + "BUG: Dentry %p{i=%lx,n=%s}" 1494 + " still in use (%d)" 1495 + " [unmount of %s %s]\n", 1496 + dentry, 1497 + dentry->d_inode ? 
1498 + dentry->d_inode->i_ino : 0UL, 1499 + dentry->d_name.name, 1500 + dentry->d_lockref.count, 1501 + dentry->d_sb->s_type->name, 1502 + dentry->d_sb->s_id); 1503 + BUG(); 1504 + } else if (!(dentry->d_flags & DCACHE_SHRINK_LIST)) { 1505 + /* 1506 + * We can't use d_lru_shrink_move() because we 1507 + * need to get the global LRU lock and do the 1508 + * LRU accounting. 1509 + */ 1510 + if (dentry->d_flags & DCACHE_LRU_LIST) 1511 + d_lru_del(dentry); 1512 + d_shrink_add(dentry, &data->dispose); 1513 + data->found++; 1514 + ret = D_WALK_NORETRY; 1515 + } 1516 + out: 1517 + if (data->found && need_resched()) 1518 + ret = D_WALK_QUIT; 1519 + return ret; 1520 + } 1521 + 1522 + /* 1523 + * destroy the dentries attached to a superblock on unmounting 1524 + */ 1525 + void shrink_dcache_for_umount(struct super_block *sb) 1526 + { 1527 + struct dentry *dentry; 1528 + 1529 + if (down_read_trylock(&sb->s_umount)) 1530 + BUG(); 1531 + 1532 + dentry = sb->s_root; 1533 + sb->s_root = NULL; 1534 + for (;;) { 1535 + struct select_data data; 1536 + 1537 + INIT_LIST_HEAD(&data.dispose); 1538 + data.start = dentry; 1539 + data.found = 0; 1540 + 1541 + d_walk(dentry, &data, umount_collect, NULL); 1542 + if (!data.found) 1543 + break; 1544 + 1545 + shrink_dentry_list(&data.dispose); 1546 + cond_resched(); 1547 + } 1548 + d_drop(dentry); 1549 + dput(dentry); 1550 + 1551 + while (!hlist_bl_empty(&sb->s_anon)) { 1552 + struct select_data data; 1553 + dentry = hlist_bl_entry(hlist_bl_first(&sb->s_anon), struct dentry, d_hash); 1554 + 1555 + INIT_LIST_HEAD(&data.dispose); 1556 + data.start = NULL; 1557 + data.found = 0; 1558 + 1559 + d_walk(dentry, &data, umount_collect, NULL); 1560 + if (data.found) 1561 + shrink_dentry_list(&data.dispose); 1562 + cond_resched(); 1563 + } 1564 + } 1565 + 1365 1566 static enum d_walk_ret check_and_collect(void *_data, struct dentry *dentry) 1366 1567 { 1367 1568 struct select_data *data = _data; ··· 1607 1638 } 1608 1639 EXPORT_SYMBOL(d_alloc); 1609 1640 
1641 + /** 1642 + * d_alloc_pseudo - allocate a dentry (for lookup-less filesystems) 1643 + * @sb: the superblock 1644 + * @name: qstr of the name 1645 + * 1646 + * For a filesystem that just pins its dentries in memory and never 1647 + * performs lookups at all, return an unhashed IS_ROOT dentry. 1648 + */ 1610 1649 struct dentry *d_alloc_pseudo(struct super_block *sb, const struct qstr *name) 1611 1650 { 1612 - struct dentry *dentry = __d_alloc(sb, name); 1613 - if (dentry) 1614 - dentry->d_flags |= DCACHE_DISCONNECTED; 1615 - return dentry; 1651 + return __d_alloc(sb, name); 1616 1652 } 1617 1653 EXPORT_SYMBOL(d_alloc_pseudo); 1618 1654 ··· 1659 1685 } 1660 1686 EXPORT_SYMBOL(d_set_d_op); 1661 1687 1688 + static unsigned d_flags_for_inode(struct inode *inode) 1689 + { 1690 + unsigned add_flags = DCACHE_FILE_TYPE; 1691 + 1692 + if (!inode) 1693 + return DCACHE_MISS_TYPE; 1694 + 1695 + if (S_ISDIR(inode->i_mode)) { 1696 + add_flags = DCACHE_DIRECTORY_TYPE; 1697 + if (unlikely(!(inode->i_opflags & IOP_LOOKUP))) { 1698 + if (unlikely(!inode->i_op->lookup)) 1699 + add_flags = DCACHE_AUTODIR_TYPE; 1700 + else 1701 + inode->i_opflags |= IOP_LOOKUP; 1702 + } 1703 + } else if (unlikely(!(inode->i_opflags & IOP_NOFOLLOW))) { 1704 + if (unlikely(inode->i_op->follow_link)) 1705 + add_flags = DCACHE_SYMLINK_TYPE; 1706 + else 1707 + inode->i_opflags |= IOP_NOFOLLOW; 1708 + } 1709 + 1710 + if (unlikely(IS_AUTOMOUNT(inode))) 1711 + add_flags |= DCACHE_NEED_AUTOMOUNT; 1712 + return add_flags; 1713 + } 1714 + 1662 1715 static void __d_instantiate(struct dentry *dentry, struct inode *inode) 1663 1716 { 1717 + unsigned add_flags = d_flags_for_inode(inode); 1718 + 1664 1719 spin_lock(&dentry->d_lock); 1665 - if (inode) { 1666 - if (unlikely(IS_AUTOMOUNT(inode))) 1667 - dentry->d_flags |= DCACHE_NEED_AUTOMOUNT; 1720 + dentry->d_flags &= ~DCACHE_ENTRY_TYPE; 1721 + dentry->d_flags |= add_flags; 1722 + if (inode) 1668 1723 hlist_add_head(&dentry->d_alias, &inode->i_dentry); 1669 - } 
1670 1724 dentry->d_inode = inode; 1671 1725 dentry_rcuwalk_barrier(dentry); 1672 1726 spin_unlock(&dentry->d_lock); ··· 1803 1801 1804 1802 EXPORT_SYMBOL(d_instantiate_unique); 1805 1803 1804 + /** 1805 + * d_instantiate_no_diralias - instantiate a non-aliased dentry 1806 + * @entry: dentry to complete 1807 + * @inode: inode to attach to this dentry 1808 + * 1809 + * Fill in inode information in the entry. If a directory alias is found, then 1810 + * return an error (and drop inode). Together with d_materialise_unique() this 1811 + * guarantees that a directory inode may never have more than one alias. 1812 + */ 1813 + int d_instantiate_no_diralias(struct dentry *entry, struct inode *inode) 1814 + { 1815 + BUG_ON(!hlist_unhashed(&entry->d_alias)); 1816 + 1817 + spin_lock(&inode->i_lock); 1818 + if (S_ISDIR(inode->i_mode) && !hlist_empty(&inode->i_dentry)) { 1819 + spin_unlock(&inode->i_lock); 1820 + iput(inode); 1821 + return -EBUSY; 1822 + } 1823 + __d_instantiate(entry, inode); 1824 + spin_unlock(&inode->i_lock); 1825 + security_d_instantiate(entry, inode); 1826 + 1827 + return 0; 1828 + } 1829 + EXPORT_SYMBOL(d_instantiate_no_diralias); 1830 + 1806 1831 struct dentry *d_make_root(struct inode *root_inode) 1807 1832 { 1808 1833 struct dentry *res = NULL; ··· 1899 1870 static const struct qstr anonstring = QSTR_INIT("/", 1); 1900 1871 struct dentry *tmp; 1901 1872 struct dentry *res; 1873 + unsigned add_flags; 1902 1874 1903 1875 if (!inode) 1904 1876 return ERR_PTR(-ESTALE); ··· 1925 1895 } 1926 1896 1927 1897 /* attach a disconnected dentry */ 1898 + add_flags = d_flags_for_inode(inode) | DCACHE_DISCONNECTED; 1899 + 1928 1900 spin_lock(&tmp->d_lock); 1929 1901 tmp->d_inode = inode; 1930 - tmp->d_flags |= DCACHE_DISCONNECTED; 1902 + tmp->d_flags |= add_flags; 1931 1903 hlist_add_head(&tmp->d_alias, &inode->i_dentry); 1932 1904 hlist_bl_lock(&tmp->d_sb->s_anon); 1933 1905 hlist_bl_add_head(&tmp->d_hash, &tmp->d_sb->s_anon); ··· 2757 2725 
spin_unlock(&dentry->d_lock); 2758 2726 2759 2727 /* anon->d_lock still locked, returns locked */ 2760 - anon->d_flags &= ~DCACHE_DISCONNECTED; 2761 2728 } 2762 2729 2763 2730 /** ··· 2916 2885 struct vfsmount *vfsmnt = path->mnt; 2917 2886 struct mount *mnt = real_mount(vfsmnt); 2918 2887 int error = 0; 2919 - unsigned seq = 0; 2888 + unsigned seq, m_seq = 0; 2920 2889 char *bptr; 2921 2890 int blen; 2922 2891 2923 2892 rcu_read_lock(); 2893 + restart_mnt: 2894 + read_seqbegin_or_lock(&mount_lock, &m_seq); 2895 + seq = 0; 2924 2896 restart: 2925 2897 bptr = *buffer; 2926 2898 blen = *buflen; 2899 + error = 0; 2927 2900 read_seqbegin_or_lock(&rename_lock, &seq); 2928 2901 while (dentry != root->dentry || vfsmnt != root->mnt) { 2929 2902 struct dentry * parent; 2930 2903 2931 2904 if (dentry == vfsmnt->mnt_root || IS_ROOT(dentry)) { 2905 + struct mount *parent = ACCESS_ONCE(mnt->mnt_parent); 2932 2906 /* Global root? */ 2933 - if (mnt_has_parent(mnt)) { 2934 - dentry = mnt->mnt_mountpoint; 2935 - mnt = mnt->mnt_parent; 2907 + if (mnt != parent) { 2908 + dentry = ACCESS_ONCE(mnt->mnt_mountpoint); 2909 + mnt = parent; 2936 2910 vfsmnt = &mnt->mnt; 2937 2911 continue; 2938 2912 } ··· 2971 2935 goto restart; 2972 2936 } 2973 2937 done_seqretry(&rename_lock, seq); 2938 + if (need_seqretry(&mount_lock, m_seq)) { 2939 + m_seq = 1; 2940 + goto restart_mnt; 2941 + } 2942 + done_seqretry(&mount_lock, m_seq); 2974 2943 2975 2944 if (error >= 0 && bptr == *buffer) { 2976 2945 if (--blen < 0) ··· 3012 2971 int error; 3013 2972 3014 2973 prepend(&res, &buflen, "\0", 1); 3015 - br_read_lock(&vfsmount_lock); 3016 2974 error = prepend_path(path, root, &res, &buflen); 3017 - br_read_unlock(&vfsmount_lock); 3018 2975 3019 2976 if (error < 0) 3020 2977 return ERR_PTR(error); ··· 3029 2990 int error; 3030 2991 3031 2992 prepend(&res, &buflen, "\0", 1); 3032 - br_read_lock(&vfsmount_lock); 3033 2993 error = prepend_path(path, &root, &res, &buflen); 3034 - br_read_unlock(&vfsmount_lock); 
3035 2994 3036 2995 if (error > 1) 3037 2996 error = -EINVAL; ··· 3104 3067 3105 3068 rcu_read_lock(); 3106 3069 get_fs_root_rcu(current->fs, &root); 3107 - br_read_lock(&vfsmount_lock); 3108 3070 error = path_with_deleted(path, &root, &res, &buflen); 3109 - br_read_unlock(&vfsmount_lock); 3110 3071 rcu_read_unlock(); 3111 3072 3112 3073 if (error < 0) ··· 3259 3224 get_fs_root_and_pwd_rcu(current->fs, &root, &pwd); 3260 3225 3261 3226 error = -ENOENT; 3262 - br_read_lock(&vfsmount_lock); 3263 3227 if (!d_unlinked(pwd.dentry)) { 3264 3228 unsigned long len; 3265 3229 char *cwd = page + PATH_MAX; ··· 3266 3232 3267 3233 prepend(&cwd, &buflen, "\0", 1); 3268 3234 error = prepend_path(&pwd, &root, &cwd, &buflen); 3269 - br_read_unlock(&vfsmount_lock); 3270 3235 rcu_read_unlock(); 3271 3236 3272 3237 if (error < 0) ··· 3286 3253 error = -EFAULT; 3287 3254 } 3288 3255 } else { 3289 - br_read_unlock(&vfsmount_lock); 3290 3256 rcu_read_unlock(); 3291 3257 } 3292 3258
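The new `d_flags_for_inode()` in the dcache diff classifies a dentry once at instantiation, so later callers can test a type flag instead of dereferencing `->d_inode` (which matters under RCU walk). A hypothetical, much-simplified sketch of that classification; the enum names and the stripped-down inode are illustrative, not the kernel's flag values:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum dtype { MISS_TYPE, DIRECTORY_TYPE, SYMLINK_TYPE, FILE_TYPE };

struct fake_inode {
    bool is_dir;
    bool is_symlink;
};

/* Decide the dentry's type up front, like d_flags_for_inode(). */
static enum dtype type_for_inode(const struct fake_inode *inode)
{
    if (!inode)
        return MISS_TYPE;        /* negative dentry */
    if (inode->is_dir)
        return DIRECTORY_TYPE;
    if (inode->is_symlink)
        return SYMLINK_TYPE;
    return FILE_TYPE;
}
```

The kernel version additionally distinguishes directories without `->lookup` (autofs-style) and caches the answer in `i_opflags`, but the shape of the decision is the same.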
+15 -14
fs/ecryptfs/dentry.c
··· 44 44 */ 45 45 static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags) 46 46 { 47 - struct dentry *lower_dentry; 48 - int rc = 1; 47 + struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry); 48 + int rc; 49 + 50 + if (!(lower_dentry->d_flags & DCACHE_OP_REVALIDATE)) 51 + return 1; 49 52 50 53 if (flags & LOOKUP_RCU) 51 54 return -ECHILD; 52 55 53 - lower_dentry = ecryptfs_dentry_to_lower(dentry); 54 - if (!lower_dentry->d_op || !lower_dentry->d_op->d_revalidate) 55 - goto out; 56 56 rc = lower_dentry->d_op->d_revalidate(lower_dentry, flags); 57 57 if (dentry->d_inode) { 58 58 struct inode *lower_inode = ··· 60 60 61 61 fsstack_copy_attr_all(dentry->d_inode, lower_inode); 62 62 } 63 - out: 64 63 return rc; 65 64 } 66 65 67 66 struct kmem_cache *ecryptfs_dentry_info_cache; 67 + 68 + static void ecryptfs_dentry_free_rcu(struct rcu_head *head) 69 + { 70 + kmem_cache_free(ecryptfs_dentry_info_cache, 71 + container_of(head, struct ecryptfs_dentry_info, rcu)); 72 + } 68 73 69 74 /** 70 75 * ecryptfs_d_release ··· 79 74 */ 80 75 static void ecryptfs_d_release(struct dentry *dentry) 81 76 { 82 - if (ecryptfs_dentry_to_private(dentry)) { 83 - if (ecryptfs_dentry_to_lower(dentry)) { 84 - dput(ecryptfs_dentry_to_lower(dentry)); 85 - mntput(ecryptfs_dentry_to_lower_mnt(dentry)); 86 - } 87 - kmem_cache_free(ecryptfs_dentry_info_cache, 88 - ecryptfs_dentry_to_private(dentry)); 77 + struct ecryptfs_dentry_info *p = dentry->d_fsdata; 78 + if (p) { 79 + path_put(&p->lower_path); 80 + call_rcu(&p->rcu, ecryptfs_dentry_free_rcu); 89 81 } 90 - return; 91 82 } 92 83 93 84 const struct dentry_operations ecryptfs_dops = {
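The reordering in `ecryptfs_d_revalidate()` above tests the lower dentry's `DCACHE_OP_REVALIDATE` flag *before* the `LOOKUP_RCU` bailout: if the lower filesystem has no `->d_revalidate` at all, the dentry is trivially valid and there is no reason to force the lookup out of RCU mode. A sketch of that control flow with illustrative constants:

```c
#include <assert.h>

#define HAS_REVALIDATE 0x1   /* stand-in for DCACHE_OP_REVALIDATE */
#define LOOKUP_RCU     0x2
#define ECHILD_RETRY  -10    /* "leave RCU walk and retry in ref mode" */

static int revalidate(unsigned lower_flags, unsigned lookup_flags)
{
    if (!(lower_flags & HAS_REVALIDATE))
        return 1;              /* valid, even under LOOKUP_RCU */
    if (lookup_flags & LOOKUP_RCU)
        return ECHILD_RETRY;   /* must call down; can't sleep in RCU walk */
    return 1;                  /* would call lower ->d_revalidate here */
}
```

The common case (a lower fs like ext4 with no `->d_revalidate`) now stays entirely in the fast RCU-walk path.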
+4 -15
fs/ecryptfs/ecryptfs_kernel.h
··· 261 261 * vfsmount too. */ 262 262 struct ecryptfs_dentry_info { 263 263 struct path lower_path; 264 - struct ecryptfs_crypt_stat *crypt_stat; 264 + union { 265 + struct ecryptfs_crypt_stat *crypt_stat; 266 + struct rcu_head rcu; 267 + }; 265 268 }; 266 269 267 270 /** ··· 515 512 return ((struct ecryptfs_dentry_info *)dentry->d_fsdata)->lower_path.dentry; 516 513 } 517 514 518 - static inline void 519 - ecryptfs_set_dentry_lower(struct dentry *dentry, struct dentry *lower_dentry) 520 - { 521 - ((struct ecryptfs_dentry_info *)dentry->d_fsdata)->lower_path.dentry = 522 - lower_dentry; 523 - } 524 - 525 515 static inline struct vfsmount * 526 516 ecryptfs_dentry_to_lower_mnt(struct dentry *dentry) 527 517 { ··· 525 529 ecryptfs_dentry_to_lower_path(struct dentry *dentry) 526 530 { 527 531 return &((struct ecryptfs_dentry_info *)dentry->d_fsdata)->lower_path; 528 - } 529 - 530 - static inline void 531 - ecryptfs_set_dentry_lower_mnt(struct dentry *dentry, struct vfsmount *lower_mnt) 532 - { 533 - ((struct ecryptfs_dentry_info *)dentry->d_fsdata)->lower_path.mnt = 534 - lower_mnt; 535 532 } 536 533 537 534 #define ecryptfs_printk(type, fmt, arg...) \
+4 -4
fs/ecryptfs/file.c
··· 271 271 { 272 272 struct file *lower_file = ecryptfs_file_to_lower(file); 273 273 274 - if (lower_file->f_op && lower_file->f_op->flush) { 274 + if (lower_file->f_op->flush) { 275 275 filemap_write_and_wait(file->f_mapping); 276 276 return lower_file->f_op->flush(lower_file, td); 277 277 } ··· 305 305 struct file *lower_file = NULL; 306 306 307 307 lower_file = ecryptfs_file_to_lower(file); 308 - if (lower_file->f_op && lower_file->f_op->fasync) 308 + if (lower_file->f_op->fasync) 309 309 rc = lower_file->f_op->fasync(fd, lower_file, flag); 310 310 return rc; 311 311 } ··· 318 318 319 319 if (ecryptfs_file_to_private(file)) 320 320 lower_file = ecryptfs_file_to_lower(file); 321 - if (lower_file && lower_file->f_op && lower_file->f_op->unlocked_ioctl) 321 + if (lower_file->f_op->unlocked_ioctl) 322 322 rc = lower_file->f_op->unlocked_ioctl(lower_file, cmd, arg); 323 323 return rc; 324 324 } ··· 332 332 333 333 if (ecryptfs_file_to_private(file)) 334 334 lower_file = ecryptfs_file_to_lower(file); 335 - if (lower_file && lower_file->f_op && lower_file->f_op->compat_ioctl) 335 + if (lower_file->f_op && lower_file->f_op->compat_ioctl) 336 336 rc = lower_file->f_op->compat_ioctl(lower_file, cmd, arg); 337 337 return rc; 338 338 }
+10 -19
fs/ecryptfs/inode.c
··· 153 153 154 154 dget(lower_dentry); 155 155 lower_dir_dentry = lock_parent(lower_dentry); 156 - rc = vfs_unlink(lower_dir_inode, lower_dentry); 156 + rc = vfs_unlink(lower_dir_inode, lower_dentry, NULL); 157 157 if (rc) { 158 158 printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc); 159 159 goto out_unlock; ··· 208 208 inode = __ecryptfs_get_inode(lower_dentry->d_inode, 209 209 directory_inode->i_sb); 210 210 if (IS_ERR(inode)) { 211 - vfs_unlink(lower_dir_dentry->d_inode, lower_dentry); 211 + vfs_unlink(lower_dir_dentry->d_inode, lower_dentry, NULL); 212 212 goto out_lock; 213 213 } 214 214 fsstack_copy_attr_times(directory_inode, lower_dir_dentry->d_inode); ··· 361 361 BUG_ON(!d_count(lower_dentry)); 362 362 363 363 ecryptfs_set_dentry_private(dentry, dentry_info); 364 - ecryptfs_set_dentry_lower(dentry, lower_dentry); 365 - ecryptfs_set_dentry_lower_mnt(dentry, lower_mnt); 364 + dentry_info->lower_path.mnt = lower_mnt; 365 + dentry_info->lower_path.dentry = lower_dentry; 366 366 367 367 if (!lower_dentry->d_inode) { 368 368 /* We want to add because we couldn't find in lower */ ··· 475 475 dget(lower_new_dentry); 476 476 lower_dir_dentry = lock_parent(lower_new_dentry); 477 477 rc = vfs_link(lower_old_dentry, lower_dir_dentry->d_inode, 478 - lower_new_dentry); 478 + lower_new_dentry, NULL); 479 479 if (rc || !lower_new_dentry->d_inode) 480 480 goto out_lock; 481 481 rc = ecryptfs_interpose(lower_new_dentry, new_dentry, dir->i_sb); ··· 640 640 goto out_lock; 641 641 } 642 642 rc = vfs_rename(lower_old_dir_dentry->d_inode, lower_old_dentry, 643 - lower_new_dir_dentry->d_inode, lower_new_dentry); 643 + lower_new_dir_dentry->d_inode, lower_new_dentry, 644 + NULL); 644 645 if (rc) 645 646 goto out_lock; 646 647 if (target_inode) ··· 702 701 out: 703 702 nd_set_link(nd, buf); 704 703 return NULL; 705 - } 706 - 707 - static void 708 - ecryptfs_put_link(struct dentry *dentry, struct nameidata *nd, void *ptr) 709 - { 710 - char *buf = nd_get_link(nd); 711 - if 
(!IS_ERR(buf)) { 712 - /* Free the char* */ 713 - kfree(buf); 714 - } 715 704 } 716 705 717 706 /** ··· 882 891 struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry); 883 892 884 893 mutex_lock(&lower_dentry->d_inode->i_mutex); 885 - rc = notify_change(lower_dentry, &lower_ia); 894 + rc = notify_change(lower_dentry, &lower_ia, NULL); 886 895 mutex_unlock(&lower_dentry->d_inode->i_mutex); 887 896 } 888 897 return rc; ··· 983 992 lower_ia.ia_valid &= ~ATTR_MODE; 984 993 985 994 mutex_lock(&lower_dentry->d_inode->i_mutex); 986 - rc = notify_change(lower_dentry, &lower_ia); 995 + rc = notify_change(lower_dentry, &lower_ia, NULL); 987 996 mutex_unlock(&lower_dentry->d_inode->i_mutex); 988 997 out: 989 998 fsstack_copy_attr_all(inode, lower_inode); ··· 1112 1121 const struct inode_operations ecryptfs_symlink_iops = { 1113 1122 .readlink = generic_readlink, 1114 1123 .follow_link = ecryptfs_follow_link, 1115 - .put_link = ecryptfs_put_link, 1124 + .put_link = kfree_put_link, 1116 1125 .permission = ecryptfs_permission, 1117 1126 .setattr = ecryptfs_setattr, 1118 1127 .getattr = ecryptfs_getattr_link,
+1 -2
fs/ecryptfs/main.c
··· 585 585 586 586 /* ->kill_sb() will take care of root_info */ 587 587 ecryptfs_set_dentry_private(s->s_root, root_info); 588 - ecryptfs_set_dentry_lower(s->s_root, path.dentry); 589 - ecryptfs_set_dentry_lower_mnt(s->s_root, path.mnt); 588 + root_info->lower_path = path; 590 589 591 590 s->s_flags |= MS_ACTIVE; 592 591 return dget(s->s_root);
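The main.c hunk above collapses the two per-field setters (`ecryptfs_set_dentry_lower` and `ecryptfs_set_dentry_lower_mnt`) into one assignment of the whole `struct path`. As a small userland sketch (the `path_like` struct and `copy_path` helper are illustrative stand-ins, not kernel API), C struct assignment copies all members in a single statement:

```c
#include <stddef.h>

/* Illustrative stand-in for the kernel's two-pointer struct path. */
struct path_like {
	void *mnt;
	void *dentry;
};

/* Returning the argument performs a whole-struct, member-wise copy --
 * the same semantics that `root_info->lower_path = path;` relies on. */
struct path_like copy_path(struct path_like src)
{
	return src;
}
```

Besides being shorter, the single assignment keeps the two members from ever being set independently, which is what made the removed setters easy to misuse.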
+1 -1
fs/eventpoll.c
··· 1814 1814 1815 1815 /* The target file descriptor must support poll */ 1816 1816 error = -EPERM; 1817 - if (!tf.file->f_op || !tf.file->f_op->poll) 1817 + if (!tf.file->f_op->poll) 1818 1818 goto error_tgt_fput; 1819 1819 1820 1820 /* Check if EPOLLWAKEUP is allowed */
+15 -20
fs/exec.c
··· 106 106 */ 107 107 SYSCALL_DEFINE1(uselib, const char __user *, library) 108 108 { 109 + struct linux_binfmt *fmt; 109 110 struct file *file; 110 111 struct filename *tmp = getname(library); 111 112 int error = PTR_ERR(tmp); ··· 137 136 fsnotify_open(file); 138 137 139 138 error = -ENOEXEC; 140 - if(file->f_op) { 141 - struct linux_binfmt * fmt; 142 139 143 - read_lock(&binfmt_lock); 144 - list_for_each_entry(fmt, &formats, lh) { 145 - if (!fmt->load_shlib) 146 - continue; 147 - if (!try_module_get(fmt->module)) 148 - continue; 149 - read_unlock(&binfmt_lock); 150 - error = fmt->load_shlib(file); 151 - read_lock(&binfmt_lock); 152 - put_binfmt(fmt); 153 - if (error != -ENOEXEC) 154 - break; 155 - } 140 + read_lock(&binfmt_lock); 141 + list_for_each_entry(fmt, &formats, lh) { 142 + if (!fmt->load_shlib) 143 + continue; 144 + if (!try_module_get(fmt->module)) 145 + continue; 156 146 read_unlock(&binfmt_lock); 147 + error = fmt->load_shlib(file); 148 + read_lock(&binfmt_lock); 149 + put_binfmt(fmt); 150 + if (error != -ENOEXEC) 151 + break; 157 152 } 153 + read_unlock(&binfmt_lock); 158 154 exit: 159 155 fput(file); 160 156 out: ··· 1275 1277 */ 1276 1278 int prepare_binprm(struct linux_binprm *bprm) 1277 1279 { 1278 - umode_t mode; 1279 - struct inode * inode = file_inode(bprm->file); 1280 + struct inode *inode = file_inode(bprm->file); 1281 + umode_t mode = inode->i_mode; 1280 1282 int retval; 1281 1283 1282 - mode = inode->i_mode; 1283 - if (bprm->file->f_op == NULL) 1284 - return -EACCES; 1285 1284 1286 1285 /* clear any previous set[ug]id data from a previous binary */ 1287 1286 bprm->cred->euid = current_euid();
+148 -117
fs/exportfs/expfs.c
··· 69 69 return NULL; 70 70 } 71 71 72 - /* 73 - * Find root of a disconnected subtree and return a reference to it. 74 - */ 75 - static struct dentry * 76 - find_disconnected_root(struct dentry *dentry) 72 + static bool dentry_connected(struct dentry *dentry) 77 73 { 78 74 dget(dentry); 79 - while (!IS_ROOT(dentry)) { 75 + while (dentry->d_flags & DCACHE_DISCONNECTED) { 80 76 struct dentry *parent = dget_parent(dentry); 81 77 82 - if (!(parent->d_flags & DCACHE_DISCONNECTED)) { 78 + dput(dentry); 79 + if (IS_ROOT(dentry)) { 83 80 dput(parent); 84 - break; 81 + return false; 85 82 } 83 + dentry = parent; 84 + } 85 + dput(dentry); 86 + return true; 87 + } 88 + 89 + static void clear_disconnected(struct dentry *dentry) 90 + { 91 + dget(dentry); 92 + while (dentry->d_flags & DCACHE_DISCONNECTED) { 93 + struct dentry *parent = dget_parent(dentry); 94 + 95 + WARN_ON_ONCE(IS_ROOT(dentry)); 96 + 97 + spin_lock(&dentry->d_lock); 98 + dentry->d_flags &= ~DCACHE_DISCONNECTED; 99 + spin_unlock(&dentry->d_lock); 86 100 87 101 dput(dentry); 88 102 dentry = parent; 89 103 } 90 - return dentry; 104 + dput(dentry); 105 + } 106 + 107 + /* 108 + * Reconnect a directory dentry with its parent. 109 + * 110 + * This can return a dentry, or NULL, or an error. 111 + * 112 + * In the first case the returned dentry is the parent of the given 113 + * dentry, and may itself need to be reconnected to its parent. 114 + * 115 + * In the NULL case, a concurrent VFS operation has either renamed or 116 + * removed this directory. The concurrent operation has reconnected our 117 + * dentry, so we no longer need to. 
118 + */ 119 + static struct dentry *reconnect_one(struct vfsmount *mnt, 120 + struct dentry *dentry, char *nbuf) 121 + { 122 + struct dentry *parent; 123 + struct dentry *tmp; 124 + int err; 125 + 126 + parent = ERR_PTR(-EACCES); 127 + mutex_lock(&dentry->d_inode->i_mutex); 128 + if (mnt->mnt_sb->s_export_op->get_parent) 129 + parent = mnt->mnt_sb->s_export_op->get_parent(dentry); 130 + mutex_unlock(&dentry->d_inode->i_mutex); 131 + 132 + if (IS_ERR(parent)) { 133 + dprintk("%s: get_parent of %ld failed, err %d\n", 134 + __func__, dentry->d_inode->i_ino, PTR_ERR(parent)); 135 + return parent; 136 + } 137 + 138 + dprintk("%s: find name of %lu in %lu\n", __func__, 139 + dentry->d_inode->i_ino, parent->d_inode->i_ino); 140 + err = exportfs_get_name(mnt, parent, nbuf, dentry); 141 + if (err == -ENOENT) 142 + goto out_reconnected; 143 + if (err) 144 + goto out_err; 145 + dprintk("%s: found name: %s\n", __func__, nbuf); 146 + mutex_lock(&parent->d_inode->i_mutex); 147 + tmp = lookup_one_len(nbuf, parent, strlen(nbuf)); 148 + mutex_unlock(&parent->d_inode->i_mutex); 149 + if (IS_ERR(tmp)) { 150 + dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp)); 151 + goto out_err; 152 + } 153 + if (tmp != dentry) { 154 + dput(tmp); 155 + goto out_reconnected; 156 + } 157 + dput(tmp); 158 + if (IS_ROOT(dentry)) { 159 + err = -ESTALE; 160 + goto out_err; 161 + } 162 + return parent; 163 + 164 + out_err: 165 + dput(parent); 166 + return ERR_PTR(err); 167 + out_reconnected: 168 + dput(parent); 169 + /* 170 + * Someone must have renamed our entry into another parent, in 171 + * which case it has been reconnected by the rename. 172 + * 173 + * Or someone removed it entirely, in which case filehandle 174 + * lookup will succeed but the directory is now IS_DEAD and 175 + * subsequent operations on it will fail. 
176 + * 177 + * Alternatively, maybe there was no race at all, and the 178 + * filesystem is just corrupt and gave us a parent that doesn't 179 + * actually contain any entry pointing to this inode. So, 180 + * double check that this worked and return -ESTALE if not: 181 + */ 182 + if (!dentry_connected(dentry)) 183 + return ERR_PTR(-ESTALE); 184 + return NULL; 91 185 } 92 186 93 187 /* 94 188 * Make sure target_dir is fully connected to the dentry tree. 95 189 * 96 - * It may already be, as the flag isn't always updated when connection happens. 190 + * On successful return, DCACHE_DISCONNECTED will be cleared on 191 + * target_dir, and target_dir->d_parent->...->d_parent will reach the 192 + * root of the filesystem. 193 + * 194 + * Whenever DCACHE_DISCONNECTED is unset, target_dir is fully connected. 195 + * But the converse is not true: target_dir may have DCACHE_DISCONNECTED 196 + * set but already be connected. In that case we'll verify the 197 + * connection to root and then clear the flag. 198 + * 199 + * Note that target_dir could be removed by a concurrent operation. In 200 + * that case reconnect_path may still succeed with target_dir fully 201 + * connected, but further operations using the filehandle will fail when 202 + * necessary (due to S_DEAD being set on the directory). 97 203 */ 98 204 static int 99 205 reconnect_path(struct vfsmount *mnt, struct dentry *target_dir, char *nbuf) 100 206 { 101 - int noprogress = 0; 102 - int err = -ESTALE; 207 + struct dentry *dentry, *parent; 103 208 104 - /* 105 - * It is possible that a confused file system might not let us complete 106 - * the path to the root. For example, if get_parent returns a directory 107 - * in which we cannot find a name for the child. While this implies a 108 - * very sick filesystem we don't want it to cause knfsd to spin. Hence 109 - * the noprogress counter. 
If we go through the loop 10 times (2 is 110 - * probably enough) without getting anywhere, we just give up 111 - */ 112 - while (target_dir->d_flags & DCACHE_DISCONNECTED && noprogress++ < 10) { 113 - struct dentry *pd = find_disconnected_root(target_dir); 209 + dentry = dget(target_dir); 114 210 115 - if (!IS_ROOT(pd)) { 116 - /* must have found a connected parent - great */ 117 - spin_lock(&pd->d_lock); 118 - pd->d_flags &= ~DCACHE_DISCONNECTED; 119 - spin_unlock(&pd->d_lock); 120 - noprogress = 0; 121 - } else if (pd == mnt->mnt_sb->s_root) { 122 - printk(KERN_ERR "export: Eeek filesystem root is not connected, impossible\n"); 123 - spin_lock(&pd->d_lock); 124 - pd->d_flags &= ~DCACHE_DISCONNECTED; 125 - spin_unlock(&pd->d_lock); 126 - noprogress = 0; 127 - } else { 128 - /* 129 - * We have hit the top of a disconnected path, try to 130 - * find parent and connect. 131 - * 132 - * Racing with some other process renaming a directory 133 - * isn't much of a problem here. If someone renames 134 - * the directory, it will end up properly connected, 135 - * which is what we want 136 - * 137 - * Getting the parent can't be supported generically, 138 - * the locking is too icky. 139 - * 140 - * Instead we just return EACCES. 
If server reboots 141 - * or inodes get flushed, you lose 142 - */ 143 - struct dentry *ppd = ERR_PTR(-EACCES); 144 - struct dentry *npd; 211 + while (dentry->d_flags & DCACHE_DISCONNECTED) { 212 + BUG_ON(dentry == mnt->mnt_sb->s_root); 145 213 146 - mutex_lock(&pd->d_inode->i_mutex); 147 - if (mnt->mnt_sb->s_export_op->get_parent) 148 - ppd = mnt->mnt_sb->s_export_op->get_parent(pd); 149 - mutex_unlock(&pd->d_inode->i_mutex); 214 + if (IS_ROOT(dentry)) 215 + parent = reconnect_one(mnt, dentry, nbuf); 216 + else 217 + parent = dget_parent(dentry); 150 218 151 - if (IS_ERR(ppd)) { 152 - err = PTR_ERR(ppd); 153 - dprintk("%s: get_parent of %ld failed, err %d\n", 154 - __func__, pd->d_inode->i_ino, err); 155 - dput(pd); 156 - break; 157 - } 158 - 159 - dprintk("%s: find name of %lu in %lu\n", __func__, 160 - pd->d_inode->i_ino, ppd->d_inode->i_ino); 161 - err = exportfs_get_name(mnt, ppd, nbuf, pd); 162 - if (err) { 163 - dput(ppd); 164 - dput(pd); 165 - if (err == -ENOENT) 166 - /* some race between get_parent and 167 - * get_name? just try again 168 - */ 169 - continue; 170 - break; 171 - } 172 - dprintk("%s: found name: %s\n", __func__, nbuf); 173 - mutex_lock(&ppd->d_inode->i_mutex); 174 - npd = lookup_one_len(nbuf, ppd, strlen(nbuf)); 175 - mutex_unlock(&ppd->d_inode->i_mutex); 176 - if (IS_ERR(npd)) { 177 - err = PTR_ERR(npd); 178 - dprintk("%s: lookup failed: %d\n", 179 - __func__, err); 180 - dput(ppd); 181 - dput(pd); 182 - break; 183 - } 184 - /* we didn't really want npd, we really wanted 185 - * a side-effect of the lookup. 
186 - * hopefully, npd == pd, though it isn't really 187 - * a problem if it isn't 188 - */ 189 - if (npd == pd) 190 - noprogress = 0; 191 - else 192 - printk("%s: npd != pd\n", __func__); 193 - dput(npd); 194 - dput(ppd); 195 - if (IS_ROOT(pd)) { 196 - /* something went wrong, we have to give up */ 197 - dput(pd); 198 - break; 199 - } 200 - } 201 - dput(pd); 219 + if (!parent) 220 + break; 221 + dput(dentry); 222 + if (IS_ERR(parent)) 223 + return PTR_ERR(parent); 224 + dentry = parent; 202 225 } 203 - 204 - if (target_dir->d_flags & DCACHE_DISCONNECTED) { 205 - /* something went wrong - oh-well */ 206 - if (!err) 207 - err = -ESTALE; 208 - return err; 209 - } 210 - 226 + dput(dentry); 227 + clear_disconnected(target_dir); 211 228 return 0; 212 229 } 213 230 ··· 232 215 struct dir_context ctx; 233 216 char *name; /* name that was found. It already points to a 234 217 buffer NAME_MAX+1 is size */ 235 - unsigned long ino; /* the inum we are looking for */ 218 + u64 ino; /* the inum we are looking for */ 236 219 int found; /* inode matched? */ 237 220 int sequence; /* sequence counter */ 238 221 }; ··· 272 255 struct inode *dir = path->dentry->d_inode; 273 256 int error; 274 257 struct file *file; 258 + struct kstat stat; 259 + struct path child_path = { 260 + .mnt = path->mnt, 261 + .dentry = child, 262 + }; 275 263 struct getdents_callback buffer = { 276 264 .ctx.actor = filldir_one, 277 265 .name = name, 278 - .ino = child->d_inode->i_ino 279 266 }; 280 267 281 268 error = -ENOTDIR; ··· 288 267 error = -EINVAL; 289 268 if (!dir->i_fop) 290 269 goto out; 270 + /* 271 + * inode->i_ino is unsigned long, kstat->ino is u64, so the 272 + * former would be insufficient on 32-bit hosts when the 273 + * filesystem supports 64-bit inode numbers. 
So we need to 274 + * actually call ->getattr, not just read i_ino: 275 + */ 276 + error = vfs_getattr_nosec(&child_path, &stat); 277 + if (error) 278 + return error; 279 + buffer.ino = stat.ino; 291 280 /* 292 281 * Open the directory ... 293 282 */
-2
fs/ext4/ext4.h
··· 2734 2734 struct inode *second); 2735 2735 extern void ext4_double_up_write_data_sem(struct inode *orig_inode, 2736 2736 struct inode *donor_inode); 2737 - void ext4_inode_double_lock(struct inode *inode1, struct inode *inode2); 2738 - void ext4_inode_double_unlock(struct inode *inode1, struct inode *inode2); 2739 2737 extern int ext4_move_extents(struct file *o_filp, struct file *d_filp, 2740 2738 __u64 start_orig, __u64 start_donor, 2741 2739 __u64 len, __u64 *moved_len);
+2 -2
fs/ext4/ioctl.c
··· 130 130 131 131 /* Protect orig inodes against a truncate and make sure, 132 132 * that only 1 swap_inode_boot_loader is running. */ 133 - ext4_inode_double_lock(inode, inode_bl); 133 + lock_two_nondirectories(inode, inode_bl); 134 134 135 135 truncate_inode_pages(&inode->i_data, 0); 136 136 truncate_inode_pages(&inode_bl->i_data, 0); ··· 205 205 ext4_inode_resume_unlocked_dio(inode); 206 206 ext4_inode_resume_unlocked_dio(inode_bl); 207 207 208 - ext4_inode_double_unlock(inode, inode_bl); 208 + unlock_two_nondirectories(inode, inode_bl); 209 209 210 210 iput(inode_bl); 211 211
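The `lock_two_nondirectories()` call replacing ext4's private helper follows the "inode pointer" rule added to Documentation/filesystems/directory-locking: when two non-directory inodes must both be locked, take the lock at the lower address first, so concurrent lockers of the same pair can never deadlock ABBA-style. A hedged pthread sketch of the idea (function names are illustrative, not the kernel implementation, which also uses lockdep annotations):

```c
#include <pthread.h>
#include <stddef.h>

/* Lock one or two mutexes, normalizing to increasing address order
 * the way lock_two_nondirectories() orders i_mutex by inode pointer. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (b == NULL || a == b) {
		pthread_mutex_lock(a);
		return;
	}
	if (a > b) {			/* swap into address order */
		pthread_mutex_t *tmp = a;
		a = b;
		b = tmp;
	}
	pthread_mutex_lock(a);
	pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	pthread_mutex_unlock(a);
	if (b != NULL && b != a)
		pthread_mutex_unlock(b);
}
```

Because both callers normalize to the same order, it does not matter which argument order each caller passes; that is what lets the common helper subsume ext4's `I_MUTEX_PARENT`/`I_MUTEX_CHILD` scheme.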
+2 -38
fs/ext4/move_extent.c
··· 1203 1203 } 1204 1204 1205 1205 /** 1206 - * ext4_inode_double_lock - Lock i_mutex on both @inode1 and @inode2 1207 - * 1208 - * @inode1: the inode structure 1209 - * @inode2: the inode structure 1210 - * 1211 - * Lock two inodes' i_mutex 1212 - */ 1213 - void 1214 - ext4_inode_double_lock(struct inode *inode1, struct inode *inode2) 1215 - { 1216 - BUG_ON(inode1 == inode2); 1217 - if (inode1 < inode2) { 1218 - mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT); 1219 - mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD); 1220 - } else { 1221 - mutex_lock_nested(&inode2->i_mutex, I_MUTEX_PARENT); 1222 - mutex_lock_nested(&inode1->i_mutex, I_MUTEX_CHILD); 1223 - } 1224 - } 1225 - 1226 - /** 1227 - * ext4_inode_double_unlock - Release i_mutex on both @inode1 and @inode2 1228 - * 1229 - * @inode1: the inode that is released first 1230 - * @inode2: the inode that is released second 1231 - * 1232 - */ 1233 - 1234 - void 1235 - ext4_inode_double_unlock(struct inode *inode1, struct inode *inode2) 1236 - { 1237 - mutex_unlock(&inode1->i_mutex); 1238 - mutex_unlock(&inode2->i_mutex); 1239 - } 1240 - 1241 - /** 1242 1206 * ext4_move_extents - Exchange the specified range of a file 1243 1207 * 1244 1208 * @o_filp: file structure of the original file ··· 1291 1327 return -EINVAL; 1292 1328 } 1293 1329 /* Protect orig and donor inodes against a truncate */ 1294 - ext4_inode_double_lock(orig_inode, donor_inode); 1330 + lock_two_nondirectories(orig_inode, donor_inode); 1295 1331 1296 1332 /* Wait for all existing dio workers */ 1297 1333 ext4_inode_block_unlocked_dio(orig_inode); ··· 1499 1535 ext4_double_up_write_data_sem(orig_inode, donor_inode); 1500 1536 ext4_inode_resume_unlocked_dio(orig_inode); 1501 1537 ext4_inode_resume_unlocked_dio(donor_inode); 1502 - ext4_inode_double_unlock(orig_inode, donor_inode); 1538 + unlock_two_nondirectories(orig_inode, donor_inode); 1503 1539 1504 1540 return ret; 1505 1541 }
+1
fs/fat/fat.h
··· 102 102 struct hlist_head dir_hashtable[FAT_HASH_SIZE]; 103 103 104 104 unsigned int dirty; /* fs state before mount */ 105 + struct rcu_head rcu; 105 106 }; 106 107 107 108 #define FAT_CACHE_VALID 0 /* special case for valid cache */
+11 -8
fs/fat/inode.c
··· 548 548 brelse(bh); 549 549 } 550 550 551 + static void delayed_free(struct rcu_head *p) 552 + { 553 + struct msdos_sb_info *sbi = container_of(p, struct msdos_sb_info, rcu); 554 + unload_nls(sbi->nls_disk); 555 + unload_nls(sbi->nls_io); 556 + if (sbi->options.iocharset != fat_default_iocharset) 557 + kfree(sbi->options.iocharset); 558 + kfree(sbi); 559 + } 560 + 551 561 static void fat_put_super(struct super_block *sb) 552 562 { 553 563 struct msdos_sb_info *sbi = MSDOS_SB(sb); ··· 567 557 iput(sbi->fsinfo_inode); 568 558 iput(sbi->fat_inode); 569 559 570 - unload_nls(sbi->nls_disk); 571 - unload_nls(sbi->nls_io); 572 - 573 - if (sbi->options.iocharset != fat_default_iocharset) 574 - kfree(sbi->options.iocharset); 575 - 576 - sb->s_fs_info = NULL; 577 - kfree(sbi); 560 + call_rcu(&sbi->rcu, delayed_free); 578 561 } 579 562 580 563 static struct kmem_cache *fat_inode_cachep;
+2 -3
fs/fcntl.c
··· 56 56 return -EINVAL; 57 57 } 58 58 59 - if (filp->f_op && filp->f_op->check_flags) 59 + if (filp->f_op->check_flags) 60 60 error = filp->f_op->check_flags(arg); 61 61 if (error) 62 62 return error; ··· 64 64 /* 65 65 * ->fasync() is responsible for setting the FASYNC bit. 66 66 */ 67 - if (((arg ^ filp->f_flags) & FASYNC) && filp->f_op && 68 - filp->f_op->fasync) { 67 + if (((arg ^ filp->f_flags) & FASYNC) && filp->f_op->fasync) { 69 68 error = filp->f_op->fasync(fd, filp, (arg & FASYNC) != 0); 70 69 if (error < 0) 71 70 goto out;
+2 -127
fs/file_table.c
··· 36 36 .max_files = NR_FILE 37 37 }; 38 38 39 - DEFINE_STATIC_LGLOCK(files_lglock); 40 - 41 39 /* SLAB cache for file structures */ 42 40 static struct kmem_cache *filp_cachep __read_mostly; 43 41 ··· 132 134 return ERR_PTR(error); 133 135 } 134 136 135 - INIT_LIST_HEAD(&f->f_u.fu_list); 136 137 atomic_long_set(&f->f_count, 1); 137 138 rwlock_init(&f->f_owner.lock); 138 139 spin_lock_init(&f->f_lock); ··· 237 240 locks_remove_flock(file); 238 241 239 242 if (unlikely(file->f_flags & FASYNC)) { 240 - if (file->f_op && file->f_op->fasync) 243 + if (file->f_op->fasync) 241 244 file->f_op->fasync(-1, file, 0); 242 245 } 243 246 ima_file_free(file); 244 - if (file->f_op && file->f_op->release) 247 + if (file->f_op->release) 245 248 file->f_op->release(inode, file); 246 249 security_file_free(file); 247 250 if (unlikely(S_ISCHR(inode->i_mode) && inode->i_cdev != NULL && ··· 301 304 if (atomic_long_dec_and_test(&file->f_count)) { 302 305 struct task_struct *task = current; 303 306 304 - file_sb_list_del(file); 305 307 if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) { 306 308 init_task_work(&file->f_u.fu_rcuhead, ____fput); 307 309 if (!task_work_add(task, &file->f_u.fu_rcuhead, true)) ··· 329 333 { 330 334 if (atomic_long_dec_and_test(&file->f_count)) { 331 335 struct task_struct *task = current; 332 - file_sb_list_del(file); 333 336 BUG_ON(!(task->flags & PF_KTHREAD)); 334 337 __fput(file); 335 338 } ··· 340 345 { 341 346 if (atomic_long_dec_and_test(&file->f_count)) { 342 347 security_file_free(file); 343 - file_sb_list_del(file); 344 348 file_free(file); 345 349 } 346 - } 347 - 348 - static inline int file_list_cpu(struct file *file) 349 - { 350 - #ifdef CONFIG_SMP 351 - return file->f_sb_list_cpu; 352 - #else 353 - return smp_processor_id(); 354 - #endif 355 - } 356 - 357 - /* helper for file_sb_list_add to reduce ifdefs */ 358 - static inline void __file_sb_list_add(struct file *file, struct super_block *sb) 359 - { 360 - struct list_head *list; 361 - 
#ifdef CONFIG_SMP 362 - int cpu; 363 - cpu = smp_processor_id(); 364 - file->f_sb_list_cpu = cpu; 365 - list = per_cpu_ptr(sb->s_files, cpu); 366 - #else 367 - list = &sb->s_files; 368 - #endif 369 - list_add(&file->f_u.fu_list, list); 370 - } 371 - 372 - /** 373 - * file_sb_list_add - add a file to the sb's file list 374 - * @file: file to add 375 - * @sb: sb to add it to 376 - * 377 - * Use this function to associate a file with the superblock of the inode it 378 - * refers to. 379 - */ 380 - void file_sb_list_add(struct file *file, struct super_block *sb) 381 - { 382 - if (likely(!(file->f_mode & FMODE_WRITE))) 383 - return; 384 - if (!S_ISREG(file_inode(file)->i_mode)) 385 - return; 386 - lg_local_lock(&files_lglock); 387 - __file_sb_list_add(file, sb); 388 - lg_local_unlock(&files_lglock); 389 - } 390 - 391 - /** 392 - * file_sb_list_del - remove a file from the sb's file list 393 - * @file: file to remove 394 - * @sb: sb to remove it from 395 - * 396 - * Use this function to remove a file from its superblock. 397 - */ 398 - void file_sb_list_del(struct file *file) 399 - { 400 - if (!list_empty(&file->f_u.fu_list)) { 401 - lg_local_lock_cpu(&files_lglock, file_list_cpu(file)); 402 - list_del_init(&file->f_u.fu_list); 403 - lg_local_unlock_cpu(&files_lglock, file_list_cpu(file)); 404 - } 405 - } 406 - 407 - #ifdef CONFIG_SMP 408 - 409 - /* 410 - * These macros iterate all files on all CPUs for a given superblock. 411 - * files_lglock must be held globally. 
412 - */ 413 - #define do_file_list_for_each_entry(__sb, __file) \ 414 - { \ 415 - int i; \ 416 - for_each_possible_cpu(i) { \ 417 - struct list_head *list; \ 418 - list = per_cpu_ptr((__sb)->s_files, i); \ 419 - list_for_each_entry((__file), list, f_u.fu_list) 420 - 421 - #define while_file_list_for_each_entry \ 422 - } \ 423 - } 424 - 425 - #else 426 - 427 - #define do_file_list_for_each_entry(__sb, __file) \ 428 - { \ 429 - struct list_head *list; \ 430 - list = &(sb)->s_files; \ 431 - list_for_each_entry((__file), list, f_u.fu_list) 432 - 433 - #define while_file_list_for_each_entry \ 434 - } 435 - 436 - #endif 437 - 438 - /** 439 - * mark_files_ro - mark all files read-only 440 - * @sb: superblock in question 441 - * 442 - * All files are marked read-only. We don't care about pending 443 - * delete files so this should be used in 'force' mode only. 444 - */ 445 - void mark_files_ro(struct super_block *sb) 446 - { 447 - struct file *f; 448 - 449 - lg_global_lock(&files_lglock); 450 - do_file_list_for_each_entry(sb, f) { 451 - if (!file_count(f)) 452 - continue; 453 - if (!(f->f_mode & FMODE_WRITE)) 454 - continue; 455 - spin_lock(&f->f_lock); 456 - f->f_mode &= ~FMODE_WRITE; 457 - spin_unlock(&f->f_lock); 458 - if (file_check_writeable(f) != 0) 459 - continue; 460 - __mnt_drop_write(f->f_path.mnt); 461 - file_release_write(f); 462 - } while_file_list_for_each_entry; 463 - lg_global_unlock(&files_lglock); 464 350 } 465 351 466 352 void __init files_init(unsigned long mempages) ··· 359 483 n = (mempages * (PAGE_SIZE / 1024)) / 10; 360 484 files_stat.max_files = max_t(unsigned long, n, NR_FILE); 361 485 files_defer_init(); 362 - lg_lock_init(&files_lglock, "files_lglock"); 363 486 percpu_counter_init(&nr_files, 0); 364 487 }
+1
fs/fs-writeback.c
··· 26 26 #include <linux/blkdev.h> 27 27 #include <linux/backing-dev.h> 28 28 #include <linux/tracepoint.h> 29 + #include <linux/device.h> 29 30 #include "internal.h" 30 31 31 32 /*
+1 -1
fs/fuse/cuse.c
··· 473 473 static void cuse_fc_release(struct fuse_conn *fc) 474 474 { 475 475 struct cuse_conn *cc = fc_to_cc(fc); 476 - kfree(cc); 476 + kfree_rcu(cc, fc.rcu); 477 477 } 478 478 479 479 /**
+5 -35
fs/fuse/dir.c
··· 342 342 return err; 343 343 } 344 344 345 - static struct dentry *fuse_materialise_dentry(struct dentry *dentry, 346 - struct inode *inode) 347 - { 348 - struct dentry *newent; 349 - 350 - if (inode && S_ISDIR(inode->i_mode)) { 351 - struct fuse_conn *fc = get_fuse_conn(inode); 352 - 353 - mutex_lock(&fc->inst_mutex); 354 - newent = d_materialise_unique(dentry, inode); 355 - mutex_unlock(&fc->inst_mutex); 356 - } else { 357 - newent = d_materialise_unique(dentry, inode); 358 - } 359 - 360 - return newent; 361 - } 362 - 363 345 static struct dentry *fuse_lookup(struct inode *dir, struct dentry *entry, 364 346 unsigned int flags) 365 347 { ··· 364 382 if (inode && get_node_id(inode) == FUSE_ROOT_ID) 365 383 goto out_iput; 366 384 367 - newent = fuse_materialise_dentry(entry, inode); 385 + newent = d_materialise_unique(entry, inode); 368 386 err = PTR_ERR(newent); 369 387 if (IS_ERR(newent)) 370 388 goto out_err; ··· 583 601 } 584 602 kfree(forget); 585 603 586 - if (S_ISDIR(inode->i_mode)) { 587 - struct dentry *alias; 588 - mutex_lock(&fc->inst_mutex); 589 - alias = d_find_alias(inode); 590 - if (alias) { 591 - /* New directory must have moved since mkdir */ 592 - mutex_unlock(&fc->inst_mutex); 593 - dput(alias); 594 - iput(inode); 595 - return -EBUSY; 596 - } 597 - d_instantiate(entry, inode); 598 - mutex_unlock(&fc->inst_mutex); 599 - } else 600 - d_instantiate(entry, inode); 604 + err = d_instantiate_no_diralias(entry, inode); 605 + if (err) 606 + return err; 601 607 602 608 fuse_change_entry_timeout(entry, &outarg); 603 609 fuse_invalidate_attr(dir); ··· 1254 1284 if (!inode) 1255 1285 goto out; 1256 1286 1257 - alias = fuse_materialise_dentry(dentry, inode); 1287 + alias = d_materialise_unique(dentry, inode); 1258 1288 err = PTR_ERR(alias); 1259 1289 if (IS_ERR(alias)) 1260 1290 goto out;
+2 -3
fs/fuse/fuse_i.h
··· 375 375 /** Lock protecting accessess to members of this structure */ 376 376 spinlock_t lock; 377 377 378 - /** Mutex protecting against directory alias creation */ 379 - struct mutex inst_mutex; 380 - 381 378 /** Refcount */ 382 379 atomic_t count; 380 + 381 + struct rcu_head rcu; 383 382 384 383 /** The user id for this mount */ 385 384 kuid_t user_id;
+1 -3
fs/fuse/inode.c
··· 565 565 { 566 566 memset(fc, 0, sizeof(*fc)); 567 567 spin_lock_init(&fc->lock); 568 - mutex_init(&fc->inst_mutex); 569 568 init_rwsem(&fc->killsb); 570 569 atomic_set(&fc->count, 1); 571 570 init_waitqueue_head(&fc->waitq); ··· 595 596 if (atomic_dec_and_test(&fc->count)) { 596 597 if (fc->destroy_req) 597 598 fuse_request_free(fc->destroy_req); 598 - mutex_destroy(&fc->inst_mutex); 599 599 fc->release(fc); 600 600 } 601 601 } ··· 918 920 919 921 static void fuse_free_conn(struct fuse_conn *fc) 920 922 { 921 - kfree(fc); 923 + kfree_rcu(fc, rcu); 922 924 } 923 925 924 926 static int fuse_bdi_init(struct fuse_conn *fc, struct super_block *sb)
+1 -8
fs/gfs2/inode.c
··· 1514 1514 return NULL; 1515 1515 } 1516 1516 1517 - static void gfs2_put_link(struct dentry *dentry, struct nameidata *nd, void *p) 1518 - { 1519 - char *s = nd_get_link(nd); 1520 - if (!IS_ERR(s)) 1521 - kfree(s); 1522 - } 1523 - 1524 1517 /** 1525 1518 * gfs2_permission - 1526 1519 * @inode: The inode ··· 1865 1872 const struct inode_operations gfs2_symlink_iops = { 1866 1873 .readlink = generic_readlink, 1867 1874 .follow_link = gfs2_follow_link, 1868 - .put_link = gfs2_put_link, 1875 + .put_link = kfree_put_link, 1869 1876 .permission = gfs2_permission, 1870 1877 .setattr = gfs2_setattr, 1871 1878 .getattr = gfs2_getattr,
+1
fs/hpfs/hpfs_fn.h
··· 80 80 unsigned sb_c_bitmap; /* current bitmap */ 81 81 unsigned sb_max_fwd_alloc; /* max forwad allocation */ 82 82 int sb_timeshift; 83 + struct rcu_head rcu; 83 84 }; 84 85 85 86 /* Four 512-byte buffers and the 2k block obtained by concatenating them */
+1 -1
fs/hpfs/namei.c
··· 407 407 /*printk("HPFS: truncating file before delete.\n");*/ 408 408 newattrs.ia_size = 0; 409 409 newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME; 410 - err = notify_change(dentry, &newattrs); 410 + err = notify_change(dentry, &newattrs, NULL); 411 411 put_write_access(inode); 412 412 if (!err) 413 413 goto again;
+14 -14
fs/hpfs/super.c
··· 101 101 return 0; 102 102 } 103 103 104 + static void free_sbi(struct hpfs_sb_info *sbi) 105 + { 106 + kfree(sbi->sb_cp_table); 107 + kfree(sbi->sb_bmp_dir); 108 + kfree(sbi); 109 + } 110 + 111 + static void lazy_free_sbi(struct rcu_head *rcu) 112 + { 113 + free_sbi(container_of(rcu, struct hpfs_sb_info, rcu)); 114 + } 115 + 104 116 static void hpfs_put_super(struct super_block *s) 105 117 { 106 - struct hpfs_sb_info *sbi = hpfs_sb(s); 107 - 108 118 hpfs_lock(s); 109 119 unmark_dirty(s); 110 120 hpfs_unlock(s); 111 - 112 - kfree(sbi->sb_cp_table); 113 - kfree(sbi->sb_bmp_dir); 114 - s->s_fs_info = NULL; 115 - kfree(sbi); 121 + call_rcu(&hpfs_sb(s)->rcu, lazy_free_sbi); 116 122 } 117 123 118 124 unsigned hpfs_count_one_bitmap(struct super_block *s, secno secno) ··· 491 485 } 492 486 s->s_fs_info = sbi; 493 487 494 - sbi->sb_bmp_dir = NULL; 495 - sbi->sb_cp_table = NULL; 496 - 497 488 mutex_init(&sbi->hpfs_mutex); 498 489 hpfs_lock(s); 499 490 ··· 682 679 bail1: 683 680 bail0: 684 681 hpfs_unlock(s); 685 - kfree(sbi->sb_bmp_dir); 686 - kfree(sbi->sb_cp_table); 687 - s->s_fs_info = NULL; 688 - kfree(sbi); 682 + free_sbi(sbi); 689 683 return -EINVAL; 690 684 } 691 685
+49 -17
fs/inode.c
··· 773 773 774 774 repeat: 775 775 hlist_for_each_entry(inode, head, i_hash) { 776 + if (inode->i_sb != sb) 777 + continue; 778 + if (!test(inode, data)) 779 + continue; 776 780 spin_lock(&inode->i_lock); 777 - if (inode->i_sb != sb) { 778 - spin_unlock(&inode->i_lock); 779 - continue; 780 - } 781 - if (!test(inode, data)) { 782 - spin_unlock(&inode->i_lock); 783 - continue; 784 - } 785 781 if (inode->i_state & (I_FREEING|I_WILL_FREE)) { 786 782 __wait_on_freeing_inode(inode); 787 783 goto repeat; ··· 800 804 801 805 repeat: 802 806 hlist_for_each_entry(inode, head, i_hash) { 807 + if (inode->i_ino != ino) 808 + continue; 809 + if (inode->i_sb != sb) 810 + continue; 803 811 spin_lock(&inode->i_lock); 804 - if (inode->i_ino != ino) { 805 - spin_unlock(&inode->i_lock); 806 - continue; 807 - } 808 - if (inode->i_sb != sb) { 809 - spin_unlock(&inode->i_lock); 810 - continue; 811 - } 812 812 if (inode->i_state & (I_FREEING|I_WILL_FREE)) { 813 813 __wait_on_freeing_inode(inode); 814 814 goto repeat; ··· 941 949 spin_unlock(&inode->i_lock); 942 950 } 943 951 EXPORT_SYMBOL(unlock_new_inode); 952 + 953 + /** 954 + * lock_two_nondirectories - take two i_mutexes on non-directory objects 955 + * @inode1: first inode to lock 956 + * @inode2: second inode to lock 957 + */ 958 + void lock_two_nondirectories(struct inode *inode1, struct inode *inode2) 959 + { 960 + WARN_ON_ONCE(S_ISDIR(inode1->i_mode)); 961 + if (inode1 == inode2 || !inode2) { 962 + mutex_lock(&inode1->i_mutex); 963 + return; 964 + } 965 + WARN_ON_ONCE(S_ISDIR(inode2->i_mode)); 966 + if (inode1 < inode2) { 967 + mutex_lock(&inode1->i_mutex); 968 + mutex_lock_nested(&inode2->i_mutex, I_MUTEX_NONDIR2); 969 + } else { 970 + mutex_lock(&inode2->i_mutex); 971 + mutex_lock_nested(&inode1->i_mutex, I_MUTEX_NONDIR2); 972 + } 973 + } 974 + EXPORT_SYMBOL(lock_two_nondirectories); 975 + 976 + /** 977 + * unlock_two_nondirectories - release locks from lock_two_nondirectories() 978 + * @inode1: first inode to unlock 979 + * 
@inode2: second inode to unlock 980 + */ 981 + void unlock_two_nondirectories(struct inode *inode1, struct inode *inode2) 982 + { 983 + mutex_unlock(&inode1->i_mutex); 984 + if (inode2 && inode2 != inode1) 985 + mutex_unlock(&inode2->i_mutex); 986 + } 987 + EXPORT_SYMBOL(unlock_two_nondirectories); 944 988 945 989 /** 946 990 * iget5_locked - obtain an inode from a mounted file system ··· 1603 1575 struct iattr newattrs; 1604 1576 1605 1577 newattrs.ia_valid = ATTR_FORCE | kill; 1606 - return notify_change(dentry, &newattrs); 1578 + /* 1579 + * Note we call this on write, so notify_change will not 1580 + * encounter any conflicting delegations: 1581 + */ 1582 + return notify_change(dentry, &newattrs, NULL); 1607 1583 } 1608 1584 1609 1585 int file_remove_suid(struct file *file)
-7
fs/internal.h
··· 9 9 * 2 of the License, or (at your option) any later version. 10 10 */ 11 11 12 - #include <linux/lglock.h> 13 - 14 12 struct super_block; 15 13 struct file_system_type; 16 14 struct linux_binprm; ··· 60 62 61 63 extern void __init mnt_init(void); 62 64 63 - extern struct lglock vfsmount_lock; 64 - 65 65 extern int __mnt_want_write(struct vfsmount *); 66 66 extern int __mnt_want_write_file(struct file *); 67 67 extern void __mnt_drop_write(struct vfsmount *); ··· 73 77 /* 74 78 * file_table.c 75 79 */ 76 - extern void file_sb_list_add(struct file *f, struct super_block *sb); 77 - extern void file_sb_list_del(struct file *f); 78 - extern void mark_files_ro(struct super_block *); 79 80 extern struct file *get_empty_filp(void); 80 81 81 82 /*
+2 -2
fs/ioctl.c
··· 37 37 { 38 38 int error = -ENOTTY; 39 39 40 - if (!filp->f_op || !filp->f_op->unlocked_ioctl) 40 + if (!filp->f_op->unlocked_ioctl) 41 41 goto out; 42 42 43 43 error = filp->f_op->unlocked_ioctl(filp, cmd, arg); ··· 501 501 502 502 /* Did FASYNC state change ? */ 503 503 if ((flag ^ filp->f_flags) & FASYNC) { 504 - if (filp->f_op && filp->f_op->fasync) 504 + if (filp->f_op->fasync) 505 505 /* fasync() adjusts filp->f_flags */ 506 506 error = filp->f_op->fasync(fd, filp, on); 507 507 else
+6 -6
fs/isofs/inode.c
··· 181 181 * Compute the hash for the isofs name corresponding to the dentry. 182 182 */ 183 183 static int 184 - isofs_hash_common(const struct dentry *dentry, struct qstr *qstr, int ms) 184 + isofs_hash_common(struct qstr *qstr, int ms) 185 185 { 186 186 const char *name; 187 187 int len; ··· 202 202 * Compute the hash for the isofs name corresponding to the dentry. 203 203 */ 204 204 static int 205 - isofs_hashi_common(const struct dentry *dentry, struct qstr *qstr, int ms) 205 + isofs_hashi_common(struct qstr *qstr, int ms) 206 206 { 207 207 const char *name; 208 208 int len; ··· 259 259 static int 260 260 isofs_hash(const struct dentry *dentry, struct qstr *qstr) 261 261 { 262 - return isofs_hash_common(dentry, qstr, 0); 262 + return isofs_hash_common(qstr, 0); 263 263 } 264 264 265 265 static int 266 266 isofs_hashi(const struct dentry *dentry, struct qstr *qstr) 267 267 { 268 - return isofs_hashi_common(dentry, qstr, 0); 268 + return isofs_hashi_common(qstr, 0); 269 269 } 270 270 271 271 static int ··· 286 286 static int 287 287 isofs_hash_ms(const struct dentry *dentry, struct qstr *qstr) 288 288 { 289 - return isofs_hash_common(dentry, qstr, 1); 289 + return isofs_hash_common(qstr, 1); 290 290 } 291 291 292 292 static int 293 293 isofs_hashi_ms(const struct dentry *dentry, struct qstr *qstr) 294 294 { 295 - return isofs_hashi_common(dentry, qstr, 1); 295 + return isofs_hashi_common(qstr, 1); 296 296 } 297 297 298 298 static int
+87 -35
fs/libfs.c
··· 10 10 #include <linux/vfs.h> 11 11 #include <linux/quotaops.h> 12 12 #include <linux/mutex.h> 13 + #include <linux/namei.h> 13 14 #include <linux/exportfs.h> 14 15 #include <linux/writeback.h> 15 16 #include <linux/buffer_head.h> /* sync_mapping_buffers */ ··· 32 31 stat->blocks = inode->i_mapping->nrpages << (PAGE_CACHE_SHIFT - 9); 33 32 return 0; 34 33 } 34 + EXPORT_SYMBOL(simple_getattr); 35 35 36 36 int simple_statfs(struct dentry *dentry, struct kstatfs *buf) 37 37 { ··· 41 39 buf->f_namelen = NAME_MAX; 42 40 return 0; 43 41 } 42 + EXPORT_SYMBOL(simple_statfs); 44 43 45 44 /* 46 45 * Retaining negative dentries for an in-memory filesystem just wastes ··· 69 66 d_add(dentry, NULL); 70 67 return NULL; 71 68 } 69 + EXPORT_SYMBOL(simple_lookup); 72 70 73 71 int dcache_dir_open(struct inode *inode, struct file *file) 74 72 { ··· 79 75 80 76 return file->private_data ? 0 : -ENOMEM; 81 77 } 78 + EXPORT_SYMBOL(dcache_dir_open); 82 79 83 80 int dcache_dir_close(struct inode *inode, struct file *file) 84 81 { 85 82 dput(file->private_data); 86 83 return 0; 87 84 } 85 + EXPORT_SYMBOL(dcache_dir_close); 88 86 89 87 loff_t dcache_dir_lseek(struct file *file, loff_t offset, int whence) 90 88 { ··· 129 123 mutex_unlock(&dentry->d_inode->i_mutex); 130 124 return offset; 131 125 } 126 + EXPORT_SYMBOL(dcache_dir_lseek); 132 127 133 128 /* Relationship between i_mode and the DT_xxx types */ 134 129 static inline unsigned char dt_type(struct inode *inode) ··· 179 172 spin_unlock(&dentry->d_lock); 180 173 return 0; 181 174 } 175 + EXPORT_SYMBOL(dcache_readdir); 182 176 183 177 ssize_t generic_read_dir(struct file *filp, char __user *buf, size_t siz, loff_t *ppos) 184 178 { 185 179 return -EISDIR; 186 180 } 181 + EXPORT_SYMBOL(generic_read_dir); 187 182 188 183 const struct file_operations simple_dir_operations = { 189 184 .open = dcache_dir_open, ··· 195 186 .iterate = dcache_readdir, 196 187 .fsync = noop_fsync, 197 188 }; 189 + EXPORT_SYMBOL(simple_dir_operations); 198 190 
199 191 const struct inode_operations simple_dir_inode_operations = { 200 192 .lookup = simple_lookup, 201 193 }; 194 + EXPORT_SYMBOL(simple_dir_inode_operations); 202 195 203 196 static const struct super_operations simple_super_operations = { 204 197 .statfs = simple_statfs, ··· 255 244 deactivate_locked_super(s); 256 245 return ERR_PTR(-ENOMEM); 257 246 } 247 + EXPORT_SYMBOL(mount_pseudo); 258 248 259 249 int simple_open(struct inode *inode, struct file *file) 260 250 { ··· 263 251 file->private_data = inode->i_private; 264 252 return 0; 265 253 } 254 + EXPORT_SYMBOL(simple_open); 266 255 267 256 int simple_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry) 268 257 { ··· 276 263 d_instantiate(dentry, inode); 277 264 return 0; 278 265 } 266 + EXPORT_SYMBOL(simple_link); 279 267 280 268 int simple_empty(struct dentry *dentry) 281 269 { ··· 297 283 spin_unlock(&dentry->d_lock); 298 284 return ret; 299 285 } 286 + EXPORT_SYMBOL(simple_empty); 300 287 301 288 int simple_unlink(struct inode *dir, struct dentry *dentry) 302 289 { ··· 308 293 dput(dentry); 309 294 return 0; 310 295 } 296 + EXPORT_SYMBOL(simple_unlink); 311 297 312 298 int simple_rmdir(struct inode *dir, struct dentry *dentry) 313 299 { ··· 320 304 drop_nlink(dir); 321 305 return 0; 322 306 } 307 + EXPORT_SYMBOL(simple_rmdir); 323 308 324 309 int simple_rename(struct inode *old_dir, struct dentry *old_dentry, 325 310 struct inode *new_dir, struct dentry *new_dentry) ··· 347 330 348 331 return 0; 349 332 } 333 + EXPORT_SYMBOL(simple_rename); 350 334 351 335 /** 352 336 * simple_setattr - setattr for simple filesystem ··· 388 370 unlock_page(page); 389 371 return 0; 390 372 } 373 + EXPORT_SYMBOL(simple_readpage); 391 374 392 375 int simple_write_begin(struct file *file, struct address_space *mapping, 393 376 loff_t pos, unsigned len, unsigned flags, ··· 412 393 } 413 394 return 0; 414 395 } 396 + EXPORT_SYMBOL(simple_write_begin); 415 397 416 398 /** 417 399 * simple_write_end - 
.write_end helper for non-block-device FSes ··· 464 444 465 445 return copied; 466 446 } 447 + EXPORT_SYMBOL(simple_write_end); 467 448 468 449 /* 469 450 * the inodes created here are not hashed. If you use iunique to generate ··· 533 512 dput(root); 534 513 return -ENOMEM; 535 514 } 515 + EXPORT_SYMBOL(simple_fill_super); 536 516 537 517 static DEFINE_SPINLOCK(pin_fs_lock); 538 518 ··· 556 534 mntput(mnt); 557 535 return 0; 558 536 } 537 + EXPORT_SYMBOL(simple_pin_fs); 559 538 560 539 void simple_release_fs(struct vfsmount **mount, int *count) 561 540 { ··· 568 545 spin_unlock(&pin_fs_lock); 569 546 mntput(mnt); 570 547 } 548 + EXPORT_SYMBOL(simple_release_fs); 571 549 572 550 /** 573 551 * simple_read_from_buffer - copy data from the buffer to user space ··· 603 579 *ppos = pos + count; 604 580 return count; 605 581 } 582 + EXPORT_SYMBOL(simple_read_from_buffer); 606 583 607 584 /** 608 585 * simple_write_to_buffer - copy data from user space to the buffer ··· 638 613 *ppos = pos + count; 639 614 return count; 640 615 } 616 + EXPORT_SYMBOL(simple_write_to_buffer); 641 617 642 618 /** 643 619 * memory_read_from_buffer - copy data from the buffer ··· 670 644 671 645 return count; 672 646 } 647 + EXPORT_SYMBOL(memory_read_from_buffer); 673 648 674 649 /* 675 650 * Transaction based IO. 
··· 692 665 smp_mb(); 693 666 ar->size = n; 694 667 } 668 + EXPORT_SYMBOL(simple_transaction_set); 695 669 696 670 char *simple_transaction_get(struct file *file, const char __user *buf, size_t size) 697 671 { ··· 724 696 725 697 return ar->data; 726 698 } 699 + EXPORT_SYMBOL(simple_transaction_get); 727 700 728 701 ssize_t simple_transaction_read(struct file *file, char __user *buf, size_t size, loff_t *pos) 729 702 { ··· 734 705 return 0; 735 706 return simple_read_from_buffer(buf, size, pos, ar->data, ar->size); 736 707 } 708 + EXPORT_SYMBOL(simple_transaction_read); 737 709 738 710 int simple_transaction_release(struct inode *inode, struct file *file) 739 711 { 740 712 free_page((unsigned long)file->private_data); 741 713 return 0; 742 714 } 715 + EXPORT_SYMBOL(simple_transaction_release); 743 716 744 717 /* Simple attribute files */ 745 718 ··· 777 746 778 747 return nonseekable_open(inode, file); 779 748 } 749 + EXPORT_SYMBOL_GPL(simple_attr_open); 780 750 781 751 int simple_attr_release(struct inode *inode, struct file *file) 782 752 { 783 753 kfree(file->private_data); 784 754 return 0; 785 755 } 756 + EXPORT_SYMBOL_GPL(simple_attr_release); /* GPL-only? This? Really? 
*/ 786 757 787 758 /* read from the buffer that is filled with the get function */ 788 759 ssize_t simple_attr_read(struct file *file, char __user *buf, ··· 820 787 mutex_unlock(&attr->mutex); 821 788 return ret; 822 789 } 790 + EXPORT_SYMBOL_GPL(simple_attr_read); 823 791 824 792 /* interpret the buffer as a number to call the set function with */ 825 793 ssize_t simple_attr_write(struct file *file, const char __user *buf, ··· 853 819 mutex_unlock(&attr->mutex); 854 820 return ret; 855 821 } 822 + EXPORT_SYMBOL_GPL(simple_attr_write); 856 823 857 824 /** 858 825 * generic_fh_to_dentry - generic helper for the fh_to_dentry export operation ··· 992 957 { 993 958 return 0; 994 959 } 995 - 996 - EXPORT_SYMBOL(dcache_dir_close); 997 - EXPORT_SYMBOL(dcache_dir_lseek); 998 - EXPORT_SYMBOL(dcache_dir_open); 999 - EXPORT_SYMBOL(dcache_readdir); 1000 - EXPORT_SYMBOL(generic_read_dir); 1001 - EXPORT_SYMBOL(mount_pseudo); 1002 - EXPORT_SYMBOL(simple_write_begin); 1003 - EXPORT_SYMBOL(simple_write_end); 1004 - EXPORT_SYMBOL(simple_dir_inode_operations); 1005 - EXPORT_SYMBOL(simple_dir_operations); 1006 - EXPORT_SYMBOL(simple_empty); 1007 - EXPORT_SYMBOL(simple_fill_super); 1008 - EXPORT_SYMBOL(simple_getattr); 1009 - EXPORT_SYMBOL(simple_open); 1010 - EXPORT_SYMBOL(simple_link); 1011 - EXPORT_SYMBOL(simple_lookup); 1012 - EXPORT_SYMBOL(simple_pin_fs); 1013 - EXPORT_SYMBOL(simple_readpage); 1014 - EXPORT_SYMBOL(simple_release_fs); 1015 - EXPORT_SYMBOL(simple_rename); 1016 - EXPORT_SYMBOL(simple_rmdir); 1017 - EXPORT_SYMBOL(simple_statfs); 1018 960 EXPORT_SYMBOL(noop_fsync); 1019 - EXPORT_SYMBOL(simple_unlink); 1020 - EXPORT_SYMBOL(simple_read_from_buffer); 1021 - EXPORT_SYMBOL(simple_write_to_buffer); 1022 - EXPORT_SYMBOL(memory_read_from_buffer); 1023 - EXPORT_SYMBOL(simple_transaction_set); 1024 - EXPORT_SYMBOL(simple_transaction_get); 1025 - EXPORT_SYMBOL(simple_transaction_read); 1026 - EXPORT_SYMBOL(simple_transaction_release); 1027 - EXPORT_SYMBOL_GPL(simple_attr_open); 
1028 - EXPORT_SYMBOL_GPL(simple_attr_release); 1029 - EXPORT_SYMBOL_GPL(simple_attr_read); 1030 - EXPORT_SYMBOL_GPL(simple_attr_write); 961 + 962 + void kfree_put_link(struct dentry *dentry, struct nameidata *nd, 963 + void *cookie) 964 + { 965 + char *s = nd_get_link(nd); 966 + if (!IS_ERR(s)) 967 + kfree(s); 968 + } 969 + EXPORT_SYMBOL(kfree_put_link); 970 + 971 + /* 972 + * nop .set_page_dirty method so that people can use .page_mkwrite on 973 + * anon inodes. 974 + */ 975 + static int anon_set_page_dirty(struct page *page) 976 + { 977 + return 0; 978 + }; 979 + 980 + /* 981 + * A single inode exists for all anon_inode files. Contrary to pipes, 982 + * anon_inode inodes have no associated per-instance data, so we need 983 + * only allocate one of them. 984 + */ 985 + struct inode *alloc_anon_inode(struct super_block *s) 986 + { 987 + static const struct address_space_operations anon_aops = { 988 + .set_page_dirty = anon_set_page_dirty, 989 + }; 990 + struct inode *inode = new_inode_pseudo(s); 991 + 992 + if (!inode) 993 + return ERR_PTR(-ENOMEM); 994 + 995 + inode->i_ino = get_next_ino(); 996 + inode->i_mapping->a_ops = &anon_aops; 997 + 998 + /* 999 + * Mark the inode dirty from the very beginning, 1000 + * that way it will never be moved to the dirty 1001 + * list because mark_inode_dirty() will think 1002 + * that it already _is_ on the dirty list. 1003 + */ 1004 + inode->i_state = I_DIRTY; 1005 + inode->i_mode = S_IRUSR | S_IWUSR; 1006 + inode->i_uid = current_fsuid(); 1007 + inode->i_gid = current_fsgid(); 1008 + inode->i_flags |= S_PRIVATE; 1009 + inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; 1010 + return inode; 1011 + } 1012 + EXPORT_SYMBOL(alloc_anon_inode);
+52 -17
fs/locks.c
··· 134 134 135 135 #define IS_POSIX(fl) (fl->fl_flags & FL_POSIX) 136 136 #define IS_FLOCK(fl) (fl->fl_flags & FL_FLOCK) 137 - #define IS_LEASE(fl) (fl->fl_flags & FL_LEASE) 137 + #define IS_LEASE(fl) (fl->fl_flags & (FL_LEASE|FL_DELEG)) 138 138 139 139 static bool lease_breaking(struct file_lock *fl) 140 140 { ··· 1292 1292 } 1293 1293 } 1294 1294 1295 + static bool leases_conflict(struct file_lock *lease, struct file_lock *breaker) 1296 + { 1297 + if ((breaker->fl_flags & FL_DELEG) && (lease->fl_flags & FL_LEASE)) 1298 + return false; 1299 + return locks_conflict(breaker, lease); 1300 + } 1301 + 1295 1302 /** 1296 1303 * __break_lease - revoke all outstanding leases on file 1297 1304 * @inode: the inode of the file to return 1298 - * @mode: the open mode (read or write) 1305 + * @mode: O_RDONLY: break only write leases; O_WRONLY or O_RDWR: 1306 + * break all leases 1307 + * @type: FL_LEASE: break leases and delegations; FL_DELEG: break 1308 + * only delegations 1299 1309 * 1300 1310 * break_lease (inlined for speed) has checked there already is at least 1301 1311 * some kind of lock (maybe a lease) on this file. Leases are broken on 1302 1312 * a call to open() or truncate(). This function can sleep unless you 1303 1313 * specified %O_NONBLOCK to your open(). 1304 1314 */ 1305 - int __break_lease(struct inode *inode, unsigned int mode) 1315 + int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) 1306 1316 { 1307 1317 int error = 0; 1308 1318 struct file_lock *new_fl, *flock; 1309 1319 struct file_lock *fl; 1310 1320 unsigned long break_time; 1311 1321 int i_have_this_lease = 0; 1322 + bool lease_conflict = false; 1312 1323 int want_write = (mode & O_ACCMODE) != O_RDONLY; 1313 1324 1314 1325 new_fl = lease_alloc(NULL, want_write ? 
F_WRLCK : F_RDLCK); 1315 1326 if (IS_ERR(new_fl)) 1316 1327 return PTR_ERR(new_fl); 1328 + new_fl->fl_flags = type; 1317 1329 1318 1330 spin_lock(&inode->i_lock); 1319 1331 ··· 1335 1323 if ((flock == NULL) || !IS_LEASE(flock)) 1336 1324 goto out; 1337 1325 1338 - if (!locks_conflict(flock, new_fl)) 1326 + for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) { 1327 + if (leases_conflict(fl, new_fl)) { 1328 + lease_conflict = true; 1329 + if (fl->fl_owner == current->files) 1330 + i_have_this_lease = 1; 1331 + } 1332 + } 1333 + if (!lease_conflict) 1339 1334 goto out; 1340 - 1341 - for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) 1342 - if (fl->fl_owner == current->files) 1343 - i_have_this_lease = 1; 1344 1335 1345 1336 break_time = 0; 1346 1337 if (lease_break_time > 0) { ··· 1353 1338 } 1354 1339 1355 1340 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) { 1341 + if (!leases_conflict(fl, new_fl)) 1342 + continue; 1356 1343 if (want_write) { 1357 1344 if (fl->fl_flags & FL_UNLOCK_PENDING) 1358 1345 continue; ··· 1396 1379 */ 1397 1380 for (flock = inode->i_flock; flock && IS_LEASE(flock); 1398 1381 flock = flock->fl_next) { 1399 - if (locks_conflict(new_fl, flock)) 1382 + if (leases_conflict(new_fl, flock)) 1400 1383 goto restart; 1401 1384 } 1402 1385 error = 0; ··· 1477 1460 struct file_lock *fl, **before, **my_before = NULL, *lease; 1478 1461 struct dentry *dentry = filp->f_path.dentry; 1479 1462 struct inode *inode = dentry->d_inode; 1463 + bool is_deleg = (*flp)->fl_flags & FL_DELEG; 1480 1464 int error; 1481 1465 1482 1466 lease = *flp; 1467 + /* 1468 + * In the delegation case we need mutual exclusion with 1469 + * a number of operations that take the i_mutex. We trylock 1470 + * because delegations are an optional optimization, and if 1471 + * there's some chance of a conflict--we'd rather not 1472 + * bother, maybe that's a sign this just isn't a good file to 1473 + * hand out a delegation on. 
1474 + */ 1475 + if (is_deleg && !mutex_trylock(&inode->i_mutex)) 1476 + return -EAGAIN; 1477 + 1478 + if (is_deleg && arg == F_WRLCK) { 1479 + /* Write delegations are not currently supported: */ 1480 + WARN_ON_ONCE(1); 1481 + return -EINVAL; 1482 + } 1483 1483 1484 1484 error = -EAGAIN; 1485 1485 if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0)) ··· 1548 1514 goto out; 1549 1515 1550 1516 locks_insert_lock(before, lease); 1551 - return 0; 1552 - 1517 + error = 0; 1553 1518 out: 1519 + if (is_deleg) 1520 + mutex_unlock(&inode->i_mutex); 1554 1521 return error; 1555 1522 } 1556 1523 ··· 1614 1579 1615 1580 static int __vfs_setlease(struct file *filp, long arg, struct file_lock **lease) 1616 1581 { 1617 - if (filp->f_op && filp->f_op->setlease) 1582 + if (filp->f_op->setlease) 1618 1583 return filp->f_op->setlease(filp, arg, lease); 1619 1584 else 1620 1585 return generic_setlease(filp, arg, lease); ··· 1806 1771 if (error) 1807 1772 goto out_free; 1808 1773 1809 - if (f.file->f_op && f.file->f_op->flock) 1774 + if (f.file->f_op->flock) 1810 1775 error = f.file->f_op->flock(f.file, 1811 1776 (can_sleep) ? 
F_SETLKW : F_SETLK, 1812 1777 lock); ··· 1832 1797 */ 1833 1798 int vfs_test_lock(struct file *filp, struct file_lock *fl) 1834 1799 { 1835 - if (filp->f_op && filp->f_op->lock) 1800 + if (filp->f_op->lock) 1836 1801 return filp->f_op->lock(filp, F_GETLK, fl); 1837 1802 posix_test_lock(filp, fl); 1838 1803 return 0; ··· 1944 1909 */ 1945 1910 int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl, struct file_lock *conf) 1946 1911 { 1947 - if (filp->f_op && filp->f_op->lock) 1912 + if (filp->f_op->lock) 1948 1913 return filp->f_op->lock(filp, cmd, fl); 1949 1914 else 1950 1915 return posix_lock_file(filp, fl, conf); ··· 2217 2182 if (!inode->i_flock) 2218 2183 return; 2219 2184 2220 - if (filp->f_op && filp->f_op->flock) { 2185 + if (filp->f_op->flock) { 2221 2186 struct file_lock fl = { 2222 2187 .fl_pid = current->tgid, 2223 2188 .fl_file = filp, ··· 2281 2246 */ 2282 2247 int vfs_cancel_lock(struct file *filp, struct file_lock *fl) 2283 2248 { 2284 - if (filp->f_op && filp->f_op->lock) 2249 + if (filp->f_op->lock) 2285 2250 return filp->f_op->lock(filp, F_CANCELLK, fl); 2286 2251 return 0; 2287 2252 }
+18 -2
fs/mount.h
··· 29 29 struct mount *mnt_parent; 30 30 struct dentry *mnt_mountpoint; 31 31 struct vfsmount mnt; 32 + struct rcu_head mnt_rcu; 32 33 #ifdef CONFIG_SMP 33 34 struct mnt_pcp __percpu *mnt_pcp; 34 35 #else ··· 56 55 int mnt_group_id; /* peer group identifier */ 57 56 int mnt_expiry_mark; /* true if marked for expiry */ 58 57 int mnt_pinned; 59 - int mnt_ghosts; 58 + struct path mnt_ex_mountpoint; 60 59 }; 61 60 62 61 #define MNT_NS_INTERNAL ERR_PTR(-EINVAL) /* distinct from any mnt_namespace */ ··· 77 76 return !IS_ERR_OR_NULL(real_mount(mnt)); 78 77 } 79 78 80 - extern struct mount *__lookup_mnt(struct vfsmount *, struct dentry *, int); 79 + extern struct mount *__lookup_mnt(struct vfsmount *, struct dentry *); 80 + extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *); 81 + 82 + extern bool legitimize_mnt(struct vfsmount *, unsigned); 81 83 82 84 static inline void get_mnt_ns(struct mnt_namespace *ns) 83 85 { 84 86 atomic_inc(&ns->count); 87 + } 88 + 89 + extern seqlock_t mount_lock; 90 + 91 + static inline void lock_mount_hash(void) 92 + { 93 + write_seqlock(&mount_lock); 94 + } 95 + 96 + static inline void unlock_mount_hash(void) 97 + { 98 + write_sequnlock(&mount_lock); 85 99 } 86 100 87 101 struct proc_mounts {
+196 -126
fs/namei.c
··· 482 482 * to restart the path walk from the beginning in ref-walk mode. 483 483 */ 484 484 485 - static inline void lock_rcu_walk(void) 486 - { 487 - br_read_lock(&vfsmount_lock); 488 - rcu_read_lock(); 489 - } 490 - 491 - static inline void unlock_rcu_walk(void) 492 - { 493 - rcu_read_unlock(); 494 - br_read_unlock(&vfsmount_lock); 495 - } 496 - 497 485 /** 498 486 * unlazy_walk - try to switch to ref-walk mode. 499 487 * @nd: nameidata pathwalk data ··· 500 512 BUG_ON(!(nd->flags & LOOKUP_RCU)); 501 513 502 514 /* 503 - * Get a reference to the parent first: we're 504 - * going to make "path_put(nd->path)" valid in 505 - * non-RCU context for "terminate_walk()". 506 - * 507 - * If this doesn't work, return immediately with 508 - * RCU walking still active (and then we will do 509 - * the RCU walk cleanup in terminate_walk()). 515 + * After legitimizing the bastards, terminate_walk() 516 + * will do the right thing for non-RCU mode, and all our 517 + * subsequent exit cases should rcu_read_unlock() 518 + * before returning. Do vfsmount first; if dentry 519 + * can't be legitimized, just set nd->path.dentry to NULL 520 + * and rely on dput(NULL) being a no-op. 510 521 */ 511 - if (!lockref_get_not_dead(&parent->d_lockref)) 522 + if (!legitimize_mnt(nd->path.mnt, nd->m_seq)) 512 523 return -ECHILD; 513 - 514 - /* 515 - * After the mntget(), we terminate_walk() will do 516 - * the right thing for non-RCU mode, and all our 517 - * subsequent exit cases should unlock_rcu_walk() 518 - * before returning. 
519 - */ 520 - mntget(nd->path.mnt); 521 524 nd->flags &= ~LOOKUP_RCU; 525 + 526 + if (!lockref_get_not_dead(&parent->d_lockref)) { 527 + nd->path.dentry = NULL; 528 + rcu_read_unlock(); 529 + return -ECHILD; 530 + } 522 531 523 532 /* 524 533 * For a negative lookup, the lookup sequence point is the parents ··· 551 566 spin_unlock(&fs->lock); 552 567 } 553 568 554 - unlock_rcu_walk(); 569 + rcu_read_unlock(); 555 570 return 0; 556 571 557 572 unlock_and_drop_dentry: 558 573 spin_unlock(&fs->lock); 559 574 drop_dentry: 560 - unlock_rcu_walk(); 575 + rcu_read_unlock(); 561 576 dput(dentry); 562 577 goto drop_root_mnt; 563 578 out: 564 - unlock_rcu_walk(); 579 + rcu_read_unlock(); 565 580 drop_root_mnt: 566 581 if (!(nd->flags & LOOKUP_ROOT)) 567 582 nd->root.mnt = NULL; ··· 593 608 if (!(nd->flags & LOOKUP_ROOT)) 594 609 nd->root.mnt = NULL; 595 610 611 + if (!legitimize_mnt(nd->path.mnt, nd->m_seq)) { 612 + rcu_read_unlock(); 613 + return -ECHILD; 614 + } 596 615 if (unlikely(!lockref_get_not_dead(&dentry->d_lockref))) { 597 - unlock_rcu_walk(); 616 + rcu_read_unlock(); 617 + mntput(nd->path.mnt); 598 618 return -ECHILD; 599 619 } 600 620 if (read_seqcount_retry(&dentry->d_seq, nd->seq)) { 601 - unlock_rcu_walk(); 621 + rcu_read_unlock(); 602 622 dput(dentry); 623 + mntput(nd->path.mnt); 603 624 return -ECHILD; 604 625 } 605 - mntget(nd->path.mnt); 606 - unlock_rcu_walk(); 626 + rcu_read_unlock(); 607 627 } 608 628 609 629 if (likely(!(nd->flags & LOOKUP_JUMPED))) ··· 899 909 struct mount *parent; 900 910 struct dentry *mountpoint; 901 911 902 - br_read_lock(&vfsmount_lock); 912 + read_seqlock_excl(&mount_lock); 903 913 parent = mnt->mnt_parent; 904 914 if (parent == mnt) { 905 - br_read_unlock(&vfsmount_lock); 915 + read_sequnlock_excl(&mount_lock); 906 916 return 0; 907 917 } 908 918 mntget(&parent->mnt); 909 919 mountpoint = dget(mnt->mnt_mountpoint); 910 - br_read_unlock(&vfsmount_lock); 920 + read_sequnlock_excl(&mount_lock); 911 921 dput(path->dentry); 912 
922 path->dentry = mountpoint; 913 923 mntput(path->mnt); ··· 1038 1048 1039 1049 /* Something is mounted on this dentry in another 1040 1050 * namespace and/or whatever was mounted there in this 1041 - * namespace got unmounted before we managed to get the 1042 - * vfsmount_lock */ 1051 + * namespace got unmounted before lookup_mnt() could 1052 + * get it */ 1043 1053 } 1044 1054 1045 1055 /* Handle an automount point */ ··· 1101 1111 if (!d_mountpoint(path->dentry)) 1102 1112 break; 1103 1113 1104 - mounted = __lookup_mnt(path->mnt, path->dentry, 1); 1114 + mounted = __lookup_mnt(path->mnt, path->dentry); 1105 1115 if (!mounted) 1106 1116 break; 1107 1117 path->mnt = &mounted->mnt; ··· 1122 1132 { 1123 1133 while (d_mountpoint(nd->path.dentry)) { 1124 1134 struct mount *mounted; 1125 - mounted = __lookup_mnt(nd->path.mnt, nd->path.dentry, 1); 1135 + mounted = __lookup_mnt(nd->path.mnt, nd->path.dentry); 1126 1136 if (!mounted) 1127 1137 break; 1128 1138 nd->path.mnt = &mounted->mnt; ··· 1164 1174 nd->flags &= ~LOOKUP_RCU; 1165 1175 if (!(nd->flags & LOOKUP_ROOT)) 1166 1176 nd->root.mnt = NULL; 1167 - unlock_rcu_walk(); 1177 + rcu_read_unlock(); 1168 1178 return -ECHILD; 1169 1179 } 1170 1180 ··· 1298 1308 } 1299 1309 1300 1310 /* 1301 - * Call i_op->lookup on the dentry. The dentry must be negative but may be 1302 - * hashed if it was pouplated with DCACHE_NEED_LOOKUP. 1311 + * Call i_op->lookup on the dentry. The dentry must be negative and 1312 + * unhashed. 1303 1313 * 1304 1314 * dir->d_inode->i_mutex must be held 1305 1315 */ ··· 1491 1501 nd->flags &= ~LOOKUP_RCU; 1492 1502 if (!(nd->flags & LOOKUP_ROOT)) 1493 1503 nd->root.mnt = NULL; 1494 - unlock_rcu_walk(); 1504 + rcu_read_unlock(); 1495 1505 } 1496 1506 } 1497 1507 ··· 1501 1511 * so we keep a cache of "no, this doesn't need follow_link" 1502 1512 * for the common case. 
1503 1513 */ 1504 - static inline int should_follow_link(struct inode *inode, int follow) 1514 + static inline int should_follow_link(struct dentry *dentry, int follow) 1505 1515 { 1506 - if (unlikely(!(inode->i_opflags & IOP_NOFOLLOW))) { 1507 - if (likely(inode->i_op->follow_link)) 1508 - return follow; 1509 - 1510 - /* This gets set once for the inode lifetime */ 1511 - spin_lock(&inode->i_lock); 1512 - inode->i_opflags |= IOP_NOFOLLOW; 1513 - spin_unlock(&inode->i_lock); 1514 - } 1515 - return 0; 1516 + return unlikely(d_is_symlink(dentry)) ? follow : 0; 1516 1517 } 1517 1518 1518 1519 static inline int walk_component(struct nameidata *nd, struct path *path, ··· 1533 1552 if (!inode) 1534 1553 goto out_path_put; 1535 1554 1536 - if (should_follow_link(inode, follow)) { 1555 + if (should_follow_link(path->dentry, follow)) { 1537 1556 if (nd->flags & LOOKUP_RCU) { 1538 1557 if (unlikely(unlazy_walk(nd, path->dentry))) { 1539 1558 err = -ECHILD; ··· 1589 1608 current->link_count--; 1590 1609 nd->depth--; 1591 1610 return res; 1592 - } 1593 - 1594 - /* 1595 - * We really don't want to look at inode->i_op->lookup 1596 - * when we don't have to. So we keep a cache bit in 1597 - * the inode ->i_opflags field that says "yes, we can 1598 - * do lookup on this inode". 
1599 - */ 1600 - static inline int can_lookup(struct inode *inode) 1601 - { 1602 - if (likely(inode->i_opflags & IOP_LOOKUP)) 1603 - return 1; 1604 - if (likely(!inode->i_op->lookup)) 1605 - return 0; 1606 - 1607 - /* We do this once for the lifetime of the inode */ 1608 - spin_lock(&inode->i_lock); 1609 - inode->i_opflags |= IOP_LOOKUP; 1610 - spin_unlock(&inode->i_lock); 1611 - return 1; 1612 1611 } 1613 1612 1614 1613 /* ··· 1794 1833 if (err) 1795 1834 return err; 1796 1835 } 1797 - if (!can_lookup(nd->inode)) { 1836 + if (!d_is_directory(nd->path.dentry)) { 1798 1837 err = -ENOTDIR; 1799 1838 break; 1800 1839 } ··· 1812 1851 nd->flags = flags | LOOKUP_JUMPED; 1813 1852 nd->depth = 0; 1814 1853 if (flags & LOOKUP_ROOT) { 1815 - struct inode *inode = nd->root.dentry->d_inode; 1854 + struct dentry *root = nd->root.dentry; 1855 + struct inode *inode = root->d_inode; 1816 1856 if (*name) { 1817 - if (!can_lookup(inode)) 1857 + if (!d_is_directory(root)) 1818 1858 return -ENOTDIR; 1819 1859 retval = inode_permission(inode, MAY_EXEC); 1820 1860 if (retval) ··· 1824 1862 nd->path = nd->root; 1825 1863 nd->inode = inode; 1826 1864 if (flags & LOOKUP_RCU) { 1827 - lock_rcu_walk(); 1865 + rcu_read_lock(); 1828 1866 nd->seq = __read_seqcount_begin(&nd->path.dentry->d_seq); 1867 + nd->m_seq = read_seqbegin(&mount_lock); 1829 1868 } else { 1830 1869 path_get(&nd->path); 1831 1870 } ··· 1835 1872 1836 1873 nd->root.mnt = NULL; 1837 1874 1875 + nd->m_seq = read_seqbegin(&mount_lock); 1838 1876 if (*name=='/') { 1839 1877 if (flags & LOOKUP_RCU) { 1840 - lock_rcu_walk(); 1878 + rcu_read_lock(); 1841 1879 set_root_rcu(nd); 1842 1880 } else { 1843 1881 set_root(nd); ··· 1850 1886 struct fs_struct *fs = current->fs; 1851 1887 unsigned seq; 1852 1888 1853 - lock_rcu_walk(); 1889 + rcu_read_lock(); 1854 1890 1855 1891 do { 1856 1892 seq = read_seqcount_begin(&fs->seq); ··· 1871 1907 dentry = f.file->f_path.dentry; 1872 1908 1873 1909 if (*name) { 1874 - if 
(!can_lookup(dentry->d_inode)) { 1910 + if (!d_is_directory(dentry)) { 1875 1911 fdput(f); 1876 1912 return -ENOTDIR; 1877 1913 } ··· 1882 1918 if (f.need_put) 1883 1919 *fp = f.file; 1884 1920 nd->seq = __read_seqcount_begin(&nd->path.dentry->d_seq); 1885 - lock_rcu_walk(); 1921 + rcu_read_lock(); 1886 1922 } else { 1887 1923 path_get(&nd->path); 1888 1924 fdput(f); ··· 1953 1989 err = complete_walk(nd); 1954 1990 1955 1991 if (!err && nd->flags & LOOKUP_DIRECTORY) { 1956 - if (!can_lookup(nd->inode)) { 1992 + if (!d_is_directory(nd->path.dentry)) { 1957 1993 path_put(&nd->path); 1958 1994 err = -ENOTDIR; 1959 1995 } ··· 2245 2281 } 2246 2282 path->dentry = dentry; 2247 2283 path->mnt = mntget(nd->path.mnt); 2248 - if (should_follow_link(dentry->d_inode, nd->flags & LOOKUP_FOLLOW)) 2284 + if (should_follow_link(dentry, nd->flags & LOOKUP_FOLLOW)) 2249 2285 return 1; 2250 2286 follow_mount(path); 2251 2287 error = 0; ··· 2390 2426 * 10. We don't allow removal of NFS sillyrenamed files; it's handled by 2391 2427 * nfs_async_unlink(). 
2392 2428 */ 2393 - static int may_delete(struct inode *dir,struct dentry *victim,int isdir) 2429 + static int may_delete(struct inode *dir, struct dentry *victim, bool isdir) 2394 2430 { 2431 + struct inode *inode = victim->d_inode; 2395 2432 int error; 2396 2433 2397 - if (!victim->d_inode) 2434 + if (d_is_negative(victim)) 2398 2435 return -ENOENT; 2436 + BUG_ON(!inode); 2399 2437 2400 2438 BUG_ON(victim->d_parent->d_inode != dir); 2401 2439 audit_inode_child(dir, victim, AUDIT_TYPE_CHILD_DELETE); ··· 2407 2441 return error; 2408 2442 if (IS_APPEND(dir)) 2409 2443 return -EPERM; 2410 - if (check_sticky(dir, victim->d_inode)||IS_APPEND(victim->d_inode)|| 2411 - IS_IMMUTABLE(victim->d_inode) || IS_SWAPFILE(victim->d_inode)) 2444 + 2445 + if (check_sticky(dir, inode) || IS_APPEND(inode) || 2446 + IS_IMMUTABLE(inode) || IS_SWAPFILE(inode)) 2412 2447 return -EPERM; 2413 2448 if (isdir) { 2414 - if (!S_ISDIR(victim->d_inode->i_mode)) 2449 + if (!d_is_directory(victim) && !d_is_autodir(victim)) 2415 2450 return -ENOTDIR; 2416 2451 if (IS_ROOT(victim)) 2417 2452 return -EBUSY; 2418 - } else if (S_ISDIR(victim->d_inode->i_mode)) 2453 + } else if (d_is_directory(victim) || d_is_autodir(victim)) 2419 2454 return -EISDIR; 2420 2455 if (IS_DEADDIR(dir)) 2421 2456 return -ENOENT; ··· 2950 2983 /* 2951 2984 * create/update audit record if it already exists. 
2952 2985 */ 2953 - if (path->dentry->d_inode) 2986 + if (d_is_positive(path->dentry)) 2954 2987 audit_inode(name, path->dentry, 0); 2955 2988 2956 2989 /* ··· 2979 3012 finish_lookup: 2980 3013 /* we _can_ be in RCU mode here */ 2981 3014 error = -ENOENT; 2982 - if (!inode) { 3015 + if (d_is_negative(path->dentry)) { 2983 3016 path_to_nameidata(path, nd); 2984 3017 goto out; 2985 3018 } 2986 3019 2987 - if (should_follow_link(inode, !symlink_ok)) { 3020 + if (should_follow_link(path->dentry, !symlink_ok)) { 2988 3021 if (nd->flags & LOOKUP_RCU) { 2989 3022 if (unlikely(unlazy_walk(nd, path->dentry))) { 2990 3023 error = -ECHILD; ··· 3013 3046 } 3014 3047 audit_inode(name, nd->path.dentry, 0); 3015 3048 error = -EISDIR; 3016 - if ((open_flag & O_CREAT) && S_ISDIR(nd->inode->i_mode)) 3049 + if ((open_flag & O_CREAT) && 3050 + (d_is_directory(nd->path.dentry) || d_is_autodir(nd->path.dentry))) 3017 3051 goto out; 3018 3052 error = -ENOTDIR; 3019 - if ((nd->flags & LOOKUP_DIRECTORY) && !can_lookup(nd->inode)) 3053 + if ((nd->flags & LOOKUP_DIRECTORY) && !d_is_directory(nd->path.dentry)) 3020 3054 goto out; 3021 3055 if (!S_ISREG(nd->inode->i_mode)) 3022 3056 will_truncate = false; ··· 3243 3275 nd.root.mnt = mnt; 3244 3276 nd.root.dentry = dentry; 3245 3277 3246 - if (dentry->d_inode->i_op->follow_link && op->intent & LOOKUP_OPEN) 3278 + if (d_is_symlink(dentry) && op->intent & LOOKUP_OPEN) 3247 3279 return ERR_PTR(-ELOOP); 3248 3280 3249 3281 file = path_openat(-1, &filename, &nd, op, flags | LOOKUP_RCU); ··· 3293 3325 goto unlock; 3294 3326 3295 3327 error = -EEXIST; 3296 - if (dentry->d_inode) 3328 + if (d_is_positive(dentry)) 3297 3329 goto fail; 3330 + 3298 3331 /* 3299 3332 * Special case - lookup gave negative, but... 
we had foo/bar/ 3300 3333 * From the vfs_mknod() POV we just have a negative dentry - ··· 3616 3647 return do_rmdir(AT_FDCWD, pathname); 3617 3648 } 3618 3649 3619 - int vfs_unlink(struct inode *dir, struct dentry *dentry) 3650 + /** 3651 + * vfs_unlink - unlink a filesystem object 3652 + * @dir: parent directory 3653 + * @dentry: victim 3654 + * @delegated_inode: returns victim inode, if the inode is delegated. 3655 + * 3656 + * The caller must hold dir->i_mutex. 3657 + * 3658 + * If vfs_unlink discovers a delegation, it will return -EWOULDBLOCK and 3659 + * return a reference to the inode in delegated_inode. The caller 3660 + * should then break the delegation on that inode and retry. Because 3661 + * breaking a delegation may take a long time, the caller should drop 3662 + * dir->i_mutex before doing so. 3663 + * 3664 + * Alternatively, a caller may pass NULL for delegated_inode. This may 3665 + * be appropriate for callers that expect the underlying filesystem not 3666 + * to be NFS exported. 3667 + */ 3668 + int vfs_unlink(struct inode *dir, struct dentry *dentry, struct inode **delegated_inode) 3620 3669 { 3670 + struct inode *target = dentry->d_inode; 3621 3671 int error = may_delete(dir, dentry, 0); 3622 3672 3623 3673 if (error) ··· 3645 3657 if (!dir->i_op->unlink) 3646 3658 return -EPERM; 3647 3659 3648 - mutex_lock(&dentry->d_inode->i_mutex); 3660 + mutex_lock(&target->i_mutex); 3649 3661 if (d_mountpoint(dentry)) 3650 3662 error = -EBUSY; 3651 3663 else { 3652 3664 error = security_inode_unlink(dir, dentry); 3653 3665 if (!error) { 3666 + error = try_break_deleg(target, delegated_inode); 3667 + if (error) 3668 + goto out; 3654 3669 error = dir->i_op->unlink(dir, dentry); 3655 3670 if (!error) 3656 3671 dont_mount(dentry); 3657 3672 } 3658 3673 } 3659 - mutex_unlock(&dentry->d_inode->i_mutex); 3674 + out: 3675 + mutex_unlock(&target->i_mutex); 3660 3676 3661 3677 /* We don't d_delete() NFS sillyrenamed files--they still exist. 
*/ 3662 3678 if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) { 3663 - fsnotify_link_count(dentry->d_inode); 3679 + fsnotify_link_count(target); 3664 3680 d_delete(dentry); 3665 3681 } 3666 3682 ··· 3684 3692 struct dentry *dentry; 3685 3693 struct nameidata nd; 3686 3694 struct inode *inode = NULL; 3695 + struct inode *delegated_inode = NULL; 3687 3696 unsigned int lookup_flags = 0; 3688 3697 retry: 3689 3698 name = user_path_parent(dfd, pathname, &nd, lookup_flags); ··· 3699 3706 error = mnt_want_write(nd.path.mnt); 3700 3707 if (error) 3701 3708 goto exit1; 3702 - 3709 + retry_deleg: 3703 3710 mutex_lock_nested(&nd.path.dentry->d_inode->i_mutex, I_MUTEX_PARENT); 3704 3711 dentry = lookup_hash(&nd); 3705 3712 error = PTR_ERR(dentry); ··· 3708 3715 if (nd.last.name[nd.last.len]) 3709 3716 goto slashes; 3710 3717 inode = dentry->d_inode; 3711 - if (!inode) 3718 + if (d_is_negative(dentry)) 3712 3719 goto slashes; 3713 3720 ihold(inode); 3714 3721 error = security_path_unlink(&nd.path, dentry); 3715 3722 if (error) 3716 3723 goto exit2; 3717 - error = vfs_unlink(nd.path.dentry->d_inode, dentry); 3724 + error = vfs_unlink(nd.path.dentry->d_inode, dentry, &delegated_inode); 3718 3725 exit2: 3719 3726 dput(dentry); 3720 3727 } 3721 3728 mutex_unlock(&nd.path.dentry->d_inode->i_mutex); 3722 3729 if (inode) 3723 3730 iput(inode); /* truncate the inode here */ 3731 + inode = NULL; 3732 + if (delegated_inode) { 3733 + error = break_deleg_wait(&delegated_inode); 3734 + if (!error) 3735 + goto retry_deleg; 3736 + } 3724 3737 mnt_drop_write(nd.path.mnt); 3725 3738 exit1: 3726 3739 path_put(&nd.path); ··· 3739 3740 return error; 3740 3741 3741 3742 slashes: 3742 - error = !dentry->d_inode ? -ENOENT : 3743 - S_ISDIR(dentry->d_inode->i_mode) ? 
-EISDIR : -ENOTDIR; 3743 + if (d_is_negative(dentry)) 3744 + error = -ENOENT; 3745 + else if (d_is_directory(dentry) || d_is_autodir(dentry)) 3746 + error = -EISDIR; 3747 + else 3748 + error = -ENOTDIR; 3744 3749 goto exit2; 3745 3750 } 3746 3751 ··· 3820 3817 return sys_symlinkat(oldname, AT_FDCWD, newname); 3821 3818 } 3822 3819 3823 - int vfs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *new_dentry) 3820 + /** 3821 + * vfs_link - create a new link 3822 + * @old_dentry: object to be linked 3823 + * @dir: new parent 3824 + * @new_dentry: where to create the new link 3825 + * @delegated_inode: returns inode needing a delegation break 3826 + * 3827 + * The caller must hold dir->i_mutex 3828 + * 3829 + * If vfs_link discovers a delegation on the to-be-linked file in need 3830 + * of breaking, it will return -EWOULDBLOCK and return a reference to the 3831 + * inode in delegated_inode. The caller should then break the delegation 3832 + * and retry. Because breaking a delegation may take a long time, the 3833 + * caller should drop the i_mutex before doing so. 3834 + * 3835 + * Alternatively, a caller may pass NULL for delegated_inode. This may 3836 + * be appropriate for callers that expect the underlying filesystem not 3837 + * to be NFS exported. 
3838 + */ 3839 + int vfs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *new_dentry, struct inode **delegated_inode) 3824 3840 { 3825 3841 struct inode *inode = old_dentry->d_inode; 3826 3842 unsigned max_links = dir->i_sb->s_max_links; ··· 3875 3853 error = -ENOENT; 3876 3854 else if (max_links && inode->i_nlink >= max_links) 3877 3855 error = -EMLINK; 3878 - else 3879 - error = dir->i_op->link(old_dentry, dir, new_dentry); 3856 + else { 3857 + error = try_break_deleg(inode, delegated_inode); 3858 + if (!error) 3859 + error = dir->i_op->link(old_dentry, dir, new_dentry); 3860 + } 3880 3861 3881 3862 if (!error && (inode->i_state & I_LINKABLE)) { 3882 3863 spin_lock(&inode->i_lock); ··· 3906 3881 { 3907 3882 struct dentry *new_dentry; 3908 3883 struct path old_path, new_path; 3884 + struct inode *delegated_inode = NULL; 3909 3885 int how = 0; 3910 3886 int error; 3911 3887 ··· 3945 3919 error = security_path_link(old_path.dentry, &new_path, new_dentry); 3946 3920 if (error) 3947 3921 goto out_dput; 3948 - error = vfs_link(old_path.dentry, new_path.dentry->d_inode, new_dentry); 3922 + error = vfs_link(old_path.dentry, new_path.dentry->d_inode, new_dentry, &delegated_inode); 3949 3923 out_dput: 3950 3924 done_path_create(&new_path, new_dentry); 3925 + if (delegated_inode) { 3926 + error = break_deleg_wait(&delegated_inode); 3927 + if (!error) 3928 + goto retry; 3929 + } 3951 3930 if (retry_estale(error, how)) { 3952 3931 how |= LOOKUP_REVAL; 3953 3932 goto retry; ··· 3977 3946 * That's where 4.4 screws up. Current fix: serialization on 3978 3947 * sb->s_vfs_rename_mutex. We might be more accurate, but that's another 3979 3948 * story. 3980 - * c) we have to lock _three_ objects - parents and victim (if it exists). 3949 + * c) we have to lock _four_ objects - parents and victim (if it exists), 3950 + * and source (if it is not a directory). 
3981 3951 * And that - after we got ->i_mutex on parents (until then we don't know 3982 3952 * whether the target exists). Solution: try to be smart with locking 3983 3953 * order for inodes. We rely on the fact that tree topology may change ··· 4051 4019 } 4052 4020 4053 4021 static int vfs_rename_other(struct inode *old_dir, struct dentry *old_dentry, 4054 - struct inode *new_dir, struct dentry *new_dentry) 4022 + struct inode *new_dir, struct dentry *new_dentry, 4023 + struct inode **delegated_inode) 4055 4024 { 4056 4025 struct inode *target = new_dentry->d_inode; 4026 + struct inode *source = old_dentry->d_inode; 4057 4027 int error; 4058 4028 4059 4029 error = security_inode_rename(old_dir, old_dentry, new_dir, new_dentry); ··· 4063 4029 return error; 4064 4030 4065 4031 dget(new_dentry); 4066 - if (target) 4067 - mutex_lock(&target->i_mutex); 4032 + lock_two_nondirectories(source, target); 4068 4033 4069 4034 error = -EBUSY; 4070 4035 if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry)) 4071 4036 goto out; 4072 4037 4038 + error = try_break_deleg(source, delegated_inode); 4039 + if (error) 4040 + goto out; 4041 + if (target) { 4042 + error = try_break_deleg(target, delegated_inode); 4043 + if (error) 4044 + goto out; 4045 + } 4073 4046 error = old_dir->i_op->rename(old_dir, old_dentry, new_dir, new_dentry); 4074 4047 if (error) 4075 4048 goto out; ··· 4086 4045 if (!(old_dir->i_sb->s_type->fs_flags & FS_RENAME_DOES_D_MOVE)) 4087 4046 d_move(old_dentry, new_dentry); 4088 4047 out: 4089 - if (target) 4090 - mutex_unlock(&target->i_mutex); 4048 + unlock_two_nondirectories(source, target); 4091 4049 dput(new_dentry); 4092 4050 return error; 4093 4051 } 4094 4052 4053 + /** 4054 + * vfs_rename - rename a filesystem object 4055 + * @old_dir: parent of source 4056 + * @old_dentry: source 4057 + * @new_dir: parent of destination 4058 + * @new_dentry: destination 4059 + * @delegated_inode: returns an inode needing a delegation break 4060 + * 4061 + * The caller 
must hold multiple mutexes--see lock_rename()). 4062 + * 4063 + * If vfs_rename discovers a delegation in need of breaking at either 4064 + * the source or destination, it will return -EWOULDBLOCK and return a 4065 + * reference to the inode in delegated_inode. The caller should then 4066 + * break the delegation and retry. Because breaking a delegation may 4067 + * take a long time, the caller should drop all locks before doing 4068 + * so. 4069 + * 4070 + * Alternatively, a caller may pass NULL for delegated_inode. This may 4071 + * be appropriate for callers that expect the underlying filesystem not 4072 + * to be NFS exported. 4073 + */ 4095 4074 int vfs_rename(struct inode *old_dir, struct dentry *old_dentry, 4096 - struct inode *new_dir, struct dentry *new_dentry) 4075 + struct inode *new_dir, struct dentry *new_dentry, 4076 + struct inode **delegated_inode) 4097 4077 { 4098 4078 int error; 4099 - int is_dir = S_ISDIR(old_dentry->d_inode->i_mode); 4079 + int is_dir = d_is_directory(old_dentry) || d_is_autodir(old_dentry); 4100 4080 const unsigned char *old_name; 4101 4081 4102 4082 if (old_dentry->d_inode == new_dentry->d_inode) ··· 4142 4080 if (is_dir) 4143 4081 error = vfs_rename_dir(old_dir,old_dentry,new_dir,new_dentry); 4144 4082 else 4145 - error = vfs_rename_other(old_dir,old_dentry,new_dir,new_dentry); 4083 + error = vfs_rename_other(old_dir,old_dentry,new_dir,new_dentry,delegated_inode); 4146 4084 if (!error) 4147 4085 fsnotify_move(old_dir, new_dir, old_name, is_dir, 4148 4086 new_dentry->d_inode, old_dentry); ··· 4158 4096 struct dentry *old_dentry, *new_dentry; 4159 4097 struct dentry *trap; 4160 4098 struct nameidata oldnd, newnd; 4099 + struct inode *delegated_inode = NULL; 4161 4100 struct filename *from; 4162 4101 struct filename *to; 4163 4102 unsigned int lookup_flags = 0; ··· 4198 4135 newnd.flags &= ~LOOKUP_PARENT; 4199 4136 newnd.flags |= LOOKUP_RENAME_TARGET; 4200 4137 4138 + retry_deleg: 4201 4139 trap = lock_rename(new_dir, old_dir); 
4202 4140 4203 4141 old_dentry = lookup_hash(&oldnd); ··· 4207 4143 goto exit3; 4208 4144 /* source must exist */ 4209 4145 error = -ENOENT; 4210 - if (!old_dentry->d_inode) 4146 + if (d_is_negative(old_dentry)) 4211 4147 goto exit4; 4212 4148 /* unless the source is a directory trailing slashes give -ENOTDIR */ 4213 - if (!S_ISDIR(old_dentry->d_inode->i_mode)) { 4149 + if (!d_is_directory(old_dentry) && !d_is_autodir(old_dentry)) { 4214 4150 error = -ENOTDIR; 4215 4151 if (oldnd.last.name[oldnd.last.len]) 4216 4152 goto exit4; ··· 4235 4171 if (error) 4236 4172 goto exit5; 4237 4173 error = vfs_rename(old_dir->d_inode, old_dentry, 4238 - new_dir->d_inode, new_dentry); 4174 + new_dir->d_inode, new_dentry, 4175 + &delegated_inode); 4239 4176 exit5: 4240 4177 dput(new_dentry); 4241 4178 exit4: 4242 4179 dput(old_dentry); 4243 4180 exit3: 4244 4181 unlock_rename(new_dir, old_dir); 4182 + if (delegated_inode) { 4183 + error = break_deleg_wait(&delegated_inode); 4184 + if (!error) 4185 + goto retry_deleg; 4186 + } 4245 4187 mnt_drop_write(oldnd.path.mnt); 4246 4188 exit2: 4247 4189 if (retry_estale(error, lookup_flags))
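The new kernel-doc comments on vfs_unlink(), vfs_link() and vfs_rename() above all describe the same caller contract: when the helper finds an NFS delegation in need of breaking it returns -EWOULDBLOCK and hands back a referenced inode in *delegated_inode; the caller drops its locks, breaks the delegation, and redoes the whole operation (the retry_deleg: labels in do_unlinkat() and sys_renameat above). A minimal user-space sketch of that retry shape — the toy_* names, the struct, and the helpers are illustrative stand-ins, not kernel API:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Toy model of the delegation-breaking contract described in the
 * kernel-doc comments above.  Everything here is a stand-in: a real
 * caller holds i_mutex and the filesystem uses try_break_deleg() /
 * break_deleg_wait() on real inodes. */
struct toy_inode {
    int delegated;   /* an outstanding lease that must be broken first */
    int nlink;
};

/* Analogue of vfs_unlink(): refuses to act while a delegation exists,
 * handing the object back so the caller can break it and retry. */
static int toy_unlink(struct toy_inode *inode, struct toy_inode **delegated)
{
    if (inode->delegated) {
        *delegated = inode;      /* caller must break this and retry */
        return -EWOULDBLOCK;
    }
    inode->nlink--;
    return 0;
}

/* Analogue of break_deleg_wait(): waits until the lease is gone and
 * drops the reference it was handed. */
static void toy_break_deleg(struct toy_inode **delegated)
{
    (*delegated)->delegated = 0;
    *delegated = NULL;
}

/* Mirrors do_unlinkat()'s retry_deleg: loop — conceptually all locks
 * are dropped before waiting, then the operation is redone from scratch. */
static int toy_do_unlink(struct toy_inode *inode)
{
    struct toy_inode *delegated = NULL;
    int err;
retry_deleg:
    err = toy_unlink(inode, &delegated);
    if (err == -EWOULDBLOCK && delegated) {
        toy_break_deleg(&delegated);
        goto retry_deleg;
    }
    return err;
}
```

On a delegated object the first pass fails with -EWOULDBLOCK, the lease is broken, and the retry succeeds; with no delegation the first pass succeeds directly. Callers that pass NULL for delegated_inode simply see the -EWOULDBLOCK and give up, which the comments note is acceptable when the filesystem is not NFS-exported.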
+200 -202
fs/namespace.c
··· 39 39 static struct list_head *mount_hashtable __read_mostly; 40 40 static struct list_head *mountpoint_hashtable __read_mostly; 41 41 static struct kmem_cache *mnt_cache __read_mostly; 42 - static struct rw_semaphore namespace_sem; 42 + static DECLARE_RWSEM(namespace_sem); 43 43 44 44 /* /sys/fs */ 45 45 struct kobject *fs_kobj; ··· 53 53 * It should be taken for write in all cases where the vfsmount 54 54 * tree or hash is modified or when a vfsmount structure is modified. 55 55 */ 56 - DEFINE_BRLOCK(vfsmount_lock); 56 + __cacheline_aligned_in_smp DEFINE_SEQLOCK(mount_lock); 57 57 58 58 static inline unsigned long hash(struct vfsmount *mnt, struct dentry *dentry) 59 59 { ··· 62 62 tmp = tmp + (tmp >> HASH_SHIFT); 63 63 return tmp & (HASH_SIZE - 1); 64 64 } 65 - 66 - #define MNT_WRITER_UNDERFLOW_LIMIT -(1<<16) 67 65 68 66 /* 69 67 * allocation is serialized by namespace_sem, but we need the spinlock to ··· 456 458 { 457 459 int ret = 0; 458 460 459 - br_write_lock(&vfsmount_lock); 461 + lock_mount_hash(); 460 462 mnt->mnt.mnt_flags |= MNT_WRITE_HOLD; 461 463 /* 462 464 * After storing MNT_WRITE_HOLD, we'll read the counters. 
This store ··· 490 492 */ 491 493 smp_wmb(); 492 494 mnt->mnt.mnt_flags &= ~MNT_WRITE_HOLD; 493 - br_write_unlock(&vfsmount_lock); 495 + unlock_mount_hash(); 494 496 return ret; 495 497 } 496 498 497 499 static void __mnt_unmake_readonly(struct mount *mnt) 498 500 { 499 - br_write_lock(&vfsmount_lock); 501 + lock_mount_hash(); 500 502 mnt->mnt.mnt_flags &= ~MNT_READONLY; 501 - br_write_unlock(&vfsmount_lock); 503 + unlock_mount_hash(); 502 504 } 503 505 504 506 int sb_prepare_remount_readonly(struct super_block *sb) ··· 510 512 if (atomic_long_read(&sb->s_remove_count)) 511 513 return -EBUSY; 512 514 513 - br_write_lock(&vfsmount_lock); 515 + lock_mount_hash(); 514 516 list_for_each_entry(mnt, &sb->s_mounts, mnt_instance) { 515 517 if (!(mnt->mnt.mnt_flags & MNT_READONLY)) { 516 518 mnt->mnt.mnt_flags |= MNT_WRITE_HOLD; ··· 532 534 if (mnt->mnt.mnt_flags & MNT_WRITE_HOLD) 533 535 mnt->mnt.mnt_flags &= ~MNT_WRITE_HOLD; 534 536 } 535 - br_write_unlock(&vfsmount_lock); 537 + unlock_mount_hash(); 536 538 537 539 return err; 538 540 } ··· 547 549 kmem_cache_free(mnt_cache, mnt); 548 550 } 549 551 552 + /* call under rcu_read_lock */ 553 + bool legitimize_mnt(struct vfsmount *bastard, unsigned seq) 554 + { 555 + struct mount *mnt; 556 + if (read_seqretry(&mount_lock, seq)) 557 + return false; 558 + if (bastard == NULL) 559 + return true; 560 + mnt = real_mount(bastard); 561 + mnt_add_count(mnt, 1); 562 + if (likely(!read_seqretry(&mount_lock, seq))) 563 + return true; 564 + if (bastard->mnt_flags & MNT_SYNC_UMOUNT) { 565 + mnt_add_count(mnt, -1); 566 + return false; 567 + } 568 + rcu_read_unlock(); 569 + mntput(bastard); 570 + rcu_read_lock(); 571 + return false; 572 + } 573 + 550 574 /* 551 - * find the first or last mount at @dentry on vfsmount @mnt depending on 552 - * @dir. If @dir is set return the first mount else return the last mount. 553 - * vfsmount_lock must be held for read or write. 575 + * find the first mount at @dentry on vfsmount @mnt. 
576 + * call under rcu_read_lock() 554 577 */ 555 - struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry, 556 - int dir) 578 + struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry) 557 579 { 558 580 struct list_head *head = mount_hashtable + hash(mnt, dentry); 559 - struct list_head *tmp = head; 560 - struct mount *p, *found = NULL; 581 + struct mount *p; 561 582 562 - for (;;) { 563 - tmp = dir ? tmp->next : tmp->prev; 564 - p = NULL; 565 - if (tmp == head) 566 - break; 567 - p = list_entry(tmp, struct mount, mnt_hash); 568 - if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry) { 569 - found = p; 570 - break; 571 - } 572 - } 573 - return found; 583 + list_for_each_entry_rcu(p, head, mnt_hash) 584 + if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry) 585 + return p; 586 + return NULL; 587 + } 588 + 589 + /* 590 + * find the last mount at @dentry on vfsmount @mnt. 591 + * mount_lock must be held. 592 + */ 593 + struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry) 594 + { 595 + struct list_head *head = mount_hashtable + hash(mnt, dentry); 596 + struct mount *p; 597 + 598 + list_for_each_entry_reverse(p, head, mnt_hash) 599 + if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry) 600 + return p; 601 + return NULL; 574 602 } 575 603 576 604 /* ··· 618 594 struct vfsmount *lookup_mnt(struct path *path) 619 595 { 620 596 struct mount *child_mnt; 597 + struct vfsmount *m; 598 + unsigned seq; 621 599 622 - br_read_lock(&vfsmount_lock); 623 - child_mnt = __lookup_mnt(path->mnt, path->dentry, 1); 624 - if (child_mnt) { 625 - mnt_add_count(child_mnt, 1); 626 - br_read_unlock(&vfsmount_lock); 627 - return &child_mnt->mnt; 628 - } else { 629 - br_read_unlock(&vfsmount_lock); 630 - return NULL; 631 - } 600 + rcu_read_lock(); 601 + do { 602 + seq = read_seqbegin(&mount_lock); 603 + child_mnt = __lookup_mnt(path->mnt, path->dentry); 604 + m = child_mnt ? 
&child_mnt->mnt : NULL; 605 + } while (!legitimize_mnt(m, seq)); 606 + rcu_read_unlock(); 607 + return m; 632 608 } 633 609 634 610 static struct mountpoint *new_mountpoint(struct dentry *dentry) ··· 820 796 mnt->mnt.mnt_sb = root->d_sb; 821 797 mnt->mnt_mountpoint = mnt->mnt.mnt_root; 822 798 mnt->mnt_parent = mnt; 823 - br_write_lock(&vfsmount_lock); 799 + lock_mount_hash(); 824 800 list_add_tail(&mnt->mnt_instance, &root->d_sb->s_mounts); 825 - br_write_unlock(&vfsmount_lock); 801 + unlock_mount_hash(); 826 802 return &mnt->mnt; 827 803 } 828 804 EXPORT_SYMBOL_GPL(vfs_kern_mount); ··· 863 839 mnt->mnt.mnt_root = dget(root); 864 840 mnt->mnt_mountpoint = mnt->mnt.mnt_root; 865 841 mnt->mnt_parent = mnt; 866 - br_write_lock(&vfsmount_lock); 842 + lock_mount_hash(); 867 843 list_add_tail(&mnt->mnt_instance, &sb->s_mounts); 868 - br_write_unlock(&vfsmount_lock); 844 + unlock_mount_hash(); 869 845 870 846 if ((flag & CL_SLAVE) || 871 847 ((flag & CL_SHARED_TO_SLAVE) && IS_MNT_SHARED(old))) { ··· 896 872 return ERR_PTR(err); 897 873 } 898 874 899 - static inline void mntfree(struct mount *mnt) 875 + static void delayed_free(struct rcu_head *head) 900 876 { 901 - struct vfsmount *m = &mnt->mnt; 902 - struct super_block *sb = m->mnt_sb; 877 + struct mount *mnt = container_of(head, struct mount, mnt_rcu); 878 + kfree(mnt->mnt_devname); 879 + #ifdef CONFIG_SMP 880 + free_percpu(mnt->mnt_pcp); 881 + #endif 882 + kmem_cache_free(mnt_cache, mnt); 883 + } 884 + 885 + static void mntput_no_expire(struct mount *mnt) 886 + { 887 + put_again: 888 + rcu_read_lock(); 889 + mnt_add_count(mnt, -1); 890 + if (likely(mnt->mnt_ns)) { /* shouldn't be the last one */ 891 + rcu_read_unlock(); 892 + return; 893 + } 894 + lock_mount_hash(); 895 + if (mnt_get_count(mnt)) { 896 + rcu_read_unlock(); 897 + unlock_mount_hash(); 898 + return; 899 + } 900 + if (unlikely(mnt->mnt_pinned)) { 901 + mnt_add_count(mnt, mnt->mnt_pinned + 1); 902 + mnt->mnt_pinned = 0; 903 + rcu_read_unlock(); 904 + 
unlock_mount_hash(); 905 + acct_auto_close_mnt(&mnt->mnt); 906 + goto put_again; 907 + } 908 + if (unlikely(mnt->mnt.mnt_flags & MNT_DOOMED)) { 909 + rcu_read_unlock(); 910 + unlock_mount_hash(); 911 + return; 912 + } 913 + mnt->mnt.mnt_flags |= MNT_DOOMED; 914 + rcu_read_unlock(); 915 + 916 + list_del(&mnt->mnt_instance); 917 + unlock_mount_hash(); 903 918 904 919 /* 905 920 * This probably indicates that somebody messed ··· 951 888 * so mnt_get_writers() below is safe. 952 889 */ 953 890 WARN_ON(mnt_get_writers(mnt)); 954 - fsnotify_vfsmount_delete(m); 955 - dput(m->mnt_root); 956 - free_vfsmnt(mnt); 957 - deactivate_super(sb); 958 - } 959 - 960 - static void mntput_no_expire(struct mount *mnt) 961 - { 962 - put_again: 963 - #ifdef CONFIG_SMP 964 - br_read_lock(&vfsmount_lock); 965 - if (likely(mnt->mnt_ns)) { 966 - /* shouldn't be the last one */ 967 - mnt_add_count(mnt, -1); 968 - br_read_unlock(&vfsmount_lock); 969 - return; 970 - } 971 - br_read_unlock(&vfsmount_lock); 972 - 973 - br_write_lock(&vfsmount_lock); 974 - mnt_add_count(mnt, -1); 975 - if (mnt_get_count(mnt)) { 976 - br_write_unlock(&vfsmount_lock); 977 - return; 978 - } 979 - #else 980 - mnt_add_count(mnt, -1); 981 - if (likely(mnt_get_count(mnt))) 982 - return; 983 - br_write_lock(&vfsmount_lock); 984 - #endif 985 - if (unlikely(mnt->mnt_pinned)) { 986 - mnt_add_count(mnt, mnt->mnt_pinned + 1); 987 - mnt->mnt_pinned = 0; 988 - br_write_unlock(&vfsmount_lock); 989 - acct_auto_close_mnt(&mnt->mnt); 990 - goto put_again; 991 - } 992 - 993 - list_del(&mnt->mnt_instance); 994 - br_write_unlock(&vfsmount_lock); 995 - mntfree(mnt); 891 + fsnotify_vfsmount_delete(&mnt->mnt); 892 + dput(mnt->mnt.mnt_root); 893 + deactivate_super(mnt->mnt.mnt_sb); 894 + mnt_free_id(mnt); 895 + call_rcu(&mnt->mnt_rcu, delayed_free); 996 896 } 997 897 998 898 void mntput(struct vfsmount *mnt) ··· 980 954 981 955 void mnt_pin(struct vfsmount *mnt) 982 956 { 983 - br_write_lock(&vfsmount_lock); 957 + lock_mount_hash(); 984 958 
real_mount(mnt)->mnt_pinned++; 985 - br_write_unlock(&vfsmount_lock); 959 + unlock_mount_hash(); 986 960 } 987 961 EXPORT_SYMBOL(mnt_pin); 988 962 989 963 void mnt_unpin(struct vfsmount *m) 990 964 { 991 965 struct mount *mnt = real_mount(m); 992 - br_write_lock(&vfsmount_lock); 966 + lock_mount_hash(); 993 967 if (mnt->mnt_pinned) { 994 968 mnt_add_count(mnt, 1); 995 969 mnt->mnt_pinned--; 996 970 } 997 - br_write_unlock(&vfsmount_lock); 971 + unlock_mount_hash(); 998 972 } 999 973 EXPORT_SYMBOL(mnt_unpin); 1000 974 ··· 1111 1085 BUG_ON(!m); 1112 1086 1113 1087 /* write lock needed for mnt_get_count */ 1114 - br_write_lock(&vfsmount_lock); 1088 + lock_mount_hash(); 1115 1089 for (p = mnt; p; p = next_mnt(p, mnt)) { 1116 1090 actual_refs += mnt_get_count(p); 1117 1091 minimum_refs += 2; 1118 1092 } 1119 - br_write_unlock(&vfsmount_lock); 1093 + unlock_mount_hash(); 1120 1094 1121 1095 if (actual_refs > minimum_refs) 1122 1096 return 0; ··· 1143 1117 { 1144 1118 int ret = 1; 1145 1119 down_read(&namespace_sem); 1146 - br_write_lock(&vfsmount_lock); 1120 + lock_mount_hash(); 1147 1121 if (propagate_mount_busy(real_mount(mnt), 2)) 1148 1122 ret = 0; 1149 - br_write_unlock(&vfsmount_lock); 1123 + unlock_mount_hash(); 1150 1124 up_read(&namespace_sem); 1151 1125 return ret; 1152 1126 } ··· 1168 1142 list_splice_init(&unmounted, &head); 1169 1143 up_write(&namespace_sem); 1170 1144 1145 + synchronize_rcu(); 1146 + 1171 1147 while (!list_empty(&head)) { 1172 1148 mnt = list_first_entry(&head, struct mount, mnt_hash); 1173 1149 list_del_init(&mnt->mnt_hash); 1174 - if (mnt_has_parent(mnt)) { 1175 - struct dentry *dentry; 1176 - struct mount *m; 1177 - 1178 - br_write_lock(&vfsmount_lock); 1179 - dentry = mnt->mnt_mountpoint; 1180 - m = mnt->mnt_parent; 1181 - mnt->mnt_mountpoint = mnt->mnt.mnt_root; 1182 - mnt->mnt_parent = mnt; 1183 - m->mnt_ghosts--; 1184 - br_write_unlock(&vfsmount_lock); 1185 - dput(dentry); 1186 - mntput(&m->mnt); 1187 - } 1150 + if 
(mnt->mnt_ex_mountpoint.mnt) 1151 + path_put(&mnt->mnt_ex_mountpoint); 1188 1152 mntput(&mnt->mnt); 1189 1153 } 1190 1154 } ··· 1185 1169 } 1186 1170 1187 1171 /* 1188 - * vfsmount lock must be held for write 1172 + * mount_lock must be held 1189 1173 * namespace_sem must be held for write 1174 + * how = 0 => just this tree, don't propagate 1175 + * how = 1 => propagate; we know that nobody else has reference to any victims 1176 + * how = 2 => lazy umount 1190 1177 */ 1191 - void umount_tree(struct mount *mnt, int propagate) 1178 + void umount_tree(struct mount *mnt, int how) 1192 1179 { 1193 1180 LIST_HEAD(tmp_list); 1194 1181 struct mount *p; ··· 1199 1180 for (p = mnt; p; p = next_mnt(p, mnt)) 1200 1181 list_move(&p->mnt_hash, &tmp_list); 1201 1182 1202 - if (propagate) 1183 + if (how) 1203 1184 propagate_umount(&tmp_list); 1204 1185 1205 1186 list_for_each_entry(p, &tmp_list, mnt_hash) { ··· 1207 1188 list_del_init(&p->mnt_list); 1208 1189 __touch_mnt_namespace(p->mnt_ns); 1209 1190 p->mnt_ns = NULL; 1191 + if (how < 2) 1192 + p->mnt.mnt_flags |= MNT_SYNC_UMOUNT; 1210 1193 list_del_init(&p->mnt_child); 1211 1194 if (mnt_has_parent(p)) { 1212 - p->mnt_parent->mnt_ghosts++; 1213 1195 put_mountpoint(p->mnt_mp); 1196 + /* move the reference to mountpoint into ->mnt_ex_mountpoint */ 1197 + p->mnt_ex_mountpoint.dentry = p->mnt_mountpoint; 1198 + p->mnt_ex_mountpoint.mnt = &p->mnt_parent->mnt; 1199 + p->mnt_mountpoint = p->mnt.mnt_root; 1200 + p->mnt_parent = p; 1214 1201 p->mnt_mp = NULL; 1215 1202 } 1216 1203 change_mnt_propagation(p, MS_PRIVATE); ··· 1250 1225 * probably don't strictly need the lock here if we examined 1251 1226 * all race cases, but it's a slowpath. 
1252 1227 */ 1253 - br_write_lock(&vfsmount_lock); 1228 + lock_mount_hash(); 1254 1229 if (mnt_get_count(mnt) != 2) { 1255 - br_write_unlock(&vfsmount_lock); 1230 + unlock_mount_hash(); 1256 1231 return -EBUSY; 1257 1232 } 1258 - br_write_unlock(&vfsmount_lock); 1233 + unlock_mount_hash(); 1259 1234 1260 1235 if (!xchg(&mnt->mnt_expiry_mark, 1)) 1261 1236 return -EAGAIN; ··· 1297 1272 } 1298 1273 1299 1274 namespace_lock(); 1300 - br_write_lock(&vfsmount_lock); 1275 + lock_mount_hash(); 1301 1276 event++; 1302 1277 1303 - if (!(flags & MNT_DETACH)) 1304 - shrink_submounts(mnt); 1305 - 1306 - retval = -EBUSY; 1307 - if (flags & MNT_DETACH || !propagate_mount_busy(mnt, 2)) { 1278 + if (flags & MNT_DETACH) { 1308 1279 if (!list_empty(&mnt->mnt_list)) 1309 - umount_tree(mnt, 1); 1280 + umount_tree(mnt, 2); 1310 1281 retval = 0; 1282 + } else { 1283 + shrink_submounts(mnt); 1284 + retval = -EBUSY; 1285 + if (!propagate_mount_busy(mnt, 2)) { 1286 + if (!list_empty(&mnt->mnt_list)) 1287 + umount_tree(mnt, 1); 1288 + retval = 0; 1289 + } 1311 1290 } 1312 - br_write_unlock(&vfsmount_lock); 1291 + unlock_mount_hash(); 1313 1292 namespace_unlock(); 1314 1293 return retval; 1315 1294 } ··· 1456 1427 q = clone_mnt(p, p->mnt.mnt_root, flag); 1457 1428 if (IS_ERR(q)) 1458 1429 goto out; 1459 - br_write_lock(&vfsmount_lock); 1430 + lock_mount_hash(); 1460 1431 list_add_tail(&q->mnt_list, &res->mnt_list); 1461 1432 attach_mnt(q, parent, p->mnt_mp); 1462 - br_write_unlock(&vfsmount_lock); 1433 + unlock_mount_hash(); 1463 1434 } 1464 1435 } 1465 1436 return res; 1466 1437 out: 1467 1438 if (res) { 1468 - br_write_lock(&vfsmount_lock); 1439 + lock_mount_hash(); 1469 1440 umount_tree(res, 0); 1470 - br_write_unlock(&vfsmount_lock); 1441 + unlock_mount_hash(); 1471 1442 } 1472 1443 return q; 1473 1444 } ··· 1489 1460 void drop_collected_mounts(struct vfsmount *mnt) 1490 1461 { 1491 1462 namespace_lock(); 1492 - br_write_lock(&vfsmount_lock); 1463 + lock_mount_hash(); 1493 1464 
umount_tree(real_mount(mnt), 0); 1494 - br_write_unlock(&vfsmount_lock); 1465 + unlock_mount_hash(); 1495 1466 namespace_unlock(); 1496 1467 } 1497 1468 ··· 1618 1589 if (err) 1619 1590 goto out_cleanup_ids; 1620 1591 1621 - br_write_lock(&vfsmount_lock); 1592 + lock_mount_hash(); 1622 1593 1623 1594 if (IS_MNT_SHARED(dest_mnt)) { 1624 1595 for (p = source_mnt; p; p = next_mnt(p, source_mnt)) ··· 1637 1608 list_del_init(&child->mnt_hash); 1638 1609 commit_tree(child); 1639 1610 } 1640 - br_write_unlock(&vfsmount_lock); 1611 + unlock_mount_hash(); 1641 1612 1642 1613 return 0; 1643 1614 ··· 1739 1710 goto out_unlock; 1740 1711 } 1741 1712 1742 - br_write_lock(&vfsmount_lock); 1713 + lock_mount_hash(); 1743 1714 for (m = mnt; m; m = (recurse ? next_mnt(m, mnt) : NULL)) 1744 1715 change_mnt_propagation(m, type); 1745 - br_write_unlock(&vfsmount_lock); 1716 + unlock_mount_hash(); 1746 1717 1747 1718 out_unlock: 1748 1719 namespace_unlock(); ··· 1814 1785 1815 1786 err = graft_tree(mnt, parent, mp); 1816 1787 if (err) { 1817 - br_write_lock(&vfsmount_lock); 1788 + lock_mount_hash(); 1818 1789 umount_tree(mnt, 0); 1819 - br_write_unlock(&vfsmount_lock); 1790 + unlock_mount_hash(); 1820 1791 } 1821 1792 out2: 1822 1793 unlock_mount(mp); ··· 1875 1846 else 1876 1847 err = do_remount_sb(sb, flags, data, 0); 1877 1848 if (!err) { 1878 - br_write_lock(&vfsmount_lock); 1849 + lock_mount_hash(); 1879 1850 mnt_flags |= mnt->mnt.mnt_flags & MNT_PROPAGATION_MASK; 1880 1851 mnt->mnt.mnt_flags = mnt_flags; 1881 - br_write_unlock(&vfsmount_lock); 1852 + touch_mnt_namespace(mnt->mnt_ns); 1853 + unlock_mount_hash(); 1882 1854 } 1883 1855 up_write(&sb->s_umount); 1884 - if (!err) { 1885 - br_write_lock(&vfsmount_lock); 1886 - touch_mnt_namespace(mnt->mnt_ns); 1887 - br_write_unlock(&vfsmount_lock); 1888 - } 1889 1856 return err; 1890 1857 } 1891 1858 ··· 1997 1972 struct mount *parent; 1998 1973 int err; 1999 1974 2000 - mnt_flags &= ~(MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL); 1975 + 
mnt_flags &= ~(MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | MNT_DOOMED | MNT_SYNC_UMOUNT); 2001 1976 2002 1977 mp = lock_mount(path); 2003 1978 if (IS_ERR(mp)) ··· 2102 2077 /* remove m from any expiration list it may be on */ 2103 2078 if (!list_empty(&mnt->mnt_expire)) { 2104 2079 namespace_lock(); 2105 - br_write_lock(&vfsmount_lock); 2106 2080 list_del_init(&mnt->mnt_expire); 2107 - br_write_unlock(&vfsmount_lock); 2108 2081 namespace_unlock(); 2109 2082 } 2110 2083 mntput(m); ··· 2118 2095 void mnt_set_expiry(struct vfsmount *mnt, struct list_head *expiry_list) 2119 2096 { 2120 2097 namespace_lock(); 2121 - br_write_lock(&vfsmount_lock); 2122 2098 2123 2099 list_add_tail(&real_mount(mnt)->mnt_expire, expiry_list); 2124 2100 2125 - br_write_unlock(&vfsmount_lock); 2126 2101 namespace_unlock(); 2127 2102 } 2128 2103 EXPORT_SYMBOL(mnt_set_expiry); ··· 2139 2118 return; 2140 2119 2141 2120 namespace_lock(); 2142 - br_write_lock(&vfsmount_lock); 2121 + lock_mount_hash(); 2143 2122 2144 2123 /* extract from the expiration list every vfsmount that matches the 2145 2124 * following criteria: ··· 2158 2137 touch_mnt_namespace(mnt->mnt_ns); 2159 2138 umount_tree(mnt, 1); 2160 2139 } 2161 - br_write_unlock(&vfsmount_lock); 2140 + unlock_mount_hash(); 2162 2141 namespace_unlock(); 2163 2142 } 2164 2143 ··· 2214 2193 * process a list of expirable mountpoints with the intent of discarding any 2215 2194 * submounts of a specific parent mountpoint 2216 2195 * 2217 - * vfsmount_lock must be held for write 2196 + * mount_lock must be held for write 2218 2197 */ 2219 2198 static void shrink_submounts(struct mount *mnt) 2220 2199 { ··· 2435 2414 return new_ns; 2436 2415 } 2437 2416 2438 - /* 2439 - * Allocate a new namespace structure and populate it with contents 2440 - * copied from the namespace of the passed in task structure. 
2441 - */ 2442 - static struct mnt_namespace *dup_mnt_ns(struct mnt_namespace *mnt_ns, 2443 - struct user_namespace *user_ns, struct fs_struct *fs) 2417 + struct mnt_namespace *copy_mnt_ns(unsigned long flags, struct mnt_namespace *ns, 2418 + struct user_namespace *user_ns, struct fs_struct *new_fs) 2444 2419 { 2445 2420 struct mnt_namespace *new_ns; 2446 2421 struct vfsmount *rootmnt = NULL, *pwdmnt = NULL; 2447 2422 struct mount *p, *q; 2448 - struct mount *old = mnt_ns->root; 2423 + struct mount *old; 2449 2424 struct mount *new; 2450 2425 int copy_flags; 2426 + 2427 + BUG_ON(!ns); 2428 + 2429 + if (likely(!(flags & CLONE_NEWNS))) { 2430 + get_mnt_ns(ns); 2431 + return ns; 2432 + } 2433 + 2434 + old = ns->root; 2451 2435 2452 2436 new_ns = alloc_mnt_ns(user_ns); 2453 2437 if (IS_ERR(new_ns)) ··· 2461 2435 namespace_lock(); 2462 2436 /* First pass: copy the tree topology */ 2463 2437 copy_flags = CL_COPY_UNBINDABLE | CL_EXPIRE; 2464 - if (user_ns != mnt_ns->user_ns) 2438 + if (user_ns != ns->user_ns) 2465 2439 copy_flags |= CL_SHARED_TO_SLAVE | CL_UNPRIVILEGED; 2466 2440 new = copy_tree(old, old->mnt.mnt_root, copy_flags); 2467 2441 if (IS_ERR(new)) { ··· 2470 2444 return ERR_CAST(new); 2471 2445 } 2472 2446 new_ns->root = new; 2473 - br_write_lock(&vfsmount_lock); 2474 2447 list_add_tail(&new_ns->list, &new->mnt_list); 2475 - br_write_unlock(&vfsmount_lock); 2476 2448 2477 2449 /* 2478 2450 * Second pass: switch the tsk->fs->* elements and mark new vfsmounts ··· 2481 2457 q = new; 2482 2458 while (p) { 2483 2459 q->mnt_ns = new_ns; 2484 - if (fs) { 2485 - if (&p->mnt == fs->root.mnt) { 2486 - fs->root.mnt = mntget(&q->mnt); 2460 + if (new_fs) { 2461 + if (&p->mnt == new_fs->root.mnt) { 2462 + new_fs->root.mnt = mntget(&q->mnt); 2487 2463 rootmnt = &p->mnt; 2488 2464 } 2489 - if (&p->mnt == fs->pwd.mnt) { 2490 - fs->pwd.mnt = mntget(&q->mnt); 2465 + if (&p->mnt == new_fs->pwd.mnt) { 2466 + new_fs->pwd.mnt = mntget(&q->mnt); 2491 2467 pwdmnt = &p->mnt; 2492 2468 } 
2493 2469 } ··· 2505 2481 if (pwdmnt) 2506 2482 mntput(pwdmnt); 2507 2483 2508 - return new_ns; 2509 - } 2510 - 2511 - struct mnt_namespace *copy_mnt_ns(unsigned long flags, struct mnt_namespace *ns, 2512 - struct user_namespace *user_ns, struct fs_struct *new_fs) 2513 - { 2514 - struct mnt_namespace *new_ns; 2515 - 2516 - BUG_ON(!ns); 2517 - get_mnt_ns(ns); 2518 - 2519 - if (!(flags & CLONE_NEWNS)) 2520 - return ns; 2521 - 2522 - new_ns = dup_mnt_ns(ns, user_ns, new_fs); 2523 - 2524 - put_mnt_ns(ns); 2525 2484 return new_ns; 2526 2485 } 2527 2486 ··· 2600 2593 /* 2601 2594 * Return true if path is reachable from root 2602 2595 * 2603 - * namespace_sem or vfsmount_lock is held 2596 + * namespace_sem or mount_lock is held 2604 2597 */ 2605 2598 bool is_path_reachable(struct mount *mnt, struct dentry *dentry, 2606 2599 const struct path *root) ··· 2615 2608 int path_is_under(struct path *path1, struct path *path2) 2616 2609 { 2617 2610 int res; 2618 - br_read_lock(&vfsmount_lock); 2611 + read_seqlock_excl(&mount_lock); 2619 2612 res = is_path_reachable(real_mount(path1->mnt), path1->dentry, path2); 2620 - br_read_unlock(&vfsmount_lock); 2613 + read_sequnlock_excl(&mount_lock); 2621 2614 return res; 2622 2615 } 2623 2616 EXPORT_SYMBOL(path_is_under); ··· 2708 2701 if (!is_path_reachable(old_mnt, old.dentry, &new)) 2709 2702 goto out4; 2710 2703 root_mp->m_count++; /* pin it so it won't go away */ 2711 - br_write_lock(&vfsmount_lock); 2704 + lock_mount_hash(); 2712 2705 detach_mnt(new_mnt, &parent_path); 2713 2706 detach_mnt(root_mnt, &root_parent); 2714 2707 if (root_mnt->mnt.mnt_flags & MNT_LOCKED) { ··· 2720 2713 /* mount new_root on / */ 2721 2714 attach_mnt(new_mnt, real_mount(root_parent.mnt), root_mp); 2722 2715 touch_mnt_namespace(current->nsproxy->mnt_ns); 2723 - br_write_unlock(&vfsmount_lock); 2716 + unlock_mount_hash(); 2724 2717 chroot_fs_refs(&root, &new); 2725 2718 put_mountpoint(root_mp); 2726 2719 error = 0; ··· 2774 2767 unsigned u; 2775 2768 int err; 
2776 2769 2777 - init_rwsem(&namespace_sem); 2778 - 2779 2770 mnt_cache = kmem_cache_create("mnt_cache", sizeof(struct mount), 2780 2771 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL); 2781 2772 ··· 2790 2785 for (u = 0; u < HASH_SIZE; u++) 2791 2786 INIT_LIST_HEAD(&mountpoint_hashtable[u]); 2792 2787 2793 - br_lock_init(&vfsmount_lock); 2794 - 2795 2788 err = sysfs_init(); 2796 2789 if (err) 2797 2790 printk(KERN_WARNING "%s: sysfs_init error: %d\n", ··· 2805 2802 { 2806 2803 if (!atomic_dec_and_test(&ns->count)) 2807 2804 return; 2808 - namespace_lock(); 2809 - br_write_lock(&vfsmount_lock); 2810 - umount_tree(ns->root, 0); 2811 - br_write_unlock(&vfsmount_lock); 2812 - namespace_unlock(); 2805 + drop_collected_mounts(&ns->root->mnt); 2813 2806 free_mnt_ns(ns); 2814 2807 } 2815 2808 ··· 2828 2829 { 2829 2830 /* release long term mount so mount point can be released */ 2830 2831 if (!IS_ERR_OR_NULL(mnt)) { 2831 - br_write_lock(&vfsmount_lock); 2832 2832 real_mount(mnt)->mnt_ns = NULL; 2833 - br_write_unlock(&vfsmount_lock); 2833 + synchronize_rcu(); /* yecchhh... */ 2834 2834 mntput(mnt); 2835 2835 } 2836 2836 } ··· 2873 2875 if (unlikely(!ns)) 2874 2876 return false; 2875 2877 2876 - namespace_lock(); 2878 + down_read(&namespace_sem); 2877 2879 list_for_each_entry(mnt, &ns->list, mnt_list) { 2878 2880 struct mount *child; 2879 2881 if (mnt->mnt.mnt_sb->s_type != type) ··· 2894 2896 next: ; 2895 2897 } 2896 2898 found: 2897 - namespace_unlock(); 2899 + up_read(&namespace_sem); 2898 2900 return visible; 2899 2901 } 2900 2902
+20 -35
fs/ncpfs/dir.c
··· 339 339 if (val) 340 340 goto finished; 341 341 342 - DDPRINTK("ncp_lookup_validate: %s/%s not valid, age=%ld, server lookup\n", 343 - dentry->d_parent->d_name.name, dentry->d_name.name, 344 - NCP_GET_AGE(dentry)); 342 + DDPRINTK("ncp_lookup_validate: %pd2 not valid, age=%ld, server lookup\n", 343 + dentry, NCP_GET_AGE(dentry)); 345 344 346 345 len = sizeof(__name); 347 346 if (ncp_is_server_root(dir)) { ··· 358 359 res = ncp_obtain_info(server, dir, __name, &(finfo.i)); 359 360 } 360 361 finfo.volume = finfo.i.volNumber; 361 - DDPRINTK("ncp_lookup_validate: looked for %s/%s, res=%d\n", 362 - dentry->d_parent->d_name.name, __name, res); 362 + DDPRINTK("ncp_lookup_validate: looked for %pd/%s, res=%d\n", 363 + dentry->d_parent, __name, res); 363 364 /* 364 365 * If we didn't find it, or if it has a different dirEntNum to 365 366 * what we remember, it's not valid any more. ··· 453 454 ctl.page = NULL; 454 455 ctl.cache = NULL; 455 456 456 - DDPRINTK("ncp_readdir: reading %s/%s, pos=%d\n", 457 - dentry->d_parent->d_name.name, dentry->d_name.name, 457 + DDPRINTK("ncp_readdir: reading %pD2, pos=%d\n", file, 458 458 (int) ctx->pos); 459 459 460 460 result = -EIO; ··· 738 740 int more; 739 741 size_t bufsize; 740 742 741 - DPRINTK("ncp_do_readdir: %s/%s, fpos=%ld\n", 742 - dentry->d_parent->d_name.name, dentry->d_name.name, 743 + DPRINTK("ncp_do_readdir: %pD2, fpos=%ld\n", file, 743 744 (unsigned long) ctx->pos); 744 - PPRINTK("ncp_do_readdir: init %s, volnum=%d, dirent=%u\n", 745 - dentry->d_name.name, NCP_FINFO(dir)->volNumber, 746 - NCP_FINFO(dir)->dirEntNum); 745 + PPRINTK("ncp_do_readdir: init %pD, volnum=%d, dirent=%u\n", 746 + file, NCP_FINFO(dir)->volNumber, NCP_FINFO(dir)->dirEntNum); 747 747 748 748 err = ncp_initialize_search(server, dir, &seq); 749 749 if (err) { ··· 846 850 if (!ncp_conn_valid(server)) 847 851 goto finished; 848 852 849 - PPRINTK("ncp_lookup: server lookup for %s/%s\n", 850 - dentry->d_parent->d_name.name, dentry->d_name.name); 853 + 
PPRINTK("ncp_lookup: server lookup for %pd2\n", dentry); 851 854 852 855 len = sizeof(__name); 853 856 if (ncp_is_server_root(dir)) { ··· 862 867 if (!res) 863 868 res = ncp_obtain_info(server, dir, __name, &(finfo.i)); 864 869 } 865 - PPRINTK("ncp_lookup: looked for %s/%s, res=%d\n", 866 - dentry->d_parent->d_name.name, __name, res); 870 + PPRINTK("ncp_lookup: looked for %pd2, res=%d\n", dentry, res); 867 871 /* 868 872 * If we didn't find an entry, make a negative dentry. 869 873 */ ··· 909 915 return error; 910 916 911 917 out_close: 912 - PPRINTK("ncp_instantiate: %s/%s failed, closing file\n", 913 - dentry->d_parent->d_name.name, dentry->d_name.name); 918 + PPRINTK("ncp_instantiate: %pd2 failed, closing file\n", dentry); 914 919 ncp_close_file(NCP_SERVER(dir), finfo->file_handle); 915 920 goto out; 916 921 } ··· 923 930 int opmode; 924 931 __u8 __name[NCP_MAXPATHLEN + 1]; 925 932 926 - PPRINTK("ncp_create_new: creating %s/%s, mode=%hx\n", 927 - dentry->d_parent->d_name.name, dentry->d_name.name, mode); 933 + PPRINTK("ncp_create_new: creating %pd2, mode=%hx\n", dentry, mode); 928 934 929 935 ncp_age_dentry(server, dentry); 930 936 len = sizeof(__name); ··· 952 960 error = -ENAMETOOLONG; 953 961 else if (result < 0) 954 962 error = result; 955 - DPRINTK("ncp_create: %s/%s failed\n", 956 - dentry->d_parent->d_name.name, dentry->d_name.name); 963 + DPRINTK("ncp_create: %pd2 failed\n", dentry); 957 964 goto out; 958 965 } 959 966 opmode = O_WRONLY; ··· 985 994 int error, len; 986 995 __u8 __name[NCP_MAXPATHLEN + 1]; 987 996 988 - DPRINTK("ncp_mkdir: making %s/%s\n", 989 - dentry->d_parent->d_name.name, dentry->d_name.name); 997 + DPRINTK("ncp_mkdir: making %pd2\n", dentry); 990 998 991 999 ncp_age_dentry(server, dentry); 992 1000 len = sizeof(__name); ··· 1022 1032 int error, result, len; 1023 1033 __u8 __name[NCP_MAXPATHLEN + 1]; 1024 1034 1025 - DPRINTK("ncp_rmdir: removing %s/%s\n", 1026 - dentry->d_parent->d_name.name, dentry->d_name.name); 1035 + 
DPRINTK("ncp_rmdir: removing %pd2\n", dentry); 1027 1036 1028 1037 len = sizeof(__name); 1029 1038 error = ncp_io2vol(server, __name, &len, dentry->d_name.name, ··· 1067 1078 int error; 1068 1079 1069 1080 server = NCP_SERVER(dir); 1070 - DPRINTK("ncp_unlink: unlinking %s/%s\n", 1071 - dentry->d_parent->d_name.name, dentry->d_name.name); 1081 + DPRINTK("ncp_unlink: unlinking %pd2\n", dentry); 1072 1082 1073 1083 /* 1074 1084 * Check whether to close the file ... ··· 1087 1099 #endif 1088 1100 switch (error) { 1089 1101 case 0x00: 1090 - DPRINTK("ncp: removed %s/%s\n", 1091 - dentry->d_parent->d_name.name, dentry->d_name.name); 1102 + DPRINTK("ncp: removed %pd2\n", dentry); 1092 1103 break; 1093 1104 case 0x85: 1094 1105 case 0x8A: ··· 1120 1133 int old_len, new_len; 1121 1134 __u8 __old_name[NCP_MAXPATHLEN + 1], __new_name[NCP_MAXPATHLEN + 1]; 1122 1135 1123 - DPRINTK("ncp_rename: %s/%s to %s/%s\n", 1124 - old_dentry->d_parent->d_name.name, old_dentry->d_name.name, 1125 - new_dentry->d_parent->d_name.name, new_dentry->d_name.name); 1136 + DPRINTK("ncp_rename: %pd2 to %pd2\n", old_dentry, new_dentry); 1126 1137 1127 1138 ncp_age_dentry(server, old_dentry); 1128 1139 ncp_age_dentry(server, new_dentry); ··· 1150 1165 #endif 1151 1166 switch (error) { 1152 1167 case 0x00: 1153 - DPRINTK("ncp renamed %s -> %s.\n", 1154 - old_dentry->d_name.name,new_dentry->d_name.name); 1168 + DPRINTK("ncp renamed %pd -> %pd.\n", 1169 + old_dentry, new_dentry); 1155 1170 break; 1156 1171 case 0x9E: 1157 1172 error = -ENAMETOOLONG;
+4 -8
fs/ncpfs/file.c
··· 107 107 void* freepage; 108 108 size_t freelen; 109 109 110 - DPRINTK("ncp_file_read: enter %s/%s\n", 111 - dentry->d_parent->d_name.name, dentry->d_name.name); 110 + DPRINTK("ncp_file_read: enter %pd2\n", dentry); 112 111 113 112 pos = *ppos; 114 113 ··· 165 166 166 167 file_accessed(file); 167 168 168 - DPRINTK("ncp_file_read: exit %s/%s\n", 169 - dentry->d_parent->d_name.name, dentry->d_name.name); 169 + DPRINTK("ncp_file_read: exit %pd2\n", dentry); 170 170 outrel: 171 171 ncp_inode_close(inode); 172 172 return already_read ? already_read : error; ··· 182 184 int errno; 183 185 void* bouncebuffer; 184 186 185 - DPRINTK("ncp_file_write: enter %s/%s\n", 186 - dentry->d_parent->d_name.name, dentry->d_name.name); 187 + DPRINTK("ncp_file_write: enter %pd2\n", dentry); 187 188 if ((ssize_t) count < 0) 188 189 return -EINVAL; 189 190 pos = *ppos; ··· 261 264 i_size_write(inode, pos); 262 265 mutex_unlock(&inode->i_mutex); 263 266 } 264 - DPRINTK("ncp_file_write: exit %s/%s\n", 265 - dentry->d_parent->d_name.name, dentry->d_name.name); 267 + DPRINTK("ncp_file_write: exit %pd2\n", dentry); 266 268 outrel: 267 269 ncp_inode_close(inode); 268 270 return already_written ? already_written : errno;
+12 -7
fs/ncpfs/inode.c
··· 782 782 return error; 783 783 } 784 784 785 + static void delayed_free(struct rcu_head *p) 786 + { 787 + struct ncp_server *server = container_of(p, struct ncp_server, rcu); 788 + #ifdef CONFIG_NCPFS_NLS 789 + /* unload the NLS charsets */ 790 + unload_nls(server->nls_vol); 791 + unload_nls(server->nls_io); 792 + #endif /* CONFIG_NCPFS_NLS */ 793 + kfree(server); 794 + } 795 + 785 796 static void ncp_put_super(struct super_block *sb) 786 797 { 787 798 struct ncp_server *server = NCP_SBP(sb); ··· 803 792 804 793 ncp_stop_tasks(server); 805 794 806 - #ifdef CONFIG_NCPFS_NLS 807 - /* unload the NLS charsets */ 808 - unload_nls(server->nls_vol); 809 - unload_nls(server->nls_io); 810 - #endif /* CONFIG_NCPFS_NLS */ 811 795 mutex_destroy(&server->rcv.creq_mutex); 812 796 mutex_destroy(&server->root_setup_lock); 813 797 mutex_destroy(&server->mutex); ··· 819 813 vfree(server->rxbuf); 820 814 vfree(server->txbuf); 821 815 vfree(server->packet); 822 - sb->s_fs_info = NULL; 823 - kfree(server); 816 + call_rcu(&server->rcu, delayed_free); 824 817 } 825 818 826 819 static int ncp_statfs(struct dentry *dentry, struct kstatfs *buf)
+1 -1
fs/ncpfs/ncp_fs_sb.h
··· 38 38 }; 39 39 40 40 struct ncp_server { 41 - 41 + struct rcu_head rcu; 42 42 struct ncp_mount_data_kernel m; /* Nearly all of the mount data is of 43 43 interest for us later, so we store 44 44 it completely. */
+48 -71
fs/nfs/dir.c
··· 98 98 struct nfs_open_dir_context *ctx; 99 99 struct rpc_cred *cred; 100 100 101 - dfprintk(FILE, "NFS: open dir(%s/%s)\n", 102 - filp->f_path.dentry->d_parent->d_name.name, 103 - filp->f_path.dentry->d_name.name); 101 + dfprintk(FILE, "NFS: open dir(%pD2)\n", filp); 104 102 105 103 nfs_inc_stats(inode, NFSIOS_VFSOPEN); 106 104 ··· 295 297 if (ctx->duped > 0 296 298 && ctx->dup_cookie == *desc->dir_cookie) { 297 299 if (printk_ratelimit()) { 298 - pr_notice("NFS: directory %s/%s contains a readdir loop." 300 + pr_notice("NFS: directory %pD2 contains a readdir loop." 299 301 "Please contact your server vendor. " 300 302 "The file: %s has duplicate cookie %llu\n", 301 - desc->file->f_dentry->d_parent->d_name.name, 302 - desc->file->f_dentry->d_name.name, 303 + desc->file, 303 304 array->array[i].string.name, 304 305 *desc->dir_cookie); 305 306 } ··· 819 822 struct nfs_open_dir_context *dir_ctx = file->private_data; 820 823 int res = 0; 821 824 822 - dfprintk(FILE, "NFS: readdir(%s/%s) starting at cookie %llu\n", 823 - dentry->d_parent->d_name.name, dentry->d_name.name, 824 - (long long)ctx->pos); 825 + dfprintk(FILE, "NFS: readdir(%pD2) starting at cookie %llu\n", 826 + file, (long long)ctx->pos); 825 827 nfs_inc_stats(inode, NFSIOS_VFSGETDENTS); 826 828 827 829 /* ··· 876 880 nfs_unblock_sillyrename(dentry); 877 881 if (res > 0) 878 882 res = 0; 879 - dfprintk(FILE, "NFS: readdir(%s/%s) returns %d\n", 880 - dentry->d_parent->d_name.name, dentry->d_name.name, 881 - res); 883 + dfprintk(FILE, "NFS: readdir(%pD2) returns %d\n", file, res); 882 884 return res; 883 885 } 884 886 885 887 static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int whence) 886 888 { 887 - struct dentry *dentry = filp->f_path.dentry; 888 - struct inode *inode = dentry->d_inode; 889 + struct inode *inode = file_inode(filp); 889 890 struct nfs_open_dir_context *dir_ctx = filp->private_data; 890 891 891 - dfprintk(FILE, "NFS: llseek dir(%s/%s, %lld, %d)\n", 892 - 
dentry->d_parent->d_name.name, 893 - dentry->d_name.name, 894 - offset, whence); 892 + dfprintk(FILE, "NFS: llseek dir(%pD2, %lld, %d)\n", 893 + filp, offset, whence); 895 894 896 895 mutex_lock(&inode->i_mutex); 897 896 switch (whence) { ··· 916 925 static int nfs_fsync_dir(struct file *filp, loff_t start, loff_t end, 917 926 int datasync) 918 927 { 919 - struct dentry *dentry = filp->f_path.dentry; 920 - struct inode *inode = dentry->d_inode; 928 + struct inode *inode = file_inode(filp); 921 929 922 - dfprintk(FILE, "NFS: fsync dir(%s/%s) datasync %d\n", 923 - dentry->d_parent->d_name.name, dentry->d_name.name, 924 - datasync); 930 + dfprintk(FILE, "NFS: fsync dir(%pD2) datasync %d\n", filp, datasync); 925 931 926 932 mutex_lock(&inode->i_mutex); 927 - nfs_inc_stats(dentry->d_inode, NFSIOS_VFSFSYNC); 933 + nfs_inc_stats(inode, NFSIOS_VFSFSYNC); 928 934 mutex_unlock(&inode->i_mutex); 929 935 return 0; 930 936 } ··· 1061 1073 } 1062 1074 1063 1075 if (is_bad_inode(inode)) { 1064 - dfprintk(LOOKUPCACHE, "%s: %s/%s has dud inode\n", 1065 - __func__, dentry->d_parent->d_name.name, 1066 - dentry->d_name.name); 1076 + dfprintk(LOOKUPCACHE, "%s: %pd2 has dud inode\n", 1077 + __func__, dentry); 1067 1078 goto out_bad; 1068 1079 } 1069 1080 ··· 1112 1125 nfs_advise_use_readdirplus(dir); 1113 1126 out_valid_noent: 1114 1127 dput(parent); 1115 - dfprintk(LOOKUPCACHE, "NFS: %s(%s/%s) is valid\n", 1116 - __func__, dentry->d_parent->d_name.name, 1117 - dentry->d_name.name); 1128 + dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is valid\n", 1129 + __func__, dentry); 1118 1130 return 1; 1119 1131 out_zap_parent: 1120 1132 nfs_zap_caches(dir); ··· 1139 1153 goto out_valid; 1140 1154 1141 1155 dput(parent); 1142 - dfprintk(LOOKUPCACHE, "NFS: %s(%s/%s) is invalid\n", 1143 - __func__, dentry->d_parent->d_name.name, 1144 - dentry->d_name.name); 1156 + dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is invalid\n", 1157 + __func__, dentry); 1145 1158 return 0; 1146 1159 out_error: 1147 1160 
nfs_free_fattr(fattr); 1148 1161 nfs_free_fhandle(fhandle); 1149 1162 nfs4_label_free(label); 1150 1163 dput(parent); 1151 - dfprintk(LOOKUPCACHE, "NFS: %s(%s/%s) lookup returned error %d\n", 1152 - __func__, dentry->d_parent->d_name.name, 1153 - dentry->d_name.name, error); 1164 + dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) lookup returned error %d\n", 1165 + __func__, dentry, error); 1154 1166 return error; 1155 1167 } 1156 1168 ··· 1172 1188 * eventually need to do something more here. 1173 1189 */ 1174 1190 if (!inode) { 1175 - dfprintk(LOOKUPCACHE, "%s: %s/%s has negative inode\n", 1176 - __func__, dentry->d_parent->d_name.name, 1177 - dentry->d_name.name); 1191 + dfprintk(LOOKUPCACHE, "%s: %pd2 has negative inode\n", 1192 + __func__, dentry); 1178 1193 return 1; 1179 1194 } 1180 1195 1181 1196 if (is_bad_inode(inode)) { 1182 - dfprintk(LOOKUPCACHE, "%s: %s/%s has dud inode\n", 1183 - __func__, dentry->d_parent->d_name.name, 1184 - dentry->d_name.name); 1197 + dfprintk(LOOKUPCACHE, "%s: %pd2 has dud inode\n", 1198 + __func__, dentry); 1185 1199 return 0; 1186 1200 } 1187 1201 ··· 1194 1212 */ 1195 1213 static int nfs_dentry_delete(const struct dentry *dentry) 1196 1214 { 1197 - dfprintk(VFS, "NFS: dentry_delete(%s/%s, %x)\n", 1198 - dentry->d_parent->d_name.name, dentry->d_name.name, 1199 - dentry->d_flags); 1215 + dfprintk(VFS, "NFS: dentry_delete(%pd2, %x)\n", 1216 + dentry, dentry->d_flags); 1200 1217 1201 1218 /* Unhash any dentry with a stale inode */ 1202 1219 if (dentry->d_inode != NULL && NFS_STALE(dentry->d_inode)) ··· 1273 1292 struct nfs4_label *label = NULL; 1274 1293 int error; 1275 1294 1276 - dfprintk(VFS, "NFS: lookup(%s/%s)\n", 1277 - dentry->d_parent->d_name.name, dentry->d_name.name); 1295 + dfprintk(VFS, "NFS: lookup(%pd2)\n", dentry); 1278 1296 nfs_inc_stats(dir, NFSIOS_VFSLOOKUP); 1279 1297 1280 1298 res = ERR_PTR(-ENAMETOOLONG); ··· 1404 1424 /* Expect a negative dentry */ 1405 1425 BUG_ON(dentry->d_inode); 1406 1426 1407 - dfprintk(VFS, "NFS: 
atomic_open(%s/%ld), %s\n", 1408 - dir->i_sb->s_id, dir->i_ino, dentry->d_name.name); 1427 + dfprintk(VFS, "NFS: atomic_open(%s/%ld), %pd\n", 1428 + dir->i_sb->s_id, dir->i_ino, dentry); 1409 1429 1410 1430 err = nfs_check_flags(open_flags); 1411 1431 if (err) ··· 1594 1614 int open_flags = excl ? O_CREAT | O_EXCL : O_CREAT; 1595 1615 int error; 1596 1616 1597 - dfprintk(VFS, "NFS: create(%s/%ld), %s\n", 1598 - dir->i_sb->s_id, dir->i_ino, dentry->d_name.name); 1617 + dfprintk(VFS, "NFS: create(%s/%ld), %pd\n", 1618 + dir->i_sb->s_id, dir->i_ino, dentry); 1599 1619 1600 1620 attr.ia_mode = mode; 1601 1621 attr.ia_valid = ATTR_MODE; ··· 1621 1641 struct iattr attr; 1622 1642 int status; 1623 1643 1624 - dfprintk(VFS, "NFS: mknod(%s/%ld), %s\n", 1625 - dir->i_sb->s_id, dir->i_ino, dentry->d_name.name); 1644 + dfprintk(VFS, "NFS: mknod(%s/%ld), %pd\n", 1645 + dir->i_sb->s_id, dir->i_ino, dentry); 1626 1646 1627 1647 if (!new_valid_dev(rdev)) 1628 1648 return -EINVAL; ··· 1650 1670 struct iattr attr; 1651 1671 int error; 1652 1672 1653 - dfprintk(VFS, "NFS: mkdir(%s/%ld), %s\n", 1654 - dir->i_sb->s_id, dir->i_ino, dentry->d_name.name); 1673 + dfprintk(VFS, "NFS: mkdir(%s/%ld), %pd\n", 1674 + dir->i_sb->s_id, dir->i_ino, dentry); 1655 1675 1656 1676 attr.ia_valid = ATTR_MODE; 1657 1677 attr.ia_mode = mode | S_IFDIR; ··· 1678 1698 { 1679 1699 int error; 1680 1700 1681 - dfprintk(VFS, "NFS: rmdir(%s/%ld), %s\n", 1682 - dir->i_sb->s_id, dir->i_ino, dentry->d_name.name); 1701 + dfprintk(VFS, "NFS: rmdir(%s/%ld), %pd\n", 1702 + dir->i_sb->s_id, dir->i_ino, dentry); 1683 1703 1684 1704 trace_nfs_rmdir_enter(dir, dentry); 1685 1705 if (dentry->d_inode) { ··· 1714 1734 struct inode *inode = dentry->d_inode; 1715 1735 int error = -EBUSY; 1716 1736 1717 - dfprintk(VFS, "NFS: safe_remove(%s/%s)\n", 1718 - dentry->d_parent->d_name.name, dentry->d_name.name); 1737 + dfprintk(VFS, "NFS: safe_remove(%pd2)\n", dentry); 1719 1738 1720 1739 /* If the dentry was sillyrenamed, we simply 
call d_delete() */ 1721 1740 if (dentry->d_flags & DCACHE_NFSFS_RENAMED) { ··· 1747 1768 int error; 1748 1769 int need_rehash = 0; 1749 1770 1750 - dfprintk(VFS, "NFS: unlink(%s/%ld, %s)\n", dir->i_sb->s_id, 1751 - dir->i_ino, dentry->d_name.name); 1771 + dfprintk(VFS, "NFS: unlink(%s/%ld, %pd)\n", dir->i_sb->s_id, 1772 + dir->i_ino, dentry); 1752 1773 1753 1774 trace_nfs_unlink_enter(dir, dentry); 1754 1775 spin_lock(&dentry->d_lock); ··· 1798 1819 unsigned int pathlen = strlen(symname); 1799 1820 int error; 1800 1821 1801 - dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s)\n", dir->i_sb->s_id, 1802 - dir->i_ino, dentry->d_name.name, symname); 1822 + dfprintk(VFS, "NFS: symlink(%s/%ld, %pd, %s)\n", dir->i_sb->s_id, 1823 + dir->i_ino, dentry, symname); 1803 1824 1804 1825 if (pathlen > PAGE_SIZE) 1805 1826 return -ENAMETOOLONG; ··· 1821 1842 error = NFS_PROTO(dir)->symlink(dir, dentry, page, pathlen, &attr); 1822 1843 trace_nfs_symlink_exit(dir, dentry, error); 1823 1844 if (error != 0) { 1824 - dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s) error %d\n", 1845 + dfprintk(VFS, "NFS: symlink(%s/%ld, %pd, %s) error %d\n", 1825 1846 dir->i_sb->s_id, dir->i_ino, 1826 - dentry->d_name.name, symname, error); 1847 + dentry, symname, error); 1827 1848 d_drop(dentry); 1828 1849 __free_page(page); 1829 1850 return error; ··· 1850 1871 struct inode *inode = old_dentry->d_inode; 1851 1872 int error; 1852 1873 1853 - dfprintk(VFS, "NFS: link(%s/%s -> %s/%s)\n", 1854 - old_dentry->d_parent->d_name.name, old_dentry->d_name.name, 1855 - dentry->d_parent->d_name.name, dentry->d_name.name); 1874 + dfprintk(VFS, "NFS: link(%pd2 -> %pd2)\n", 1875 + old_dentry, dentry); 1856 1876 1857 1877 trace_nfs_link_enter(inode, dir, dentry); 1858 1878 NFS_PROTO(inode)->return_delegation(inode); ··· 1899 1921 struct dentry *dentry = NULL, *rehash = NULL; 1900 1922 int error = -EBUSY; 1901 1923 1902 - dfprintk(VFS, "NFS: rename(%s/%s -> %s/%s, ct=%d)\n", 1903 - old_dentry->d_parent->d_name.name, 
old_dentry->d_name.name, 1904 - new_dentry->d_parent->d_name.name, new_dentry->d_name.name, 1924 + dfprintk(VFS, "NFS: rename(%pd2 -> %pd2, ct=%d)\n", 1925 + old_dentry, new_dentry, 1905 1926 d_count(new_dentry)); 1906 1927 1907 1928 trace_nfs_rename_enter(old_dir, old_dentry, new_dir, new_dentry);
+6 -11
fs/nfs/direct.c
··· 124 124 ssize_t nfs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov, loff_t pos, unsigned long nr_segs) 125 125 { 126 126 #ifndef CONFIG_NFS_SWAP 127 - dprintk("NFS: nfs_direct_IO (%s) off/no(%Ld/%lu) EINVAL\n", 128 - iocb->ki_filp->f_path.dentry->d_name.name, 129 - (long long) pos, nr_segs); 127 + dprintk("NFS: nfs_direct_IO (%pD) off/no(%Ld/%lu) EINVAL\n", 128 + iocb->ki_filp, (long long) pos, nr_segs); 130 129 131 130 return -EINVAL; 132 131 #else ··· 908 909 count = iov_length(iov, nr_segs); 909 910 nfs_add_stats(mapping->host, NFSIOS_DIRECTREADBYTES, count); 910 911 911 - dfprintk(FILE, "NFS: direct read(%s/%s, %zd@%Ld)\n", 912 - file->f_path.dentry->d_parent->d_name.name, 913 - file->f_path.dentry->d_name.name, 914 - count, (long long) pos); 912 + dfprintk(FILE, "NFS: direct read(%pD2, %zd@%Ld)\n", 913 + file, count, (long long) pos); 915 914 916 915 retval = 0; 917 916 if (!count) ··· 962 965 count = iov_length(iov, nr_segs); 963 966 nfs_add_stats(mapping->host, NFSIOS_DIRECTWRITTENBYTES, count); 964 967 965 - dfprintk(FILE, "NFS: direct write(%s/%s, %zd@%Ld)\n", 966 - file->f_path.dentry->d_parent->d_name.name, 967 - file->f_path.dentry->d_name.name, 968 - count, (long long) pos); 968 + dfprintk(FILE, "NFS: direct write(%pD2, %zd@%Ld)\n", 969 + file, count, (long long) pos); 969 970 970 971 retval = generic_write_checks(file, &pos, &count, 0); 971 972 if (retval)
+43 -74
fs/nfs/file.c
··· 65 65 { 66 66 int res; 67 67 68 - dprintk("NFS: open file(%s/%s)\n", 69 - filp->f_path.dentry->d_parent->d_name.name, 70 - filp->f_path.dentry->d_name.name); 68 + dprintk("NFS: open file(%pD2)\n", filp); 71 69 72 70 nfs_inc_stats(inode, NFSIOS_VFSOPEN); 73 71 res = nfs_check_flags(filp->f_flags); ··· 79 81 int 80 82 nfs_file_release(struct inode *inode, struct file *filp) 81 83 { 82 - dprintk("NFS: release(%s/%s)\n", 83 - filp->f_path.dentry->d_parent->d_name.name, 84 - filp->f_path.dentry->d_name.name); 84 + dprintk("NFS: release(%pD2)\n", filp); 85 85 86 86 nfs_inc_stats(inode, NFSIOS_VFSRELEASE); 87 87 return nfs_release(inode, filp); ··· 119 123 120 124 loff_t nfs_file_llseek(struct file *filp, loff_t offset, int whence) 121 125 { 122 - dprintk("NFS: llseek file(%s/%s, %lld, %d)\n", 123 - filp->f_path.dentry->d_parent->d_name.name, 124 - filp->f_path.dentry->d_name.name, 125 - offset, whence); 126 + dprintk("NFS: llseek file(%pD2, %lld, %d)\n", 127 + filp, offset, whence); 126 128 127 129 /* 128 130 * whence == SEEK_END || SEEK_DATA || SEEK_HOLE => we must revalidate ··· 144 150 int 145 151 nfs_file_flush(struct file *file, fl_owner_t id) 146 152 { 147 - struct dentry *dentry = file->f_path.dentry; 148 - struct inode *inode = dentry->d_inode; 153 + struct inode *inode = file_inode(file); 149 154 150 - dprintk("NFS: flush(%s/%s)\n", 151 - dentry->d_parent->d_name.name, 152 - dentry->d_name.name); 155 + dprintk("NFS: flush(%pD2)\n", file); 153 156 154 157 nfs_inc_stats(inode, NFSIOS_VFSFLUSH); 155 158 if ((file->f_mode & FMODE_WRITE) == 0) ··· 168 177 nfs_file_read(struct kiocb *iocb, const struct iovec *iov, 169 178 unsigned long nr_segs, loff_t pos) 170 179 { 171 - struct dentry * dentry = iocb->ki_filp->f_path.dentry; 172 - struct inode * inode = dentry->d_inode; 180 + struct inode *inode = file_inode(iocb->ki_filp); 173 181 ssize_t result; 174 182 175 183 if (iocb->ki_filp->f_flags & O_DIRECT) 176 184 return nfs_file_direct_read(iocb, iov, nr_segs, pos, 
true); 177 185 178 - dprintk("NFS: read(%s/%s, %lu@%lu)\n", 179 - dentry->d_parent->d_name.name, dentry->d_name.name, 186 + dprintk("NFS: read(%pD2, %lu@%lu)\n", 187 + iocb->ki_filp, 180 188 (unsigned long) iov_length(iov, nr_segs), (unsigned long) pos); 181 189 182 190 result = nfs_revalidate_mapping(inode, iocb->ki_filp->f_mapping); ··· 193 203 struct pipe_inode_info *pipe, size_t count, 194 204 unsigned int flags) 195 205 { 196 - struct dentry *dentry = filp->f_path.dentry; 197 - struct inode *inode = dentry->d_inode; 206 + struct inode *inode = file_inode(filp); 198 207 ssize_t res; 199 208 200 - dprintk("NFS: splice_read(%s/%s, %lu@%Lu)\n", 201 - dentry->d_parent->d_name.name, dentry->d_name.name, 202 - (unsigned long) count, (unsigned long long) *ppos); 209 + dprintk("NFS: splice_read(%pD2, %lu@%Lu)\n", 210 + filp, (unsigned long) count, (unsigned long long) *ppos); 203 211 204 212 res = nfs_revalidate_mapping(inode, filp->f_mapping); 205 213 if (!res) { ··· 212 224 int 213 225 nfs_file_mmap(struct file * file, struct vm_area_struct * vma) 214 226 { 215 - struct dentry *dentry = file->f_path.dentry; 216 - struct inode *inode = dentry->d_inode; 227 + struct inode *inode = file_inode(file); 217 228 int status; 218 229 219 - dprintk("NFS: mmap(%s/%s)\n", 220 - dentry->d_parent->d_name.name, dentry->d_name.name); 230 + dprintk("NFS: mmap(%pD2)\n", file); 221 231 222 232 /* Note: generic_file_mmap() returns ENOSYS on nommu systems 223 233 * so we call that before revalidating the mapping ··· 244 258 int 245 259 nfs_file_fsync_commit(struct file *file, loff_t start, loff_t end, int datasync) 246 260 { 247 - struct dentry *dentry = file->f_path.dentry; 248 261 struct nfs_open_context *ctx = nfs_file_open_context(file); 249 - struct inode *inode = dentry->d_inode; 262 + struct inode *inode = file_inode(file); 250 263 int have_error, do_resend, status; 251 264 int ret = 0; 252 265 253 - dprintk("NFS: fsync file(%s/%s) datasync %d\n", 254 - 
dentry->d_parent->d_name.name, dentry->d_name.name, 255 - datasync); 266 + dprintk("NFS: fsync file(%pD2) datasync %d\n", file, datasync); 256 267 257 268 nfs_inc_stats(inode, NFSIOS_VFSFSYNC); 258 269 do_resend = test_and_clear_bit(NFS_CONTEXT_RESEND_WRITES, &ctx->flags); ··· 354 371 struct page *page; 355 372 int once_thru = 0; 356 373 357 - dfprintk(PAGECACHE, "NFS: write_begin(%s/%s(%ld), %u@%lld)\n", 358 - file->f_path.dentry->d_parent->d_name.name, 359 - file->f_path.dentry->d_name.name, 360 - mapping->host->i_ino, len, (long long) pos); 374 + dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%ld), %u@%lld)\n", 375 + file, mapping->host->i_ino, len, (long long) pos); 361 376 362 377 start: 363 378 /* ··· 395 414 struct nfs_open_context *ctx = nfs_file_open_context(file); 396 415 int status; 397 416 398 - dfprintk(PAGECACHE, "NFS: write_end(%s/%s(%ld), %u@%lld)\n", 399 - file->f_path.dentry->d_parent->d_name.name, 400 - file->f_path.dentry->d_name.name, 401 - mapping->host->i_ino, len, (long long) pos); 417 + dfprintk(PAGECACHE, "NFS: write_end(%pD2(%ld), %u@%lld)\n", 418 + file, mapping->host->i_ino, len, (long long) pos); 402 419 403 420 /* 404 421 * Zero any uninitialised parts of the page, and then mark the page ··· 580 601 { 581 602 struct page *page = vmf->page; 582 603 struct file *filp = vma->vm_file; 583 - struct dentry *dentry = filp->f_path.dentry; 604 + struct inode *inode = file_inode(filp); 584 605 unsigned pagelen; 585 606 int ret = VM_FAULT_NOPAGE; 586 607 struct address_space *mapping; 587 608 588 - dfprintk(PAGECACHE, "NFS: vm_page_mkwrite(%s/%s(%ld), offset %lld)\n", 589 - dentry->d_parent->d_name.name, dentry->d_name.name, 590 - filp->f_mapping->host->i_ino, 609 + dfprintk(PAGECACHE, "NFS: vm_page_mkwrite(%pD2(%ld), offset %lld)\n", 610 + filp, filp->f_mapping->host->i_ino, 591 611 (long long)page_offset(page)); 592 612 593 613 /* make sure the cache has finished storing the page */ 594 - nfs_fscache_wait_on_page_write(NFS_I(dentry->d_inode), 
page); 614 + nfs_fscache_wait_on_page_write(NFS_I(inode), page); 595 615 596 616 lock_page(page); 597 617 mapping = page_file_mapping(page); 598 - if (mapping != dentry->d_inode->i_mapping) 618 + if (mapping != inode->i_mapping) 599 619 goto out_unlock; 600 620 601 621 wait_on_page_writeback(page); ··· 637 659 ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov, 638 660 unsigned long nr_segs, loff_t pos) 639 661 { 640 - struct dentry * dentry = iocb->ki_filp->f_path.dentry; 641 - struct inode * inode = dentry->d_inode; 662 + struct file *file = iocb->ki_filp; 663 + struct inode *inode = file_inode(file); 642 664 unsigned long written = 0; 643 665 ssize_t result; 644 666 size_t count = iov_length(iov, nr_segs); 645 667 646 - result = nfs_key_timeout_notify(iocb->ki_filp, inode); 668 + result = nfs_key_timeout_notify(file, inode); 647 669 if (result) 648 670 return result; 649 671 650 - if (iocb->ki_filp->f_flags & O_DIRECT) 672 + if (file->f_flags & O_DIRECT) 651 673 return nfs_file_direct_write(iocb, iov, nr_segs, pos, true); 652 674 653 - dprintk("NFS: write(%s/%s, %lu@%Ld)\n", 654 - dentry->d_parent->d_name.name, dentry->d_name.name, 655 - (unsigned long) count, (long long) pos); 675 + dprintk("NFS: write(%pD2, %lu@%Ld)\n", 676 + file, (unsigned long) count, (long long) pos); 656 677 657 678 result = -EBUSY; 658 679 if (IS_SWAPFILE(inode)) ··· 659 682 /* 660 683 * O_APPEND implies that we must revalidate the file length. 
661 684 */ 662 - if (iocb->ki_filp->f_flags & O_APPEND) { 663 - result = nfs_revalidate_file_size(inode, iocb->ki_filp); 685 + if (file->f_flags & O_APPEND) { 686 + result = nfs_revalidate_file_size(inode, file); 664 687 if (result) 665 688 goto out; 666 689 } ··· 674 697 written = result; 675 698 676 699 /* Return error values for O_DSYNC and IS_SYNC() */ 677 - if (result >= 0 && nfs_need_sync_write(iocb->ki_filp, inode)) { 678 - int err = vfs_fsync(iocb->ki_filp, 0); 700 + if (result >= 0 && nfs_need_sync_write(file, inode)) { 701 + int err = vfs_fsync(file, 0); 679 702 if (err < 0) 680 703 result = err; 681 704 } ··· 694 717 struct file *filp, loff_t *ppos, 695 718 size_t count, unsigned int flags) 696 719 { 697 - struct dentry *dentry = filp->f_path.dentry; 698 - struct inode *inode = dentry->d_inode; 720 + struct inode *inode = file_inode(filp); 699 721 unsigned long written = 0; 700 722 ssize_t ret; 701 723 702 - dprintk("NFS splice_write(%s/%s, %lu@%llu)\n", 703 - dentry->d_parent->d_name.name, dentry->d_name.name, 704 - (unsigned long) count, (unsigned long long) *ppos); 724 + dprintk("NFS splice_write(%pD2, %lu@%llu)\n", 725 + filp, (unsigned long) count, (unsigned long long) *ppos); 705 726 706 727 /* 707 728 * The combination of splice and an O_APPEND destination is disallowed. 
··· 858 883 int ret = -ENOLCK; 859 884 int is_local = 0; 860 885 861 - dprintk("NFS: lock(%s/%s, t=%x, fl=%x, r=%lld:%lld)\n", 862 - filp->f_path.dentry->d_parent->d_name.name, 863 - filp->f_path.dentry->d_name.name, 864 - fl->fl_type, fl->fl_flags, 886 + dprintk("NFS: lock(%pD2, t=%x, fl=%x, r=%lld:%lld)\n", 887 + filp, fl->fl_type, fl->fl_flags, 865 888 (long long)fl->fl_start, (long long)fl->fl_end); 866 889 867 890 nfs_inc_stats(inode, NFSIOS_VFSLOCK); ··· 896 923 struct inode *inode = filp->f_mapping->host; 897 924 int is_local = 0; 898 925 899 - dprintk("NFS: flock(%s/%s, t=%x, fl=%x)\n", 900 - filp->f_path.dentry->d_parent->d_name.name, 901 - filp->f_path.dentry->d_name.name, 902 - fl->fl_type, fl->fl_flags); 926 + dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n", 927 + filp, fl->fl_type, fl->fl_flags); 903 928 904 929 if (!(fl->fl_flags & FL_FLOCK)) 905 930 return -ENOLCK; ··· 931 960 */ 932 961 int nfs_setlease(struct file *file, long arg, struct file_lock **fl) 933 962 { 934 - dprintk("NFS: setlease(%s/%s, arg=%ld)\n", 935 - file->f_path.dentry->d_parent->d_name.name, 936 - file->f_path.dentry->d_name.name, arg); 963 + dprintk("NFS: setlease(%pD2, arg=%ld)\n", file, arg); 937 964 return -EINVAL; 938 965 } 939 966 EXPORT_SYMBOL_GPL(nfs_setlease);
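The NFS hunks above replace every `dentry->d_parent->d_name.name, dentry->d_name.name` argument pair with a single `%pd2` (dentry) or `%pD2` (file) specifier. Besides halving the argument list, the in-kernel implementation renders the name under the dcache's own locking, so the string cannot be freed mid-print. As a rough userspace illustration (mock types, not the real `lib/vsprintf.c` code), `%pd2` expands to the last two path components:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical userspace mock of struct dentry: just enough to show
 * what the %pd2 printk specifier expands to. */
struct mock_dentry {
	const char *d_name;
	struct mock_dentry *d_parent;	/* a root dentry points to itself */
};

/* Render the last two path components, as %pd2 would. The real code
 * walks the dentry chain under RCU; this only mimics the output. */
static void format_pd2(char *buf, size_t len, const struct mock_dentry *d)
{
	if (d->d_parent && d->d_parent != d)
		snprintf(buf, len, "%s/%s", d->d_parent->d_name, d->d_name);
	else
		snprintf(buf, len, "%s", d->d_name);
}
```

The `%pD` variants take a `struct file *` and print its dentry's name the same way, which is why call sites above that only have a `file` no longer need to dig out `f_path.dentry` first.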
+2 -3
fs/nfs/namespace.c
··· 253 253 254 254 dprintk("--> nfs_do_submount()\n"); 255 255 256 - dprintk("%s: submounting on %s/%s\n", __func__, 257 - dentry->d_parent->d_name.name, 258 - dentry->d_name.name); 256 + dprintk("%s: submounting on %pd2\n", __func__, 257 + dentry); 259 258 if (page == NULL) 260 259 goto out; 261 260 devname = nfs_devname(dentry, page, PAGE_SIZE);
+4 -4
fs/nfs/nfs3proc.c
··· 321 321 umode_t mode = sattr->ia_mode; 322 322 int status = -ENOMEM; 323 323 324 - dprintk("NFS call create %s\n", dentry->d_name.name); 324 + dprintk("NFS call create %pd\n", dentry); 325 325 326 326 data = nfs3_alloc_createdata(); 327 327 if (data == NULL) ··· 548 548 if (len > NFS3_MAXPATHLEN) 549 549 return -ENAMETOOLONG; 550 550 551 - dprintk("NFS call symlink %s\n", dentry->d_name.name); 551 + dprintk("NFS call symlink %pd\n", dentry); 552 552 553 553 data = nfs3_alloc_createdata(); 554 554 if (data == NULL) ··· 576 576 umode_t mode = sattr->ia_mode; 577 577 int status = -ENOMEM; 578 578 579 - dprintk("NFS call mkdir %s\n", dentry->d_name.name); 579 + dprintk("NFS call mkdir %pd\n", dentry); 580 580 581 581 sattr->ia_mode &= ~current_umask(); 582 582 ··· 695 695 umode_t mode = sattr->ia_mode; 696 696 int status = -ENOMEM; 697 697 698 - dprintk("NFS call mknod %s %u:%u\n", dentry->d_name.name, 698 + dprintk("NFS call mknod %pd %u:%u\n", dentry, 699 699 MAJOR(rdev), MINOR(rdev)); 700 700 701 701 sattr->ia_mode &= ~current_umask();
+1 -3
fs/nfs/nfs4file.c
··· 31 31 * -EOPENSTALE. The VFS will retry the lookup/create/open. 32 32 */ 33 33 34 - dprintk("NFS: open file(%s/%s)\n", 35 - dentry->d_parent->d_name.name, 36 - dentry->d_name.name); 34 + dprintk("NFS: open file(%pd2)\n", dentry); 37 35 38 36 if ((openflags & O_ACCMODE) == 3) 39 37 openflags--;
+3 -4
fs/nfs/nfs4namespace.c
··· 292 292 if (locations == NULL || locations->nlocations <= 0) 293 293 goto out; 294 294 295 - dprintk("%s: referral at %s/%s\n", __func__, 296 - dentry->d_parent->d_name.name, dentry->d_name.name); 295 + dprintk("%s: referral at %pd2\n", __func__, dentry); 297 296 298 297 page = (char *) __get_free_page(GFP_USER); 299 298 if (!page) ··· 356 357 mnt = ERR_PTR(-ENOENT); 357 358 358 359 parent = dget_parent(dentry); 359 - dprintk("%s: getting locations for %s/%s\n", 360 - __func__, parent->d_name.name, dentry->d_name.name); 360 + dprintk("%s: getting locations for %pd2\n", 361 + __func__, dentry); 361 362 362 363 err = nfs4_proc_fs_locations(client, parent->d_inode, &dentry->d_name, fs_locations, page); 363 364 dput(parent);
+2 -3
fs/nfs/nfs4proc.c
··· 3771 3771 }; 3772 3772 int status; 3773 3773 3774 - dprintk("%s: dentry = %s/%s, cookie = %Lu\n", __func__, 3775 - dentry->d_parent->d_name.name, 3776 - dentry->d_name.name, 3774 + dprintk("%s: dentry = %pd2, cookie = %Lu\n", __func__, 3775 + dentry, 3777 3776 (unsigned long long)cookie); 3778 3777 nfs4_setup_readdir(cookie, NFS_I(dir)->cookieverf, dentry, &args); 3779 3778 res.pgbase = args.pgbase;
+4 -4
fs/nfs/proc.c
··· 235 235 }; 236 236 int status = -ENOMEM; 237 237 238 - dprintk("NFS call create %s\n", dentry->d_name.name); 238 + dprintk("NFS call create %pd\n", dentry); 239 239 data = nfs_alloc_createdata(dir, dentry, sattr); 240 240 if (data == NULL) 241 241 goto out; ··· 265 265 umode_t mode; 266 266 int status = -ENOMEM; 267 267 268 - dprintk("NFS call mknod %s\n", dentry->d_name.name); 268 + dprintk("NFS call mknod %pd\n", dentry); 269 269 270 270 mode = sattr->ia_mode; 271 271 if (S_ISFIFO(mode)) { ··· 423 423 }; 424 424 int status = -ENAMETOOLONG; 425 425 426 - dprintk("NFS call symlink %s\n", dentry->d_name.name); 426 + dprintk("NFS call symlink %pd\n", dentry); 427 427 428 428 if (len > NFS2_MAXPATHLEN) 429 429 goto out; ··· 462 462 }; 463 463 int status = -ENOMEM; 464 464 465 - dprintk("NFS call mkdir %s\n", dentry->d_name.name); 465 + dprintk("NFS call mkdir %pd\n", dentry); 466 466 data = nfs_alloc_createdata(dir, dentry, sattr); 467 467 if (data == NULL) 468 468 goto out;
+4 -5
fs/nfs/unlink.c
··· 495 495 struct rpc_task *task; 496 496 int error = -EBUSY; 497 497 498 - dfprintk(VFS, "NFS: silly-rename(%s/%s, ct=%d)\n", 499 - dentry->d_parent->d_name.name, dentry->d_name.name, 500 - d_count(dentry)); 498 + dfprintk(VFS, "NFS: silly-rename(%pd2, ct=%d)\n", 499 + dentry, d_count(dentry)); 501 500 nfs_inc_stats(dir, NFSIOS_SILLYRENAME); 502 501 503 502 /* ··· 520 521 SILLYNAME_FILEID_LEN, fileid, 521 522 SILLYNAME_COUNTER_LEN, sillycounter); 522 523 523 - dfprintk(VFS, "NFS: trying to rename %s to %s\n", 524 - dentry->d_name.name, silly); 524 + dfprintk(VFS, "NFS: trying to rename %pd to %s\n", 525 + dentry, silly); 525 526 526 527 sdentry = lookup_one_len(silly, dentry->d_parent, slen); 527 528 /*
+2 -4
fs/nfs/write.c
··· 954 954 955 955 nfs_inc_stats(inode, NFSIOS_VFSUPDATEPAGE); 956 956 957 - dprintk("NFS: nfs_updatepage(%s/%s %d@%lld)\n", 958 - file->f_path.dentry->d_parent->d_name.name, 959 - file->f_path.dentry->d_name.name, count, 960 - (long long)(page_file_offset(page) + offset)); 957 + dprintk("NFS: nfs_updatepage(%pD2 %d@%lld)\n", 958 + file, count, (long long)(page_file_offset(page) + offset)); 961 959 962 960 if (nfs_can_extend_write(file, page, inode)) { 963 961 count = max(count + offset, nfs_page_length(page));
+6 -6
fs/nfsd/nfs4recover.c
··· 385 385 386 386 status = vfs_rmdir(parent->d_inode, child); 387 387 if (status) 388 - printk("failed to remove client recovery directory %s\n", 389 - child->d_name.name); 388 + printk("failed to remove client recovery directory %pd\n", 389 + child); 390 390 /* Keep trying, success or failure: */ 391 391 return 0; 392 392 } ··· 410 410 nfs4_release_reclaim(nn); 411 411 if (status) 412 412 printk("nfsd4: failed to purge old clients from recovery" 413 - " directory %s\n", nn->rec_file->f_path.dentry->d_name.name); 413 + " directory %pD\n", nn->rec_file); 414 414 } 415 415 416 416 static int 417 417 load_recdir(struct dentry *parent, struct dentry *child, struct nfsd_net *nn) 418 418 { 419 419 if (child->d_name.len != HEXDIR_LEN - 1) { 420 - printk("nfsd4: illegal name %s in recovery directory\n", 421 - child->d_name.name); 420 + printk("nfsd4: illegal name %pd in recovery directory\n", 421 + child); 422 422 /* Keep trying; maybe the others are OK: */ 423 423 return 0; 424 424 } ··· 437 437 status = nfsd4_list_rec_dir(load_recdir, nn); 438 438 if (status) 439 439 printk("nfsd4: failed loading clients from recovery" 440 - " directory %s\n", nn->rec_file->f_path.dentry->d_name.name); 440 + " directory %pD\n", nn->rec_file); 441 441 return status; 442 442 } 443 443
+7 -10
fs/nfsd/nfs4state.c
··· 3008 3008 return NULL; 3009 3009 locks_init_lock(fl); 3010 3010 fl->fl_lmops = &nfsd_lease_mng_ops; 3011 - fl->fl_flags = FL_LEASE; 3011 + fl->fl_flags = FL_DELEG; 3012 3012 fl->fl_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK; 3013 3013 fl->fl_end = OFFSET_MAX; 3014 3014 fl->fl_owner = (fl_owner_t)(dp->dl_file); ··· 3843 3843 struct nfs4_ol_stateid *stp; 3844 3844 struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); 3845 3845 3846 - dprintk("NFSD: nfsd4_open_confirm on file %.*s\n", 3847 - (int)cstate->current_fh.fh_dentry->d_name.len, 3848 - cstate->current_fh.fh_dentry->d_name.name); 3846 + dprintk("NFSD: nfsd4_open_confirm on file %pd\n", 3847 + cstate->current_fh.fh_dentry); 3849 3848 3850 3849 status = fh_verify(rqstp, &cstate->current_fh, S_IFREG, 0); 3851 3850 if (status) ··· 3921 3922 struct nfs4_ol_stateid *stp; 3922 3923 struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); 3923 3924 3924 - dprintk("NFSD: nfsd4_open_downgrade on file %.*s\n", 3925 - (int)cstate->current_fh.fh_dentry->d_name.len, 3926 - cstate->current_fh.fh_dentry->d_name.name); 3925 + dprintk("NFSD: nfsd4_open_downgrade on file %pd\n", 3926 + cstate->current_fh.fh_dentry); 3927 3927 3928 3928 /* We don't yet support WANT bits: */ 3929 3929 if (od->od_deleg_want) ··· 3978 3980 struct net *net = SVC_NET(rqstp); 3979 3981 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 3980 3982 3981 - dprintk("NFSD: nfsd4_close on file %.*s\n", 3982 - (int)cstate->current_fh.fh_dentry->d_name.len, 3983 - cstate->current_fh.fh_dentry->d_name.name); 3983 + dprintk("NFSD: nfsd4_close on file %pd\n", 3984 + cstate->current_fh.fh_dentry); 3984 3985 3985 3986 nfs4_lock_state(); 3986 3987 status = nfs4_preprocess_seqid_op(cstate, close->cl_seqid,
+13 -15
fs/nfsd/nfsfh.c
··· 47 47 tdentry = parent; 48 48 } 49 49 if (tdentry != exp->ex_path.dentry) 50 - dprintk("nfsd_acceptable failed at %p %s\n", tdentry, tdentry->d_name.name); 50 + dprintk("nfsd_acceptable failed at %p %pd\n", tdentry, tdentry); 51 51 rv = (tdentry == exp->ex_path.dentry); 52 52 dput(tdentry); 53 53 return rv; ··· 253 253 254 254 if (S_ISDIR(dentry->d_inode->i_mode) && 255 255 (dentry->d_flags & DCACHE_DISCONNECTED)) { 256 - printk("nfsd: find_fh_dentry returned a DISCONNECTED directory: %s/%s\n", 257 - dentry->d_parent->d_name.name, dentry->d_name.name); 256 + printk("nfsd: find_fh_dentry returned a DISCONNECTED directory: %pd2\n", 257 + dentry); 258 258 } 259 259 260 260 fhp->fh_dentry = dentry; ··· 361 361 error = nfsd_permission(rqstp, exp, dentry, access); 362 362 363 363 if (error) { 364 - dprintk("fh_verify: %s/%s permission failure, " 364 + dprintk("fh_verify: %pd2 permission failure, " 365 365 "acc=%x, error=%d\n", 366 - dentry->d_parent->d_name.name, 367 - dentry->d_name.name, 366 + dentry, 368 367 access, ntohl(error)); 369 368 } 370 369 out: ··· 513 514 */ 514 515 515 516 struct inode * inode = dentry->d_inode; 516 - struct dentry *parent = dentry->d_parent; 517 517 __u32 *datap; 518 518 dev_t ex_dev = exp_sb(exp)->s_dev; 519 519 520 - dprintk("nfsd: fh_compose(exp %02x:%02x/%ld %s/%s, ino=%ld)\n", 520 + dprintk("nfsd: fh_compose(exp %02x:%02x/%ld %pd2, ino=%ld)\n", 521 521 MAJOR(ex_dev), MINOR(ex_dev), 522 522 (long) exp->ex_path.dentry->d_inode->i_ino, 523 - parent->d_name.name, dentry->d_name.name, 523 + dentry, 524 524 (inode ? 
inode->i_ino : 0)); 525 525 526 526 /* Choose filehandle version and fsid type based on ··· 532 534 fh_put(ref_fh); 533 535 534 536 if (fhp->fh_locked || fhp->fh_dentry) { 535 - printk(KERN_ERR "fh_compose: fh %s/%s not initialized!\n", 536 - parent->d_name.name, dentry->d_name.name); 537 + printk(KERN_ERR "fh_compose: fh %pd2 not initialized!\n", 538 + dentry); 537 539 } 538 540 if (fhp->fh_maxsize < NFS_FHSIZE) 539 - printk(KERN_ERR "fh_compose: called with maxsize %d! %s/%s\n", 541 + printk(KERN_ERR "fh_compose: called with maxsize %d! %pd2\n", 540 542 fhp->fh_maxsize, 541 - parent->d_name.name, dentry->d_name.name); 543 + dentry); 542 544 543 545 fhp->fh_dentry = dget(dentry); /* our internal copy */ 544 546 fhp->fh_export = exp; ··· 611 613 printk(KERN_ERR "fh_update: fh not verified!\n"); 612 614 goto out; 613 615 out_negative: 614 - printk(KERN_ERR "fh_update: %s/%s still negative!\n", 615 - dentry->d_parent->d_name.name, dentry->d_name.name); 616 + printk(KERN_ERR "fh_update: %pd2 still negative!\n", 617 + dentry); 616 618 goto out; 617 619 } 618 620
+2 -2
fs/nfsd/nfsfh.h
··· 173 173 BUG_ON(!dentry); 174 174 175 175 if (fhp->fh_locked) { 176 - printk(KERN_WARNING "fh_lock: %s/%s already locked!\n", 177 - dentry->d_parent->d_name.name, dentry->d_name.name); 176 + printk(KERN_WARNING "fh_lock: %pd2 already locked!\n", 177 + dentry); 178 178 return; 179 179 } 180 180
+13 -10
fs/nfsd/vfs.c
··· 427 427 goto out_nfserr; 428 428 fh_lock(fhp); 429 429 430 - host_err = notify_change(dentry, iap); 430 + host_err = notify_change(dentry, iap, NULL); 431 431 err = nfserrno(host_err); 432 432 fh_unlock(fhp); 433 433 } ··· 988 988 ia.ia_valid = ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV; 989 989 990 990 mutex_lock(&dentry->d_inode->i_mutex); 991 - notify_change(dentry, &ia); 991 + /* 992 + * Note we call this on write, so notify_change will not 993 + * encounter any conflicting delegations: 994 + */ 995 + notify_change(dentry, &ia, NULL); 992 996 mutex_unlock(&dentry->d_inode->i_mutex); 993 997 } 994 998 ··· 1321 1317 if (!fhp->fh_locked) { 1322 1318 /* not actually possible */ 1323 1319 printk(KERN_ERR 1324 - "nfsd_create: parent %s/%s not locked!\n", 1325 - dentry->d_parent->d_name.name, 1326 - dentry->d_name.name); 1320 + "nfsd_create: parent %pd2 not locked!\n", 1321 + dentry); 1327 1322 err = nfserr_io; 1328 1323 goto out; 1329 1324 } ··· 1332 1329 */ 1333 1330 err = nfserr_exist; 1334 1331 if (dchild->d_inode) { 1335 - dprintk("nfsd_create: dentry %s/%s not negative!\n", 1336 - dentry->d_name.name, dchild->d_name.name); 1332 + dprintk("nfsd_create: dentry %pd/%pd not negative!\n", 1333 + dentry, dchild); 1337 1334 goto out; 1338 1335 } 1339 1336 ··· 1740 1737 err = nfserrno(host_err); 1741 1738 goto out_dput; 1742 1739 } 1743 - host_err = vfs_link(dold, dirp, dnew); 1740 + host_err = vfs_link(dold, dirp, dnew, NULL); 1744 1741 if (!host_err) { 1745 1742 err = nfserrno(commit_metadata(ffhp)); 1746 1743 if (!err) ··· 1841 1838 if (host_err) 1842 1839 goto out_dput_new; 1843 1840 } 1844 - host_err = vfs_rename(fdir, odentry, tdir, ndentry); 1841 + host_err = vfs_rename(fdir, odentry, tdir, ndentry, NULL); 1845 1842 if (!host_err) { 1846 1843 host_err = commit_metadata(tfhp); 1847 1844 if (!host_err) ··· 1914 1911 if (host_err) 1915 1912 goto out_put; 1916 1913 if (type != S_IFDIR) 1917 - host_err = vfs_unlink(dirp, rdentry); 1914 + host_err = 
vfs_unlink(dirp, rdentry, NULL); 1918 1915 else 1919 1916 host_err = vfs_rmdir(dirp, rdentry); 1920 1917 if (!host_err)
+1 -1
fs/ntfs/inode.c
··· 55 55 * 56 56 * Return 1 if the attributes match and 0 if not. 57 57 * 58 - * NOTE: This function runs with the inode->i_lock spin lock held so it is not 58 + * NOTE: This function runs with the inode_hash_lock spin lock held so it is not 59 59 * allowed to sleep. 60 60 */ 61 61 int ntfs_test_inode(struct inode *vi, ntfs_attr *na)
-10
fs/ocfs2/inode.c
··· 386 386 u32 generation = 0; 387 387 388 388 status = -EINVAL; 389 - if (inode == NULL || inode->i_sb == NULL) { 390 - mlog(ML_ERROR, "bad inode\n"); 391 - return status; 392 - } 393 389 sb = inode->i_sb; 394 390 osb = OCFS2_SB(sb); 395 - 396 - if (!args) { 397 - mlog(ML_ERROR, "bad inode args\n"); 398 - make_bad_inode(inode); 399 - return status; 400 - } 401 391 402 392 /* 403 393 * To improve performance of cold-cache inode stats, we take
+24 -8
fs/open.c
··· 57 57 newattrs.ia_valid |= ret | ATTR_FORCE; 58 58 59 59 mutex_lock(&dentry->d_inode->i_mutex); 60 - ret = notify_change(dentry, &newattrs); 60 + /* Note any delegations or leases have already been broken: */ 61 + ret = notify_change(dentry, &newattrs, NULL); 61 62 mutex_unlock(&dentry->d_inode->i_mutex); 62 63 return ret; 63 64 } ··· 465 464 static int chmod_common(struct path *path, umode_t mode) 466 465 { 467 466 struct inode *inode = path->dentry->d_inode; 467 + struct inode *delegated_inode = NULL; 468 468 struct iattr newattrs; 469 469 int error; 470 470 471 471 error = mnt_want_write(path->mnt); 472 472 if (error) 473 473 return error; 474 + retry_deleg: 474 475 mutex_lock(&inode->i_mutex); 475 476 error = security_path_chmod(path, mode); 476 477 if (error) 477 478 goto out_unlock; 478 479 newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO); 479 480 newattrs.ia_valid = ATTR_MODE | ATTR_CTIME; 480 - error = notify_change(path->dentry, &newattrs); 481 + error = notify_change(path->dentry, &newattrs, &delegated_inode); 481 482 out_unlock: 482 483 mutex_unlock(&inode->i_mutex); 484 + if (delegated_inode) { 485 + error = break_deleg_wait(&delegated_inode); 486 + if (!error) 487 + goto retry_deleg; 488 + } 483 489 mnt_drop_write(path->mnt); 484 490 return error; 485 491 } ··· 530 522 static int chown_common(struct path *path, uid_t user, gid_t group) 531 523 { 532 524 struct inode *inode = path->dentry->d_inode; 525 + struct inode *delegated_inode = NULL; 533 526 int error; 534 527 struct iattr newattrs; 535 528 kuid_t uid; ··· 555 546 if (!S_ISDIR(inode->i_mode)) 556 547 newattrs.ia_valid |= 557 548 ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV; 549 + retry_deleg: 558 550 mutex_lock(&inode->i_mutex); 559 551 error = security_path_chown(path, uid, gid); 560 552 if (!error) 561 - error = notify_change(path->dentry, &newattrs); 553 + error = notify_change(path->dentry, &newattrs, &delegated_inode); 562 554 mutex_unlock(&inode->i_mutex); 563 - 
555 + if (delegated_inode) { 556 + error = break_deleg_wait(&delegated_inode); 557 + if (!error) 558 + goto retry_deleg; 559 + } 564 560 return error; 565 561 } 566 562 ··· 699 685 } 700 686 701 687 f->f_mapping = inode->i_mapping; 702 - file_sb_list_add(f, inode->i_sb); 703 688 704 689 if (unlikely(f->f_mode & FMODE_PATH)) { 705 690 f->f_op = &empty_fops; ··· 706 693 } 707 694 708 695 f->f_op = fops_get(inode->i_fop); 696 + if (unlikely(WARN_ON(!f->f_op))) { 697 + error = -ENODEV; 698 + goto cleanup_all; 699 + } 709 700 710 701 error = security_file_open(f, cred); 711 702 if (error) ··· 719 702 if (error) 720 703 goto cleanup_all; 721 704 722 - if (!open && f->f_op) 705 + if (!open) 723 706 open = f->f_op->open; 724 707 if (open) { 725 708 error = open(inode, f); ··· 737 720 738 721 cleanup_all: 739 722 fops_put(f->f_op); 740 - file_sb_list_del(f); 741 723 if (f->f_mode & FMODE_WRITE) { 742 724 put_write_access(inode); 743 725 if (!special_file(inode->i_mode)) { ··· 1039 1023 return 0; 1040 1024 } 1041 1025 1042 - if (filp->f_op && filp->f_op->flush) 1026 + if (filp->f_op->flush) 1043 1027 retval = filp->f_op->flush(filp, id); 1044 1028 1045 1029 if (likely(!(filp->f_mode & FMODE_PATH))) {
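The `chmod_common()`/`chown_common()` changes above show the delegation-breaking pattern this series introduces: `notify_change()` grows a `delegated_inode` out-parameter, and because the caller holds `i_mutex`, a conflicting lease can only be *initiated*, not waited for, inside the lock. The caller drops the mutex, waits in `break_deleg_wait()`, and jumps back to `retry_deleg`. A userspace sketch of the control flow (all names are mock stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct mock_inode { int has_delegation; };

/* Mock notify_change(): with a delegation outstanding it cannot block
 * (the caller holds i_mutex), so it reports the inode and bails out. */
static int mock_notify_change(struct mock_inode *inode,
			      struct mock_inode **delegated_inode)
{
	if (inode->has_delegation) {
		*delegated_inode = inode;	/* kernel: takes a reference */
		return 0;			/* caller must retry */
	}
	return 0;
}

/* Mock break_deleg_wait(): blocks until the delegation is returned. */
static int mock_break_deleg_wait(struct mock_inode **delegated_inode)
{
	(*delegated_inode)->has_delegation = 0;
	*delegated_inode = NULL;
	return 0;
}

/* The retry loop: lock, attempt, unlock, wait out the break, try again. */
static int mock_chmod_common(struct mock_inode *inode, int *attempts)
{
	struct mock_inode *delegated_inode = NULL;
	int error;
retry_deleg:
	(*attempts)++;
	/* mutex_lock(&inode->i_mutex); */
	error = mock_notify_change(inode, &delegated_inode);
	/* mutex_unlock(&inode->i_mutex); */
	if (delegated_inode) {
		error = mock_break_deleg_wait(&delegated_inode);
		if (!error)
			goto retry_deleg;
	}
	return error;
}
```

The same three-argument shape appears in the nfsd hunks above for `vfs_unlink()`, `vfs_link()` and `vfs_rename()`; nfsd passes `NULL` where it has already ensured no conflicting delegation can exist.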
+6 -7
fs/pnode.c
··· 264 264 prev_src_mnt = child; 265 265 } 266 266 out: 267 - br_write_lock(&vfsmount_lock); 267 + lock_mount_hash(); 268 268 while (!list_empty(&tmp_list)) { 269 269 child = list_first_entry(&tmp_list, struct mount, mnt_hash); 270 270 umount_tree(child, 0); 271 271 } 272 - br_write_unlock(&vfsmount_lock); 272 + unlock_mount_hash(); 273 273 return ret; 274 274 } 275 275 ··· 278 278 */ 279 279 static inline int do_refcount_check(struct mount *mnt, int count) 280 280 { 281 - int mycount = mnt_get_count(mnt) - mnt->mnt_ghosts; 282 - return (mycount > count); 281 + return mnt_get_count(mnt) > count; 283 282 } 284 283 285 284 /* ··· 310 311 311 312 for (m = propagation_next(parent, parent); m; 312 313 m = propagation_next(m, parent)) { 313 - child = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint, 0); 314 + child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint); 314 315 if (child && list_empty(&child->mnt_mounts) && 315 316 (ret = do_refcount_check(child, 1))) 316 317 break; ··· 332 333 for (m = propagation_next(parent, parent); m; 333 334 m = propagation_next(m, parent)) { 334 335 335 - struct mount *child = __lookup_mnt(&m->mnt, 336 - mnt->mnt_mountpoint, 0); 336 + struct mount *child = __lookup_mnt_last(&m->mnt, 337 + mnt->mnt_mountpoint); 337 338 /* 338 339 * umount the child only if the child has no 339 340 * other children
+1 -9
fs/proc/self.c
··· 36 36 return NULL; 37 37 } 38 38 39 - static void proc_self_put_link(struct dentry *dentry, struct nameidata *nd, 40 - void *cookie) 41 - { 42 - char *s = nd_get_link(nd); 43 - if (!IS_ERR(s)) 44 - kfree(s); 45 - } 46 - 47 39 static const struct inode_operations proc_self_inode_operations = { 48 40 .readlink = proc_self_readlink, 49 41 .follow_link = proc_self_follow_link, 50 - .put_link = proc_self_put_link, 42 + .put_link = kfree_put_link, 51 43 }; 52 44 53 45 static unsigned self_inum;
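The `fs/proc/self.c` hunk above drops an open-coded `put_link` whose only job was to `kfree()` the string that `follow_link` allocated, in favor of the shared `kfree_put_link()` helper. A minimal userspace sketch of the pairing, with mock names (the real helper also handles `IS_ERR()` cookies):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int frees;	/* counts releases, for the test below */

/* Mock of the generic helper: release whatever follow_link allocated. */
static void mock_kfree_put_link(char *cookie)
{
	if (cookie) {		/* kernel: !IS_ERR(cookie) */
		free(cookie);
		frees++;
	}
}

/* Mock follow_link: hand back a freshly allocated target string, as
 * proc_self_readlink does with the "<pid>" name. */
static char *mock_follow_link(int pid)
{
	char *s = malloc(16);
	if (s)
		snprintf(s, 16, "%d", pid);
	return s;
}
```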
+4 -4
fs/proc_namespace.c
··· 20 20 struct proc_mounts *p = proc_mounts(file->private_data); 21 21 struct mnt_namespace *ns = p->ns; 22 22 unsigned res = POLLIN | POLLRDNORM; 23 + int event; 23 24 24 25 poll_wait(file, &p->ns->poll, wait); 25 26 26 - br_read_lock(&vfsmount_lock); 27 - if (p->m.poll_event != ns->event) { 28 - p->m.poll_event = ns->event; 27 + event = ACCESS_ONCE(ns->event); 28 + if (p->m.poll_event != event) { 29 + p->m.poll_event = event; 29 30 res |= POLLERR | POLLPRI; 30 31 } 31 - br_read_unlock(&vfsmount_lock); 32 32 33 33 return res; 34 34 }
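With `vfsmount_lock` gone from the tree, the poll path above no longer takes a read lock just to compare one integer: a single `ACCESS_ONCE()` load of `ns->event` suffices, since all the check needs is one coherent snapshot of a word-sized counter. A userspace sketch of the changed logic (mock structs; the macro matches the era's `include/linux/compiler.h` definition):

```c
#include <assert.h>

/* Force exactly one load through a volatile-qualified view of the object,
 * so the compiler cannot re-read ns->event between the compare and store. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

struct mock_ns { int event; };
struct mock_poll_state { int poll_event; };

/* Mirror of the mounts_poll() change: snapshot the event counter once and
 * compare against the last value this poller saw. Returns 1 when the
 * namespace changed since the previous poll. */
static int mounts_changed(struct mock_poll_state *p, struct mock_ns *ns)
{
	int event = ACCESS_ONCE(ns->event);

	if (p->poll_event != event) {
		p->poll_event = event;
		return 1;
	}
	return 0;
}
```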
-4
fs/qnx4/namei.c
··· 60 60 struct buffer_head *bh; 61 61 62 62 *res_dir = NULL; 63 - if (!dir->i_sb) { 64 - printk(KERN_WARNING "qnx4: no superblock on dir.\n"); 65 - return NULL; 66 - } 67 63 bh = NULL; 68 64 block = offset = blkofs = 0; 69 65 while (blkofs * QNX4_BLOCK_SIZE + offset < dir->i_size) {
+8 -17
fs/read_write.c
··· 257 257 258 258 fn = no_llseek; 259 259 if (file->f_mode & FMODE_LSEEK) { 260 - if (file->f_op && file->f_op->llseek) 260 + if (file->f_op->llseek) 261 261 fn = file->f_op->llseek; 262 262 } 263 263 return fn(file, offset, whence); ··· 384 384 385 385 if (!(file->f_mode & FMODE_READ)) 386 386 return -EBADF; 387 - if (!file->f_op || (!file->f_op->read && !file->f_op->aio_read)) 387 + if (!file->f_op->read && !file->f_op->aio_read) 388 388 return -EINVAL; 389 389 if (unlikely(!access_ok(VERIFY_WRITE, buf, count))) 390 390 return -EFAULT; ··· 433 433 const char __user *p; 434 434 ssize_t ret; 435 435 436 - if (!file->f_op || (!file->f_op->write && !file->f_op->aio_write)) 436 + if (!file->f_op->write && !file->f_op->aio_write) 437 437 return -EINVAL; 438 438 439 439 old_fs = get_fs(); ··· 460 460 461 461 if (!(file->f_mode & FMODE_WRITE)) 462 462 return -EBADF; 463 - if (!file->f_op || (!file->f_op->write && !file->f_op->aio_write)) 463 + if (!file->f_op->write && !file->f_op->aio_write) 464 464 return -EINVAL; 465 465 if (unlikely(!access_ok(VERIFY_READ, buf, count))) 466 466 return -EFAULT; ··· 727 727 io_fn_t fn; 728 728 iov_fn_t fnv; 729 729 730 - if (!file->f_op) { 731 - ret = -EINVAL; 732 - goto out; 733 - } 734 - 735 730 ret = rw_copy_check_uvector(type, uvector, nr_segs, 736 731 ARRAY_SIZE(iovstack), iovstack, &iov); 737 732 if (ret <= 0) ··· 773 778 { 774 779 if (!(file->f_mode & FMODE_READ)) 775 780 return -EBADF; 776 - if (!file->f_op || (!file->f_op->aio_read && !file->f_op->read)) 781 + if (!file->f_op->aio_read && !file->f_op->read) 777 782 return -EINVAL; 778 783 779 784 return do_readv_writev(READ, file, vec, vlen, pos); ··· 786 791 { 787 792 if (!(file->f_mode & FMODE_WRITE)) 788 793 return -EBADF; 789 - if (!file->f_op || (!file->f_op->aio_write && !file->f_op->write)) 794 + if (!file->f_op->aio_write && !file->f_op->write) 790 795 return -EINVAL; 791 796 792 797 return do_readv_writev(WRITE, file, vec, vlen, pos); ··· 901 906 io_fn_t fn; 902 907 
iov_fn_t fnv; 903 908 904 - ret = -EINVAL; 905 - if (!file->f_op) 906 - goto out; 907 - 908 909 ret = -EFAULT; 909 910 if (!access_ok(VERIFY_READ, uvector, nr_segs*sizeof(*uvector))) 910 911 goto out; ··· 956 965 goto out; 957 966 958 967 ret = -EINVAL; 959 - if (!file->f_op || (!file->f_op->aio_read && !file->f_op->read)) 968 + if (!file->f_op->aio_read && !file->f_op->read) 960 969 goto out; 961 970 962 971 ret = compat_do_readv_writev(READ, file, vec, vlen, pos); ··· 1023 1032 goto out; 1024 1033 1025 1034 ret = -EINVAL; 1026 - if (!file->f_op || (!file->f_op->aio_write && !file->f_op->write)) 1035 + if (!file->f_op->aio_write && !file->f_op->write) 1027 1036 goto out; 1028 1037 1029 1038 ret = compat_do_readv_writev(WRITE, file, vec, vlen, pos);
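The sweep through `fs/read_write.c` (and `readdir.c`, `select.c`, `splice.c` below) relies on the invariant established in `fs/open.c` above: `do_dentry_open()` now refuses a NULL `->i_fop` with `-ENODEV`, and `FMODE_PATH` files get `empty_fops`, so every open file has a valid `f_op` table and call sites can drop the `file->f_op &&` half of their checks. A userspace sketch of the invariant (mock types only):

```c
#include <assert.h>
#include <stddef.h>

struct mock_fops { int (*read)(void); };

/* All callbacks NULL, but the table itself is always valid. */
static const struct mock_fops empty_fops;

struct mock_file { const struct mock_fops *f_op; };

/* "Open": refuse inodes with no operations instead of storing NULL,
 * mirroring the new WARN_ON(!f->f_op) path in do_dentry_open(). */
static int mock_open(struct mock_file *f, const struct mock_fops *i_fop)
{
	if (!i_fop)
		return -1;	/* kernel: -ENODEV */
	f->f_op = i_fop;
	return 0;
}

/* Call site: the NULL test on f_op itself is gone; only the method
 * pointer inside the table still needs checking. */
static int mock_can_read(const struct mock_file *f)
{
	return f->f_op->read != NULL;
}
```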
+1 -1
fs/readdir.c
··· 24 24 { 25 25 struct inode *inode = file_inode(file); 26 26 int res = -ENOTDIR; 27 - if (!file->f_op || !file->f_op->iterate) 27 + if (!file->f_op->iterate) 28 28 goto out; 29 29 30 30 res = security_file_permission(file, MAY_READ);
+2 -2
fs/select.c
··· 454 454 const struct file_operations *f_op; 455 455 f_op = f.file->f_op; 456 456 mask = DEFAULT_POLLMASK; 457 - if (f_op && f_op->poll) { 457 + if (f_op->poll) { 458 458 wait_key_set(wait, in, out, 459 459 bit, busy_flag); 460 460 mask = (*f_op->poll)(f.file, wait); ··· 761 761 mask = POLLNVAL; 762 762 if (f.file) { 763 763 mask = DEFAULT_POLLMASK; 764 - if (f.file->f_op && f.file->f_op->poll) { 764 + if (f.file->f_op->poll) { 765 765 pwait->_key = pollfd->events|POLLERR|POLLHUP; 766 766 pwait->_key |= busy_flag; 767 767 mask = f.file->f_op->poll(f.file, pwait);
+3 -3
fs/splice.c
··· 695 695 loff_t pos = sd->pos; 696 696 int more; 697 697 698 - if (!likely(file->f_op && file->f_op->sendpage)) 698 + if (!likely(file->f_op->sendpage)) 699 699 return -EINVAL; 700 700 701 701 more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0; ··· 1099 1099 ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, 1100 1100 loff_t *, size_t, unsigned int); 1101 1101 1102 - if (out->f_op && out->f_op->splice_write) 1102 + if (out->f_op->splice_write) 1103 1103 splice_write = out->f_op->splice_write; 1104 1104 else 1105 1105 splice_write = default_file_splice_write; ··· 1125 1125 if (unlikely(ret < 0)) 1126 1126 return ret; 1127 1127 1128 - if (in->f_op && in->f_op->splice_read) 1128 + if (in->f_op->splice_read) 1129 1129 splice_read = in->f_op->splice_read; 1130 1130 else 1131 1131 splice_read = default_file_splice_read;
+25 -6
fs/stat.c
··· 37 37 38 38 EXPORT_SYMBOL(generic_fillattr); 39 39 40 - int vfs_getattr(struct path *path, struct kstat *stat) 40 + /** 41 + * vfs_getattr_nosec - getattr without security checks 42 + * @path: file to get attributes from 43 + * @stat: structure to return attributes in 44 + * 45 + * Get attributes without calling security_inode_getattr. 46 + * 47 + * Currently the only caller other than vfs_getattr is internal to the 48 + * filehandle lookup code, which uses only the inode number and returns 49 + * no attributes to any user. Any other code probably wants 50 + * vfs_getattr. 51 + */ 52 + int vfs_getattr_nosec(struct path *path, struct kstat *stat) 41 53 { 42 54 struct inode *inode = path->dentry->d_inode; 43 - int retval; 44 - 45 - retval = security_inode_getattr(path->mnt, path->dentry); 46 - if (retval) 47 - return retval; 48 55 49 56 if (inode->i_op->getattr) 50 57 return inode->i_op->getattr(path->mnt, path->dentry, stat); 51 58 52 59 generic_fillattr(inode, stat); 53 60 return 0; 61 + } 62 + 63 + EXPORT_SYMBOL(vfs_getattr_nosec); 64 + 65 + int vfs_getattr(struct path *path, struct kstat *stat) 66 + { 67 + int retval; 68 + 69 + retval = security_inode_getattr(path->mnt, path->dentry); 70 + if (retval) 71 + return retval; 72 + return vfs_getattr_nosec(path, stat); 54 73 } 55 74 56 75 EXPORT_SYMBOL(vfs_getattr);
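The `fs/stat.c` hunk above peels the LSM hook out of `vfs_getattr()` into a thin wrapper, so the exportfs filehandle-reconnect path (which only needs the inode number and returns nothing to userspace) can call `vfs_getattr_nosec()` directly. The layering can be sketched in userspace like this (mock names; the real functions take a `struct path *`):

```c
#include <assert.h>

static int security_calls;	/* counts mock LSM hook invocations */

static int mock_security_inode_getattr(void)
{
	security_calls++;
	return 0;		/* 0 = permitted */
}

struct mock_stat { unsigned long ino; };

/* getattr without security checks: the new _nosec entry point */
static int mock_getattr_nosec(struct mock_stat *stat, unsigned long ino)
{
	stat->ino = ino;
	return 0;
}

/* the normal entry point: LSM hook first, then the unchecked helper */
static int mock_getattr(struct mock_stat *stat, unsigned long ino)
{
	int retval = mock_security_inode_getattr();
	if (retval)
		return retval;
	return mock_getattr_nosec(stat, ino);
}
```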
+78 -123
fs/super.c
··· 129 129 return total_objects; 130 130 } 131 131 132 - static int init_sb_writers(struct super_block *s, struct file_system_type *type) 133 - { 134 - int err; 135 - int i; 136 - 137 - for (i = 0; i < SB_FREEZE_LEVELS; i++) { 138 - err = percpu_counter_init(&s->s_writers.counter[i], 0); 139 - if (err < 0) 140 - goto err_out; 141 - lockdep_init_map(&s->s_writers.lock_map[i], sb_writers_name[i], 142 - &type->s_writers_key[i], 0); 143 - } 144 - init_waitqueue_head(&s->s_writers.wait); 145 - init_waitqueue_head(&s->s_writers.wait_unfrozen); 146 - return 0; 147 - err_out: 148 - while (--i >= 0) 149 - percpu_counter_destroy(&s->s_writers.counter[i]); 150 - return err; 151 - } 152 - 153 - static void destroy_sb_writers(struct super_block *s) 132 + /** 133 + * destroy_super - frees a superblock 134 + * @s: superblock to free 135 + * 136 + * Frees a superblock. 137 + */ 138 + static void destroy_super(struct super_block *s) 154 139 { 155 140 int i; 156 - 141 + list_lru_destroy(&s->s_dentry_lru); 142 + list_lru_destroy(&s->s_inode_lru); 157 143 for (i = 0; i < SB_FREEZE_LEVELS; i++) 158 144 percpu_counter_destroy(&s->s_writers.counter[i]); 145 + security_sb_free(s); 146 + WARN_ON(!list_empty(&s->s_mounts)); 147 + kfree(s->s_subtype); 148 + kfree(s->s_options); 149 + kfree_rcu(s, rcu); 159 150 } 160 151 161 152 /** ··· 161 170 { 162 171 struct super_block *s = kzalloc(sizeof(struct super_block), GFP_USER); 163 172 static const struct super_operations default_op; 173 + int i; 164 174 165 - if (s) { 166 - if (security_sb_alloc(s)) 167 - goto out_free_sb; 175 + if (!s) 176 + return NULL; 168 177 169 - #ifdef CONFIG_SMP 170 - s->s_files = alloc_percpu(struct list_head); 171 - if (!s->s_files) 172 - goto err_out; 173 - else { 174 - int i; 178 + if (security_sb_alloc(s)) 179 + goto fail; 175 180 176 - for_each_possible_cpu(i) 177 - INIT_LIST_HEAD(per_cpu_ptr(s->s_files, i)); 178 - } 179 - #else 180 - INIT_LIST_HEAD(&s->s_files); 181 - #endif 182 - if (init_sb_writers(s, type)) 
183 - goto err_out; 184 - s->s_flags = flags; 185 - s->s_bdi = &default_backing_dev_info; 186 - INIT_HLIST_NODE(&s->s_instances); 187 - INIT_HLIST_BL_HEAD(&s->s_anon); 188 - INIT_LIST_HEAD(&s->s_inodes); 189 - 190 - if (list_lru_init(&s->s_dentry_lru)) 191 - goto err_out; 192 - if (list_lru_init(&s->s_inode_lru)) 193 - goto err_out_dentry_lru; 194 - 195 - INIT_LIST_HEAD(&s->s_mounts); 196 - init_rwsem(&s->s_umount); 197 - lockdep_set_class(&s->s_umount, &type->s_umount_key); 198 - /* 199 - * sget() can have s_umount recursion. 200 - * 201 - * When it cannot find a suitable sb, it allocates a new 202 - * one (this one), and tries again to find a suitable old 203 - * one. 204 - * 205 - * In case that succeeds, it will acquire the s_umount 206 - * lock of the old one. Since these are clearly distrinct 207 - * locks, and this object isn't exposed yet, there's no 208 - * risk of deadlocks. 209 - * 210 - * Annotate this by putting this lock in a different 211 - * subclass. 212 - */ 213 - down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING); 214 - s->s_count = 1; 215 - atomic_set(&s->s_active, 1); 216 - mutex_init(&s->s_vfs_rename_mutex); 217 - lockdep_set_class(&s->s_vfs_rename_mutex, &type->s_vfs_rename_key); 218 - mutex_init(&s->s_dquot.dqio_mutex); 219 - mutex_init(&s->s_dquot.dqonoff_mutex); 220 - init_rwsem(&s->s_dquot.dqptr_sem); 221 - s->s_maxbytes = MAX_NON_LFS; 222 - s->s_op = &default_op; 223 - s->s_time_gran = 1000000000; 224 - s->cleancache_poolid = -1; 225 - 226 - s->s_shrink.seeks = DEFAULT_SEEKS; 227 - s->s_shrink.scan_objects = super_cache_scan; 228 - s->s_shrink.count_objects = super_cache_count; 229 - s->s_shrink.batch = 1024; 230 - s->s_shrink.flags = SHRINKER_NUMA_AWARE; 181 + for (i = 0; i < SB_FREEZE_LEVELS; i++) { 182 + if (percpu_counter_init(&s->s_writers.counter[i], 0) < 0) 183 + goto fail; 184 + lockdep_init_map(&s->s_writers.lock_map[i], sb_writers_name[i], 185 + &type->s_writers_key[i], 0); 231 186 } 232 - out: 187 + 
init_waitqueue_head(&s->s_writers.wait); 188 + init_waitqueue_head(&s->s_writers.wait_unfrozen); 189 + s->s_flags = flags; 190 + s->s_bdi = &default_backing_dev_info; 191 + INIT_HLIST_NODE(&s->s_instances); 192 + INIT_HLIST_BL_HEAD(&s->s_anon); 193 + INIT_LIST_HEAD(&s->s_inodes); 194 + 195 + if (list_lru_init(&s->s_dentry_lru)) 196 + goto fail; 197 + if (list_lru_init(&s->s_inode_lru)) 198 + goto fail; 199 + 200 + INIT_LIST_HEAD(&s->s_mounts); 201 + init_rwsem(&s->s_umount); 202 + lockdep_set_class(&s->s_umount, &type->s_umount_key); 203 + /* 204 + * sget() can have s_umount recursion. 205 + * 206 + * When it cannot find a suitable sb, it allocates a new 207 + * one (this one), and tries again to find a suitable old 208 + * one. 209 + * 210 + * In case that succeeds, it will acquire the s_umount 211 + * lock of the old one. Since these are clearly distinct 212 + * locks, and this object isn't exposed yet, there's no 213 + * risk of deadlocks. 214 + * 215 + * Annotate this by putting this lock in a different 216 + * subclass.
217 + */ 218 + down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING); 219 + s->s_count = 1; 220 + atomic_set(&s->s_active, 1); 221 + mutex_init(&s->s_vfs_rename_mutex); 222 + lockdep_set_class(&s->s_vfs_rename_mutex, &type->s_vfs_rename_key); 223 + mutex_init(&s->s_dquot.dqio_mutex); 224 + mutex_init(&s->s_dquot.dqonoff_mutex); 225 + init_rwsem(&s->s_dquot.dqptr_sem); 226 + s->s_maxbytes = MAX_NON_LFS; 227 + s->s_op = &default_op; 228 + s->s_time_gran = 1000000000; 229 + s->cleancache_poolid = -1; 230 + 231 + s->s_shrink.seeks = DEFAULT_SEEKS; 232 + s->s_shrink.scan_objects = super_cache_scan; 233 + s->s_shrink.count_objects = super_cache_count; 234 + s->s_shrink.batch = 1024; 235 + s->s_shrink.flags = SHRINKER_NUMA_AWARE; 233 236 return s; 234 237 235 - err_out_dentry_lru: 236 - list_lru_destroy(&s->s_dentry_lru); 237 - err_out: 238 - security_sb_free(s); 239 - #ifdef CONFIG_SMP 240 - if (s->s_files) 241 - free_percpu(s->s_files); 242 - #endif 243 - destroy_sb_writers(s); 244 - out_free_sb: 245 - kfree(s); 246 - s = NULL; 247 - goto out; 248 - } 249 - 250 - /** 251 - * destroy_super - frees a superblock 252 - * @s: superblock to free 253 - * 254 - * Frees a superblock. 255 - */ 256 - static inline void destroy_super(struct super_block *s) 257 - { 258 - list_lru_destroy(&s->s_dentry_lru); 259 - list_lru_destroy(&s->s_inode_lru); 260 - #ifdef CONFIG_SMP 261 - free_percpu(s->s_files); 262 - #endif 263 - destroy_sb_writers(s); 264 - security_sb_free(s); 265 - WARN_ON(!list_empty(&s->s_mounts)); 266 - kfree(s->s_subtype); 267 - kfree(s->s_options); 268 - kfree(s); 238 + fail: 239 + destroy_super(s); 240 + return NULL; 269 241 } 270 242 271 243 /* Superblock refcounting */ ··· 710 756 make sure there are no rw files opened */ 711 757 if (remount_ro) { 712 758 if (force) { 713 - mark_files_ro(sb); 759 + sb->s_readonly_remount = 1; 760 + smp_wmb(); 714 761 } else { 715 762 retval = sb_prepare_remount_readonly(sb); 716 763 if (retval)
+1 -1
fs/sync.c
··· 177 177 */ 178 178 int vfs_fsync_range(struct file *file, loff_t start, loff_t end, int datasync) 179 179 { 180 - if (!file->f_op || !file->f_op->fsync) 180 + if (!file->f_op->fsync) 181 181 return -EINVAL; 182 182 return file->f_op->fsync(file, start, end, datasync); 183 183 }
+19 -22
fs/ubifs/dir.c
··· 192 192 struct ubifs_dent_node *dent; 193 193 struct ubifs_info *c = dir->i_sb->s_fs_info; 194 194 195 - dbg_gen("'%.*s' in dir ino %lu", 196 - dentry->d_name.len, dentry->d_name.name, dir->i_ino); 195 + dbg_gen("'%pd' in dir ino %lu", dentry, dir->i_ino); 197 196 198 197 if (dentry->d_name.len > UBIFS_MAX_NLEN) 199 198 return ERR_PTR(-ENAMETOOLONG); ··· 224 225 * checking. 225 226 */ 226 227 err = PTR_ERR(inode); 227 - ubifs_err("dead directory entry '%.*s', error %d", 228 - dentry->d_name.len, dentry->d_name.name, err); 228 + ubifs_err("dead directory entry '%pd', error %d", 229 + dentry, err); 229 230 ubifs_ro_mode(c, err); 230 231 goto out; 231 232 } ··· 259 260 * parent directory inode. 260 261 */ 261 262 262 - dbg_gen("dent '%.*s', mode %#hx in dir ino %lu", 263 - dentry->d_name.len, dentry->d_name.name, mode, dir->i_ino); 263 + dbg_gen("dent '%pd', mode %#hx in dir ino %lu", 264 + dentry, mode, dir->i_ino); 264 265 265 266 err = ubifs_budget_space(c, &req); 266 267 if (err) ··· 508 509 * changing the parent inode. 509 510 */ 510 511 511 - dbg_gen("dent '%.*s' to ino %lu (nlink %d) in dir ino %lu", 512 - dentry->d_name.len, dentry->d_name.name, inode->i_ino, 512 + dbg_gen("dent '%pd' to ino %lu (nlink %d) in dir ino %lu", 513 + dentry, inode->i_ino, 513 514 inode->i_nlink, dir->i_ino); 514 515 ubifs_assert(mutex_is_locked(&dir->i_mutex)); 515 516 ubifs_assert(mutex_is_locked(&inode->i_mutex)); ··· 565 566 * deletions. 566 567 */ 567 568 568 - dbg_gen("dent '%.*s' from ino %lu (nlink %d) in dir ino %lu", 569 - dentry->d_name.len, dentry->d_name.name, inode->i_ino, 569 + dbg_gen("dent '%pd' from ino %lu (nlink %d) in dir ino %lu", 570 + dentry, inode->i_ino, 570 571 inode->i_nlink, dir->i_ino); 571 572 ubifs_assert(mutex_is_locked(&dir->i_mutex)); 572 573 ubifs_assert(mutex_is_locked(&inode->i_mutex)); ··· 655 656 * because we have extra space reserved for deletions. 
656 657 */ 657 658 658 - dbg_gen("directory '%.*s', ino %lu in dir ino %lu", dentry->d_name.len, 659 - dentry->d_name.name, inode->i_ino, dir->i_ino); 659 + dbg_gen("directory '%pd', ino %lu in dir ino %lu", dentry, 660 + inode->i_ino, dir->i_ino); 660 661 ubifs_assert(mutex_is_locked(&dir->i_mutex)); 661 662 ubifs_assert(mutex_is_locked(&inode->i_mutex)); 662 663 err = check_dir_empty(c, dentry->d_inode); ··· 715 716 * directory inode. 716 717 */ 717 718 718 - dbg_gen("dent '%.*s', mode %#hx in dir ino %lu", 719 - dentry->d_name.len, dentry->d_name.name, mode, dir->i_ino); 719 + dbg_gen("dent '%pd', mode %#hx in dir ino %lu", 720 + dentry, mode, dir->i_ino); 720 721 721 722 err = ubifs_budget_space(c, &req); 722 723 if (err) ··· 777 778 * directory inode. 778 779 */ 779 780 780 - dbg_gen("dent '%.*s' in dir ino %lu", 781 - dentry->d_name.len, dentry->d_name.name, dir->i_ino); 781 + dbg_gen("dent '%pd' in dir ino %lu", dentry, dir->i_ino); 782 782 783 783 if (!new_valid_dev(rdev)) 784 784 return -EINVAL; ··· 851 853 * directory inode. 852 854 */ 853 855 854 - dbg_gen("dent '%.*s', target '%s' in dir ino %lu", dentry->d_name.len, 855 - dentry->d_name.name, symname, dir->i_ino); 856 + dbg_gen("dent '%pd', target '%s' in dir ino %lu", dentry, 857 + symname, dir->i_ino); 856 858 857 859 if (len > UBIFS_MAX_INO_DATA) 858 860 return -ENAMETOOLONG; ··· 977 979 * separately. 978 980 */ 979 981 980 - dbg_gen("dent '%.*s' ino %lu in dir ino %lu to dent '%.*s' in dir ino %lu", 981 - old_dentry->d_name.len, old_dentry->d_name.name, 982 - old_inode->i_ino, old_dir->i_ino, new_dentry->d_name.len, 983 - new_dentry->d_name.name, new_dir->i_ino); 982 + dbg_gen("dent '%pd' ino %lu in dir ino %lu to dent '%pd' in dir ino %lu", 983 + old_dentry, old_inode->i_ino, old_dir->i_ino, 984 + new_dentry, new_dir->i_ino); 984 985 ubifs_assert(mutex_is_locked(&old_dir->i_mutex)); 985 986 ubifs_assert(mutex_is_locked(&new_dir->i_mutex)); 986 987 if (unlink)
+2 -4
fs/ubifs/journal.c
··· 933 933 int move = (old_dir != new_dir); 934 934 struct ubifs_inode *uninitialized_var(new_ui); 935 935 936 - dbg_jnl("dent '%.*s' in dir ino %lu to dent '%.*s' in dir ino %lu", 937 - old_dentry->d_name.len, old_dentry->d_name.name, 938 - old_dir->i_ino, new_dentry->d_name.len, 939 - new_dentry->d_name.name, new_dir->i_ino); 936 + dbg_jnl("dent '%pd' in dir ino %lu to dent '%pd' in dir ino %lu", 937 + old_dentry, old_dir->i_ino, new_dentry, new_dir->i_ino); 940 938 ubifs_assert(ubifs_inode(old_dir)->data_len == 0); 941 939 ubifs_assert(ubifs_inode(new_dir)->data_len == 0); 942 940 ubifs_assert(mutex_is_locked(&ubifs_inode(old_dir)->ui_mutex));
+8 -8
fs/ubifs/xattr.c
··· 303 303 union ubifs_key key; 304 304 int err, type; 305 305 306 - dbg_gen("xattr '%s', host ino %lu ('%.*s'), size %zd", name, 307 - host->i_ino, dentry->d_name.len, dentry->d_name.name, size); 306 + dbg_gen("xattr '%s', host ino %lu ('%pd'), size %zd", name, 307 + host->i_ino, dentry, size); 308 308 ubifs_assert(mutex_is_locked(&host->i_mutex)); 309 309 310 310 if (size > UBIFS_MAX_INO_DATA) ··· 367 367 union ubifs_key key; 368 368 int err; 369 369 370 - dbg_gen("xattr '%s', ino %lu ('%.*s'), buf size %zd", name, 371 - host->i_ino, dentry->d_name.len, dentry->d_name.name, size); 370 + dbg_gen("xattr '%s', ino %lu ('%pd'), buf size %zd", name, 371 + host->i_ino, dentry, size); 372 372 373 373 err = check_namespace(&nm); 374 374 if (err < 0) ··· 426 426 int err, len, written = 0; 427 427 struct qstr nm = { .name = NULL }; 428 428 429 - dbg_gen("ino %lu ('%.*s'), buffer size %zd", host->i_ino, 430 - dentry->d_name.len, dentry->d_name.name, size); 429 + dbg_gen("ino %lu ('%pd'), buffer size %zd", host->i_ino, 430 + dentry, size); 431 431 432 432 len = host_ui->xattr_names + host_ui->xattr_cnt; 433 433 if (!buffer) ··· 529 529 union ubifs_key key; 530 530 int err; 531 531 532 - dbg_gen("xattr '%s', ino %lu ('%.*s')", name, 533 - host->i_ino, dentry->d_name.len, dentry->d_name.name); 532 + dbg_gen("xattr '%s', ino %lu ('%pd')", name, 533 + host->i_ino, dentry); 534 534 ubifs_assert(mutex_is_locked(&host->i_mutex)); 535 535 536 536 err = check_namespace(&nm);
+8 -1
fs/utimes.c
··· 53 53 int error; 54 54 struct iattr newattrs; 55 55 struct inode *inode = path->dentry->d_inode; 56 + struct inode *delegated_inode = NULL; 56 57 57 58 error = mnt_want_write(path->mnt); 58 59 if (error) ··· 102 101 goto mnt_drop_write_and_out; 103 102 } 104 103 } 104 + retry_deleg: 105 105 mutex_lock(&inode->i_mutex); 106 - error = notify_change(path->dentry, &newattrs); 106 + error = notify_change(path->dentry, &newattrs, &delegated_inode); 107 107 mutex_unlock(&inode->i_mutex); 108 + if (delegated_inode) { 109 + error = break_deleg_wait(&delegated_inode); 110 + if (!error) 111 + goto retry_deleg; 112 + } 108 113 109 114 mnt_drop_write_and_out: 110 115 mnt_drop_write(path->mnt);
+1 -1
include/asm-generic/siginfo.h
··· 32 32 33 33 #endif 34 34 35 - extern int copy_siginfo_to_user(struct siginfo __user *to, struct siginfo *from); 35 + extern int copy_siginfo_to_user(struct siginfo __user *to, const struct siginfo *from); 36 36 37 37 #endif
-3
include/linux/anon_inodes.h
··· 13 13 struct file *anon_inode_getfile(const char *name, 14 14 const struct file_operations *fops, 15 15 void *priv, int flags); 16 - struct file *anon_inode_getfile_private(const char *name, 17 - const struct file_operations *fops, 18 - void *priv, int flags); 19 16 int anon_inode_getfd(const char *name, const struct file_operations *fops, 20 17 void *priv, int flags); 21 18
+2 -1
include/linux/binfmts.h
··· 56 56 57 57 /* Function parameter for binfmt->coredump */ 58 58 struct coredump_params { 59 - siginfo_t *siginfo; 59 + const siginfo_t *siginfo; 60 60 struct pt_regs *regs; 61 61 struct file *file; 62 62 unsigned long limit; 63 63 unsigned long mm_flags; 64 + loff_t written; 64 65 }; 65 66 66 67 /*
+1 -1
include/linux/compat.h
··· 362 362 long compat_put_bitmap(compat_ulong_t __user *umask, unsigned long *mask, 363 363 unsigned long bitmap_size); 364 364 int copy_siginfo_from_user32(siginfo_t *to, struct compat_siginfo __user *from); 365 - int copy_siginfo_to_user32(struct compat_siginfo __user *to, siginfo_t *from); 365 + int copy_siginfo_to_user32(struct compat_siginfo __user *to, const siginfo_t *from); 366 366 int get_compat_sigevent(struct sigevent *event, 367 367 const struct compat_sigevent __user *u_event); 368 368 long compat_sys_rt_tgsigqueueinfo(compat_pid_t tgid, compat_pid_t pid, int sig,
+6 -4
include/linux/coredump.h
··· 10 10 * These are the only things you should do on a core-file: use only these 11 11 * functions to write out all the necessary info. 12 12 */ 13 - extern int dump_write(struct file *file, const void *addr, int nr); 14 - extern int dump_seek(struct file *file, loff_t off); 13 + struct coredump_params; 14 + extern int dump_skip(struct coredump_params *cprm, size_t nr); 15 + extern int dump_emit(struct coredump_params *cprm, const void *addr, int nr); 16 + extern int dump_align(struct coredump_params *cprm, int align); 15 17 #ifdef CONFIG_COREDUMP 16 - extern void do_coredump(siginfo_t *siginfo); 18 + extern void do_coredump(const siginfo_t *siginfo); 17 19 #else 18 - static inline void do_coredump(siginfo_t *siginfo) {} 20 + static inline void do_coredump(const siginfo_t *siginfo) {} 19 21 #endif 20 22 21 23 #endif /* _LINUX_COREDUMP_H */
+84 -20
include/linux/dcache.h
··· 169 169 */ 170 170 171 171 /* d_flags entries */ 172 - #define DCACHE_OP_HASH 0x0001 173 - #define DCACHE_OP_COMPARE 0x0002 174 - #define DCACHE_OP_REVALIDATE 0x0004 175 - #define DCACHE_OP_DELETE 0x0008 176 - #define DCACHE_OP_PRUNE 0x0010 172 + #define DCACHE_OP_HASH 0x00000001 173 + #define DCACHE_OP_COMPARE 0x00000002 174 + #define DCACHE_OP_REVALIDATE 0x00000004 175 + #define DCACHE_OP_DELETE 0x00000008 176 + #define DCACHE_OP_PRUNE 0x00000010 177 177 178 - #define DCACHE_DISCONNECTED 0x0020 178 + #define DCACHE_DISCONNECTED 0x00000020 179 179 /* This dentry is possibly not currently connected to the dcache tree, in 180 180 * which case its parent will either be itself, or will have this flag as 181 181 * well. nfsd will not use a dentry with this bit set, but will first ··· 186 186 * dentry into place and return that dentry rather than the passed one, 187 187 * typically using d_splice_alias. */ 188 188 189 - #define DCACHE_REFERENCED 0x0040 /* Recently used, don't discard. */ 190 - #define DCACHE_RCUACCESS 0x0080 /* Entry has ever been RCU-visible */ 189 + #define DCACHE_REFERENCED 0x00000040 /* Recently used, don't discard. 
*/ 190 + #define DCACHE_RCUACCESS 0x00000080 /* Entry has ever been RCU-visible */ 191 191 192 - #define DCACHE_CANT_MOUNT 0x0100 193 - #define DCACHE_GENOCIDE 0x0200 194 - #define DCACHE_SHRINK_LIST 0x0400 192 + #define DCACHE_CANT_MOUNT 0x00000100 193 + #define DCACHE_GENOCIDE 0x00000200 194 + #define DCACHE_SHRINK_LIST 0x00000400 195 195 196 - #define DCACHE_OP_WEAK_REVALIDATE 0x0800 196 + #define DCACHE_OP_WEAK_REVALIDATE 0x00000800 197 197 198 - #define DCACHE_NFSFS_RENAMED 0x1000 198 + #define DCACHE_NFSFS_RENAMED 0x00001000 199 199 /* this dentry has been "silly renamed" and has to be deleted on the last 200 200 * dput() */ 201 - #define DCACHE_COOKIE 0x2000 /* For use by dcookie subsystem */ 202 - #define DCACHE_FSNOTIFY_PARENT_WATCHED 0x4000 201 + #define DCACHE_COOKIE 0x00002000 /* For use by dcookie subsystem */ 202 + #define DCACHE_FSNOTIFY_PARENT_WATCHED 0x00004000 203 203 /* Parent inode is watched by some fsnotify listener */ 204 204 205 - #define DCACHE_MOUNTED 0x10000 /* is a mountpoint */ 206 - #define DCACHE_NEED_AUTOMOUNT 0x20000 /* handle automount on this dir */ 207 - #define DCACHE_MANAGE_TRANSIT 0x40000 /* manage transit from this dirent */ 205 + #define DCACHE_DENTRY_KILLED 0x00008000 206 + 207 + #define DCACHE_MOUNTED 0x00010000 /* is a mountpoint */ 208 + #define DCACHE_NEED_AUTOMOUNT 0x00020000 /* handle automount on this dir */ 209 + #define DCACHE_MANAGE_TRANSIT 0x00040000 /* manage transit from this dirent */ 208 210 #define DCACHE_MANAGED_DENTRY \ 209 211 (DCACHE_MOUNTED|DCACHE_NEED_AUTOMOUNT|DCACHE_MANAGE_TRANSIT) 210 212 211 - #define DCACHE_LRU_LIST 0x80000 212 - #define DCACHE_DENTRY_KILLED 0x100000 213 + #define DCACHE_LRU_LIST 0x00080000 214 + 215 + #define DCACHE_ENTRY_TYPE 0x00700000 216 + #define DCACHE_MISS_TYPE 0x00000000 /* Negative dentry */ 217 + #define DCACHE_DIRECTORY_TYPE 0x00100000 /* Normal directory */ 218 + #define DCACHE_AUTODIR_TYPE 0x00200000 /* Lookupless directory (presumed automount) */ 219 + #define 
DCACHE_SYMLINK_TYPE 0x00300000 /* Symlink */ 220 + #define DCACHE_FILE_TYPE 0x00400000 /* Other file type */ 213 221 214 222 extern seqlock_t rename_lock; 215 223 ··· 232 224 extern void d_instantiate(struct dentry *, struct inode *); 233 225 extern struct dentry * d_instantiate_unique(struct dentry *, struct inode *); 234 226 extern struct dentry * d_materialise_unique(struct dentry *, struct inode *); 227 + extern int d_instantiate_no_diralias(struct dentry *, struct inode *); 235 228 extern void __d_drop(struct dentry *dentry); 236 229 extern void d_drop(struct dentry *dentry); 237 230 extern void d_delete(struct dentry *); ··· 400 391 static inline bool d_mountpoint(const struct dentry *dentry) 401 392 { 402 393 return dentry->d_flags & DCACHE_MOUNTED; 394 + } 395 + 396 + /* 397 + * Directory cache entry type accessor functions. 398 + */ 399 + static inline void __d_set_type(struct dentry *dentry, unsigned type) 400 + { 401 + dentry->d_flags = (dentry->d_flags & ~DCACHE_ENTRY_TYPE) | type; 402 + } 403 + 404 + static inline void __d_clear_type(struct dentry *dentry) 405 + { 406 + __d_set_type(dentry, DCACHE_MISS_TYPE); 407 + } 408 + 409 + static inline void d_set_type(struct dentry *dentry, unsigned type) 410 + { 411 + spin_lock(&dentry->d_lock); 412 + __d_set_type(dentry, type); 413 + spin_unlock(&dentry->d_lock); 414 + } 415 + 416 + static inline unsigned __d_entry_type(const struct dentry *dentry) 417 + { 418 + return dentry->d_flags & DCACHE_ENTRY_TYPE; 419 + } 420 + 421 + static inline bool d_is_directory(const struct dentry *dentry) 422 + { 423 + return __d_entry_type(dentry) == DCACHE_DIRECTORY_TYPE; 424 + } 425 + 426 + static inline bool d_is_autodir(const struct dentry *dentry) 427 + { 428 + return __d_entry_type(dentry) == DCACHE_AUTODIR_TYPE; 429 + } 430 + 431 + static inline bool d_is_symlink(const struct dentry *dentry) 432 + { 433 + return __d_entry_type(dentry) == DCACHE_SYMLINK_TYPE; 434 + } 435 + 436 + static inline bool d_is_file(const struct 
dentry *dentry) 437 + { 438 + return __d_entry_type(dentry) == DCACHE_FILE_TYPE; 439 + } 440 + 441 + static inline bool d_is_negative(const struct dentry *dentry) 442 + { 443 + return __d_entry_type(dentry) == DCACHE_MISS_TYPE; 444 + } 445 + 446 + static inline bool d_is_positive(const struct dentry *dentry) 447 + { 448 + return !d_is_negative(dentry); 403 449 } 404 450 405 451 extern int sysctl_vfs_cache_pressure;
+3 -3
include/linux/elf.h
··· 39 39 40 40 /* Optional callbacks to write extra ELF notes. */ 41 41 struct file; 42 + struct coredump_params; 42 43 43 44 #ifndef ARCH_HAVE_EXTRA_ELF_NOTES 44 45 static inline int elf_coredump_extra_notes_size(void) { return 0; } 45 - static inline int elf_coredump_extra_notes_write(struct file *file, 46 - loff_t *foffset) { return 0; } 46 + static inline int elf_coredump_extra_notes_write(struct coredump_params *cprm) { return 0; } 47 47 #else 48 48 extern int elf_coredump_extra_notes_size(void); 49 - extern int elf_coredump_extra_notes_write(struct file *file, loff_t *foffset); 49 + extern int elf_coredump_extra_notes_write(struct coredump_params *cprm); 50 50 #endif 51 51 #endif /* _LINUX_ELF_H */
+4 -3
include/linux/elfcore.h
··· 6 6 #include <asm/elf.h> 7 7 #include <uapi/linux/elfcore.h> 8 8 9 + struct coredump_params; 10 + 9 11 static inline void elf_core_copy_regs(elf_gregset_t *elfregs, struct pt_regs *regs) 10 12 { 11 13 #ifdef ELF_CORE_COPY_REGS ··· 65 63 */ 66 64 extern Elf_Half elf_core_extra_phdrs(void); 67 65 extern int 68 - elf_core_write_extra_phdrs(struct file *file, loff_t offset, size_t *size, 69 - unsigned long limit); 66 + elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset); 70 67 extern int 71 - elf_core_write_extra_data(struct file *file, size_t *size, unsigned long limit); 68 + elf_core_write_extra_data(struct coredump_params *cprm); 72 69 extern size_t elf_core_extra_data_size(void); 73 70 74 71 #endif /* _LINUX_ELFCORE_H */
+81 -25
include/linux/fs.h
··· 623 623 * 0: the object of the current VFS operation 624 624 * 1: parent 625 625 * 2: child/target 626 - * 3: quota file 626 + * 3: xattr 627 + * 4: second non-directory 628 + * The last is for certain operations (such as rename) which lock two 629 + * non-directories at once. 627 630 * 628 631 * The locking order between these classes is 629 - * parent -> child -> normal -> xattr -> quota 632 + * parent -> child -> normal -> xattr -> second non-directory 630 633 */ 631 634 enum inode_i_mutex_lock_class 632 635 { ··· 637 634 I_MUTEX_PARENT, 638 635 I_MUTEX_CHILD, 639 636 I_MUTEX_XATTR, 640 - I_MUTEX_QUOTA 637 + I_MUTEX_NONDIR2 641 638 }; 639 + 640 + void lock_two_nondirectories(struct inode *, struct inode*); 641 + void unlock_two_nondirectories(struct inode *, struct inode*); 642 642 643 643 /* 644 644 * NOTE: in a 32bit arch with a preemptable kernel and ··· 770 764 #define FILE_MNT_WRITE_RELEASED 2 771 765 772 766 struct file { 773 - /* 774 - * fu_list becomes invalid after file_free is called and queued via 775 - * fu_rcuhead for RCU freeing 776 - */ 777 767 union { 778 - struct list_head fu_list; 779 768 struct llist_node fu_llist; 780 769 struct rcu_head fu_rcuhead; 781 770 } f_u; ··· 784 783 * Must not be taken from IRQ context. 
785 784 */ 786 785 spinlock_t f_lock; 787 - #ifdef CONFIG_SMP 788 - int f_sb_list_cpu; 789 - #endif 790 786 atomic_long_t f_count; 791 787 unsigned int f_flags; 792 788 fmode_t f_mode; ··· 880 882 881 883 #define FL_POSIX 1 882 884 #define FL_FLOCK 2 885 + #define FL_DELEG 4 /* NFSv4 delegation */ 883 886 #define FL_ACCESS 8 /* not trying to lock, just looking */ 884 887 #define FL_EXISTS 16 /* when unlocking, test for existence */ 885 888 #define FL_LEASE 32 /* lease held on this file */ ··· 1022 1023 extern int vfs_lock_file(struct file *, unsigned int, struct file_lock *, struct file_lock *); 1023 1024 extern int vfs_cancel_lock(struct file *filp, struct file_lock *fl); 1024 1025 extern int flock_lock_file_wait(struct file *filp, struct file_lock *fl); 1025 - extern int __break_lease(struct inode *inode, unsigned int flags); 1026 + extern int __break_lease(struct inode *inode, unsigned int flags, unsigned int type); 1026 1027 extern void lease_get_mtime(struct inode *, struct timespec *time); 1027 1028 extern int generic_setlease(struct file *, long, struct file_lock **); 1028 1029 extern int vfs_setlease(struct file *, long, struct file_lock **); ··· 1131 1132 return -ENOLCK; 1132 1133 } 1133 1134 1134 - static inline int __break_lease(struct inode *inode, unsigned int mode) 1135 + static inline int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) 1135 1136 { 1136 1137 return 0; 1137 1138 } ··· 1263 1264 1264 1265 struct list_head s_inodes; /* all inodes */ 1265 1266 struct hlist_bl_head s_anon; /* anonymous dentries for (nfs) exporting */ 1266 - #ifdef CONFIG_SMP 1267 - struct list_head __percpu *s_files; 1268 - #else 1269 - struct list_head s_files; 1270 - #endif 1271 1267 struct list_head s_mounts; /* list of mounts; _not_ for fs use */ 1272 1268 struct block_device *s_bdev; 1273 1269 struct backing_dev_info *s_bdi; ··· 1324 1330 */ 1325 1331 struct list_lru s_dentry_lru ____cacheline_aligned_in_smp; 1326 1332 struct list_lru 
s_inode_lru ____cacheline_aligned_in_smp; 1333 + struct rcu_head rcu; 1327 1334 }; 1328 1335 1329 1336 extern struct timespec current_fs_time(struct super_block *sb); ··· 1453 1458 extern int vfs_mkdir(struct inode *, struct dentry *, umode_t); 1454 1459 extern int vfs_mknod(struct inode *, struct dentry *, umode_t, dev_t); 1455 1460 extern int vfs_symlink(struct inode *, struct dentry *, const char *); 1456 - extern int vfs_link(struct dentry *, struct inode *, struct dentry *); 1461 + extern int vfs_link(struct dentry *, struct inode *, struct dentry *, struct inode **); 1457 1462 extern int vfs_rmdir(struct inode *, struct dentry *); 1458 - extern int vfs_unlink(struct inode *, struct dentry *); 1459 - extern int vfs_rename(struct inode *, struct dentry *, struct inode *, struct dentry *); 1463 + extern int vfs_unlink(struct inode *, struct dentry *, struct inode **); 1464 + extern int vfs_rename(struct inode *, struct dentry *, struct inode *, struct dentry *, struct inode **); 1460 1465 1461 1466 /* 1462 1467 * VFS dentry helper functions. ··· 1870 1875 (((fops) && try_module_get((fops)->owner) ? (fops) : NULL)) 1871 1876 #define fops_put(fops) \ 1872 1877 do { if (fops) module_put((fops)->owner); } while(0) 1878 + /* 1879 + * This one is to be used *ONLY* from ->open() instances. 1880 + * fops must be non-NULL, pinned down *and* module dependencies 1881 + * should be sufficient to pin the caller down as well. 
1882 + */ 1883 + #define replace_fops(f, fops) \ 1884 + do { \ 1885 + struct file *__file = (f); \ 1886 + fops_put(__file->f_op); \ 1887 + BUG_ON(!(__file->f_op = (fops))); \ 1888 + } while(0) 1873 1889 1874 1890 extern int register_filesystem(struct file_system_type *); 1875 1891 extern int unregister_filesystem(struct file_system_type *); ··· 1904 1898 extern bool fs_fully_visible(struct file_system_type *); 1905 1899 1906 1900 extern int current_umask(void); 1901 + 1902 + extern void ihold(struct inode * inode); 1903 + extern void iput(struct inode *); 1907 1904 1908 1905 /* /sys/fs */ 1909 1906 extern struct kobject *fs_kobj; ··· 1964 1955 static inline int break_lease(struct inode *inode, unsigned int mode) 1965 1956 { 1966 1957 if (inode->i_flock) 1967 - return __break_lease(inode, mode); 1958 + return __break_lease(inode, mode, FL_LEASE); 1968 1959 return 0; 1969 1960 } 1961 + 1962 + static inline int break_deleg(struct inode *inode, unsigned int mode) 1963 + { 1964 + if (inode->i_flock) 1965 + return __break_lease(inode, mode, FL_DELEG); 1966 + return 0; 1967 + } 1968 + 1969 + static inline int try_break_deleg(struct inode *inode, struct inode **delegated_inode) 1970 + { 1971 + int ret; 1972 + 1973 + ret = break_deleg(inode, O_WRONLY|O_NONBLOCK); 1974 + if (ret == -EWOULDBLOCK && delegated_inode) { 1975 + *delegated_inode = inode; 1976 + ihold(inode); 1977 + } 1978 + return ret; 1979 + } 1980 + 1981 + static inline int break_deleg_wait(struct inode **delegated_inode) 1982 + { 1983 + int ret; 1984 + 1985 + ret = break_deleg(*delegated_inode, O_WRONLY); 1986 + iput(*delegated_inode); 1987 + *delegated_inode = NULL; 1988 + return ret; 1989 + } 1990 + 1970 1991 #else /* !CONFIG_FILE_LOCKING */ 1971 1992 static inline int locks_mandatory_locked(struct inode *inode) 1972 1993 { ··· 2033 1994 2034 1995 static inline int break_lease(struct inode *inode, unsigned int mode) 2035 1996 { 1997 + return 0; 1998 + } 1999 + 2000 + static inline int break_deleg(struct inode 
*inode, unsigned int mode) 2001 + { 2002 + return 0; 2003 + } 2004 + 2005 + static inline int try_break_deleg(struct inode *inode, struct inode **delegated_inode) 2006 + { 2007 + return 0; 2008 + } 2009 + 2010 + static inline int break_deleg_wait(struct inode **delegated_inode) 2011 + { 2012 + BUG(); 2036 2013 return 0; 2037 2014 } 2038 2015 ··· 2278 2223 #ifdef CONFIG_BLOCK 2279 2224 extern sector_t bmap(struct inode *, sector_t); 2280 2225 #endif 2281 - extern int notify_change(struct dentry *, struct iattr *); 2226 + extern int notify_change(struct dentry *, struct iattr *, struct inode **); 2282 2227 extern int inode_permission(struct inode *, int); 2283 2228 extern int generic_permission(struct inode *, int); 2284 2229 ··· 2392 2337 extern int inode_init_always(struct super_block *, struct inode *); 2393 2338 extern void inode_init_once(struct inode *); 2394 2339 extern void address_space_init_once(struct address_space *mapping); 2395 - extern void ihold(struct inode * inode); 2396 - extern void iput(struct inode *); 2397 2340 extern struct inode * igrab(struct inode *); 2398 2341 extern ino_t iunique(struct super_block *, ino_t); 2399 2342 extern int inode_needs_sync(struct inode *inode); ··· 2560 2507 int nofs); 2561 2508 extern int page_symlink(struct inode *inode, const char *symname, int len); 2562 2509 extern const struct inode_operations page_symlink_inode_operations; 2510 + extern void kfree_put_link(struct dentry *, struct nameidata *, void *); 2563 2511 extern int generic_readlink(struct dentry *, char __user *, int); 2564 2512 extern void generic_fillattr(struct inode *, struct kstat *); 2513 + int vfs_getattr_nosec(struct path *path, struct kstat *stat); 2565 2514 extern int vfs_getattr(struct path *, struct kstat *); 2566 2515 void __inode_add_bytes(struct inode *inode, loff_t bytes); 2567 2516 void inode_add_bytes(struct inode *inode, loff_t bytes); ··· 2622 2567 extern int simple_write_end(struct file *file, struct address_space *mapping, 2623 
2568 loff_t pos, unsigned len, unsigned copied, 2624 2569 struct page *page, void *fsdata); 2570 + extern struct inode *alloc_anon_inode(struct super_block *); 2625 2571 2626 2572 extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned int flags); 2627 2573 extern ssize_t generic_read_dir(struct file *, char __user *, size_t, loff_t *);
-10
include/linux/lglock.h
··· 25 25 #include <linux/cpu.h> 26 26 #include <linux/notifier.h> 27 27 28 - /* can make br locks by using local lock for read side, global lock for write */ 29 - #define br_lock_init(name) lg_lock_init(name, #name) 30 - #define br_read_lock(name) lg_local_lock(name) 31 - #define br_read_unlock(name) lg_local_unlock(name) 32 - #define br_write_lock(name) lg_global_lock(name) 33 - #define br_write_unlock(name) lg_global_unlock(name) 34 - 35 - #define DEFINE_BRLOCK(name) DEFINE_LGLOCK(name) 36 - #define DEFINE_STATIC_BRLOCK(name) DEFINE_STATIC_LGLOCK(name) 37 - 38 28 #ifdef CONFIG_DEBUG_LOCK_ALLOC 39 29 #define LOCKDEP_INIT_MAP lockdep_init_map 40 30 #else
+2
include/linux/mount.h
··· 49 49 50 50 #define MNT_LOCK_READONLY 0x400000 51 51 #define MNT_LOCKED 0x800000 52 + #define MNT_DOOMED 0x1000000 53 + #define MNT_SYNC_UMOUNT 0x2000000 52 54 53 55 struct vfsmount { 54 56 struct dentry *mnt_root; /* root of the mounted tree */
+1 -1
include/linux/namei.h
··· 16 16 struct path root; 17 17 struct inode *inode; /* path.dentry.d_inode */ 18 18 unsigned int flags; 19 - unsigned seq; 19 + unsigned seq, m_seq; 20 20 int last_type; 21 21 unsigned depth; 22 22 char *saved_names[MAX_NESTED_LINKS + 1];
+1
include/linux/pid_namespace.h
··· 23 23 struct pid_namespace { 24 24 struct kref kref; 25 25 struct pidmap pidmap[PIDMAP_ENTRIES]; 26 + struct rcu_head rcu; 26 27 int last_pid; 27 28 unsigned int nr_hashed; 28 29 struct task_struct *child_reaper;
+1 -1
ipc/mqueue.c
··· 886 886 err = -ENOENT; 887 887 } else { 888 888 ihold(inode); 889 - err = vfs_unlink(dentry->d_parent->d_inode, dentry); 889 + err = vfs_unlink(dentry->d_parent->d_inode, dentry, NULL); 890 890 } 891 891 dput(dentry); 892 892
+3 -7
kernel/elfcore.c
··· 1 1 #include <linux/elf.h> 2 2 #include <linux/fs.h> 3 3 #include <linux/mm.h> 4 - 5 - #include <asm/elf.h> 6 - 4 + #include <linux/binfmts.h> 7 5 8 6 Elf_Half __weak elf_core_extra_phdrs(void) 9 7 { 10 8 return 0; 11 9 } 12 10 13 - int __weak elf_core_write_extra_phdrs(struct file *file, loff_t offset, size_t *size, 14 - unsigned long limit) 11 + int __weak elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset) 15 12 { 16 13 return 1; 17 14 } 18 15 19 - int __weak elf_core_write_extra_data(struct file *file, size_t *size, 20 - unsigned long limit) 16 + int __weak elf_core_write_extra_data(struct coredump_params *cprm) 21 17 { 22 18 return 1; 23 19 }
+7 -1
kernel/pid_namespace.c
··· 132 132 return ERR_PTR(err); 133 133 } 134 134 135 + static void delayed_free_pidns(struct rcu_head *p) 136 + { 137 + kmem_cache_free(pid_ns_cachep, 138 + container_of(p, struct pid_namespace, rcu)); 139 + } 140 + 135 141 static void destroy_pid_namespace(struct pid_namespace *ns) 136 142 { 137 143 int i; ··· 146 140 for (i = 0; i < PIDMAP_ENTRIES; i++) 147 141 kfree(ns->pidmap[i].page); 148 142 put_user_ns(ns->user_ns); 149 - kmem_cache_free(pid_ns_cachep, ns); 143 + call_rcu(&ns->rcu, delayed_free_pidns); 150 144 } 151 145 152 146 struct pid_namespace *copy_pid_ns(unsigned long flags,
+1 -1
kernel/signal.c
··· 2723 2723 2724 2724 #ifndef HAVE_ARCH_COPY_SIGINFO_TO_USER 2725 2725 2726 - int copy_siginfo_to_user(siginfo_t __user *to, siginfo_t *from) 2726 + int copy_siginfo_to_user(siginfo_t __user *to, const siginfo_t *from) 2727 2727 { 2728 2728 int err; 2729 2729
+1 -1
mm/memory.c
··· 681 681 if (vma->vm_ops) 682 682 printk(KERN_ALERT "vma->vm_ops->fault: %pSR\n", 683 683 vma->vm_ops->fault); 684 - if (vma->vm_file && vma->vm_file->f_op) 684 + if (vma->vm_file) 685 685 printk(KERN_ALERT "vma->vm_file->f_op->mmap: %pSR\n", 686 686 vma->vm_file->f_op->mmap); 687 687 dump_stack();
+2 -2
mm/mmap.c
··· 1299 1299 vm_flags &= ~VM_MAYEXEC; 1300 1300 } 1301 1301 1302 - if (!file->f_op || !file->f_op->mmap) 1302 + if (!file->f_op->mmap) 1303 1303 return -ENODEV; 1304 1304 if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP)) 1305 1305 return -EINVAL; ··· 1951 1951 return -ENOMEM; 1952 1952 1953 1953 get_area = current->mm->get_unmapped_area; 1954 - if (file && file->f_op && file->f_op->get_unmapped_area) 1954 + if (file && file->f_op->get_unmapped_area) 1955 1955 get_area = file->f_op->get_unmapped_area; 1956 1956 addr = get_area(file, addr, len, pgoff, flags); 1957 1957 if (IS_ERR_VALUE(addr))
+1 -1
mm/nommu.c
··· 937 937 struct address_space *mapping; 938 938 939 939 /* files must support mmap */ 940 - if (!file->f_op || !file->f_op->mmap) 940 + if (!file->f_op->mmap) 941 941 return -ENODEV; 942 942 943 943 /* work out if what we've got could possibly be shared
+2 -2
net/9p/trans_fd.c
··· 244 244 if (!ts) 245 245 return -EREMOTEIO; 246 246 247 - if (!ts->rd->f_op || !ts->rd->f_op->poll) 247 + if (!ts->rd->f_op->poll) 248 248 return -EIO; 249 249 250 - if (!ts->wr->f_op || !ts->wr->f_op->poll) 250 + if (!ts->wr->f_op->poll) 251 251 return -EIO; 252 252 253 253 ret = ts->rd->f_op->poll(ts->rd, pt);
+6 -6
net/sunrpc/rpc_pipe.c
··· 519 519 d_add(dentry, inode); 520 520 return 0; 521 521 out_err: 522 - printk(KERN_WARNING "%s: %s failed to allocate inode for dentry %s\n", 523 - __FILE__, __func__, dentry->d_name.name); 522 + printk(KERN_WARNING "%s: %s failed to allocate inode for dentry %pd\n", 523 + __FILE__, __func__, dentry); 524 524 dput(dentry); 525 525 return -ENOMEM; 526 526 } ··· 755 755 out_bad: 756 756 __rpc_depopulate(parent, files, start, eof); 757 757 mutex_unlock(&dir->i_mutex); 758 - printk(KERN_WARNING "%s: %s failed to populate directory %s\n", 759 - __FILE__, __func__, parent->d_name.name); 758 + printk(KERN_WARNING "%s: %s failed to populate directory %pd\n", 759 + __FILE__, __func__, parent); 760 760 return err; 761 761 } 762 762 ··· 852 852 return dentry; 853 853 out_err: 854 854 dentry = ERR_PTR(err); 855 - printk(KERN_WARNING "%s: %s() failed to create pipe %s/%s (errno = %d)\n", 856 - __FILE__, __func__, parent->d_name.name, name, 855 + printk(KERN_WARNING "%s: %s() failed to create pipe %pd/%s (errno = %d)\n", 856 + __FILE__, __func__, parent, name, 857 857 err); 858 858 goto out; 859 859 }
+6 -16
sound/core/sound.c
··· 153 153 { 154 154 unsigned int minor = iminor(inode); 155 155 struct snd_minor *mptr = NULL; 156 - const struct file_operations *old_fops; 156 + const struct file_operations *new_fops; 157 157 int err = 0; 158 158 159 159 if (minor >= ARRAY_SIZE(snd_minors)) ··· 167 167 return -ENODEV; 168 168 } 169 169 } 170 - old_fops = file->f_op; 171 - file->f_op = fops_get(mptr->f_ops); 172 - if (file->f_op == NULL) { 173 - file->f_op = old_fops; 174 - err = -ENODEV; 175 - } 170 + new_fops = fops_get(mptr->f_ops); 176 171 mutex_unlock(&sound_mutex); 177 - if (err < 0) 178 - return err; 172 + if (!new_fops) 173 + return -ENODEV; 174 + replace_fops(file, new_fops); 179 175 180 - if (file->f_op->open) { 176 + if (file->f_op->open) 181 177 err = file->f_op->open(inode, file); 182 - if (err) { 183 - fops_put(file->f_op); 184 - file->f_op = fops_get(old_fops); 185 - } 186 - } 187 - fops_put(old_fops); 188 178 return err; 189 179 } 190 180
+3 -14
sound/sound_core.c
··· 626 626 if (s) 627 627 new_fops = fops_get(s->unit_fops); 628 628 } 629 + spin_unlock(&sound_loader_lock); 629 630 if (new_fops) { 630 631 /* 631 632 * We rely upon the fact that we can't be unloaded while the 632 - * subdriver is there, so if ->open() is successful we can 633 - * safely drop the reference counter and if it is not we can 634 - * revert to old ->f_op. Ugly, indeed, but that's the cost of 635 - * switching ->f_op in the first place. 633 + * subdriver is there. 636 634 */ 637 635 int err = 0; 638 - const struct file_operations *old_fops = file->f_op; 639 - file->f_op = new_fops; 640 - spin_unlock(&sound_loader_lock); 636 + replace_fops(file, new_fops); 641 637 642 638 if (file->f_op->open) 643 639 err = file->f_op->open(inode,file); 644 640 645 - if (err) { 646 - fops_put(file->f_op); 647 - file->f_op = fops_get(old_fops); 648 - } 649 - 650 - fops_put(old_fops); 651 641 return err; 652 642 } 653 - spin_unlock(&sound_loader_lock); 654 643 return -ENODEV; 655 644 } 656 645