Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull VFS changes from Al Viro:
"First pile out of several (there _definitely_ will be more). Stuff in
this one:

- unification of d_splice_alias()/d_materialize_unique()

- iov_iter rewrite

- killing a bunch of ->f_path.dentry users (and f_dentry macro).

Getting that completed will make life much simpler for
unionmount/overlayfs, since then we'll be able to limit the places
sensitive to file _dentry_ to reasonably few. Which allows having
file_inode(file) point to an inode in a covered layer, with the dentry
pointing to a (negative) dentry in the union one.

Still not complete, but much closer now.

- crapectomy in lustre (dead code removal, mostly)

- "let's make seq_printf return nothing" preparations

- assorted cleanups and fixes

There _definitely_ will be more piles"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (63 commits)
copy_from_iter_nocache()
new helper: iov_iter_kvec()
csum_and_copy_..._iter()
iov_iter.c: handle ITER_KVEC directly
iov_iter.c: convert copy_to_iter() to iterate_and_advance
iov_iter.c: convert copy_from_iter() to iterate_and_advance
iov_iter.c: get rid of bvec_copy_page_{to,from}_iter()
iov_iter.c: convert iov_iter_zero() to iterate_and_advance
iov_iter.c: convert iov_iter_get_pages_alloc() to iterate_all_kinds
iov_iter.c: convert iov_iter_get_pages() to iterate_all_kinds
iov_iter.c: convert iov_iter_npages() to iterate_all_kinds
iov_iter.c: iterate_and_advance
iov_iter.c: macros for iterating over iov_iter
kill f_dentry macro
dcache: fix kmemcheck warning in switch_names
new helper: audit_file()
nfsd_vfs_write(): use file_inode()
ncpfs: use file_inode()
kill f_dentry uses
lockd: get rid of ->f_path.dentry->d_sb
...

+1675 -2175
+1 -1
Documentation/filesystems/debugfs.txt
···
 			     struct dentry *parent,
 			     struct debugfs_regset32 *regset);
 
-    int debugfs_print_regs32(struct seq_file *s, struct debugfs_reg32 *regs,
+    void debugfs_print_regs32(struct seq_file *s, struct debugfs_reg32 *regs,
 			 int nregs, void __iomem *base, char *prefix);
 
 The "base" argument may be 0, but you may want to build the reg32 array
+5 -18
Documentation/filesystems/nfs/Exporting
···
   DCACHE_DISCONNECTED) dentry is allocated and attached.
   In the case of a directory, care is taken that only one dentry
   can ever be attached.
- d_splice_alias(inode, dentry) or d_materialise_unique(dentry, inode)
- will introduce a new dentry into the tree; either the passed-in
- dentry or a preexisting alias for the given inode (such as an
- anonymous one created by d_obtain_alias), if appropriate. The two
- functions differ in their handling of directories with preexisting
- aliases:
-   d_splice_alias will use any existing IS_ROOT dentry, but it will
-   return -EIO rather than try to move a dentry with a different
-   parent. This is appropriate for local filesystems, which
-   should never see such an alias unless the filesystem is
-   corrupted somehow (for example, if two on-disk directory
-   entries refer to the same directory.)
-   d_materialise_unique will attempt to move any dentry. This is
-   appropriate for distributed filesystems, where finding a
-   directory other than where we last cached it may be a normal
-   consequence of concurrent operations on other hosts.
- Both functions return NULL when the passed-in dentry is used,
- following the calling convention of ->lookup.
+ d_splice_alias(inode, dentry) will introduce a new dentry into the tree;
+ either the passed-in dentry or a preexisting alias for the given inode
+ (such as an anonymous one created by d_obtain_alias), if appropriate.
+ It returns NULL when the passed-in dentry is used, following the calling
+ convention of ->lookup.
 
 
 Filesystem Issues
+8
Documentation/filesystems/porting
···
 of the in-tree instances did). inode_hash_lock is still held,
 of course, so they are still serialized wrt removal from inode hash,
 as well as wrt set() callback of iget5_locked().
+--
+[mandatory]
+	d_materialise_unique() is gone; d_splice_alias() does everything you
+	need now. Remember that they have opposite orders of arguments ;-/
+--
+[mandatory]
+	f_dentry is gone; use f_path.dentry, or, better yet, see if you can avoid
+	it entirely.
+13 -9
Documentation/filesystems/seq_file.txt
···
 been defined which make this task easy.
 
 Most code will simply use seq_printf(), which works pretty much like
-printk(), but which requires the seq_file pointer as an argument. It is
-common to ignore the return value from seq_printf(), but a function
-producing complicated output may want to check that value and quit if
-something non-zero is returned; an error return means that the seq_file
-buffer has been filled and further output will be discarded.
+printk(), but which requires the seq_file pointer as an argument.
 
 For straight character output, the following functions may be used:
 
-	int seq_putc(struct seq_file *m, char c);
-	int seq_puts(struct seq_file *m, const char *s);
-	int seq_escape(struct seq_file *m, const char *s, const char *esc);
+	seq_putc(struct seq_file *m, char c);
+	seq_puts(struct seq_file *m, const char *s);
+	seq_escape(struct seq_file *m, const char *s, const char *esc);
 
 The first two output a single character and a string, just like one would
 expect. seq_escape() is like seq_puts(), except that any character in s
 which is in the string esc will be represented in octal form in the output.
 
-There is also a pair of functions for printing filenames:
+There are also a pair of functions for printing filenames:
 
 	int seq_path(struct seq_file *m, struct path *path, char *esc);
 	int seq_path_root(struct seq_file *m, struct path *path,
···
 root is desired, it can be used with seq_path_root(). Note that, if it
 turns out that path cannot be reached from root, the value of root will be
 changed in seq_file_root() to a root which *does* work.
+
+A function producing complicated output may want to check
+	bool seq_has_overflowed(struct seq_file *m);
+and avoid further seq_<output> calls if true is returned.
+
+A true return from seq_has_overflowed means that the seq_file buffer will
+be discarded and the seq_show function will attempt to allocate a larger
+buffer and retry printing.
 
 
 Making it all work
+1 -1
Documentation/filesystems/vfs.txt
···
 	ssize_t (*splice_read)(struct file *, struct pipe_inode_info *, size_t, unsigned int);
 	int (*setlease)(struct file *, long arg, struct file_lock **, void **);
 	long (*fallocate)(struct file *, int mode, loff_t offset, loff_t len);
-	int (*show_fdinfo)(struct seq_file *m, struct file *f);
+	void (*show_fdinfo)(struct seq_file *m, struct file *f);
 };
 
 Again, all methods are called without any locks being held, unless
+4 -3
arch/alpha/kernel/osf_sys.c
···
 };
 
 static int
-osf_filldir(void *__buf, const char *name, int namlen, loff_t offset,
-	    u64 ino, unsigned int d_type)
+osf_filldir(struct dir_context *ctx, const char *name, int namlen,
+	    loff_t offset, u64 ino, unsigned int d_type)
 {
 	struct osf_dirent __user *dirent;
-	struct osf_dirent_callback *buf = (struct osf_dirent_callback *) __buf;
+	struct osf_dirent_callback *buf =
+		container_of(ctx, struct osf_dirent_callback, ctx);
 	unsigned int reclen = ALIGN(NAME_OFFSET + namlen + 1, sizeof(u32));
 	unsigned int d_ino;
 
+4 -3
arch/parisc/hpux/fs.c
···
 
 #define NAME_OFFSET(de) ((int) ((de)->d_name - (char __user *) (de)))
 
-static int filldir(void * __buf, const char * name, int namlen, loff_t offset,
-		   u64 ino, unsigned d_type)
+static int filldir(struct dir_context *ctx, const char *name, int namlen,
+		   loff_t offset, u64 ino, unsigned d_type)
 {
 	struct hpux_dirent __user * dirent;
-	struct getdents_callback * buf = (struct getdents_callback *) __buf;
+	struct getdents_callback *buf =
+		container_of(ctx, struct getdents_callback, ctx);
 	ino_t d_ino;
 	int reclen = ALIGN(NAME_OFFSET(dirent) + namlen + 1, sizeof(long));
 
+4 -6
arch/powerpc/oprofile/cell/spu_task_sync.c
···
 
 	if (mm->exe_file) {
 		app_cookie = fast_get_dcookie(&mm->exe_file->f_path);
-		pr_debug("got dcookie for %s\n",
-			 mm->exe_file->f_dentry->d_name.name);
+		pr_debug("got dcookie for %pD\n", mm->exe_file);
 	}
 
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
···
 		if (!vma->vm_file)
 			goto fail_no_image_cookie;
 
-		pr_debug("Found spu ELF at %X(object-id:%lx) for file %s\n",
-			 my_offset, spu_ref,
-			 vma->vm_file->f_dentry->d_name.name);
+		pr_debug("Found spu ELF at %X(object-id:%lx) for file %pD\n",
+			 my_offset, spu_ref, vma->vm_file);
 		*offsetp = my_offset;
 		break;
 	}
 
 	*spu_bin_dcookie = fast_get_dcookie(&vma->vm_file->f_path);
-	pr_debug("got dcookie for %s\n", vma->vm_file->f_dentry->d_name.name);
+	pr_debug("got dcookie for %pD\n", vma->vm_file);
 
 	up_read(&mm->mmap_sem);
 
+1 -1
arch/powerpc/platforms/cell/spufs/inode.c
···
 	struct dentry *dentry, *tmp;
 
 	mutex_lock(&dir->d_inode->i_mutex);
-	list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_u.d_child) {
+	list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_child) {
 		spin_lock(&dentry->d_lock);
 		if (!(d_unhashed(dentry)) && dentry->d_inode) {
 			dget_dlock(dentry);
+1 -2
arch/s390/hypfs/hypfs_dbfs.c
···
 
 static long dbfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
-	struct hypfs_dbfs_file *df;
+	struct hypfs_dbfs_file *df = file_inode(file)->i_private;
 	long rc;
 
-	df = file->f_path.dentry->d_inode->i_private;
 	mutex_lock(&df->lock);
 	if (df->unlocked_ioctl)
 		rc = df->unlocked_ioctl(file, cmd, arg);
+4 -4
arch/x86/ia32/ia32_aout.c
···
 	    time_after(jiffies, error_time + 5*HZ)) {
 		printk(KERN_WARNING
 		       "fd_offset is not page aligned. Please convert "
-		       "program: %s\n",
-		       bprm->file->f_path.dentry->d_name.name);
+		       "program: %pD\n",
+		       bprm->file);
 		error_time = jiffies;
 	}
 #endif
···
 	if (time_after(jiffies, error_time + 5*HZ)) {
 		printk(KERN_WARNING
 		       "N_TXTOFF is not page aligned. Please convert "
-		       "library: %s\n",
-		       file->f_path.dentry->d_name.name);
+		       "library: %pD\n",
+		       file);
 		error_time = jiffies;
 	}
 #endif
+3 -3
drivers/block/drbd/drbd_debugfs.c
···
 	return 0;
 }
 
-/* simple_positive(file->f_dentry) respectively debugfs_positive(),
+/* simple_positive(file->f_path.dentry) respectively debugfs_positive(),
  * but neither is "reachable" from here.
  * So we have our own inline version of it above. :-( */
 static inline int debugfs_positive(struct dentry *dentry)
···
 
 	/* Are we still linked,
 	 * or has debugfs_remove() already been called? */
-	parent = file->f_dentry->d_parent;
+	parent = file->f_path.dentry->d_parent;
 	/* not sure if this can happen: */
 	if (!parent || !parent->d_inode)
 		goto out;
 	/* serialize with d_delete() */
 	mutex_lock(&parent->d_inode->i_mutex);
 	/* Make sure the object is still alive */
-	if (debugfs_positive(file->f_dentry)
+	if (debugfs_positive(file->f_path.dentry)
 	    && kref_get_unless_zero(kref))
 		ret = 0;
 	mutex_unlock(&parent->d_inode->i_mutex);
+1 -1
drivers/gpu/drm/armada/armada_gem.c
···
 
 	obj->dev_addr = DMA_ERROR_CODE;
 
-	mapping = obj->obj.filp->f_path.dentry->d_inode->i_mapping;
+	mapping = file_inode(obj->obj.filp)->i_mapping;
 	mapping_set_gfp_mask(mapping, GFP_HIGHUSER | __GFP_RECLAIMABLE);
 
 	DRM_DEBUG_DRIVER("alloc obj %p size %zu\n", obj, size);
+2 -2
drivers/media/pci/zoran/zoran_procfs.c
···
 		return -EFAULT;
 	}
 	string[count] = 0;
-	dprintk(4, KERN_INFO "%s: write_proc: name=%s count=%zu zr=%p\n",
-		ZR_DEVNAME(zr), file->f_path.dentry->d_name.name, count, zr);
+	dprintk(4, KERN_INFO "%s: write_proc: name=%pD count=%zu zr=%p\n",
+		ZR_DEVNAME(zr), file, count, zr);
 	ldelim = " \t\n";
 	tdelim = "=";
 	line = strpbrk(sp, ldelim);
+1 -1
drivers/misc/genwqe/card_dev.c
···
 static void genwqe_vma_close(struct vm_area_struct *vma)
 {
 	unsigned long vsize = vma->vm_end - vma->vm_start;
-	struct inode *inode = vma->vm_file->f_dentry->d_inode;
+	struct inode *inode = file_inode(vma->vm_file);
 	struct dma_mapping *dma_map;
 	struct genwqe_dev *cd = container_of(inode->i_cdev, struct genwqe_dev,
 					    cdev_genwqe);
+2 -2
drivers/net/tun.c
···
 }
 
 #ifdef CONFIG_PROC_FS
-static int tun_chr_show_fdinfo(struct seq_file *m, struct file *f)
+static void tun_chr_show_fdinfo(struct seq_file *m, struct file *f)
 {
 	struct tun_struct *tun;
 	struct ifreq ifr;
···
 	if (tun)
 		tun_put(tun);
 
-	return seq_printf(m, "iff:\t%s\n", ifr.ifr_name);
+	seq_printf(m, "iff:\t%s\n", ifr.ifr_name);
 }
 #endif
 
+7 -9
drivers/s390/char/hmcdrv_dev.c
···
 	if (rc)
 		module_put(THIS_MODULE);
 
-	pr_debug("open file '/dev/%s' with return code %d\n",
-		 fp->f_dentry->d_name.name, rc);
+	pr_debug("open file '/dev/%pD' with return code %d\n", fp, rc);
 	return rc;
 }
···
  */
 static int hmcdrv_dev_release(struct inode *inode, struct file *fp)
 {
-	pr_debug("closing file '/dev/%s'\n", fp->f_dentry->d_name.name);
+	pr_debug("closing file '/dev/%pD'\n", fp);
 	kfree(fp->private_data);
 	fp->private_data = NULL;
 	hmcdrv_ftp_shutdown();
···
 	retlen = hmcdrv_dev_transfer((char *) fp->private_data,
 				     *pos, ubuf, len);
 
-	pr_debug("read from file '/dev/%s' at %lld returns %zd/%zu\n",
-		 fp->f_dentry->d_name.name, (long long) *pos, retlen, len);
+	pr_debug("read from file '/dev/%pD' at %lld returns %zd/%zu\n",
+		 fp, (long long) *pos, retlen, len);
 
 	if (retlen > 0)
 		*pos += retlen;
···
 {
 	ssize_t retlen;
 
-	pr_debug("writing file '/dev/%s' at pos. %lld with length %zd\n",
-		 fp->f_dentry->d_name.name, (long long) *pos, len);
+	pr_debug("writing file '/dev/%pD' at pos. %lld with length %zd\n",
+		 fp, (long long) *pos, len);
 
 	if (!fp->private_data) { /* first expect a cmd write */
 		fp->private_data = kmalloc(len + 1, GFP_KERNEL);
···
 	if (retlen > 0)
 		*pos += retlen;
 
-	pr_debug("write to file '/dev/%s' returned %zd\n",
-		 fp->f_dentry->d_name.name, retlen);
+	pr_debug("write to file '/dev/%pD' returned %zd\n", fp, retlen);
 
 	return retlen;
 }
+4 -4
drivers/scsi/lpfc/lpfc_debugfs.c
···
 		goto out;
 
 	/* Round to page boundary */
-	printk(KERN_ERR "9060 BLKGRD: %s: _dump_buf_dif=0x%p file=%s\n",
-		__func__, _dump_buf_dif, file->f_dentry->d_name.name);
+	printk(KERN_ERR "9060 BLKGRD: %s: _dump_buf_dif=0x%p file=%pD\n",
+		__func__, _dump_buf_dif, file);
 	debug->buffer = _dump_buf_dif;
 	if (!debug->buffer) {
 		kfree(debug);
···
 lpfc_debugfs_dif_err_read(struct file *file, char __user *buf,
 	size_t nbytes, loff_t *ppos)
 {
-	struct dentry *dent = file->f_dentry;
+	struct dentry *dent = file->f_path.dentry;
 	struct lpfc_hba *phba = file->private_data;
 	char cbuf[32];
 	uint64_t tmp = 0;
···
 lpfc_debugfs_dif_err_write(struct file *file, const char __user *buf,
 	size_t nbytes, loff_t *ppos)
 {
-	struct dentry *dent = file->f_dentry;
+	struct dentry *dent = file->f_path.dentry;
 	struct lpfc_hba *phba = file->private_data;
 	char dstbuf[32];
 	uint64_t tmp = 0;
+2 -2
drivers/staging/lustre/lustre/libcfs/tracefile.c
···
 
 		if (f_pos >= (off_t)cfs_tracefile_size)
 			f_pos = 0;
-		else if (f_pos > i_size_read(filp->f_dentry->d_inode))
-			f_pos = i_size_read(filp->f_dentry->d_inode);
+		else if (f_pos > i_size_read(file_inode(filp)))
+			f_pos = i_size_read(file_inode(filp));
 
 		buf = kmap(tage->page);
 		rc = vfs_write(filp, (__force const char __user *)buf,
+10 -11
drivers/staging/lustre/lustre/llite/dcache.c
···
 {
 	LASSERT(de);
 
-	CDEBUG(D_DENTRY, "%s dentry %.*s (%p, parent %p, inode %p) %s%s\n",
+	CDEBUG(D_DENTRY, "%s dentry %pd (%p, parent %p, inode %p) %s%s\n",
 	       d_lustre_invalid((struct dentry *)de) ? "deleting" : "keeping",
-	       de->d_name.len, de->d_name.name, de, de->d_parent, de->d_inode,
-	       d_unhashed((struct dentry *)de) ? "" : "hashed,",
+	       de, de, de->d_parent, de->d_inode,
+	       d_unhashed(de) ? "" : "hashed,",
 	       list_empty(&de->d_subdirs) ? "" : "subdirs");
 
 	/* kernel >= 2.6.38 last refcount is decreased after this function. */
···
 {
 	LASSERT(de != NULL);
 
-	CDEBUG(D_DENTRY, "ldd on dentry %.*s (%p) parent %p inode %p refc %d\n",
-	       de->d_name.len, de->d_name.name, de, de->d_parent, de->d_inode,
+	CDEBUG(D_DENTRY, "ldd on dentry %pd (%p) parent %p inode %p refc %d\n",
+	       de, de, de->d_parent, de->d_inode,
 	       d_count(de));
 
 	if (de->d_fsdata == NULL) {
···
 		   inode->i_ino, inode->i_generation, inode);
 
 	ll_lock_dcache(inode);
-	ll_d_hlist_for_each_entry(dentry, p, &inode->i_dentry, d_alias) {
-		CDEBUG(D_DENTRY, "dentry in drop %.*s (%p) parent %p "
-		       "inode %p flags %d\n", dentry->d_name.len,
-		       dentry->d_name.name, dentry, dentry->d_parent,
+	ll_d_hlist_for_each_entry(dentry, p, &inode->i_dentry, d_u.d_alias) {
+		CDEBUG(D_DENTRY, "dentry in drop %pd (%p) parent %p "
+		       "inode %p flags %d\n", dentry, dentry, dentry->d_parent,
 		       dentry->d_inode, dentry->d_flags);
 
 		if (unlikely(dentry == dentry->d_sb->s_root)) {
···
 {
 	int rc;
 
-	CDEBUG(D_VFSTRACE, "VFS Op:name=%s, flags=%u\n",
-	       dentry->d_name.name, flags);
+	CDEBUG(D_VFSTRACE, "VFS Op:name=%pd, flags=%u\n",
+	       dentry, flags);
 
 	rc = ll_revalidate_dentry(dentry, flags);
 	return rc;
+4 -5
drivers/staging/lustre/lustre/llite/dir.c
···
 
 static int ll_readdir(struct file *filp, struct dir_context *ctx)
 {
-	struct inode		*inode	= filp->f_dentry->d_inode;
+	struct inode		*inode	= file_inode(filp);
 	struct ll_file_data	*lfd	= LUSTRE_FPRIVATE(filp);
 	struct ll_sb_info	*sbi	= ll_i2sbi(inode);
 	int			hash64	= sbi->ll_flags & LL_SBI_64BIT_HASH;
···
 
 static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
-	struct inode *inode = file->f_dentry->d_inode;
+	struct inode *inode = file_inode(file);
 	struct ll_sb_info *sbi = ll_i2sbi(inode);
 	struct obd_ioctl_data *data;
 	int rc = 0;
···
 			return -EFAULT;
 		}
 
-		if (inode->i_sb->s_root == file->f_dentry)
+		if (is_root_inode(inode))
 			set_default = 1;
 
 		/* in v1 and v3 cases lumv1 points to data */
···
 		return ll_flush_ctx(inode);
 #ifdef CONFIG_FS_POSIX_ACL
 	case LL_IOC_RMTACL: {
-		if (sbi->ll_flags & LL_SBI_RMT_CLIENT &&
-		    inode == inode->i_sb->s_root->d_inode) {
+		if (sbi->ll_flags & LL_SBI_RMT_CLIENT && is_root_inode(inode)) {
 			struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
 
 			LASSERT(fd != NULL);
+70 -88
drivers/staging/lustre/lustre/llite/file.c
··· 266 266 { 267 267 struct ll_file_data *fd = LUSTRE_FPRIVATE(file); 268 268 struct ll_inode_info *lli = ll_i2info(inode); 269 + int lockmode; 270 + __u64 flags = LDLM_FL_BLOCK_GRANTED | LDLM_FL_TEST_LOCK; 271 + struct lustre_handle lockh; 272 + ldlm_policy_data_t policy = {.l_inodebits={MDS_INODELOCK_OPEN}}; 269 273 int rc = 0; 270 274 271 275 /* clear group lock, if present */ ··· 296 292 297 293 /* Let's see if we have good enough OPEN lock on the file and if 298 294 we can skip talking to MDS */ 299 - if (file->f_dentry->d_inode) { /* Can this ever be false? */ 300 - int lockmode; 301 - __u64 flags = LDLM_FL_BLOCK_GRANTED | LDLM_FL_TEST_LOCK; 302 - struct lustre_handle lockh; 303 - struct inode *inode = file->f_dentry->d_inode; 304 - ldlm_policy_data_t policy = {.l_inodebits={MDS_INODELOCK_OPEN}}; 305 295 306 - mutex_lock(&lli->lli_och_mutex); 307 - if (fd->fd_omode & FMODE_WRITE) { 308 - lockmode = LCK_CW; 309 - LASSERT(lli->lli_open_fd_write_count); 310 - lli->lli_open_fd_write_count--; 311 - } else if (fd->fd_omode & FMODE_EXEC) { 312 - lockmode = LCK_PR; 313 - LASSERT(lli->lli_open_fd_exec_count); 314 - lli->lli_open_fd_exec_count--; 315 - } else { 316 - lockmode = LCK_CR; 317 - LASSERT(lli->lli_open_fd_read_count); 318 - lli->lli_open_fd_read_count--; 319 - } 320 - mutex_unlock(&lli->lli_och_mutex); 321 - 322 - if (!md_lock_match(md_exp, flags, ll_inode2fid(inode), 323 - LDLM_IBITS, &policy, lockmode, 324 - &lockh)) { 325 - rc = ll_md_real_close(file->f_dentry->d_inode, 326 - fd->fd_omode); 327 - } 296 + mutex_lock(&lli->lli_och_mutex); 297 + if (fd->fd_omode & FMODE_WRITE) { 298 + lockmode = LCK_CW; 299 + LASSERT(lli->lli_open_fd_write_count); 300 + lli->lli_open_fd_write_count--; 301 + } else if (fd->fd_omode & FMODE_EXEC) { 302 + lockmode = LCK_PR; 303 + LASSERT(lli->lli_open_fd_exec_count); 304 + lli->lli_open_fd_exec_count--; 328 305 } else { 329 - CERROR("Releasing a file %p with negative dentry %p. 
Name %s", 330 - file, file->f_dentry, file->f_dentry->d_name.name); 306 + lockmode = LCK_CR; 307 + LASSERT(lli->lli_open_fd_read_count); 308 + lli->lli_open_fd_read_count--; 331 309 } 310 + mutex_unlock(&lli->lli_och_mutex); 311 + 312 + if (!md_lock_match(md_exp, flags, ll_inode2fid(inode), 313 + LDLM_IBITS, &policy, lockmode, &lockh)) 314 + rc = ll_md_real_close(inode, fd->fd_omode); 332 315 333 316 out: 334 317 LUSTRE_FPRIVATE(file) = NULL; ··· 341 350 inode->i_generation, inode); 342 351 343 352 #ifdef CONFIG_FS_POSIX_ACL 344 - if (sbi->ll_flags & LL_SBI_RMT_CLIENT && 345 - inode == inode->i_sb->s_root->d_inode) { 353 + if (sbi->ll_flags & LL_SBI_RMT_CLIENT && is_root_inode(inode)) { 346 354 struct ll_file_data *fd = LUSTRE_FPRIVATE(file); 347 355 348 356 LASSERT(fd != NULL); ··· 353 363 } 354 364 #endif 355 365 356 - if (inode->i_sb->s_root != file->f_dentry) 366 + if (!is_root_inode(inode)) 357 367 ll_stats_ops_tally(sbi, LPROC_LL_RELEASE, 1); 358 368 fd = LUSTRE_FPRIVATE(file); 359 369 LASSERT(fd != NULL); ··· 365 375 lli->lli_opendir_pid != 0) 366 376 ll_stop_statahead(inode, lli->lli_opendir_key); 367 377 368 - if (inode->i_sb->s_root == file->f_dentry) { 378 + if (is_root_inode(inode)) { 369 379 LUSTRE_FPRIVATE(file) = NULL; 370 380 ll_file_data_put(fd); 371 381 return 0; ··· 384 394 return rc; 385 395 } 386 396 387 - static int ll_intent_file_open(struct file *file, void *lmm, 397 + static int ll_intent_file_open(struct dentry *dentry, void *lmm, 388 398 int lmmsize, struct lookup_intent *itp) 389 399 { 390 - struct ll_sb_info *sbi = ll_i2sbi(file->f_dentry->d_inode); 391 - struct dentry *parent = file->f_dentry->d_parent; 392 - const char *name = file->f_dentry->d_name.name; 393 - const int len = file->f_dentry->d_name.len; 400 + struct inode *inode = dentry->d_inode; 401 + struct ll_sb_info *sbi = ll_i2sbi(inode); 402 + struct dentry *parent = dentry->d_parent; 403 + const char *name = dentry->d_name.name; 404 + const int len = dentry->d_name.len; 394 
405 struct md_op_data *op_data; 395 406 struct ptlrpc_request *req; 396 407 __u32 opc = LUSTRE_OPC_ANY; 397 408 int rc; 398 - 399 - if (!parent) 400 - return -ENOENT; 401 409 402 410 /* Usually we come here only for NFSD, and we want open lock. 403 411 But we can also get here with pre 2.6.15 patchless kernels, and in ··· 413 425 } 414 426 415 427 op_data = ll_prep_md_op_data(NULL, parent->d_inode, 416 - file->f_dentry->d_inode, name, len, 428 + inode, name, len, 417 429 O_RDWR, opc, NULL); 418 430 if (IS_ERR(op_data)) 419 431 return PTR_ERR(op_data); ··· 429 441 if (!it_disposition(itp, DISP_OPEN_OPEN) || 430 442 it_open_error(DISP_OPEN_OPEN, itp)) 431 443 goto out; 432 - ll_release_openhandle(file->f_dentry, itp); 444 + ll_release_openhandle(inode, itp); 433 445 goto out; 434 446 } 435 447 ··· 444 456 goto out; 445 457 } 446 458 447 - rc = ll_prep_inode(&file->f_dentry->d_inode, req, NULL, itp); 459 + rc = ll_prep_inode(&inode, req, NULL, itp); 448 460 if (!rc && itp->d.lustre.it_lock_mode) 449 - ll_set_lock_data(sbi->ll_md_exp, file->f_dentry->d_inode, 450 - itp, NULL); 461 + ll_set_lock_data(sbi->ll_md_exp, inode, itp, NULL); 451 462 452 463 out: 453 464 ptlrpc_req_finished(req); ··· 488 501 static int ll_local_open(struct file *file, struct lookup_intent *it, 489 502 struct ll_file_data *fd, struct obd_client_handle *och) 490 503 { 491 - struct inode *inode = file->f_dentry->d_inode; 504 + struct inode *inode = file_inode(file); 492 505 struct ll_inode_info *lli = ll_i2info(inode); 493 506 494 507 LASSERT(!LUSTRE_FPRIVATE(file)); ··· 561 574 spin_unlock(&lli->lli_sa_lock); 562 575 } 563 576 564 - if (inode->i_sb->s_root == file->f_dentry) { 577 + if (is_root_inode(inode)) { 565 578 LUSTRE_FPRIVATE(file) = fd; 566 579 return 0; 567 580 } ··· 619 632 goto out_openerr; 620 633 } 621 634 622 - ll_release_openhandle(file->f_dentry, it); 635 + ll_release_openhandle(inode, it); 623 636 } 624 637 (*och_usecount)++; 625 638 ··· 639 652 result in a deadlock */ 640 653 
mutex_unlock(&lli->lli_och_mutex); 641 654 it->it_create_mode |= M_CHECK_STALE; 642 - rc = ll_intent_file_open(file, NULL, 0, it); 655 + rc = ll_intent_file_open(file->f_path.dentry, NULL, 0, it); 643 656 it->it_create_mode &= ~M_CHECK_STALE; 644 657 if (rc) 645 658 goto out_openerr; ··· 1052 1065 static bool file_is_noatime(const struct file *file) 1053 1066 { 1054 1067 const struct vfsmount *mnt = file->f_path.mnt; 1055 - const struct inode *inode = file->f_path.dentry->d_inode; 1068 + const struct inode *inode = file_inode(file); 1056 1069 1057 1070 /* Adapted from file_accessed() and touch_atime().*/ 1058 1071 if (file->f_flags & O_NOATIME) ··· 1078 1091 1079 1092 void ll_io_init(struct cl_io *io, const struct file *file, int write) 1080 1093 { 1081 - struct inode *inode = file->f_dentry->d_inode; 1094 + struct inode *inode = file_inode(file); 1082 1095 1083 1096 io->u.ci_rw.crw_nonblock = file->f_flags & O_NONBLOCK; 1084 1097 if (write) { ··· 1104 1117 struct file *file, enum cl_io_type iot, 1105 1118 loff_t *ppos, size_t count) 1106 1119 { 1107 - struct ll_inode_info *lli = ll_i2info(file->f_dentry->d_inode); 1120 + struct ll_inode_info *lli = ll_i2info(file_inode(file)); 1108 1121 struct ll_file_data *fd = LUSTRE_FPRIVATE(file); 1109 1122 struct cl_io *io; 1110 1123 ssize_t result; ··· 1165 1178 /* If any bit been read/written (result != 0), we just return 1166 1179 * short read/write instead of restart io. */ 1167 1180 if ((result == 0 || result == -ENODATA) && io->ci_need_restart) { 1168 - CDEBUG(D_VFSTRACE, "Restart %s on %s from %lld, count:%zd\n", 1181 + CDEBUG(D_VFSTRACE, "Restart %s on %pD from %lld, count:%zd\n", 1169 1182 iot == CIT_READ ? 
"read" : "write", 1170 - file->f_dentry->d_name.name, *ppos, count); 1183 + file, *ppos, count); 1171 1184 LASSERTF(io->ci_nob == 0, "%zd", io->ci_nob); 1172 1185 goto restart; 1173 1186 } 1174 1187 1175 1188 if (iot == CIT_READ) { 1176 1189 if (result >= 0) 1177 - ll_stats_ops_tally(ll_i2sbi(file->f_dentry->d_inode), 1190 + ll_stats_ops_tally(ll_i2sbi(file_inode(file)), 1178 1191 LPROC_LL_READ_BYTES, result); 1179 1192 } else if (iot == CIT_WRITE) { 1180 1193 if (result >= 0) { 1181 - ll_stats_ops_tally(ll_i2sbi(file->f_dentry->d_inode), 1194 + ll_stats_ops_tally(ll_i2sbi(file_inode(file)), 1182 1195 LPROC_LL_WRITE_BYTES, result); 1183 1196 fd->fd_write_failed = false; 1184 1197 } else if (result != -ERESTARTSYS) { ··· 1341 1354 return ll_lov_recreate(inode, &oi, ost_idx); 1342 1355 } 1343 1356 1344 - int ll_lov_setstripe_ea_info(struct inode *inode, struct file *file, 1357 + int ll_lov_setstripe_ea_info(struct inode *inode, struct dentry *dentry, 1345 1358 int flags, struct lov_user_md *lum, int lum_size) 1346 1359 { 1347 1360 struct lov_stripe_md *lsm = NULL; ··· 1358 1371 } 1359 1372 1360 1373 ll_inode_size_lock(inode); 1361 - rc = ll_intent_file_open(file, lum, lum_size, &oit); 1374 + rc = ll_intent_file_open(dentry, lum, lum_size, &oit); 1362 1375 if (rc) 1363 1376 goto out_unlock; 1364 1377 rc = oit.d.lustre.it_status; 1365 1378 if (rc < 0) 1366 1379 goto out_req_free; 1367 1380 1368 - ll_release_openhandle(file->f_dentry, &oit); 1381 + ll_release_openhandle(inode, &oit); 1369 1382 1370 1383 out_unlock: 1371 1384 ll_inode_size_unlock(inode); 1372 1385 ll_intent_release(&oit); 1373 1386 ccc_inode_lsm_put(inode, lsm); 1374 1387 out: 1375 - cl_lov_delay_create_clear(&file->f_flags); 1376 1388 return rc; 1377 1389 out_req_free: 1378 1390 ptlrpc_req_finished((struct ptlrpc_request *) oit.d.lustre.it_data); ··· 1485 1499 return -EFAULT; 1486 1500 } 1487 1501 1488 - rc = ll_lov_setstripe_ea_info(inode, file, flags, lump, lum_size); 1502 + rc = 
ll_lov_setstripe_ea_info(inode, file->f_path.dentry, flags, lump, 1503 + lum_size); 1504 + cl_lov_delay_create_clear(&file->f_flags); 1489 1505 1490 1506 OBD_FREE_LARGE(lump, lum_size); 1491 1507 return rc; ··· 1514 1526 return -EFAULT; 1515 1527 } 1516 1528 1517 - rc = ll_lov_setstripe_ea_info(inode, file, flags, lumv1, lum_size); 1529 + rc = ll_lov_setstripe_ea_info(inode, file->f_path.dentry, flags, lumv1, 1530 + lum_size); 1531 + cl_lov_delay_create_clear(&file->f_flags); 1518 1532 if (rc == 0) { 1519 1533 struct lov_stripe_md *lsm; 1520 1534 __u32 gen; ··· 1621 1631 /** 1622 1632 * Close inode open handle 1623 1633 * 1624 - * \param dentry [in] dentry which contains the inode 1634 + * \param inode [in] inode in question 1625 1635 * \param it [in,out] intent which contains open info and result 1626 1636 * 1627 1637 * \retval 0 success 1628 1638 * \retval <0 failure 1629 1639 */ 1630 - int ll_release_openhandle(struct dentry *dentry, struct lookup_intent *it) 1640 + int ll_release_openhandle(struct inode *inode, struct lookup_intent *it) 1631 1641 { 1632 - struct inode *inode = dentry->d_inode; 1633 1642 struct obd_client_handle *och; 1634 1643 int rc; 1635 1644 1636 1645 LASSERT(inode); 1637 1646 1638 1647 /* Root ? Do nothing. */ 1639 - if (dentry->d_inode->i_sb->s_root == dentry) 1648 + if (is_root_inode(inode)) 1640 1649 return 0; 1641 1650 1642 1651 /* No open handle to close? 
Move away */ ··· 1948 1959 if (!llss) 1949 1960 return -ENOMEM; 1950 1961 1951 - llss->inode1 = file1->f_dentry->d_inode; 1952 - llss->inode2 = file2->f_dentry->d_inode; 1962 + llss->inode1 = file_inode(file1); 1963 + llss->inode2 = file_inode(file2); 1953 1964 1954 1965 if (!S_ISREG(llss->inode2->i_mode)) { 1955 1966 rc = -EINVAL; ··· 2081 2092 rc = 0; 2082 2093 if (llss->ia2.ia_valid != 0) { 2083 2094 mutex_lock(&llss->inode1->i_mutex); 2084 - rc = ll_setattr(file1->f_dentry, &llss->ia2); 2095 + rc = ll_setattr(file1->f_path.dentry, &llss->ia2); 2085 2096 mutex_unlock(&llss->inode1->i_mutex); 2086 2097 } 2087 2098 ··· 2089 2100 int rc1; 2090 2101 2091 2102 mutex_lock(&llss->inode2->i_mutex); 2092 - rc1 = ll_setattr(file2->f_dentry, &llss->ia1); 2103 + rc1 = ll_setattr(file2->f_path.dentry, &llss->ia1); 2093 2104 mutex_unlock(&llss->inode2->i_mutex); 2094 2105 if (rc == 0) 2095 2106 rc = rc1; ··· 2174 2185 2175 2186 mutex_lock(&inode->i_mutex); 2176 2187 2177 - rc = ll_setattr_raw(file->f_dentry, attr, true); 2188 + rc = ll_setattr_raw(file->f_path.dentry, attr, true); 2178 2189 if (rc == -ENODATA) 2179 2190 rc = 0; 2180 2191 ··· 2193 2204 static long 2194 2205 ll_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg) 2195 2206 { 2196 - struct inode *inode = file->f_dentry->d_inode; 2207 + struct inode *inode = file_inode(file); 2197 2208 struct ll_file_data *fd = LUSTRE_FPRIVATE(file); 2198 2209 int flags, rc; 2199 2210 ··· 2512 2523 2513 2524 static loff_t ll_file_seek(struct file *file, loff_t offset, int origin) 2514 2525 { 2515 - struct inode *inode = file->f_dentry->d_inode; 2526 + struct inode *inode = file_inode(file); 2516 2527 loff_t retval, eof = 0; 2517 2528 2518 2529 retval = offset + ((origin == SEEK_END) ? 
i_size_read(inode) : ··· 2536 2547 2537 2548 static int ll_flush(struct file *file, fl_owner_t id) 2538 2549 { 2539 - struct inode *inode = file->f_dentry->d_inode; 2550 + struct inode *inode = file_inode(file); 2540 2551 struct ll_inode_info *lli = ll_i2info(inode); 2541 2552 struct ll_file_data *fd = LUSTRE_FPRIVATE(file); 2542 2553 int rc, err; ··· 2611 2622 return result; 2612 2623 } 2613 2624 2614 - /* 2615 - * When dentry is provided (the 'else' case), *file->f_dentry may be 2616 - * null and dentry must be used directly rather than pulled from 2617 - * *file->f_dentry as is done otherwise. 2618 - */ 2619 - 2620 2625 int ll_fsync(struct file *file, loff_t start, loff_t end, int datasync) 2621 2626 { 2622 - struct dentry *dentry = file->f_dentry; 2623 - struct inode *inode = dentry->d_inode; 2627 + struct inode *inode = file_inode(file); 2624 2628 struct ll_inode_info *lli = ll_i2info(inode); 2625 2629 struct ptlrpc_request *req; 2626 2630 struct obd_capa *oc; ··· 2666 2684 static int 2667 2685 ll_file_flock(struct file *file, int cmd, struct file_lock *file_lock) 2668 2686 { 2669 - struct inode *inode = file->f_dentry->d_inode; 2687 + struct inode *inode = file_inode(file); 2670 2688 struct ll_sb_info *sbi = ll_i2sbi(inode); 2671 2689 struct ldlm_enqueue_info einfo = { 2672 2690 .ei_type = LDLM_FLOCK, ··· 2890 2908 2891 2909 LASSERT(inode != NULL); 2892 2910 2893 - CDEBUG(D_VFSTRACE, "VFS Op:inode=%lu/%u(%p),name=%s\n", 2894 - inode->i_ino, inode->i_generation, inode, dentry->d_name.name); 2911 + CDEBUG(D_VFSTRACE, "VFS Op:inode=%lu/%u(%p),name=%pd\n", 2912 + inode->i_ino, inode->i_generation, inode, dentry); 2895 2913 2896 2914 exp = ll_i2mdexp(inode); 2897 2915 ··· 3101 3119 /* as root inode are NOT getting validated in lookup operation, 3102 3120 * need to do it before permission check. 
*/ 3103 3121 3104 - if (inode == inode->i_sb->s_root->d_inode) { 3122 + if (is_root_inode(inode)) { 3105 3123 rc = __ll_inode_revalidate(inode->i_sb->s_root, 3106 3124 MDS_INODELOCK_LOOKUP); 3107 3125 if (rc)
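The file.c hunks above repeatedly replace `file->f_dentry->d_inode` (and `file->f_path.dentry->d_inode`) with `file_inode(file)`. A minimal userspace sketch of why that accessor exists, with stand-in types (field names follow `<linux/fs.h>` but the layouts are illustrative only, not the real kernel structures):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures involved. */
struct inode { unsigned long i_ino; };
struct dentry { struct inode *d_inode; };
struct path { struct dentry *dentry; };
struct file {
	struct path f_path;
	struct inode *f_inode;	/* cached when the file is opened */
};

/* The accessor these hunks convert callers to: it reads the cached
 * f_inode instead of chasing f_path.dentry->d_inode. Per the merge
 * message, this is what lets file_inode(file) point at the inode in a
 * covered layer while the dentry belongs to the union/overlay layer. */
static inline struct inode *file_inode(const struct file *f)
{
	return f->f_inode;
}
```

In the common (non-layered) case the two expressions agree, which is why the conversion is mechanical here; the payoff comes once the cached inode and the dentry's inode are allowed to differ.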
+5 -5
drivers/staging/lustre/lustre/llite/llite_internal.h
··· 748 748 int ll_glimpse_ioctl(struct ll_sb_info *sbi, 749 749 struct lov_stripe_md *lsm, lstat_t *st); 750 750 void ll_ioepoch_open(struct ll_inode_info *lli, __u64 ioepoch); 751 - int ll_release_openhandle(struct dentry *, struct lookup_intent *); 751 + int ll_release_openhandle(struct inode *, struct lookup_intent *); 752 752 int ll_md_real_close(struct inode *inode, fmode_t fmode); 753 753 void ll_ioepoch_close(struct inode *inode, struct md_op_data *op_data, 754 754 struct obd_client_handle **och, unsigned long flags); ··· 763 763 764 764 int ll_inode_permission(struct inode *inode, int mask); 765 765 766 - int ll_lov_setstripe_ea_info(struct inode *inode, struct file *file, 766 + int ll_lov_setstripe_ea_info(struct inode *inode, struct dentry *dentry, 767 767 int flags, struct lov_user_md *lum, 768 768 int lum_size); 769 769 int ll_lov_getstripe_ea_info(struct inode *inode, const char *filename, ··· 1413 1413 static inline int ll_file_nolock(const struct file *file) 1414 1414 { 1415 1415 struct ll_file_data *fd = LUSTRE_FPRIVATE(file); 1416 - struct inode *inode = file->f_dentry->d_inode; 1416 + struct inode *inode = file_inode(file); 1417 1417 1418 1418 LASSERT(fd != NULL); 1419 1419 return ((fd->fd_flags & LL_FILE_IGNORE_LOCK) || ··· 1489 1489 */ 1490 1490 static inline void d_lustre_invalidate(struct dentry *dentry, int nested) 1491 1491 { 1492 - CDEBUG(D_DENTRY, "invalidate dentry %.*s (%p) parent %p inode %p " 1493 - "refc %d\n", dentry->d_name.len, dentry->d_name.name, dentry, 1492 + CDEBUG(D_DENTRY, "invalidate dentry %pd (%p) parent %p inode %p " 1493 + "refc %d\n", dentry, dentry, 1494 1494 dentry->d_parent, dentry->d_inode, d_count(dentry)); 1495 1495 1496 1496 spin_lock_nested(&dentry->d_lock,
+3 -5
drivers/staging/lustre/lustre/llite/llite_lib.c
··· 698 698 list_for_each(tmp, &dentry->d_subdirs) 699 699 subdirs++; 700 700 701 - CERROR("dentry %p dump: name=%.*s parent=%.*s (%p), inode=%p, count=%u," 702 - " flags=0x%x, fsdata=%p, %d subdirs\n", dentry, 703 - dentry->d_name.len, dentry->d_name.name, 704 - dentry->d_parent->d_name.len, dentry->d_parent->d_name.name, 701 + CERROR("dentry %p dump: name=%pd parent=%p, inode=%p, count=%u," 702 + " flags=0x%x, fsdata=%p, %d subdirs\n", dentry, dentry, 705 703 dentry->d_parent, dentry->d_inode, d_count(dentry), 706 704 dentry->d_flags, dentry->d_fsdata, subdirs); 707 705 if (dentry->d_inode != NULL) ··· 709 711 return; 710 712 711 713 list_for_each(tmp, &dentry->d_subdirs) { 712 - struct dentry *d = list_entry(tmp, struct dentry, d_u.d_child); 714 + struct dentry *d = list_entry(tmp, struct dentry, d_child); 713 715 lustre_dump_dentry(d, recur - 1); 714 716 } 715 717 }
+6 -6
drivers/staging/lustre/lustre/llite/llite_mmap.c
··· 100 100 unsigned long *ra_flags) 101 101 { 102 102 struct file *file = vma->vm_file; 103 - struct inode *inode = file->f_dentry->d_inode; 103 + struct inode *inode = file_inode(file); 104 104 struct cl_io *io; 105 105 struct cl_fault_io *fio; 106 106 struct lu_env *env; ··· 213 213 cfs_restore_sigs(set); 214 214 215 215 if (result == 0) { 216 - struct inode *inode = vma->vm_file->f_dentry->d_inode; 216 + struct inode *inode = file_inode(vma->vm_file); 217 217 struct ll_inode_info *lli = ll_i2info(inode); 218 218 219 219 lock_page(vmpage); ··· 396 396 CWARN("app(%s): the page %lu of file %lu is under heavy" 397 397 " contention.\n", 398 398 current->comm, vmf->pgoff, 399 - vma->vm_file->f_dentry->d_inode->i_ino); 399 + file_inode(vma->vm_file)->i_ino); 400 400 printed = true; 401 401 } 402 402 } while (retry); ··· 430 430 */ 431 431 static void ll_vm_open(struct vm_area_struct *vma) 432 432 { 433 - struct inode *inode = vma->vm_file->f_dentry->d_inode; 433 + struct inode *inode = file_inode(vma->vm_file); 434 434 struct ccc_object *vob = cl_inode2ccc(inode); 435 435 436 436 LASSERT(vma->vm_file); ··· 443 443 */ 444 444 static void ll_vm_close(struct vm_area_struct *vma) 445 445 { 446 - struct inode *inode = vma->vm_file->f_dentry->d_inode; 446 + struct inode *inode = file_inode(vma->vm_file); 447 447 struct ccc_object *vob = cl_inode2ccc(inode); 448 448 449 449 LASSERT(vma->vm_file); ··· 476 476 477 477 int ll_file_mmap(struct file *file, struct vm_area_struct *vma) 478 478 { 479 - struct inode *inode = file->f_dentry->d_inode; 479 + struct inode *inode = file_inode(file); 480 480 int rc; 481 481 482 482 if (ll_file_nolock(file))
+5 -3
drivers/staging/lustre/lustre/llite/llite_nfs.c
··· 207 207 return LUSTRE_NFS_FID; 208 208 } 209 209 210 - static int ll_nfs_get_name_filldir(void *cookie, const char *name, int namelen, 211 - loff_t hash, u64 ino, unsigned type) 210 + static int ll_nfs_get_name_filldir(struct dir_context *ctx, const char *name, 211 + int namelen, loff_t hash, u64 ino, 212 + unsigned type) 212 213 { 213 214 /* It is hack to access lde_fid for comparison with lgd_fid. 214 215 * So the input 'name' must be part of the 'lu_dirent'. */ 215 216 struct lu_dirent *lde = container_of0(name, struct lu_dirent, lde_name); 216 - struct ll_getname_data *lgd = cookie; 217 + struct ll_getname_data *lgd = 218 + container_of(ctx, struct ll_getname_data, ctx); 217 219 struct lu_fid fid; 218 220 219 221 fid_le_to_cpu(&fid, &lde->lde_fid);
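The llite_nfs.c hunk changes the filldir callback from taking a `void *cookie` to taking a `struct dir_context *`, recovering the caller's private state with `container_of()`. A sketch of that pattern with minimal stand-in types (not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

struct dir_context { long pos; };	/* stand-in for the VFS type */

struct ll_getname_data {
	struct dir_context ctx;	/* must be the embedded member */
	char name[16];
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* The callback receives the embedded context, not an opaque cookie,
 * and maps it back to the containing structure. */
static struct ll_getname_data *lgd_of(struct dir_context *ctx)
{
	return container_of(ctx, struct ll_getname_data, ctx);
}
```

This only works because `ctx` is embedded by value in `ll_getname_data`; the pointer arithmetic in `container_of()` subtracts the member's offset to recover the enclosing object.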
+3 -4
drivers/staging/lustre/lustre/llite/lloop.c
··· 187 187 { 188 188 const struct lu_env *env = lo->lo_env; 189 189 struct cl_io *io = &lo->lo_io; 190 - struct inode *inode = lo->lo_backing_file->f_dentry->d_inode; 190 + struct inode *inode = file_inode(lo->lo_backing_file); 191 191 struct cl_object *obj = ll_i2info(inode)->lli_clob; 192 192 pgoff_t offset; 193 193 int ret; ··· 626 626 break; 627 627 } 628 628 if (inode == NULL) 629 - inode = lo->lo_backing_file->f_dentry->d_inode; 629 + inode = file_inode(lo->lo_backing_file); 630 630 if (lo->lo_state == LLOOP_BOUND) 631 631 fid = ll_i2info(inode)->lli_fid; 632 632 else ··· 692 692 lo_free = lo; 693 693 continue; 694 694 } 695 - if (lo->lo_backing_file->f_dentry->d_inode == 696 - file->f_dentry->d_inode) 695 + if (file_inode(lo->lo_backing_file) == file_inode(file)) 697 696 break; 698 697 } 699 698 if (lo || !lo_free) {
+152 -242
drivers/staging/lustre/lustre/llite/namei.c
··· 54 54 static int ll_create_it(struct inode *, struct dentry *, 55 55 int, struct lookup_intent *); 56 56 57 - /* 58 - * Check if we have something mounted at the named dchild. 59 - * In such a case there would always be dentry present. 60 - */ 61 - static int ll_d_mountpoint(struct dentry *dparent, struct dentry *dchild, 62 - struct qstr *name) 63 - { 64 - int mounted = 0; 65 - 66 - if (unlikely(dchild)) { 67 - mounted = d_mountpoint(dchild); 68 - } else if (dparent) { 69 - dchild = d_lookup(dparent, name); 70 - if (dchild) { 71 - mounted = d_mountpoint(dchild); 72 - dput(dchild); 73 - } 74 - } 75 - return mounted; 76 - } 77 - 78 57 /* called from iget5_locked->find_inode() under inode_hash_lock spinlock */ 79 58 static int ll_test_inode(struct inode *inode, void *opaque) 80 59 { ··· 146 167 struct ll_d_hlist_node *p; 147 168 148 169 ll_lock_dcache(dir); 149 - ll_d_hlist_for_each_entry(dentry, p, &dir->i_dentry, d_alias) { 170 + ll_d_hlist_for_each_entry(dentry, p, &dir->i_dentry, d_u.d_alias) { 150 171 spin_lock(&dentry->d_lock); 151 172 if (!list_empty(&dentry->d_subdirs)) { 152 173 struct dentry *child; 153 174 154 175 list_for_each_entry_safe(child, tmp_subdir, 155 176 &dentry->d_subdirs, 156 - d_u.d_child) { 177 + d_child) { 157 178 if (child->d_inode == NULL) 158 179 d_lustre_invalidate(child, 1); 159 180 } ··· 264 285 265 286 if ((bits & (MDS_INODELOCK_LOOKUP | MDS_INODELOCK_PERM)) && 266 287 inode->i_sb->s_root != NULL && 267 - inode != inode->i_sb->s_root->d_inode) 288 + is_root_inode(inode)) 268 289 ll_invalidate_aliases(inode); 269 290 270 291 iput(inode); ··· 341 362 discon_alias = invalid_alias = NULL; 342 363 343 364 ll_lock_dcache(inode); 344 - ll_d_hlist_for_each_entry(alias, p, &inode->i_dentry, d_alias) { 365 + ll_d_hlist_for_each_entry(alias, p, &inode->i_dentry, d_u.d_alias) { 345 366 LASSERT(alias != dentry); 346 367 347 368 spin_lock(&alias->d_lock); ··· 488 509 if (dentry->d_name.len > ll_i2sbi(parent)->ll_namelen) 489 510 return 
ERR_PTR(-ENAMETOOLONG); 490 511 491 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p),intent=%s\n", 492 - dentry->d_name.len, dentry->d_name.name, parent->i_ino, 512 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p),intent=%s\n", 513 + dentry, parent->i_ino, 493 514 parent->i_generation, parent, LL_IT2STR(it)); 494 515 495 516 if (d_mountpoint(dentry)) ··· 542 563 if ((it->it_op & IT_OPEN) && dentry->d_inode && 543 564 !S_ISREG(dentry->d_inode->i_mode) && 544 565 !S_ISDIR(dentry->d_inode->i_mode)) { 545 - ll_release_openhandle(dentry, it); 566 + ll_release_openhandle(dentry->d_inode, it); 546 567 } 547 568 ll_lookup_finish_locks(it, dentry); 548 569 ··· 565 586 struct lookup_intent *itp, it = { .it_op = IT_GETATTR }; 566 587 struct dentry *de; 567 588 568 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p),flags=%u\n", 569 - dentry->d_name.len, dentry->d_name.name, parent->i_ino, 589 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p),flags=%u\n", 590 + dentry, parent->i_ino, 570 591 parent->i_generation, parent, flags); 571 592 572 593 /* Optimize away (CREATE && !OPEN). Let .create handle the race. 
*/ ··· 598 619 long long lookup_flags = LOOKUP_OPEN; 599 620 int rc = 0; 600 621 601 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p),file %p," 622 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p),file %p," 602 623 "open_flags %x,mode %x opened %d\n", 603 - dentry->d_name.len, dentry->d_name.name, dir->i_ino, 624 + dentry, dir->i_ino, 604 625 dir->i_generation, dir, file, open_flags, mode, *opened); 605 626 606 627 it = kzalloc(sizeof(*it), GFP_NOFS); ··· 720 741 struct inode *inode; 721 742 int rc = 0; 722 743 723 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p),intent=%s\n", 724 - dentry->d_name.len, dentry->d_name.name, dir->i_ino, 744 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p),intent=%s\n", 745 + dentry, dir->i_ino, 725 746 dir->i_generation, dir, LL_IT2STR(it)); 726 747 727 748 rc = it_open_error(DISP_OPEN_CREATE, it); ··· 754 775 LTIME_S(inode->i_ctime) = body->ctime; 755 776 } 756 777 757 - static int ll_new_node(struct inode *dir, struct qstr *name, 778 + static int ll_new_node(struct inode *dir, struct dentry *dentry, 758 779 const char *tgt, int mode, int rdev, 759 - struct dentry *dchild, __u32 opc) 780 + __u32 opc) 760 781 { 761 782 struct ptlrpc_request *request = NULL; 762 783 struct md_op_data *op_data; ··· 768 789 if (unlikely(tgt != NULL)) 769 790 tgt_len = strlen(tgt) + 1; 770 791 771 - op_data = ll_prep_md_op_data(NULL, dir, NULL, name->name, 772 - name->len, 0, opc, NULL); 792 + op_data = ll_prep_md_op_data(NULL, dir, NULL, 793 + dentry->d_name.name, 794 + dentry->d_name.len, 795 + 0, opc, NULL); 773 796 if (IS_ERR(op_data)) { 774 797 err = PTR_ERR(op_data); 775 798 goto err_exit; ··· 787 806 788 807 ll_update_times(request, dir); 789 808 790 - if (dchild) { 791 - err = ll_prep_inode(&inode, request, dchild->d_sb, NULL); 792 - if (err) 793 - goto err_exit; 809 + err = ll_prep_inode(&inode, request, dir->i_sb, NULL); 810 + if (err) 811 + goto err_exit; 794 812 795 - d_instantiate(dchild, inode); 796 - } 813 + 
d_instantiate(dentry, inode); 797 814 err_exit: 798 815 ptlrpc_req_finished(request); 799 816 800 817 return err; 801 818 } 802 819 803 - static int ll_mknod_generic(struct inode *dir, struct qstr *name, int mode, 804 - unsigned rdev, struct dentry *dchild) 820 + static int ll_mknod(struct inode *dir, struct dentry *dchild, 821 + umode_t mode, dev_t rdev) 805 822 { 806 823 int err; 807 824 808 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p) mode %o dev %x\n", 809 - name->len, name->name, dir->i_ino, dir->i_generation, dir, 810 - mode, rdev); 825 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p) mode %o dev %x\n", 826 + dchild, dir->i_ino, dir->i_generation, dir, 827 + mode, old_encode_dev(rdev)); 811 828 812 829 if (!IS_POSIXACL(dir) || !exp_connect_umask(ll_i2mdexp(dir))) 813 830 mode &= ~current_umask(); ··· 818 839 case S_IFBLK: 819 840 case S_IFIFO: 820 841 case S_IFSOCK: 821 - err = ll_new_node(dir, name, NULL, mode, rdev, dchild, 842 + err = ll_new_node(dir, dchild, NULL, mode, 843 + old_encode_dev(rdev), 822 844 LUSTRE_OPC_MKNOD); 823 845 break; 824 846 case S_IFDIR: ··· 843 863 { 844 864 int rc; 845 865 846 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p)," 866 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p)," 847 867 "flags=%u, excl=%d\n", 848 - dentry->d_name.len, dentry->d_name.name, dir->i_ino, 868 + dentry, dir->i_ino, 849 869 dir->i_generation, dir, mode, want_excl); 850 870 851 - rc = ll_mknod_generic(dir, &dentry->d_name, mode, 0, dentry); 871 + rc = ll_mknod(dir, dentry, mode, 0); 852 872 853 873 ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_CREATE, 1); 854 874 855 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s, unhashed %d\n", 856 - dentry->d_name.len, dentry->d_name.name, d_unhashed(dentry)); 875 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd, unhashed %d\n", 876 + dentry, d_unhashed(dentry)); 857 877 858 878 return rc; 859 879 } 860 880 861 - static int ll_symlink_generic(struct inode *dir, struct qstr *name, 862 - const char *tgt, struct dentry 
*dchild) 881 + static inline void ll_get_child_fid(struct dentry *child, struct lu_fid *fid) 863 882 { 864 - int err; 865 - 866 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p),target=%.*s\n", 867 - name->len, name->name, dir->i_ino, dir->i_generation, 868 - dir, 3000, tgt); 869 - 870 - err = ll_new_node(dir, name, (char *)tgt, S_IFLNK | S_IRWXUGO, 871 - 0, dchild, LUSTRE_OPC_SYMLINK); 872 - 873 - if (!err) 874 - ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_SYMLINK, 1); 875 - 876 - return err; 877 - } 878 - 879 - static int ll_link_generic(struct inode *src, struct inode *dir, 880 - struct qstr *name, struct dentry *dchild) 881 - { 882 - struct ll_sb_info *sbi = ll_i2sbi(dir); 883 - struct ptlrpc_request *request = NULL; 884 - struct md_op_data *op_data; 885 - int err; 886 - 887 - CDEBUG(D_VFSTRACE, 888 - "VFS Op: inode=%lu/%u(%p), dir=%lu/%u(%p), target=%.*s\n", 889 - src->i_ino, src->i_generation, src, dir->i_ino, 890 - dir->i_generation, dir, name->len, name->name); 891 - 892 - op_data = ll_prep_md_op_data(NULL, src, dir, name->name, name->len, 893 - 0, LUSTRE_OPC_ANY, NULL); 894 - if (IS_ERR(op_data)) 895 - return PTR_ERR(op_data); 896 - 897 - err = md_link(sbi->ll_md_exp, op_data, &request); 898 - ll_finish_md_op_data(op_data); 899 - if (err) 900 - goto out; 901 - 902 - ll_update_times(request, dir); 903 - ll_stats_ops_tally(sbi, LPROC_LL_LINK, 1); 904 - out: 905 - ptlrpc_req_finished(request); 906 - return err; 907 - } 908 - 909 - static int ll_mkdir_generic(struct inode *dir, struct qstr *name, 910 - int mode, struct dentry *dchild) 911 - 912 - { 913 - int err; 914 - 915 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p)\n", 916 - name->len, name->name, dir->i_ino, dir->i_generation, dir); 917 - 918 - if (!IS_POSIXACL(dir) || !exp_connect_umask(ll_i2mdexp(dir))) 919 - mode &= ~current_umask(); 920 - mode = (mode & (S_IRWXUGO|S_ISVTX)) | S_IFDIR; 921 - err = ll_new_node(dir, name, NULL, mode, 0, dchild, LUSTRE_OPC_MKDIR); 922 - 923 - if (!err) 924 - 
ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_MKDIR, 1); 925 - 926 - return err; 927 - } 928 - 929 - /* Try to find the child dentry by its name. 930 - If found, put the result fid into @fid. */ 931 - static void ll_get_child_fid(struct inode * dir, struct qstr *name, 932 - struct lu_fid *fid) 933 - { 934 - struct dentry *parent, *child; 935 - 936 - parent = ll_d_hlist_entry(dir->i_dentry, struct dentry, d_alias); 937 - child = d_lookup(parent, name); 938 - if (child) { 939 - if (child->d_inode) 940 - *fid = *ll_inode2fid(child->d_inode); 941 - dput(child); 942 - } 943 - } 944 - 945 - static int ll_rmdir_generic(struct inode *dir, struct dentry *dparent, 946 - struct dentry *dchild, struct qstr *name) 947 - { 948 - struct ptlrpc_request *request = NULL; 949 - struct md_op_data *op_data; 950 - int rc; 951 - 952 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p)\n", 953 - name->len, name->name, dir->i_ino, dir->i_generation, dir); 954 - 955 - if (unlikely(ll_d_mountpoint(dparent, dchild, name))) 956 - return -EBUSY; 957 - 958 - op_data = ll_prep_md_op_data(NULL, dir, NULL, name->name, name->len, 959 - S_IFDIR, LUSTRE_OPC_ANY, NULL); 960 - if (IS_ERR(op_data)) 961 - return PTR_ERR(op_data); 962 - 963 - ll_get_child_fid(dir, name, &op_data->op_fid3); 964 - op_data->op_fid2 = op_data->op_fid3; 965 - rc = md_unlink(ll_i2sbi(dir)->ll_md_exp, op_data, &request); 966 - ll_finish_md_op_data(op_data); 967 - if (rc == 0) { 968 - ll_update_times(request, dir); 969 - ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_RMDIR, 1); 970 - } 971 - 972 - ptlrpc_req_finished(request); 973 - return rc; 883 + if (child->d_inode) 884 + *fid = *ll_inode2fid(child->d_inode); 974 885 } 975 886 976 887 /** ··· 970 1099 return rc; 971 1100 } 972 1101 973 - /* ll_unlink_generic() doesn't update the inode with the new link count. 1102 + /* ll_unlink() doesn't update the inode with the new link count. 
974 1103 * Instead, ll_ddelete() and ll_d_iput() will update it based upon if there 975 1104 * is any lock existing. They will recycle dentries and inodes based upon locks 976 1105 * too. b=20433 */ 977 - static int ll_unlink_generic(struct inode *dir, struct dentry *dparent, 978 - struct dentry *dchild, struct qstr *name) 1106 + static int ll_unlink(struct inode * dir, struct dentry *dentry) 979 1107 { 980 1108 struct ptlrpc_request *request = NULL; 981 1109 struct md_op_data *op_data; 982 1110 int rc; 983 - CDEBUG(D_VFSTRACE, "VFS Op:name=%.*s,dir=%lu/%u(%p)\n", 984 - name->len, name->name, dir->i_ino, dir->i_generation, dir); 1111 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p)\n", 1112 + dentry, dir->i_ino, dir->i_generation, dir); 985 1113 986 - /* 987 - * XXX: unlink bind mountpoint maybe call to here, 988 - * just check it as vfs_unlink does. 989 - */ 990 - if (unlikely(ll_d_mountpoint(dparent, dchild, name))) 991 - return -EBUSY; 992 - 993 - op_data = ll_prep_md_op_data(NULL, dir, NULL, name->name, 994 - name->len, 0, LUSTRE_OPC_ANY, NULL); 1114 + op_data = ll_prep_md_op_data(NULL, dir, NULL, 1115 + dentry->d_name.name, 1116 + dentry->d_name.len, 1117 + 0, LUSTRE_OPC_ANY, NULL); 995 1118 if (IS_ERR(op_data)) 996 1119 return PTR_ERR(op_data); 997 1120 998 - ll_get_child_fid(dir, name, &op_data->op_fid3); 1121 + ll_get_child_fid(dentry, &op_data->op_fid3); 999 1122 op_data->op_fid2 = op_data->op_fid3; 1000 1123 rc = md_unlink(ll_i2sbi(dir)->ll_md_exp, op_data, &request); 1001 1124 ll_finish_md_op_data(op_data); ··· 1005 1140 return rc; 1006 1141 } 1007 1142 1008 - static int ll_rename_generic(struct inode *src, struct dentry *src_dparent, 1009 - struct dentry *src_dchild, struct qstr *src_name, 1010 - struct inode *tgt, struct dentry *tgt_dparent, 1011 - struct dentry *tgt_dchild, struct qstr *tgt_name) 1143 + static int ll_mkdir(struct inode *dir, struct dentry *dentry, ll_umode_t mode) 1012 1144 { 1013 - struct ptlrpc_request *request = NULL; 1014 - 
struct ll_sb_info *sbi = ll_i2sbi(src); 1015 - struct md_op_data *op_data; 1016 1145 int err; 1017 1146 1018 - CDEBUG(D_VFSTRACE, 1019 - "VFS Op:oldname=%.*s,src_dir=%lu/%u(%p),newname=%.*s," 1020 - "tgt_dir=%lu/%u(%p)\n", src_name->len, src_name->name, 1021 - src->i_ino, src->i_generation, src, tgt_name->len, 1022 - tgt_name->name, tgt->i_ino, tgt->i_generation, tgt); 1147 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p)\n", 1148 + dentry, dir->i_ino, dir->i_generation, dir); 1023 1149 1024 - if (unlikely(ll_d_mountpoint(src_dparent, src_dchild, src_name) || 1025 - ll_d_mountpoint(tgt_dparent, tgt_dchild, tgt_name))) 1026 - return -EBUSY; 1150 + if (!IS_POSIXACL(dir) || !exp_connect_umask(ll_i2mdexp(dir))) 1151 + mode &= ~current_umask(); 1152 + mode = (mode & (S_IRWXUGO|S_ISVTX)) | S_IFDIR; 1153 + err = ll_new_node(dir, dentry, NULL, mode, 0, LUSTRE_OPC_MKDIR); 1027 1154 1028 - op_data = ll_prep_md_op_data(NULL, src, tgt, NULL, 0, 0, 1029 - LUSTRE_OPC_ANY, NULL); 1030 - if (IS_ERR(op_data)) 1031 - return PTR_ERR(op_data); 1032 - 1033 - ll_get_child_fid(src, src_name, &op_data->op_fid3); 1034 - ll_get_child_fid(tgt, tgt_name, &op_data->op_fid4); 1035 - err = md_rename(sbi->ll_md_exp, op_data, 1036 - src_name->name, src_name->len, 1037 - tgt_name->name, tgt_name->len, &request); 1038 - ll_finish_md_op_data(op_data); 1039 - if (!err) { 1040 - ll_update_times(request, src); 1041 - ll_update_times(request, tgt); 1042 - ll_stats_ops_tally(sbi, LPROC_LL_RENAME, 1); 1043 - err = ll_objects_destroy(request, src); 1044 - } 1045 - 1046 - ptlrpc_req_finished(request); 1155 + if (!err) 1156 + ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_MKDIR, 1); 1047 1157 1048 1158 return err; 1049 1159 } 1050 1160 1051 - static int ll_mknod(struct inode *dir, struct dentry *dchild, ll_umode_t mode, 1052 - dev_t rdev) 1053 - { 1054 - return ll_mknod_generic(dir, &dchild->d_name, mode, 1055 - old_encode_dev(rdev), dchild); 1056 - } 1057 - 1058 - static int ll_unlink(struct inode * dir, 
struct dentry *dentry) 1059 - { 1060 - return ll_unlink_generic(dir, NULL, dentry, &dentry->d_name); 1061 - } 1062 - 1063 - static int ll_mkdir(struct inode *dir, struct dentry *dentry, ll_umode_t mode) 1064 - { 1065 - return ll_mkdir_generic(dir, &dentry->d_name, mode, dentry); 1066 - } 1067 - 1068 1161 static int ll_rmdir(struct inode *dir, struct dentry *dentry) 1069 1162 { 1070 - return ll_rmdir_generic(dir, NULL, dentry, &dentry->d_name); 1163 + struct ptlrpc_request *request = NULL; 1164 + struct md_op_data *op_data; 1165 + int rc; 1166 + 1167 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p)\n", 1168 + dentry, dir->i_ino, dir->i_generation, dir); 1169 + 1170 + op_data = ll_prep_md_op_data(NULL, dir, NULL, 1171 + dentry->d_name.name, 1172 + dentry->d_name.len, 1173 + S_IFDIR, LUSTRE_OPC_ANY, NULL); 1174 + if (IS_ERR(op_data)) 1175 + return PTR_ERR(op_data); 1176 + 1177 + ll_get_child_fid(dentry, &op_data->op_fid3); 1178 + op_data->op_fid2 = op_data->op_fid3; 1179 + rc = md_unlink(ll_i2sbi(dir)->ll_md_exp, op_data, &request); 1180 + ll_finish_md_op_data(op_data); 1181 + if (rc == 0) { 1182 + ll_update_times(request, dir); 1183 + ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_RMDIR, 1); 1184 + } 1185 + 1186 + ptlrpc_req_finished(request); 1187 + return rc; 1071 1188 } 1072 1189 1073 1190 static int ll_symlink(struct inode *dir, struct dentry *dentry, 1074 1191 const char *oldname) 1075 1192 { 1076 - return ll_symlink_generic(dir, &dentry->d_name, oldname, dentry); 1193 + int err; 1194 + 1195 + CDEBUG(D_VFSTRACE, "VFS Op:name=%pd,dir=%lu/%u(%p),target=%.*s\n", 1196 + dentry, dir->i_ino, dir->i_generation, 1197 + dir, 3000, oldname); 1198 + 1199 + err = ll_new_node(dir, dentry, oldname, S_IFLNK | S_IRWXUGO, 1200 + 0, LUSTRE_OPC_SYMLINK); 1201 + 1202 + if (!err) 1203 + ll_stats_ops_tally(ll_i2sbi(dir), LPROC_LL_SYMLINK, 1); 1204 + 1205 + return err; 1077 1206 } 1078 1207 1079 1208 static int ll_link(struct dentry *old_dentry, struct inode *dir, 1080 1209 struct 
dentry *new_dentry) 1081 1210 { 1082 - return ll_link_generic(old_dentry->d_inode, dir, &new_dentry->d_name, 1083 - new_dentry); 1211 + struct inode *src = old_dentry->d_inode; 1212 + struct ll_sb_info *sbi = ll_i2sbi(dir); 1213 + struct ptlrpc_request *request = NULL; 1214 + struct md_op_data *op_data; 1215 + int err; 1216 + 1217 + CDEBUG(D_VFSTRACE, 1218 + "VFS Op: inode=%lu/%u(%p), dir=%lu/%u(%p), target=%pd\n", 1219 + src->i_ino, src->i_generation, src, dir->i_ino, 1220 + dir->i_generation, dir, new_dentry); 1221 + 1222 + op_data = ll_prep_md_op_data(NULL, src, dir, new_dentry->d_name.name, 1223 + new_dentry->d_name.len, 1224 + 0, LUSTRE_OPC_ANY, NULL); 1225 + if (IS_ERR(op_data)) 1226 + return PTR_ERR(op_data); 1227 + 1228 + err = md_link(sbi->ll_md_exp, op_data, &request); 1229 + ll_finish_md_op_data(op_data); 1230 + if (err) 1231 + goto out; 1232 + 1233 + ll_update_times(request, dir); 1234 + ll_stats_ops_tally(sbi, LPROC_LL_LINK, 1); 1235 + out: 1236 + ptlrpc_req_finished(request); 1237 + return err; 1084 1238 } 1085 1239 1086 1240 static int ll_rename(struct inode *old_dir, struct dentry *old_dentry, 1087 1241 struct inode *new_dir, struct dentry *new_dentry) 1088 1242 { 1243 + struct ptlrpc_request *request = NULL; 1244 + struct ll_sb_info *sbi = ll_i2sbi(old_dir); 1245 + struct md_op_data *op_data; 1089 1246 int err; 1090 - err = ll_rename_generic(old_dir, NULL, 1091 - old_dentry, &old_dentry->d_name, 1092 - new_dir, NULL, new_dentry, 1093 - &new_dentry->d_name); 1247 + 1248 + CDEBUG(D_VFSTRACE, 1249 + "VFS Op:oldname=%pd,src_dir=%lu/%u(%p),newname=%pd," 1250 + "tgt_dir=%lu/%u(%p)\n", old_dentry, 1251 + old_dir->i_ino, old_dir->i_generation, old_dir, new_dentry, 1252 + new_dir->i_ino, new_dir->i_generation, new_dir); 1253 + 1254 + op_data = ll_prep_md_op_data(NULL, old_dir, new_dir, NULL, 0, 0, 1255 + LUSTRE_OPC_ANY, NULL); 1256 + if (IS_ERR(op_data)) 1257 + return PTR_ERR(op_data); 1258 + 1259 + ll_get_child_fid(old_dentry, &op_data->op_fid3); 1260 + 
ll_get_child_fid(new_dentry, &op_data->op_fid4); 1261 + err = md_rename(sbi->ll_md_exp, op_data, 1262 + old_dentry->d_name.name, 1263 + old_dentry->d_name.len, 1264 + new_dentry->d_name.name, 1265 + new_dentry->d_name.len, &request); 1266 + ll_finish_md_op_data(op_data); 1094 1267 if (!err) { 1095 - d_move(old_dentry, new_dentry); 1268 + ll_update_times(request, old_dir); 1269 + ll_update_times(request, new_dir); 1270 + ll_stats_ops_tally(sbi, LPROC_LL_RENAME, 1); 1271 + err = ll_objects_destroy(request, old_dir); 1096 1272 } 1273 + 1274 + ptlrpc_req_finished(request); 1275 + if (!err) 1276 + d_move(old_dentry, new_dentry); 1097 1277 return err; 1098 1278 } 1099 1279
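With the `*_generic()` wrappers removed, `ll_mknod()` now applies `old_encode_dev()` to the `dev_t` at the call site rather than in a wrapper. A sketch of that encoding (major number in the high byte, minor in the low byte of a 16-bit value); the `MAJOR`/`MINOR` split of the wide `dev_t` below assumes the usual 20-bit-minor layout and is illustrative:

```c
#include <assert.h>

/* Stand-in dev_t macros assuming a 12-bit major / 20-bit minor split. */
#define MINORBITS	20
#define MINORMASK	((1U << MINORBITS) - 1)
#define MAJOR(dev)	((unsigned int)((dev) >> MINORBITS))
#define MINOR(dev)	((unsigned int)((dev) & MINORMASK))
#define MKDEV(ma, mi)	(((ma) << MINORBITS) | (mi))

/* Pack a device number into the classic 16-bit on-wire form, as the
 * kernel's old_encode_dev() does: major in bits 15..8, minor in 7..0. */
static unsigned short old_encode_dev(unsigned int dev)
{
	return (unsigned short)((MAJOR(dev) << 8) | MINOR(dev));
}
```

The encoding is lossy for large device numbers, which is why the helper is the "old" encoding; it matches what the MDS-side protocol here expects in the mknod request.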
+16 -17
drivers/staging/lustre/lustre/llite/statahead.c
··· 969 969 struct l_wait_info lwi = { 0 }; 970 970 971 971 thread->t_pid = current_pid(); 972 - CDEBUG(D_READA, "agl thread started: sai %p, parent %.*s\n", 973 - sai, parent->d_name.len, parent->d_name.name); 972 + CDEBUG(D_READA, "agl thread started: sai %p, parent %pd\n", 973 + sai, parent); 974 974 975 975 atomic_inc(&sbi->ll_agl_total); 976 976 spin_lock(&plli->lli_agl_lock); ··· 1019 1019 spin_unlock(&plli->lli_agl_lock); 1020 1020 wake_up(&thread->t_ctl_waitq); 1021 1021 ll_sai_put(sai); 1022 - CDEBUG(D_READA, "agl thread stopped: sai %p, parent %.*s\n", 1023 - sai, parent->d_name.len, parent->d_name.name); 1022 + CDEBUG(D_READA, "agl thread stopped: sai %p, parent %pd\n", 1023 + sai, parent); 1024 1024 return 0; 1025 1025 } 1026 1026 ··· 1031 1031 struct ll_inode_info *plli; 1032 1032 struct task_struct *task; 1033 1033 1034 - CDEBUG(D_READA, "start agl thread: sai %p, parent %.*s\n", 1035 - sai, parent->d_name.len, parent->d_name.name); 1034 + CDEBUG(D_READA, "start agl thread: sai %p, parent %pd\n", 1035 + sai, parent); 1036 1036 1037 1037 plli = ll_i2info(parent->d_inode); 1038 1038 task = kthread_run(ll_agl_thread, parent, ··· 1066 1066 struct l_wait_info lwi = { 0 }; 1067 1067 1068 1068 thread->t_pid = current_pid(); 1069 - CDEBUG(D_READA, "statahead thread starting: sai %p, parent %.*s\n", 1070 - sai, parent->d_name.len, parent->d_name.name); 1069 + CDEBUG(D_READA, "statahead thread starting: sai %p, parent %pd\n", 1070 + sai, parent); 1071 1071 1072 1072 if (sbi->ll_flags & LL_SBI_AGL_ENABLED) 1073 1073 ll_start_agl(parent, sai); ··· 1288 1288 wake_up(&thread->t_ctl_waitq); 1289 1289 ll_sai_put(sai); 1290 1290 dput(parent); 1291 - CDEBUG(D_READA, "statahead thread stopped: sai %p, parent %.*s\n", 1292 - sai, parent->d_name.len, parent->d_name.name); 1291 + CDEBUG(D_READA, "statahead thread stopped: sai %p, parent %pd\n", 1292 + sai, parent); 1293 1293 return rc; 1294 1294 } 1295 1295 ··· 1612 1612 } else if ((*dentryp)->d_inode != inode) { 1613 1613 
/* revalidate, but inode is recreated */ 1614 1614 CDEBUG(D_READA, 1615 - "stale dentry %.*s inode %lu/%u, " 1615 + "stale dentry %pd inode %lu/%u, " 1616 1616 "statahead inode %lu/%u\n", 1617 - (*dentryp)->d_name.len, 1618 - (*dentryp)->d_name.name, 1617 + *dentryp, 1619 1618 (*dentryp)->d_inode->i_ino, 1620 1619 (*dentryp)->d_inode->i_generation, 1621 1620 inode->i_ino, ··· 1665 1666 if (unlikely(sai->sai_inode != parent->d_inode)) { 1666 1667 struct ll_inode_info *nlli = ll_i2info(parent->d_inode); 1667 1668 1668 - CWARN("Race condition, someone changed %.*s just now: " 1669 + CWARN("Race condition, someone changed %pd just now: " 1669 1670 "old parent "DFID", new parent "DFID"\n", 1670 - (*dentryp)->d_name.len, (*dentryp)->d_name.name, 1671 + *dentryp, 1671 1672 PFID(&lli->lli_fid), PFID(&nlli->lli_fid)); 1672 1673 dput(parent); 1673 1674 iput(sai->sai_inode); ··· 1675 1676 goto out; 1676 1677 } 1677 1678 1678 - CDEBUG(D_READA, "start statahead thread: sai %p, parent %.*s\n", 1679 - sai, parent->d_name.len, parent->d_name.name); 1679 + CDEBUG(D_READA, "start statahead thread: sai %p, parent %pd\n", 1680 + sai, parent); 1680 1681 1681 1682 /* The sai buffer already has one reference taken at allocation time, 1682 1683 * but as soon as we expose the sai by attaching it to the lli that
+2 -2
drivers/staging/lustre/lustre/llite/vvp_io.c
··· 108 108 struct inode *inode = ccc_object_inode(ios->cis_obj); 109 109 110 110 LASSERT(inode == 111 - cl2ccc_io(env, ios)->cui_fd->fd_file->f_dentry->d_inode); 111 + file_inode(cl2ccc_io(env, ios)->cui_fd->fd_file)); 112 112 vio->u.fault.ft_mtime = LTIME_S(inode->i_mtime); 113 113 return 0; 114 114 } ··· 239 239 240 240 down_read(&mm->mmap_sem); 241 241 while ((vma = our_vma(mm, addr, count)) != NULL) { 242 - struct inode *inode = vma->vm_file->f_dentry->d_inode; 242 + struct inode *inode = file_inode(vma->vm_file); 243 243 int flags = CEF_MUST; 244 244 245 245 if (ll_file_nolock(vma->vm_file)) {
+3 -6
drivers/staging/lustre/lustre/llite/xattr.c
··· 241 241 lump->lmm_stripe_offset = -1; 242 242 243 243 if (lump != NULL && S_ISREG(inode->i_mode)) { 244 - struct file f; 245 244 int flags = FMODE_WRITE; 246 245 int lum_size = (lump->lmm_magic == LOV_USER_MAGIC_V1) ? 247 246 sizeof(*lump) : sizeof(struct lov_user_md_v3); 248 247 249 - memset(&f, 0, sizeof(f)); /* f.f_flags is used below */ 250 - f.f_dentry = dentry; 251 - rc = ll_lov_setstripe_ea_info(inode, &f, flags, lump, 248 + rc = ll_lov_setstripe_ea_info(inode, dentry, flags, lump, 252 249 lum_size); 253 250 /* b10667: rc always be 0 here for now */ 254 251 rc = 0; ··· 516 519 } 517 520 518 521 if (size < lmmsize) { 519 - CERROR("server bug: replied size %d > %d for %s (%s)\n", 520 - lmmsize, (int)size, dentry->d_name.name, name); 522 + CERROR("server bug: replied size %d > %d for %pd (%s)\n", 523 + lmmsize, (int)size, dentry, name); 521 524 rc = -ERANGE; 522 525 goto out; 523 526 }
+1 -1
fs/9p/vfs_inode.c
··· 832 832 * moved b under k and client parallely did a lookup for 833 833 * k/b. 834 834 */ 835 - res = d_materialise_unique(dentry, inode); 835 + res = d_splice_alias(inode, dentry); 836 836 if (!res) 837 837 v9fs_fid_add(dentry, fid); 838 838 else if (!IS_ERR(res))
+2 -2
fs/9p/vfs_inode_dotl.c
··· 826 826 struct dentry *dir_dentry; 827 827 struct posix_acl *dacl = NULL, *pacl = NULL; 828 828 829 - p9_debug(P9_DEBUG_VFS, " %lu,%s mode: %hx MAJOR: %u MINOR: %u\n", 830 - dir->i_ino, dentry->d_name.name, omode, 829 + p9_debug(P9_DEBUG_VFS, " %lu,%pd mode: %hx MAJOR: %u MINOR: %u\n", 830 + dir->i_ino, dentry, omode, 831 831 MAJOR(rdev), MINOR(rdev)); 832 832 833 833 if (!new_valid_dev(rdev))
+1 -1
fs/affs/amigaffs.c
··· 125 125 { 126 126 struct dentry *dentry; 127 127 spin_lock(&inode->i_lock); 128 - hlist_for_each_entry(dentry, &inode->i_dentry, d_alias) { 128 + hlist_for_each_entry(dentry, &inode->i_dentry, d_u.d_alias) { 129 129 if (entry_ino == (u32)(long)dentry->d_fsdata) { 130 130 dentry->d_fsdata = (void *)inode->i_ino; 131 131 break;
+2 -2
fs/affs/inode.c
··· 348 348 u32 block = 0; 349 349 int retval; 350 350 351 - pr_debug("%s(dir=%u, inode=%u, \"%*s\", type=%d)\n", 351 + pr_debug("%s(dir=%u, inode=%u, \"%pd\", type=%d)\n", 352 352 __func__, (u32)dir->i_ino, 353 - (u32)inode->i_ino, (int)dentry->d_name.len, dentry->d_name.name, type); 353 + (u32)inode->i_ino, dentry, type); 354 354 355 355 retval = -EIO; 356 356 bh = affs_bread(sb, inode->i_ino);
+17 -23
fs/affs/namei.c
··· 190 190 toupper_t toupper = affs_get_toupper(sb); 191 191 u32 key; 192 192 193 - pr_debug("%s(\"%.*s\")\n", 194 - __func__, (int)dentry->d_name.len, dentry->d_name.name); 193 + pr_debug("%s(\"%pd\")\n", __func__, dentry); 195 194 196 195 bh = affs_bread(sb, dir->i_ino); 197 196 if (!bh) ··· 218 219 struct buffer_head *bh; 219 220 struct inode *inode = NULL; 220 221 221 - pr_debug("%s(\"%.*s\")\n", 222 - __func__, (int)dentry->d_name.len, dentry->d_name.name); 222 + pr_debug("%s(\"%pd\")\n", __func__, dentry); 223 223 224 224 affs_lock_dir(dir); 225 225 bh = affs_find_entry(dir, dentry); ··· 248 250 int 249 251 affs_unlink(struct inode *dir, struct dentry *dentry) 250 252 { 251 - pr_debug("%s(dir=%d, %lu \"%.*s\")\n", 253 + pr_debug("%s(dir=%d, %lu \"%pd\")\n", 252 254 __func__, (u32)dir->i_ino, dentry->d_inode->i_ino, 253 - (int)dentry->d_name.len, dentry->d_name.name); 255 + dentry); 254 256 255 257 return affs_remove_header(dentry); 256 258 } ··· 262 264 struct inode *inode; 263 265 int error; 264 266 265 - pr_debug("%s(%lu,\"%.*s\",0%ho)\n", 266 - __func__, dir->i_ino, (int)dentry->d_name.len, 267 - dentry->d_name.name,mode); 267 + pr_debug("%s(%lu,\"%pd\",0%ho)\n", 268 + __func__, dir->i_ino, dentry, mode); 268 269 269 270 inode = affs_new_inode(dir); 270 271 if (!inode) ··· 291 294 struct inode *inode; 292 295 int error; 293 296 294 - pr_debug("%s(%lu,\"%.*s\",0%ho)\n", 295 - __func__, dir->i_ino, (int)dentry->d_name.len, 296 - dentry->d_name.name, mode); 297 + pr_debug("%s(%lu,\"%pd\",0%ho)\n", 298 + __func__, dir->i_ino, dentry, mode); 297 299 298 300 inode = affs_new_inode(dir); 299 301 if (!inode) ··· 317 321 int 318 322 affs_rmdir(struct inode *dir, struct dentry *dentry) 319 323 { 320 - pr_debug("%s(dir=%u, %lu \"%.*s\")\n", 324 + pr_debug("%s(dir=%u, %lu \"%pd\")\n", 321 325 __func__, (u32)dir->i_ino, dentry->d_inode->i_ino, 322 - (int)dentry->d_name.len, dentry->d_name.name); 326 + dentry); 323 327 324 328 return affs_remove_header(dentry); 325 329 
} ··· 334 338 int i, maxlen, error; 335 339 char c, lc; 336 340 337 - pr_debug("%s(%lu,\"%.*s\" -> \"%s\")\n", 338 - __func__, dir->i_ino, (int)dentry->d_name.len, 339 - dentry->d_name.name, symname); 341 + pr_debug("%s(%lu,\"%pd\" -> \"%s\")\n", 342 + __func__, dir->i_ino, dentry, symname); 340 343 341 344 maxlen = AFFS_SB(sb)->s_hashsize * sizeof(u32) - 1; 342 345 inode = affs_new_inode(dir); ··· 404 409 { 405 410 struct inode *inode = old_dentry->d_inode; 406 411 407 - pr_debug("%s(%u, %u, \"%.*s\")\n", 412 + pr_debug("%s(%u, %u, \"%pd\")\n", 408 413 __func__, (u32)inode->i_ino, (u32)dir->i_ino, 409 - (int)dentry->d_name.len,dentry->d_name.name); 414 + dentry); 410 415 411 416 return affs_add_entry(dir, inode, dentry, ST_LINKFILE); 412 417 } ··· 419 424 struct buffer_head *bh = NULL; 420 425 int retval; 421 426 422 - pr_debug("%s(old=%u,\"%*s\" to new=%u,\"%*s\")\n", 423 - __func__, (u32)old_dir->i_ino, (int)old_dentry->d_name.len, 424 - old_dentry->d_name.name, (u32)new_dir->i_ino, 425 - (int)new_dentry->d_name.len, new_dentry->d_name.name); 427 + pr_debug("%s(old=%u,\"%pd\" to new=%u,\"%pd\")\n", 428 + __func__, (u32)old_dir->i_ino, old_dentry, 429 + (u32)new_dir->i_ino, new_dentry); 426 430 427 431 retval = affs_check_name(new_dentry->d_name.name, 428 432 new_dentry->d_name.len,
+40 -40
fs/afs/dir.c
··· 26 26 static int afs_d_revalidate(struct dentry *dentry, unsigned int flags); 27 27 static int afs_d_delete(const struct dentry *dentry); 28 28 static void afs_d_release(struct dentry *dentry); 29 - static int afs_lookup_filldir(void *_cookie, const char *name, int nlen, 29 + static int afs_lookup_filldir(struct dir_context *ctx, const char *name, int nlen, 30 30 loff_t fpos, u64 ino, unsigned dtype); 31 31 static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode, 32 32 bool excl); ··· 391 391 * - if afs_dir_iterate_block() spots this function, it'll pass the FID 392 392 * uniquifier through dtype 393 393 */ 394 - static int afs_lookup_filldir(void *_cookie, const char *name, int nlen, 395 - loff_t fpos, u64 ino, unsigned dtype) 394 + static int afs_lookup_filldir(struct dir_context *ctx, const char *name, 395 + int nlen, loff_t fpos, u64 ino, unsigned dtype) 396 396 { 397 - struct afs_lookup_cookie *cookie = _cookie; 397 + struct afs_lookup_cookie *cookie = 398 + container_of(ctx, struct afs_lookup_cookie, ctx); 398 399 399 400 _enter("{%s,%u},%s,%u,,%llu,%u", 400 401 cookie->name.name, cookie->name.len, name, nlen, ··· 434 433 }; 435 434 int ret; 436 435 437 - _enter("{%lu},%p{%s},", dir->i_ino, dentry, dentry->d_name.name); 436 + _enter("{%lu},%p{%pd},", dir->i_ino, dentry, dentry); 438 437 439 438 /* search the directory */ 440 439 ret = afs_dir_iterate(dir, &cookie.ctx, key); ··· 466 465 struct afs_vnode *vnode = AFS_FS_I(dir); 467 466 struct inode *inode; 468 467 469 - _enter("%d, %p{%s}, {%x:%u}, %p", 470 - ret, dentry, devname, vnode->fid.vid, vnode->fid.vnode, key); 468 + _enter("%d, %p{%pd}, {%x:%u}, %p", 469 + ret, dentry, dentry, vnode->fid.vid, vnode->fid.vnode, key); 471 470 472 471 if (ret != -ENOENT || 473 472 !test_bit(AFS_VNODE_AUTOCELL, &vnode->flags)) ··· 502 501 503 502 vnode = AFS_FS_I(dir); 504 503 505 - _enter("{%x:%u},%p{%s},", 506 - vnode->fid.vid, vnode->fid.vnode, dentry, dentry->d_name.name); 504 + 
_enter("{%x:%u},%p{%pd},", 505 + vnode->fid.vid, vnode->fid.vnode, dentry, dentry); 507 506 508 507 ASSERTCMP(dentry->d_inode, ==, NULL); 509 508 ··· 589 588 vnode = AFS_FS_I(dentry->d_inode); 590 589 591 590 if (dentry->d_inode) 592 - _enter("{v={%x:%u} n=%s fl=%lx},", 593 - vnode->fid.vid, vnode->fid.vnode, dentry->d_name.name, 591 + _enter("{v={%x:%u} n=%pd fl=%lx},", 592 + vnode->fid.vid, vnode->fid.vnode, dentry, 594 593 vnode->flags); 595 594 else 596 - _enter("{neg n=%s}", dentry->d_name.name); 595 + _enter("{neg n=%pd}", dentry); 597 596 598 597 key = afs_request_key(AFS_FS_S(dentry->d_sb)->volume->cell); 599 598 if (IS_ERR(key)) ··· 608 607 afs_validate(dir, key); 609 608 610 609 if (test_bit(AFS_VNODE_DELETED, &dir->flags)) { 611 - _debug("%s: parent dir deleted", dentry->d_name.name); 610 + _debug("%pd: parent dir deleted", dentry); 612 611 goto out_bad; 613 612 } 614 613 ··· 626 625 if (!dentry->d_inode) 627 626 goto out_bad; 628 627 if (is_bad_inode(dentry->d_inode)) { 629 - printk("kAFS: afs_d_revalidate: %s/%s has bad inode\n", 630 - parent->d_name.name, dentry->d_name.name); 628 + printk("kAFS: afs_d_revalidate: %pd2 has bad inode\n", 629 + dentry); 631 630 goto out_bad; 632 631 } 633 632 634 633 /* if the vnode ID has changed, then the dirent points to a 635 634 * different file */ 636 635 if (fid.vnode != vnode->fid.vnode) { 637 - _debug("%s: dirent changed [%u != %u]", 638 - dentry->d_name.name, fid.vnode, 636 + _debug("%pd: dirent changed [%u != %u]", 637 + dentry, fid.vnode, 639 638 vnode->fid.vnode); 640 639 goto not_found; 641 640 } ··· 644 643 * been deleted and replaced, and the original vnode ID has 645 644 * been reused */ 646 645 if (fid.unique != vnode->fid.unique) { 647 - _debug("%s: file deleted (uq %u -> %u I:%u)", 648 - dentry->d_name.name, fid.unique, 646 + _debug("%pd: file deleted (uq %u -> %u I:%u)", 647 + dentry, fid.unique, 649 648 vnode->fid.unique, 650 649 dentry->d_inode->i_generation); 651 650 spin_lock(&vnode->lock); ··· 
657 656 658 657 case -ENOENT: 659 658 /* the filename is unknown */ 660 - _debug("%s: dirent not found", dentry->d_name.name); 659 + _debug("%pd: dirent not found", dentry); 661 660 if (dentry->d_inode) 662 661 goto not_found; 663 662 goto out_valid; 664 663 665 664 default: 666 - _debug("failed to iterate dir %s: %d", 667 - parent->d_name.name, ret); 665 + _debug("failed to iterate dir %pd: %d", 666 + parent, ret); 668 667 goto out_bad; 669 668 } 670 669 ··· 682 681 spin_unlock(&dentry->d_lock); 683 682 684 683 out_bad: 685 - _debug("dropping dentry %s/%s", 686 - parent->d_name.name, dentry->d_name.name); 684 + _debug("dropping dentry %pd2", dentry); 687 685 dput(parent); 688 686 key_put(key); 689 687 ··· 698 698 */ 699 699 static int afs_d_delete(const struct dentry *dentry) 700 700 { 701 - _enter("%s", dentry->d_name.name); 701 + _enter("%pd", dentry); 702 702 703 703 if (dentry->d_flags & DCACHE_NFSFS_RENAMED) 704 704 goto zap; ··· 721 721 */ 722 722 static void afs_d_release(struct dentry *dentry) 723 723 { 724 - _enter("%s", dentry->d_name.name); 724 + _enter("%pd", dentry); 725 725 } 726 726 727 727 /* ··· 740 740 741 741 dvnode = AFS_FS_I(dir); 742 742 743 - _enter("{%x:%u},{%s},%ho", 744 - dvnode->fid.vid, dvnode->fid.vnode, dentry->d_name.name, mode); 743 + _enter("{%x:%u},{%pd},%ho", 744 + dvnode->fid.vid, dvnode->fid.vnode, dentry, mode); 745 745 746 746 key = afs_request_key(dvnode->volume->cell); 747 747 if (IS_ERR(key)) { ··· 801 801 802 802 dvnode = AFS_FS_I(dir); 803 803 804 - _enter("{%x:%u},{%s}", 805 - dvnode->fid.vid, dvnode->fid.vnode, dentry->d_name.name); 804 + _enter("{%x:%u},{%pd}", 805 + dvnode->fid.vid, dvnode->fid.vnode, dentry); 806 806 807 807 key = afs_request_key(dvnode->volume->cell); 808 808 if (IS_ERR(key)) { ··· 843 843 844 844 dvnode = AFS_FS_I(dir); 845 845 846 - _enter("{%x:%u},{%s}", 847 - dvnode->fid.vid, dvnode->fid.vnode, dentry->d_name.name); 846 + _enter("{%x:%u},{%pd}", 847 + dvnode->fid.vid, dvnode->fid.vnode, 
dentry); 848 848 849 849 ret = -ENAMETOOLONG; 850 850 if (dentry->d_name.len >= AFSNAMEMAX) ··· 917 917 918 918 dvnode = AFS_FS_I(dir); 919 919 920 - _enter("{%x:%u},{%s},%ho,", 921 - dvnode->fid.vid, dvnode->fid.vnode, dentry->d_name.name, mode); 920 + _enter("{%x:%u},{%pd},%ho,", 921 + dvnode->fid.vid, dvnode->fid.vnode, dentry, mode); 922 922 923 923 key = afs_request_key(dvnode->volume->cell); 924 924 if (IS_ERR(key)) { ··· 980 980 vnode = AFS_FS_I(from->d_inode); 981 981 dvnode = AFS_FS_I(dir); 982 982 983 - _enter("{%x:%u},{%x:%u},{%s}", 983 + _enter("{%x:%u},{%x:%u},{%pd}", 984 984 vnode->fid.vid, vnode->fid.vnode, 985 985 dvnode->fid.vid, dvnode->fid.vnode, 986 - dentry->d_name.name); 986 + dentry); 987 987 988 988 key = afs_request_key(dvnode->volume->cell); 989 989 if (IS_ERR(key)) { ··· 1025 1025 1026 1026 dvnode = AFS_FS_I(dir); 1027 1027 1028 - _enter("{%x:%u},{%s},%s", 1029 - dvnode->fid.vid, dvnode->fid.vnode, dentry->d_name.name, 1028 + _enter("{%x:%u},{%pd},%s", 1029 + dvnode->fid.vid, dvnode->fid.vnode, dentry, 1030 1030 content); 1031 1031 1032 1032 ret = -EINVAL; ··· 1093 1093 orig_dvnode = AFS_FS_I(old_dir); 1094 1094 new_dvnode = AFS_FS_I(new_dir); 1095 1095 1096 - _enter("{%x:%u},{%x:%u},{%x:%u},{%s}", 1096 + _enter("{%x:%u},{%x:%u},{%x:%u},{%pd}", 1097 1097 orig_dvnode->fid.vid, orig_dvnode->fid.vnode, 1098 1098 vnode->fid.vid, vnode->fid.vnode, 1099 1099 new_dvnode->fid.vid, new_dvnode->fid.vnode, 1100 - new_dentry->d_name.name); 1100 + new_dentry); 1101 1101 1102 1102 key = afs_request_key(orig_dvnode->volume->cell); 1103 1103 if (IS_ERR(key)) {
+2 -2
fs/afs/inode.c
··· 462 462 struct key *key; 463 463 int ret; 464 464 465 - _enter("{%x:%u},{n=%s},%x", 466 - vnode->fid.vid, vnode->fid.vnode, dentry->d_name.name, 465 + _enter("{%x:%u},{n=%pd},%x", 466 + vnode->fid.vid, vnode->fid.vnode, dentry, 467 467 attr->ia_valid); 468 468 469 469 if (!(attr->ia_valid & (ATTR_SIZE | ATTR_MODE | ATTR_UID | ATTR_GID |
+4 -18
fs/afs/mntpt.c
··· 106 106 struct dentry *dentry, 107 107 unsigned int flags) 108 108 { 109 - _enter("%p,%p{%p{%s},%s}", 110 - dir, 111 - dentry, 112 - dentry->d_parent, 113 - dentry->d_parent ? 114 - dentry->d_parent->d_name.name : (const unsigned char *) "", 115 - dentry->d_name.name); 116 - 109 + _enter("%p,%p{%pd2}", dir, dentry, dentry); 117 110 return ERR_PTR(-EREMOTE); 118 111 } 119 112 ··· 115 122 */ 116 123 static int afs_mntpt_open(struct inode *inode, struct file *file) 117 124 { 118 - _enter("%p,%p{%p{%s},%s}", 119 - inode, file, 120 - file->f_path.dentry->d_parent, 121 - file->f_path.dentry->d_parent ? 122 - file->f_path.dentry->d_parent->d_name.name : 123 - (const unsigned char *) "", 124 - file->f_path.dentry->d_name.name); 125 - 125 + _enter("%p,%p{%pD2}", inode, file, file); 126 126 return -EREMOTE; 127 127 } 128 128 ··· 132 146 bool rwpath = false; 133 147 int ret; 134 148 135 - _enter("{%s}", mntpt->d_name.name); 149 + _enter("{%pd}", mntpt); 136 150 137 151 BUG_ON(!mntpt->d_inode); 138 152 ··· 228 242 { 229 243 struct vfsmount *newmnt; 230 244 231 - _enter("{%s}", path->dentry->d_name.name); 245 + _enter("{%pd}", path->dentry); 232 246 233 247 newmnt = afs_mntpt_do_automount(path->dentry); 234 248 if (IS_ERR(newmnt))
+4 -5
fs/afs/write.c
··· 682 682 */ 683 683 int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync) 684 684 { 685 - struct dentry *dentry = file->f_path.dentry; 686 - struct inode *inode = file->f_mapping->host; 685 + struct inode *inode = file_inode(file); 687 686 struct afs_writeback *wb, *xwb; 688 - struct afs_vnode *vnode = AFS_FS_I(dentry->d_inode); 687 + struct afs_vnode *vnode = AFS_FS_I(inode); 689 688 int ret; 690 689 691 - _enter("{%x:%u},{n=%s},%d", 692 - vnode->fid.vid, vnode->fid.vnode, dentry->d_name.name, 690 + _enter("{%x:%u},{n=%pD},%d", 691 + vnode->fid.vid, vnode->fid.vnode, file, 693 692 datasync); 694 693 695 694 ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
+16 -26
fs/autofs4/expire.c
··· 41 41 struct path path = {.mnt = mnt, .dentry = dentry}; 42 42 int status = 1; 43 43 44 - DPRINTK("dentry %p %.*s", 45 - dentry, (int)dentry->d_name.len, dentry->d_name.name); 44 + DPRINTK("dentry %p %pd", dentry, dentry); 46 45 47 46 path_get(&path); 48 47 ··· 84 85 spin_lock(&root->d_lock); 85 86 86 87 if (prev) 87 - next = prev->d_u.d_child.next; 88 + next = prev->d_child.next; 88 89 else { 89 90 prev = dget_dlock(root); 90 91 next = prev->d_subdirs.next; ··· 98 99 return NULL; 99 100 } 100 101 101 - q = list_entry(next, struct dentry, d_u.d_child); 102 + q = list_entry(next, struct dentry, d_child); 102 103 103 104 spin_lock_nested(&q->d_lock, DENTRY_D_LOCK_NESTED); 104 105 /* Already gone or negative dentry (under construction) - try next */ 105 106 if (!d_count(q) || !simple_positive(q)) { 106 107 spin_unlock(&q->d_lock); 107 - next = q->d_u.d_child.next; 108 + next = q->d_child.next; 108 109 goto cont; 109 110 } 110 111 dget_dlock(q); ··· 154 155 goto relock; 155 156 } 156 157 spin_unlock(&p->d_lock); 157 - next = p->d_u.d_child.next; 158 + next = p->d_child.next; 158 159 p = parent; 159 160 if (next != &parent->d_subdirs) 160 161 break; 161 162 } 162 163 } 163 - ret = list_entry(next, struct dentry, d_u.d_child); 164 + ret = list_entry(next, struct dentry, d_child); 164 165 165 166 spin_lock_nested(&ret->d_lock, DENTRY_D_LOCK_NESTED); 166 167 /* Negative dentry - try next */ ··· 191 192 unsigned long timeout, 192 193 int do_now) 193 194 { 194 - DPRINTK("top %p %.*s", 195 - top, (int) top->d_name.len, top->d_name.name); 195 + DPRINTK("top %p %pd", top, top); 196 196 197 197 /* If it's busy update the expiry counters */ 198 198 if (!may_umount_tree(mnt)) { ··· 219 221 struct autofs_info *top_ino = autofs4_dentry_ino(top); 220 222 struct dentry *p; 221 223 222 - DPRINTK("top %p %.*s", 223 - top, (int)top->d_name.len, top->d_name.name); 224 + DPRINTK("top %p %pd", top, top); 224 225 225 226 /* Negative dentry - give up */ 226 227 if (!simple_positive(top)) 
··· 227 230 228 231 p = NULL; 229 232 while ((p = get_next_positive_dentry(p, top))) { 230 - DPRINTK("dentry %p %.*s", 231 - p, (int) p->d_name.len, p->d_name.name); 233 + DPRINTK("dentry %p %pd", p, p); 232 234 233 235 /* 234 236 * Is someone visiting anywhere in the subtree ? ··· 273 277 { 274 278 struct dentry *p; 275 279 276 - DPRINTK("parent %p %.*s", 277 - parent, (int)parent->d_name.len, parent->d_name.name); 280 + DPRINTK("parent %p %pd", parent, parent); 278 281 279 282 p = NULL; 280 283 while ((p = get_next_positive_dentry(p, parent))) { 281 - DPRINTK("dentry %p %.*s", 282 - p, (int) p->d_name.len, p->d_name.name); 284 + DPRINTK("dentry %p %pd", p, p); 283 285 284 286 if (d_mountpoint(p)) { 285 287 /* Can we umount this guy */ ··· 362 368 * offset (autofs-5.0+). 363 369 */ 364 370 if (d_mountpoint(dentry)) { 365 - DPRINTK("checking mountpoint %p %.*s", 366 - dentry, (int)dentry->d_name.len, dentry->d_name.name); 371 + DPRINTK("checking mountpoint %p %pd", dentry, dentry); 367 372 368 373 /* Can we umount this guy */ 369 374 if (autofs4_mount_busy(mnt, dentry)) ··· 375 382 } 376 383 377 384 if (dentry->d_inode && S_ISLNK(dentry->d_inode->i_mode)) { 378 - DPRINTK("checking symlink %p %.*s", 379 - dentry, (int)dentry->d_name.len, dentry->d_name.name); 385 + DPRINTK("checking symlink %p %pd", dentry, dentry); 380 386 /* 381 387 * A symlink can't be "busy" in the usual sense so 382 388 * just check last used for expire timeout. 
··· 471 479 return NULL; 472 480 473 481 found: 474 - DPRINTK("returning %p %.*s", 475 - expired, (int)expired->d_name.len, expired->d_name.name); 482 + DPRINTK("returning %p %pd", expired, expired); 476 483 ino->flags |= AUTOFS_INF_EXPIRING; 477 484 smp_mb(); 478 485 ino->flags &= ~AUTOFS_INF_NO_RCU; ··· 480 489 spin_lock(&sbi->lookup_lock); 481 490 spin_lock(&expired->d_parent->d_lock); 482 491 spin_lock_nested(&expired->d_lock, DENTRY_D_LOCK_NESTED); 483 - list_move(&expired->d_parent->d_subdirs, &expired->d_u.d_child); 492 + list_move(&expired->d_parent->d_subdirs, &expired->d_child); 484 493 spin_unlock(&expired->d_lock); 485 494 spin_unlock(&expired->d_parent->d_lock); 486 495 spin_unlock(&sbi->lookup_lock); ··· 503 512 if (ino->flags & AUTOFS_INF_EXPIRING) { 504 513 spin_unlock(&sbi->fs_lock); 505 514 506 - DPRINTK("waiting for expire %p name=%.*s", 507 - dentry, dentry->d_name.len, dentry->d_name.name); 515 + DPRINTK("waiting for expire %p name=%pd", dentry, dentry); 508 516 509 517 status = autofs4_wait(sbi, dentry, NFY_NONE); 510 518 wait_for_completion(&ino->expire_complete);
+9 -16
fs/autofs4/root.c
··· 108 108 struct dentry *dentry = file->f_path.dentry; 109 109 struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb); 110 110 111 - DPRINTK("file=%p dentry=%p %.*s", 112 - file, dentry, dentry->d_name.len, dentry->d_name.name); 111 + DPRINTK("file=%p dentry=%p %pD", file, dentry, dentry); 113 112 114 113 if (autofs4_oz_mode(sbi)) 115 114 goto out; ··· 278 279 if (ino->flags & AUTOFS_INF_PENDING) { 279 280 if (rcu_walk) 280 281 return -ECHILD; 281 - DPRINTK("waiting for mount name=%.*s", 282 - dentry->d_name.len, dentry->d_name.name); 282 + DPRINTK("waiting for mount name=%pd", dentry); 283 283 status = autofs4_wait(sbi, dentry, NFY_MOUNT); 284 284 DPRINTK("mount wait done status=%d", status); 285 285 } ··· 338 340 struct autofs_info *ino = autofs4_dentry_ino(dentry); 339 341 int status; 340 342 341 - DPRINTK("dentry=%p %.*s", 342 - dentry, dentry->d_name.len, dentry->d_name.name); 343 + DPRINTK("dentry=%p %pd", dentry, dentry); 343 344 344 345 /* The daemon never triggers a mount. */ 345 346 if (autofs4_oz_mode(sbi)) ··· 425 428 struct autofs_info *ino = autofs4_dentry_ino(dentry); 426 429 int status; 427 430 428 - DPRINTK("dentry=%p %.*s", 429 - dentry, dentry->d_name.len, dentry->d_name.name); 431 + DPRINTK("dentry=%p %pd", dentry, dentry); 430 432 431 433 /* The daemon never waits. 
*/ 432 434 if (autofs4_oz_mode(sbi)) { ··· 500 504 struct autofs_info *ino; 501 505 struct dentry *active; 502 506 503 - DPRINTK("name = %.*s", dentry->d_name.len, dentry->d_name.name); 507 + DPRINTK("name = %pd", dentry); 504 508 505 509 /* File name too long to exist */ 506 510 if (dentry->d_name.len > NAME_MAX) ··· 554 558 size_t size = strlen(symname); 555 559 char *cp; 556 560 557 - DPRINTK("%s <- %.*s", symname, 558 - dentry->d_name.len, dentry->d_name.name); 561 + DPRINTK("%s <- %pd", symname, dentry); 559 562 560 563 if (!autofs4_oz_mode(sbi)) 561 564 return -EACCES; ··· 682 687 /* only consider parents below dentrys in the root */ 683 688 if (IS_ROOT(parent->d_parent)) 684 689 return; 685 - d_child = &dentry->d_u.d_child; 690 + d_child = &dentry->d_child; 686 691 /* Set parent managed if it's becoming empty */ 687 692 if (d_child->next == &parent->d_subdirs && 688 693 d_child->prev == &parent->d_subdirs) ··· 696 701 struct autofs_info *ino = autofs4_dentry_ino(dentry); 697 702 struct autofs_info *p_ino; 698 703 699 - DPRINTK("dentry %p, removing %.*s", 700 - dentry, dentry->d_name.len, dentry->d_name.name); 704 + DPRINTK("dentry %p, removing %pd", dentry, dentry); 701 705 702 706 if (!autofs4_oz_mode(sbi)) 703 707 return -EACCES; ··· 738 744 if (!autofs4_oz_mode(sbi)) 739 745 return -EACCES; 740 746 741 - DPRINTK("dentry %p, creating %.*s", 742 - dentry, dentry->d_name.len, dentry->d_name.name); 747 + DPRINTK("dentry %p, creating %pd", dentry, dentry); 743 748 744 749 BUG_ON(!ino); 745 750
+7 -9
fs/befs/linuxvfs.c
··· 172 172 char *utfname; 173 173 const char *name = dentry->d_name.name; 174 174 175 - befs_debug(sb, "---> %s name %s inode %ld", __func__, 176 - dentry->d_name.name, dir->i_ino); 175 + befs_debug(sb, "---> %s name %pd inode %ld", __func__, 176 + dentry, dir->i_ino); 177 177 178 178 /* Convert to UTF-8 */ 179 179 if (BEFS_SB(sb)->nls) { ··· 191 191 } 192 192 193 193 if (ret == BEFS_BT_NOT_FOUND) { 194 - befs_debug(sb, "<--- %s %s not found", __func__, 195 - dentry->d_name.name); 194 + befs_debug(sb, "<--- %s %pd not found", __func__, dentry); 196 195 return ERR_PTR(-ENOENT); 197 196 198 197 } else if (ret != BEFS_OK || offset == 0) { ··· 221 222 size_t keysize; 222 223 unsigned char d_type; 223 224 char keybuf[BEFS_NAME_LEN + 1]; 224 - const char *dirname = file->f_path.dentry->d_name.name; 225 225 226 - befs_debug(sb, "---> %s name %s, inode %ld, ctx->pos %lld", 227 - __func__, dirname, inode->i_ino, ctx->pos); 226 + befs_debug(sb, "---> %s name %pD, inode %ld, ctx->pos %lld", 227 + __func__, file, inode->i_ino, ctx->pos); 228 228 229 229 more: 230 230 result = befs_btree_read(sb, ds, ctx->pos, BEFS_NAME_LEN + 1, ··· 231 233 232 234 if (result == BEFS_ERR) { 233 235 befs_debug(sb, "<--- %s ERROR", __func__); 234 - befs_error(sb, "IO error reading %s (inode %lu)", 235 - dirname, inode->i_ino); 236 + befs_error(sb, "IO error reading %pD (inode %lu)", 237 + file, inode->i_ino); 236 238 return -EIO; 237 239 238 240 } else if (result == BEFS_BT_END) {
+4 -4
fs/binfmt_aout.c
··· 292 292 if ((fd_offset & ~PAGE_MASK) != 0 && printk_ratelimit()) 293 293 { 294 294 printk(KERN_WARNING 295 - "fd_offset is not page aligned. Please convert program: %s\n", 296 - bprm->file->f_path.dentry->d_name.name); 295 + "fd_offset is not page aligned. Please convert program: %pD\n", 296 + bprm->file); 297 297 } 298 298 299 299 if (!bprm->file->f_op->mmap||((fd_offset & ~PAGE_MASK) != 0)) { ··· 375 375 if (printk_ratelimit()) 376 376 { 377 377 printk(KERN_WARNING 378 - "N_TXTOFF is not page aligned. Please convert library: %s\n", 379 - file->f_path.dentry->d_name.name); 378 + "N_TXTOFF is not page aligned. Please convert library: %pD\n", 379 + file); 380 380 } 381 381 vm_brk(start_addr, ex.a_text + ex.a_data + ex.a_bss); 382 382
+1 -1
fs/btrfs/inode.c
··· 5303 5303 return ERR_CAST(inode); 5304 5304 } 5305 5305 5306 - return d_materialise_unique(dentry, inode); 5306 + return d_splice_alias(inode, dentry); 5307 5307 } 5308 5308 5309 5309 unsigned char btrfs_filetype_table[] = {
+1 -1
fs/btrfs/ioctl.c
··· 5296 5296 ret = btrfs_start_delalloc_roots(root->fs_info, 0, -1); 5297 5297 if (ret) 5298 5298 return ret; 5299 - ret = btrfs_sync_fs(file->f_dentry->d_sb, 1); 5299 + ret = btrfs_sync_fs(file_inode(file)->i_sb, 1); 5300 5300 /* 5301 5301 * The transaction thread may want to do more work, 5302 5302 * namely it pokes the cleaner ktread that will start
+8 -13
fs/cachefiles/namei.c
··· 102 102 struct cachefiles_object *object; 103 103 struct rb_node *p; 104 104 105 - _enter(",'%*.*s'", 106 - dentry->d_name.len, dentry->d_name.len, dentry->d_name.name); 105 + _enter(",'%pd'", dentry); 107 106 108 107 write_lock(&cache->active_lock); 109 108 ··· 272 273 char nbuffer[8 + 8 + 1]; 273 274 int ret; 274 275 275 - _enter(",'%*.*s','%*.*s'", 276 - dir->d_name.len, dir->d_name.len, dir->d_name.name, 277 - rep->d_name.len, rep->d_name.len, rep->d_name.name); 276 + _enter(",'%pd','%pd'", dir, rep); 278 277 279 278 _debug("remove %p from %p", rep, dir); 280 279 ··· 594 597 /* if we've found that the terminal object exists, then we need to 595 598 * check its attributes and delete it if it's out of date */ 596 599 if (!object->new) { 597 - _debug("validate '%*.*s'", 598 - next->d_name.len, next->d_name.len, next->d_name.name); 600 + _debug("validate '%pd'", next); 599 601 600 602 ret = cachefiles_check_object_xattr(object, auxdata); 601 603 if (ret == -ESTALE) { ··· 823 827 unsigned long start; 824 828 int ret; 825 829 826 - //_enter(",%*.*s/,%s", 827 - // dir->d_name.len, dir->d_name.len, dir->d_name.name, filename); 830 + //_enter(",%pd/,%s", 831 + // dir, filename); 828 832 829 833 /* look up the victim */ 830 834 mutex_lock_nested(&dir->d_inode->i_mutex, I_MUTEX_PARENT); ··· 906 910 struct dentry *victim; 907 911 int ret; 908 912 909 - _enter(",%*.*s/,%s", 910 - dir->d_name.len, dir->d_name.len, dir->d_name.name, filename); 913 + _enter(",%pd/,%s", dir, filename); 911 914 912 915 victim = cachefiles_check_active(cache, dir, filename); 913 916 if (IS_ERR(victim)) ··· 964 969 { 965 970 struct dentry *victim; 966 971 967 - //_enter(",%*.*s/,%s", 968 - // dir->d_name.len, dir->d_name.len, dir->d_name.name, filename); 972 + //_enter(",%pd/,%s", 973 + // dir, filename); 969 974 970 975 victim = cachefiles_check_active(cache, dir, filename); 971 976 if (IS_ERR(victim))
+6 -9
fs/cachefiles/xattr.c
··· 51 51 } 52 52 53 53 if (ret != -EEXIST) { 54 - pr_err("Can't set xattr on %*.*s [%lu] (err %d)\n", 55 - dentry->d_name.len, dentry->d_name.len, 56 - dentry->d_name.name, dentry->d_inode->i_ino, 54 + pr_err("Can't set xattr on %pd [%lu] (err %d)\n", 55 + dentry, dentry->d_inode->i_ino, 57 56 -ret); 58 57 goto error; 59 58 } ··· 63 64 if (ret == -ERANGE) 64 65 goto bad_type_length; 65 66 66 - pr_err("Can't read xattr on %*.*s [%lu] (err %d)\n", 67 - dentry->d_name.len, dentry->d_name.len, 68 - dentry->d_name.name, dentry->d_inode->i_ino, 67 + pr_err("Can't read xattr on %pd [%lu] (err %d)\n", 68 + dentry, dentry->d_inode->i_ino, 69 69 -ret); 70 70 goto error; 71 71 } ··· 90 92 91 93 bad_type: 92 94 xtype[2] = 0; 93 - pr_err("Cache object %*.*s [%lu] type %s not %s\n", 94 - dentry->d_name.len, dentry->d_name.len, 95 - dentry->d_name.name, dentry->d_inode->i_ino, 95 + pr_err("Cache object %pd [%lu] type %s not %s\n", 96 + dentry, dentry->d_inode->i_ino, 96 97 xtype, type); 97 98 ret = -EIO; 98 99 goto error;
+6 -8
fs/ceph/debugfs.c
··· 83 83 if (IS_ERR(path)) 84 84 path = NULL; 85 85 spin_lock(&req->r_dentry->d_lock); 86 - seq_printf(s, " #%llx/%.*s (%s)", 86 + seq_printf(s, " #%llx/%pd (%s)", 87 87 ceph_ino(req->r_dentry->d_parent->d_inode), 88 - req->r_dentry->d_name.len, 89 - req->r_dentry->d_name.name, 88 + req->r_dentry, 90 89 path ? path : ""); 91 90 spin_unlock(&req->r_dentry->d_lock); 92 91 kfree(path); ··· 102 103 if (IS_ERR(path)) 103 104 path = NULL; 104 105 spin_lock(&req->r_old_dentry->d_lock); 105 - seq_printf(s, " #%llx/%.*s (%s)", 106 + seq_printf(s, " #%llx/%pd (%s)", 106 107 req->r_old_dentry_dir ? 107 108 ceph_ino(req->r_old_dentry_dir) : 0, 108 - req->r_old_dentry->d_name.len, 109 - req->r_old_dentry->d_name.name, 109 + req->r_old_dentry, 110 110 path ? path : ""); 111 111 spin_unlock(&req->r_old_dentry->d_lock); 112 112 kfree(path); ··· 148 150 spin_lock(&mdsc->dentry_lru_lock); 149 151 list_for_each_entry(di, &mdsc->dentry_lru, lru) { 150 152 struct dentry *dentry = di->dentry; 151 - seq_printf(s, "%p %p\t%.*s\n", 152 - di, dentry, dentry->d_name.len, dentry->d_name.name); 153 + seq_printf(s, "%p %p\t%pd\n", 154 + di, dentry, dentry); 153 155 } 154 156 spin_unlock(&mdsc->dentry_lru_lock); 155 157
+26 -30
fs/ceph/dir.c
··· 111 111 /* 112 112 * When possible, we try to satisfy a readdir by peeking at the 113 113 * dcache. We make this work by carefully ordering dentries on 114 - * d_u.d_child when we initially get results back from the MDS, and 114 + * d_child when we initially get results back from the MDS, and 115 115 * falling back to a "normal" sync readdir if any dentries in the dir 116 116 * are dropped. 117 117 * ··· 123 123 u32 shared_gen) 124 124 { 125 125 struct ceph_file_info *fi = file->private_data; 126 - struct dentry *parent = file->f_dentry; 126 + struct dentry *parent = file->f_path.dentry; 127 127 struct inode *dir = parent->d_inode; 128 128 struct list_head *p; 129 129 struct dentry *dentry, *last; ··· 147 147 p = parent->d_subdirs.prev; 148 148 dout(" initial p %p/%p\n", p->prev, p->next); 149 149 } else { 150 - p = last->d_u.d_child.prev; 150 + p = last->d_child.prev; 151 151 } 152 152 153 153 more: 154 - dentry = list_entry(p, struct dentry, d_u.d_child); 154 + dentry = list_entry(p, struct dentry, d_child); 155 155 di = ceph_dentry(dentry); 156 156 while (1) { 157 157 dout(" p %p/%p %s d_subdirs %p/%p\n", p->prev, p->next, ··· 168 168 ceph_ino(dentry->d_inode) != CEPH_INO_CEPH && 169 169 fpos_cmp(ctx->pos, di->offset) <= 0) 170 170 break; 171 - dout(" skipping %p %.*s at %llu (%llu)%s%s\n", dentry, 172 - dentry->d_name.len, dentry->d_name.name, di->offset, 171 + dout(" skipping %p %pd at %llu (%llu)%s%s\n", dentry, 172 + dentry, di->offset, 173 173 ctx->pos, d_unhashed(dentry) ? " unhashed" : "", 174 174 !dentry->d_inode ? 
" null" : ""); 175 175 spin_unlock(&dentry->d_lock); 176 176 p = p->prev; 177 - dentry = list_entry(p, struct dentry, d_u.d_child); 177 + dentry = list_entry(p, struct dentry, d_child); 178 178 di = ceph_dentry(dentry); 179 179 } 180 180 ··· 190 190 goto out; 191 191 } 192 192 193 - dout(" %llu (%llu) dentry %p %.*s %p\n", di->offset, ctx->pos, 194 - dentry, dentry->d_name.len, dentry->d_name.name, dentry->d_inode); 193 + dout(" %llu (%llu) dentry %p %pd %p\n", di->offset, ctx->pos, 194 + dentry, dentry, dentry->d_inode); 195 195 if (!dir_emit(ctx, dentry->d_name.name, 196 196 dentry->d_name.len, 197 197 ceph_translate_ino(dentry->d_sb, dentry->d_inode->i_ino), ··· 274 274 off = 1; 275 275 } 276 276 if (ctx->pos == 1) { 277 - ino_t ino = parent_ino(file->f_dentry); 277 + ino_t ino = parent_ino(file->f_path.dentry); 278 278 dout("readdir off 1 -> '..'\n"); 279 279 if (!dir_emit(ctx, "..", 2, 280 280 ceph_translate_ino(inode->i_sb, ino), ··· 337 337 } 338 338 req->r_inode = inode; 339 339 ihold(inode); 340 - req->r_dentry = dget(file->f_dentry); 340 + req->r_dentry = dget(file->f_path.dentry); 341 341 /* hints to request -> mds selection code */ 342 342 req->r_direct_mode = USE_AUTH_MDS; 343 343 req->r_direct_hash = ceph_frag_value(frag); ··· 538 538 strcmp(dentry->d_name.name, 539 539 fsc->mount_options->snapdir_name) == 0) { 540 540 struct inode *inode = ceph_get_snapdir(parent); 541 - dout("ENOENT on snapdir %p '%.*s', linking to snapdir %p\n", 542 - dentry, dentry->d_name.len, dentry->d_name.name, inode); 541 + dout("ENOENT on snapdir %p '%pd', linking to snapdir %p\n", 542 + dentry, dentry, inode); 543 543 BUG_ON(!d_unhashed(dentry)); 544 544 d_add(dentry, inode); 545 545 err = 0; ··· 603 603 int op; 604 604 int err; 605 605 606 - dout("lookup %p dentry %p '%.*s'\n", 607 - dir, dentry, dentry->d_name.len, dentry->d_name.name); 606 + dout("lookup %p dentry %p '%pd'\n", 607 + dir, dentry, dentry); 608 608 609 609 if (dentry->d_name.len > NAME_MAX) 610 610 return 
ERR_PTR(-ENAMETOOLONG); ··· 774 774 if (ceph_snap(dir) == CEPH_SNAPDIR) { 775 775 /* mkdir .snap/foo is a MKSNAP */ 776 776 op = CEPH_MDS_OP_MKSNAP; 777 - dout("mksnap dir %p snap '%.*s' dn %p\n", dir, 778 - dentry->d_name.len, dentry->d_name.name, dentry); 777 + dout("mksnap dir %p snap '%pd' dn %p\n", dir, 778 + dentry, dentry); 779 779 } else if (ceph_snap(dir) == CEPH_NOSNAP) { 780 780 dout("mkdir dir %p dn %p mode 0%ho\n", dir, dentry, mode); 781 781 op = CEPH_MDS_OP_MKDIR; ··· 888 888 889 889 if (ceph_snap(dir) == CEPH_SNAPDIR) { 890 890 /* rmdir .snap/foo is RMSNAP */ 891 - dout("rmsnap dir %p '%.*s' dn %p\n", dir, dentry->d_name.len, 892 - dentry->d_name.name, dentry); 891 + dout("rmsnap dir %p '%pd' dn %p\n", dir, dentry, dentry); 893 892 op = CEPH_MDS_OP_RMSNAP; 894 893 } else if (ceph_snap(dir) == CEPH_NOSNAP) { 895 894 dout("unlink/rmdir dir %p dn %p inode %p\n", ··· 1062 1063 if (flags & LOOKUP_RCU) 1063 1064 return -ECHILD; 1064 1065 1065 - dout("d_revalidate %p '%.*s' inode %p offset %lld\n", dentry, 1066 - dentry->d_name.len, dentry->d_name.name, dentry->d_inode, 1067 - ceph_dentry(dentry)->offset); 1066 + dout("d_revalidate %p '%pd' inode %p offset %lld\n", dentry, 1067 + dentry, dentry->d_inode, ceph_dentry(dentry)->offset); 1068 1068 1069 1069 dir = ceph_get_dentry_parent_inode(dentry); 1070 1070 1071 1071 /* always trust cached snapped dentries, snapdir dentry */ 1072 1072 if (ceph_snap(dir) != CEPH_NOSNAP) { 1073 - dout("d_revalidate %p '%.*s' inode %p is SNAPPED\n", dentry, 1074 - dentry->d_name.len, dentry->d_name.name, dentry->d_inode); 1073 + dout("d_revalidate %p '%pd' inode %p is SNAPPED\n", dentry, 1074 + dentry, dentry->d_inode); 1075 1075 valid = 1; 1076 1076 } else if (dentry->d_inode && 1077 1077 ceph_snap(dentry->d_inode) == CEPH_SNAPDIR) { ··· 1263 1265 struct ceph_dentry_info *di = ceph_dentry(dn); 1264 1266 struct ceph_mds_client *mdsc; 1265 1267 1266 - dout("dentry_lru_add %p %p '%.*s'\n", di, dn, 1267 - dn->d_name.len, 
dn->d_name.name); 1268 + dout("dentry_lru_add %p %p '%pd'\n", di, dn, dn); 1268 1269 mdsc = ceph_sb_to_client(dn->d_sb)->mdsc; 1269 1270 spin_lock(&mdsc->dentry_lru_lock); 1270 1271 list_add_tail(&di->lru, &mdsc->dentry_lru); ··· 1276 1279 struct ceph_dentry_info *di = ceph_dentry(dn); 1277 1280 struct ceph_mds_client *mdsc; 1278 1281 1279 - dout("dentry_lru_touch %p %p '%.*s' (offset %lld)\n", di, dn, 1280 - dn->d_name.len, dn->d_name.name, di->offset); 1282 + dout("dentry_lru_touch %p %p '%pd' (offset %lld)\n", di, dn, dn, 1283 + di->offset); 1281 1284 mdsc = ceph_sb_to_client(dn->d_sb)->mdsc; 1282 1285 spin_lock(&mdsc->dentry_lru_lock); 1283 1286 list_move_tail(&di->lru, &mdsc->dentry_lru); ··· 1289 1292 struct ceph_dentry_info *di = ceph_dentry(dn); 1290 1293 struct ceph_mds_client *mdsc; 1291 1294 1292 - dout("dentry_lru_del %p %p '%.*s'\n", di, dn, 1293 - dn->d_name.len, dn->d_name.name); 1295 + dout("dentry_lru_del %p %p '%pd'\n", di, dn, dn); 1294 1296 mdsc = ceph_sb_to_client(dn->d_sb)->mdsc; 1295 1297 spin_lock(&mdsc->dentry_lru_lock); 1296 1298 list_del_init(&di->lru);
+3 -3
fs/ceph/file.c
··· 211 211 212 212 req->r_num_caps = 1; 213 213 if (flags & O_CREAT) 214 - parent_inode = ceph_get_dentry_parent_inode(file->f_dentry); 214 + parent_inode = ceph_get_dentry_parent_inode(file->f_path.dentry); 215 215 err = ceph_mdsc_do_request(mdsc, parent_inode, req); 216 216 iput(parent_inode); 217 217 if (!err) ··· 238 238 struct ceph_acls_info acls = {}; 239 239 int err; 240 240 241 - dout("atomic_open %p dentry %p '%.*s' %s flags %d mode 0%o\n", 242 - dir, dentry, dentry->d_name.len, dentry->d_name.name, 241 + dout("atomic_open %p dentry %p '%pd' %s flags %d mode 0%o\n", 242 + dir, dentry, dentry, 243 243 d_unhashed(dentry) ? "unhashed" : "hashed", flags, mode); 244 244 245 245 if (dentry->d_name.len > NAME_MAX)
+8 -10
fs/ceph/inode.c
··· 967 967 /* dn must be unhashed */ 968 968 if (!d_unhashed(dn)) 969 969 d_drop(dn); 970 - realdn = d_materialise_unique(dn, in); 970 + realdn = d_splice_alias(in, dn); 971 971 if (IS_ERR(realdn)) { 972 972 pr_err("splice_dentry error %ld %p inode %p ino %llx.%llx\n", 973 973 PTR_ERR(realdn), dn, in, ceph_vinop(in)); ··· 1186 1186 struct inode *olddir = req->r_old_dentry_dir; 1187 1187 BUG_ON(!olddir); 1188 1188 1189 - dout(" src %p '%.*s' dst %p '%.*s'\n", 1189 + dout(" src %p '%pd' dst %p '%pd'\n", 1190 1190 req->r_old_dentry, 1191 - req->r_old_dentry->d_name.len, 1192 - req->r_old_dentry->d_name.name, 1193 - dn, dn->d_name.len, dn->d_name.name); 1191 + req->r_old_dentry, 1192 + dn, dn); 1194 1193 dout("fill_trace doing d_move %p -> %p\n", 1195 1194 req->r_old_dentry, dn); 1196 1195 1197 1196 d_move(req->r_old_dentry, dn); 1198 - dout(" src %p '%.*s' dst %p '%.*s'\n", 1197 + dout(" src %p '%pd' dst %p '%pd'\n", 1199 1198 req->r_old_dentry, 1200 - req->r_old_dentry->d_name.len, 1201 - req->r_old_dentry->d_name.name, 1202 - dn, dn->d_name.len, dn->d_name.name); 1199 + req->r_old_dentry, 1200 + dn, dn); 1203 1201 1204 1202 /* ensure target dentry is invalidated, despite 1205 1203 rehashing bug in vfs_rename_dir */ ··· 1397 1399 /* reorder parent's d_subdirs */ 1398 1400 spin_lock(&parent->d_lock); 1399 1401 spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED); 1400 - list_move(&dn->d_u.d_child, &parent->d_subdirs); 1402 + list_move(&dn->d_child, &parent->d_subdirs); 1401 1403 spin_unlock(&dn->d_lock); 1402 1404 spin_unlock(&parent->d_lock); 1403 1405 }
+1 -2
fs/cifs/cifsfs.c
··· 209 209 210 210 static long cifs_fallocate(struct file *file, int mode, loff_t off, loff_t len) 211 211 { 212 - struct super_block *sb = file->f_path.dentry->d_sb; 213 - struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 212 + struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file); 214 213 struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb); 215 214 struct TCP_Server_Info *server = tcon->ses->server; 216 215
+6
fs/cifs/cifsglob.h
··· 1168 1168 return sb->s_fs_info; 1169 1169 } 1170 1170 1171 + static inline struct cifs_sb_info * 1172 + CIFS_FILE_SB(struct file *file) 1173 + { 1174 + return CIFS_SB(file_inode(file)->i_sb); 1175 + } 1176 + 1171 1177 static inline char CIFS_DIR_SEP(const struct cifs_sb_info *cifs_sb) 1172 1178 { 1173 1179 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)
+6 -6
fs/cifs/file.c
··· 1586 1586 cifs_read_flock(flock, &type, &lock, &unlock, &wait_flag, 1587 1587 tcon->ses->server); 1588 1588 1589 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 1589 + cifs_sb = CIFS_FILE_SB(file); 1590 1590 netfid = cfile->fid.netfid; 1591 1591 cinode = CIFS_I(file_inode(file)); 1592 1592 ··· 2305 2305 struct cifs_tcon *tcon; 2306 2306 struct TCP_Server_Info *server; 2307 2307 struct cifsFileInfo *smbfile = file->private_data; 2308 - struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 2308 + struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file); 2309 2309 struct inode *inode = file->f_mapping->host; 2310 2310 2311 2311 rc = filemap_write_and_wait_range(inode->i_mapping, start, end); ··· 2585 2585 iov_iter_truncate(from, len); 2586 2586 2587 2587 INIT_LIST_HEAD(&wdata_list); 2588 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 2588 + cifs_sb = CIFS_FILE_SB(file); 2589 2589 open_file = file->private_data; 2590 2590 tcon = tlink_tcon(open_file->tlink); 2591 2591 ··· 3010 3010 return 0; 3011 3011 3012 3012 INIT_LIST_HEAD(&rdata_list); 3013 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 3013 + cifs_sb = CIFS_FILE_SB(file); 3014 3014 open_file = file->private_data; 3015 3015 tcon = tlink_tcon(open_file->tlink); 3016 3016 ··· 3155 3155 __u32 pid; 3156 3156 3157 3157 xid = get_xid(); 3158 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 3158 + cifs_sb = CIFS_FILE_SB(file); 3159 3159 3160 3160 /* FIXME: set up handlers for larger reads and/or convert to async */ 3161 3161 rsize = min_t(unsigned int, cifs_sb->rsize, CIFSMaxBufSize); ··· 3462 3462 int rc; 3463 3463 struct list_head tmplist; 3464 3464 struct cifsFileInfo *open_file = file->private_data; 3465 - struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 3465 + struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file); 3466 3466 struct TCP_Server_Info *server; 3467 3467 pid_t pid; 3468 3468
+1 -1
fs/cifs/inode.c
··· 895 895 struct dentry *dentry; 896 896 897 897 spin_lock(&inode->i_lock); 898 - hlist_for_each_entry(dentry, &inode->i_dentry, d_alias) { 898 + hlist_for_each_entry(dentry, &inode->i_dentry, d_u.d_alias) { 899 899 if (!d_unhashed(dentry) || IS_ROOT(dentry)) { 900 900 spin_unlock(&inode->i_lock); 901 901 return true;
+5 -5
fs/cifs/readdir.c
··· 123 123 if (!inode) 124 124 goto out; 125 125 126 - alias = d_materialise_unique(dentry, inode); 126 + alias = d_splice_alias(inode, dentry); 127 127 if (alias && !IS_ERR(alias)) 128 128 dput(alias); 129 129 out: ··· 261 261 int rc = 0; 262 262 char *full_path = NULL; 263 263 struct cifsFileInfo *cifsFile; 264 - struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 264 + struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file); 265 265 struct tcon_link *tlink = NULL; 266 266 struct cifs_tcon *tcon; 267 267 struct TCP_Server_Info *server; ··· 561 561 loff_t first_entry_in_buffer; 562 562 loff_t index_to_find = pos; 563 563 struct cifsFileInfo *cfile = file->private_data; 564 - struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 564 + struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file); 565 565 struct TCP_Server_Info *server = tcon->ses->server; 566 566 /* check if index in the buffer */ 567 567 ··· 679 679 char *scratch_buf, unsigned int max_len) 680 680 { 681 681 struct cifsFileInfo *file_info = file->private_data; 682 - struct super_block *sb = file->f_path.dentry->d_sb; 682 + struct super_block *sb = file_inode(file)->i_sb; 683 683 struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 684 684 struct cifs_dirent de = { NULL, }; 685 685 struct cifs_fattr fattr; ··· 753 753 */ 754 754 fattr.cf_flags |= CIFS_FATTR_NEED_REVAL; 755 755 756 - cifs_prime_dcache(file->f_dentry, &name, &fattr); 756 + cifs_prime_dcache(file->f_path.dentry, &name, &fattr); 757 757 758 758 ino = cifs_uniqueid_to_ino_t(fattr.cf_uniqueid); 759 759 return !dir_emit(ctx, name.name, name.len, ino, fattr.cf_dtype);
+1 -1
fs/coda/cache.c
··· 92 92 struct dentry *de; 93 93 94 94 spin_lock(&parent->d_lock); 95 - list_for_each_entry(de, &parent->d_subdirs, d_u.d_child) { 95 + list_for_each_entry(de, &parent->d_subdirs, d_child) { 96 96 /* don't know what to do with negative dentries */ 97 97 if (de->d_inode ) 98 98 coda_flag_inode(de->d_inode, flag);
-6
fs/coda/coda_linux.c
··· 40 40 (strncmp(name, CODA_CONTROL, CODA_CONTROLLEN) == 0)); 41 41 } 42 42 43 - /* recognize /coda inode */ 44 - int coda_isroot(struct inode *i) 45 - { 46 - return ( i->i_sb->s_root->d_inode == i ); 47 - } 48 - 49 43 unsigned short coda_flags_to_cflags(unsigned short flags) 50 44 { 51 45 unsigned short coda_flags = 0;
-1
fs/coda/coda_linux.h
··· 52 52 53 53 /* this file: helpers */ 54 54 char *coda_f2s(struct CodaFid *f); 55 - int coda_isroot(struct inode *i); 56 55 int coda_iscontrol(const char *name, size_t length); 57 56 58 57 void coda_vattr_to_iattr(struct inode *, struct coda_vattr *);
+6 -6
fs/coda/dir.c
··· 107 107 } 108 108 109 109 /* control object, create inode on the fly */ 110 - if (coda_isroot(dir) && coda_iscontrol(name, length)) { 110 + if (is_root_inode(dir) && coda_iscontrol(name, length)) { 111 111 inode = coda_cnode_makectl(sb); 112 112 type = CODA_NOCACHE; 113 113 } else { ··· 195 195 struct CodaFid newfid; 196 196 struct coda_vattr attrs; 197 197 198 - if (coda_isroot(dir) && coda_iscontrol(name, length)) 198 + if (is_root_inode(dir) && coda_iscontrol(name, length)) 199 199 return -EPERM; 200 200 201 201 error = venus_create(dir->i_sb, coda_i2f(dir), name, length, ··· 227 227 int error; 228 228 struct CodaFid newfid; 229 229 230 - if (coda_isroot(dir) && coda_iscontrol(name, len)) 230 + if (is_root_inode(dir) && coda_iscontrol(name, len)) 231 231 return -EPERM; 232 232 233 233 attrs.va_mode = mode; ··· 261 261 int len = de->d_name.len; 262 262 int error; 263 263 264 - if (coda_isroot(dir_inode) && coda_iscontrol(name, len)) 264 + if (is_root_inode(dir_inode) && coda_iscontrol(name, len)) 265 265 return -EPERM; 266 266 267 267 error = venus_link(dir_inode->i_sb, coda_i2f(inode), ··· 287 287 int symlen; 288 288 int error; 289 289 290 - if (coda_isroot(dir_inode) && coda_iscontrol(name, len)) 290 + if (is_root_inode(dir_inode) && coda_iscontrol(name, len)) 291 291 return -EPERM; 292 292 293 293 symlen = strlen(symname); ··· 507 507 return -ECHILD; 508 508 509 509 inode = de->d_inode; 510 - if (!inode || coda_isroot(inode)) 510 + if (!inode || is_root_inode(inode)) 511 511 goto out; 512 512 if (is_bad_inode(inode)) 513 513 goto bad;
+13 -8
fs/compat.c
··· 847 847 int result; 848 848 }; 849 849 850 - static int compat_fillonedir(void *__buf, const char *name, int namlen, 851 - loff_t offset, u64 ino, unsigned int d_type) 850 + static int compat_fillonedir(struct dir_context *ctx, const char *name, 851 + int namlen, loff_t offset, u64 ino, 852 + unsigned int d_type) 852 853 { 853 - struct compat_readdir_callback *buf = __buf; 854 + struct compat_readdir_callback *buf = 855 + container_of(ctx, struct compat_readdir_callback, ctx); 854 856 struct compat_old_linux_dirent __user *dirent; 855 857 compat_ulong_t d_ino; 856 858 ··· 917 915 int error; 918 916 }; 919 917 920 - static int compat_filldir(void *__buf, const char *name, int namlen, 918 + static int compat_filldir(struct dir_context *ctx, const char *name, int namlen, 921 919 loff_t offset, u64 ino, unsigned int d_type) 922 920 { 923 921 struct compat_linux_dirent __user * dirent; 924 - struct compat_getdents_callback *buf = __buf; 922 + struct compat_getdents_callback *buf = 923 + container_of(ctx, struct compat_getdents_callback, ctx); 925 924 compat_ulong_t d_ino; 926 925 int reclen = ALIGN(offsetof(struct compat_linux_dirent, d_name) + 927 926 namlen + 2, sizeof(compat_long_t)); ··· 1004 1001 int error; 1005 1002 }; 1006 1003 1007 - static int compat_filldir64(void * __buf, const char * name, int namlen, loff_t offset, 1008 - u64 ino, unsigned int d_type) 1004 + static int compat_filldir64(struct dir_context *ctx, const char *name, 1005 + int namlen, loff_t offset, u64 ino, 1006 + unsigned int d_type) 1009 1007 { 1010 1008 struct linux_dirent64 __user *dirent; 1011 - struct compat_getdents_callback64 *buf = __buf; 1009 + struct compat_getdents_callback64 *buf = 1010 + container_of(ctx, struct compat_getdents_callback64, ctx); 1012 1011 int reclen = ALIGN(offsetof(struct linux_dirent64, d_name) + namlen + 1, 1013 1012 sizeof(u64)); 1014 1013 u64 off;
+1 -1
fs/configfs/dir.c
··· 386 386 if (d->d_inode) 387 387 simple_rmdir(parent->d_inode,d); 388 388 389 - pr_debug(" o %s removing done (%d)\n",d->d_name.name, d_count(d)); 389 + pr_debug(" o %pd removing done (%d)\n", d, d_count(d)); 390 390 391 391 dput(parent); 392 392 }
+85 -188
fs/dcache.c
··· 44 44 /* 45 45 * Usage: 46 46 * dcache->d_inode->i_lock protects: 47 - * - i_dentry, d_alias, d_inode of aliases 47 + * - i_dentry, d_u.d_alias, d_inode of aliases 48 48 * dcache_hash_bucket lock protects: 49 49 * - the dcache hash table 50 50 * s_anon bl list spinlock protects: ··· 59 59 * - d_unhashed() 60 60 * - d_parent and d_subdirs 61 61 * - childrens' d_child and d_parent 62 - * - d_alias, d_inode 62 + * - d_u.d_alias, d_inode 63 63 * 64 64 * Ordering: 65 65 * dentry->d_inode->i_lock ··· 252 252 { 253 253 struct dentry *dentry = container_of(head, struct dentry, d_u.d_rcu); 254 254 255 - WARN_ON(!hlist_unhashed(&dentry->d_alias)); 256 255 kmem_cache_free(dentry_cache, dentry); 257 256 } 258 257 259 258 static void __d_free_external(struct rcu_head *head) 260 259 { 261 260 struct dentry *dentry = container_of(head, struct dentry, d_u.d_rcu); 262 - WARN_ON(!hlist_unhashed(&dentry->d_alias)); 263 261 kfree(external_name(dentry)); 264 262 kmem_cache_free(dentry_cache, dentry); 265 263 } ··· 269 271 270 272 static void dentry_free(struct dentry *dentry) 271 273 { 274 + WARN_ON(!hlist_unhashed(&dentry->d_u.d_alias)); 272 275 if (unlikely(dname_external(dentry))) { 273 276 struct external_name *p = external_name(dentry); 274 277 if (likely(atomic_dec_and_test(&p->u.count))) { ··· 310 311 struct inode *inode = dentry->d_inode; 311 312 if (inode) { 312 313 dentry->d_inode = NULL; 313 - hlist_del_init(&dentry->d_alias); 314 + hlist_del_init(&dentry->d_u.d_alias); 314 315 spin_unlock(&dentry->d_lock); 315 316 spin_unlock(&inode->i_lock); 316 317 if (!inode->i_nlink) ··· 335 336 struct inode *inode = dentry->d_inode; 336 337 __d_clear_type(dentry); 337 338 dentry->d_inode = NULL; 338 - hlist_del_init(&dentry->d_alias); 339 + hlist_del_init(&dentry->d_u.d_alias); 339 340 dentry_rcuwalk_barrier(dentry); 340 341 spin_unlock(&dentry->d_lock); 341 342 spin_unlock(&inode->i_lock); ··· 495 496 } 496 497 /* if it was on the hash then remove it */ 497 498 __d_drop(dentry); 
498 - list_del(&dentry->d_u.d_child); 499 + __list_del_entry(&dentry->d_child); 499 500 /* 500 501 * Inform d_walk() that we are no longer attached to the 501 502 * dentry tree ··· 721 722 722 723 again: 723 724 discon_alias = NULL; 724 - hlist_for_each_entry(alias, &inode->i_dentry, d_alias) { 725 + hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) { 725 726 spin_lock(&alias->d_lock); 726 727 if (S_ISDIR(inode->i_mode) || !d_unhashed(alias)) { 727 728 if (IS_ROOT(alias) && ··· 771 772 struct dentry *dentry; 772 773 restart: 773 774 spin_lock(&inode->i_lock); 774 - hlist_for_each_entry(dentry, &inode->i_dentry, d_alias) { 775 + hlist_for_each_entry(dentry, &inode->i_dentry, d_u.d_alias) { 775 776 spin_lock(&dentry->d_lock); 776 777 if (!dentry->d_lockref.count) { 777 778 struct dentry *parent = lock_parent(dentry); ··· 1050 1051 resume: 1051 1052 while (next != &this_parent->d_subdirs) { 1052 1053 struct list_head *tmp = next; 1053 - struct dentry *dentry = list_entry(tmp, struct dentry, d_u.d_child); 1054 + struct dentry *dentry = list_entry(tmp, struct dentry, d_child); 1054 1055 next = tmp->next; 1055 1056 1056 1057 spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED); ··· 1082 1083 /* 1083 1084 * All done at this level ... ascend and resume the search. 1084 1085 */ 1086 + rcu_read_lock(); 1087 + ascend: 1085 1088 if (this_parent != parent) { 1086 1089 struct dentry *child = this_parent; 1087 1090 this_parent = child->d_parent; 1088 1091 1089 - rcu_read_lock(); 1090 1092 spin_unlock(&child->d_lock); 1091 1093 spin_lock(&this_parent->d_lock); 1092 1094 1093 - /* 1094 - * might go back up the wrong parent if we have had a rename 1095 - * or deletion 1096 - */ 1097 - if (this_parent != child->d_parent || 1098 - (child->d_flags & DCACHE_DENTRY_KILLED) || 1099 - need_seqretry(&rename_lock, seq)) { 1100 - spin_unlock(&this_parent->d_lock); 1101 - rcu_read_unlock(); 1095 + /* might go back up the wrong parent if we have had a rename. 
*/ 1096 + if (need_seqretry(&rename_lock, seq)) 1102 1097 goto rename_retry; 1098 + next = child->d_child.next; 1099 + while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED)) { 1100 + if (next == &this_parent->d_subdirs) 1101 + goto ascend; 1102 + child = list_entry(next, struct dentry, d_child); 1103 + next = next->next; 1103 1104 } 1104 1105 rcu_read_unlock(); 1105 - next = child->d_u.d_child.next; 1106 1106 goto resume; 1107 1107 } 1108 - if (need_seqretry(&rename_lock, seq)) { 1109 - spin_unlock(&this_parent->d_lock); 1108 + if (need_seqretry(&rename_lock, seq)) 1110 1109 goto rename_retry; 1111 - } 1110 + rcu_read_unlock(); 1112 1111 if (finish) 1113 1112 finish(data); 1114 1113 ··· 1116 1119 return; 1117 1120 1118 1121 rename_retry: 1122 + spin_unlock(&this_parent->d_lock); 1123 + rcu_read_unlock(); 1124 + BUG_ON(seq & 1); 1119 1125 if (!retry) 1120 1126 return; 1121 1127 seq = 1; ··· 1455 1455 INIT_HLIST_BL_NODE(&dentry->d_hash); 1456 1456 INIT_LIST_HEAD(&dentry->d_lru); 1457 1457 INIT_LIST_HEAD(&dentry->d_subdirs); 1458 - INIT_HLIST_NODE(&dentry->d_alias); 1459 - INIT_LIST_HEAD(&dentry->d_u.d_child); 1458 + INIT_HLIST_NODE(&dentry->d_u.d_alias); 1459 + INIT_LIST_HEAD(&dentry->d_child); 1460 1460 d_set_d_op(dentry, dentry->d_sb->s_d_op); 1461 1461 1462 1462 this_cpu_inc(nr_dentry); ··· 1486 1486 */ 1487 1487 __dget_dlock(parent); 1488 1488 dentry->d_parent = parent; 1489 - list_add(&dentry->d_u.d_child, &parent->d_subdirs); 1489 + list_add(&dentry->d_child, &parent->d_subdirs); 1490 1490 spin_unlock(&parent->d_lock); 1491 1491 1492 1492 return dentry; ··· 1579 1579 spin_lock(&dentry->d_lock); 1580 1580 __d_set_type(dentry, add_flags); 1581 1581 if (inode) 1582 - hlist_add_head(&dentry->d_alias, &inode->i_dentry); 1582 + hlist_add_head(&dentry->d_u.d_alias, &inode->i_dentry); 1583 1583 dentry->d_inode = inode; 1584 1584 dentry_rcuwalk_barrier(dentry); 1585 1585 spin_unlock(&dentry->d_lock); ··· 1603 1603 1604 1604 void d_instantiate(struct dentry *entry, 
struct inode * inode) 1605 1605 { 1606 - BUG_ON(!hlist_unhashed(&entry->d_alias)); 1606 + BUG_ON(!hlist_unhashed(&entry->d_u.d_alias)); 1607 1607 if (inode) 1608 1608 spin_lock(&inode->i_lock); 1609 1609 __d_instantiate(entry, inode); ··· 1642 1642 return NULL; 1643 1643 } 1644 1644 1645 - hlist_for_each_entry(alias, &inode->i_dentry, d_alias) { 1645 + hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) { 1646 1646 /* 1647 1647 * Don't need alias->d_lock here, because aliases with 1648 1648 * d_parent == entry->d_parent are not subject to name or ··· 1668 1668 { 1669 1669 struct dentry *result; 1670 1670 1671 - BUG_ON(!hlist_unhashed(&entry->d_alias)); 1671 + BUG_ON(!hlist_unhashed(&entry->d_u.d_alias)); 1672 1672 1673 1673 if (inode) 1674 1674 spin_lock(&inode->i_lock); ··· 1699 1699 */ 1700 1700 int d_instantiate_no_diralias(struct dentry *entry, struct inode *inode) 1701 1701 { 1702 - BUG_ON(!hlist_unhashed(&entry->d_alias)); 1702 + BUG_ON(!hlist_unhashed(&entry->d_u.d_alias)); 1703 1703 1704 1704 spin_lock(&inode->i_lock); 1705 1705 if (S_ISDIR(inode->i_mode) && !hlist_empty(&inode->i_dentry)) { ··· 1738 1738 1739 1739 if (hlist_empty(&inode->i_dentry)) 1740 1740 return NULL; 1741 - alias = hlist_entry(inode->i_dentry.first, struct dentry, d_alias); 1741 + alias = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias); 1742 1742 __dget(alias); 1743 1743 return alias; 1744 1744 } ··· 1800 1800 spin_lock(&tmp->d_lock); 1801 1801 tmp->d_inode = inode; 1802 1802 tmp->d_flags |= add_flags; 1803 - hlist_add_head(&tmp->d_alias, &inode->i_dentry); 1803 + hlist_add_head(&tmp->d_u.d_alias, &inode->i_dentry); 1804 1804 hlist_bl_lock(&tmp->d_sb->s_anon); 1805 1805 hlist_bl_add_head(&tmp->d_hash, &tmp->d_sb->s_anon); 1806 1806 hlist_bl_unlock(&tmp->d_sb->s_anon); ··· 1889 1889 * if not go ahead and create it now. 
1890 1890 */ 1891 1891 found = d_hash_and_lookup(dentry->d_parent, name); 1892 - if (unlikely(IS_ERR(found))) 1893 - goto err_out; 1894 1892 if (!found) { 1895 1893 new = d_alloc(dentry->d_parent, name); 1896 1894 if (!new) { 1897 1895 found = ERR_PTR(-ENOMEM); 1898 - goto err_out; 1896 + } else { 1897 + found = d_splice_alias(inode, new); 1898 + if (found) { 1899 + dput(new); 1900 + return found; 1901 + } 1902 + return new; 1899 1903 } 1900 - 1901 - found = d_splice_alias(inode, new); 1902 - if (found) { 1903 - dput(new); 1904 - return found; 1905 - } 1906 - return new; 1907 1904 } 1908 - 1909 - /* 1910 - * If a matching dentry exists, and it's not negative use it. 1911 - * 1912 - * Decrement the reference count to balance the iget() done 1913 - * earlier on. 1914 - */ 1915 - if (found->d_inode) { 1916 - if (unlikely(found->d_inode != inode)) { 1917 - /* This can't happen because bad inodes are unhashed. */ 1918 - BUG_ON(!is_bad_inode(inode)); 1919 - BUG_ON(!is_bad_inode(found->d_inode)); 1920 - } 1921 - iput(inode); 1922 - return found; 1923 - } 1924 - 1925 - /* 1926 - * Negative dentry: instantiate it unless the inode is a directory and 1927 - * already has a dentry. 
1928 - */ 1929 - new = d_splice_alias(inode, found); 1930 - if (new) { 1931 - dput(found); 1932 - found = new; 1933 - } 1934 - return found; 1935 - 1936 - err_out: 1937 1905 iput(inode); 1938 1906 return found; 1939 1907 } ··· 2203 2235 struct dentry *child; 2204 2236 2205 2237 spin_lock(&dparent->d_lock); 2206 - list_for_each_entry(child, &dparent->d_subdirs, d_u.d_child) { 2238 + list_for_each_entry(child, &dparent->d_subdirs, d_child) { 2207 2239 if (dentry == child) { 2208 2240 spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED); 2209 2241 __dget_dlock(dentry); ··· 2361 2393 */ 2362 2394 unsigned int i; 2363 2395 BUILD_BUG_ON(!IS_ALIGNED(DNAME_INLINE_LEN, sizeof(long))); 2396 + kmemcheck_mark_initialized(dentry->d_iname, DNAME_INLINE_LEN); 2397 + kmemcheck_mark_initialized(target->d_iname, DNAME_INLINE_LEN); 2364 2398 for (i = 0; i < DNAME_INLINE_LEN / sizeof(long); i++) { 2365 2399 swap(((long *) &dentry->d_iname)[i], 2366 2400 ((long *) &target->d_iname)[i]); ··· 2496 2526 /* splicing a tree */ 2497 2527 dentry->d_parent = target->d_parent; 2498 2528 target->d_parent = target; 2499 - list_del_init(&target->d_u.d_child); 2500 - list_move(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs); 2529 + list_del_init(&target->d_child); 2530 + list_move(&dentry->d_child, &dentry->d_parent->d_subdirs); 2501 2531 } else { 2502 2532 /* swapping two dentries */ 2503 2533 swap(dentry->d_parent, target->d_parent); 2504 - list_move(&target->d_u.d_child, &target->d_parent->d_subdirs); 2505 - list_move(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs); 2534 + list_move(&target->d_child, &target->d_parent->d_subdirs); 2535 + list_move(&dentry->d_child, &dentry->d_parent->d_subdirs); 2506 2536 if (exchange) 2507 2537 fsnotify_d_move(target); 2508 2538 fsnotify_d_move(dentry); ··· 2578 2608 * Note: If ever the locking in lock_rename() changes, then please 2579 2609 * remember to update this too... 
2580 2610 */ 2581 - static struct dentry *__d_unalias(struct inode *inode, 2611 + static int __d_unalias(struct inode *inode, 2582 2612 struct dentry *dentry, struct dentry *alias) 2583 2613 { 2584 2614 struct mutex *m1 = NULL, *m2 = NULL; 2585 - struct dentry *ret = ERR_PTR(-EBUSY); 2615 + int ret = -EBUSY; 2586 2616 2587 2617 /* If alias and dentry share a parent, then no extra locks required */ 2588 2618 if (alias->d_parent == dentry->d_parent) ··· 2597 2627 m2 = &alias->d_parent->d_inode->i_mutex; 2598 2628 out_unalias: 2599 2629 __d_move(alias, dentry, false); 2600 - ret = alias; 2630 + ret = 0; 2601 2631 out_err: 2602 2632 spin_unlock(&inode->i_lock); 2603 2633 if (m2) ··· 2632 2662 */ 2633 2663 struct dentry *d_splice_alias(struct inode *inode, struct dentry *dentry) 2634 2664 { 2635 - struct dentry *new = NULL; 2636 - 2637 2665 if (IS_ERR(inode)) 2638 2666 return ERR_CAST(inode); 2639 - 2640 - if (inode && S_ISDIR(inode->i_mode)) { 2641 - spin_lock(&inode->i_lock); 2642 - new = __d_find_any_alias(inode); 2643 - if (new) { 2644 - if (!IS_ROOT(new)) { 2645 - spin_unlock(&inode->i_lock); 2646 - dput(new); 2647 - iput(inode); 2648 - return ERR_PTR(-EIO); 2649 - } 2650 - if (d_ancestor(new, dentry)) { 2651 - spin_unlock(&inode->i_lock); 2652 - dput(new); 2653 - iput(inode); 2654 - return ERR_PTR(-EIO); 2655 - } 2656 - write_seqlock(&rename_lock); 2657 - __d_move(new, dentry, false); 2658 - write_sequnlock(&rename_lock); 2659 - spin_unlock(&inode->i_lock); 2660 - security_d_instantiate(new, inode); 2661 - iput(inode); 2662 - } else { 2663 - /* already taking inode->i_lock, so d_add() by hand */ 2664 - __d_instantiate(dentry, inode); 2665 - spin_unlock(&inode->i_lock); 2666 - security_d_instantiate(dentry, inode); 2667 - d_rehash(dentry); 2668 - } 2669 - } else { 2670 - d_instantiate(dentry, inode); 2671 - if (d_unhashed(dentry)) 2672 - d_rehash(dentry); 2673 - } 2674 - return new; 2675 - } 2676 - EXPORT_SYMBOL(d_splice_alias); 2677 - 2678 - /** 2679 - * 
d_materialise_unique - introduce an inode into the tree 2680 - * @dentry: candidate dentry 2681 - * @inode: inode to bind to the dentry, to which aliases may be attached 2682 - * 2683 - * Introduces an dentry into the tree, substituting an extant disconnected 2684 - * root directory alias in its place if there is one. Caller must hold the 2685 - * i_mutex of the parent directory. 2686 - */ 2687 - struct dentry *d_materialise_unique(struct dentry *dentry, struct inode *inode) 2688 - { 2689 - struct dentry *actual; 2690 2667 2691 2668 BUG_ON(!d_unhashed(dentry)); 2692 2669 2693 2670 if (!inode) { 2694 - actual = dentry; 2695 2671 __d_instantiate(dentry, NULL); 2696 - d_rehash(actual); 2697 - goto out_nolock; 2672 + goto out; 2698 2673 } 2699 - 2700 2674 spin_lock(&inode->i_lock); 2701 - 2702 2675 if (S_ISDIR(inode->i_mode)) { 2703 - struct dentry *alias; 2704 - 2705 - /* Does an aliased dentry already exist? */ 2706 - alias = __d_find_alias(inode); 2707 - if (alias) { 2708 - actual = alias; 2676 + struct dentry *new = __d_find_any_alias(inode); 2677 + if (unlikely(new)) { 2709 2678 write_seqlock(&rename_lock); 2710 - 2711 - if (d_ancestor(alias, dentry)) { 2712 - /* Check for loops */ 2713 - actual = ERR_PTR(-ELOOP); 2714 - spin_unlock(&inode->i_lock); 2715 - } else if (IS_ROOT(alias)) { 2716 - /* Is this an anonymous mountpoint that we 2717 - * could splice into our tree? 
*/ 2718 - __d_move(alias, dentry, false); 2679 + if (unlikely(d_ancestor(new, dentry))) { 2719 2680 write_sequnlock(&rename_lock); 2720 - goto found; 2681 + spin_unlock(&inode->i_lock); 2682 + dput(new); 2683 + new = ERR_PTR(-ELOOP); 2684 + pr_warn_ratelimited( 2685 + "VFS: Lookup of '%s' in %s %s" 2686 + " would have caused loop\n", 2687 + dentry->d_name.name, 2688 + inode->i_sb->s_type->name, 2689 + inode->i_sb->s_id); 2690 + } else if (!IS_ROOT(new)) { 2691 + int err = __d_unalias(inode, dentry, new); 2692 + write_sequnlock(&rename_lock); 2693 + if (err) { 2694 + dput(new); 2695 + new = ERR_PTR(err); 2696 + } 2721 2697 } else { 2722 - /* Nope, but we must(!) avoid directory 2723 - * aliasing. This drops inode->i_lock */ 2724 - actual = __d_unalias(inode, dentry, alias); 2698 + __d_move(new, dentry, false); 2699 + write_sequnlock(&rename_lock); 2700 + spin_unlock(&inode->i_lock); 2701 + security_d_instantiate(new, inode); 2725 2702 } 2726 - write_sequnlock(&rename_lock); 2727 - if (IS_ERR(actual)) { 2728 - if (PTR_ERR(actual) == -ELOOP) 2729 - pr_warn_ratelimited( 2730 - "VFS: Lookup of '%s' in %s %s" 2731 - " would have caused loop\n", 2732 - dentry->d_name.name, 2733 - inode->i_sb->s_type->name, 2734 - inode->i_sb->s_id); 2735 - dput(alias); 2736 - } 2737 - goto out_nolock; 2703 + iput(inode); 2704 + return new; 2738 2705 } 2739 2706 } 2740 - 2741 - /* Add a unique reference */ 2742 - actual = __d_instantiate_unique(dentry, inode); 2743 - if (!actual) 2744 - actual = dentry; 2745 - 2746 - d_rehash(actual); 2747 - found: 2707 + /* already taking inode->i_lock, so d_add() by hand */ 2708 + __d_instantiate(dentry, inode); 2748 2709 spin_unlock(&inode->i_lock); 2749 - out_nolock: 2750 - if (actual == dentry) { 2751 - security_d_instantiate(dentry, inode); 2752 - return NULL; 2753 - } 2754 - 2755 - iput(inode); 2756 - return actual; 2710 + out: 2711 + security_d_instantiate(dentry, inode); 2712 + d_rehash(dentry); 2713 + return NULL; 2757 2714 } 2758 - 
EXPORT_SYMBOL_GPL(d_materialise_unique); 2715 + EXPORT_SYMBOL(d_splice_alias); 2759 2716 2760 2717 static int prepend(char **buffer, int *buflen, const char *str, int namelen) 2761 2718 { ··· 3218 3321 { 3219 3322 inode_dec_link_count(inode); 3220 3323 BUG_ON(dentry->d_name.name != dentry->d_iname || 3221 - !hlist_unhashed(&dentry->d_alias) || 3324 + !hlist_unhashed(&dentry->d_u.d_alias) || 3222 3325 !d_unlinked(dentry)); 3223 3326 spin_lock(&dentry->d_parent->d_lock); 3224 3327 spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
+8 -7
fs/debugfs/file.c
··· 692 692 * because some peripherals have several blocks of identical registers, 693 693 * for example configuration of dma channels 694 694 */ 695 - int debugfs_print_regs32(struct seq_file *s, const struct debugfs_reg32 *regs, 696 - int nregs, void __iomem *base, char *prefix) 695 + void debugfs_print_regs32(struct seq_file *s, const struct debugfs_reg32 *regs, 696 + int nregs, void __iomem *base, char *prefix) 697 697 { 698 - int i, ret = 0; 698 + int i; 699 699 700 700 for (i = 0; i < nregs; i++, regs++) { 701 701 if (prefix) 702 - ret += seq_printf(s, "%s", prefix); 703 - ret += seq_printf(s, "%s = 0x%08x\n", regs->name, 704 - readl(base + regs->offset)); 702 + seq_printf(s, "%s", prefix); 703 + seq_printf(s, "%s = 0x%08x\n", regs->name, 704 + readl(base + regs->offset)); 705 + if (seq_has_overflowed(s)) 706 + break; 705 707 } 706 - return ret; 707 708 } 708 709 EXPORT_SYMBOL_GPL(debugfs_print_regs32); 709 710
+1 -1
fs/debugfs/inode.c
··· 553 553	 * use the d_u.d_child as the rcu head and corrupt this list.
554 554	 */
555 555	spin_lock(&parent->d_lock);
556 -	list_for_each_entry(child, &parent->d_subdirs, d_u.d_child) {
556 +	list_for_each_entry(child, &parent->d_subdirs, d_child) {
557 557		if (!debugfs_positive(child))
558 558			continue;
559 559
+122 -141
fs/dlm/debug_fs.c
··· 48 48	}
49 49 }
50 50
51 - static int print_format1_lock(struct seq_file *s, struct dlm_lkb *lkb,
52 -			      struct dlm_rsb *res)
51 + static void print_format1_lock(struct seq_file *s, struct dlm_lkb *lkb,
52 +			       struct dlm_rsb *res)
53 53 {
54 54	seq_printf(s, "%08x %s", lkb->lkb_id, print_lockmode(lkb->lkb_grmode));
55 55
··· 68 68	if (lkb->lkb_wait_type)
69 69		seq_printf(s, " wait_type: %d", lkb->lkb_wait_type);
70 70
71 -	return seq_puts(s, "\n");
71 +	seq_puts(s, "\n");
72 72 }
73 73
74 - static int print_format1(struct dlm_rsb *res, struct seq_file *s)
74 + static void print_format1(struct dlm_rsb *res, struct seq_file *s)
75 75 {
76 76	struct dlm_lkb *lkb;
77 77	int i, lvblen = res->res_ls->ls_lvblen, recover_list, root_list;
78 -	int rv;
79 78
80 79	lock_rsb(res);
81 80
82 -	rv = seq_printf(s, "\nResource %p Name (len=%d) \"",
83 -			res, res->res_length);
84 -	if (rv)
85 -		goto out;
81 +	seq_printf(s, "\nResource %p Name (len=%d) \"", res, res->res_length);
86 82
87 83	for (i = 0; i < res->res_length; i++) {
88 84		if (isprint(res->res_name[i]))
··· 88 92	}
89 93
90 94	if (res->res_nodeid > 0)
91 -		rv = seq_printf(s, "\"\nLocal Copy, Master is node %d\n",
92 -				res->res_nodeid);
95 +		seq_printf(s, "\"\nLocal Copy, Master is node %d\n",
96 +			   res->res_nodeid);
93 97	else if (res->res_nodeid == 0)
94 -		rv = seq_puts(s, "\"\nMaster Copy\n");
98 +		seq_puts(s, "\"\nMaster Copy\n");
95 99	else if (res->res_nodeid == -1)
96 -		rv = seq_printf(s, "\"\nLooking up master (lkid %x)\n",
97 -				res->res_first_lkid);
100 +		seq_printf(s, "\"\nLooking up master (lkid %x)\n",
101 +			   res->res_first_lkid);
98 102	else
99 -		rv = seq_printf(s, "\"\nInvalid master %d\n",
100 -				res->res_nodeid);
103 +		seq_printf(s, "\"\nInvalid master %d\n", res->res_nodeid);
101 -	if (rv)
104 +	if (seq_has_overflowed(s))
102 105		goto out;
103 106
104 107	/* Print the LVB: */
··· 111 116	}
112 117	if (rsb_flag(res, RSB_VALNOTVALID))
113 118		seq_puts(s, " (INVALID)");
114 -	rv = seq_puts(s, "\n");
115 -	if (rv)
119 +	seq_puts(s, "\n");
120 +	if (seq_has_overflowed(s))
116 121		goto out;
117 122	}
118 123
··· 120 125	recover_list = !list_empty(&res->res_recover_list);
121 126
122 127	if (root_list || recover_list) {
123 -		rv = seq_printf(s, "Recovery: root %d recover %d flags %lx "
124 -				"count %d\n", root_list, recover_list,
125 -				res->res_flags, res->res_recover_locks_count);
126 -		if (rv)
127 -			goto out;
128 +		seq_printf(s, "Recovery: root %d recover %d flags %lx count %d\n",
129 +			   root_list, recover_list,
130 +			   res->res_flags, res->res_recover_locks_count);
128 131	}
129 132
130 133	/* Print the locks attached to this resource */
131 134	seq_puts(s, "Granted Queue\n");
132 135	list_for_each_entry(lkb, &res->res_grantqueue, lkb_statequeue) {
133 -		rv = print_format1_lock(s, lkb, res);
134 -		if (rv)
136 +		print_format1_lock(s, lkb, res);
137 +		if (seq_has_overflowed(s))
135 138			goto out;
136 139	}
137 140
138 141	seq_puts(s, "Conversion Queue\n");
139 142	list_for_each_entry(lkb, &res->res_convertqueue, lkb_statequeue) {
140 -		rv = print_format1_lock(s, lkb, res);
141 -		if (rv)
143 +		print_format1_lock(s, lkb, res);
144 +		if (seq_has_overflowed(s))
142 145			goto out;
143 146	}
144 147
145 148	seq_puts(s, "Waiting Queue\n");
146 149	list_for_each_entry(lkb, &res->res_waitqueue, lkb_statequeue) {
147 -		rv = print_format1_lock(s, lkb, res);
148 -		if (rv)
150 +		print_format1_lock(s, lkb, res);
151 +		if (seq_has_overflowed(s))
149 152			goto out;
150 153	}
151 154
··· 152 159
153 160	seq_puts(s, "Lookup Queue\n");
154 161	list_for_each_entry(lkb, &res->res_lookup, lkb_rsb_lookup) {
155 -		rv = seq_printf(s, "%08x %s", lkb->lkb_id,
156 -				print_lockmode(lkb->lkb_rqmode));
162 +		seq_printf(s, "%08x %s",
163 +			   lkb->lkb_id, print_lockmode(lkb->lkb_rqmode));
157 164		if (lkb->lkb_wait_type)
158 165			seq_printf(s, " wait_type: %d", lkb->lkb_wait_type);
159 -		rv = seq_puts(s, "\n");
166 +		seq_puts(s, "\n");
167 +		if (seq_has_overflowed(s))
168 +			goto out;
160 169	}
161 170  out:
162 171	unlock_rsb(res);
163 -	return rv;
164 172 }
165 173
166 - static int print_format2_lock(struct seq_file *s, struct dlm_lkb *lkb,
167 -			      struct dlm_rsb *r)
174 + static void print_format2_lock(struct seq_file *s, struct dlm_lkb *lkb,
175 +			       struct dlm_rsb *r)
168 176 {
169 177	u64 xid = 0;
170 178	u64 us;
171 -	int rv;
172 179
173 180	if (lkb->lkb_flags & DLM_IFL_USER) {
174 181		if (lkb->lkb_ua)
··· 181 188	/* id nodeid remid pid xid exflags flags sts grmode rqmode time_us
182 189	   r_nodeid r_len r_name */
183 190
184 -	rv = seq_printf(s, "%x %d %x %u %llu %x %x %d %d %d %llu %u %d \"%s\"\n",
185 -			lkb->lkb_id,
186 -			lkb->lkb_nodeid,
187 -			lkb->lkb_remid,
188 -			lkb->lkb_ownpid,
189 -			(unsigned long long)xid,
190 -			lkb->lkb_exflags,
191 -			lkb->lkb_flags,
192 -			lkb->lkb_status,
193 -			lkb->lkb_grmode,
194 -			lkb->lkb_rqmode,
195 -			(unsigned long long)us,
196 -			r->res_nodeid,
197 -			r->res_length,
198 -			r->res_name);
199 -	return rv;
191 +	seq_printf(s, "%x %d %x %u %llu %x %x %d %d %d %llu %u %d \"%s\"\n",
192 +		   lkb->lkb_id,
193 +		   lkb->lkb_nodeid,
194 +		   lkb->lkb_remid,
195 +		   lkb->lkb_ownpid,
196 +		   (unsigned long long)xid,
197 +		   lkb->lkb_exflags,
198 +		   lkb->lkb_flags,
199 +		   lkb->lkb_status,
200 +		   lkb->lkb_grmode,
201 +		   lkb->lkb_rqmode,
202 +		   (unsigned long long)us,
203 +		   r->res_nodeid,
204 +		   r->res_length,
205 +		   r->res_name);
200 206 }
201 207
202 - static int print_format2(struct dlm_rsb *r, struct seq_file *s)
208 + static void print_format2(struct dlm_rsb *r, struct seq_file *s)
203 209 {
204 210	struct dlm_lkb *lkb;
205 -	int rv = 0;
206 211
207 212	lock_rsb(r);
208 213
209 214	list_for_each_entry(lkb, &r->res_grantqueue, lkb_statequeue) {
210 -		rv = print_format2_lock(s, lkb, r);
211 -		if (rv)
215 +		print_format2_lock(s, lkb, r);
216 +		if (seq_has_overflowed(s))
212 217			goto out;
213 218	}
214 219
215 220	list_for_each_entry(lkb, &r->res_convertqueue, lkb_statequeue) {
216 -		rv = print_format2_lock(s, lkb, r);
217 -		if (rv)
221 +		print_format2_lock(s, lkb, r);
222 +		if (seq_has_overflowed(s))
218 223			goto out;
219 224	}
220 225
221 226	list_for_each_entry(lkb, &r->res_waitqueue, lkb_statequeue) {
222 -		rv = print_format2_lock(s, lkb, r);
223 -		if (rv)
227 +		print_format2_lock(s, lkb, r);
228 +		if (seq_has_overflowed(s))
224 229			goto out;
225 230	}
226 231  out:
227 232	unlock_rsb(r);
228 -	return rv;
229 233 }
230 234
231 - static int print_format3_lock(struct seq_file *s, struct dlm_lkb *lkb,
235 + static void print_format3_lock(struct seq_file *s, struct dlm_lkb *lkb,
232 236			       int rsb_lookup)
233 237 {
234 238	u64 xid = 0;
235 -	int rv;
236 239
237 240	if (lkb->lkb_flags & DLM_IFL_USER) {
238 241		if (lkb->lkb_ua)
239 242			xid = lkb->lkb_ua->xid;
240 243	}
241 244
242 -	rv = seq_printf(s, "lkb %x %d %x %u %llu %x %x %d %d %d %d %d %d %u %llu %llu\n",
243 -			lkb->lkb_id,
244 -			lkb->lkb_nodeid,
245 -			lkb->lkb_remid,
246 -			lkb->lkb_ownpid,
247 -			(unsigned long long)xid,
248 -			lkb->lkb_exflags,
249 -			lkb->lkb_flags,
250 -			lkb->lkb_status,
251 -			lkb->lkb_grmode,
252 -			lkb->lkb_rqmode,
253 -			lkb->lkb_last_bast.mode,
254 -			rsb_lookup,
255 -			lkb->lkb_wait_type,
256 -			lkb->lkb_lvbseq,
257 -			(unsigned long long)ktime_to_ns(lkb->lkb_timestamp),
258 -			(unsigned long long)ktime_to_ns(lkb->lkb_last_bast_time));
259 -	return rv;
245 +	seq_printf(s, "lkb %x %d %x %u %llu %x %x %d %d %d %d %d %d %u %llu %llu\n",
246 +		   lkb->lkb_id,
247 +		   lkb->lkb_nodeid,
248 +		   lkb->lkb_remid,
249 +		   lkb->lkb_ownpid,
250 +		   (unsigned long long)xid,
251 +		   lkb->lkb_exflags,
252 +		   lkb->lkb_flags,
253 +		   lkb->lkb_status,
254 +		   lkb->lkb_grmode,
255 +		   lkb->lkb_rqmode,
256 +		   lkb->lkb_last_bast.mode,
257 +		   rsb_lookup,
258 +		   lkb->lkb_wait_type,
259 +		   lkb->lkb_lvbseq,
260 +		   (unsigned long long)ktime_to_ns(lkb->lkb_timestamp),
261 +		   (unsigned long long)ktime_to_ns(lkb->lkb_last_bast_time));
260 262 }
261 263
262 - static int print_format3(struct dlm_rsb *r, struct seq_file *s)
264 + static void print_format3(struct dlm_rsb *r, struct seq_file *s)
263 265 {
264 266	struct dlm_lkb *lkb;
265 267	int i, lvblen = r->res_ls->ls_lvblen;
266 268	int print_name = 1;
267 -	int rv;
268 269
269 270	lock_rsb(r);
270 271
271 -	rv = seq_printf(s, "rsb %p %d %x %lx %d %d %u %d ",
272 -			r,
273 -			r->res_nodeid,
274 -			r->res_first_lkid,
275 -			r->res_flags,
276 -			!list_empty(&r->res_root_list),
277 -			!list_empty(&r->res_recover_list),
278 -			r->res_recover_locks_count,
279 -			r->res_length);
280 -	if (rv)
272 +	seq_printf(s, "rsb %p %d %x %lx %d %d %u %d ",
273 +		   r,
274 +		   r->res_nodeid,
275 +		   r->res_first_lkid,
276 +		   r->res_flags,
277 +		   !list_empty(&r->res_root_list),
278 +		   !list_empty(&r->res_recover_list),
279 +		   r->res_recover_locks_count,
280 +		   r->res_length);
281 +	if (seq_has_overflowed(s))
281 282		goto out;
282 283
283 284	for (i = 0; i < r->res_length; i++) {
··· 279 292			print_name = 0;
280 293	}
281 294
282 -	seq_printf(s, "%s", print_name ? "str " : "hex");
295 +	seq_puts(s, print_name ? "str " : "hex");
283 296
284 297	for (i = 0; i < r->res_length; i++) {
285 298		if (print_name)
··· 287 300		else
288 301			seq_printf(s, " %02x", (unsigned char)r->res_name[i]);
289 302	}
290 -	rv = seq_puts(s, "\n");
291 -	if (rv)
303 +	seq_puts(s, "\n");
304 +	if (seq_has_overflowed(s))
292 305		goto out;
293 306
294 307	if (!r->res_lvbptr)
··· 298 311
299 312	for (i = 0; i < lvblen; i++)
300 313		seq_printf(s, " %02x", (unsigned char)r->res_lvbptr[i]);
301 -	rv = seq_puts(s, "\n");
302 -	if (rv)
314 +	seq_puts(s, "\n");
315 +	if (seq_has_overflowed(s))
303 316		goto out;
304 317
305 318  do_locks:
306 319	list_for_each_entry(lkb, &r->res_grantqueue, lkb_statequeue) {
307 -		rv = print_format3_lock(s, lkb, 0);
308 -		if (rv)
320 +		print_format3_lock(s, lkb, 0);
321 +		if (seq_has_overflowed(s))
309 322			goto out;
310 323	}
311 324
312 325	list_for_each_entry(lkb, &r->res_convertqueue, lkb_statequeue) {
313 -		rv = print_format3_lock(s, lkb, 0);
314 -		if (rv)
326 +		print_format3_lock(s, lkb, 0);
327 +		if (seq_has_overflowed(s))
315 328			goto out;
316 329	}
317 330
318 331	list_for_each_entry(lkb, &r->res_waitqueue, lkb_statequeue) {
319 -		rv = print_format3_lock(s, lkb, 0);
320 -		if (rv)
332 +		print_format3_lock(s, lkb, 0);
333 +		if (seq_has_overflowed(s))
321 334			goto out;
322 335	}
323 336
324 337	list_for_each_entry(lkb, &r->res_lookup, lkb_rsb_lookup) {
325 -		rv = print_format3_lock(s, lkb, 1);
326 -		if (rv)
338 +		print_format3_lock(s, lkb, 1);
339 +		if (seq_has_overflowed(s))
327 340			goto out;
328 341	}
329 342  out:
330 343	unlock_rsb(r);
331 -	return rv;
332 344 }
333 345
334 - static int print_format4(struct dlm_rsb *r, struct seq_file *s)
346 + static void print_format4(struct dlm_rsb *r, struct seq_file *s)
335 347 {
336 348	int our_nodeid = dlm_our_nodeid();
337 349	int print_name = 1;
338 -	int i, rv;
350 +	int i;
339 351
340 352	lock_rsb(r);
341 353
342 -	rv = seq_printf(s, "rsb %p %d %d %d %d %lu %lx %d ",
343 -			r,
344 -			r->res_nodeid,
345 -			r->res_master_nodeid,
346 -			r->res_dir_nodeid,
347 -			our_nodeid,
348 -			r->res_toss_time,
349 -			r->res_flags,
350 -			r->res_length);
351 -	if (rv)
352 -		goto out;
354 +	seq_printf(s, "rsb %p %d %d %d %d %lu %lx %d ",
355 +		   r,
356 +		   r->res_nodeid,
357 +		   r->res_master_nodeid,
358 +		   r->res_dir_nodeid,
359 +		   our_nodeid,
360 +		   r->res_toss_time,
361 +		   r->res_flags,
362 +		   r->res_length);
353 363
354 364	for (i = 0; i < r->res_length; i++) {
355 365		if (!isascii(r->res_name[i]) || !isprint(r->res_name[i]))
356 366			print_name = 0;
357 367	}
358 368
359 -	seq_printf(s, "%s", print_name ? "str " : "hex");
369 +	seq_puts(s, print_name ? "str " : "hex");
360 370
361 371	for (i = 0; i < r->res_length; i++) {
362 372		if (print_name)
··· 361 377		else
362 378			seq_printf(s, " %02x", (unsigned char)r->res_name[i]);
363 379	}
364 -	rv = seq_puts(s, "\n");
365 - out:
380 +	seq_puts(s, "\n");
381 +
366 382	unlock_rsb(r);
367 -	return rv;
368 383 }
369 384
370 385 struct rsbtbl_iter {
··· 373 390	int header;
374 391 };
375 392
376 - /* seq_printf returns -1 if the buffer is full, and 0 otherwise.
377 -    If the buffer is full, seq_printf can be called again, but it
378 -    does nothing and just returns -1. So, the these printing routines
379 -    periodically check the return value to avoid wasting too much time
380 -    trying to print to a full buffer. */
393 + /*
394 +  * If the buffer is full, seq_printf can be called again, but it
395 +  * does nothing. So, these printing routines periodically check
396 +  * seq_has_overflowed to avoid wasting too much time trying to print to
397 +  * a full buffer.
398 +  */
381 399
382 400 static int table_seq_show(struct seq_file *seq, void *iter_ptr)
383 401 {
384 402	struct rsbtbl_iter *ri = iter_ptr;
385 -	int rv = 0;
386 403
387 404	switch (ri->format) {
388 405	case 1:
389 -		rv = print_format1(ri->rsb, seq);
406 +		print_format1(ri->rsb, seq);
390 407		break;
391 408	case 2:
392 409		if (ri->header) {
393 -			seq_printf(seq, "id nodeid remid pid xid exflags "
394 -				   "flags sts grmode rqmode time_ms "
395 -				   "r_nodeid r_len r_name\n");
410 +			seq_puts(seq, "id nodeid remid pid xid exflags flags sts grmode rqmode time_ms r_nodeid r_len r_name\n");
396 411			ri->header = 0;
397 412		}
398 -		rv = print_format2(ri->rsb, seq);
413 +		print_format2(ri->rsb, seq);
399 414		break;
400 415	case 3:
401 416		if (ri->header) {
402 -			seq_printf(seq, "version rsb 1.1 lvb 1.1 lkb 1.1\n");
417 +			seq_puts(seq, "version rsb 1.1 lvb 1.1 lkb 1.1\n");
403 418			ri->header = 0;
404 419		}
405 -		rv = print_format3(ri->rsb, seq);
420 +		print_format3(ri->rsb, seq);
406 421		break;
407 422	case 4:
408 423		if (ri->header) {
409 -			seq_printf(seq, "version 4 rsb 2\n");
424 +			seq_puts(seq, "version 4 rsb 2\n");
410 425			ri->header = 0;
411 426		}
412 -		rv = print_format4(ri->rsb, seq);
427 +		print_format4(ri->rsb, seq);
413 428		break;
414 429	}
415 430
416 -	return rv;
431 +	return 0;
417 432 }
418 433
419 434 static const struct seq_operations format1_seq_ops;
+1 -1
fs/ecryptfs/crypto.c
··· 1373 1373 int ecryptfs_read_xattr_region(char *page_virt, struct inode *ecryptfs_inode)
1374 1374 {
1375 1375	struct dentry *lower_dentry =
1376 -		ecryptfs_inode_to_private(ecryptfs_inode)->lower_file->f_dentry;
1376 +		ecryptfs_inode_to_private(ecryptfs_inode)->lower_file->f_path.dentry;
1377 1377	ssize_t size;
1378 1378	int rc = 0;
1379 1379
+3 -3
fs/ecryptfs/file.c
··· 75 75
76 76 /* Inspired by generic filldir in fs/readdir.c */
77 77 static int
78 - ecryptfs_filldir(void *dirent, const char *lower_name, int lower_namelen,
79 -		  loff_t offset, u64 ino, unsigned int d_type)
78 + ecryptfs_filldir(struct dir_context *ctx, const char *lower_name,
79 +		  int lower_namelen, loff_t offset, u64 ino, unsigned int d_type)
80 80 {
81 81	struct ecryptfs_getdents_callback *buf =
82 -		(struct ecryptfs_getdents_callback *)dirent;
82 +		container_of(ctx, struct ecryptfs_getdents_callback, ctx);
83 83	size_t name_size;
84 84	char *name;
85 85	int rc;
+1 -1
fs/ecryptfs/mmap.c
··· 419 419	ssize_t size;
420 420	void *xattr_virt;
421 421	struct dentry *lower_dentry =
422 -		ecryptfs_inode_to_private(ecryptfs_inode)->lower_file->f_dentry;
422 +		ecryptfs_inode_to_private(ecryptfs_inode)->lower_file->f_path.dentry;
423 423	struct inode *lower_inode = lower_dentry->d_inode;
424 424	int rc;
+2 -2
fs/efivarfs/file.c
··· 47 47
48 48	if (bytes == -ENOENT) {
49 49		drop_nlink(inode);
50 -		d_delete(file->f_dentry);
51 -		dput(file->f_dentry);
50 +		d_delete(file->f_path.dentry);
51 +		dput(file->f_path.dentry);
52 52	} else {
53 53		mutex_lock(&inode->i_mutex);
54 54		i_size_write(inode, datasize + sizeof(attributes));
+3 -6
fs/eventfd.c
··· 287 287 }
288 288
289 289 #ifdef CONFIG_PROC_FS
290 - static int eventfd_show_fdinfo(struct seq_file *m, struct file *f)
290 + static void eventfd_show_fdinfo(struct seq_file *m, struct file *f)
291 291 {
292 292	struct eventfd_ctx *ctx = f->private_data;
293 -	int ret;
294 293
295 294	spin_lock_irq(&ctx->wqh.lock);
296 -	ret = seq_printf(m, "eventfd-count: %16llx\n",
297 -			 (unsigned long long)ctx->count);
295 +	seq_printf(m, "eventfd-count: %16llx\n",
296 +		   (unsigned long long)ctx->count);
298 297	spin_unlock_irq(&ctx->wqh.lock);
299 -
300 -	return ret;
301 298 }
302 299 #endif
303 300
+5 -8
fs/eventpoll.c
··· 870 870 }
871 871
872 872 #ifdef CONFIG_PROC_FS
873 - static int ep_show_fdinfo(struct seq_file *m, struct file *f)
873 + static void ep_show_fdinfo(struct seq_file *m, struct file *f)
874 874 {
875 875	struct eventpoll *ep = f->private_data;
876 876	struct rb_node *rbp;
877 -	int ret = 0;
878 877
879 878	mutex_lock(&ep->mtx);
880 879	for (rbp = rb_first(&ep->rbr); rbp; rbp = rb_next(rbp)) {
881 880		struct epitem *epi = rb_entry(rbp, struct epitem, rbn);
882 881
883 -		ret = seq_printf(m, "tfd: %8d events: %8x data: %16llx\n",
884 -				 epi->ffd.fd, epi->event.events,
885 -				 (long long)epi->event.data);
886 -		if (ret)
882 +		seq_printf(m, "tfd: %8d events: %8x data: %16llx\n",
883 +			   epi->ffd.fd, epi->event.events,
884 +			   (long long)epi->event.data);
885 +		if (seq_has_overflowed(m))
887 886			break;
888 887	}
889 888	mutex_unlock(&ep->mtx);
890 -
891 -	return ret;
892 889 }
893 890 #endif
894 891
+4 -3
fs/exportfs/expfs.c
··· 50 50
51 51	inode = result->d_inode;
52 52	spin_lock(&inode->i_lock);
53 -	hlist_for_each_entry(dentry, &inode->i_dentry, d_alias) {
53 +	hlist_for_each_entry(dentry, &inode->i_dentry, d_u.d_alias) {
54 54		dget(dentry);
55 55		spin_unlock(&inode->i_lock);
56 56		if (toput)
··· 241 241  * A rather strange filldir function to capture
242 242  * the name matching the specified inode number.
243 243  */
244 - static int filldir_one(void * __buf, const char * name, int len,
244 + static int filldir_one(struct dir_context *ctx, const char *name, int len,
245 245			loff_t pos, u64 ino, unsigned int d_type)
246 246 {
247 -	struct getdents_callback *buf = __buf;
247 +	struct getdents_callback *buf =
248 +		container_of(ctx, struct getdents_callback, ctx);
248 249	int result = 0;
249 250
250 251	buf->sequence++;
+3 -2
fs/fat/dir.c
··· 702 702 }
703 703
704 704 #define FAT_IOCTL_FILLDIR_FUNC(func, dirent_type)			   \
705 - static int func(void *__buf, const char *name, int name_len,		   \
705 + static int func(struct dir_context *ctx, const char *name, int name_len,  \
706 706		 loff_t offset, u64 ino, unsigned int d_type)		   \
707 707 {									   \
708 -	struct fat_ioctl_filldir_callback *buf = __buf;			   \
708 +	struct fat_ioctl_filldir_callback *buf =			   \
709 +		container_of(ctx, struct fat_ioctl_filldir_callback, ctx); \
709 710	struct dirent_type __user *d1 = buf->dirent;			   \
710 711	struct dirent_type __user *d2 = d1 + 1;				   \
711 712									   \
+2 -2
fs/fuse/dir.c
··· 372 372	if (inode && get_node_id(inode) == FUSE_ROOT_ID)
373 373		goto out_iput;
374 374
375 -	newent = d_materialise_unique(entry, inode);
375 +	newent = d_splice_alias(inode, entry);
376 376	err = PTR_ERR(newent);
377 377	if (IS_ERR(newent))
378 378		goto out_err;
··· 1320 1320	if (!inode)
1321 1321		goto out;
1322 1322
1323 -	alias = d_materialise_unique(dentry, inode);
1323 +	alias = d_splice_alias(inode, dentry);
1324 1324	err = PTR_ERR(alias);
1325 1325	if (IS_ERR(alias))
1326 1326		goto out;
+1 -1
fs/fuse/file.c
··· 1988 1988			  struct page **pagep, void **fsdata)
1989 1989 {
1990 1990	pgoff_t index = pos >> PAGE_CACHE_SHIFT;
1991 -	struct fuse_conn *fc = get_fuse_conn(file->f_dentry->d_inode);
1991 +	struct fuse_conn *fc = get_fuse_conn(file_inode(file));
1992 1992	struct page *page;
1993 1993	loff_t fsize;
1994 1994	int err = -ENOMEM;
+5 -3
fs/gfs2/export.c
··· 69 69	char *name;
70 70 };
71 71
72 - static int get_name_filldir(void *opaque, const char *name, int length,
73 -			    loff_t offset, u64 inum, unsigned int type)
72 + static int get_name_filldir(struct dir_context *ctx, const char *name,
73 +			    int length, loff_t offset, u64 inum,
74 +			    unsigned int type)
74 75 {
75 -	struct get_name_filldir *gnfd = opaque;
76 +	struct get_name_filldir *gnfd =
77 +		container_of(ctx, struct get_name_filldir, ctx);
76 78
77 79	if (inum != gnfd->inum.no_addr)
78 80		return 0;
+3 -2
fs/hppfs/hppfs.c
··· 548 548	struct dentry *dentry;
549 549 };
550 550
551 - static int hppfs_filldir(void *d, const char *name, int size,
551 + static int hppfs_filldir(struct dir_context *ctx, const char *name, int size,
552 552			  loff_t offset, u64 inode, unsigned int type)
553 553 {
554 -	struct hppfs_dirent *dirent = d;
554 +	struct hppfs_dirent *dirent =
555 +		container_of(ctx, struct hppfs_dirent, ctx);
555 556
556 557	if (file_removed(dirent->dentry, name))
557 558		return 0;
+8 -10
fs/jfs/namei.c
··· 84 84	struct inode *iplist[2];
85 85	struct tblock *tblk;
86 86
87 -	jfs_info("jfs_create: dip:0x%p name:%s", dip, dentry->d_name.name);
87 +	jfs_info("jfs_create: dip:0x%p name:%pd", dip, dentry);
88 88
89 89	dquot_initialize(dip);
90 90
··· 216 216	struct inode *iplist[2];
217 217	struct tblock *tblk;
218 218
219 -	jfs_info("jfs_mkdir: dip:0x%p name:%s", dip, dentry->d_name.name);
219 +	jfs_info("jfs_mkdir: dip:0x%p name:%pd", dip, dentry);
220 220
221 221	dquot_initialize(dip);
222 222
··· 352 352	struct inode *iplist[2];
353 353	struct tblock *tblk;
354 354
355 -	jfs_info("jfs_rmdir: dip:0x%p name:%s", dip, dentry->d_name.name);
355 +	jfs_info("jfs_rmdir: dip:0x%p name:%pd", dip, dentry);
356 356
357 357	/* Init inode for quota operations. */
358 358	dquot_initialize(dip);
··· 480 480	s64 new_size = 0;
481 481	int commit_flag;
482 482
483 -	jfs_info("jfs_unlink: dip:0x%p name:%s", dip, dentry->d_name.name);
483 +	jfs_info("jfs_unlink: dip:0x%p name:%pd", dip, dentry);
484 484
485 485	/* Init inode for quota operations. */
486 486	dquot_initialize(dip);
··· 797 797	struct btstack btstack;
798 798	struct inode *iplist[2];
799 799
800 -	jfs_info("jfs_link: %s %s", old_dentry->d_name.name,
801 -		 dentry->d_name.name);
800 +	jfs_info("jfs_link: %pd %pd", old_dentry, dentry);
802 801
803 802	dquot_initialize(dir);
804 803
··· 1081 1082	int commit_flag;
1082 1083
1083 1084
1084 -	jfs_info("jfs_rename: %s %s", old_dentry->d_name.name,
1085 -		 new_dentry->d_name.name);
1085 +	jfs_info("jfs_rename: %pd %pd", old_dentry, new_dentry);
1086 1086
1087 1087	dquot_initialize(old_dir);
1088 1088	dquot_initialize(new_dir);
··· 1353 1355	if (!new_valid_dev(rdev))
1354 1356		return -EINVAL;
1355 1357
1356 -	jfs_info("jfs_mknod: %s", dentry->d_name.name);
1358 +	jfs_info("jfs_mknod: %pd", dentry);
1357 1359
1358 1360	dquot_initialize(dir);
1359 1361
··· 1442 1444	struct component_name key;
1443 1445	int rc;
1444 1446
1445 -	jfs_info("jfs_lookup: name = %s", dentry->d_name.name);
1447 +	jfs_info("jfs_lookup: name = %pd", dentry);
1446 1448
1447 1449	if ((rc = get_UCSname(&key, dentry)))
1448 1450		return ERR_PTR(rc);
+1 -1
fs/kernfs/dir.c
··· 807 807	}
808 808
809 809	/* instantiate and hash dentry */
810 -	ret = d_materialise_unique(dentry, inode);
810 +	ret = d_splice_alias(inode, dentry);
811 811  out_unlock:
812 812	mutex_unlock(&kernfs_mutex);
813 813	return ret;
+6 -6
fs/libfs.c
··· 114 114
115 115		spin_lock(&dentry->d_lock);
116 116		/* d_lock not required for cursor */
117 -		list_del(&cursor->d_u.d_child);
117 +		list_del(&cursor->d_child);
118 118		p = dentry->d_subdirs.next;
119 119		while (n && p != &dentry->d_subdirs) {
120 120			struct dentry *next;
121 -			next = list_entry(p, struct dentry, d_u.d_child);
121 +			next = list_entry(p, struct dentry, d_child);
122 122			spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
123 123			if (simple_positive(next))
124 124				n--;
125 125			spin_unlock(&next->d_lock);
126 126			p = p->next;
127 127		}
128 -		list_add_tail(&cursor->d_u.d_child, p);
128 +		list_add_tail(&cursor->d_child, p);
129 129		spin_unlock(&dentry->d_lock);
130 130	}
131 131 }
··· 150 150 {
151 151	struct dentry *dentry = file->f_path.dentry;
152 152	struct dentry *cursor = file->private_data;
153 -	struct list_head *p, *q = &cursor->d_u.d_child;
153 +	struct list_head *p, *q = &cursor->d_child;
154 154
155 155	if (!dir_emit_dots(file, ctx))
156 156		return 0;
··· 159 159		list_move(q, &dentry->d_subdirs);
160 160
161 161	for (p = q->next; p != &dentry->d_subdirs; p = p->next) {
162 -		struct dentry *next = list_entry(p, struct dentry, d_u.d_child);
162 +		struct dentry *next = list_entry(p, struct dentry, d_child);
163 163		spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
164 164		if (!simple_positive(next)) {
165 165			spin_unlock(&next->d_lock);
··· 287 287	int ret = 0;
288 288
289 289	spin_lock(&dentry->d_lock);
290 -	list_for_each_entry(child, &dentry->d_subdirs, d_u.d_child) {
290 +	list_for_each_entry(child, &dentry->d_subdirs, d_child) {
291 291		spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
292 292		if (simple_positive(child)) {
293 293			spin_unlock(&child->d_lock);
+1 -1
fs/lockd/svcsubs.c
··· 408 408 {
409 409	struct super_block *sb = datap;
410 410
411 -	return sb == file->f_file->f_path.dentry->d_sb;
411 +	return sb == file_inode(file->f_file)->i_sb;
412 412 }
413 413
414 414 /**
+5 -7
fs/ncpfs/dir.c
··· 198 198
199 199 static inline int ncp_is_server_root(struct inode *inode)
200 200 {
201 -	return (!ncp_single_volume(NCP_SERVER(inode)) &&
202 -		inode == inode->i_sb->s_root->d_inode);
201 +	return !ncp_single_volume(NCP_SERVER(inode)) &&
202 +		is_root_inode(inode);
203 203 }
204 204
··· 403 403
404 404	/* If a pointer is invalid, we search the dentry. */
405 405	spin_lock(&parent->d_lock);
406 -	list_for_each_entry(dent, &parent->d_subdirs, d_u.d_child) {
406 +	list_for_each_entry(dent, &parent->d_subdirs, d_child) {
407 407		if ((unsigned long)dent->d_fsdata == fpos) {
408 408			if (dent->d_inode)
409 409				dget(dent);
··· 685 685 ncp_read_volume_list(struct file *file, struct dir_context *ctx,
686 686			struct ncp_cache_control *ctl)
687 687 {
688 -	struct dentry *dentry = file->f_path.dentry;
689 -	struct inode *inode = dentry->d_inode;
688 +	struct inode *inode = file_inode(file);
690 689	struct ncp_server *server = NCP_SERVER(inode);
691 690	struct ncp_volume_info info;
692 691	struct ncp_entry_info entry;
··· 720 721 ncp_do_readdir(struct file *file, struct dir_context *ctx,
721 722		struct ncp_cache_control *ctl)
722 723 {
723 -	struct dentry *dentry = file->f_path.dentry;
724 -	struct inode *dir = dentry->d_inode;
724 +	struct inode *dir = file_inode(file);
725 725	struct ncp_server *server = NCP_SERVER(dir);
726 726	struct nw_search_sequence seq;
727 727	struct ncp_entry_info entry;
+6 -8
fs/ncpfs/file.c
··· 100 100 static ssize_t
101 101 ncp_file_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
102 102 {
103 -	struct dentry *dentry = file->f_path.dentry;
104 -	struct inode *inode = dentry->d_inode;
103 +	struct inode *inode = file_inode(file);
105 104	size_t already_read = 0;
106 105	off_t pos;
107 106	size_t bufsize;
··· 108 109	void* freepage;
109 110	size_t freelen;
110 111
111 -	ncp_dbg(1, "enter %pd2\n", dentry);
112 +	ncp_dbg(1, "enter %pD2\n", file);
112 113
113 114	pos = *ppos;
114 115
··· 166 167
167 168	file_accessed(file);
168 169
169 -	ncp_dbg(1, "exit %pd2\n", dentry);
170 +	ncp_dbg(1, "exit %pD2\n", file);
170 171  outrel:
171 172	ncp_inode_close(inode);
172 173	return already_read ? already_read : error;
··· 175 176 static ssize_t
176 177 ncp_file_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
177 178 {
178 -	struct dentry *dentry = file->f_path.dentry;
179 -	struct inode *inode = dentry->d_inode;
179 +	struct inode *inode = file_inode(file);
180 180	size_t already_written = 0;
181 181	off_t pos;
182 182	size_t bufsize;
183 183	int errno;
184 184	void* bouncebuffer;
185 185
186 -	ncp_dbg(1, "enter %pd2\n", dentry);
186 +	ncp_dbg(1, "enter %pD2\n", file);
187 187	if ((ssize_t) count < 0)
188 188		return -EINVAL;
189 189	pos = *ppos;
··· 261 263		i_size_write(inode, pos);
262 264		mutex_unlock(&inode->i_mutex);
263 265	}
264 -	ncp_dbg(1, "exit %pd2\n", dentry);
266 +	ncp_dbg(1, "exit %pD2\n", file);
265 267  outrel:
266 268	ncp_inode_close(inode);
267 269	return already_written ? already_written : errno;
+1 -3
fs/ncpfs/mmap.c
··· 30 30 static int ncp_file_mmap_fault(struct vm_area_struct *area,
31 31				struct vm_fault *vmf)
32 32 {
33 -	struct file *file = area->vm_file;
34 -	struct dentry *dentry = file->f_path.dentry;
35 -	struct inode *inode = dentry->d_inode;
33 +	struct inode *inode = file_inode(area->vm_file);
36 34	char *pg_addr;
37 35	unsigned int already_read;
38 36	unsigned int count;
+2 -2
fs/ncpfs/ncplib_kernel.h
··· 191 191	struct dentry *dentry;
192 192
193 193	spin_lock(&parent->d_lock);
194 -	list_for_each_entry(dentry, &parent->d_subdirs, d_u.d_child) {
194 +	list_for_each_entry(dentry, &parent->d_subdirs, d_child) {
195 195		if (dentry->d_fsdata == NULL)
196 196			ncp_age_dentry(server, dentry);
197 197		else
··· 207 207	struct dentry *dentry;
208 208
209 209	spin_lock(&parent->d_lock);
210 -	list_for_each_entry(dentry, &parent->d_subdirs, d_u.d_child) {
210 +	list_for_each_entry(dentry, &parent->d_subdirs, d_child) {
211 211		dentry->d_fsdata = NULL;
212 212		ncp_age_dentry(server, dentry);
213 213	}
+1 -1
fs/nfs/blocklayout/rpc_pipefs.c
··· 112 112 static ssize_t bl_pipe_downcall(struct file *filp, const char __user *src,
113 113				 size_t mlen)
114 114 {
115 -	struct nfs_net *nn = net_generic(filp->f_dentry->d_sb->s_fs_info,
115 +	struct nfs_net *nn = net_generic(file_inode(filp)->i_sb->s_fs_info,
116 116					 nfs_net_id);
117 117
118 118	if (mlen != sizeof (struct bl_dev_msg))
+3 -3
fs/nfs/dir.c
··· 133 133 static int
134 134 nfs_closedir(struct inode *inode, struct file *filp)
135 135 {
136 -	put_nfs_open_dir_context(filp->f_path.dentry->d_inode, filp->private_data);
136 +	put_nfs_open_dir_context(file_inode(filp), filp->private_data);
137 137	return 0;
138 138 }
139 139
··· 499 499	if (IS_ERR(inode))
500 500		goto out;
501 501
502 -	alias = d_materialise_unique(dentry, inode);
502 +	alias = d_splice_alias(inode, dentry);
503 503	if (IS_ERR(alias))
504 504		goto out;
505 505	else if (alias) {
··· 1393 1393	nfs_advise_use_readdirplus(dir);
1394 1394
1395 1395  no_entry:
1396 -	res = d_materialise_unique(dentry, inode);
1396 +	res = d_splice_alias(inode, dentry);
1397 1397	if (res != NULL) {
1398 1398		if (IS_ERR(res))
1399 1399			goto out_unblock_sillyrename;
+2 -2
fs/nfs/getroot.c
··· 51 51	/*
52 52	 * Ensure that this dentry is invisible to d_find_alias().
53 53	 * Otherwise, it may be spliced into the tree by
54 -	 * d_materialise_unique if a parent directory from the same
54 +	 * d_splice_alias if a parent directory from the same
55 55	 * filesystem gets mounted at a later time.
56 56	 * This again causes shrink_dcache_for_umount_subtree() to
57 57	 * Oops, since the test for IS_ROOT() will fail.
58 58	 */
59 59	spin_lock(&sb->s_root->d_inode->i_lock);
60 60	spin_lock(&sb->s_root->d_lock);
61 -	hlist_del_init(&sb->s_root->d_alias);
61 +	hlist_del_init(&sb->s_root->d_u.d_alias);
62 62	spin_unlock(&sb->s_root->d_lock);
63 63	spin_unlock(&sb->s_root->d_inode->i_lock);
64 64 }
+4 -3
fs/nfsd/nfs4recover.c
··· 245 245 };
246 246
247 247 static int
248 - nfsd4_build_namelist(void *arg, const char *name, int namlen,
248 + nfsd4_build_namelist(struct dir_context *__ctx, const char *name, int namlen,
249 249		      loff_t offset, u64 ino, unsigned int d_type)
250 250 {
251 -	struct nfs4_dir_ctx *ctx = arg;
251 +	struct nfs4_dir_ctx *ctx =
252 +		container_of(__ctx, struct nfs4_dir_ctx, ctx);
252 253	struct name_list *entry;
253 254
254 255	if (namlen != HEXDIR_LEN - 1)
··· 705 704	struct cld_upcall *tmp, *cup;
706 705	struct cld_msg __user *cmsg = (struct cld_msg __user *)src;
707 706	uint32_t xid;
708 -	struct nfsd_net *nn = net_generic(filp->f_dentry->d_sb->s_fs_info,
707 +	struct nfsd_net *nn = net_generic(file_inode(filp)->i_sb->s_fs_info,
709 708					  nfsd_net_id);
710 709	struct cld_net *cn = nn->cld_net;
711 710
+1 -1
fs/nfsd/nfs4xdr.c
··· 1886 1886		goto out_free;
1887 1887	}
1888 1888	p = xdr_encode_opaque(p, dentry->d_name.name, len);
1889 -	dprintk("/%s", dentry->d_name.name);
1889 +	dprintk("/%pd", dentry);
1890 1890	spin_unlock(&dentry->d_lock);
1891 1891	dput(dentry);
1892 1892	ncomponents--;
+16 -21
fs/nfsd/nfsctl.c
··· 231 231 * payload - write methods 232 232 */ 233 233 234 + static inline struct net *netns(struct file *file) 235 + { 236 + return file_inode(file)->i_sb->s_fs_info; 237 + } 234 238 235 239 /** 236 240 * write_unlock_ip - Release all locks used by a client ··· 256 252 struct sockaddr *sap = (struct sockaddr *)&address; 257 253 size_t salen = sizeof(address); 258 254 char *fo_path; 259 - struct net *net = file->f_dentry->d_sb->s_fs_info; 255 + struct net *net = netns(file); 260 256 261 257 /* sanity check */ 262 258 if (size == 0) ··· 354 350 int len; 355 351 struct auth_domain *dom; 356 352 struct knfsd_fh fh; 357 - struct net *net = file->f_dentry->d_sb->s_fs_info; 358 353 359 354 if (size == 0) 360 355 return -EINVAL; ··· 388 385 if (!dom) 389 386 return -ENOMEM; 390 387 391 - len = exp_rootfh(net, dom, path, &fh, maxsize); 388 + len = exp_rootfh(netns(file), dom, path, &fh, maxsize); 392 389 auth_domain_put(dom); 393 390 if (len) 394 391 return len; ··· 432 429 { 433 430 char *mesg = buf; 434 431 int rv; 435 - struct net *net = file->f_dentry->d_sb->s_fs_info; 432 + struct net *net = netns(file); 436 433 437 434 if (size > 0) { 438 435 int newthreads; ··· 483 480 int len; 484 481 int npools; 485 482 int *nthreads; 486 - struct net *net = file->f_dentry->d_sb->s_fs_info; 483 + struct net *net = netns(file); 487 484 488 485 mutex_lock(&nfsd_mutex); 489 486 npools = nfsd_nrpools(net); ··· 546 543 unsigned minor; 547 544 ssize_t tlen = 0; 548 545 char *sep; 549 - struct net *net = file->f_dentry->d_sb->s_fs_info; 550 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 546 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 551 547 552 548 if (size>0) { 553 549 if (nn->nfsd_serv) ··· 832 830 static ssize_t write_ports(struct file *file, char *buf, size_t size) 833 831 { 834 832 ssize_t rv; 835 - struct net *net = file->f_dentry->d_sb->s_fs_info; 836 833 837 834 mutex_lock(&nfsd_mutex); 838 - rv = __write_ports(file, buf, size, net); 835 + rv = __write_ports(file, buf, size, netns(file)); 839 836 mutex_unlock(&nfsd_mutex); 840 837 return rv; 841 838 } ··· 866 865 static ssize_t write_maxblksize(struct file *file, char *buf, size_t size) 867 866 { 868 867 char *mesg = buf; 869 - struct net *net = file->f_dentry->d_sb->s_fs_info; 870 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 868 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 871 869 872 870 if (size > 0) { 873 871 int bsize; ··· 915 915 static ssize_t write_maxconn(struct file *file, char *buf, size_t size) 916 916 { 917 917 char *mesg = buf; 918 - struct net *net = file->f_dentry->d_sb->s_fs_info; 919 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 918 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 920 919 unsigned int maxconn = nn->max_connections; 921 920 922 921 if (size > 0) { ··· 996 997 */ 997 998 static ssize_t write_leasetime(struct file *file, char *buf, size_t size) 998 999 { 999 - struct net *net = file->f_dentry->d_sb->s_fs_info; 1000 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1000 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 1001 1001 return nfsd4_write_time(file, buf, size, &nn->nfsd4_lease, nn); 1002 1002 } ··· 1012 1014 */ 1013 1015 static ssize_t write_gracetime(struct file *file, char *buf, size_t size) 1014 1016 { 1015 - struct net *net = file->f_dentry->d_sb->s_fs_info; 1016 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1017 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 1017 1018 return nfsd4_write_time(file, buf, size, &nn->nfsd4_grace, nn); 1018 1019 } ··· 1068 1071 static ssize_t write_recoverydir(struct file *file, char *buf, size_t size) 1069 1072 { 1070 1073 ssize_t rv; 1071 - struct net *net = file->f_dentry->d_sb->s_fs_info; 1072 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1074 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 1073 1075 1074 1076 mutex_lock(&nfsd_mutex); 1075 1077 rv = __write_recoverydir(file, buf, size, nn); ··· 1098 1102 */ 1099 1103 static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) 1100 1104 { 1101 - struct net *net = file->f_dentry->d_sb->s_fs_info; 1102 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1105 + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); 1103 1106 1104 1107 if (size > 0) { 1105 1108 switch(buf[0]) {
+8 -8
fs/nfsd/vfs.c
··· 930 930 unsigned long *cnt, int *stablep) 931 931 { 932 932 struct svc_export *exp; 933 - struct dentry *dentry; 934 933 struct inode *inode; 935 934 mm_segment_t oldfs; 936 935 __be32 err = 0; ··· 948 949 */ 949 950 current->flags |= PF_LESS_THROTTLE; 950 951 951 - dentry = file->f_path.dentry; 952 - inode = dentry->d_inode; 952 + inode = file_inode(file); 953 953 exp = fhp->fh_export; 954 954 955 955 use_wgather = (rqstp->rq_vers == 2) && EX_WGATHER(exp); ··· 1817 1819 int full; 1818 1820 }; 1819 1821 1820 - static int nfsd_buffered_filldir(void *__buf, const char *name, int namlen, 1821 - loff_t offset, u64 ino, unsigned int d_type) 1822 + static int nfsd_buffered_filldir(struct dir_context *ctx, const char *name, 1823 + int namlen, loff_t offset, u64 ino, 1824 + unsigned int d_type) 1822 1825 { 1823 - struct readdir_data *buf = __buf; 1826 + struct readdir_data *buf = 1827 + container_of(ctx, struct readdir_data, ctx); 1824 1828 struct buffered_dirent *de = (void *)(buf->dirent + buf->used); 1825 1829 unsigned int reclen; 1826 1830 ··· 1842 1842 return 0; 1843 1843 } 1844 1844 1845 - static __be32 nfsd_buffered_readdir(struct file *file, filldir_t func, 1845 + static __be32 nfsd_buffered_readdir(struct file *file, nfsd_filldir_t func, 1846 1846 struct readdir_cd *cdp, loff_t *offsetp) 1847 1847 { 1848 1848 struct buffered_dirent *de; ··· 1926 1926 */ 1927 1927 __be32 1928 1928 nfsd_readdir(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t *offsetp, 1929 - struct readdir_cd *cdp, filldir_t func) 1929 + struct readdir_cd *cdp, nfsd_filldir_t func) 1930 1930 { 1931 1931 __be32 err; 1932 1932 struct file *file;
+2 -2
fs/nfsd/vfs.h
··· 36 36 /* 37 37 * Callback function for readdir 38 38 */ 39 - typedef int (*nfsd_dirop_t)(struct inode *, struct dentry *, int, int); 39 + typedef int (*nfsd_filldir_t)(void *, const char *, int, loff_t, u64, unsigned); 40 40 41 41 /* nfsd/vfs.c */ 42 42 int nfsd_racache_init(int); ··· 95 95 __be32 nfsd_unlink(struct svc_rqst *, struct svc_fh *, int type, 96 96 char *name, int len); 97 97 __be32 nfsd_readdir(struct svc_rqst *, struct svc_fh *, 98 - loff_t *, struct readdir_cd *, filldir_t); 98 + loff_t *, struct readdir_cd *, nfsd_filldir_t); 99 99 __be32 nfsd_statfs(struct svc_rqst *, struct svc_fh *, 100 100 struct kstatfs *, int access); 101 101
+32 -46
fs/notify/fdinfo.c
··· 20 20 21 21 #if defined(CONFIG_INOTIFY_USER) || defined(CONFIG_FANOTIFY) 22 22 23 - static int show_fdinfo(struct seq_file *m, struct file *f, 24 - int (*show)(struct seq_file *m, struct fsnotify_mark *mark)) 23 + static void show_fdinfo(struct seq_file *m, struct file *f, 24 + void (*show)(struct seq_file *m, 25 + struct fsnotify_mark *mark)) 25 26 { 26 27 struct fsnotify_group *group = f->private_data; 27 28 struct fsnotify_mark *mark; 28 - int ret = 0; 29 29 30 30 mutex_lock(&group->mark_mutex); 31 31 list_for_each_entry(mark, &group->marks_list, g_list) { 32 - ret = show(m, mark); 33 - if (ret) 32 + show(m, mark); 33 + if (seq_has_overflowed(m)) 34 34 break; 35 35 } 36 36 mutex_unlock(&group->mark_mutex); 37 - return ret; 38 37 } 39 38 40 39 #if defined(CONFIG_EXPORTFS) 41 - static int show_mark_fhandle(struct seq_file *m, struct inode *inode) 40 + static void show_mark_fhandle(struct seq_file *m, struct inode *inode) 42 41 { 43 42 struct { 44 43 struct file_handle handle; ··· 51 52 ret = exportfs_encode_inode_fh(inode, (struct fid *)f.handle.f_handle, &size, 0); 52 53 if ((ret == FILEID_INVALID) || (ret < 0)) { 53 54 WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret); 54 - return 0; 55 + return; 55 56 } 56 57 57 58 f.handle.handle_type = ret; 58 59 f.handle.handle_bytes = size * sizeof(u32); 59 60 60 - ret = seq_printf(m, "fhandle-bytes:%x fhandle-type:%x f_handle:", 61 - f.handle.handle_bytes, f.handle.handle_type); 61 + seq_printf(m, "fhandle-bytes:%x fhandle-type:%x f_handle:", 62 + f.handle.handle_bytes, f.handle.handle_type); 62 63 63 64 for (i = 0; i < f.handle.handle_bytes; i++) 64 - ret |= seq_printf(m, "%02x", (int)f.handle.f_handle[i]); 65 - 66 - return ret; 65 + seq_printf(m, "%02x", (int)f.handle.f_handle[i]); 67 66 } 68 67 #else 69 - static int show_mark_fhandle(struct seq_file *m, struct inode *inode) 68 + static void show_mark_fhandle(struct seq_file *m, struct inode *inode) 70 69 { 71 - return 0; 72 70 } 73 71 #endif 74 72 75 
73 #ifdef CONFIG_INOTIFY_USER 76 74 77 - static int inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark) 75 + static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark) 78 76 { 79 77 struct inotify_inode_mark *inode_mark; 80 78 struct inode *inode; 81 - int ret = 0; 82 79 83 80 if (!(mark->flags & (FSNOTIFY_MARK_FLAG_ALIVE | FSNOTIFY_MARK_FLAG_INODE))) 84 - return 0; 81 + return; 85 82 86 83 inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark); 87 84 inode = igrab(mark->i.inode); 88 85 if (inode) { 89 - ret = seq_printf(m, "inotify wd:%x ino:%lx sdev:%x " 90 - "mask:%x ignored_mask:%x ", 91 - inode_mark->wd, inode->i_ino, 92 - inode->i_sb->s_dev, 93 - mark->mask, mark->ignored_mask); 94 - ret |= show_mark_fhandle(m, inode); 95 - ret |= seq_putc(m, '\n'); 86 + seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:%x ", 87 + inode_mark->wd, inode->i_ino, inode->i_sb->s_dev, 88 + mark->mask, mark->ignored_mask); 89 + show_mark_fhandle(m, inode); 90 + seq_putc(m, '\n'); 96 91 iput(inode); 97 92 } 98 - 99 - return ret; 100 93 } 101 94 102 - int inotify_show_fdinfo(struct seq_file *m, struct file *f) 95 + void inotify_show_fdinfo(struct seq_file *m, struct file *f) 103 96 { 104 - return show_fdinfo(m, f, inotify_fdinfo); 97 + show_fdinfo(m, f, inotify_fdinfo); 105 98 } 106 99 107 100 #endif /* CONFIG_INOTIFY_USER */ 108 101 109 102 #ifdef CONFIG_FANOTIFY 110 103 111 - static int fanotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark) 104 + static void fanotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark) 112 105 { 113 106 unsigned int mflags = 0; 114 107 struct inode *inode; 115 - int ret = 0; 116 108 117 109 if (!(mark->flags & FSNOTIFY_MARK_FLAG_ALIVE)) 118 - return 0; 110 + return; 119 111 120 112 if (mark->flags & FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY) 121 113 mflags |= FAN_MARK_IGNORED_SURV_MODIFY; ··· 114 124 if (mark->flags & FSNOTIFY_MARK_FLAG_INODE) { 115 125 inode = igrab(mark->i.inode); 
116 126 if (!inode) 117 - goto out; 118 - ret = seq_printf(m, "fanotify ino:%lx sdev:%x " 119 - "mflags:%x mask:%x ignored_mask:%x ", 120 - inode->i_ino, inode->i_sb->s_dev, 121 - mflags, mark->mask, mark->ignored_mask); 122 - ret |= show_mark_fhandle(m, inode); 123 - ret |= seq_putc(m, '\n'); 127 + return; 128 + seq_printf(m, "fanotify ino:%lx sdev:%x mflags:%x mask:%x ignored_mask:%x ", 129 + inode->i_ino, inode->i_sb->s_dev, 130 + mflags, mark->mask, mark->ignored_mask); 131 + show_mark_fhandle(m, inode); 132 + seq_putc(m, '\n'); 124 133 iput(inode); 125 134 } else if (mark->flags & FSNOTIFY_MARK_FLAG_VFSMOUNT) { 126 135 struct mount *mnt = real_mount(mark->m.mnt); 127 136 128 - ret = seq_printf(m, "fanotify mnt_id:%x mflags:%x mask:%x " 129 - "ignored_mask:%x\n", mnt->mnt_id, mflags, 130 - mark->mask, mark->ignored_mask); 137 + seq_printf(m, "fanotify mnt_id:%x mflags:%x mask:%x ignored_mask:%x\n", 138 + mnt->mnt_id, mflags, mark->mask, mark->ignored_mask); 131 139 } 132 - out: 133 - return ret; 134 140 } 135 141 136 - int fanotify_show_fdinfo(struct seq_file *m, struct file *f) 142 + void fanotify_show_fdinfo(struct seq_file *m, struct file *f) 137 143 { 138 144 struct fsnotify_group *group = f->private_data; 139 145 unsigned int flags = 0; ··· 155 169 seq_printf(m, "fanotify flags:%x event-flags:%x\n", 156 170 flags, group->fanotify_data.f_flags); 157 171 158 - return show_fdinfo(m, f, fanotify_fdinfo); 172 + show_fdinfo(m, f, fanotify_fdinfo); 159 173 } 160 174 161 175 #endif /* CONFIG_FANOTIFY */
+2 -2
fs/notify/fdinfo.h
··· 10 10 #ifdef CONFIG_PROC_FS 11 11 12 12 #ifdef CONFIG_INOTIFY_USER 13 - extern int inotify_show_fdinfo(struct seq_file *m, struct file *f); 13 + void inotify_show_fdinfo(struct seq_file *m, struct file *f); 14 14 #endif 15 15 16 16 #ifdef CONFIG_FANOTIFY 17 - extern int fanotify_show_fdinfo(struct seq_file *m, struct file *f); 17 + void fanotify_show_fdinfo(struct seq_file *m, struct file *f); 18 18 #endif 19 19 20 20 #else /* CONFIG_PROC_FS */
+2 -2
fs/notify/fsnotify.c
··· 63 63 spin_lock(&inode->i_lock); 64 64 /* run all of the dentries associated with this inode. Since this is a 65 65 * directory, there damn well better only be one item on this list */ 66 - hlist_for_each_entry(alias, &inode->i_dentry, d_alias) { 66 + hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) { 67 67 struct dentry *child; 68 68 69 69 /* run all of the children of the original inode and fix their 70 70 * d_flags to indicate parental interest (their parent is the 71 71 * original inode) */ 72 72 spin_lock(&alias->d_lock); 73 - list_for_each_entry(child, &alias->d_subdirs, d_u.d_child) { 73 + list_for_each_entry(child, &alias->d_subdirs, d_child) { 74 74 if (!child->d_inode) 75 75 continue; 76 76
+2 -2
fs/ntfs/namei.c
··· 111 111 unsigned long dent_ino; 112 112 int uname_len; 113 113 114 - ntfs_debug("Looking up %s in directory inode 0x%lx.", 115 - dent->d_name.name, dir_ino->i_ino); 114 + ntfs_debug("Looking up %pd in directory inode 0x%lx.", 115 + dent, dir_ino->i_ino); 116 116 /* Convert the name of the dentry to Unicode. */ 117 117 uname_len = ntfs_nlstoucs(vol, dent->d_name.name, dent->d_name.len, 118 118 &uname);
+9 -11
fs/ocfs2/dcache.c
··· 172 172 struct dentry *dentry; 173 173 174 174 spin_lock(&inode->i_lock); 175 - hlist_for_each_entry(dentry, &inode->i_dentry, d_alias) { 175 + hlist_for_each_entry(dentry, &inode->i_dentry, d_u.d_alias) { 176 176 spin_lock(&dentry->d_lock); 177 177 if (ocfs2_match_dentry(dentry, parent_blkno, skip_unhashed)) { 178 178 trace_ocfs2_find_local_alias(dentry->d_name.len, ··· 251 251 252 252 if (dl) { 253 253 mlog_bug_on_msg(dl->dl_parent_blkno != parent_blkno, 254 - " \"%.*s\": old parent: %llu, new: %llu\n", 255 - dentry->d_name.len, dentry->d_name.name, 254 + " \"%pd\": old parent: %llu, new: %llu\n", 255 + dentry, 256 256 (unsigned long long)parent_blkno, 257 257 (unsigned long long)dl->dl_parent_blkno); 258 258 return 0; ··· 277 277 (unsigned long long)OCFS2_I(inode)->ip_blkno); 278 278 279 279 mlog_bug_on_msg(dl->dl_parent_blkno != parent_blkno, 280 - " \"%.*s\": old parent: %llu, new: %llu\n", 281 - dentry->d_name.len, dentry->d_name.name, 280 + " \"%pd\": old parent: %llu, new: %llu\n", 281 + dentry, 282 282 (unsigned long long)parent_blkno, 283 283 (unsigned long long)dl->dl_parent_blkno); 284 284 ··· 406 406 if (inode) 407 407 ino = (unsigned long long)OCFS2_I(inode)->ip_blkno; 408 408 mlog(ML_ERROR, "Dentry is missing cluster lock. " 409 - "inode: %llu, d_flags: 0x%x, d_name: %.*s\n", 410 - ino, dentry->d_flags, dentry->d_name.len, 411 - dentry->d_name.name); 409 + "inode: %llu, d_flags: 0x%x, d_name: %pd\n", 410 + ino, dentry->d_flags, dentry); 412 411 } 413 412 414 413 goto out; 415 414 } 416 415 417 - mlog_bug_on_msg(dl->dl_count == 0, "dentry: %.*s, count: %u\n", 418 - dentry->d_name.len, dentry->d_name.name, 419 - dl->dl_count); 416 + mlog_bug_on_msg(dl->dl_count == 0, "dentry: %pd, count: %u\n", 417 + dentry, dl->dl_count); 420 418 421 419 ocfs2_dentry_lock_put(OCFS2_SB(dentry->d_sb), dl); 422 420
+5 -3
fs/ocfs2/dir.c
··· 2073 2073 unsigned seen_other; 2074 2074 unsigned dx_dir; 2075 2075 }; 2076 - static int ocfs2_empty_dir_filldir(void *priv, const char *name, int name_len, 2077 - loff_t pos, u64 ino, unsigned type) 2076 + static int ocfs2_empty_dir_filldir(struct dir_context *ctx, const char *name, 2077 + int name_len, loff_t pos, u64 ino, 2078 + unsigned type) 2078 2079 { 2079 - struct ocfs2_empty_dir_priv *p = priv; 2080 + struct ocfs2_empty_dir_priv *p = 2081 + container_of(ctx, struct ocfs2_empty_dir_priv, ctx); 2080 2082 2081 2083 /* 2082 2084 * Check the positions of "." and ".." records to be sure
+2 -2
fs/ocfs2/dlmfs/dlmfs.c
··· 565 565 * to acquire a lock, this basically destroys our lockres. */ 566 566 status = user_dlm_destroy_lock(&DLMFS_I(inode)->ip_lockres); 567 567 if (status < 0) { 568 - mlog(ML_ERROR, "unlink %.*s, error %d from destroy\n", 569 - dentry->d_name.len, dentry->d_name.name, status); 568 + mlog(ML_ERROR, "unlink %pd, error %d from destroy\n", 569 + dentry, status); 570 570 goto bail; 571 571 } 572 572 status = simple_unlink(dir, dentry);
+1 -2
fs/ocfs2/dlmglue.c
··· 3725 3725 break; 3726 3726 spin_unlock(&dentry_attach_lock); 3727 3727 3728 - mlog(0, "d_delete(%.*s);\n", dentry->d_name.len, 3729 - dentry->d_name.name); 3728 + mlog(0, "d_delete(%pd);\n", dentry); 3730 3729 3731 3730 /* 3732 3731 * The following dcache calls may do an
+5 -3
fs/ocfs2/journal.c
··· 1982 1982 struct ocfs2_super *osb; 1983 1983 }; 1984 1984 1985 - static int ocfs2_orphan_filldir(void *priv, const char *name, int name_len, 1986 - loff_t pos, u64 ino, unsigned type) 1985 + static int ocfs2_orphan_filldir(struct dir_context *ctx, const char *name, 1986 + int name_len, loff_t pos, u64 ino, 1987 + unsigned type) 1987 1988 { 1988 - struct ocfs2_orphan_filldir_priv *p = priv; 1989 + struct ocfs2_orphan_filldir_priv *p = 1990 + container_of(ctx, struct ocfs2_orphan_filldir_priv, ctx); 1989 1991 struct inode *iter; 1990 1992 1991 1993 if (name_len == 1 && !strncmp(".", name, 1))
+2 -2
fs/open.c
··· 516 516 int err = -EBADF; 517 517 518 518 if (f.file) { 519 - audit_inode(NULL, f.file->f_path.dentry, 0); 519 + audit_file(f.file); 520 520 err = chmod_common(&f.file->f_path, mode); 521 521 fdput(f); 522 522 } ··· 642 642 error = mnt_want_write_file(f.file); 643 643 if (error) 644 644 goto out_fput; 645 - audit_inode(NULL, f.file->f_path.dentry, 0); 645 + audit_file(f.file); 646 646 error = chown_common(&f.file->f_path, user, group); 647 647 mnt_drop_write_file(f.file); 648 648 out_fput:
+5 -3
fs/overlayfs/readdir.c
··· 180 180 } 181 181 } 182 182 183 - static int ovl_fill_merge(void *buf, const char *name, int namelen, 184 - loff_t offset, u64 ino, unsigned int d_type) 183 + static int ovl_fill_merge(struct dir_context *ctx, const char *name, 184 + int namelen, loff_t offset, u64 ino, 185 + unsigned int d_type) 185 186 { 186 - struct ovl_readdir_data *rdd = buf; 187 + struct ovl_readdir_data *rdd = 188 + container_of(ctx, struct ovl_readdir_data, ctx); 187 189 188 190 rdd->count++; 189 191 if (!rdd->is_merge)
+2 -2
fs/proc/base.c
··· 2789 2789 int proc_pid_readdir(struct file *file, struct dir_context *ctx) 2790 2790 { 2791 2791 struct tgid_iter iter; 2792 - struct pid_namespace *ns = file->f_dentry->d_sb->s_fs_info; 2792 + struct pid_namespace *ns = file_inode(file)->i_sb->s_fs_info; 2793 2793 loff_t pos = ctx->pos; 2794 2794 2795 2795 if (pos >= PID_MAX_LIMIT + TGID_OFFSET) ··· 3095 3095 /* f_version caches the tgid value that the last readdir call couldn't 3096 3096 * return. lseek aka telldir automagically resets f_version to 0. 3097 3097 */ 3098 - ns = file->f_dentry->d_sb->s_fs_info; 3098 + ns = inode->i_sb->s_fs_info; 3099 3099 tid = (int)file->f_version; 3100 3100 file->f_version = 0; 3101 3101 for (task = first_tid(proc_pid(inode), tid, ctx->pos - 2, ns);
+2 -1
fs/proc/fd.c
··· 53 53 (long long)file->f_pos, f_flags, 54 54 real_mount(file->f_path.mnt)->mnt_id); 55 55 if (file->f_op->show_fdinfo) 56 - ret = file->f_op->show_fdinfo(m, file); 56 + file->f_op->show_fdinfo(m, file); 57 + ret = seq_has_overflowed(m); 57 58 fput(file); 58 59 } 59 60
+12 -9
fs/readdir.c
··· 74 74 int result; 75 75 }; 76 76 77 - static int fillonedir(void * __buf, const char * name, int namlen, loff_t offset, 78 - u64 ino, unsigned int d_type) 77 + static int fillonedir(struct dir_context *ctx, const char *name, int namlen, 78 + loff_t offset, u64 ino, unsigned int d_type) 79 79 { 80 - struct readdir_callback *buf = (struct readdir_callback *) __buf; 80 + struct readdir_callback *buf = 81 + container_of(ctx, struct readdir_callback, ctx); 81 82 struct old_linux_dirent __user * dirent; 82 83 unsigned long d_ino; 83 84 ··· 149 148 int error; 150 149 }; 151 150 152 - static int filldir(void * __buf, const char * name, int namlen, loff_t offset, 153 - u64 ino, unsigned int d_type) 151 + static int filldir(struct dir_context *ctx, const char *name, int namlen, 152 + loff_t offset, u64 ino, unsigned int d_type) 154 153 { 155 154 struct linux_dirent __user * dirent; 156 - struct getdents_callback * buf = (struct getdents_callback *) __buf; 155 + struct getdents_callback *buf = 156 + container_of(ctx, struct getdents_callback, ctx); 157 157 unsigned long d_ino; 158 158 int reclen = ALIGN(offsetof(struct linux_dirent, d_name) + namlen + 2, 159 159 sizeof(long)); ··· 234 232 int error; 235 233 }; 236 234 237 - static int filldir64(void * __buf, const char * name, int namlen, loff_t offset, 238 - u64 ino, unsigned int d_type) 235 + static int filldir64(struct dir_context *ctx, const char *name, int namlen, 236 + loff_t offset, u64 ino, unsigned int d_type) 239 237 { 240 238 struct linux_dirent64 __user *dirent; 241 - struct getdents_callback64 * buf = (struct getdents_callback64 *) __buf; 239 + struct getdents_callback64 *buf = 240 + container_of(ctx, struct getdents_callback64, ctx); 242 241 int reclen = ALIGN(offsetof(struct linux_dirent64, d_name) + namlen + 1, 243 242 sizeof(u64)); 244 243
+12 -9
fs/reiserfs/xattr.c
··· 188 188 }; 189 189 190 190 static int 191 - fill_with_dentries(void *buf, const char *name, int namelen, loff_t offset, 192 - u64 ino, unsigned int d_type) 191 + fill_with_dentries(struct dir_context *ctx, const char *name, int namelen, 192 + loff_t offset, u64 ino, unsigned int d_type) 193 193 { 194 - struct reiserfs_dentry_buf *dbuf = buf; 194 + struct reiserfs_dentry_buf *dbuf = 195 + container_of(ctx, struct reiserfs_dentry_buf, ctx); 195 196 struct dentry *dentry; 196 197 197 198 WARN_ON_ONCE(!mutex_is_locked(&dbuf->xadir->d_inode->i_mutex)); ··· 210 209 } else if (!dentry->d_inode) { 211 210 /* A directory entry exists, but no file? */ 212 211 reiserfs_error(dentry->d_sb, "xattr-20003", 213 - "Corrupted directory: xattr %s listed but " 214 - "not found for file %s.\n", 215 - dentry->d_name.name, dbuf->xadir->d_name.name); 212 + "Corrupted directory: xattr %pd listed but " 213 + "not found for file %pd.\n", 214 + dentry, dbuf->xadir); 216 215 dput(dentry); 217 216 return -EIO; 218 217 } ··· 825 824 struct dentry *dentry; 826 825 }; 827 826 828 - static int listxattr_filler(void *buf, const char *name, int namelen, 829 - loff_t offset, u64 ino, unsigned int d_type) 827 + static int listxattr_filler(struct dir_context *ctx, const char *name, 828 + int namelen, loff_t offset, u64 ino, 829 + unsigned int d_type) 830 830 { 831 - struct listxattr_buf *b = (struct listxattr_buf *)buf; 831 + struct listxattr_buf *b = 832 + container_of(ctx, struct listxattr_buf, ctx); 832 833 size_t size; 833 834 834 835 if (name[0] != '.' ||
+2 -13
fs/seq_file.c
··· 16 16 #include <asm/uaccess.h> 17 17 #include <asm/page.h> 18 18 19 - 20 - /* 21 - * seq_files have a buffer which can may overflow. When this happens a larger 22 - * buffer is reallocated and all the data will be printed again. 23 - * The overflow state is true when m->count == m->size. 24 - */ 25 - static bool seq_overflow(struct seq_file *m) 26 - { 27 - return m->count == m->size; 28 - } 29 - 30 19 static void seq_set_overflow(struct seq_file *m) 31 20 { 32 21 m->count = m->size; ··· 113 124 error = 0; 114 125 m->count = 0; 115 126 } 116 - if (seq_overflow(m)) 127 + if (seq_has_overflowed(m)) 117 128 goto Eoverflow; 118 129 if (pos + m->count > offset) { 119 130 m->from = offset - pos; ··· 256 267 break; 257 268 } 258 269 err = m->op->show(m, p); 259 - if (seq_overflow(m) || err) { 270 + if (seq_has_overflowed(m) || err) { 260 271 m->count = offs; 261 272 if (likely(err <= 0)) 262 273 break;
+1 -3
fs/signalfd.c
··· 230 230 } 231 231 232 232 #ifdef CONFIG_PROC_FS 233 - static int signalfd_show_fdinfo(struct seq_file *m, struct file *f) 233 + static void signalfd_show_fdinfo(struct seq_file *m, struct file *f) 234 234 { 235 235 struct signalfd_ctx *ctx = f->private_data; 236 236 sigset_t sigmask; ··· 238 238 sigmask = ctx->sigmask; 239 239 signotset(&sigmask); 240 240 render_sigset_t(m, "sigmask:\t", &sigmask); 241 - 242 - return 0; 243 241 } 244 242 #endif 245 243
+1 -1
fs/sync.c
··· 154 154 155 155 if (!f.file) 156 156 return -EBADF; 157 - sb = f.file->f_dentry->d_sb; 157 + sb = f.file->f_path.dentry->d_sb; 158 158 159 159 down_read(&sb->s_umount); 160 160 ret = sync_filesystem(sb);
+14 -13
fs/timerfd.c
··· 288 288 } 289 289 290 290 #ifdef CONFIG_PROC_FS 291 - static int timerfd_show(struct seq_file *m, struct file *file) 291 + static void timerfd_show(struct seq_file *m, struct file *file) 292 292 { 293 293 struct timerfd_ctx *ctx = file->private_data; 294 294 struct itimerspec t; ··· 298 298 t.it_interval = ktime_to_timespec(ctx->tintv); 299 299 spin_unlock_irq(&ctx->wqh.lock); 300 300 301 - return seq_printf(m, 302 - "clockid: %d\n" 303 - "ticks: %llu\n" 304 - "settime flags: 0%o\n" 305 - "it_value: (%llu, %llu)\n" 306 - "it_interval: (%llu, %llu)\n", 307 - ctx->clockid, (unsigned long long)ctx->ticks, 308 - ctx->settime_flags, 309 - (unsigned long long)t.it_value.tv_sec, 310 - (unsigned long long)t.it_value.tv_nsec, 311 - (unsigned long long)t.it_interval.tv_sec, 312 - (unsigned long long)t.it_interval.tv_nsec); 301 + seq_printf(m, 302 + "clockid: %d\n" 303 + "ticks: %llu\n" 304 + "settime flags: 0%o\n" 305 + "it_value: (%llu, %llu)\n" 306 + "it_interval: (%llu, %llu)\n", 307 + ctx->clockid, 308 + (unsigned long long)ctx->ticks, 309 + ctx->settime_flags, 310 + (unsigned long long)t.it_value.tv_sec, 311 + (unsigned long long)t.it_value.tv_nsec, 312 + (unsigned long long)t.it_interval.tv_sec, 313 + (unsigned long long)t.it_interval.tv_nsec); 313 314 } 314 315 #else 315 316 #define timerfd_show NULL
+6 -10
fs/xattr.c
··· 405 405 const void __user *,value, size_t, size, int, flags) 406 406 { 407 407 struct fd f = fdget(fd); 408 - struct dentry *dentry; 409 408 int error = -EBADF; 410 409 411 410 if (!f.file) 412 411 return error; 413 - dentry = f.file->f_path.dentry; 414 - audit_inode(NULL, dentry, 0); 412 + audit_file(f.file); 415 413 error = mnt_want_write_file(f.file); 416 414 if (!error) { 417 - error = setxattr(dentry, name, value, size, flags); 415 + error = setxattr(f.file->f_path.dentry, name, value, size, flags); 418 416 mnt_drop_write_file(f.file); 419 417 } 420 418 fdput(f); ··· 507 509 508 510 if (!f.file) 509 511 return error; 510 - audit_inode(NULL, f.file->f_path.dentry, 0); 512 + audit_file(f.file); 511 513 error = getxattr(f.file->f_path.dentry, name, value, size); 512 514 fdput(f); 513 515 return error; ··· 588 590 589 591 if (!f.file) 590 592 return error; 591 - audit_inode(NULL, f.file->f_path.dentry, 0); 593 + audit_file(f.file); 592 594 error = listxattr(f.file->f_path.dentry, list, size); 593 595 fdput(f); 594 596 return error; ··· 649 651 SYSCALL_DEFINE2(fremovexattr, int, fd, const char __user *, name) 650 652 { 651 653 struct fd f = fdget(fd); 652 - struct dentry *dentry; 653 654 int error = -EBADF; 654 655 655 656 if (!f.file) 656 657 return error; 657 - dentry = f.file->f_path.dentry; 658 - audit_inode(NULL, dentry, 0); 658 + audit_file(f.file); 659 659 error = mnt_want_write_file(f.file); 660 660 if (!error) { 661 - error = removexattr(dentry, name); 661 + error = removexattr(f.file->f_path.dentry, name); 662 662 mnt_drop_write_file(f.file); 663 663 } 664 664 fdput(f);
+9
include/linux/audit.h
··· 130 130 #define AUDIT_INODE_HIDDEN 2 /* audit record should be hidden */ 131 131 extern void __audit_inode(struct filename *name, const struct dentry *dentry, 132 132 unsigned int flags); 133 + extern void __audit_file(const struct file *); 133 134 extern void __audit_inode_child(const struct inode *parent, 134 135 const struct dentry *dentry, 135 136 const unsigned char type); ··· 183 182 flags |= AUDIT_INODE_PARENT; 184 183 __audit_inode(name, dentry, flags); 185 184 } 185 + } 186 + static inline void audit_file(struct file *file) 187 + { 188 + if (unlikely(!audit_dummy_context())) 189 + __audit_file(file); 186 190 } 187 191 static inline void audit_inode_parent_hidden(struct filename *name, 188 192 const struct dentry *dentry) ··· 363 357 const struct dentry *dentry, 364 358 unsigned int parent) 365 359 { } 360 + static inline void audit_file(struct file *file) 361 + { 362 + } 366 363 static inline void audit_inode_parent_hidden(struct filename *name, 367 364 const struct dentry *dentry) 368 365 { }
+2 -2
include/linux/cgroup.h
··· 367 367 * struct cftype: handler definitions for cgroup control files 368 368 * 369 369 * When reading/writing to a file: 370 - * - the cgroup to use is file->f_dentry->d_parent->d_fsdata 371 - * - the 'cftype' of the file is file->f_dentry->d_fsdata 370 + * - the cgroup to use is file->f_path.dentry->d_parent->d_fsdata 371 + * - the 'cftype' of the file is file->f_path.dentry->d_fsdata 372 372 */ 373 373 374 374 /* cftype->flags */
+4 -5
include/linux/dcache.h
··· 124 124 void *d_fsdata; /* fs-specific data */ 125 125 126 126 struct list_head d_lru; /* LRU list */ 127 + struct list_head d_child; /* child of parent list */ 128 + struct list_head d_subdirs; /* our children */ 127 129 /* 128 - * d_child and d_rcu can share memory 130 + * d_alias and d_rcu can share memory 129 131 */ 130 132 union { 131 - struct list_head d_child; /* child of parent list */ 133 + struct hlist_node d_alias; /* inode alias list */ 132 134 struct rcu_head d_rcu; 133 135 } d_u; 134 - struct list_head d_subdirs; /* our children */ 135 - struct hlist_node d_alias; /* inode alias list */ 136 136 }; 137 137 138 138 /* ··· 230 230 */ 231 231 extern void d_instantiate(struct dentry *, struct inode *); 232 232 extern struct dentry * d_instantiate_unique(struct dentry *, struct inode *); 233 - extern struct dentry * d_materialise_unique(struct dentry *, struct inode *); 234 233 extern int d_instantiate_no_diralias(struct dentry *, struct inode *); 235 234 extern void __d_drop(struct dentry *dentry); 236 235 extern void d_drop(struct dentry *dentry);
+3 -4
include/linux/debugfs.h
··· 92 92 struct dentry *parent, 93 93 struct debugfs_regset32 *regset); 94 94 95 - int debugfs_print_regs32(struct seq_file *s, const struct debugfs_reg32 *regs, 96 - int nregs, void __iomem *base, char *prefix); 95 + void debugfs_print_regs32(struct seq_file *s, const struct debugfs_reg32 *regs, 96 + int nregs, void __iomem *base, char *prefix); 97 97 98 98 struct dentry *debugfs_create_u32_array(const char *name, umode_t mode, 99 99 struct dentry *parent, ··· 233 233 return ERR_PTR(-ENODEV); 234 234 } 235 235 236 - static inline int debugfs_print_regs32(struct seq_file *s, const struct debugfs_reg32 *regs, 236 + static inline void debugfs_print_regs32(struct seq_file *s, const struct debugfs_reg32 *regs, 237 237 int nregs, void __iomem *base, char *prefix) 238 238 { 239 - return 0; 240 239 } 241 240 242 241 static inline bool debugfs_initialized(void)
+10 -3
include/linux/fs.h
··· 786 786 struct rcu_head fu_rcuhead; 787 787 } f_u; 788 788 struct path f_path; 789 - #define f_dentry f_path.dentry 790 789 struct inode *f_inode; /* cached value */ 791 790 const struct file_operations *f_op; 792 791 ··· 1464 1465 * This allows the kernel to read directories into kernel space or 1465 1466 * to have different dirent layouts depending on the binary type. 1466 1467 */ 1467 - typedef int (*filldir_t)(void *, const char *, int, loff_t, u64, unsigned); 1468 + struct dir_context; 1469 + typedef int (*filldir_t)(struct dir_context *, const char *, int, loff_t, u64, 1470 + unsigned); 1471 + 1468 1472 struct dir_context { 1469 1473 const filldir_t actor; 1470 1474 loff_t pos; ··· 1513 1511 int (*setlease)(struct file *, long, struct file_lock **, void **); 1514 1512 long (*fallocate)(struct file *file, int mode, loff_t offset, 1515 1513 loff_t len); 1516 - int (*show_fdinfo)(struct seq_file *m, struct file *f); 1514 + void (*show_fdinfo)(struct seq_file *m, struct file *f); 1517 1515 }; 1518 1516 1519 1517 struct inode_operations { ··· 2787 2785 { 2788 2786 if (!is_sxid(inode->i_mode) && (inode->i_sb->s_flags & MS_NOSEC)) 2789 2787 inode->i_flags |= S_NOSEC; 2788 + } 2789 + 2790 + static inline bool is_root_inode(struct inode *inode) 2791 + { 2792 + return inode == inode->i_sb->s_root->d_inode; 2790 2793 } 2791 2794 2792 2795 static inline bool dir_emit(struct dir_context *ctx,
+15
include/linux/seq_file.h
··· 43 43 #define SEQ_SKIP 1 44 44 45 45 /** 46 + * seq_has_overflowed - check if the buffer has overflowed 47 + * @m: the seq_file handle 48 + * 49 + * seq_files have a buffer which may overflow. When this happens a larger 50 + * buffer is reallocated and all the data will be printed again. 51 + * The overflow state is true when m->count == m->size. 52 + * 53 + * Returns true if the buffer received more than it can hold. 54 + */ 55 + static inline bool seq_has_overflowed(struct seq_file *m) 56 + { 57 + return m->count == m->size; 58 + } 59 + 60 + /** 46 61 * seq_get_buf - get buffer to write arbitrary data to 47 62 * @m: the seq_file handle 48 63 * @bufp: the beginning of the buffer is stored here
+6
include/linux/uio.h
··· 31 31 size_t count; 32 32 union { 33 33 const struct iovec *iov; 34 + const struct kvec *kvec; 34 35 const struct bio_vec *bvec; 35 36 }; 36 37 unsigned long nr_segs; ··· 83 82 struct iov_iter *i); 84 83 size_t copy_to_iter(void *addr, size_t bytes, struct iov_iter *i); 85 84 size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i); 85 + size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i); 86 86 size_t iov_iter_zero(size_t bytes, struct iov_iter *); 87 87 unsigned long iov_iter_alignment(const struct iov_iter *i); 88 88 void iov_iter_init(struct iov_iter *i, int direction, const struct iovec *iov, 89 + unsigned long nr_segs, size_t count); 90 + void iov_iter_kvec(struct iov_iter *i, int direction, const struct kvec *iov, 89 91 unsigned long nr_segs, size_t count); 90 92 ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages, 91 93 size_t maxsize, unsigned maxpages, size_t *start); ··· 127 123 { 128 124 i->count = count; 129 125 } 126 + size_t csum_and_copy_to_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i); 127 + size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i); 130 128 131 129 int memcpy_fromiovec(unsigned char *kdata, struct iovec *iov, int len); 132 130 int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len);
+1 -1
include/net/netfilter/nf_conntrack_core.h
··· 72 72 return ret; 73 73 } 74 74 75 - int 75 + void 76 76 print_tuple(struct seq_file *s, const struct nf_conntrack_tuple *tuple, 77 77 const struct nf_conntrack_l3proto *l3proto, 78 78 const struct nf_conntrack_l4proto *proto);
+2 -2
include/net/netfilter/nf_conntrack_l3proto.h
··· 38 38 const struct nf_conntrack_tuple *orig); 39 39 40 40 /* Print out the per-protocol part of the tuple. */ 41 - int (*print_tuple)(struct seq_file *s, 42 - const struct nf_conntrack_tuple *); 41 + void (*print_tuple)(struct seq_file *s, 42 + const struct nf_conntrack_tuple *); 43 43 44 44 /* 45 45 * Called before tracking.
+3 -3
include/net/netfilter/nf_conntrack_l4proto.h
··· 56 56 u_int8_t pf, unsigned int hooknum); 57 57 58 58 /* Print out the per-protocol part of the tuple. Return like seq_* */ 59 - int (*print_tuple)(struct seq_file *s, 60 - const struct nf_conntrack_tuple *); 59 + void (*print_tuple)(struct seq_file *s, 60 + const struct nf_conntrack_tuple *); 61 61 62 62 /* Print out the private part of the conntrack. */ 63 - int (*print_conntrack)(struct seq_file *s, struct nf_conn *); 63 + void (*print_conntrack)(struct seq_file *s, struct nf_conn *); 64 64 65 65 /* Return the array of timeouts for this protocol. */ 66 66 unsigned int *(*get_timeouts)(struct net *net);
+2 -2
ipc/mqueue.c
··· 990 990 goto out_fput; 991 991 } 992 992 info = MQUEUE_I(inode); 993 - audit_inode(NULL, f.file->f_path.dentry, 0); 993 + audit_file(f.file); 994 994 995 995 if (unlikely(!(f.file->f_mode & FMODE_WRITE))) { 996 996 ret = -EBADF; ··· 1106 1106 goto out_fput; 1107 1107 } 1108 1108 info = MQUEUE_I(inode); 1109 - audit_inode(NULL, f.file->f_path.dentry, 0); 1109 + audit_file(f.file); 1110 1110 1111 1111 if (unlikely(!(f.file->f_mode & FMODE_READ))) { 1112 1112 ret = -EBADF;
+6 -1
kernel/auditsc.c
··· 1897 1897 audit_copy_inode(n, dentry, inode); 1898 1898 } 1899 1899 1900 + void __audit_file(const struct file *file) 1901 + { 1902 + __audit_inode(NULL, file->f_path.dentry, 0); 1903 + } 1904 + 1900 1905 /** 1901 1906 * __audit_inode_child - collect inode info for created/removed objects 1902 1907 * @parent: inode of dentry parent ··· 2378 2373 ax->d.next = context->aux; 2379 2374 context->aux = (void *)ax; 2380 2375 2381 - dentry = dget(bprm->file->f_dentry); 2376 + dentry = dget(bprm->file->f_path.dentry); 2382 2377 get_vfs_caps_from_disk(dentry, &vcaps); 2383 2378 dput(dentry); 2384 2379
+1 -1
kernel/events/core.c
··· 614 614 if (!f.file) 615 615 return -EBADF; 616 616 617 - css = css_tryget_online_from_dir(f.file->f_dentry, 617 + css = css_tryget_online_from_dir(f.file->f_path.dentry, 618 618 &perf_event_cgrp_subsys); 619 619 if (IS_ERR(css)) { 620 620 ret = PTR_ERR(css);
+1 -1
kernel/taskstats.c
··· 459 459 stats = nla_data(na); 460 460 memset(stats, 0, sizeof(*stats)); 461 461 462 - rc = cgroupstats_build(stats, f.file->f_dentry); 462 + rc = cgroupstats_build(stats, f.file->f_path.dentry); 463 463 if (rc < 0) { 464 464 nlmsg_free(rep_skb); 465 465 goto err;
+2 -2
kernel/trace/trace.c
··· 6417 6417 int ret; 6418 6418 6419 6419 /* Paranoid: Make sure the parent is the "instances" directory */ 6420 - parent = hlist_entry(inode->i_dentry.first, struct dentry, d_alias); 6420 + parent = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias); 6421 6421 if (WARN_ON_ONCE(parent != trace_instance_dir)) 6422 6422 return -ENOENT; 6423 6423 ··· 6444 6444 int ret; 6445 6445 6446 6446 /* Paranoid: Make sure the parent is the "instances" directory */ 6447 - parent = hlist_entry(inode->i_dentry.first, struct dentry, d_alias); 6447 + parent = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias); 6448 6448 if (WARN_ON_ONCE(parent != trace_instance_dir)) 6449 6449 return -ENOENT; 6450 6450
+1 -1
kernel/trace/trace_events.c
··· 461 461 462 462 if (dir) { 463 463 spin_lock(&dir->d_lock); /* probably unneeded */ 464 - list_for_each_entry(child, &dir->d_subdirs, d_u.d_child) { 464 + list_for_each_entry(child, &dir->d_subdirs, d_child) { 465 465 if (child->d_inode) /* probably unneeded */ 466 466 child->d_inode->i_private = NULL; 467 467 }
+425 -643
mm/iov_iter.c
··· 3 3 #include <linux/pagemap.h> 4 4 #include <linux/slab.h> 5 5 #include <linux/vmalloc.h> 6 + #include <net/checksum.h> 6 7 7 - static size_t copy_to_iter_iovec(void *from, size_t bytes, struct iov_iter *i) 8 - { 9 - size_t skip, copy, left, wanted; 10 - const struct iovec *iov; 11 - char __user *buf; 12 - 13 - if (unlikely(bytes > i->count)) 14 - bytes = i->count; 15 - 16 - if (unlikely(!bytes)) 17 - return 0; 18 - 19 - wanted = bytes; 20 - iov = i->iov; 21 - skip = i->iov_offset; 22 - buf = iov->iov_base + skip; 23 - copy = min(bytes, iov->iov_len - skip); 24 - 25 - left = __copy_to_user(buf, from, copy); 26 - copy -= left; 27 - skip += copy; 28 - from += copy; 29 - bytes -= copy; 30 - while (unlikely(!left && bytes)) { 31 - iov++; 32 - buf = iov->iov_base; 33 - copy = min(bytes, iov->iov_len); 34 - left = __copy_to_user(buf, from, copy); 35 - copy -= left; 36 - skip = copy; 37 - from += copy; 38 - bytes -= copy; 39 - } 40 - 41 - if (skip == iov->iov_len) { 42 - iov++; 43 - skip = 0; 44 - } 45 - i->count -= wanted - bytes; 46 - i->nr_segs -= iov - i->iov; 47 - i->iov = iov; 48 - i->iov_offset = skip; 49 - return wanted - bytes; 8 + #define iterate_iovec(i, n, __v, __p, skip, STEP) { \ 9 + size_t left; \ 10 + size_t wanted = n; \ 11 + __p = i->iov; \ 12 + __v.iov_len = min(n, __p->iov_len - skip); \ 13 + if (likely(__v.iov_len)) { \ 14 + __v.iov_base = __p->iov_base + skip; \ 15 + left = (STEP); \ 16 + __v.iov_len -= left; \ 17 + skip += __v.iov_len; \ 18 + n -= __v.iov_len; \ 19 + } else { \ 20 + left = 0; \ 21 + } \ 22 + while (unlikely(!left && n)) { \ 23 + __p++; \ 24 + __v.iov_len = min(n, __p->iov_len); \ 25 + if (unlikely(!__v.iov_len)) \ 26 + continue; \ 27 + __v.iov_base = __p->iov_base; \ 28 + left = (STEP); \ 29 + __v.iov_len -= left; \ 30 + skip = __v.iov_len; \ 31 + n -= __v.iov_len; \ 32 + } \ 33 + n = wanted - n; \ 50 34 } 51 35 52 - static size_t copy_from_iter_iovec(void *to, size_t bytes, struct iov_iter *i) 53 - { 54 - size_t skip, copy, 
left, wanted; 55 - const struct iovec *iov; 56 - char __user *buf; 36 + #define iterate_kvec(i, n, __v, __p, skip, STEP) { \ 37 + size_t wanted = n; \ 38 + __p = i->kvec; \ 39 + __v.iov_len = min(n, __p->iov_len - skip); \ 40 + if (likely(__v.iov_len)) { \ 41 + __v.iov_base = __p->iov_base + skip; \ 42 + (void)(STEP); \ 43 + skip += __v.iov_len; \ 44 + n -= __v.iov_len; \ 45 + } \ 46 + while (unlikely(n)) { \ 47 + __p++; \ 48 + __v.iov_len = min(n, __p->iov_len); \ 49 + if (unlikely(!__v.iov_len)) \ 50 + continue; \ 51 + __v.iov_base = __p->iov_base; \ 52 + (void)(STEP); \ 53 + skip = __v.iov_len; \ 54 + n -= __v.iov_len; \ 55 + } \ 56 + n = wanted; \ 57 + } 57 58 58 - if (unlikely(bytes > i->count)) 59 - bytes = i->count; 59 + #define iterate_bvec(i, n, __v, __p, skip, STEP) { \ 60 + size_t wanted = n; \ 61 + __p = i->bvec; \ 62 + __v.bv_len = min_t(size_t, n, __p->bv_len - skip); \ 63 + if (likely(__v.bv_len)) { \ 64 + __v.bv_page = __p->bv_page; \ 65 + __v.bv_offset = __p->bv_offset + skip; \ 66 + (void)(STEP); \ 67 + skip += __v.bv_len; \ 68 + n -= __v.bv_len; \ 69 + } \ 70 + while (unlikely(n)) { \ 71 + __p++; \ 72 + __v.bv_len = min_t(size_t, n, __p->bv_len); \ 73 + if (unlikely(!__v.bv_len)) \ 74 + continue; \ 75 + __v.bv_page = __p->bv_page; \ 76 + __v.bv_offset = __p->bv_offset; \ 77 + (void)(STEP); \ 78 + skip = __v.bv_len; \ 79 + n -= __v.bv_len; \ 80 + } \ 81 + n = wanted; \ 82 + } 60 83 61 - if (unlikely(!bytes)) 62 - return 0; 84 + #define iterate_all_kinds(i, n, v, I, B, K) { \ 85 + size_t skip = i->iov_offset; \ 86 + if (unlikely(i->type & ITER_BVEC)) { \ 87 + const struct bio_vec *bvec; \ 88 + struct bio_vec v; \ 89 + iterate_bvec(i, n, v, bvec, skip, (B)) \ 90 + } else if (unlikely(i->type & ITER_KVEC)) { \ 91 + const struct kvec *kvec; \ 92 + struct kvec v; \ 93 + iterate_kvec(i, n, v, kvec, skip, (K)) \ 94 + } else { \ 95 + const struct iovec *iov; \ 96 + struct iovec v; \ 97 + iterate_iovec(i, n, v, iov, skip, (I)) \ 98 + } \ 99 + } 63 100 64 - 
wanted = bytes; 65 - iov = i->iov; 66 - skip = i->iov_offset; 67 - buf = iov->iov_base + skip; 68 - copy = min(bytes, iov->iov_len - skip); 69 - 70 - left = __copy_from_user(to, buf, copy); 71 - copy -= left; 72 - skip += copy; 73 - to += copy; 74 - bytes -= copy; 75 - while (unlikely(!left && bytes)) { 76 - iov++; 77 - buf = iov->iov_base; 78 - copy = min(bytes, iov->iov_len); 79 - left = __copy_from_user(to, buf, copy); 80 - copy -= left; 81 - skip = copy; 82 - to += copy; 83 - bytes -= copy; 84 - } 85 - 86 - if (skip == iov->iov_len) { 87 - iov++; 88 - skip = 0; 89 - } 90 - i->count -= wanted - bytes; 91 - i->nr_segs -= iov - i->iov; 92 - i->iov = iov; 93 - i->iov_offset = skip; 94 - return wanted - bytes; 101 + #define iterate_and_advance(i, n, v, I, B, K) { \ 102 + size_t skip = i->iov_offset; \ 103 + if (unlikely(i->type & ITER_BVEC)) { \ 104 + const struct bio_vec *bvec; \ 105 + struct bio_vec v; \ 106 + iterate_bvec(i, n, v, bvec, skip, (B)) \ 107 + if (skip == bvec->bv_len) { \ 108 + bvec++; \ 109 + skip = 0; \ 110 + } \ 111 + i->nr_segs -= bvec - i->bvec; \ 112 + i->bvec = bvec; \ 113 + } else if (unlikely(i->type & ITER_KVEC)) { \ 114 + const struct kvec *kvec; \ 115 + struct kvec v; \ 116 + iterate_kvec(i, n, v, kvec, skip, (K)) \ 117 + if (skip == kvec->iov_len) { \ 118 + kvec++; \ 119 + skip = 0; \ 120 + } \ 121 + i->nr_segs -= kvec - i->kvec; \ 122 + i->kvec = kvec; \ 123 + } else { \ 124 + const struct iovec *iov; \ 125 + struct iovec v; \ 126 + iterate_iovec(i, n, v, iov, skip, (I)) \ 127 + if (skip == iov->iov_len) { \ 128 + iov++; \ 129 + skip = 0; \ 130 + } \ 131 + i->nr_segs -= iov - i->iov; \ 132 + i->iov = iov; \ 133 + } \ 134 + i->count -= n; \ 135 + i->iov_offset = skip; \ 95 136 } 96 137 97 138 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes, ··· 297 256 return wanted - bytes; 298 257 } 299 258 300 - static size_t zero_iovec(size_t bytes, struct iov_iter *i) 301 - { 302 - size_t skip, copy, left, 
wanted; 303 - const struct iovec *iov; 304 - char __user *buf; 305 - 306 - if (unlikely(bytes > i->count)) 307 - bytes = i->count; 308 - 309 - if (unlikely(!bytes)) 310 - return 0; 311 - 312 - wanted = bytes; 313 - iov = i->iov; 314 - skip = i->iov_offset; 315 - buf = iov->iov_base + skip; 316 - copy = min(bytes, iov->iov_len - skip); 317 - 318 - left = __clear_user(buf, copy); 319 - copy -= left; 320 - skip += copy; 321 - bytes -= copy; 322 - 323 - while (unlikely(!left && bytes)) { 324 - iov++; 325 - buf = iov->iov_base; 326 - copy = min(bytes, iov->iov_len); 327 - left = __clear_user(buf, copy); 328 - copy -= left; 329 - skip = copy; 330 - bytes -= copy; 331 - } 332 - 333 - if (skip == iov->iov_len) { 334 - iov++; 335 - skip = 0; 336 - } 337 - i->count -= wanted - bytes; 338 - i->nr_segs -= iov - i->iov; 339 - i->iov = iov; 340 - i->iov_offset = skip; 341 - return wanted - bytes; 342 - } 343 - 344 - static size_t __iovec_copy_from_user_inatomic(char *vaddr, 345 - const struct iovec *iov, size_t base, size_t bytes) 346 - { 347 - size_t copied = 0, left = 0; 348 - 349 - while (bytes) { 350 - char __user *buf = iov->iov_base + base; 351 - int copy = min(bytes, iov->iov_len - base); 352 - 353 - base = 0; 354 - left = __copy_from_user_inatomic(vaddr, buf, copy); 355 - copied += copy; 356 - bytes -= copy; 357 - vaddr += copy; 358 - iov++; 359 - 360 - if (unlikely(left)) 361 - break; 362 - } 363 - return copied - left; 364 - } 365 - 366 - /* 367 - * Copy as much as we can into the page and return the number of bytes which 368 - * were successfully copied. If a fault is encountered then return the number of 369 - * bytes which were copied. 
370 - */ 371 - static size_t copy_from_user_atomic_iovec(struct page *page, 372 - struct iov_iter *i, unsigned long offset, size_t bytes) 373 - { 374 - char *kaddr; 375 - size_t copied; 376 - 377 - kaddr = kmap_atomic(page); 378 - if (likely(i->nr_segs == 1)) { 379 - int left; 380 - char __user *buf = i->iov->iov_base + i->iov_offset; 381 - left = __copy_from_user_inatomic(kaddr + offset, buf, bytes); 382 - copied = bytes - left; 383 - } else { 384 - copied = __iovec_copy_from_user_inatomic(kaddr + offset, 385 - i->iov, i->iov_offset, bytes); 386 - } 387 - kunmap_atomic(kaddr); 388 - 389 - return copied; 390 - } 391 - 392 - static void advance_iovec(struct iov_iter *i, size_t bytes) 393 - { 394 - BUG_ON(i->count < bytes); 395 - 396 - if (likely(i->nr_segs == 1)) { 397 - i->iov_offset += bytes; 398 - i->count -= bytes; 399 - } else { 400 - const struct iovec *iov = i->iov; 401 - size_t base = i->iov_offset; 402 - unsigned long nr_segs = i->nr_segs; 403 - 404 - /* 405 - * The !iov->iov_len check ensures we skip over unlikely 406 - * zero-length segments (without overruning the iovec). 407 - */ 408 - while (bytes || unlikely(i->count && !iov->iov_len)) { 409 - int copy; 410 - 411 - copy = min(bytes, iov->iov_len - base); 412 - BUG_ON(!i->count || i->count < copy); 413 - i->count -= copy; 414 - bytes -= copy; 415 - base += copy; 416 - if (iov->iov_len == base) { 417 - iov++; 418 - nr_segs--; 419 - base = 0; 420 - } 421 - } 422 - i->iov = iov; 423 - i->iov_offset = base; 424 - i->nr_segs = nr_segs; 425 - } 426 - } 427 - 428 259 /* 429 260 * Fault in the first iovec of the given iov_iter, to a maximum length 430 261 * of bytes. 
Returns 0 on success, or non-zero if the memory could not be ··· 308 395 */ 309 396 int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes) 310 397 { 311 - if (!(i->type & ITER_BVEC)) { 398 + if (!(i->type & (ITER_BVEC|ITER_KVEC))) { 312 399 char __user *buf = i->iov->iov_base + i->iov_offset; 313 400 bytes = min(bytes, i->iov->iov_len - i->iov_offset); 314 401 return fault_in_pages_readable(buf, bytes); ··· 317 404 } 318 405 EXPORT_SYMBOL(iov_iter_fault_in_readable); 319 406 320 - static unsigned long alignment_iovec(const struct iov_iter *i) 321 - { 322 - const struct iovec *iov = i->iov; 323 - unsigned long res; 324 - size_t size = i->count; 325 - size_t n; 326 - 327 - if (!size) 328 - return 0; 329 - 330 - res = (unsigned long)iov->iov_base + i->iov_offset; 331 - n = iov->iov_len - i->iov_offset; 332 - if (n >= size) 333 - return res | size; 334 - size -= n; 335 - res |= n; 336 - while (size > (++iov)->iov_len) { 337 - res |= (unsigned long)iov->iov_base | iov->iov_len; 338 - size -= iov->iov_len; 339 - } 340 - res |= (unsigned long)iov->iov_base | size; 341 - return res; 342 - } 343 - 344 407 void iov_iter_init(struct iov_iter *i, int direction, 345 408 const struct iovec *iov, unsigned long nr_segs, 346 409 size_t count) 347 410 { 348 411 /* It will get better. Eventually... 
*/ 349 - if (segment_eq(get_fs(), KERNEL_DS)) 412 + if (segment_eq(get_fs(), KERNEL_DS)) { 350 413 direction |= ITER_KVEC; 351 - i->type = direction; 352 - i->iov = iov; 414 + i->type = direction; 415 + i->kvec = (struct kvec *)iov; 416 + } else { 417 + i->type = direction; 418 + i->iov = iov; 419 + } 353 420 i->nr_segs = nr_segs; 354 421 i->iov_offset = 0; 355 422 i->count = count; 356 423 } 357 424 EXPORT_SYMBOL(iov_iter_init); 358 - 359 - static ssize_t get_pages_iovec(struct iov_iter *i, 360 - struct page **pages, size_t maxsize, unsigned maxpages, 361 - size_t *start) 362 - { 363 - size_t offset = i->iov_offset; 364 - const struct iovec *iov = i->iov; 365 - size_t len; 366 - unsigned long addr; 367 - int n; 368 - int res; 369 - 370 - len = iov->iov_len - offset; 371 - if (len > i->count) 372 - len = i->count; 373 - if (len > maxsize) 374 - len = maxsize; 375 - addr = (unsigned long)iov->iov_base + offset; 376 - len += *start = addr & (PAGE_SIZE - 1); 377 - if (len > maxpages * PAGE_SIZE) 378 - len = maxpages * PAGE_SIZE; 379 - addr &= ~(PAGE_SIZE - 1); 380 - n = (len + PAGE_SIZE - 1) / PAGE_SIZE; 381 - res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, pages); 382 - if (unlikely(res < 0)) 383 - return res; 384 - return (res == n ? 
len : res * PAGE_SIZE) - *start; 385 - } 386 - 387 - static ssize_t get_pages_alloc_iovec(struct iov_iter *i, 388 - struct page ***pages, size_t maxsize, 389 - size_t *start) 390 - { 391 - size_t offset = i->iov_offset; 392 - const struct iovec *iov = i->iov; 393 - size_t len; 394 - unsigned long addr; 395 - void *p; 396 - int n; 397 - int res; 398 - 399 - len = iov->iov_len - offset; 400 - if (len > i->count) 401 - len = i->count; 402 - if (len > maxsize) 403 - len = maxsize; 404 - addr = (unsigned long)iov->iov_base + offset; 405 - len += *start = addr & (PAGE_SIZE - 1); 406 - addr &= ~(PAGE_SIZE - 1); 407 - n = (len + PAGE_SIZE - 1) / PAGE_SIZE; 408 - 409 - p = kmalloc(n * sizeof(struct page *), GFP_KERNEL); 410 - if (!p) 411 - p = vmalloc(n * sizeof(struct page *)); 412 - if (!p) 413 - return -ENOMEM; 414 - 415 - res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, p); 416 - if (unlikely(res < 0)) { 417 - kvfree(p); 418 - return res; 419 - } 420 - *pages = p; 421 - return (res == n ? 
len : res * PAGE_SIZE) - *start; 422 - } 423 - 424 - static int iov_iter_npages_iovec(const struct iov_iter *i, int maxpages) 425 - { 426 - size_t offset = i->iov_offset; 427 - size_t size = i->count; 428 - const struct iovec *iov = i->iov; 429 - int npages = 0; 430 - int n; 431 - 432 - for (n = 0; size && n < i->nr_segs; n++, iov++) { 433 - unsigned long addr = (unsigned long)iov->iov_base + offset; 434 - size_t len = iov->iov_len - offset; 435 - offset = 0; 436 - if (unlikely(!len)) /* empty segment */ 437 - continue; 438 - if (len > size) 439 - len = size; 440 - npages += (addr + len + PAGE_SIZE - 1) / PAGE_SIZE 441 - - addr / PAGE_SIZE; 442 - if (npages >= maxpages) /* don't bother going further */ 443 - return maxpages; 444 - size -= len; 445 - offset = 0; 446 - } 447 - return min(npages, maxpages); 448 - } 449 425 450 426 static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len) 451 427 { ··· 357 555 kunmap_atomic(addr); 358 556 } 359 557 360 - static size_t copy_to_iter_bvec(void *from, size_t bytes, struct iov_iter *i) 558 + size_t copy_to_iter(void *addr, size_t bytes, struct iov_iter *i) 361 559 { 362 - size_t skip, copy, wanted; 363 - const struct bio_vec *bvec; 364 - 560 + char *from = addr; 365 561 if (unlikely(bytes > i->count)) 366 562 bytes = i->count; 367 563 368 564 if (unlikely(!bytes)) 369 565 return 0; 370 566 371 - wanted = bytes; 372 - bvec = i->bvec; 373 - skip = i->iov_offset; 374 - copy = min_t(size_t, bytes, bvec->bv_len - skip); 567 + iterate_and_advance(i, bytes, v, 568 + __copy_to_user(v.iov_base, (from += v.iov_len) - v.iov_len, 569 + v.iov_len), 570 + memcpy_to_page(v.bv_page, v.bv_offset, 571 + (from += v.bv_len) - v.bv_len, v.bv_len), 572 + memcpy(v.iov_base, (from += v.iov_len) - v.iov_len, v.iov_len) 573 + ) 375 574 376 - memcpy_to_page(bvec->bv_page, skip + bvec->bv_offset, from, copy); 377 - skip += copy; 378 - from += copy; 379 - bytes -= copy; 380 - while (bytes) { 381 - bvec++; 382 - copy = 
min(bytes, (size_t)bvec->bv_len); 383 - memcpy_to_page(bvec->bv_page, bvec->bv_offset, from, copy); 384 - skip = copy; 385 - from += copy; 386 - bytes -= copy; 387 - } 388 - if (skip == bvec->bv_len) { 389 - bvec++; 390 - skip = 0; 391 - } 392 - i->count -= wanted - bytes; 393 - i->nr_segs -= bvec - i->bvec; 394 - i->bvec = bvec; 395 - i->iov_offset = skip; 396 - return wanted - bytes; 397 - } 398 - 399 - static size_t copy_from_iter_bvec(void *to, size_t bytes, struct iov_iter *i) 400 - { 401 - size_t skip, copy, wanted; 402 - const struct bio_vec *bvec; 403 - 404 - if (unlikely(bytes > i->count)) 405 - bytes = i->count; 406 - 407 - if (unlikely(!bytes)) 408 - return 0; 409 - 410 - wanted = bytes; 411 - bvec = i->bvec; 412 - skip = i->iov_offset; 413 - 414 - copy = min(bytes, bvec->bv_len - skip); 415 - 416 - memcpy_from_page(to, bvec->bv_page, bvec->bv_offset + skip, copy); 417 - 418 - to += copy; 419 - skip += copy; 420 - bytes -= copy; 421 - 422 - while (bytes) { 423 - bvec++; 424 - copy = min(bytes, (size_t)bvec->bv_len); 425 - memcpy_from_page(to, bvec->bv_page, bvec->bv_offset, copy); 426 - skip = copy; 427 - to += copy; 428 - bytes -= copy; 429 - } 430 - if (skip == bvec->bv_len) { 431 - bvec++; 432 - skip = 0; 433 - } 434 - i->count -= wanted; 435 - i->nr_segs -= bvec - i->bvec; 436 - i->bvec = bvec; 437 - i->iov_offset = skip; 438 - return wanted; 439 - } 440 - 441 - static size_t copy_page_to_iter_bvec(struct page *page, size_t offset, 442 - size_t bytes, struct iov_iter *i) 443 - { 444 - void *kaddr = kmap_atomic(page); 445 - size_t wanted = copy_to_iter_bvec(kaddr + offset, bytes, i); 446 - kunmap_atomic(kaddr); 447 - return wanted; 448 - } 449 - 450 - static size_t copy_page_from_iter_bvec(struct page *page, size_t offset, 451 - size_t bytes, struct iov_iter *i) 452 - { 453 - void *kaddr = kmap_atomic(page); 454 - size_t wanted = copy_from_iter_bvec(kaddr + offset, bytes, i); 455 - kunmap_atomic(kaddr); 456 - return wanted; 457 - } 458 - 459 - static 
size_t zero_bvec(size_t bytes, struct iov_iter *i) 460 - { 461 - size_t skip, copy, wanted; 462 - const struct bio_vec *bvec; 463 - 464 - if (unlikely(bytes > i->count)) 465 - bytes = i->count; 466 - 467 - if (unlikely(!bytes)) 468 - return 0; 469 - 470 - wanted = bytes; 471 - bvec = i->bvec; 472 - skip = i->iov_offset; 473 - copy = min_t(size_t, bytes, bvec->bv_len - skip); 474 - 475 - memzero_page(bvec->bv_page, skip + bvec->bv_offset, copy); 476 - skip += copy; 477 - bytes -= copy; 478 - while (bytes) { 479 - bvec++; 480 - copy = min(bytes, (size_t)bvec->bv_len); 481 - memzero_page(bvec->bv_page, bvec->bv_offset, copy); 482 - skip = copy; 483 - bytes -= copy; 484 - } 485 - if (skip == bvec->bv_len) { 486 - bvec++; 487 - skip = 0; 488 - } 489 - i->count -= wanted - bytes; 490 - i->nr_segs -= bvec - i->bvec; 491 - i->bvec = bvec; 492 - i->iov_offset = skip; 493 - return wanted - bytes; 494 - } 495 - 496 - static size_t copy_from_user_bvec(struct page *page, 497 - struct iov_iter *i, unsigned long offset, size_t bytes) 498 - { 499 - char *kaddr; 500 - size_t left; 501 - const struct bio_vec *bvec; 502 - size_t base = i->iov_offset; 503 - 504 - kaddr = kmap_atomic(page); 505 - for (left = bytes, bvec = i->bvec; left; bvec++, base = 0) { 506 - size_t copy = min(left, bvec->bv_len - base); 507 - if (!bvec->bv_len) 508 - continue; 509 - memcpy_from_page(kaddr + offset, bvec->bv_page, 510 - bvec->bv_offset + base, copy); 511 - offset += copy; 512 - left -= copy; 513 - } 514 - kunmap_atomic(kaddr); 515 575 return bytes; 516 576 } 577 + EXPORT_SYMBOL(copy_to_iter); 517 578 518 - static void advance_bvec(struct iov_iter *i, size_t bytes) 579 + size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i) 519 580 { 520 - BUG_ON(i->count < bytes); 581 + char *to = addr; 582 + if (unlikely(bytes > i->count)) 583 + bytes = i->count; 521 584 522 - if (likely(i->nr_segs == 1)) { 523 - i->iov_offset += bytes; 524 - i->count -= bytes; 525 - } else { 526 - const struct 
bio_vec *bvec = i->bvec; 527 - size_t base = i->iov_offset; 528 - unsigned long nr_segs = i->nr_segs; 529 - 530 - /* 531 - * The !iov->iov_len check ensures we skip over unlikely 532 - * zero-length segments (without overruning the iovec). 533 - */ 534 - while (bytes || unlikely(i->count && !bvec->bv_len)) { 535 - int copy; 536 - 537 - copy = min(bytes, bvec->bv_len - base); 538 - BUG_ON(!i->count || i->count < copy); 539 - i->count -= copy; 540 - bytes -= copy; 541 - base += copy; 542 - if (bvec->bv_len == base) { 543 - bvec++; 544 - nr_segs--; 545 - base = 0; 546 - } 547 - } 548 - i->bvec = bvec; 549 - i->iov_offset = base; 550 - i->nr_segs = nr_segs; 551 - } 552 - } 553 - 554 - static unsigned long alignment_bvec(const struct iov_iter *i) 555 - { 556 - const struct bio_vec *bvec = i->bvec; 557 - unsigned long res; 558 - size_t size = i->count; 559 - size_t n; 560 - 561 - if (!size) 585 + if (unlikely(!bytes)) 562 586 return 0; 563 587 564 - res = bvec->bv_offset + i->iov_offset; 565 - n = bvec->bv_len - i->iov_offset; 566 - if (n >= size) 567 - return res | size; 568 - size -= n; 569 - res |= n; 570 - while (size > (++bvec)->bv_len) { 571 - res |= bvec->bv_offset | bvec->bv_len; 572 - size -= bvec->bv_len; 573 - } 574 - res |= bvec->bv_offset | size; 575 - return res; 576 - } 588 + iterate_and_advance(i, bytes, v, 589 + __copy_from_user((to += v.iov_len) - v.iov_len, v.iov_base, 590 + v.iov_len), 591 + memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page, 592 + v.bv_offset, v.bv_len), 593 + memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len) 594 + ) 577 595 578 - static ssize_t get_pages_bvec(struct iov_iter *i, 579 - struct page **pages, size_t maxsize, unsigned maxpages, 580 - size_t *start) 596 + return bytes; 597 + } 598 + EXPORT_SYMBOL(copy_from_iter); 599 + 600 + size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i) 581 601 { 582 - const struct bio_vec *bvec = i->bvec; 583 - size_t len = bvec->bv_len - i->iov_offset; 584 - 
if (len > i->count) 585 - len = i->count; 586 - if (len > maxsize) 587 - len = maxsize; 588 - /* can't be more than PAGE_SIZE */ 589 - *start = bvec->bv_offset + i->iov_offset; 602 + char *to = addr; 603 + if (unlikely(bytes > i->count)) 604 + bytes = i->count; 590 605 591 - get_page(*pages = bvec->bv_page); 606 + if (unlikely(!bytes)) 607 + return 0; 592 608 593 - return len; 609 + iterate_and_advance(i, bytes, v, 610 + __copy_from_user_nocache((to += v.iov_len) - v.iov_len, 611 + v.iov_base, v.iov_len), 612 + memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page, 613 + v.bv_offset, v.bv_len), 614 + memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len) 615 + ) 616 + 617 + return bytes; 594 618 } 595 - 596 - static ssize_t get_pages_alloc_bvec(struct iov_iter *i, 597 - struct page ***pages, size_t maxsize, 598 - size_t *start) 599 - { 600 - const struct bio_vec *bvec = i->bvec; 601 - size_t len = bvec->bv_len - i->iov_offset; 602 - if (len > i->count) 603 - len = i->count; 604 - if (len > maxsize) 605 - len = maxsize; 606 - *start = bvec->bv_offset + i->iov_offset; 607 - 608 - *pages = kmalloc(sizeof(struct page *), GFP_KERNEL); 609 - if (!*pages) 610 - return -ENOMEM; 611 - 612 - get_page(**pages = bvec->bv_page); 613 - 614 - return len; 615 - } 616 - 617 - static int iov_iter_npages_bvec(const struct iov_iter *i, int maxpages) 618 - { 619 - size_t offset = i->iov_offset; 620 - size_t size = i->count; 621 - const struct bio_vec *bvec = i->bvec; 622 - int npages = 0; 623 - int n; 624 - 625 - for (n = 0; size && n < i->nr_segs; n++, bvec++) { 626 - size_t len = bvec->bv_len - offset; 627 - offset = 0; 628 - if (unlikely(!len)) /* empty segment */ 629 - continue; 630 - if (len > size) 631 - len = size; 632 - npages++; 633 - if (npages >= maxpages) /* don't bother going further */ 634 - return maxpages; 635 - size -= len; 636 - offset = 0; 637 - } 638 - return min(npages, maxpages); 639 - } 619 + EXPORT_SYMBOL(copy_from_iter_nocache); 640 620 641 621 size_t 
copy_page_to_iter(struct page *page, size_t offset, size_t bytes, 642 622 struct iov_iter *i) 643 623 { 644 - if (i->type & ITER_BVEC) 645 - return copy_page_to_iter_bvec(page, offset, bytes, i); 646 - else 624 + if (i->type & (ITER_BVEC|ITER_KVEC)) { 625 + void *kaddr = kmap_atomic(page); 626 + size_t wanted = copy_to_iter(kaddr + offset, bytes, i); 627 + kunmap_atomic(kaddr); 628 + return wanted; 629 + } else 647 630 return copy_page_to_iter_iovec(page, offset, bytes, i); 648 631 } 649 632 EXPORT_SYMBOL(copy_page_to_iter); ··· 436 849 size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes, 437 850 struct iov_iter *i) 438 851 { 439 - if (i->type & ITER_BVEC) 440 - return copy_page_from_iter_bvec(page, offset, bytes, i); 441 - else 852 + if (i->type & (ITER_BVEC|ITER_KVEC)) { 853 + void *kaddr = kmap_atomic(page); 854 + size_t wanted = copy_from_iter(kaddr + offset, bytes, i); 855 + kunmap_atomic(kaddr); 856 + return wanted; 857 + } else 442 858 return copy_page_from_iter_iovec(page, offset, bytes, i); 443 859 } 444 860 EXPORT_SYMBOL(copy_page_from_iter); 445 861 446 - size_t copy_to_iter(void *addr, size_t bytes, struct iov_iter *i) 447 - { 448 - if (i->type & ITER_BVEC) 449 - return copy_to_iter_bvec(addr, bytes, i); 450 - else 451 - return copy_to_iter_iovec(addr, bytes, i); 452 - } 453 - EXPORT_SYMBOL(copy_to_iter); 454 - 455 - size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i) 456 - { 457 - if (i->type & ITER_BVEC) 458 - return copy_from_iter_bvec(addr, bytes, i); 459 - else 460 - return copy_from_iter_iovec(addr, bytes, i); 461 - } 462 - EXPORT_SYMBOL(copy_from_iter); 463 - 464 862 size_t iov_iter_zero(size_t bytes, struct iov_iter *i) 465 863 { 466 - if (i->type & ITER_BVEC) { 467 - return zero_bvec(bytes, i); 468 - } else { 469 - return zero_iovec(bytes, i); 470 - } 864 + if (unlikely(bytes > i->count)) 865 + bytes = i->count; 866 + 867 + if (unlikely(!bytes)) 868 + return 0; 869 + 870 + iterate_and_advance(i, bytes, v, 
871 + __clear_user(v.iov_base, v.iov_len), 872 + memzero_page(v.bv_page, v.bv_offset, v.bv_len), 873 + memset(v.iov_base, 0, v.iov_len) 874 + ) 875 + 876 + return bytes; 471 877 } 472 878 EXPORT_SYMBOL(iov_iter_zero); 473 879 474 880 size_t iov_iter_copy_from_user_atomic(struct page *page, 475 881 struct iov_iter *i, unsigned long offset, size_t bytes) 476 882 { 477 - if (i->type & ITER_BVEC) 478 - return copy_from_user_bvec(page, i, offset, bytes); 479 - else 480 - return copy_from_user_atomic_iovec(page, i, offset, bytes); 883 + char *kaddr = kmap_atomic(page), *p = kaddr + offset; 884 + iterate_all_kinds(i, bytes, v, 885 + __copy_from_user_inatomic((p += v.iov_len) - v.iov_len, 886 + v.iov_base, v.iov_len), 887 + memcpy_from_page((p += v.bv_len) - v.bv_len, v.bv_page, 888 + v.bv_offset, v.bv_len), 889 + memcpy((p += v.iov_len) - v.iov_len, v.iov_base, v.iov_len) 890 + ) 891 + kunmap_atomic(kaddr); 892 + return bytes; 481 893 } 482 894 EXPORT_SYMBOL(iov_iter_copy_from_user_atomic); 483 895 484 896 void iov_iter_advance(struct iov_iter *i, size_t size) 485 897 { 486 - if (i->type & ITER_BVEC) 487 - advance_bvec(i, size); 488 - else 489 - advance_iovec(i, size); 898 + iterate_and_advance(i, size, v, 0, 0, 0) 490 899 } 491 900 EXPORT_SYMBOL(iov_iter_advance); 492 901 ··· 500 917 } 501 918 EXPORT_SYMBOL(iov_iter_single_seg_count); 502 919 920 + void iov_iter_kvec(struct iov_iter *i, int direction, 921 + const struct kvec *iov, unsigned long nr_segs, 922 + size_t count) 923 + { 924 + BUG_ON(!(direction & ITER_KVEC)); 925 + i->type = direction; 926 + i->kvec = (struct kvec *)iov; 927 + i->nr_segs = nr_segs; 928 + i->iov_offset = 0; 929 + i->count = count; 930 + } 931 + EXPORT_SYMBOL(iov_iter_kvec); 932 + 503 933 unsigned long iov_iter_alignment(const struct iov_iter *i) 504 934 { 505 - if (i->type & ITER_BVEC) 506 - return alignment_bvec(i); 507 - else 508 - return alignment_iovec(i); 935 + unsigned long res = 0; 936 + size_t size = i->count; 937 + 938 + if (!size) 939 
+ return 0; 940 + 941 + iterate_all_kinds(i, size, v, 942 + (res |= (unsigned long)v.iov_base | v.iov_len, 0), 943 + res |= v.bv_offset | v.bv_len, 944 + res |= (unsigned long)v.iov_base | v.iov_len 945 + ) 946 + return res; 509 947 } 510 948 EXPORT_SYMBOL(iov_iter_alignment); 511 949 ··· 534 930 struct page **pages, size_t maxsize, unsigned maxpages, 535 931 size_t *start) 536 932 { 537 - if (i->type & ITER_BVEC) 538 - return get_pages_bvec(i, pages, maxsize, maxpages, start); 539 - else 540 - return get_pages_iovec(i, pages, maxsize, maxpages, start); 933 + if (maxsize > i->count) 934 + maxsize = i->count; 935 + 936 + if (!maxsize) 937 + return 0; 938 + 939 + iterate_all_kinds(i, maxsize, v, ({ 940 + unsigned long addr = (unsigned long)v.iov_base; 941 + size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1)); 942 + int n; 943 + int res; 944 + 945 + if (len > maxpages * PAGE_SIZE) 946 + len = maxpages * PAGE_SIZE; 947 + addr &= ~(PAGE_SIZE - 1); 948 + n = DIV_ROUND_UP(len, PAGE_SIZE); 949 + res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, pages); 950 + if (unlikely(res < 0)) 951 + return res; 952 + return (res == n ? 
len : res * PAGE_SIZE) - *start; 953 + 0;}),({ 954 + /* can't be more than PAGE_SIZE */ 955 + *start = v.bv_offset; 956 + get_page(*pages = v.bv_page); 957 + return v.bv_len; 958 + }),({ 959 + return -EFAULT; 960 + }) 961 + ) 962 + return 0; 541 963 } 542 964 EXPORT_SYMBOL(iov_iter_get_pages); 965 + 966 + static struct page **get_pages_array(size_t n) 967 + { 968 + struct page **p = kmalloc(n * sizeof(struct page *), GFP_KERNEL); 969 + if (!p) 970 + p = vmalloc(n * sizeof(struct page *)); 971 + return p; 972 + } 543 973 544 974 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, 545 975 struct page ***pages, size_t maxsize, 546 976 size_t *start) 547 977 { 548 - if (i->type & ITER_BVEC) 549 - return get_pages_alloc_bvec(i, pages, maxsize, start); 550 - else 551 - return get_pages_alloc_iovec(i, pages, maxsize, start); 978 + struct page **p; 979 + 980 + if (maxsize > i->count) 981 + maxsize = i->count; 982 + 983 + if (!maxsize) 984 + return 0; 985 + 986 + iterate_all_kinds(i, maxsize, v, ({ 987 + unsigned long addr = (unsigned long)v.iov_base; 988 + size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1)); 989 + int n; 990 + int res; 991 + 992 + addr &= ~(PAGE_SIZE - 1); 993 + n = DIV_ROUND_UP(len, PAGE_SIZE); 994 + p = get_pages_array(n); 995 + if (!p) 996 + return -ENOMEM; 997 + res = get_user_pages_fast(addr, n, (i->type & WRITE) != WRITE, p); 998 + if (unlikely(res < 0)) { 999 + kvfree(p); 1000 + return res; 1001 + } 1002 + *pages = p; 1003 + return (res == n ? 
len : res * PAGE_SIZE) - *start; 1004 + 0;}),({ 1005 + /* can't be more than PAGE_SIZE */ 1006 + *start = v.bv_offset; 1007 + *pages = p = get_pages_array(1); 1008 + if (!p) 1009 + return -ENOMEM; 1010 + get_page(*p = v.bv_page); 1011 + return v.bv_len; 1012 + }),({ 1013 + return -EFAULT; 1014 + }) 1015 + ) 1016 + return 0; 552 1017 } 553 1018 EXPORT_SYMBOL(iov_iter_get_pages_alloc); 554 1019 1020 + size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, 1021 + struct iov_iter *i) 1022 + { 1023 + char *to = addr; 1024 + __wsum sum, next; 1025 + size_t off = 0; 1026 + if (unlikely(bytes > i->count)) 1027 + bytes = i->count; 1028 + 1029 + if (unlikely(!bytes)) 1030 + return 0; 1031 + 1032 + sum = *csum; 1033 + iterate_and_advance(i, bytes, v, ({ 1034 + int err = 0; 1035 + next = csum_and_copy_from_user(v.iov_base, 1036 + (to += v.iov_len) - v.iov_len, 1037 + v.iov_len, 0, &err); 1038 + if (!err) { 1039 + sum = csum_block_add(sum, next, off); 1040 + off += v.iov_len; 1041 + } 1042 + err ? 
v.iov_len : 0; 1043 + }), ({ 1044 + char *p = kmap_atomic(v.bv_page); 1045 + next = csum_partial_copy_nocheck(p + v.bv_offset, 1046 + (to += v.bv_len) - v.bv_len, 1047 + v.bv_len, 0); 1048 + kunmap_atomic(p); 1049 + sum = csum_block_add(sum, next, off); 1050 + off += v.bv_len; 1051 + }),({ 1052 + next = csum_partial_copy_nocheck(v.iov_base, 1053 + (to += v.iov_len) - v.iov_len, 1054 + v.iov_len, 0); 1055 + sum = csum_block_add(sum, next, off); 1056 + off += v.iov_len; 1057 + }) 1058 + ) 1059 + *csum = sum; 1060 + return bytes; 1061 + } 1062 + EXPORT_SYMBOL(csum_and_copy_from_iter); 1063 + 1064 + size_t csum_and_copy_to_iter(void *addr, size_t bytes, __wsum *csum, 1065 + struct iov_iter *i) 1066 + { 1067 + char *from = addr; 1068 + __wsum sum, next; 1069 + size_t off = 0; 1070 + if (unlikely(bytes > i->count)) 1071 + bytes = i->count; 1072 + 1073 + if (unlikely(!bytes)) 1074 + return 0; 1075 + 1076 + sum = *csum; 1077 + iterate_and_advance(i, bytes, v, ({ 1078 + int err = 0; 1079 + next = csum_and_copy_to_user((from += v.iov_len) - v.iov_len, 1080 + v.iov_base, 1081 + v.iov_len, 0, &err); 1082 + if (!err) { 1083 + sum = csum_block_add(sum, next, off); 1084 + off += v.iov_len; 1085 + } 1086 + err ? 
v.iov_len : 0; 1087 + }), ({ 1088 + char *p = kmap_atomic(v.bv_page); 1089 + next = csum_partial_copy_nocheck((from += v.bv_len) - v.bv_len, 1090 + p + v.bv_offset, 1091 + v.bv_len, 0); 1092 + kunmap_atomic(p); 1093 + sum = csum_block_add(sum, next, off); 1094 + off += v.bv_len; 1095 + }),({ 1096 + next = csum_partial_copy_nocheck((from += v.iov_len) - v.iov_len, 1097 + v.iov_base, 1098 + v.iov_len, 0); 1099 + sum = csum_block_add(sum, next, off); 1100 + off += v.iov_len; 1101 + }) 1102 + ) 1103 + *csum = sum; 1104 + return bytes; 1105 + } 1106 + EXPORT_SYMBOL(csum_and_copy_to_iter); 1107 + 555 1108 int iov_iter_npages(const struct iov_iter *i, int maxpages) 556 1109 { 557 - if (i->type & ITER_BVEC) 558 - return iov_iter_npages_bvec(i, maxpages); 559 - else 560 - return iov_iter_npages_iovec(i, maxpages); 1110 + size_t size = i->count; 1111 + int npages = 0; 1112 + 1113 + if (!size) 1114 + return 0; 1115 + 1116 + iterate_all_kinds(i, size, v, ({ 1117 + unsigned long p = (unsigned long)v.iov_base; 1118 + npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE) 1119 + - p / PAGE_SIZE; 1120 + if (npages >= maxpages) 1121 + return maxpages; 1122 + 0;}),({ 1123 + npages++; 1124 + if (npages >= maxpages) 1125 + return maxpages; 1126 + }),({ 1127 + unsigned long p = (unsigned long)v.iov_base; 1128 + npages += DIV_ROUND_UP(p + v.iov_len, PAGE_SIZE) 1129 + - p / PAGE_SIZE; 1130 + if (npages >= maxpages) 1131 + return maxpages; 1132 + }) 1133 + ) 1134 + return npages; 561 1135 } 562 1136 EXPORT_SYMBOL(iov_iter_npages);
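The page arithmetic that `iov_iter_get_pages()` and `iov_iter_npages()` share above (offset into the first page via `addr & (PAGE_SIZE - 1)`, span count via `DIV_ROUND_UP`) is easy to get wrong by one. A minimal userspace sketch of just that math, assuming a 4096-byte page for illustration (this is not the kernel API, only the formulas from the hunks above):

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Offset of addr within its page, as computed before pinning:
 * *start = addr & (PAGE_SIZE - 1) */
static unsigned long first_page_offset(unsigned long addr)
{
	return addr & (PAGE_SIZE - 1);
}

/* Number of pages a (addr, len) range touches -- the same formula
 * iov_iter_npages() uses: round the end up to a page boundary,
 * round the start down, and take the difference. */
static unsigned long pages_spanned(unsigned long addr, size_t len)
{
	return DIV_ROUND_UP(addr + len, PAGE_SIZE) - addr / PAGE_SIZE;
}
```

An unaligned buffer that fits in one page's worth of bytes can still straddle two pages, which is why the length is first padded by `*start` before `DIV_ROUND_UP` in `iov_iter_get_pages()`.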
+2 -2
mm/memcontrol.c
··· 5064 5064 * 5065 5065 * DO NOT ADD NEW FILES. 5066 5066 */ 5067 - name = cfile.file->f_dentry->d_name.name; 5067 + name = cfile.file->f_path.dentry->d_name.name; 5068 5068 5069 5069 if (!strcmp(name, "memory.usage_in_bytes")) { 5070 5070 event->register_event = mem_cgroup_usage_register_event; ··· 5088 5088 * automatically removed on cgroup destruction but the removal is 5089 5089 * asynchronous, so take an extra ref on @css. 5090 5090 */ 5091 - cfile_css = css_tryget_online_from_dir(cfile.file->f_dentry->d_parent, 5091 + cfile_css = css_tryget_online_from_dir(cfile.file->f_path.dentry->d_parent, 5092 5092 &memory_cgrp_subsys); 5093 5093 ret = -EINVAL; 5094 5094 if (IS_ERR(cfile_css))
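This hunk belongs to the "kill f_dentry macro" series from the pull: `f_dentry` was a compatibility macro expanding to `f_path.dentry`, and the series open-codes the full path so that dentry-sensitive users become easy to audit. A userspace sketch of the relationship, with deliberately minimal stand-in structs (not the real kernel layouts):

```c
#include <stddef.h>

/* Hypothetical minimal stand-ins for the kernel structures. */
struct dentry { const char *d_name; };
struct path   { struct dentry *dentry; };
struct file   { struct path f_path; };

/* The removed compatibility macro was essentially this: */
#define f_dentry f_path.dentry

static const char *file_name_old(struct file *f)
{
	return f->f_dentry->d_name;      /* via the macro */
}

static const char *file_name_new(struct file *f)
{
	return f->f_path.dentry->d_name; /* spelled out, as in the patch */
}

/* Both spellings reach the same dentry. */
static int names_agree(void)
{
	struct dentry d = { "memory.usage_in_bytes" };
	struct file f = { { &d } };

	return file_name_old(&f) == file_name_new(&f);
}
```

The two accessors are identical after preprocessing; the conversion is purely about making every remaining `f_path.dentry` user visible, which the merge message says matters for unionmount/overlayfs.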
+3 -3
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
··· 56 56 return true; 57 57 } 58 58 59 - static int ipv4_print_tuple(struct seq_file *s, 59 + static void ipv4_print_tuple(struct seq_file *s, 60 60 const struct nf_conntrack_tuple *tuple) 61 61 { 62 - return seq_printf(s, "src=%pI4 dst=%pI4 ", 63 - &tuple->src.u3.ip, &tuple->dst.u3.ip); 62 + seq_printf(s, "src=%pI4 dst=%pI4 ", 63 + &tuple->src.u3.ip, &tuple->dst.u3.ip); 64 64 } 65 65 66 66 static int ipv4_get_l4proto(const struct sk_buff *skb, unsigned int nhoff,
+28 -25
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4_compat.c
··· 94 94 } 95 95 96 96 #ifdef CONFIG_NF_CONNTRACK_SECMARK 97 - static int ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 97 + static void ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 98 98 { 99 99 int ret; 100 100 u32 len; ··· 102 102 103 103 ret = security_secid_to_secctx(ct->secmark, &secctx, &len); 104 104 if (ret) 105 - return 0; 105 + return; 106 106 107 - ret = seq_printf(s, "secctx=%s ", secctx); 107 + seq_printf(s, "secctx=%s ", secctx); 108 108 109 109 security_release_secctx(secctx, len); 110 - return ret; 111 110 } 112 111 #else 113 - static inline int ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 112 + static inline void ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 114 113 { 115 - return 0; 116 114 } 117 115 #endif 118 116 ··· 139 141 NF_CT_ASSERT(l4proto); 140 142 141 143 ret = -ENOSPC; 142 - if (seq_printf(s, "%-8s %u %ld ", 143 - l4proto->name, nf_ct_protonum(ct), 144 - timer_pending(&ct->timeout) 145 - ? (long)(ct->timeout.expires - jiffies)/HZ : 0) != 0) 144 + seq_printf(s, "%-8s %u %ld ", 145 + l4proto->name, nf_ct_protonum(ct), 146 + timer_pending(&ct->timeout) 147 + ? 
(long)(ct->timeout.expires - jiffies)/HZ : 0); 148 + 149 + if (l4proto->print_conntrack) 150 + l4proto->print_conntrack(s, ct); 151 + 152 + if (seq_has_overflowed(s)) 146 153 goto release; 147 154 148 - if (l4proto->print_conntrack && l4proto->print_conntrack(s, ct)) 149 - goto release; 155 + print_tuple(s, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 156 + l3proto, l4proto); 150 157 151 - if (print_tuple(s, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 152 - l3proto, l4proto)) 158 + if (seq_has_overflowed(s)) 153 159 goto release; 154 160 155 161 if (seq_print_acct(s, ct, IP_CT_DIR_ORIGINAL)) 156 162 goto release; 157 163 158 164 if (!(test_bit(IPS_SEEN_REPLY_BIT, &ct->status))) 159 - if (seq_printf(s, "[UNREPLIED] ")) 160 - goto release; 165 + seq_printf(s, "[UNREPLIED] "); 161 166 162 - if (print_tuple(s, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, 163 - l3proto, l4proto)) 167 + print_tuple(s, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, 168 + l3proto, l4proto); 169 + 170 + if (seq_has_overflowed(s)) 164 171 goto release; 165 172 166 173 if (seq_print_acct(s, ct, IP_CT_DIR_REPLY)) 167 174 goto release; 168 175 169 176 if (test_bit(IPS_ASSURED_BIT, &ct->status)) 170 - if (seq_printf(s, "[ASSURED] ")) 171 - goto release; 177 + seq_printf(s, "[ASSURED] "); 172 178 173 179 #ifdef CONFIG_NF_CONNTRACK_MARK 174 - if (seq_printf(s, "mark=%u ", ct->mark)) 175 - goto release; 180 + seq_printf(s, "mark=%u ", ct->mark); 176 181 #endif 177 182 178 - if (ct_show_secctx(s, ct)) 183 + ct_show_secctx(s, ct); 184 + 185 + seq_printf(s, "use=%u\n", atomic_read(&ct->ct_general.use)); 186 + 187 + if (seq_has_overflowed(s)) 179 188 goto release; 180 189 181 - if (seq_printf(s, "use=%u\n", atomic_read(&ct->ct_general.use))) 182 - goto release; 183 190 ret = 0; 184 191 release: 185 192 nf_ct_put(ct);
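The conversion pattern in this hunk (and the seq_file hunks below) is the "let's make seq_printf return nothing" preparation: printers stop returning an error per call, and the caller batches several prints before checking `seq_has_overflowed()` once. A toy userspace model of that control flow, assuming a tiny fixed buffer (names carry a trailing underscore because this is not the kernel seq_file API):

```c
#include <stdarg.h>
#include <stdio.h>

struct seq {
	char buf[32];
	size_t count;
	int overflowed;
};

/* Returns void, like the converted seq_printf(): on a short buffer
 * it only marks the overflow flag instead of reporting an error. */
static void seq_printf_(struct seq *s, const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(s->buf + s->count, sizeof(s->buf) - s->count, fmt, ap);
	va_end(ap);
	if (n < 0 || (size_t)n >= sizeof(s->buf) - s->count)
		s->overflowed = 1;
	else
		s->count += (size_t)n;
}

static int seq_has_overflowed_(const struct seq *s)
{
	return s->overflowed;
}

/* A batch of prints followed by a single overflow check, as in
 * ct_seq_show() after the conversion. */
static int show_fits(void)
{
	struct seq s = {0};

	seq_printf_(&s, "%-8s %u ", "tcp", 6u);
	seq_printf_(&s, "[ASSURED] ");
	return seq_has_overflowed_(&s);
}

static int show_overflows(void)
{
	struct seq s = {0};
	int i;

	for (i = 0; i < 8; i++)
		seq_printf_(&s, "mark=%u ", 12345u); /* exceeds the buffer */
	return seq_has_overflowed_(&s);
}
```

The batching works because seq_file discards the partial output and retries with a larger buffer when the show callback reports overflow, so intermediate per-call checks add nothing.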
+5 -5
net/ipv4/netfilter/nf_conntrack_proto_icmp.c
··· 72 72 } 73 73 74 74 /* Print out the per-protocol part of the tuple. */ 75 - static int icmp_print_tuple(struct seq_file *s, 75 + static void icmp_print_tuple(struct seq_file *s, 76 76 const struct nf_conntrack_tuple *tuple) 77 77 { 78 - return seq_printf(s, "type=%u code=%u id=%u ", 79 - tuple->dst.u.icmp.type, 80 - tuple->dst.u.icmp.code, 81 - ntohs(tuple->src.u.icmp.id)); 78 + seq_printf(s, "type=%u code=%u id=%u ", 79 + tuple->dst.u.icmp.type, 80 + tuple->dst.u.icmp.code, 81 + ntohs(tuple->src.u.icmp.id)); 82 82 } 83 83 84 84 static unsigned int *icmp_get_timeouts(struct net *net)
+3 -3
net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c
··· 60 60 return true; 61 61 } 62 62 63 - static int ipv6_print_tuple(struct seq_file *s, 63 + static void ipv6_print_tuple(struct seq_file *s, 64 64 const struct nf_conntrack_tuple *tuple) 65 65 { 66 - return seq_printf(s, "src=%pI6 dst=%pI6 ", 67 - tuple->src.u3.ip6, tuple->dst.u3.ip6); 66 + seq_printf(s, "src=%pI6 dst=%pI6 ", 67 + tuple->src.u3.ip6, tuple->dst.u3.ip6); 68 68 } 69 69 70 70 static int ipv6_get_l4proto(const struct sk_buff *skb, unsigned int nhoff,
+5 -5
net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c
··· 84 84 } 85 85 86 86 /* Print out the per-protocol part of the tuple. */ 87 - static int icmpv6_print_tuple(struct seq_file *s, 87 + static void icmpv6_print_tuple(struct seq_file *s, 88 88 const struct nf_conntrack_tuple *tuple) 89 89 { 90 - return seq_printf(s, "type=%u code=%u id=%u ", 91 - tuple->dst.u.icmp.type, 92 - tuple->dst.u.icmp.code, 93 - ntohs(tuple->src.u.icmp.id)); 90 + seq_printf(s, "type=%u code=%u id=%u ", 91 + tuple->dst.u.icmp.type, 92 + tuple->dst.u.icmp.code, 93 + ntohs(tuple->src.u.icmp.id)); 94 94 } 95 95 96 96 static unsigned int *icmpv6_get_timeouts(struct net *net)
+2 -3
net/netfilter/nf_conntrack_l3proto_generic.c
··· 49 49 return true; 50 50 } 51 51 52 - static int generic_print_tuple(struct seq_file *s, 53 - const struct nf_conntrack_tuple *tuple) 52 + static void generic_print_tuple(struct seq_file *s, 53 + const struct nf_conntrack_tuple *tuple) 54 54 { 55 - return 0; 56 55 } 57 56 58 57 static int generic_get_l4proto(const struct sk_buff *skb, unsigned int nhoff,
+7 -7
net/netfilter/nf_conntrack_proto_dccp.c
··· 618 618 return -NF_ACCEPT; 619 619 } 620 620 621 - static int dccp_print_tuple(struct seq_file *s, 622 - const struct nf_conntrack_tuple *tuple) 621 + static void dccp_print_tuple(struct seq_file *s, 622 + const struct nf_conntrack_tuple *tuple) 623 623 { 624 - return seq_printf(s, "sport=%hu dport=%hu ", 625 - ntohs(tuple->src.u.dccp.port), 626 - ntohs(tuple->dst.u.dccp.port)); 624 + seq_printf(s, "sport=%hu dport=%hu ", 625 + ntohs(tuple->src.u.dccp.port), 626 + ntohs(tuple->dst.u.dccp.port)); 627 627 } 628 628 629 - static int dccp_print_conntrack(struct seq_file *s, struct nf_conn *ct) 629 + static void dccp_print_conntrack(struct seq_file *s, struct nf_conn *ct) 630 630 { 631 - return seq_printf(s, "%s ", dccp_state_names[ct->proto.dccp.state]); 631 + seq_printf(s, "%s ", dccp_state_names[ct->proto.dccp.state]); 632 632 } 633 633 634 634 #if IS_ENABLED(CONFIG_NF_CT_NETLINK)
+2 -3
net/netfilter/nf_conntrack_proto_generic.c
··· 63 63 } 64 64 65 65 /* Print out the per-protocol part of the tuple. */ 66 - static int generic_print_tuple(struct seq_file *s, 67 - const struct nf_conntrack_tuple *tuple) 66 + static void generic_print_tuple(struct seq_file *s, 67 + const struct nf_conntrack_tuple *tuple) 68 68 { 69 - return 0; 70 69 } 71 70 72 71 static unsigned int *generic_get_timeouts(struct net *net)
+9 -9
net/netfilter/nf_conntrack_proto_gre.c
··· 226 226 } 227 227 228 228 /* print gre part of tuple */ 229 - static int gre_print_tuple(struct seq_file *s, 230 - const struct nf_conntrack_tuple *tuple) 229 + static void gre_print_tuple(struct seq_file *s, 230 + const struct nf_conntrack_tuple *tuple) 231 231 { 232 - return seq_printf(s, "srckey=0x%x dstkey=0x%x ", 233 - ntohs(tuple->src.u.gre.key), 234 - ntohs(tuple->dst.u.gre.key)); 232 + seq_printf(s, "srckey=0x%x dstkey=0x%x ", 233 + ntohs(tuple->src.u.gre.key), 234 + ntohs(tuple->dst.u.gre.key)); 235 235 } 236 236 237 237 /* print private data for conntrack */ 238 - static int gre_print_conntrack(struct seq_file *s, struct nf_conn *ct) 238 + static void gre_print_conntrack(struct seq_file *s, struct nf_conn *ct) 239 239 { 240 - return seq_printf(s, "timeout=%u, stream_timeout=%u ", 241 - (ct->proto.gre.timeout / HZ), 242 - (ct->proto.gre.stream_timeout / HZ)); 240 + seq_printf(s, "timeout=%u, stream_timeout=%u ", 241 + (ct->proto.gre.timeout / HZ), 242 + (ct->proto.gre.stream_timeout / HZ)); 243 243 } 244 244 245 245 static unsigned int *gre_get_timeouts(struct net *net)
+7 -7
net/netfilter/nf_conntrack_proto_sctp.c
··· 166 166 } 167 167 168 168 /* Print out the per-protocol part of the tuple. */ 169 - static int sctp_print_tuple(struct seq_file *s, 170 - const struct nf_conntrack_tuple *tuple) 169 + static void sctp_print_tuple(struct seq_file *s, 170 + const struct nf_conntrack_tuple *tuple) 171 171 { 172 - return seq_printf(s, "sport=%hu dport=%hu ", 173 - ntohs(tuple->src.u.sctp.port), 174 - ntohs(tuple->dst.u.sctp.port)); 172 + seq_printf(s, "sport=%hu dport=%hu ", 173 + ntohs(tuple->src.u.sctp.port), 174 + ntohs(tuple->dst.u.sctp.port)); 175 175 } 176 176 177 177 /* Print out the private part of the conntrack. */ 178 - static int sctp_print_conntrack(struct seq_file *s, struct nf_conn *ct) 178 + static void sctp_print_conntrack(struct seq_file *s, struct nf_conn *ct) 179 179 { 180 180 enum sctp_conntrack state; 181 181 ··· 183 183 state = ct->proto.sctp.state; 184 184 spin_unlock_bh(&ct->lock); 185 185 186 - return seq_printf(s, "%s ", sctp_conntrack_names[state]); 186 + seq_printf(s, "%s ", sctp_conntrack_names[state]); 187 187 } 188 188 189 189 #define for_each_sctp_chunk(skb, sch, _sch, offset, dataoff, count) \
+7 -7
net/netfilter/nf_conntrack_proto_tcp.c
··· 302 302 } 303 303 304 304 /* Print out the per-protocol part of the tuple. */ 305 - static int tcp_print_tuple(struct seq_file *s, 306 - const struct nf_conntrack_tuple *tuple) 305 + static void tcp_print_tuple(struct seq_file *s, 306 + const struct nf_conntrack_tuple *tuple) 307 307 { 308 - return seq_printf(s, "sport=%hu dport=%hu ", 309 - ntohs(tuple->src.u.tcp.port), 310 - ntohs(tuple->dst.u.tcp.port)); 308 + seq_printf(s, "sport=%hu dport=%hu ", 309 + ntohs(tuple->src.u.tcp.port), 310 + ntohs(tuple->dst.u.tcp.port)); 311 311 } 312 312 313 313 /* Print out the private part of the conntrack. */ 314 - static int tcp_print_conntrack(struct seq_file *s, struct nf_conn *ct) 314 + static void tcp_print_conntrack(struct seq_file *s, struct nf_conn *ct) 315 315 { 316 316 enum tcp_conntrack state; 317 317 ··· 319 319 state = ct->proto.tcp.state; 320 320 spin_unlock_bh(&ct->lock); 321 321 322 - return seq_printf(s, "%s ", tcp_conntrack_names[state]); 322 + seq_printf(s, "%s ", tcp_conntrack_names[state]); 323 323 } 324 324 325 325 static unsigned int get_conntrack_index(const struct tcphdr *tcph)
+5 -5
net/netfilter/nf_conntrack_proto_udp.c
··· 63 63 } 64 64 65 65 /* Print out the per-protocol part of the tuple. */ 66 - static int udp_print_tuple(struct seq_file *s, 67 - const struct nf_conntrack_tuple *tuple) 66 + static void udp_print_tuple(struct seq_file *s, 67 + const struct nf_conntrack_tuple *tuple) 68 68 { 69 - return seq_printf(s, "sport=%hu dport=%hu ", 70 - ntohs(tuple->src.u.udp.port), 71 - ntohs(tuple->dst.u.udp.port)); 69 + seq_printf(s, "sport=%hu dport=%hu ", 70 + ntohs(tuple->src.u.udp.port), 71 + ntohs(tuple->dst.u.udp.port)); 72 72 } 73 73 74 74 static unsigned int *udp_get_timeouts(struct net *net)
+5 -5
net/netfilter/nf_conntrack_proto_udplite.c
··· 71 71 } 72 72 73 73 /* Print out the per-protocol part of the tuple. */ 74 - static int udplite_print_tuple(struct seq_file *s, 75 - const struct nf_conntrack_tuple *tuple) 74 + static void udplite_print_tuple(struct seq_file *s, 75 + const struct nf_conntrack_tuple *tuple) 76 76 { 77 - return seq_printf(s, "sport=%hu dport=%hu ", 78 - ntohs(tuple->src.u.udp.port), 79 - ntohs(tuple->dst.u.udp.port)); 77 + seq_printf(s, "sport=%hu dport=%hu ", 78 + ntohs(tuple->src.u.udp.port), 79 + ntohs(tuple->dst.u.udp.port)); 80 80 } 81 81 82 82 static unsigned int *udplite_get_timeouts(struct net *net)
+37 -40
net/netfilter/nf_conntrack_standalone.c
··· 36 36 MODULE_LICENSE("GPL"); 37 37 38 38 #ifdef CONFIG_NF_CONNTRACK_PROCFS 39 - int 39 + void 40 40 print_tuple(struct seq_file *s, const struct nf_conntrack_tuple *tuple, 41 41 const struct nf_conntrack_l3proto *l3proto, 42 42 const struct nf_conntrack_l4proto *l4proto) 43 43 { 44 - return l3proto->print_tuple(s, tuple) || l4proto->print_tuple(s, tuple); 44 + l3proto->print_tuple(s, tuple); 45 + l4proto->print_tuple(s, tuple); 45 46 } 46 47 EXPORT_SYMBOL_GPL(print_tuple); 47 48 ··· 120 119 } 121 120 122 121 #ifdef CONFIG_NF_CONNTRACK_SECMARK 123 - static int ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 122 + static void ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 124 123 { 125 124 int ret; 126 125 u32 len; ··· 128 127 129 128 ret = security_secid_to_secctx(ct->secmark, &secctx, &len); 130 129 if (ret) 131 - return 0; 130 + return; 132 131 133 - ret = seq_printf(s, "secctx=%s ", secctx); 132 + seq_printf(s, "secctx=%s ", secctx); 134 133 135 134 security_release_secctx(secctx, len); 136 - return ret; 137 135 } 138 136 #else 139 - static inline int ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 137 + static inline void ct_show_secctx(struct seq_file *s, const struct nf_conn *ct) 140 138 { 141 - return 0; 142 139 } 143 140 #endif 144 141 145 142 #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP 146 - static int ct_show_delta_time(struct seq_file *s, const struct nf_conn *ct) 143 + static void ct_show_delta_time(struct seq_file *s, const struct nf_conn *ct) 147 144 { 148 145 struct ct_iter_state *st = s->private; 149 146 struct nf_conn_tstamp *tstamp; ··· 155 156 else 156 157 delta_time = 0; 157 158 158 - return seq_printf(s, "delta-time=%llu ", 159 - (unsigned long long)delta_time); 159 + seq_printf(s, "delta-time=%llu ", 160 + (unsigned long long)delta_time); 160 161 } 161 - return 0; 162 + return; 162 163 } 163 164 #else 164 - static inline int 165 + static inline void 165 166 ct_show_delta_time(struct seq_file *s, const struct 
nf_conn *ct) 166 167 { 167 - return 0; 168 168 } 169 169 #endif 170 170 ··· 190 192 NF_CT_ASSERT(l4proto); 191 193 192 194 ret = -ENOSPC; 193 - if (seq_printf(s, "%-8s %u %-8s %u %ld ", 194 - l3proto->name, nf_ct_l3num(ct), 195 - l4proto->name, nf_ct_protonum(ct), 196 - timer_pending(&ct->timeout) 197 - ? (long)(ct->timeout.expires - jiffies)/HZ : 0) != 0) 198 - goto release; 195 + seq_printf(s, "%-8s %u %-8s %u %ld ", 196 + l3proto->name, nf_ct_l3num(ct), 197 + l4proto->name, nf_ct_protonum(ct), 198 + timer_pending(&ct->timeout) 199 + ? (long)(ct->timeout.expires - jiffies)/HZ : 0); 199 200 200 - if (l4proto->print_conntrack && l4proto->print_conntrack(s, ct)) 201 - goto release; 201 + if (l4proto->print_conntrack) 202 + l4proto->print_conntrack(s, ct); 202 203 203 - if (print_tuple(s, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 204 - l3proto, l4proto)) 204 + print_tuple(s, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 205 + l3proto, l4proto); 206 + 207 + if (seq_has_overflowed(s)) 205 208 goto release; 206 209 207 210 if (seq_print_acct(s, ct, IP_CT_DIR_ORIGINAL)) 208 211 goto release; 209 212 210 213 if (!(test_bit(IPS_SEEN_REPLY_BIT, &ct->status))) 211 - if (seq_printf(s, "[UNREPLIED] ")) 212 - goto release; 214 + seq_printf(s, "[UNREPLIED] "); 213 215 214 - if (print_tuple(s, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, 215 - l3proto, l4proto)) 216 - goto release; 216 + print_tuple(s, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, 217 + l3proto, l4proto); 217 218 218 219 if (seq_print_acct(s, ct, IP_CT_DIR_REPLY)) 219 220 goto release; 220 221 221 222 if (test_bit(IPS_ASSURED_BIT, &ct->status)) 222 - if (seq_printf(s, "[ASSURED] ")) 223 - goto release; 223 + seq_printf(s, "[ASSURED] "); 224 + 225 + if (seq_has_overflowed(s)) 226 + goto release; 224 227 225 228 #if defined(CONFIG_NF_CONNTRACK_MARK) 226 - if (seq_printf(s, "mark=%u ", ct->mark)) 227 - goto release; 229 + seq_printf(s, "mark=%u ", ct->mark); 228 230 #endif 229 231 230 - if (ct_show_secctx(s, ct)) 231 - goto release; 
232 + ct_show_secctx(s, ct); 232 233 233 234 #ifdef CONFIG_NF_CONNTRACK_ZONES 234 - if (seq_printf(s, "zone=%u ", nf_ct_zone(ct))) 235 - goto release; 235 + seq_printf(s, "zone=%u ", nf_ct_zone(ct)); 236 236 #endif 237 237 238 - if (ct_show_delta_time(s, ct)) 239 - goto release; 238 + ct_show_delta_time(s, ct); 240 239 241 - if (seq_printf(s, "use=%u\n", atomic_read(&ct->ct_general.use))) 240 + seq_printf(s, "use=%u\n", atomic_read(&ct->ct_general.use)); 241 + 242 + if (seq_has_overflowed(s)) 242 243 goto release; 243 244 244 245 ret = 0;
+16 -14
net/netfilter/nf_log.c
··· 294 294 { 295 295 loff_t *pos = v; 296 296 const struct nf_logger *logger; 297 - int i, ret; 297 + int i; 298 298 struct net *net = seq_file_net(s); 299 299 300 300 logger = rcu_dereference_protected(net->nf.nf_loggers[*pos], 301 301 lockdep_is_held(&nf_log_mutex)); 302 302 303 303 if (!logger) 304 - ret = seq_printf(s, "%2lld NONE (", *pos); 304 + seq_printf(s, "%2lld NONE (", *pos); 305 305 else 306 - ret = seq_printf(s, "%2lld %s (", *pos, logger->name); 306 + seq_printf(s, "%2lld %s (", *pos, logger->name); 307 307 308 - if (ret < 0) 309 - return ret; 308 + if (seq_has_overflowed(s)) 309 + return -ENOSPC; 310 310 311 311 for (i = 0; i < NF_LOG_TYPE_MAX; i++) { 312 312 if (loggers[*pos][i] == NULL) ··· 314 314 315 315 logger = rcu_dereference_protected(loggers[*pos][i], 316 316 lockdep_is_held(&nf_log_mutex)); 317 - ret = seq_printf(s, "%s", logger->name); 318 - if (ret < 0) 319 - return ret; 320 - if (i == 0 && loggers[*pos][i + 1] != NULL) { 321 - ret = seq_printf(s, ","); 322 - if (ret < 0) 323 - return ret; 324 - } 317 + seq_printf(s, "%s", logger->name); 318 + if (i == 0 && loggers[*pos][i + 1] != NULL) 319 + seq_printf(s, ","); 320 + 321 + if (seq_has_overflowed(s)) 322 + return -ENOSPC; 325 323 } 326 324 327 - return seq_printf(s, ")\n"); 325 + seq_printf(s, ")\n"); 326 + 327 + if (seq_has_overflowed(s)) 328 + return -ENOSPC; 329 + return 0; 328 330 } 329 331 330 332 static const struct seq_operations nflog_seq_ops = {
+12 -7
net/netfilter/x_tables.c
··· 947 947 { 948 948 struct xt_table *table = list_entry(v, struct xt_table, list); 949 949 950 - if (strlen(table->name)) 951 - return seq_printf(seq, "%s\n", table->name); 952 - else 950 + if (strlen(table->name)) { 951 + seq_printf(seq, "%s\n", table->name); 952 + return seq_has_overflowed(seq); 953 + } else 953 954 return 0; 954 955 } 955 956 ··· 1087 1086 if (trav->curr == trav->head) 1088 1087 return 0; 1089 1088 match = list_entry(trav->curr, struct xt_match, list); 1090 - return (*match->name == '\0') ? 0 : 1091 - seq_printf(seq, "%s\n", match->name); 1089 + if (*match->name == '\0') 1090 + return 0; 1091 + seq_printf(seq, "%s\n", match->name); 1092 + return seq_has_overflowed(seq); 1092 1093 } 1093 1094 return 0; 1094 1095 } ··· 1142 1139 if (trav->curr == trav->head) 1143 1140 return 0; 1144 1141 target = list_entry(trav->curr, struct xt_target, list); 1145 - return (*target->name == '\0') ? 0 : 1146 - seq_printf(seq, "%s\n", target->name); 1142 + if (*target->name == '\0') 1143 + return 0; 1144 + seq_printf(seq, "%s\n", target->name); 1145 + return seq_has_overflowed(seq); 1147 1146 } 1148 1147 return 0; 1149 1148 }
+17 -19
net/netfilter/xt_hashlimit.c
··· 789 789 static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family, 790 790 struct seq_file *s) 791 791 { 792 - int res; 793 792 const struct xt_hashlimit_htable *ht = s->private; 794 793 795 794 spin_lock(&ent->lock); ··· 797 798 798 799 switch (family) { 799 800 case NFPROTO_IPV4: 800 - res = seq_printf(s, "%ld %pI4:%u->%pI4:%u %u %u %u\n", 801 - (long)(ent->expires - jiffies)/HZ, 802 - &ent->dst.ip.src, 803 - ntohs(ent->dst.src_port), 804 - &ent->dst.ip.dst, 805 - ntohs(ent->dst.dst_port), 806 - ent->rateinfo.credit, ent->rateinfo.credit_cap, 807 - ent->rateinfo.cost); 801 + seq_printf(s, "%ld %pI4:%u->%pI4:%u %u %u %u\n", 802 + (long)(ent->expires - jiffies)/HZ, 803 + &ent->dst.ip.src, 804 + ntohs(ent->dst.src_port), 805 + &ent->dst.ip.dst, 806 + ntohs(ent->dst.dst_port), 807 + ent->rateinfo.credit, ent->rateinfo.credit_cap, 808 + ent->rateinfo.cost); 808 809 break; 809 810 #if IS_ENABLED(CONFIG_IP6_NF_IPTABLES) 810 811 case NFPROTO_IPV6: 811 - res = seq_printf(s, "%ld %pI6:%u->%pI6:%u %u %u %u\n", 812 - (long)(ent->expires - jiffies)/HZ, 813 - &ent->dst.ip6.src, 814 - ntohs(ent->dst.src_port), 815 - &ent->dst.ip6.dst, 816 - ntohs(ent->dst.dst_port), 817 - ent->rateinfo.credit, ent->rateinfo.credit_cap, 818 - ent->rateinfo.cost); 812 + seq_printf(s, "%ld %pI6:%u->%pI6:%u %u %u %u\n", 813 + (long)(ent->expires - jiffies)/HZ, 814 + &ent->dst.ip6.src, 815 + ntohs(ent->dst.src_port), 816 + &ent->dst.ip6.dst, 817 + ntohs(ent->dst.dst_port), 818 + ent->rateinfo.credit, ent->rateinfo.credit_cap, 819 + ent->rateinfo.cost); 819 820 break; 820 821 #endif 821 822 default: 822 823 BUG(); 823 - res = 0; 824 824 } 825 825 spin_unlock(&ent->lock); 826 - return res; 826 + return seq_has_overflowed(s); 827 827 } 828 828 829 829 static int dl_seq_show(struct seq_file *s, void *v)
+1 -1
security/commoncap.c
··· 446 446 if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) 447 447 return 0; 448 448 449 - dentry = dget(bprm->file->f_dentry); 449 + dentry = dget(bprm->file->f_path.dentry); 450 450 451 451 rc = get_vfs_caps_from_disk(dentry, &vcaps); 452 452 if (rc < 0) {
+2 -2
security/integrity/ima/ima_api.c
··· 196 196 { 197 197 const char *audit_cause = "failed"; 198 198 struct inode *inode = file_inode(file); 199 - const char *filename = file->f_dentry->d_name.name; 199 + const char *filename = file->f_path.dentry->d_name.name; 200 200 int result = 0; 201 201 struct { 202 202 struct ima_digest_data hdr; ··· 204 204 } hash; 205 205 206 206 if (xattr_value) 207 - *xattr_len = ima_read_xattr(file->f_dentry, xattr_value); 207 + *xattr_len = ima_read_xattr(file->f_path.dentry, xattr_value); 208 208 209 209 if (!(iint->flags & IMA_COLLECTED)) { 210 210 u64 i_version = file_inode(file)->i_version;
+2 -2
security/integrity/ima/ima_appraise.c
··· 189 189 { 190 190 static const char op[] = "appraise_data"; 191 191 char *cause = "unknown"; 192 - struct dentry *dentry = file->f_dentry; 192 + struct dentry *dentry = file->f_path.dentry; 193 193 struct inode *inode = dentry->d_inode; 194 194 enum integrity_status status = INTEGRITY_UNKNOWN; 195 195 int rc = xattr_len, hash_start = 0; ··· 289 289 */ 290 290 void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file) 291 291 { 292 - struct dentry *dentry = file->f_dentry; 292 + struct dentry *dentry = file->f_path.dentry; 293 293 int rc = 0; 294 294 295 295 /* do not collect and update hash for digital signatures */
+1 -1
security/integrity/ima/ima_template_lib.c
··· 284 284 } 285 285 286 286 if (file) { 287 - cur_filename = file->f_dentry->d_name.name; 287 + cur_filename = file->f_path.dentry->d_name.name; 288 288 cur_filename_len = strlen(cur_filename); 289 289 } else 290 290 /*
+3 -3
security/selinux/selinuxfs.c
··· 1200 1200 spin_lock(&de->d_lock); 1201 1201 node = de->d_subdirs.next; 1202 1202 while (node != &de->d_subdirs) { 1203 - struct dentry *d = list_entry(node, struct dentry, d_u.d_child); 1203 + struct dentry *d = list_entry(node, struct dentry, d_child); 1204 1204 1205 1205 spin_lock_nested(&d->d_lock, DENTRY_D_LOCK_NESTED); 1206 1206 list_del_init(node); ··· 1674 1674 1675 1675 list_for_each(class_node, &class_dir->d_subdirs) { 1676 1676 struct dentry *class_subdir = list_entry(class_node, 1677 - struct dentry, d_u.d_child); 1677 + struct dentry, d_child); 1678 1678 struct list_head *class_subdir_node; 1679 1679 1680 1680 list_for_each(class_subdir_node, &class_subdir->d_subdirs) { 1681 1681 struct dentry *d = list_entry(class_subdir_node, 1682 - struct dentry, d_u.d_child); 1682 + struct dentry, d_child); 1683 1683 1684 1684 if (d->d_inode) 1685 1685 if (d->d_inode->i_mode & S_IFDIR)
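The selinuxfs hunk is only a field rename (`d_u.d_child` to `d_child`, fallout of the dcache changes in the pull), but the `list_entry()` idiom it touches is worth seeing in isolation: it recovers the containing `dentry` from a pointer to its embedded list link by subtracting the member offset. A minimal userspace sketch with illustrative stand-in types:

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* list_entry() is container_of(): step back from the embedded
 * member to the start of the enclosing structure. */
#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical minimal dentry; only the link placement matters. */
struct dentry {
	int d_ino;                  /* stand-in payload */
	struct list_head d_child;   /* spelled d_u.d_child before the rename */
	struct list_head d_subdirs;
};

/* Given a node from a parent's d_subdirs list, recover the dentry --
 * the same shape as the converted list_entry() calls above. */
static struct dentry *dentry_of(struct list_head *node)
{
	return list_entry(node, struct dentry, d_child);
}

static int demo(void)
{
	struct dentry d = { .d_ino = 42 };

	return dentry_of(&d.d_child)->d_ino;
}
```

Because the macro bakes in the member name, every `list_entry(..., d_u.d_child)` call site had to be edited by hand when the union member moved.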
+4 -4
security/smack/smack_lsm.c
··· 166 166 return rc; 167 167 168 168 smk_bu_mode(mode, acc); 169 - pr_info("Smack Bringup: (%s %s %s) file=(%s %ld %s) %s\n", 169 + pr_info("Smack Bringup: (%s %s %s) file=(%s %ld %pD) %s\n", 170 170 sskp->smk_known, (char *)file->f_security, acc, 171 - inode->i_sb->s_id, inode->i_ino, file->f_dentry->d_name.name, 171 + inode->i_sb->s_id, inode->i_ino, file, 172 172 current->comm); 173 173 return 0; 174 174 } ··· 189 189 return rc; 190 190 191 191 smk_bu_mode(mode, acc); 192 - pr_info("Smack Bringup: (%s %s %s) file=(%s %ld %s) %s\n", 192 + pr_info("Smack Bringup: (%s %s %s) file=(%s %ld %pD) %s\n", 193 193 sskp->smk_known, smk_of_inode(inode)->smk_known, acc, 194 - inode->i_sb->s_id, inode->i_ino, file->f_dentry->d_name.name, 194 + inode->i_sb->s_id, inode->i_ino, file, 195 195 current->comm); 196 196 return 0; 197 197 }