Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull vfs pile 1 from Al Viro:
"This is _not_ all; in particular, Miklos' and Jan's stuff is not there
yet."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (64 commits)
ext4: initialization of ext4_li_mtx needs to be done earlier
debugfs-related mode_t whack-a-mole
hfsplus: add an ioctl to bless files
hfsplus: change finder_info to u32
hfsplus: initialise userflags
qnx4: new helper - try_extent()
qnx4: get rid of qnx4_bread/qnx4_getblk
take removal of PF_FORKNOEXEC to flush_old_exec()
trim includes in inode.c
um: uml_dup_mmap() relies on ->mmap_sem being held, but activate_mm() doesn't hold it
um: embed ->stub_pages[] into mmu_context
gadgetfs: list_for_each_safe() misuse
ocfs2: fix leaks on failure exits in module_init
ecryptfs: make register_filesystem() the last potential failure exit
ntfs: forgets to unregister sysctls on register_filesystem() failure
logfs: missing cleanup on register_filesystem() failure
jfs: mising cleanup on register_filesystem() failure
make configfs_pin_fs() return root dentry on success
configfs: configfs_create_dir() has parent dentry in dentry->d_parent
configfs: sanitize configfs_create()
...

+5363 -4144
+1 -1
Documentation/filesystems/debugfs.txt
··· 136 136 void __iomem *base; 137 137 }; 138 138 139 - struct dentry *debugfs_create_regset32(const char *name, mode_t mode, 139 + struct dentry *debugfs_create_regset32(const char *name, umode_t mode, 140 140 struct dentry *parent, 141 141 struct debugfs_regset32 *regset); 142 142
+6
Documentation/filesystems/porting
··· 429 429 You must also keep in mind that ->fsync() is not called with i_mutex held 430 430 anymore, so if you require i_mutex locking you must make sure to take it and 431 431 release it yourself. 432 + 433 + -- 434 + [mandatory] 435 + d_alloc_root() is gone, along with a lot of bugs caused by code 436 + misusing it. Replacement: d_make_root(inode). The difference is, 437 + d_make_root() drops the reference to inode if dentry allocation fails.
+174
Documentation/filesystems/qnx6.txt
··· 1 + The QNX6 Filesystem 2 + =================== 3 + 4 + The qnx6fs is used by newer QNX operating system versions. (e.g. Neutrino) 5 + It was introduced in QNX 6.4.0 and is used by default since 6.4.1. 6 + 7 + Option 8 + ====== 9 + 10 + mmi_fs Mount filesystem as used, for example, by the Audi MMI 3G system 11 + 12 + Specification 13 + ============= 14 + 15 + qnx6fs shares many properties with traditional Unix filesystems. It has the 16 + concepts of blocks, inodes and directories. 17 + On QNX it is possible to create little endian and big endian qnx6 filesystems. 18 + This feature makes it possible to create and use a filesystem whose 19 + endianness matches the target platform (QNX is used on quite a range of 20 + embedded systems) even when the host's endianness differs. 21 + The Linux driver handles both endiannesses (LE and BE) transparently. 22 + 23 + Blocks 24 + ------ 25 + 26 + The space in the device or file is split up into blocks. These are a fixed 27 + size of 512, 1024, 2048 or 4096 bytes, which is decided when the filesystem 28 + is created. 29 + Block pointers are 32 bit, so the maximum space that can be addressed is 30 + 2^32 * 4096 bytes, or 16 TiB. 31 + 32 + The superblocks 33 + --------------- 34 + 35 + The superblock contains all global information about the filesystem. 36 + Each qnx6fs has two superblocks, each one holding a 64 bit serial number. 37 + That serial number is used to identify the "active" superblock. 38 + In write mode, with each new snapshot (after each synchronous write), the 39 + serial of the new master superblock is increased. (old superblock serial + 1) 40 + 41 + So basically the snapshot functionality is realized by an atomic final 42 + update of the serial number. Before updating that serial, all modifications 43 + are done by copying all modified blocks during that specific write request 44 + (or period) and building up a new (stable) filesystem structure under the 45 + inactive superblock. 
46 + 47 + Each superblock holds a set of root inodes for the different filesystem 48 + parts. (Inode, Bitmap and Longfilenames) 49 + Each of these root nodes holds information like the total size of the stored 50 + data and the addressing levels in that specific tree. 51 + If the level value is 0, up to 16 direct blocks can be addressed by each 52 + node. 53 + Level 1 adds an additional indirect addressing level where each indirect 54 + addressing block holds up to blocksize / 4 pointers to data blocks. 55 + Level 2 adds an additional indirect addressing block level (so, already up 56 + to 16 * 256 * 256 = 1048576 blocks can be addressed by such a tree). 57 + 58 + Unused block pointers are always set to ~0 - regardless of root node, 59 + indirect addressing blocks or inodes. 60 + Data leaves are always on the lowest level. So no data is stored on upper 61 + tree levels. 62 + 63 + The first superblock is located at 0x2000. (0x2000 is the bootblock size) 64 + On the Audi MMI 3G, the first superblock starts directly at byte 0. 65 + The second superblock position can either be calculated from the superblock 66 + information (total number of filesystem blocks) or by taking the highest 67 + device address, zeroing the last 3 bytes and then subtracting 0x1000 from 68 + that address. 69 + 70 + 0x1000 is the size reserved for each superblock - regardless of the 71 + blocksize of the filesystem. 72 + 73 + Inodes 74 + ------ 75 + 76 + Each object in the filesystem is represented by an inode. (index node) 77 + The inode structure contains pointers to the filesystem blocks which contain 78 + the data held in the object and all of the metadata about an object except 79 + its longname. (filenames longer than 27 characters) 80 + The metadata about an object includes the permissions, owner, group, flags, 81 + size, number of blocks used, access time, change time and modification time. 82 + 83 + The object mode field is in POSIX format. (which makes things easier) 84 + 85 + There are also pointers to the first 16 blocks, if the object data can be 86 + addressed with 16 direct blocks. 87 + For more than 16 blocks, indirect addressing in the form of another tree is 88 + used. (the scheme is the same as the one used for the superblock root nodes) 89 + 90 + The file size is stored as 64 bit. Inode counting starts with 1. (whilst long 91 + filename inodes start with 0) 92 + 93 + Directories 94 + ----------- 95 + 96 + A directory is a filesystem object and has an inode just like a file. 97 + It is a specially formatted file containing records which associate each 98 + name with an inode number. 99 + '.' inode number points to the directory inode 100 + '..' inode number points to the parent directory inode 101 + Each filename record additionally has a filename length field. 102 + 103 + One special case is long filenames or subdirectory names. 104 + These have a filename length field of 0xff in the corresponding directory 105 + record, plus the longfile inode number also stored in that record. 106 + With that longfilename inode number, the longfilename tree can be walked, 107 + starting with the superblock longfilename root node pointers. 108 + 109 + Special files 110 + ------------- 111 + 112 + Symbolic links are also filesystem objects with inodes. They have a specific 113 + bit in the inode mode field identifying them as a symbolic link. 114 + The directory entry file inode pointer points to the target file inode. 115 + 116 + Hard links have an inode, a directory entry, a specific mode bit set, 117 + no block pointers, and the directory file record pointing to the target file 118 + inode. 119 + 120 + Character and block special devices do not exist in QNX as those files 121 + are handled by the QNX kernel/drivers and created in /dev independent of the 122 + underlying filesystem. 123 + 124 + Long filenames 125 + -------------- 126 + 127 + Long filenames are stored in a separate addressing tree. 
The starting point 128 + is the longfilename root node in the active superblock. 129 + Each data block (tree leaf) holds one long filename. That filename is 130 + limited to 510 bytes. The first two bytes are used as a length field 131 + for the actual filename. 132 + Since that structure must fit all allowed blocksizes, it is clear why there 133 + is a limit of 510 bytes for the actual filename stored. 134 + 135 + Bitmap 136 + ------ 137 + 138 + The qnx6fs filesystem allocation bitmap is stored in a tree under the bitmap 139 + root node in the superblock, and each bit in the bitmap represents one 140 + filesystem block. 141 + The first block is block 0, which starts 0x1000 after the superblock start. 142 + So for a normal qnx6fs, 0x3000 (bootblock + superblock) is the physical 143 + address at which block 0 is located. 144 + 145 + Bits at the end of the last bitmap block are set to 1 if the device is 146 + smaller than the addressing space in the bitmap. 147 + 148 + Bitmap system area 149 + ------------------ 150 + 151 + The bitmap itself is divided into three parts. 152 + First the system area, which is split into two halves. 153 + Then userspace. 154 + 155 + The requirement for a static, fixed preallocated system area comes from how 156 + qnx6fs deals with writes. 157 + Each superblock has its own half of the system area. So superblock #1 158 + always uses blocks from the lower half whilst superblock #2 just writes to 159 + blocks represented by the upper half bitmap system area bits. 160 + 161 + Bitmap blocks, inode blocks and indirect addressing blocks for those two 162 + tree structures are treated as system blocks. 163 + 164 + The rationale behind that is that a write request can work on a new snapshot 165 + (the system area of the inactive - i.e. lower serial numbered - superblock) 166 + while at the same time there is still a complete stable filesystem structure 167 + in the other half of the system area. 
168 + 169 + When writing is finished (a sync write has completed, the maximum sync leap 170 + time has elapsed, or a filesystem sync is requested), the serial of the 171 + previously inactive superblock is atomically increased and the fs switches 172 + over to that - then declared stable - superblock. 173 + 174 + For all data outside the system area, blocks are just copied while writing.
+1
Documentation/ioctl/ioctl-number.txt
··· 218 218 'h' 00-7F conflict! Charon filesystem 219 219 <mailto:zapman@interlan.net> 220 220 'h' 00-1F linux/hpet.h conflict! 221 + 'h' 80-8F fs/hfsplus/ioctl.c 221 222 'i' 00-3F linux/i2o-dev.h conflict! 222 223 'i' 0B-1F linux/ipmi.h conflict! 223 224 'i' 80-8F linux/i8k.h
+2 -1
arch/alpha/kernel/binfmt_loader.c
··· 46 46 47 47 static int __init init_loader_binfmt(void) 48 48 { 49 - return insert_binfmt(&loader_format); 49 + insert_binfmt(&loader_format); 50 + return 0; 50 51 } 51 52 arch_initcall(init_loader_binfmt);
+8 -8
arch/powerpc/platforms/cell/spufs/inode.c
··· 757 757 goto out_iput; 758 758 759 759 ret = -ENOMEM; 760 - sb->s_root = d_alloc_root(inode); 760 + sb->s_root = d_make_root(inode); 761 761 if (!sb->s_root) 762 - goto out_iput; 762 + goto out; 763 763 764 764 return 0; 765 765 out_iput: ··· 828 828 ret = spu_sched_init(); 829 829 if (ret) 830 830 goto out_cache; 831 - ret = register_filesystem(&spufs_type); 832 - if (ret) 833 - goto out_sched; 834 831 ret = register_spu_syscalls(&spufs_calls); 835 832 if (ret) 836 - goto out_fs; 833 + goto out_sched; 834 + ret = register_filesystem(&spufs_type); 835 + if (ret) 836 + goto out_syscalls; 837 837 838 838 spufs_init_isolated_loader(); 839 839 840 840 return 0; 841 841 842 - out_fs: 843 - unregister_filesystem(&spufs_type); 842 + out_syscalls: 843 + unregister_spu_syscalls(&spufs_calls); 844 844 out_sched: 845 845 spu_sched_exit(); 846 846 out_cache:
+2 -4
arch/s390/hypfs/inode.c
··· 293 293 return -ENOMEM; 294 294 root_inode->i_op = &simple_dir_inode_operations; 295 295 root_inode->i_fop = &simple_dir_operations; 296 - sb->s_root = root_dentry = d_alloc_root(root_inode); 297 - if (!root_dentry) { 298 - iput(root_inode); 296 + sb->s_root = root_dentry = d_make_root(root_inode); 297 + if (!root_dentry) 299 298 return -ENOMEM; 300 - } 301 299 if (MACHINE_IS_VM) 302 300 rc = hypfs_vm_create_files(sb, root_dentry); 303 301 else
+1 -1
arch/um/include/asm/mmu.h
··· 12 12 typedef struct mm_context { 13 13 struct mm_id id; 14 14 struct uml_arch_mm_context arch; 15 - struct page **stub_pages; 15 + struct page *stub_pages[2]; 16 16 } mm_context_t; 17 17 18 18 extern void __switch_mm(struct mm_id * mm_idp);
+9 -2
arch/um/include/asm/mmu_context.h
··· 9 9 #include <linux/sched.h> 10 10 #include <asm/mmu.h> 11 11 12 - extern void arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm); 12 + extern void uml_setup_stubs(struct mm_struct *mm); 13 13 extern void arch_exit_mmap(struct mm_struct *mm); 14 14 15 15 #define deactivate_mm(tsk,mm) do { } while (0) ··· 23 23 * when the new ->mm is used for the first time. 24 24 */ 25 25 __switch_mm(&new->context.id); 26 - arch_dup_mmap(old, new); 26 + down_write(&new->mmap_sem); 27 + uml_setup_stubs(new); 28 + up_write(&new->mmap_sem); 27 29 } 28 30 29 31 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, ··· 39 37 if(next != &init_mm) 40 38 __switch_mm(&next->context.id); 41 39 } 40 + } 41 + 42 + static inline void arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm) 43 + { 44 + uml_setup_stubs(mm); 42 45 } 43 46 44 47 static inline void enter_lazy_tlb(struct mm_struct *mm,
+6 -19
arch/um/kernel/skas/mmu.c
··· 92 92 goto out_free; 93 93 } 94 94 95 - to_mm->stub_pages = NULL; 96 - 97 95 return 0; 98 96 99 97 out_free: ··· 101 103 return ret; 102 104 } 103 105 104 - void arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm) 106 + void uml_setup_stubs(struct mm_struct *mm) 105 107 { 106 108 struct page **pages; 107 109 int err, ret; ··· 118 120 if (ret) 119 121 goto out; 120 122 121 - pages = kmalloc(2 * sizeof(struct page *), GFP_KERNEL); 122 - if (pages == NULL) { 123 - printk(KERN_ERR "arch_dup_mmap failed to allocate 2 page " 124 - "pointers\n"); 125 - goto out; 126 - } 127 - 128 - pages[0] = virt_to_page(&__syscall_stub_start); 129 - pages[1] = virt_to_page(mm->context.id.stack); 130 - mm->context.stub_pages = pages; 123 + mm->context.stub_pages[0] = virt_to_page(&__syscall_stub_start); 124 + mm->context.stub_pages[1] = virt_to_page(mm->context.id.stack); 131 125 132 126 /* dup_mmap already holds mmap_sem */ 133 127 err = install_special_mapping(mm, STUB_START, STUB_END - STUB_START, 134 128 VM_READ | VM_MAYREAD | VM_EXEC | 135 - VM_MAYEXEC | VM_DONTCOPY, pages); 129 + VM_MAYEXEC | VM_DONTCOPY, 130 + mm->context.stub_pages); 136 131 if (err) { 137 132 printk(KERN_ERR "install_special_mapping returned %d\n", err); 138 - goto out_free; 133 + goto out; 139 134 } 140 135 return; 141 136 142 - out_free: 143 - kfree(pages); 144 137 out: 145 138 force_sigsegv(SIGSEGV, current); 146 139 } ··· 140 151 { 141 152 pte_t *pte; 142 153 143 - if (mm->context.stub_pages != NULL) 144 - kfree(mm->context.stub_pages); 145 154 pte = virt_to_pte(mm, STUB_CODE); 146 155 if (pte != NULL) 147 156 pte_clear(mm, STUB_CODE, pte);
+2 -2
arch/x86/ia32/ia32_aout.c
··· 323 323 } 324 324 325 325 install_exec_creds(bprm); 326 - current->flags &= ~PF_FORKNOEXEC; 327 326 328 327 if (N_MAGIC(ex) == OMAGIC) { 329 328 unsigned long text_addr, map_size; ··· 518 519 519 520 static int __init init_aout_binfmt(void) 520 521 { 521 - return register_binfmt(&aout_format); 522 + register_binfmt(&aout_format); 523 + return 0; 522 524 } 523 525 524 526 static void __exit exit_aout_binfmt(void)
+6 -10
drivers/misc/ibmasm/ibmasmfs.c
··· 87 87 static LIST_HEAD(service_processors); 88 88 89 89 static struct inode *ibmasmfs_make_inode(struct super_block *sb, int mode); 90 - static void ibmasmfs_create_files (struct super_block *sb, struct dentry *root); 90 + static void ibmasmfs_create_files (struct super_block *sb); 91 91 static int ibmasmfs_fill_super (struct super_block *sb, void *data, int silent); 92 92 93 93 ··· 114 114 static int ibmasmfs_fill_super (struct super_block *sb, void *data, int silent) 115 115 { 116 116 struct inode *root; 117 - struct dentry *root_dentry; 118 117 119 118 sb->s_blocksize = PAGE_CACHE_SIZE; 120 119 sb->s_blocksize_bits = PAGE_CACHE_SHIFT; ··· 128 129 root->i_op = &simple_dir_inode_operations; 129 130 root->i_fop = ibmasmfs_dir_ops; 130 131 131 - root_dentry = d_alloc_root(root); 132 - if (!root_dentry) { 133 - iput(root); 132 + sb->s_root = d_make_root(root); 133 + if (!sb->s_root) 134 134 return -ENOMEM; 135 - } 136 - sb->s_root = root_dentry; 137 135 138 - ibmasmfs_create_files(sb, root_dentry); 136 + ibmasmfs_create_files(sb); 139 137 return 0; 140 138 } 141 139 ··· 608 612 }; 609 613 610 614 611 - static void ibmasmfs_create_files (struct super_block *sb, struct dentry *root) 615 + static void ibmasmfs_create_files (struct super_block *sb) 612 616 { 613 617 struct list_head *entry; 614 618 struct service_processor *sp; ··· 617 621 struct dentry *dir; 618 622 struct dentry *remote_dir; 619 623 sp = list_entry(entry, struct service_processor, node); 620 - dir = ibmasmfs_create_dir(sb, root, sp->dirname); 624 + dir = ibmasmfs_create_dir(sb, sb->s_root, sp->dirname); 621 625 if (!dir) 622 626 continue; 623 627
+5 -6
drivers/misc/ibmasm/module.c
··· 211 211 212 212 static int __init ibmasm_init(void) 213 213 { 214 - int result; 214 + int result = pci_register_driver(&ibmasm_driver); 215 + if (result) 216 + return result; 215 217 216 218 result = ibmasmfs_register(); 217 219 if (result) { 220 + pci_unregister_driver(&ibmasm_driver); 218 221 err("Failed to register ibmasmfs file system"); 219 222 return result; 220 223 } 221 - result = pci_register_driver(&ibmasm_driver); 222 - if (result) { 223 - ibmasmfs_unregister(); 224 - return result; 225 - } 224 + 226 225 ibmasm_register_panic_notifier(); 227 226 info(DRIVER_DESC " version " DRIVER_VERSION " loaded"); 228 227 return 0;
+1 -1
drivers/mmc/card/block.c
··· 1685 1685 1686 1686 if ((md->area_type & MMC_BLK_DATA_AREA_BOOT) && 1687 1687 card->ext_csd.boot_ro_lockable) { 1688 - mode_t mode; 1688 + umode_t mode; 1689 1689 1690 1690 if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PWR_WP_DIS) 1691 1691 mode = S_IRUGO;
+1 -1
drivers/net/ethernet/brocade/bna/bnad_debugfs.c
··· 516 516 517 517 struct bnad_debugfs_entry { 518 518 const char *name; 519 - mode_t mode; 519 + umode_t mode; 520 520 const struct file_operations *fops; 521 521 }; 522 522
+3 -8
drivers/oprofile/oprofilefs.c
··· 238 238 static int oprofilefs_fill_super(struct super_block *sb, void *data, int silent) 239 239 { 240 240 struct inode *root_inode; 241 - struct dentry *root_dentry; 242 241 243 242 sb->s_blocksize = PAGE_CACHE_SIZE; 244 243 sb->s_blocksize_bits = PAGE_CACHE_SHIFT; ··· 250 251 return -ENOMEM; 251 252 root_inode->i_op = &simple_dir_inode_operations; 252 253 root_inode->i_fop = &simple_dir_operations; 253 - root_dentry = d_alloc_root(root_inode); 254 - if (!root_dentry) { 255 - iput(root_inode); 254 + sb->s_root = d_make_root(root_inode); 255 + if (!sb->s_root) 256 256 return -ENOMEM; 257 - } 258 257 259 - sb->s_root = root_dentry; 260 - 261 - oprofile_create_files(sb, root_dentry); 258 + oprofile_create_files(sb, sb->s_root); 262 259 263 260 // FIXME: verify kill_litter_super removes our dentries 264 261 return 0;
+3 -20
drivers/usb/core/inode.c
··· 50 50 static const struct file_operations default_file_operations; 51 51 static struct vfsmount *usbfs_mount; 52 52 static int usbfs_mount_count; /* = 0 */ 53 - static int ignore_mount = 0; 54 53 55 54 static struct dentry *devices_usbfs_dentry; 56 55 static int num_buses; /* = 0 */ ··· 255 256 * i.e. it's a simple_pin_fs from create_special_files, 256 257 * then ignore it. 257 258 */ 258 - if (ignore_mount) 259 + if (*flags & MS_KERNMOUNT) 259 260 return 0; 260 261 261 262 if (parse_options(sb, data)) { ··· 453 454 static int usbfs_fill_super(struct super_block *sb, void *data, int silent) 454 455 { 455 456 struct inode *inode; 456 - struct dentry *root; 457 457 458 458 sb->s_blocksize = PAGE_CACHE_SIZE; 459 459 sb->s_blocksize_bits = PAGE_CACHE_SHIFT; ··· 460 462 sb->s_op = &usbfs_ops; 461 463 sb->s_time_gran = 1; 462 464 inode = usbfs_get_inode(sb, S_IFDIR | 0755, 0); 463 - 464 - if (!inode) { 465 - dbg("%s: could not get inode!",__func__); 466 - return -ENOMEM; 467 - } 468 - 469 - root = d_alloc_root(inode); 470 - if (!root) { 465 + sb->s_root = d_make_root(inode); 466 + if (!sb->s_root) { 471 467 dbg("%s: could not get root dentry!",__func__); 472 - iput(inode); 473 468 return -ENOMEM; 474 469 } 475 - sb->s_root = root; 476 470 return 0; 477 471 } 478 472 ··· 581 591 struct dentry *parent; 582 592 int retval; 583 593 584 - /* the simple_pin_fs calls will call remount with no options 585 - * without this flag that would overwrite the real mount options (if any) 586 - */ 587 - ignore_mount = 1; 588 - 589 594 /* create the devices special file */ 590 595 retval = simple_pin_fs(&usb_fs_type, &usbfs_mount, &usbfs_mount_count); 591 596 if (retval) { 592 597 printk(KERN_ERR "Unable to get usbfs mount\n"); 593 598 goto exit; 594 599 } 595 - 596 - ignore_mount = 0; 597 600 598 601 parent = usbfs_mount->mnt_root; 599 602 devices_usbfs_dentry = fs_create_file ("devices",
+2 -6
drivers/usb/gadget/f_fs.c
··· 1063 1063 &simple_dir_operations, 1064 1064 &simple_dir_inode_operations, 1065 1065 &data->perms); 1066 - if (unlikely(!inode)) 1066 + sb->s_root = d_make_root(inode); 1067 + if (unlikely(!sb->s_root)) 1067 1068 goto Enomem; 1068 - sb->s_root = d_alloc_root(inode); 1069 - if (unlikely(!sb->s_root)) { 1070 - iput(inode); 1071 - goto Enomem; 1072 - } 1073 1069 1074 1070 /* EP0 file */ 1075 1071 if (unlikely(!ffs_sb_create_file(sb, "ep0", ffs,
+4 -9
drivers/usb/gadget/inode.c
··· 1571 1571 1572 1572 static void destroy_ep_files (struct dev_data *dev) 1573 1573 { 1574 - struct list_head *entry, *tmp; 1575 - 1576 1574 DBG (dev, "%s %d\n", __func__, dev->state); 1577 1575 1578 1576 /* dev->state must prevent interference */ 1579 1577 restart: 1580 1578 spin_lock_irq (&dev->lock); 1581 - list_for_each_safe (entry, tmp, &dev->epfiles) { 1579 + while (!list_empty(&dev->epfiles)) { 1582 1580 struct ep_data *ep; 1583 1581 struct inode *parent; 1584 1582 struct dentry *dentry; 1585 1583 1586 1584 /* break link to FS */ 1587 - ep = list_entry (entry, struct ep_data, epfiles); 1585 + ep = list_first_entry (&dev->epfiles, struct ep_data, epfiles); 1588 1586 list_del_init (&ep->epfiles); 1589 1587 dentry = ep->dentry; 1590 1588 ep->dentry = NULL; ··· 1605 1607 dput (dentry); 1606 1608 mutex_unlock (&parent->i_mutex); 1607 1609 1608 - /* fds may still be open */ 1609 - goto restart; 1610 + spin_lock_irq (&dev->lock); 1610 1611 } 1611 1612 spin_unlock_irq (&dev->lock); 1612 1613 } ··· 2058 2061 if (!inode) 2059 2062 goto Enomem; 2060 2063 inode->i_op = &simple_dir_inode_operations; 2061 - if (!(sb->s_root = d_alloc_root (inode))) { 2062 - iput(inode); 2064 + if (!(sb->s_root = d_make_root (inode))) 2063 2065 goto Enomem; 2064 - } 2065 2066 2066 2067 /* the ep0 file is named after the controller we expect; 2067 2068 * user mode code can use it for sanity checks, like we do.
+8 -8
fs/9p/v9fs.c
··· 594 594 int err; 595 595 pr_info("Installing v9fs 9p2000 file system support\n"); 596 596 /* TODO: Setup list of registered trasnport modules */ 597 - err = register_filesystem(&v9fs_fs_type); 598 - if (err < 0) { 599 - pr_err("Failed to register filesystem\n"); 600 - return err; 601 - } 602 597 603 598 err = v9fs_cache_register(); 604 599 if (err < 0) { 605 600 pr_err("Failed to register v9fs for caching\n"); 606 - goto out_fs_unreg; 601 + return err; 607 602 } 608 603 609 604 err = v9fs_sysfs_init(); 610 605 if (err < 0) { 611 606 pr_err("Failed to register with sysfs\n"); 607 + goto out_cache; 608 + } 609 + err = register_filesystem(&v9fs_fs_type); 610 + if (err < 0) { 611 + pr_err("Failed to register filesystem\n"); 612 612 goto out_sysfs_cleanup; 613 613 } 614 614 ··· 617 617 out_sysfs_cleanup: 618 618 v9fs_sysfs_cleanup(); 619 619 620 - out_fs_unreg: 621 - unregister_filesystem(&v9fs_fs_type); 620 + out_cache: 621 + v9fs_cache_unregister(); 622 622 623 623 return err; 624 624 }
+1 -2
fs/9p/vfs_super.c
··· 155 155 goto release_sb; 156 156 } 157 157 158 - root = d_alloc_root(inode); 158 + root = d_make_root(inode); 159 159 if (!root) { 160 - iput(inode); 161 160 retval = -ENOMEM; 162 161 goto release_sb; 163 162 }
+1
fs/Kconfig
··· 214 214 source "fs/omfs/Kconfig" 215 215 source "fs/hpfs/Kconfig" 216 216 source "fs/qnx4/Kconfig" 217 + source "fs/qnx6/Kconfig" 217 218 source "fs/romfs/Kconfig" 218 219 source "fs/pstore/Kconfig" 219 220 source "fs/sysv/Kconfig"
+1
fs/Makefile
··· 102 102 obj-$(CONFIG_AFFS_FS) += affs/ 103 103 obj-$(CONFIG_ROMFS_FS) += romfs/ 104 104 obj-$(CONFIG_QNX4FS_FS) += qnx4/ 105 + obj-$(CONFIG_QNX6FS_FS) += qnx6/ 105 106 obj-$(CONFIG_AUTOFS4_FS) += autofs4/ 106 107 obj-$(CONFIG_ADFS_FS) += adfs/ 107 108 obj-$(CONFIG_FUSE_FS) += fuse/
+1 -2
fs/adfs/super.c
··· 483 483 484 484 sb->s_d_op = &adfs_dentry_operations; 485 485 root = adfs_iget(sb, &root_obj); 486 - sb->s_root = d_alloc_root(root); 486 + sb->s_root = d_make_root(root); 487 487 if (!sb->s_root) { 488 488 int i; 489 - iput(root); 490 489 for (i = 0; i < asb->s_map_size; i++) 491 490 brelse(asb->s_map[i].dm_bh); 492 491 kfree(asb->s_map);
+2 -5
fs/affs/super.c
··· 473 473 root_inode = affs_iget(sb, root_block); 474 474 if (IS_ERR(root_inode)) { 475 475 ret = PTR_ERR(root_inode); 476 - goto out_error_noinode; 476 + goto out_error; 477 477 } 478 478 479 479 if (AFFS_SB(sb)->s_flags & SF_INTL) ··· 481 481 else 482 482 sb->s_d_op = &affs_dentry_operations; 483 483 484 - sb->s_root = d_alloc_root(root_inode); 484 + sb->s_root = d_make_root(root_inode); 485 485 if (!sb->s_root) { 486 486 printk(KERN_ERR "AFFS: Get root inode failed\n"); 487 487 goto out_error; ··· 494 494 * Begin the cascaded cleanup ... 495 495 */ 496 496 out_error: 497 - if (root_inode) 498 - iput(root_inode); 499 - out_error_noinode: 500 497 kfree(sbi->s_bitmap); 501 498 affs_brelse(root_bh); 502 499 kfree(sbi->s_prefix);
+2 -5
fs/afs/super.c
··· 301 301 { 302 302 struct afs_super_info *as = sb->s_fs_info; 303 303 struct afs_fid fid; 304 - struct dentry *root = NULL; 305 304 struct inode *inode = NULL; 306 305 int ret; 307 306 ··· 326 327 set_bit(AFS_VNODE_AUTOCELL, &AFS_FS_I(inode)->flags); 327 328 328 329 ret = -ENOMEM; 329 - root = d_alloc_root(inode); 330 - if (!root) 330 + sb->s_root = d_make_root(inode); 331 + if (!sb->s_root) 331 332 goto error; 332 333 333 334 sb->s_d_op = &afs_fs_dentry_operations; 334 - sb->s_root = root; 335 335 336 336 _leave(" = 0"); 337 337 return 0; 338 338 339 339 error: 340 - iput(inode); 341 340 _leave(" = %d", ret); 342 341 return ret; 343 342 }
+22 -43
fs/aio.c
··· 199 199 static void ctx_rcu_free(struct rcu_head *head) 200 200 { 201 201 struct kioctx *ctx = container_of(head, struct kioctx, rcu_head); 202 - unsigned nr_events = ctx->max_reqs; 203 - 204 202 kmem_cache_free(kioctx_cachep, ctx); 205 - 206 - if (nr_events) { 207 - spin_lock(&aio_nr_lock); 208 - BUG_ON(aio_nr - nr_events > aio_nr); 209 - aio_nr -= nr_events; 210 - spin_unlock(&aio_nr_lock); 211 - } 212 203 } 213 204 214 205 /* __put_ioctx ··· 208 217 */ 209 218 static void __put_ioctx(struct kioctx *ctx) 210 219 { 220 + unsigned nr_events = ctx->max_reqs; 211 221 BUG_ON(ctx->reqs_active); 212 222 213 - cancel_delayed_work(&ctx->wq); 214 - cancel_work_sync(&ctx->wq.work); 223 + cancel_delayed_work_sync(&ctx->wq); 215 224 aio_free_ring(ctx); 216 225 mmdrop(ctx->mm); 217 226 ctx->mm = NULL; 227 + if (nr_events) { 228 + spin_lock(&aio_nr_lock); 229 + BUG_ON(aio_nr - nr_events > aio_nr); 230 + aio_nr -= nr_events; 231 + spin_unlock(&aio_nr_lock); 232 + } 218 233 pr_debug("__put_ioctx: freeing %p\n", ctx); 219 234 call_rcu(&ctx->rcu_head, ctx_rcu_free); 220 235 } ··· 244 247 { 245 248 struct mm_struct *mm; 246 249 struct kioctx *ctx; 247 - int did_sync = 0; 250 + int err = -ENOMEM; 248 251 249 252 /* Prevent overflows */ 250 253 if ((nr_events > (0x10000000U / sizeof(struct io_event))) || ··· 253 256 return ERR_PTR(-EINVAL); 254 257 } 255 258 256 - if ((unsigned long)nr_events > aio_max_nr) 259 + if (!nr_events || (unsigned long)nr_events > aio_max_nr) 257 260 return ERR_PTR(-EAGAIN); 258 261 259 262 ctx = kmem_cache_zalloc(kioctx_cachep, GFP_KERNEL); ··· 277 280 goto out_freectx; 278 281 279 282 /* limit the number of system wide aios */ 280 - do { 281 - spin_lock_bh(&aio_nr_lock); 282 - if (aio_nr + nr_events > aio_max_nr || 283 - aio_nr + nr_events < aio_nr) 284 - ctx->max_reqs = 0; 285 - else 286 - aio_nr += ctx->max_reqs; 287 - spin_unlock_bh(&aio_nr_lock); 288 - if (ctx->max_reqs || did_sync) 289 - break; 290 - 291 - /* wait for rcu callbacks to have 
completed before giving up */ 292 - synchronize_rcu(); 293 - did_sync = 1; 294 - ctx->max_reqs = nr_events; 295 - } while (1); 296 - 297 - if (ctx->max_reqs == 0) 283 + spin_lock(&aio_nr_lock); 284 + if (aio_nr + nr_events > aio_max_nr || 285 + aio_nr + nr_events < aio_nr) { 286 + spin_unlock(&aio_nr_lock); 298 287 goto out_cleanup; 288 + } 289 + aio_nr += ctx->max_reqs; 290 + spin_unlock(&aio_nr_lock); 299 291 300 292 /* now link into global list. */ 301 293 spin_lock(&mm->ioctx_lock); ··· 296 310 return ctx; 297 311 298 312 out_cleanup: 299 - __put_ioctx(ctx); 300 - return ERR_PTR(-EAGAIN); 301 - 313 + err = -EAGAIN; 314 + aio_free_ring(ctx); 302 315 out_freectx: 303 316 mmdrop(mm); 304 317 kmem_cache_free(kioctx_cachep, ctx); 305 - ctx = ERR_PTR(-ENOMEM); 306 - 307 - dprintk("aio: error allocating ioctx %p\n", ctx); 308 - return ctx; 318 + dprintk("aio: error allocating ioctx %d\n", err); 319 + return ERR_PTR(err); 309 320 } 310 321 311 322 /* aio_cancel_all ··· 390 407 aio_cancel_all(ctx); 391 408 392 409 wait_for_all_aios(ctx); 393 - /* 394 - * Ensure we don't leave the ctx on the aio_wq 395 - */ 396 - cancel_work_sync(&ctx->wq.work); 397 410 398 411 if (1 != atomic_read(&ctx->users)) 399 412 printk(KERN_DEBUG ··· 899 920 unuse_mm(mm); 900 921 set_fs(oldfs); 901 922 /* 902 - * we're in a worker thread already, don't use queue_delayed_work, 923 + * we're in a worker thread already; no point using non-zero delay 903 924 */ 904 925 if (requeue) 905 926 queue_delayed_work(aio_wq, &ctx->wq, 0);
+56 -53
fs/anon_inodes.c
··· 39 39 .d_dname = anon_inodefs_dname, 40 40 }; 41 41 42 - static struct dentry *anon_inodefs_mount(struct file_system_type *fs_type, 43 - int flags, const char *dev_name, void *data) 44 - { 45 - return mount_pseudo(fs_type, "anon_inode:", NULL, 46 - &anon_inodefs_dentry_operations, ANON_INODE_FS_MAGIC); 47 - } 48 - 49 - static struct file_system_type anon_inode_fs_type = { 50 - .name = "anon_inodefs", 51 - .mount = anon_inodefs_mount, 52 - .kill_sb = kill_anon_super, 53 - }; 54 - 55 42 /* 56 43 * nop .set_page_dirty method so that people can use .page_mkwrite on 57 44 * anon inodes. ··· 50 63 51 64 static const struct address_space_operations anon_aops = { 52 65 .set_page_dirty = anon_set_page_dirty, 66 + }; 67 + 68 + /* 69 + * A single inode exists for all anon_inode files. Contrary to pipes, 70 + * anon_inode inodes have no associated per-instance data, so we need 71 + * only allocate one of them. 72 + */ 73 + static struct inode *anon_inode_mkinode(struct super_block *s) 74 + { 75 + struct inode *inode = new_inode_pseudo(s); 76 + 77 + if (!inode) 78 + return ERR_PTR(-ENOMEM); 79 + 80 + inode->i_ino = get_next_ino(); 81 + inode->i_fop = &anon_inode_fops; 82 + 83 + inode->i_mapping->a_ops = &anon_aops; 84 + 85 + /* 86 + * Mark the inode dirty from the very beginning, 87 + * that way it will never be moved to the dirty 88 + * list because mark_inode_dirty() will think 89 + * that it already _is_ on the dirty list. 
90 + */ 91 + inode->i_state = I_DIRTY; 92 + inode->i_mode = S_IRUSR | S_IWUSR; 93 + inode->i_uid = current_fsuid(); 94 + inode->i_gid = current_fsgid(); 95 + inode->i_flags |= S_PRIVATE; 96 + inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; 97 + return inode; 98 + } 99 + 100 + static struct dentry *anon_inodefs_mount(struct file_system_type *fs_type, 101 + int flags, const char *dev_name, void *data) 102 + { 103 + struct dentry *root; 104 + root = mount_pseudo(fs_type, "anon_inode:", NULL, 105 + &anon_inodefs_dentry_operations, ANON_INODE_FS_MAGIC); 106 + if (!IS_ERR(root)) { 107 + struct super_block *s = root->d_sb; 108 + anon_inode_inode = anon_inode_mkinode(s); 109 + if (IS_ERR(anon_inode_inode)) { 110 + dput(root); 111 + deactivate_locked_super(s); 112 + root = ERR_CAST(anon_inode_inode); 113 + } 114 + } 115 + return root; 116 + } 117 + 118 + static struct file_system_type anon_inode_fs_type = { 119 + .name = "anon_inodefs", 120 + .mount = anon_inodefs_mount, 121 + .kill_sb = kill_anon_super, 53 122 }; 54 123 55 124 /** ··· 223 180 } 224 181 EXPORT_SYMBOL_GPL(anon_inode_getfd); 225 182 226 - /* 227 - * A single inode exists for all anon_inode files. Contrary to pipes, 228 - * anon_inode inodes have no associated per-instance data, so we need 229 - * only allocate one of them. 230 - */ 231 - static struct inode *anon_inode_mkinode(void) 232 - { 233 - struct inode *inode = new_inode_pseudo(anon_inode_mnt->mnt_sb); 234 - 235 - if (!inode) 236 - return ERR_PTR(-ENOMEM); 237 - 238 - inode->i_ino = get_next_ino(); 239 - inode->i_fop = &anon_inode_fops; 240 - 241 - inode->i_mapping->a_ops = &anon_aops; 242 - 243 - /* 244 - * Mark the inode dirty from the very beginning, 245 - * that way it will never be moved to the dirty 246 - * list because mark_inode_dirty() will think 247 - * that it already _is_ on the dirty list. 
248 - */ 249 - inode->i_state = I_DIRTY; 250 - inode->i_mode = S_IRUSR | S_IWUSR; 251 - inode->i_uid = current_fsuid(); 252 - inode->i_gid = current_fsgid(); 253 - inode->i_flags |= S_PRIVATE; 254 - inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; 255 - return inode; 256 - } 257 - 258 183 static int __init anon_inode_init(void) 259 184 { 260 185 int error; ··· 235 224 error = PTR_ERR(anon_inode_mnt); 236 225 goto err_unregister_filesystem; 237 226 } 238 - anon_inode_inode = anon_inode_mkinode(); 239 - if (IS_ERR(anon_inode_inode)) { 240 - error = PTR_ERR(anon_inode_inode); 241 - goto err_mntput; 242 - } 243 - 244 227 return 0; 245 228 246 - err_mntput: 247 - kern_unmount(anon_inode_mnt); 248 229 err_unregister_filesystem: 249 230 unregister_filesystem(&anon_inode_fs_type); 250 231 err_exit:
+3 -3
fs/autofs4/init.c
··· 31 31 { 32 32 int err; 33 33 34 + autofs_dev_ioctl_init(); 35 + 34 36 err = register_filesystem(&autofs_fs_type); 35 37 if (err) 36 - return err; 37 - 38 - autofs_dev_ioctl_init(); 38 + autofs_dev_ioctl_exit(); 39 39 40 40 return err; 41 41 }
+2 -8
fs/autofs4/inode.c
··· 247 247 if (!ino) 248 248 goto fail_free; 249 249 root_inode = autofs4_get_inode(s, S_IFDIR | 0755); 250 - if (!root_inode) 251 - goto fail_ino; 252 - 253 - root = d_alloc_root(root_inode); 250 + root = d_make_root(root_inode); 254 251 if (!root) 255 - goto fail_iput; 252 + goto fail_ino; 256 253 pipe = NULL; 257 254 258 255 root->d_fsdata = ino; ··· 314 317 fail_dput: 315 318 dput(root); 316 319 goto fail_free; 317 - fail_iput: 318 - printk("autofs: get root dentry failed\n"); 319 - iput(root_inode); 320 320 fail_ino: 321 321 kfree(ino); 322 322 fail_free:
+1 -2
fs/befs/linuxvfs.c
··· 852 852 ret = PTR_ERR(root); 853 853 goto unacquire_priv_sbp; 854 854 } 855 - sb->s_root = d_alloc_root(root); 855 + sb->s_root = d_make_root(root); 856 856 if (!sb->s_root) { 857 - iput(root); 858 857 befs_error(sb, "get root inode failed"); 859 858 goto unacquire_priv_sbp; 860 859 }
+1 -2
fs/bfs/inode.c
··· 367 367 ret = PTR_ERR(inode); 368 368 goto out2; 369 369 } 370 - s->s_root = d_alloc_root(inode); 370 + s->s_root = d_make_root(inode); 371 371 if (!s->s_root) { 372 - iput(inode); 373 372 ret = -ENOMEM; 374 373 goto out2; 375 374 }
+2 -2
fs/binfmt_aout.c
··· 267 267 } 268 268 269 269 install_exec_creds(bprm); 270 - current->flags &= ~PF_FORKNOEXEC; 271 270 272 271 if (N_MAGIC(ex) == OMAGIC) { 273 272 unsigned long text_addr, map_size; ··· 453 454 454 455 static int __init init_aout_binfmt(void) 455 456 { 456 - return register_binfmt(&aout_format); 457 + register_binfmt(&aout_format); 458 + return 0; 457 459 } 458 460 459 461 static void __exit exit_aout_binfmt(void)
+2 -3
fs/binfmt_elf.c
··· 712 712 goto out_free_dentry; 713 713 714 714 /* OK, This is the point of no return */ 715 - current->flags &= ~PF_FORKNOEXEC; 716 715 current->mm->def_flags = def_flags; 717 716 718 717 /* Do this immediately, since STACK_TOP as used in setup_arg_pages ··· 933 934 #endif /* ARCH_HAS_SETUP_ADDITIONAL_PAGES */ 934 935 935 936 install_exec_creds(bprm); 936 - current->flags &= ~PF_FORKNOEXEC; 937 937 retval = create_elf_tables(bprm, &loc->elf_ex, 938 938 load_addr, interp_load_addr); 939 939 if (retval < 0) { ··· 2075 2077 2076 2078 static int __init init_elf_binfmt(void) 2077 2079 { 2078 - return register_binfmt(&elf_format); 2080 + register_binfmt(&elf_format); 2081 + return 0; 2079 2082 } 2080 2083 2081 2084 static void __exit exit_elf_binfmt(void)
+2 -4
fs/binfmt_elf_fdpic.c
··· 91 91 92 92 static int __init init_elf_fdpic_binfmt(void) 93 93 { 94 - return register_binfmt(&elf_fdpic_format); 94 + register_binfmt(&elf_fdpic_format); 95 + return 0; 95 96 } 96 97 97 98 static void __exit exit_elf_fdpic_binfmt(void) ··· 335 334 current->mm->context.exec_fdpic_loadmap = 0; 336 335 current->mm->context.interp_fdpic_loadmap = 0; 337 336 338 - current->flags &= ~PF_FORKNOEXEC; 339 - 340 337 #ifdef CONFIG_MMU 341 338 elf_fdpic_arch_lay_out_mm(&exec_params, 342 339 &interp_params, ··· 412 413 #endif 413 414 414 415 install_exec_creds(bprm); 415 - current->flags &= ~PF_FORKNOEXEC; 416 416 if (create_elf_fdpic_tables(bprm, current->mm, 417 417 &exec_params, &interp_params) < 0) 418 418 goto error_kill;
+2 -1
fs/binfmt_em86.c
··· 100 100 101 101 static int __init init_em86_binfmt(void) 102 102 { 103 - return register_binfmt(&em86_format); 103 + register_binfmt(&em86_format); 104 + return 0; 104 105 } 105 106 106 107 static void __exit exit_em86_binfmt(void)
+2 -2
fs/binfmt_flat.c
··· 902 902 libinfo.lib_list[j].start_data:UNLOADED_LIB; 903 903 904 904 install_exec_creds(bprm); 905 - current->flags &= ~PF_FORKNOEXEC; 906 905 907 906 set_binfmt(&flat_format); 908 907 ··· 949 950 950 951 static int __init init_flat_binfmt(void) 951 952 { 952 - return register_binfmt(&flat_format); 953 + register_binfmt(&flat_format); 954 + return 0; 953 955 } 954 956 955 957 /****************************************************************************/
+2 -5
fs/binfmt_misc.c
··· 726 726 static int __init init_misc_binfmt(void) 727 727 { 728 728 int err = register_filesystem(&bm_fs_type); 729 - if (!err) { 730 - err = insert_binfmt(&misc_format); 731 - if (err) 732 - unregister_filesystem(&bm_fs_type); 733 - } 729 + if (!err) 730 + insert_binfmt(&misc_format); 734 731 return err; 735 732 } 736 733
+2 -1
fs/binfmt_script.c
··· 105 105 106 106 static int __init init_script_binfmt(void) 107 107 { 108 - return register_binfmt(&script_format); 108 + register_binfmt(&script_format); 109 + return 0; 109 110 } 110 111 111 112 static void __exit exit_script_binfmt(void)
+2 -2
fs/binfmt_som.c
··· 225 225 goto out_free; 226 226 227 227 /* OK, This is the point of no return */ 228 - current->flags &= ~PF_FORKNOEXEC; 229 228 current->personality = PER_HPUX; 230 229 setup_new_exec(bprm); 231 230 ··· 288 289 289 290 static int __init init_som_binfmt(void) 290 291 { 291 - return register_binfmt(&som_format); 292 + register_binfmt(&som_format); 293 + return 0; 292 294 } 293 295 294 296 static void __exit exit_som_binfmt(void)
+2 -6
fs/btrfs/super.c
··· 629 629 void *data, int silent) 630 630 { 631 631 struct inode *inode; 632 - struct dentry *root_dentry; 633 632 struct btrfs_fs_info *fs_info = btrfs_sb(sb); 634 633 struct btrfs_key key; 635 634 int err; ··· 659 660 goto fail_close; 660 661 } 661 662 662 - root_dentry = d_alloc_root(inode); 663 - if (!root_dentry) { 664 - iput(inode); 663 + sb->s_root = d_make_root(inode); 664 + if (!sb->s_root) { 665 665 err = -ENOMEM; 666 666 goto fail_close; 667 667 } 668 - 669 - sb->s_root = root_dentry; 670 668 671 669 save_mount_options(sb, data); 672 670 cleancache_init_fs(sb);
+2 -1
fs/cachefiles/namei.c
··· 646 646 * (this is used to keep track of culling, and atimes are only 647 647 * updated by read, write and readdir but not lookup or 648 648 * open) */ 649 - touch_atime(cache->mnt, next); 649 + path.dentry = next; 650 + touch_atime(&path); 650 651 } 651 652 652 653 /* open a file interface onto a data file */
+1 -2
fs/ceph/super.c
··· 655 655 dout("open_root_inode success\n"); 656 656 if (ceph_ino(inode) == CEPH_INO_ROOT && 657 657 fsc->sb->s_root == NULL) { 658 - root = d_alloc_root(inode); 658 + root = d_make_root(inode); 659 659 if (!root) { 660 - iput(inode); 661 660 root = ERR_PTR(-ENOMEM); 662 661 goto out; 663 662 }
+1 -6
fs/cifs/cifsfs.c
··· 119 119 120 120 if (IS_ERR(inode)) { 121 121 rc = PTR_ERR(inode); 122 - inode = NULL; 123 122 goto out_no_root; 124 123 } 125 124 126 - sb->s_root = d_alloc_root(inode); 127 - 125 + sb->s_root = d_make_root(inode); 128 126 if (!sb->s_root) { 129 127 rc = -ENOMEM; 130 128 goto out_no_root; ··· 145 147 146 148 out_no_root: 147 149 cERROR(1, "cifs_read_super: get root inode failed"); 148 - if (inode) 149 - iput(inode); 150 - 151 150 return rc; 152 151 } 153 152
+1 -5
fs/coda/inode.c
··· 208 208 if (IS_ERR(root)) { 209 209 error = PTR_ERR(root); 210 210 printk("Failure of coda_cnode_make for root: error %d\n", error); 211 - root = NULL; 212 211 goto error; 213 212 } 214 213 215 214 printk("coda_read_super: rootinode is %ld dev %s\n", 216 215 root->i_ino, root->i_sb->s_id); 217 - sb->s_root = d_alloc_root(root); 216 + sb->s_root = d_make_root(root); 218 217 if (!sb->s_root) { 219 218 error = -EINVAL; 220 219 goto error; ··· 221 222 return 0; 222 223 223 224 error: 224 - if (root) 225 - iput(root); 226 - 227 225 mutex_lock(&vc->vc_mutex); 228 226 bdi_destroy(&vc->bdi); 229 227 vc->vc_sb = NULL;
+3 -4
fs/configfs/configfs_internal.h
··· 58 58 extern struct mutex configfs_symlink_mutex; 59 59 extern spinlock_t configfs_dirent_lock; 60 60 61 - extern struct vfsmount * configfs_mount; 62 61 extern struct kmem_cache *configfs_dir_cachep; 63 62 64 63 extern int configfs_is_root(struct config_item *item); 65 64 66 - extern struct inode * configfs_new_inode(umode_t mode, struct configfs_dirent *); 65 + extern struct inode * configfs_new_inode(umode_t mode, struct configfs_dirent *, struct super_block *); 67 66 extern int configfs_create(struct dentry *, umode_t mode, int (*init)(struct inode *)); 68 67 extern int configfs_inode_init(void); 69 68 extern void configfs_inode_exit(void); ··· 79 80 extern void configfs_drop_dentry(struct configfs_dirent *sd, struct dentry *parent); 80 81 extern int configfs_setattr(struct dentry *dentry, struct iattr *iattr); 81 82 82 - extern int configfs_pin_fs(void); 83 + extern struct dentry *configfs_pin_fs(void); 83 84 extern void configfs_release_fs(void); 84 85 85 86 extern struct rw_semaphore configfs_rename_sem; 86 - extern struct super_block * configfs_sb; 87 87 extern const struct file_operations configfs_dir_operations; 88 88 extern const struct file_operations configfs_file_operations; 89 89 extern const struct file_operations bin_fops; 90 90 extern const struct inode_operations configfs_dir_inode_operations; 91 + extern const struct inode_operations configfs_root_inode_operations; 91 92 extern const struct inode_operations configfs_symlink_inode_operations; 92 93 extern const struct dentry_operations configfs_dentry_ops; 93 94
+31 -41
fs/configfs/dir.c
··· 264 264 return 0; 265 265 } 266 266 267 - static int create_dir(struct config_item * k, struct dentry * p, 268 - struct dentry * d) 267 + static int create_dir(struct config_item *k, struct dentry *d) 269 268 { 270 269 int error; 271 270 umode_t mode = S_IFDIR| S_IRWXU | S_IRUGO | S_IXUGO; 271 + struct dentry *p = d->d_parent; 272 + 273 + BUG_ON(!k); 272 274 273 275 error = configfs_dirent_exists(p->d_fsdata, d->d_name.name); 274 276 if (!error) ··· 306 304 307 305 static int configfs_create_dir(struct config_item * item, struct dentry *dentry) 308 306 { 309 - struct dentry * parent; 310 - int error = 0; 311 - 312 - BUG_ON(!item); 313 - 314 - if (item->ci_parent) 315 - parent = item->ci_parent->ci_dentry; 316 - else if (configfs_mount) 317 - parent = configfs_mount->mnt_root; 318 - else 319 - return -EFAULT; 320 - 321 - error = create_dir(item,parent,dentry); 307 + int error = create_dir(item, dentry); 322 308 if (!error) 323 309 item->ci_dentry = dentry; 324 310 return error; ··· 1069 1079 int ret; 1070 1080 struct configfs_dirent *p, *root_sd, *subsys_sd = NULL; 1071 1081 struct config_item *s_item = &subsys->su_group.cg_item; 1082 + struct dentry *root; 1072 1083 1073 1084 /* 1074 1085 * Pin the configfs filesystem. This means we can safely access 1075 1086 * the root of the configfs filesystem. 1076 1087 */ 1077 - ret = configfs_pin_fs(); 1078 - if (ret) 1079 - return ret; 1088 + root = configfs_pin_fs(); 1089 + if (IS_ERR(root)) 1090 + return PTR_ERR(root); 1080 1091 1081 1092 /* 1082 1093 * Next, lock the root directory. We're going to check that the 1083 1094 * subsystem is really registered, and so we need to lock out 1084 1095 * configfs_[un]register_subsystem(). 
1085 1096 */ 1086 - mutex_lock(&configfs_sb->s_root->d_inode->i_mutex); 1097 + mutex_lock(&root->d_inode->i_mutex); 1087 1098 1088 - root_sd = configfs_sb->s_root->d_fsdata; 1099 + root_sd = root->d_fsdata; 1089 1100 1090 1101 list_for_each_entry(p, &root_sd->s_children, s_sibling) { 1091 1102 if (p->s_type & CONFIGFS_DIR) { ··· 1120 1129 out_unlock_dirent_lock: 1121 1130 spin_unlock(&configfs_dirent_lock); 1122 1131 out_unlock_fs: 1123 - mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex); 1132 + mutex_unlock(&root->d_inode->i_mutex); 1124 1133 1125 1134 /* 1126 1135 * If we succeeded, the fs is pinned via other methods. If not, ··· 1173 1182 struct config_item_type *type; 1174 1183 struct module *subsys_owner = NULL, *new_item_owner = NULL; 1175 1184 char *name; 1176 - 1177 - if (dentry->d_parent == configfs_sb->s_root) { 1178 - ret = -EPERM; 1179 - goto out; 1180 - } 1181 1185 1182 1186 sd = dentry->d_parent->d_fsdata; 1183 1187 ··· 1345 1359 struct module *subsys_owner = NULL, *dead_item_owner = NULL; 1346 1360 int ret; 1347 1361 1348 - if (dentry->d_parent == configfs_sb->s_root) 1349 - return -EPERM; 1350 - 1351 1362 sd = dentry->d_fsdata; 1352 1363 if (sd->s_type & CONFIGFS_USET_DEFAULT) 1353 1364 return -EPERM; ··· 1442 1459 .setattr = configfs_setattr, 1443 1460 }; 1444 1461 1462 + const struct inode_operations configfs_root_inode_operations = { 1463 + .lookup = configfs_lookup, 1464 + .setattr = configfs_setattr, 1465 + }; 1466 + 1445 1467 #if 0 1446 1468 int configfs_rename_dir(struct config_item * item, const char *new_name) 1447 1469 { ··· 1534 1546 static int configfs_readdir(struct file * filp, void * dirent, filldir_t filldir) 1535 1547 { 1536 1548 struct dentry *dentry = filp->f_path.dentry; 1549 + struct super_block *sb = dentry->d_sb; 1537 1550 struct configfs_dirent * parent_sd = dentry->d_fsdata; 1538 1551 struct configfs_dirent *cursor = filp->private_data; 1539 1552 struct list_head *p, *q = &cursor->s_sibling; ··· 1597 1608 ino = 
inode->i_ino; 1598 1609 spin_unlock(&configfs_dirent_lock); 1599 1610 if (!inode) 1600 - ino = iunique(configfs_sb, 2); 1611 + ino = iunique(sb, 2); 1601 1612 1602 1613 if (filldir(dirent, name, len, filp->f_pos, ino, 1603 1614 dt_type(next)) < 0) ··· 1669 1680 struct config_group *group = &subsys->su_group; 1670 1681 struct qstr name; 1671 1682 struct dentry *dentry; 1683 + struct dentry *root; 1672 1684 struct configfs_dirent *sd; 1673 1685 1674 - err = configfs_pin_fs(); 1675 - if (err) 1676 - return err; 1686 + root = configfs_pin_fs(); 1687 + if (IS_ERR(root)) 1688 + return PTR_ERR(root); 1677 1689 1678 1690 if (!group->cg_item.ci_name) 1679 1691 group->cg_item.ci_name = group->cg_item.ci_namebuf; 1680 1692 1681 - sd = configfs_sb->s_root->d_fsdata; 1693 + sd = root->d_fsdata; 1682 1694 link_group(to_config_group(sd->s_element), group); 1683 1695 1684 - mutex_lock_nested(&configfs_sb->s_root->d_inode->i_mutex, 1685 - I_MUTEX_PARENT); 1696 + mutex_lock_nested(&root->d_inode->i_mutex, I_MUTEX_PARENT); 1686 1697 1687 1698 name.name = group->cg_item.ci_name; 1688 1699 name.len = strlen(name.name); 1689 1700 name.hash = full_name_hash(name.name, name.len); 1690 1701 1691 1702 err = -ENOMEM; 1692 - dentry = d_alloc(configfs_sb->s_root, &name); 1703 + dentry = d_alloc(root, &name); 1693 1704 if (dentry) { 1694 1705 d_add(dentry, NULL); 1695 1706 ··· 1706 1717 } 1707 1718 } 1708 1719 1709 - mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex); 1720 + mutex_unlock(&root->d_inode->i_mutex); 1710 1721 1711 1722 if (err) { 1712 1723 unlink_group(group); ··· 1720 1731 { 1721 1732 struct config_group *group = &subsys->su_group; 1722 1733 struct dentry *dentry = group->cg_item.ci_dentry; 1734 + struct dentry *root = dentry->d_sb->s_root; 1723 1735 1724 - if (dentry->d_parent != configfs_sb->s_root) { 1736 + if (dentry->d_parent != root) { 1725 1737 printk(KERN_ERR "configfs: Tried to unregister non-subsystem!\n"); 1726 1738 return; 1727 1739 } 1728 1740 1729 - 
mutex_lock_nested(&configfs_sb->s_root->d_inode->i_mutex, 1741 + mutex_lock_nested(&root->d_inode->i_mutex, 1730 1742 I_MUTEX_PARENT); 1731 1743 mutex_lock_nested(&dentry->d_inode->i_mutex, I_MUTEX_CHILD); 1732 1744 mutex_lock(&configfs_symlink_mutex); ··· 1744 1754 1745 1755 d_delete(dentry); 1746 1756 1747 - mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex); 1757 + mutex_unlock(&root->d_inode->i_mutex); 1748 1758 1749 1759 dput(dentry); 1750 1760
+30 -32
fs/configfs/inode.c
··· 44 44 static struct lock_class_key default_group_class[MAX_LOCK_DEPTH]; 45 45 #endif 46 46 47 - extern struct super_block * configfs_sb; 48 - 49 47 static const struct address_space_operations configfs_aops = { 50 48 .readpage = simple_readpage, 51 49 .write_begin = simple_write_begin, ··· 130 132 inode->i_ctime = iattr->ia_ctime; 131 133 } 132 134 133 - struct inode *configfs_new_inode(umode_t mode, struct configfs_dirent * sd) 135 + struct inode *configfs_new_inode(umode_t mode, struct configfs_dirent *sd, 136 + struct super_block *s) 134 137 { 135 - struct inode * inode = new_inode(configfs_sb); 138 + struct inode * inode = new_inode(s); 136 139 if (inode) { 137 140 inode->i_ino = get_next_ino(); 138 141 inode->i_mapping->a_ops = &configfs_aops; ··· 187 188 int configfs_create(struct dentry * dentry, umode_t mode, int (*init)(struct inode *)) 188 189 { 189 190 int error = 0; 190 - struct inode * inode = NULL; 191 - if (dentry) { 192 - if (!dentry->d_inode) { 193 - struct configfs_dirent *sd = dentry->d_fsdata; 194 - if ((inode = configfs_new_inode(mode, sd))) { 195 - if (dentry->d_parent && dentry->d_parent->d_inode) { 196 - struct inode *p_inode = dentry->d_parent->d_inode; 197 - p_inode->i_mtime = p_inode->i_ctime = CURRENT_TIME; 198 - } 199 - configfs_set_inode_lock_class(sd, inode); 200 - goto Proceed; 201 - } 202 - else 203 - error = -ENOMEM; 204 - } else 205 - error = -EEXIST; 206 - } else 207 - error = -ENOENT; 208 - goto Done; 191 + struct inode *inode = NULL; 192 + struct configfs_dirent *sd; 193 + struct inode *p_inode; 209 194 210 - Proceed: 211 - if (init) 195 + if (!dentry) 196 + return -ENOENT; 197 + 198 + if (dentry->d_inode) 199 + return -EEXIST; 200 + 201 + sd = dentry->d_fsdata; 202 + inode = configfs_new_inode(mode, sd, dentry->d_sb); 203 + if (!inode) 204 + return -ENOMEM; 205 + 206 + p_inode = dentry->d_parent->d_inode; 207 + p_inode->i_mtime = p_inode->i_ctime = CURRENT_TIME; 208 + configfs_set_inode_lock_class(sd, inode); 209 + 210 + 
if (init) { 212 211 error = init(inode); 213 - if (!error) { 214 - d_instantiate(dentry, inode); 215 - if (S_ISDIR(mode) || S_ISLNK(mode)) 216 - dget(dentry); /* pin link and directory dentries in core */ 217 - } else 218 - iput(inode); 219 - Done: 212 + if (error) { 213 + iput(inode); 214 + return error; 215 + } 216 + } 217 + d_instantiate(dentry, inode); 218 + if (S_ISDIR(mode) || S_ISLNK(mode)) 219 + dget(dentry); /* pin link and directory dentries in core */ 220 220 return error; 221 221 } 222 222
+7 -9
fs/configfs/mount.c
··· 37 37 /* Random magic number */ 38 38 #define CONFIGFS_MAGIC 0x62656570 39 39 40 - struct vfsmount * configfs_mount = NULL; 41 - struct super_block * configfs_sb = NULL; 40 + static struct vfsmount *configfs_mount = NULL; 42 41 struct kmem_cache *configfs_dir_cachep; 43 42 static int configfs_mnt_count = 0; 44 43 ··· 76 77 sb->s_magic = CONFIGFS_MAGIC; 77 78 sb->s_op = &configfs_ops; 78 79 sb->s_time_gran = 1; 79 - configfs_sb = sb; 80 80 81 81 inode = configfs_new_inode(S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO, 82 - &configfs_root); 82 + &configfs_root, sb); 83 83 if (inode) { 84 - inode->i_op = &configfs_dir_inode_operations; 84 + inode->i_op = &configfs_root_inode_operations; 85 85 inode->i_fop = &configfs_dir_operations; 86 86 /* directory inodes start off with i_nlink == 2 (for "." entry) */ 87 87 inc_nlink(inode); ··· 89 91 return -ENOMEM; 90 92 } 91 93 92 - root = d_alloc_root(inode); 94 + root = d_make_root(inode); 93 95 if (!root) { 94 96 pr_debug("%s: could not get root dentry!\n",__func__); 95 - iput(inode); 96 97 return -ENOMEM; 97 98 } 98 99 config_group_init(&configfs_root_group); ··· 115 118 .kill_sb = kill_litter_super, 116 119 }; 117 120 118 - int configfs_pin_fs(void) 121 + struct dentry *configfs_pin_fs(void) 119 122 { 120 - return simple_pin_fs(&configfs_fs_type, &configfs_mount, 123 + int err = simple_pin_fs(&configfs_fs_type, &configfs_mount, 121 124 &configfs_mnt_count); 125 + return err ? ERR_PTR(err) : configfs_mount->mnt_root; 122 126 } 123 127 124 128 void configfs_release_fs(void)
+3 -9
fs/configfs/symlink.c
··· 110 110 111 111 112 112 static int get_target(const char *symname, struct path *path, 113 - struct config_item **target) 113 + struct config_item **target, struct super_block *sb) 114 114 { 115 115 int ret; 116 116 117 117 ret = kern_path(symname, LOOKUP_FOLLOW|LOOKUP_DIRECTORY, path); 118 118 if (!ret) { 119 - if (path->dentry->d_sb == configfs_sb) { 119 + if (path->dentry->d_sb == sb) { 120 120 *target = configfs_get_config_item(path->dentry); 121 121 if (!*target) { 122 122 ret = -ENOENT; ··· 141 141 struct config_item *target_item = NULL; 142 142 struct config_item_type *type; 143 143 144 - ret = -EPERM; /* What lack-of-symlink returns */ 145 - if (dentry->d_parent == configfs_sb->s_root) 146 - goto out; 147 - 148 144 sd = dentry->d_parent->d_fsdata; 149 145 /* 150 146 * Fake invisibility if dir belongs to a group/default groups hierarchy ··· 158 162 !type->ct_item_ops->allow_link) 159 163 goto out_put; 160 164 161 - ret = get_target(symname, &path, &target_item); 165 + ret = get_target(symname, &path, &target_item, dentry->d_sb); 162 166 if (ret) 163 167 goto out_put; 164 168 ··· 193 197 ret = -EPERM; /* What lack-of-symlink returns */ 194 198 if (!(sd->s_type & CONFIGFS_ITEM_LINK)) 195 199 goto out; 196 - 197 - BUG_ON(dentry->d_parent == configfs_sb->s_root); 198 200 199 201 sl = sd->s_element; 200 202
+2 -4
fs/cramfs/inode.c
··· 318 318 root = get_cramfs_inode(sb, &super.root, 0); 319 319 if (IS_ERR(root)) 320 320 goto out; 321 - sb->s_root = d_alloc_root(root); 322 - if (!sb->s_root) { 323 - iput(root); 321 + sb->s_root = d_make_root(root); 322 + if (!sb->s_root) 324 323 goto out; 325 - } 326 324 return 0; 327 325 out: 328 326 kfree(sbi);
-24
fs/dcache.c
··· 1466 1466 1467 1467 EXPORT_SYMBOL(d_instantiate_unique); 1468 1468 1469 - /** 1470 - * d_alloc_root - allocate root dentry 1471 - * @root_inode: inode to allocate the root for 1472 - * 1473 - * Allocate a root ("/") dentry for the inode given. The inode is 1474 - * instantiated and returned. %NULL is returned if there is insufficient 1475 - * memory or the inode passed is %NULL. 1476 - */ 1477 - 1478 - struct dentry * d_alloc_root(struct inode * root_inode) 1479 - { 1480 - struct dentry *res = NULL; 1481 - 1482 - if (root_inode) { 1483 - static const struct qstr name = { .name = "/", .len = 1 }; 1484 - 1485 - res = __d_alloc(root_inode->i_sb, &name); 1486 - if (res) 1487 - d_instantiate(res, root_inode); 1488 - } 1489 - return res; 1490 - } 1491 - EXPORT_SYMBOL(d_alloc_root); 1492 - 1493 1469 struct dentry *d_make_root(struct inode *root_inode) 1494 1470 { 1495 1471 struct dentry *res = NULL;
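The dcache.c hunk above removes d_alloc_root() entirely; as the porting note says, the d_make_root() replacement *consumes* its inode argument, dropping the reference itself when dentry allocation fails. That is why nearly every filesystem in this pile can delete its `iput(root)` failure branch. A minimal userspace sketch of the contract (hypothetical names, `malloc`/`free` standing in for inode refcounting, with a `fail_alloc` hook only so the failure path can be exercised):

```c
#include <assert.h>
#include <stdlib.h>

struct inode  { int ino; };
struct dentry { struct inode *d_inode; };

static int fail_alloc;   /* test hook: force the allocation-failure path */

static struct dentry *make_root(struct inode *inode)
{
	struct dentry *res;

	if (!inode)
		return NULL;
	res = fail_alloc ? NULL : malloc(sizeof(*res));
	if (!res) {
		free(inode);          /* consume the reference on failure */
		return NULL;
	}
	res->d_inode = inode;
	return res;
}
```

Because the failure path releases the inode here, a caller's error handling collapses to a single `if (!sb->s_root) goto fail;` with no separate put, which is exactly the shape of the autofs4, befs, bfs, btrfs, etc. conversions above.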
+1 -1
fs/debugfs/file.c
··· 611 611 * %NULL or !%NULL instead as to eliminate the need for #ifdef in the calling 612 612 * code. 613 613 */ 614 - struct dentry *debugfs_create_regset32(const char *name, mode_t mode, 614 + struct dentry *debugfs_create_regset32(const char *name, umode_t mode, 615 615 struct dentry *parent, 616 616 struct debugfs_regset32 *regset) 617 617 {
+1 -2
fs/devpts/inode.c
··· 374 374 inode->i_fop = &simple_dir_operations; 375 375 set_nlink(inode, 2); 376 376 377 - s->s_root = d_alloc_root(inode); 377 + s->s_root = d_make_root(inode); 378 378 if (s->s_root) 379 379 return 0; 380 380 381 381 printk(KERN_ERR "devpts: get root dentry failed\n"); 382 - iput(inode); 383 382 384 383 fail: 385 384 return -ENOMEM;
+4 -5
fs/ecryptfs/file.c
··· 48 48 unsigned long nr_segs, loff_t pos) 49 49 { 50 50 ssize_t rc; 51 - struct dentry *lower_dentry; 52 - struct vfsmount *lower_vfsmount; 51 + struct path lower; 53 52 struct file *file = iocb->ki_filp; 54 53 55 54 rc = generic_file_aio_read(iocb, iov, nr_segs, pos); ··· 59 60 if (-EIOCBQUEUED == rc) 60 61 rc = wait_on_sync_kiocb(iocb); 61 62 if (rc >= 0) { 62 - lower_dentry = ecryptfs_dentry_to_lower(file->f_path.dentry); 63 - lower_vfsmount = ecryptfs_dentry_to_lower_mnt(file->f_path.dentry); 64 - touch_atime(lower_vfsmount, lower_dentry); 63 + lower.dentry = ecryptfs_dentry_to_lower(file->f_path.dentry); 64 + lower.mnt = ecryptfs_dentry_to_lower_mnt(file->f_path.dentry); 65 + touch_atime(&lower); 65 66 } 66 67 return rc; 67 68 }
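The cachefiles and ecryptfs hunks show the same API change: touch_atime() now takes a single `struct path *` rather than a separate vfsmount and dentry, so the two halves of the pair can no longer be passed inconsistently. A userspace sketch of the shape (stand-in types, hypothetical `fake_now` in place of the kernel's clock):

```c
#include <assert.h>

struct vfsmount { int id; };
struct dentry   { long atime; };
struct path     { struct vfsmount *mnt; struct dentry *dentry; };

static long fake_now = 100;

static void touch_atime(const struct path *path)
{
	/* both halves of the (mnt, dentry) pair travel together */
	path->dentry->atime = fake_now;
}
```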
+9 -10
fs/ecryptfs/main.c
··· 550 550 if (IS_ERR(inode)) 551 551 goto out_free; 552 552 553 - s->s_root = d_alloc_root(inode); 553 + s->s_root = d_make_root(inode); 554 554 if (!s->s_root) { 555 - iput(inode); 556 555 rc = -ENOMEM; 557 556 goto out_free; 558 557 } ··· 794 795 "Failed to allocate one or more kmem_cache objects\n"); 795 796 goto out; 796 797 } 797 - rc = register_filesystem(&ecryptfs_fs_type); 798 - if (rc) { 799 - printk(KERN_ERR "Failed to register filesystem\n"); 800 - goto out_free_kmem_caches; 801 - } 802 798 rc = do_sysfs_registration(); 803 799 if (rc) { 804 800 printk(KERN_ERR "sysfs registration failed\n"); 805 - goto out_unregister_filesystem; 801 + goto out_free_kmem_caches; 806 802 } 807 803 rc = ecryptfs_init_kthread(); 808 804 if (rc) { ··· 818 824 "rc = [%d]\n", rc); 819 825 goto out_release_messaging; 820 826 } 827 + rc = register_filesystem(&ecryptfs_fs_type); 828 + if (rc) { 829 + printk(KERN_ERR "Failed to register filesystem\n"); 830 + goto out_destroy_crypto; 831 + } 821 832 if (ecryptfs_verbosity > 0) 822 833 printk(KERN_CRIT "eCryptfs verbosity set to %d. Secret values " 823 834 "will be written to the syslog!\n", ecryptfs_verbosity); 824 835 825 836 goto out; 837 + out_destroy_crypto: 838 + ecryptfs_destroy_crypto(); 826 839 out_release_messaging: 827 840 ecryptfs_release_messaging(); 828 841 out_destroy_kthread: 829 842 ecryptfs_destroy_kthread(); 830 843 out_do_sysfs_unregistration: 831 844 do_sysfs_unregistration(); 832 - out_unregister_filesystem: 833 - unregister_filesystem(&ecryptfs_fs_type); 834 845 out_free_kmem_caches: 835 846 ecryptfs_free_kmem_caches(); 836 847 out:
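Several commits in this pile ("make register_filesystem() the last potential failure exit" for ecryptfs, and the matching ntfs/logfs/jfs/ocfs2 fixes) share one idea: registration makes the filesystem externally visible, so it must be the final step of module init, with earlier steps unwound in reverse order on failure. A userspace sketch of that ordering, with hypothetical step names standing in for the ecryptfs init stages:

```c
#include <assert.h>

static int sysfs_up, kthread_up, registered;

static int  do_sysfs(void)        { sysfs_up = 1;  return 0; }
static void undo_sysfs(void)      { sysfs_up = 0; }
static int  start_kthread(void)   { kthread_up = 1; return 0; }
static void stop_kthread(void)    { kthread_up = 0; }
static int  register_fs(int fail) { if (fail) return -1; registered = 1; return 0; }

static int module_init_sketch(int fail_register)
{
	int rc;

	rc = do_sysfs();
	if (rc)
		goto out;
	rc = start_kthread();
	if (rc)
		goto out_sysfs;
	rc = register_fs(fail_register);  /* last: nothing is visible before this */
	if (rc)
		goto out_kthread;
	return 0;

out_kthread:
	stop_kthread();
out_sysfs:
	undo_sysfs();
out:
	return rc;
}
```

With registration last, a mount attempt can never race against half-initialized module state, and every failure exit has exactly the completed steps to undo.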
-1
fs/ecryptfs/super.c
··· 184 184 const struct super_operations ecryptfs_sops = { 185 185 .alloc_inode = ecryptfs_alloc_inode, 186 186 .destroy_inode = ecryptfs_destroy_inode, 187 - .drop_inode = generic_drop_inode, 188 187 .statfs = ecryptfs_statfs, 189 188 .remount_fs = NULL, 190 189 .evict_inode = ecryptfs_evict_inode,
+1 -2
fs/efs/super.c
··· 317 317 goto out_no_fs; 318 318 } 319 319 320 - s->s_root = d_alloc_root(root); 320 + s->s_root = d_make_root(root); 321 321 if (!(s->s_root)) { 322 322 printk(KERN_ERR "EFS: get root dentry failed\n"); 323 - iput(root); 324 323 ret = -ENOMEM; 325 324 goto out_no_fs; 326 325 }
+3 -5
fs/exec.c
··· 81 81 static LIST_HEAD(formats); 82 82 static DEFINE_RWLOCK(binfmt_lock); 83 83 84 - int __register_binfmt(struct linux_binfmt * fmt, int insert) 84 + void __register_binfmt(struct linux_binfmt * fmt, int insert) 85 85 { 86 - if (!fmt) 87 - return -EINVAL; 86 + BUG_ON(!fmt); 88 87 write_lock(&binfmt_lock); 89 88 insert ? list_add(&fmt->lh, &formats) : 90 89 list_add_tail(&fmt->lh, &formats); 91 90 write_unlock(&binfmt_lock); 92 - return 0; 93 91 } 94 92 95 93 EXPORT_SYMBOL(__register_binfmt); ··· 1113 1115 bprm->mm = NULL; /* We're using it now */ 1114 1116 1115 1117 set_fs(USER_DS); 1116 - current->flags &= ~(PF_RANDOMIZE | PF_KTHREAD); 1118 + current->flags &= ~(PF_RANDOMIZE | PF_FORKNOEXEC | PF_KTHREAD); 1117 1119 flush_thread(); 1118 1120 current->personality &= ~bprm->per_clear; 1119 1121
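The exec.c hunk makes __register_binfmt() return void: once a NULL format is treated as a programming error (BUG_ON) rather than -EINVAL, registration cannot fail, and each binfmt's init shrinks from `return register_binfmt(&fmt);` to `register_binfmt(&fmt); return 0;` as seen in the aout/elf/em86/flat/script/som hunks above. A userspace sketch of that refactor (stand-in list in place of the kernel's locked formats list):

```c
#include <assert.h>
#include <stddef.h>

struct linux_binfmt { const char *name; struct linux_binfmt *next; };

static struct linux_binfmt *formats;

static void register_binfmt_sketch(struct linux_binfmt *fmt)
{
	assert(fmt);            /* was: if (!fmt) return -EINVAL; */
	fmt->next = formats;
	formats = fmt;
}

static int init_aout_binfmt_sketch(struct linux_binfmt *fmt)
{
	register_binfmt_sketch(fmt);   /* cannot fail any more */
	return 0;
}
```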
+1 -12
fs/exofs/namei.c
··· 143 143 { 144 144 struct inode *inode = old_dentry->d_inode; 145 145 146 - if (inode->i_nlink >= EXOFS_LINK_MAX) 147 - return -EMLINK; 148 - 149 146 inode->i_ctime = CURRENT_TIME; 150 147 inode_inc_link_count(inode); 151 148 ihold(inode); ··· 153 156 static int exofs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) 154 157 { 155 158 struct inode *inode; 156 - int err = -EMLINK; 157 - 158 - if (dir->i_nlink >= EXOFS_LINK_MAX) 159 - goto out; 159 + int err; 160 160 161 161 inode_inc_link_count(dir); 162 162 ··· 269 275 if (err) 270 276 goto out_dir; 271 277 } else { 272 - if (dir_de) { 273 - err = -EMLINK; 274 - if (new_dir->i_nlink >= EXOFS_LINK_MAX) 275 - goto out_dir; 276 - } 277 278 err = exofs_add_link(new_dentry, old_inode); 278 279 if (err) 279 280 goto out_dir;
+2 -2
fs/exofs/super.c
··· 754 754 sb->s_blocksize = EXOFS_BLKSIZE; 755 755 sb->s_blocksize_bits = EXOFS_BLKSHIFT; 756 756 sb->s_maxbytes = MAX_LFS_FILESIZE; 757 + sb->s_max_links = EXOFS_LINK_MAX; 757 758 atomic_set(&sbi->s_curr_pending, 0); 758 759 sb->s_bdev = NULL; 759 760 sb->s_dev = 0; ··· 819 818 ret = PTR_ERR(root); 820 819 goto free_sbi; 821 820 } 822 - sb->s_root = d_alloc_root(root); 821 + sb->s_root = d_make_root(root); 823 822 if (!sb->s_root) { 824 - iput(root); 825 823 EXOFS_ERR("ERROR: get root inode failed\n"); 826 824 ret = -ENOMEM; 827 825 goto free_sbi;
+1 -12
fs/ext2/namei.c
··· 195 195 struct inode *inode = old_dentry->d_inode; 196 196 int err; 197 197 198 - if (inode->i_nlink >= EXT2_LINK_MAX) 199 - return -EMLINK; 200 - 201 198 dquot_initialize(dir); 202 199 203 200 inode->i_ctime = CURRENT_TIME_SEC; ··· 214 217 static int ext2_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode) 215 218 { 216 219 struct inode * inode; 217 - int err = -EMLINK; 218 - 219 - if (dir->i_nlink >= EXT2_LINK_MAX) 220 - goto out; 220 + int err; 221 221 222 222 dquot_initialize(dir); 223 223 ··· 340 346 drop_nlink(new_inode); 341 347 inode_dec_link_count(new_inode); 342 348 } else { 343 - if (dir_de) { 344 - err = -EMLINK; 345 - if (new_dir->i_nlink >= EXT2_LINK_MAX) 346 - goto out_dir; 347 - } 348 349 err = ext2_add_link(new_dentry, old_inode); 349 350 if (err) 350 351 goto out_dir;
+2 -2
fs/ext2/super.c
··· 919 919 } 920 920 921 921 sb->s_maxbytes = ext2_max_size(sb->s_blocksize_bits); 922 + sb->s_max_links = EXT2_LINK_MAX; 922 923 923 924 if (le32_to_cpu(es->s_rev_level) == EXT2_GOOD_OLD_REV) { 924 925 sbi->s_inode_size = EXT2_GOOD_OLD_INODE_SIZE; ··· 1088 1087 goto failed_mount3; 1089 1088 } 1090 1089 1091 - sb->s_root = d_alloc_root(root); 1090 + sb->s_root = d_make_root(root); 1092 1091 if (!sb->s_root) { 1093 - iput(root); 1094 1092 ext2_msg(sb, KERN_ERR, "error: get root inode failed"); 1095 1093 ret = -ENOMEM; 1096 1094 goto failed_mount3;
+1 -2
fs/ext3/super.c
··· 2046 2046 ext3_msg(sb, KERN_ERR, "error: corrupt root inode, run e2fsck"); 2047 2047 goto failed_mount3; 2048 2048 } 2049 - sb->s_root = d_alloc_root(root); 2049 + sb->s_root = d_make_root(root); 2050 2050 if (!sb->s_root) { 2051 2051 ext3_msg(sb, KERN_ERR, "error: get root dentry failed"); 2052 - iput(root); 2053 2052 ret = -ENOMEM; 2054 2053 goto failed_mount3; 2055 2054 }
+4 -4
fs/ext4/super.c
··· 3735 3735 iput(root); 3736 3736 goto failed_mount4; 3737 3737 } 3738 - sb->s_root = d_alloc_root(root); 3738 + sb->s_root = d_make_root(root); 3739 3739 if (!sb->s_root) { 3740 - iput(root); 3741 3740 ext4_msg(sb, KERN_ERR, "get root dentry failed"); 3742 3741 ret = -ENOMEM; 3743 3742 goto failed_mount4; ··· 5055 5056 { 5056 5057 int i, err; 5057 5058 5059 + ext4_li_info = NULL; 5060 + mutex_init(&ext4_li_mtx); 5061 + 5058 5062 ext4_check_flag_values(); 5059 5063 5060 5064 for (i = 0; i < EXT4_WQ_HASH_SZ; i++) { ··· 5096 5094 if (err) 5097 5095 goto out; 5098 5096 5099 - ext4_li_info = NULL; 5100 - mutex_init(&ext4_li_mtx); 5101 5097 return 0; 5102 5098 out: 5103 5099 unregister_as_ext2();
+4 -4
fs/fat/inode.c
··· 1496 1496 root_inode->i_ino = MSDOS_ROOT_INO; 1497 1497 root_inode->i_version = 1; 1498 1498 error = fat_read_root(root_inode); 1499 - if (error < 0) 1499 + if (error < 0) { 1500 + iput(root_inode); 1500 1501 goto out_fail; 1502 + } 1501 1503 error = -ENOMEM; 1502 1504 insert_inode_hash(root_inode); 1503 - sb->s_root = d_alloc_root(root_inode); 1505 + sb->s_root = d_make_root(root_inode); 1504 1506 if (!sb->s_root) { 1505 1507 fat_msg(sb, KERN_ERR, "get root inode failed"); 1506 1508 goto out_fail; ··· 1518 1516 out_fail: 1519 1517 if (fat_inode) 1520 1518 iput(fat_inode); 1521 - if (root_inode) 1522 - iput(root_inode); 1523 1519 unload_nls(sbi->nls_io); 1524 1520 unload_nls(sbi->nls_disk); 1525 1521 if (sbi->options.iocharset != fat_default_iocharset)
+1 -2
fs/file_table.c
··· 204 204 * to write to @file, along with access to write through 205 205 * its vfsmount. 206 206 */ 207 - void drop_file_write_access(struct file *file) 207 + static void drop_file_write_access(struct file *file) 208 208 { 209 209 struct vfsmount *mnt = file->f_path.mnt; 210 210 struct dentry *dentry = file->f_path.dentry; ··· 219 219 mnt_drop_write(mnt); 220 220 file_release_write(file); 221 221 } 222 - EXPORT_SYMBOL_GPL(drop_file_write_access); 223 222 224 223 /* the real guts of fput() - releasing the last reference to file 225 224 */
+1 -2
fs/freevxfs/vxfs_super.c
··· 224 224 ret = PTR_ERR(root); 225 225 goto out; 226 226 } 227 - sbp->s_root = d_alloc_root(root); 227 + sbp->s_root = d_make_root(root); 228 228 if (!sbp->s_root) { 229 - iput(root); 230 229 printk(KERN_WARNING "vxfs: unable to get root dentry.\n"); 231 230 goto out_free_ilist; 232 231 }
+17 -16
fs/fs_struct.c
··· 26 26 { 27 27 struct path old_root; 28 28 29 + path_get_longterm(path); 29 30 spin_lock(&fs->lock); 30 31 write_seqcount_begin(&fs->seq); 31 32 old_root = fs->root; 32 33 fs->root = *path; 33 - path_get_longterm(path); 34 34 write_seqcount_end(&fs->seq); 35 35 spin_unlock(&fs->lock); 36 36 if (old_root.dentry) ··· 45 45 { 46 46 struct path old_pwd; 47 47 48 + path_get_longterm(path); 48 49 spin_lock(&fs->lock); 49 50 write_seqcount_begin(&fs->seq); 50 51 old_pwd = fs->pwd; 51 52 fs->pwd = *path; 52 - path_get_longterm(path); 53 53 write_seqcount_end(&fs->seq); 54 54 spin_unlock(&fs->lock); 55 55 56 56 if (old_pwd.dentry) 57 57 path_put_longterm(&old_pwd); 58 + } 59 + 60 + static inline int replace_path(struct path *p, const struct path *old, const struct path *new) 61 + { 62 + if (likely(p->dentry != old->dentry || p->mnt != old->mnt)) 63 + return 0; 64 + *p = *new; 65 + return 1; 58 66 } 59 67 60 68 void chroot_fs_refs(struct path *old_root, struct path *new_root) ··· 76 68 task_lock(p); 77 69 fs = p->fs; 78 70 if (fs) { 71 + int hits = 0; 79 72 spin_lock(&fs->lock); 80 73 write_seqcount_begin(&fs->seq); 81 - if (fs->root.dentry == old_root->dentry 82 - && fs->root.mnt == old_root->mnt) { 83 - path_get_longterm(new_root); 84 - fs->root = *new_root; 85 - count++; 86 - } 87 - if (fs->pwd.dentry == old_root->dentry 88 - && fs->pwd.mnt == old_root->mnt) { 89 - path_get_longterm(new_root); 90 - fs->pwd = *new_root; 91 - count++; 92 - } 74 + hits += replace_path(&fs->root, old_root, new_root); 75 + hits += replace_path(&fs->pwd, old_root, new_root); 93 76 write_seqcount_end(&fs->seq); 77 + while (hits--) { 78 + count++; 79 + path_get_longterm(new_root); 80 + } 94 81 spin_unlock(&fs->lock); 95 82 } 96 83 task_unlock(p); ··· 110 107 int kill; 111 108 task_lock(tsk); 112 109 spin_lock(&fs->lock); 113 - write_seqcount_begin(&fs->seq); 114 110 tsk->fs = NULL; 115 111 kill = !--fs->users; 116 - write_seqcount_end(&fs->seq); 117 112 spin_unlock(&fs->lock); 118 113 task_unlock(tsk); 119 114 if (kill)
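chroot_fs_refs() previously duplicated the compare-and-replace logic for fs->root and fs->pwd; the new replace_path() helper in the fs/fs_struct.c hunk above does the swap and reports a hit so the matching path_get_longterm() calls can be taken after the seqcount write section. A userspace sketch of the helper's contract (opaque pointers stand in for real mnt/dentry objects, and the kernel's likely() annotation is dropped):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of struct path: just the two identity fields that
 * replace_path() compares. */
struct path { void *mnt; void *dentry; };

/* Swap *p for *new only when it currently equals *old; the return value
 * tells the caller how many references to take afterwards. */
static int replace_path(struct path *p, const struct path *old,
                        const struct path *new)
{
    if (p->dentry != old->dentry || p->mnt != old->mnt)
        return 0;
    *p = *new;
    return 1;
}
```

Deferring the reference bumps until after write_seqcount_end() is the point of the `hits` counter in the real patch: no extra work happens inside the seqcount-protected region.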
+2 -7
fs/fuse/inode.c
··· 988 988 989 989 err = -ENOMEM; 990 990 root = fuse_get_root_inode(sb, d.rootmode); 991 - if (!root) 991 + root_dentry = d_make_root(root); 992 + if (!root_dentry) 992 993 goto err_put_conn; 993 - 994 - root_dentry = d_alloc_root(root); 995 - if (!root_dentry) { 996 - iput(root); 997 - goto err_put_conn; 998 - } 999 994 /* only now - we want root dentry with NULL ->d_op */ 1000 995 sb->s_d_op = &fuse_dentry_operations; 1001 996
+1 -2
fs/gfs2/ops_fstype.c
··· 431 431 fs_err(sdp, "can't read in %s inode: %ld\n", name, PTR_ERR(inode)); 432 432 return PTR_ERR(inode); 433 433 } 434 - dentry = d_alloc_root(inode); 434 + dentry = d_make_root(inode); 435 435 if (!dentry) { 436 436 fs_err(sdp, "can't alloc %s dentry\n", name); 437 - iput(inode); 438 437 return -ENOMEM; 439 438 } 440 439 *dptr = dentry;
+2 -4
fs/hfs/super.c
··· 430 430 431 431 sb->s_d_op = &hfs_dentry_operations; 432 432 res = -ENOMEM; 433 - sb->s_root = d_alloc_root(root_inode); 433 + sb->s_root = d_make_root(root_inode); 434 434 if (!sb->s_root) 435 - goto bail_iput; 435 + goto bail_no_root; 436 436 437 437 /* everything's okay */ 438 438 return 0; 439 439 440 - bail_iput: 441 - iput(root_inode); 442 440 bail_no_root: 443 441 printk(KERN_ERR "hfs: get root inode failed.\n"); 444 442 bail:
+5
fs/hfsplus/hfsplus_fs.h
··· 317 317 318 318 319 319 /* 320 + * hfs+-specific ioctl for making the filesystem bootable 321 + */ 322 + #define HFSPLUS_IOC_BLESS _IO('h', 0x80) 323 + 324 + /* 320 325 * Functions in any *.c used in other files 321 326 */ 322 327
+1 -1
fs/hfsplus/hfsplus_raw.h
··· 117 117 __be32 write_count; 118 118 __be64 encodings_bmp; 119 119 120 - u8 finder_info[32]; 120 + u32 finder_info[8]; 121 121 122 122 struct hfsplus_fork_raw alloc_file; 123 123 struct hfsplus_fork_raw ext_file;
+2
fs/hfsplus/inode.c
··· 193 193 mutex_init(&hip->extents_lock); 194 194 hip->extent_state = 0; 195 195 hip->flags = 0; 196 + hip->userflags = 0; 196 197 set_bit(HFSPLUS_I_RSRC, &hip->flags); 197 198 198 199 err = hfs_find_init(HFSPLUS_SB(sb)->cat_tree, &fd); ··· 401 400 atomic_set(&hip->opencnt, 0); 402 401 hip->extent_state = 0; 403 402 hip->flags = 0; 403 + hip->userflags = 0; 404 404 memset(hip->first_extents, 0, sizeof(hfsplus_extent_rec)); 405 405 memset(hip->cached_extents, 0, sizeof(hfsplus_extent_rec)); 406 406 hip->alloc_blocks = 0;
+34
fs/hfsplus/ioctl.c
··· 20 20 #include <asm/uaccess.h> 21 21 #include "hfsplus_fs.h" 22 22 23 + /* 24 + * "Blessing" an HFS+ filesystem writes metadata to the superblock informing 25 + * the platform firmware which file to boot from 26 + */ 27 + static int hfsplus_ioctl_bless(struct file *file, int __user *user_flags) 28 + { 29 + struct dentry *dentry = file->f_path.dentry; 30 + struct inode *inode = dentry->d_inode; 31 + struct hfsplus_sb_info *sbi = HFSPLUS_SB(inode->i_sb); 32 + struct hfsplus_vh *vh = sbi->s_vhdr; 33 + struct hfsplus_vh *bvh = sbi->s_backup_vhdr; 34 + 35 + if (!capable(CAP_SYS_ADMIN)) 36 + return -EPERM; 37 + 38 + mutex_lock(&sbi->vh_mutex); 39 + 40 + /* Directory containing the bootable system */ 41 + vh->finder_info[0] = bvh->finder_info[0] = 42 + cpu_to_be32(parent_ino(dentry)); 43 + 44 + /* Bootloader */ 45 + vh->finder_info[1] = bvh->finder_info[1] = cpu_to_be32(inode->i_ino); 46 + 47 + /* Per spec, the OS X system folder - same as finder_info[0] here */ 48 + vh->finder_info[5] = bvh->finder_info[5] = 49 + cpu_to_be32(parent_ino(dentry)); 50 + 51 + mutex_unlock(&sbi->vh_mutex); 52 + return 0; 53 + } 54 + 23 55 static int hfsplus_ioctl_getflags(struct file *file, int __user *user_flags) 24 56 { 25 57 struct inode *inode = file->f_path.dentry->d_inode; ··· 140 108 return hfsplus_ioctl_getflags(file, argp); 141 109 case HFSPLUS_IOC_EXT2_SETFLAGS: 142 110 return hfsplus_ioctl_setflags(file, argp); 111 + case HFSPLUS_IOC_BLESS: 112 + return hfsplus_ioctl_bless(file, argp); 143 113 default: 144 114 return -ENOTTY; 145 115 }
+9 -8
fs/hfsplus/super.c
··· 465 465 goto out_put_alloc_file; 466 466 } 467 467 468 + sb->s_d_op = &hfsplus_dentry_operations; 469 + sb->s_root = d_make_root(root); 470 + if (!sb->s_root) { 471 + err = -ENOMEM; 472 + goto out_put_alloc_file; 473 + } 474 + 468 475 str.len = sizeof(HFSP_HIDDENDIR_NAME) - 1; 469 476 str.name = HFSP_HIDDENDIR_NAME; 470 477 err = hfs_find_init(sbi->cat_tree, &fd); ··· 522 515 } 523 516 } 524 517 525 - sb->s_d_op = &hfsplus_dentry_operations; 526 - sb->s_root = d_alloc_root(root); 527 - if (!sb->s_root) { 528 - err = -ENOMEM; 529 - goto out_put_hidden_dir; 530 - } 531 - 532 518 unload_nls(sbi->nls); 533 519 sbi->nls = nls; 534 520 return 0; ··· 529 529 out_put_hidden_dir: 530 530 iput(sbi->hidden_dir); 531 531 out_put_root: 532 - iput(root); 532 + dput(sb->s_root); 533 + sb->s_root = NULL; 533 534 out_put_alloc_file: 534 535 iput(sbi->alloc_file); 535 536 out_close_cat_tree:
+2 -2
fs/hostfs/hostfs_kern.c
··· 966 966 } 967 967 968 968 err = -ENOMEM; 969 - sb->s_root = d_alloc_root(root_inode); 969 + sb->s_root = d_make_root(root_inode); 970 970 if (sb->s_root == NULL) 971 - goto out_put; 971 + goto out; 972 972 973 973 return 0; 974 974
+2 -4
fs/hpfs/super.c
··· 625 625 hpfs_init_inode(root); 626 626 hpfs_read_inode(root); 627 627 unlock_new_inode(root); 628 - s->s_root = d_alloc_root(root); 629 - if (!s->s_root) { 630 - iput(root); 628 + s->s_root = d_make_root(root); 629 + if (!s->s_root) 631 630 goto bail0; 632 - } 633 631 634 632 /* 635 633 * find the root directory's . pointer & finish filling in the inode
+2 -7
fs/hppfs/hppfs.c
··· 726 726 727 727 err = -ENOMEM; 728 728 root_inode = get_inode(sb, dget(proc_mnt->mnt_root)); 729 - if (!root_inode) 730 - goto out_mntput; 731 - 732 - sb->s_root = d_alloc_root(root_inode); 729 + sb->s_root = d_make_root(root_inode); 733 730 if (!sb->s_root) 734 - goto out_iput; 731 + goto out_mntput; 735 732 736 733 return 0; 737 734 738 - out_iput: 739 - iput(root_inode); 740 735 out_mntput: 741 736 mntput(proc_mnt); 742 737 out:
+2 -11
fs/hugetlbfs/inode.c
··· 831 831 static int 832 832 hugetlbfs_fill_super(struct super_block *sb, void *data, int silent) 833 833 { 834 - struct inode * inode; 835 - struct dentry * root; 836 834 int ret; 837 835 struct hugetlbfs_config config; 838 836 struct hugetlbfs_sb_info *sbinfo; ··· 863 865 sb->s_magic = HUGETLBFS_MAGIC; 864 866 sb->s_op = &hugetlbfs_ops; 865 867 sb->s_time_gran = 1; 866 - inode = hugetlbfs_get_root(sb, &config); 867 - if (!inode) 868 + sb->s_root = d_make_root(hugetlbfs_get_root(sb, &config)); 869 + if (!sb->s_root) 868 870 goto out_free; 869 - 870 - root = d_alloc_root(inode); 871 - if (!root) { 872 - iput(inode); 873 - goto out_free; 874 - } 875 - sb->s_root = root; 876 871 return 0; 877 872 out_free: 878 873 kfree(sbinfo);
+4 -24
fs/inode.c
··· 2 2 * (C) 1997 Linus Torvalds 3 3 * (C) 1999 Andrea Arcangeli <andrea@suse.de> (dynamic inode allocation) 4 4 */ 5 + #include <linux/export.h> 5 6 #include <linux/fs.h> 6 7 #include <linux/mm.h> 7 - #include <linux/dcache.h> 8 - #include <linux/init.h> 9 - #include <linux/slab.h> 10 - #include <linux/writeback.h> 11 - #include <linux/module.h> 12 8 #include <linux/backing-dev.h> 13 - #include <linux/wait.h> 14 - #include <linux/rwsem.h> 15 9 #include <linux/hash.h> 16 10 #include <linux/swap.h> 17 11 #include <linux/security.h> 18 - #include <linux/pagemap.h> 19 12 #include <linux/cdev.h> 20 13 #include <linux/bootmem.h> 21 14 #include <linux/fsnotify.h> 22 15 #include <linux/mount.h> 23 - #include <linux/async.h> 24 16 #include <linux/posix_acl.h> 25 17 #include <linux/prefetch.h> 26 - #include <linux/ima.h> 27 - #include <linux/cred.h> 28 18 #include <linux/buffer_head.h> /* for inode_has_buffers */ 29 19 #include <linux/ratelimit.h> 30 20 #include "internal.h" ··· 1359 1369 EXPORT_SYMBOL(generic_delete_inode); 1360 1370 1361 1371 /* 1362 - * Normal UNIX filesystem behaviour: delete the 1363 - * inode when the usage count drops to zero, and 1364 - * i_nlink is zero. 1365 - */ 1366 - int generic_drop_inode(struct inode *inode) 1367 - { 1368 - return !inode->i_nlink || inode_unhashed(inode); 1369 - } 1370 - EXPORT_SYMBOL_GPL(generic_drop_inode); 1371 - 1372 - /* 1373 1372 * Called when we're dropping the last reference 1374 1373 * to an inode. 1375 1374 * ··· 1489 1510 * This function automatically handles read only file systems and media, 1490 1511 * as well as the "noatime" flag and inode specific "noatime" markers. 1491 1512 */ 1492 - void touch_atime(struct vfsmount *mnt, struct dentry *dentry) 1513 + void touch_atime(struct path *path) 1493 1514 { 1494 - struct inode *inode = dentry->d_inode; 1515 + struct vfsmount *mnt = path->mnt; 1516 + struct inode *inode = path->dentry->d_inode; 1495 1517 struct timespec now; 1496 1518 1497 1519 if (inode->i_flags & S_NOATIME)
+1 -2
fs/isofs/inode.c
··· 947 947 s->s_d_op = &isofs_dentry_ops[table]; 948 948 949 949 /* get the root dentry */ 950 - s->s_root = d_alloc_root(inode); 950 + s->s_root = d_make_root(inode); 951 951 if (!(s->s_root)) { 952 - iput(inode); 953 952 error = -ENOMEM; 954 953 goto out_no_inode; 955 954 }
+2 -4
fs/jffs2/fs.c
··· 561 561 ret = -ENOMEM; 562 562 563 563 D1(printk(KERN_DEBUG "jffs2_do_fill_super(): d_alloc_root()\n")); 564 - sb->s_root = d_alloc_root(root_i); 564 + sb->s_root = d_make_root(root_i); 565 565 if (!sb->s_root) 566 - goto out_root_i; 566 + goto out_root; 567 567 568 568 sb->s_maxbytes = 0xFFFFFFFF; 569 569 sb->s_blocksize = PAGE_CACHE_SIZE; ··· 573 573 jffs2_start_garbage_collect_thread(c); 574 574 return 0; 575 575 576 - out_root_i: 577 - iput(root_i); 578 576 out_root: 579 577 jffs2_free_ino_caches(c); 580 578 jffs2_free_raw_node_refs(c);
-13
fs/jfs/namei.c
··· 220 220 221 221 dquot_initialize(dip); 222 222 223 - /* link count overflow on parent directory ? */ 224 - if (dip->i_nlink == JFS_LINK_MAX) { 225 - rc = -EMLINK; 226 - goto out1; 227 - } 228 - 229 223 /* 230 224 * search parent directory for entry/freespace 231 225 * (dtSearch() returns parent directory page pinned) ··· 800 806 jfs_info("jfs_link: %s %s", old_dentry->d_name.name, 801 807 dentry->d_name.name); 802 808 803 - if (ip->i_nlink == JFS_LINK_MAX) 804 - return -EMLINK; 805 - 806 809 dquot_initialize(dir); 807 810 808 811 tid = txBegin(ip->i_sb, 0); ··· 1129 1138 rc = -ENOTEMPTY; 1130 1139 goto out3; 1131 1140 } 1132 - } else if ((new_dir != old_dir) && 1133 - (new_dir->i_nlink == JFS_LINK_MAX)) { 1134 - rc = -EMLINK; 1135 - goto out3; 1136 1141 } 1137 1142 } else if (new_ip) { 1138 1143 IWRITE_LOCK(new_ip, RDWRLOCK_NORMAL);
+9 -3
fs/jfs/super.c
··· 441 441 return -ENOMEM; 442 442 443 443 sb->s_fs_info = sbi; 444 + sb->s_max_links = JFS_LINK_MAX; 444 445 sbi->sb = sb; 445 446 sbi->uid = sbi->gid = sbi->umask = -1; 446 447 ··· 522 521 ret = PTR_ERR(inode); 523 522 goto out_no_rw; 524 523 } 525 - sb->s_root = d_alloc_root(inode); 524 + sb->s_root = d_make_root(inode); 526 525 if (!sb->s_root) 527 526 goto out_no_root; 528 527 ··· 540 539 541 540 out_no_root: 542 541 jfs_err("jfs_read_super: get root dentry failed"); 543 - iput(inode); 544 542 545 543 out_no_rw: 546 544 rc = jfs_umount(sb); ··· 860 860 jfs_proc_init(); 861 861 #endif 862 862 863 - return register_filesystem(&jfs_fs_type); 863 + rc = register_filesystem(&jfs_fs_type); 864 + if (!rc) 865 + return 0; 864 866 867 + #ifdef PROC_FS_JFS 868 + jfs_proc_clean(); 869 + #endif 870 + kthread_stop(jfsSyncThread); 865 871 kill_committask: 866 872 for (i = 0; i < commit_threads; i++) 867 873 kthread_stop(jfsCommitThread[i]);
+3 -5
fs/libfs.c
··· 491 491 inode->i_op = &simple_dir_inode_operations; 492 492 inode->i_fop = &simple_dir_operations; 493 493 set_nlink(inode, 2); 494 - root = d_alloc_root(inode); 495 - if (!root) { 496 - iput(inode); 494 + root = d_make_root(inode); 495 + if (!root) 497 496 return -ENOMEM; 498 - } 499 497 for (i = 0; !files->name || files->name[0]; i++, files++) { 500 498 if (!files->name) 501 499 continue; ··· 534 536 spin_lock(&pin_fs_lock); 535 537 if (unlikely(!*mount)) { 536 538 spin_unlock(&pin_fs_lock); 537 - mnt = vfs_kern_mount(type, 0, type->name, NULL); 539 + mnt = vfs_kern_mount(type, MS_KERNMOUNT, type->name, NULL); 538 540 if (IS_ERR(mnt)) 539 541 return PTR_ERR(mnt); 540 542 spin_lock(&pin_fs_lock);
-3
fs/logfs/dir.c
··· 558 558 { 559 559 struct inode *inode = old_dentry->d_inode; 560 560 561 - if (inode->i_nlink >= LOGFS_LINK_MAX) 562 - return -EMLINK; 563 - 564 561 inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME; 565 562 ihold(inode); 566 563 inc_nlink(inode);
+7 -5
fs/logfs/super.c
··· 315 315 if (IS_ERR(rootdir)) 316 316 goto fail; 317 317 318 - sb->s_root = d_alloc_root(rootdir); 319 - if (!sb->s_root) { 320 - iput(rootdir); 318 + sb->s_root = d_make_root(rootdir); 319 + if (!sb->s_root) 321 320 goto fail; 322 - } 323 321 324 322 /* at that point we know that ->put_super() will be called */ 325 323 super->s_erase_page = alloc_pages(GFP_KERNEL, 0); ··· 540 542 * the filesystem incompatible with 32bit systems. 541 543 */ 542 544 sb->s_maxbytes = (1ull << 43) - 1; 545 + sb->s_max_links = LOGFS_LINK_MAX; 543 546 sb->s_op = &logfs_super_operations; 544 547 sb->s_flags = flags | MS_NOATIME; 545 548 ··· 626 627 if (ret) 627 628 goto out2; 628 629 629 - return register_filesystem(&logfs_fs_type); 630 + ret = register_filesystem(&logfs_fs_type); 631 + if (!ret) 632 + return 0; 633 + logfs_destroy_inode_cache(); 630 634 out2: 631 635 logfs_compr_exit(); 632 636 out1:
+17 -21
fs/minix/inode.c
··· 190 190 sbi->s_version = MINIX_V1; 191 191 sbi->s_dirsize = 16; 192 192 sbi->s_namelen = 14; 193 - sbi->s_link_max = MINIX_LINK_MAX; 193 + s->s_max_links = MINIX_LINK_MAX; 194 194 } else if (s->s_magic == MINIX_SUPER_MAGIC2) { 195 195 sbi->s_version = MINIX_V1; 196 196 sbi->s_dirsize = 32; 197 197 sbi->s_namelen = 30; 198 - sbi->s_link_max = MINIX_LINK_MAX; 198 + s->s_max_links = MINIX_LINK_MAX; 199 199 } else if (s->s_magic == MINIX2_SUPER_MAGIC) { 200 200 sbi->s_version = MINIX_V2; 201 201 sbi->s_nzones = ms->s_zones; 202 202 sbi->s_dirsize = 16; 203 203 sbi->s_namelen = 14; 204 - sbi->s_link_max = MINIX2_LINK_MAX; 204 + s->s_max_links = MINIX2_LINK_MAX; 205 205 } else if (s->s_magic == MINIX2_SUPER_MAGIC2) { 206 206 sbi->s_version = MINIX_V2; 207 207 sbi->s_nzones = ms->s_zones; 208 208 sbi->s_dirsize = 32; 209 209 sbi->s_namelen = 30; 210 - sbi->s_link_max = MINIX2_LINK_MAX; 210 + s->s_max_links = MINIX2_LINK_MAX; 211 211 } else if ( *(__u16 *)(bh->b_data + 24) == MINIX3_SUPER_MAGIC) { 212 212 m3s = (struct minix3_super_block *) bh->b_data; 213 213 s->s_magic = m3s->s_magic; ··· 221 221 sbi->s_dirsize = 64; 222 222 sbi->s_namelen = 60; 223 223 sbi->s_version = MINIX_V3; 224 - sbi->s_link_max = MINIX2_LINK_MAX; 225 224 sbi->s_mount_state = MINIX_VALID_FS; 226 225 sb_set_blocksize(s, m3s->s_blocksize); 226 + s->s_max_links = MINIX2_LINK_MAX; 227 227 } else 228 228 goto out_no_fs; 229 229 ··· 254 254 minix_set_bit(0,sbi->s_imap[0]->b_data); 255 255 minix_set_bit(0,sbi->s_zmap[0]->b_data); 256 256 257 - /* set up enough so that it can read an inode */ 258 - s->s_op = &minix_sops; 259 - root_inode = minix_iget(s, MINIX_ROOT_INO); 260 - if (IS_ERR(root_inode)) { 261 - ret = PTR_ERR(root_inode); 262 - goto out_no_root; 263 - } 264 - 265 257 /* Apparently minix can create filesystems that allocate more blocks for 266 258 * the bitmaps than needed. We simply ignore that, but verify it didn't 267 259 * create one with not enough blocks and bail out if so. ··· 262 270 if (sbi->s_imap_blocks < block) { 263 271 printk("MINIX-fs: file system does not have enough " 264 272 "imap blocks allocated. Refusing to mount\n"); 265 - goto out_iput; 273 + goto out_no_bitmap; 266 274 } 267 275 268 276 block = minix_blocks_needed( ··· 271 279 if (sbi->s_zmap_blocks < block) { 272 280 printk("MINIX-fs: file system does not have enough " 273 281 "zmap blocks allocated. Refusing to mount.\n"); 274 - goto out_iput; 282 + goto out_no_bitmap; 283 + } 284 + 285 + /* set up enough so that it can read an inode */ 286 + s->s_op = &minix_sops; 287 + root_inode = minix_iget(s, MINIX_ROOT_INO); 288 + if (IS_ERR(root_inode)) { 289 + ret = PTR_ERR(root_inode); 290 + goto out_no_root; 275 291 } 276 292 277 293 ret = -ENOMEM; 278 - s->s_root = d_alloc_root(root_inode); 294 + s->s_root = d_make_root(root_inode); 279 295 if (!s->s_root) 280 296 goto out_no_root; 281 297 282 298 if (!(s->s_flags & MS_RDONLY)) { 283 299 if (sbi->s_version != MINIX_V3) /* s_state is now out from V3 sb */ ··· 300 300 "running fsck is recommended\n"); 301 301 302 302 return 0; 303 - 304 - out_iput: 305 - iput(root_inode); 306 - goto out_freemap; 307 303 308 304 out_no_root: 309 305 if (!silent)
-1
fs/minix/minix.h
··· 34 34 unsigned long s_max_size; 35 35 int s_dirsize; 36 36 int s_namelen; 37 - int s_link_max; 38 37 struct buffer_head ** s_imap; 39 38 struct buffer_head ** s_zmap; 40 39 struct buffer_head * s_sbh;
+1 -13
fs/minix/namei.c
··· 94 94 { 95 95 struct inode *inode = old_dentry->d_inode; 96 96 97 - if (inode->i_nlink >= minix_sb(inode->i_sb)->s_link_max) 98 - return -EMLINK; 99 - 100 97 inode->i_ctime = CURRENT_TIME_SEC; 101 98 inode_inc_link_count(inode); 102 99 ihold(inode); ··· 103 106 static int minix_mkdir(struct inode * dir, struct dentry *dentry, umode_t mode) 104 107 { 105 108 struct inode * inode; 106 - int err = -EMLINK; 107 - 108 - if (dir->i_nlink >= minix_sb(dir->i_sb)->s_link_max) 109 - goto out; 109 + int err; 110 110 111 111 inode_inc_link_count(dir); 112 112 ··· 175 181 static int minix_rename(struct inode * old_dir, struct dentry *old_dentry, 176 182 struct inode * new_dir, struct dentry *new_dentry) 177 183 { 178 - struct minix_sb_info * info = minix_sb(old_dir->i_sb); 179 184 struct inode * old_inode = old_dentry->d_inode; 180 185 struct inode * new_inode = new_dentry->d_inode; 181 186 struct page * dir_page = NULL; ··· 212 219 drop_nlink(new_inode); 213 220 inode_dec_link_count(new_inode); 214 221 } else { 215 - if (dir_de) { 216 - err = -EMLINK; 217 - if (new_dir->i_nlink >= info->s_link_max) 218 - goto out_dir; 219 - } 220 222 err = minix_add_link(new_dentry, old_inode); 221 223 if (err) 222 224 goto out_dir;
+14 -1
fs/namei.c
··· 642 642 cond_resched(); 643 643 current->total_link_count++; 644 644 645 - touch_atime(link->mnt, dentry); 645 + touch_atime(link); 646 646 nd_set_link(nd, NULL); 647 647 648 648 error = security_inode_follow_link(link->dentry, nd); ··· 2691 2691 int vfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) 2692 2692 { 2693 2693 int error = may_create(dir, dentry); 2694 + unsigned max_links = dir->i_sb->s_max_links; 2694 2695 2695 2696 if (error) 2696 2697 return error; ··· 2703 2702 error = security_inode_mkdir(dir, dentry, mode); 2704 2703 if (error) 2705 2704 return error; 2705 + 2706 + if (max_links && dir->i_nlink >= max_links) 2707 + return -EMLINK; 2706 2708 2707 2709 error = dir->i_op->mkdir(dir, dentry, mode); 2708 2710 if (!error) ··· 3037 3033 int vfs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *new_dentry) 3038 3034 { 3039 3035 struct inode *inode = old_dentry->d_inode; 3036 + unsigned max_links = dir->i_sb->s_max_links; 3040 3037 int error; 3041 3038 3042 3039 if (!inode) ··· 3068 3063 /* Make sure we don't allow creating hardlink to an unlinked file */ 3069 3064 if (inode->i_nlink == 0) 3070 3065 error = -ENOENT; 3066 + else if (max_links && inode->i_nlink >= max_links) 3067 + error = -EMLINK; 3071 3068 else 3072 3069 error = dir->i_op->link(old_dentry, dir, new_dentry); 3073 3070 mutex_unlock(&inode->i_mutex); ··· 3179 3172 { 3180 3173 int error = 0; 3181 3174 struct inode *target = new_dentry->d_inode; 3175 + unsigned max_links = new_dir->i_sb->s_max_links; 3182 3176 3183 3177 /* 3184 3178 * If we are going to change the parent - check write permissions, ··· 3201 3193 3202 3194 error = -EBUSY; 3203 3195 if (d_mountpoint(old_dentry) || d_mountpoint(new_dentry)) 3196 + goto out; 3197 + 3198 + error = -EMLINK; 3199 + if (max_links && !target && new_dir != old_dir && 3200 + new_dir->i_nlink >= max_links) 3204 3201 goto out; 3205 3202 3206 3203 if (target)
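The fs/namei.c hunks above move the per-filesystem EMLINK checks (previously open-coded in jfs, minix, nilfs2, logfs and friends, as the later hunks show) into the generic vfs_mkdir()/vfs_link()/vfs_rename() paths, keyed off the new `sb->s_max_links` field, where 0 means "no limit". A minimal userspace model of the check — the struct layouts are illustrative, only the two fields the check reads are modeled:

```c
#include <assert.h>
#include <errno.h>

/* Just enough of super_block/inode to express the new generic check:
 * a filesystem sets s_max_links once at mount time (0 = unlimited). */
struct super_block { unsigned s_max_links; };
struct inode { struct super_block *i_sb; unsigned i_nlink; };

/* The VFS-level guard that replaces every filesystem's private
 * "if (dir->i_nlink >= FOO_LINK_MAX) return -EMLINK;". */
static int may_add_link(struct inode *dir)
{
    unsigned max_links = dir->i_sb->s_max_links;

    if (max_links && dir->i_nlink >= max_links)
        return -EMLINK;
    return 0;
}
```

This is why the jfs/minix/nilfs2/logfs hunks in this pile are mostly deletions: each filesystem now only assigns `sb->s_max_links` in its fill_super and drops its hand-rolled checks.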
+2 -4
fs/ncpfs/inode.c
··· 716 716 if (!root_inode) 717 717 goto out_disconnect; 718 718 DPRINTK("ncp_fill_super: root vol=%d\n", NCP_FINFO(root_inode)->volNumber); 719 - sb->s_root = d_alloc_root(root_inode); 719 + sb->s_root = d_make_root(root_inode); 720 720 if (!sb->s_root) 721 - goto out_no_root; 721 + goto out_disconnect; 722 722 return 0; 723 723 724 - out_no_root: 725 - iput(root_inode); 726 724 out_disconnect: 727 725 ncp_lock_server(server); 728 726 ncp_disconnect(server);
+2 -4
fs/nfs/getroot.c
··· 49 49 { 50 50 /* The mntroot acts as the dummy root dentry for this superblock */ 51 51 if (sb->s_root == NULL) { 52 - sb->s_root = d_alloc_root(inode); 53 - if (sb->s_root == NULL) { 54 - iput(inode); 52 + sb->s_root = d_make_root(inode); 53 + if (sb->s_root == NULL) 55 54 return -ENOMEM; 56 - } 57 55 ihold(inode); 58 56 /* 59 57 * Ensure that this dentry is invisible to d_find_alias().
+1 -1
fs/nfsd/fault_inject.c
··· 72 72 { 73 73 unsigned int i; 74 74 struct nfsd_fault_inject_op *op; 75 - mode_t mode = S_IFREG | S_IRUSR | S_IWUSR; 75 + umode_t mode = S_IFREG | S_IRUSR | S_IWUSR; 76 76 77 77 debug_dir = debugfs_create_dir("nfsd", NULL); 78 78 if (!debug_dir)
+6 -5
fs/nfsd/vfs.c
··· 1541 1541 __be32 1542 1542 nfsd_readlink(struct svc_rqst *rqstp, struct svc_fh *fhp, char *buf, int *lenp) 1543 1543 { 1544 - struct dentry *dentry; 1545 1544 struct inode *inode; 1546 1545 mm_segment_t oldfs; 1547 1546 __be32 err; 1548 1547 int host_err; 1548 + struct path path; 1549 1549 1550 1550 err = fh_verify(rqstp, fhp, S_IFLNK, NFSD_MAY_NOP); 1551 1551 if (err) 1552 1552 goto out; 1553 1553 1554 - dentry = fhp->fh_dentry; 1555 - inode = dentry->d_inode; 1554 + path.mnt = fhp->fh_export->ex_path.mnt; 1555 + path.dentry = fhp->fh_dentry; 1556 + inode = path.dentry->d_inode; 1556 1557 1557 1558 err = nfserr_inval; 1558 1559 if (!inode->i_op->readlink) 1559 1560 goto out; 1560 1561 1561 - touch_atime(fhp->fh_export->ex_path.mnt, dentry); 1562 + touch_atime(&path); 1562 1563 /* N.B. Why does this call need a get_fs()?? 1563 1564 * Remove the set_fs and watch the fireworks:-) --okir 1564 1565 */ 1565 1566 1566 1567 oldfs = get_fs(); set_fs(KERNEL_DS); 1567 - host_err = inode->i_op->readlink(dentry, buf, *lenp); 1568 + host_err = inode->i_op->readlink(path.dentry, buf, *lenp); 1568 1569 set_fs(oldfs); 1569 1570 1570 1571 if (host_err < 0)
-11
fs/nilfs2/namei.c
··· 193 193 struct nilfs_transaction_info ti; 194 194 int err; 195 195 196 - if (inode->i_nlink >= NILFS_LINK_MAX) 197 - return -EMLINK; 198 - 199 196 err = nilfs_transaction_begin(dir->i_sb, &ti, 1); 200 197 if (err) 201 198 return err; ··· 215 218 struct inode *inode; 216 219 struct nilfs_transaction_info ti; 217 220 int err; 218 - 219 - if (dir->i_nlink >= NILFS_LINK_MAX) 220 - return -EMLINK; 221 221 222 222 err = nilfs_transaction_begin(dir->i_sb, &ti, 1); 223 223 if (err) ··· 394 400 drop_nlink(new_inode); 395 401 nilfs_mark_inode_dirty(new_inode); 396 402 } else { 397 - if (dir_de) { 398 - err = -EMLINK; 399 - if (new_dir->i_nlink >= NILFS_LINK_MAX) 400 - goto out_dir; 401 - } 402 403 err = nilfs_add_link(new_dentry, old_inode); 403 404 if (err) 404 405 goto out_dir;
+2 -2
fs/nilfs2/super.c
··· 917 917 if (root->cno == NILFS_CPTREE_CURRENT_CNO) { 918 918 dentry = d_find_alias(inode); 919 919 if (!dentry) { 920 - dentry = d_alloc_root(inode); 920 + dentry = d_make_root(inode); 921 921 if (!dentry) { 922 - iput(inode); 923 922 ret = -ENOMEM; 924 923 goto failed_dentry; 925 924 } ··· 1058 1059 sb->s_export_op = &nilfs_export_ops; 1059 1060 sb->s_root = NULL; 1060 1061 sb->s_time_gran = 1; 1062 + sb->s_max_links = NILFS_LINK_MAX; 1061 1063 1062 1064 bdi = sb->s_bdev->bd_inode->i_mapping->backing_dev_info; 1063 1065 sb->s_bdi = bdi ? : &default_backing_dev_info;
+6 -3
fs/ntfs/super.c
··· 2908 2908 ntfs_error(sb, "Failed to load system files."); 2909 2909 goto unl_upcase_iput_tmp_ino_err_out_now; 2910 2910 } 2911 - if ((sb->s_root = d_alloc_root(vol->root_ino))) { 2912 - /* We grab a reference, simulating an ntfs_iget(). */ 2913 - ihold(vol->root_ino); 2911 + 2912 + /* We grab a reference, simulating an ntfs_iget(). */ 2913 + ihold(vol->root_ino); 2914 + if ((sb->s_root = d_make_root(vol->root_ino))) { 2914 2915 ntfs_debug("Exiting, status successful."); 2915 2916 /* Release the default upcase if it has no users. */ 2916 2917 mutex_lock(&ntfs_lock); ··· 3159 3158 } 3160 3159 printk(KERN_CRIT "NTFS: Failed to register NTFS filesystem driver!\n"); 3161 3160 3161 + /* Unregister the ntfs sysctls. */ 3162 + ntfs_sysctl(0); 3162 3163 sysctl_err_out: 3163 3164 kmem_cache_destroy(ntfs_big_inode_cache); 3164 3165 big_inode_err_out:
+2 -12
fs/ocfs2/dlmfs/dlmfs.c
··· 582 582 void * data, 583 583 int silent) 584 584 { 585 - struct inode * inode; 586 - struct dentry * root; 587 - 588 585 sb->s_maxbytes = MAX_LFS_FILESIZE; 589 586 sb->s_blocksize = PAGE_CACHE_SIZE; 590 587 sb->s_blocksize_bits = PAGE_CACHE_SHIFT; 591 588 sb->s_magic = DLMFS_MAGIC; 592 589 sb->s_op = &dlmfs_ops; 593 - inode = dlmfs_get_root_inode(sb); 594 - if (!inode) 590 + sb->s_root = d_make_root(dlmfs_get_root_inode(sb)); 591 + if (!sb->s_root) 595 592 return -ENOMEM; 596 - 597 - root = d_alloc_root(inode); 598 - if (!root) { 599 - iput(inode); 600 - return -ENOMEM; 601 - } 602 - sb->s_root = root; 603 593 return 0; 604 594 } 605 595
+25 -26
fs/ocfs2/super.c
··· 1154 1154 } 1155 1155 1156 1156 status = ocfs2_mount_volume(sb); 1157 - if (osb->root_inode) 1158 - inode = igrab(osb->root_inode); 1159 - 1160 1157 if (status < 0) 1161 1158 goto read_super_error; 1159 + 1160 + if (osb->root_inode) 1161 + inode = igrab(osb->root_inode); 1162 1162 1163 1163 if (!inode) { 1164 1164 status = -EIO; ··· 1166 1166 goto read_super_error; 1167 1167 } 1168 1168 1169 - root = d_alloc_root(inode); 1169 + root = d_make_root(inode); 1170 1170 if (!root) { 1171 1171 status = -ENOMEM; 1172 1172 mlog_errno(status); ··· 1219 1219 1220 1220 read_super_error: 1221 1221 brelse(bh); 1222 - 1223 - if (inode) 1224 - iput(inode); 1225 1222 1226 1223 if (osb) { 1227 1224 atomic_set(&osb->vol_state, VOLUME_DISABLED); ··· 1624 1627 init_waitqueue_head(&ocfs2__ioend_wq[i]); 1625 1628 1626 1629 status = init_ocfs2_uptodate_cache(); 1627 - if (status < 0) { 1628 - mlog_errno(status); 1629 - goto leave; 1630 - } 1630 + if (status < 0) 1631 + goto out1; 1631 1632 1632 1633 status = ocfs2_initialize_mem_caches(); 1633 - if (status < 0) { 1634 - mlog_errno(status); 1635 - goto leave; 1636 - } 1634 + if (status < 0) 1635 + goto out2; 1637 1636 1638 1637 ocfs2_wq = create_singlethread_workqueue("ocfs2_wq"); 1639 1638 if (!ocfs2_wq) { 1640 1639 status = -ENOMEM; 1641 - goto leave; 1640 + goto out3; 1642 1641 } 1643 1642 1644 1643 ocfs2_debugfs_root = debugfs_create_dir("ocfs2", NULL); ··· 1646 1653 ocfs2_set_locking_protocol(); 1647 1654 1648 1655 status = register_quota_format(&ocfs2_quota_format); 1649 - leave: 1650 - if (status < 0) { 1651 - ocfs2_free_mem_caches(); 1652 - exit_ocfs2_uptodate_cache(); 1653 - mlog_errno(status); 1654 - } 1656 + if (status < 0) 1657 + goto out4; 1658 + status = register_filesystem(&ocfs2_fs_type); 1659 + if (!status) 1660 + return 0; 1655 1661 1656 - if (status >= 0) { 1657 - return register_filesystem(&ocfs2_fs_type); 1658 - } else 1659 - return -1; 1662 + unregister_quota_format(&ocfs2_quota_format); 1663 + out4: 1664 + destroy_workqueue(ocfs2_wq); 1665 + debugfs_remove(ocfs2_debugfs_root); 1666 + out3: 1667 + ocfs2_free_mem_caches(); 1668 + out2: 1669 + exit_ocfs2_uptodate_cache(); 1670 + out1: 1671 + mlog_errno(status); 1672 + return status; 1660 1673 } 1661 1674 1662 1675 static void __exit ocfs2_exit(void)
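The reworked ocfs2_init() above replaces the single `leave:` label with the standard kernel unwind ladder: one label per setup step, jumped to in reverse order, with register_filesystem() moved last so a failure can no longer leave half-initialized module state behind (the same fix the ntfs, logfs, jfs and ecryptfs commits in this pile make). A toy userspace model of the pattern — the bitmask "resources" are purely illustrative:

```c
#include <assert.h>

static int state; /* bitmask of "resources" currently held */

static int grab(int bit, int fail)
{
    if (fail)
        return -1;
    state |= bit;
    return 0;
}

static void drop(int bit)
{
    state &= ~bit;
}

/* Each step that fails jumps to the label that undoes exactly what was
 * already set up, in reverse order; nothing is leaked on any path. */
static int init_all(int fail_at)
{
    if (grab(1, fail_at == 1))
        goto out1;
    if (grab(2, fail_at == 2))
        goto out2;
    if (grab(4, fail_at == 3))
        goto out3;
    return 0; /* fully initialized */
out3:
    drop(2);
out2:
    drop(1);
out1:
    return -1;
}
```

The design point is that the labels fall through into each other, so each failure site names only the *next* cleanup to run and the rest follows automatically.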
+2 -4
fs/omfs/inode.c
··· 539 539 goto out_brelse_bh2; 540 540 } 541 541 542 - sb->s_root = d_alloc_root(root); 543 - if (!sb->s_root) { 544 - iput(root); 542 + sb->s_root = d_make_root(root); 543 + if (!sb->s_root) 545 544 goto out_brelse_bh2; 546 - } 547 545 printk(KERN_DEBUG "omfs: Mounted volume %s\n", omfs_rb->r_name); 548 546 549 547 ret = 0;
+1 -2
fs/openpromfs/inode.c
··· 408 408 oi->type = op_inode_node; 409 409 oi->u.node = of_find_node_by_path("/"); 410 410 411 - s->s_root = d_alloc_root(root_inode); 411 + s->s_root = d_make_root(root_inode); 412 412 if (!s->s_root) 413 413 goto out_no_root_dentry; 414 414 return 0; 415 415 416 416 out_no_root_dentry: 417 - iput(root_inode); 418 417 ret = -ENOMEM; 419 418 out_no_root: 420 419 printk("openprom_fill_super: get root inode failed\n");
+3 -13
fs/proc/inode.c
··· 486 486 487 487 int proc_fill_super(struct super_block *s) 488 488 { 489 - struct inode * root_inode; 490 - 491 489 s->s_flags |= MS_NODIRATIME | MS_NOSUID | MS_NOEXEC; 492 490 s->s_blocksize = 1024; 493 491 s->s_blocksize_bits = 10; ··· 494 496 s->s_time_gran = 1; 495 497 496 498 pde_get(&proc_root); 497 - root_inode = proc_get_inode(s, &proc_root); 498 - if (!root_inode) 499 - goto out_no_root; 500 - root_inode->i_uid = 0; 501 - root_inode->i_gid = 0; 502 - s->s_root = d_alloc_root(root_inode); 503 - if (!s->s_root) 504 - goto out_no_root; 505 - return 0; 499 + s->s_root = d_make_root(proc_get_inode(s, &proc_root)); 500 + if (s->s_root) 501 + return 0; 506 502 507 - out_no_root: 508 503 printk("proc_read_super: get root inode failed\n"); 509 - iput(root_inode); 510 504 pde_put(&proc_root); 511 505 return -ENOMEM; 512 506 }
+7 -18
fs/pstore/inode.c
··· 278 278 279 279 int pstore_fill_super(struct super_block *sb, void *data, int silent) 280 280 { 281 - struct inode *inode = NULL; 282 - struct dentry *root; 283 - int err; 281 + struct inode *inode; 284 282 285 283 save_mount_options(sb, data); 286 284 ··· 294 296 parse_options(data); 295 297 296 298 inode = pstore_get_inode(sb, NULL, S_IFDIR | 0755, 0); 297 - if (!inode) { 298 - err = -ENOMEM; 299 - goto fail; 299 + if (inode) { 300 + /* override ramfs "dir" options so we catch unlink(2) */ 301 + inode->i_op = &pstore_dir_inode_operations; 300 302 } 301 - /* override ramfs "dir" options so we catch unlink(2) */ 302 - inode->i_op = &pstore_dir_inode_operations; 303 - 304 - root = d_alloc_root(inode); 305 - sb->s_root = root; 306 - if (!root) { 307 - err = -ENOMEM; 308 - goto fail; 309 - } 303 + sb->s_root = d_make_root(inode); 304 + if (!sb->s_root) 305 + return -ENOMEM; 310 306 311 307 pstore_get_records(0); 312 308 313 309 return 0; 314 - fail: 315 - iput(inode); 316 - return err; 317 310 } 318 311 319 312 static struct dentry *pstore_mount(struct file_system_type *fs_type,
+17 -71
fs/qnx4/inode.c
··· 52 52 return 0; 53 53 } 54 54 55 - static struct buffer_head *qnx4_getblk(struct inode *inode, int nr, 56 - int create) 57 - { 58 - struct buffer_head *result = NULL; 59 - 60 - if ( nr >= 0 ) 61 - nr = qnx4_block_map( inode, nr ); 62 - if (nr) { 63 - result = sb_getblk(inode->i_sb, nr); 64 - return result; 65 - } 66 - return NULL; 67 - } 68 - 69 - struct buffer_head *qnx4_bread(struct inode *inode, int block, int create) 70 - { 71 - struct buffer_head *bh; 72 - 73 - bh = qnx4_getblk(inode, block, create); 74 - if (!bh || buffer_uptodate(bh)) { 75 - return bh; 76 - } 77 - ll_rw_block(READ, 1, &bh); 78 - wait_on_buffer(bh); 79 - if (buffer_uptodate(bh)) { 80 - return bh; 81 - } 82 - brelse(bh); 83 - 84 - return NULL; 85 - } 86 - 87 55 static int qnx4_get_block( struct inode *inode, sector_t iblock, struct buffer_head *bh, int create ) 88 56 { 89 57 unsigned long phys; ··· 66 98 return 0; 67 99 } 68 100 101 + static inline u32 try_extent(qnx4_xtnt_t *extent, u32 *offset) 102 + { 103 + u32 size = le32_to_cpu(extent->xtnt_size); 104 + if (*offset < size) 105 + return le32_to_cpu(extent->xtnt_blk) + *offset - 1; 106 + *offset -= size; 107 + return 0; 108 + } 109 + 69 110 unsigned long qnx4_block_map( struct inode *inode, long iblock ) 70 111 { 71 112 int ix; 72 - long offset, i_xblk; 73 - unsigned long block = 0; 113 + long i_xblk; 74 114 struct buffer_head *bh = NULL; 75 115 struct qnx4_xblk *xblk = NULL; 76 116 struct qnx4_inode_entry *qnx4_inode = qnx4_raw_inode(inode); 77 117 u16 nxtnt = le16_to_cpu(qnx4_inode->di_num_xtnts); 118 + u32 offset = iblock; 119 + u32 block = try_extent(&qnx4_inode->di_first_xtnt, &offset); 78 120 79 - if ( iblock < le32_to_cpu(qnx4_inode->di_first_xtnt.xtnt_size) ) { 121 + if (block) { 80 122 // iblock is in the first extent. This is easy. 81 - block = le32_to_cpu(qnx4_inode->di_first_xtnt.xtnt_blk) + iblock - 1; 82 123 } else { 83 124 // iblock is beyond first extent. We have to follow the extent chain. 
84 125 i_xblk = le32_to_cpu(qnx4_inode->di_xblk); 85 - offset = iblock - le32_to_cpu(qnx4_inode->di_first_xtnt.xtnt_size); 86 126 ix = 0; 87 127 while ( --nxtnt > 0 ) { 88 128 if ( ix == 0 ) { ··· 106 130 return -EIO; 107 131 } 108 132 } 109 - if ( offset < le32_to_cpu(xblk->xblk_xtnts[ix].xtnt_size) ) { 133 + block = try_extent(&xblk->xblk_xtnts[ix], &offset); 134 + if (block) { 110 135 // got it! 111 - block = le32_to_cpu(xblk->xblk_xtnts[ix].xtnt_blk) + offset - 1; 112 136 break; 113 137 } 114 - offset -= le32_to_cpu(xblk->xblk_xtnts[ix].xtnt_size); 115 138 if ( ++ix >= xblk->xblk_num_xtnts ) { 116 139 i_xblk = le32_to_cpu(xblk->xblk_next_xblk); 117 140 ix = 0; ··· 235 260 } 236 261 237 262 ret = -ENOMEM; 238 - s->s_root = d_alloc_root(root); 263 + s->s_root = d_make_root(root); 239 264 if (s->s_root == NULL) 240 - goto outi; 265 + goto outb; 241 266 242 267 brelse(bh); 243 268 return 0; 244 269 245 - outi: 246 - iput(root); 247 270 outb: 248 271 kfree(qs->BitMap); 249 272 out: ··· 261 288 return; 262 289 } 263 290 264 - static int qnx4_writepage(struct page *page, struct writeback_control *wbc) 265 - { 266 - return block_write_full_page(page,qnx4_get_block, wbc); 267 - } 268 - 269 291 static int qnx4_readpage(struct file *file, struct page *page) 270 292 { 271 293 return block_read_full_page(page,qnx4_get_block); 272 294 } 273 295 274 - static int qnx4_write_begin(struct file *file, struct address_space *mapping, 275 - loff_t pos, unsigned len, unsigned flags, 276 - struct page **pagep, void **fsdata) 277 - { 278 - struct qnx4_inode_info *qnx4_inode = qnx4_i(mapping->host); 279 - int ret; 280 - 281 - *pagep = NULL; 282 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 283 - qnx4_get_block, 284 - &qnx4_inode->mmu_private); 285 - if (unlikely(ret)) { 286 - loff_t isize = mapping->host->i_size; 287 - if (pos + len > isize) 288 - vmtruncate(mapping->host, isize); 289 - } 290 - 291 - return ret; 292 - } 293 296 static sector_t qnx4_bmap(struct 
address_space *mapping, sector_t block) 294 297 { 295 298 return generic_block_bmap(mapping,block,qnx4_get_block); 296 299 } 297 300 static const struct address_space_operations qnx4_aops = { 298 301 .readpage = qnx4_readpage, 299 - .writepage = qnx4_writepage, 300 - .write_begin = qnx4_write_begin, 301 - .write_end = generic_write_end, 302 302 .bmap = qnx4_bmap 303 303 }; 304 304
+3 -6
fs/qnx4/namei.c
··· 39 39 } else { 40 40 namelen = QNX4_SHORT_NAME_MAX; 41 41 } 42 - /* "" means "." ---> so paths like "/usr/lib//libc.a" work */ 43 - if (!len && (de->di_fname[0] == '.') && (de->di_fname[1] == '\0')) { 44 - return 1; 45 - } 46 42 thislen = strlen( de->di_fname ); 47 43 if ( thislen > namelen ) 48 44 thislen = namelen; ··· 68 72 block = offset = blkofs = 0; 69 73 while (blkofs * QNX4_BLOCK_SIZE + offset < dir->i_size) { 70 74 if (!bh) { 71 - bh = qnx4_bread(dir, blkofs, 0); 75 + block = qnx4_block_map(dir, blkofs); 76 + if (block) 77 + bh = sb_bread(dir->i_sb, block); 72 78 if (!bh) { 73 79 blkofs++; 74 80 continue; ··· 78 80 } 79 81 *res_dir = (struct qnx4_inode_entry *) (bh->b_data + offset); 80 82 if (qnx4_match(len, name, bh, &offset)) { 81 - block = qnx4_block_map( dir, blkofs ); 82 83 *ino = block * QNX4_INODES_PER_BLOCK + 83 84 (offset / QNX4_DIR_ENTRY_SIZE) - 1; 84 85 return bh;
-2
fs/qnx4/qnx4.h
··· 27 27 extern unsigned long qnx4_count_free_blocks(struct super_block *sb); 28 28 extern unsigned long qnx4_block_map(struct inode *inode, long iblock); 29 29 30 - extern struct buffer_head *qnx4_bread(struct inode *, int, int); 31 - 32 30 extern const struct inode_operations qnx4_dir_inode_operations; 33 31 extern const struct file_operations qnx4_dir_operations; 34 32 extern int qnx4_is_free(struct super_block *sb, long block);
+26
fs/qnx6/Kconfig
··· 1 + config QNX6FS_FS 2 + tristate "QNX6 file system support (read only)" 3 + depends on BLOCK && CRC32 4 + help 5 + This is the file system used by the real-time operating systems 6 + QNX 6 (also called QNX RTP). 7 + Further information is available at <http://www.qnx.com/>. 8 + Say Y if you intend to mount QNX hard disks or floppies formatted 9 + with a mkqnx6fs. 10 + However, keep in mind that this currently is a readonly driver! 11 + 12 + To compile this file system support as a module, choose M here: the 13 + module will be called qnx6. 14 + 15 + If you don't know whether you need it, then you don't need it: 16 + answer N. 17 + 18 + config QNX6FS_DEBUG 19 + bool "QNX6 debugging information" 20 + depends on QNX6FS_FS 21 + help 22 + Turns on extended debugging output. 23 + 24 + If you are not a developer working on the QNX6FS, you probably don't 25 + want this: 26 + answer N.

+7
fs/qnx6/Makefile
··· 1 + # 2 + # Makefile for the linux qnx6-filesystem routines. 3 + # 4 + 5 + obj-$(CONFIG_QNX6FS_FS) += qnx6.o 6 + 7 + qnx6-objs := inode.o dir.o namei.o super_mmi.o
+8
fs/qnx6/README
··· 1 + 2 + This is a snapshot of the QNX6 filesystem for Linux. 3 + Please send diffs and remarks to <chaosman@ontika.net> . 4 + 5 + Credits : 6 + 7 + Al Viro <viro@ZenIV.linux.org.uk> (endless patience with me & support ;)) 8 + Kai Bankett <chaosman@ontika.net> (Maintainer)
+291
fs/qnx6/dir.c
··· 1 + /* 2 + * QNX6 file system, Linux implementation. 3 + * 4 + * Version : 1.0.0 5 + * 6 + * History : 7 + * 8 + * 01-02-2012 by Kai Bankett (chaosman@ontika.net) : first release. 9 + * 16-02-2012 pagemap extension by Al Viro 10 + * 11 + */ 12 + 13 + #include "qnx6.h" 14 + 15 + static unsigned qnx6_lfile_checksum(char *name, unsigned size) 16 + { 17 + unsigned crc = 0; 18 + char *end = name + size; 19 + while (name < end) { 20 + crc = ((crc >> 1) + *(name++)) ^ 21 + ((crc & 0x00000001) ? 0x80000000 : 0); 22 + } 23 + return crc; 24 + } 25 + 26 + static struct page *qnx6_get_page(struct inode *dir, unsigned long n) 27 + { 28 + struct address_space *mapping = dir->i_mapping; 29 + struct page *page = read_mapping_page(mapping, n, NULL); 30 + if (!IS_ERR(page)) 31 + kmap(page); 32 + return page; 33 + } 34 + 35 + static inline unsigned long dir_pages(struct inode *inode) 36 + { 37 + return (inode->i_size+PAGE_CACHE_SIZE-1)>>PAGE_CACHE_SHIFT; 38 + } 39 + 40 + static unsigned last_entry(struct inode *inode, unsigned long page_nr) 41 + { 42 + unsigned long last_byte = inode->i_size; 43 + last_byte -= page_nr << PAGE_CACHE_SHIFT; 44 + if (last_byte > PAGE_CACHE_SIZE) 45 + last_byte = PAGE_CACHE_SIZE; 46 + return last_byte / QNX6_DIR_ENTRY_SIZE; 47 + } 48 + 49 + static struct qnx6_long_filename *qnx6_longname(struct super_block *sb, 50 + struct qnx6_long_dir_entry *de, 51 + struct page **p) 52 + { 53 + struct qnx6_sb_info *sbi = QNX6_SB(sb); 54 + u32 s = fs32_to_cpu(sbi, de->de_long_inode); /* in block units */ 55 + u32 n = s >> (PAGE_CACHE_SHIFT - sb->s_blocksize_bits); /* in pages */ 56 + /* within page */ 57 + u32 offs = (s << sb->s_blocksize_bits) & ~PAGE_CACHE_MASK; 58 + struct address_space *mapping = sbi->longfile->i_mapping; 59 + struct page *page = read_mapping_page(mapping, n, NULL); 60 + if (IS_ERR(page)) 61 + return ERR_CAST(page); 62 + kmap(*p = page); 63 + return (struct qnx6_long_filename *)(page_address(page) + offs); 64 + } 65 + 66 + static int 
qnx6_dir_longfilename(struct inode *inode, 67 + struct qnx6_long_dir_entry *de, 68 + void *dirent, loff_t pos, 69 + unsigned de_inode, filldir_t filldir) 70 + { 71 + struct qnx6_long_filename *lf; 72 + struct super_block *s = inode->i_sb; 73 + struct qnx6_sb_info *sbi = QNX6_SB(s); 74 + struct page *page; 75 + int lf_size; 76 + 77 + if (de->de_size != 0xff) { 78 + /* error - long filename entries always have size 0xff 79 + in direntry */ 80 + printk(KERN_ERR "qnx6: invalid direntry size (%i).\n", 81 + de->de_size); 82 + return 0; 83 + } 84 + lf = qnx6_longname(s, de, &page); 85 + if (IS_ERR(lf)) { 86 + printk(KERN_ERR "qnx6:Error reading longname\n"); 87 + return 0; 88 + } 89 + 90 + lf_size = fs16_to_cpu(sbi, lf->lf_size); 91 + 92 + if (lf_size > QNX6_LONG_NAME_MAX) { 93 + QNX6DEBUG((KERN_INFO "file %s\n", lf->lf_fname)); 94 + printk(KERN_ERR "qnx6:Filename too long (%i)\n", lf_size); 95 + qnx6_put_page(page); 96 + return 0; 97 + } 98 + 99 + /* calc & validate longfilename checksum 100 + mmi 3g filesystem does not have that checksum */ 101 + if (!test_opt(s, MMI_FS) && fs32_to_cpu(sbi, de->de_checksum) != 102 + qnx6_lfile_checksum(lf->lf_fname, lf_size)) 103 + printk(KERN_INFO "qnx6: long filename checksum error.\n"); 104 + 105 + QNX6DEBUG((KERN_INFO "qnx6_readdir:%.*s inode:%u\n", 106 + lf_size, lf->lf_fname, de_inode)); 107 + if (filldir(dirent, lf->lf_fname, lf_size, pos, de_inode, 108 + DT_UNKNOWN) < 0) { 109 + qnx6_put_page(page); 110 + return 0; 111 + } 112 + 113 + qnx6_put_page(page); 114 + /* success */ 115 + return 1; 116 + } 117 + 118 + static int qnx6_readdir(struct file *filp, void *dirent, filldir_t filldir) 119 + { 120 + struct inode *inode = filp->f_path.dentry->d_inode; 121 + struct super_block *s = inode->i_sb; 122 + struct qnx6_sb_info *sbi = QNX6_SB(s); 123 + loff_t pos = filp->f_pos & (QNX6_DIR_ENTRY_SIZE - 1); 124 + unsigned long npages = dir_pages(inode); 125 + unsigned long n = pos >> PAGE_CACHE_SHIFT; 126 + unsigned start = (pos & 
~PAGE_CACHE_MASK) / QNX6_DIR_ENTRY_SIZE; 127 + bool done = false; 128 + 129 + if (filp->f_pos >= inode->i_size) 130 + return 0; 131 + 132 + for ( ; !done && n < npages; n++, start = 0) { 133 + struct page *page = qnx6_get_page(inode, n); 134 + int limit = last_entry(inode, n); 135 + struct qnx6_dir_entry *de; 136 + int i = start; 137 + 138 + if (IS_ERR(page)) { 139 + printk(KERN_ERR "qnx6_readdir: read failed\n"); 140 + filp->f_pos = (n + 1) << PAGE_CACHE_SHIFT; 141 + return PTR_ERR(page); 142 + } 143 + de = ((struct qnx6_dir_entry *)page_address(page)) + start; 144 + for (; i < limit; i++, de++, pos += QNX6_DIR_ENTRY_SIZE) { 145 + int size = de->de_size; 146 + u32 no_inode = fs32_to_cpu(sbi, de->de_inode); 147 + 148 + if (!no_inode || !size) 149 + continue; 150 + 151 + if (size > QNX6_SHORT_NAME_MAX) { 152 + /* long filename detected 153 + get the filename from long filename 154 + structure / block */ 155 + if (!qnx6_dir_longfilename(inode, 156 + (struct qnx6_long_dir_entry *)de, 157 + dirent, pos, no_inode, 158 + filldir)) { 159 + done = true; 160 + break; 161 + } 162 + } else { 163 + QNX6DEBUG((KERN_INFO "qnx6_readdir:%.*s" 164 + " inode:%u\n", size, de->de_fname, 165 + no_inode)); 166 + if (filldir(dirent, de->de_fname, size, 167 + pos, no_inode, DT_UNKNOWN) 168 + < 0) { 169 + done = true; 170 + break; 171 + } 172 + } 173 + } 174 + qnx6_put_page(page); 175 + } 176 + filp->f_pos = pos; 177 + return 0; 178 + } 179 + 180 + /* 181 + * check if the long filename is correct. 
182 + */ 183 + static unsigned qnx6_long_match(int len, const char *name, 184 + struct qnx6_long_dir_entry *de, struct inode *dir) 185 + { 186 + struct super_block *s = dir->i_sb; 187 + struct qnx6_sb_info *sbi = QNX6_SB(s); 188 + struct page *page; 189 + int thislen; 190 + struct qnx6_long_filename *lf = qnx6_longname(s, de, &page); 191 + 192 + if (IS_ERR(lf)) 193 + return 0; 194 + 195 + thislen = fs16_to_cpu(sbi, lf->lf_size); 196 + if (len != thislen) { 197 + qnx6_put_page(page); 198 + return 0; 199 + } 200 + if (memcmp(name, lf->lf_fname, len) == 0) { 201 + qnx6_put_page(page); 202 + return fs32_to_cpu(sbi, de->de_inode); 203 + } 204 + qnx6_put_page(page); 205 + return 0; 206 + } 207 + 208 + /* 209 + * check if the filename is correct. 210 + */ 211 + static unsigned qnx6_match(struct super_block *s, int len, const char *name, 212 + struct qnx6_dir_entry *de) 213 + { 214 + struct qnx6_sb_info *sbi = QNX6_SB(s); 215 + if (memcmp(name, de->de_fname, len) == 0) 216 + return fs32_to_cpu(sbi, de->de_inode); 217 + return 0; 218 + } 219 + 220 + 221 + unsigned qnx6_find_entry(int len, struct inode *dir, const char *name, 222 + struct page **res_page) 223 + { 224 + struct super_block *s = dir->i_sb; 225 + struct qnx6_inode_info *ei = QNX6_I(dir); 226 + struct page *page = NULL; 227 + unsigned long start, n; 228 + unsigned long npages = dir_pages(dir); 229 + unsigned ino; 230 + struct qnx6_dir_entry *de; 231 + struct qnx6_long_dir_entry *lde; 232 + 233 + *res_page = NULL; 234 + 235 + if (npages == 0) 236 + return 0; 237 + start = ei->i_dir_start_lookup; 238 + if (start >= npages) 239 + start = 0; 240 + n = start; 241 + 242 + do { 243 + page = qnx6_get_page(dir, n); 244 + if (!IS_ERR(page)) { 245 + int limit = last_entry(dir, n); 246 + int i; 247 + 248 + de = (struct qnx6_dir_entry *)page_address(page); 249 + for (i = 0; i < limit; i++, de++) { 250 + if (len <= QNX6_SHORT_NAME_MAX) { 251 + /* short filename */ 252 + if (len != de->de_size) 253 + continue; 254 + ino = 
qnx6_match(s, len, name, de); 255 + if (ino) 256 + goto found; 257 + } else if (de->de_size == 0xff) { 258 + /* deal with long filename */ 259 + lde = (struct qnx6_long_dir_entry *)de; 260 + ino = qnx6_long_match(len, 261 + name, lde, dir); 262 + if (ino) 263 + goto found; 264 + } else 265 + printk(KERN_ERR "qnx6: undefined " 266 + "filename size in inode.\n"); 267 + } 268 + qnx6_put_page(page); 269 + } 270 + 271 + if (++n >= npages) 272 + n = 0; 273 + } while (n != start); 274 + return 0; 275 + 276 + found: 277 + *res_page = page; 278 + ei->i_dir_start_lookup = n; 279 + return ino; 280 + } 281 + 282 + const struct file_operations qnx6_dir_operations = { 283 + .llseek = generic_file_llseek, 284 + .read = generic_read_dir, 285 + .readdir = qnx6_readdir, 286 + .fsync = generic_file_fsync, 287 + }; 288 + 289 + const struct inode_operations qnx6_dir_inode_operations = { 290 + .lookup = qnx6_lookup, 291 + };
+698
fs/qnx6/inode.c
··· 1 + /* 2 + * QNX6 file system, Linux implementation. 3 + * 4 + * Version : 1.0.0 5 + * 6 + * History : 7 + * 8 + * 01-02-2012 by Kai Bankett (chaosman@ontika.net) : first release. 9 + * 16-02-2012 pagemap extension by Al Viro 10 + * 11 + */ 12 + 13 + #include <linux/module.h> 14 + #include <linux/init.h> 15 + #include <linux/slab.h> 16 + #include <linux/highuid.h> 17 + #include <linux/pagemap.h> 18 + #include <linux/buffer_head.h> 19 + #include <linux/writeback.h> 20 + #include <linux/statfs.h> 21 + #include <linux/parser.h> 22 + #include <linux/seq_file.h> 23 + #include <linux/mount.h> 24 + #include <linux/crc32.h> 25 + #include <linux/mpage.h> 26 + #include "qnx6.h" 27 + 28 + static const struct super_operations qnx6_sops; 29 + 30 + static void qnx6_put_super(struct super_block *sb); 31 + static struct inode *qnx6_alloc_inode(struct super_block *sb); 32 + static void qnx6_destroy_inode(struct inode *inode); 33 + static int qnx6_remount(struct super_block *sb, int *flags, char *data); 34 + static int qnx6_statfs(struct dentry *dentry, struct kstatfs *buf); 35 + static int qnx6_show_options(struct seq_file *seq, struct dentry *root); 36 + 37 + static const struct super_operations qnx6_sops = { 38 + .alloc_inode = qnx6_alloc_inode, 39 + .destroy_inode = qnx6_destroy_inode, 40 + .put_super = qnx6_put_super, 41 + .statfs = qnx6_statfs, 42 + .remount_fs = qnx6_remount, 43 + .show_options = qnx6_show_options, 44 + }; 45 + 46 + static int qnx6_show_options(struct seq_file *seq, struct dentry *root) 47 + { 48 + struct super_block *sb = root->d_sb; 49 + struct qnx6_sb_info *sbi = QNX6_SB(sb); 50 + 51 + if (sbi->s_mount_opt & QNX6_MOUNT_MMI_FS) 52 + seq_puts(seq, ",mmi_fs"); 53 + return 0; 54 + } 55 + 56 + static int qnx6_remount(struct super_block *sb, int *flags, char *data) 57 + { 58 + *flags |= MS_RDONLY; 59 + return 0; 60 + } 61 + 62 + static unsigned qnx6_get_devblock(struct super_block *sb, __fs32 block) 63 + { 64 + struct qnx6_sb_info *sbi = QNX6_SB(sb); 65 + 
return fs32_to_cpu(sbi, block) + sbi->s_blks_off; 66 + } 67 + 68 + static unsigned qnx6_block_map(struct inode *inode, unsigned iblock); 69 + 70 + static int qnx6_get_block(struct inode *inode, sector_t iblock, 71 + struct buffer_head *bh, int create) 72 + { 73 + unsigned phys; 74 + 75 + QNX6DEBUG((KERN_INFO "qnx6: qnx6_get_block inode=[%ld] iblock=[%ld]\n", 76 + inode->i_ino, (unsigned long)iblock)); 77 + 78 + phys = qnx6_block_map(inode, iblock); 79 + if (phys) { 80 + /* logical block is before EOF */ 81 + map_bh(bh, inode->i_sb, phys); 82 + } 83 + return 0; 84 + } 85 + 86 + static int qnx6_check_blockptr(__fs32 ptr) 87 + { 88 + if (ptr == ~(__fs32)0) { 89 + printk(KERN_ERR "qnx6: hit unused blockpointer.\n"); 90 + return 0; 91 + } 92 + return 1; 93 + } 94 + 95 + static int qnx6_readpage(struct file *file, struct page *page) 96 + { 97 + return mpage_readpage(page, qnx6_get_block); 98 + } 99 + 100 + static int qnx6_readpages(struct file *file, struct address_space *mapping, 101 + struct list_head *pages, unsigned nr_pages) 102 + { 103 + return mpage_readpages(mapping, pages, nr_pages, qnx6_get_block); 104 + } 105 + 106 + /* 107 + * returns the block number for the no-th element in the tree 108 + * inodebits requred as there are multiple inodes in one inode block 109 + */ 110 + static unsigned qnx6_block_map(struct inode *inode, unsigned no) 111 + { 112 + struct super_block *s = inode->i_sb; 113 + struct qnx6_sb_info *sbi = QNX6_SB(s); 114 + struct qnx6_inode_info *ei = QNX6_I(inode); 115 + unsigned block = 0; 116 + struct buffer_head *bh; 117 + __fs32 ptr; 118 + int levelptr; 119 + int ptrbits = sbi->s_ptrbits; 120 + int bitdelta; 121 + u32 mask = (1 << ptrbits) - 1; 122 + int depth = ei->di_filelevels; 123 + int i; 124 + 125 + bitdelta = ptrbits * depth; 126 + levelptr = no >> bitdelta; 127 + 128 + if (levelptr > QNX6_NO_DIRECT_POINTERS - 1) { 129 + printk(KERN_ERR "qnx6:Requested file block number (%u) too big.", 130 + no); 131 + return 0; 132 + } 133 + 134 + 
block = qnx6_get_devblock(s, ei->di_block_ptr[levelptr]); 135 + 136 + for (i = 0; i < depth; i++) { 137 + bh = sb_bread(s, block); 138 + if (!bh) { 139 + printk(KERN_ERR "qnx6:Error reading block (%u)\n", 140 + block); 141 + return 0; 142 + } 143 + bitdelta -= ptrbits; 144 + levelptr = (no >> bitdelta) & mask; 145 + ptr = ((__fs32 *)bh->b_data)[levelptr]; 146 + 147 + if (!qnx6_check_blockptr(ptr)) 148 + return 0; 149 + 150 + block = qnx6_get_devblock(s, ptr); 151 + brelse(bh); 152 + } 153 + return block; 154 + } 155 + 156 + static int qnx6_statfs(struct dentry *dentry, struct kstatfs *buf) 157 + { 158 + struct super_block *sb = dentry->d_sb; 159 + struct qnx6_sb_info *sbi = QNX6_SB(sb); 160 + u64 id = huge_encode_dev(sb->s_bdev->bd_dev); 161 + 162 + buf->f_type = sb->s_magic; 163 + buf->f_bsize = sb->s_blocksize; 164 + buf->f_blocks = fs32_to_cpu(sbi, sbi->sb->sb_num_blocks); 165 + buf->f_bfree = fs32_to_cpu(sbi, sbi->sb->sb_free_blocks); 166 + buf->f_files = fs32_to_cpu(sbi, sbi->sb->sb_num_inodes); 167 + buf->f_ffree = fs32_to_cpu(sbi, sbi->sb->sb_free_inodes); 168 + buf->f_bavail = buf->f_bfree; 169 + buf->f_namelen = QNX6_LONG_NAME_MAX; 170 + buf->f_fsid.val[0] = (u32)id; 171 + buf->f_fsid.val[1] = (u32)(id >> 32); 172 + 173 + return 0; 174 + } 175 + 176 + /* 177 + * Check the root directory of the filesystem to make sure 178 + * it really _is_ a qnx6 filesystem, and to check the size 179 + * of the directory entry. 
180 + */ 181 + static const char *qnx6_checkroot(struct super_block *s) 182 + { 183 + static char match_root[2][3] = {".\0\0", "..\0"}; 184 + int i, error = 0; 185 + struct qnx6_dir_entry *dir_entry; 186 + struct inode *root = s->s_root->d_inode; 187 + struct address_space *mapping = root->i_mapping; 188 + struct page *page = read_mapping_page(mapping, 0, NULL); 189 + if (IS_ERR(page)) 190 + return "error reading root directory"; 191 + kmap(page); 192 + dir_entry = page_address(page); 193 + for (i = 0; i < 2; i++) { 194 + /* maximum 3 bytes - due to match_root limitation */ 195 + if (strncmp(dir_entry[i].de_fname, match_root[i], 3)) 196 + error = 1; 197 + } 198 + qnx6_put_page(page); 199 + if (error) 200 + return "error reading root directory."; 201 + return NULL; 202 + } 203 + 204 + #ifdef CONFIG_QNX6FS_DEBUG 205 + void qnx6_superblock_debug(struct qnx6_super_block *sb, struct super_block *s) 206 + { 207 + struct qnx6_sb_info *sbi = QNX6_SB(s); 208 + 209 + QNX6DEBUG((KERN_INFO "magic: %08x\n", 210 + fs32_to_cpu(sbi, sb->sb_magic))); 211 + QNX6DEBUG((KERN_INFO "checksum: %08x\n", 212 + fs32_to_cpu(sbi, sb->sb_checksum))); 213 + QNX6DEBUG((KERN_INFO "serial: %llx\n", 214 + fs64_to_cpu(sbi, sb->sb_serial))); 215 + QNX6DEBUG((KERN_INFO "flags: %08x\n", 216 + fs32_to_cpu(sbi, sb->sb_flags))); 217 + QNX6DEBUG((KERN_INFO "blocksize: %08x\n", 218 + fs32_to_cpu(sbi, sb->sb_blocksize))); 219 + QNX6DEBUG((KERN_INFO "num_inodes: %08x\n", 220 + fs32_to_cpu(sbi, sb->sb_num_inodes))); 221 + QNX6DEBUG((KERN_INFO "free_inodes: %08x\n", 222 + fs32_to_cpu(sbi, sb->sb_free_inodes))); 223 + QNX6DEBUG((KERN_INFO "num_blocks: %08x\n", 224 + fs32_to_cpu(sbi, sb->sb_num_blocks))); 225 + QNX6DEBUG((KERN_INFO "free_blocks: %08x\n", 226 + fs32_to_cpu(sbi, sb->sb_free_blocks))); 227 + QNX6DEBUG((KERN_INFO "inode_levels: %02x\n", 228 + sb->Inode.levels)); 229 + } 230 + #endif 231 + 232 + enum { 233 + Opt_mmifs, 234 + Opt_err 235 + }; 236 + 237 + static const match_table_t tokens = { 238 + 
{Opt_mmifs, "mmi_fs"}, 239 + {Opt_err, NULL} 240 + }; 241 + 242 + static int qnx6_parse_options(char *options, struct super_block *sb) 243 + { 244 + char *p; 245 + struct qnx6_sb_info *sbi = QNX6_SB(sb); 246 + substring_t args[MAX_OPT_ARGS]; 247 + 248 + if (!options) 249 + return 1; 250 + 251 + while ((p = strsep(&options, ",")) != NULL) { 252 + int token; 253 + if (!*p) 254 + continue; 255 + 256 + token = match_token(p, tokens, args); 257 + switch (token) { 258 + case Opt_mmifs: 259 + set_opt(sbi->s_mount_opt, MMI_FS); 260 + break; 261 + default: 262 + return 0; 263 + } 264 + } 265 + return 1; 266 + } 267 + 268 + static struct buffer_head *qnx6_check_first_superblock(struct super_block *s, 269 + int offset, int silent) 270 + { 271 + struct qnx6_sb_info *sbi = QNX6_SB(s); 272 + struct buffer_head *bh; 273 + struct qnx6_super_block *sb; 274 + 275 + /* Check the superblock signatures 276 + start with the first superblock */ 277 + bh = sb_bread(s, offset); 278 + if (!bh) { 279 + printk(KERN_ERR "qnx6: unable to read the first superblock\n"); 280 + return NULL; 281 + } 282 + sb = (struct qnx6_super_block *)bh->b_data; 283 + if (fs32_to_cpu(sbi, sb->sb_magic) != QNX6_SUPER_MAGIC) { 284 + sbi->s_bytesex = BYTESEX_BE; 285 + if (fs32_to_cpu(sbi, sb->sb_magic) == QNX6_SUPER_MAGIC) { 286 + /* we got a big endian fs */ 287 + QNX6DEBUG((KERN_INFO "qnx6: fs got different" 288 + " endianess.\n")); 289 + return bh; 290 + } else 291 + sbi->s_bytesex = BYTESEX_LE; 292 + if (!silent) { 293 + if (offset == 0) { 294 + printk(KERN_ERR "qnx6: wrong signature (magic)" 295 + " in superblock #1.\n"); 296 + } else { 297 + printk(KERN_INFO "qnx6: wrong signature (magic)" 298 + " at position (0x%lx) - will try" 299 + " alternative position (0x0000).\n", 300 + offset * s->s_blocksize); 301 + } 302 + } 303 + brelse(bh); 304 + return NULL; 305 + } 306 + return bh; 307 + } 308 + 309 + static struct inode *qnx6_private_inode(struct super_block *s, 310 + struct qnx6_root_node *p); 311 + 312 + 
static int qnx6_fill_super(struct super_block *s, void *data, int silent) 313 + { 314 + struct buffer_head *bh1 = NULL, *bh2 = NULL; 315 + struct qnx6_super_block *sb1 = NULL, *sb2 = NULL; 316 + struct qnx6_sb_info *sbi; 317 + struct inode *root; 318 + const char *errmsg; 319 + struct qnx6_sb_info *qs; 320 + int ret = -EINVAL; 321 + u64 offset; 322 + int bootblock_offset = QNX6_BOOTBLOCK_SIZE; 323 + 324 + qs = kzalloc(sizeof(struct qnx6_sb_info), GFP_KERNEL); 325 + if (!qs) 326 + return -ENOMEM; 327 + s->s_fs_info = qs; 328 + 329 + /* Superblock always is 512 Byte long */ 330 + if (!sb_set_blocksize(s, QNX6_SUPERBLOCK_SIZE)) { 331 + printk(KERN_ERR "qnx6: unable to set blocksize\n"); 332 + goto outnobh; 333 + } 334 + 335 + /* parse the mount-options */ 336 + if (!qnx6_parse_options((char *) data, s)) { 337 + printk(KERN_ERR "qnx6: invalid mount options.\n"); 338 + goto outnobh; 339 + } 340 + if (test_opt(s, MMI_FS)) { 341 + sb1 = qnx6_mmi_fill_super(s, silent); 342 + if (sb1) 343 + goto mmi_success; 344 + else 345 + goto outnobh; 346 + } 347 + sbi = QNX6_SB(s); 348 + sbi->s_bytesex = BYTESEX_LE; 349 + /* Check the superblock signatures 350 + start with the first superblock */ 351 + bh1 = qnx6_check_first_superblock(s, 352 + bootblock_offset / QNX6_SUPERBLOCK_SIZE, silent); 353 + if (!bh1) { 354 + /* try again without bootblock offset */ 355 + bh1 = qnx6_check_first_superblock(s, 0, silent); 356 + if (!bh1) { 357 + printk(KERN_ERR "qnx6: unable to read the first superblock\n"); 358 + goto outnobh; 359 + } 360 + /* seems that no bootblock at partition start */ 361 + bootblock_offset = 0; 362 + } 363 + sb1 = (struct qnx6_super_block *)bh1->b_data; 364 + 365 + #ifdef CONFIG_QNX6FS_DEBUG 366 + qnx6_superblock_debug(sb1, s); 367 + #endif 368 + 369 + /* checksum check - start at byte 8 and end at byte 512 */ 370 + if (fs32_to_cpu(sbi, sb1->sb_checksum) != 371 + crc32_be(0, (char *)(bh1->b_data + 8), 504)) { 372 + printk(KERN_ERR "qnx6: superblock #1 checksum error\n"); 
373 + goto out; 374 + } 375 + 376 + /* set new blocksize */ 377 + if (!sb_set_blocksize(s, fs32_to_cpu(sbi, sb1->sb_blocksize))) { 378 + printk(KERN_ERR "qnx6: unable to set blocksize\n"); 379 + goto out; 380 + } 381 + /* blocksize invalidates bh - pull it back in */ 382 + brelse(bh1); 383 + bh1 = sb_bread(s, bootblock_offset >> s->s_blocksize_bits); 384 + if (!bh1) 385 + goto outnobh; 386 + sb1 = (struct qnx6_super_block *)bh1->b_data; 387 + 388 + /* calculate second superblock blocknumber */ 389 + offset = fs32_to_cpu(sbi, sb1->sb_num_blocks) + 390 + (bootblock_offset >> s->s_blocksize_bits) + 391 + (QNX6_SUPERBLOCK_AREA >> s->s_blocksize_bits); 392 + 393 + /* set bootblock offset */ 394 + sbi->s_blks_off = (bootblock_offset >> s->s_blocksize_bits) + 395 + (QNX6_SUPERBLOCK_AREA >> s->s_blocksize_bits); 396 + 397 + /* next the second superblock */ 398 + bh2 = sb_bread(s, offset); 399 + if (!bh2) { 400 + printk(KERN_ERR "qnx6: unable to read the second superblock\n"); 401 + goto out; 402 + } 403 + sb2 = (struct qnx6_super_block *)bh2->b_data; 404 + if (fs32_to_cpu(sbi, sb2->sb_magic) != QNX6_SUPER_MAGIC) { 405 + if (!silent) 406 + printk(KERN_ERR "qnx6: wrong signature (magic)" 407 + " in superblock #2.\n"); 408 + goto out; 409 + } 410 + 411 + /* checksum check - start at byte 8 and end at byte 512 */ 412 + if (fs32_to_cpu(sbi, sb2->sb_checksum) != 413 + crc32_be(0, (char *)(bh2->b_data + 8), 504)) { 414 + printk(KERN_ERR "qnx6: superblock #2 checksum error\n"); 415 + goto out; 416 + } 417 + 418 + if (fs64_to_cpu(sbi, sb1->sb_serial) >= 419 + fs64_to_cpu(sbi, sb2->sb_serial)) { 420 + /* superblock #1 active */ 421 + sbi->sb_buf = bh1; 422 + sbi->sb = (struct qnx6_super_block *)bh1->b_data; 423 + brelse(bh2); 424 + printk(KERN_INFO "qnx6: superblock #1 active\n"); 425 + } else { 426 + /* superblock #2 active */ 427 + sbi->sb_buf = bh2; 428 + sbi->sb = (struct qnx6_super_block *)bh2->b_data; 429 + brelse(bh1); 430 + printk(KERN_INFO "qnx6: superblock #2 active\n"); 
431 + } 432 + mmi_success: 433 + /* sanity check - limit maximum indirect pointer levels */ 434 + if (sb1->Inode.levels > QNX6_PTR_MAX_LEVELS) { 435 + printk(KERN_ERR "qnx6: too many inode levels (max %i, sb %i)\n", 436 + QNX6_PTR_MAX_LEVELS, sb1->Inode.levels); 437 + goto out; 438 + } 439 + if (sb1->Longfile.levels > QNX6_PTR_MAX_LEVELS) { 440 + printk(KERN_ERR "qnx6: too many longfilename levels" 441 + " (max %i, sb %i)\n", 442 + QNX6_PTR_MAX_LEVELS, sb1->Longfile.levels); 443 + goto out; 444 + } 445 + s->s_op = &qnx6_sops; 446 + s->s_magic = QNX6_SUPER_MAGIC; 447 + s->s_flags |= MS_RDONLY; /* Yup, read-only yet */ 448 + 449 + /* ease the later tree level calculations */ 450 + sbi = QNX6_SB(s); 451 + sbi->s_ptrbits = ilog2(s->s_blocksize / 4); 452 + sbi->inodes = qnx6_private_inode(s, &sb1->Inode); 453 + if (!sbi->inodes) 454 + goto out; 455 + sbi->longfile = qnx6_private_inode(s, &sb1->Longfile); 456 + if (!sbi->longfile) 457 + goto out1; 458 + 459 + /* prefetch root inode */ 460 + root = qnx6_iget(s, QNX6_ROOT_INO); 461 + if (IS_ERR(root)) { 462 + printk(KERN_ERR "qnx6: get inode failed\n"); 463 + ret = PTR_ERR(root); 464 + goto out2; 465 + } 466 + 467 + ret = -ENOMEM; 468 + s->s_root = d_make_root(root); 469 + if (!s->s_root) 470 + goto out2; 471 + 472 + ret = -EINVAL; 473 + errmsg = qnx6_checkroot(s); 474 + if (errmsg != NULL) { 475 + if (!silent) 476 + printk(KERN_ERR "qnx6: %s\n", errmsg); 477 + goto out3; 478 + } 479 + return 0; 480 + 481 + out3: 482 + dput(s->s_root); 483 + s->s_root = NULL; 484 + out2: 485 + iput(sbi->longfile); 486 + out1: 487 + iput(sbi->inodes); 488 + out: 489 + if (bh1) 490 + brelse(bh1); 491 + if (bh2) 492 + brelse(bh2); 493 + outnobh: 494 + kfree(qs); 495 + s->s_fs_info = NULL; 496 + return ret; 497 + } 498 + 499 + static void qnx6_put_super(struct super_block *sb) 500 + { 501 + struct qnx6_sb_info *qs = QNX6_SB(sb); 502 + brelse(qs->sb_buf); 503 + iput(qs->longfile); 504 + iput(qs->inodes); 505 + kfree(qs); 506 + sb->s_fs_info = 
NULL; 507 + return; 508 + } 509 + 510 + static sector_t qnx6_bmap(struct address_space *mapping, sector_t block) 511 + { 512 + return generic_block_bmap(mapping, block, qnx6_get_block); 513 + } 514 + static const struct address_space_operations qnx6_aops = { 515 + .readpage = qnx6_readpage, 516 + .readpages = qnx6_readpages, 517 + .bmap = qnx6_bmap 518 + }; 519 + 520 + static struct inode *qnx6_private_inode(struct super_block *s, 521 + struct qnx6_root_node *p) 522 + { 523 + struct inode *inode = new_inode(s); 524 + if (inode) { 525 + struct qnx6_inode_info *ei = QNX6_I(inode); 526 + struct qnx6_sb_info *sbi = QNX6_SB(s); 527 + inode->i_size = fs64_to_cpu(sbi, p->size); 528 + memcpy(ei->di_block_ptr, p->ptr, sizeof(p->ptr)); 529 + ei->di_filelevels = p->levels; 530 + inode->i_mode = S_IFREG | S_IRUSR; /* probably wrong */ 531 + inode->i_mapping->a_ops = &qnx6_aops; 532 + } 533 + return inode; 534 + } 535 + 536 + struct inode *qnx6_iget(struct super_block *sb, unsigned ino) 537 + { 538 + struct qnx6_sb_info *sbi = QNX6_SB(sb); 539 + struct qnx6_inode_entry *raw_inode; 540 + struct inode *inode; 541 + struct qnx6_inode_info *ei; 542 + struct address_space *mapping; 543 + struct page *page; 544 + u32 n, offs; 545 + 546 + inode = iget_locked(sb, ino); 547 + if (!inode) 548 + return ERR_PTR(-ENOMEM); 549 + if (!(inode->i_state & I_NEW)) 550 + return inode; 551 + 552 + ei = QNX6_I(inode); 553 + 554 + inode->i_mode = 0; 555 + 556 + if (ino == 0) { 557 + printk(KERN_ERR "qnx6: bad inode number on dev %s: %u is " 558 + "out of range\n", 559 + sb->s_id, ino); 560 + iget_failed(inode); 561 + return ERR_PTR(-EIO); 562 + } 563 + n = (ino - 1) >> (PAGE_CACHE_SHIFT - QNX6_INODE_SIZE_BITS); 564 + offs = (ino - 1) & (~PAGE_CACHE_MASK >> QNX6_INODE_SIZE_BITS); 565 + mapping = sbi->inodes->i_mapping; 566 + page = read_mapping_page(mapping, n, NULL); 567 + if (IS_ERR(page)) { 568 + printk(KERN_ERR "qnx6: major problem: unable to read inode from " 569 + "dev %s\n", sb->s_id); 570 + 
iget_failed(inode); 571 + return ERR_CAST(page); 572 + } 573 + kmap(page); 574 + raw_inode = ((struct qnx6_inode_entry *)page_address(page)) + offs; 575 + 576 + inode->i_mode = fs16_to_cpu(sbi, raw_inode->di_mode); 577 + inode->i_uid = (uid_t)fs32_to_cpu(sbi, raw_inode->di_uid); 578 + inode->i_gid = (gid_t)fs32_to_cpu(sbi, raw_inode->di_gid); 579 + inode->i_size = fs64_to_cpu(sbi, raw_inode->di_size); 580 + inode->i_mtime.tv_sec = fs32_to_cpu(sbi, raw_inode->di_mtime); 581 + inode->i_mtime.tv_nsec = 0; 582 + inode->i_atime.tv_sec = fs32_to_cpu(sbi, raw_inode->di_atime); 583 + inode->i_atime.tv_nsec = 0; 584 + inode->i_ctime.tv_sec = fs32_to_cpu(sbi, raw_inode->di_ctime); 585 + inode->i_ctime.tv_nsec = 0; 586 + 587 + /* calc blocks based on 512 byte blocksize */ 588 + inode->i_blocks = (inode->i_size + 511) >> 9; 589 + 590 + memcpy(&ei->di_block_ptr, &raw_inode->di_block_ptr, 591 + sizeof(raw_inode->di_block_ptr)); 592 + ei->di_filelevels = raw_inode->di_filelevels; 593 + 594 + if (S_ISREG(inode->i_mode)) { 595 + inode->i_fop = &generic_ro_fops; 596 + inode->i_mapping->a_ops = &qnx6_aops; 597 + } else if (S_ISDIR(inode->i_mode)) { 598 + inode->i_op = &qnx6_dir_inode_operations; 599 + inode->i_fop = &qnx6_dir_operations; 600 + inode->i_mapping->a_ops = &qnx6_aops; 601 + } else if (S_ISLNK(inode->i_mode)) { 602 + inode->i_op = &page_symlink_inode_operations; 603 + inode->i_mapping->a_ops = &qnx6_aops; 604 + } else 605 + init_special_inode(inode, inode->i_mode, 0); 606 + qnx6_put_page(page); 607 + unlock_new_inode(inode); 608 + return inode; 609 + } 610 + 611 + static struct kmem_cache *qnx6_inode_cachep; 612 + 613 + static struct inode *qnx6_alloc_inode(struct super_block *sb) 614 + { 615 + struct qnx6_inode_info *ei; 616 + ei = kmem_cache_alloc(qnx6_inode_cachep, GFP_KERNEL); 617 + if (!ei) 618 + return NULL; 619 + return &ei->vfs_inode; 620 + } 621 + 622 + static void qnx6_i_callback(struct rcu_head *head) 623 + { 624 + struct inode *inode = container_of(head, 
struct inode, i_rcu); 625 + INIT_LIST_HEAD(&inode->i_dentry); 626 + kmem_cache_free(qnx6_inode_cachep, QNX6_I(inode)); 627 + } 628 + 629 + static void qnx6_destroy_inode(struct inode *inode) 630 + { 631 + call_rcu(&inode->i_rcu, qnx6_i_callback); 632 + } 633 + 634 + static void init_once(void *foo) 635 + { 636 + struct qnx6_inode_info *ei = (struct qnx6_inode_info *) foo; 637 + 638 + inode_init_once(&ei->vfs_inode); 639 + } 640 + 641 + static int init_inodecache(void) 642 + { 643 + qnx6_inode_cachep = kmem_cache_create("qnx6_inode_cache", 644 + sizeof(struct qnx6_inode_info), 645 + 0, (SLAB_RECLAIM_ACCOUNT| 646 + SLAB_MEM_SPREAD), 647 + init_once); 648 + if (!qnx6_inode_cachep) 649 + return -ENOMEM; 650 + return 0; 651 + } 652 + 653 + static void destroy_inodecache(void) 654 + { 655 + kmem_cache_destroy(qnx6_inode_cachep); 656 + } 657 + 658 + static struct dentry *qnx6_mount(struct file_system_type *fs_type, 659 + int flags, const char *dev_name, void *data) 660 + { 661 + return mount_bdev(fs_type, flags, dev_name, data, qnx6_fill_super); 662 + } 663 + 664 + static struct file_system_type qnx6_fs_type = { 665 + .owner = THIS_MODULE, 666 + .name = "qnx6", 667 + .mount = qnx6_mount, 668 + .kill_sb = kill_block_super, 669 + .fs_flags = FS_REQUIRES_DEV, 670 + }; 671 + 672 + static int __init init_qnx6_fs(void) 673 + { 674 + int err; 675 + 676 + err = init_inodecache(); 677 + if (err) 678 + return err; 679 + 680 + err = register_filesystem(&qnx6_fs_type); 681 + if (err) { 682 + destroy_inodecache(); 683 + return err; 684 + } 685 + 686 + printk(KERN_INFO "QNX6 filesystem 1.0.0 registered.\n"); 687 + return 0; 688 + } 689 + 690 + static void __exit exit_qnx6_fs(void) 691 + { 692 + unregister_filesystem(&qnx6_fs_type); 693 + destroy_inodecache(); 694 + } 695 + 696 + module_init(init_qnx6_fs) 697 + module_exit(exit_qnx6_fs) 698 + MODULE_LICENSE("GPL");
+42
fs/qnx6/namei.c
··· 1 + /* 2 + * QNX6 file system, Linux implementation. 3 + * 4 + * Version : 1.0.0 5 + * 6 + * History : 7 + * 8 + * 01-02-2012 by Kai Bankett (chaosman@ontika.net) : first release. 9 + * 16-02-2012 pagemap extension by Al Viro 10 + * 11 + */ 12 + 13 + #include "qnx6.h" 14 + 15 + struct dentry *qnx6_lookup(struct inode *dir, struct dentry *dentry, 16 + struct nameidata *nd) 17 + { 18 + unsigned ino; 19 + struct page *page; 20 + struct inode *foundinode = NULL; 21 + const char *name = dentry->d_name.name; 22 + int len = dentry->d_name.len; 23 + 24 + if (len > QNX6_LONG_NAME_MAX) 25 + return ERR_PTR(-ENAMETOOLONG); 26 + 27 + ino = qnx6_find_entry(len, dir, name, &page); 28 + if (ino) { 29 + foundinode = qnx6_iget(dir->i_sb, ino); 30 + qnx6_put_page(page); 31 + if (IS_ERR(foundinode)) { 32 + QNX6DEBUG((KERN_ERR "qnx6: lookup->iget -> " 33 + " error %ld\n", PTR_ERR(foundinode))); 34 + return ERR_CAST(foundinode); 35 + } 36 + } else { 37 + QNX6DEBUG((KERN_INFO "qnx6_lookup: not found %s\n", name)); 38 + return NULL; 39 + } 40 + d_add(dentry, foundinode); 41 + return NULL; 42 + }
+135
fs/qnx6/qnx6.h
··· 1 + /* 2 + * QNX6 file system, Linux implementation. 3 + * 4 + * Version : 1.0.0 5 + * 6 + * History : 7 + * 8 + * 01-02-2012 by Kai Bankett (chaosman@ontika.net) : first release. 9 + * 16-02-2012 page map extension by Al Viro 10 + * 11 + */ 12 + 13 + #include <linux/fs.h> 14 + #include <linux/pagemap.h> 15 + 16 + typedef __u16 __bitwise __fs16; 17 + typedef __u32 __bitwise __fs32; 18 + typedef __u64 __bitwise __fs64; 19 + 20 + #include <linux/qnx6_fs.h> 21 + 22 + #ifdef CONFIG_QNX6FS_DEBUG 23 + #define QNX6DEBUG(X) printk X 24 + #else 25 + #define QNX6DEBUG(X) (void) 0 26 + #endif 27 + 28 + struct qnx6_sb_info { 29 + struct buffer_head *sb_buf; /* superblock buffer */ 30 + struct qnx6_super_block *sb; /* our superblock */ 31 + int s_blks_off; /* blkoffset fs-startpoint */ 32 + int s_ptrbits; /* indirect pointer bitfield */ 33 + unsigned long s_mount_opt; /* all mount options */ 34 + int s_bytesex; /* holds endianness info */ 35 + struct inode * inodes; 36 + struct inode * longfile; 37 + }; 38 + 39 + struct qnx6_inode_info { 40 + __fs32 di_block_ptr[QNX6_NO_DIRECT_POINTERS]; 41 + __u8 di_filelevels; 42 + __u32 i_dir_start_lookup; 43 + struct inode vfs_inode; 44 + }; 45 + 46 + extern struct inode *qnx6_iget(struct super_block *sb, unsigned ino); 47 + extern struct dentry *qnx6_lookup(struct inode *dir, struct dentry *dentry, 48 + struct nameidata *nd); 49 + 50 + #ifdef CONFIG_QNX6FS_DEBUG 51 + extern void qnx6_superblock_debug(struct qnx6_super_block *, 52 + struct super_block *); 53 + #endif 54 + 55 + extern const struct inode_operations qnx6_dir_inode_operations; 56 + extern const struct file_operations qnx6_dir_operations; 57 + 58 + static inline struct qnx6_sb_info *QNX6_SB(struct super_block *sb) 59 + { 60 + return sb->s_fs_info; 61 + } 62 + 63 + static inline struct qnx6_inode_info *QNX6_I(struct inode *inode) 64 + { 65 + return container_of(inode, struct qnx6_inode_info, vfs_inode); 66 + } 67 + 68 + #define clear_opt(o, opt) (o &= ~(QNX6_MOUNT_##opt)) 69 + 
#define set_opt(o, opt) (o |= (QNX6_MOUNT_##opt)) 70 + #define test_opt(sb, opt) (QNX6_SB(sb)->s_mount_opt & \ 71 + QNX6_MOUNT_##opt) 72 + enum { 73 + BYTESEX_LE, 74 + BYTESEX_BE, 75 + }; 76 + 77 + static inline __u64 fs64_to_cpu(struct qnx6_sb_info *sbi, __fs64 n) 78 + { 79 + if (sbi->s_bytesex == BYTESEX_LE) 80 + return le64_to_cpu((__force __le64)n); 81 + else 82 + return be64_to_cpu((__force __be64)n); 83 + } 84 + 85 + static inline __fs64 cpu_to_fs64(struct qnx6_sb_info *sbi, __u64 n) 86 + { 87 + if (sbi->s_bytesex == BYTESEX_LE) 88 + return (__force __fs64)cpu_to_le64(n); 89 + else 90 + return (__force __fs64)cpu_to_be64(n); 91 + } 92 + 93 + static inline __u32 fs32_to_cpu(struct qnx6_sb_info *sbi, __fs32 n) 94 + { 95 + if (sbi->s_bytesex == BYTESEX_LE) 96 + return le32_to_cpu((__force __le32)n); 97 + else 98 + return be32_to_cpu((__force __be32)n); 99 + } 100 + 101 + static inline __fs32 cpu_to_fs32(struct qnx6_sb_info *sbi, __u32 n) 102 + { 103 + if (sbi->s_bytesex == BYTESEX_LE) 104 + return (__force __fs32)cpu_to_le32(n); 105 + else 106 + return (__force __fs32)cpu_to_be32(n); 107 + } 108 + 109 + static inline __u16 fs16_to_cpu(struct qnx6_sb_info *sbi, __fs16 n) 110 + { 111 + if (sbi->s_bytesex == BYTESEX_LE) 112 + return le16_to_cpu((__force __le16)n); 113 + else 114 + return be16_to_cpu((__force __be16)n); 115 + } 116 + 117 + static inline __fs16 cpu_to_fs16(struct qnx6_sb_info *sbi, __u16 n) 118 + { 119 + if (sbi->s_bytesex == BYTESEX_LE) 120 + return (__force __fs16)cpu_to_le16(n); 121 + else 122 + return (__force __fs16)cpu_to_be16(n); 123 + } 124 + 125 + extern struct qnx6_super_block *qnx6_mmi_fill_super(struct super_block *s, 126 + int silent); 127 + 128 + static inline void qnx6_put_page(struct page *page) 129 + { 130 + kunmap(page); 131 + page_cache_release(page); 132 + } 133 + 134 + extern unsigned qnx6_find_entry(int len, struct inode *dir, const char *name, 135 + struct page **res_page);
+150
fs/qnx6/super_mmi.c
··· 1 + /* 2 + * QNX6 file system, Linux implementation. 3 + * 4 + * Version : 1.0.0 5 + * 6 + * History : 7 + * 8 + * 01-02-2012 by Kai Bankett (chaosman@ontika.net) : first release. 9 + * 10 + */ 11 + 12 + #include <linux/buffer_head.h> 13 + #include <linux/slab.h> 14 + #include <linux/crc32.h> 15 + #include "qnx6.h" 16 + 17 + static void qnx6_mmi_copy_sb(struct qnx6_super_block *qsb, 18 + struct qnx6_mmi_super_block *sb) 19 + { 20 + qsb->sb_magic = sb->sb_magic; 21 + qsb->sb_checksum = sb->sb_checksum; 22 + qsb->sb_serial = sb->sb_serial; 23 + qsb->sb_blocksize = sb->sb_blocksize; 24 + qsb->sb_num_inodes = sb->sb_num_inodes; 25 + qsb->sb_free_inodes = sb->sb_free_inodes; 26 + qsb->sb_num_blocks = sb->sb_num_blocks; 27 + qsb->sb_free_blocks = sb->sb_free_blocks; 28 + 29 + /* the rest of the superblock is the same */ 30 + memcpy(&qsb->Inode, &sb->Inode, sizeof(sb->Inode)); 31 + memcpy(&qsb->Bitmap, &sb->Bitmap, sizeof(sb->Bitmap)); 32 + memcpy(&qsb->Longfile, &sb->Longfile, sizeof(sb->Longfile)); 33 + } 34 + 35 + struct qnx6_super_block *qnx6_mmi_fill_super(struct super_block *s, int silent) 36 + { 37 + struct buffer_head *bh1, *bh2 = NULL; 38 + struct qnx6_mmi_super_block *sb1, *sb2; 39 + struct qnx6_super_block *qsb = NULL; 40 + struct qnx6_sb_info *sbi; 41 + __u64 offset; 42 + 43 + /* Check the superblock signatures 44 + start with the first superblock */ 45 + bh1 = sb_bread(s, 0); 46 + if (!bh1) { 47 + printk(KERN_ERR "qnx6: Unable to read first mmi superblock\n"); 48 + return NULL; 49 + } 50 + sb1 = (struct qnx6_mmi_super_block *)bh1->b_data; 51 + sbi = QNX6_SB(s); 52 + if (fs32_to_cpu(sbi, sb1->sb_magic) != QNX6_SUPER_MAGIC) { 53 + if (!silent) 54 + printk(KERN_ERR "qnx6: wrong signature (magic) in" 55 + " superblock #1.\n"); 56 + goto out; 57 + } 58 + 59 + 60 + /* checksum check - start at byte 8 and end at byte 512 */ 61 + if (fs32_to_cpu(sbi, sb1->sb_checksum) != 62 + crc32_be(0, (char *)(bh1->b_data + 8), 504)) { 63 + printk(KERN_ERR "qnx6: 
superblock #1 checksum error\n"); 64 + goto out; 65 + } 66 + 67 + /* calculate second superblock blocknumber */ 68 + offset = fs32_to_cpu(sbi, sb1->sb_num_blocks) + QNX6_SUPERBLOCK_AREA / 69 + fs32_to_cpu(sbi, sb1->sb_blocksize); 70 + 71 + /* set new blocksize */ 72 + if (!sb_set_blocksize(s, fs32_to_cpu(sbi, sb1->sb_blocksize))) { 73 + printk(KERN_ERR "qnx6: unable to set blocksize\n"); 74 + goto out; 75 + } 76 + /* blocksize invalidates bh - pull it back in */ 77 + brelse(bh1); 78 + bh1 = sb_bread(s, 0); 79 + if (!bh1) 80 + goto out; 81 + sb1 = (struct qnx6_mmi_super_block *)bh1->b_data; 82 + 83 + /* read second superblock */ 84 + bh2 = sb_bread(s, offset); 85 + if (!bh2) { 86 + printk(KERN_ERR "qnx6: unable to read the second superblock\n"); 87 + goto out; 88 + } 89 + sb2 = (struct qnx6_mmi_super_block *)bh2->b_data; 90 + if (fs32_to_cpu(sbi, sb2->sb_magic) != QNX6_SUPER_MAGIC) { 91 + if (!silent) 92 + printk(KERN_ERR "qnx6: wrong signature (magic) in" 93 + " superblock #2.\n"); 94 + goto out; 95 + } 96 + 97 + /* checksum check - start at byte 8 and end at byte 512 */ 98 + if (fs32_to_cpu(sbi, sb2->sb_checksum) 99 + != crc32_be(0, (char *)(bh2->b_data + 8), 504)) { 100 + printk(KERN_ERR "qnx6: superblock #2 checksum error\n"); 101 + goto out; 102 + } 103 + 104 + qsb = kmalloc(sizeof(*qsb), GFP_KERNEL); 105 + if (!qsb) { 106 + printk(KERN_ERR "qnx6: unable to allocate memory.\n"); 107 + goto out; 108 + } 109 + 110 + if (fs64_to_cpu(sbi, sb1->sb_serial) > 111 + fs64_to_cpu(sbi, sb2->sb_serial)) { 112 + /* superblock #1 active */ 113 + qnx6_mmi_copy_sb(qsb, sb1); 114 + #ifdef CONFIG_QNX6FS_DEBUG 115 + qnx6_superblock_debug(qsb, s); 116 + #endif 117 + memcpy(bh1->b_data, qsb, sizeof(struct qnx6_super_block)); 118 + 119 + sbi->sb_buf = bh1; 120 + sbi->sb = (struct qnx6_super_block *)bh1->b_data; 121 + brelse(bh2); 122 + printk(KERN_INFO "qnx6: superblock #1 active\n"); 123 + } else { 124 + /* superblock #2 active */ 125 + qnx6_mmi_copy_sb(qsb, sb2); 126 + #ifdef 
CONFIG_QNX6FS_DEBUG 127 + qnx6_superblock_debug(qsb, s); 128 + #endif 129 + memcpy(bh2->b_data, qsb, sizeof(struct qnx6_super_block)); 130 + 131 + sbi->sb_buf = bh2; 132 + sbi->sb = (struct qnx6_super_block *)bh2->b_data; 133 + brelse(bh1); 134 + printk(KERN_INFO "qnx6: superblock #2 active\n"); 135 + } 136 + kfree(qsb); 137 + 138 + /* offset for mmi_fs is just SUPERBLOCK_AREA bytes */ 139 + sbi->s_blks_off = QNX6_SUPERBLOCK_AREA / s->s_blocksize; 140 + 141 + /* success */ 142 + return sbi->sb; 143 + 144 + out: 145 + if (bh1 != NULL) 146 + brelse(bh1); 147 + if (bh2 != NULL) 148 + brelse(bh2); 149 + return NULL; 150 + }
+7 -23
fs/ramfs/inode.c
··· 209 209 int ramfs_fill_super(struct super_block *sb, void *data, int silent) 210 210 { 211 211 struct ramfs_fs_info *fsi; 212 - struct inode *inode = NULL; 213 - struct dentry *root; 212 + struct inode *inode; 214 213 int err; 215 214 216 215 save_mount_options(sb, data); 217 216 218 217 fsi = kzalloc(sizeof(struct ramfs_fs_info), GFP_KERNEL); 219 218 sb->s_fs_info = fsi; 220 - if (!fsi) { 221 - err = -ENOMEM; 222 - goto fail; 223 - } 219 + if (!fsi) 220 + return -ENOMEM; 224 221 225 222 err = ramfs_parse_options(data, &fsi->mount_opts); 226 223 if (err) 227 - goto fail; 224 + return err; 228 225 229 226 sb->s_maxbytes = MAX_LFS_FILESIZE; 230 227 sb->s_blocksize = PAGE_CACHE_SIZE; ··· 231 234 sb->s_time_gran = 1; 232 235 233 236 inode = ramfs_get_inode(sb, NULL, S_IFDIR | fsi->mount_opts.mode, 0); 234 - if (!inode) { 235 - err = -ENOMEM; 236 - goto fail; 237 - } 238 - 239 - root = d_alloc_root(inode); 240 - sb->s_root = root; 241 - if (!root) { 242 - err = -ENOMEM; 243 - goto fail; 244 - } 237 + sb->s_root = d_make_root(inode); 238 + if (!sb->s_root) 239 + return -ENOMEM; 245 240 246 241 return 0; 247 - fail: 248 - kfree(fsi); 249 - sb->s_fs_info = NULL; 250 - iput(inode); 251 - return err; 252 242 } 253 243 254 244 struct dentry *ramfs_mount(struct file_system_type *fs_type,
+1 -3
fs/reiserfs/bitmap.c
··· 4 4 /* Reiserfs block (de)allocator, bitmap-based. */ 5 5 6 6 #include <linux/time.h> 7 - #include <linux/reiserfs_fs.h> 7 + #include "reiserfs.h" 8 8 #include <linux/errno.h> 9 9 #include <linux/buffer_head.h> 10 10 #include <linux/kernel.h> 11 11 #include <linux/pagemap.h> 12 12 #include <linux/vmalloc.h> 13 - #include <linux/reiserfs_fs_sb.h> 14 - #include <linux/reiserfs_fs_i.h> 15 13 #include <linux/quotaops.h> 16 14 #include <linux/seq_file.h> 17 15
+1 -1
fs/reiserfs/dir.c
··· 5 5 #include <linux/string.h> 6 6 #include <linux/errno.h> 7 7 #include <linux/fs.h> 8 - #include <linux/reiserfs_fs.h> 8 + #include "reiserfs.h" 9 9 #include <linux/stat.h> 10 10 #include <linux/buffer_head.h> 11 11 #include <linux/slab.h>
+1 -1
fs/reiserfs/do_balan.c
··· 17 17 18 18 #include <asm/uaccess.h> 19 19 #include <linux/time.h> 20 - #include <linux/reiserfs_fs.h> 20 + #include "reiserfs.h" 21 21 #include <linux/buffer_head.h> 22 22 #include <linux/kernel.h> 23 23
+3 -3
fs/reiserfs/file.c
··· 3 3 */ 4 4 5 5 #include <linux/time.h> 6 - #include <linux/reiserfs_fs.h> 7 - #include <linux/reiserfs_acl.h> 8 - #include <linux/reiserfs_xattr.h> 6 + #include "reiserfs.h" 7 + #include "acl.h" 8 + #include "xattr.h" 9 9 #include <asm/uaccess.h> 10 10 #include <linux/pagemap.h> 11 11 #include <linux/swap.h>
+1 -1
fs/reiserfs/fix_node.c
··· 37 37 #include <linux/time.h> 38 38 #include <linux/slab.h> 39 39 #include <linux/string.h> 40 - #include <linux/reiserfs_fs.h> 40 + #include "reiserfs.h" 41 41 #include <linux/buffer_head.h> 42 42 43 43 /* To make any changes in the tree we find a node, that contains item
+1 -1
fs/reiserfs/hashes.c
··· 19 19 // 20 20 21 21 #include <linux/kernel.h> 22 - #include <linux/reiserfs_fs.h> 22 + #include "reiserfs.h" 23 23 #include <asm/types.h> 24 24 25 25 #define DELTA 0x9E3779B9
+1 -1
fs/reiserfs/ibalance.c
··· 5 5 #include <asm/uaccess.h> 6 6 #include <linux/string.h> 7 7 #include <linux/time.h> 8 - #include <linux/reiserfs_fs.h> 8 + #include "reiserfs.h" 9 9 #include <linux/buffer_head.h> 10 10 11 11 /* this is one and only function that is used outside (do_balance.c) */
+3 -3
fs/reiserfs/inode.c
··· 4 4 5 5 #include <linux/time.h> 6 6 #include <linux/fs.h> 7 - #include <linux/reiserfs_fs.h> 8 - #include <linux/reiserfs_acl.h> 9 - #include <linux/reiserfs_xattr.h> 7 + #include "reiserfs.h" 8 + #include "acl.h" 9 + #include "xattr.h" 10 10 #include <linux/exportfs.h> 11 11 #include <linux/pagemap.h> 12 12 #include <linux/highmem.h>
+1 -1
fs/reiserfs/ioctl.c
··· 5 5 #include <linux/capability.h> 6 6 #include <linux/fs.h> 7 7 #include <linux/mount.h> 8 - #include <linux/reiserfs_fs.h> 8 + #include "reiserfs.h" 9 9 #include <linux/time.h> 10 10 #include <asm/uaccess.h> 11 11 #include <linux/pagemap.h>
+1 -1
fs/reiserfs/item_ops.c
··· 3 3 */ 4 4 5 5 #include <linux/time.h> 6 - #include <linux/reiserfs_fs.h> 6 + #include "reiserfs.h" 7 7 8 8 // this contains item handlers for old item types: sd, direct, 9 9 // indirect, directory
+1 -1
fs/reiserfs/journal.c
··· 37 37 #include <linux/time.h> 38 38 #include <linux/semaphore.h> 39 39 #include <linux/vmalloc.h> 40 - #include <linux/reiserfs_fs.h> 40 + #include "reiserfs.h" 41 41 #include <linux/kernel.h> 42 42 #include <linux/errno.h> 43 43 #include <linux/fcntl.h>
+1 -1
fs/reiserfs/lbalance.c
··· 5 5 #include <asm/uaccess.h> 6 6 #include <linux/string.h> 7 7 #include <linux/time.h> 8 - #include <linux/reiserfs_fs.h> 8 + #include "reiserfs.h" 9 9 #include <linux/buffer_head.h> 10 10 11 11 /* these are used in do_balance.c */
+1 -1
fs/reiserfs/lock.c
··· 1 - #include <linux/reiserfs_fs.h> 1 + #include "reiserfs.h" 2 2 #include <linux/mutex.h> 3 3 4 4 /*
+3 -3
fs/reiserfs/namei.c
··· 14 14 #include <linux/time.h> 15 15 #include <linux/bitops.h> 16 16 #include <linux/slab.h> 17 - #include <linux/reiserfs_fs.h> 18 - #include <linux/reiserfs_acl.h> 19 - #include <linux/reiserfs_xattr.h> 17 + #include "reiserfs.h" 18 + #include "acl.h" 19 + #include "xattr.h" 20 20 #include <linux/quotaops.h> 21 21 22 22 #define INC_DIR_INODE_NLINK(i) if (i->i_nlink != 1) { inc_nlink(i); if (i->i_nlink >= REISERFS_LINK_MAX) set_nlink(i, 1); }
+1 -2
fs/reiserfs/objectid.c
··· 5 5 #include <linux/string.h> 6 6 #include <linux/random.h> 7 7 #include <linux/time.h> 8 - #include <linux/reiserfs_fs.h> 9 - #include <linux/reiserfs_fs_sb.h> 8 + #include "reiserfs.h" 10 9 11 10 // find where objectid map starts 12 11 #define objectid_map(s,rs) (old_format_only (s) ? \
+2 -2
fs/reiserfs/prints.c
··· 4 4 5 5 #include <linux/time.h> 6 6 #include <linux/fs.h> 7 - #include <linux/reiserfs_fs.h> 7 + #include "reiserfs.h" 8 8 #include <linux/string.h> 9 9 #include <linux/buffer_head.h> 10 10 ··· 329 329 Numbering scheme for panic used by Vladimir and Anatoly( Hans completely ignores this scheme, and considers it 330 330 pointless complexity): 331 331 332 - panics in reiserfs_fs.h have numbers from 1000 to 1999 332 + panics in reiserfs.h have numbers from 1000 to 1999 333 333 super.c 2000 to 2999 334 334 preserve.c (unused) 3000 to 3999 335 335 bitmap.c 4000 to 4999
+1 -2
fs/reiserfs/procfs.c
··· 12 12 #include <linux/time.h> 13 13 #include <linux/seq_file.h> 14 14 #include <asm/uaccess.h> 15 - #include <linux/reiserfs_fs.h> 16 - #include <linux/reiserfs_fs_sb.h> 15 + #include "reiserfs.h" 17 16 #include <linux/init.h> 18 17 #include <linux/proc_fs.h> 19 18
+2922
fs/reiserfs/reiserfs.h
··· 1 + /* 2 + * Copyright 1996, 1997, 1998 Hans Reiser, see reiserfs/README for licensing and copyright details 3 + */ 4 + 5 + #include <linux/reiserfs_fs.h> 6 + 7 + #include <linux/slab.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/sched.h> 10 + #include <linux/workqueue.h> 11 + #include <asm/unaligned.h> 12 + #include <linux/bitops.h> 13 + #include <linux/proc_fs.h> 14 + #include <linux/buffer_head.h> 15 + 16 + /* the 32 bit compat definitions with int argument */ 17 + #define REISERFS_IOC32_UNPACK _IOW(0xCD, 1, int) 18 + #define REISERFS_IOC32_GETFLAGS FS_IOC32_GETFLAGS 19 + #define REISERFS_IOC32_SETFLAGS FS_IOC32_SETFLAGS 20 + #define REISERFS_IOC32_GETVERSION FS_IOC32_GETVERSION 21 + #define REISERFS_IOC32_SETVERSION FS_IOC32_SETVERSION 22 + 23 + struct reiserfs_journal_list; 24 + 25 + /** bitmasks for i_flags field in reiserfs-specific part of inode */ 26 + typedef enum { 27 + /** this says what format of key do all items (but stat data) of 28 + an object have. If this is set, that format is 3.6 otherwise 29 + - 3.5 */ 30 + i_item_key_version_mask = 0x0001, 31 + /** If this is unset, object has 3.5 stat data, otherwise, it has 32 + 3.6 stat data with 64bit size, 32bit nlink etc. */ 33 + i_stat_data_version_mask = 0x0002, 34 + /** file might need tail packing on close */ 35 + i_pack_on_close_mask = 0x0004, 36 + /** don't pack tail of file */ 37 + i_nopack_mask = 0x0008, 38 + /** If those is set, "safe link" was created for this file during 39 + truncate or unlink. Safe link is used to avoid leakage of disk 40 + space on crash with some files open, but unlinked. */ 41 + i_link_saved_unlink_mask = 0x0010, 42 + i_link_saved_truncate_mask = 0x0020, 43 + i_has_xattr_dir = 0x0040, 44 + i_data_log = 0x0080, 45 + } reiserfs_inode_flags; 46 + 47 + struct reiserfs_inode_info { 48 + __u32 i_key[4]; /* key is still 4 32 bit integers */ 49 + /** transient inode flags that are never stored on disk. Bitmasks 50 + for this field are defined above. 
*/ 51 + __u32 i_flags; 52 + 53 + __u32 i_first_direct_byte; // offset of first byte stored in direct item. 54 + 55 + /* copy of persistent inode flags read from sd_attrs. */ 56 + __u32 i_attrs; 57 + 58 + int i_prealloc_block; /* first unused block of a sequence of unused blocks */ 59 + int i_prealloc_count; /* length of that sequence */ 60 + struct list_head i_prealloc_list; /* per-transaction list of inodes which 61 + * have preallocated blocks */ 62 + 63 + unsigned new_packing_locality:1; /* new_packing_locality is created; new blocks 64 + * for the contents of this directory should be 65 + * displaced */ 66 + 67 + /* we use these for fsync or O_SYNC to decide which transaction 68 + ** needs to be committed in order for this inode to be properly 69 + ** flushed */ 70 + unsigned int i_trans_id; 71 + struct reiserfs_journal_list *i_jl; 72 + atomic_t openers; 73 + struct mutex tailpack; 74 + #ifdef CONFIG_REISERFS_FS_XATTR 75 + struct rw_semaphore i_xattr_sem; 76 + #endif 77 + struct inode vfs_inode; 78 + }; 79 + 80 + typedef enum { 81 + reiserfs_attrs_cleared = 0x00000001, 82 + } reiserfs_super_block_flags; 83 + 84 + /* struct reiserfs_super_block accessors/mutators 85 + * since this is a disk structure, it will always be in 86 + * little endian format. 
*/ 87 + #define sb_block_count(sbp) (le32_to_cpu((sbp)->s_v1.s_block_count)) 88 + #define set_sb_block_count(sbp,v) ((sbp)->s_v1.s_block_count = cpu_to_le32(v)) 89 + #define sb_free_blocks(sbp) (le32_to_cpu((sbp)->s_v1.s_free_blocks)) 90 + #define set_sb_free_blocks(sbp,v) ((sbp)->s_v1.s_free_blocks = cpu_to_le32(v)) 91 + #define sb_root_block(sbp) (le32_to_cpu((sbp)->s_v1.s_root_block)) 92 + #define set_sb_root_block(sbp,v) ((sbp)->s_v1.s_root_block = cpu_to_le32(v)) 93 + 94 + #define sb_jp_journal_1st_block(sbp) \ 95 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_1st_block)) 96 + #define set_sb_jp_journal_1st_block(sbp,v) \ 97 + ((sbp)->s_v1.s_journal.jp_journal_1st_block = cpu_to_le32(v)) 98 + #define sb_jp_journal_dev(sbp) \ 99 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_dev)) 100 + #define set_sb_jp_journal_dev(sbp,v) \ 101 + ((sbp)->s_v1.s_journal.jp_journal_dev = cpu_to_le32(v)) 102 + #define sb_jp_journal_size(sbp) \ 103 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_size)) 104 + #define set_sb_jp_journal_size(sbp,v) \ 105 + ((sbp)->s_v1.s_journal.jp_journal_size = cpu_to_le32(v)) 106 + #define sb_jp_journal_trans_max(sbp) \ 107 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_trans_max)) 108 + #define set_sb_jp_journal_trans_max(sbp,v) \ 109 + ((sbp)->s_v1.s_journal.jp_journal_trans_max = cpu_to_le32(v)) 110 + #define sb_jp_journal_magic(sbp) \ 111 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_magic)) 112 + #define set_sb_jp_journal_magic(sbp,v) \ 113 + ((sbp)->s_v1.s_journal.jp_journal_magic = cpu_to_le32(v)) 114 + #define sb_jp_journal_max_batch(sbp) \ 115 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_max_batch)) 116 + #define set_sb_jp_journal_max_batch(sbp,v) \ 117 + ((sbp)->s_v1.s_journal.jp_journal_max_batch = cpu_to_le32(v)) 118 + #define sb_jp_jourmal_max_commit_age(sbp) \ 119 + (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_max_commit_age)) 120 + #define set_sb_jp_journal_max_commit_age(sbp,v) \ 121 + 
((sbp)->s_v1.s_journal.jp_journal_max_commit_age = cpu_to_le32(v)) 122 + 123 + #define sb_blocksize(sbp) (le16_to_cpu((sbp)->s_v1.s_blocksize)) 124 + #define set_sb_blocksize(sbp,v) ((sbp)->s_v1.s_blocksize = cpu_to_le16(v)) 125 + #define sb_oid_maxsize(sbp) (le16_to_cpu((sbp)->s_v1.s_oid_maxsize)) 126 + #define set_sb_oid_maxsize(sbp,v) ((sbp)->s_v1.s_oid_maxsize = cpu_to_le16(v)) 127 + #define sb_oid_cursize(sbp) (le16_to_cpu((sbp)->s_v1.s_oid_cursize)) 128 + #define set_sb_oid_cursize(sbp,v) ((sbp)->s_v1.s_oid_cursize = cpu_to_le16(v)) 129 + #define sb_umount_state(sbp) (le16_to_cpu((sbp)->s_v1.s_umount_state)) 130 + #define set_sb_umount_state(sbp,v) ((sbp)->s_v1.s_umount_state = cpu_to_le16(v)) 131 + #define sb_fs_state(sbp) (le16_to_cpu((sbp)->s_v1.s_fs_state)) 132 + #define set_sb_fs_state(sbp,v) ((sbp)->s_v1.s_fs_state = cpu_to_le16(v)) 133 + #define sb_hash_function_code(sbp) \ 134 + (le32_to_cpu((sbp)->s_v1.s_hash_function_code)) 135 + #define set_sb_hash_function_code(sbp,v) \ 136 + ((sbp)->s_v1.s_hash_function_code = cpu_to_le32(v)) 137 + #define sb_tree_height(sbp) (le16_to_cpu((sbp)->s_v1.s_tree_height)) 138 + #define set_sb_tree_height(sbp,v) ((sbp)->s_v1.s_tree_height = cpu_to_le16(v)) 139 + #define sb_bmap_nr(sbp) (le16_to_cpu((sbp)->s_v1.s_bmap_nr)) 140 + #define set_sb_bmap_nr(sbp,v) ((sbp)->s_v1.s_bmap_nr = cpu_to_le16(v)) 141 + #define sb_version(sbp) (le16_to_cpu((sbp)->s_v1.s_version)) 142 + #define set_sb_version(sbp,v) ((sbp)->s_v1.s_version = cpu_to_le16(v)) 143 + 144 + #define sb_mnt_count(sbp) (le16_to_cpu((sbp)->s_mnt_count)) 145 + #define set_sb_mnt_count(sbp, v) ((sbp)->s_mnt_count = cpu_to_le16(v)) 146 + 147 + #define sb_reserved_for_journal(sbp) \ 148 + (le16_to_cpu((sbp)->s_v1.s_reserved_for_journal)) 149 + #define set_sb_reserved_for_journal(sbp,v) \ 150 + ((sbp)->s_v1.s_reserved_for_journal = cpu_to_le16(v)) 151 + 152 + /* LOGGING -- */ 153 + 154 + /* These all interrelate for performance. 
155 + ** 156 + ** If the journal block count is smaller than n transactions, you lose speed. 157 + ** I don't know what n is yet, I'm guessing 8-16. 158 + ** 159 + ** typical transaction size depends on the application, how often fsync is 160 + ** called, and how many metadata blocks you dirty in a 30 second period. 161 + ** The more small files (<16k) you use, the larger your transactions will 162 + ** be. 163 + ** 164 + ** If your journal fills faster than dirty buffers get flushed to disk, it must flush them before allowing the journal 165 + ** to wrap, which slows things down. If you need high speed meta data updates, the journal should be big enough 166 + ** to prevent wrapping before dirty meta blocks get to disk. 167 + ** 168 + ** If the batch max is smaller than the transaction max, you'll waste space at the end of the journal 169 + ** because journal_end sets the next transaction to start at 0 if the next transaction has any chance of wrapping. 170 + ** 171 + ** The larger the batch max age, the better the speed, and the more meta data changes you'll lose after a crash. 172 + ** 173 + */ 174 + 175 + /* don't mess with these for a while */ 176 + /* we have a node size define somewhere in reiserfs_fs.h. -Hans */ 177 + #define JOURNAL_BLOCK_SIZE 4096 /* BUG gotta get rid of this */ 178 + #define JOURNAL_MAX_CNODE 1500 /* max cnodes to allocate. */ 179 + #define JOURNAL_HASH_SIZE 8192 180 + #define JOURNAL_NUM_BITMAPS 5 /* number of copies of the bitmaps to have floating. Must be >= 2 */ 181 + 182 + /* One of these for every block in every transaction 183 + ** Each one is in two hash tables. First, a hash of the current transaction, and after journal_end, a 184 + ** hash of all the in memory transactions. 185 + ** next and prev are used by the current transaction (journal_hash). 186 + ** hnext and hprev are used by journal_list_hash. If a block is in more than one transaction, the journal_list_hash 187 + ** links it in multiple times. 
This allows flush_journal_list to remove just the cnode belonging 188 + ** to a given transaction. 189 + */ 190 + struct reiserfs_journal_cnode { 191 + struct buffer_head *bh; /* real buffer head */ 192 + struct super_block *sb; /* dev of real buffer head */ 193 + __u32 blocknr; /* block number of real buffer head, == 0 when buffer on disk */ 194 + unsigned long state; 195 + struct reiserfs_journal_list *jlist; /* journal list this cnode lives in */ 196 + struct reiserfs_journal_cnode *next; /* next in transaction list */ 197 + struct reiserfs_journal_cnode *prev; /* prev in transaction list */ 198 + struct reiserfs_journal_cnode *hprev; /* prev in hash list */ 199 + struct reiserfs_journal_cnode *hnext; /* next in hash list */ 200 + }; 201 + 202 + struct reiserfs_bitmap_node { 203 + int id; 204 + char *data; 205 + struct list_head list; 206 + }; 207 + 208 + struct reiserfs_list_bitmap { 209 + struct reiserfs_journal_list *journal_list; 210 + struct reiserfs_bitmap_node **bitmaps; 211 + }; 212 + 213 + /* 214 + ** one of these for each transaction. The most important part here is the j_realblock. 
215 + ** this list of cnodes is used to hash all the blocks in all the commits, to mark all the 216 + ** real buffer heads dirty once all the commits hit the disk, 217 + ** and to make sure every real block in a transaction is on disk before allowing the log area 218 + ** to be overwritten */ 219 + struct reiserfs_journal_list { 220 + unsigned long j_start; 221 + unsigned long j_state; 222 + unsigned long j_len; 223 + atomic_t j_nonzerolen; 224 + atomic_t j_commit_left; 225 + atomic_t j_older_commits_done; /* all commits older than this on disk */ 226 + struct mutex j_commit_mutex; 227 + unsigned int j_trans_id; 228 + time_t j_timestamp; 229 + struct reiserfs_list_bitmap *j_list_bitmap; 230 + struct buffer_head *j_commit_bh; /* commit buffer head */ 231 + struct reiserfs_journal_cnode *j_realblock; 232 + struct reiserfs_journal_cnode *j_freedlist; /* list of buffers that were freed during this trans. free each of these on flush */ 233 + /* time ordered list of all active transactions */ 234 + struct list_head j_list; 235 + 236 + /* time ordered list of all transactions we haven't tried to flush yet */ 237 + struct list_head j_working_list; 238 + 239 + /* list of tail conversion targets in need of flush before commit */ 240 + struct list_head j_tail_bh_list; 241 + /* list of data=ordered buffers in need of flush before commit */ 242 + struct list_head j_bh_list; 243 + int j_refcount; 244 + }; 245 + 246 + struct reiserfs_journal { 247 + struct buffer_head **j_ap_blocks; /* journal blocks on disk */ 248 + struct reiserfs_journal_cnode *j_last; /* newest journal block */ 249 + struct reiserfs_journal_cnode *j_first; /* oldest journal block. 
start here for traverse */ 250 + 251 + struct block_device *j_dev_bd; 252 + fmode_t j_dev_mode; 253 + int j_1st_reserved_block; /* first block on s_dev of reserved area journal */ 254 + 255 + unsigned long j_state; 256 + unsigned int j_trans_id; 257 + unsigned long j_mount_id; 258 + unsigned long j_start; /* start of current waiting commit (index into j_ap_blocks) */ 259 + unsigned long j_len; /* length of current waiting commit */ 260 + unsigned long j_len_alloc; /* number of buffers requested by journal_begin() */ 261 + atomic_t j_wcount; /* count of writers for current commit */ 262 + unsigned long j_bcount; /* batch count. allows turning X transactions into 1 */ 263 + unsigned long j_first_unflushed_offset; /* first unflushed transactions offset */ 264 + unsigned j_last_flush_trans_id; /* last fully flushed journal timestamp */ 265 + struct buffer_head *j_header_bh; 266 + 267 + time_t j_trans_start_time; /* time this transaction started */ 268 + struct mutex j_mutex; 269 + struct mutex j_flush_mutex; 270 + wait_queue_head_t j_join_wait; /* wait for current transaction to finish before starting new one */ 271 + atomic_t j_jlock; /* lock for j_join_wait */ 272 + int j_list_bitmap_index; /* number of next list bitmap to use */ 273 + int j_must_wait; /* no more journal begins allowed. MUST sleep on j_join_wait */ 274 + int j_next_full_flush; /* next journal_end will flush all journal list */ 275 + int j_next_async_flush; /* next journal_end will flush all async commits */ 276 + 277 + int j_cnode_used; /* number of cnodes on the used list */ 278 + int j_cnode_free; /* number of cnodes on the free list */ 279 + 280 + unsigned int j_trans_max; /* max number of blocks in a transaction. 
*/ 281 + unsigned int j_max_batch; /* max number of blocks to batch into a trans */ 282 + unsigned int j_max_commit_age; /* in seconds, how old can an async commit be */ 283 + unsigned int j_max_trans_age; /* in seconds, how old can a transaction be */ 284 + unsigned int j_default_max_commit_age; /* the default for the max commit age */ 285 + 286 + struct reiserfs_journal_cnode *j_cnode_free_list; 287 + struct reiserfs_journal_cnode *j_cnode_free_orig; /* orig pointer returned from vmalloc */ 288 + 289 + struct reiserfs_journal_list *j_current_jl; 290 + int j_free_bitmap_nodes; 291 + int j_used_bitmap_nodes; 292 + 293 + int j_num_lists; /* total number of active transactions */ 294 + int j_num_work_lists; /* number that need attention from kreiserfsd */ 295 + 296 + /* debugging to make sure things are flushed in order */ 297 + unsigned int j_last_flush_id; 298 + 299 + /* debugging to make sure things are committed in order */ 300 + unsigned int j_last_commit_id; 301 + 302 + struct list_head j_bitmap_nodes; 303 + struct list_head j_dirty_buffers; 304 + spinlock_t j_dirty_buffers_lock; /* protects j_dirty_buffers */ 305 + 306 + /* list of all active transactions */ 307 + struct list_head j_journal_list; 308 + /* lists that haven't been touched by writeback attempts */ 309 + struct list_head j_working_list; 310 + 311 + struct reiserfs_list_bitmap j_list_bitmap[JOURNAL_NUM_BITMAPS]; /* array of bitmaps to record the deleted blocks */ 312 + struct reiserfs_journal_cnode *j_hash_table[JOURNAL_HASH_SIZE]; /* hash table for real buffer heads in current trans */ 313 + struct reiserfs_journal_cnode *j_list_hash_table[JOURNAL_HASH_SIZE]; /* hash table for all the real buffer heads in all 314 + the transactions */ 315 + struct list_head j_prealloc_list; /* list of inodes which have preallocated blocks */ 316 + int j_persistent_trans; 317 + unsigned long j_max_trans_size; 318 + unsigned long j_max_batch_size; 319 + 320 + int j_errno; 321 + 322 + /* when flushing ordered 
buffers, throttle new ordered writers */ 323 + struct delayed_work j_work; 324 + struct super_block *j_work_sb; 325 + atomic_t j_async_throttle; 326 + }; 327 + 328 + enum journal_state_bits { 329 + J_WRITERS_BLOCKED = 1, /* set when new writers not allowed */ 330 + J_WRITERS_QUEUED, /* set when log is full due to too many writers */ 331 + J_ABORTED, /* set when log is aborted */ 332 + }; 333 + 334 + #define JOURNAL_DESC_MAGIC "ReIsErLB" /* ick. magic string to find desc blocks in the journal */ 335 + 336 + typedef __u32(*hashf_t) (const signed char *, int); 337 + 338 + struct reiserfs_bitmap_info { 339 + __u32 free_count; 340 + }; 341 + 342 + struct proc_dir_entry; 343 + 344 + #if defined( CONFIG_PROC_FS ) && defined( CONFIG_REISERFS_PROC_INFO ) 345 + typedef unsigned long int stat_cnt_t; 346 + typedef struct reiserfs_proc_info_data { 347 + spinlock_t lock; 348 + int exiting; 349 + int max_hash_collisions; 350 + 351 + stat_cnt_t breads; 352 + stat_cnt_t bread_miss; 353 + stat_cnt_t search_by_key; 354 + stat_cnt_t search_by_key_fs_changed; 355 + stat_cnt_t search_by_key_restarted; 356 + 357 + stat_cnt_t insert_item_restarted; 358 + stat_cnt_t paste_into_item_restarted; 359 + stat_cnt_t cut_from_item_restarted; 360 + stat_cnt_t delete_solid_item_restarted; 361 + stat_cnt_t delete_item_restarted; 362 + 363 + stat_cnt_t leaked_oid; 364 + stat_cnt_t leaves_removable; 365 + 366 + /* balances per level. Use explicit 5 as MAX_HEIGHT is not visible yet. 
*/ 367 + stat_cnt_t balance_at[5]; /* XXX */ 368 + /* sbk == search_by_key */ 369 + stat_cnt_t sbk_read_at[5]; /* XXX */ 370 + stat_cnt_t sbk_fs_changed[5]; 371 + stat_cnt_t sbk_restarted[5]; 372 + stat_cnt_t items_at[5]; /* XXX */ 373 + stat_cnt_t free_at[5]; /* XXX */ 374 + stat_cnt_t can_node_be_removed[5]; /* XXX */ 375 + long int lnum[5]; /* XXX */ 376 + long int rnum[5]; /* XXX */ 377 + long int lbytes[5]; /* XXX */ 378 + long int rbytes[5]; /* XXX */ 379 + stat_cnt_t get_neighbors[5]; 380 + stat_cnt_t get_neighbors_restart[5]; 381 + stat_cnt_t need_l_neighbor[5]; 382 + stat_cnt_t need_r_neighbor[5]; 383 + 384 + stat_cnt_t free_block; 385 + struct __scan_bitmap_stats { 386 + stat_cnt_t call; 387 + stat_cnt_t wait; 388 + stat_cnt_t bmap; 389 + stat_cnt_t retry; 390 + stat_cnt_t in_journal_hint; 391 + stat_cnt_t in_journal_nohint; 392 + stat_cnt_t stolen; 393 + } scan_bitmap; 394 + struct __journal_stats { 395 + stat_cnt_t in_journal; 396 + stat_cnt_t in_journal_bitmap; 397 + stat_cnt_t in_journal_reusable; 398 + stat_cnt_t lock_journal; 399 + stat_cnt_t lock_journal_wait; 400 + stat_cnt_t journal_being; 401 + stat_cnt_t journal_relock_writers; 402 + stat_cnt_t journal_relock_wcount; 403 + stat_cnt_t mark_dirty; 404 + stat_cnt_t mark_dirty_already; 405 + stat_cnt_t mark_dirty_notjournal; 406 + stat_cnt_t restore_prepared; 407 + stat_cnt_t prepare; 408 + stat_cnt_t prepare_retry; 409 + } journal; 410 + } reiserfs_proc_info_data_t; 411 + #else 412 + typedef struct reiserfs_proc_info_data { 413 + } reiserfs_proc_info_data_t; 414 + #endif 415 + 416 + /* reiserfs union of in-core super block data */ 417 + struct reiserfs_sb_info { 418 + struct buffer_head *s_sbh; /* Buffer containing the super block */ 419 + /* both the comment and the choice of 420 + name are unclear for s_rs -Hans */ 421 + struct reiserfs_super_block *s_rs; /* Pointer to the super block in the buffer */ 422 + struct reiserfs_bitmap_info *s_ap_bitmap; 423 + struct reiserfs_journal *s_journal; /* 
pointer to journal information */ 424 + unsigned short s_mount_state; /* reiserfs state (valid, invalid) */ 425 + 426 + /* Serialize writers access, replace the old bkl */ 427 + struct mutex lock; 428 + /* Owner of the lock (can be recursive) */ 429 + struct task_struct *lock_owner; 430 + /* Depth of the lock, start from -1 like the bkl */ 431 + int lock_depth; 432 + 433 + /* Comment? -Hans */ 434 + void (*end_io_handler) (struct buffer_head *, int); 435 + hashf_t s_hash_function; /* pointer to function which is used 436 + to sort names in directory. Set on 437 + mount */ 438 + unsigned long s_mount_opt; /* reiserfs's mount options are set 439 + here (currently - NOTAIL, NOLOG, 440 + REPLAYONLY) */ 441 + 442 + struct { /* This is a structure that describes block allocator options */ 443 + unsigned long bits; /* Bitfield for enable/disable kind of options */ 444 + unsigned long large_file_size; /* size started from which we consider file to be a large one(in blocks) */ 445 + int border; /* percentage of disk, border takes */ 446 + int preallocmin; /* Minimal file size (in blocks) starting from which we do preallocations */ 447 + int preallocsize; /* Number of blocks we try to prealloc when file 448 + reaches preallocmin size (in blocks) or 449 + prealloc_list is empty. */ 450 + } s_alloc_options; 451 + 452 + /* Comment? -Hans */ 453 + wait_queue_head_t s_wait; 454 + /* To be obsoleted soon by per buffer seals.. -Hans */ 455 + atomic_t s_generation_counter; // increased by one every time the 456 + // tree gets re-balanced 457 + unsigned long s_properties; /* File system properties. 
Currently holds
458 + on-disk FS format */
459 + 
460 + /* session statistics */
461 + int s_disk_reads;
462 + int s_disk_writes;
463 + int s_fix_nodes;
464 + int s_do_balance;
465 + int s_unneeded_left_neighbor;
466 + int s_good_search_by_key_reada;
467 + int s_bmaps;
468 + int s_bmaps_without_search;
469 + int s_direct2indirect;
470 + int s_indirect2direct;
471 + /* set up when it's ok for reiserfs_read_inode2() to read from
472 + disk inode with nlink==0. Currently this is only used during
473 + finish_unfinished() processing at mount time */
474 + int s_is_unlinked_ok;
475 + reiserfs_proc_info_data_t s_proc_info_data;
476 + struct proc_dir_entry *procdir;
477 + int reserved_blocks; /* number of blocks reserved for further allocations */
478 + spinlock_t bitmap_lock; /* this lock is now only used to protect the reserved_blocks variable */
479 + struct dentry *priv_root; /* root of /.reiserfs_priv */
480 + struct dentry *xattr_root; /* root of /.reiserfs_priv/xattrs */
481 + int j_errno;
482 + #ifdef CONFIG_QUOTA
483 + char *s_qf_names[MAXQUOTAS];
484 + int s_jquota_fmt;
485 + #endif
486 + char *s_jdev; /* Stored jdev for mount option showing */
487 + #ifdef CONFIG_REISERFS_CHECK
488 + 
489 + struct tree_balance *cur_tb; /*
490 + * Detects whether more than one
491 + * copy of tb exists per superblock
492 + * as a means of checking whether
493 + * do_balance is executing concurrently
494 + * against another tree reader/writer
495 + * on the same mount point.
496 + */
497 + #endif
498 + };
499 + 
500 + /* Definitions of reiserfs on-disk properties: */
501 + #define REISERFS_3_5 0
502 + #define REISERFS_3_6 1
503 + #define REISERFS_OLD_FORMAT 2
504 + 
505 + enum reiserfs_mount_options {
506 + /* Mount options */
507 + REISERFS_LARGETAIL, /* large tails will be created in a session */
508 + REISERFS_SMALLTAIL, /* small (for files less than block size) tails will be created in a session */
509 + REPLAYONLY, /* replay journal and return 0. 
Used by fsck */
510 + REISERFS_CONVERT, /* -o conv: causes conversion of old
511 + format super block to the new
512 + format. If not specified - old
513 + partition will be dealt with in a
514 + manner of 3.5.x */
515 + 
516 + /* -o hash={tea, rupasov, r5, detect} is meant for properly mounting
517 + ** reiserfs disks from 3.5.19 or earlier. 99% of the time, this option
518 + ** is not required. If the normal autodetection code can't determine which
519 + ** hash to use (because both hashes had the same value for a file)
520 + ** use this option to force a specific hash. It won't allow you to override
521 + ** the existing hash on the FS, so if you have a tea hash disk, and mount
522 + ** with -o hash=rupasov, the mount will fail.
523 + */
524 + FORCE_TEA_HASH, /* try to force tea hash on mount */
525 + FORCE_RUPASOV_HASH, /* try to force rupasov hash on mount */
526 + FORCE_R5_HASH, /* try to force r5 hash on mount */
527 + FORCE_HASH_DETECT, /* try to detect hash function on mount */
528 + 
529 + REISERFS_DATA_LOG,
530 + REISERFS_DATA_ORDERED,
531 + REISERFS_DATA_WRITEBACK,
532 + 
533 + /* used for testing experimental features, makes benchmarking new
534 + features with and without them more convenient, should never be used by
535 + users in any code shipped to users (ideally) */
536 + 
537 + REISERFS_NO_BORDER,
538 + REISERFS_NO_UNHASHED_RELOCATION,
539 + REISERFS_HASHED_RELOCATION,
540 + REISERFS_ATTRS,
541 + REISERFS_XATTRS_USER,
542 + REISERFS_POSIXACL,
543 + REISERFS_EXPOSE_PRIVROOT,
544 + REISERFS_BARRIER_NONE,
545 + REISERFS_BARRIER_FLUSH,
546 + 
547 + /* Actions on error */
548 + REISERFS_ERROR_PANIC,
549 + REISERFS_ERROR_RO,
550 + REISERFS_ERROR_CONTINUE,
551 + 
552 + REISERFS_USRQUOTA, /* User quota option specified */
553 + REISERFS_GRPQUOTA, /* Group quota option specified */
554 + 
555 + REISERFS_TEST1,
556 + REISERFS_TEST2,
557 + REISERFS_TEST3,
558 + REISERFS_TEST4,
559 + REISERFS_UNSUPPORTED_OPT,
560 + };
561 + 
562 + #define reiserfs_r5_hash(s) 
(REISERFS_SB(s)->s_mount_opt & (1 << FORCE_R5_HASH)) 563 + #define reiserfs_rupasov_hash(s) (REISERFS_SB(s)->s_mount_opt & (1 << FORCE_RUPASOV_HASH)) 564 + #define reiserfs_tea_hash(s) (REISERFS_SB(s)->s_mount_opt & (1 << FORCE_TEA_HASH)) 565 + #define reiserfs_hash_detect(s) (REISERFS_SB(s)->s_mount_opt & (1 << FORCE_HASH_DETECT)) 566 + #define reiserfs_no_border(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_NO_BORDER)) 567 + #define reiserfs_no_unhashed_relocation(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_NO_UNHASHED_RELOCATION)) 568 + #define reiserfs_hashed_relocation(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_HASHED_RELOCATION)) 569 + #define reiserfs_test4(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_TEST4)) 570 + 571 + #define have_large_tails(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_LARGETAIL)) 572 + #define have_small_tails(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_SMALLTAIL)) 573 + #define replay_only(s) (REISERFS_SB(s)->s_mount_opt & (1 << REPLAYONLY)) 574 + #define reiserfs_attrs(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_ATTRS)) 575 + #define old_format_only(s) (REISERFS_SB(s)->s_properties & (1 << REISERFS_3_5)) 576 + #define convert_reiserfs(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_CONVERT)) 577 + #define reiserfs_data_log(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_DATA_LOG)) 578 + #define reiserfs_data_ordered(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_DATA_ORDERED)) 579 + #define reiserfs_data_writeback(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_DATA_WRITEBACK)) 580 + #define reiserfs_xattrs_user(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_XATTRS_USER)) 581 + #define reiserfs_posixacl(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_POSIXACL)) 582 + #define reiserfs_expose_privroot(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_EXPOSE_PRIVROOT)) 583 + #define reiserfs_xattrs_optional(s) (reiserfs_xattrs_user(s) || reiserfs_posixacl(s)) 584 + #define reiserfs_barrier_none(s) 
(REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_BARRIER_NONE)) 585 + #define reiserfs_barrier_flush(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_BARRIER_FLUSH)) 586 + 587 + #define reiserfs_error_panic(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_ERROR_PANIC)) 588 + #define reiserfs_error_ro(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_ERROR_RO)) 589 + 590 + void reiserfs_file_buffer(struct buffer_head *bh, int list); 591 + extern struct file_system_type reiserfs_fs_type; 592 + int reiserfs_resize(struct super_block *, unsigned long); 593 + 594 + #define CARRY_ON 0 595 + #define SCHEDULE_OCCURRED 1 596 + 597 + #define SB_BUFFER_WITH_SB(s) (REISERFS_SB(s)->s_sbh) 598 + #define SB_JOURNAL(s) (REISERFS_SB(s)->s_journal) 599 + #define SB_JOURNAL_1st_RESERVED_BLOCK(s) (SB_JOURNAL(s)->j_1st_reserved_block) 600 + #define SB_JOURNAL_LEN_FREE(s) (SB_JOURNAL(s)->j_journal_len_free) 601 + #define SB_AP_BITMAP(s) (REISERFS_SB(s)->s_ap_bitmap) 602 + 603 + #define SB_DISK_JOURNAL_HEAD(s) (SB_JOURNAL(s)->j_header_bh->) 604 + 605 + /* A safe version of the "bdevname", which returns the "s_id" field of 606 + * a superblock or else "Null superblock" if the super block is NULL. 607 + */ 608 + static inline char *reiserfs_bdevname(struct super_block *s) 609 + { 610 + return (s == NULL) ? "Null superblock" : s->s_id; 611 + } 612 + 613 + #define reiserfs_is_journal_aborted(journal) (unlikely (__reiserfs_is_journal_aborted (journal))) 614 + static inline int __reiserfs_is_journal_aborted(struct reiserfs_journal 615 + *journal) 616 + { 617 + return test_bit(J_ABORTED, &journal->j_state); 618 + } 619 + 620 + /* 621 + * Locking primitives. The write lock is a per superblock 622 + * special mutex that has properties close to the Big Kernel Lock 623 + * which was used in the previous locking scheme. 
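 *
 * Typical pairing, sketched here for illustration only (the caller
 * below is hypothetical, not code from this file):
 *
 *	reiserfs_write_lock(sb);
 *	... tree lookups and modifications ...
 *	reiserfs_write_unlock(sb);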
624 + */
625 + void reiserfs_write_lock(struct super_block *s);
626 + void reiserfs_write_unlock(struct super_block *s);
627 + int reiserfs_write_lock_once(struct super_block *s);
628 + void reiserfs_write_unlock_once(struct super_block *s, int lock_depth);
629 + 
630 + #ifdef CONFIG_REISERFS_CHECK
631 + void reiserfs_lock_check_recursive(struct super_block *s);
632 + #else
633 + static inline void reiserfs_lock_check_recursive(struct super_block *s) { }
634 + #endif
635 + 
636 + /*
637 + * Several mutexes depend on the write lock.
638 + * However sometimes we want to relax the write lock while we hold
639 + * these mutexes, according to the release/reacquire on schedule()
640 + * properties of the Bkl that were used.
641 + * Reiserfs performance and locking were based on this scheme.
642 + * Now that the write lock is a mutex and not the bkl anymore, doing so
643 + * may result in a deadlock:
644 + * 
645 + * A acquires write_lock
646 + * A acquires j_commit_mutex
647 + * A releases write_lock and waits for something
648 + * B acquires write_lock
649 + * B can't acquire j_commit_mutex and sleeps
650 + * A can't acquire the write lock anymore
651 + * deadlock
652 + * 
653 + * What we do here is avoid such a deadlock by playing the same game
654 + * as the Bkl: if we can't acquire a mutex that depends on the write lock,
655 + * we release the write lock, wait a bit and then retry. 
656 + * 657 + * The mutexes concerned by this hack are: 658 + * - The commit mutex of a journal list 659 + * - The flush mutex 660 + * - The journal lock 661 + * - The inode mutex 662 + */ 663 + static inline void reiserfs_mutex_lock_safe(struct mutex *m, 664 + struct super_block *s) 665 + { 666 + reiserfs_lock_check_recursive(s); 667 + reiserfs_write_unlock(s); 668 + mutex_lock(m); 669 + reiserfs_write_lock(s); 670 + } 671 + 672 + static inline void 673 + reiserfs_mutex_lock_nested_safe(struct mutex *m, unsigned int subclass, 674 + struct super_block *s) 675 + { 676 + reiserfs_lock_check_recursive(s); 677 + reiserfs_write_unlock(s); 678 + mutex_lock_nested(m, subclass); 679 + reiserfs_write_lock(s); 680 + } 681 + 682 + static inline void 683 + reiserfs_down_read_safe(struct rw_semaphore *sem, struct super_block *s) 684 + { 685 + reiserfs_lock_check_recursive(s); 686 + reiserfs_write_unlock(s); 687 + down_read(sem); 688 + reiserfs_write_lock(s); 689 + } 690 + 691 + /* 692 + * When we schedule, we usually want to also release the write lock, 693 + * according to the previous bkl based locking scheme of reiserfs. 
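 *
 * Hypothetical caller, for illustration only (neither helper below
 * exists in this file):
 *
 *	while (more_journal_work(s)) {
 *		flush_one_journal_block(s);
 *		reiserfs_cond_resched(s);
 *	}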
694 + */ 695 + static inline void reiserfs_cond_resched(struct super_block *s) 696 + { 697 + if (need_resched()) { 698 + reiserfs_write_unlock(s); 699 + schedule(); 700 + reiserfs_write_lock(s); 701 + } 702 + } 703 + 704 + struct fid; 705 + 706 + /* in reading the #defines, it may help to understand that they employ 707 + the following abbreviations: 708 + 709 + B = Buffer 710 + I = Item header 711 + H = Height within the tree (should be changed to LEV) 712 + N = Number of the item in the node 713 + STAT = stat data 714 + DEH = Directory Entry Header 715 + EC = Entry Count 716 + E = Entry number 717 + UL = Unsigned Long 718 + BLKH = BLocK Header 719 + UNFM = UNForMatted node 720 + DC = Disk Child 721 + P = Path 722 + 723 + These #defines are named by concatenating these abbreviations, 724 + where first comes the arguments, and last comes the return value, 725 + of the macro. 726 + 727 + */ 728 + 729 + #define USE_INODE_GENERATION_COUNTER 730 + 731 + #define REISERFS_PREALLOCATE 732 + #define DISPLACE_NEW_PACKING_LOCALITIES 733 + #define PREALLOCATION_SIZE 9 734 + 735 + /* n must be power of 2 */ 736 + #define _ROUND_UP(x,n) (((x)+(n)-1u) & ~((n)-1u)) 737 + 738 + // to be ok for alpha and others we have to align structures to 8 byte 739 + // boundary. 740 + // FIXME: do not change 4 by anything else: there is code which relies on that 741 + #define ROUND_UP(x) _ROUND_UP(x,8LL) 742 + 743 + /* debug levels. Right now, CONFIG_REISERFS_CHECK means print all debug 744 + ** messages. 745 + */ 746 + #define REISERFS_DEBUG_CODE 5 /* extra messages to help find/debug errors */ 747 + 748 + void __reiserfs_warning(struct super_block *s, const char *id, 749 + const char *func, const char *fmt, ...); 750 + #define reiserfs_warning(s, id, fmt, args...) \ 751 + __reiserfs_warning(s, id, __func__, fmt, ##args) 752 + /* assertions handling */ 753 + 754 + /** always check a condition and panic if it's false. */ 755 + #define __RASSERT(cond, scond, format, args...) 
\ 756 + do { \ 757 + if (!(cond)) \ 758 + reiserfs_panic(NULL, "assertion failure", "(" #cond ") at " \ 759 + __FILE__ ":%i:%s: " format "\n", \ 760 + in_interrupt() ? -1 : task_pid_nr(current), \ 761 + __LINE__, __func__ , ##args); \ 762 + } while (0) 763 + 764 + #define RASSERT(cond, format, args...) __RASSERT(cond, #cond, format, ##args) 765 + 766 + #if defined( CONFIG_REISERFS_CHECK ) 767 + #define RFALSE(cond, format, args...) __RASSERT(!(cond), "!(" #cond ")", format, ##args) 768 + #else 769 + #define RFALSE( cond, format, args... ) do {;} while( 0 ) 770 + #endif 771 + 772 + #define CONSTF __attribute_const__ 773 + /* 774 + * Disk Data Structures 775 + */ 776 + 777 + /***************************************************************************/ 778 + /* SUPER BLOCK */ 779 + /***************************************************************************/ 780 + 781 + /* 782 + * Structure of super block on disk, a version of which in RAM is often accessed as REISERFS_SB(s)->s_rs 783 + * the version in RAM is part of a larger structure containing fields never written to disk. 784 + */ 785 + #define UNSET_HASH 0 // read_super will guess about, what hash names 786 + // in directories were sorted with 787 + #define TEA_HASH 1 788 + #define YURA_HASH 2 789 + #define R5_HASH 3 790 + #define DEFAULT_HASH R5_HASH 791 + 792 + struct journal_params { 793 + __le32 jp_journal_1st_block; /* where does journal start from on its 794 + * device */ 795 + __le32 jp_journal_dev; /* journal device st_rdev */ 796 + __le32 jp_journal_size; /* size of the journal */ 797 + __le32 jp_journal_trans_max; /* max number of blocks in a transaction. 
*/ 798 + __le32 jp_journal_magic; /* random value made on fs creation (this 799 + * was sb_journal_block_count) */ 800 + __le32 jp_journal_max_batch; /* max number of blocks to batch into a 801 + * trans */ 802 + __le32 jp_journal_max_commit_age; /* in seconds, how old can an async 803 + * commit be */ 804 + __le32 jp_journal_max_trans_age; /* in seconds, how old can a transaction 805 + * be */ 806 + }; 807 + 808 + /* this is the super from 3.5.X, where X >= 10 */ 809 + struct reiserfs_super_block_v1 { 810 + __le32 s_block_count; /* blocks count */ 811 + __le32 s_free_blocks; /* free blocks count */ 812 + __le32 s_root_block; /* root block number */ 813 + struct journal_params s_journal; 814 + __le16 s_blocksize; /* block size */ 815 + __le16 s_oid_maxsize; /* max size of object id array, see 816 + * get_objectid() commentary */ 817 + __le16 s_oid_cursize; /* current size of object id array */ 818 + __le16 s_umount_state; /* this is set to 1 when filesystem was 819 + * umounted, to 2 - when not */ 820 + char s_magic[10]; /* reiserfs magic string indicates that 821 + * file system is reiserfs: 822 + * "ReIsErFs" or "ReIsEr2Fs" or "ReIsEr3Fs" */ 823 + __le16 s_fs_state; /* it is set to used by fsck to mark which 824 + * phase of rebuilding is done */ 825 + __le32 s_hash_function_code; /* indicate, what hash function is being use 826 + * to sort names in a directory*/ 827 + __le16 s_tree_height; /* height of disk tree */ 828 + __le16 s_bmap_nr; /* amount of bitmap blocks needed to address 829 + * each block of file system */ 830 + __le16 s_version; /* this field is only reliable on filesystem 831 + * with non-standard journal */ 832 + __le16 s_reserved_for_journal; /* size in blocks of journal area on main 833 + * device, we need to keep after 834 + * making fs with non-standard journal */ 835 + } __attribute__ ((__packed__)); 836 + 837 + #define SB_SIZE_V1 (sizeof(struct reiserfs_super_block_v1)) 838 + 839 + /* this is the on disk super block */ 840 + struct 
reiserfs_super_block {
841 + struct reiserfs_super_block_v1 s_v1;
842 + __le32 s_inode_generation;
843 + __le32 s_flags; /* Right now used only by inode-attributes, if enabled */
844 + unsigned char s_uuid[16]; /* filesystem unique identifier */
845 + unsigned char s_label[16]; /* filesystem volume label */
846 + __le16 s_mnt_count; /* Count of mounts since last fsck */
847 + __le16 s_max_mnt_count; /* Maximum mounts before check */
848 + __le32 s_lastcheck; /* Timestamp of last fsck */
849 + __le32 s_check_interval; /* Interval between checks */
850 + char s_unused[76]; /* zero filled by mkreiserfs and
851 + * reiserfs_convert_objectid_map_v1()
852 + * so any additions must be updated
853 + * there as well. */
854 + } __attribute__ ((__packed__));
855 + 
856 + #define SB_SIZE (sizeof(struct reiserfs_super_block))
857 + 
858 + #define REISERFS_VERSION_1 0
859 + #define REISERFS_VERSION_2 2
860 + 
861 + // on-disk super block fields converted to cpu form
862 + #define SB_DISK_SUPER_BLOCK(s) (REISERFS_SB(s)->s_rs)
863 + #define SB_V1_DISK_SUPER_BLOCK(s) (&(SB_DISK_SUPER_BLOCK(s)->s_v1))
864 + #define SB_BLOCKSIZE(s) \
865 + le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_blocksize))
866 + #define SB_BLOCK_COUNT(s) \
867 + le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_block_count))
868 + #define SB_FREE_BLOCKS(s) \
869 + le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_free_blocks))
870 + #define SB_REISERFS_MAGIC(s) \
871 + (SB_V1_DISK_SUPER_BLOCK(s)->s_magic)
872 + #define SB_ROOT_BLOCK(s) \
873 + le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_root_block))
874 + #define SB_TREE_HEIGHT(s) \
875 + le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_tree_height))
876 + #define SB_REISERFS_STATE(s) \
877 + le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_umount_state))
878 + #define SB_VERSION(s) le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_version))
879 + #define SB_BMAP_NR(s) le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_bmap_nr))
880 + 
881 + #define PUT_SB_BLOCK_COUNT(s, val) \
882 + do { 
SB_V1_DISK_SUPER_BLOCK(s)->s_block_count = cpu_to_le32(val); } while (0) 883 + #define PUT_SB_FREE_BLOCKS(s, val) \ 884 + do { SB_V1_DISK_SUPER_BLOCK(s)->s_free_blocks = cpu_to_le32(val); } while (0) 885 + #define PUT_SB_ROOT_BLOCK(s, val) \ 886 + do { SB_V1_DISK_SUPER_BLOCK(s)->s_root_block = cpu_to_le32(val); } while (0) 887 + #define PUT_SB_TREE_HEIGHT(s, val) \ 888 + do { SB_V1_DISK_SUPER_BLOCK(s)->s_tree_height = cpu_to_le16(val); } while (0) 889 + #define PUT_SB_REISERFS_STATE(s, val) \ 890 + do { SB_V1_DISK_SUPER_BLOCK(s)->s_umount_state = cpu_to_le16(val); } while (0) 891 + #define PUT_SB_VERSION(s, val) \ 892 + do { SB_V1_DISK_SUPER_BLOCK(s)->s_version = cpu_to_le16(val); } while (0) 893 + #define PUT_SB_BMAP_NR(s, val) \ 894 + do { SB_V1_DISK_SUPER_BLOCK(s)->s_bmap_nr = cpu_to_le16 (val); } while (0) 895 + 896 + #define SB_ONDISK_JP(s) (&SB_V1_DISK_SUPER_BLOCK(s)->s_journal) 897 + #define SB_ONDISK_JOURNAL_SIZE(s) \ 898 + le32_to_cpu ((SB_ONDISK_JP(s)->jp_journal_size)) 899 + #define SB_ONDISK_JOURNAL_1st_BLOCK(s) \ 900 + le32_to_cpu ((SB_ONDISK_JP(s)->jp_journal_1st_block)) 901 + #define SB_ONDISK_JOURNAL_DEVICE(s) \ 902 + le32_to_cpu ((SB_ONDISK_JP(s)->jp_journal_dev)) 903 + #define SB_ONDISK_RESERVED_FOR_JOURNAL(s) \ 904 + le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_reserved_for_journal)) 905 + 906 + #define is_block_in_log_or_reserved_area(s, block) \ 907 + block >= SB_JOURNAL_1st_RESERVED_BLOCK(s) \ 908 + && block < SB_JOURNAL_1st_RESERVED_BLOCK(s) + \ 909 + ((!is_reiserfs_jr(SB_DISK_SUPER_BLOCK(s)) ? \ 910 + SB_ONDISK_JOURNAL_SIZE(s) + 1 : SB_ONDISK_RESERVED_FOR_JOURNAL(s))) 911 + 912 + int is_reiserfs_3_5(struct reiserfs_super_block *rs); 913 + int is_reiserfs_3_6(struct reiserfs_super_block *rs); 914 + int is_reiserfs_jr(struct reiserfs_super_block *rs); 915 + 916 + /* ReiserFS leaves the first 64k unused, so that partition labels have 917 + enough space. 
If someone wants to write a fancy bootloader that
918 + needs more than 64k, let us know, and this will be increased in size.
919 + This number must be larger than the largest block size on any
920 + platform, or code will break. -Hans */
921 + #define REISERFS_DISK_OFFSET_IN_BYTES (64 * 1024)
922 + #define REISERFS_FIRST_BLOCK unused_define
923 + #define REISERFS_JOURNAL_OFFSET_IN_BYTES REISERFS_DISK_OFFSET_IN_BYTES
924 + 
925 + /* the spot for the super in versions 3.5 - 3.5.10 (inclusive) */
926 + #define REISERFS_OLD_DISK_OFFSET_IN_BYTES (8 * 1024)
927 + 
928 + /* reiserfs internal error codes (used by search_by_key and fix_nodes) */
929 + #define CARRY_ON 0
930 + #define REPEAT_SEARCH -1
931 + #define IO_ERROR -2
932 + #define NO_DISK_SPACE -3
933 + #define NO_BALANCING_NEEDED (-4)
934 + #define NO_MORE_UNUSED_CONTIGUOUS_BLOCKS (-5)
935 + #define QUOTA_EXCEEDED -6
936 + 
937 + typedef __u32 b_blocknr_t;
938 + typedef __le32 unp_t;
939 + 
940 + struct unfm_nodeinfo {
941 + unp_t unfm_nodenum;
942 + unsigned short unfm_freespace;
943 + };
944 + 
945 + /* there are two formats of keys: 3.5 and 3.6
946 + */
947 + #define KEY_FORMAT_3_5 0
948 + #define KEY_FORMAT_3_6 1
949 + 
950 + /* there are two stat data formats */
951 + #define STAT_DATA_V1 0
952 + #define STAT_DATA_V2 1
953 + 
954 + static inline struct reiserfs_inode_info *REISERFS_I(const struct inode *inode)
955 + {
956 + return container_of(inode, struct reiserfs_inode_info, vfs_inode);
957 + }
958 + 
959 + static inline struct reiserfs_sb_info *REISERFS_SB(const struct super_block *sb)
960 + {
961 + return sb->s_fs_info;
962 + }
963 + 
964 + /* Don't trust REISERFS_SB(sb)->s_bmap_nr, it's a u16
965 + * which overflows on large file systems. 
*/ 966 + static inline __u32 reiserfs_bmap_count(struct super_block *sb) 967 + { 968 + return (SB_BLOCK_COUNT(sb) - 1) / (sb->s_blocksize * 8) + 1; 969 + } 970 + 971 + static inline int bmap_would_wrap(unsigned bmap_nr) 972 + { 973 + return bmap_nr > ((1LL << 16) - 1); 974 + } 975 + 976 + /** this says about version of key of all items (but stat data) the 977 + object consists of */ 978 + #define get_inode_item_key_version( inode ) \ 979 + ((REISERFS_I(inode)->i_flags & i_item_key_version_mask) ? KEY_FORMAT_3_6 : KEY_FORMAT_3_5) 980 + 981 + #define set_inode_item_key_version( inode, version ) \ 982 + ({ if((version)==KEY_FORMAT_3_6) \ 983 + REISERFS_I(inode)->i_flags |= i_item_key_version_mask; \ 984 + else \ 985 + REISERFS_I(inode)->i_flags &= ~i_item_key_version_mask; }) 986 + 987 + #define get_inode_sd_version(inode) \ 988 + ((REISERFS_I(inode)->i_flags & i_stat_data_version_mask) ? STAT_DATA_V2 : STAT_DATA_V1) 989 + 990 + #define set_inode_sd_version(inode, version) \ 991 + ({ if((version)==STAT_DATA_V2) \ 992 + REISERFS_I(inode)->i_flags |= i_stat_data_version_mask; \ 993 + else \ 994 + REISERFS_I(inode)->i_flags &= ~i_stat_data_version_mask; }) 995 + 996 + /* This is an aggressive tail suppression policy, I am hoping it 997 + improves our benchmarks. The principle behind it is that percentage 998 + space saving is what matters, not absolute space saving. This is 999 + non-intuitive, but it helps to understand it if you consider that the 1000 + cost to access 4 blocks is not much more than the cost to access 1 1001 + block, if you have to do a seek and rotate. A tail risks a 1002 + non-linear disk access that is significant as a percentage of total 1003 + time cost for a 4 block file and saves an amount of space that is 1004 + less significant as a percentage of space, or so goes the hypothesis. 
   -Hans */
#define STORE_TAIL_IN_UNFM_S1(n_file_size,n_tail_size,n_block_size) \
(\
  (!(n_tail_size)) || \
  (((n_tail_size) > MAX_DIRECT_ITEM_LEN(n_block_size)) || \
   ( (n_file_size) >= (n_block_size) * 4 ) || \
   ( ( (n_file_size) >= (n_block_size) * 3 ) && \
     ( (n_tail_size) >= (MAX_DIRECT_ITEM_LEN(n_block_size))/4) ) || \
   ( ( (n_file_size) >= (n_block_size) * 2 ) && \
     ( (n_tail_size) >= (MAX_DIRECT_ITEM_LEN(n_block_size))/2) ) || \
   ( ( (n_file_size) >= (n_block_size) ) && \
     ( (n_tail_size) >= (MAX_DIRECT_ITEM_LEN(n_block_size) * 3)/4) ) ) \
)

/* Another strategy for tails, this one means only create a tail if all the
   file would fit into one DIRECT item.
   Primary intention for this one is to increase performance by decreasing
   seeking.
*/
#define STORE_TAIL_IN_UNFM_S2(n_file_size,n_tail_size,n_block_size) \
(\
  (!(n_tail_size)) || \
  (((n_file_size) > MAX_DIRECT_ITEM_LEN(n_block_size)) ) \
)

/*
 * values for s_umount_state field
 */
#define REISERFS_VALID_FS 1
#define REISERFS_ERROR_FS 2

//
// there are 5 item types currently
//
#define TYPE_STAT_DATA 0
#define TYPE_INDIRECT 1
#define TYPE_DIRECT 2
#define TYPE_DIRENTRY 3
#define TYPE_MAXTYPE 3
#define TYPE_ANY 15  // FIXME: comment is required

/***************************************************************************/
/*                       KEY & ITEM HEAD                                   */
/***************************************************************************/

//
// directories use this key as well as old files
//
struct offset_v1 {
    __le32 k_offset;
    __le32 k_uniqueness;
} __attribute__ ((__packed__));

struct offset_v2 {
    __le64 v;
} __attribute__ ((__packed__));

static inline __u16 offset_v2_k_type(const struct offset_v2 *v2)
{
    __u8 type = le64_to_cpu(v2->v) >> 60;
    return (type <= TYPE_MAXTYPE) ? type : TYPE_ANY;
}

static inline void set_offset_v2_k_type(struct offset_v2 *v2, int type)
{
    v2->v =
        (v2->v & cpu_to_le64(~0ULL >> 4)) | cpu_to_le64((__u64) type << 60);
}

static inline loff_t offset_v2_k_offset(const struct offset_v2 *v2)
{
    return le64_to_cpu(v2->v) & (~0ULL >> 4);
}

static inline void set_offset_v2_k_offset(struct offset_v2 *v2, loff_t offset)
{
    offset &= (~0ULL >> 4);
    v2->v = (v2->v & cpu_to_le64(15ULL << 60)) | cpu_to_le64(offset);
}

/* Key of an item determines its location in the S+tree, and
   is composed of 4 components */
struct reiserfs_key {
    __le32 k_dir_id;    /* packing locality: by default parent
                           directory object id */
    __le32 k_objectid;  /* object identifier */
    union {
        struct offset_v1 k_offset_v1;
        struct offset_v2 k_offset_v2;
    } __attribute__ ((__packed__)) u;
} __attribute__ ((__packed__));

struct in_core_key {
    __u32 k_dir_id;     /* packing locality: by default parent
                           directory object id */
    __u32 k_objectid;   /* object identifier */
    __u64 k_offset;
    __u8 k_type;
};

struct cpu_key {
    struct in_core_key on_disk_key;
    int version;
    int key_length;     /* 3 in all cases but direct2indirect and
                           indirect2direct conversion */
};

/* Our function for comparing keys can compare keys of different
   lengths.  It takes as a parameter the length of the keys it is to
   compare.  These defines are used in determining what is to be passed
   to it as that parameter.
 */
#define REISERFS_FULL_KEY_LEN 4
#define REISERFS_SHORT_KEY_LEN 2

/* The result of the key compare */
#define FIRST_GREATER 1
#define SECOND_GREATER -1
#define KEYS_IDENTICAL 0
#define KEY_FOUND 1
#define KEY_NOT_FOUND 0

#define KEY_SIZE (sizeof(struct reiserfs_key))
#define SHORT_KEY_SIZE (sizeof (__u32) + sizeof (__u32))

/* return values for search_by_key and clones */
#define ITEM_FOUND 1
#define ITEM_NOT_FOUND 0
#define ENTRY_FOUND 1
#define ENTRY_NOT_FOUND 0
#define DIRECTORY_NOT_FOUND -1
#define REGULAR_FILE_FOUND -2
#define DIRECTORY_FOUND -3
#define BYTE_FOUND 1
#define BYTE_NOT_FOUND 0
#define FILE_NOT_FOUND -1

#define POSITION_FOUND 1
#define POSITION_NOT_FOUND 0

// return values for reiserfs_find_entry and search_by_entry_key
#define NAME_FOUND 1
#define NAME_NOT_FOUND 0
#define GOTO_PREVIOUS_ITEM 2
#define NAME_FOUND_INVISIBLE 3

/* Everything in the filesystem is stored as a set of items.  The
   item head contains the key of the item, its free space (for
   indirect items) and specifies the location of the item itself
   within the block.  */

struct item_head {
    /* Everything in the tree is found by searching for it based on
     * its key.*/
    struct reiserfs_key ih_key;
    union {
        /* The free space in the last unformatted node of an
           indirect item if this is an indirect item.  This
           equals 0xFFFF iff this is a direct item or stat data
           item. Note that the key, not this field, is used to
           determine the item type, and thus which field this
           union contains. */
        __le16 ih_free_space_reserved;
        /* Iff this is a directory item, this field equals the
           number of directory entries in the directory item.
 */
        __le16 ih_entry_count;
    } __attribute__ ((__packed__)) u;
    __le16 ih_item_len;      /* total size of the item body */
    __le16 ih_item_location; /* an offset to the item body
                              * within the block */
    __le16 ih_version;       /* 0 for all old items, 2 for new
                                ones. Highest bit is set
                                temporarily by fsck and cleared
                                when it is done */
} __attribute__ ((__packed__));
/* size of item header */
#define IH_SIZE (sizeof(struct item_head))

#define ih_free_space(ih) le16_to_cpu((ih)->u.ih_free_space_reserved)
#define ih_version(ih) le16_to_cpu((ih)->ih_version)
#define ih_entry_count(ih) le16_to_cpu((ih)->u.ih_entry_count)
#define ih_location(ih) le16_to_cpu((ih)->ih_item_location)
#define ih_item_len(ih) le16_to_cpu((ih)->ih_item_len)

#define put_ih_free_space(ih, val) do { (ih)->u.ih_free_space_reserved = cpu_to_le16(val); } while(0)
#define put_ih_version(ih, val) do { (ih)->ih_version = cpu_to_le16(val); } while (0)
#define put_ih_entry_count(ih, val) do { (ih)->u.ih_entry_count = cpu_to_le16(val); } while (0)
#define put_ih_location(ih, val) do { (ih)->ih_item_location = cpu_to_le16(val); } while (0)
#define put_ih_item_len(ih, val) do { (ih)->ih_item_len = cpu_to_le16(val); } while (0)

#define unreachable_item(ih) (ih_version(ih) & (1 << 15))

#define get_ih_free_space(ih) (ih_version (ih) == KEY_FORMAT_3_6 ? 0 : ih_free_space (ih))
#define set_ih_free_space(ih,val) put_ih_free_space((ih), ((ih_version(ih) == KEY_FORMAT_3_6) ? 0 : (val)))

/* these operate on indirect items, where you've got an array of ints
** at a possibly unaligned location.  These are a noop on ia32
**
** p is the array of __u32, i is the index into the array, v is the value
** to store there.
*/
#define get_block_num(p, i) get_unaligned_le32((p) + (i))
#define put_block_num(p, i, v) put_unaligned_le32((v), (p) + (i))

//
// in old version uniqueness field shows key type
//
#define V1_SD_UNIQUENESS 0
#define V1_INDIRECT_UNIQUENESS 0xfffffffe
#define V1_DIRECT_UNIQUENESS 0xffffffff
#define V1_DIRENTRY_UNIQUENESS 500
#define V1_ANY_UNIQUENESS 555  // FIXME: comment is required

//
// here are conversion routines
//
static inline int uniqueness2type(__u32 uniqueness) CONSTF;
static inline int uniqueness2type(__u32 uniqueness)
{
    switch ((int)uniqueness) {
    case V1_SD_UNIQUENESS:
        return TYPE_STAT_DATA;
    case V1_INDIRECT_UNIQUENESS:
        return TYPE_INDIRECT;
    case V1_DIRECT_UNIQUENESS:
        return TYPE_DIRECT;
    case V1_DIRENTRY_UNIQUENESS:
        return TYPE_DIRENTRY;
    case V1_ANY_UNIQUENESS:
    default:
        return TYPE_ANY;
    }
}

static inline __u32 type2uniqueness(int type) CONSTF;
static inline __u32 type2uniqueness(int type)
{
    switch (type) {
    case TYPE_STAT_DATA:
        return V1_SD_UNIQUENESS;
    case TYPE_INDIRECT:
        return V1_INDIRECT_UNIQUENESS;
    case TYPE_DIRECT:
        return V1_DIRECT_UNIQUENESS;
    case TYPE_DIRENTRY:
        return V1_DIRENTRY_UNIQUENESS;
    case TYPE_ANY:
    default:
        return V1_ANY_UNIQUENESS;
    }
}

//
// key is pointer to on disk key which is stored in le, result is cpu,
// there is no way to get version of object from key, so, provide
// version to these defines
//
static inline loff_t le_key_k_offset(int version,
                                     const struct reiserfs_key *key)
{
    return (version == KEY_FORMAT_3_5) ?
        le32_to_cpu(key->u.k_offset_v1.k_offset) :
        offset_v2_k_offset(&(key->u.k_offset_v2));
}

static inline loff_t le_ih_k_offset(const struct item_head *ih)
{
    return le_key_k_offset(ih_version(ih), &(ih->ih_key));
}

static inline loff_t le_key_k_type(int version, const struct reiserfs_key *key)
{
    return (version == KEY_FORMAT_3_5) ?
        uniqueness2type(le32_to_cpu(key->u.k_offset_v1.k_uniqueness)) :
        offset_v2_k_type(&(key->u.k_offset_v2));
}

static inline loff_t le_ih_k_type(const struct item_head *ih)
{
    return le_key_k_type(ih_version(ih), &(ih->ih_key));
}

static inline void set_le_key_k_offset(int version, struct reiserfs_key *key,
                                       loff_t offset)
{
    (version == KEY_FORMAT_3_5) ? (void)(key->u.k_offset_v1.k_offset = cpu_to_le32(offset)) : /* jdm check */
        (void)(set_offset_v2_k_offset(&(key->u.k_offset_v2), offset));
}

static inline void set_le_ih_k_offset(struct item_head *ih, loff_t offset)
{
    set_le_key_k_offset(ih_version(ih), &(ih->ih_key), offset);
}

static inline void set_le_key_k_type(int version, struct reiserfs_key *key,
                                     int type)
{
    (version == KEY_FORMAT_3_5) ?
        (void)(key->u.k_offset_v1.k_uniqueness =
               cpu_to_le32(type2uniqueness(type)))
        : (void)(set_offset_v2_k_type(&(key->u.k_offset_v2), type));
}

static inline void set_le_ih_k_type(struct item_head *ih, int type)
{
    set_le_key_k_type(ih_version(ih), &(ih->ih_key), type);
}

static inline int is_direntry_le_key(int version, struct reiserfs_key *key)
{
    return le_key_k_type(version, key) == TYPE_DIRENTRY;
}

static inline int is_direct_le_key(int version, struct reiserfs_key *key)
{
    return le_key_k_type(version, key) == TYPE_DIRECT;
}

static inline int is_indirect_le_key(int version, struct reiserfs_key *key)
{
    return le_key_k_type(version, key) == TYPE_INDIRECT;
}

static inline int is_statdata_le_key(int version, struct reiserfs_key *key)
{
    return le_key_k_type(version, key) == TYPE_STAT_DATA;
}

//
// item header has version.
//
static inline int is_direntry_le_ih(struct item_head *ih)
{
    return is_direntry_le_key(ih_version(ih), &ih->ih_key);
}

static inline int is_direct_le_ih(struct item_head *ih)
{
    return is_direct_le_key(ih_version(ih), &ih->ih_key);
}

static inline int is_indirect_le_ih(struct item_head *ih)
{
    return is_indirect_le_key(ih_version(ih), &ih->ih_key);
}

static inline int is_statdata_le_ih(struct item_head *ih)
{
    return is_statdata_le_key(ih_version(ih), &ih->ih_key);
}

//
// key is pointer to cpu key, result is cpu
//
static inline loff_t cpu_key_k_offset(const struct cpu_key *key)
{
    return key->on_disk_key.k_offset;
}

static inline loff_t cpu_key_k_type(const struct cpu_key *key)
{
    return key->on_disk_key.k_type;
}

static inline void set_cpu_key_k_offset(struct cpu_key *key, loff_t offset)
{
    key->on_disk_key.k_offset = offset;
}

static inline void set_cpu_key_k_type(struct cpu_key *key, int type)
{
    key->on_disk_key.k_type = type;
}

static inline void cpu_key_k_offset_dec(struct cpu_key *key)
{
    key->on_disk_key.k_offset--;
}

#define is_direntry_cpu_key(key) (cpu_key_k_type (key) == TYPE_DIRENTRY)
#define is_direct_cpu_key(key) (cpu_key_k_type (key) == TYPE_DIRECT)
#define is_indirect_cpu_key(key) (cpu_key_k_type (key) == TYPE_INDIRECT)
#define is_statdata_cpu_key(key) (cpu_key_k_type (key) == TYPE_STAT_DATA)

/* are these used ?
 */
#define is_direntry_cpu_ih(ih) (is_direntry_cpu_key (&((ih)->ih_key)))
#define is_direct_cpu_ih(ih) (is_direct_cpu_key (&((ih)->ih_key)))
#define is_indirect_cpu_ih(ih) (is_indirect_cpu_key (&((ih)->ih_key)))
#define is_statdata_cpu_ih(ih) (is_statdata_cpu_key (&((ih)->ih_key)))

#define I_K_KEY_IN_ITEM(ih, key, n_blocksize) \
    (!COMP_SHORT_KEYS(ih, key) && \
     I_OFF_BYTE_IN_ITEM(ih, k_offset(key), n_blocksize))

/* maximal length of item */
#define MAX_ITEM_LEN(block_size) (block_size - BLKH_SIZE - IH_SIZE)
#define MIN_ITEM_LEN 1

/* object identifier for root dir */
#define REISERFS_ROOT_OBJECTID 2
#define REISERFS_ROOT_PARENT_OBJECTID 1

extern struct reiserfs_key root_key;

/*
 * Picture represents a leaf of the S+tree
 *  ______________________________________________________
 * |      |  Array of     |                   |           |
 * |Block |  Object-Item  |      F r e e      |  Objects- |
 * | head |  Headers      |     S p a c e     |   Items   |
 * |______|_______________|___________________|___________|
 */

/* Header of a disk block.  More precisely, header of a formatted leaf
   or internal node, and not the header of an unformatted node. */
struct block_head {
    __le16 blk_level;      /* Level of a block in the tree. */
    __le16 blk_nr_item;    /* Number of keys/items in a block. */
    __le16 blk_free_space; /* Block free space in bytes.
 */
    __le16 blk_reserved;
    /* dump this in v4/planA */
    struct reiserfs_key blk_right_delim_key; /* kept only for compatibility */
};

#define BLKH_SIZE (sizeof(struct block_head))
#define blkh_level(p_blkh) (le16_to_cpu((p_blkh)->blk_level))
#define blkh_nr_item(p_blkh) (le16_to_cpu((p_blkh)->blk_nr_item))
#define blkh_free_space(p_blkh) (le16_to_cpu((p_blkh)->blk_free_space))
#define blkh_reserved(p_blkh) (le16_to_cpu((p_blkh)->blk_reserved))
#define set_blkh_level(p_blkh,val) ((p_blkh)->blk_level = cpu_to_le16(val))
#define set_blkh_nr_item(p_blkh,val) ((p_blkh)->blk_nr_item = cpu_to_le16(val))
#define set_blkh_free_space(p_blkh,val) ((p_blkh)->blk_free_space = cpu_to_le16(val))
#define set_blkh_reserved(p_blkh,val) ((p_blkh)->blk_reserved = cpu_to_le16(val))
#define blkh_right_delim_key(p_blkh) ((p_blkh)->blk_right_delim_key)
#define set_blkh_right_delim_key(p_blkh,val) ((p_blkh)->blk_right_delim_key = val)

/*
 * values for blk_level field of the struct block_head
 */

#define FREE_LEVEL 0 /* when node gets removed from the tree its
                        blk_level is set to FREE_LEVEL. It is then
                        used to see whether the node is still in the
                        tree */

#define DISK_LEAF_NODE_LEVEL 1 /* Leaf node level. */

/* Given the buffer head of a formatted node, resolve to the block head of that node. */
#define B_BLK_HEAD(bh) ((struct block_head *)((bh)->b_data))
/* Number of items that are in buffer.
 */
#define B_NR_ITEMS(bh) (blkh_nr_item(B_BLK_HEAD(bh)))
#define B_LEVEL(bh) (blkh_level(B_BLK_HEAD(bh)))
#define B_FREE_SPACE(bh) (blkh_free_space(B_BLK_HEAD(bh)))

#define PUT_B_NR_ITEMS(bh, val) do { set_blkh_nr_item(B_BLK_HEAD(bh), val); } while (0)
#define PUT_B_LEVEL(bh, val) do { set_blkh_level(B_BLK_HEAD(bh), val); } while (0)
#define PUT_B_FREE_SPACE(bh, val) do { set_blkh_free_space(B_BLK_HEAD(bh), val); } while (0)

/* Get right delimiting key. -- little endian */
#define B_PRIGHT_DELIM_KEY(bh) (&(blk_right_delim_key(B_BLK_HEAD(bh))))

/* Does the buffer contain a disk leaf. */
#define B_IS_ITEMS_LEVEL(bh) (B_LEVEL(bh) == DISK_LEAF_NODE_LEVEL)

/* Does the buffer contain a disk internal node */
#define B_IS_KEYS_LEVEL(bh) (B_LEVEL(bh) > DISK_LEAF_NODE_LEVEL \
                            && B_LEVEL(bh) <= MAX_HEIGHT)

/***************************************************************************/
/*                             STAT DATA                                   */
/***************************************************************************/

//
// old stat data is 32 bytes long. We are going to distinguish new one by
// different size
//
struct stat_data_v1 {
    __le16 sd_mode;  /* file type, permissions */
    __le16 sd_nlink; /* number of hard links */
    __le16 sd_uid;   /* owner */
    __le16 sd_gid;   /* group */
    __le32 sd_size;  /* file size */
    __le32 sd_atime; /* time of last access */
    __le32 sd_mtime; /* time file was last modified */
    __le32 sd_ctime; /* time inode (stat data) was last changed (except changes to sd_atime and sd_mtime) */
    union {
        __le32 sd_rdev;
        __le32 sd_blocks; /* number of blocks file uses */
    } __attribute__ ((__packed__)) u;
    __le32 sd_first_direct_byte; /* first byte of file which is stored
                                    in a direct item: except that if it
                                    equals 1 it is a symlink and if it
                                    equals ~(__u32)0 there is no
                                    direct item.  The existence of this
                                    field really grates on me. Let's
                                    replace it with a macro based on
                                    sd_size and our tail suppression
                                    policy.  Someday.
   -Hans */
} __attribute__ ((__packed__));

#define SD_V1_SIZE (sizeof(struct stat_data_v1))
#define stat_data_v1(ih) (ih_version (ih) == KEY_FORMAT_3_5)
#define sd_v1_mode(sdp) (le16_to_cpu((sdp)->sd_mode))
#define set_sd_v1_mode(sdp,v) ((sdp)->sd_mode = cpu_to_le16(v))
#define sd_v1_nlink(sdp) (le16_to_cpu((sdp)->sd_nlink))
#define set_sd_v1_nlink(sdp,v) ((sdp)->sd_nlink = cpu_to_le16(v))
#define sd_v1_uid(sdp) (le16_to_cpu((sdp)->sd_uid))
#define set_sd_v1_uid(sdp,v) ((sdp)->sd_uid = cpu_to_le16(v))
#define sd_v1_gid(sdp) (le16_to_cpu((sdp)->sd_gid))
#define set_sd_v1_gid(sdp,v) ((sdp)->sd_gid = cpu_to_le16(v))
#define sd_v1_size(sdp) (le32_to_cpu((sdp)->sd_size))
#define set_sd_v1_size(sdp,v) ((sdp)->sd_size = cpu_to_le32(v))
#define sd_v1_atime(sdp) (le32_to_cpu((sdp)->sd_atime))
#define set_sd_v1_atime(sdp,v) ((sdp)->sd_atime = cpu_to_le32(v))
#define sd_v1_mtime(sdp) (le32_to_cpu((sdp)->sd_mtime))
#define set_sd_v1_mtime(sdp,v) ((sdp)->sd_mtime = cpu_to_le32(v))
#define sd_v1_ctime(sdp) (le32_to_cpu((sdp)->sd_ctime))
#define set_sd_v1_ctime(sdp,v) ((sdp)->sd_ctime = cpu_to_le32(v))
#define sd_v1_rdev(sdp) (le32_to_cpu((sdp)->u.sd_rdev))
#define set_sd_v1_rdev(sdp,v) ((sdp)->u.sd_rdev = cpu_to_le32(v))
#define sd_v1_blocks(sdp) (le32_to_cpu((sdp)->u.sd_blocks))
#define set_sd_v1_blocks(sdp,v) ((sdp)->u.sd_blocks = cpu_to_le32(v))
#define sd_v1_first_direct_byte(sdp) \
    (le32_to_cpu((sdp)->sd_first_direct_byte))
#define set_sd_v1_first_direct_byte(sdp,v) \
    ((sdp)->sd_first_direct_byte = cpu_to_le32(v))

/* inode flags stored in sd_attrs (nee sd_reserved) */

/* we want common flags to have the same values as in ext2,
   so chattr(1) will work without problems */
#define REISERFS_IMMUTABLE_FL FS_IMMUTABLE_FL
#define REISERFS_APPEND_FL FS_APPEND_FL
#define REISERFS_SYNC_FL FS_SYNC_FL
#define REISERFS_NOATIME_FL FS_NOATIME_FL
#define REISERFS_NODUMP_FL FS_NODUMP_FL
#define REISERFS_SECRM_FL FS_SECRM_FL
#define REISERFS_UNRM_FL FS_UNRM_FL
#define REISERFS_COMPR_FL FS_COMPR_FL
#define REISERFS_NOTAIL_FL FS_NOTAIL_FL

/* persistent flags that file inherits from the parent directory */
#define REISERFS_INHERIT_MASK ( REISERFS_IMMUTABLE_FL | \
                                REISERFS_SYNC_FL | \
                                REISERFS_NOATIME_FL | \
                                REISERFS_NODUMP_FL | \
                                REISERFS_SECRM_FL | \
                                REISERFS_COMPR_FL | \
                                REISERFS_NOTAIL_FL )

/* Stat Data on disk (reiserfs version of UFS disk inode minus the
   address blocks) */
struct stat_data {
    __le16 sd_mode;  /* file type, permissions */
    __le16 sd_attrs; /* persistent inode flags */
    __le32 sd_nlink; /* number of hard links */
    __le64 sd_size;  /* file size */
    __le32 sd_uid;   /* owner */
    __le32 sd_gid;   /* group */
    __le32 sd_atime; /* time of last access */
    __le32 sd_mtime; /* time file was last modified */
    __le32 sd_ctime; /* time inode (stat data) was last changed (except changes to sd_atime and sd_mtime) */
    __le32 sd_blocks;
    union {
        __le32 sd_rdev;
        __le32 sd_generation;
        //__le32 sd_first_direct_byte;
        /* first byte of file which is stored in a
           direct item: except that if it equals 1
           it is a symlink and if it equals
           ~(__u32)0 there is no direct item.  The
           existence of this field really grates
           on me. Let's replace it with a macro
           based on sd_size and our tail
           suppression policy?
 */
    } __attribute__ ((__packed__)) u;
} __attribute__ ((__packed__));
//
// this is 44 bytes long
//
#define SD_SIZE (sizeof(struct stat_data))
#define SD_V2_SIZE SD_SIZE
#define stat_data_v2(ih) (ih_version (ih) == KEY_FORMAT_3_6)
#define sd_v2_mode(sdp) (le16_to_cpu((sdp)->sd_mode))
#define set_sd_v2_mode(sdp,v) ((sdp)->sd_mode = cpu_to_le16(v))
/* sd_reserved */
/* set_sd_reserved */
#define sd_v2_nlink(sdp) (le32_to_cpu((sdp)->sd_nlink))
#define set_sd_v2_nlink(sdp,v) ((sdp)->sd_nlink = cpu_to_le32(v))
#define sd_v2_size(sdp) (le64_to_cpu((sdp)->sd_size))
#define set_sd_v2_size(sdp,v) ((sdp)->sd_size = cpu_to_le64(v))
#define sd_v2_uid(sdp) (le32_to_cpu((sdp)->sd_uid))
#define set_sd_v2_uid(sdp,v) ((sdp)->sd_uid = cpu_to_le32(v))
#define sd_v2_gid(sdp) (le32_to_cpu((sdp)->sd_gid))
#define set_sd_v2_gid(sdp,v) ((sdp)->sd_gid = cpu_to_le32(v))
#define sd_v2_atime(sdp) (le32_to_cpu((sdp)->sd_atime))
#define set_sd_v2_atime(sdp,v) ((sdp)->sd_atime = cpu_to_le32(v))
#define sd_v2_mtime(sdp) (le32_to_cpu((sdp)->sd_mtime))
#define set_sd_v2_mtime(sdp,v) ((sdp)->sd_mtime = cpu_to_le32(v))
#define sd_v2_ctime(sdp) (le32_to_cpu((sdp)->sd_ctime))
#define set_sd_v2_ctime(sdp,v) ((sdp)->sd_ctime = cpu_to_le32(v))
#define sd_v2_blocks(sdp) (le32_to_cpu((sdp)->sd_blocks))
#define set_sd_v2_blocks(sdp,v) ((sdp)->sd_blocks = cpu_to_le32(v))
#define sd_v2_rdev(sdp) (le32_to_cpu((sdp)->u.sd_rdev))
#define set_sd_v2_rdev(sdp,v) ((sdp)->u.sd_rdev = cpu_to_le32(v))
#define sd_v2_generation(sdp) (le32_to_cpu((sdp)->u.sd_generation))
#define set_sd_v2_generation(sdp,v) ((sdp)->u.sd_generation = cpu_to_le32(v))
#define sd_v2_attrs(sdp) (le16_to_cpu((sdp)->sd_attrs))
#define set_sd_v2_attrs(sdp,v) ((sdp)->sd_attrs = cpu_to_le16(v))
/***************************************************************************/
/*                      DIRECTORY STRUCTURE                                */
/***************************************************************************/
/*
   Picture represents the structure of directory items
   ________________________________________________
   |  Array of     |   |     |        |       |   |
   | directory     |N-1| N-2 | ....   | 1st   |0th|
   | entry headers |   |     |        |       |   |
   |_______________|___|_____|________|_______|___|
                    <----   directory entries ------>

   First directory item has k_offset component 1. We store "." and ".."
   in one item, always, we never split "." and ".." into differing
   items.  This makes, among other things, the code for removing
   directories simpler. */
#define SD_OFFSET 0
#define SD_UNIQUENESS 0
#define DOT_OFFSET 1
#define DOT_DOT_OFFSET 2
#define DIRENTRY_UNIQUENESS 500

/* */
#define FIRST_ITEM_OFFSET 1

/*
   Q: How to get key of object pointed to by entry from entry?

   A: Each directory entry has its header.
This header has deh_dir_id and deh_objectid fields, which together
   form the key of the object the entry points to */

/* NOT IMPLEMENTED:
   Directory will someday contain stat data of object */

struct reiserfs_de_head {
    __le32 deh_offset;   /* third component of the directory entry key */
    __le32 deh_dir_id;   /* objectid of the parent directory of the object, that is referenced
                            by directory entry */
    __le32 deh_objectid; /* objectid of the object, that is referenced by directory entry */
    __le16 deh_location; /* offset of name in the whole item */
    __le16 deh_state;    /* whether 1) entry contains stat data (for future), and 2) whether
                            entry is hidden (unlinked) */
} __attribute__ ((__packed__));
#define DEH_SIZE sizeof(struct reiserfs_de_head)
#define deh_offset(p_deh) (le32_to_cpu((p_deh)->deh_offset))
#define deh_dir_id(p_deh) (le32_to_cpu((p_deh)->deh_dir_id))
#define deh_objectid(p_deh) (le32_to_cpu((p_deh)->deh_objectid))
#define deh_location(p_deh) (le16_to_cpu((p_deh)->deh_location))
#define deh_state(p_deh) (le16_to_cpu((p_deh)->deh_state))

#define put_deh_offset(p_deh,v) ((p_deh)->deh_offset = cpu_to_le32((v)))
#define put_deh_dir_id(p_deh,v) ((p_deh)->deh_dir_id = cpu_to_le32((v)))
#define put_deh_objectid(p_deh,v) ((p_deh)->deh_objectid = cpu_to_le32((v)))
#define put_deh_location(p_deh,v) ((p_deh)->deh_location = cpu_to_le16((v)))
#define put_deh_state(p_deh,v) ((p_deh)->deh_state = cpu_to_le16((v)))

/* empty directory contains two entries "." and ".."
   and their headers */
#define EMPTY_DIR_SIZE \
(DEH_SIZE * 2 + ROUND_UP (strlen (".")) + ROUND_UP (strlen ("..")))

/* old format directories have this size when empty */
#define EMPTY_DIR_SIZE_V1 (DEH_SIZE * 2 + 3)

#define DEH_Statdata 0 /* not used now */
#define DEH_Visible 2

/* 64 bit systems (and the S/390) need to be aligned explicitly -jdm */
#if BITS_PER_LONG == 64 || defined(__s390__) || defined(__hppa__)
#   define ADDR_UNALIGNED_BITS (3)
#endif

/* These are only used to manipulate deh_state.
 * Because of this, we'll use the ext2_ bit routines,
 * since they are little endian */
#ifdef ADDR_UNALIGNED_BITS

#   define aligned_address(addr) ((void *)((long)(addr) & ~((1UL << ADDR_UNALIGNED_BITS) - 1)))
#   define unaligned_offset(addr) (((int)((long)(addr) & ((1 << ADDR_UNALIGNED_BITS) - 1))) << 3)

#   define set_bit_unaligned(nr, addr) \
        __test_and_set_bit_le((nr) + unaligned_offset(addr), aligned_address(addr))
#   define clear_bit_unaligned(nr, addr) \
        __test_and_clear_bit_le((nr) + unaligned_offset(addr), aligned_address(addr))
#   define test_bit_unaligned(nr, addr) \
        test_bit_le((nr) + unaligned_offset(addr), aligned_address(addr))

#else

#   define set_bit_unaligned(nr, addr) __test_and_set_bit_le(nr, addr)
#   define clear_bit_unaligned(nr, addr) __test_and_clear_bit_le(nr, addr)
#   define test_bit_unaligned(nr, addr) test_bit_le(nr, addr)

#endif

#define mark_de_with_sd(deh) set_bit_unaligned (DEH_Statdata, &((deh)->deh_state))
#define mark_de_without_sd(deh) clear_bit_unaligned (DEH_Statdata, &((deh)->deh_state))
#define mark_de_visible(deh) set_bit_unaligned (DEH_Visible, &((deh)->deh_state))
#define mark_de_hidden(deh) clear_bit_unaligned (DEH_Visible, &((deh)->deh_state))
#define de_with_sd(deh) test_bit_unaligned (DEH_Statdata, &((deh)->deh_state))
#define de_visible(deh) test_bit_unaligned (DEH_Visible, &((deh)->deh_state))
#define de_hidden(deh) !test_bit_unaligned (DEH_Visible, &((deh)->deh_state))

extern void make_empty_dir_item_v1(char *body, __le32 dirid, __le32 objid,
                                   __le32 par_dirid, __le32 par_objid);
extern void make_empty_dir_item(char *body, __le32 dirid, __le32 objid,
                                __le32 par_dirid, __le32 par_objid);

/* array of the entry headers */
/* get item body */
#define B_I_PITEM(bh,ih) ( (bh)->b_data + ih_location(ih) )
#define B_I_DEH(bh,ih) ((struct reiserfs_de_head *)(B_I_PITEM(bh,ih)))

/* length of the directory entry in directory item. This define
   calculates length of i-th directory entry using directory entry
   locations from dir entry head. When it calculates length of 0-th
   directory entry, it uses length of whole item in place of entry
   location of the non-existent following entry in the calculation.
   See picture above.*/
/*
#define I_DEH_N_ENTRY_LENGTH(ih,deh,i) \
((i) ? (deh_location((deh)-1) - deh_location((deh))) : (ih_item_len((ih)) - deh_location((deh))))
*/
static inline int entry_length(const struct buffer_head *bh,
                               const struct item_head *ih, int pos_in_item)
{
    struct reiserfs_de_head *deh;

    deh = B_I_DEH(bh, ih) + pos_in_item;
    if (pos_in_item)
        return deh_location(deh - 1) - deh_location(deh);

    return ih_item_len(ih) - deh_location(deh);
}

/* number of entries in the directory item, depends on ENTRY_COUNT being at the start of directory dynamic data.
 */
#define I_ENTRY_COUNT(ih) (ih_entry_count((ih)))

/* name by bh, ih and entry_num */
#define B_I_E_NAME(bh,ih,entry_num) ((char *)(bh->b_data + ih_location(ih) + deh_location(B_I_DEH(bh,ih)+(entry_num))))

// two entries per block (at least)
#define REISERFS_MAX_NAME(block_size) 255

/* this structure is used for operations on directory entries. It is
   not a disk structure. */
/* When reiserfs_find_entry or search_by_entry_key find directory
   entry, they return filled reiserfs_dir_entry structure */
struct reiserfs_dir_entry {
    struct buffer_head *de_bh;
    int de_item_num;
    struct item_head *de_ih;
    int de_entry_num;
    struct reiserfs_de_head *de_deh;
    int de_entrylen;
    int de_namelen;
    char *de_name;
    unsigned long *de_gen_number_bit_string;

    __u32 de_dir_id;
    __u32 de_objectid;

    struct cpu_key de_entry_key;
};

/* these defines are useful when a particular member of a reiserfs_dir_entry is needed */

/* pointer to file name, stored in entry */
#define B_I_DEH_ENTRY_FILE_NAME(bh,ih,deh) (B_I_PITEM (bh, ih) + deh_location(deh))

/* length of name */
#define I_DEH_N_ENTRY_FILE_NAME_LENGTH(ih,deh,entry_num) \
(I_DEH_N_ENTRY_LENGTH (ih, deh, entry_num) - (de_with_sd (deh) ? SD_SIZE : 0))

/* hash value occupies bits from 7 up to 30 */
#define GET_HASH_VALUE(offset) ((offset) & 0x7fffff80LL)
/* generation number occupies 7 bits starting from 0 up to 6 */
#define GET_GENERATION_NUMBER(offset) ((offset) & 0x7fLL)
#define MAX_GENERATION_NUMBER 127

#define SET_GENERATION_NUMBER(offset,gen_number) (GET_HASH_VALUE(offset)|(gen_number))

/*
 * Picture represents an internal node of the reiserfs tree
 *  ______________________________________________________
 * |      |  Array of     |  Array of         |  Free     |
 * |block |    keys       |   pointers        |  space    |
 * | head |      N        |      N+1          |           |
 * |______|_______________|___________________|___________|
 */

/***************************************************************************/
/*                      DISK CHILD                                         */
/***************************************************************************/
/* Disk child pointer: The pointer from an internal node of the tree
   to a node that is on disk. */
struct disk_child {
    __le32 dc_block_number; /* Disk child's block number. */
    __le16 dc_size;         /* Disk child's used space.   */
    __le16 dc_reserved;
};

#define DC_SIZE (sizeof(struct disk_child))
#define dc_block_number(dc_p) (le32_to_cpu((dc_p)->dc_block_number))
#define dc_size(dc_p) (le16_to_cpu((dc_p)->dc_size))
#define put_dc_block_number(dc_p, val) do { (dc_p)->dc_block_number = cpu_to_le32(val); } while(0)
#define put_dc_size(dc_p, val) do { (dc_p)->dc_size = cpu_to_le16(val); } while(0)

/* Get disk child by buffer header and position in the tree node. */
#define B_N_CHILD(bh, n_pos) ((struct disk_child *)\
((bh)->b_data + BLKH_SIZE + B_NR_ITEMS(bh) * KEY_SIZE + DC_SIZE * (n_pos)))

/* Get disk child number by buffer header and position in the tree node.
/* Get disk child number by buffer header and position in the tree node. */
#define B_N_CHILD_NUM(bh, n_pos) (dc_block_number(B_N_CHILD(bh, n_pos)))
#define PUT_B_N_CHILD_NUM(bh, n_pos, val) \
				(put_dc_block_number(B_N_CHILD(bh, n_pos), val))

/* maximal value of field child_size in structure disk_child */
/* child size is the combined size of all items and their headers */
#define MAX_CHILD_SIZE(bh) ((int)( (bh)->b_size - BLKH_SIZE ))

/* amount of used space in buffer (not including block head) */
#define B_CHILD_SIZE(cur) (MAX_CHILD_SIZE(cur)-(B_FREE_SPACE(cur)))

/* max and min number of keys in internal node */
#define MAX_NR_KEY(bh) ( (MAX_CHILD_SIZE(bh)-DC_SIZE)/(KEY_SIZE+DC_SIZE) )
#define MIN_NR_KEY(bh)    (MAX_NR_KEY(bh)/2)

/***************************************************************************/
/*                      PATH STRUCTURES AND DEFINES                        */
/***************************************************************************/

/* Search_by_key fills up the path from the root to the leaf as it descends
   the tree looking for the key. It uses reiserfs_bread to try to find
   buffers in the cache given their block number. If it does not find them
   in the cache it reads them from disk. For each node search_by_key finds
   using reiserfs_bread it then uses bin_search to look through that node.
   bin_search will find the position of the block_number of the next node
   if it is looking through an internal node. If it is looking through a
   leaf node bin_search will find the position of the item which has key
   either equal to the given key, or which is the maximal key less than
   the given key. */

struct path_element {
	struct buffer_head *pe_buffer;	/* Pointer to the buffer at the path in the tree. */
	int pe_position;	/* Position in the tree node which is placed
				   in the buffer above. */
};

#define MAX_HEIGHT 5		/* maximal height of a tree. don't change this without
				   changing JOURNAL_PER_BALANCE_CNT */
#define EXTENDED_MAX_HEIGHT         7	/* Must be equal to MAX_HEIGHT + FIRST_PATH_ELEMENT_OFFSET */
#define FIRST_PATH_ELEMENT_OFFSET   2	/* Must be equal to at least 2. */

#define ILLEGAL_PATH_ELEMENT_OFFSET 1	/* Must be equal to FIRST_PATH_ELEMENT_OFFSET - 1 */
#define MAX_FEB_SIZE 6		/* this MUST be MAX_HEIGHT + 1. See about FEB below */

/* We need to keep track of who the ancestors of nodes are. When we
   perform a search we record which nodes were visited while
   descending the tree looking for the node we searched for. This list
   of nodes is called the path. This information is used while
   performing balancing. Note that this path information may become
   invalid, and this means we must check it when using it to see if it
   is still valid. You'll need to read search_by_key and the comments
   in it, especially about decrement_counters_in_path(), to understand
   this structure.

   Paths make the code so much harder to work with and debug.... An
   enormous number of bugs are due to them, and trying to write or modify
   code that uses them just makes my head hurt. They are based on an
   excessive effort to avoid disturbing the precious VFS code.:-( The
   gods only know how we are going to SMP the code that uses them.
   znodes are the way! */

#define PATH_READA	0x1	/* do read ahead */
#define PATH_READA_BACK 0x2	/* read backwards */

struct treepath {
	int path_length;	/* Length of the array below. */
	int reada;
	struct path_element path_elements[EXTENDED_MAX_HEIGHT];	/* Array of the path elements. */
	int pos_in_item;
};

#define pos_in_item(path) ((path)->pos_in_item)

#define INITIALIZE_PATH(var) \
struct treepath var = {.path_length = ILLEGAL_PATH_ELEMENT_OFFSET, .reada = 0,}

/* Get path element by path and path position. */
#define PATH_OFFSET_PELEMENT(path, n_offset)  ((path)->path_elements + (n_offset))

/* Get buffer header at the path by path and path position. */
#define PATH_OFFSET_PBUFFER(path, n_offset)   (PATH_OFFSET_PELEMENT(path, n_offset)->pe_buffer)

/* Get position in the element at the path by path and path position. */
#define PATH_OFFSET_POSITION(path, n_offset) (PATH_OFFSET_PELEMENT(path, n_offset)->pe_position)

#define PATH_PLAST_BUFFER(path) (PATH_OFFSET_PBUFFER((path), (path)->path_length))
				/* you know, to the person who didn't
				   write this the macro name does not
				   at first suggest what it does.
				   Maybe POSITION_FROM_PATH_END? Or
				   maybe we should just focus on
				   dumping paths... -Hans */
#define PATH_LAST_POSITION(path) (PATH_OFFSET_POSITION((path), (path)->path_length))

#define PATH_PITEM_HEAD(path)    B_N_PITEM_HEAD(PATH_PLAST_BUFFER(path), PATH_LAST_POSITION(path))

/* in do_balance leaf has h == 0 in contrast with path structure,
   where root has level == 0. That is why we need these defines */
#define PATH_H_PBUFFER(path, h)	PATH_OFFSET_PBUFFER (path, path->path_length - (h))	/* tb->S[h] */
#define PATH_H_PPARENT(path, h)	PATH_H_PBUFFER (path, (h) + 1)	/* tb->F[h] or tb->S[0]->b_parent */
#define PATH_H_POSITION(path, h)	PATH_OFFSET_POSITION (path, path->path_length - (h))
#define PATH_H_B_ITEM_ORDER(path, h)	PATH_H_POSITION(path, h + 1)	/* tb->S[h]->b_item_order */

#define PATH_H_PATH_OFFSET(path, n_h) ((path)->path_length - (n_h))

#define get_last_bh(path) PATH_PLAST_BUFFER(path)
#define get_ih(path) PATH_PITEM_HEAD(path)
#define get_item_pos(path) PATH_LAST_POSITION(path)
#define get_item(path) ((void *)B_N_PITEM(PATH_PLAST_BUFFER(path), PATH_LAST_POSITION (path)))
#define item_moved(ih,path) comp_items(ih, path)
#define path_changed(ih,path) comp_items (ih, path)

/***************************************************************************/
/*                       MISC                                              */
/***************************************************************************/

/* Size of pointer to the unformatted node. */
#define UNFM_P_SIZE (sizeof(unp_t))
#define UNFM_P_SHIFT 2

// in in-core inode key is stored on le form
#define INODE_PKEY(inode) ((struct reiserfs_key *)(REISERFS_I(inode)->i_key))

#define MAX_UL_INT 0xffffffff
#define MAX_INT    0x7ffffff
#define MAX_US_INT 0xffff
// reiserfs version 2 has max offset 60 bits. Version 1 - 32 bit offset
#define U32_MAX (~(__u32)0)

static inline loff_t max_reiserfs_offset(struct inode *inode)
{
	if (get_inode_item_key_version(inode) == KEY_FORMAT_3_5)
		return (loff_t) U32_MAX;

	return (loff_t) ((~(__u64) 0) >> 4);
}

/*#define MAX_KEY_UNIQUENESS	MAX_UL_INT*/
#define MAX_KEY_OBJECTID	MAX_UL_INT

#define MAX_B_NUM  MAX_UL_INT
#define MAX_FC_NUM MAX_US_INT

/* the purpose is to detect overflow of an unsigned short */
#define REISERFS_LINK_MAX (MAX_US_INT - 1000)

/* The following defines are used in reiserfs_insert_item and reiserfs_append_item */
#define REISERFS_KERNEL_MEM	0	/* reiserfs kernel memory mode */
#define REISERFS_USER_MEM	1	/* reiserfs user memory mode */

#define fs_generation(s) (REISERFS_SB(s)->s_generation_counter)
#define get_generation(s) atomic_read (&fs_generation(s))
#define FILESYSTEM_CHANGED_TB(tb)  (get_generation((tb)->tb_sb) != (tb)->fs_gen)
#define __fs_changed(gen,s) (gen != get_generation (s))
#define fs_changed(gen,s)		\
({					\
	reiserfs_cond_resched(s);	\
	__fs_changed(gen, s);		\
})

/***************************************************************************/
/*                  FIXATE NODES                                           */
/***************************************************************************/

#define VI_TYPE_LEFT_MERGEABLE 1
#define VI_TYPE_RIGHT_MERGEABLE 2

/* To make any changes in the tree we always first find the node that
   contains the item to be changed/deleted, or the place to insert a
   new item. We call this node S. To do balancing we need to decide
   what we will shift to the left/right neighbor, or to a new node,
   where the new item will go, etc. To make this analysis simpler we
   build a virtual node. The virtual node is an array of items that
   will replace the items of node S. (For instance, if we are going to
   delete an item, the virtual node does not contain it.) The virtual
   node keeps information about item sizes and types, mergeability of
   first and last items, and sizes of all entries in a directory item.
   We use this array of items when calculating what we can shift to
   neighbors and how many nodes we have to have if we do no shifting,
   if we shift to the left/right neighbor, or to both. */
struct virtual_item {
	int vi_index;		// index in the array of item operations
	unsigned short vi_type;	// left/right mergeability
	unsigned short vi_item_len;	/* length of item that it will have after balancing */
	struct item_head *vi_ih;
	const char *vi_item;	// body of item (old or new)
	const void *vi_new_data;	// 0 always but paste mode
	void *vi_uarea;		// item specific area
};

struct virtual_node {
	char *vn_free_ptr;	/* this is a pointer to the free space in the buffer */
	unsigned short vn_nr_item;	/* number of items in virtual node */
	short vn_size;		/* size of node, that node would have if it has
				   unlimited size and no balancing is performed */
	short vn_mode;		/* mode of balancing (paste, insert, delete, cut) */
	short vn_affected_item_num;
	short vn_pos_in_item;
	struct item_head *vn_ins_ih;	/* item header of inserted item, 0 for other modes */
	const void *vn_data;
	struct virtual_item *vn_vi;	/* array of items (including a new one,
					   excluding item to be deleted) */
};

/* used by directory items when creating virtual nodes */
struct direntry_uarea {
	int flags;
	__u16 entry_count;
	__u16 entry_sizes[1];
} __attribute__ ((__packed__));

/***************************************************************************/
/*                  TREE BALANCE                                           */
/***************************************************************************/

/* This temporary structure is used in tree balance algorithms, and
   constructed as we go to the extent that its various parts are
   needed. It contains arrays of nodes that can potentially be
   involved in the balancing of node S, and parameters that define how
   each of the nodes must be balanced. Note that in these algorithms
   for balancing the worst case is to need to balance the current node
   S and the left and right neighbors and all of their parents plus
   create a new node. We implement S1 balancing for the leaf nodes
   and S0 balancing for the internal nodes (S1 and S0 are defined in
   our papers.) */

#define MAX_FREE_BLOCK 7	/* size of the array of buffers to free at end of do_balance */

/* maximum number of FEB blocknrs on a single level */
#define MAX_AMOUNT_NEEDED 2

/* someday somebody will prefix every field in this struct with tb_ */
struct tree_balance {
	int tb_mode;
	int need_balance_dirty;
	struct super_block *tb_sb;
	struct reiserfs_transaction_handle *transaction_handle;
	struct treepath *tb_path;
	struct buffer_head *L[MAX_HEIGHT];	/* array of left neighbors of nodes in the path */
	struct buffer_head *R[MAX_HEIGHT];	/* array of right neighbors of nodes in the path */
	struct buffer_head *FL[MAX_HEIGHT];	/* array of fathers of the left neighbors */
	struct buffer_head *FR[MAX_HEIGHT];	/* array of fathers of the right neighbors */
	struct buffer_head *CFL[MAX_HEIGHT];	/* array of common parents of center node and its left neighbor */
	struct buffer_head *CFR[MAX_HEIGHT];	/* array of common parents of center node and its right neighbor */

	struct buffer_head *FEB[MAX_FEB_SIZE];	/* array of empty buffers. Number of
						   buffers in array equals cur_blknum. */
	struct buffer_head *used[MAX_FEB_SIZE];
	struct buffer_head *thrown[MAX_FEB_SIZE];
	int lnum[MAX_HEIGHT];	/* array of number of items which must be
				   shifted to the left in order to balance the
				   current node; for leaves includes item that
				   will be partially shifted; for internal
				   nodes, it is the number of child pointers
				   rather than items. It includes the new item
				   being created. The code sometimes subtracts
				   one to get the number of wholly shifted
				   items for other purposes. */
	int rnum[MAX_HEIGHT];	/* substitute right for left in comment above */
	int lkey[MAX_HEIGHT];	/* array indexed by height h mapping the key delimiting L[h] and
				   S[h] to its item number within the node CFL[h] */
	int rkey[MAX_HEIGHT];	/* substitute r for l in comment above */
	int insert_size[MAX_HEIGHT];	/* the number of bytes by which we are trying to add or
					   remove from S[h]. A negative value means removing. */
	int blknum[MAX_HEIGHT];	/* number of nodes that will replace node S[h] after
				   balancing on the level h of the tree. If 0 then S is
				   being deleted, if 1 then S is remaining and no new
				   nodes are being created, if 2 or 3 then 1 or 2 new
				   nodes are being created */

	/* fields that are used only for balancing leaves of the tree */
	int cur_blknum;		/* number of empty blocks having been already allocated */
	int s0num;		/* number of items that fall into left most node when S[0] splits */
	int s1num;		/* number of items that fall into first new node when S[0] splits */
	int s2num;		/* number of items that fall into second new node when S[0] splits */
	int lbytes;		/* number of bytes which can flow to the left neighbor from the left */
	/* most liquid item that cannot be shifted from S[0] entirely */
	/* if -1 then nothing will be partially shifted */
	int rbytes;		/* number of bytes which will flow to the right neighbor from the right */
	/* most liquid item that cannot be shifted from S[0] entirely */
	/* if -1 then nothing will be partially shifted */
	int s1bytes;		/* number of bytes which flow to the first new node when S[0] splits */
	/* note: if S[0] splits into 3 nodes, then items do not need to be cut */
	int s2bytes;
	struct buffer_head *buf_to_free[MAX_FREE_BLOCK];	/* buffers which are to be freed
								   after do_balance finishes
								   by unfix_nodes */
	char *vn_buf;		/* kmalloced memory. Used to create
				   virtual node and keep map of
				   dirtied bitmap blocks */
	int vn_buf_size;	/* size of the vn_buf */
	struct virtual_node *tb_vn;	/* VN starts after bitmap of bitmap blocks */

	int fs_gen;		/* saved value of `reiserfs_generation' counter
				   see FILESYSTEM_CHANGED() macro in reiserfs_fs.h */
#ifdef DISPLACE_NEW_PACKING_LOCALITIES
	struct in_core_key key;	/* key pointer, to pass to block allocator or
				   another low-level subsystem */
#endif
};

/* These are modes of balancing */

/* When inserting an item. */
#define M_INSERT	'i'
/* When inserting into (directories only) or appending onto an already
   existent item. */
#define M_PASTE		'p'
/* When deleting an item. */
#define M_DELETE	'd'
/* When truncating an item or removing an entry from a (directory) item. */
#define M_CUT		'c'

/* used when balancing on leaf level skipped (in reiserfsck) */
#define M_INTERNAL	'n'

/* When further balancing is not needed, then do_balance does not need
   to be called. */
#define M_SKIP_BALANCING	's'
#define M_CONVERT	'v'

/* modes of leaf_move_items */
#define LEAF_FROM_S_TO_L 0
#define LEAF_FROM_S_TO_R 1
#define LEAF_FROM_R_TO_L 2
#define LEAF_FROM_L_TO_R 3
#define LEAF_FROM_S_TO_SNEW 4

#define FIRST_TO_LAST 0
#define LAST_TO_FIRST 1

/* used in do_balance for passing parent of node information that has
   been gotten from tb struct */
struct buffer_info {
	struct tree_balance *tb;
	struct buffer_head *bi_bh;
	struct buffer_head *bi_parent;
	int bi_position;
};

static inline struct super_block *sb_from_tb(struct tree_balance *tb)
{
	return tb ? tb->tb_sb : NULL;
}
static inline struct super_block *sb_from_bi(struct buffer_info *bi)
{
	return bi ? sb_from_tb(bi->tb) : NULL;
}

/* there are 4 types of items: stat data, directory item, indirect, direct.
+-------------------+------------+---------------------+------------------------------------------------------+
|                   |  k_offset  | k_uniqueness        | mergeable?                                           |
+-------------------+------------+---------------------+------------------------------------------------------+
|    stat data      |     0      |      0              | no                                                   |
+-------------------+------------+---------------------+------------------------------------------------------+
| 1st directory item| DOT_OFFSET | DIRENTRY_UNIQUENESS | no                                                   |
| non 1st directory | hash value |                     | yes                                                  |
|     item          |            |                     |                                                      |
+-------------------+------------+---------------------+------------------------------------------------------+
|    indirect item  | offset + 1 | TYPE_INDIRECT       | if this is not the first indirect item of the object |
+-------------------+------------+---------------------+------------------------------------------------------+
|    direct item    | offset + 1 | TYPE_DIRECT         | if this is not the first direct item of the object   |
+-------------------+------------+---------------------+------------------------------------------------------+
*/

struct item_operations {
	int (*bytes_number) (struct item_head * ih, int block_size);
	void (*decrement_key) (struct cpu_key *);
	int (*is_left_mergeable) (struct reiserfs_key * ih,
				  unsigned long bsize);
	void (*print_item) (struct item_head *, char *item);
	void (*check_item) (struct item_head *, char *item);

	int (*create_vi) (struct virtual_node * vn, struct virtual_item * vi,
			  int is_affected, int insert_size);
	int (*check_left) (struct virtual_item * vi, int free,
			   int start_skip, int end_skip);
	int (*check_right) (struct virtual_item * vi, int free);
	int (*part_size) (struct virtual_item * vi, int from, int to);
	int (*unit_num) (struct virtual_item * vi);
	void (*print_vi) (struct virtual_item * vi);
};

extern struct item_operations *item_ops[TYPE_ANY + 1];

#define op_bytes_number(ih,bsize)                    item_ops[le_ih_k_type (ih)]->bytes_number (ih, bsize)
#define op_is_left_mergeable(key,bsize)              item_ops[le_key_k_type (le_key_version (key), key)]->is_left_mergeable (key, bsize)
#define op_print_item(ih,item)                       item_ops[le_ih_k_type (ih)]->print_item (ih, item)
#define op_check_item(ih,item)                       item_ops[le_ih_k_type (ih)]->check_item (ih, item)
#define op_create_vi(vn,vi,is_affected,insert_size)  item_ops[le_ih_k_type ((vi)->vi_ih)]->create_vi (vn,vi,is_affected,insert_size)
#define op_check_left(vi,free,start_skip,end_skip)   item_ops[(vi)->vi_index]->check_left (vi, free, start_skip, end_skip)
#define op_check_right(vi,free)                      item_ops[(vi)->vi_index]->check_right (vi, free)
#define op_part_size(vi,from,to)                     item_ops[(vi)->vi_index]->part_size (vi, from, to)
#define op_unit_num(vi)                              item_ops[(vi)->vi_index]->unit_num (vi)
#define op_print_vi(vi)                              item_ops[(vi)->vi_index]->print_vi (vi)

#define COMP_SHORT_KEYS comp_short_keys

/* number of blocks pointed to by the indirect item */
#define I_UNFM_NUM(ih)	(ih_item_len(ih) / UNFM_P_SIZE)

/* the used space within the unformatted node corresponding to pos
   within the item pointed to by ih */
#define I_POS_UNFM_SIZE(ih,pos,size) (((pos) == I_UNFM_NUM(ih) - 1 ) ? (size) - ih_free_space(ih) : (size))

/* number of bytes contained by the direct item or the unformatted nodes the indirect item points to */

/* get the item header */
#define B_N_PITEM_HEAD(bh,item_num) ( (struct item_head * )((bh)->b_data + BLKH_SIZE) + (item_num) )

/* get key */
#define B_N_PDELIM_KEY(bh,item_num) ( (struct reiserfs_key * )((bh)->b_data + BLKH_SIZE) + (item_num) )

/* get the key */
#define B_N_PKEY(bh,item_num) ( &(B_N_PITEM_HEAD(bh,item_num)->ih_key) )

/* get item body */
#define B_N_PITEM(bh,item_num) ( (bh)->b_data + ih_location(B_N_PITEM_HEAD((bh),(item_num))))

/* get the stat data by the buffer header and the item order */
#define B_N_STAT_DATA(bh,nr) \
( (struct stat_data *)((bh)->b_data + ih_location(B_N_PITEM_HEAD((bh),(nr))) ) )

/* following defines use reiserfs buffer header and item header */

/* get stat-data */
#define B_I_STAT_DATA(bh, ih) ( (struct stat_data * )((bh)->b_data + ih_location(ih)) )

// this is 3976 for size==4096
#define MAX_DIRECT_ITEM_LEN(size) ((size) - BLKH_SIZE - 2*IH_SIZE - SD_SIZE - UNFM_P_SIZE)

/* indirect items consist of entries which contain blocknrs, pos
   indicates which entry, and B_I_POS_UNFM_POINTER resolves to the
   blocknr contained by the entry pos points to */
#define B_I_POS_UNFM_POINTER(bh,ih,pos) le32_to_cpu(*(((unp_t *)B_I_PITEM(bh,ih)) + (pos)))
#define PUT_B_I_POS_UNFM_POINTER(bh,ih,pos, val) do {*(((unp_t *)B_I_PITEM(bh,ih)) + (pos)) = cpu_to_le32(val); } while (0)

struct reiserfs_iget_args {
	__u32 objectid;
	__u32 dirid;
};
/***************************************************************************/
/*                    FUNCTION DECLARATIONS                                */
/***************************************************************************/

#define get_journal_desc_magic(bh) (bh->b_data + bh->b_size - 12)

#define journal_trans_half(blocksize) \
	((blocksize - sizeof (struct reiserfs_journal_desc) + sizeof (__u32) - 12) / sizeof (__u32))

/* journal.c see journal.c for all the comments here */

/* first block written in a commit.  */
struct reiserfs_journal_desc {
	__le32 j_trans_id;	/* id of commit */
	__le32 j_len;		/* length of commit. len +1 is the commit block */
	__le32 j_mount_id;	/* mount id of this trans */
	__le32 j_realblock[1];	/* real locations for each block */
};

#define get_desc_trans_id(d)   le32_to_cpu((d)->j_trans_id)
#define get_desc_trans_len(d)  le32_to_cpu((d)->j_len)
#define get_desc_mount_id(d)   le32_to_cpu((d)->j_mount_id)

#define set_desc_trans_id(d,val)       do { (d)->j_trans_id = cpu_to_le32 (val); } while (0)
#define set_desc_trans_len(d,val)      do { (d)->j_len = cpu_to_le32 (val); } while (0)
#define set_desc_mount_id(d,val)       do { (d)->j_mount_id = cpu_to_le32 (val); } while (0)

/* last block written in a commit */
struct reiserfs_journal_commit {
	__le32 j_trans_id;	/* must match j_trans_id from the desc block */
	__le32 j_len;		/* ditto */
	__le32 j_realblock[1];	/* real locations for each block */
};

#define get_commit_trans_id(c)  le32_to_cpu((c)->j_trans_id)
#define get_commit_trans_len(c) le32_to_cpu((c)->j_len)
#define get_commit_mount_id(c)  le32_to_cpu((c)->j_mount_id)

#define set_commit_trans_id(c,val)     do { (c)->j_trans_id = cpu_to_le32 (val); } while (0)
#define set_commit_trans_len(c,val)    do { (c)->j_len = cpu_to_le32 (val); } while (0)

/* this header block gets written whenever a transaction is considered
   fully flushed, and is more recent than the last fully flushed
   transaction. fully flushed means all the log blocks and all the real
   blocks are on disk, and this transaction does not need to be replayed. */
struct reiserfs_journal_header {
	__le32 j_last_flush_trans_id;	/* id of last fully flushed transaction */
	__le32 j_first_unflushed_offset;	/* offset in the log of where to start replay after a crash */
	__le32 j_mount_id;
	/* 12 */ struct journal_params jh_journal;
};

/* biggest tunable defines are right here */
#define JOURNAL_BLOCK_COUNT 8192	/* number of blocks in the journal */
#define JOURNAL_TRANS_MAX_DEFAULT 1024	/* biggest possible single transaction, don't change for now (8/3/99) */
#define JOURNAL_TRANS_MIN_DEFAULT 256
#define JOURNAL_MAX_BATCH_DEFAULT 900	/* max blocks to batch into one transaction, don't make this any bigger than 900 */
#define JOURNAL_MIN_RATIO 2
#define JOURNAL_MAX_COMMIT_AGE 30
#define JOURNAL_MAX_TRANS_AGE 30
#define JOURNAL_PER_BALANCE_CNT (3 * (MAX_HEIGHT-2) + 9)
#define JOURNAL_BLOCKS_PER_OBJECT(sb)  (JOURNAL_PER_BALANCE_CNT * 3 + \
					 2 * (REISERFS_QUOTA_INIT_BLOCKS(sb) + \
					      REISERFS_QUOTA_TRANS_BLOCKS(sb)))

#ifdef CONFIG_QUOTA
#define REISERFS_QUOTA_OPTS ((1 << REISERFS_USRQUOTA) | (1 << REISERFS_GRPQUOTA))
/* We need to update data and inode (atime) */
#define REISERFS_QUOTA_TRANS_BLOCKS(s) (REISERFS_SB(s)->s_mount_opt & REISERFS_QUOTA_OPTS ? 2 : 0)
/* 1 balancing, 1 bitmap, 1 data per write + stat data update */
#define REISERFS_QUOTA_INIT_BLOCKS(s) (REISERFS_SB(s)->s_mount_opt & REISERFS_QUOTA_OPTS ? \
(DQUOT_INIT_ALLOC*(JOURNAL_PER_BALANCE_CNT+2)+DQUOT_INIT_REWRITE+1) : 0)
/* same as with INIT */
#define REISERFS_QUOTA_DEL_BLOCKS(s) (REISERFS_SB(s)->s_mount_opt & REISERFS_QUOTA_OPTS ? \
(DQUOT_DEL_ALLOC*(JOURNAL_PER_BALANCE_CNT+2)+DQUOT_DEL_REWRITE+1) : 0)
#else
#define REISERFS_QUOTA_TRANS_BLOCKS(s) 0
#define REISERFS_QUOTA_INIT_BLOCKS(s) 0
#define REISERFS_QUOTA_DEL_BLOCKS(s) 0
#endif

/* both of these can be as low as 1, or as high as you want. The min is the
   number of 4k bitmap nodes preallocated on mount. New nodes are allocated
   as needed, and released when transactions are committed. On release, if
   the current number of nodes is > max, the node is freed, otherwise,
   it is put on a free list for faster use later. */
#define REISERFS_MIN_BITMAP_NODES 10
#define REISERFS_MAX_BITMAP_NODES 100

#define JBH_HASH_SHIFT 13	/* these are based on journal hash size of 8192 */
#define JBH_HASH_MASK 8191

#define _jhashfn(sb,block)	\
(((unsigned long)sb>>L1_CACHE_SHIFT) ^ \
 (((block)<<(JBH_HASH_SHIFT - 6)) ^ ((block) >> 13) ^ ((block) << (JBH_HASH_SHIFT - 12))))
#define journal_hash(t,sb,block) ((t)[_jhashfn((sb),(block)) & JBH_HASH_MASK])

// We need these to make journal.c code more readable
#define journal_find_get_block(s, block) __find_get_block(SB_JOURNAL(s)->j_dev_bd, block, s->s_blocksize)
#define journal_getblk(s, block) __getblk(SB_JOURNAL(s)->j_dev_bd, block, s->s_blocksize)
#define journal_bread(s, block) __bread(SB_JOURNAL(s)->j_dev_bd, block, s->s_blocksize)

enum reiserfs_bh_state_bits {
	BH_JDirty = BH_PrivateStart,	/* buffer is in current transaction */
	BH_JDirty_wait,
	BH_JNew,		/* disk block was taken off free list before
				 * being in a finished transaction, or
				 * written to disk. Can be reused immediately. */
	BH_JPrepared,
	BH_JRestore_dirty,
	BH_JTest,		// debugging only, will go away
};

BUFFER_FNS(JDirty, journaled);
TAS_BUFFER_FNS(JDirty, journaled);
BUFFER_FNS(JDirty_wait, journal_dirty);
TAS_BUFFER_FNS(JDirty_wait, journal_dirty);
BUFFER_FNS(JNew, journal_new);
TAS_BUFFER_FNS(JNew, journal_new);
BUFFER_FNS(JPrepared, journal_prepared);
TAS_BUFFER_FNS(JPrepared, journal_prepared);
BUFFER_FNS(JRestore_dirty, journal_restore_dirty);
TAS_BUFFER_FNS(JRestore_dirty, journal_restore_dirty);
BUFFER_FNS(JTest, journal_test);
TAS_BUFFER_FNS(JTest, journal_test);

/* transaction handle which is passed around for all journal calls */
struct reiserfs_transaction_handle {
	struct super_block *t_super;	/* super for this FS when journal_begin
					   was called. saves calls to
					   reiserfs_get_super. also used by
					   nested transactions to make sure
					   they are nesting on the right FS.
					   _must_ be first in the handle */
	int t_refcount;
	int t_blocks_logged;	/* number of blocks this writer has logged */
	int t_blocks_allocated;	/* number of blocks this writer allocated */
	unsigned int t_trans_id;	/* sanity check, equals the current trans id */
	void *t_handle_save;	/* save existing current->journal_info */
	unsigned displace_new_blocks:1;	/* if new block allocation occurs, that
					   block should be displaced from others */
	struct list_head t_list;
};

/* used to keep track of ordered and tail writes, attached to the buffer
 * head through b_journal_head. */
struct reiserfs_jh {
	struct reiserfs_journal_list *jl;
	struct buffer_head *bh;
	struct list_head list;
};

void reiserfs_free_jh(struct buffer_head *bh);
int reiserfs_add_tail_list(struct inode *inode, struct buffer_head *bh);
int reiserfs_add_ordered_list(struct inode *inode, struct buffer_head *bh);
int journal_mark_dirty(struct reiserfs_transaction_handle *,
		       struct super_block *, struct buffer_head *bh);

static inline int reiserfs_file_data_log(struct inode *inode)
{
	if (reiserfs_data_log(inode->i_sb) ||
	    (REISERFS_I(inode)->i_flags & i_data_log))
		return 1;
	return 0;
}

static inline int reiserfs_transaction_running(struct super_block *s)
{
	struct reiserfs_transaction_handle *th = current->journal_info;
	if (th && th->t_super == s)
		return 1;
	if (th && th->t_super == NULL)
		BUG();
	return 0;
}

static inline int reiserfs_transaction_free_space(struct reiserfs_transaction_handle *th)
{
	return th->t_blocks_allocated - th->t_blocks_logged;
}

struct reiserfs_transaction_handle *reiserfs_persistent_transaction(struct super_block *,
								    int count);
int reiserfs_end_persistent_transaction(struct reiserfs_transaction_handle *);
int reiserfs_commit_page(struct inode *inode, struct page *page,
			 unsigned from, unsigned to);
int reiserfs_flush_old_commits(struct super_block *);
int reiserfs_commit_for_inode(struct inode *);
int reiserfs_inode_needs_commit(struct inode *);
void reiserfs_update_inode_transaction(struct inode *);
void reiserfs_wait_on_write_block(struct super_block *s);
void reiserfs_block_writes(struct reiserfs_transaction_handle *th);
void reiserfs_allow_writes(struct super_block *s);
void reiserfs_check_lock_depth(struct super_block *s, char *caller);
int reiserfs_prepare_for_journal(struct super_block *, struct buffer_head *bh,
				 int wait);
void reiserfs_restore_prepared_buffer(struct super_block *,
				      struct buffer_head *bh);
int journal_init(struct super_block *, const char *j_dev_name, int old_format,
		 unsigned int);
int journal_release(struct reiserfs_transaction_handle *, struct super_block *);
int journal_release_error(struct reiserfs_transaction_handle *,
			  struct super_block *);
int journal_end(struct reiserfs_transaction_handle *, struct super_block *,
		unsigned long);
int journal_end_sync(struct reiserfs_transaction_handle *, struct super_block *,
		     unsigned long);
int journal_mark_freed(struct reiserfs_transaction_handle *,
		       struct super_block *, b_blocknr_t blocknr);
int journal_transaction_should_end(struct reiserfs_transaction_handle *, int);
int reiserfs_in_journal(struct super_block *sb, unsigned int bmap_nr,
			int bit_nr, int searchall, b_blocknr_t *next);
int journal_begin(struct reiserfs_transaction_handle *,
		  struct super_block *sb, unsigned long);
int journal_join_abort(struct reiserfs_transaction_handle *,
		       struct super_block *sb, unsigned long);
void reiserfs_abort_journal(struct super_block *sb, int errno);
void reiserfs_abort(struct super_block *sb, int errno, const char *fmt, ...);
int reiserfs_allocate_list_bitmaps(struct super_block *s,
				   struct reiserfs_list_bitmap *, unsigned int);

void add_save_link(struct reiserfs_transaction_handle *th,
		   struct inode *inode, int truncate);
int remove_save_link(struct inode *inode, int truncate);

/* objectid.c */
__u32 reiserfs_get_unused_objectid(struct reiserfs_transaction_handle *th);
void reiserfs_release_objectid(struct reiserfs_transaction_handle *th,
			       __u32 objectid_to_release);
int reiserfs_convert_objectid_map_v1(struct super_block *);

/* stree.c */
int B_IS_IN_TREE(const struct buffer_head *);
extern void copy_item_head(struct item_head *to,
			   const struct item_head *from);

// first key is in cpu form, second - le
extern int comp_short_keys(const struct reiserfs_key *le_key,
			   const struct cpu_key *cpu_key);
extern void le_key2cpu_key(struct cpu_key *to, const struct reiserfs_key *from);

// both are in le form
extern int comp_le_keys(const struct reiserfs_key *,
			const struct reiserfs_key *);
extern int comp_short_le_keys(const struct reiserfs_key *,
			      const struct reiserfs_key *);

//
// get key version from on disk key - kludge
//
static inline int le_key_version(const struct reiserfs_key *key)
{
	int type;

	type = offset_v2_k_type(&(key->u.k_offset_v2));
	if (type != TYPE_DIRECT && type != TYPE_INDIRECT
	    && type != TYPE_DIRENTRY)
		return KEY_FORMAT_3_5;

	return KEY_FORMAT_3_6;
}

static inline void copy_key(struct reiserfs_key *to,
			    const struct reiserfs_key *from)
{
	memcpy(to, from, KEY_SIZE);
}

int comp_items(const struct item_head *stored_ih, const struct treepath *path);
const struct reiserfs_key *get_rkey(const struct treepath *chk_path,
				    const struct super_block *sb);
int search_by_key(struct super_block *, const struct cpu_key *,
		  struct treepath *, int);
#define search_item(s,key,path) search_by_key (s, key, path, DISK_LEAF_NODE_LEVEL)
int search_for_position_by_key(struct super_block *sb,
			       const struct cpu_key *cpu_key,
			       struct treepath *search_path);
extern void decrement_bcount(struct buffer_head *bh);
void decrement_counters_in_path(struct treepath *search_path);
2548 + void pathrelse(struct treepath *search_path); 2549 + int reiserfs_check_path(struct treepath *p); 2550 + void pathrelse_and_restore(struct super_block *s, struct treepath *search_path); 2551 + 2552 + int reiserfs_insert_item(struct reiserfs_transaction_handle *th, 2553 + struct treepath *path, 2554 + const struct cpu_key *key, 2555 + struct item_head *ih, 2556 + struct inode *inode, const char *body); 2557 + 2558 + int reiserfs_paste_into_item(struct reiserfs_transaction_handle *th, 2559 + struct treepath *path, 2560 + const struct cpu_key *key, 2561 + struct inode *inode, 2562 + const char *body, int paste_size); 2563 + 2564 + int reiserfs_cut_from_item(struct reiserfs_transaction_handle *th, 2565 + struct treepath *path, 2566 + struct cpu_key *key, 2567 + struct inode *inode, 2568 + struct page *page, loff_t new_file_size); 2569 + 2570 + int reiserfs_delete_item(struct reiserfs_transaction_handle *th, 2571 + struct treepath *path, 2572 + const struct cpu_key *key, 2573 + struct inode *inode, struct buffer_head *un_bh); 2574 + 2575 + void reiserfs_delete_solid_item(struct reiserfs_transaction_handle *th, 2576 + struct inode *inode, struct reiserfs_key *key); 2577 + int reiserfs_delete_object(struct reiserfs_transaction_handle *th, 2578 + struct inode *inode); 2579 + int reiserfs_do_truncate(struct reiserfs_transaction_handle *th, 2580 + struct inode *inode, struct page *, 2581 + int update_timestamps); 2582 + 2583 + #define i_block_size(inode) ((inode)->i_sb->s_blocksize) 2584 + #define file_size(inode) ((inode)->i_size) 2585 + #define tail_size(inode) (file_size (inode) & (i_block_size (inode) - 1)) 2586 + 2587 + #define tail_has_to_be_packed(inode) (have_large_tails ((inode)->i_sb)?\ 2588 + !STORE_TAIL_IN_UNFM_S1(file_size (inode), tail_size(inode), inode->i_sb->s_blocksize):have_small_tails ((inode)->i_sb)?!STORE_TAIL_IN_UNFM_S2(file_size (inode), tail_size(inode), inode->i_sb->s_blocksize):0 ) 2589 + 2590 + void padd_item(char *item, int total_length, 
int length); 2591 + 2592 + /* inode.c */ 2593 + /* args for the create parameter of reiserfs_get_block */ 2594 + #define GET_BLOCK_NO_CREATE 0 /* don't create new blocks or convert tails */ 2595 + #define GET_BLOCK_CREATE 1 /* add anything you need to find block */ 2596 + #define GET_BLOCK_NO_HOLE 2 /* return -ENOENT for file holes */ 2597 + #define GET_BLOCK_READ_DIRECT 4 /* read the tail if indirect item not found */ 2598 + #define GET_BLOCK_NO_IMUX 8 /* i_mutex is not held, don't preallocate */ 2599 + #define GET_BLOCK_NO_DANGLE 16 /* don't leave any transactions running */ 2600 + 2601 + void reiserfs_read_locked_inode(struct inode *inode, 2602 + struct reiserfs_iget_args *args); 2603 + int reiserfs_find_actor(struct inode *inode, void *p); 2604 + int reiserfs_init_locked_inode(struct inode *inode, void *p); 2605 + void reiserfs_evict_inode(struct inode *inode); 2606 + int reiserfs_write_inode(struct inode *inode, struct writeback_control *wbc); 2607 + int reiserfs_get_block(struct inode *inode, sector_t block, 2608 + struct buffer_head *bh_result, int create); 2609 + struct dentry *reiserfs_fh_to_dentry(struct super_block *sb, struct fid *fid, 2610 + int fh_len, int fh_type); 2611 + struct dentry *reiserfs_fh_to_parent(struct super_block *sb, struct fid *fid, 2612 + int fh_len, int fh_type); 2613 + int reiserfs_encode_fh(struct dentry *dentry, __u32 * data, int *lenp, 2614 + int connectable); 2615 + 2616 + int reiserfs_truncate_file(struct inode *, int update_timestamps); 2617 + void make_cpu_key(struct cpu_key *cpu_key, struct inode *inode, loff_t offset, 2618 + int type, int key_length); 2619 + void make_le_item_head(struct item_head *ih, const struct cpu_key *key, 2620 + int version, 2621 + loff_t offset, int type, int length, int entry_count); 2622 + struct inode *reiserfs_iget(struct super_block *s, const struct cpu_key *key); 2623 + 2624 + struct reiserfs_security_handle; 2625 + int reiserfs_new_inode(struct reiserfs_transaction_handle *th, 2626 + struct 
inode *dir, umode_t mode, 2627 + const char *symname, loff_t i_size, 2628 + struct dentry *dentry, struct inode *inode, 2629 + struct reiserfs_security_handle *security); 2630 + 2631 + void reiserfs_update_sd_size(struct reiserfs_transaction_handle *th, 2632 + struct inode *inode, loff_t size); 2633 + 2634 + static inline void reiserfs_update_sd(struct reiserfs_transaction_handle *th, 2635 + struct inode *inode) 2636 + { 2637 + reiserfs_update_sd_size(th, inode, inode->i_size); 2638 + } 2639 + 2640 + void sd_attrs_to_i_attrs(__u16 sd_attrs, struct inode *inode); 2641 + void i_attrs_to_sd_attrs(struct inode *inode, __u16 * sd_attrs); 2642 + int reiserfs_setattr(struct dentry *dentry, struct iattr *attr); 2643 + 2644 + int __reiserfs_write_begin(struct page *page, unsigned from, unsigned len); 2645 + 2646 + /* namei.c */ 2647 + void set_de_name_and_namelen(struct reiserfs_dir_entry *de); 2648 + int search_by_entry_key(struct super_block *sb, const struct cpu_key *key, 2649 + struct treepath *path, struct reiserfs_dir_entry *de); 2650 + struct dentry *reiserfs_get_parent(struct dentry *); 2651 + 2652 + #ifdef CONFIG_REISERFS_PROC_INFO 2653 + int reiserfs_proc_info_init(struct super_block *sb); 2654 + int reiserfs_proc_info_done(struct super_block *sb); 2655 + int reiserfs_proc_info_global_init(void); 2656 + int reiserfs_proc_info_global_done(void); 2657 + 2658 + #define PROC_EXP( e ) e 2659 + 2660 + #define __PINFO( sb ) REISERFS_SB(sb) -> s_proc_info_data 2661 + #define PROC_INFO_MAX( sb, field, value ) \ 2662 + __PINFO( sb ).field = \ 2663 + max( REISERFS_SB( sb ) -> s_proc_info_data.field, value ) 2664 + #define PROC_INFO_INC( sb, field ) ( ++ ( __PINFO( sb ).field ) ) 2665 + #define PROC_INFO_ADD( sb, field, val ) ( __PINFO( sb ).field += ( val ) ) 2666 + #define PROC_INFO_BH_STAT( sb, bh, level ) \ 2667 + PROC_INFO_INC( sb, sbk_read_at[ ( level ) ] ); \ 2668 + PROC_INFO_ADD( sb, free_at[ ( level ) ], B_FREE_SPACE( bh ) ); \ 2669 + PROC_INFO_ADD( sb, items_at[ ( 
level ) ], B_NR_ITEMS( bh ) ) 2670 + #else 2671 + static inline int reiserfs_proc_info_init(struct super_block *sb) 2672 + { 2673 + return 0; 2674 + } 2675 + 2676 + static inline int reiserfs_proc_info_done(struct super_block *sb) 2677 + { 2678 + return 0; 2679 + } 2680 + 2681 + static inline int reiserfs_proc_info_global_init(void) 2682 + { 2683 + return 0; 2684 + } 2685 + 2686 + static inline int reiserfs_proc_info_global_done(void) 2687 + { 2688 + return 0; 2689 + } 2690 + 2691 + #define PROC_EXP( e ) 2692 + #define VOID_V ( ( void ) 0 ) 2693 + #define PROC_INFO_MAX( sb, field, value ) VOID_V 2694 + #define PROC_INFO_INC( sb, field ) VOID_V 2695 + #define PROC_INFO_ADD( sb, field, val ) VOID_V 2696 + #define PROC_INFO_BH_STAT(sb, bh, n_node_level) VOID_V 2697 + #endif 2698 + 2699 + /* dir.c */ 2700 + extern const struct inode_operations reiserfs_dir_inode_operations; 2701 + extern const struct inode_operations reiserfs_symlink_inode_operations; 2702 + extern const struct inode_operations reiserfs_special_inode_operations; 2703 + extern const struct file_operations reiserfs_dir_operations; 2704 + int reiserfs_readdir_dentry(struct dentry *, void *, filldir_t, loff_t *); 2705 + 2706 + /* tail_conversion.c */ 2707 + int direct2indirect(struct reiserfs_transaction_handle *, struct inode *, 2708 + struct treepath *, struct buffer_head *, loff_t); 2709 + int indirect2direct(struct reiserfs_transaction_handle *, struct inode *, 2710 + struct page *, struct treepath *, const struct cpu_key *, 2711 + loff_t, char *); 2712 + void reiserfs_unmap_buffer(struct buffer_head *); 2713 + 2714 + /* file.c */ 2715 + extern const struct inode_operations reiserfs_file_inode_operations; 2716 + extern const struct file_operations reiserfs_file_operations; 2717 + extern const struct address_space_operations reiserfs_address_space_operations; 2718 + 2719 + /* fix_nodes.c */ 2720 + 2721 + int fix_nodes(int n_op_mode, struct tree_balance *tb, 2722 + struct item_head *ins_ih, const void 
*); 2723 + void unfix_nodes(struct tree_balance *); 2724 + 2725 + /* prints.c */ 2726 + void __reiserfs_panic(struct super_block *s, const char *id, 2727 + const char *function, const char *fmt, ...) 2728 + __attribute__ ((noreturn)); 2729 + #define reiserfs_panic(s, id, fmt, args...) \ 2730 + __reiserfs_panic(s, id, __func__, fmt, ##args) 2731 + void __reiserfs_error(struct super_block *s, const char *id, 2732 + const char *function, const char *fmt, ...); 2733 + #define reiserfs_error(s, id, fmt, args...) \ 2734 + __reiserfs_error(s, id, __func__, fmt, ##args) 2735 + void reiserfs_info(struct super_block *s, const char *fmt, ...); 2736 + void reiserfs_debug(struct super_block *s, int level, const char *fmt, ...); 2737 + void print_indirect_item(struct buffer_head *bh, int item_num); 2738 + void store_print_tb(struct tree_balance *tb); 2739 + void print_cur_tb(char *mes); 2740 + void print_de(struct reiserfs_dir_entry *de); 2741 + void print_bi(struct buffer_info *bi, char *mes); 2742 + #define PRINT_LEAF_ITEMS 1 /* print all items */ 2743 + #define PRINT_DIRECTORY_ITEMS 2 /* print directory items */ 2744 + #define PRINT_DIRECT_ITEMS 4 /* print contents of direct items */ 2745 + void print_block(struct buffer_head *bh, ...); 2746 + void print_bmap(struct super_block *s, int silent); 2747 + void print_bmap_block(int i, char *data, int size, int silent); 2748 + /*void print_super_block (struct super_block * s, char * mes);*/ 2749 + void print_objectid_map(struct super_block *s); 2750 + void print_block_head(struct buffer_head *bh, char *mes); 2751 + void check_leaf(struct buffer_head *bh); 2752 + void check_internal(struct buffer_head *bh); 2753 + void print_statistics(struct super_block *s); 2754 + char *reiserfs_hashname(int code); 2755 + 2756 + /* lbalance.c */ 2757 + int leaf_move_items(int shift_mode, struct tree_balance *tb, int mov_num, 2758 + int mov_bytes, struct buffer_head *Snew); 2759 + int leaf_shift_left(struct tree_balance *tb, int shift_num, int 
shift_bytes); 2760 + int leaf_shift_right(struct tree_balance *tb, int shift_num, int shift_bytes); 2761 + void leaf_delete_items(struct buffer_info *cur_bi, int last_first, int first, 2762 + int del_num, int del_bytes); 2763 + void leaf_insert_into_buf(struct buffer_info *bi, int before, 2764 + struct item_head *inserted_item_ih, 2765 + const char *inserted_item_body, int zeros_number); 2766 + void leaf_paste_in_buffer(struct buffer_info *bi, int pasted_item_num, 2767 + int pos_in_item, int paste_size, const char *body, 2768 + int zeros_number); 2769 + void leaf_cut_from_buffer(struct buffer_info *bi, int cut_item_num, 2770 + int pos_in_item, int cut_size); 2771 + void leaf_paste_entries(struct buffer_info *bi, int item_num, int before, 2772 + int new_entry_count, struct reiserfs_de_head *new_dehs, 2773 + const char *records, int paste_size); 2774 + /* ibalance.c */ 2775 + int balance_internal(struct tree_balance *, int, int, struct item_head *, 2776 + struct buffer_head **); 2777 + 2778 + /* do_balance.c */ 2779 + void do_balance_mark_leaf_dirty(struct tree_balance *tb, 2780 + struct buffer_head *bh, int flag); 2781 + #define do_balance_mark_internal_dirty do_balance_mark_leaf_dirty 2782 + #define do_balance_mark_sb_dirty do_balance_mark_leaf_dirty 2783 + 2784 + void do_balance(struct tree_balance *tb, struct item_head *ih, 2785 + const char *body, int flag); 2786 + void reiserfs_invalidate_buffer(struct tree_balance *tb, 2787 + struct buffer_head *bh); 2788 + 2789 + int get_left_neighbor_position(struct tree_balance *tb, int h); 2790 + int get_right_neighbor_position(struct tree_balance *tb, int h); 2791 + void replace_key(struct tree_balance *tb, struct buffer_head *, int, 2792 + struct buffer_head *, int); 2793 + void make_empty_node(struct buffer_info *); 2794 + struct buffer_head *get_FEB(struct tree_balance *); 2795 + 2796 + /* bitmap.c */ 2797 + 2798 + /* structure contains hints for block allocator, and it is a container for 2799 + * arguments, such as 
node, search path, transaction_handle, etc. */ 2800 + struct __reiserfs_blocknr_hint { 2801 + struct inode *inode; /* inode passed to allocator, if we allocate unf. nodes */ 2802 + sector_t block; /* file offset, in blocks */ 2803 + struct in_core_key key; 2804 + struct treepath *path; /* search path, used by allocator to determine search_start by 2805 + * various ways */ 2806 + struct reiserfs_transaction_handle *th; /* transaction handle is needed to log super blocks and 2807 + * bitmap blocks changes */ 2808 + b_blocknr_t beg, end; 2809 + b_blocknr_t search_start; /* a field used to transfer search start value (block number) 2810 + * between different block allocator procedures 2811 + * (determine_search_start() and others) */ 2812 + int prealloc_size; /* is set in determine_prealloc_size() function, used by underlying 2813 + * functions that do the actual allocation */ 2814 + 2815 + unsigned formatted_node:1; /* the allocator uses different policies for getting disk space for 2816 + * formatted/unformatted blocks with/without preallocation */ 2817 + unsigned preallocate:1; 2818 + }; 2819 + 2820 + typedef struct __reiserfs_blocknr_hint reiserfs_blocknr_hint_t; 2821 + 2822 + int reiserfs_parse_alloc_options(struct super_block *, char *); 2823 + void reiserfs_init_alloc_options(struct super_block *s); 2824 + 2825 + /* 2826 + * given a directory, this will tell you what packing locality 2827 + * to use for a new object underneath it. The locality is returned 2828 + * in disk byte order (le). 
2829 + */ 2830 + __le32 reiserfs_choose_packing(struct inode *dir); 2831 + 2832 + int reiserfs_init_bitmap_cache(struct super_block *sb); 2833 + void reiserfs_free_bitmap_cache(struct super_block *sb); 2834 + void reiserfs_cache_bitmap_metadata(struct super_block *sb, struct buffer_head *bh, struct reiserfs_bitmap_info *info); 2835 + struct buffer_head *reiserfs_read_bitmap_block(struct super_block *sb, unsigned int bitmap); 2836 + int is_reusable(struct super_block *s, b_blocknr_t block, int bit_value); 2837 + void reiserfs_free_block(struct reiserfs_transaction_handle *th, struct inode *, 2838 + b_blocknr_t, int for_unformatted); 2839 + int reiserfs_allocate_blocknrs(reiserfs_blocknr_hint_t *, b_blocknr_t *, int, 2840 + int); 2841 + static inline int reiserfs_new_form_blocknrs(struct tree_balance *tb, 2842 + b_blocknr_t * new_blocknrs, 2843 + int amount_needed) 2844 + { 2845 + reiserfs_blocknr_hint_t hint = { 2846 + .th = tb->transaction_handle, 2847 + .path = tb->tb_path, 2848 + .inode = NULL, 2849 + .key = tb->key, 2850 + .block = 0, 2851 + .formatted_node = 1 2852 + }; 2853 + return reiserfs_allocate_blocknrs(&hint, new_blocknrs, amount_needed, 2854 + 0); 2855 + } 2856 + 2857 + static inline int reiserfs_new_unf_blocknrs(struct reiserfs_transaction_handle 2858 + *th, struct inode *inode, 2859 + b_blocknr_t * new_blocknrs, 2860 + struct treepath *path, 2861 + sector_t block) 2862 + { 2863 + reiserfs_blocknr_hint_t hint = { 2864 + .th = th, 2865 + .path = path, 2866 + .inode = inode, 2867 + .block = block, 2868 + .formatted_node = 0, 2869 + .preallocate = 0 2870 + }; 2871 + return reiserfs_allocate_blocknrs(&hint, new_blocknrs, 1, 0); 2872 + } 2873 + 2874 + #ifdef REISERFS_PREALLOCATE 2875 + static inline int reiserfs_new_unf_blocknrs2(struct reiserfs_transaction_handle 2876 + *th, struct inode *inode, 2877 + b_blocknr_t * new_blocknrs, 2878 + struct treepath *path, 2879 + sector_t block) 2880 + { 2881 + reiserfs_blocknr_hint_t hint = { 2882 + .th = th, 2883 + 
.path = path, 2884 + .inode = inode, 2885 + .block = block, 2886 + .formatted_node = 0, 2887 + .preallocate = 1 2888 + }; 2889 + return reiserfs_allocate_blocknrs(&hint, new_blocknrs, 1, 0); 2890 + } 2891 + 2892 + void reiserfs_discard_prealloc(struct reiserfs_transaction_handle *th, 2893 + struct inode *inode); 2894 + void reiserfs_discard_all_prealloc(struct reiserfs_transaction_handle *th); 2895 + #endif 2896 + 2897 + /* hashes.c */ 2898 + __u32 keyed_hash(const signed char *msg, int len); 2899 + __u32 yura_hash(const signed char *msg, int len); 2900 + __u32 r5_hash(const signed char *msg, int len); 2901 + 2902 + #define reiserfs_set_le_bit __set_bit_le 2903 + #define reiserfs_test_and_set_le_bit __test_and_set_bit_le 2904 + #define reiserfs_clear_le_bit __clear_bit_le 2905 + #define reiserfs_test_and_clear_le_bit __test_and_clear_bit_le 2906 + #define reiserfs_test_le_bit test_bit_le 2907 + #define reiserfs_find_next_zero_le_bit find_next_zero_bit_le 2908 + 2909 + /* sometimes reiserfs_truncate may need to allocate a few new blocks 2910 + to perform an indirect2direct conversion. People probably used to 2911 + think that truncate should work without problems on a filesystem 2912 + without free disk space. They may complain that they cannot 2913 + truncate due to lack of free disk space. This spare space allows us 2914 + to not worry about it. 500 is probably too much, but it should be 2915 + absolutely safe */ 2916 + #define SPARE_SPACE 500 2917 + 2918 + /* prototypes from ioctl.c */ 2919 + long reiserfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); 2920 + long reiserfs_compat_ioctl(struct file *filp, 2921 + unsigned int cmd, unsigned long arg); 2922 + int reiserfs_unpack(struct inode *inode, struct file *filp);
+1 -2
fs/reiserfs/resize.c
··· 13 13 #include <linux/vmalloc.h> 14 14 #include <linux/string.h> 15 15 #include <linux/errno.h> 16 - #include <linux/reiserfs_fs.h> 17 - #include <linux/reiserfs_fs_sb.h> 16 + #include "reiserfs.h" 18 17 #include <linux/buffer_head.h> 19 18 20 19 int reiserfs_resize(struct super_block *s, unsigned long block_count_new)
+1 -1
fs/reiserfs/stree.c
··· 51 51 #include <linux/time.h> 52 52 #include <linux/string.h> 53 53 #include <linux/pagemap.h> 54 - #include <linux/reiserfs_fs.h> 54 + #include "reiserfs.h" 55 55 #include <linux/buffer_head.h> 56 56 #include <linux/quotaops.h> 57 57
+5 -7
fs/reiserfs/super.c
··· 16 16 #include <linux/vmalloc.h> 17 17 #include <linux/time.h> 18 18 #include <asm/uaccess.h> 19 - #include <linux/reiserfs_fs.h> 20 - #include <linux/reiserfs_acl.h> 21 - #include <linux/reiserfs_xattr.h> 19 + #include "reiserfs.h" 20 + #include "acl.h" 21 + #include "xattr.h" 22 22 #include <linux/init.h> 23 23 #include <linux/blkdev.h> 24 24 #include <linux/buffer_head.h> ··· 1874 1874 unlock_new_inode(root_inode); 1875 1875 } 1876 1876 1877 - s->s_root = d_alloc_root(root_inode); 1878 - if (!s->s_root) { 1879 - iput(root_inode); 1877 + s->s_root = d_make_root(root_inode); 1878 + if (!s->s_root) 1880 1879 goto error; 1881 - } 1882 1880 // define and initialize hash function 1883 1881 sbi->s_hash_function = hash_function(s); 1884 1882 if (sbi->s_hash_function == NULL) {
+1 -1
fs/reiserfs/tail_conversion.c
··· 5 5 #include <linux/time.h> 6 6 #include <linux/pagemap.h> 7 7 #include <linux/buffer_head.h> 8 - #include <linux/reiserfs_fs.h> 8 + #include "reiserfs.h" 9 9 10 10 /* access to tail : when one is going to read tail it must make sure, that is not running. 11 11 direct2indirect and indirect2direct can not run concurrently */
+3 -3
fs/reiserfs/xattr.c
··· 33 33 * The xattrs themselves are protected by the xattr_sem. 34 34 */ 35 35 36 - #include <linux/reiserfs_fs.h> 36 + #include "reiserfs.h" 37 37 #include <linux/capability.h> 38 38 #include <linux/dcache.h> 39 39 #include <linux/namei.h> ··· 43 43 #include <linux/file.h> 44 44 #include <linux/pagemap.h> 45 45 #include <linux/xattr.h> 46 - #include <linux/reiserfs_xattr.h> 47 - #include <linux/reiserfs_acl.h> 46 + #include "xattr.h" 47 + #include "acl.h" 48 48 #include <asm/uaccess.h> 49 49 #include <net/checksum.h> 50 50 #include <linux/stat.h>
+122
fs/reiserfs/xattr.h
··· 1 + #include <linux/reiserfs_xattr.h> 2 + #include <linux/init.h> 3 + #include <linux/list.h> 4 + #include <linux/rwsem.h> 5 + 6 + struct inode; 7 + struct dentry; 8 + struct iattr; 9 + struct super_block; 10 + struct nameidata; 11 + 12 + int reiserfs_xattr_register_handlers(void) __init; 13 + void reiserfs_xattr_unregister_handlers(void); 14 + int reiserfs_xattr_init(struct super_block *sb, int mount_flags); 15 + int reiserfs_lookup_privroot(struct super_block *sb); 16 + int reiserfs_delete_xattrs(struct inode *inode); 17 + int reiserfs_chown_xattrs(struct inode *inode, struct iattr *attrs); 18 + int reiserfs_permission(struct inode *inode, int mask); 19 + 20 + #ifdef CONFIG_REISERFS_FS_XATTR 21 + #define has_xattr_dir(inode) (REISERFS_I(inode)->i_flags & i_has_xattr_dir) 22 + ssize_t reiserfs_getxattr(struct dentry *dentry, const char *name, 23 + void *buffer, size_t size); 24 + int reiserfs_setxattr(struct dentry *dentry, const char *name, 25 + const void *value, size_t size, int flags); 26 + ssize_t reiserfs_listxattr(struct dentry *dentry, char *buffer, size_t size); 27 + int reiserfs_removexattr(struct dentry *dentry, const char *name); 28 + 29 + int reiserfs_xattr_get(struct inode *, const char *, void *, size_t); 30 + int reiserfs_xattr_set(struct inode *, const char *, const void *, size_t, int); 31 + int reiserfs_xattr_set_handle(struct reiserfs_transaction_handle *, 32 + struct inode *, const char *, const void *, 33 + size_t, int); 34 + 35 + extern const struct xattr_handler reiserfs_xattr_user_handler; 36 + extern const struct xattr_handler reiserfs_xattr_trusted_handler; 37 + extern const struct xattr_handler reiserfs_xattr_security_handler; 38 + #ifdef CONFIG_REISERFS_FS_SECURITY 39 + int reiserfs_security_init(struct inode *dir, struct inode *inode, 40 + const struct qstr *qstr, 41 + struct reiserfs_security_handle *sec); 42 + int reiserfs_security_write(struct reiserfs_transaction_handle *th, 43 + struct inode *inode, 44 + struct 
reiserfs_security_handle *sec); 45 + void reiserfs_security_free(struct reiserfs_security_handle *sec); 46 + #endif 47 + 48 + static inline int reiserfs_xattrs_initialized(struct super_block *sb) 49 + { 50 + return REISERFS_SB(sb)->priv_root != NULL; 51 + } 52 + 53 + #define xattr_size(size) ((size) + sizeof(struct reiserfs_xattr_header)) 54 + static inline loff_t reiserfs_xattr_nblocks(struct inode *inode, loff_t size) 55 + { 56 + loff_t ret = 0; 57 + if (reiserfs_file_data_log(inode)) { 58 + ret = _ROUND_UP(xattr_size(size), inode->i_sb->s_blocksize); 59 + ret >>= inode->i_sb->s_blocksize_bits; 60 + } 61 + return ret; 62 + } 63 + 64 + /* We may have to create up to 3 objects: xattr root, xattr dir, xattr file. 65 + * Let's try to be smart about it. 66 + * xattr root: We cache it. If it's not cached, we may need to create it. 67 + * xattr dir: If anything has been loaded for this inode, we can set a flag 68 + * saying so. 69 + * xattr file: Since we don't cache xattrs, we can't tell. We always include 70 + * blocks for it. 71 + * 72 + * However, since root and dir can be created between calls - YOU MUST SAVE 73 + * THIS VALUE. 
74 + */ 75 + static inline size_t reiserfs_xattr_jcreate_nblocks(struct inode *inode) 76 + { 77 + size_t nblocks = JOURNAL_BLOCKS_PER_OBJECT(inode->i_sb); 78 + 79 + if ((REISERFS_I(inode)->i_flags & i_has_xattr_dir) == 0) { 80 + nblocks += JOURNAL_BLOCKS_PER_OBJECT(inode->i_sb); 81 + if (!REISERFS_SB(inode->i_sb)->xattr_root->d_inode) 82 + nblocks += JOURNAL_BLOCKS_PER_OBJECT(inode->i_sb); 83 + } 84 + 85 + return nblocks; 86 + } 87 + 88 + static inline void reiserfs_init_xattr_rwsem(struct inode *inode) 89 + { 90 + init_rwsem(&REISERFS_I(inode)->i_xattr_sem); 91 + } 92 + 93 + #else 94 + 95 + #define reiserfs_getxattr NULL 96 + #define reiserfs_setxattr NULL 97 + #define reiserfs_listxattr NULL 98 + #define reiserfs_removexattr NULL 99 + 100 + static inline void reiserfs_init_xattr_rwsem(struct inode *inode) 101 + { 102 + } 103 + #endif /* CONFIG_REISERFS_FS_XATTR */ 104 + 105 + #ifndef CONFIG_REISERFS_FS_SECURITY 106 + static inline int reiserfs_security_init(struct inode *dir, 107 + struct inode *inode, 108 + const struct qstr *qstr, 109 + struct reiserfs_security_handle *sec) 110 + { 111 + return 0; 112 + } 113 + static inline int 114 + reiserfs_security_write(struct reiserfs_transaction_handle *th, 115 + struct inode *inode, 116 + struct reiserfs_security_handle *sec) 117 + { 118 + return 0; 119 + } 120 + static inline void reiserfs_security_free(struct reiserfs_security_handle *sec) 121 + {} 122 + #endif
+3 -3
fs/reiserfs/xattr_acl.c
··· 1 1 #include <linux/capability.h> 2 2 #include <linux/fs.h> 3 3 #include <linux/posix_acl.h> 4 - #include <linux/reiserfs_fs.h> 4 + #include "reiserfs.h" 5 5 #include <linux/errno.h> 6 6 #include <linux/pagemap.h> 7 7 #include <linux/xattr.h> 8 8 #include <linux/slab.h> 9 9 #include <linux/posix_acl_xattr.h> 10 - #include <linux/reiserfs_xattr.h> 11 - #include <linux/reiserfs_acl.h> 10 + #include "xattr.h" 11 + #include "acl.h" 12 12 #include <asm/uaccess.h> 13 13 14 14 static int reiserfs_set_acl(struct reiserfs_transaction_handle *th,
+2 -2
fs/reiserfs/xattr_security.c
··· 1 - #include <linux/reiserfs_fs.h> 1 + #include "reiserfs.h" 2 2 #include <linux/errno.h> 3 3 #include <linux/fs.h> 4 4 #include <linux/pagemap.h> 5 5 #include <linux/xattr.h> 6 6 #include <linux/slab.h> 7 - #include <linux/reiserfs_xattr.h> 7 + #include "xattr.h" 8 8 #include <linux/security.h> 9 9 #include <asm/uaccess.h> 10 10
+2 -2
fs/reiserfs/xattr_trusted.c
··· 1 - #include <linux/reiserfs_fs.h> 1 + #include "reiserfs.h" 2 2 #include <linux/capability.h> 3 3 #include <linux/errno.h> 4 4 #include <linux/fs.h> 5 5 #include <linux/pagemap.h> 6 6 #include <linux/xattr.h> 7 - #include <linux/reiserfs_xattr.h> 7 + #include "xattr.h" 8 8 #include <asm/uaccess.h> 9 9 10 10 static int
+2 -2
fs/reiserfs/xattr_user.c
··· 1 - #include <linux/reiserfs_fs.h> 1 + #include "reiserfs.h" 2 2 #include <linux/errno.h> 3 3 #include <linux/fs.h> 4 4 #include <linux/pagemap.h> 5 5 #include <linux/xattr.h> 6 - #include <linux/reiserfs_xattr.h> 6 + #include "xattr.h" 7 7 #include <asm/uaccess.h> 8 8 9 9 static int
+2 -4
fs/romfs/super.c
··· 538 538 if (IS_ERR(root)) 539 539 goto error; 540 540 541 - sb->s_root = d_alloc_root(root); 541 + sb->s_root = d_make_root(root); 542 542 if (!sb->s_root) 543 - goto error_i; 543 + goto error; 544 544 545 545 return 0; 546 546 547 - error_i: 548 - iput(root); 549 547 error: 550 548 return -EINVAL; 551 549 error_rsb_inval:
+1 -2
fs/squashfs/super.c
··· 316 316 } 317 317 insert_inode_hash(root); 318 318 319 - sb->s_root = d_alloc_root(root); 319 + sb->s_root = d_make_root(root); 320 320 if (sb->s_root == NULL) { 321 321 ERROR("Root inode create failed\n"); 322 322 err = -ENOMEM; 323 - iput(root); 324 323 goto failed_mount; 325 324 } 326 325
+1 -1
fs/stat.c
··· 307 307 if (inode->i_op->readlink) { 308 308 error = security_inode_readlink(path.dentry); 309 309 if (!error) { 310 - touch_atime(path.mnt, path.dentry); 310 + touch_atime(&path); 311 311 error = inode->i_op->readlink(path.dentry, 312 312 buf, bufsiz); 313 313 }
+1 -2
fs/sysfs/mount.c
··· 61 61 } 62 62 63 63 /* instantiate and link root dentry */ 64 - root = d_alloc_root(inode); 64 + root = d_make_root(inode); 65 65 if (!root) { 66 66 pr_debug("%s: could not get root dentry!\n",__func__); 67 - iput(inode); 68 67 return -ENOMEM; 69 68 } 70 69 root->d_fsdata = &sysfs_root;
+1 -11
fs/sysv/namei.c
··· 121 121 { 122 122 struct inode *inode = old_dentry->d_inode; 123 123 124 - if (inode->i_nlink >= SYSV_SB(inode->i_sb)->s_link_max) 125 - return -EMLINK; 126 - 127 124 inode->i_ctime = CURRENT_TIME_SEC; 128 125 inode_inc_link_count(inode); 129 126 ihold(inode); ··· 131 134 static int sysv_mkdir(struct inode * dir, struct dentry *dentry, umode_t mode) 132 135 { 133 136 struct inode * inode; 134 - int err = -EMLINK; 137 + int err; 135 138 136 - if (dir->i_nlink >= SYSV_SB(dir->i_sb)->s_link_max) 137 - goto out; 138 139 inode_inc_link_count(dir); 139 140 140 141 inode = sysv_new_inode(dir, S_IFDIR|mode); ··· 246 251 drop_nlink(new_inode); 247 252 inode_dec_link_count(new_inode); 248 253 } else { 249 - if (dir_de) { 250 - err = -EMLINK; 251 - if (new_dir->i_nlink >= SYSV_SB(new_dir->i_sb)->s_link_max) 252 - goto out_dir; 253 - } 254 254 err = sysv_add_link(new_dentry, old_inode); 255 255 if (err) 256 256 goto out_dir;
+13 -14
fs/sysv/super.c
··· 44 44 JAN_1_1980 = (10*365 + 2) * 24 * 60 * 60 45 45 };
 46 46
 47 - static void detected_xenix(struct sysv_sb_info *sbi)
 47 + static void detected_xenix(struct sysv_sb_info *sbi, unsigned *max_links)
 48 48 {
 49 49 struct buffer_head *bh1 = sbi->s_bh1;
 50 50 struct buffer_head *bh2 = sbi->s_bh2;
 ··· 59 59 sbd2 = (struct xenix_super_block *) (bh2->b_data - 512);
 60 60 }
 61 61
 62 - sbi->s_link_max = XENIX_LINK_MAX;
 62 + *max_links = XENIX_LINK_MAX;
 63 63 sbi->s_fic_size = XENIX_NICINOD;
 64 64 sbi->s_flc_size = XENIX_NICFREE;
 65 65 sbi->s_sbd1 = (char *)sbd1;
 ··· 75 75 sbi->s_nzones = fs32_to_cpu(sbi, sbd1->s_fsize);
 76 76 }
 77 77
 78 - static void detected_sysv4(struct sysv_sb_info *sbi)
 78 + static void detected_sysv4(struct sysv_sb_info *sbi, unsigned *max_links)
 79 79 {
 80 80 struct sysv4_super_block * sbd;
 81 81 struct buffer_head *bh1 = sbi->s_bh1;
 ··· 86 86 else
 87 87 sbd = (struct sysv4_super_block *) bh2->b_data;
 88 88
 89 - sbi->s_link_max = SYSV_LINK_MAX;
 89 + *max_links = SYSV_LINK_MAX;
 90 90 sbi->s_fic_size = SYSV_NICINOD;
 91 91 sbi->s_flc_size = SYSV_NICFREE;
 92 92 sbi->s_sbd1 = (char *)sbd;
 ··· 103 103 sbi->s_nzones = fs32_to_cpu(sbi, sbd->s_fsize);
 104 104 }
 105 105
 106 - static void detected_sysv2(struct sysv_sb_info *sbi)
 106 + static void detected_sysv2(struct sysv_sb_info *sbi, unsigned *max_links)
 107 107 {
 108 108 struct sysv2_super_block *sbd;
 109 109 struct buffer_head *bh1 = sbi->s_bh1;
 ··· 114 114 else
 115 115 sbd = (struct sysv2_super_block *) bh2->b_data;
 116 116
 117 - sbi->s_link_max = SYSV_LINK_MAX;
 117 + *max_links = SYSV_LINK_MAX;
 118 118 sbi->s_fic_size = SYSV_NICINOD;
 119 119 sbi->s_flc_size = SYSV_NICFREE;
 120 120 sbi->s_sbd1 = (char *)sbd;
 ··· 131 131 sbi->s_nzones = fs32_to_cpu(sbi, sbd->s_fsize);
 132 132 }
 133 133
 134 - static void detected_coherent(struct sysv_sb_info *sbi)
 134 + static void detected_coherent(struct sysv_sb_info *sbi, unsigned *max_links)
 135 135 {
 136 136 struct coh_super_block * sbd;
 137 137 struct buffer_head *bh1 = sbi->s_bh1;
 138 138
 139 139 sbd = (struct coh_super_block *) bh1->b_data;
 140 140
 141 - sbi->s_link_max = COH_LINK_MAX;
 141 + *max_links = COH_LINK_MAX;
 142 142 sbi->s_fic_size = COH_NICINOD;
 143 143 sbi->s_flc_size = COH_NICFREE;
 144 144 sbi->s_sbd1 = (char *)sbd;
 ··· 154 154 sbi->s_nzones = fs32_to_cpu(sbi, sbd->s_fsize);
 155 155 }
 156 156
 157 - static void detected_v7(struct sysv_sb_info *sbi)
 157 + static void detected_v7(struct sysv_sb_info *sbi, unsigned *max_links)
 158 158 {
 159 159 struct buffer_head *bh2 = sbi->s_bh2;
 160 160 struct v7_super_block *sbd = (struct v7_super_block *)bh2->b_data;
 161 161
 162 - sbi->s_link_max = V7_LINK_MAX;
 162 + *max_links = V7_LINK_MAX;
 163 163 sbi->s_fic_size = V7_NICINOD;
 164 164 sbi->s_flc_size = V7_NICFREE;
 165 165 sbi->s_sbd1 = (char *)sbd;
 ··· 290 290 [FSTYPE_AFS] = "AFS",
 291 291 };
 292 292
 293 - static void (*flavour_setup[])(struct sysv_sb_info *) = {
 293 + static void (*flavour_setup[])(struct sysv_sb_info *, unsigned *) = {
 294 294 [FSTYPE_XENIX] = detected_xenix,
 295 295 [FSTYPE_SYSV4] = detected_sysv4,
 296 296 [FSTYPE_SYSV2] = detected_sysv2,
 ··· 310 310
 311 311 sbi->s_firstinodezone = 2;
 312 312
 313 - flavour_setup[sbi->s_type](sbi);
 313 + flavour_setup[sbi->s_type](sbi, &sb->s_max_links);
 314 314
 315 315 sbi->s_truncate = 1;
 316 316 sbi->s_ndatazones = sbi->s_nzones - sbi->s_firstdatazone;
 ··· 341 341 printk("SysV FS: get root inode failed\n");
 342 342 return 0;
 343 343 }
 344 - sb->s_root = d_alloc_root(root_inode);
 344 + sb->s_root = d_make_root(root_inode);
 345 345 if (!sb->s_root) {
 346 - iput(root_inode);
 347 346 printk("SysV FS: get root dentry failed\n");
 348 347 return 0;
 349 348 }
-1
fs/sysv/sysv.h
··· 24 24 char s_bytesex; /* bytesex (le/be/pdp) */ 25 25 char s_truncate; /* if 1: names > SYSV_NAMELEN chars are truncated */ 26 26 /* if 0: they are disallowed (ENAMETOOLONG) */ 27 - nlink_t s_link_max; /* max number of hard links to a file */ 28 27 unsigned int s_inodes_per_block; /* number of inodes per block */ 29 28 unsigned int s_inodes_per_block_1; /* inodes_per_block - 1 */ 30 29 unsigned int s_inodes_per_block_bits; /* log2(inodes_per_block) */
+2 -4
fs/ubifs/super.c
··· 2076 2076 goto out_umount; 2077 2077 } 2078 2078 2079 - sb->s_root = d_alloc_root(root); 2079 + sb->s_root = d_make_root(root); 2080 2080 if (!sb->s_root) 2081 - goto out_iput; 2081 + goto out_umount; 2082 2082 2083 2083 mutex_unlock(&c->umount_mutex); 2084 2084 return 0; 2085 2085 2086 - out_iput: 2087 - iput(root); 2088 2086 out_umount: 2089 2087 ubifs_umount(c); 2090 2088 out_unlock:
-13
fs/udf/namei.c
··· 32 32 #include <linux/crc-itu-t.h> 33 33 #include <linux/exportfs.h> 34 34 35 - enum { UDF_MAX_LINKS = 0xffff }; 36 - 37 35 static inline int udf_match(int len1, const unsigned char *name1, int len2, 38 36 const unsigned char *name2) 39 37 { ··· 647 649 struct udf_inode_info *dinfo = UDF_I(dir); 648 650 struct udf_inode_info *iinfo; 649 651 650 - err = -EMLINK; 651 - if (dir->i_nlink >= UDF_MAX_LINKS) 652 - goto out; 653 - 654 652 err = -EIO; 655 653 inode = udf_new_inode(dir, S_IFDIR | mode, &err); 656 654 if (!inode) ··· 1026 1032 struct fileIdentDesc cfi, *fi; 1027 1033 int err; 1028 1034 1029 - if (inode->i_nlink >= UDF_MAX_LINKS) 1030 - return -EMLINK; 1031 - 1032 1035 fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err); 1033 1036 if (!fi) { 1034 1037 return err; ··· 1116 1125 tloc = lelb_to_cpu(dir_fi->icb.extLocation); 1117 1126 if (udf_get_lb_pblock(old_inode->i_sb, &tloc, 0) != 1118 1127 old_dir->i_ino) 1119 - goto end_rename; 1120 - 1121 - retval = -EMLINK; 1122 - if (!new_inode && new_dir->i_nlink >= UDF_MAX_LINKS) 1123 1128 goto end_rename; 1124 1129 } 1125 1130 if (!nfi) {
+4 -2
fs/udf/super.c
··· 75 75 76 76 #define UDF_DEFAULT_BLOCKSIZE 2048 77 77 78 + enum { UDF_MAX_LINKS = 0xffff }; 79 + 78 80 /* These are the "meat" - everything else is stuffing */ 79 81 static int udf_fill_super(struct super_block *, void *, int); 80 82 static void udf_put_super(struct super_block *); ··· 2037 2035 } 2038 2036 2039 2037 /* Allocate a dentry for the root inode */ 2040 - sb->s_root = d_alloc_root(inode); 2038 + sb->s_root = d_make_root(inode); 2041 2039 if (!sb->s_root) { 2042 2040 udf_err(sb, "Couldn't allocate root dentry\n"); 2043 - iput(inode); 2044 2041 goto error_out; 2045 2042 } 2046 2043 sb->s_maxbytes = MAX_LFS_FILESIZE; 2044 + sb->s_max_links = UDF_MAX_LINKS; 2047 2045 return 0; 2048 2046 2049 2047 error_out:
+1 -13
fs/ufs/namei.c
··· 166 166 int error; 167 167 168 168 lock_ufs(dir->i_sb); 169 - if (inode->i_nlink >= UFS_LINK_MAX) { 170 - unlock_ufs(dir->i_sb); 171 - return -EMLINK; 172 - } 173 169 174 170 inode->i_ctime = CURRENT_TIME_SEC; 175 171 inode_inc_link_count(inode); ··· 179 183 static int ufs_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode) 180 184 { 181 185 struct inode * inode; 182 - int err = -EMLINK; 183 - 184 - if (dir->i_nlink >= UFS_LINK_MAX) 185 - goto out; 186 + int err; 186 187 187 188 lock_ufs(dir->i_sb); 188 189 inode_inc_link_count(dir); ··· 298 305 drop_nlink(new_inode); 299 306 inode_dec_link_count(new_inode); 300 307 } else { 301 - if (dir_de) { 302 - err = -EMLINK; 303 - if (new_dir->i_nlink >= UFS_LINK_MAX) 304 - goto out_dir; 305 - } 306 308 err = ufs_add_link(new_dentry, old_inode); 307 309 if (err) 308 310 goto out_dir;
+3 -4
fs/ufs/super.c
··· 1157 1157 "fast symlink size (%u)\n", uspi->s_maxsymlinklen); 1158 1158 uspi->s_maxsymlinklen = maxsymlen; 1159 1159 } 1160 + sb->s_max_links = UFS_LINK_MAX; 1160 1161 1161 1162 inode = ufs_iget(sb, UFS_ROOTINO); 1162 1163 if (IS_ERR(inode)) { 1163 1164 ret = PTR_ERR(inode); 1164 1165 goto failed; 1165 1166 } 1166 - sb->s_root = d_alloc_root(inode); 1167 + sb->s_root = d_make_root(inode); 1167 1168 if (!sb->s_root) { 1168 1169 ret = -ENOMEM; 1169 - goto dalloc_failed; 1170 + goto failed; 1170 1171 } 1171 1172 1172 1173 ufs_setup_cstotal(sb); ··· 1181 1180 UFSD("EXIT\n"); 1182 1181 return 0; 1183 1182 1184 - dalloc_failed: 1185 - iput(inode); 1186 1183 failed: 1187 1184 if (ubh) 1188 1185 ubh_brelse_uspi (uspi);
-11
fs/xfs/xfs_rename.c
··· 118 118 new_parent = (src_dp != target_dp); 119 119 src_is_directory = S_ISDIR(src_ip->i_d.di_mode); 120 120 121 - if (src_is_directory) { 122 - /* 123 - * Check for link count overflow on target_dp 124 - */ 125 - if (target_ip == NULL && new_parent && 126 - target_dp->i_d.di_nlink >= XFS_MAXLINK) { 127 - error = XFS_ERROR(EMLINK); 128 - goto std_return; 129 - } 130 - } 131 - 132 121 xfs_sort_for_rename(src_dp, target_dp, src_ip, target_ip, 133 122 inodes, &num_inodes); 134 123
+3 -4
fs/xfs/xfs_super.c
··· 1341 1341 sb->s_blocksize = mp->m_sb.sb_blocksize; 1342 1342 sb->s_blocksize_bits = ffs(sb->s_blocksize) - 1; 1343 1343 sb->s_maxbytes = xfs_max_file_offset(sb->s_blocksize_bits); 1344 + sb->s_max_links = XFS_MAXLINK; 1344 1345 sb->s_time_gran = 1; 1345 1346 set_posix_acl_flag(sb); 1346 1347 ··· 1362 1361 error = EINVAL; 1363 1362 goto out_syncd_stop; 1364 1363 } 1365 - sb->s_root = d_alloc_root(root); 1364 + sb->s_root = d_make_root(root); 1366 1365 if (!sb->s_root) { 1367 1366 error = ENOMEM; 1368 - goto out_iput; 1367 + goto out_syncd_stop; 1369 1368 } 1370 1369 1371 1370 return 0; ··· 1384 1383 out: 1385 1384 return -error; 1386 1385 1387 - out_iput: 1388 - iput(root); 1389 1386 out_syncd_stop: 1390 1387 xfs_syncd_stop(mp); 1391 1388 out_unmount:
-2
fs/xfs/xfs_utils.c
··· 296 296 xfs_trans_t *tp, 297 297 xfs_inode_t *ip) 298 298 { 299 - if (ip->i_d.di_nlink >= XFS_MAXLINK) 300 - return XFS_ERROR(EMLINK); 301 299 xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_CHG); 302 300 303 301 ASSERT(ip->i_d.di_nlink > 0);
-16
fs/xfs/xfs_vnodeops.c
··· 917 917 xfs_ilock(dp, XFS_ILOCK_EXCL | XFS_ILOCK_PARENT); 918 918 unlock_dp_on_error = B_TRUE; 919 919 920 - /* 921 - * Check for directory link count overflow. 922 - */ 923 - if (is_dir && dp->i_d.di_nlink >= XFS_MAXLINK) { 924 - error = XFS_ERROR(EMLINK); 925 - goto out_trans_cancel; 926 - } 927 - 928 920 xfs_bmap_init(&free_list, &first_block); 929 921 930 922 /* ··· 1419 1427 1420 1428 xfs_trans_ijoin(tp, sip, XFS_ILOCK_EXCL); 1421 1429 xfs_trans_ijoin(tp, tdp, XFS_ILOCK_EXCL); 1422 - 1423 - /* 1424 - * If the source has too many links, we can't make any more to it. 1425 - */ 1426 - if (sip->i_d.di_nlink >= XFS_MAXLINK) { 1427 - error = XFS_ERROR(EMLINK); 1428 - goto error_return; 1429 - } 1430 1430 1431 1431 /* 1432 1432 * If we are using project inheritance, we only allow hard link
+1 -1
include/linux/audit.h
··· 684 684 const char *string); 685 685 extern void audit_log_d_path(struct audit_buffer *ab, 686 686 const char *prefix, 687 - struct path *path); 687 + const struct path *path); 688 688 extern void audit_log_key(struct audit_buffer *ab, 689 689 char *key); 690 690 extern void audit_log_lost(const char *message);
+5 -5
include/linux/binfmts.h
··· 92 92 unsigned long min_coredump; /* minimal dump size */ 93 93 }; 94 94 95 - extern int __register_binfmt(struct linux_binfmt *fmt, int insert); 95 + extern void __register_binfmt(struct linux_binfmt *fmt, int insert); 96 96 97 97 /* Registration of default binfmt handlers */ 98 - static inline int register_binfmt(struct linux_binfmt *fmt) 98 + static inline void register_binfmt(struct linux_binfmt *fmt) 99 99 { 100 - return __register_binfmt(fmt, 0); 100 + __register_binfmt(fmt, 0); 101 101 } 102 102 /* Same as above, but adds a new binfmt at the top of the list */ 103 - static inline int insert_binfmt(struct linux_binfmt *fmt) 103 + static inline void insert_binfmt(struct linux_binfmt *fmt) 104 104 { 105 - return __register_binfmt(fmt, 1); 105 + __register_binfmt(fmt, 1); 106 106 } 107 107 108 108 extern void unregister_binfmt(struct linux_binfmt *);
-1
include/linux/dcache.h
··· 222 222 extern int d_invalidate(struct dentry *); 223 223 224 224 /* only used at mount-time */ 225 - extern struct dentry * d_alloc_root(struct inode *); 226 225 extern struct dentry * d_make_root(struct inode *); 227 226 228 227 /* <clickety>-<click> the ramfs-type tree */
+2 -2
include/linux/debugfs.h
··· 86 86 struct dentry *parent, 87 87 struct debugfs_blob_wrapper *blob); 88 88 89 - struct dentry *debugfs_create_regset32(const char *name, mode_t mode, 89 + struct dentry *debugfs_create_regset32(const char *name, umode_t mode, 90 90 struct dentry *parent, 91 91 struct debugfs_regset32 *regset); 92 92 ··· 208 208 } 209 209 210 210 static inline struct dentry *debugfs_create_regset32(const char *name, 211 - mode_t mode, struct dentry *parent, 211 + umode_t mode, struct dentry *parent, 212 212 struct debugfs_regset32 *regset) 213 213 { 214 214 return ERR_PTR(-ENODEV);
-1
include/linux/file.h
··· 12 12 struct file; 13 13 14 14 extern void fput(struct file *); 15 - extern void drop_file_write_access(struct file *file); 16 15 17 16 struct file_operations; 18 17 struct vfsmount;
+7 -3
include/linux/fs.h
··· 1459 1459 u8 s_uuid[16]; /* UUID */ 1460 1460 1461 1461 void *s_fs_info; /* Filesystem private info */ 1462 + unsigned int s_max_links; 1462 1463 fmode_t s_mode; 1463 1464 1464 1465 /* Granularity of c/m/atime in ns. ··· 1812 1811 spin_unlock(&inode->i_lock); 1813 1812 } 1814 1813 1815 - extern void touch_atime(struct vfsmount *mnt, struct dentry *dentry); 1814 + extern void touch_atime(struct path *); 1816 1815 static inline void file_accessed(struct file *file) 1817 1816 { 1818 1817 if (!(file->f_flags & O_NOATIME)) 1819 - touch_atime(file->f_path.mnt, file->f_path.dentry); 1818 + touch_atime(&file->f_path); 1820 1819 } 1821 1820 1822 1821 int sync_inode(struct inode *inode, struct writeback_control *wbc); ··· 2305 2304 extern ino_t iunique(struct super_block *, ino_t); 2306 2305 extern int inode_needs_sync(struct inode *inode); 2307 2306 extern int generic_delete_inode(struct inode *inode); 2308 - extern int generic_drop_inode(struct inode *inode); 2307 + static inline int generic_drop_inode(struct inode *inode) 2308 + { 2309 + return !inode->i_nlink || inode_unhashed(inode); 2310 + } 2309 2311 2310 2312 extern struct inode *ilookup5_nowait(struct super_block *sb, 2311 2313 unsigned long hashval, int (*test)(struct inode *, void *),
+1
include/linux/magic.h
··· 42 42 #define OPENPROM_SUPER_MAGIC 0x9fa1 43 43 #define PROC_SUPER_MAGIC 0x9fa0 44 44 #define QNX4_SUPER_MAGIC 0x002f /* qnx4 fs detection */ 45 + #define QNX6_SUPER_MAGIC 0x68191122 /* qnx6 fs detection */ 45 46 46 47 #define REISERFS_SUPER_MAGIC 0x52654973 /* used by gcc */ 47 48 /* used by file system utilities that
+134
include/linux/qnx6_fs.h
··· 1 + /* 2 + * Name : qnx6_fs.h 3 + * Author : Kai Bankett 4 + * Function : qnx6 global filesystem definitions 5 + * History : 17-01-2012 created 6 + */ 7 + #ifndef _LINUX_QNX6_FS_H 8 + #define _LINUX_QNX6_FS_H 9 + 10 + #include <linux/types.h> 11 + #include <linux/magic.h> 12 + 13 + #define QNX6_ROOT_INO 1 14 + 15 + /* for di_status */ 16 + #define QNX6_FILE_DIRECTORY 0x01 17 + #define QNX6_FILE_DELETED 0x02 18 + #define QNX6_FILE_NORMAL 0x03 19 + 20 + #define QNX6_SUPERBLOCK_SIZE 0x200 /* superblock always is 512 bytes */ 21 + #define QNX6_SUPERBLOCK_AREA 0x1000 /* area reserved for superblock */ 22 + #define QNX6_BOOTBLOCK_SIZE 0x2000 /* heading bootblock area */ 23 + #define QNX6_DIR_ENTRY_SIZE 0x20 /* dir entry size of 32 bytes */ 24 + #define QNX6_INODE_SIZE 0x80 /* each inode is 128 bytes */ 25 + #define QNX6_INODE_SIZE_BITS 7 /* inode entry size shift */ 26 + 27 + #define QNX6_NO_DIRECT_POINTERS 16 /* 16 blockptrs in sbl/inode */ 28 + #define QNX6_PTR_MAX_LEVELS 5 /* maximum indirect levels */ 29 + 30 + /* for filenames */ 31 + #define QNX6_SHORT_NAME_MAX 27 32 + #define QNX6_LONG_NAME_MAX 510 33 + 34 + /* list of mount options */ 35 + #define QNX6_MOUNT_MMI_FS 0x010000 /* mount as Audi MMI 3G fs */ 36 + 37 + /* 38 + * This is the original qnx6 inode layout on disk. 39 + * Each inode is 128 byte long. 40 + */ 41 + struct qnx6_inode_entry { 42 + __fs64 di_size; 43 + __fs32 di_uid; 44 + __fs32 di_gid; 45 + __fs32 di_ftime; 46 + __fs32 di_mtime; 47 + __fs32 di_atime; 48 + __fs32 di_ctime; 49 + __fs16 di_mode; 50 + __fs16 di_ext_mode; 51 + __fs32 di_block_ptr[QNX6_NO_DIRECT_POINTERS]; 52 + __u8 di_filelevels; 53 + __u8 di_status; 54 + __u8 di_unknown2[2]; 55 + __fs32 di_zero2[6]; 56 + }; 57 + 58 + /* 59 + * Each directory entry is maximum 32 bytes long. 60 + * If more characters or special characters required it is stored 61 + * in the longfilenames structure. 
62 + */ 63 + struct qnx6_dir_entry { 64 + __fs32 de_inode; 65 + __u8 de_size; 66 + char de_fname[QNX6_SHORT_NAME_MAX]; 67 + }; 68 + 69 + /* 70 + * Longfilename direntries have a different structure 71 + */ 72 + struct qnx6_long_dir_entry { 73 + __fs32 de_inode; 74 + __u8 de_size; 75 + __u8 de_unknown[3]; 76 + __fs32 de_long_inode; 77 + __fs32 de_checksum; 78 + }; 79 + 80 + struct qnx6_long_filename { 81 + __fs16 lf_size; 82 + __u8 lf_fname[QNX6_LONG_NAME_MAX]; 83 + }; 84 + 85 + struct qnx6_root_node { 86 + __fs64 size; 87 + __fs32 ptr[QNX6_NO_DIRECT_POINTERS]; 88 + __u8 levels; 89 + __u8 mode; 90 + __u8 spare[6]; 91 + }; 92 + 93 + struct qnx6_super_block { 94 + __fs32 sb_magic; 95 + __fs32 sb_checksum; 96 + __fs64 sb_serial; 97 + __fs32 sb_ctime; /* time the fs was created */ 98 + __fs32 sb_atime; /* last access time */ 99 + __fs32 sb_flags; 100 + __fs16 sb_version1; /* filesystem version information */ 101 + __fs16 sb_version2; /* filesystem version information */ 102 + __u8 sb_volumeid[16]; 103 + __fs32 sb_blocksize; 104 + __fs32 sb_num_inodes; 105 + __fs32 sb_free_inodes; 106 + __fs32 sb_num_blocks; 107 + __fs32 sb_free_blocks; 108 + __fs32 sb_allocgroup; 109 + struct qnx6_root_node Inode; 110 + struct qnx6_root_node Bitmap; 111 + struct qnx6_root_node Longfile; 112 + struct qnx6_root_node Unknown; 113 + }; 114 + 115 + /* Audi MMI 3G superblock layout is different to plain qnx6 */ 116 + struct qnx6_mmi_super_block { 117 + __fs32 sb_magic; 118 + __fs32 sb_checksum; 119 + __fs64 sb_serial; 120 + __u8 sb_spare0[12]; 121 + __u8 sb_id[12]; 122 + __fs32 sb_blocksize; 123 + __fs32 sb_num_inodes; 124 + __fs32 sb_free_inodes; 125 + __fs32 sb_num_blocks; 126 + __fs32 sb_free_blocks; 127 + __u8 sb_spare1[4]; 128 + struct qnx6_root_node Inode; 129 + struct qnx6_root_node Bitmap; 130 + struct qnx6_root_node Longfile; 131 + struct qnx6_root_node Unknown; 132 + }; 133 + 134 + #endif
include/linux/reiserfs_acl.h fs/reiserfs/acl.h
-2334
include/linux/reiserfs_fs.h
··· 1 1 /* 2 2 * Copyright 1996, 1997, 1998 Hans Reiser, see reiserfs/README for licensing and copyright details 3 3 */ 4 - 5 - /* this file has an amazingly stupid 6 - name, yura please fix it to be 7 - reiserfs.h, and merge all the rest 8 - of our .h files that are in this 9 - directory into it. */ 10 - 11 4 #ifndef _LINUX_REISER_FS_H 12 5 #define _LINUX_REISER_FS_H 13 6 14 7 #include <linux/types.h> 15 8 #include <linux/magic.h> 16 - 17 - #ifdef __KERNEL__ 18 - #include <linux/slab.h> 19 - #include <linux/interrupt.h> 20 - #include <linux/sched.h> 21 - #include <linux/workqueue.h> 22 - #include <asm/unaligned.h> 23 - #include <linux/bitops.h> 24 - #include <linux/proc_fs.h> 25 - #include <linux/buffer_head.h> 26 - #include <linux/reiserfs_fs_i.h> 27 - #include <linux/reiserfs_fs_sb.h> 28 - #endif 29 9 30 10 /* 31 11 * include/linux/reiser_fs.h ··· 22 42 #define REISERFS_IOC_SETFLAGS FS_IOC_SETFLAGS 23 43 #define REISERFS_IOC_GETVERSION FS_IOC_GETVERSION 24 44 #define REISERFS_IOC_SETVERSION FS_IOC_SETVERSION 25 - 26 - #ifdef __KERNEL__ 27 - /* the 32 bit compat definitions with int argument */ 28 - #define REISERFS_IOC32_UNPACK _IOW(0xCD, 1, int) 29 - #define REISERFS_IOC32_GETFLAGS FS_IOC32_GETFLAGS 30 - #define REISERFS_IOC32_SETFLAGS FS_IOC32_SETFLAGS 31 - #define REISERFS_IOC32_GETVERSION FS_IOC32_GETVERSION 32 - #define REISERFS_IOC32_SETVERSION FS_IOC32_SETVERSION 33 - 34 - /* 35 - * Locking primitives. The write lock is a per superblock 36 - * special mutex that has properties close to the Big Kernel Lock 37 - * which was used in the previous locking scheme. 
38 - */ 39 - void reiserfs_write_lock(struct super_block *s); 40 - void reiserfs_write_unlock(struct super_block *s); 41 - int reiserfs_write_lock_once(struct super_block *s); 42 - void reiserfs_write_unlock_once(struct super_block *s, int lock_depth); 43 - 44 - #ifdef CONFIG_REISERFS_CHECK 45 - void reiserfs_lock_check_recursive(struct super_block *s); 46 - #else 47 - static inline void reiserfs_lock_check_recursive(struct super_block *s) { } 48 - #endif 49 - 50 - /* 51 - * Several mutexes depend on the write lock. 52 - * However sometimes we want to relax the write lock while we hold 53 - * these mutexes, according to the release/reacquire on schedule() 54 - * properties of the Bkl that were used. 55 - * Reiserfs performances and locking were based on this scheme. 56 - * Now that the write lock is a mutex and not the bkl anymore, doing so 57 - * may result in a deadlock: 58 - * 59 - * A acquire write_lock 60 - * A acquire j_commit_mutex 61 - * A release write_lock and wait for something 62 - * B acquire write_lock 63 - * B can't acquire j_commit_mutex and sleep 64 - * A can't acquire write lock anymore 65 - * deadlock 66 - * 67 - * What we do here is avoiding such deadlock by playing the same game 68 - * than the Bkl: if we can't acquire a mutex that depends on the write lock, 69 - * we release the write lock, wait a bit and then retry. 
70 - * 71 - * The mutexes concerned by this hack are: 72 - * - The commit mutex of a journal list 73 - * - The flush mutex 74 - * - The journal lock 75 - * - The inode mutex 76 - */ 77 - static inline void reiserfs_mutex_lock_safe(struct mutex *m, 78 - struct super_block *s) 79 - { 80 - reiserfs_lock_check_recursive(s); 81 - reiserfs_write_unlock(s); 82 - mutex_lock(m); 83 - reiserfs_write_lock(s); 84 - } 85 - 86 - static inline void 87 - reiserfs_mutex_lock_nested_safe(struct mutex *m, unsigned int subclass, 88 - struct super_block *s) 89 - { 90 - reiserfs_lock_check_recursive(s); 91 - reiserfs_write_unlock(s); 92 - mutex_lock_nested(m, subclass); 93 - reiserfs_write_lock(s); 94 - } 95 - 96 - static inline void 97 - reiserfs_down_read_safe(struct rw_semaphore *sem, struct super_block *s) 98 - { 99 - reiserfs_lock_check_recursive(s); 100 - reiserfs_write_unlock(s); 101 - down_read(sem); 102 - reiserfs_write_lock(s); 103 - } 104 - 105 - /* 106 - * When we schedule, we usually want to also release the write lock, 107 - * according to the previous bkl based locking scheme of reiserfs. 108 - */ 109 - static inline void reiserfs_cond_resched(struct super_block *s) 110 - { 111 - if (need_resched()) { 112 - reiserfs_write_unlock(s); 113 - schedule(); 114 - reiserfs_write_lock(s); 115 - } 116 - } 117 - 118 - struct fid; 119 - 120 - /* in reading the #defines, it may help to understand that they employ 121 - the following abbreviations: 122 - 123 - B = Buffer 124 - I = Item header 125 - H = Height within the tree (should be changed to LEV) 126 - N = Number of the item in the node 127 - STAT = stat data 128 - DEH = Directory Entry Header 129 - EC = Entry Count 130 - E = Entry number 131 - UL = Unsigned Long 132 - BLKH = BLocK Header 133 - UNFM = UNForMatted node 134 - DC = Disk Child 135 - P = Path 136 - 137 - These #defines are named by concatenating these abbreviations, 138 - where first comes the arguments, and last comes the return value, 139 - of the macro. 
 140 -
 141 - */
 142 -
 143 - #define USE_INODE_GENERATION_COUNTER
 144 -
 145 - #define REISERFS_PREALLOCATE
 146 - #define DISPLACE_NEW_PACKING_LOCALITIES
 147 - #define PREALLOCATION_SIZE 9
 148 -
 149 - /* n must be power of 2 */
 150 - #define _ROUND_UP(x,n) (((x)+(n)-1u) & ~((n)-1u))
 151 -
 152 - // to be ok for alpha and others we have to align structures to 8 byte
 153 - // boundary.
 154 - // FIXME: do not change 4 by anything else: there is code which relies on that
 155 - #define ROUND_UP(x) _ROUND_UP(x,8LL)
 156 -
 157 - /* debug levels. Right now, CONFIG_REISERFS_CHECK means print all debug
 158 - ** messages.
 159 - */
 160 - #define REISERFS_DEBUG_CODE 5 /* extra messages to help find/debug errors */
 161 -
 162 - void __reiserfs_warning(struct super_block *s, const char *id,
 163 - const char *func, const char *fmt, ...);
 164 - #define reiserfs_warning(s, id, fmt, args...) \
 165 - __reiserfs_warning(s, id, __func__, fmt, ##args)
 166 - /* assertions handling */
 167 -
 168 - /** always check a condition and panic if it's false. */
 169 - #define __RASSERT(cond, scond, format, args...) \
 170 - do { \
 171 - if (!(cond)) \
 172 - reiserfs_panic(NULL, "assertion failure", "(" #cond ") at " \
 173 - __FILE__ ":%i:%s: " format "\n", \
 174 - in_interrupt() ? -1 : task_pid_nr(current), \
 175 - __LINE__, __func__ , ##args); \
 176 - } while (0)
 177 -
 178 - #define RASSERT(cond, format, args...) __RASSERT(cond, #cond, format, ##args)
 179 -
 180 - #if defined( CONFIG_REISERFS_CHECK )
 181 - #define RFALSE(cond, format, args...) __RASSERT(!(cond), "!(" #cond ")", format, ##args)
 182 - #else
 183 - #define RFALSE( cond, format, args... ) do {;} while( 0 )
 184 - #endif
 185 -
 186 - #define CONSTF __attribute_const__
 187 - /*
 188 - * Disk Data Structures
 189 - */
 190 -
 191 - /***************************************************************************/
 192 - /* SUPER BLOCK */
 193 - /***************************************************************************/
 194 -
 195 - /*
 196 - * Structure of super block on disk, a version of which in RAM is often accessed as REISERFS_SB(s)->s_rs
 197 - * the version in RAM is part of a larger structure containing fields never written to disk.
 198 - */
 199 - #define UNSET_HASH 0 // read_super will guess about, what hash names
 200 - // in directories were sorted with
 201 - #define TEA_HASH 1
 202 - #define YURA_HASH 2
 203 - #define R5_HASH 3
 204 - #define DEFAULT_HASH R5_HASH
 205 -
 206 - struct journal_params {
 207 - __le32 jp_journal_1st_block; /* where does journal start from on its
 208 - * device */
 209 - __le32 jp_journal_dev; /* journal device st_rdev */
 210 - __le32 jp_journal_size; /* size of the journal */
 211 - __le32 jp_journal_trans_max; /* max number of blocks in a transaction. */
 212 - __le32 jp_journal_magic; /* random value made on fs creation (this
 213 - * was sb_journal_block_count) */
 214 - __le32 jp_journal_max_batch; /* max number of blocks to batch into a
 215 - * trans */
 216 - __le32 jp_journal_max_commit_age; /* in seconds, how old can an async
 217 - * commit be */
 218 - __le32 jp_journal_max_trans_age; /* in seconds, how old can a transaction
 219 - * be */
 220 - };
 221 -
 222 - /* this is the super from 3.5.X, where X >= 10 */
 223 - struct reiserfs_super_block_v1 {
 224 - __le32 s_block_count; /* blocks count */
 225 - __le32 s_free_blocks; /* free blocks count */
 226 - __le32 s_root_block; /* root block number */
 227 - struct journal_params s_journal;
 228 - __le16 s_blocksize; /* block size */
 229 - __le16 s_oid_maxsize; /* max size of object id array, see
 230 - * get_objectid() commentary */
 231 - __le16 s_oid_cursize; /* current size of object id array */
 232 - __le16 s_umount_state; /* this is set to 1 when filesystem was
 233 - * umounted, to 2 - when not */
 234 - char s_magic[10]; /* reiserfs magic string indicates that
 235 - * file system is reiserfs:
 236 - * "ReIsErFs" or "ReIsEr2Fs" or "ReIsEr3Fs" */
 237 - __le16 s_fs_state; /* it is set to used by fsck to mark which
 238 - * phase of rebuilding is done */
 239 - __le32 s_hash_function_code; /* indicate, what hash function is being use
 240 - * to sort names in a directory*/
 241 - __le16 s_tree_height; /* height of disk tree */
 242 - __le16 s_bmap_nr; /* amount of bitmap blocks needed to address
 243 - * each block of file system */
 244 - __le16 s_version; /* this field is only reliable on filesystem
 245 - * with non-standard journal */
 246 - __le16 s_reserved_for_journal; /* size in blocks of journal area on main
 247 - * device, we need to keep after
 248 - * making fs with non-standard journal */
 249 - } __attribute__ ((__packed__));
 250 -
 251 - #define SB_SIZE_V1 (sizeof(struct reiserfs_super_block_v1))
 252 -
 253 - /* this is the on disk super block */
 254 - struct reiserfs_super_block {
 255 - struct reiserfs_super_block_v1 s_v1;
 256 - __le32 s_inode_generation;
 257 - __le32 s_flags; /* Right now used only by inode-attributes, if enabled */
 258 - unsigned char s_uuid[16]; /* filesystem unique identifier */
 259 - unsigned char s_label[16]; /* filesystem volume label */
 260 - __le16 s_mnt_count; /* Count of mounts since last fsck */
 261 - __le16 s_max_mnt_count; /* Maximum mounts before check */
 262 - __le32 s_lastcheck; /* Timestamp of last fsck */
 263 - __le32 s_check_interval; /* Interval between checks */
 264 - char s_unused[76]; /* zero filled by mkreiserfs and
 265 - * reiserfs_convert_objectid_map_v1()
 266 - * so any additions must be updated
 267 - * there as well. */
 268 - } __attribute__ ((__packed__));
 269 -
 270 - #define SB_SIZE (sizeof(struct reiserfs_super_block))
 271 -
 272 - #define REISERFS_VERSION_1 0
 273 - #define REISERFS_VERSION_2 2
 274 -
 275 - // on-disk super block fields converted to cpu form
 276 - #define SB_DISK_SUPER_BLOCK(s) (REISERFS_SB(s)->s_rs)
 277 - #define SB_V1_DISK_SUPER_BLOCK(s) (&(SB_DISK_SUPER_BLOCK(s)->s_v1))
 278 - #define SB_BLOCKSIZE(s) \
 279 - le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_blocksize))
 280 - #define SB_BLOCK_COUNT(s) \
 281 - le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_block_count))
 282 - #define SB_FREE_BLOCKS(s) \
 283 - le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_free_blocks))
 284 - #define SB_REISERFS_MAGIC(s) \
 285 - (SB_V1_DISK_SUPER_BLOCK(s)->s_magic)
 286 - #define SB_ROOT_BLOCK(s) \
 287 - le32_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_root_block))
 288 - #define SB_TREE_HEIGHT(s) \
 289 - le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_tree_height))
 290 - #define SB_REISERFS_STATE(s) \
 291 - le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_umount_state))
 292 - #define SB_VERSION(s) le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_version))
 293 - #define SB_BMAP_NR(s) le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_bmap_nr))
 294 -
 295 - #define PUT_SB_BLOCK_COUNT(s, val) \
 296 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_block_count = cpu_to_le32(val); } while (0)
 297 - #define PUT_SB_FREE_BLOCKS(s, val) \
 298 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_free_blocks = cpu_to_le32(val); } while (0)
 299 - #define PUT_SB_ROOT_BLOCK(s, val) \
 300 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_root_block = cpu_to_le32(val); } while (0)
 301 - #define PUT_SB_TREE_HEIGHT(s, val) \
 302 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_tree_height = cpu_to_le16(val); } while (0)
 303 - #define PUT_SB_REISERFS_STATE(s, val) \
 304 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_umount_state = cpu_to_le16(val); } while (0)
 305 - #define PUT_SB_VERSION(s, val) \
 306 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_version = cpu_to_le16(val); } while (0)
 307 - #define PUT_SB_BMAP_NR(s, val) \
 308 - do { SB_V1_DISK_SUPER_BLOCK(s)->s_bmap_nr = cpu_to_le16 (val); } while (0)
 309 -
 310 - #define SB_ONDISK_JP(s) (&SB_V1_DISK_SUPER_BLOCK(s)->s_journal)
 311 - #define SB_ONDISK_JOURNAL_SIZE(s) \
 312 - le32_to_cpu ((SB_ONDISK_JP(s)->jp_journal_size))
 313 - #define SB_ONDISK_JOURNAL_1st_BLOCK(s) \
 314 - le32_to_cpu ((SB_ONDISK_JP(s)->jp_journal_1st_block))
 315 - #define SB_ONDISK_JOURNAL_DEVICE(s) \
 316 - le32_to_cpu ((SB_ONDISK_JP(s)->jp_journal_dev))
 317 - #define SB_ONDISK_RESERVED_FOR_JOURNAL(s) \
 318 - le16_to_cpu ((SB_V1_DISK_SUPER_BLOCK(s)->s_reserved_for_journal))
 319 -
 320 - #define is_block_in_log_or_reserved_area(s, block) \
 321 - block >= SB_JOURNAL_1st_RESERVED_BLOCK(s) \
 322 - && block < SB_JOURNAL_1st_RESERVED_BLOCK(s) + \
 323 - ((!is_reiserfs_jr(SB_DISK_SUPER_BLOCK(s)) ? \
 324 - SB_ONDISK_JOURNAL_SIZE(s) + 1 : SB_ONDISK_RESERVED_FOR_JOURNAL(s)))
 325 -
 326 - int is_reiserfs_3_5(struct reiserfs_super_block *rs);
 327 - int is_reiserfs_3_6(struct reiserfs_super_block *rs);
 328 - int is_reiserfs_jr(struct reiserfs_super_block *rs);
 329 -
 330 - /* ReiserFS leaves the first 64k unused, so that partition labels have
 331 - enough space. If someone wants to write a fancy bootloader that
 332 - needs more than 64k, let us know, and this will be increased in size.
 333 - This number must be larger than than the largest block size on any
 334 - platform, or code will break. -Hans */
 335 - #define REISERFS_DISK_OFFSET_IN_BYTES (64 * 1024)
 336 - #define REISERFS_FIRST_BLOCK unused_define
 337 - #define REISERFS_JOURNAL_OFFSET_IN_BYTES REISERFS_DISK_OFFSET_IN_BYTES
 338 -
 339 - /* the spot for the super in versions 3.5 - 3.5.10 (inclusive) */
 340 - #define REISERFS_OLD_DISK_OFFSET_IN_BYTES (8 * 1024)
 341 -
 342 - /* reiserfs internal error code (used by search_by_key and fix_nodes)) */
 343 - #define CARRY_ON 0
 344 - #define REPEAT_SEARCH -1
 345 - #define IO_ERROR -2
 346 - #define NO_DISK_SPACE -3
 347 - #define NO_BALANCING_NEEDED (-4)
 348 - #define NO_MORE_UNUSED_CONTIGUOUS_BLOCKS (-5)
 349 - #define QUOTA_EXCEEDED -6
 350 -
 351 - typedef __u32 b_blocknr_t;
 352 - typedef __le32 unp_t;
 353 -
 354 - struct unfm_nodeinfo {
 355 - unp_t unfm_nodenum;
 356 - unsigned short unfm_freespace;
 357 - };
 358 -
 359 - /* there are two formats of keys: 3.5 and 3.6
 360 - */
 361 - #define KEY_FORMAT_3_5 0
 362 - #define KEY_FORMAT_3_6 1
 363 -
 364 - /* there are two stat datas */
 365 - #define STAT_DATA_V1 0
 366 - #define STAT_DATA_V2 1
 367 -
 368 - static inline struct reiserfs_inode_info *REISERFS_I(const struct inode *inode)
 369 - {
 370 - return container_of(inode, struct reiserfs_inode_info, vfs_inode);
 371 - }
 372 -
 373 - static inline struct reiserfs_sb_info *REISERFS_SB(const struct super_block *sb)
 374 - {
 375 - return sb->s_fs_info;
 376 - }
 377 -
 378 - /* Don't trust REISERFS_SB(sb)->s_bmap_nr, it's a u16
 379 - * which overflows on large file systems. */
 380 - static inline __u32 reiserfs_bmap_count(struct super_block *sb)
 381 - {
 382 - return (SB_BLOCK_COUNT(sb) - 1) / (sb->s_blocksize * 8) + 1;
 383 - }
 384 -
 385 - static inline int bmap_would_wrap(unsigned bmap_nr)
 386 - {
 387 - return bmap_nr > ((1LL << 16) - 1);
 388 - }
 389 -
 390 - /** this says about version of key of all items (but stat data) the
 391 - object consists of */
 392 - #define get_inode_item_key_version( inode ) \
 393 - ((REISERFS_I(inode)->i_flags & i_item_key_version_mask) ? KEY_FORMAT_3_6 : KEY_FORMAT_3_5)
 394 -
 395 - #define set_inode_item_key_version( inode, version ) \
 396 - ({ if((version)==KEY_FORMAT_3_6) \
 397 - REISERFS_I(inode)->i_flags |= i_item_key_version_mask; \
 398 - else \
 399 - REISERFS_I(inode)->i_flags &= ~i_item_key_version_mask; })
 400 -
 401 - #define get_inode_sd_version(inode) \
 402 - ((REISERFS_I(inode)->i_flags & i_stat_data_version_mask) ? STAT_DATA_V2 : STAT_DATA_V1)
 403 -
 404 - #define set_inode_sd_version(inode, version) \
 405 - ({ if((version)==STAT_DATA_V2) \
 406 - REISERFS_I(inode)->i_flags |= i_stat_data_version_mask; \
 407 - else \
 408 - REISERFS_I(inode)->i_flags &= ~i_stat_data_version_mask; })
 409 -
 410 - /* This is an aggressive tail suppression policy, I am hoping it
 411 - improves our benchmarks. The principle behind it is that percentage
 412 - space saving is what matters, not absolute space saving. This is
 413 - non-intuitive, but it helps to understand it if you consider that the
 414 - cost to access 4 blocks is not much more than the cost to access 1
 415 - block, if you have to do a seek and rotate. A tail risks a
 416 - non-linear disk access that is significant as a percentage of total
 417 - time cost for a 4 block file and saves an amount of space that is
 418 - less significant as a percentage of space, or so goes the hypothesis.
419 - -Hans */ 420 - #define STORE_TAIL_IN_UNFM_S1(n_file_size,n_tail_size,n_block_size) \ 421 - (\ 422 - (!(n_tail_size)) || \ 423 - (((n_tail_size) > MAX_DIRECT_ITEM_LEN(n_block_size)) || \ 424 - ( (n_file_size) >= (n_block_size) * 4 ) || \ 425 - ( ( (n_file_size) >= (n_block_size) * 3 ) && \ 426 - ( (n_tail_size) >= (MAX_DIRECT_ITEM_LEN(n_block_size))/4) ) || \ 427 - ( ( (n_file_size) >= (n_block_size) * 2 ) && \ 428 - ( (n_tail_size) >= (MAX_DIRECT_ITEM_LEN(n_block_size))/2) ) || \ 429 - ( ( (n_file_size) >= (n_block_size) ) && \ 430 - ( (n_tail_size) >= (MAX_DIRECT_ITEM_LEN(n_block_size) * 3)/4) ) ) \ 431 - ) 432 - 433 - /* Another strategy for tails, this one means only create a tail if all the 434 - file would fit into one DIRECT item. 435 - Primary intention for this one is to increase performance by decreasing 436 - seeking. 437 - */ 438 - #define STORE_TAIL_IN_UNFM_S2(n_file_size,n_tail_size,n_block_size) \ 439 - (\ 440 - (!(n_tail_size)) || \ 441 - (((n_file_size) > MAX_DIRECT_ITEM_LEN(n_block_size)) ) \ 442 - ) 443 - 444 - /* 445 - * values for s_umount_state field 446 - */ 447 - #define REISERFS_VALID_FS 1 448 - #define REISERFS_ERROR_FS 2 449 - 450 - // 451 - // there are 5 item types currently 452 - // 453 - #define TYPE_STAT_DATA 0 454 - #define TYPE_INDIRECT 1 455 - #define TYPE_DIRECT 2 456 - #define TYPE_DIRENTRY 3 457 - #define TYPE_MAXTYPE 3 458 - #define TYPE_ANY 15 // FIXME: comment is required 459 - 460 - /***************************************************************************/ 461 - /* KEY & ITEM HEAD */ 462 - /***************************************************************************/ 463 - 464 - // 465 - // directories use this key as well as old files 466 - // 467 - struct offset_v1 { 468 - __le32 k_offset; 469 - __le32 k_uniqueness; 470 - } __attribute__ ((__packed__)); 471 - 472 - struct offset_v2 { 473 - __le64 v; 474 - } __attribute__ ((__packed__)); 475 - 476 - static inline __u16 offset_v2_k_type(const struct offset_v2 *v2) 
477 - { 478 - __u8 type = le64_to_cpu(v2->v) >> 60; 479 - return (type <= TYPE_MAXTYPE) ? type : TYPE_ANY; 480 - } 481 - 482 - static inline void set_offset_v2_k_type(struct offset_v2 *v2, int type) 483 - { 484 - v2->v = 485 - (v2->v & cpu_to_le64(~0ULL >> 4)) | cpu_to_le64((__u64) type << 60); 486 - } 487 - 488 - static inline loff_t offset_v2_k_offset(const struct offset_v2 *v2) 489 - { 490 - return le64_to_cpu(v2->v) & (~0ULL >> 4); 491 - } 492 - 493 - static inline void set_offset_v2_k_offset(struct offset_v2 *v2, loff_t offset) 494 - { 495 - offset &= (~0ULL >> 4); 496 - v2->v = (v2->v & cpu_to_le64(15ULL << 60)) | cpu_to_le64(offset); 497 - } 498 - 499 - /* Key of an item determines its location in the S+tree, and 500 - is composed of 4 components */ 501 - struct reiserfs_key { 502 - __le32 k_dir_id; /* packing locality: by default parent 503 - directory object id */ 504 - __le32 k_objectid; /* object identifier */ 505 - union { 506 - struct offset_v1 k_offset_v1; 507 - struct offset_v2 k_offset_v2; 508 - } __attribute__ ((__packed__)) u; 509 - } __attribute__ ((__packed__)); 510 - 511 - struct in_core_key { 512 - __u32 k_dir_id; /* packing locality: by default parent 513 - directory object id */ 514 - __u32 k_objectid; /* object identifier */ 515 - __u64 k_offset; 516 - __u8 k_type; 517 - }; 518 - 519 - struct cpu_key { 520 - struct in_core_key on_disk_key; 521 - int version; 522 - int key_length; /* 3 in all cases but direct2indirect and 523 - indirect2direct conversion */ 524 - }; 525 - 526 - /* Our function for comparing keys can compare keys of different 527 - lengths. It takes as a parameter the length of the keys it is to 528 - compare. These defines are used in determining what is to be passed 529 - to it as that parameter. 
*/ 530 - #define REISERFS_FULL_KEY_LEN 4 531 - #define REISERFS_SHORT_KEY_LEN 2 532 - 533 - /* The result of the key compare */ 534 - #define FIRST_GREATER 1 535 - #define SECOND_GREATER -1 536 - #define KEYS_IDENTICAL 0 537 - #define KEY_FOUND 1 538 - #define KEY_NOT_FOUND 0 539 - 540 - #define KEY_SIZE (sizeof(struct reiserfs_key)) 541 - #define SHORT_KEY_SIZE (sizeof (__u32) + sizeof (__u32)) 542 - 543 - /* return values for search_by_key and clones */ 544 - #define ITEM_FOUND 1 545 - #define ITEM_NOT_FOUND 0 546 - #define ENTRY_FOUND 1 547 - #define ENTRY_NOT_FOUND 0 548 - #define DIRECTORY_NOT_FOUND -1 549 - #define REGULAR_FILE_FOUND -2 550 - #define DIRECTORY_FOUND -3 551 - #define BYTE_FOUND 1 552 - #define BYTE_NOT_FOUND 0 553 - #define FILE_NOT_FOUND -1 554 - 555 - #define POSITION_FOUND 1 556 - #define POSITION_NOT_FOUND 0 557 - 558 - // return values for reiserfs_find_entry and search_by_entry_key 559 - #define NAME_FOUND 1 560 - #define NAME_NOT_FOUND 0 561 - #define GOTO_PREVIOUS_ITEM 2 562 - #define NAME_FOUND_INVISIBLE 3 563 - 564 - /* Everything in the filesystem is stored as a set of items. The 565 - item head contains the key of the item, its free space (for 566 - indirect items) and specifies the location of the item itself 567 - within the block. */ 568 - 569 - struct item_head { 570 - /* Everything in the tree is found by searching for it based on 571 - * its key.*/ 572 - struct reiserfs_key ih_key; 573 - union { 574 - /* The free space in the last unformatted node of an 575 - indirect item if this is an indirect item. This 576 - equals 0xFFFF iff this is a direct item or stat data 577 - item. Note that the key, not this field, is used to 578 - determine the item type, and thus which field this 579 - union contains. */ 580 - __le16 ih_free_space_reserved; 581 - /* Iff this is a directory item, this field equals the 582 - number of directory entries in the directory item. 
*/ 583 - __le16 ih_entry_count; 584 - } __attribute__ ((__packed__)) u; 585 - __le16 ih_item_len; /* total size of the item body */ 586 - __le16 ih_item_location; /* an offset to the item body 587 - * within the block */ 588 - __le16 ih_version; /* 0 for all old items, 2 for new 589 - ones. Highest bit is set by fsck 590 - temporary, cleaned after all 591 - done */ 592 - } __attribute__ ((__packed__)); 593 - /* size of item header */ 594 - #define IH_SIZE (sizeof(struct item_head)) 595 - 596 - #define ih_free_space(ih) le16_to_cpu((ih)->u.ih_free_space_reserved) 597 - #define ih_version(ih) le16_to_cpu((ih)->ih_version) 598 - #define ih_entry_count(ih) le16_to_cpu((ih)->u.ih_entry_count) 599 - #define ih_location(ih) le16_to_cpu((ih)->ih_item_location) 600 - #define ih_item_len(ih) le16_to_cpu((ih)->ih_item_len) 601 - 602 - #define put_ih_free_space(ih, val) do { (ih)->u.ih_free_space_reserved = cpu_to_le16(val); } while(0) 603 - #define put_ih_version(ih, val) do { (ih)->ih_version = cpu_to_le16(val); } while (0) 604 - #define put_ih_entry_count(ih, val) do { (ih)->u.ih_entry_count = cpu_to_le16(val); } while (0) 605 - #define put_ih_location(ih, val) do { (ih)->ih_item_location = cpu_to_le16(val); } while (0) 606 - #define put_ih_item_len(ih, val) do { (ih)->ih_item_len = cpu_to_le16(val); } while (0) 607 - 608 - #define unreachable_item(ih) (ih_version(ih) & (1 << 15)) 609 - 610 - #define get_ih_free_space(ih) (ih_version (ih) == KEY_FORMAT_3_6 ? 0 : ih_free_space (ih)) 611 - #define set_ih_free_space(ih,val) put_ih_free_space((ih), ((ih_version(ih) == KEY_FORMAT_3_6) ? 0 : (val))) 612 - 613 - /* these operate on indirect items, where you've got an array of ints 614 - ** at a possibly unaligned location. These are a noop on ia32 615 - ** 616 - ** p is the array of __u32, i is the index into the array, v is the value 617 - ** to store there. 
618 - */ 619 - #define get_block_num(p, i) get_unaligned_le32((p) + (i)) 620 - #define put_block_num(p, i, v) put_unaligned_le32((v), (p) + (i)) 621 - 622 - // 623 - // in old version uniqueness field shows key type 624 - // 625 - #define V1_SD_UNIQUENESS 0 626 - #define V1_INDIRECT_UNIQUENESS 0xfffffffe 627 - #define V1_DIRECT_UNIQUENESS 0xffffffff 628 - #define V1_DIRENTRY_UNIQUENESS 500 629 - #define V1_ANY_UNIQUENESS 555 // FIXME: comment is required 630 - 631 - // 632 - // here are conversion routines 633 - // 634 - static inline int uniqueness2type(__u32 uniqueness) CONSTF; 635 - static inline int uniqueness2type(__u32 uniqueness) 636 - { 637 - switch ((int)uniqueness) { 638 - case V1_SD_UNIQUENESS: 639 - return TYPE_STAT_DATA; 640 - case V1_INDIRECT_UNIQUENESS: 641 - return TYPE_INDIRECT; 642 - case V1_DIRECT_UNIQUENESS: 643 - return TYPE_DIRECT; 644 - case V1_DIRENTRY_UNIQUENESS: 645 - return TYPE_DIRENTRY; 646 - case V1_ANY_UNIQUENESS: 647 - default: 648 - return TYPE_ANY; 649 - } 650 - } 651 - 652 - static inline __u32 type2uniqueness(int type) CONSTF; 653 - static inline __u32 type2uniqueness(int type) 654 - { 655 - switch (type) { 656 - case TYPE_STAT_DATA: 657 - return V1_SD_UNIQUENESS; 658 - case TYPE_INDIRECT: 659 - return V1_INDIRECT_UNIQUENESS; 660 - case TYPE_DIRECT: 661 - return V1_DIRECT_UNIQUENESS; 662 - case TYPE_DIRENTRY: 663 - return V1_DIRENTRY_UNIQUENESS; 664 - case TYPE_ANY: 665 - default: 666 - return V1_ANY_UNIQUENESS; 667 - } 668 - } 669 - 670 - // 671 - // key is pointer to on disk key which is stored in le, result is cpu, 672 - // there is no way to get version of object from key, so, provide 673 - // version to these defines 674 - // 675 - static inline loff_t le_key_k_offset(int version, 676 - const struct reiserfs_key *key) 677 - { 678 - return (version == KEY_FORMAT_3_5) ? 
679 - le32_to_cpu(key->u.k_offset_v1.k_offset) : 680 - offset_v2_k_offset(&(key->u.k_offset_v2)); 681 - } 682 - 683 - static inline loff_t le_ih_k_offset(const struct item_head *ih) 684 - { 685 - return le_key_k_offset(ih_version(ih), &(ih->ih_key)); 686 - } 687 - 688 - static inline loff_t le_key_k_type(int version, const struct reiserfs_key *key) 689 - { 690 - return (version == KEY_FORMAT_3_5) ? 691 - uniqueness2type(le32_to_cpu(key->u.k_offset_v1.k_uniqueness)) : 692 - offset_v2_k_type(&(key->u.k_offset_v2)); 693 - } 694 - 695 - static inline loff_t le_ih_k_type(const struct item_head *ih) 696 - { 697 - return le_key_k_type(ih_version(ih), &(ih->ih_key)); 698 - } 699 - 700 - static inline void set_le_key_k_offset(int version, struct reiserfs_key *key, 701 - loff_t offset) 702 - { 703 - (version == KEY_FORMAT_3_5) ? (void)(key->u.k_offset_v1.k_offset = cpu_to_le32(offset)) : /* jdm check */ 704 - (void)(set_offset_v2_k_offset(&(key->u.k_offset_v2), offset)); 705 - } 706 - 707 - static inline void set_le_ih_k_offset(struct item_head *ih, loff_t offset) 708 - { 709 - set_le_key_k_offset(ih_version(ih), &(ih->ih_key), offset); 710 - } 711 - 712 - static inline void set_le_key_k_type(int version, struct reiserfs_key *key, 713 - int type) 714 - { 715 - (version == KEY_FORMAT_3_5) ? 
716 - (void)(key->u.k_offset_v1.k_uniqueness = 717 - cpu_to_le32(type2uniqueness(type))) 718 - : (void)(set_offset_v2_k_type(&(key->u.k_offset_v2), type)); 719 - } 720 - 721 - static inline void set_le_ih_k_type(struct item_head *ih, int type) 722 - { 723 - set_le_key_k_type(ih_version(ih), &(ih->ih_key), type); 724 - } 725 - 726 - static inline int is_direntry_le_key(int version, struct reiserfs_key *key) 727 - { 728 - return le_key_k_type(version, key) == TYPE_DIRENTRY; 729 - } 730 - 731 - static inline int is_direct_le_key(int version, struct reiserfs_key *key) 732 - { 733 - return le_key_k_type(version, key) == TYPE_DIRECT; 734 - } 735 - 736 - static inline int is_indirect_le_key(int version, struct reiserfs_key *key) 737 - { 738 - return le_key_k_type(version, key) == TYPE_INDIRECT; 739 - } 740 - 741 - static inline int is_statdata_le_key(int version, struct reiserfs_key *key) 742 - { 743 - return le_key_k_type(version, key) == TYPE_STAT_DATA; 744 - } 745 - 746 - // 747 - // item header has version. 
748 - // 749 - static inline int is_direntry_le_ih(struct item_head *ih) 750 - { 751 - return is_direntry_le_key(ih_version(ih), &ih->ih_key); 752 - } 753 - 754 - static inline int is_direct_le_ih(struct item_head *ih) 755 - { 756 - return is_direct_le_key(ih_version(ih), &ih->ih_key); 757 - } 758 - 759 - static inline int is_indirect_le_ih(struct item_head *ih) 760 - { 761 - return is_indirect_le_key(ih_version(ih), &ih->ih_key); 762 - } 763 - 764 - static inline int is_statdata_le_ih(struct item_head *ih) 765 - { 766 - return is_statdata_le_key(ih_version(ih), &ih->ih_key); 767 - } 768 - 769 - // 770 - // key is pointer to cpu key, result is cpu 771 - // 772 - static inline loff_t cpu_key_k_offset(const struct cpu_key *key) 773 - { 774 - return key->on_disk_key.k_offset; 775 - } 776 - 777 - static inline loff_t cpu_key_k_type(const struct cpu_key *key) 778 - { 779 - return key->on_disk_key.k_type; 780 - } 781 - 782 - static inline void set_cpu_key_k_offset(struct cpu_key *key, loff_t offset) 783 - { 784 - key->on_disk_key.k_offset = offset; 785 - } 786 - 787 - static inline void set_cpu_key_k_type(struct cpu_key *key, int type) 788 - { 789 - key->on_disk_key.k_type = type; 790 - } 791 - 792 - static inline void cpu_key_k_offset_dec(struct cpu_key *key) 793 - { 794 - key->on_disk_key.k_offset--; 795 - } 796 - 797 - #define is_direntry_cpu_key(key) (cpu_key_k_type (key) == TYPE_DIRENTRY) 798 - #define is_direct_cpu_key(key) (cpu_key_k_type (key) == TYPE_DIRECT) 799 - #define is_indirect_cpu_key(key) (cpu_key_k_type (key) == TYPE_INDIRECT) 800 - #define is_statdata_cpu_key(key) (cpu_key_k_type (key) == TYPE_STAT_DATA) 801 - 802 - /* are these used ? 
*/ 803 - #define is_direntry_cpu_ih(ih) (is_direntry_cpu_key (&((ih)->ih_key))) 804 - #define is_direct_cpu_ih(ih) (is_direct_cpu_key (&((ih)->ih_key))) 805 - #define is_indirect_cpu_ih(ih) (is_indirect_cpu_key (&((ih)->ih_key))) 806 - #define is_statdata_cpu_ih(ih) (is_statdata_cpu_key (&((ih)->ih_key))) 807 - 808 - #define I_K_KEY_IN_ITEM(ih, key, n_blocksize) \ 809 - (!COMP_SHORT_KEYS(ih, key) && \ 810 - I_OFF_BYTE_IN_ITEM(ih, k_offset(key), n_blocksize)) 811 - 812 - /* maximal length of item */ 813 - #define MAX_ITEM_LEN(block_size) (block_size - BLKH_SIZE - IH_SIZE) 814 - #define MIN_ITEM_LEN 1 815 - 816 - /* object identifier for root dir */ 817 - #define REISERFS_ROOT_OBJECTID 2 818 - #define REISERFS_ROOT_PARENT_OBJECTID 1 819 - 820 - extern struct reiserfs_key root_key; 821 - 822 - /* 823 - * Picture represents a leaf of the S+tree 824 - * ______________________________________________________ 825 - * | | Array of | | | 826 - * |Block | Object-Item | F r e e | Objects- | 827 - * | head | Headers | S p a c e | Items | 828 - * |______|_______________|___________________|___________| 829 - */ 830 - 831 - /* Header of a disk block. More precisely, header of a formatted leaf 832 - or internal node, and not the header of an unformatted node. */ 833 - struct block_head { 834 - __le16 blk_level; /* Level of a block in the tree. */ 835 - __le16 blk_nr_item; /* Number of keys/items in a block. */ 836 - __le16 blk_free_space; /* Block free space in bytes. 
*/ 837 - __le16 blk_reserved; 838 - /* dump this in v4/planA */ 839 - struct reiserfs_key blk_right_delim_key; /* kept only for compatibility */ 840 - }; 841 - 842 - #define BLKH_SIZE (sizeof(struct block_head)) 843 - #define blkh_level(p_blkh) (le16_to_cpu((p_blkh)->blk_level)) 844 - #define blkh_nr_item(p_blkh) (le16_to_cpu((p_blkh)->blk_nr_item)) 845 - #define blkh_free_space(p_blkh) (le16_to_cpu((p_blkh)->blk_free_space)) 846 - #define blkh_reserved(p_blkh) (le16_to_cpu((p_blkh)->blk_reserved)) 847 - #define set_blkh_level(p_blkh,val) ((p_blkh)->blk_level = cpu_to_le16(val)) 848 - #define set_blkh_nr_item(p_blkh,val) ((p_blkh)->blk_nr_item = cpu_to_le16(val)) 849 - #define set_blkh_free_space(p_blkh,val) ((p_blkh)->blk_free_space = cpu_to_le16(val)) 850 - #define set_blkh_reserved(p_blkh,val) ((p_blkh)->blk_reserved = cpu_to_le16(val)) 851 - #define blkh_right_delim_key(p_blkh) ((p_blkh)->blk_right_delim_key) 852 - #define set_blkh_right_delim_key(p_blkh,val) ((p_blkh)->blk_right_delim_key = val) 853 - 854 - /* 855 - * values for blk_level field of the struct block_head 856 - */ 857 - 858 - #define FREE_LEVEL 0 /* when node gets removed from the tree its 859 - blk_level is set to FREE_LEVEL. It is then 860 - used to see whether the node is still in the 861 - tree */ 862 - 863 - #define DISK_LEAF_NODE_LEVEL 1 /* Leaf node level. */ 864 - 865 - /* Given the buffer head of a formatted node, resolve to the block head of that node. */ 866 - #define B_BLK_HEAD(bh) ((struct block_head *)((bh)->b_data)) 867 - /* Number of items that are in buffer. 
*/ 868 - #define B_NR_ITEMS(bh) (blkh_nr_item(B_BLK_HEAD(bh))) 869 - #define B_LEVEL(bh) (blkh_level(B_BLK_HEAD(bh))) 870 - #define B_FREE_SPACE(bh) (blkh_free_space(B_BLK_HEAD(bh))) 871 - 872 - #define PUT_B_NR_ITEMS(bh, val) do { set_blkh_nr_item(B_BLK_HEAD(bh), val); } while (0) 873 - #define PUT_B_LEVEL(bh, val) do { set_blkh_level(B_BLK_HEAD(bh), val); } while (0) 874 - #define PUT_B_FREE_SPACE(bh, val) do { set_blkh_free_space(B_BLK_HEAD(bh), val); } while (0) 875 - 876 - /* Get right delimiting key. -- little endian */ 877 - #define B_PRIGHT_DELIM_KEY(bh) (&(blk_right_delim_key(B_BLK_HEAD(bh)))) 878 - 879 - /* Does the buffer contain a disk leaf. */ 880 - #define B_IS_ITEMS_LEVEL(bh) (B_LEVEL(bh) == DISK_LEAF_NODE_LEVEL) 881 - 882 - /* Does the buffer contain a disk internal node */ 883 - #define B_IS_KEYS_LEVEL(bh) (B_LEVEL(bh) > DISK_LEAF_NODE_LEVEL \ 884 - && B_LEVEL(bh) <= MAX_HEIGHT) 885 - 886 - /***************************************************************************/ 887 - /* STAT DATA */ 888 - /***************************************************************************/ 889 - 890 - // 891 - // old stat data is 32 bytes long. 
We are going to distinguish new one by 892 - // different size 893 - // 894 - struct stat_data_v1 { 895 - __le16 sd_mode; /* file type, permissions */ 896 - __le16 sd_nlink; /* number of hard links */ 897 - __le16 sd_uid; /* owner */ 898 - __le16 sd_gid; /* group */ 899 - __le32 sd_size; /* file size */ 900 - __le32 sd_atime; /* time of last access */ 901 - __le32 sd_mtime; /* time file was last modified */ 902 - __le32 sd_ctime; /* time inode (stat data) was last changed (except changes to sd_atime and sd_mtime) */ 903 - union { 904 - __le32 sd_rdev; 905 - __le32 sd_blocks; /* number of blocks file uses */ 906 - } __attribute__ ((__packed__)) u; 907 - __le32 sd_first_direct_byte; /* first byte of file which is stored 908 - in a direct item: except that if it 909 - equals 1 it is a symlink and if it 910 - equals ~(__u32)0 there is no 911 - direct item. The existence of this 912 - field really grates on me. Let's 913 - replace it with a macro based on 914 - sd_size and our tail suppression 915 - policy. Someday. 
-Hans */ 916 - } __attribute__ ((__packed__)); 917 - 918 - #define SD_V1_SIZE (sizeof(struct stat_data_v1)) 919 - #define stat_data_v1(ih) (ih_version (ih) == KEY_FORMAT_3_5) 920 - #define sd_v1_mode(sdp) (le16_to_cpu((sdp)->sd_mode)) 921 - #define set_sd_v1_mode(sdp,v) ((sdp)->sd_mode = cpu_to_le16(v)) 922 - #define sd_v1_nlink(sdp) (le16_to_cpu((sdp)->sd_nlink)) 923 - #define set_sd_v1_nlink(sdp,v) ((sdp)->sd_nlink = cpu_to_le16(v)) 924 - #define sd_v1_uid(sdp) (le16_to_cpu((sdp)->sd_uid)) 925 - #define set_sd_v1_uid(sdp,v) ((sdp)->sd_uid = cpu_to_le16(v)) 926 - #define sd_v1_gid(sdp) (le16_to_cpu((sdp)->sd_gid)) 927 - #define set_sd_v1_gid(sdp,v) ((sdp)->sd_gid = cpu_to_le16(v)) 928 - #define sd_v1_size(sdp) (le32_to_cpu((sdp)->sd_size)) 929 - #define set_sd_v1_size(sdp,v) ((sdp)->sd_size = cpu_to_le32(v)) 930 - #define sd_v1_atime(sdp) (le32_to_cpu((sdp)->sd_atime)) 931 - #define set_sd_v1_atime(sdp,v) ((sdp)->sd_atime = cpu_to_le32(v)) 932 - #define sd_v1_mtime(sdp) (le32_to_cpu((sdp)->sd_mtime)) 933 - #define set_sd_v1_mtime(sdp,v) ((sdp)->sd_mtime = cpu_to_le32(v)) 934 - #define sd_v1_ctime(sdp) (le32_to_cpu((sdp)->sd_ctime)) 935 - #define set_sd_v1_ctime(sdp,v) ((sdp)->sd_ctime = cpu_to_le32(v)) 936 - #define sd_v1_rdev(sdp) (le32_to_cpu((sdp)->u.sd_rdev)) 937 - #define set_sd_v1_rdev(sdp,v) ((sdp)->u.sd_rdev = cpu_to_le32(v)) 938 - #define sd_v1_blocks(sdp) (le32_to_cpu((sdp)->u.sd_blocks)) 939 - #define set_sd_v1_blocks(sdp,v) ((sdp)->u.sd_blocks = cpu_to_le32(v)) 940 - #define sd_v1_first_direct_byte(sdp) \ 941 - (le32_to_cpu((sdp)->sd_first_direct_byte)) 942 - #define set_sd_v1_first_direct_byte(sdp,v) \ 943 - ((sdp)->sd_first_direct_byte = cpu_to_le32(v)) 944 - 945 - /* inode flags stored in sd_attrs (nee sd_reserved) */ 946 - 947 - /* we want common flags to have the same values as in ext2, 948 - so chattr(1) will work without problems */ 949 - #define REISERFS_IMMUTABLE_FL FS_IMMUTABLE_FL 950 - #define REISERFS_APPEND_FL FS_APPEND_FL 951 - #define 
REISERFS_SYNC_FL FS_SYNC_FL 952 - #define REISERFS_NOATIME_FL FS_NOATIME_FL 953 - #define REISERFS_NODUMP_FL FS_NODUMP_FL 954 - #define REISERFS_SECRM_FL FS_SECRM_FL 955 - #define REISERFS_UNRM_FL FS_UNRM_FL 956 - #define REISERFS_COMPR_FL FS_COMPR_FL 957 - #define REISERFS_NOTAIL_FL FS_NOTAIL_FL 958 - 959 - /* persistent flags that file inherits from the parent directory */ 960 - #define REISERFS_INHERIT_MASK ( REISERFS_IMMUTABLE_FL | \ 961 - REISERFS_SYNC_FL | \ 962 - REISERFS_NOATIME_FL | \ 963 - REISERFS_NODUMP_FL | \ 964 - REISERFS_SECRM_FL | \ 965 - REISERFS_COMPR_FL | \ 966 - REISERFS_NOTAIL_FL ) 967 - 968 - /* Stat Data on disk (reiserfs version of UFS disk inode minus the 969 - address blocks) */ 970 - struct stat_data { 971 - __le16 sd_mode; /* file type, permissions */ 972 - __le16 sd_attrs; /* persistent inode flags */ 973 - __le32 sd_nlink; /* number of hard links */ 974 - __le64 sd_size; /* file size */ 975 - __le32 sd_uid; /* owner */ 976 - __le32 sd_gid; /* group */ 977 - __le32 sd_atime; /* time of last access */ 978 - __le32 sd_mtime; /* time file was last modified */ 979 - __le32 sd_ctime; /* time inode (stat data) was last changed (except changes to sd_atime and sd_mtime) */ 980 - __le32 sd_blocks; 981 - union { 982 - __le32 sd_rdev; 983 - __le32 sd_generation; 984 - //__le32 sd_first_direct_byte; 985 - /* first byte of file which is stored in a 986 - direct item: except that if it equals 1 987 - it is a symlink and if it equals 988 - ~(__u32)0 there is no direct item. The 989 - existence of this field really grates 990 - on me. Let's replace it with a macro 991 - based on sd_size and our tail 992 - suppression policy? 
*/ 993 - } __attribute__ ((__packed__)) u; 994 - } __attribute__ ((__packed__)); 995 - // 996 - // this is 44 bytes long 997 - // 998 - #define SD_SIZE (sizeof(struct stat_data)) 999 - #define SD_V2_SIZE SD_SIZE 1000 - #define stat_data_v2(ih) (ih_version (ih) == KEY_FORMAT_3_6) 1001 - #define sd_v2_mode(sdp) (le16_to_cpu((sdp)->sd_mode)) 1002 - #define set_sd_v2_mode(sdp,v) ((sdp)->sd_mode = cpu_to_le16(v)) 1003 - /* sd_reserved */ 1004 - /* set_sd_reserved */ 1005 - #define sd_v2_nlink(sdp) (le32_to_cpu((sdp)->sd_nlink)) 1006 - #define set_sd_v2_nlink(sdp,v) ((sdp)->sd_nlink = cpu_to_le32(v)) 1007 - #define sd_v2_size(sdp) (le64_to_cpu((sdp)->sd_size)) 1008 - #define set_sd_v2_size(sdp,v) ((sdp)->sd_size = cpu_to_le64(v)) 1009 - #define sd_v2_uid(sdp) (le32_to_cpu((sdp)->sd_uid)) 1010 - #define set_sd_v2_uid(sdp,v) ((sdp)->sd_uid = cpu_to_le32(v)) 1011 - #define sd_v2_gid(sdp) (le32_to_cpu((sdp)->sd_gid)) 1012 - #define set_sd_v2_gid(sdp,v) ((sdp)->sd_gid = cpu_to_le32(v)) 1013 - #define sd_v2_atime(sdp) (le32_to_cpu((sdp)->sd_atime)) 1014 - #define set_sd_v2_atime(sdp,v) ((sdp)->sd_atime = cpu_to_le32(v)) 1015 - #define sd_v2_mtime(sdp) (le32_to_cpu((sdp)->sd_mtime)) 1016 - #define set_sd_v2_mtime(sdp,v) ((sdp)->sd_mtime = cpu_to_le32(v)) 1017 - #define sd_v2_ctime(sdp) (le32_to_cpu((sdp)->sd_ctime)) 1018 - #define set_sd_v2_ctime(sdp,v) ((sdp)->sd_ctime = cpu_to_le32(v)) 1019 - #define sd_v2_blocks(sdp) (le32_to_cpu((sdp)->sd_blocks)) 1020 - #define set_sd_v2_blocks(sdp,v) ((sdp)->sd_blocks = cpu_to_le32(v)) 1021 - #define sd_v2_rdev(sdp) (le32_to_cpu((sdp)->u.sd_rdev)) 1022 - #define set_sd_v2_rdev(sdp,v) ((sdp)->u.sd_rdev = cpu_to_le32(v)) 1023 - #define sd_v2_generation(sdp) (le32_to_cpu((sdp)->u.sd_generation)) 1024 - #define set_sd_v2_generation(sdp,v) ((sdp)->u.sd_generation = cpu_to_le32(v)) 1025 - #define sd_v2_attrs(sdp) (le16_to_cpu((sdp)->sd_attrs)) 1026 - #define set_sd_v2_attrs(sdp,v) ((sdp)->sd_attrs = cpu_to_le16(v)) 1027 - 1028 - 
/***************************************************************************/ 1029 - /* DIRECTORY STRUCTURE */ 1030 - /***************************************************************************/ 1031 - /* 1032 - Picture represents the structure of directory items 1033 - ________________________________________________ 1034 - | Array of | | | | | | 1035 - | directory |N-1| N-2 | .... | 1st |0th| 1036 - | entry headers | | | | | | 1037 - |_______________|___|_____|________|_______|___| 1038 - <---- directory entries ------> 1039 - 1040 - First directory item has k_offset component 1. We store "." and ".." 1041 - in one item, always, we never split "." and ".." into differing 1042 - items. This makes, among other things, the code for removing 1043 - directories simpler. */ 1044 - #define SD_OFFSET 0 1045 - #define SD_UNIQUENESS 0 1046 - #define DOT_OFFSET 1 1047 - #define DOT_DOT_OFFSET 2 1048 - #define DIRENTRY_UNIQUENESS 500 1049 - 1050 - /* */ 1051 - #define FIRST_ITEM_OFFSET 1 1052 - 1053 - /* 1054 - Q: How to get key of object pointed to by entry from entry? 1055 - 1056 - A: Each directory entry has its header. 
This header has deh_dir_id and deh_objectid fields, those are key 1057 - of object, entry points to */ 1058 - 1059 - /* NOT IMPLEMENTED: 1060 - Directory will someday contain stat data of object */ 1061 - 1062 - struct reiserfs_de_head { 1063 - __le32 deh_offset; /* third component of the directory entry key */ 1064 - __le32 deh_dir_id; /* objectid of the parent directory of the object, that is referenced 1065 - by directory entry */ 1066 - __le32 deh_objectid; /* objectid of the object, that is referenced by directory entry */ 1067 - __le16 deh_location; /* offset of name in the whole item */ 1068 - __le16 deh_state; /* whether 1) entry contains stat data (for future), and 2) whether 1069 - entry is hidden (unlinked) */ 1070 - } __attribute__ ((__packed__)); 1071 - #define DEH_SIZE sizeof(struct reiserfs_de_head) 1072 - #define deh_offset(p_deh) (le32_to_cpu((p_deh)->deh_offset)) 1073 - #define deh_dir_id(p_deh) (le32_to_cpu((p_deh)->deh_dir_id)) 1074 - #define deh_objectid(p_deh) (le32_to_cpu((p_deh)->deh_objectid)) 1075 - #define deh_location(p_deh) (le16_to_cpu((p_deh)->deh_location)) 1076 - #define deh_state(p_deh) (le16_to_cpu((p_deh)->deh_state)) 1077 - 1078 - #define put_deh_offset(p_deh,v) ((p_deh)->deh_offset = cpu_to_le32((v))) 1079 - #define put_deh_dir_id(p_deh,v) ((p_deh)->deh_dir_id = cpu_to_le32((v))) 1080 - #define put_deh_objectid(p_deh,v) ((p_deh)->deh_objectid = cpu_to_le32((v))) 1081 - #define put_deh_location(p_deh,v) ((p_deh)->deh_location = cpu_to_le16((v))) 1082 - #define put_deh_state(p_deh,v) ((p_deh)->deh_state = cpu_to_le16((v))) 1083 - 1084 - /* empty directory contains two entries "." and ".." 
and their headers */
1085 - #define EMPTY_DIR_SIZE \
1086 - (DEH_SIZE * 2 + ROUND_UP (strlen (".")) + ROUND_UP (strlen ("..")))
1087 -
1088 - /* old format directories have this size when empty */
1089 - #define EMPTY_DIR_SIZE_V1 (DEH_SIZE * 2 + 3)
1090 -
1091 - #define DEH_Statdata 0 /* not used now */
1092 - #define DEH_Visible 2
1093 -
1094 - /* 64 bit systems (and the S/390) need to be aligned explicitly -jdm */
1095 - #if BITS_PER_LONG == 64 || defined(__s390__) || defined(__hppa__)
1096 - # define ADDR_UNALIGNED_BITS (3)
1097 - #endif
1098 -
1099 - /* These are only used to manipulate deh_state.
1100 - * Because of this, we'll use the ext2_ bit routines,
1101 - * since they are little endian */
1102 - #ifdef ADDR_UNALIGNED_BITS
1103 -
1104 - # define aligned_address(addr) ((void *)((long)(addr) & ~((1UL << ADDR_UNALIGNED_BITS) - 1)))
1105 - # define unaligned_offset(addr) (((int)((long)(addr) & ((1 << ADDR_UNALIGNED_BITS) - 1))) << 3)
1106 -
1107 - # define set_bit_unaligned(nr, addr) \
1108 - __test_and_set_bit_le((nr) + unaligned_offset(addr), aligned_address(addr))
1109 - # define clear_bit_unaligned(nr, addr) \
1110 - __test_and_clear_bit_le((nr) + unaligned_offset(addr), aligned_address(addr))
1111 - # define test_bit_unaligned(nr, addr) \
1112 - test_bit_le((nr) + unaligned_offset(addr), aligned_address(addr))
1113 -
1114 - #else
1115 -
1116 - # define set_bit_unaligned(nr, addr) __test_and_set_bit_le(nr, addr)
1117 - # define clear_bit_unaligned(nr, addr) __test_and_clear_bit_le(nr, addr)
1118 - # define test_bit_unaligned(nr, addr) test_bit_le(nr, addr)
1119 -
1120 - #endif
1121 -
1122 - #define mark_de_with_sd(deh) set_bit_unaligned (DEH_Statdata, &((deh)->deh_state))
1123 - #define mark_de_without_sd(deh) clear_bit_unaligned (DEH_Statdata, &((deh)->deh_state))
1124 - #define mark_de_visible(deh) set_bit_unaligned (DEH_Visible, &((deh)->deh_state))
1125 - #define mark_de_hidden(deh) clear_bit_unaligned (DEH_Visible, &((deh)->deh_state))
1126 -
1127 - #define de_with_sd(deh) test_bit_unaligned (DEH_Statdata, &((deh)->deh_state))
1128 - #define de_visible(deh) test_bit_unaligned (DEH_Visible, &((deh)->deh_state))
1129 - #define de_hidden(deh) !test_bit_unaligned (DEH_Visible, &((deh)->deh_state))
1130 -
1131 - extern void make_empty_dir_item_v1(char *body, __le32 dirid, __le32 objid,
1132 - __le32 par_dirid, __le32 par_objid);
1133 - extern void make_empty_dir_item(char *body, __le32 dirid, __le32 objid,
1134 - __le32 par_dirid, __le32 par_objid);
1135 -
1136 - /* array of the entry headers */
1137 - /* get item body */
1138 - #define B_I_PITEM(bh,ih) ( (bh)->b_data + ih_location(ih) )
1139 - #define B_I_DEH(bh,ih) ((struct reiserfs_de_head *)(B_I_PITEM(bh,ih)))
1140 -
1141 - /* length of the directory entry in directory item. This define
1142 - calculates length of i-th directory entry using directory entry
1143 - locations from dir entry head. When it calculates length of 0-th
1144 - directory entry, it uses length of whole item in place of entry
1145 - location of the non-existent following entry in the calculation.
1146 - See picture above.*/
1147 - /*
1148 - #define I_DEH_N_ENTRY_LENGTH(ih,deh,i) \
1149 - ((i) ? (deh_location((deh)-1) - deh_location((deh))) : (ih_item_len((ih)) - deh_location((deh))))
1150 - */
1151 - static inline int entry_length(const struct buffer_head *bh,
1152 - const struct item_head *ih, int pos_in_item)
1153 - {
1154 - struct reiserfs_de_head *deh;
1155 -
1156 - deh = B_I_DEH(bh, ih) + pos_in_item;
1157 - if (pos_in_item)
1158 - return deh_location(deh - 1) - deh_location(deh);
1159 -
1160 - return ih_item_len(ih) - deh_location(deh);
1161 - }
1162 -
1163 - /* number of entries in the directory item, depends on ENTRY_COUNT being at the start of directory dynamic data. */
1164 - #define I_ENTRY_COUNT(ih) (ih_entry_count((ih)))
1165 -
1166 - /* name by bh, ih and entry_num */
1167 - #define B_I_E_NAME(bh,ih,entry_num) ((char *)(bh->b_data + ih_location(ih) + deh_location(B_I_DEH(bh,ih)+(entry_num))))
1168 -
1169 - // two entries per block (at least)
1170 - #define REISERFS_MAX_NAME(block_size) 255
1171 -
1172 - /* this structure is used for operations on directory entries. It is
1173 - not a disk structure. */
1174 - /* When reiserfs_find_entry or search_by_entry_key find directory
1175 - entry, they return filled reiserfs_dir_entry structure */
1176 - struct reiserfs_dir_entry {
1177 - struct buffer_head *de_bh;
1178 - int de_item_num;
1179 - struct item_head *de_ih;
1180 - int de_entry_num;
1181 - struct reiserfs_de_head *de_deh;
1182 - int de_entrylen;
1183 - int de_namelen;
1184 - char *de_name;
1185 - unsigned long *de_gen_number_bit_string;
1186 -
1187 - __u32 de_dir_id;
1188 - __u32 de_objectid;
1189 -
1190 - struct cpu_key de_entry_key;
1191 - };
1192 -
1193 - /* these defines are useful when a particular member of a reiserfs_dir_entry is needed */
1194 -
1195 - /* pointer to file name, stored in entry */
1196 - #define B_I_DEH_ENTRY_FILE_NAME(bh,ih,deh) (B_I_PITEM (bh, ih) + deh_location(deh))
1197 -
1198 - /* length of name */
1199 - #define I_DEH_N_ENTRY_FILE_NAME_LENGTH(ih,deh,entry_num) \
1200 - (I_DEH_N_ENTRY_LENGTH (ih, deh, entry_num) - (de_with_sd (deh) ? SD_SIZE : 0))
1201 -
1202 - /* hash value occupies bits from 7 up to 30 */
1203 - #define GET_HASH_VALUE(offset) ((offset) & 0x7fffff80LL)
1204 - /* generation number occupies 7 bits starting from 0 up to 6 */
1205 - #define GET_GENERATION_NUMBER(offset) ((offset) & 0x7fLL)
1206 - #define MAX_GENERATION_NUMBER 127
1207 -
1208 - #define SET_GENERATION_NUMBER(offset,gen_number) (GET_HASH_VALUE(offset)|(gen_number))
1209 -
1210 - /*
1211 - * Picture represents an internal node of the reiserfs tree
1212 - * ______________________________________________________
1213 - * | | Array of | Array of | Free |
1214 - * |block | keys | pointers | space |
1215 - * | head | N | N+1 | |
1216 - * |______|_______________|___________________|___________|
1217 - */
1218 -
1219 - /***************************************************************************/
1220 - /* DISK CHILD */
1221 - /***************************************************************************/
1222 - /* Disk child pointer: The pointer from an internal node of the tree
1223 - to a node that is on disk. */
1224 - struct disk_child {
1225 - __le32 dc_block_number; /* Disk child's block number. */
1226 - __le16 dc_size; /* Disk child's used space. */
1227 - __le16 dc_reserved;
1228 - };
1229 -
1230 - #define DC_SIZE (sizeof(struct disk_child))
1231 - #define dc_block_number(dc_p) (le32_to_cpu((dc_p)->dc_block_number))
1232 - #define dc_size(dc_p) (le16_to_cpu((dc_p)->dc_size))
1233 - #define put_dc_block_number(dc_p, val) do { (dc_p)->dc_block_number = cpu_to_le32(val); } while(0)
1234 - #define put_dc_size(dc_p, val) do { (dc_p)->dc_size = cpu_to_le16(val); } while(0)
1235 -
1236 - /* Get disk child by buffer header and position in the tree node. */
1237 - #define B_N_CHILD(bh, n_pos) ((struct disk_child *)\
1238 - ((bh)->b_data + BLKH_SIZE + B_NR_ITEMS(bh) * KEY_SIZE + DC_SIZE * (n_pos)))
1239 -
1240 - /* Get disk child number by buffer header and position in the tree node. */
1241 - #define B_N_CHILD_NUM(bh, n_pos) (dc_block_number(B_N_CHILD(bh, n_pos)))
1242 - #define PUT_B_N_CHILD_NUM(bh, n_pos, val) \
1243 - (put_dc_block_number(B_N_CHILD(bh, n_pos), val))
1244 -
1245 - /* maximal value of field child_size in structure disk_child */
1246 - /* child size is the combined size of all items and their headers */
1247 - #define MAX_CHILD_SIZE(bh) ((int)( (bh)->b_size - BLKH_SIZE ))
1248 -
1249 - /* amount of used space in buffer (not including block head) */
1250 - #define B_CHILD_SIZE(cur) (MAX_CHILD_SIZE(cur)-(B_FREE_SPACE(cur)))
1251 -
1252 - /* max and min number of keys in internal node */
1253 - #define MAX_NR_KEY(bh) ( (MAX_CHILD_SIZE(bh)-DC_SIZE)/(KEY_SIZE+DC_SIZE) )
1254 - #define MIN_NR_KEY(bh) (MAX_NR_KEY(bh)/2)
1255 -
1256 - /***************************************************************************/
1257 - /* PATH STRUCTURES AND DEFINES */
1258 - /***************************************************************************/
1259 -
1260 - /* Search_by_key fills up the path from the root to the leaf as it descends the tree looking for the
1261 - key. It uses reiserfs_bread to try to find buffers in the cache given their block number. If it
1262 - does not find them in the cache it reads them from disk. For each node search_by_key finds using
1263 - reiserfs_bread it then uses bin_search to look through that node. bin_search will find the
1264 - position of the block_number of the next node if it is looking through an internal node. If it
1265 - is looking through a leaf node bin_search will find the position of the item which has key either
1266 - equal to given key, or which is the maximal key less than the given key. */
1267 -
1268 - struct path_element {
1269 - struct buffer_head *pe_buffer; /* Pointer to the buffer at the path in the tree. */
1270 - int pe_position; /* Position in the tree node which is placed in the */
1271 - /* buffer above. */
1272 - };
1273 -
1274 - #define MAX_HEIGHT 5 /* maximal height of a tree. don't change this without changing JOURNAL_PER_BALANCE_CNT */
1275 - #define EXTENDED_MAX_HEIGHT 7 /* Must be equals MAX_HEIGHT + FIRST_PATH_ELEMENT_OFFSET */
1276 - #define FIRST_PATH_ELEMENT_OFFSET 2 /* Must be equal to at least 2. */
1277 -
1278 - #define ILLEGAL_PATH_ELEMENT_OFFSET 1 /* Must be equal to FIRST_PATH_ELEMENT_OFFSET - 1 */
1279 - #define MAX_FEB_SIZE 6 /* this MUST be MAX_HEIGHT + 1. See about FEB below */
1280 -
1281 - /* We need to keep track of who the ancestors of nodes are. When we
1282 - perform a search we record which nodes were visited while
1283 - descending the tree looking for the node we searched for. This list
1284 - of nodes is called the path. This information is used while
1285 - performing balancing. Note that this path information may become
1286 - invalid, and this means we must check it when using it to see if it
1287 - is still valid. You'll need to read search_by_key and the comments
1288 - in it, especially about decrement_counters_in_path(), to understand
1289 - this structure.
1290 -
1291 - Paths make the code so much harder to work with and debug.... An
1292 - enormous number of bugs are due to them, and trying to write or modify
1293 - code that uses them just makes my head hurt. They are based on an
1294 - excessive effort to avoid disturbing the precious VFS code.:-( The
1295 - gods only know how we are going to SMP the code that uses them.
1296 - znodes are the way! */
1297 -
1298 - #define PATH_READA 0x1 /* do read ahead */
1299 - #define PATH_READA_BACK 0x2 /* read backwards */
1300 -
1301 - struct treepath {
1302 - int path_length; /* Length of the array above. */
1303 - int reada;
1304 - struct path_element path_elements[EXTENDED_MAX_HEIGHT]; /* Array of the path elements. */
1305 - int pos_in_item;
1306 - };
1307 -
1308 - #define pos_in_item(path) ((path)->pos_in_item)
1309 -
1310 - #define INITIALIZE_PATH(var) \
1311 - struct treepath var = {.path_length = ILLEGAL_PATH_ELEMENT_OFFSET, .reada = 0,}
1312 -
1313 - /* Get path element by path and path position. */
1314 - #define PATH_OFFSET_PELEMENT(path, n_offset) ((path)->path_elements + (n_offset))
1315 -
1316 - /* Get buffer header at the path by path and path position. */
1317 - #define PATH_OFFSET_PBUFFER(path, n_offset) (PATH_OFFSET_PELEMENT(path, n_offset)->pe_buffer)
1318 -
1319 - /* Get position in the element at the path by path and path position. */
1320 - #define PATH_OFFSET_POSITION(path, n_offset) (PATH_OFFSET_PELEMENT(path, n_offset)->pe_position)
1321 -
1322 - #define PATH_PLAST_BUFFER(path) (PATH_OFFSET_PBUFFER((path), (path)->path_length))
1323 - /* you know, to the person who didn't
1324 - write this the macro name does not
1325 - at first suggest what it does.
1326 - Maybe POSITION_FROM_PATH_END? Or
1327 - maybe we should just focus on
1328 - dumping paths... -Hans */
1329 - #define PATH_LAST_POSITION(path) (PATH_OFFSET_POSITION((path), (path)->path_length))
1330 -
1331 - #define PATH_PITEM_HEAD(path) B_N_PITEM_HEAD(PATH_PLAST_BUFFER(path), PATH_LAST_POSITION(path))
1332 -
1333 - /* in do_balance leaf has h == 0 in contrast with path structure,
1334 - where root has level == 0. That is why we need these defines */
1335 - #define PATH_H_PBUFFER(path, h) PATH_OFFSET_PBUFFER (path, path->path_length - (h)) /* tb->S[h] */
1336 - #define PATH_H_PPARENT(path, h) PATH_H_PBUFFER (path, (h) + 1) /* tb->F[h] or tb->S[0]->b_parent */
1337 - #define PATH_H_POSITION(path, h) PATH_OFFSET_POSITION (path, path->path_length - (h))
1338 - #define PATH_H_B_ITEM_ORDER(path, h) PATH_H_POSITION(path, h + 1) /* tb->S[h]->b_item_order */
1339 -
1340 - #define PATH_H_PATH_OFFSET(path, n_h) ((path)->path_length - (n_h))
1341 -
1342 - #define get_last_bh(path) PATH_PLAST_BUFFER(path)
1343 - #define get_ih(path) PATH_PITEM_HEAD(path)
1344 - #define get_item_pos(path) PATH_LAST_POSITION(path)
1345 - #define get_item(path) ((void *)B_N_PITEM(PATH_PLAST_BUFFER(path), PATH_LAST_POSITION (path)))
1346 - #define item_moved(ih,path) comp_items(ih, path)
1347 - #define path_changed(ih,path) comp_items (ih, path)
1348 -
1349 - /***************************************************************************/
1350 - /* MISC */
1351 - /***************************************************************************/
1352 -
1353 - /* Size of pointer to the unformatted node. */
1354 - #define UNFM_P_SIZE (sizeof(unp_t))
1355 - #define UNFM_P_SHIFT 2
1356 -
1357 - // in in-core inode key is stored on le form
1358 - #define INODE_PKEY(inode) ((struct reiserfs_key *)(REISERFS_I(inode)->i_key))
1359 -
1360 - #define MAX_UL_INT 0xffffffff
1361 - #define MAX_INT 0x7ffffff
1362 - #define MAX_US_INT 0xffff
1363 -
1364 - // reiserfs version 2 has max offset 60 bits. Version 1 - 32 bit offset
1365 - #define U32_MAX (~(__u32)0)
1366 -
1367 - static inline loff_t max_reiserfs_offset(struct inode *inode)
1368 - {
1369 - if (get_inode_item_key_version(inode) == KEY_FORMAT_3_5)
1370 - return (loff_t) U32_MAX;
1371 -
1372 - return (loff_t) ((~(__u64) 0) >> 4);
1373 - }
1374 -
1375 - /*#define MAX_KEY_UNIQUENESS MAX_UL_INT*/
1376 - #define MAX_KEY_OBJECTID MAX_UL_INT
1377 -
1378 - #define MAX_B_NUM MAX_UL_INT
1379 - #define MAX_FC_NUM MAX_US_INT
1380 -
1381 - /* the purpose is to detect overflow of an unsigned short */
1382 - #define REISERFS_LINK_MAX (MAX_US_INT - 1000)
1383 -
1384 - /* The following defines are used in reiserfs_insert_item and reiserfs_append_item */
1385 - #define REISERFS_KERNEL_MEM 0 /* reiserfs kernel memory mode */
1386 - #define REISERFS_USER_MEM 1 /* reiserfs user memory mode */
1387 -
1388 - #define fs_generation(s) (REISERFS_SB(s)->s_generation_counter)
1389 - #define get_generation(s) atomic_read (&fs_generation(s))
1390 - #define FILESYSTEM_CHANGED_TB(tb) (get_generation((tb)->tb_sb) != (tb)->fs_gen)
1391 - #define __fs_changed(gen,s) (gen != get_generation (s))
1392 - #define fs_changed(gen,s) \
1393 - ({ \
1394 - reiserfs_cond_resched(s); \
1395 - __fs_changed(gen, s); \
1396 - })
1397 -
1398 - /***************************************************************************/
1399 - /* FIXATE NODES */
1400 - /***************************************************************************/
1401 -
1402 - #define VI_TYPE_LEFT_MERGEABLE 1
1403 - #define VI_TYPE_RIGHT_MERGEABLE 2
1404 -
1405 - /* To make any changes in the tree we always first find node, that
1406 - contains item to be changed/deleted or place to insert a new
1407 - item. We call this node S. To do balancing we need to decide what
1408 - we will shift to left/right neighbor, or to a new node, where new
1409 - item will be etc. To make this analysis simpler we build virtual
1410 - node. Virtual node is an array of items, that will replace items of
1411 - node S. (For instance if we are going to delete an item, virtual
1412 - node does not contain it). Virtual node keeps information about
1413 - item sizes and types, mergeability of first and last items, sizes
1414 - of all entries in directory item. We use this array of items when
1415 - calculating what we can shift to neighbors and how many nodes we
1416 - have to have if we do not any shiftings, if we shift to left/right
1417 - neighbor or to both. */
1418 - struct virtual_item {
1419 - int vi_index; // index in the array of item operations
1420 - unsigned short vi_type; // left/right mergeability
1421 - unsigned short vi_item_len; /* length of item that it will have after balancing */
1422 - struct item_head *vi_ih;
1423 - const char *vi_item; // body of item (old or new)
1424 - const void *vi_new_data; // 0 always but paste mode
1425 - void *vi_uarea; // item specific area
1426 - };
1427 -
1428 - struct virtual_node {
1429 - char *vn_free_ptr; /* this is a pointer to the free space in the buffer */
1430 - unsigned short vn_nr_item; /* number of items in virtual node */
1431 - short vn_size; /* size of node , that node would have if it has unlimited size and no balancing is performed */
1432 - short vn_mode; /* mode of balancing (paste, insert, delete, cut) */
1433 - short vn_affected_item_num;
1434 - short vn_pos_in_item;
1435 - struct item_head *vn_ins_ih; /* item header of inserted item, 0 for other modes */
1436 - const void *vn_data;
1437 - struct virtual_item *vn_vi; /* array of items (including a new one, excluding item to be deleted) */
1438 - };
1439 -
1440 - /* used by directory items when creating virtual nodes */
1441 - struct direntry_uarea {
1442 - int flags;
1443 - __u16 entry_count;
1444 - __u16 entry_sizes[1];
1445 - } __attribute__ ((__packed__));
1446 -
1447 - /***************************************************************************/
1448 - /* TREE BALANCE */
1449 - /***************************************************************************/
1450 -
1451 - /* This temporary structure is used in tree balance algorithms, and
1452 - constructed as we go to the extent that its various parts are
1453 - needed. It contains arrays of nodes that can potentially be
1454 - involved in the balancing of node S, and parameters that define how
1455 - each of the nodes must be balanced. Note that in these algorithms
1456 - for balancing the worst case is to need to balance the current node
1457 - S and the left and right neighbors and all of their parents plus
1458 - create a new node. We implement S1 balancing for the leaf nodes
1459 - and S0 balancing for the internal nodes (S1 and S0 are defined in
1460 - our papers.)*/
1461 -
1462 - #define MAX_FREE_BLOCK 7 /* size of the array of buffers to free at end of do_balance */
1463 -
1464 - /* maximum number of FEB blocknrs on a single level */
1465 - #define MAX_AMOUNT_NEEDED 2
1466 -
1467 - /* someday somebody will prefix every field in this struct with tb_ */
1468 - struct tree_balance {
1469 - int tb_mode;
1470 - int need_balance_dirty;
1471 - struct super_block *tb_sb;
1472 - struct reiserfs_transaction_handle *transaction_handle;
1473 - struct treepath *tb_path;
1474 - struct buffer_head *L[MAX_HEIGHT]; /* array of left neighbors of nodes in the path */
1475 - struct buffer_head *R[MAX_HEIGHT]; /* array of right neighbors of nodes in the path */
1476 - struct buffer_head *FL[MAX_HEIGHT]; /* array of fathers of the left neighbors */
1477 - struct buffer_head *FR[MAX_HEIGHT]; /* array of fathers of the right neighbors */
1478 - struct buffer_head *CFL[MAX_HEIGHT]; /* array of common parents of center node and its left neighbor */
1479 - struct buffer_head *CFR[MAX_HEIGHT]; /* array of common parents of center node and its right neighbor */
1480 -
1481 - struct buffer_head *FEB[MAX_FEB_SIZE]; /* array of empty buffers. Number of buffers in array equals
1482 - cur_blknum. */
1483 - struct buffer_head *used[MAX_FEB_SIZE];
1484 - struct buffer_head *thrown[MAX_FEB_SIZE];
1485 - int lnum[MAX_HEIGHT]; /* array of number of items which must be
1486 - shifted to the left in order to balance the
1487 - current node; for leaves includes item that
1488 - will be partially shifted; for internal
1489 - nodes, it is the number of child pointers
1490 - rather than items. It includes the new item
1491 - being created. The code sometimes subtracts
1492 - one to get the number of wholly shifted
1493 - items for other purposes. */
1494 - int rnum[MAX_HEIGHT]; /* substitute right for left in comment above */
1495 - int lkey[MAX_HEIGHT]; /* array indexed by height h mapping the key delimiting L[h] and
1496 - S[h] to its item number within the node CFL[h] */
1497 - int rkey[MAX_HEIGHT]; /* substitute r for l in comment above */
1498 - int insert_size[MAX_HEIGHT]; /* the number of bytes by we are trying to add or remove from
1499 - S[h]. A negative value means removing. */
1500 - int blknum[MAX_HEIGHT]; /* number of nodes that will replace node S[h] after
1501 - balancing on the level h of the tree. If 0 then S is
1502 - being deleted, if 1 then S is remaining and no new nodes
1503 - are being created, if 2 or 3 then 1 or 2 new nodes is
1504 - being created */
1505 -
1506 - /* fields that are used only for balancing leaves of the tree */
1507 - int cur_blknum; /* number of empty blocks having been already allocated */
1508 - int s0num; /* number of items that fall into left most node when S[0] splits */
1509 - int s1num; /* number of items that fall into first new node when S[0] splits */
1510 - int s2num; /* number of items that fall into second new node when S[0] splits */
1511 - int lbytes; /* number of bytes which can flow to the left neighbor from the left */
1512 - /* most liquid item that cannot be shifted from S[0] entirely */
1513 - /* if -1 then nothing will be partially shifted */
1514 - int rbytes; /* number of bytes which will flow to the right neighbor from the right */
1515 - /* most liquid item that cannot be shifted from S[0] entirely */
1516 - /* if -1 then nothing will be partially shifted */
1517 - int s1bytes; /* number of bytes which flow to the first new node when S[0] splits */
1518 - /* note: if S[0] splits into 3 nodes, then items do not need to be cut */
1519 - int s2bytes;
1520 - struct buffer_head *buf_to_free[MAX_FREE_BLOCK]; /* buffers which are to be freed after do_balance finishes by unfix_nodes */
1521 - char *vn_buf; /* kmalloced memory. Used to create
1522 - virtual node and keep map of
1523 - dirtied bitmap blocks */
1524 - int vn_buf_size; /* size of the vn_buf */
1525 - struct virtual_node *tb_vn; /* VN starts after bitmap of bitmap blocks */
1526 -
1527 - int fs_gen; /* saved value of `reiserfs_generation' counter
1528 - see FILESYSTEM_CHANGED() macro in reiserfs_fs.h */
1529 - #ifdef DISPLACE_NEW_PACKING_LOCALITIES
1530 - struct in_core_key key; /* key pointer, to pass to block allocator or
1531 - another low-level subsystem */
1532 - #endif
1533 - };
1534 -
1535 - /* These are modes of balancing */
1536 -
1537 - /* When inserting an item. */
1538 - #define M_INSERT 'i'
1539 - /* When inserting into (directories only) or appending onto an already
1540 - existent item. */
1541 - #define M_PASTE 'p'
1542 - /* When deleting an item. */
1543 - #define M_DELETE 'd'
1544 - /* When truncating an item or removing an entry from a (directory) item. */
1545 - #define M_CUT 'c'
1546 -
1547 - /* used when balancing on leaf level skipped (in reiserfsck) */
1548 - #define M_INTERNAL 'n'
1549 -
1550 - /* When further balancing is not needed, then do_balance does not need
1551 - to be called. */
1552 - #define M_SKIP_BALANCING 's'
1553 - #define M_CONVERT 'v'
1554 -
1555 - /* modes of leaf_move_items */
1556 - #define LEAF_FROM_S_TO_L 0
1557 - #define LEAF_FROM_S_TO_R 1
1558 - #define LEAF_FROM_R_TO_L 2
1559 - #define LEAF_FROM_L_TO_R 3
1560 - #define LEAF_FROM_S_TO_SNEW 4
1561 -
1562 - #define FIRST_TO_LAST 0
1563 - #define LAST_TO_FIRST 1
1564 -
1565 - /* used in do_balance for passing parent of node information that has
1566 - been gotten from tb struct */
1567 - struct buffer_info {
1568 - struct tree_balance *tb;
1569 - struct buffer_head *bi_bh;
1570 - struct buffer_head *bi_parent;
1571 - int bi_position;
1572 - };
1573 -
1574 - static inline struct super_block *sb_from_tb(struct tree_balance *tb)
1575 - {
1576 - return tb ? tb->tb_sb : NULL;
1577 - }
1578 -
1579 - static inline struct super_block *sb_from_bi(struct buffer_info *bi)
1580 - {
1581 - return bi ? sb_from_tb(bi->tb) : NULL;
1582 - }
1583 -
1584 - /* there are 4 types of items: stat data, directory item, indirect, direct.
1585 - +-------------------+------------+--------------+------------+
1586 - | | k_offset | k_uniqueness | mergeable? |
1587 - +-------------------+------------+--------------+------------+
1588 - | stat data | 0 | 0 | no |
1589 - +-------------------+------------+--------------+------------+
1590 - | 1st directory item| DOT_OFFSET |DIRENTRY_UNIQUENESS| no |
1591 - | non 1st directory | hash value | | yes |
1592 - | item | | | |
1593 - +-------------------+------------+--------------+------------+
1594 - | indirect item | offset + 1 |TYPE_INDIRECT | if this is not the first indirect item of the object
1595 - +-------------------+------------+--------------+------------+
1596 - | direct item | offset + 1 |TYPE_DIRECT | if not this is not the first direct item of the object
1597 - +-------------------+------------+--------------+------------+
1598 - */
1599 -
1600 - struct item_operations {
1601 - int (*bytes_number) (struct item_head * ih, int block_size);
1602 - void (*decrement_key) (struct cpu_key *);
1603 - int (*is_left_mergeable) (struct reiserfs_key * ih,
1604 - unsigned long bsize);
1605 - void (*print_item) (struct item_head *, char *item);
1606 - void (*check_item) (struct item_head *, char *item);
1607 -
1608 - int (*create_vi) (struct virtual_node * vn, struct virtual_item * vi,
1609 - int is_affected, int insert_size);
1610 - int (*check_left) (struct virtual_item * vi, int free,
1611 - int start_skip, int end_skip);
1612 - int (*check_right) (struct virtual_item * vi, int free);
1613 - int (*part_size) (struct virtual_item * vi, int from, int to);
1614 - int (*unit_num) (struct virtual_item * vi);
1615 - void (*print_vi) (struct virtual_item * vi);
1616 - };
1617 -
1618 - extern struct item_operations *item_ops[TYPE_ANY + 1];
1619 -
1620 - #define op_bytes_number(ih,bsize) item_ops[le_ih_k_type (ih)]->bytes_number (ih, bsize)
1621 - #define op_is_left_mergeable(key,bsize) item_ops[le_key_k_type (le_key_version (key), key)]->is_left_mergeable (key, bsize)
1622 - #define op_print_item(ih,item) item_ops[le_ih_k_type (ih)]->print_item (ih, item)
1623 - #define op_check_item(ih,item) item_ops[le_ih_k_type (ih)]->check_item (ih, item)
1624 - #define op_create_vi(vn,vi,is_affected,insert_size) item_ops[le_ih_k_type ((vi)->vi_ih)]->create_vi (vn,vi,is_affected,insert_size)
1625 - #define op_check_left(vi,free,start_skip,end_skip) item_ops[(vi)->vi_index]->check_left (vi, free, start_skip, end_skip)
1626 - #define op_check_right(vi,free) item_ops[(vi)->vi_index]->check_right (vi, free)
1627 - #define op_part_size(vi,from,to) item_ops[(vi)->vi_index]->part_size (vi, from, to)
1628 - #define op_unit_num(vi) item_ops[(vi)->vi_index]->unit_num (vi)
1629 - #define op_print_vi(vi) item_ops[(vi)->vi_index]->print_vi (vi)
1630 -
1631 - #define COMP_SHORT_KEYS comp_short_keys
1632 -
1633 - /* number of blocks pointed to by the indirect item */
1634 - #define I_UNFM_NUM(ih) (ih_item_len(ih) / UNFM_P_SIZE)
1635 -
1636 - /* the used space within the unformatted node corresponding to pos within the item pointed to by ih */
1637 - #define I_POS_UNFM_SIZE(ih,pos,size) (((pos) == I_UNFM_NUM(ih) - 1 ) ? (size) - ih_free_space(ih) : (size))
1638 -
1639 - /* number of bytes contained by the direct item or the unformatted nodes the indirect item points to */
1640 -
1641 - /* get the item header */
1642 - #define B_N_PITEM_HEAD(bh,item_num) ( (struct item_head * )((bh)->b_data + BLKH_SIZE) + (item_num) )
1643 -
1644 - /* get key */
1645 - #define B_N_PDELIM_KEY(bh,item_num) ( (struct reiserfs_key * )((bh)->b_data + BLKH_SIZE) + (item_num) )
1646 -
1647 - /* get the key */
1648 - #define B_N_PKEY(bh,item_num) ( &(B_N_PITEM_HEAD(bh,item_num)->ih_key) )
1649 -
1650 - /* get item body */
1651 - #define B_N_PITEM(bh,item_num) ( (bh)->b_data + ih_location(B_N_PITEM_HEAD((bh),(item_num))))
1652 -
1653 - /* get the stat data by the buffer header and the item order */
1654 - #define B_N_STAT_DATA(bh,nr) \
1655 - ( (struct stat_data *)((bh)->b_data + ih_location(B_N_PITEM_HEAD((bh),(nr))) ) )
1656 -
1657 - /* following defines use reiserfs buffer header and item header */
1658 -
1659 - /* get stat-data */
1660 - #define B_I_STAT_DATA(bh, ih) ( (struct stat_data * )((bh)->b_data + ih_location(ih)) )
1661 -
1662 - // this is 3976 for size==4096
1663 - #define MAX_DIRECT_ITEM_LEN(size) ((size) - BLKH_SIZE - 2*IH_SIZE - SD_SIZE - UNFM_P_SIZE)
1664 -
1665 - /* indirect items consist of entries which contain blocknrs, pos
1666 - indicates which entry, and B_I_POS_UNFM_POINTER resolves to the
1667 - blocknr contained by the entry pos points to */
1668 - #define B_I_POS_UNFM_POINTER(bh,ih,pos) le32_to_cpu(*(((unp_t *)B_I_PITEM(bh,ih)) + (pos)))
1669 - #define PUT_B_I_POS_UNFM_POINTER(bh,ih,pos, val) do {*(((unp_t *)B_I_PITEM(bh,ih)) + (pos)) = cpu_to_le32(val); } while (0)
1670 -
1671 - struct reiserfs_iget_args {
1672 - __u32 objectid;
1673 - __u32 dirid;
1674 - };
1675 -
1676 - /***************************************************************************/
1677 - /* FUNCTION DECLARATIONS */
1678 - /***************************************************************************/
1679 -
1680 - #define get_journal_desc_magic(bh) (bh->b_data + bh->b_size - 12)
1681 -
1682 - #define journal_trans_half(blocksize) \
1683 - ((blocksize - sizeof (struct reiserfs_journal_desc) + sizeof (__u32) - 12) / sizeof (__u32))
1684 -
1685 - /* journal.c see journal.c for all the comments here */
1686 -
1687 - /* first block written in a commit. */
1688 - struct reiserfs_journal_desc {
1689 - __le32 j_trans_id; /* id of commit */
1690 - __le32 j_len; /* length of commit. len +1 is the commit block */
1691 - __le32 j_mount_id; /* mount id of this trans */
1692 - __le32 j_realblock[1]; /* real locations for each block */
1693 - };
1694 -
1695 - #define get_desc_trans_id(d) le32_to_cpu((d)->j_trans_id)
1696 - #define get_desc_trans_len(d) le32_to_cpu((d)->j_len)
1697 - #define get_desc_mount_id(d) le32_to_cpu((d)->j_mount_id)
1698 -
1699 - #define set_desc_trans_id(d,val) do { (d)->j_trans_id = cpu_to_le32 (val); } while (0)
1700 - #define set_desc_trans_len(d,val) do { (d)->j_len = cpu_to_le32 (val); } while (0)
1701 - #define set_desc_mount_id(d,val) do { (d)->j_mount_id = cpu_to_le32 (val); } while (0)
1702 -
1703 - /* last block written in a commit */
1704 - struct reiserfs_journal_commit {
1705 - __le32 j_trans_id; /* must match j_trans_id from the desc block */
1706 - __le32 j_len; /* ditto */
1707 - __le32 j_realblock[1]; /* real locations for each block */
1708 - };
1709 -
1710 - #define get_commit_trans_id(c) le32_to_cpu((c)->j_trans_id)
1711 - #define get_commit_trans_len(c) le32_to_cpu((c)->j_len)
1712 - #define get_commit_mount_id(c) le32_to_cpu((c)->j_mount_id)
1713 -
1714 - #define set_commit_trans_id(c,val) do { (c)->j_trans_id = cpu_to_le32 (val); } while (0)
1715 - #define set_commit_trans_len(c,val) do { (c)->j_len = cpu_to_le32 (val); } while (0)
1716 -
1717 - /* this header block gets written whenever a transaction is considered fully flushed, and is more recent than the
1718 - ** last fully flushed transaction. fully flushed means all the log blocks and all the real blocks are on disk,
1719 - ** and this transaction does not need to be replayed.
1720 - */
1721 - struct reiserfs_journal_header {
1722 - __le32 j_last_flush_trans_id; /* id of last fully flushed transaction */
1723 - __le32 j_first_unflushed_offset; /* offset in the log of where to start replay after a crash */
1724 - __le32 j_mount_id;
1725 - /* 12 */ struct journal_params jh_journal;
1726 - };
1727 -
1728 - /* biggest tunable defines are right here */
1729 - #define JOURNAL_BLOCK_COUNT 8192 /* number of blocks in the journal */
1730 - #define JOURNAL_TRANS_MAX_DEFAULT 1024 /* biggest possible single transaction, don't change for now (8/3/99) */
1731 - #define JOURNAL_TRANS_MIN_DEFAULT 256
1732 - #define JOURNAL_MAX_BATCH_DEFAULT 900 /* max blocks to batch into one transaction, don't make this any bigger than 900 */
1733 - #define JOURNAL_MIN_RATIO 2
1734 - #define JOURNAL_MAX_COMMIT_AGE 30
1735 - #define JOURNAL_MAX_TRANS_AGE 30
1736 - #define JOURNAL_PER_BALANCE_CNT (3 * (MAX_HEIGHT-2) + 9)
1737 - #define JOURNAL_BLOCKS_PER_OBJECT(sb) (JOURNAL_PER_BALANCE_CNT * 3 + \
1738 - 2 * (REISERFS_QUOTA_INIT_BLOCKS(sb) + \
1739 - REISERFS_QUOTA_TRANS_BLOCKS(sb)))
1740 -
1741 - #ifdef CONFIG_QUOTA
1742 - #define REISERFS_QUOTA_OPTS ((1 << REISERFS_USRQUOTA) | (1 << REISERFS_GRPQUOTA))
1743 - /* We need to update data and inode (atime) */
1744 - #define REISERFS_QUOTA_TRANS_BLOCKS(s) (REISERFS_SB(s)->s_mount_opt & REISERFS_QUOTA_OPTS ? 2 : 0)
1745 - /* 1 balancing, 1 bitmap, 1 data per write + stat data update */
1746 - #define REISERFS_QUOTA_INIT_BLOCKS(s) (REISERFS_SB(s)->s_mount_opt & REISERFS_QUOTA_OPTS ? \
1747 - (DQUOT_INIT_ALLOC*(JOURNAL_PER_BALANCE_CNT+2)+DQUOT_INIT_REWRITE+1) : 0)
1748 - /* same as with INIT */
1749 - #define REISERFS_QUOTA_DEL_BLOCKS(s) (REISERFS_SB(s)->s_mount_opt & REISERFS_QUOTA_OPTS ? \
1750 - (DQUOT_DEL_ALLOC*(JOURNAL_PER_BALANCE_CNT+2)+DQUOT_DEL_REWRITE+1) : 0)
1751 - #else
1752 - #define REISERFS_QUOTA_TRANS_BLOCKS(s) 0
1753 - #define REISERFS_QUOTA_INIT_BLOCKS(s) 0
1754 - #define REISERFS_QUOTA_DEL_BLOCKS(s) 0
1755 - #endif
1756 -
1757 - /* both of these can be as low as 1, or as high as you want. The min is the
1758 - ** number of 4k bitmap nodes preallocated on mount. New nodes are allocated
1759 - ** as needed, and released when transactions are committed. On release, if
1760 - ** the current number of nodes is > max, the node is freed, otherwise,
1761 - ** it is put on a free list for faster use later.
1762 - */
1763 - #define REISERFS_MIN_BITMAP_NODES 10
1764 - #define REISERFS_MAX_BITMAP_NODES 100
1765 -
1766 - #define JBH_HASH_SHIFT 13 /* these are based on journal hash size of 8192 */
1767 - #define JBH_HASH_MASK 8191
1768 -
1769 - #define _jhashfn(sb,block) \
1770 - (((unsigned long)sb>>L1_CACHE_SHIFT) ^ \
1771 - (((block)<<(JBH_HASH_SHIFT - 6)) ^ ((block) >> 13) ^ ((block) << (JBH_HASH_SHIFT - 12))))
1772 - #define journal_hash(t,sb,block) ((t)[_jhashfn((sb),(block)) & JBH_HASH_MASK])
1773 -
1774 - // We need these to make journal.c code more readable
1775 - #define journal_find_get_block(s, block) __find_get_block(SB_JOURNAL(s)->j_dev_bd, block, s->s_blocksize)
1776 - #define journal_getblk(s, block) __getblk(SB_JOURNAL(s)->j_dev_bd, block, s->s_blocksize)
1777 - #define journal_bread(s, block) __bread(SB_JOURNAL(s)->j_dev_bd, block, s->s_blocksize)
1778 -
1779 - enum reiserfs_bh_state_bits {
1780 - BH_JDirty = BH_PrivateStart, /* buffer is in current transaction */
1781 - BH_JDirty_wait,
1782 - BH_JNew, /* disk block was taken off free list before
1783 - * being in a finished transaction, or
1784 - * written to disk. Can be reused immed. */
1785 - BH_JPrepared,
1786 - BH_JRestore_dirty,
1787 - BH_JTest, // debugging only will go away
1788 - };
1789 -
1790 - BUFFER_FNS(JDirty, journaled);
1791 - TAS_BUFFER_FNS(JDirty, journaled);
1792 - BUFFER_FNS(JDirty_wait, journal_dirty);
1793 - TAS_BUFFER_FNS(JDirty_wait, journal_dirty);
1794 - BUFFER_FNS(JNew, journal_new);
1795 - TAS_BUFFER_FNS(JNew, journal_new);
1796 - BUFFER_FNS(JPrepared, journal_prepared);
1797 - TAS_BUFFER_FNS(JPrepared, journal_prepared);
1798 - BUFFER_FNS(JRestore_dirty, journal_restore_dirty);
1799 - TAS_BUFFER_FNS(JRestore_dirty, journal_restore_dirty);
1800 - BUFFER_FNS(JTest, journal_test);
1801 - TAS_BUFFER_FNS(JTest, journal_test);
1802 -
1803 - /*
1804 - ** transaction handle which is passed around for all journal calls
1805 - */
1806 - struct reiserfs_transaction_handle {
1807 - struct super_block *t_super; /* super for this FS when journal_begin was
1808 - called. saves calls to reiserfs_get_super
1809 - also used by nested transactions to make
1810 - sure they are nesting on the right FS
1811 - _must_ be first in the handle
1812 - */
1813 - int t_refcount;
1814 - int t_blocks_logged; /* number of blocks this writer has logged */
1815 - int t_blocks_allocated; /* number of blocks this writer allocated */
1816 - unsigned int t_trans_id; /* sanity check, equals the current trans id */
1817 - void *t_handle_save; /* save existing current->journal_info */
1818 - unsigned displace_new_blocks:1; /* if new block allocation occurres, that block
1819 - should be displaced from others */
1820 - struct list_head t_list;
1821 - };
1822 -
1823 - /* used to keep track of ordered and tail writes, attached to the buffer
1824 - * head through b_journal_head.
1825 - */
1826 - struct reiserfs_jh {
1827 - struct reiserfs_journal_list *jl;
1828 - struct buffer_head *bh;
1829 - struct list_head list;
1830 - };
1831 -
1832 - void reiserfs_free_jh(struct buffer_head *bh);
1833 - int reiserfs_add_tail_list(struct inode *inode, struct buffer_head *bh);
1834 - int reiserfs_add_ordered_list(struct inode *inode, struct buffer_head *bh);
1835 - int journal_mark_dirty(struct reiserfs_transaction_handle *,
1836 - struct super_block *, struct buffer_head *bh);
1837 -
1838 - static inline int reiserfs_file_data_log(struct inode *inode)
1839 - {
1840 - if (reiserfs_data_log(inode->i_sb) ||
1841 - (REISERFS_I(inode)->i_flags & i_data_log))
1842 - return 1;
1843 - return 0;
1844 - }
1845 -
1846 - static inline int reiserfs_transaction_running(struct super_block *s)
1847 - {
1848 - struct reiserfs_transaction_handle *th = current->journal_info;
1849 - if (th && th->t_super == s)
1850 - return 1;
1851 - if (th && th->t_super == NULL)
1852 - BUG();
1853 - return 0;
1854 - }
1855 -
1856 - static inline int reiserfs_transaction_free_space(struct reiserfs_transaction_handle *th)
1857 - {
1858 - return th->t_blocks_allocated - th->t_blocks_logged;
1859 - }
1860 -
1861 - struct reiserfs_transaction_handle *reiserfs_persistent_transaction(struct
1862 - super_block
1863 - *,
1864 - int count);
1865 - int reiserfs_end_persistent_transaction(struct reiserfs_transaction_handle *);
1866 - int reiserfs_commit_page(struct inode *inode, struct page *page,
1867 - unsigned from, unsigned to);
1868 - int reiserfs_flush_old_commits(struct super_block *);
1869 - int reiserfs_commit_for_inode(struct inode *);
1870 - int reiserfs_inode_needs_commit(struct inode *);
1871 - void reiserfs_update_inode_transaction(struct inode *);
1872 - void reiserfs_wait_on_write_block(struct super_block *s);
1873 - void reiserfs_block_writes(struct reiserfs_transaction_handle *th);
1874 - void reiserfs_allow_writes(struct super_block *s);
1875 - void reiserfs_check_lock_depth(struct super_block *s, char *caller);
1876 - int reiserfs_prepare_for_journal(struct super_block *, struct buffer_head *bh,
1877 - int wait);
1878 - void reiserfs_restore_prepared_buffer(struct super_block *,
1879 - struct buffer_head *bh);
1880 - int journal_init(struct super_block *, const char *j_dev_name, int old_format,
1881 - unsigned int);
1882 - int journal_release(struct reiserfs_transaction_handle *, struct super_block *);
1883 - int journal_release_error(struct reiserfs_transaction_handle *,
1884 - struct super_block *);
1885 - int journal_end(struct reiserfs_transaction_handle *, struct super_block *,
1886 - unsigned long);
1887 - int journal_end_sync(struct reiserfs_transaction_handle *, struct super_block *,
1888 - unsigned long);
1889 - int journal_mark_freed(struct reiserfs_transaction_handle *,
1890 - struct super_block *, b_blocknr_t blocknr);
1891 - int journal_transaction_should_end(struct reiserfs_transaction_handle *, int);
1892 - int reiserfs_in_journal(struct super_block *sb, unsigned int bmap_nr,
1893 - int bit_nr, int searchall, b_blocknr_t *next);
1894 - int journal_begin(struct reiserfs_transaction_handle *,
1895 - struct super_block *sb, unsigned long);
1896 - int journal_join_abort(struct reiserfs_transaction_handle *,
1897 - struct super_block *sb, unsigned long);
1898 - void reiserfs_abort_journal(struct super_block *sb, int errno);
1899 - void reiserfs_abort(struct super_block *sb, int errno, const char *fmt, ...);
1900 - int reiserfs_allocate_list_bitmaps(struct super_block *s,
1901 - struct reiserfs_list_bitmap *, unsigned int);
1902 -
1903 - void add_save_link(struct reiserfs_transaction_handle *th,
1904 - struct inode *inode, int truncate);
1905 - int remove_save_link(struct inode *inode, int truncate);
1906 -
1907 - /* objectid.c */
1908 - __u32 reiserfs_get_unused_objectid(struct reiserfs_transaction_handle *th);
1909 - void reiserfs_release_objectid(struct reiserfs_transaction_handle *th,
1910 -
__u32 objectid_to_release); 1911 - int reiserfs_convert_objectid_map_v1(struct super_block *); 1912 - 1913 - /* stree.c */ 1914 - int B_IS_IN_TREE(const struct buffer_head *); 1915 - extern void copy_item_head(struct item_head *to, 1916 - const struct item_head *from); 1917 - 1918 - // first key is in cpu form, second - le 1919 - extern int comp_short_keys(const struct reiserfs_key *le_key, 1920 - const struct cpu_key *cpu_key); 1921 - extern void le_key2cpu_key(struct cpu_key *to, const struct reiserfs_key *from); 1922 - 1923 - // both are in le form 1924 - extern int comp_le_keys(const struct reiserfs_key *, 1925 - const struct reiserfs_key *); 1926 - extern int comp_short_le_keys(const struct reiserfs_key *, 1927 - const struct reiserfs_key *); 1928 - 1929 - // 1930 - // get key version from on disk key - kludge 1931 - // 1932 - static inline int le_key_version(const struct reiserfs_key *key) 1933 - { 1934 - int type; 1935 - 1936 - type = offset_v2_k_type(&(key->u.k_offset_v2)); 1937 - if (type != TYPE_DIRECT && type != TYPE_INDIRECT 1938 - && type != TYPE_DIRENTRY) 1939 - return KEY_FORMAT_3_5; 1940 - 1941 - return KEY_FORMAT_3_6; 1942 - 1943 - } 1944 - 1945 - static inline void copy_key(struct reiserfs_key *to, 1946 - const struct reiserfs_key *from) 1947 - { 1948 - memcpy(to, from, KEY_SIZE); 1949 - } 1950 - 1951 - int comp_items(const struct item_head *stored_ih, const struct treepath *path); 1952 - const struct reiserfs_key *get_rkey(const struct treepath *chk_path, 1953 - const struct super_block *sb); 1954 - int search_by_key(struct super_block *, const struct cpu_key *, 1955 - struct treepath *, int); 1956 - #define search_item(s,key,path) search_by_key (s, key, path, DISK_LEAF_NODE_LEVEL) 1957 - int search_for_position_by_key(struct super_block *sb, 1958 - const struct cpu_key *cpu_key, 1959 - struct treepath *search_path); 1960 - extern void decrement_bcount(struct buffer_head *bh); 1961 - void decrement_counters_in_path(struct treepath *search_path); 
1962 - void pathrelse(struct treepath *search_path); 1963 - int reiserfs_check_path(struct treepath *p); 1964 - void pathrelse_and_restore(struct super_block *s, struct treepath *search_path); 1965 - 1966 - int reiserfs_insert_item(struct reiserfs_transaction_handle *th, 1967 - struct treepath *path, 1968 - const struct cpu_key *key, 1969 - struct item_head *ih, 1970 - struct inode *inode, const char *body); 1971 - 1972 - int reiserfs_paste_into_item(struct reiserfs_transaction_handle *th, 1973 - struct treepath *path, 1974 - const struct cpu_key *key, 1975 - struct inode *inode, 1976 - const char *body, int paste_size); 1977 - 1978 - int reiserfs_cut_from_item(struct reiserfs_transaction_handle *th, 1979 - struct treepath *path, 1980 - struct cpu_key *key, 1981 - struct inode *inode, 1982 - struct page *page, loff_t new_file_size); 1983 - 1984 - int reiserfs_delete_item(struct reiserfs_transaction_handle *th, 1985 - struct treepath *path, 1986 - const struct cpu_key *key, 1987 - struct inode *inode, struct buffer_head *un_bh); 1988 - 1989 - void reiserfs_delete_solid_item(struct reiserfs_transaction_handle *th, 1990 - struct inode *inode, struct reiserfs_key *key); 1991 - int reiserfs_delete_object(struct reiserfs_transaction_handle *th, 1992 - struct inode *inode); 1993 - int reiserfs_do_truncate(struct reiserfs_transaction_handle *th, 1994 - struct inode *inode, struct page *, 1995 - int update_timestamps); 1996 - 1997 - #define i_block_size(inode) ((inode)->i_sb->s_blocksize) 1998 - #define file_size(inode) ((inode)->i_size) 1999 - #define tail_size(inode) (file_size (inode) & (i_block_size (inode) - 1)) 2000 - 2001 - #define tail_has_to_be_packed(inode) (have_large_tails ((inode)->i_sb)?\ 2002 - !STORE_TAIL_IN_UNFM_S1(file_size (inode), tail_size(inode), inode->i_sb->s_blocksize):have_small_tails ((inode)->i_sb)?!STORE_TAIL_IN_UNFM_S2(file_size (inode), tail_size(inode), inode->i_sb->s_blocksize):0 ) 2003 - 2004 - void padd_item(char *item, int total_length, 
int length); 2005 - 2006 - /* inode.c */ 2007 - /* args for the create parameter of reiserfs_get_block */ 2008 - #define GET_BLOCK_NO_CREATE 0 /* don't create new blocks or convert tails */ 2009 - #define GET_BLOCK_CREATE 1 /* add anything you need to find block */ 2010 - #define GET_BLOCK_NO_HOLE 2 /* return -ENOENT for file holes */ 2011 - #define GET_BLOCK_READ_DIRECT 4 /* read the tail if indirect item not found */ 2012 - #define GET_BLOCK_NO_IMUX 8 /* i_mutex is not held, don't preallocate */ 2013 - #define GET_BLOCK_NO_DANGLE 16 /* don't leave any transactions running */ 2014 - 2015 - void reiserfs_read_locked_inode(struct inode *inode, 2016 - struct reiserfs_iget_args *args); 2017 - int reiserfs_find_actor(struct inode *inode, void *p); 2018 - int reiserfs_init_locked_inode(struct inode *inode, void *p); 2019 - void reiserfs_evict_inode(struct inode *inode); 2020 - int reiserfs_write_inode(struct inode *inode, struct writeback_control *wbc); 2021 - int reiserfs_get_block(struct inode *inode, sector_t block, 2022 - struct buffer_head *bh_result, int create); 2023 - struct dentry *reiserfs_fh_to_dentry(struct super_block *sb, struct fid *fid, 2024 - int fh_len, int fh_type); 2025 - struct dentry *reiserfs_fh_to_parent(struct super_block *sb, struct fid *fid, 2026 - int fh_len, int fh_type); 2027 - int reiserfs_encode_fh(struct dentry *dentry, __u32 * data, int *lenp, 2028 - int connectable); 2029 - 2030 - int reiserfs_truncate_file(struct inode *, int update_timestamps); 2031 - void make_cpu_key(struct cpu_key *cpu_key, struct inode *inode, loff_t offset, 2032 - int type, int key_length); 2033 - void make_le_item_head(struct item_head *ih, const struct cpu_key *key, 2034 - int version, 2035 - loff_t offset, int type, int length, int entry_count); 2036 - struct inode *reiserfs_iget(struct super_block *s, const struct cpu_key *key); 2037 - 2038 - struct reiserfs_security_handle; 2039 - int reiserfs_new_inode(struct reiserfs_transaction_handle *th, 2040 - struct 
inode *dir, umode_t mode, 2041 - const char *symname, loff_t i_size, 2042 - struct dentry *dentry, struct inode *inode, 2043 - struct reiserfs_security_handle *security); 2044 - 2045 - void reiserfs_update_sd_size(struct reiserfs_transaction_handle *th, 2046 - struct inode *inode, loff_t size); 2047 - 2048 - static inline void reiserfs_update_sd(struct reiserfs_transaction_handle *th, 2049 - struct inode *inode) 2050 - { 2051 - reiserfs_update_sd_size(th, inode, inode->i_size); 2052 - } 2053 - 2054 - void sd_attrs_to_i_attrs(__u16 sd_attrs, struct inode *inode); 2055 - void i_attrs_to_sd_attrs(struct inode *inode, __u16 * sd_attrs); 2056 - int reiserfs_setattr(struct dentry *dentry, struct iattr *attr); 2057 - 2058 - int __reiserfs_write_begin(struct page *page, unsigned from, unsigned len); 2059 - 2060 - /* namei.c */ 2061 - void set_de_name_and_namelen(struct reiserfs_dir_entry *de); 2062 - int search_by_entry_key(struct super_block *sb, const struct cpu_key *key, 2063 - struct treepath *path, struct reiserfs_dir_entry *de); 2064 - struct dentry *reiserfs_get_parent(struct dentry *); 2065 - 2066 - #ifdef CONFIG_REISERFS_PROC_INFO 2067 - int reiserfs_proc_info_init(struct super_block *sb); 2068 - int reiserfs_proc_info_done(struct super_block *sb); 2069 - int reiserfs_proc_info_global_init(void); 2070 - int reiserfs_proc_info_global_done(void); 2071 - 2072 - #define PROC_EXP( e ) e 2073 - 2074 - #define __PINFO( sb ) REISERFS_SB(sb) -> s_proc_info_data 2075 - #define PROC_INFO_MAX( sb, field, value ) \ 2076 - __PINFO( sb ).field = \ 2077 - max( REISERFS_SB( sb ) -> s_proc_info_data.field, value ) 2078 - #define PROC_INFO_INC( sb, field ) ( ++ ( __PINFO( sb ).field ) ) 2079 - #define PROC_INFO_ADD( sb, field, val ) ( __PINFO( sb ).field += ( val ) ) 2080 - #define PROC_INFO_BH_STAT( sb, bh, level ) \ 2081 - PROC_INFO_INC( sb, sbk_read_at[ ( level ) ] ); \ 2082 - PROC_INFO_ADD( sb, free_at[ ( level ) ], B_FREE_SPACE( bh ) ); \ 2083 - PROC_INFO_ADD( sb, items_at[ ( 
level ) ], B_NR_ITEMS( bh ) ) 2084 - #else 2085 - static inline int reiserfs_proc_info_init(struct super_block *sb) 2086 - { 2087 - return 0; 2088 - } 2089 - 2090 - static inline int reiserfs_proc_info_done(struct super_block *sb) 2091 - { 2092 - return 0; 2093 - } 2094 - 2095 - static inline int reiserfs_proc_info_global_init(void) 2096 - { 2097 - return 0; 2098 - } 2099 - 2100 - static inline int reiserfs_proc_info_global_done(void) 2101 - { 2102 - return 0; 2103 - } 2104 - 2105 - #define PROC_EXP( e ) 2106 - #define VOID_V ( ( void ) 0 ) 2107 - #define PROC_INFO_MAX( sb, field, value ) VOID_V 2108 - #define PROC_INFO_INC( sb, field ) VOID_V 2109 - #define PROC_INFO_ADD( sb, field, val ) VOID_V 2110 - #define PROC_INFO_BH_STAT(sb, bh, n_node_level) VOID_V 2111 - #endif 2112 - 2113 - /* dir.c */ 2114 - extern const struct inode_operations reiserfs_dir_inode_operations; 2115 - extern const struct inode_operations reiserfs_symlink_inode_operations; 2116 - extern const struct inode_operations reiserfs_special_inode_operations; 2117 - extern const struct file_operations reiserfs_dir_operations; 2118 - int reiserfs_readdir_dentry(struct dentry *, void *, filldir_t, loff_t *); 2119 - 2120 - /* tail_conversion.c */ 2121 - int direct2indirect(struct reiserfs_transaction_handle *, struct inode *, 2122 - struct treepath *, struct buffer_head *, loff_t); 2123 - int indirect2direct(struct reiserfs_transaction_handle *, struct inode *, 2124 - struct page *, struct treepath *, const struct cpu_key *, 2125 - loff_t, char *); 2126 - void reiserfs_unmap_buffer(struct buffer_head *); 2127 - 2128 - /* file.c */ 2129 - extern const struct inode_operations reiserfs_file_inode_operations; 2130 - extern const struct file_operations reiserfs_file_operations; 2131 - extern const struct address_space_operations reiserfs_address_space_operations; 2132 - 2133 - /* fix_nodes.c */ 2134 - 2135 - int fix_nodes(int n_op_mode, struct tree_balance *tb, 2136 - struct item_head *ins_ih, const void 
*); 2137 - void unfix_nodes(struct tree_balance *); 2138 - 2139 - /* prints.c */ 2140 - void __reiserfs_panic(struct super_block *s, const char *id, 2141 - const char *function, const char *fmt, ...) 2142 - __attribute__ ((noreturn)); 2143 - #define reiserfs_panic(s, id, fmt, args...) \ 2144 - __reiserfs_panic(s, id, __func__, fmt, ##args) 2145 - void __reiserfs_error(struct super_block *s, const char *id, 2146 - const char *function, const char *fmt, ...); 2147 - #define reiserfs_error(s, id, fmt, args...) \ 2148 - __reiserfs_error(s, id, __func__, fmt, ##args) 2149 - void reiserfs_info(struct super_block *s, const char *fmt, ...); 2150 - void reiserfs_debug(struct super_block *s, int level, const char *fmt, ...); 2151 - void print_indirect_item(struct buffer_head *bh, int item_num); 2152 - void store_print_tb(struct tree_balance *tb); 2153 - void print_cur_tb(char *mes); 2154 - void print_de(struct reiserfs_dir_entry *de); 2155 - void print_bi(struct buffer_info *bi, char *mes); 2156 - #define PRINT_LEAF_ITEMS 1 /* print all items */ 2157 - #define PRINT_DIRECTORY_ITEMS 2 /* print directory items */ 2158 - #define PRINT_DIRECT_ITEMS 4 /* print contents of direct items */ 2159 - void print_block(struct buffer_head *bh, ...); 2160 - void print_bmap(struct super_block *s, int silent); 2161 - void print_bmap_block(int i, char *data, int size, int silent); 2162 - /*void print_super_block (struct super_block * s, char * mes);*/ 2163 - void print_objectid_map(struct super_block *s); 2164 - void print_block_head(struct buffer_head *bh, char *mes); 2165 - void check_leaf(struct buffer_head *bh); 2166 - void check_internal(struct buffer_head *bh); 2167 - void print_statistics(struct super_block *s); 2168 - char *reiserfs_hashname(int code); 2169 - 2170 - /* lbalance.c */ 2171 - int leaf_move_items(int shift_mode, struct tree_balance *tb, int mov_num, 2172 - int mov_bytes, struct buffer_head *Snew); 2173 - int leaf_shift_left(struct tree_balance *tb, int shift_num, int 
shift_bytes); 2174 - int leaf_shift_right(struct tree_balance *tb, int shift_num, int shift_bytes); 2175 - void leaf_delete_items(struct buffer_info *cur_bi, int last_first, int first, 2176 - int del_num, int del_bytes); 2177 - void leaf_insert_into_buf(struct buffer_info *bi, int before, 2178 - struct item_head *inserted_item_ih, 2179 - const char *inserted_item_body, int zeros_number); 2180 - void leaf_paste_in_buffer(struct buffer_info *bi, int pasted_item_num, 2181 - int pos_in_item, int paste_size, const char *body, 2182 - int zeros_number); 2183 - void leaf_cut_from_buffer(struct buffer_info *bi, int cut_item_num, 2184 - int pos_in_item, int cut_size); 2185 - void leaf_paste_entries(struct buffer_info *bi, int item_num, int before, 2186 - int new_entry_count, struct reiserfs_de_head *new_dehs, 2187 - const char *records, int paste_size); 2188 - /* ibalance.c */ 2189 - int balance_internal(struct tree_balance *, int, int, struct item_head *, 2190 - struct buffer_head **); 2191 - 2192 - /* do_balance.c */ 2193 - void do_balance_mark_leaf_dirty(struct tree_balance *tb, 2194 - struct buffer_head *bh, int flag); 2195 - #define do_balance_mark_internal_dirty do_balance_mark_leaf_dirty 2196 - #define do_balance_mark_sb_dirty do_balance_mark_leaf_dirty 2197 - 2198 - void do_balance(struct tree_balance *tb, struct item_head *ih, 2199 - const char *body, int flag); 2200 - void reiserfs_invalidate_buffer(struct tree_balance *tb, 2201 - struct buffer_head *bh); 2202 - 2203 - int get_left_neighbor_position(struct tree_balance *tb, int h); 2204 - int get_right_neighbor_position(struct tree_balance *tb, int h); 2205 - void replace_key(struct tree_balance *tb, struct buffer_head *, int, 2206 - struct buffer_head *, int); 2207 - void make_empty_node(struct buffer_info *); 2208 - struct buffer_head *get_FEB(struct tree_balance *); 2209 - 2210 - /* bitmap.c */ 2211 - 2212 - /* structure contains hints for block allocator, and it is a container for 2213 - * arguments, such as 
node, search path, transaction_handle, etc. */ 2214 - struct __reiserfs_blocknr_hint { 2215 - struct inode *inode; /* inode passed to allocator, if we allocate unf. nodes */ 2216 - sector_t block; /* file offset, in blocks */ 2217 - struct in_core_key key; 2218 - struct treepath *path; /* search path, used by allocator to deternine search_start by 2219 - * various ways */ 2220 - struct reiserfs_transaction_handle *th; /* transaction handle is needed to log super blocks and 2221 - * bitmap blocks changes */ 2222 - b_blocknr_t beg, end; 2223 - b_blocknr_t search_start; /* a field used to transfer search start value (block number) 2224 - * between different block allocator procedures 2225 - * (determine_search_start() and others) */ 2226 - int prealloc_size; /* is set in determine_prealloc_size() function, used by underlayed 2227 - * function that do actual allocation */ 2228 - 2229 - unsigned formatted_node:1; /* the allocator uses different polices for getting disk space for 2230 - * formatted/unformatted blocks with/without preallocation */ 2231 - unsigned preallocate:1; 2232 - }; 2233 - 2234 - typedef struct __reiserfs_blocknr_hint reiserfs_blocknr_hint_t; 2235 - 2236 - int reiserfs_parse_alloc_options(struct super_block *, char *); 2237 - void reiserfs_init_alloc_options(struct super_block *s); 2238 - 2239 - /* 2240 - * given a directory, this will tell you what packing locality 2241 - * to use for a new object underneat it. The locality is returned 2242 - * in disk byte order (le). 
2243 - */ 2244 - __le32 reiserfs_choose_packing(struct inode *dir); 2245 - 2246 - int reiserfs_init_bitmap_cache(struct super_block *sb); 2247 - void reiserfs_free_bitmap_cache(struct super_block *sb); 2248 - void reiserfs_cache_bitmap_metadata(struct super_block *sb, struct buffer_head *bh, struct reiserfs_bitmap_info *info); 2249 - struct buffer_head *reiserfs_read_bitmap_block(struct super_block *sb, unsigned int bitmap); 2250 - int is_reusable(struct super_block *s, b_blocknr_t block, int bit_value); 2251 - void reiserfs_free_block(struct reiserfs_transaction_handle *th, struct inode *, 2252 - b_blocknr_t, int for_unformatted); 2253 - int reiserfs_allocate_blocknrs(reiserfs_blocknr_hint_t *, b_blocknr_t *, int, 2254 - int); 2255 - static inline int reiserfs_new_form_blocknrs(struct tree_balance *tb, 2256 - b_blocknr_t * new_blocknrs, 2257 - int amount_needed) 2258 - { 2259 - reiserfs_blocknr_hint_t hint = { 2260 - .th = tb->transaction_handle, 2261 - .path = tb->tb_path, 2262 - .inode = NULL, 2263 - .key = tb->key, 2264 - .block = 0, 2265 - .formatted_node = 1 2266 - }; 2267 - return reiserfs_allocate_blocknrs(&hint, new_blocknrs, amount_needed, 2268 - 0); 2269 - } 2270 - 2271 - static inline int reiserfs_new_unf_blocknrs(struct reiserfs_transaction_handle 2272 - *th, struct inode *inode, 2273 - b_blocknr_t * new_blocknrs, 2274 - struct treepath *path, 2275 - sector_t block) 2276 - { 2277 - reiserfs_blocknr_hint_t hint = { 2278 - .th = th, 2279 - .path = path, 2280 - .inode = inode, 2281 - .block = block, 2282 - .formatted_node = 0, 2283 - .preallocate = 0 2284 - }; 2285 - return reiserfs_allocate_blocknrs(&hint, new_blocknrs, 1, 0); 2286 - } 2287 - 2288 - #ifdef REISERFS_PREALLOCATE 2289 - static inline int reiserfs_new_unf_blocknrs2(struct reiserfs_transaction_handle 2290 - *th, struct inode *inode, 2291 - b_blocknr_t * new_blocknrs, 2292 - struct treepath *path, 2293 - sector_t block) 2294 - { 2295 - reiserfs_blocknr_hint_t hint = { 2296 - .th = th, 2297 - 
.path = path, 2298 - .inode = inode, 2299 - .block = block, 2300 - .formatted_node = 0, 2301 - .preallocate = 1 2302 - }; 2303 - return reiserfs_allocate_blocknrs(&hint, new_blocknrs, 1, 0); 2304 - } 2305 - 2306 - void reiserfs_discard_prealloc(struct reiserfs_transaction_handle *th, 2307 - struct inode *inode); 2308 - void reiserfs_discard_all_prealloc(struct reiserfs_transaction_handle *th); 2309 - #endif 2310 - 2311 - /* hashes.c */ 2312 - __u32 keyed_hash(const signed char *msg, int len); 2313 - __u32 yura_hash(const signed char *msg, int len); 2314 - __u32 r5_hash(const signed char *msg, int len); 2315 - 2316 - #define reiserfs_set_le_bit __set_bit_le 2317 - #define reiserfs_test_and_set_le_bit __test_and_set_bit_le 2318 - #define reiserfs_clear_le_bit __clear_bit_le 2319 - #define reiserfs_test_and_clear_le_bit __test_and_clear_bit_le 2320 - #define reiserfs_test_le_bit test_bit_le 2321 - #define reiserfs_find_next_zero_le_bit find_next_zero_bit_le 2322 - 2323 - /* sometimes reiserfs_truncate may require to allocate few new blocks 2324 - to perform indirect2direct conversion. People probably used to 2325 - think, that truncate should work without problems on a filesystem 2326 - without free disk space. They may complain that they can not 2327 - truncate due to lack of free disk space. This spare space allows us 2328 - to not worry about it. 500 is probably too much, but it should be 2329 - absolutely safe */ 2330 - #define SPARE_SPACE 500 2331 - 2332 - /* prototypes from ioctl.c */ 2333 - long reiserfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); 2334 - long reiserfs_compat_ioctl(struct file *filp, 2335 - unsigned int cmd, unsigned long arg); 2336 - int reiserfs_unpack(struct inode *inode, struct file *filp); 2337 - 2338 - #endif /* __KERNEL__ */ 2339 45 2340 46 #endif /* _LINUX_REISER_FS_H */
-63
include/linux/reiserfs_fs_i.h
···
1 - #ifndef _REISER_FS_I
2 - #define _REISER_FS_I
3 -
4 - #include <linux/list.h>
5 -
6 - struct reiserfs_journal_list;
7 -
8 - /** bitmasks for i_flags field in reiserfs-specific part of inode */
9 - typedef enum {
10 - /** this says what format of key do all items (but stat data) of
11 - an object have. If this is set, that format is 3.6 otherwise
12 - - 3.5 */
13 - i_item_key_version_mask = 0x0001,
14 - /** If this is unset, object has 3.5 stat data, otherwise, it has
15 - 3.6 stat data with 64bit size, 32bit nlink etc. */
16 - i_stat_data_version_mask = 0x0002,
17 - /** file might need tail packing on close */
18 - i_pack_on_close_mask = 0x0004,
19 - /** don't pack tail of file */
20 - i_nopack_mask = 0x0008,
21 - /** If those is set, "safe link" was created for this file during
22 - truncate or unlink. Safe link is used to avoid leakage of disk
23 - space on crash with some files open, but unlinked. */
24 - i_link_saved_unlink_mask = 0x0010,
25 - i_link_saved_truncate_mask = 0x0020,
26 - i_has_xattr_dir = 0x0040,
27 - i_data_log = 0x0080,
28 - } reiserfs_inode_flags;
29 -
30 - struct reiserfs_inode_info {
31 - __u32 i_key[4]; /* key is still 4 32 bit integers */
32 - /** transient inode flags that are never stored on disk. Bitmasks
33 - for this field are defined above. */
34 - __u32 i_flags;
35 -
36 - __u32 i_first_direct_byte; // offset of first byte stored in direct item.
37 -
38 - /* copy of persistent inode flags read from sd_attrs. */
39 - __u32 i_attrs;
40 -
41 - int i_prealloc_block; /* first unused block of a sequence of unused blocks */
42 - int i_prealloc_count; /* length of that sequence */
43 - struct list_head i_prealloc_list; /* per-transaction list of inodes which
44 - * have preallocated blocks */
45 -
46 - unsigned new_packing_locality:1; /* new_packig_locality is created; new blocks
47 - * for the contents of this directory should be
48 - * displaced */
49 -
50 - /* we use these for fsync or O_SYNC to decide which transaction
51 - ** needs to be committed in order for this inode to be properly
52 - ** flushed */
53 - unsigned int i_trans_id;
54 - struct reiserfs_journal_list *i_jl;
55 - atomic_t openers;
56 - struct mutex tailpack;
57 - #ifdef CONFIG_REISERFS_FS_XATTR
58 - struct rw_semaphore i_xattr_sem;
59 - #endif
60 - struct inode vfs_inode;
61 - };
62 -
63 - #endif
-554
include/linux/reiserfs_fs_sb.h
···
1 - /* Copyright 1996-2000 Hans Reiser, see reiserfs/README for licensing
2 - * and copyright details */
3 -
4 - #ifndef _LINUX_REISER_FS_SB
5 - #define _LINUX_REISER_FS_SB
6 -
7 - #ifdef __KERNEL__
8 - #include <linux/workqueue.h>
9 - #include <linux/rwsem.h>
10 - #include <linux/mutex.h>
11 - #include <linux/sched.h>
12 - #endif
13 -
14 - typedef enum {
15 - reiserfs_attrs_cleared = 0x00000001,
16 - } reiserfs_super_block_flags;
17 -
18 - /* struct reiserfs_super_block accessors/mutators
19 - * since this is a disk structure, it will always be in
20 - * little endian format. */
21 - #define sb_block_count(sbp) (le32_to_cpu((sbp)->s_v1.s_block_count))
22 - #define set_sb_block_count(sbp,v) ((sbp)->s_v1.s_block_count = cpu_to_le32(v))
23 - #define sb_free_blocks(sbp) (le32_to_cpu((sbp)->s_v1.s_free_blocks))
24 - #define set_sb_free_blocks(sbp,v) ((sbp)->s_v1.s_free_blocks = cpu_to_le32(v))
25 - #define sb_root_block(sbp) (le32_to_cpu((sbp)->s_v1.s_root_block))
26 - #define set_sb_root_block(sbp,v) ((sbp)->s_v1.s_root_block = cpu_to_le32(v))
27 -
28 - #define sb_jp_journal_1st_block(sbp) \
29 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_1st_block))
30 - #define set_sb_jp_journal_1st_block(sbp,v) \
31 - ((sbp)->s_v1.s_journal.jp_journal_1st_block = cpu_to_le32(v))
32 - #define sb_jp_journal_dev(sbp) \
33 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_dev))
34 - #define set_sb_jp_journal_dev(sbp,v) \
35 - ((sbp)->s_v1.s_journal.jp_journal_dev = cpu_to_le32(v))
36 - #define sb_jp_journal_size(sbp) \
37 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_size))
38 - #define set_sb_jp_journal_size(sbp,v) \
39 - ((sbp)->s_v1.s_journal.jp_journal_size = cpu_to_le32(v))
40 - #define sb_jp_journal_trans_max(sbp) \
41 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_trans_max))
42 - #define set_sb_jp_journal_trans_max(sbp,v) \
43 - ((sbp)->s_v1.s_journal.jp_journal_trans_max = cpu_to_le32(v))
44 - #define sb_jp_journal_magic(sbp) \
45 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_magic))
46 - #define set_sb_jp_journal_magic(sbp,v) \
47 - ((sbp)->s_v1.s_journal.jp_journal_magic = cpu_to_le32(v))
48 - #define sb_jp_journal_max_batch(sbp) \
49 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_max_batch))
50 - #define set_sb_jp_journal_max_batch(sbp,v) \
51 - ((sbp)->s_v1.s_journal.jp_journal_max_batch = cpu_to_le32(v))
52 - #define sb_jp_jourmal_max_commit_age(sbp) \
53 - (le32_to_cpu((sbp)->s_v1.s_journal.jp_journal_max_commit_age))
54 - #define set_sb_jp_journal_max_commit_age(sbp,v) \
55 - ((sbp)->s_v1.s_journal.jp_journal_max_commit_age = cpu_to_le32(v))
56 -
57 - #define sb_blocksize(sbp) (le16_to_cpu((sbp)->s_v1.s_blocksize))
58 - #define set_sb_blocksize(sbp,v) ((sbp)->s_v1.s_blocksize = cpu_to_le16(v))
59 - #define sb_oid_maxsize(sbp) (le16_to_cpu((sbp)->s_v1.s_oid_maxsize))
60 - #define set_sb_oid_maxsize(sbp,v) ((sbp)->s_v1.s_oid_maxsize = cpu_to_le16(v))
61 - #define sb_oid_cursize(sbp) (le16_to_cpu((sbp)->s_v1.s_oid_cursize))
62 - #define set_sb_oid_cursize(sbp,v) ((sbp)->s_v1.s_oid_cursize = cpu_to_le16(v))
63 - #define sb_umount_state(sbp) (le16_to_cpu((sbp)->s_v1.s_umount_state))
64 - #define set_sb_umount_state(sbp,v) ((sbp)->s_v1.s_umount_state = cpu_to_le16(v))
65 - #define sb_fs_state(sbp) (le16_to_cpu((sbp)->s_v1.s_fs_state))
66 - #define set_sb_fs_state(sbp,v) ((sbp)->s_v1.s_fs_state = cpu_to_le16(v))
67 - #define sb_hash_function_code(sbp) \
68 - (le32_to_cpu((sbp)->s_v1.s_hash_function_code))
69 - #define set_sb_hash_function_code(sbp,v) \
70 - ((sbp)->s_v1.s_hash_function_code = cpu_to_le32(v))
71 - #define sb_tree_height(sbp) (le16_to_cpu((sbp)->s_v1.s_tree_height))
72 - #define set_sb_tree_height(sbp,v) ((sbp)->s_v1.s_tree_height = cpu_to_le16(v))
73 - #define sb_bmap_nr(sbp) (le16_to_cpu((sbp)->s_v1.s_bmap_nr))
74 - #define set_sb_bmap_nr(sbp,v) ((sbp)->s_v1.s_bmap_nr = cpu_to_le16(v))
75 - #define sb_version(sbp) (le16_to_cpu((sbp)->s_v1.s_version))
76 - #define set_sb_version(sbp,v) ((sbp)->s_v1.s_version = cpu_to_le16(v))
77 -
78 - #define sb_mnt_count(sbp) (le16_to_cpu((sbp)->s_mnt_count))
79 - #define set_sb_mnt_count(sbp, v) ((sbp)->s_mnt_count = cpu_to_le16(v))
80 -
81 - #define sb_reserved_for_journal(sbp) \
82 - (le16_to_cpu((sbp)->s_v1.s_reserved_for_journal))
83 - #define set_sb_reserved_for_journal(sbp,v) \
84 - ((sbp)->s_v1.s_reserved_for_journal = cpu_to_le16(v))
85 -
86 - /* LOGGING -- */
87 -
88 - /* These all interelate for performance.
89 - **
90 - ** If the journal block count is smaller than n transactions, you lose speed.
91 - ** I don't know what n is yet, I'm guessing 8-16.
92 - **
93 - ** typical transaction size depends on the application, how often fsync is
94 - ** called, and how many metadata blocks you dirty in a 30 second period.
95 - ** The more small files (<16k) you use, the larger your transactions will
96 - ** be.
97 - **
98 - ** If your journal fills faster than dirty buffers get flushed to disk, it must flush them before allowing the journal
99 - ** to wrap, which slows things down. If you need high speed meta data updates, the journal should be big enough
100 - ** to prevent wrapping before dirty meta blocks get to disk.
101 - **
102 - ** If the batch max is smaller than the transaction max, you'll waste space at the end of the journal
103 - ** because journal_end sets the next transaction to start at 0 if the next transaction has any chance of wrapping.
104 - **
105 - ** The large the batch max age, the better the speed, and the more meta data changes you'll lose after a crash.
106 - **
107 - */
108 -
109 - /* don't mess with these for a while */
110 - /* we have a node size define somewhere in reiserfs_fs.h. -Hans */
111 - #define JOURNAL_BLOCK_SIZE 4096 /* BUG gotta get rid of this */
112 - #define JOURNAL_MAX_CNODE 1500 /* max cnodes to allocate. */
113 - #define JOURNAL_HASH_SIZE 8192
114 - #define JOURNAL_NUM_BITMAPS 5 /* number of copies of the bitmaps to have floating. Must be >= 2 */
115 -
116 - /* One of these for every block in every transaction
117 - ** Each one is in two hash tables. First, a hash of the current transaction, and after journal_end, a
118 - ** hash of all the in memory transactions.
119 - ** next and prev are used by the current transaction (journal_hash).
120 - ** hnext and hprev are used by journal_list_hash. If a block is in more than one transaction, the journal_list_hash
121 - ** links it in multiple times. This allows flush_journal_list to remove just the cnode belonging
122 - ** to a given transaction.
123 - */
124 - struct reiserfs_journal_cnode {
125 - struct buffer_head *bh; /* real buffer head */
126 - struct super_block *sb; /* dev of real buffer head */
127 - __u32 blocknr; /* block number of real buffer head, == 0 when buffer on disk */
128 - unsigned long state;
129 - struct reiserfs_journal_list *jlist; /* journal list this cnode lives in */
130 - struct reiserfs_journal_cnode *next; /* next in transaction list */
131 - struct reiserfs_journal_cnode *prev; /* prev in transaction list */
132 - struct reiserfs_journal_cnode *hprev; /* prev in hash list */
133 - struct reiserfs_journal_cnode *hnext; /* next in hash list */
134 - };
135 -
136 - struct reiserfs_bitmap_node {
137 - int id;
138 - char *data;
139 - struct list_head list;
140 - };
141 -
142 - struct reiserfs_list_bitmap {
143 - struct reiserfs_journal_list *journal_list;
144 - struct reiserfs_bitmap_node **bitmaps;
145 - };
146 -
147 - /*
148 - ** one of these for each transaction. The most important part here is the j_realblock.
149 - ** this list of cnodes is used to hash all the blocks in all the commits, to mark all the
150 - ** real buffer heads dirty once all the commits hit the disk,
151 - ** and to make sure every real block in a transaction is on disk before allowing the log area
152 - ** to be overwritten */
153 - struct reiserfs_journal_list {
154 - unsigned long j_start;
155 - unsigned long j_state;
156 - unsigned long j_len;
157 - atomic_t j_nonzerolen;
158 - atomic_t j_commit_left;
159 - atomic_t j_older_commits_done; /* all commits older than this on disk */
160 - struct mutex j_commit_mutex;
161 - unsigned int j_trans_id;
162 - time_t j_timestamp;
163 - struct reiserfs_list_bitmap *j_list_bitmap;
164 - struct buffer_head *j_commit_bh; /* commit buffer head */
165 - struct reiserfs_journal_cnode *j_realblock;
166 - struct reiserfs_journal_cnode *j_freedlist; /* list of buffers that were freed during this trans. free each of these on flush */
167 - /* time ordered list of all active transactions */
168 - struct list_head j_list;
169 -
170 - /* time ordered list of all transactions we haven't tried to flush yet */
171 - struct list_head j_working_list;
172 -
173 - /* list of tail conversion targets in need of flush before commit */
174 - struct list_head j_tail_bh_list;
175 - /* list of data=ordered buffers in need of flush before commit */
176 - struct list_head j_bh_list;
177 - int j_refcount;
178 - };
179 -
180 - struct reiserfs_journal {
181 - struct buffer_head **j_ap_blocks; /* journal blocks on disk */
182 - struct reiserfs_journal_cnode *j_last; /* newest journal block */
183 - struct reiserfs_journal_cnode *j_first; /* oldest journal block. start here for traverse */
184 -
185 - struct block_device *j_dev_bd;
186 - fmode_t j_dev_mode;
187 - int j_1st_reserved_block; /* first block on s_dev of reserved area journal */
188 -
189 - unsigned long j_state;
190 - unsigned int j_trans_id;
191 - unsigned long j_mount_id;
192 - unsigned long j_start; /* start of current waiting commit (index into j_ap_blocks) */
193 - unsigned long j_len; /* length of current waiting commit */
194 - unsigned long j_len_alloc; /* number of buffers requested by journal_begin() */
195 - atomic_t j_wcount; /* count of writers for current commit */
196 - unsigned long j_bcount; /* batch count. allows turning X transactions into 1 */
197 - unsigned long j_first_unflushed_offset; /* first unflushed transactions offset */
198 - unsigned j_last_flush_trans_id; /* last fully flushed journal timestamp */
199 - struct buffer_head *j_header_bh;
200 -
201 - time_t j_trans_start_time; /* time this transaction started */
202 - struct mutex j_mutex;
203 - struct mutex j_flush_mutex;
204 - wait_queue_head_t j_join_wait; /* wait for current transaction to finish before starting new one */
205 - atomic_t j_jlock; /* lock for j_join_wait */
206 - int j_list_bitmap_index; /* number of next list bitmap to use */
207 - int j_must_wait; /* no more journal begins allowed. MUST sleep on j_join_wait */
208 - int j_next_full_flush; /* next journal_end will flush all journal list */
209 - int j_next_async_flush; /* next journal_end will flush all async commits */
210 -
211 - int j_cnode_used; /* number of cnodes on the used list */
212 - int j_cnode_free; /* number of cnodes on the free list */
213 -
214 - unsigned int j_trans_max; /* max number of blocks in a transaction.
*/ 215 - unsigned int j_max_batch; /* max number of blocks to batch into a trans */ 216 - unsigned int j_max_commit_age; /* in seconds, how old can an async commit be */ 217 - unsigned int j_max_trans_age; /* in seconds, how old can a transaction be */ 218 - unsigned int j_default_max_commit_age; /* the default for the max commit age */ 219 - 220 - struct reiserfs_journal_cnode *j_cnode_free_list; 221 - struct reiserfs_journal_cnode *j_cnode_free_orig; /* orig pointer returned from vmalloc */ 222 - 223 - struct reiserfs_journal_list *j_current_jl; 224 - int j_free_bitmap_nodes; 225 - int j_used_bitmap_nodes; 226 - 227 - int j_num_lists; /* total number of active transactions */ 228 - int j_num_work_lists; /* number that need attention from kreiserfsd */ 229 - 230 - /* debugging to make sure things are flushed in order */ 231 - unsigned int j_last_flush_id; 232 - 233 - /* debugging to make sure things are committed in order */ 234 - unsigned int j_last_commit_id; 235 - 236 - struct list_head j_bitmap_nodes; 237 - struct list_head j_dirty_buffers; 238 - spinlock_t j_dirty_buffers_lock; /* protects j_dirty_buffers */ 239 - 240 - /* list of all active transactions */ 241 - struct list_head j_journal_list; 242 - /* lists that haven't been touched by writeback attempts */ 243 - struct list_head j_working_list; 244 - 245 - struct reiserfs_list_bitmap j_list_bitmap[JOURNAL_NUM_BITMAPS]; /* array of bitmaps to record the deleted blocks */ 246 - struct reiserfs_journal_cnode *j_hash_table[JOURNAL_HASH_SIZE]; /* hash table for real buffer heads in current trans */ 247 - struct reiserfs_journal_cnode *j_list_hash_table[JOURNAL_HASH_SIZE]; /* hash table for all the real buffer heads in all 248 - the transactions */ 249 - struct list_head j_prealloc_list; /* list of inodes which have preallocated blocks */ 250 - int j_persistent_trans; 251 - unsigned long j_max_trans_size; 252 - unsigned long j_max_batch_size; 253 - 254 - int j_errno; 255 - 256 - /* when flushing ordered 
buffers, throttle new ordered writers */ 257 - struct delayed_work j_work; 258 - struct super_block *j_work_sb; 259 - atomic_t j_async_throttle; 260 - }; 261 - 262 - enum journal_state_bits { 263 - J_WRITERS_BLOCKED = 1, /* set when new writers not allowed */ 264 - J_WRITERS_QUEUED, /* set when log is full due to too many writers */ 265 - J_ABORTED, /* set when log is aborted */ 266 - }; 267 - 268 - #define JOURNAL_DESC_MAGIC "ReIsErLB" /* ick. magic string to find desc blocks in the journal */ 269 - 270 - typedef __u32(*hashf_t) (const signed char *, int); 271 - 272 - struct reiserfs_bitmap_info { 273 - __u32 free_count; 274 - }; 275 - 276 - struct proc_dir_entry; 277 - 278 - #if defined( CONFIG_PROC_FS ) && defined( CONFIG_REISERFS_PROC_INFO ) 279 - typedef unsigned long int stat_cnt_t; 280 - typedef struct reiserfs_proc_info_data { 281 - spinlock_t lock; 282 - int exiting; 283 - int max_hash_collisions; 284 - 285 - stat_cnt_t breads; 286 - stat_cnt_t bread_miss; 287 - stat_cnt_t search_by_key; 288 - stat_cnt_t search_by_key_fs_changed; 289 - stat_cnt_t search_by_key_restarted; 290 - 291 - stat_cnt_t insert_item_restarted; 292 - stat_cnt_t paste_into_item_restarted; 293 - stat_cnt_t cut_from_item_restarted; 294 - stat_cnt_t delete_solid_item_restarted; 295 - stat_cnt_t delete_item_restarted; 296 - 297 - stat_cnt_t leaked_oid; 298 - stat_cnt_t leaves_removable; 299 - 300 - /* balances per level. Use explicit 5 as MAX_HEIGHT is not visible yet. 
*/ 301 - stat_cnt_t balance_at[5]; /* XXX */ 302 - /* sbk == search_by_key */ 303 - stat_cnt_t sbk_read_at[5]; /* XXX */ 304 - stat_cnt_t sbk_fs_changed[5]; 305 - stat_cnt_t sbk_restarted[5]; 306 - stat_cnt_t items_at[5]; /* XXX */ 307 - stat_cnt_t free_at[5]; /* XXX */ 308 - stat_cnt_t can_node_be_removed[5]; /* XXX */ 309 - long int lnum[5]; /* XXX */ 310 - long int rnum[5]; /* XXX */ 311 - long int lbytes[5]; /* XXX */ 312 - long int rbytes[5]; /* XXX */ 313 - stat_cnt_t get_neighbors[5]; 314 - stat_cnt_t get_neighbors_restart[5]; 315 - stat_cnt_t need_l_neighbor[5]; 316 - stat_cnt_t need_r_neighbor[5]; 317 - 318 - stat_cnt_t free_block; 319 - struct __scan_bitmap_stats { 320 - stat_cnt_t call; 321 - stat_cnt_t wait; 322 - stat_cnt_t bmap; 323 - stat_cnt_t retry; 324 - stat_cnt_t in_journal_hint; 325 - stat_cnt_t in_journal_nohint; 326 - stat_cnt_t stolen; 327 - } scan_bitmap; 328 - struct __journal_stats { 329 - stat_cnt_t in_journal; 330 - stat_cnt_t in_journal_bitmap; 331 - stat_cnt_t in_journal_reusable; 332 - stat_cnt_t lock_journal; 333 - stat_cnt_t lock_journal_wait; 334 - stat_cnt_t journal_being; 335 - stat_cnt_t journal_relock_writers; 336 - stat_cnt_t journal_relock_wcount; 337 - stat_cnt_t mark_dirty; 338 - stat_cnt_t mark_dirty_already; 339 - stat_cnt_t mark_dirty_notjournal; 340 - stat_cnt_t restore_prepared; 341 - stat_cnt_t prepare; 342 - stat_cnt_t prepare_retry; 343 - } journal; 344 - } reiserfs_proc_info_data_t; 345 - #else 346 - typedef struct reiserfs_proc_info_data { 347 - } reiserfs_proc_info_data_t; 348 - #endif 349 - 350 - /* reiserfs union of in-core super block data */ 351 - struct reiserfs_sb_info { 352 - struct buffer_head *s_sbh; /* Buffer containing the super block */ 353 - /* both the comment and the choice of 354 - name are unclear for s_rs -Hans */ 355 - struct reiserfs_super_block *s_rs; /* Pointer to the super block in the buffer */ 356 - struct reiserfs_bitmap_info *s_ap_bitmap; 357 - struct reiserfs_journal *s_journal; /* 
pointer to journal information */ 358 - unsigned short s_mount_state; /* reiserfs state (valid, invalid) */ 359 - 360 - /* Serialize writers access, replace the old bkl */ 361 - struct mutex lock; 362 - /* Owner of the lock (can be recursive) */ 363 - struct task_struct *lock_owner; 364 - /* Depth of the lock, start from -1 like the bkl */ 365 - int lock_depth; 366 - 367 - /* Comment? -Hans */ 368 - void (*end_io_handler) (struct buffer_head *, int); 369 - hashf_t s_hash_function; /* pointer to function which is used 370 - to sort names in directory. Set on 371 - mount */ 372 - unsigned long s_mount_opt; /* reiserfs's mount options are set 373 - here (currently - NOTAIL, NOLOG, 374 - REPLAYONLY) */ 375 - 376 - struct { /* This is a structure that describes block allocator options */ 377 - unsigned long bits; /* Bitfield for enable/disable kind of options */ 378 - unsigned long large_file_size; /* size started from which we consider file to be a large one(in blocks) */ 379 - int border; /* percentage of disk, border takes */ 380 - int preallocmin; /* Minimal file size (in blocks) starting from which we do preallocations */ 381 - int preallocsize; /* Number of blocks we try to prealloc when file 382 - reaches preallocmin size (in blocks) or 383 - prealloc_list is empty. */ 384 - } s_alloc_options; 385 - 386 - /* Comment? -Hans */ 387 - wait_queue_head_t s_wait; 388 - /* To be obsoleted soon by per buffer seals.. -Hans */ 389 - atomic_t s_generation_counter; // increased by one every time the 390 - // tree gets re-balanced 391 - unsigned long s_properties; /* File system properties. 
Currently holds 392 - on-disk FS format */ 393 - 394 - /* session statistics */ 395 - int s_disk_reads; 396 - int s_disk_writes; 397 - int s_fix_nodes; 398 - int s_do_balance; 399 - int s_unneeded_left_neighbor; 400 - int s_good_search_by_key_reada; 401 - int s_bmaps; 402 - int s_bmaps_without_search; 403 - int s_direct2indirect; 404 - int s_indirect2direct; 405 - /* set up when it's ok for reiserfs_read_inode2() to read from 406 - disk inode with nlink==0. Currently this is only used during 407 - finish_unfinished() processing at mount time */ 408 - int s_is_unlinked_ok; 409 - reiserfs_proc_info_data_t s_proc_info_data; 410 - struct proc_dir_entry *procdir; 411 - int reserved_blocks; /* amount of blocks reserved for further allocations */ 412 - spinlock_t bitmap_lock; /* this lock on now only used to protect reserved_blocks variable */ 413 - struct dentry *priv_root; /* root of /.reiserfs_priv */ 414 - struct dentry *xattr_root; /* root of /.reiserfs_priv/xattrs */ 415 - int j_errno; 416 - #ifdef CONFIG_QUOTA 417 - char *s_qf_names[MAXQUOTAS]; 418 - int s_jquota_fmt; 419 - #endif 420 - char *s_jdev; /* Stored jdev for mount option showing */ 421 - #ifdef CONFIG_REISERFS_CHECK 422 - 423 - struct tree_balance *cur_tb; /* 424 - * Detects whether more than one 425 - * copy of tb exists per superblock 426 - * as a means of checking whether 427 - * do_balance is executing concurrently 428 - * against another tree reader/writer 429 - * on a same mount point. 430 - */ 431 - #endif 432 - }; 433 - 434 - /* Definitions of reiserfs on-disk properties: */ 435 - #define REISERFS_3_5 0 436 - #define REISERFS_3_6 1 437 - #define REISERFS_OLD_FORMAT 2 438 - 439 - enum reiserfs_mount_options { 440 - /* Mount options */ 441 - REISERFS_LARGETAIL, /* large tails will be created in a session */ 442 - REISERFS_SMALLTAIL, /* small (for files less than block size) tails will be created in a session */ 443 - REPLAYONLY, /* replay journal and return 0. 
Use by fsck */ 444 - REISERFS_CONVERT, /* -o conv: causes conversion of old 445 - format super block to the new 446 - format. If not specified - old 447 - partition will be dealt with in a 448 - manner of 3.5.x */ 449 - 450 - /* -o hash={tea, rupasov, r5, detect} is meant for properly mounting 451 - ** reiserfs disks from 3.5.19 or earlier. 99% of the time, this option 452 - ** is not required. If the normal autodection code can't determine which 453 - ** hash to use (because both hashes had the same value for a file) 454 - ** use this option to force a specific hash. It won't allow you to override 455 - ** the existing hash on the FS, so if you have a tea hash disk, and mount 456 - ** with -o hash=rupasov, the mount will fail. 457 - */ 458 - FORCE_TEA_HASH, /* try to force tea hash on mount */ 459 - FORCE_RUPASOV_HASH, /* try to force rupasov hash on mount */ 460 - FORCE_R5_HASH, /* try to force rupasov hash on mount */ 461 - FORCE_HASH_DETECT, /* try to detect hash function on mount */ 462 - 463 - REISERFS_DATA_LOG, 464 - REISERFS_DATA_ORDERED, 465 - REISERFS_DATA_WRITEBACK, 466 - 467 - /* used for testing experimental features, makes benchmarking new 468 - features with and without more convenient, should never be used by 469 - users in any code shipped to users (ideally) */ 470 - 471 - REISERFS_NO_BORDER, 472 - REISERFS_NO_UNHASHED_RELOCATION, 473 - REISERFS_HASHED_RELOCATION, 474 - REISERFS_ATTRS, 475 - REISERFS_XATTRS_USER, 476 - REISERFS_POSIXACL, 477 - REISERFS_EXPOSE_PRIVROOT, 478 - REISERFS_BARRIER_NONE, 479 - REISERFS_BARRIER_FLUSH, 480 - 481 - /* Actions on error */ 482 - REISERFS_ERROR_PANIC, 483 - REISERFS_ERROR_RO, 484 - REISERFS_ERROR_CONTINUE, 485 - 486 - REISERFS_USRQUOTA, /* User quota option specified */ 487 - REISERFS_GRPQUOTA, /* Group quota option specified */ 488 - 489 - REISERFS_TEST1, 490 - REISERFS_TEST2, 491 - REISERFS_TEST3, 492 - REISERFS_TEST4, 493 - REISERFS_UNSUPPORTED_OPT, 494 - }; 495 - 496 - #define reiserfs_r5_hash(s) 
(REISERFS_SB(s)->s_mount_opt & (1 << FORCE_R5_HASH)) 497 - #define reiserfs_rupasov_hash(s) (REISERFS_SB(s)->s_mount_opt & (1 << FORCE_RUPASOV_HASH)) 498 - #define reiserfs_tea_hash(s) (REISERFS_SB(s)->s_mount_opt & (1 << FORCE_TEA_HASH)) 499 - #define reiserfs_hash_detect(s) (REISERFS_SB(s)->s_mount_opt & (1 << FORCE_HASH_DETECT)) 500 - #define reiserfs_no_border(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_NO_BORDER)) 501 - #define reiserfs_no_unhashed_relocation(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_NO_UNHASHED_RELOCATION)) 502 - #define reiserfs_hashed_relocation(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_HASHED_RELOCATION)) 503 - #define reiserfs_test4(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_TEST4)) 504 - 505 - #define have_large_tails(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_LARGETAIL)) 506 - #define have_small_tails(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_SMALLTAIL)) 507 - #define replay_only(s) (REISERFS_SB(s)->s_mount_opt & (1 << REPLAYONLY)) 508 - #define reiserfs_attrs(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_ATTRS)) 509 - #define old_format_only(s) (REISERFS_SB(s)->s_properties & (1 << REISERFS_3_5)) 510 - #define convert_reiserfs(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_CONVERT)) 511 - #define reiserfs_data_log(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_DATA_LOG)) 512 - #define reiserfs_data_ordered(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_DATA_ORDERED)) 513 - #define reiserfs_data_writeback(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_DATA_WRITEBACK)) 514 - #define reiserfs_xattrs_user(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_XATTRS_USER)) 515 - #define reiserfs_posixacl(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_POSIXACL)) 516 - #define reiserfs_expose_privroot(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_EXPOSE_PRIVROOT)) 517 - #define reiserfs_xattrs_optional(s) (reiserfs_xattrs_user(s) || reiserfs_posixacl(s)) 518 - #define reiserfs_barrier_none(s) 
(REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_BARRIER_NONE)) 519 - #define reiserfs_barrier_flush(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_BARRIER_FLUSH)) 520 - 521 - #define reiserfs_error_panic(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_ERROR_PANIC)) 522 - #define reiserfs_error_ro(s) (REISERFS_SB(s)->s_mount_opt & (1 << REISERFS_ERROR_RO)) 523 - 524 - void reiserfs_file_buffer(struct buffer_head *bh, int list); 525 - extern struct file_system_type reiserfs_fs_type; 526 - int reiserfs_resize(struct super_block *, unsigned long); 527 - 528 - #define CARRY_ON 0 529 - #define SCHEDULE_OCCURRED 1 530 - 531 - #define SB_BUFFER_WITH_SB(s) (REISERFS_SB(s)->s_sbh) 532 - #define SB_JOURNAL(s) (REISERFS_SB(s)->s_journal) 533 - #define SB_JOURNAL_1st_RESERVED_BLOCK(s) (SB_JOURNAL(s)->j_1st_reserved_block) 534 - #define SB_JOURNAL_LEN_FREE(s) (SB_JOURNAL(s)->j_journal_len_free) 535 - #define SB_AP_BITMAP(s) (REISERFS_SB(s)->s_ap_bitmap) 536 - 537 - #define SB_DISK_JOURNAL_HEAD(s) (SB_JOURNAL(s)->j_header_bh->) 538 - 539 - /* A safe version of the "bdevname", which returns the "s_id" field of 540 - * a superblock or else "Null superblock" if the super block is NULL. 541 - */ 542 - static inline char *reiserfs_bdevname(struct super_block *s) 543 - { 544 - return (s == NULL) ? "Null superblock" : s->s_id; 545 - } 546 - 547 - #define reiserfs_is_journal_aborted(journal) (unlikely (__reiserfs_is_journal_aborted (journal))) 548 - static inline int __reiserfs_is_journal_aborted(struct reiserfs_journal 549 - *journal) 550 - { 551 - return test_bit(J_ABORTED, &journal->j_state); 552 - } 553 - 554 - #endif /* _LINUX_REISER_FS_SB */
-128
include/linux/reiserfs_xattr.h
··· 21 21 size_t length; 22 22 }; 23 23 24 - #ifdef __KERNEL__ 25 - 26 - #include <linux/init.h> 27 - #include <linux/list.h> 28 - #include <linux/rwsem.h> 29 - #include <linux/reiserfs_fs_i.h> 30 - #include <linux/reiserfs_fs.h> 31 - 32 - struct inode; 33 - struct dentry; 34 - struct iattr; 35 - struct super_block; 36 - struct nameidata; 37 - 38 - int reiserfs_xattr_register_handlers(void) __init; 39 - void reiserfs_xattr_unregister_handlers(void); 40 - int reiserfs_xattr_init(struct super_block *sb, int mount_flags); 41 - int reiserfs_lookup_privroot(struct super_block *sb); 42 - int reiserfs_delete_xattrs(struct inode *inode); 43 - int reiserfs_chown_xattrs(struct inode *inode, struct iattr *attrs); 44 - int reiserfs_permission(struct inode *inode, int mask); 45 - 46 - #ifdef CONFIG_REISERFS_FS_XATTR 47 - #define has_xattr_dir(inode) (REISERFS_I(inode)->i_flags & i_has_xattr_dir) 48 - ssize_t reiserfs_getxattr(struct dentry *dentry, const char *name, 49 - void *buffer, size_t size); 50 - int reiserfs_setxattr(struct dentry *dentry, const char *name, 51 - const void *value, size_t size, int flags); 52 - ssize_t reiserfs_listxattr(struct dentry *dentry, char *buffer, size_t size); 53 - int reiserfs_removexattr(struct dentry *dentry, const char *name); 54 - 55 - int reiserfs_xattr_get(struct inode *, const char *, void *, size_t); 56 - int reiserfs_xattr_set(struct inode *, const char *, const void *, size_t, int); 57 - int reiserfs_xattr_set_handle(struct reiserfs_transaction_handle *, 58 - struct inode *, const char *, const void *, 59 - size_t, int); 60 - 61 - extern const struct xattr_handler reiserfs_xattr_user_handler; 62 - extern const struct xattr_handler reiserfs_xattr_trusted_handler; 63 - extern const struct xattr_handler reiserfs_xattr_security_handler; 64 - #ifdef CONFIG_REISERFS_FS_SECURITY 65 - int reiserfs_security_init(struct inode *dir, struct inode *inode, 66 - const struct qstr *qstr, 67 - struct reiserfs_security_handle *sec); 68 - int 
reiserfs_security_write(struct reiserfs_transaction_handle *th, 69 - struct inode *inode, 70 - struct reiserfs_security_handle *sec); 71 - void reiserfs_security_free(struct reiserfs_security_handle *sec); 72 - #endif 73 - 74 - static inline int reiserfs_xattrs_initialized(struct super_block *sb) 75 - { 76 - return REISERFS_SB(sb)->priv_root != NULL; 77 - } 78 - 79 - #define xattr_size(size) ((size) + sizeof(struct reiserfs_xattr_header)) 80 - static inline loff_t reiserfs_xattr_nblocks(struct inode *inode, loff_t size) 81 - { 82 - loff_t ret = 0; 83 - if (reiserfs_file_data_log(inode)) { 84 - ret = _ROUND_UP(xattr_size(size), inode->i_sb->s_blocksize); 85 - ret >>= inode->i_sb->s_blocksize_bits; 86 - } 87 - return ret; 88 - } 89 - 90 - /* We may have to create up to 3 objects: xattr root, xattr dir, xattr file. 91 - * Let's try to be smart about it. 92 - * xattr root: We cache it. If it's not cached, we may need to create it. 93 - * xattr dir: If anything has been loaded for this inode, we can set a flag 94 - * saying so. 95 - * xattr file: Since we don't cache xattrs, we can't tell. We always include 96 - * blocks for it. 97 - * 98 - * However, since root and dir can be created between calls - YOU MUST SAVE 99 - * THIS VALUE. 
100 - */ 101 - static inline size_t reiserfs_xattr_jcreate_nblocks(struct inode *inode) 102 - { 103 - size_t nblocks = JOURNAL_BLOCKS_PER_OBJECT(inode->i_sb); 104 - 105 - if ((REISERFS_I(inode)->i_flags & i_has_xattr_dir) == 0) { 106 - nblocks += JOURNAL_BLOCKS_PER_OBJECT(inode->i_sb); 107 - if (!REISERFS_SB(inode->i_sb)->xattr_root->d_inode) 108 - nblocks += JOURNAL_BLOCKS_PER_OBJECT(inode->i_sb); 109 - } 110 - 111 - return nblocks; 112 - } 113 - 114 - static inline void reiserfs_init_xattr_rwsem(struct inode *inode) 115 - { 116 - init_rwsem(&REISERFS_I(inode)->i_xattr_sem); 117 - } 118 - 119 - #else 120 - 121 - #define reiserfs_getxattr NULL 122 - #define reiserfs_setxattr NULL 123 - #define reiserfs_listxattr NULL 124 - #define reiserfs_removexattr NULL 125 - 126 - static inline void reiserfs_init_xattr_rwsem(struct inode *inode) 127 - { 128 - } 129 - #endif /* CONFIG_REISERFS_FS_XATTR */ 130 - 131 - #ifndef CONFIG_REISERFS_FS_SECURITY 132 - static inline int reiserfs_security_init(struct inode *dir, 133 - struct inode *inode, 134 - const struct qstr *qstr, 135 - struct reiserfs_security_handle *sec) 136 - { 137 - return 0; 138 - } 139 - static inline int 140 - reiserfs_security_write(struct reiserfs_transaction_handle *th, 141 - struct inode *inode, 142 - struct reiserfs_security_handle *sec) 143 - { 144 - return 0; 145 - } 146 - static inline void reiserfs_security_free(struct reiserfs_security_handle *sec) 147 - {} 148 - #endif 149 - 150 - #endif /* __KERNEL__ */ 151 - 152 24 #endif /* _LINUX_REISERFS_XATTR_H */
+2 -2
include/linux/trace_seq.h
··· 44 44 extern int trace_seq_putmem_hex(struct trace_seq *s, const void *mem, 45 45 size_t len); 46 46 extern void *trace_seq_reserve(struct trace_seq *s, size_t len); 47 - extern int trace_seq_path(struct trace_seq *s, struct path *path); 47 + extern int trace_seq_path(struct trace_seq *s, const struct path *path); 48 48 49 49 #else /* CONFIG_TRACING */ 50 50 static inline int trace_seq_printf(struct trace_seq *s, const char *fmt, ...) ··· 88 88 { 89 89 return NULL; 90 90 } 91 - static inline int trace_seq_path(struct trace_seq *s, struct path *path) 91 + static inline int trace_seq_path(struct trace_seq *s, const struct path *path) 92 92 { 93 93 return 0; 94 94 }
+1 -2
include/net/af_unix.h
··· 49 49 /* WARNING: sk has to be the first member */ 50 50 struct sock sk; 51 51 struct unix_address *addr; 52 - struct dentry *dentry; 53 - struct vfsmount *mnt; 52 + struct path path; 54 53 struct mutex readlock; 55 54 struct sock *peer; 56 55 struct sock *other;
+7 -17
ipc/mqueue.c
··· 188 188 { 189 189 struct inode *inode; 190 190 struct ipc_namespace *ns = data; 191 - int error; 192 191 193 192 sb->s_blocksize = PAGE_CACHE_SIZE; 194 193 sb->s_blocksize_bits = PAGE_CACHE_SHIFT; 195 194 sb->s_magic = MQUEUE_MAGIC; 196 195 sb->s_op = &mqueue_super_ops; 197 196 198 - inode = mqueue_get_inode(sb, ns, S_IFDIR | S_ISVTX | S_IRWXUGO, 199 - NULL); 200 - if (IS_ERR(inode)) { 201 - error = PTR_ERR(inode); 202 - goto out; 203 - } 197 + inode = mqueue_get_inode(sb, ns, S_IFDIR | S_ISVTX | S_IRWXUGO, NULL); 198 + if (IS_ERR(inode)) 199 + return PTR_ERR(inode); 204 200 205 - sb->s_root = d_alloc_root(inode); 206 - if (!sb->s_root) { 207 - iput(inode); 208 - error = -ENOMEM; 209 - goto out; 210 - } 211 - error = 0; 212 - 213 - out: 214 - return error; 201 + sb->s_root = d_make_root(inode); 202 + if (!sb->s_root) 203 + return -ENOMEM; 204 + return 0; 215 205 } 216 206 217 207 static struct dentry *mqueue_mount(struct file_system_type *fs_type,
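The mqueue hunk above is the canonical shape of the d_alloc_root() → d_make_root() conversion described in Documentation/filesystems/porting: because d_make_root() drops the inode reference itself when dentry allocation fails, the iput()-on-error branch and the goto-based cleanup disappear. A minimal userspace model of that contract (all types and helpers here are stand-ins for illustration, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for the kernel objects; refcount models the inode refcount. */
struct inode  { int refcount; };
struct dentry { struct inode *inode; };

static void iput(struct inode *inode) { if (inode) inode->refcount--; }

/* Model of d_make_root(): returns NULL on a NULL inode or on dentry
 * allocation failure, and drops the inode reference itself in the
 * failure case, so callers never need an iput()-on-error branch. */
static struct dentry *d_make_root(struct inode *inode)
{
	struct dentry *res = NULL;

	if (inode) {
		res = malloc(sizeof(*res));
		if (res)
			res->inode = inode;
		else
			iput(inode);	/* consumed even on failure */
	}
	return res;
}

/* The post-conversion fill_super() shape used by mqueue, cgroup and
 * shmem in this pull: no goto label, no iput(), no temporary dentry. */
static int fill_super(struct inode *inode, struct dentry **s_root)
{
	*s_root = d_make_root(inode);
	if (!*s_root)
		return -12;	/* -ENOMEM */
	return 0;
}
```

Before this helper, every filesystem had to remember the iput() on the failure path; several commits in this pull (ocfs2, ecryptfs, ntfs, logfs, jfs) fix exactly that class of cleanup bug around registration and mount error paths.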
+1 -1
kernel/audit.c
··· 1418 1418 1419 1419 /* This is a helper-function to print the escaped d_path */ 1420 1420 void audit_log_d_path(struct audit_buffer *ab, const char *prefix, 1421 - struct path *path) 1421 + const struct path *path) 1422 1422 { 1423 1423 char *p, *pathname; 1424 1424
+2 -6
kernel/cgroup.c
··· 1472 1472 1473 1473 struct inode *inode = 1474 1474 cgroup_new_inode(S_IFDIR | S_IRUGO | S_IXUGO | S_IWUSR, sb); 1475 - struct dentry *dentry; 1476 1475 1477 1476 if (!inode) 1478 1477 return -ENOMEM; ··· 1480 1481 inode->i_op = &cgroup_dir_inode_operations; 1481 1482 /* directories start off with i_nlink == 2 (for "." entry) */ 1482 1483 inc_nlink(inode); 1483 - dentry = d_alloc_root(inode); 1484 - if (!dentry) { 1485 - iput(inode); 1484 + sb->s_root = d_make_root(inode); 1485 + if (!sb->s_root) 1486 1486 return -ENOMEM; 1487 - } 1488 - sb->s_root = dentry; 1489 1487 /* for everything else we want ->d_op set */ 1490 1488 sb->s_d_op = &cgroup_dops; 1491 1489 return 0;
+1 -1
kernel/trace/trace_output.c
··· 264 264 return ret; 265 265 } 266 266 267 - int trace_seq_path(struct trace_seq *s, struct path *path) 267 + int trace_seq_path(struct trace_seq *s, const struct path *path) 268 268 { 269 269 unsigned char *p; 270 270
+3 -7
mm/shmem.c
··· 2175 2175 int shmem_fill_super(struct super_block *sb, void *data, int silent) 2176 2176 { 2177 2177 struct inode *inode; 2178 - struct dentry *root; 2179 2178 struct shmem_sb_info *sbinfo; 2180 2179 int err = -ENOMEM; 2181 2180 ··· 2231 2232 goto failed; 2232 2233 inode->i_uid = sbinfo->uid; 2233 2234 inode->i_gid = sbinfo->gid; 2234 - root = d_alloc_root(inode); 2235 - if (!root) 2236 - goto failed_iput; 2237 - sb->s_root = root; 2235 + sb->s_root = d_make_root(inode); 2236 + if (!sb->s_root) 2237 + goto failed; 2238 2238 return 0; 2239 2239 2240 - failed_iput: 2241 - iput(inode); 2242 2240 failed: 2243 2241 shmem_put_super(sb); 2244 2242 return err;
+2 -6
net/sunrpc/rpc_pipe.c
··· 1033 1033 sb->s_time_gran = 1; 1034 1034 1035 1035 inode = rpc_get_inode(sb, S_IFDIR | 0755); 1036 - if (!inode) 1036 + sb->s_root = root = d_make_root(inode); 1037 + if (!root) 1037 1038 return -ENOMEM; 1038 - sb->s_root = root = d_alloc_root(inode); 1039 - if (!root) { 1040 - iput(inode); 1041 - return -ENOMEM; 1042 - } 1043 1039 if (rpc_populate(root, files, RPCAUTH_lockd, RPCAUTH_RootEOF, NULL)) 1044 1040 return -ENOMEM; 1045 1041 return 0;
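A subtlety in the rpc_pipe hunk: the explicit `if (!inode)` check is gone because d_make_root() treats a NULL inode as failure and returns NULL, so a single result check now covers both a failed inode allocation and a failed dentry allocation. A sketch of that folded error path (stand-in types, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct rpc_inode  { int mode; };
struct rpc_dentry { struct rpc_inode *inode; };

/* Model of d_make_root()'s NULL tolerance: a NULL inode simply yields
 * a NULL root. (Static dentry models an allocation that succeeds.) */
static struct rpc_dentry *make_root(struct rpc_inode *inode)
{
	static struct rpc_dentry d;

	if (!inode)
		return NULL;
	d.inode = inode;
	return &d;
}

/* Shape of rpc_fill_super() after the hunk above: one check where the
 * old code needed two (inode NULL check plus root NULL check). */
static int fill_super_shape(struct rpc_inode *inode,
			    struct rpc_dentry **s_root)
{
	struct rpc_dentry *root;

	*s_root = root = make_root(inode);
	if (!root)
		return -12;	/* -ENOMEM */
	return 0;
}
```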
+16 -21
net/unix/af_unix.c
··· 293 293 spin_lock(&unix_table_lock); 294 294 sk_for_each(s, node, 295 295 &unix_socket_table[i->i_ino & (UNIX_HASH_SIZE - 1)]) { 296 - struct dentry *dentry = unix_sk(s)->dentry; 296 + struct dentry *dentry = unix_sk(s)->path.dentry; 297 297 298 298 if (dentry && dentry->d_inode == i) { 299 299 sock_hold(s); ··· 377 377 static int unix_release_sock(struct sock *sk, int embrion) 378 378 { 379 379 struct unix_sock *u = unix_sk(sk); 380 - struct dentry *dentry; 381 - struct vfsmount *mnt; 380 + struct path path; 382 381 struct sock *skpair; 383 382 struct sk_buff *skb; 384 383 int state; ··· 388 389 unix_state_lock(sk); 389 390 sock_orphan(sk); 390 391 sk->sk_shutdown = SHUTDOWN_MASK; 391 - dentry = u->dentry; 392 - u->dentry = NULL; 393 - mnt = u->mnt; 394 - u->mnt = NULL; 392 + path = u->path; 393 + u->path.dentry = NULL; 394 + u->path.mnt = NULL; 395 395 state = sk->sk_state; 396 396 sk->sk_state = TCP_CLOSE; 397 397 unix_state_unlock(sk); ··· 423 425 kfree_skb(skb); 424 426 } 425 427 426 - if (dentry) { 427 - dput(dentry); 428 - mntput(mnt); 429 - } 428 + if (path.dentry) 429 + path_put(&path); 430 430 431 431 sock_put(sk); 432 432 ··· 637 641 sk->sk_max_ack_backlog = net->unx.sysctl_max_dgram_qlen; 638 642 sk->sk_destruct = unix_sock_destructor; 639 643 u = unix_sk(sk); 640 - u->dentry = NULL; 641 - u->mnt = NULL; 644 + u->path.dentry = NULL; 645 + u->path.mnt = NULL; 642 646 spin_lock_init(&u->lock); 643 647 atomic_long_set(&u->inflight, 0); 644 648 INIT_LIST_HEAD(&u->link); ··· 784 788 goto put_fail; 785 789 786 790 if (u->sk_type == type) 787 - touch_atime(path.mnt, path.dentry); 791 + touch_atime(&path); 788 792 789 793 path_put(&path); 790 794 ··· 798 802 u = unix_find_socket_byname(net, sunname, len, type, hash); 799 803 if (u) { 800 804 struct dentry *dentry; 801 - dentry = unix_sk(u)->dentry; 805 + dentry = unix_sk(u)->path.dentry; 802 806 if (dentry) 803 - touch_atime(unix_sk(u)->mnt, dentry); 807 + touch_atime(&unix_sk(u)->path); 804 808 } else 805 
809 goto fail; 806 810 } ··· 906 910 list = &unix_socket_table[addr->hash]; 907 911 } else { 908 912 list = &unix_socket_table[dentry->d_inode->i_ino & (UNIX_HASH_SIZE-1)]; 909 - u->dentry = path.dentry; 910 - u->mnt = path.mnt; 913 + u->path = path; 911 914 } 912 915 913 916 err = 0; ··· 1188 1193 atomic_inc(&otheru->addr->refcnt); 1189 1194 newu->addr = otheru->addr; 1190 1195 } 1191 - if (otheru->dentry) { 1192 - newu->dentry = dget(otheru->dentry); 1193 - newu->mnt = mntget(otheru->mnt); 1196 + if (otheru->path.dentry) { 1197 + path_get(&otheru->path); 1198 + newu->path = otheru->path; 1194 1199 } 1195 1200 1196 1201 /* Set credentials */
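The af_unix hunks above replace the open-coded dentry/vfsmount pair with a single struct path, so the paired dget()/mntget() and dput()/mntput() calls collapse into path_get()/path_put(). A userspace model of that refcounting contract (the counters stand in for the real dentry and mount references):

```c
#include <assert.h>
#include <stddef.h>

struct dentry   { int count; };
struct vfsmount { int count; };

/* struct path bundles the two references that unix_sock used to carry
 * as separate ->dentry and ->mnt fields. */
struct path {
	struct dentry   *dentry;
	struct vfsmount *mnt;
};

/* path_get()/path_put() take and drop both references together,
 * replacing the dget()+mntget() / dput()+mntput() pairs. */
static void path_get(const struct path *p)
{
	p->dentry->count++;
	p->mnt->count++;
}

static void path_put(const struct path *p)
{
	p->dentry->count--;
	p->mnt->count--;
}

/* Shape of the unix_release_sock() hunk: copy the path out while the
 * socket still owns it, clear the fields under the lock, then drop
 * both references at once afterwards. */
static void release_path(struct path *u_path)
{
	struct path path = *u_path;

	u_path->dentry = NULL;
	u_path->mnt = NULL;
	if (path.dentry)
		path_put(&path);
}
```

The same bundling is what lets touch_atime() take a single `&path` argument in the unix_find_other() hunk, and lets security/lsm_audit.c pass `&u->path` directly instead of assembling a temporary struct path on the stack.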
+1 -1
net/unix/diag.c
··· 29 29 30 30 static int sk_diag_dump_vfs(struct sock *sk, struct sk_buff *nlskb) 31 31 { 32 - struct dentry *dentry = unix_sk(sk)->dentry; 32 + struct dentry *dentry = unix_sk(sk)->path.dentry; 33 33 struct unix_diag_vfs *uv; 34 34 35 35 if (dentry) {
+2 -6
security/lsm_audit.c
··· 313 313 } 314 314 case AF_UNIX: 315 315 u = unix_sk(sk); 316 - if (u->dentry) { 317 - struct path path = { 318 - .dentry = u->dentry, 319 - .mnt = u->mnt 320 - }; 321 - audit_log_d_path(ab, " path=", &path); 316 + if (u->path.dentry) { 317 + audit_log_d_path(ab, " path=", &u->path); 322 318 break; 323 319 } 324 320 if (!u->addr)