fs: avoid softlockups in s_inodes iterators

Anything that walks all inodes on sb->s_inodes list without rescheduling
risks softlockups.

Previous efforts were made in two functions; see:

c27d82f fs/drop_caches.c: avoid softlockups in drop_pagecache_sb()
ac05fbb inode: don't softlockup when evicting inodes

but there hasn't been an audit of all walkers, so do that now. This
also consistently moves the cond_resched() calls to the bottom of each
loop in cases where one already exists.

One loop remains unpatched: remove_dquot_ref(), because I'm not quite
sure how to deal with that one without taking the i_lock.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

Authored by Eric Sandeen, committed by Al Viro (04646aeb, e0ff126e)

Changed files: fs (+10 -1)

fs/drop_caches.c (+1 -1)

@@ -35,11 +35,11 @@
 		spin_unlock(&inode->i_lock);
 		spin_unlock(&sb->s_inode_list_lock);
 
-		cond_resched();
 		invalidate_mapping_pages(inode->i_mapping, 0, -1);
 		iput(toput_inode);
 		toput_inode = inode;
 
+		cond_resched();
 		spin_lock(&sb->s_inode_list_lock);
 	}
 	spin_unlock(&sb->s_inode_list_lock);
fs/inode.c (+7)

@@ -676,6 +676,7 @@
 	struct inode *inode, *next;
 	LIST_HEAD(dispose);
 
+again:
 	spin_lock(&sb->s_inode_list_lock);
 	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
 		spin_lock(&inode->i_lock);
@@ -699,6 +698,12 @@
 		inode_lru_list_del(inode);
 		spin_unlock(&inode->i_lock);
 		list_add(&inode->i_lru, &dispose);
+		if (need_resched()) {
+			spin_unlock(&sb->s_inode_list_lock);
+			cond_resched();
+			dispose_list(&dispose);
+			goto again;
+		}
 	}
 	spin_unlock(&sb->s_inode_list_lock);
 
fs/notify/fsnotify.c (+1)

@@ -77,6 +77,7 @@
 
 		iput_inode = inode;
 
+		cond_resched();
 		spin_lock(&sb->s_inode_list_lock);
 	}
 	spin_unlock(&sb->s_inode_list_lock);
fs/quota/dquot.c (+1)

@@ -984,6 +984,7 @@
 		 * later.
 		 */
 		old_inode = inode;
+		cond_resched();
 		spin_lock(&sb->s_inode_list_lock);
 	}
 	spin_unlock(&sb->s_inode_list_lock);