Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

eventpoll: don't decrement ep refcount while still holding the ep mutex

Jann Horn points out that epoll is decrementing the ep refcount and then
doing a

mutex_unlock(&ep->mtx);

afterwards. That's very wrong, because it can lead to a use-after-free.

That pattern is actually fine for the very last reference, because the
code in question will delay the actual call to "ep_free(ep)" until after
it has unlocked the mutex.

But it's wrong for the much subtler "next to last" case when somebody
*else* may also be dropping their reference and freeing the ep while
we're still using the mutex.

Note that this is true even if that other user is also using the same ep
mutex: mutexes, unlike spinlocks, cannot be used for object ownership,
even if they guarantee mutual exclusion.

A mutex "unlock" operation is not atomic, and as one user is still
accessing the mutex as part of unlocking it, another user can come in
and get the now released mutex and free the data structure while the
first user is still cleaning up.

See our mutex documentation in Documentation/locking/mutex-design.rst,
in particular the section [1] about semantics:

"mutex_unlock() may access the mutex structure even after it has
internally released the lock already - so it's not safe for
another context to acquire the mutex and assume that the
mutex_unlock() context is not using the structure anymore"

So if we drop our ep ref before the mutex unlock, but we weren't the
last one, we may then unlock the mutex, another user comes in, drops
_their_ reference and releases the 'ep' as it now has no users - all
while the mutex_unlock() is still accessing it.

Fix this by simply moving the ep refcount dropping to outside the mutex:
the refcount itself is atomic, and doesn't need mutex protection (that's
the whole _point_ of refcounts: unlike mutexes, they are inherently
about object lifetimes).

Reported-by: Jann Horn <jannh@google.com>
Link: https://docs.kernel.org/locking/mutex-design.html#semantics [1]
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

+5 -7
fs/eventpoll.c
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -828,7 +828,7 @@
 	kfree_rcu(epi, rcu);
 
 	percpu_counter_dec(&ep->user->epoll_watches);
-	return ep_refcount_dec_and_test(ep);
+	return true;
 }
 
 /*
@@ -836,14 +836,14 @@
  */
 static void ep_remove_safe(struct eventpoll *ep, struct epitem *epi)
 {
-	WARN_ON_ONCE(__ep_remove(ep, epi, false));
+	if (__ep_remove(ep, epi, false))
+		WARN_ON_ONCE(ep_refcount_dec_and_test(ep));
 }
 
 static void ep_clear_and_put(struct eventpoll *ep)
 {
 	struct rb_node *rbp, *next;
 	struct epitem *epi;
-	bool dispose;
 
 	/* We need to release all tasks waiting for these file */
 	if (waitqueue_active(&ep->poll_wait))
@@ -876,9 +876,7 @@
 		cond_resched();
 	}
 
-	dispose = ep_refcount_dec_and_test(ep);
 	mutex_unlock(&ep->mtx);
-
-	if (dispose)
+	if (ep_refcount_dec_and_test(ep))
 		ep_free(ep);
 }
 
@@ -1098,7 +1100,7 @@
 	dispose = __ep_remove(ep, epi, true);
 	mutex_unlock(&ep->mtx);
 
-	if (dispose)
+	if (dispose && ep_refcount_dec_and_test(ep))
 		ep_free(ep);
 	goto again;
 }