
coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping

The core dumping code has always run without holding the mmap_sem for
writing, despite that being the only way to ensure that the entire vma
layout will not change from under it. Only using some signal
serialization on the processes belonging to the mm is not nearly enough.
This was pointed out earlier, for example in Hugh's post from Jul 2017:

https://lkml.kernel.org/r/alpine.LSU.2.11.1707191716030.2055@eggly.anvils

"Not strictly relevant here, but a related note: I was very surprised
to discover, only quite recently, how handle_mm_fault() may be called
without down_read(mmap_sem) - when core dumping. That seems a
misguided optimization to me, which would also be nice to correct"

In particular, because growsdown and growsup stack expansion can move
vm_start/vm_end, the various loops the core dump runs over the vmas will
not see a consistent layout if page faults can happen concurrently.

Pretty much all users calling mmget_not_zero()/get_task_mm() and then
taking the mmap_sem had the potential to introduce unexpected side
effects in the core dumping code.

Taking the mmap_sem for writing around the ->core_dump invocation is a
viable long term fix, but it requires removing all copy-user and page
faults and replacing them with get_dump_page() for all binary formats,
which makes it unsuitable as a short term fix.

For the time being, this solution manually covers the places that can
confuse the core dump by altering the vma layout or the vma flags while
it runs. Once ->core_dump runs under mmap_sem for writing,
mmget_still_valid() can be dropped.

Allowing mmap_sem-protected sections to run in parallel with the
coredump provides a minor parallelism advantage to the swapoff code
(which appears safe, as it never mangles any vma field and can keep
doing swapins in parallel with the core dumping) and to some other
corner cases.

In order to facilitate backporting I added "Fixes: 86039bd3b4e6";
however, the side effect of this same race condition in /proc/pid/mem
has been reproducible since before 2.6.12-rc2, so I couldn't add any
other "Fixes:" tag because there's no hash before the git genesis commit.

Because find_extend_vma() is the only location outside of process
context that can modify the "mm" structures while holding the mmap_sem
only for reading, adding the mmget_still_valid() check there means all
other cases that take the mmap_sem for reading need no new check after
mmget_not_zero()/get_task_mm(). The expand_stack() call in page fault
context also needs no check, because all tasks belonging to an mm under
core dump are frozen.

Link: http://lkml.kernel.org/r/20190325224949.11068-1-aarcange@redhat.com
Fixes: 86039bd3b4e6 ("userfaultfd: add new syscall to provide memory externalization")
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Jann Horn <jannh@google.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Jann Horn <jannh@google.com>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Andrea Arcangeli, committed by Linus Torvalds (04f5866e dce5b0bd)

57 insertions(+), 1 deletion(-)
drivers/infiniband/core/uverbs_main.c (+3)

···
 	 * will only be one mm, so no big deal.
 	 */
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto skip_mm;
 	mutex_lock(&ufile->umap_lock);
 	list_for_each_entry_safe (priv, next_priv, &ufile->umaps,
 				  list) {
···
 		vma->vm_flags &= ~(VM_SHARED | VM_MAYSHARE);
 	}
 	mutex_unlock(&ufile->umap_lock);
+skip_mm:
 	up_write(&mm->mmap_sem);
 	mmput(mm);
 }
fs/proc/task_mmu.c (+18)

···
 		count = -EINTR;
 		goto out_mm;
 	}
+	/*
+	 * Avoid to modify vma->vm_flags
+	 * without locked ops while the
+	 * coredump reads the vm_flags.
+	 */
+	if (!mmget_still_valid(mm)) {
+		/*
+		 * Silently return "count"
+		 * like if get_task_mm()
+		 * failed. FIXME: should this
+		 * function have returned
+		 * -ESRCH if get_task_mm()
+		 * failed like if
+		 * get_proc_task() fails?
+		 */
+		up_write(&mm->mmap_sem);
+		goto out_mm;
+	}
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		vma->vm_flags &= ~VM_SOFTDIRTY;
 		vma_set_page_prot(vma);
fs/userfaultfd.c (+9)

···
 
 	/* the various vma->vm_userfaultfd_ctx still points to it */
 	down_write(&mm->mmap_sem);
+	/* no task can run (and in turn coredump) yet */
+	VM_WARN_ON(!mmget_still_valid(mm));
 	for (vma = mm->mmap; vma; vma = vma->vm_next)
 		if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
 			vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
···
 	 * taking the mmap_sem for writing.
 	 */
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto skip_mm;
 	prev = NULL;
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		cond_resched();
···
 		vma->vm_flags = new_flags;
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 	}
+skip_mm:
 	up_write(&mm->mmap_sem);
 	mmput(mm);
 wakeup:
···
 		goto out;
 
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto out_unlock;
 	vma = find_vma_prev(mm, start, &prev);
 	if (!vma)
 		goto out_unlock;
···
 		goto out;
 
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto out_unlock;
 	vma = find_vma_prev(mm, start, &prev);
 	if (!vma)
 		goto out_unlock;
include/linux/sched/mm.h (+21)

···
 	__mmdrop(mm);
 }
 
+/*
+ * This has to be called after a get_task_mm()/mmget_not_zero()
+ * followed by taking the mmap_sem for writing before modifying the
+ * vmas or anything the coredump pretends not to change from under it.
+ *
+ * NOTE: find_extend_vma() called from GUP context is the only place
+ * that can modify the "mm" (notably the vm_start/end) under mmap_sem
+ * for reading and outside the context of the process, so it is also
+ * the only case that holds the mmap_sem for reading that must call
+ * this function. Generally if the mmap_sem is hold for reading
+ * there's no need of this check after get_task_mm()/mmget_not_zero().
+ *
+ * This function can be obsoleted and the check can be removed, after
+ * the coredump code will hold the mmap_sem for writing before
+ * invoking the ->core_dump methods.
+ */
+static inline bool mmget_still_valid(struct mm_struct *mm)
+{
+	return likely(!mm->core_state);
+}
+
 /**
  * mmget() - Pin the address space associated with a &struct mm_struct.
  * @mm: The address space to pin.
mm/mmap.c (+6 -1)

···
 #include <linux/moduleparam.h>
 #include <linux/pkeys.h>
 #include <linux/oom.h>
+#include <linux/sched/mm.h>
 
 #include <linux/uaccess.h>
 #include <asm/cacheflush.h>
···
 	vma = find_vma_prev(mm, addr, &prev);
 	if (vma && (vma->vm_start <= addr))
 		return vma;
-	if (!prev || expand_stack(prev, addr))
+	/* don't alter vm_end if the coredump is running */
+	if (!prev || !mmget_still_valid(mm) || expand_stack(prev, addr))
 		return NULL;
 	if (prev->vm_flags & VM_LOCKED)
 		populate_vma_page_range(prev, addr, prev->vm_end, NULL);
···
 	if (vma->vm_start <= addr)
 		return vma;
 	if (!(vma->vm_flags & VM_GROWSDOWN))
 		return NULL;
+	/* don't alter vm_start if the coredump is running */
+	if (!mmget_still_valid(mm))
+		return NULL;
 	start = vma->vm_start;
 	if (expand_stack(vma, addr))