Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kernel: fix is_single_threaded

- Fix the comment: is_single_threaded(p) actually means that nobody shares
->mm with p.

I think this helper should be renamed, and it should not take arguments.
With or without this patch it must not be used unless p == current;
otherwise we can't safely use p->signal or p->mm.

- "if (atomic_read(&p->signal->count) != 1)" is not right when there is a
zombie group leader; use signal->live instead.

- Add a PF_KTHREAD check to skip kernel threads, which may borrow p->mm;
otherwise we can wrongly return false.

- Use for_each_process() instead of do_each_thread(): all threads in a
group must use the same ->mm, so checking one live thread per process
is enough.

- Use down_write(&mm->mmap_sem) + rcu_read_lock() instead of tasklist_lock
to iterate over the process list. If there is another CLONE_VM process,
it can't pass exit_mm(), which takes the same mm->mmap_sem. We can miss
a freshly forked CLONE_VM task, but this doesn't matter, because we must
see its parent and return false.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Roland McGrath <roland@redhat.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Morris <jmorris@namei.org>

authored by Oleg Nesterov and committed by James Morris
d2e3ee9b 713c0ecd

+34 -24
lib/is_single_threaded.c
@@ -12,34 +12,44 @@
 
 #include <linux/sched.h>
 
-/**
- * is_single_threaded - Determine if a thread group is single-threaded or not
- * @p: A task in the thread group in question
- *
- * This returns true if the thread group to which a task belongs is single
- * threaded, false if it is not.
+/*
+ * Returns true if the task does not share ->mm with another thread/process.
  */
-bool is_single_threaded(struct task_struct *p)
+bool is_single_threaded(struct task_struct *task)
 {
-	struct task_struct *g, *t;
-	struct mm_struct *mm = p->mm;
+	struct mm_struct *mm = task->mm;
+	struct task_struct *p, *t;
+	bool ret;
 
-	if (atomic_read(&p->signal->count) != 1)
-		goto no;
+	might_sleep();
 
-	if (atomic_read(&p->mm->mm_users) != 1) {
-		read_lock(&tasklist_lock);
-		do_each_thread(g, t) {
-			if (t->mm == mm && t != p)
-				goto no_unlock;
-		} while_each_thread(g, t);
-		read_unlock(&tasklist_lock);
+	if (atomic_read(&task->signal->live) != 1)
+		return false;
+
+	if (atomic_read(&mm->mm_users) == 1)
+		return true;
+
+	ret = false;
+	down_write(&mm->mmap_sem);
+	rcu_read_lock();
+	for_each_process(p) {
+		if (unlikely(p->flags & PF_KTHREAD))
+			continue;
+		if (unlikely(p == task->group_leader))
+			continue;
+
+		t = p;
+		do {
+			if (unlikely(t->mm == mm))
+				goto found;
+			if (likely(t->mm))
+				break;
+		} while_each_thread(p, t);
 	}
+	ret = true;
+found:
+	rcu_read_unlock();
+	up_write(&mm->mmap_sem);
 
-	return true;
-
-no_unlock:
-	read_unlock(&tasklist_lock);
-no:
-	return false;
+	return ret;
 }