Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched/mmcid: Prevent pointless work in mm_update_cpus_allowed()

mm_update_cpus_allowed() is not required to be invoked for affinity changes
due to migrate_disable() and migrate_enable().

migrate_disable() restricts the task temporarily to a CPU on which the task
was already allowed to run, so nothing changes. migrate_enable() restores
the actual task affinity mask.

If that mask changed between migrate_disable() and migrate_enable() then
that change was already accounted for.

Move the invocation to the place where the affinity mask actually changes to
avoid the pointless work.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20251119172549.385208276@linutronix.de

Authored by Thomas Gleixner, committed by Peter Zijlstra (0d032a43 b08ef5fc)

+8 -3
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2684,6 +2684,7 @@
 
 	cpumask_copy(&p->cpus_mask, ctx->new_mask);
 	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
+	mm_update_cpus_allowed(p->mm, ctx->new_mask);
 
 	/*
 	 * Swap in a new user_cpus_ptr if SCA_USER flag set
@@ -2731,7 +2730,6 @@
 	put_prev_task(rq, p);
 
 	p->sched_class->set_cpus_allowed(p, ctx);
-	mm_update_cpus_allowed(p->mm, ctx->new_mask);
 
 	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
@@ -10376,12 +10376,17 @@
  */
 static inline void mm_update_cpus_allowed(struct mm_struct *mm, const struct cpumask *affmsk)
 {
-	struct cpumask *mm_allowed = mm_cpus_allowed(mm);
+	struct cpumask *mm_allowed;
 
 	if (!mm)
 		return;
-	/* The mm_cpus_allowed is the union of each thread allowed CPUs masks. */
+
+	/*
+	 * mm::mm_cid::mm_cpus_allowed is the superset of each threads
+	 * allowed CPUs mask which means it can only grow.
+	 */
 	guard(raw_spinlock)(&mm->mm_cid.lock);
+	mm_allowed = mm_cpus_allowed(mm);
 	cpumask_or(mm_allowed, mm_allowed, affmsk);
 	WRITE_ONCE(mm->mm_cid.nr_cpus_allowed, cpumask_weight(mm_allowed));
 }