Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

timers/migration: Exclude isolated cpus from hierarchy

The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is, however, undesired
when CPU-intensive workloads run on isolated cores, as the algorithm
would move timers from housekeeping to isolated cores, negatively
affecting the isolation.

Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if it is isolated or offline;
* A core is available if it is online and not isolated.

A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list, or
* an isolated cpuset,
unless it is:
* in the nohz_full list (already idle for the hierarchy), or
* the nohz timekeeper core (which must stay available to handle global timers).
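The rules above can be sketched as a small userspace model in plain C. This is an illustration only, not the kernel's implementation: the function names, the 8-bit masks, and the parameter layout are all made up for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model: each mask is a bitmask over CPUs 0..7. A CPU is treated
 * as isolated when it sits in the isolcpus (domain) list or in an
 * isolated cpuset, unless it is nohz_full or the nohz timekeeper CPU.
 */
static bool cpu_is_isolated(int cpu, uint8_t domain_isolated,
			    uint8_t cpuset_isolated, uint8_t nohz_full,
			    int tick_cpu)
{
	uint8_t bit = (uint8_t)(1u << cpu);

	if (!((domain_isolated | cpuset_isolated) & bit))
		return false;			/* not isolated at all */
	if ((nohz_full & bit) || cpu == tick_cpu)
		return false;			/* exempt from exclusion */
	return true;
}

/* Available = online and not isolated; unavailable otherwise. */
static bool cpu_is_available(int cpu, uint8_t online,
			     uint8_t domain_isolated, uint8_t cpuset_isolated,
			     uint8_t nohz_full, int tick_cpu)
{
	return (online & (1u << cpu)) &&
	       !cpu_is_isolated(cpu, domain_isolated, cpuset_isolated,
				nohz_full, tick_cpu);
}
```

The two exemptions mirror the commit: a nohz_full CPU is already idle from the hierarchy's perspective, and the timekeeper must stay available so global timers always have a handler.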

CPUs are added to the hierarchy during late boot, excluding isolated
ones; the hierarchy is also adapted when the cpuset isolation changes.

Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers in turn; skipping only the pulling of remote
timers would break the logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).

This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises the
CPUs, and the lowest-numbered CPU in each timer migration hierarchy
group (here 1 and 65) appears always active and continuously pulls
global timers from the housekeeping CPUs. This ends up moving driver
work (e.g. delayed work) to isolated CPUs and causes latency spikes:

before the change:

# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)

after the change:

# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)

The same behaviour was observed on a machine with as few as 20 cores /
40 threads, with isolcpus set to 1-9,11-39, using rtla-osnoise-top.

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: John B. Wyatt IV <jwyatt@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://patch.msgid.link/20251120145653.296659-8-gmonaco@redhat.com

Authored by Gabriele Monaco, committed by Thomas Gleixner
7dec062c b5665100

+155 insertions total

include/linux/timer.h (+9)
···
 #define timers_dead_cpu	NULL
 #endif
 
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
+#else
+static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+	return 0;
+}
+#endif
+
 #endif
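The header uses the common kernel pattern of a real declaration when the feature is configured and a static inline stub returning 0 otherwise, so callers such as the cpuset code need no #ifdefs. A minimal userspace sketch of the same pattern; the HAVE_TMIGR switch and the tmigr_exclude() name are hypothetical stand-ins, not kernel identifiers:

```c
/* Stand-in for "defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)". */
#define HAVE_TMIGR 0

#if HAVE_TMIGR
/* The real implementation would live in another translation unit. */
int tmigr_exclude(unsigned long mask);
#else
/* Stub: callers compile unchanged, and the call folds away at -O2. */
static inline int tmigr_exclude(unsigned long mask)
{
	(void)mask;
	return 0;
}
#endif
```

With the feature disabled, the stub keeps every call site valid and returns success, which is why cpuset can call it unconditionally.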
kernel/cgroup/cpuset.c (+3)
···
 
 	ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
 	WARN_ON_ONCE(ret < 0);
+
+	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+	WARN_ON_ONCE(ret < 0);
 }
 
 /**
kernel/time/timer_migration.c (+143)
···
 #include <linux/spinlock.h>
 #include <linux/timerqueue.h>
 #include <trace/events/ipi.h>
+#include <linux/sched/isolation.h>
 
 #include "timer_migration.h"
 #include "tick-internal.h"
···
 /*
  * CPUs available for timer migration.
  * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
+ * Additionally tmigr_available_mutex serializes set/clear operations with each other.
  */
 static cpumask_var_t tmigr_available_cpumask;
+static DEFINE_MUTEX(tmigr_available_mutex);
+
+/* Enabled during late initcall */
+static DEFINE_STATIC_KEY_FALSE(tmigr_exclude_isolated);
 
 #define TMIGR_NONE	0xFF
 #define BIT_CNT		8
···
 static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
 {
 	return !(tmc->tmgroup && tmc->available);
 }
+
+/*
+ * Returns true if @cpu should be excluded from the hierarchy as isolated.
+ * Domain isolated CPUs don't participate in timer migration, nohz_full CPUs
+ * are still part of the hierarchy but become idle (from a tick and timer
+ * migration perspective) when they stop their tick. This lets the timekeeping
+ * CPU handle their global timers. Marking also isolated CPUs as idle would be
+ * too costly, hence they are completely excluded from the hierarchy.
+ * This check is necessary, for instance, to prevent offline isolated CPUs from
+ * being incorrectly marked as available once getting back online.
+ *
+ * This function returns false during early boot and the isolation logic is
+ * enabled only after isolated CPUs are marked as unavailable at late boot.
+ * The tick CPU can be isolated at boot, however we cannot mark it as
+ * unavailable to avoid having no global migrator for the nohz_full CPUs. This
+ * should be ensured by the callers of this function: implicitly from hotplug
+ * callbacks and explicitly in tmigr_init_isolation() and
+ * tmigr_isolated_exclude_cpumask().
+ */
+static inline bool tmigr_is_isolated(int cpu)
+{
+	if (!static_branch_unlikely(&tmigr_exclude_isolated))
+		return false;
+	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
+		cpuset_cpu_is_isolated(cpu)) &&
+	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
+}
···
 	int migrator;
 	u64 firstexp;
 
+	guard(mutex)(&tmigr_available_mutex);
+
 	cpumask_clear_cpu(cpu, tmigr_available_cpumask);
 	scoped_guard(raw_spinlock_irq, &tmc->lock) {
+		if (!tmc->available)
+			return 0;
 		tmc->available = false;
 		WRITE_ONCE(tmc->wakeup, KTIME_MAX);
···
 	if (WARN_ON_ONCE(!tmc->tmgroup))
 		return -EINVAL;
 
+	if (tmigr_is_isolated(cpu))
+		return 0;
+
+	guard(mutex)(&tmigr_available_mutex);
+
 	cpumask_set_cpu(cpu, tmigr_available_cpumask);
 	scoped_guard(raw_spinlock_irq, &tmc->lock) {
+		if (tmc->available)
+			return 0;
 		trace_tmigr_cpu_available(tmc);
 		tmc->idle = timer_base_is_idle();
 		if (!tmc->idle)
···
 	}
 	return 0;
 }
+
+static void tmigr_cpu_isolate(struct work_struct *ignored)
+{
+	tmigr_clear_cpu_available(smp_processor_id());
+}
+
+static void tmigr_cpu_unisolate(struct work_struct *ignored)
+{
+	tmigr_set_cpu_available(smp_processor_id());
+}
+
+/**
+ * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
+ * @exclude_cpumask: the cpumask to be excluded from timer migration hierarchy
+ *
+ * This function can be called from cpuset code to provide the new set of
+ * isolated CPUs that should be excluded from the hierarchy.
+ * Online CPUs not present in exclude_cpumask but already excluded are brought
+ * back to the hierarchy.
+ * Functions to isolate/unisolate need to be called locally and can sleep.
+ */
+int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+	struct work_struct __percpu *works __free(free_percpu) =
+		alloc_percpu(struct work_struct);
+	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+	int cpu;
+
+	lockdep_assert_cpus_held();
+
+	if (!works)
+		return -ENOMEM;
+	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+		return -ENOMEM;
+
+	/*
+	 * First set previously isolated CPUs as available (unisolate).
+	 * This cpumask contains only CPUs that switched to available now.
+	 */
+	cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
+	cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
+
+	for_each_cpu(cpu, cpumask) {
+		struct work_struct *work = per_cpu_ptr(works, cpu);
+
+		INIT_WORK(work, tmigr_cpu_unisolate);
+		schedule_work_on(cpu, work);
+	}
+	for_each_cpu(cpu, cpumask)
+		flush_work(per_cpu_ptr(works, cpu));
+
+	/*
+	 * Then clear previously available CPUs (isolate).
+	 * This cpumask contains only CPUs that switched to not available now.
+	 * There cannot be overlap with the newly available ones.
+	 */
+	cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
+	cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+	/*
+	 * Handle this here and not in the cpuset code because exclude_cpumask
+	 * might include also the tick CPU if included in isolcpus.
+	 */
+	for_each_cpu(cpu, cpumask) {
+		if (!tick_nohz_cpu_hotpluggable(cpu)) {
+			cpumask_clear_cpu(cpu, cpumask);
+			break;
+		}
+	}
+
+	for_each_cpu(cpu, cpumask) {
+		struct work_struct *work = per_cpu_ptr(works, cpu);
+
+		INIT_WORK(work, tmigr_cpu_isolate);
+		schedule_work_on(cpu, work);
+	}
+	for_each_cpu(cpu, cpumask)
+		flush_work(per_cpu_ptr(works, cpu));
+
+	return 0;
+}
+
+static int __init tmigr_init_isolation(void)
+{
+	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+
+	static_branch_enable(&tmigr_exclude_isolated);
+
+	if (!housekeeping_enabled(HK_TYPE_DOMAIN))
+		return 0;
+	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+		return -ENOMEM;
+
+	cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
+
+	/* Protect against RCU torture hotplug testing */
+	guard(cpus_read_lock)();
+	return tmigr_isolated_exclude_cpumask(cpumask);
+}
+late_initcall(tmigr_init_isolation);
 
 static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
 			     int node)
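The two cpumask computations in tmigr_isolated_exclude_cpumask() can be modelled with plain bitmasks: the first pass brings back online CPUs that left the exclude set, the second takes out currently available CPUs that entered it (the tick CPU is filtered separately). A userspace sketch under those assumptions; the helper names and 8-bit masks are illustrative only:

```c
#include <stdint.h>

/*
 * CPUs to unisolate: online, not excluded, and not yet available.
 * Mirrors cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
 *         cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
 */
static uint8_t cpus_to_unisolate(uint8_t online, uint8_t exclude,
				 uint8_t available)
{
	return (uint8_t)((online & ~exclude) & ~available);
}

/*
 * CPUs to isolate: excluded, currently available, and allowed to stop
 * handling kernel noise.
 * Mirrors cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
 *         cpumask_and(cpumask, cpumask,
 *                     housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
 */
static uint8_t cpus_to_isolate(uint8_t exclude, uint8_t available,
			       uint8_t hk_noise)
{
	return (uint8_t)(exclude & available & hk_noise);
}
```

By construction the two sets cannot overlap, as the commit's comment notes: a CPU in the first set is outside exclude_cpumask, while a CPU in the second is inside it.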