Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched/topology, drivers/base/arch_topology: Rebuild the sched_domain hierarchy when capacities change

The setting of SD_ASYM_CPUCAPACITY depends on the per-CPU capacities.
These might not have their final values when the hierarchy is initially
built, as they depend on cpufreq being initialized or on values being
set through sysfs. To ensure that the flags are set correctly we
need to rebuild the sched_domain hierarchy whenever the reported per-CPU
capacity (arch_scale_cpu_capacity()) changes.

This patch ensures that a full sched_domain rebuild happens when CPU
capacity changes occur.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1532093554-30504-3-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Morten Rasmussen, committed by Ingo Molnar
bb1fbdd3 05484e09

27 additions total

drivers/base/arch_topology.c (+26):
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -15,6 +15,7 @@
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/sched/topology.h>
+#include <linux/cpuset.h>
 
 DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
 
@@ -48,6 +47,9 @@
 	return sprintf(buf, "%lu\n", topology_get_cpu_scale(NULL, cpu->dev.id));
 }
 
+static void update_topology_flags_workfn(struct work_struct *work);
+static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
+
 static ssize_t cpu_capacity_store(struct device *dev,
 				  struct device_attribute *attr,
 				  const char *buf,
@@ -76,5 +72,7 @@
 	topology_set_cpu_scale(i, new_capacity);
 	mutex_unlock(&cpu_scale_mutex);
 
+	schedule_work(&update_topology_flags_work);
+
 	return count;
 }
@@ -101,6 +95,25 @@
 	return 0;
 }
 subsys_initcall(register_cpu_capacity_sysctl);
+
+static int update_topology;
+
+int topology_update_cpu_topology(void)
+{
+	return update_topology;
+}
+
+/*
+ * Updating the sched_domains can't be done directly from cpufreq callbacks
+ * due to locking, so queue the work for later.
+ */
+static void update_topology_flags_workfn(struct work_struct *work)
+{
+	update_topology = 1;
+	rebuild_sched_domains();
+	pr_debug("sched_domain hierarchy rebuilt, flags updated\n");
+	update_topology = 0;
+}
 
 static u32 capacity_scale;
 static u32 *raw_capacity;
@@ -226,6 +201,7 @@
 
 	if (cpumask_empty(cpus_to_visit)) {
 		topology_normalize_cpu_scale();
+		schedule_work(&update_topology_flags_work);
 		free_raw_capacity();
 		pr_debug("cpu_capacity: parsing done\n");
 		schedule_work(&parsing_done_work);
include/linux/arch_topology.h (+1):
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -9,6 +9,7 @@
 #include <linux/percpu.h>
 
 void topology_normalize_cpu_scale(void);
+int topology_update_cpu_topology(void);
 
 struct device_node;
 bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu);