Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched: Remove stale power aware scheduling remnants and dysfunctional knobs

It's been broken forever (i.e. it's not scheduling in a power
aware fashion), as reported by Suresh and others sending
patches, and nobody cares enough to fix it properly ...
so remove it to free up space for something better.

There are various problems with the code as it stands today. First
and foremost is the user interface, which is bound to topology
levels and has multiple values per level. This results in a
state explosion which the administrator or distro needs to
master, and almost nobody does.

Furthermore, large configuration state spaces aren't good: they
mean the thing doesn't just work right, because either it is
under so many impossible-to-meet constraints, or, even if
there is an achievable state, workloads have to be aware of
it precisely and can never meet it for dynamic workloads.

So pushing this kind of decision to user-space was a bad idea
even with a single knob - it's exponentially worse with knobs
on every node of the topology.

There is a proposal to replace the user interface with a single
3 state knob:

sched_balance_policy := { performance, power, auto }

where 'auto' would be the preferred default, looking at things
like Battery/AC mode and possible cpufreq state or whatever the hw
exposes to show us power use expectations - but there's been no
progress on it for many months.

Aside from that, the actual implementation of the various knobs
is known to be broken. There have been sporadic attempts at
fixing things, but these always stop short of reaching a mergeable
state.

Therefore, remove it wholesale, in the hope of spurring
people who care to come forward once again and work on a
coherent replacement.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1326104915.2442.53.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Peter Zijlstra, committed by Ingo Molnar
8e7fbcbc fac536f7

11 files changed, 5 insertions(+), 498 deletions(-)
Documentation/ABI/testing/sysfs-devices-system-cpu (-25)
···
 
 /sys/devices/system/cpu/cpu#/
 
-What:		/sys/devices/system/cpu/sched_mc_power_savings
-		/sys/devices/system/cpu/sched_smt_power_savings
-Date:		June 2006
-Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-Description:	Discover and adjust the kernel's multi-core scheduler support.
-
-		Possible values are:
-
-		0 - No power saving load balance (default value)
-		1 - Fill one thread/core/package first for long running threads
-		2 - Also bias task wakeups to semi-idle cpu package for power
-		    savings
-
-		sched_mc_power_savings is dependent upon SCHED_MC, which is
-		itself architecture dependent.
-
-		sched_smt_power_savings is dependent upon SCHED_SMT, which
-		is itself architecture dependent.
-
-		The two files are independent of each other. It is possible
-		that one file may be present without the other.
-
-		Introduced by git commit 5c45bf27.
-
-
 What:		/sys/devices/system/cpu/kernel_max
 		/sys/devices/system/cpu/offline
 		/sys/devices/system/cpu/online
Documentation/scheduler/sched-domains.txt (-4)
···
 struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.
 
-For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
-cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
-all "i"'s siblings as well as "i" itself.
-
 Architectures may retain the regular override the default SD_*_INIT flags
 while using the generic domain builder in kernel/sched.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
arch/x86/kernel/smpboot.c (+1 -2)
···
 	 * For perf, we return last level cache shared map.
 	 * And for power savings, we return cpu_core_map
 	 */
-	if ((sched_mc_power_savings || sched_smt_power_savings) &&
-	    !(cpu_has(c, X86_FEATURE_AMD_DCM)))
+	if (!(cpu_has(c, X86_FEATURE_AMD_DCM)))
 		return cpu_core_mask(cpu);
 	else
 		return cpu_llc_shared_mask(cpu);
drivers/base/cpu.c (-4)
···
 		panic("Failed to register CPU subsystem");
 
 	cpu_dev_register_generic();
-
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-	sched_create_sysfs_power_savings_entries(cpu_subsys.dev_root);
-#endif
 }
include/linux/cpu.h (-2)
···
 extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
 extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);
 
-extern int sched_create_sysfs_power_savings_entries(struct device *dev);
-
 #ifdef CONFIG_HOTPLUG_CPU
 extern void unregister_cpu(struct cpu *cpu);
 extern ssize_t arch_cpu_probe(const char *, size_t);
include/linux/sched.h (-47)
···
 #define SD_WAKE_AFFINE		0x0020	/* Wake task to waking CPU */
 #define SD_PREFER_LOCAL		0x0040	/* Prefer to keep tasks local to this domain */
 #define SD_SHARE_CPUPOWER	0x0080	/* Domain members share cpu power */
-#define SD_POWERSAVINGS_BALANCE	0x0100	/* Balance for power savings */
 #define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
 #define SD_SERIALIZE		0x0400	/* Only a single load balancing instance */
 #define SD_ASYM_PACKING		0x0800	/* Place busy groups earlier in the domain */
 #define SD_PREFER_SIBLING	0x1000	/* Prefer to place tasks in a sibling domain */
 #define SD_OVERLAP		0x2000	/* sched_domains of this level overlap */
 
-enum powersavings_balance_level {
-	POWERSAVINGS_BALANCE_NONE = 0,	/* No power saving load balance */
-	POWERSAVINGS_BALANCE_BASIC,	/* Fill one thread/core/package
-					 * first for long running threads
-					 */
-	POWERSAVINGS_BALANCE_WAKEUP,	/* Also bias task wakeups to semi-idle
-					 * cpu package for power savings
-					 */
-	MAX_POWERSAVINGS_BALANCE_LEVELS
-};
-
-extern int sched_mc_power_savings, sched_smt_power_savings;
-
-static inline int sd_balance_for_mc_power(void)
-{
-	if (sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	if (!sched_mc_power_savings)
-		return SD_PREFER_SIBLING;
-
-	return 0;
-}
-
-static inline int sd_balance_for_package_power(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	return SD_PREFER_SIBLING;
-}
-
 extern int __weak arch_sd_sibiling_asym_packing(void);
-
-/*
- * Optimise SD flags for power savings:
- * SD_BALANCE_NEWIDLE helps aggressive task consolidation and power savings.
- * Keep default SD flags if sched_{smt,mc}_power_saving=0
- */
-
-static inline int sd_power_saving_flags(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_BALANCE_NEWIDLE;
-
-	return 0;
-}
 
 struct sched_group_power {
 	atomic_t ref;
include/linux/topology.h (-5)
···
 				| 0*SD_BALANCE_WAKE			\
 				| 1*SD_WAKE_AFFINE			\
 				| 1*SD_SHARE_CPUPOWER			\
-				| 0*SD_POWERSAVINGS_BALANCE		\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
 				| 0*SD_PREFER_SIBLING			\
···
 				| 0*SD_SHARE_CPUPOWER			\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_mc_power()		\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
···
 				| 0*SD_SHARE_CPUPOWER			\
 				| 0*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_package_power()	\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
kernel/sched/core.c (-94)
···
 	return cpumask_of_node(cpu_to_node(cpu));
 }
 
-int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
-
 struct sd_data {
 	struct sched_domain **__percpu sd;
 	struct sched_group **__percpu sg;
···
 					| 0*SD_WAKE_AFFINE
 					| 0*SD_PREFER_LOCAL
 					| 0*SD_SHARE_CPUPOWER
-					| 0*SD_POWERSAVINGS_BALANCE
 					| 0*SD_SHARE_PKG_RESOURCES
 					| 1*SD_SERIALIZE
 					| 0*SD_PREFER_SIBLING
···
 
 	mutex_unlock(&sched_domains_mutex);
 }
-
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-static void reinit_sched_domains(void)
-{
-	get_online_cpus();
-
-	/* Destroy domains first to force the rebuild */
-	partition_sched_domains(0, NULL, NULL);
-
-	rebuild_sched_domains();
-	put_online_cpus();
-}
-
-static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
-{
-	unsigned int level = 0;
-
-	if (sscanf(buf, "%u", &level) != 1)
-		return -EINVAL;
-
-	/*
-	 * level is always be positive so don't check for
-	 * level < POWERSAVINGS_BALANCE_NONE which is 0
-	 * What happens on 0 or 1 byte write,
-	 * need to check for count as well?
-	 */
-
-	if (level >= MAX_POWERSAVINGS_BALANCE_LEVELS)
-		return -EINVAL;
-
-	if (smt)
-		sched_smt_power_savings = level;
-	else
-		sched_mc_power_savings = level;
-
-	reinit_sched_domains();
-
-	return count;
-}
-
-#ifdef CONFIG_SCHED_MC
-static ssize_t sched_mc_power_savings_show(struct device *dev,
-					   struct device_attribute *attr,
-					   char *buf)
-{
-	return sprintf(buf, "%u\n", sched_mc_power_savings);
-}
-static ssize_t sched_mc_power_savings_store(struct device *dev,
-					    struct device_attribute *attr,
-					    const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 0);
-}
-static DEVICE_ATTR(sched_mc_power_savings, 0644,
-		   sched_mc_power_savings_show,
-		   sched_mc_power_savings_store);
-#endif
-
-#ifdef CONFIG_SCHED_SMT
-static ssize_t sched_smt_power_savings_show(struct device *dev,
-					    struct device_attribute *attr,
-					    char *buf)
-{
-	return sprintf(buf, "%u\n", sched_smt_power_savings);
-}
-static ssize_t sched_smt_power_savings_store(struct device *dev,
-					     struct device_attribute *attr,
-					     const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 1);
-}
-static DEVICE_ATTR(sched_smt_power_savings, 0644,
-		   sched_smt_power_savings_show,
-		   sched_smt_power_savings_store);
-#endif
-
-int __init sched_create_sysfs_power_savings_entries(struct device *dev)
-{
-	int err = 0;
-
-#ifdef CONFIG_SCHED_SMT
-	if (smt_capable())
-		err = device_create_file(dev, &dev_attr_sched_smt_power_savings);
-#endif
-#ifdef CONFIG_SCHED_MC
-	if (!err && mc_capable())
-		err = device_create_file(dev, &dev_attr_sched_mc_power_savings);
-#endif
-	return err;
-}
-#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
 
 /*
  * Update cpusets according to cpu_active mask. If cpusets are
kernel/sched/fair.c (+2 -273)
···
 		 * If power savings logic is enabled for a domain, see if we
 		 * are not overloaded, if so, don't balance wider.
 		 */
-		if (tmp->flags & (SD_POWERSAVINGS_BALANCE|SD_PREFER_LOCAL)) {
+		if (tmp->flags & (SD_PREFER_LOCAL)) {
 			unsigned long power = 0;
 			unsigned long nr_running = 0;
 			unsigned long capacity;
···
 			}
 
 			capacity = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
-
-			if (tmp->flags & SD_POWERSAVINGS_BALANCE)
-				nr_running /= 2;
 
 			if (nr_running < capacity)
 				want_sd = 0;
···
 	unsigned int  busiest_group_weight;
 
 	int group_imb; /* Is there imbalance in this sd */
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-	int power_savings_balance; /* Is powersave balance needed for this sd */
-	struct sched_group *group_min; /* Least loaded group in sd */
-	struct sched_group *group_leader; /* Group which relieves group_min */
-	unsigned long min_load_per_task; /* load_per_task in group_min */
-	unsigned long leader_nr_running; /* Nr running of group_leader */
-	unsigned long min_nr_running; /* Nr running of group_min */
-#endif
 };
···
 
 	return load_idx;
 }
-
-
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-/**
- * init_sd_power_savings_stats - Initialize power savings statistics for
- * the given sched_domain, during load balancing.
- *
- * @sd: Sched domain whose power-savings statistics are to be initialized.
- * @sds: Variable containing the statistics for sd.
- * @idle: Idle status of the CPU at which we're performing load-balancing.
- */
-static inline void init_sd_power_savings_stats(struct sched_domain *sd,
-	struct sd_lb_stats *sds, enum cpu_idle_type idle)
-{
-	/*
-	 * Busy processors will not participate in power savings
-	 * balance.
-	 */
-	if (idle == CPU_NOT_IDLE || !(sd->flags & SD_POWERSAVINGS_BALANCE))
-		sds->power_savings_balance = 0;
-	else {
-		sds->power_savings_balance = 1;
-		sds->min_nr_running = ULONG_MAX;
-		sds->leader_nr_running = 0;
-	}
-}
-
-/**
- * update_sd_power_savings_stats - Update the power saving stats for a
- * sched_domain while performing load balancing.
- *
- * @group: sched_group belonging to the sched_domain under consideration.
- * @sds: Variable containing the statistics of the sched_domain
- * @local_group: Does group contain the CPU for which we're performing
- *		load balancing ?
- * @sgs: Variable containing the statistics of the group.
- */
-static inline void update_sd_power_savings_stats(struct sched_group *group,
-	struct sd_lb_stats *sds, int local_group, struct sg_lb_stats *sgs)
-{
-
-	if (!sds->power_savings_balance)
-		return;
-
-	/*
-	 * If the local group is idle or completely loaded
-	 * no need to do power savings balance at this domain
-	 */
-	if (local_group && (sds->this_nr_running >= sgs->group_capacity ||
-				!sds->this_nr_running))
-		sds->power_savings_balance = 0;
-
-	/*
-	 * If a group is already running at full capacity or idle,
-	 * don't include that group in power savings calculations
-	 */
-	if (!sds->power_savings_balance ||
-		sgs->sum_nr_running >= sgs->group_capacity ||
-		!sgs->sum_nr_running)
-		return;
-
-	/*
-	 * Calculate the group which has the least non-idle load.
-	 * This is the group from where we need to pick up the load
-	 * for saving power
-	 */
-	if ((sgs->sum_nr_running < sds->min_nr_running) ||
-	    (sgs->sum_nr_running == sds->min_nr_running &&
-	     group_first_cpu(group) > group_first_cpu(sds->group_min))) {
-		sds->group_min = group;
-		sds->min_nr_running = sgs->sum_nr_running;
-		sds->min_load_per_task = sgs->sum_weighted_load /
-					   sgs->sum_nr_running;
-	}
-
-	/*
-	 * Calculate the group which is almost near its
-	 * capacity but still has some space to pick up some load
-	 * from other group and save more power
-	 */
-	if (sgs->sum_nr_running + 1 > sgs->group_capacity)
-		return;
-
-	if (sgs->sum_nr_running > sds->leader_nr_running ||
-	    (sgs->sum_nr_running == sds->leader_nr_running &&
-	     group_first_cpu(group) < group_first_cpu(sds->group_leader))) {
-		sds->group_leader = group;
-		sds->leader_nr_running = sgs->sum_nr_running;
-	}
-}
-
-/**
- * check_power_save_busiest_group - see if there is potential for some power-savings balance
- * @env: load balance environment
- * @sds: Variable containing the statistics of the sched_domain
- *	under consideration.
- *
- * Description:
- * Check if we have potential to perform some power-savings balance.
- * If yes, set the busiest group to be the least loaded group in the
- * sched_domain, so that it's CPUs can be put to idle.
- *
- * Returns 1 if there is potential to perform power-savings balance.
- * Else returns 0.
- */
-static inline
-int check_power_save_busiest_group(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	if (!sds->power_savings_balance)
-		return 0;
-
-	if (sds->this != sds->group_leader ||
-			sds->group_leader == sds->group_min)
-		return 0;
-
-	env->imbalance = sds->min_load_per_task;
-	sds->busiest = sds->group_min;
-
-	return 1;
-
-}
-#else /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-static inline void init_sd_power_savings_stats(struct sched_domain *sd,
-	struct sd_lb_stats *sds, enum cpu_idle_type idle)
-{
-	return;
-}
-
-static inline void update_sd_power_savings_stats(struct sched_group *group,
-	struct sd_lb_stats *sds, int local_group, struct sg_lb_stats *sgs)
-{
-	return;
-}
-
-static inline
-int check_power_save_busiest_group(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	return 0;
-}
-#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-
 
 unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu)
 {
···
 	if (child && child->flags & SD_PREFER_SIBLING)
 		prefer_sibling = 1;
 
-	init_sd_power_savings_stats(env->sd, sds, env->idle);
 	load_idx = get_sd_load_idx(env->sd, env->idle);
 
 	do {
···
 			sds->group_imb = sgs.group_imb;
 		}
 
-		update_sd_power_savings_stats(sg, sds, local_group, &sgs);
 		sg = sg->next;
 	} while (sg != env->sd->groups);
 }
···
 		return sds.busiest;
 
 out_balanced:
-	/*
-	 * There is no obvious imbalance. But check if we can do some balancing
-	 * to save power.
-	 */
-	if (check_power_save_busiest_group(env, &sds))
-		return sds.busiest;
 ret:
 	env->imbalance = 0;
 	return NULL;
···
 		 */
 		if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
 			return 1;
-
-		/*
-		 * The only task running in a non-idle cpu can be moved to this
-		 * cpu in an attempt to completely freeup the other CPU
-		 * package.
-		 *
-		 * The package power saving logic comes from
-		 * find_busiest_group(). If there are no imbalance, then
-		 * f_b_g() will return NULL. However when sched_mc={1,2} then
-		 * f_b_g() will select a group from which a running task may be
-		 * pulled to this cpu in order to make the other package idle.
-		 * If there is no opportunity to make a package idle and if
-		 * there are no imbalance, then f_b_g() will return NULL and no
-		 * action will be taken in load_balance_newidle().
-		 *
-		 * Under normal task pull operation due to imbalance, there
-		 * will be more than one task in the source run queue and
-		 * move_tasks() will succeed.  ld_moved will be true and this
-		 * active balance code will not be triggered.
-		 */
-		if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
-			return 0;
 	}
 
 	return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2);
···
 	unsigned long next_balance;     /* in jiffy units */
 } nohz ____cacheline_aligned;
 
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-/**
- * lowest_flag_domain - Return lowest sched_domain containing flag.
- * @cpu:	The cpu whose lowest level of sched domain is to
- *		be returned.
- * @flag:	The flag to check for the lowest sched_domain
- *		for the given cpu.
- *
- * Returns the lowest sched_domain of a cpu which contains the given flag.
- */
-static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
-{
-	struct sched_domain *sd;
-
-	for_each_domain(cpu, sd)
-		if (sd->flags & flag)
-			break;
-
-	return sd;
-}
-
-/**
- * for_each_flag_domain - Iterates over sched_domains containing the flag.
- * @cpu:	The cpu whose domains we're iterating over.
- * @sd:		variable holding the value of the power_savings_sd
- *		for cpu.
- * @flag:	The flag to filter the sched_domains to be iterated.
- *
- * Iterates over all the scheduler domains for a given cpu that has the 'flag'
- * set, starting from the lowest sched_domain to the highest.
- */
-#define for_each_flag_domain(cpu, sd, flag) \
-	for (sd = lowest_flag_domain(cpu, flag); \
-		(sd && (sd->flags & flag)); sd = sd->parent)
-
-/**
- * find_new_ilb - Finds the optimum idle load balancer for nomination.
- * @cpu:	The cpu which is nominating a new idle_load_balancer.
- *
- * Returns:	Returns the id of the idle load balancer if it exists,
- *		Else, returns >= nr_cpu_ids.
- *
- * This algorithm picks the idle load balancer such that it belongs to a
- * semi-idle powersavings sched_domain. The idea is to try and avoid
- * completely idle packages/cores just for the purpose of idle load balancing
- * when there are other idle cpu's which are better suited for that job.
- */
-static int find_new_ilb(int cpu)
+static inline int find_new_ilb(int call_cpu)
 {
 	int ilb = cpumask_first(nohz.idle_cpus_mask);
-	struct sched_group *ilbg;
-	struct sched_domain *sd;
 
-	/*
-	 * Have idle load balancer selection from semi-idle packages only
-	 * when power-aware load balancing is enabled
-	 */
-	if (!(sched_smt_power_savings || sched_mc_power_savings))
-		goto out_done;
-
-	/*
-	 * Optimize for the case when we have no idle CPUs or only one
-	 * idle CPU. Don't walk the sched_domain hierarchy in such cases
-	 */
-	if (cpumask_weight(nohz.idle_cpus_mask) < 2)
-		goto out_done;
-
-	rcu_read_lock();
-	for_each_flag_domain(cpu, sd, SD_POWERSAVINGS_BALANCE) {
-		ilbg = sd->groups;
-
-		do {
-			if (ilbg->group_weight !=
-				atomic_read(&ilbg->sgp->nr_busy_cpus)) {
-				ilb = cpumask_first_and(nohz.idle_cpus_mask,
-						sched_group_cpus(ilbg));
-				goto unlock;
-			}
-
-			ilbg = ilbg->next;
-
-		} while (ilbg != sd->groups);
-	}
-unlock:
-	rcu_read_unlock();
-
-out_done:
 	if (ilb < nr_cpu_ids && idle_cpu(ilb))
 		return ilb;
 
 	return nr_cpu_ids;
 }
-#else /*  (CONFIG_SCHED_MC || CONFIG_SCHED_SMT) */
-static inline int find_new_ilb(int call_cpu)
-{
-	return nr_cpu_ids;
-}
-#endif
 
 /*
  * Kick a CPU to do the nohz balancing, if it is time for it. We pick the
tools/power/cpupower/man/cpupower-set.1 (-9)
···
 savings
 .RE
 
-sched_mc_power_savings is dependent upon SCHED_MC, which is
-itself architecture dependent.
-
-sched_smt_power_savings is dependent upon SCHED_SMT, which
-is itself architecture dependent.
-
-The two files are independent of each other. It is possible
-that one file may be present without the other.
-
 .SH "SEE ALSO"
 cpupower-info(1), cpupower-monitor(1), powertop(1)
 .PP
tools/power/cpupower/utils/helpers/sysfs.c (+2 -33)
···
  */
 int sysfs_get_sched(const char *smt_mc)
 {
-	unsigned long value;
-	char linebuf[MAX_LINE_LEN];
-	char *endp;
-	char path[SYSFS_PATH_MAX];
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	value = strtoul(linebuf, &endp, 0);
-	if (endp == linebuf || errno == ERANGE)
-		return -1;
-	return value;
+	return -ENODEV;
 }
 
 /*
···
  */
 int sysfs_set_sched(const char *smt_mc, int val)
 {
-	char linebuf[MAX_LINE_LEN];
-	char path[SYSFS_PATH_MAX];
-	struct stat statbuf;
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	sprintf(linebuf, "%d", val);
-
-	if (stat(path, &statbuf) != 0)
-		return -ENODEV;
-
-	if (sysfs_write_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	return 0;
+	return -ENODEV;
 }