Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"Again, the majority of changes go into the cpufreq subsystem, but
there are no big features this time. The cpufreq changes that stand
out somewhat are the governor interface rework and improvements
related to the handling of frequency tables. Apart from those, there
are fixes and new device/CPU IDs in drivers, cleanups and an
improvement of the new schedutil governor.

Next, there are some changes in the hibernation core, including a fix
for a nasty problem related to the MONITOR/MWAIT usage by CPU offline
during resume from hibernation, a few core improvements related to
memory management during resume, a couple of additional debug features
and cleanups.

Finally, we have some fixes and cleanups in the devfreq subsystem,
generic power domains framework improvements related to system
suspend/resume, support for some new chips in intel_idle and in the
power capping RAPL driver, a new version of the AnalyzeSuspend utility
and some assorted fixes and cleanups.

Specifics:

- Rework the cpufreq governor interface to make it more
straightforward and modify the conservative governor to avoid using
transition notifications (Rafael Wysocki).

- Rework the handling of frequency tables by the cpufreq core to make
it more efficient (Viresh Kumar).

- Modify the schedutil governor to reduce the number of wakeups it
causes to occur in cases when the CPU frequency doesn't need to be
changed (Steve Muckle, Viresh Kumar).

- Fix some minor issues and clean up code in the cpufreq core and
governors (Rafael Wysocki, Viresh Kumar).

- Add Intel Broxton support to the intel_pstate driver (Srinivas
Pandruvada).

- Fix problems related to the config TDP feature and to the validity
of the MSR_HWP_INTERRUPT register in intel_pstate (Jan Kiszka,
Srinivas Pandruvada).

- Make intel_pstate update the cpu_frequency tracepoint even if the
frequency doesn't change to avoid confusing powertop (Rafael
Wysocki).

- Clean up the usage of __init/__initdata in intel_pstate, mark some
of its internal variables as __read_mostly and drop an unused
structure element from it (Jisheng Zhang, Carsten Emde).

- Clean up the usage of some duplicate MSR symbols in intel_pstate
and turbostat (Srinivas Pandruvada).

- Update/fix the powernv, s3c24xx and mvebu cpufreq drivers (Akshay
Adiga, Viresh Kumar, Ben Dooks).

- Fix a regression (introduced during the 4.5 cycle) in the
pcc-cpufreq driver by reverting the problematic commit (Andreas
Herrmann).

- Add support for Intel Denverton to intel_idle, clean up Broxton
support in it and make it explicitly non-modular (Jacob Pan, Jan
Beulich, Paul Gortmaker).

- Add support for Denverton and Ivy Bridge server to the Intel RAPL
power capping driver and make it more careful about the handling of
MSRs that may not be present (Jacob Pan, Xiaolong Wang).

- Fix resume from hibernation on x86-64 by making the CPU offline
during resume avoid using MONITOR/MWAIT in the "play dead" loop
which may lead to an inadvertent "revival" of a "dead" CPU and a
page fault leading to a kernel crash from it (Rafael Wysocki).

- Make memory management during resume from hibernation more
straightforward (Rafael Wysocki).

- Add debug features that should help to detect problems related to
hibernation and resume from it (Rafael Wysocki, Chen Yu).

- Clean up hibernation core somewhat (Rafael Wysocki).

- Prevent KASAN from instrumenting the hibernation core which leads
to large numbers of false-positives from it (James Morse).

- Prevent PM (hibernate and suspend) notifiers from being called
during the cleanup phase if they have not been called during the
corresponding preparation phase which is possible if one of the
other notifiers returns an error at that time (Lianwei Wang).

- Improve suspend-related debug printout in the tasks freezer and
clean up suspend-related console handling (Roger Lu, Borislav
Petkov).

- Update the AnalyzeSuspend script in the kernel sources to version
4.2 (Todd Brandt).

- Modify the generic power domains framework to make it handle system
suspend/resume better (Ulf Hansson).

- Make the runtime PM framework avoid resuming devices synchronously
when user space changes the runtime PM settings for them and
improve its error reporting (Rafael Wysocki, Linus Walleij).

- Fix error paths in devfreq drivers (exynos, exynos-ppmu,
exynos-bus) and in the core, make some devfreq code explicitly
non-modular and change some of it into tristate (Bartlomiej
Zolnierkiewicz, Peter Chen, Paul Gortmaker).

- Add DT support to the generic PM clocks management code and make it
export some more symbols (Jon Hunter, Paul Gortmaker).

- Make the PCI PM core code slightly more robust against possible
driver errors (Andy Shevchenko).

- Make it possible to change DESTDIR and PREFIX in turbostat (Andy
Shevchenko)"
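
[Editorial sketch] The governor interface rework in the first item above replaces the single event-multiplexed ->governor(policy, event) callback with dedicated ->start()/->stop() hooks, as the spudemand conversion in the diff below illustrates. A minimal user-space sketch of the new callback shape (plain C, not kernel code; the struct fields mirror the kernel's names but everything here is illustrative):

```c
#include <errno.h>

/* Toy stand-ins for the kernel structures; only the fields the sketch uses. */
struct cpufreq_policy {
	unsigned int cpu;
	unsigned int cur;	/* current frequency in kHz; 0 = unset */
	int started;		/* toy state flag for this sketch */
};

/* After the rework a governor supplies separate callbacks instead of
 * switching on CPUFREQ_GOV_START/CPUFREQ_GOV_STOP events. */
struct cpufreq_governor {
	const char *name;
	int (*start)(struct cpufreq_policy *policy);
	void (*stop)(struct cpufreq_policy *policy);
};

static int demo_gov_start(struct cpufreq_policy *policy)
{
	if (!policy->cur)	/* no current frequency in policy: refuse */
		return -EINVAL;
	policy->started = 1;
	return 0;
}

static void demo_gov_stop(struct cpufreq_policy *policy)
{
	policy->started = 0;
}

static struct cpufreq_governor demo_governor = {
	.name  = "demo",
	.start = demo_gov_start,
	.stop  = demo_gov_stop,
};
```

Each callback now has a single responsibility and its own return type (note ->stop() returns void), which is what lets the spudemand conversion below delete its switch statement.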

* tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (89 commits)
Revert "cpufreq: pcc-cpufreq: update default value of cpuinfo_transition_latency"
PM / hibernate: Introduce test_resume mode for hibernation
cpufreq: export cpufreq_driver_resolve_freq()
cpufreq: Disallow ->resolve_freq() for drivers providing ->target_index()
PCI / PM: check all fields in pci_set_platform_pm()
cpufreq: acpi-cpufreq: use cached frequency mapping when possible
cpufreq: schedutil: map raw required frequency to driver frequency
cpufreq: add cpufreq_driver_resolve_freq()
cpufreq: intel_pstate: Check cpuid for MSR_HWP_INTERRUPT
intel_pstate: Update cpu_frequency tracepoint every time
cpufreq: intel_pstate: clean remnant struct element
PM / tools: scripts: AnalyzeSuspend v4.2
x86 / hibernate: Use hlt_play_dead() when resuming from hibernation
cpufreq: powernv: Replacing pstate_id with frequency table index
intel_pstate: Fix MSR_CONFIG_TDP_x addressing in core_get_max_pstate()
PM / hibernate: Image data protection during restoration
PM / hibernate: Add missing braces in __register_nosave_region()
PM / hibernate: Clean up comments in snapshot.c
PM / hibernate: Clean up function headers in snapshot.c
PM / hibernate: Add missing braces in hibernate_setup()
...
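
[Editorial sketch] The new hibernation debug knobs from the shortlog above can be exercised roughly as follows. This is a configuration/command fragment, not a runnable script: it requires root, a configured swap area, and CONFIG_HIBERNATION, and the exact set of /sys/power/disk values varies by kernel configuration.

```
# Boot parameter from the kernel-parameters.txt hunk below:
# make all pages holding image data read-only during restoration.
#     hibernate=protect_image

# test_resume mode: create the hibernation image, then immediately
# load and restore it, exercising the resume path without a power cycle.
echo test_resume > /sys/power/disk
echo disk > /sys/power/state
```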

Total: +4342 -2831
Documentation/cpu-freq/core.txt (+2 -2)

···
 For details about OPP, see Documentation/power/opp.txt

 dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
-	cpufreq_frequency_table_cpuinfo which is provided with the list of
+	cpufreq_table_validate_and_show() which is provided with the list of
	frequencies that are available for operation. This function provides
	a ready to use conversion routine to translate the OPP layer's internal
	information about the available frequencies into a format readily
···
	/* Do things */
	r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
	if (!r)
-		cpufreq_frequency_table_cpuinfo(policy, freq_table);
+		cpufreq_table_validate_and_show(policy, freq_table);
	/* Do other things */
 }
Documentation/cpu-freq/cpu-drivers.txt (+4 -6)

···
 CPUFREQ_ENTRY_INVALID. The entries don't need to be in ascending
 order.

-By calling cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
+By calling cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
 					struct cpufreq_frequency_table *table);
 the cpuinfo.min_freq and cpuinfo.max_freq values are detected, and
 policy->min and policy->max are set to the same values. This is
···
 ->verify call.

 int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
-                                   struct cpufreq_frequency_table *table,
                                    unsigned int target_freq,
-                                   unsigned int relation,
-                                   unsigned int *index);
+                                   unsigned int relation);

 is the corresponding frequency table helper for the ->target
-stage. Just pass the values to this function, and the unsigned int
-index returns the number of the frequency table entry which contains
+stage. Just pass the values to this function, and this function
+returns the number of the frequency table entry which contains
 the frequency the CPU shall be set to.

 The following macros can be used as iterators over cpufreq_frequency_table:
Documentation/cpu-freq/pcc-cpufreq.txt (+2 -2)

···
 2.2 cpuinfo_transition_latency:
 -------------------------------
-The cpuinfo_transition_latency field is CPUFREQ_ETERNAL. The PCC specification
-does not include a field to expose this value currently.
+The cpuinfo_transition_latency field is 0. The PCC specification does
+not include a field to expose this value currently.

 2.3 cpuinfo_cur_freq:
 ---------------------
Documentation/kernel-parameters.txt (+3)

···
 			present during boot.
 		nocompress	Don't compress/decompress hibernation images.
 		no	Disable hibernation and resume.
+		protect_image	Turn on image protection during restoration
+			(that will set all pages holding image data
+			during restoration read-only).

 	retain_initrd	[RAM] Keep initrd memory after extraction
arch/powerpc/platforms/cell/cpufreq_spudemand.c (+41 -45)

···
 	cancel_delayed_work_sync(&info->work);
 }

-static int spu_gov_govern(struct cpufreq_policy *policy, unsigned int event)
+static int spu_gov_start(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
-	struct spu_gov_info_struct *info, *affected_info;
+	struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
+	struct spu_gov_info_struct *affected_info;
 	int i;
-	int ret = 0;

-	info = &per_cpu(spu_gov_info, cpu);
-
-	switch (event) {
-	case CPUFREQ_GOV_START:
-		if (!cpu_online(cpu)) {
-			printk(KERN_ERR "cpu %d is not online\n", cpu);
-			ret = -EINVAL;
-			break;
-		}
-
-		if (!policy->cur) {
-			printk(KERN_ERR "no cpu specified in policy\n");
-			ret = -EINVAL;
-			break;
-		}
-
-		/* initialize spu_gov_info for all affected cpus */
-		for_each_cpu(i, policy->cpus) {
-			affected_info = &per_cpu(spu_gov_info, i);
-			affected_info->policy = policy;
-		}
-
-		info->poll_int = POLL_TIME;
-
-		/* setup timer */
-		spu_gov_init_work(info);
-
-		break;
-
-	case CPUFREQ_GOV_STOP:
-		/* cancel timer */
-		spu_gov_cancel_work(info);
-
-		/* clean spu_gov_info for all affected cpus */
-		for_each_cpu (i, policy->cpus) {
-			info = &per_cpu(spu_gov_info, i);
-			info->policy = NULL;
-		}
-
-		break;
+	if (!cpu_online(cpu)) {
+		printk(KERN_ERR "cpu %d is not online\n", cpu);
+		return -EINVAL;
 	}

-	return ret;
+	if (!policy->cur) {
+		printk(KERN_ERR "no cpu specified in policy\n");
+		return -EINVAL;
+	}
+
+	/* initialize spu_gov_info for all affected cpus */
+	for_each_cpu(i, policy->cpus) {
+		affected_info = &per_cpu(spu_gov_info, i);
+		affected_info->policy = policy;
+	}
+
+	info->poll_int = POLL_TIME;
+
+	/* setup timer */
+	spu_gov_init_work(info);
+
+	return 0;
+}
+
+static void spu_gov_stop(struct cpufreq_policy *policy)
+{
+	unsigned int cpu = policy->cpu;
+	struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
+	int i;
+
+	/* cancel timer */
+	spu_gov_cancel_work(info);
+
+	/* clean spu_gov_info for all affected cpus */
+	for_each_cpu (i, policy->cpus) {
+		info = &per_cpu(spu_gov_info, i);
+		info->policy = NULL;
+	}
 }

 static struct cpufreq_governor spu_governor = {
 	.name = "spudemand",
-	.governor = spu_gov_govern,
+	.start = spu_gov_start,
+	.stop = spu_gov_stop,
 	.owner = THIS_MODULE,
 };
arch/x86/include/asm/msr-index.h (-2)

···
 #define MSR_OFFCORE_RSP_0		0x000001a6
 #define MSR_OFFCORE_RSP_1		0x000001a7
-#define MSR_NHM_TURBO_RATIO_LIMIT	0x000001ad
-#define MSR_IVT_TURBO_RATIO_LIMIT	0x000001ae
 #define MSR_TURBO_RATIO_LIMIT		0x000001ad
 #define MSR_TURBO_RATIO_LIMIT1		0x000001ae
 #define MSR_TURBO_RATIO_LIMIT2		0x000001af
arch/x86/include/asm/smp.h (+1)

···
 int native_cpu_disable(void);
 int common_cpu_die(unsigned int cpu);
 void native_cpu_die(unsigned int cpu);
+void hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
arch/x86/kernel/smpboot.c (+1 -1)

···
 	}
 }

-static inline void hlt_play_dead(void)
+void hlt_play_dead(void)
 {
 	if (__this_cpu_read(cpu_info.x86) >= 4)
 		wbinvd();
arch/x86/power/cpu.c (+30)

···
 #include <linux/export.h>
 #include <linux/smp.h>
 #include <linux/perf_event.h>
+#include <linux/tboot.h>

 #include <asm/pgtable.h>
 #include <asm/proto.h>
···
 }
 #ifdef CONFIG_X86_32
 EXPORT_SYMBOL(restore_processor_state);
+#endif
+
+#if defined(CONFIG_HIBERNATION) && defined(CONFIG_HOTPLUG_CPU)
+static void resume_play_dead(void)
+{
+	play_dead_common();
+	tboot_shutdown(TB_SHUTDOWN_WFS);
+	hlt_play_dead();
+}
+
+int hibernate_resume_nonboot_cpu_disable(void)
+{
+	void (*play_dead)(void) = smp_ops.play_dead;
+	int ret;
+
+	/*
+	 * Ensure that MONITOR/MWAIT will not be used in the "play dead" loop
+	 * during hibernate image restoration, because it is likely that the
+	 * monitored address will be actually written to at that time and then
+	 * the "dead" CPU will attempt to execute instructions again, but the
+	 * address in its instruction pointer may not be possible to resolve
+	 * any more at that point (the page tables used by it previously may
+	 * have been overwritten by hibernate image data).
+	 */
+	smp_ops.play_dead = resume_play_dead;
+	ret = disable_nonboot_cpus();
+	smp_ops.play_dead = play_dead;
+	return ret;
+}
 #endif

 /*
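
[Editorial sketch] The fix above works by temporarily swapping the smp_ops.play_dead pointer for the duration of disable_nonboot_cpus(), then restoring it. The save/override/restore pattern, reduced to a self-contained user-space sketch (hypothetical names, not kernel code):

```c
/* Save/override/restore of a function pointer around one operation,
 * mirroring how hibernate_resume_nonboot_cpu_disable() forces the
 * HLT-based "play dead" loop only while the image is being restored. */

struct ops { void (*play_dead)(void); };

static int used_safe_variant;

static void mwait_play_dead_sim(void) { used_safe_variant = 0; }
static void hlt_play_dead_sim(void)   { used_safe_variant = 1; }

/* Default is the MWAIT-based variant, as on a normal CPU offline. */
static struct ops smp_ops_sim = { .play_dead = mwait_play_dead_sim };

/* Run 'op' with the safe variant installed, then restore the default. */
static int with_hlt_play_dead(int (*op)(void))
{
	void (*saved)(void) = smp_ops_sim.play_dead;
	int ret;

	smp_ops_sim.play_dead = hlt_play_dead_sim;
	ret = op();			/* disable_nonboot_cpus() stand-in */
	smp_ops_sim.play_dead = saved;	/* restored even if op() failed */
	return ret;
}

static int fake_disable_nonboot_cpus(void)
{
	smp_ops_sim.play_dead();	/* each offlined CPU "plays dead" */
	return 0;
}
```

The point of the pattern is that the override is scoped: any later, ordinary CPU offline still gets the default play_dead implementation.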
drivers/base/power/clock_ops.c (+45)

···
 {
 	return __pm_clk_add(dev, con_id, NULL);
 }
+EXPORT_SYMBOL_GPL(pm_clk_add);

 /**
  * pm_clk_add_clk - Start using a device clock for power management.
···
 {
 	return __pm_clk_add(dev, NULL, clk);
 }
+EXPORT_SYMBOL_GPL(pm_clk_add_clk);

+
+/**
+ * of_pm_clk_add_clk - Start using a device clock for power management.
+ * @dev: Device whose clock is going to be used for power management.
+ * @name: Name of clock that is going to be used for power management.
+ *
+ * Add the clock described in the 'clocks' device-tree node that matches
+ * with the 'name' provided, to the list of clocks used for the power
+ * management of @dev. On success, returns 0. Returns a negative error
+ * code if the clock is not found or cannot be added.
+ */
+int of_pm_clk_add_clk(struct device *dev, const char *name)
+{
+	struct clk *clk;
+	int ret;
+
+	if (!dev || !dev->of_node || !name)
+		return -EINVAL;
+
+	clk = of_clk_get_by_name(dev->of_node, name);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	ret = pm_clk_add_clk(dev, clk);
+	if (ret) {
+		clk_put(clk);
+		return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(of_pm_clk_add_clk);

 /**
  * of_pm_clk_add_clks - Start using device clock(s) for power management.
···

 	return ret;
 }
+EXPORT_SYMBOL_GPL(of_pm_clk_add_clks);

 /**
  * __pm_clk_remove - Destroy PM clock entry.
···

 	__pm_clk_remove(ce);
 }
+EXPORT_SYMBOL_GPL(pm_clk_remove);

 /**
  * pm_clk_remove_clk - Stop using a device clock for power management.
···

 	__pm_clk_remove(ce);
 }
+EXPORT_SYMBOL_GPL(pm_clk_remove_clk);

 /**
  * pm_clk_init - Initialize a device's list of power management clocks.
···
 	if (psd)
 		INIT_LIST_HEAD(&psd->clock_list);
 }
+EXPORT_SYMBOL_GPL(pm_clk_init);

 /**
  * pm_clk_create - Create and initialize a device's list of PM clocks.
···
 {
 	return dev_pm_get_subsys_data(dev);
 }
+EXPORT_SYMBOL_GPL(pm_clk_create);

 /**
  * pm_clk_destroy - Destroy a device's list of power management clocks.
···
 		__pm_clk_remove(ce);
 	}
 }
+EXPORT_SYMBOL_GPL(pm_clk_destroy);

 /**
  * pm_clk_suspend - Disable clocks in a device's PM clock list.
···

 	return 0;
 }
+EXPORT_SYMBOL_GPL(pm_clk_suspend);

 /**
  * pm_clk_resume - Enable clocks in a device's PM clock list.
···

 	return 0;
 }
+EXPORT_SYMBOL_GPL(pm_clk_resume);

 /**
  * pm_clk_notify - Notify routine for device addition and removal.
···

 	return 0;
 }
+EXPORT_SYMBOL_GPL(pm_clk_runtime_suspend);

 int pm_clk_runtime_resume(struct device *dev)
 {
···

 	return pm_generic_runtime_resume(dev);
 }
+EXPORT_SYMBOL_GPL(pm_clk_runtime_resume);

 #else /* !CONFIG_PM_CLK */

···
 	clknb->nb.notifier_call = pm_clk_notify;
 	bus_register_notifier(bus, &clknb->nb);
 }
+EXPORT_SYMBOL_GPL(pm_clk_add_notifier);
drivers/base/power/domain.c (+54 -247)

···
 	struct gpd_link *link;
 	int ret = 0;

-	if (genpd->status == GPD_STATE_ACTIVE
-	    || (genpd->prepared_count > 0 && genpd->suspend_power_off))
+	if (genpd->status == GPD_STATE_ACTIVE)
 		return 0;

 	/*
···

 	mutex_lock(&genpd->lock);

-	if (genpd->prepared_count++ == 0) {
+	if (genpd->prepared_count++ == 0)
 		genpd->suspended_count = 0;
-		genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
-	}

 	mutex_unlock(&genpd->lock);
-
-	if (genpd->suspend_power_off)
-		return 0;
-
-	/*
-	 * The PM domain must be in the GPD_STATE_ACTIVE state at this point,
-	 * so genpd_poweron() will return immediately, but if the device
-	 * is suspended (e.g. it's been stopped by genpd_stop_dev()), we need
-	 * to make it operational.
-	 */
-	pm_runtime_resume(dev);
-	__pm_runtime_disable(dev, false);

 	ret = pm_generic_prepare(dev);
 	if (ret) {
 		mutex_lock(&genpd->lock);

-		if (--genpd->prepared_count == 0)
-			genpd->suspend_power_off = false;
+		genpd->prepared_count--;

 		mutex_unlock(&genpd->lock);
-		pm_runtime_enable(dev);
 	}

 	return ret;
-}
-
-/**
- * pm_genpd_suspend - Suspend a device belonging to an I/O PM domain.
- * @dev: Device to suspend.
- *
- * Suspend a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a PM domain consisting of I/O devices.
- */
-static int pm_genpd_suspend(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_suspend(dev);
-}
-
-/**
- * pm_genpd_suspend_late - Late suspend of a device from an I/O PM domain.
- * @dev: Device to suspend.
- *
- * Carry out a late suspend of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a PM domain consisting of I/O devices.
- */
-static int pm_genpd_suspend_late(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_suspend_late(dev);
 }

 /**
···
 static int pm_genpd_suspend_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	int ret;

 	dev_dbg(dev, "%s()\n", __func__);

···
 	if (IS_ERR(genpd))
 		return -EINVAL;

-	if (genpd->suspend_power_off
-	    || (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)))
+	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
 		return 0;

-	genpd_stop_dev(genpd, dev);
+	if (genpd->dev_ops.stop && genpd->dev_ops.start) {
+		ret = pm_runtime_force_suspend(dev);
+		if (ret)
+			return ret;
+	}

 	/*
 	 * Since all of the "noirq" callbacks are executed sequentially, it is
···
 static int pm_genpd_resume_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	int ret = 0;

 	dev_dbg(dev, "%s()\n", __func__);

···
 	if (IS_ERR(genpd))
 		return -EINVAL;

-	if (genpd->suspend_power_off
-	    || (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)))
+	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
 		return 0;

 	/*
···
 	pm_genpd_sync_poweron(genpd, true);
 	genpd->suspended_count--;

-	return genpd_start_dev(genpd, dev);
-}
+	if (genpd->dev_ops.stop && genpd->dev_ops.start)
+		ret = pm_runtime_force_resume(dev);

-/**
- * pm_genpd_resume_early - Early resume of a device in an I/O PM domain.
- * @dev: Device to resume.
- *
- * Carry out an early resume of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a power domain consisting of I/O
- * devices.
- */
-static int pm_genpd_resume_early(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_resume_early(dev);
-}
-
-/**
- * pm_genpd_resume - Resume of device in an I/O PM domain.
- * @dev: Device to resume.
- *
- * Resume a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a power domain consisting of I/O devices.
- */
-static int pm_genpd_resume(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_resume(dev);
-}
-
-/**
- * pm_genpd_freeze - Freezing a device in an I/O PM domain.
- * @dev: Device to freeze.
- *
- * Freeze a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a power domain consisting of I/O devices.
- */
-static int pm_genpd_freeze(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_freeze(dev);
-}
-
-/**
- * pm_genpd_freeze_late - Late freeze of a device in an I/O PM domain.
- * @dev: Device to freeze.
- *
- * Carry out a late freeze of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a power domain consisting of I/O
- * devices.
- */
-static int pm_genpd_freeze_late(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_freeze_late(dev);
+	return ret;
 }

 /**
···
 static int pm_genpd_freeze_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	int ret = 0;

 	dev_dbg(dev, "%s()\n", __func__);

···
 	if (IS_ERR(genpd))
 		return -EINVAL;

-	return genpd->suspend_power_off ? 0 : genpd_stop_dev(genpd, dev);
+	if (genpd->dev_ops.stop && genpd->dev_ops.start)
+		ret = pm_runtime_force_suspend(dev);
+
+	return ret;
 }

 /**
···
 static int pm_genpd_thaw_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	int ret = 0;

 	dev_dbg(dev, "%s()\n", __func__);

···
 	if (IS_ERR(genpd))
 		return -EINVAL;

-	return genpd->suspend_power_off ?
-		0 : genpd_start_dev(genpd, dev);
-}
+	if (genpd->dev_ops.stop && genpd->dev_ops.start)
+		ret = pm_runtime_force_resume(dev);

-/**
- * pm_genpd_thaw_early - Early thaw of device in an I/O PM domain.
- * @dev: Device to thaw.
- *
- * Carry out an early thaw of a device under the assumption that its
- * pm_domain field points to the domain member of an object of type
- * struct generic_pm_domain representing a power domain consisting of I/O
- * devices.
- */
-static int pm_genpd_thaw_early(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_thaw_early(dev);
-}
-
-/**
- * pm_genpd_thaw - Thaw a device belonging to an I/O power domain.
- * @dev: Device to thaw.
- *
- * Thaw a device under the assumption that its pm_domain field points to the
- * domain member of an object of type struct generic_pm_domain representing
- * a power domain consisting of I/O devices.
- */
-static int pm_genpd_thaw(struct device *dev)
-{
-	struct generic_pm_domain *genpd;
-
-	dev_dbg(dev, "%s()\n", __func__);
-
-	genpd = dev_to_genpd(dev);
-	if (IS_ERR(genpd))
-		return -EINVAL;
-
-	return genpd->suspend_power_off ? 0 : pm_generic_thaw(dev);
+	return ret;
 }

 /**
···
 static int pm_genpd_restore_noirq(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	int ret = 0;

 	dev_dbg(dev, "%s()\n", __func__);

···
 	 * At this point suspended_count == 0 means we are being run for the
 	 * first time for the given domain in the present cycle.
 	 */
-	if (genpd->suspended_count++ == 0) {
+	if (genpd->suspended_count++ == 0)
 		/*
 		 * The boot kernel might put the domain into arbitrary state,
 		 * so make it appear as powered off to pm_genpd_sync_poweron(),
 		 * so that it tries to power it on in case it was really off.
 		 */
 		genpd->status = GPD_STATE_POWER_OFF;
-		if (genpd->suspend_power_off) {
-			/*
-			 * If the domain was off before the hibernation, make
-			 * sure it will be off going forward.
-			 */
-			genpd_power_off(genpd, true);
-
-			return 0;
-		}
-	}
-
-	if (genpd->suspend_power_off)
-		return 0;

 	pm_genpd_sync_poweron(genpd, true);

-	return genpd_start_dev(genpd, dev);
+	if (genpd->dev_ops.stop && genpd->dev_ops.start)
+		ret = pm_runtime_force_resume(dev);
+
+	return ret;
 }

 /**
···
 static void pm_genpd_complete(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
-	bool run_complete;

 	dev_dbg(dev, "%s()\n", __func__);

···
 	if (IS_ERR(genpd))
 		return;

+	pm_generic_complete(dev);
+
 	mutex_lock(&genpd->lock);

-	run_complete = !genpd->suspend_power_off;
-	if (--genpd->prepared_count == 0)
-		genpd->suspend_power_off = false;
+	genpd->prepared_count--;
+	if (!genpd->prepared_count)
+		genpd_queue_power_off_work(genpd);

 	mutex_unlock(&genpd->lock);
-
-	if (run_complete) {
-		pm_generic_complete(dev);
-		pm_runtime_set_active(dev);
-		pm_runtime_enable(dev);
-		pm_request_idle(dev);
-	}
 }

 /**
···
 #else /* !CONFIG_PM_SLEEP */

 #define pm_genpd_prepare		NULL
-#define pm_genpd_suspend		NULL
-#define pm_genpd_suspend_late		NULL
 #define pm_genpd_suspend_noirq		NULL
-#define pm_genpd_resume_early		NULL
 #define pm_genpd_resume_noirq		NULL
-#define pm_genpd_resume			NULL
-#define pm_genpd_freeze			NULL
-#define pm_genpd_freeze_late		NULL
 #define pm_genpd_freeze_noirq		NULL
-#define pm_genpd_thaw_early		NULL
 #define pm_genpd_thaw_noirq		NULL
-#define pm_genpd_thaw			NULL
 #define pm_genpd_restore_noirq		NULL
 #define pm_genpd_complete		NULL

···
  * @genpd: PM domain object to initialize.
  * @gov: PM domain governor to associate with the domain (may be NULL).
  * @is_off: Initial value of the domain's power_is_off field.
+ *
+ * Returns 0 on successful initialization, else a negative error code.
  */
-void pm_genpd_init(struct generic_pm_domain *genpd,
-		   struct dev_power_governor *gov, bool is_off)
+int pm_genpd_init(struct generic_pm_domain *genpd,
+		  struct dev_power_governor *gov, bool is_off)
 {
 	if (IS_ERR_OR_NULL(genpd))
-		return;
+		return -EINVAL;

 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
···
 	genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
 	genpd->domain.ops.runtime_resume = genpd_runtime_resume;
 	genpd->domain.ops.prepare = pm_genpd_prepare;
-	genpd->domain.ops.suspend = pm_genpd_suspend;
-	genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
+	genpd->domain.ops.suspend = pm_generic_suspend;
+	genpd->domain.ops.suspend_late = pm_generic_suspend_late;
 	genpd->domain.ops.suspend_noirq = pm_genpd_suspend_noirq;
 	genpd->domain.ops.resume_noirq = pm_genpd_resume_noirq;
-	genpd->domain.ops.resume_early = pm_genpd_resume_early;
-	genpd->domain.ops.resume = pm_genpd_resume;
-	genpd->domain.ops.freeze = pm_genpd_freeze;
-	genpd->domain.ops.freeze_late = pm_genpd_freeze_late;
+	genpd->domain.ops.resume_early = pm_generic_resume_early;
+	genpd->domain.ops.resume = pm_generic_resume;
+	genpd->domain.ops.freeze = pm_generic_freeze;
+	genpd->domain.ops.freeze_late = pm_generic_freeze_late;
 	genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq;
 	genpd->domain.ops.thaw_noirq = pm_genpd_thaw_noirq;
-	genpd->domain.ops.thaw_early = pm_genpd_thaw_early;
-	genpd->domain.ops.thaw = pm_genpd_thaw;
-	genpd->domain.ops.poweroff = pm_genpd_suspend;
-	genpd->domain.ops.poweroff_late = pm_genpd_suspend_late;
+	genpd->domain.ops.thaw_early = pm_generic_thaw_early;
+	genpd->domain.ops.thaw = pm_generic_thaw;
+	genpd->domain.ops.poweroff = pm_generic_poweroff;
+	genpd->domain.ops.poweroff_late = pm_generic_poweroff_late;
 	genpd->domain.ops.poweroff_noirq = pm_genpd_suspend_noirq;
 	genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq;
-	genpd->domain.ops.restore_early = pm_genpd_resume_early;
-	genpd->domain.ops.restore = pm_genpd_resume;
+	genpd->domain.ops.restore_early = pm_generic_restore_early;
+	genpd->domain.ops.restore = pm_generic_restore;
 	genpd->domain.ops.complete = pm_genpd_complete;

 	if (genpd->flags & GENPD_FLAG_PM_CLK) {
···
 	mutex_lock(&gpd_list_lock);
 	list_add(&genpd->gpd_list_node, &gpd_list);
 	mutex_unlock(&gpd_list_lock);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(pm_genpd_init);
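
[Editorial sketch] One API-visible change in the hunk above: pm_genpd_init() now returns int instead of void, so platform code registering a domain should start checking the result. A compilable user-space sketch of the new calling convention (fake_genpd and fake_genpd_init are hypothetical stand-ins, not the kernel API):

```c
#include <errno.h>
#include <stddef.h>

/* Stand-in for struct generic_pm_domain; only what the sketch needs. */
struct fake_genpd {
	const char *name;
	int initialized;
};

/* Mirrors the new pm_genpd_init() contract: 0 on success,
 * -EINVAL for a NULL/invalid domain pointer. */
static int fake_genpd_init(struct fake_genpd *genpd)
{
	if (genpd == NULL)
		return -EINVAL;
	genpd->initialized = 1;
	return 0;
}
```

Callers that previously ignored the void return now get an error path for invalid domain objects instead of silent failure.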
+10 -3
drivers/base/power/runtime.c
··· 1045 1045 */ 1046 1046 if (!parent->power.disable_depth 1047 1047 && !parent->power.ignore_children 1048 - && parent->power.runtime_status != RPM_ACTIVE) 1048 + && parent->power.runtime_status != RPM_ACTIVE) { 1049 + dev_err(dev, "runtime PM trying to activate child device %s but parent (%s) is not active\n", 1050 + dev_name(dev), 1051 + dev_name(parent)); 1049 1052 error = -EBUSY; 1050 - else if (dev->power.runtime_status == RPM_SUSPENDED) 1053 + } else if (dev->power.runtime_status == RPM_SUSPENDED) { 1051 1054 atomic_inc(&parent->power.child_count); 1055 + } 1052 1056 1053 1057 spin_unlock(&parent->power.lock); 1054 1058 ··· 1260 1256 1261 1257 dev->power.runtime_auto = true; 1262 1258 if (atomic_dec_and_test(&dev->power.usage_count)) 1263 - rpm_idle(dev, RPM_AUTO); 1259 + rpm_idle(dev, RPM_AUTO | RPM_ASYNC); 1264 1260 1265 1261 out: 1266 1262 spin_unlock_irq(&dev->power.lock); ··· 1509 1505 ret = -ENOSYS; 1510 1506 goto out; 1511 1507 } 1508 + 1509 + if (!pm_runtime_status_suspended(dev)) 1510 + goto out; 1512 1511 1513 1512 ret = pm_runtime_set_active(dev); 1514 1513 if (ret)
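The runtime.c hunk adds a `dev_err()` when a child device is set active under an inactive parent; the underlying rule is unchanged. A simplified model of that parent check, with made-up field names standing in for the kernel's `dev->power` state:

```c
#include <assert.h>

/* Sketch of the check pm_runtime_set_active() performs before marking a
 * child active: the parent must itself be active unless runtime PM is
 * disabled for it or it ignores children. */
#define RPM_ACTIVE    0
#define RPM_SUSPENDED 2
#define EBUSY_MODEL   16

struct rpm_state {
    int disable_depth;     /* >0 means runtime PM disabled for the parent */
    int ignore_children;
    int runtime_status;
};

static int may_activate_child(const struct rpm_state *parent)
{
    if (!parent->disable_depth &&
        !parent->ignore_children &&
        parent->runtime_status != RPM_ACTIVE)
        return -EBUSY_MODEL;   /* this is where the patch now logs dev_err() */
    return 0;
}
```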
+4 -9
drivers/cpufreq/Kconfig
··· 31 31 depends on THERMAL 32 32 33 33 config CPU_FREQ_STAT 34 - tristate "CPU frequency translation statistics" 34 + bool "CPU frequency transition statistics" 35 35 default y 36 36 help 37 - This driver exports CPU frequency statistics information through sysfs 38 - file system. 39 - 40 - To compile this driver as a module, choose M here: the 41 - module will be called cpufreq_stats. 37 + Export CPU frequency statistics information through sysfs. 42 38 43 39 If in doubt, say N. 44 40 45 41 config CPU_FREQ_STAT_DETAILS 46 - bool "CPU frequency translation statistics details" 42 + bool "CPU frequency transition statistics details" 47 43 depends on CPU_FREQ_STAT 48 44 help 49 - This will show detail CPU frequency translation table in sysfs file 50 - system. 45 + Show detailed CPU frequency transition table in sysfs. 51 46 52 47 If in doubt, say N. 53 48
+7 -10
drivers/cpufreq/acpi-cpufreq.c
··· 468 468 struct acpi_cpufreq_data *data = policy->driver_data; 469 469 struct acpi_processor_performance *perf; 470 470 struct cpufreq_frequency_table *entry; 471 - unsigned int next_perf_state, next_freq, freq; 471 + unsigned int next_perf_state, next_freq, index; 472 472 473 473 /* 474 474 * Find the closest frequency above target_freq. 475 - * 476 - * The table is sorted in the reverse order with respect to the 477 - * frequency and all of the entries are valid (see the initialization). 478 475 */ 479 - entry = policy->freq_table; 480 - do { 481 - entry++; 482 - freq = entry->frequency; 483 - } while (freq >= target_freq && freq != CPUFREQ_TABLE_END); 484 - entry--; 476 + if (policy->cached_target_freq == target_freq) 477 + index = policy->cached_resolved_idx; 478 + else 479 + index = cpufreq_table_find_index_dl(policy, target_freq); 480 + 481 + entry = &policy->freq_table[index]; 485 482 next_freq = entry->frequency; 486 483 next_perf_state = entry->driver_data; 487 484
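The acpi-cpufreq fast-switch path above stops walking the descending frequency table on every call: when the requested target matches the one cached by `cpufreq_driver_resolve_freq()`, the cached index is reused. A userspace model of that caching (the table values and helper names are illustrative):

```c
#include <assert.h>

/* Descending frequency table, as acpi-cpufreq builds it. */
static const unsigned int table_khz[] = { 3000000, 2400000, 1800000, 1200000 };
#define NFREQ 4

/* Lowest table frequency >= target (RELATION_L on a descending table);
 * clamps to the highest entry if target exceeds every frequency. */
static unsigned int find_index_dl_model(unsigned int target)
{
    int i;
    for (i = NFREQ - 1; i >= 0; i--)
        if (table_khz[i] >= target)
            return (unsigned int)i;
    return 0;
}

static unsigned int cached_target, cached_idx = (unsigned int)-1;

static unsigned int fast_switch_model(unsigned int target)
{
    unsigned int idx;

    if (cached_idx != (unsigned int)-1 && cached_target == target)
        idx = cached_idx;                 /* cache hit: no table walk */
    else
        idx = find_index_dl_model(target);

    cached_target = target;
    cached_idx = idx;
    return table_khz[idx];
}
```

Under schedutil, which calls fast switch from the scheduler path with frequently repeated targets, the cache hit avoids the per-call table scan the old `do { entry++; } while (...)` loop performed.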
+4 -6
drivers/cpufreq/amd_freq_sensitivity.c
··· 48 48 struct policy_dbs_info *policy_dbs = policy->governor_data; 49 49 struct dbs_data *od_data = policy_dbs->dbs_data; 50 50 struct od_dbs_tuners *od_tuners = od_data->tuners; 51 - struct od_policy_dbs_info *od_info = to_dbs_info(policy_dbs); 52 51 53 - if (!od_info->freq_table) 52 + if (!policy->freq_table) 54 53 return freq_next; 55 54 56 55 rdmsr_on_cpu(policy->cpu, MSR_AMD64_FREQ_SENSITIVITY_ACTUAL, ··· 91 92 else { 92 93 unsigned int index; 93 94 94 - cpufreq_frequency_table_target(policy, 95 - od_info->freq_table, policy->cur - 1, 96 - CPUFREQ_RELATION_H, &index); 97 - freq_next = od_info->freq_table[index].frequency; 95 + index = cpufreq_table_find_index_h(policy, 96 + policy->cur - 1); 97 + freq_next = policy->freq_table[index].frequency; 98 98 } 99 99 100 100 data->freq_prev = freq_next;
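The amd_freq_sensitivity change replaces the old `cpufreq_frequency_table_target(..., policy->cur - 1, CPUFREQ_RELATION_H, &index)` call with `cpufreq_table_find_index_h(policy, policy->cur - 1)`, i.e. "highest table frequency at most one kHz below the current one". A sketch of that next-lower-step lookup with a made-up ascending table:

```c
#include <assert.h>

static const unsigned int freqs_khz[] = { 800000, 1400000, 2100000, 2800000 };
#define NSTEPS 4

/* Highest table frequency <= target (RELATION_H on an ascending table). */
static unsigned int find_index_h_model(unsigned int target)
{
    unsigned int i, best = 0;
    for (i = 0; i < NSTEPS; i++)
        if (freqs_khz[i] <= target)
            best = i;          /* keep the last (highest) match */
    return best;
}

/* The driver's "go one P-state down" idiom: cur - 1 excludes cur itself. */
static unsigned int one_step_down(unsigned int cur)
{
    return freqs_khz[find_index_h_model(cur - 1)];
}
```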
+135 -86
drivers/cpufreq/cpufreq.c
··· 74 74 } 75 75 76 76 /* internal prototypes */ 77 - static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event); 78 77 static unsigned int __cpufreq_get(struct cpufreq_policy *policy); 78 + static int cpufreq_init_governor(struct cpufreq_policy *policy); 79 + static void cpufreq_exit_governor(struct cpufreq_policy *policy); 79 80 static int cpufreq_start_governor(struct cpufreq_policy *policy); 80 - 81 - static inline void cpufreq_exit_governor(struct cpufreq_policy *policy) 82 - { 83 - (void)cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT); 84 - } 85 - 86 - static inline void cpufreq_stop_governor(struct cpufreq_policy *policy) 87 - { 88 - (void)cpufreq_governor(policy, CPUFREQ_GOV_STOP); 89 - } 81 + static void cpufreq_stop_governor(struct cpufreq_policy *policy); 82 + static void cpufreq_governor_limits(struct cpufreq_policy *policy); 90 83 91 84 /** 92 85 * Two notifier lists: the "policy" list is involved in the ··· 125 132 return cpufreq_global_kobject; 126 133 } 127 134 EXPORT_SYMBOL_GPL(get_governor_parent_kobj); 128 - 129 - struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu) 130 - { 131 - struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); 132 - 133 - return policy && !policy_is_inactive(policy) ? 
134 - policy->freq_table : NULL; 135 - } 136 - EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table); 137 135 138 136 static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall) 139 137 { ··· 338 354 pr_debug("FREQ: %lu - CPU: %lu\n", 339 355 (unsigned long)freqs->new, (unsigned long)freqs->cpu); 340 356 trace_cpu_frequency(freqs->new, freqs->cpu); 357 + cpufreq_stats_record_transition(policy, freqs->new); 341 358 srcu_notifier_call_chain(&cpufreq_transition_notifier_list, 342 359 CPUFREQ_POSTCHANGE, freqs); 343 360 if (likely(policy) && likely(policy->cpu == freqs->cpu)) ··· 491 506 mutex_unlock(&cpufreq_fast_switch_lock); 492 507 } 493 508 EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch); 509 + 510 + /** 511 + * cpufreq_driver_resolve_freq - Map a target frequency to a driver-supported 512 + * one. 513 + * @target_freq: target frequency to resolve. 514 + * 515 + * The target to driver frequency mapping is cached in the policy. 516 + * 517 + * Return: Lowest driver-supported frequency greater than or equal to the 518 + * given target_freq, subject to policy (min/max) and driver limitations. 
519 + */ 520 + unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy, 521 + unsigned int target_freq) 522 + { 523 + target_freq = clamp_val(target_freq, policy->min, policy->max); 524 + policy->cached_target_freq = target_freq; 525 + 526 + if (cpufreq_driver->target_index) { 527 + int idx; 528 + 529 + idx = cpufreq_frequency_table_target(policy, target_freq, 530 + CPUFREQ_RELATION_L); 531 + policy->cached_resolved_idx = idx; 532 + return policy->freq_table[idx].frequency; 533 + } 534 + 535 + if (cpufreq_driver->resolve_freq) 536 + return cpufreq_driver->resolve_freq(policy, target_freq); 537 + 538 + return target_freq; 539 + } 540 + EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq); 494 541 495 542 /********************************************************************* 496 543 * SYSFS INTERFACE * ··· 1132 1115 CPUFREQ_REMOVE_POLICY, policy); 1133 1116 1134 1117 down_write(&policy->rwsem); 1118 + cpufreq_stats_free_table(policy); 1135 1119 cpufreq_remove_dev_symlink(policy); 1136 1120 kobj = &policy->kobj; 1137 1121 cmp = &policy->kobj_unregister; ··· 1283 1265 } 1284 1266 } 1285 1267 1286 - blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1287 - CPUFREQ_START, policy); 1288 - 1289 1268 if (new_policy) { 1290 1269 ret = cpufreq_add_dev_interface(policy); 1291 1270 if (ret) 1292 1271 goto out_exit_policy; 1272 + 1273 + cpufreq_stats_create_table(policy); 1293 1274 blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1294 1275 CPUFREQ_CREATE_POLICY, policy); 1295 1276 ··· 1296 1279 list_add(&policy->policy_list, &cpufreq_policy_list); 1297 1280 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 1298 1281 } 1282 + 1283 + blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1284 + CPUFREQ_START, policy); 1299 1285 1300 1286 ret = cpufreq_init_policy(policy); 1301 1287 if (ret) { ··· 1575 1555 static unsigned int cpufreq_update_current_freq(struct cpufreq_policy *policy) 1576 1556 { 1577 1557 unsigned int new_freq; 1578 - 1579 - 
if (cpufreq_suspended) 1580 - return 0; 1581 1558 1582 1559 new_freq = cpufreq_driver->get(policy->cpu); 1583 1560 if (!new_freq) ··· 1881 1864 return ret; 1882 1865 } 1883 1866 1884 - static int __target_index(struct cpufreq_policy *policy, 1885 - struct cpufreq_frequency_table *freq_table, int index) 1867 + static int __target_index(struct cpufreq_policy *policy, int index) 1886 1868 { 1887 1869 struct cpufreq_freqs freqs = {.old = policy->cur, .flags = 0}; 1888 1870 unsigned int intermediate_freq = 0; 1871 + unsigned int newfreq = policy->freq_table[index].frequency; 1889 1872 int retval = -EINVAL; 1890 1873 bool notify; 1874 + 1875 + if (newfreq == policy->cur) 1876 + return 0; 1891 1877 1892 1878 notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION); 1893 1879 if (notify) { ··· 1906 1886 freqs.old = freqs.new; 1907 1887 } 1908 1888 1909 - freqs.new = freq_table[index].frequency; 1889 + freqs.new = newfreq; 1910 1890 pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n", 1911 1891 __func__, policy->cpu, freqs.old, freqs.new); 1912 1892 ··· 1943 1923 unsigned int relation) 1944 1924 { 1945 1925 unsigned int old_target_freq = target_freq; 1946 - struct cpufreq_frequency_table *freq_table; 1947 - int index, retval; 1926 + int index; 1948 1927 1949 1928 if (cpufreq_disabled()) 1950 1929 return -ENODEV; 1951 1930 1952 1931 /* Make sure that target_freq is within supported range */ 1953 - if (target_freq > policy->max) 1954 - target_freq = policy->max; 1955 - if (target_freq < policy->min) 1956 - target_freq = policy->min; 1932 + target_freq = clamp_val(target_freq, policy->min, policy->max); 1957 1933 1958 1934 pr_debug("target for CPU %u: %u kHz, relation %u, requested %u kHz\n", 1959 1935 policy->cpu, target_freq, relation, old_target_freq); ··· 1972 1956 if (!cpufreq_driver->target_index) 1973 1957 return -EINVAL; 1974 1958 1975 - freq_table = cpufreq_frequency_get_table(policy->cpu); 1976 - if (unlikely(!freq_table)) { 1977 - pr_err("%s: Unable to find 
freq_table\n", __func__); 1978 - return -EINVAL; 1979 - } 1959 + index = cpufreq_frequency_table_target(policy, target_freq, relation); 1980 1960 1981 - retval = cpufreq_frequency_table_target(policy, freq_table, target_freq, 1982 - relation, &index); 1983 - if (unlikely(retval)) { 1984 - pr_err("%s: Unable to find matching freq\n", __func__); 1985 - return retval; 1986 - } 1987 - 1988 - if (freq_table[index].frequency == policy->cur) 1989 - return 0; 1990 - 1991 - return __target_index(policy, freq_table, index); 1961 + return __target_index(policy, index); 1992 1962 } 1993 1963 EXPORT_SYMBOL_GPL(__cpufreq_driver_target); 1994 1964 ··· 1999 1997 return NULL; 2000 1998 } 2001 1999 2002 - static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event) 2000 + static int cpufreq_init_governor(struct cpufreq_policy *policy) 2003 2001 { 2004 2002 int ret; 2005 2003 ··· 2027 2025 } 2028 2026 } 2029 2027 2030 - if (event == CPUFREQ_GOV_POLICY_INIT) 2031 - if (!try_module_get(policy->governor->owner)) 2032 - return -EINVAL; 2028 + if (!try_module_get(policy->governor->owner)) 2029 + return -EINVAL; 2033 2030 2034 - pr_debug("%s: for CPU %u, event %u\n", __func__, policy->cpu, event); 2031 + pr_debug("%s: for CPU %u\n", __func__, policy->cpu); 2035 2032 2036 - ret = policy->governor->governor(policy, event); 2037 - 2038 - if (event == CPUFREQ_GOV_POLICY_INIT) { 2039 - if (ret) 2033 + if (policy->governor->init) { 2034 + ret = policy->governor->init(policy); 2035 + if (ret) { 2040 2036 module_put(policy->governor->owner); 2041 - else 2042 - policy->governor->initialized++; 2043 - } else if (event == CPUFREQ_GOV_POLICY_EXIT) { 2044 - policy->governor->initialized--; 2045 - module_put(policy->governor->owner); 2037 + return ret; 2038 + } 2046 2039 } 2047 2040 2048 - return ret; 2041 + return 0; 2042 + } 2043 + 2044 + static void cpufreq_exit_governor(struct cpufreq_policy *policy) 2045 + { 2046 + if (cpufreq_suspended || !policy->governor) 2047 + return; 2048 + 
2049 + pr_debug("%s: for CPU %u\n", __func__, policy->cpu); 2050 + 2051 + if (policy->governor->exit) 2052 + policy->governor->exit(policy); 2053 + 2054 + module_put(policy->governor->owner); 2049 2055 } 2050 2056 2051 2057 static int cpufreq_start_governor(struct cpufreq_policy *policy) 2052 2058 { 2053 2059 int ret; 2054 2060 2061 + if (cpufreq_suspended) 2062 + return 0; 2063 + 2064 + if (!policy->governor) 2065 + return -EINVAL; 2066 + 2067 + pr_debug("%s: for CPU %u\n", __func__, policy->cpu); 2068 + 2055 2069 if (cpufreq_driver->get && !cpufreq_driver->setpolicy) 2056 2070 cpufreq_update_current_freq(policy); 2057 2071 2058 - ret = cpufreq_governor(policy, CPUFREQ_GOV_START); 2059 - return ret ? ret : cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); 2072 + if (policy->governor->start) { 2073 + ret = policy->governor->start(policy); 2074 + if (ret) 2075 + return ret; 2076 + } 2077 + 2078 + if (policy->governor->limits) 2079 + policy->governor->limits(policy); 2080 + 2081 + return 0; 2082 + } 2083 + 2084 + static void cpufreq_stop_governor(struct cpufreq_policy *policy) 2085 + { 2086 + if (cpufreq_suspended || !policy->governor) 2087 + return; 2088 + 2089 + pr_debug("%s: for CPU %u\n", __func__, policy->cpu); 2090 + 2091 + if (policy->governor->stop) 2092 + policy->governor->stop(policy); 2093 + } 2094 + 2095 + static void cpufreq_governor_limits(struct cpufreq_policy *policy) 2096 + { 2097 + if (cpufreq_suspended || !policy->governor) 2098 + return; 2099 + 2100 + pr_debug("%s: for CPU %u\n", __func__, policy->cpu); 2101 + 2102 + if (policy->governor->limits) 2103 + policy->governor->limits(policy); 2060 2104 } 2061 2105 2062 2106 int cpufreq_register_governor(struct cpufreq_governor *governor) ··· 2117 2069 2118 2070 mutex_lock(&cpufreq_governor_mutex); 2119 2071 2120 - governor->initialized = 0; 2121 2072 err = -EBUSY; 2122 2073 if (!find_governor(governor->name)) { 2123 2074 err = 0; ··· 2231 2184 policy->min = new_policy->min; 2232 2185 policy->max = 
new_policy->max; 2233 2186 2187 + policy->cached_target_freq = UINT_MAX; 2188 + 2234 2189 pr_debug("new min and max freqs are %u - %u kHz\n", 2235 2190 policy->min, policy->max); 2236 2191 ··· 2244 2195 2245 2196 if (new_policy->governor == policy->governor) { 2246 2197 pr_debug("cpufreq: governor limits update\n"); 2247 - return cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); 2198 + cpufreq_governor_limits(policy); 2199 + return 0; 2248 2200 } 2249 2201 2250 2202 pr_debug("governor switch\n"); ··· 2260 2210 2261 2211 /* start new governor */ 2262 2212 policy->governor = new_policy->governor; 2263 - ret = cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT); 2213 + ret = cpufreq_init_governor(policy); 2264 2214 if (!ret) { 2265 2215 ret = cpufreq_start_governor(policy); 2266 2216 if (!ret) { ··· 2274 2224 pr_debug("starting governor %s failed\n", policy->governor->name); 2275 2225 if (old_gov) { 2276 2226 policy->governor = old_gov; 2277 - if (cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT)) 2227 + if (cpufreq_init_governor(policy)) 2278 2228 policy->governor = NULL; 2279 2229 else 2280 2230 cpufreq_start_governor(policy); ··· 2359 2309 *********************************************************************/ 2360 2310 static int cpufreq_boost_set_sw(int state) 2361 2311 { 2362 - struct cpufreq_frequency_table *freq_table; 2363 2312 struct cpufreq_policy *policy; 2364 2313 int ret = -EINVAL; 2365 2314 2366 2315 for_each_active_policy(policy) { 2367 - freq_table = cpufreq_frequency_get_table(policy->cpu); 2368 - if (freq_table) { 2369 - ret = cpufreq_frequency_table_cpuinfo(policy, 2370 - freq_table); 2371 - if (ret) { 2372 - pr_err("%s: Policy frequency update failed\n", 2373 - __func__); 2374 - break; 2375 - } 2316 + if (!policy->freq_table) 2317 + continue; 2376 2318 2377 - down_write(&policy->rwsem); 2378 - policy->user_policy.max = policy->max; 2379 - cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); 2380 - up_write(&policy->rwsem); 2319 + ret = 
cpufreq_frequency_table_cpuinfo(policy, 2320 + policy->freq_table); 2321 + if (ret) { 2322 + pr_err("%s: Policy frequency update failed\n", 2323 + __func__); 2324 + break; 2381 2325 } 2326 + 2327 + down_write(&policy->rwsem); 2328 + policy->user_policy.max = policy->max; 2329 + cpufreq_governor_limits(policy); 2330 + up_write(&policy->rwsem); 2382 2331 } 2383 2332 2384 2333 return ret;
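The new `cpufreq_driver_resolve_freq()` added to cpufreq.c above clamps the request into the policy bounds, records it as `cached_target_freq`, and maps it to a driver-supported frequency. A minimal model of that contract (the mapping step is a trivial stand-in for the real table lookup or driver callback):

```c
#include <assert.h>

struct policy_model {
    unsigned int min, max;
    unsigned int cached_target_freq;
};

/* Equivalent of the kernel's clamp_val() for this model. */
static unsigned int clamp_val_model(unsigned int v, unsigned int lo,
                                    unsigned int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

static unsigned int resolve_freq_model(struct policy_model *p,
                                       unsigned int target)
{
    target = clamp_val_model(target, p->min, p->max);
    p->cached_target_freq = target;   /* later fast-switch calls hit this */
    return target;  /* a real driver rounds this to a freq_table entry */
}
```

`__cpufreq_driver_target()` in the same patch now uses the same `clamp_val()` idiom in place of the two separate min/max comparisons it had before.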
+17 -71
drivers/cpufreq/cpufreq_conservative.c
··· 17 17 struct cs_policy_dbs_info { 18 18 struct policy_dbs_info policy_dbs; 19 19 unsigned int down_skip; 20 - unsigned int requested_freq; 21 20 }; 22 21 23 22 static inline struct cs_policy_dbs_info *to_dbs_info(struct policy_dbs_info *policy_dbs) ··· 74 75 75 76 /* Check for frequency increase */ 76 77 if (load > dbs_data->up_threshold) { 78 + unsigned int requested_freq = policy->cur; 79 + 77 80 dbs_info->down_skip = 0; 78 81 79 82 /* if we are already at full speed then break out early */ 80 - if (dbs_info->requested_freq == policy->max) 83 + if (requested_freq == policy->max) 81 84 goto out; 82 85 83 - dbs_info->requested_freq += get_freq_target(cs_tuners, policy); 86 + requested_freq += get_freq_target(cs_tuners, policy); 84 87 85 - if (dbs_info->requested_freq > policy->max) 86 - dbs_info->requested_freq = policy->max; 87 - 88 - __cpufreq_driver_target(policy, dbs_info->requested_freq, 89 - CPUFREQ_RELATION_H); 88 + __cpufreq_driver_target(policy, requested_freq, CPUFREQ_RELATION_H); 90 89 goto out; 91 90 } 92 91 ··· 95 98 96 99 /* Check for frequency decrease */ 97 100 if (load < cs_tuners->down_threshold) { 98 - unsigned int freq_target; 101 + unsigned int freq_target, requested_freq = policy->cur; 99 102 /* 100 103 * if we cannot reduce the frequency anymore, break out early 101 104 */ 102 - if (policy->cur == policy->min) 105 + if (requested_freq == policy->min) 103 106 goto out; 104 107 105 108 freq_target = get_freq_target(cs_tuners, policy); 106 - if (dbs_info->requested_freq > freq_target) 107 - dbs_info->requested_freq -= freq_target; 109 + if (requested_freq > freq_target) 110 + requested_freq -= freq_target; 108 111 else 109 - dbs_info->requested_freq = policy->min; 112 + requested_freq = policy->min; 110 113 111 - __cpufreq_driver_target(policy, dbs_info->requested_freq, 112 - CPUFREQ_RELATION_L); 114 + __cpufreq_driver_target(policy, requested_freq, CPUFREQ_RELATION_L); 113 115 } 114 116 115 117 out: 116 118 return dbs_data->sampling_rate; 
117 119 } 118 120 119 - static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val, 120 - void *data); 121 - 122 - static struct notifier_block cs_cpufreq_notifier_block = { 123 - .notifier_call = dbs_cpufreq_notifier, 124 - }; 125 - 126 121 /************************** sysfs interface ************************/ 127 - static struct dbs_governor cs_dbs_gov; 128 122 129 123 static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set, 130 124 const char *buf, size_t count) ··· 256 268 kfree(to_dbs_info(policy_dbs)); 257 269 } 258 270 259 - static int cs_init(struct dbs_data *dbs_data, bool notify) 271 + static int cs_init(struct dbs_data *dbs_data) 260 272 { 261 273 struct cs_dbs_tuners *tuners; 262 274 263 275 tuners = kzalloc(sizeof(*tuners), GFP_KERNEL); 264 - if (!tuners) { 265 - pr_err("%s: kzalloc failed\n", __func__); 276 + if (!tuners) 266 277 return -ENOMEM; 267 - } 268 278 269 279 tuners->down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD; 270 280 tuners->freq_step = DEF_FREQUENCY_STEP; ··· 274 288 dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO * 275 289 jiffies_to_usecs(10); 276 290 277 - if (notify) 278 - cpufreq_register_notifier(&cs_cpufreq_notifier_block, 279 - CPUFREQ_TRANSITION_NOTIFIER); 280 - 281 291 return 0; 282 292 } 283 293 284 - static void cs_exit(struct dbs_data *dbs_data, bool notify) 294 + static void cs_exit(struct dbs_data *dbs_data) 285 295 { 286 - if (notify) 287 - cpufreq_unregister_notifier(&cs_cpufreq_notifier_block, 288 - CPUFREQ_TRANSITION_NOTIFIER); 289 - 290 296 kfree(dbs_data->tuners); 291 297 } 292 298 ··· 287 309 struct cs_policy_dbs_info *dbs_info = to_dbs_info(policy->governor_data); 288 310 289 311 dbs_info->down_skip = 0; 290 - dbs_info->requested_freq = policy->cur; 291 312 } 292 313 293 - static struct dbs_governor cs_dbs_gov = { 294 - .gov = { 295 - .name = "conservative", 296 - .governor = cpufreq_governor_dbs, 297 - .max_transition_latency = TRANSITION_LATENCY_LIMIT, 298 - .owner = 
THIS_MODULE, 299 - }, 314 + static struct dbs_governor cs_governor = { 315 + .gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("conservative"), 300 316 .kobj_type = { .default_attrs = cs_attributes }, 301 317 .gov_dbs_timer = cs_dbs_timer, 302 318 .alloc = cs_alloc, ··· 300 328 .start = cs_start, 301 329 }; 302 330 303 - #define CPU_FREQ_GOV_CONSERVATIVE (&cs_dbs_gov.gov) 304 - 305 - static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val, 306 - void *data) 307 - { 308 - struct cpufreq_freqs *freq = data; 309 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(freq->cpu); 310 - struct cs_policy_dbs_info *dbs_info; 311 - 312 - if (!policy) 313 - return 0; 314 - 315 - /* policy isn't governed by conservative governor */ 316 - if (policy->governor != CPU_FREQ_GOV_CONSERVATIVE) 317 - return 0; 318 - 319 - dbs_info = to_dbs_info(policy->governor_data); 320 - /* 321 - * we only care if our internally tracked freq moves outside the 'valid' 322 - * ranges of frequency available to us otherwise we do not change it 323 - */ 324 - if (dbs_info->requested_freq > policy->max 325 - || dbs_info->requested_freq < policy->min) 326 - dbs_info->requested_freq = freq->new; 327 - 328 - return 0; 329 - } 331 + #define CPU_FREQ_GOV_CONSERVATIVE (&cs_governor.gov) 330 332 331 333 static int __init cpufreq_gov_dbs_init(void) 332 334 {
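The conservative-governor hunk drops the tracked `requested_freq` field and the transition notifier that kept it valid, deriving the request from `policy->cur` on every sample instead. A simplified model of the resulting step-up/step-down decision, using the driver's default thresholds and 5% step (the min-clamp here is slightly simplified relative to the driver, which relies on `CPUFREQ_RELATION_L` to clamp):

```c
#include <assert.h>

#define UP_THRESHOLD    80   /* DEF_FREQUENCY_UP_THRESHOLD */
#define DOWN_THRESHOLD  20   /* DEF_FREQUENCY_DOWN_THRESHOLD */

struct pol {
    unsigned int cur, min, max;
};

static unsigned int freq_step(const struct pol *p)
{
    unsigned int step = p->max * 5 / 100;   /* DEF_FREQUENCY_STEP = 5% */
    return step ? step : 5;
}

static unsigned int cs_next_freq(const struct pol *p, unsigned int load)
{
    unsigned int req = p->cur;   /* no tracked state to keep in sync */

    if (load > UP_THRESHOLD) {
        req += freq_step(p);
        return req > p->max ? p->max : req;
    }
    if (load < DOWN_THRESHOLD) {
        unsigned int step = freq_step(p);
        return req > p->min + step ? req - step : p->min;
    }
    return req;    /* inside the dead band: hold the current frequency */
}
```

Starting from `policy->cur` each time is what lets the notifier go away: there is no stale internal frequency to fall outside the policy's min/max after a limits change.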
+23 -52
drivers/cpufreq/cpufreq_governor.c
··· 336 336 synchronize_sched(); 337 337 } 338 338 339 - static void gov_cancel_work(struct cpufreq_policy *policy) 340 - { 341 - struct policy_dbs_info *policy_dbs = policy->governor_data; 342 - 343 - gov_clear_update_util(policy_dbs->policy); 344 - irq_work_sync(&policy_dbs->irq_work); 345 - cancel_work_sync(&policy_dbs->work); 346 - atomic_set(&policy_dbs->work_count, 0); 347 - policy_dbs->work_in_progress = false; 348 - } 349 - 350 339 static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy, 351 340 struct dbs_governor *gov) 352 341 { ··· 378 389 gov->free(policy_dbs); 379 390 } 380 391 381 - static int cpufreq_governor_init(struct cpufreq_policy *policy) 392 + int cpufreq_dbs_governor_init(struct cpufreq_policy *policy) 382 393 { 383 394 struct dbs_governor *gov = dbs_governor_of(policy); 384 395 struct dbs_data *dbs_data; ··· 418 429 419 430 gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list); 420 431 421 - ret = gov->init(dbs_data, !policy->governor->initialized); 432 + ret = gov->init(dbs_data); 422 433 if (ret) 423 434 goto free_policy_dbs_info; 424 435 ··· 447 458 goto out; 448 459 449 460 /* Failure, so roll back. 
*/ 450 - pr_err("cpufreq: Governor initialization failed (dbs_data kobject init error %d)\n", ret); 461 + pr_err("initialization failed (dbs_data kobject init error %d)\n", ret); 451 462 452 463 policy->governor_data = NULL; 453 464 454 465 if (!have_governor_per_policy()) 455 466 gov->gdbs_data = NULL; 456 - gov->exit(dbs_data, !policy->governor->initialized); 467 + gov->exit(dbs_data); 457 468 kfree(dbs_data); 458 469 459 470 free_policy_dbs_info: ··· 463 474 mutex_unlock(&gov_dbs_data_mutex); 464 475 return ret; 465 476 } 477 + EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_init); 466 478 467 - static int cpufreq_governor_exit(struct cpufreq_policy *policy) 479 + void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy) 468 480 { 469 481 struct dbs_governor *gov = dbs_governor_of(policy); 470 482 struct policy_dbs_info *policy_dbs = policy->governor_data; ··· 483 493 if (!have_governor_per_policy()) 484 494 gov->gdbs_data = NULL; 485 495 486 - gov->exit(dbs_data, policy->governor->initialized == 1); 496 + gov->exit(dbs_data); 487 497 kfree(dbs_data); 488 498 } 489 499 490 500 free_policy_dbs_info(policy_dbs, gov); 491 501 492 502 mutex_unlock(&gov_dbs_data_mutex); 493 - return 0; 494 503 } 504 + EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_exit); 495 505 496 - static int cpufreq_governor_start(struct cpufreq_policy *policy) 506 + int cpufreq_dbs_governor_start(struct cpufreq_policy *policy) 497 507 { 498 508 struct dbs_governor *gov = dbs_governor_of(policy); 499 509 struct policy_dbs_info *policy_dbs = policy->governor_data; ··· 529 539 gov_set_update_util(policy_dbs, sampling_rate); 530 540 return 0; 531 541 } 542 + EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_start); 532 543 533 - static int cpufreq_governor_stop(struct cpufreq_policy *policy) 544 + void cpufreq_dbs_governor_stop(struct cpufreq_policy *policy) 534 545 { 535 - gov_cancel_work(policy); 536 - return 0; 537 - } 546 + struct policy_dbs_info *policy_dbs = policy->governor_data; 538 547 539 - static int 
cpufreq_governor_limits(struct cpufreq_policy *policy) 548 + gov_clear_update_util(policy_dbs->policy); 549 + irq_work_sync(&policy_dbs->irq_work); 550 + cancel_work_sync(&policy_dbs->work); 551 + atomic_set(&policy_dbs->work_count, 0); 552 + policy_dbs->work_in_progress = false; 553 + } 554 + EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_stop); 555 + 556 + void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy) 540 557 { 541 558 struct policy_dbs_info *policy_dbs = policy->governor_data; 542 559 543 560 mutex_lock(&policy_dbs->timer_mutex); 544 - 545 - if (policy->max < policy->cur) 546 - __cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H); 547 - else if (policy->min > policy->cur) 548 - __cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L); 549 - 561 + cpufreq_policy_apply_limits(policy); 550 562 gov_update_sample_delay(policy_dbs, 0); 551 563 552 564 mutex_unlock(&policy_dbs->timer_mutex); 553 - 554 - return 0; 555 565 } 556 - 557 - int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event) 558 - { 559 - if (event == CPUFREQ_GOV_POLICY_INIT) { 560 - return cpufreq_governor_init(policy); 561 - } else if (policy->governor_data) { 562 - switch (event) { 563 - case CPUFREQ_GOV_POLICY_EXIT: 564 - return cpufreq_governor_exit(policy); 565 - case CPUFREQ_GOV_START: 566 - return cpufreq_governor_start(policy); 567 - case CPUFREQ_GOV_STOP: 568 - return cpufreq_governor_stop(policy); 569 - case CPUFREQ_GOV_LIMITS: 570 - return cpufreq_governor_limits(policy); 571 - } 572 - } 573 - return -EINVAL; 574 - } 575 - EXPORT_SYMBOL_GPL(cpufreq_governor_dbs); 566 + EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_limits);
+21 -3
drivers/cpufreq/cpufreq_governor.h
··· 138 138 unsigned int (*gov_dbs_timer)(struct cpufreq_policy *policy); 139 139 struct policy_dbs_info *(*alloc)(void); 140 140 void (*free)(struct policy_dbs_info *policy_dbs); 141 - int (*init)(struct dbs_data *dbs_data, bool notify); 142 - void (*exit)(struct dbs_data *dbs_data, bool notify); 141 + int (*init)(struct dbs_data *dbs_data); 142 + void (*exit)(struct dbs_data *dbs_data); 143 143 void (*start)(struct cpufreq_policy *policy); 144 144 }; 145 145 ··· 148 148 return container_of(policy->governor, struct dbs_governor, gov); 149 149 } 150 150 151 + /* Governor callback routines */ 152 + int cpufreq_dbs_governor_init(struct cpufreq_policy *policy); 153 + void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy); 154 + int cpufreq_dbs_governor_start(struct cpufreq_policy *policy); 155 + void cpufreq_dbs_governor_stop(struct cpufreq_policy *policy); 156 + void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy); 157 + 158 + #define CPUFREQ_DBS_GOVERNOR_INITIALIZER(_name_) \ 159 + { \ 160 + .name = _name_, \ 161 + .max_transition_latency = TRANSITION_LATENCY_LIMIT, \ 162 + .owner = THIS_MODULE, \ 163 + .init = cpufreq_dbs_governor_init, \ 164 + .exit = cpufreq_dbs_governor_exit, \ 165 + .start = cpufreq_dbs_governor_start, \ 166 + .stop = cpufreq_dbs_governor_stop, \ 167 + .limits = cpufreq_dbs_governor_limits, \ 168 + } 169 + 151 170 /* Governor specific operations */ 152 171 struct od_ops { 153 172 unsigned int (*powersave_bias_target)(struct cpufreq_policy *policy, ··· 174 155 }; 175 156 176 157 unsigned int dbs_update(struct cpufreq_policy *policy); 177 - int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event); 178 158 void od_register_powersave_bias_handler(unsigned int (*f) 179 159 (struct cpufreq_policy *, unsigned int, unsigned int), 180 160 unsigned int powersave_bias);
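The governor.h hunk above is the heart of the interface rework: the single `->governor(policy, event)` callback with its `CPUFREQ_GOV_*` event switch becomes per-operation function pointers, wired up once by the `CPUFREQ_DBS_GOVERNOR_INITIALIZER` macro. A userspace model of the pattern (all names below are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct gov_model {
    const char *name;
    int  (*init)(void);
    void (*start)(void);
    void (*limits)(void);
};

static int calls;
static int  dbs_init_model(void)   { calls++; return 0; }
static void dbs_start_model(void)  { calls++; }
static void dbs_limits_model(void) { calls++; }

/* Mirrors CPUFREQ_DBS_GOVERNOR_INITIALIZER: one designated-initializer
 * macro shares the common dbs callbacks across governors. */
#define DBS_GOVERNOR_INITIALIZER(_name_)  \
    {                                     \
        .name   = _name_,                 \
        .init   = dbs_init_model,         \
        .start  = dbs_start_model,        \
        .limits = dbs_limits_model,       \
    }

static struct gov_model cs_gov = DBS_GOVERNOR_INITIALIZER("conservative");

/* The core now invokes only the callbacks a governor actually provides. */
static int start_governor(struct gov_model *g)
{
    int ret = g->init ? g->init() : 0;

    if (ret)
        return ret;
    if (g->start)
        g->start();
    if (g->limits)
        g->limits();
    return 0;
}
```

Both conservative and ondemand switch to this macro in the hunks below, which is why their hand-rolled `.gov = { ... }` blocks disappear.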
+13 -25
drivers/cpufreq/cpufreq_ondemand.c
··· 65 65 { 66 66 unsigned int freq_req, freq_reduc, freq_avg; 67 67 unsigned int freq_hi, freq_lo; 68 - unsigned int index = 0; 68 + unsigned int index; 69 69 unsigned int delay_hi_us; 70 70 struct policy_dbs_info *policy_dbs = policy->governor_data; 71 71 struct od_policy_dbs_info *dbs_info = to_dbs_info(policy_dbs); 72 72 struct dbs_data *dbs_data = policy_dbs->dbs_data; 73 73 struct od_dbs_tuners *od_tuners = dbs_data->tuners; 74 + struct cpufreq_frequency_table *freq_table = policy->freq_table; 74 75 75 - if (!dbs_info->freq_table) { 76 + if (!freq_table) { 76 77 dbs_info->freq_lo = 0; 77 78 dbs_info->freq_lo_delay_us = 0; 78 79 return freq_next; 79 80 } 80 81 81 - cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_next, 82 - relation, &index); 83 - freq_req = dbs_info->freq_table[index].frequency; 82 + index = cpufreq_frequency_table_target(policy, freq_next, relation); 83 + freq_req = freq_table[index].frequency; 84 84 freq_reduc = freq_req * od_tuners->powersave_bias / 1000; 85 85 freq_avg = freq_req - freq_reduc; 86 86 87 87 /* Find freq bounds for freq_avg in freq_table */ 88 - index = 0; 89 - cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_avg, 90 - CPUFREQ_RELATION_H, &index); 91 - freq_lo = dbs_info->freq_table[index].frequency; 92 - index = 0; 93 - cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_avg, 94 - CPUFREQ_RELATION_L, &index); 95 - freq_hi = dbs_info->freq_table[index].frequency; 88 + index = cpufreq_table_find_index_h(policy, freq_avg); 89 + freq_lo = freq_table[index].frequency; 90 + index = cpufreq_table_find_index_l(policy, freq_avg); 91 + freq_hi = freq_table[index].frequency; 96 92 97 93 /* Find out how long we have to be in hi and lo freqs */ 98 94 if (freq_hi == freq_lo) { ··· 109 113 { 110 114 struct od_policy_dbs_info *dbs_info = to_dbs_info(policy->governor_data); 111 115 112 - dbs_info->freq_table = cpufreq_frequency_get_table(policy->cpu); 113 116 dbs_info->freq_lo = 0; 114 117 } 
115 118 ··· 356 361 kfree(to_dbs_info(policy_dbs)); 357 362 } 358 363 359 - static int od_init(struct dbs_data *dbs_data, bool notify) 364 + static int od_init(struct dbs_data *dbs_data) 360 365 { 361 366 struct od_dbs_tuners *tuners; 362 367 u64 idle_time; 363 368 int cpu; 364 369 365 370 tuners = kzalloc(sizeof(*tuners), GFP_KERNEL); 366 - if (!tuners) { 367 - pr_err("%s: kzalloc failed\n", __func__); 371 + if (!tuners) 368 372 return -ENOMEM; 369 - } 370 373 371 374 cpu = get_cpu(); 372 375 idle_time = get_cpu_idle_time_us(cpu, NULL); ··· 395 402 return 0; 396 403 } 397 404 398 - static void od_exit(struct dbs_data *dbs_data, bool notify) 405 + static void od_exit(struct dbs_data *dbs_data) 399 406 { 400 407 kfree(dbs_data->tuners); 401 408 } ··· 413 420 }; 414 421 415 422 static struct dbs_governor od_dbs_gov = { 416 - .gov = { 417 - .name = "ondemand", 418 - .governor = cpufreq_governor_dbs, 419 - .max_transition_latency = TRANSITION_LATENCY_LIMIT, 420 - .owner = THIS_MODULE, 421 - }, 423 + .gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("ondemand"), 422 424 .kobj_type = { .default_attrs = od_attributes }, 423 425 .gov_dbs_timer = od_dbs_timer, 424 426 .alloc = od_alloc,
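The ondemand hunk reroutes `od_powersave_bias_target()` through `policy->freq_table` and the new index helpers, but the bias arithmetic itself is unchanged: the tunable is in units of 0.1%, so `powersave_bias = 100` shaves 10% off the requested frequency before the table is searched for the bracketing entries.

```c
#include <assert.h>

/* freq_avg computation from od_powersave_bias_target(): the bias is
 * expressed in tenths of a percent of the requested frequency. */
static unsigned int biased_freq(unsigned int freq_req, unsigned int bias)
{
    unsigned int freq_reduc = freq_req * bias / 1000;

    return freq_req - freq_reduc;   /* "freq_avg" in the driver */
}
```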
drivers/cpufreq/cpufreq_ondemand.h (-1)
··· 13 13 14 14 struct od_policy_dbs_info { 15 15 struct policy_dbs_info policy_dbs; 16 - struct cpufreq_frequency_table *freq_table; 17 16 unsigned int freq_lo; 18 17 unsigned int freq_lo_delay_us; 19 18 unsigned int freq_hi_delay_us;
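The ondemand hunks above rework the powersave_bias path: the requested frequency is reduced by the bias (in tenths of a percent), and the governor then alternates between the table entries just below and just above that average. A minimal userspace sketch of that split, using a made-up ascending frequency table rather than the kernel's `struct cpufreq_frequency_table`:

```c
/* Userspace sketch of ondemand's powersave_bias split (not kernel code).
 * The table below is a hypothetical ascending frequency list in kHz. */
static const unsigned int table[] = { 800000, 1200000, 1600000, 2000000 };
#define NENTRIES (sizeof(table) / sizeof(table[0]))

/* Highest entry at or below freq, like cpufreq_table_find_index_h(). */
static unsigned int find_index_h(unsigned int freq)
{
	unsigned int i, best = 0;

	for (i = 0; i < NENTRIES; i++)
		if (table[i] <= freq)
			best = i;
	return best;
}

/* Lowest entry at or above freq, like cpufreq_table_find_index_l(). */
static unsigned int find_index_l(unsigned int freq)
{
	unsigned int i;

	for (i = 0; i < NENTRIES; i++)
		if (table[i] >= freq)
			return i;
	return NENTRIES - 1;
}

/* bias is in tenths of a percent (0..1000), as in the driver. */
static void powersave_bias_target(unsigned int freq_req, unsigned int bias,
				  unsigned int *freq_lo, unsigned int *freq_hi)
{
	unsigned int freq_avg = freq_req - freq_req * bias / 1000;

	*freq_lo = table[find_index_h(freq_avg)];
	*freq_hi = table[find_index_l(freq_avg)];
}
```

With a 10% bias on a 2 GHz request, the average lands at 1.8 GHz, so the governor splits time between the 1.6 GHz and 2.0 GHz entries.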
drivers/cpufreq/cpufreq_performance.c (+4 -15)
··· 16 16 #include <linux/init.h> 17 17 #include <linux/module.h> 18 18 19 - static int cpufreq_governor_performance(struct cpufreq_policy *policy, 20 - unsigned int event) 19 + static void cpufreq_gov_performance_limits(struct cpufreq_policy *policy) 21 20 { 22 - switch (event) { 23 - case CPUFREQ_GOV_START: 24 - case CPUFREQ_GOV_LIMITS: 25 - pr_debug("setting to %u kHz because of event %u\n", 26 - policy->max, event); 27 - __cpufreq_driver_target(policy, policy->max, 28 - CPUFREQ_RELATION_H); 29 - break; 30 - default: 31 - break; 32 - } 33 - return 0; 21 + pr_debug("setting to %u kHz\n", policy->max); 22 + __cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H); 34 23 } 35 24 36 25 static struct cpufreq_governor cpufreq_gov_performance = { 37 26 .name = "performance", 38 - .governor = cpufreq_governor_performance, 39 27 .owner = THIS_MODULE, 28 + .limits = cpufreq_gov_performance_limits, 40 29 }; 41 30 42 31 static int __init cpufreq_gov_performance_init(void)
drivers/cpufreq/cpufreq_powersave.c (+4 -15)
··· 16 16 #include <linux/init.h> 17 17 #include <linux/module.h> 18 18 19 - static int cpufreq_governor_powersave(struct cpufreq_policy *policy, 20 - unsigned int event) 19 + static void cpufreq_gov_powersave_limits(struct cpufreq_policy *policy) 21 20 { 22 - switch (event) { 23 - case CPUFREQ_GOV_START: 24 - case CPUFREQ_GOV_LIMITS: 25 - pr_debug("setting to %u kHz because of event %u\n", 26 - policy->min, event); 27 - __cpufreq_driver_target(policy, policy->min, 28 - CPUFREQ_RELATION_L); 29 - break; 30 - default: 31 - break; 32 - } 33 - return 0; 21 + pr_debug("setting to %u kHz\n", policy->min); 22 + __cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L); 34 23 } 35 24 36 25 static struct cpufreq_governor cpufreq_gov_powersave = { 37 26 .name = "powersave", 38 - .governor = cpufreq_governor_powersave, 27 + .limits = cpufreq_gov_powersave_limits, 39 28 .owner = THIS_MODULE, 40 29 }; 41 30
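The performance and powersave conversions above show the shape of the governor interface rework: instead of one `->governor(policy, event)` switch, each governor fills in only the callbacks it needs. A simplified stand-in (the `policy` and `governor` types here are illustrative, not the kernel's):

```c
/* Sketch of the reworked governor interface: per-operation callbacks
 * replace the old event switch. Simplified stand-in types. */
struct policy {
	unsigned int min, max, cur;	/* kHz */
};

struct governor {
	const char *name;
	void (*limits)(struct policy *p); /* invoked when min/max change */
};

static void performance_limits(struct policy *p)
{
	p->cur = p->max;	/* always pin to the upper limit */
}

static void powersave_limits(struct policy *p)
{
	p->cur = p->min;	/* always pin to the lower limit */
}

static const struct governor performance_gov = { "performance", performance_limits };
static const struct governor powersave_gov = { "powersave", powersave_limits };

/* The core simply invokes ->limits when policy limits change. */
static void apply_limits(const struct governor *gov, struct policy *p)
{
	if (gov->limits)
		gov->limits(p);
}
```

The trivial governors end up as a single `limits` callback each, which is why both diffs delete the event switch wholesale.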
drivers/cpufreq/cpufreq_stats.c (+22 -135)
··· 15 15 #include <linux/slab.h> 16 16 #include <linux/cputime.h> 17 17 18 - static spinlock_t cpufreq_stats_lock; 18 + static DEFINE_SPINLOCK(cpufreq_stats_lock); 19 19 20 20 struct cpufreq_stats { 21 21 unsigned int total_trans; ··· 52 52 ssize_t len = 0; 53 53 int i; 54 54 55 + if (policy->fast_switch_enabled) 56 + return 0; 57 + 55 58 cpufreq_stats_update(stats); 56 59 for (i = 0; i < stats->state_num; i++) { 57 60 len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i], ··· 70 67 struct cpufreq_stats *stats = policy->stats; 71 68 ssize_t len = 0; 72 69 int i, j; 70 + 71 + if (policy->fast_switch_enabled) 72 + return 0; 73 73 74 74 len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 75 75 len += snprintf(buf + len, PAGE_SIZE - len, " : "); ··· 136 130 return -1; 137 131 } 138 132 139 - static void __cpufreq_stats_free_table(struct cpufreq_policy *policy) 133 + void cpufreq_stats_free_table(struct cpufreq_policy *policy) 140 134 { 141 135 struct cpufreq_stats *stats = policy->stats; 142 136 ··· 152 146 policy->stats = NULL; 153 147 } 154 148 155 - static void cpufreq_stats_free_table(unsigned int cpu) 156 - { 157 - struct cpufreq_policy *policy; 158 - 159 - policy = cpufreq_cpu_get(cpu); 160 - if (!policy) 161 - return; 162 - 163 - __cpufreq_stats_free_table(policy); 164 - 165 - cpufreq_cpu_put(policy); 166 - } 167 - 168 - static int __cpufreq_stats_create_table(struct cpufreq_policy *policy) 149 + void cpufreq_stats_create_table(struct cpufreq_policy *policy) 169 150 { 170 151 unsigned int i = 0, count = 0, ret = -ENOMEM; 171 152 struct cpufreq_stats *stats; 172 153 unsigned int alloc_size; 173 - unsigned int cpu = policy->cpu; 174 154 struct cpufreq_frequency_table *pos, *table; 175 155 176 156 /* We need cpufreq table for creating stats table */ 177 - table = cpufreq_frequency_get_table(cpu); 157 + table = policy->freq_table; 178 158 if (unlikely(!table)) 179 - return 0; 159 + return; 180 160 181 161 /* stats already initialized */ 182 162 if 
(policy->stats) 183 - return -EEXIST; 163 + return; 184 164 185 165 stats = kzalloc(sizeof(*stats), GFP_KERNEL); 186 166 if (!stats) 187 - return -ENOMEM; 167 + return; 188 168 189 169 /* Find total allocation size */ 190 170 cpufreq_for_each_valid_entry(pos, table) ··· 207 215 policy->stats = stats; 208 216 ret = sysfs_create_group(&policy->kobj, &stats_attr_group); 209 217 if (!ret) 210 - return 0; 218 + return; 211 219 212 220 /* We failed, release resources */ 213 221 policy->stats = NULL; 214 222 kfree(stats->time_in_state); 215 223 free_stat: 216 224 kfree(stats); 217 - 218 - return ret; 219 225 } 220 226 221 - static void cpufreq_stats_create_table(unsigned int cpu) 227 + void cpufreq_stats_record_transition(struct cpufreq_policy *policy, 228 + unsigned int new_freq) 222 229 { 223 - struct cpufreq_policy *policy; 224 - 225 - /* 226 - * "likely(!policy)" because normally cpufreq_stats will be registered 227 - * before cpufreq driver 228 - */ 229 - policy = cpufreq_cpu_get(cpu); 230 - if (likely(!policy)) 231 - return; 232 - 233 - __cpufreq_stats_create_table(policy); 234 - 235 - cpufreq_cpu_put(policy); 236 - } 237 - 238 - static int cpufreq_stat_notifier_policy(struct notifier_block *nb, 239 - unsigned long val, void *data) 240 - { 241 - int ret = 0; 242 - struct cpufreq_policy *policy = data; 243 - 244 - if (val == CPUFREQ_CREATE_POLICY) 245 - ret = __cpufreq_stats_create_table(policy); 246 - else if (val == CPUFREQ_REMOVE_POLICY) 247 - __cpufreq_stats_free_table(policy); 248 - 249 - return ret; 250 - } 251 - 252 - static int cpufreq_stat_notifier_trans(struct notifier_block *nb, 253 - unsigned long val, void *data) 254 - { 255 - struct cpufreq_freqs *freq = data; 256 - struct cpufreq_policy *policy = cpufreq_cpu_get(freq->cpu); 257 - struct cpufreq_stats *stats; 230 + struct cpufreq_stats *stats = policy->stats; 258 231 int old_index, new_index; 259 232 260 - if (!policy) { 261 - pr_err("%s: No policy found\n", __func__); 262 - return 0; 263 - } 264 - 265 
- if (val != CPUFREQ_POSTCHANGE) 266 - goto put_policy; 267 - 268 - if (!policy->stats) { 233 + if (!stats) { 269 234 pr_debug("%s: No stats found\n", __func__); 270 - goto put_policy; 235 + return; 271 236 } 272 - 273 - stats = policy->stats; 274 237 275 238 old_index = stats->last_index; 276 - new_index = freq_table_get_index(stats, freq->new); 239 + new_index = freq_table_get_index(stats, new_freq); 277 240 278 241 /* We can't do stats->time_in_state[-1]= .. */ 279 - if (old_index == -1 || new_index == -1) 280 - goto put_policy; 281 - 282 - if (old_index == new_index) 283 - goto put_policy; 242 + if (old_index == -1 || new_index == -1 || old_index == new_index) 243 + return; 284 244 285 245 cpufreq_stats_update(stats); 286 246 ··· 241 297 stats->trans_table[old_index * stats->max_state + new_index]++; 242 298 #endif 243 299 stats->total_trans++; 244 - 245 - put_policy: 246 - cpufreq_cpu_put(policy); 247 - return 0; 248 300 } 249 - 250 - static struct notifier_block notifier_policy_block = { 251 - .notifier_call = cpufreq_stat_notifier_policy 252 - }; 253 - 254 - static struct notifier_block notifier_trans_block = { 255 - .notifier_call = cpufreq_stat_notifier_trans 256 - }; 257 - 258 - static int __init cpufreq_stats_init(void) 259 - { 260 - int ret; 261 - unsigned int cpu; 262 - 263 - spin_lock_init(&cpufreq_stats_lock); 264 - ret = cpufreq_register_notifier(&notifier_policy_block, 265 - CPUFREQ_POLICY_NOTIFIER); 266 - if (ret) 267 - return ret; 268 - 269 - for_each_online_cpu(cpu) 270 - cpufreq_stats_create_table(cpu); 271 - 272 - ret = cpufreq_register_notifier(&notifier_trans_block, 273 - CPUFREQ_TRANSITION_NOTIFIER); 274 - if (ret) { 275 - cpufreq_unregister_notifier(&notifier_policy_block, 276 - CPUFREQ_POLICY_NOTIFIER); 277 - for_each_online_cpu(cpu) 278 - cpufreq_stats_free_table(cpu); 279 - return ret; 280 - } 281 - 282 - return 0; 283 - } 284 - static void __exit cpufreq_stats_exit(void) 285 - { 286 - unsigned int cpu; 287 - 288 - 
cpufreq_unregister_notifier(&notifier_policy_block, 289 - CPUFREQ_POLICY_NOTIFIER); 290 - cpufreq_unregister_notifier(&notifier_trans_block, 291 - CPUFREQ_TRANSITION_NOTIFIER); 292 - for_each_online_cpu(cpu) 293 - cpufreq_stats_free_table(cpu); 294 - } 295 - 296 - MODULE_AUTHOR("Zou Nan hai <nanhai.zou@intel.com>"); 297 - MODULE_DESCRIPTION("Export cpufreq stats via sysfs"); 298 - MODULE_LICENSE("GPL"); 299 - 300 - module_init(cpufreq_stats_init); 301 - module_exit(cpufreq_stats_exit);
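The stats rework above replaces the transition notifier with a direct `cpufreq_stats_record_transition()` call, but the bookkeeping itself is unchanged: a `max_state x max_state` From/To matrix stored flat. A small sketch of that indexing, with simplified types:

```c
/* Sketch of cpufreq_stats' transition matrix: trans_table is a flat
 * MAX_STATE x MAX_STATE array indexed as [old * MAX_STATE + new].
 * MAX_STATE is a made-up fixed size here; the kernel sizes it from
 * the frequency table. */
#define MAX_STATE 4

struct stats {
	unsigned int total_trans;
	int last_index;		/* -1 means "unknown state" */
	unsigned int trans_table[MAX_STATE * MAX_STATE];
};

static void record_transition(struct stats *s, int new_index)
{
	int old_index = s->last_index;

	/* Mirror the kernel's guards: skip unknown or no-op transitions. */
	if (old_index == -1 || new_index == -1 || old_index == new_index)
		return;

	s->trans_table[old_index * MAX_STATE + new_index]++;
	s->total_trans++;
	s->last_index = new_index;	/* done elsewhere in the kernel */
}
```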
drivers/cpufreq/cpufreq_userspace.c (+48 -48)
··· 65 65 return 0; 66 66 } 67 67 68 - static int cpufreq_governor_userspace(struct cpufreq_policy *policy, 69 - unsigned int event) 68 + static void cpufreq_userspace_policy_exit(struct cpufreq_policy *policy) 69 + { 70 + mutex_lock(&userspace_mutex); 71 + kfree(policy->governor_data); 72 + policy->governor_data = NULL; 73 + mutex_unlock(&userspace_mutex); 74 + } 75 + 76 + static int cpufreq_userspace_policy_start(struct cpufreq_policy *policy) 70 77 { 71 78 unsigned int *setspeed = policy->governor_data; 72 - unsigned int cpu = policy->cpu; 73 - int rc = 0; 74 79 75 - if (event == CPUFREQ_GOV_POLICY_INIT) 76 - return cpufreq_userspace_policy_init(policy); 80 + BUG_ON(!policy->cur); 81 + pr_debug("started managing cpu %u\n", policy->cpu); 77 82 78 - if (!setspeed) 79 - return -EINVAL; 83 + mutex_lock(&userspace_mutex); 84 + per_cpu(cpu_is_managed, policy->cpu) = 1; 85 + *setspeed = policy->cur; 86 + mutex_unlock(&userspace_mutex); 87 + return 0; 88 + } 80 89 81 - switch (event) { 82 - case CPUFREQ_GOV_POLICY_EXIT: 83 - mutex_lock(&userspace_mutex); 84 - policy->governor_data = NULL; 85 - kfree(setspeed); 86 - mutex_unlock(&userspace_mutex); 87 - break; 88 - case CPUFREQ_GOV_START: 89 - BUG_ON(!policy->cur); 90 - pr_debug("started managing cpu %u\n", cpu); 90 + static void cpufreq_userspace_policy_stop(struct cpufreq_policy *policy) 91 + { 92 + unsigned int *setspeed = policy->governor_data; 91 93 92 - mutex_lock(&userspace_mutex); 93 - per_cpu(cpu_is_managed, cpu) = 1; 94 - *setspeed = policy->cur; 95 - mutex_unlock(&userspace_mutex); 96 - break; 97 - case CPUFREQ_GOV_STOP: 98 - pr_debug("managing cpu %u stopped\n", cpu); 94 + pr_debug("managing cpu %u stopped\n", policy->cpu); 99 95 100 - mutex_lock(&userspace_mutex); 101 - per_cpu(cpu_is_managed, cpu) = 0; 102 - *setspeed = 0; 103 - mutex_unlock(&userspace_mutex); 104 - break; 105 - case CPUFREQ_GOV_LIMITS: 106 - mutex_lock(&userspace_mutex); 107 - pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz, 
last set to %u kHz\n", 108 - cpu, policy->min, policy->max, policy->cur, *setspeed); 96 + mutex_lock(&userspace_mutex); 97 + per_cpu(cpu_is_managed, policy->cpu) = 0; 98 + *setspeed = 0; 99 + mutex_unlock(&userspace_mutex); 100 + } 109 101 110 - if (policy->max < *setspeed) 111 - __cpufreq_driver_target(policy, policy->max, 112 - CPUFREQ_RELATION_H); 113 - else if (policy->min > *setspeed) 114 - __cpufreq_driver_target(policy, policy->min, 115 - CPUFREQ_RELATION_L); 116 - else 117 - __cpufreq_driver_target(policy, *setspeed, 118 - CPUFREQ_RELATION_L); 119 - mutex_unlock(&userspace_mutex); 120 - break; 121 - } 122 - return rc; 102 + static void cpufreq_userspace_policy_limits(struct cpufreq_policy *policy) 103 + { 104 + unsigned int *setspeed = policy->governor_data; 105 + 106 + mutex_lock(&userspace_mutex); 107 + 108 + pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz, last set to %u kHz\n", 109 + policy->cpu, policy->min, policy->max, policy->cur, *setspeed); 110 + 111 + if (policy->max < *setspeed) 112 + __cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H); 113 + else if (policy->min > *setspeed) 114 + __cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L); 115 + else 116 + __cpufreq_driver_target(policy, *setspeed, CPUFREQ_RELATION_L); 117 + 118 + mutex_unlock(&userspace_mutex); 123 119 } 124 120 125 121 static struct cpufreq_governor cpufreq_gov_userspace = { 126 122 .name = "userspace", 127 - .governor = cpufreq_governor_userspace, 123 + .init = cpufreq_userspace_policy_init, 124 + .exit = cpufreq_userspace_policy_exit, 125 + .start = cpufreq_userspace_policy_start, 126 + .stop = cpufreq_userspace_policy_stop, 127 + .limits = cpufreq_userspace_policy_limits, 128 128 .store_setspeed = cpufreq_set, 129 129 .show_setspeed = show_speed, 130 130 .owner = THIS_MODULE,
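The userspace governor's new `->limits` callback above keeps the user-requested speed inside the policy bounds, picking the relation accordingly. The clamp itself reduces to a few comparisons, sketched here as a plain function (the driver-target call and relations are dropped for brevity):

```c
/* Sketch of the userspace governor's limits handling: return the
 * frequency the driver would be asked for when policy limits change.
 * In the kernel, the first case uses CPUFREQ_RELATION_H and the other
 * two CPUFREQ_RELATION_L. */
static unsigned int clamp_setspeed(unsigned int setspeed,
				   unsigned int min, unsigned int max)
{
	if (max < setspeed)
		return max;	/* user asked for more than allowed */
	if (min > setspeed)
		return min;	/* user asked for less than allowed */
	return setspeed;	/* already within [min, max] */
}
```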
drivers/cpufreq/davinci-cpufreq.c (+1 -21)
··· 38 38 }; 39 39 static struct davinci_cpufreq cpufreq; 40 40 41 - static int davinci_verify_speed(struct cpufreq_policy *policy) 42 - { 43 - struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data; 44 - struct cpufreq_frequency_table *freq_table = pdata->freq_table; 45 - struct clk *armclk = cpufreq.armclk; 46 - 47 - if (freq_table) 48 - return cpufreq_frequency_table_verify(policy, freq_table); 49 - 50 - if (policy->cpu) 51 - return -EINVAL; 52 - 53 - cpufreq_verify_within_cpu_limits(policy); 54 - policy->min = clk_round_rate(armclk, policy->min * 1000) / 1000; 55 - policy->max = clk_round_rate(armclk, policy->max * 1000) / 1000; 56 - cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, 57 - policy->cpuinfo.max_freq); 58 - return 0; 59 - } 60 - 61 41 static int davinci_target(struct cpufreq_policy *policy, unsigned int idx) 62 42 { 63 43 struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data; ··· 101 121 102 122 static struct cpufreq_driver davinci_driver = { 103 123 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 104 - .verify = davinci_verify_speed, 124 + .verify = cpufreq_generic_frequency_table_verify, 105 125 .target_index = davinci_target, 106 126 .get = cpufreq_generic_get, 107 127 .init = davinci_cpu_init,
drivers/cpufreq/freq_table.c (+80 -26)
··· 63 63 else 64 64 return 0; 65 65 } 66 - EXPORT_SYMBOL_GPL(cpufreq_frequency_table_cpuinfo); 67 - 68 66 69 67 int cpufreq_frequency_table_verify(struct cpufreq_policy *policy, 70 68 struct cpufreq_frequency_table *table) ··· 106 108 */ 107 109 int cpufreq_generic_frequency_table_verify(struct cpufreq_policy *policy) 108 110 { 109 - struct cpufreq_frequency_table *table = 110 - cpufreq_frequency_get_table(policy->cpu); 111 - if (!table) 111 + if (!policy->freq_table) 112 112 return -ENODEV; 113 113 114 - return cpufreq_frequency_table_verify(policy, table); 114 + return cpufreq_frequency_table_verify(policy, policy->freq_table); 115 115 } 116 116 EXPORT_SYMBOL_GPL(cpufreq_generic_frequency_table_verify); 117 117 118 - int cpufreq_frequency_table_target(struct cpufreq_policy *policy, 119 - struct cpufreq_frequency_table *table, 120 - unsigned int target_freq, 121 - unsigned int relation, 122 - unsigned int *index) 118 + int cpufreq_table_index_unsorted(struct cpufreq_policy *policy, 119 + unsigned int target_freq, 120 + unsigned int relation) 123 121 { 124 122 struct cpufreq_frequency_table optimal = { 125 123 .driver_data = ~0, ··· 126 132 .frequency = 0, 127 133 }; 128 134 struct cpufreq_frequency_table *pos; 135 + struct cpufreq_frequency_table *table = policy->freq_table; 129 136 unsigned int freq, diff, i = 0; 137 + int index; 130 138 131 139 pr_debug("request for target %u kHz (relation: %u) for cpu %u\n", 132 140 target_freq, relation, policy->cpu); ··· 192 196 } 193 197 } 194 198 if (optimal.driver_data > i) { 195 - if (suboptimal.driver_data > i) 196 - return -EINVAL; 197 - *index = suboptimal.driver_data; 199 + if (suboptimal.driver_data > i) { 200 + WARN(1, "Invalid frequency table: %d\n", policy->cpu); 201 + return 0; 202 + } 203 + 204 + index = suboptimal.driver_data; 198 205 } else 199 - *index = optimal.driver_data; 206 + index = optimal.driver_data; 200 207 201 - pr_debug("target index is %u, freq is:%u kHz\n", *index, 202 - 
table[*index].frequency); 203 - 204 - return 0; 208 + pr_debug("target index is %u, freq is:%u kHz\n", index, 209 + table[index].frequency); 210 + return index; 205 211 } 206 - EXPORT_SYMBOL_GPL(cpufreq_frequency_table_target); 212 + EXPORT_SYMBOL_GPL(cpufreq_table_index_unsorted); 207 213 208 214 int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy, 209 215 unsigned int freq) 210 216 { 211 - struct cpufreq_frequency_table *pos, *table; 217 + struct cpufreq_frequency_table *pos, *table = policy->freq_table; 212 218 213 - table = cpufreq_frequency_get_table(policy->cpu); 214 219 if (unlikely(!table)) { 215 220 pr_debug("%s: Unable to find frequency table\n", __func__); 216 221 return -ENOENT; ··· 297 300 }; 298 301 EXPORT_SYMBOL_GPL(cpufreq_generic_attr); 299 302 303 + static int set_freq_table_sorted(struct cpufreq_policy *policy) 304 + { 305 + struct cpufreq_frequency_table *pos, *table = policy->freq_table; 306 + struct cpufreq_frequency_table *prev = NULL; 307 + int ascending = 0; 308 + 309 + policy->freq_table_sorted = CPUFREQ_TABLE_UNSORTED; 310 + 311 + cpufreq_for_each_valid_entry(pos, table) { 312 + if (!prev) { 313 + prev = pos; 314 + continue; 315 + } 316 + 317 + if (pos->frequency == prev->frequency) { 318 + pr_warn("Duplicate freq-table entries: %u\n", 319 + pos->frequency); 320 + return -EINVAL; 321 + } 322 + 323 + /* Frequency increased from prev to pos */ 324 + if (pos->frequency > prev->frequency) { 325 + /* But frequency was decreasing earlier */ 326 + if (ascending < 0) { 327 + pr_debug("Freq table is unsorted\n"); 328 + return 0; 329 + } 330 + 331 + ascending++; 332 + } else { 333 + /* Frequency decreased from prev to pos */ 334 + 335 + /* But frequency was increasing earlier */ 336 + if (ascending > 0) { 337 + pr_debug("Freq table is unsorted\n"); 338 + return 0; 339 + } 340 + 341 + ascending--; 342 + } 343 + 344 + prev = pos; 345 + } 346 + 347 + if (ascending > 0) 348 + policy->freq_table_sorted = CPUFREQ_TABLE_SORTED_ASCENDING; 
349 + else 350 + policy->freq_table_sorted = CPUFREQ_TABLE_SORTED_DESCENDING; 351 + 352 + pr_debug("Freq table is sorted in %s order\n", 353 + ascending > 0 ? "ascending" : "descending"); 354 + 355 + return 0; 356 + } 357 + 300 358 int cpufreq_table_validate_and_show(struct cpufreq_policy *policy, 301 359 struct cpufreq_frequency_table *table) 302 360 { 303 - int ret = cpufreq_frequency_table_cpuinfo(policy, table); 361 + int ret; 304 362 305 - if (!ret) 306 - policy->freq_table = table; 363 + ret = cpufreq_frequency_table_cpuinfo(policy, table); 364 + if (ret) 365 + return ret; 307 366 308 - return ret; 367 + policy->freq_table = table; 368 + return set_freq_table_sorted(policy); 309 369 } 310 370 EXPORT_SYMBOL_GPL(cpufreq_table_validate_and_show); 311 371
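The new `set_freq_table_sorted()` above classifies the table in one pass so that later lookups can use the fast sorted-table helpers. The detection logic, sketched in userspace over a plain array of frequencies:

```c
/* Sketch of set_freq_table_sorted()'s one-pass classification: flip to
 * UNSORTED as soon as the direction changes. The kernel rejects
 * duplicate entries with -EINVAL; for simplicity this sketch reports
 * them as UNSORTED. */
enum table_order { UNSORTED, ASCENDING, DESCENDING };

static enum table_order classify(const unsigned int *freq, int n)
{
	int i, ascending = 0;

	for (i = 1; i < n; i++) {
		if (freq[i] == freq[i - 1])
			return UNSORTED;	/* duplicate entry */
		if (freq[i] > freq[i - 1]) {
			if (ascending < 0)	/* was decreasing earlier */
				return UNSORTED;
			ascending++;
		} else {
			if (ascending > 0)	/* was increasing earlier */
				return UNSORTED;
			ascending--;
		}
	}
	return ascending > 0 ? ASCENDING : DESCENDING;
}
```

As in the kernel, a table that never increases is classified as descending; only a direction change or a duplicate demotes it to unsorted.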
drivers/cpufreq/intel_pstate.c (+45 -29)
··· 97 97 * read from MPERF MSR between last and current sample 98 98 * @tsc: Difference of time stamp counter between last and 99 99 * current sample 100 - * @freq: Effective frequency calculated from APERF/MPERF 101 100 * @time: Current time from scheduler 102 101 * 103 102 * This structure is used in the cpudata structure to store performance sample ··· 108 109 u64 aperf; 109 110 u64 mperf; 110 111 u64 tsc; 111 - int freq; 112 112 u64 time; 113 113 }; 114 114 ··· 280 282 static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu); 281 283 static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu); 282 284 283 - static struct pstate_adjust_policy pid_params; 284 - static struct pstate_funcs pstate_funcs; 285 - static int hwp_active; 285 + static struct pstate_adjust_policy pid_params __read_mostly; 286 + static struct pstate_funcs pstate_funcs __read_mostly; 287 + static int hwp_active __read_mostly; 286 288 287 289 #ifdef CONFIG_ACPI 288 290 static bool acpi_ppc; ··· 806 808 static void intel_pstate_hwp_enable(struct cpudata *cpudata) 807 809 { 808 810 /* First disable HWP notification interrupt as we don't process them */ 809 - wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00); 811 + if (static_cpu_has(X86_FEATURE_HWP_NOTIFY)) 812 + wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00); 810 813 811 814 wrmsrl_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1); 812 815 } ··· 944 945 if (err) 945 946 goto skip_tar; 946 947 947 - tdp_msr = MSR_CONFIG_TDP_NOMINAL + tdp_ctrl; 948 + tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x3); 948 949 err = rdmsrl_safe(tdp_msr, &tdp_ratio); 949 950 if (err) 950 951 goto skip_tar; ··· 972 973 u64 value; 973 974 int nont, ret; 974 975 975 - rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value); 976 + rdmsrl(MSR_TURBO_RATIO_LIMIT, value); 976 977 nont = core_get_max_pstate(); 977 978 ret = (value) & 255; 978 979 if (ret <= nont) ··· 1001 1002 u64 value; 1002 1003 int nont, ret; 1003 1004 1004 - 
rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value); 1005 + rdmsrl(MSR_TURBO_RATIO_LIMIT, value); 1005 1006 nont = core_get_max_pstate(); 1006 1007 ret = (((value) >> 8) & 0xFF); 1007 1008 if (ret <= nont) ··· 1091 1092 }, 1092 1093 }; 1093 1094 1095 + static struct cpu_defaults bxt_params = { 1096 + .pid_policy = { 1097 + .sample_rate_ms = 10, 1098 + .deadband = 0, 1099 + .setpoint = 60, 1100 + .p_gain_pct = 14, 1101 + .d_gain_pct = 0, 1102 + .i_gain_pct = 4, 1103 + }, 1104 + .funcs = { 1105 + .get_max = core_get_max_pstate, 1106 + .get_max_physical = core_get_max_pstate_physical, 1107 + .get_min = core_get_min_pstate, 1108 + .get_turbo = core_get_turbo_pstate, 1109 + .get_scaling = core_get_scaling, 1110 + .get_val = core_get_val, 1111 + .get_target_pstate = get_target_pstate_use_cpu_load, 1112 + }, 1113 + }; 1114 + 1094 1115 static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max) 1095 1116 { 1096 1117 int max_perf = cpu->pstate.turbo_pstate; ··· 1133 1114 *min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf); 1134 1115 } 1135 1116 1136 - static inline void intel_pstate_record_pstate(struct cpudata *cpu, int pstate) 1137 - { 1138 - trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu); 1139 - cpu->pstate.current_pstate = pstate; 1140 - } 1141 - 1142 1117 static void intel_pstate_set_min_pstate(struct cpudata *cpu) 1143 1118 { 1144 1119 int pstate = cpu->pstate.min_pstate; 1145 1120 1146 - intel_pstate_record_pstate(cpu, pstate); 1121 + trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu); 1122 + cpu->pstate.current_pstate = pstate; 1147 1123 /* 1148 1124 * Generally, there is no guarantee that this code will always run on 1149 1125 * the CPU being updated, so force the register update to run on the ··· 1298 1284 1299 1285 intel_pstate_get_min_max(cpu, &min_perf, &max_perf); 1300 1286 pstate = clamp_t(int, pstate, min_perf, max_perf); 1287 + trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu); 1301 1288 if (pstate == 
cpu->pstate.current_pstate) 1302 1289 return; 1303 1290 1304 - intel_pstate_record_pstate(cpu, pstate); 1291 + cpu->pstate.current_pstate = pstate; 1305 1292 wrmsrl(MSR_IA32_PERF_CTL, pstate_funcs.get_val(cpu, pstate)); 1306 1293 } 1307 1294 ··· 1367 1352 ICPU(INTEL_FAM6_SKYLAKE_DESKTOP, core_params), 1368 1353 ICPU(INTEL_FAM6_BROADWELL_XEON_D, core_params), 1369 1354 ICPU(INTEL_FAM6_XEON_PHI_KNL, knl_params), 1355 + ICPU(INTEL_FAM6_ATOM_GOLDMONT, bxt_params), 1370 1356 {} 1371 1357 }; 1372 1358 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids); 1373 1359 1374 - static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] = { 1360 + static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = { 1375 1361 ICPU(INTEL_FAM6_BROADWELL_XEON_D, core_params), 1376 1362 {} 1377 1363 }; ··· 1592 1576 .name = "intel_pstate", 1593 1577 }; 1594 1578 1595 - static int __initdata no_load; 1596 - static int __initdata no_hwp; 1597 - static int __initdata hwp_only; 1598 - static unsigned int force_load; 1579 + static int no_load __initdata; 1580 + static int no_hwp __initdata; 1581 + static int hwp_only __initdata; 1582 + static unsigned int force_load __initdata; 1599 1583 1600 - static int intel_pstate_msrs_not_valid(void) 1584 + static int __init intel_pstate_msrs_not_valid(void) 1601 1585 { 1602 1586 if (!pstate_funcs.get_max() || 1603 1587 !pstate_funcs.get_min() || ··· 1607 1591 return 0; 1608 1592 } 1609 1593 1610 - static void copy_pid_params(struct pstate_adjust_policy *policy) 1594 + static void __init copy_pid_params(struct pstate_adjust_policy *policy) 1611 1595 { 1612 1596 pid_params.sample_rate_ms = policy->sample_rate_ms; 1613 1597 pid_params.sample_rate_ns = pid_params.sample_rate_ms * NSEC_PER_MSEC; ··· 1618 1602 pid_params.setpoint = policy->setpoint; 1619 1603 } 1620 1604 1621 - static void copy_cpu_funcs(struct pstate_funcs *funcs) 1605 + static void __init copy_cpu_funcs(struct pstate_funcs *funcs) 1622 1606 { 1623 1607 pstate_funcs.get_max = 
funcs->get_max; 1624 1608 pstate_funcs.get_max_physical = funcs->get_max_physical; ··· 1633 1617 1634 1618 #ifdef CONFIG_ACPI 1635 1619 1636 - static bool intel_pstate_no_acpi_pss(void) 1620 + static bool __init intel_pstate_no_acpi_pss(void) 1637 1621 { 1638 1622 int i; 1639 1623 ··· 1662 1646 return true; 1663 1647 } 1664 1648 1665 - static bool intel_pstate_has_acpi_ppc(void) 1649 + static bool __init intel_pstate_has_acpi_ppc(void) 1666 1650 { 1667 1651 int i; 1668 1652 ··· 1690 1674 }; 1691 1675 1692 1676 /* Hardware vendor-specific info that has its own power management modes */ 1693 - static struct hw_vendor_info vendor_info[] = { 1677 + static struct hw_vendor_info vendor_info[] __initdata = { 1694 1678 {1, "HP ", "ProLiant", PSS}, 1695 1679 {1, "ORACLE", "X4-2 ", PPC}, 1696 1680 {1, "ORACLE", "X4-2L ", PPC}, ··· 1709 1693 {0, "", ""}, 1710 1694 }; 1711 1695 1712 - static bool intel_pstate_platform_pwr_mgmt_exists(void) 1696 + static bool __init intel_pstate_platform_pwr_mgmt_exists(void) 1713 1697 { 1714 1698 struct acpi_table_header hdr; 1715 1699 struct hw_vendor_info *v_info;
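Among the intel_pstate fixes above is the switch from `MSR_NHM_TURBO_RATIO_LIMIT` to `MSR_TURBO_RATIO_LIMIT`; the field extraction stays the same. Bits 7:0 of that MSR hold the max 1-core turbo ratio and bits 15:8 the 2-core ratio, and the driver falls back to the non-turbo max ("nont") when turbo is no higher. A sketch of the decoding, with a made-up MSR value rather than a hardware read:

```c
#include <stdint.h>

/* Sketch of intel_pstate's turbo-ratio decoding from
 * MSR_TURBO_RATIO_LIMIT. The MSR value used in tests is a fabricated
 * sample, not read from hardware. */
static int turbo_ratio_1core(uint64_t msr)
{
	return msr & 0xff;		/* bits 7:0 */
}

static int turbo_ratio_2core(uint64_t msr)
{
	return (msr >> 8) & 0xff;	/* bits 15:8 */
}

/* Fall back to the non-turbo max ratio if turbo is no higher. */
static int effective_turbo(uint64_t msr, int nont)
{
	int ret = turbo_ratio_1core(msr);

	return ret <= nont ? nont : ret;
}
```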
drivers/cpufreq/mvebu-cpufreq.c (+1 -1)
··· 70 70 continue; 71 71 } 72 72 73 - clk = clk_get(cpu_dev, 0); 73 + clk = clk_get(cpu_dev, NULL); 74 74 if (IS_ERR(clk)) { 75 75 pr_err("Cannot get clock for CPU %d\n", cpu); 76 76 return PTR_ERR(clk);
drivers/cpufreq/pcc-cpufreq.c (-2)
··· 555 555 policy->min = policy->cpuinfo.min_freq = 556 556 ioread32(&pcch_hdr->minimum_frequency) * 1000; 557 557 558 - policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 559 - 560 558 pr_debug("init: policy->max is %d, policy->min is %d\n", 561 559 policy->max, policy->min); 562 560 out:
drivers/cpufreq/powernv-cpufreq.c (+103 -78)
··· 64 64 /** 65 65 * struct global_pstate_info - Per policy data structure to maintain history of 66 66 * global pstates 67 - * @highest_lpstate: The local pstate from which we are ramping down 67 + * @highest_lpstate_idx: The local pstate index from which we are 68 + * ramping down 68 69 * @elapsed_time: Time in ms spent in ramping down from 69 - * highest_lpstate 70 + * highest_lpstate_idx 70 71 * @last_sampled_time: Time from boot in ms when global pstates were 71 72 * last set 72 - * @last_lpstate,last_gpstate: Last set values for local and global pstates 73 + * @last_lpstate_idx, Last set value of local pstate and global 74 + * last_gpstate_idx pstate in terms of cpufreq table index 73 75 * @timer: Is used for ramping down if cpu goes idle for 74 76 * a long time with global pstate held high 75 77 * @gpstate_lock: A spinlock to maintain synchronization between ··· 79 77 * governer's target_index calls 80 78 */ 81 79 struct global_pstate_info { 82 - int highest_lpstate; 80 + int highest_lpstate_idx; 83 81 unsigned int elapsed_time; 84 82 unsigned int last_sampled_time; 85 - int last_lpstate; 86 - int last_gpstate; 83 + int last_lpstate_idx; 84 + int last_gpstate_idx; 87 85 spinlock_t gpstate_lock; 88 86 struct timer_list timer; 89 87 }; ··· 126 124 static DEFINE_PER_CPU(struct chip *, chip_info); 127 125 128 126 /* 129 - * Note: The set of pstates consists of contiguous integers, the 130 - * smallest of which is indicated by powernv_pstate_info.min, the 131 - * largest of which is indicated by powernv_pstate_info.max. 127 + * Note: 128 + * The set of pstates consists of contiguous integers. 129 + * powernv_pstate_info stores the index of the frequency table for 130 + * max, min and nominal frequencies. It also stores number of 131 + * available frequencies. 132 132 * 133 - * The nominal pstate is the highest non-turbo pstate in this 134 - * platform. This is indicated by powernv_pstate_info.nominal. 
133 + * powernv_pstate_info.nominal indicates the index to the highest 134 + * non-turbo frequency. 135 135 */ 136 136 static struct powernv_pstate_info { 137 - int min; 138 - int max; 139 - int nominal; 140 - int nr_pstates; 137 + unsigned int min; 138 + unsigned int max; 139 + unsigned int nominal; 140 + unsigned int nr_pstates; 141 141 } powernv_pstate_info; 142 + 143 + /* Use following macros for conversions between pstate_id and index */ 144 + static inline int idx_to_pstate(unsigned int i) 145 + { 146 + return powernv_freqs[i].driver_data; 147 + } 148 + 149 + static inline unsigned int pstate_to_idx(int pstate) 150 + { 151 + /* 152 + * abs() is deliberately used so that is works with 153 + * both monotonically increasing and decreasing 154 + * pstate values 155 + */ 156 + return abs(pstate - idx_to_pstate(powernv_pstate_info.max)); 157 + } 142 158 143 159 static inline void reset_gpstates(struct cpufreq_policy *policy) 144 160 { 145 161 struct global_pstate_info *gpstates = policy->driver_data; 146 162 147 - gpstates->highest_lpstate = 0; 163 + gpstates->highest_lpstate_idx = 0; 148 164 gpstates->elapsed_time = 0; 149 165 gpstates->last_sampled_time = 0; 150 - gpstates->last_lpstate = 0; 151 - gpstates->last_gpstate = 0; 166 + gpstates->last_lpstate_idx = 0; 167 + gpstates->last_gpstate_idx = 0; 152 168 } 153 169 154 170 /* ··· 176 156 static int init_powernv_pstates(void) 177 157 { 178 158 struct device_node *power_mgt; 179 - int i, pstate_min, pstate_max, pstate_nominal, nr_pstates = 0; 159 + int i, nr_pstates = 0; 180 160 const __be32 *pstate_ids, *pstate_freqs; 181 161 u32 len_ids, len_freqs; 162 + u32 pstate_min, pstate_max, pstate_nominal; 182 163 183 164 power_mgt = of_find_node_by_path("/ibm,opal/power-mgt"); 184 165 if (!power_mgt) { ··· 229 208 return -ENODEV; 230 209 } 231 210 211 + powernv_pstate_info.nr_pstates = nr_pstates; 232 212 pr_debug("NR PStates %d\n", nr_pstates); 233 213 for (i = 0; i < nr_pstates; i++) { 234 214 u32 id = 
be32_to_cpu(pstate_ids[i]); ··· 238 216 pr_debug("PState id %d freq %d MHz\n", id, freq); 239 217 powernv_freqs[i].frequency = freq * 1000; /* kHz */ 240 218 powernv_freqs[i].driver_data = id; 219 + 220 + if (id == pstate_max) 221 + powernv_pstate_info.max = i; 222 + else if (id == pstate_nominal) 223 + powernv_pstate_info.nominal = i; 224 + else if (id == pstate_min) 225 + powernv_pstate_info.min = i; 241 226 } 227 + 242 228 /* End of list marker entry */ 243 229 powernv_freqs[i].frequency = CPUFREQ_TABLE_END; 244 - 245 - powernv_pstate_info.min = pstate_min; 246 - powernv_pstate_info.max = pstate_max; 247 - powernv_pstate_info.nominal = pstate_nominal; 248 - powernv_pstate_info.nr_pstates = nr_pstates; 249 - 250 230 return 0; 251 231 } 252 232 ··· 257 233 { 258 234 int i; 259 235 260 - i = powernv_pstate_info.max - pstate_id; 236 + i = pstate_to_idx(pstate_id); 261 237 if (i >= powernv_pstate_info.nr_pstates || i < 0) { 262 238 pr_warn("PState id %d outside of PState table, " 263 239 "reporting nominal id %d instead\n", 264 - pstate_id, powernv_pstate_info.nominal); 265 - i = powernv_pstate_info.max - powernv_pstate_info.nominal; 240 + pstate_id, idx_to_pstate(powernv_pstate_info.nominal)); 241 + i = powernv_pstate_info.nominal; 266 242 } 267 243 268 244 return powernv_freqs[i].frequency; ··· 276 252 char *buf) 277 253 { 278 254 return sprintf(buf, "%u\n", 279 - pstate_id_to_freq(powernv_pstate_info.nominal)); 255 + powernv_freqs[powernv_pstate_info.nominal].frequency); 280 256 } 281 257 282 258 struct freq_attr cpufreq_freq_attr_cpuinfo_nominal_freq = ··· 450 426 */ 451 427 static inline unsigned int get_nominal_index(void) 452 428 { 453 - return powernv_pstate_info.max - powernv_pstate_info.nominal; 429 + return powernv_pstate_info.nominal; 454 430 } 455 431 456 432 static void powernv_cpufreq_throttle_check(void *data) ··· 459 435 unsigned int cpu = smp_processor_id(); 460 436 unsigned long pmsr; 461 437 int pmsr_pmax; 438 + unsigned int pmsr_pmax_idx; 462 439 
463 440 pmsr = get_pmspr(SPRN_PMSR); 464 441 chip = this_cpu_read(chip_info); 465 442 466 443 /* Check for Pmax Capping */ 467 444 pmsr_pmax = (s8)PMSR_MAX(pmsr); 468 - if (pmsr_pmax != powernv_pstate_info.max) { 445 + pmsr_pmax_idx = pstate_to_idx(pmsr_pmax); 446 + if (pmsr_pmax_idx != powernv_pstate_info.max) { 469 447 if (chip->throttled) 470 448 goto next; 471 449 chip->throttled = true; 472 - if (pmsr_pmax < powernv_pstate_info.nominal) { 473 - pr_warn_once("CPU %d on Chip %u has Pmax reduced below nominal frequency (%d < %d)\n", 450 + if (pmsr_pmax_idx > powernv_pstate_info.nominal) { 451 + pr_warn_once("CPU %d on Chip %u has Pmax(%d) reduced below nominal frequency(%d)\n", 474 452 cpu, chip->id, pmsr_pmax, 475 - powernv_pstate_info.nominal); 453 + idx_to_pstate(powernv_pstate_info.nominal)); 476 454 chip->throttle_sub_turbo++; 477 455 } else { 478 456 chip->throttle_turbo++; ··· 510 484 511 485 /** 512 486 * calc_global_pstate - Calculate global pstate 513 - * @elapsed_time: Elapsed time in milliseconds 514 - * @local_pstate: New local pstate 515 - * @highest_lpstate: pstate from which its ramping down 487 + * @elapsed_time: Elapsed time in milliseconds 488 + * @local_pstate_idx: New local pstate 489 + * @highest_lpstate_idx: pstate from which its ramping down 516 490 * 517 491 * Finds the appropriate global pstate based on the pstate from which its 518 492 * ramping down and the time elapsed in ramping down. It follows a quadratic 519 493 * equation which ensures that it reaches ramping down to pmin in 5sec. 520 494 */ 521 495 static inline int calc_global_pstate(unsigned int elapsed_time, 522 - int highest_lpstate, int local_pstate) 496 + int highest_lpstate_idx, 497 + int local_pstate_idx) 523 498 { 524 - int pstate_diff; 499 + int index_diff; 525 500 526 501 /* 527 502 * Using ramp_down_percent we get the percentage of rampdown 528 503 * that we are expecting to be dropping. 
Difference between 529 - * highest_lpstate and powernv_pstate_info.min will give a absolute 504 + * highest_lpstate_idx and powernv_pstate_info.min will give a absolute 530 505 * number of how many pstates we will drop eventually by the end of 531 506 * 5 seconds, then just scale it get the number pstates to be dropped. 532 507 */ 533 - pstate_diff = ((int)ramp_down_percent(elapsed_time) * 534 - (highest_lpstate - powernv_pstate_info.min)) / 100; 508 + index_diff = ((int)ramp_down_percent(elapsed_time) * 509 + (powernv_pstate_info.min - highest_lpstate_idx)) / 100; 535 510 536 511 /* Ensure that global pstate is >= to local pstate */ 537 - if (highest_lpstate - pstate_diff < local_pstate) 538 - return local_pstate; 512 + if (highest_lpstate_idx + index_diff >= local_pstate_idx) 513 + return local_pstate_idx; 539 514 else 540 - return highest_lpstate - pstate_diff; 515 + return highest_lpstate_idx + index_diff; 541 516 } 542 517 543 518 static inline void queue_gpstate_timer(struct global_pstate_info *gpstates) ··· 573 546 { 574 547 struct cpufreq_policy *policy = (struct cpufreq_policy *)data; 575 548 struct global_pstate_info *gpstates = policy->driver_data; 576 - int gpstate_id; 549 + int gpstate_idx; 577 550 unsigned int time_diff = jiffies_to_msecs(jiffies) 578 551 - gpstates->last_sampled_time; 579 552 struct powernv_smp_call_data freq_data; ··· 583 556 584 557 gpstates->last_sampled_time += time_diff; 585 558 gpstates->elapsed_time += time_diff; 586 - freq_data.pstate_id = gpstates->last_lpstate; 559 + freq_data.pstate_id = idx_to_pstate(gpstates->last_lpstate_idx); 587 560 588 - if ((gpstates->last_gpstate == freq_data.pstate_id) || 561 + if ((gpstates->last_gpstate_idx == gpstates->last_lpstate_idx) || 589 562 (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME)) { 590 - gpstate_id = freq_data.pstate_id; 563 + gpstate_idx = pstate_to_idx(freq_data.pstate_id); 591 564 reset_gpstates(policy); 592 - gpstates->highest_lpstate = freq_data.pstate_id; 565 + 
gpstates->highest_lpstate_idx = gpstate_idx; 593 566 } else { 594 - gpstate_id = calc_global_pstate(gpstates->elapsed_time, 595 - gpstates->highest_lpstate, 596 - freq_data.pstate_id); 567 + gpstate_idx = calc_global_pstate(gpstates->elapsed_time, 568 + gpstates->highest_lpstate_idx, 569 + freq_data.pstate_id); 597 570 } 598 571 599 572 /* 600 573 * If local pstate is equal to global pstate, rampdown is over 601 574 * So timer is not required to be queued. 602 575 */ 603 - if (gpstate_id != freq_data.pstate_id) 576 + if (gpstate_idx != gpstates->last_lpstate_idx) 604 577 queue_gpstate_timer(gpstates); 605 578 606 - freq_data.gpstate_id = gpstate_id; 607 - gpstates->last_gpstate = freq_data.gpstate_id; 608 - gpstates->last_lpstate = freq_data.pstate_id; 579 + freq_data.gpstate_id = idx_to_pstate(gpstate_idx); 580 + gpstates->last_gpstate_idx = pstate_to_idx(freq_data.gpstate_id); 581 + gpstates->last_lpstate_idx = pstate_to_idx(freq_data.pstate_id); 609 582 610 583 spin_unlock(&gpstates->gpstate_lock); 611 584 ··· 622 595 unsigned int new_index) 623 596 { 624 597 struct powernv_smp_call_data freq_data; 625 - unsigned int cur_msec, gpstate_id; 598 + unsigned int cur_msec, gpstate_idx; 626 599 struct global_pstate_info *gpstates = policy->driver_data; 627 600 628 601 if (unlikely(rebooting) && new_index != get_nominal_index()) ··· 634 607 cur_msec = jiffies_to_msecs(get_jiffies_64()); 635 608 636 609 spin_lock(&gpstates->gpstate_lock); 637 - freq_data.pstate_id = powernv_freqs[new_index].driver_data; 610 + freq_data.pstate_id = idx_to_pstate(new_index); 638 611 639 612 if (!gpstates->last_sampled_time) { 640 - gpstate_id = freq_data.pstate_id; 641 - gpstates->highest_lpstate = freq_data.pstate_id; 613 + gpstate_idx = new_index; 614 + gpstates->highest_lpstate_idx = new_index; 642 615 goto gpstates_done; 643 616 } 644 617 645 - if (gpstates->last_gpstate > freq_data.pstate_id) { 618 + if (gpstates->last_gpstate_idx < new_index) { 646 619 gpstates->elapsed_time += 
cur_msec - 647 620 gpstates->last_sampled_time; 648 621 ··· 653 626 */ 654 627 if (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME) { 655 628 reset_gpstates(policy); 656 - gpstates->highest_lpstate = freq_data.pstate_id; 657 - gpstate_id = freq_data.pstate_id; 629 + gpstates->highest_lpstate_idx = new_index; 630 + gpstate_idx = new_index; 658 631 } else { 659 632 /* Elaspsed_time is less than 5 seconds, continue to rampdown */ 660 - gpstate_id = calc_global_pstate(gpstates->elapsed_time, 661 - gpstates->highest_lpstate, 662 - freq_data.pstate_id); 633 + gpstate_idx = calc_global_pstate(gpstates->elapsed_time, 634 + gpstates->highest_lpstate_idx, 635 + new_index); 663 636 } 664 637 } else { 665 638 reset_gpstates(policy); 666 - gpstates->highest_lpstate = freq_data.pstate_id; 667 - gpstate_id = freq_data.pstate_id; 639 + gpstates->highest_lpstate_idx = new_index; 640 + gpstate_idx = new_index; 668 641 } 669 642 670 643 /* 671 644 * If local pstate is equal to global pstate, rampdown is over 672 645 * So timer is not required to be queued. 
673 646 */ 674 - if (gpstate_id != freq_data.pstate_id) 647 + if (gpstate_idx != new_index) 675 648 queue_gpstate_timer(gpstates); 676 649 else 677 650 del_timer_sync(&gpstates->timer); 678 651 679 652 gpstates_done: 680 - freq_data.gpstate_id = gpstate_id; 653 + freq_data.gpstate_id = idx_to_pstate(gpstate_idx); 681 654 gpstates->last_sampled_time = cur_msec; 682 - gpstates->last_gpstate = freq_data.gpstate_id; 683 - gpstates->last_lpstate = freq_data.pstate_id; 655 + gpstates->last_gpstate_idx = gpstate_idx; 656 + gpstates->last_lpstate_idx = new_index; 684 657 685 658 spin_unlock(&gpstates->gpstate_lock); 686 659 ··· 786 759 struct cpufreq_policy policy; 787 760 788 761 cpufreq_get_policy(&policy, cpu); 789 - cpufreq_frequency_table_target(&policy, policy.freq_table, 790 - policy.cur, 791 - CPUFREQ_RELATION_C, &index); 762 + index = cpufreq_table_find_index_c(&policy, policy.cur); 792 763 powernv_cpufreq_target_index(&policy, index); 793 764 cpumask_andnot(&mask, &mask, policy.cpus); 794 765 } ··· 872 847 struct powernv_smp_call_data freq_data; 873 848 struct global_pstate_info *gpstates = policy->driver_data; 874 849 875 - freq_data.pstate_id = powernv_pstate_info.min; 876 - freq_data.gpstate_id = powernv_pstate_info.min; 850 + freq_data.pstate_id = idx_to_pstate(powernv_pstate_info.min); 851 + freq_data.gpstate_id = idx_to_pstate(powernv_pstate_info.min); 877 852 smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1); 878 853 del_timer_sync(&gpstates->timer); 879 854 }
+1 -2
drivers/cpufreq/ppc_cbe_cpufreq_pmi.c

···
		unsigned long event, void *data)
{
	struct cpufreq_policy *policy = data;
-	struct cpufreq_frequency_table *cbe_freqs;
+	struct cpufreq_frequency_table *cbe_freqs = policy->freq_table;
	u8 node;

	/* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY
···
	if (event == CPUFREQ_START)
		return 0;

-	cbe_freqs = cpufreq_frequency_get_table(policy->cpu);
	node = cbe_cpu_to_node(policy->cpu);

	pr_debug("got notified, event=%lu, node=%u\n", event, node);
+7 -26
drivers/cpufreq/s3c24xx-cpufreq.c

···
		     __func__, policy, target_freq, relation);

	if (ftab) {
-		if (cpufreq_frequency_table_target(policy, ftab,
-						   target_freq, relation,
-						   &index)) {
-			s3c_freq_dbg("%s: table failed\n", __func__);
-			return -EINVAL;
-		}
+		index = cpufreq_frequency_table_target(policy, target_freq,
+						       relation);

		s3c_freq_dbg("%s: adjust %d to entry %d (%u)\n", __func__,
			     target_freq, index, ftab[index].frequency);
···
		pll = NULL;
	} else {
		struct cpufreq_policy tmp_policy;
-		int ret;

		/* we keep the cpu pll table in Hz, to ensure we get an
		 * accurate value for the PLL output. */
···
		tmp_policy.min = policy->min * 1000;
		tmp_policy.max = policy->max * 1000;
		tmp_policy.cpu = policy->cpu;
+		tmp_policy.freq_table = pll_reg;

-		/* cpufreq_frequency_table_target uses a pointer to 'index'
-		 * which is the number of the table entry, not the value of
+		/* cpufreq_frequency_table_target returns the index
+		 * of the table entry, not the value of
		 * the table entry's index field.
		 */

-		ret = cpufreq_frequency_table_target(&tmp_policy, pll_reg,
-						     target_freq, relation,
-						     &index);
-
-		if (ret < 0) {
-			pr_err("%s: no PLL available\n", __func__);
-			goto err_notpossible;
-		}
-
+		index = cpufreq_frequency_table_target(&tmp_policy, target_freq,
+						       relation);
		pll = pll_reg + index;

		s3c_freq_dbg("%s: target %u => %u\n",
···
	}

	return s3c_cpufreq_settarget(policy, target_freq, pll);
-
-err_notpossible:
-	pr_err("no compatible settings for %d\n", target_freq);
-	return -EINVAL;
}

struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
···
{
	int size, ret;

-	if (!cpu_cur.info->calc_freqtable)
-		return -EINVAL;
-
	kfree(ftab);
-	ftab = NULL;

	size = cpu_cur.info->calc_freqtable(&cpu_cur, NULL, 0);
	size++;
+1 -6
drivers/cpufreq/s5pv210-cpufreq.c

···
	new_freq = s5pv210_freq_table[index].frequency;

	/* Finding current running level index */
-	if (cpufreq_frequency_table_target(policy, s5pv210_freq_table,
-					   old_freq, CPUFREQ_RELATION_H,
-					   &priv_index)) {
-		ret = -EINVAL;
-		goto exit;
-	}
+	priv_index = cpufreq_table_find_index_h(policy, old_freq);

	arm_volt = dvs_conf[index].arm_volt;
	int_volt = dvs_conf[index].int_volt;
+1 -1
drivers/devfreq/Kconfig

···
comment "DEVFREQ Drivers"

config ARM_EXYNOS_BUS_DEVFREQ
-	bool "ARM EXYNOS Generic Memory Bus DEVFREQ Driver"
+	tristate "ARM EXYNOS Generic Memory Bus DEVFREQ Driver"
	depends on ARCH_EXYNOS
	select DEVFREQ_GOV_SIMPLE_ONDEMAND
	select DEVFREQ_GOV_PASSIVE
+1 -11
drivers/devfreq/devfreq-event.c

···
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/init.h>
-#include <linux/module.h>
+#include <linux/export.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/of.h>
···
	return 0;
}
subsys_initcall(devfreq_event_init);
-
-static void __exit devfreq_event_exit(void)
-{
-	class_destroy(devfreq_event_class);
-}
-module_exit(devfreq_event_exit);
-
-MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
-MODULE_DESCRIPTION("DEVFREQ-Event class support");
-MODULE_LICENSE("GPL");
+3 -12
drivers/devfreq/devfreq.c

···
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/init.h>
-#include <linux/module.h>
+#include <linux/export.h>
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/pm_opp.h>
···
		if (devfreq->dev.parent
		    && devfreq->dev.parent->of_node == node) {
			mutex_unlock(&devfreq_list_lock);
+			of_node_put(node);
			return devfreq;
		}
	}
	mutex_unlock(&devfreq_list_lock);
+	of_node_put(node);

	return ERR_PTR(-EPROBE_DEFER);
}
···
}
subsys_initcall(devfreq_init);

-static void __exit devfreq_exit(void)
-{
-	class_destroy(devfreq_class);
-	destroy_workqueue(devfreq_wq);
-}
-module_exit(devfreq_exit);
-
/*
 * The followings are helper functions for devfreq user device drivers with
 * OPP framework.
···
			       devm_devfreq_dev_match, devfreq));
}
EXPORT_SYMBOL(devm_devfreq_unregister_notifier);
-
-MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
-MODULE_DESCRIPTION("devfreq class support");
-MODULE_LICENSE("GPL");
+2 -2
drivers/devfreq/event/Kconfig

···
if PM_DEVFREQ_EVENT

config DEVFREQ_EVENT_EXYNOS_NOCP
-	bool "EXYNOS NoC (Network On Chip) Probe DEVFREQ event Driver"
+	tristate "EXYNOS NoC (Network On Chip) Probe DEVFREQ event Driver"
	depends on ARCH_EXYNOS
	select PM_OPP
	help
···
	(Network on Chip) Probe counters to measure the bandwidth of AXI bus.

config DEVFREQ_EVENT_EXYNOS_PPMU
-	bool "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
+	tristate "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
	depends on ARCH_EXYNOS
	select PM_OPP
	help
+2 -1
drivers/devfreq/event/exynos-ppmu.c

···
	if (!info->edev) {
		dev_err(&pdev->dev,
			"failed to allocate memory devfreq-event devices\n");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err;
	}
	edev = info->edev;
	platform_set_drvdata(pdev, info);
+7 -4
drivers/devfreq/exynos-bus.c

···
static int exynos_bus_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
-	struct device_node *np = dev->of_node;
+	struct device_node *np = dev->of_node, *node;
	struct devfreq_dev_profile *profile;
	struct devfreq_simple_ondemand_data *ondemand_data;
	struct devfreq_passive_data *passive_data;
···
	/* Parse the device-tree to get the resource information */
	ret = exynos_bus_parse_of(np, bus);
	if (ret < 0)
-		goto err;
+		return ret;

	profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
	if (!profile) {
···
		goto err;
	}

-	if (of_parse_phandle(dev->of_node, "devfreq", 0))
+	node = of_parse_phandle(dev->of_node, "devfreq", 0);
+	if (node) {
+		of_node_put(node);
		goto passive;
-	else
+	} else {
		ret = exynos_bus_parent_parse_of(np, bus);
+	}

	if (ret < 0)
		goto err;
-23
drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c

···
static int acp_resume(void *handle)
{
-	int i, ret;
-	struct acp_pm_domain *apd;
-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
-	/* return early if no ACP */
-	if (!adev->acp.acp_genpd)
-		return 0;
-
-	/* SMU block will power on ACP irrespective of ACP runtime status.
-	 * Power off explicitly based on genpd ACP runtime status so that ACP
-	 * hw and ACP-genpd status are in sync.
-	 * 'suspend_power_off' represents "Power status before system suspend"
-	 */
-	if (adev->acp.acp_genpd->gpd.suspend_power_off == true) {
-		apd = container_of(&adev->acp.acp_genpd->gpd,
-				   struct acp_pm_domain, gpd);
-
-		for (i = 4; i >= 0 ; i--) {
-			ret = acp_suspend_tile(apd->cgs_dev, ACP_TILE_P1 + i);
-			if (ret)
-				pr_err("ACP tile %d tile suspend failed\n", i);
-		}
-	}
	return 0;
}
+58 -49
drivers/idle/intel_idle.c
··· 46 46 * to avoid complications with the lapic timer workaround. 47 47 * Have not seen issues with suspend, but may need same workaround here. 48 48 * 49 - * There is currently no kernel-based automatic probing/loading mechanism 50 - * if the driver is built as a module. 51 49 */ 52 50 53 51 /* un-comment DEBUG to enable pr_debug() statements */ ··· 58 60 #include <linux/sched.h> 59 61 #include <linux/notifier.h> 60 62 #include <linux/cpu.h> 61 - #include <linux/module.h> 63 + #include <linux/moduleparam.h> 62 64 #include <asm/cpu_device_id.h> 63 65 #include <asm/intel-family.h> 64 66 #include <asm/mwait.h> ··· 826 828 .enter = NULL } 827 829 }; 828 830 831 + static struct cpuidle_state dnv_cstates[] = { 832 + { 833 + .name = "C1-DNV", 834 + .desc = "MWAIT 0x00", 835 + .flags = MWAIT2flg(0x00), 836 + .exit_latency = 2, 837 + .target_residency = 2, 838 + .enter = &intel_idle, 839 + .enter_freeze = intel_idle_freeze, }, 840 + { 841 + .name = "C1E-DNV", 842 + .desc = "MWAIT 0x01", 843 + .flags = MWAIT2flg(0x01), 844 + .exit_latency = 10, 845 + .target_residency = 20, 846 + .enter = &intel_idle, 847 + .enter_freeze = intel_idle_freeze, }, 848 + { 849 + .name = "C6-DNV", 850 + .desc = "MWAIT 0x20", 851 + .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, 852 + .exit_latency = 50, 853 + .target_residency = 500, 854 + .enter = &intel_idle, 855 + .enter_freeze = intel_idle_freeze, }, 856 + { 857 + .enter = NULL } 858 + }; 859 + 829 860 /** 830 861 * intel_idle 831 862 * @dev: cpuidle_device ··· 1044 1017 .disable_promotion_to_c1e = true, 1045 1018 }; 1046 1019 1020 + static const struct idle_cpu idle_cpu_dnv = { 1021 + .state_table = dnv_cstates, 1022 + .disable_promotion_to_c1e = true, 1023 + }; 1024 + 1047 1025 #define ICPU(model, cpu) \ 1048 1026 { X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu } 1049 1027 ··· 1085 1053 ICPU(INTEL_FAM6_SKYLAKE_X, idle_cpu_skx), 1086 1054 ICPU(INTEL_FAM6_XEON_PHI_KNL, idle_cpu_knl), 1087 1055 
ICPU(INTEL_FAM6_ATOM_GOLDMONT, idle_cpu_bxt), 1056 + ICPU(INTEL_FAM6_ATOM_DENVERTON, idle_cpu_dnv), 1088 1057 {} 1089 1058 }; 1090 - MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids); 1091 1059 1092 1060 /* 1093 1061 * intel_idle_probe() ··· 1187 1155 { 1188 1156 unsigned long long ns; 1189 1157 1190 - ns = irtl_ns_units[(irtl >> 10) & 0x3]; 1158 + if (!irtl) 1159 + return 0; 1160 + 1161 + ns = irtl_ns_units[(irtl >> 10) & 0x7]; 1191 1162 1192 1163 return div64_u64((irtl & 0x3FF) * ns, 1000); 1193 1164 } ··· 1203 1168 static void bxt_idle_state_table_update(void) 1204 1169 { 1205 1170 unsigned long long msr; 1171 + unsigned int usec; 1206 1172 1207 1173 rdmsrl(MSR_PKGC6_IRTL, msr); 1208 - if (msr) { 1209 - unsigned int usec = irtl_2_usec(msr); 1210 - 1174 + usec = irtl_2_usec(msr); 1175 + if (usec) { 1211 1176 bxt_cstates[2].exit_latency = usec; 1212 1177 bxt_cstates[2].target_residency = usec; 1213 1178 } 1214 1179 1215 1180 rdmsrl(MSR_PKGC7_IRTL, msr); 1216 - if (msr) { 1217 - unsigned int usec = irtl_2_usec(msr); 1218 - 1181 + usec = irtl_2_usec(msr); 1182 + if (usec) { 1219 1183 bxt_cstates[3].exit_latency = usec; 1220 1184 bxt_cstates[3].target_residency = usec; 1221 1185 } 1222 1186 1223 1187 rdmsrl(MSR_PKGC8_IRTL, msr); 1224 - if (msr) { 1225 - unsigned int usec = irtl_2_usec(msr); 1226 - 1188 + usec = irtl_2_usec(msr); 1189 + if (usec) { 1227 1190 bxt_cstates[4].exit_latency = usec; 1228 1191 bxt_cstates[4].target_residency = usec; 1229 1192 } 1230 1193 1231 1194 rdmsrl(MSR_PKGC9_IRTL, msr); 1232 - if (msr) { 1233 - unsigned int usec = irtl_2_usec(msr); 1234 - 1195 + usec = irtl_2_usec(msr); 1196 + if (usec) { 1235 1197 bxt_cstates[5].exit_latency = usec; 1236 1198 bxt_cstates[5].target_residency = usec; 1237 1199 } 1238 1200 1239 1201 rdmsrl(MSR_PKGC10_IRTL, msr); 1240 - if (msr) { 1241 - unsigned int usec = irtl_2_usec(msr); 1242 - 1202 + usec = irtl_2_usec(msr); 1203 + if (usec) { 1243 1204 bxt_cstates[6].exit_latency = usec; 1244 1205 
bxt_cstates[6].target_residency = usec; 1245 1206 } ··· 1447 1416 1448 1417 return 0; 1449 1418 } 1419 + device_initcall(intel_idle_init); 1450 1420 1451 - static void __exit intel_idle_exit(void) 1452 - { 1453 - struct cpuidle_device *dev; 1454 - int i; 1455 - 1456 - cpu_notifier_register_begin(); 1457 - 1458 - if (lapic_timer_reliable_states != LAPIC_TIMER_ALWAYS_RELIABLE) 1459 - on_each_cpu(__setup_broadcast_timer, (void *)false, 1); 1460 - __unregister_cpu_notifier(&cpu_hotplug_notifier); 1461 - 1462 - for_each_possible_cpu(i) { 1463 - dev = per_cpu_ptr(intel_idle_cpuidle_devices, i); 1464 - cpuidle_unregister_device(dev); 1465 - } 1466 - 1467 - cpu_notifier_register_done(); 1468 - 1469 - cpuidle_unregister_driver(&intel_idle_driver); 1470 - free_percpu(intel_idle_cpuidle_devices); 1471 - } 1472 - 1473 - module_init(intel_idle_init); 1474 - module_exit(intel_idle_exit); 1475 - 1421 + /* 1422 + * We are not really modular, but we used to support that. Meaning we also 1423 + * support "intel_idle.max_cstate=..." at boot and also a read-only export of 1424 + * it at /sys/module/intel_idle/parameters/max_cstate -- so using module_param 1425 + * is the easiest way (currently) to continue doing that. 1426 + */ 1476 1427 module_param(max_cstate, int, 0444); 1477 - 1478 - MODULE_AUTHOR("Len Brown <len.brown@intel.com>"); 1479 - MODULE_DESCRIPTION("Cpuidle driver for Intel Hardware v" INTEL_IDLE_VERSION); 1480 - MODULE_LICENSE("GPL");
+2 -2
drivers/pci/pci.c

···
int pci_set_platform_pm(const struct pci_platform_pm_ops *ops)
{
-	if (!ops->is_manageable || !ops->set_state || !ops->choose_state
-	    || !ops->sleep_wake)
+	if (!ops->is_manageable || !ops->set_state || !ops->choose_state ||
+	    !ops->sleep_wake || !ops->run_wake || !ops->need_resume)
		return -EINVAL;
	pci_platform_pm = ops;
	return 0;
+82 -25
drivers/powercap/intel_rapl.c
··· 336 336 337 337 static int find_nr_power_limit(struct rapl_domain *rd) 338 338 { 339 - int i; 339 + int i, nr_pl = 0; 340 340 341 341 for (i = 0; i < NR_POWER_LIMITS; i++) { 342 - if (rd->rpl[i].name == NULL) 343 - break; 342 + if (rd->rpl[i].name) 343 + nr_pl++; 344 344 } 345 345 346 - return i; 346 + return nr_pl; 347 347 } 348 348 349 349 static int set_domain_enable(struct powercap_zone *power_zone, bool mode) ··· 426 426 }, 427 427 }; 428 428 429 - static int set_power_limit(struct powercap_zone *power_zone, int id, 429 + 430 + /* 431 + * Constraint index used by powercap can be different than power limit (PL) 432 + * index in that some PLs maybe missing due to non-existant MSRs. So we 433 + * need to convert here by finding the valid PLs only (name populated). 434 + */ 435 + static int contraint_to_pl(struct rapl_domain *rd, int cid) 436 + { 437 + int i, j; 438 + 439 + for (i = 0, j = 0; i < NR_POWER_LIMITS; i++) { 440 + if ((rd->rpl[i].name) && j++ == cid) { 441 + pr_debug("%s: index %d\n", __func__, i); 442 + return i; 443 + } 444 + } 445 + 446 + return -EINVAL; 447 + } 448 + 449 + static int set_power_limit(struct powercap_zone *power_zone, int cid, 430 450 u64 power_limit) 431 451 { 432 452 struct rapl_domain *rd; 433 453 struct rapl_package *rp; 434 454 int ret = 0; 455 + int id; 435 456 436 457 get_online_cpus(); 437 458 rd = power_zone_to_rapl_domain(power_zone); 459 + id = contraint_to_pl(rd, cid); 460 + 438 461 rp = rd->rp; 439 462 440 463 if (rd->state & DOMAIN_STATE_BIOS_LOCKED) { ··· 484 461 return ret; 485 462 } 486 463 487 - static int get_current_power_limit(struct powercap_zone *power_zone, int id, 464 + static int get_current_power_limit(struct powercap_zone *power_zone, int cid, 488 465 u64 *data) 489 466 { 490 467 struct rapl_domain *rd; 491 468 u64 val; 492 469 int prim; 493 470 int ret = 0; 471 + int id; 494 472 495 473 get_online_cpus(); 496 474 rd = power_zone_to_rapl_domain(power_zone); 475 + id = contraint_to_pl(rd, cid); 497 476 
switch (rd->rpl[id].prim_id) { 498 477 case PL1_ENABLE: 499 478 prim = POWER_LIMIT1; ··· 517 492 return ret; 518 493 } 519 494 520 - static int set_time_window(struct powercap_zone *power_zone, int id, 495 + static int set_time_window(struct powercap_zone *power_zone, int cid, 521 496 u64 window) 522 497 { 523 498 struct rapl_domain *rd; 524 499 int ret = 0; 500 + int id; 525 501 526 502 get_online_cpus(); 527 503 rd = power_zone_to_rapl_domain(power_zone); 504 + id = contraint_to_pl(rd, cid); 505 + 528 506 switch (rd->rpl[id].prim_id) { 529 507 case PL1_ENABLE: 530 508 rapl_write_data_raw(rd, TIME_WINDOW1, window); ··· 542 514 return ret; 543 515 } 544 516 545 - static int get_time_window(struct powercap_zone *power_zone, int id, u64 *data) 517 + static int get_time_window(struct powercap_zone *power_zone, int cid, u64 *data) 546 518 { 547 519 struct rapl_domain *rd; 548 520 u64 val; 549 521 int ret = 0; 522 + int id; 550 523 551 524 get_online_cpus(); 552 525 rd = power_zone_to_rapl_domain(power_zone); 526 + id = contraint_to_pl(rd, cid); 527 + 553 528 switch (rd->rpl[id].prim_id) { 554 529 case PL1_ENABLE: 555 530 ret = rapl_read_data_raw(rd, TIME_WINDOW1, true, &val); ··· 571 540 return ret; 572 541 } 573 542 574 - static const char *get_constraint_name(struct powercap_zone *power_zone, int id) 543 + static const char *get_constraint_name(struct powercap_zone *power_zone, int cid) 575 544 { 576 - struct rapl_power_limit *rpl; 577 545 struct rapl_domain *rd; 546 + int id; 578 547 579 548 rd = power_zone_to_rapl_domain(power_zone); 580 - rpl = (struct rapl_power_limit *) &rd->rpl[id]; 549 + id = contraint_to_pl(rd, cid); 550 + if (id >= 0) 551 + return rd->rpl[id].name; 581 552 582 - return rpl->name; 553 + return NULL; 583 554 } 584 555 585 556 ··· 1134 1101 RAPL_CPU(INTEL_FAM6_SANDYBRIDGE_X, rapl_defaults_core), 1135 1102 1136 1103 RAPL_CPU(INTEL_FAM6_IVYBRIDGE, rapl_defaults_core), 1104 + RAPL_CPU(INTEL_FAM6_IVYBRIDGE_X, rapl_defaults_core), 1137 1105 1138 
1106 RAPL_CPU(INTEL_FAM6_HASWELL_CORE, rapl_defaults_core), 1139 1107 RAPL_CPU(INTEL_FAM6_HASWELL_ULT, rapl_defaults_core), ··· 1157 1123 RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD1, rapl_defaults_tng), 1158 1124 RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD2, rapl_defaults_ann), 1159 1125 RAPL_CPU(INTEL_FAM6_ATOM_GOLDMONT, rapl_defaults_core), 1126 + RAPL_CPU(INTEL_FAM6_ATOM_DENVERTON, rapl_defaults_core), 1160 1127 1161 1128 RAPL_CPU(INTEL_FAM6_XEON_PHI_KNL, rapl_defaults_hsw_server), 1162 1129 {} ··· 1416 1381 return 0; 1417 1382 } 1418 1383 1384 + 1385 + /* 1386 + * Check if power limits are available. Two cases when they are not available: 1387 + * 1. Locked by BIOS, in this case we still provide read-only access so that 1388 + * users can see what limit is set by the BIOS. 1389 + * 2. Some CPUs make some domains monitoring only which means PLx MSRs may not 1390 + * exist at all. In this case, we do not show the contraints in powercap. 1391 + * 1392 + * Called after domains are detected and initialized. 1393 + */ 1394 + static void rapl_detect_powerlimit(struct rapl_domain *rd) 1395 + { 1396 + u64 val64; 1397 + int i; 1398 + 1399 + /* check if the domain is locked by BIOS, ignore if MSR doesn't exist */ 1400 + if (!rapl_read_data_raw(rd, FW_LOCK, false, &val64)) { 1401 + if (val64) { 1402 + pr_info("RAPL package %d domain %s locked by BIOS\n", 1403 + rd->rp->id, rd->name); 1404 + rd->state |= DOMAIN_STATE_BIOS_LOCKED; 1405 + } 1406 + } 1407 + /* check if power limit MSRs exists, otherwise domain is monitoring only */ 1408 + for (i = 0; i < NR_POWER_LIMITS; i++) { 1409 + int prim = rd->rpl[i].prim_id; 1410 + if (rapl_read_data_raw(rd, prim, false, &val64)) 1411 + rd->rpl[i].name = NULL; 1412 + } 1413 + } 1414 + 1419 1415 /* Detect active and valid domains for the given CPU, caller must 1420 1416 * ensure the CPU belongs to the targeted package and CPU hotlug is disabled. 
1421 1417 */ ··· 1455 1389 int i; 1456 1390 int ret = 0; 1457 1391 struct rapl_domain *rd; 1458 - u64 locked; 1459 1392 1460 1393 for (i = 0; i < RAPL_DOMAIN_MAX; i++) { 1461 1394 /* use physical package id to read counters */ ··· 1465 1400 } 1466 1401 rp->nr_domains = bitmap_weight(&rp->domain_map, RAPL_DOMAIN_MAX); 1467 1402 if (!rp->nr_domains) { 1468 - pr_err("no valid rapl domains found in package %d\n", rp->id); 1403 + pr_debug("no valid rapl domains found in package %d\n", rp->id); 1469 1404 ret = -ENODEV; 1470 1405 goto done; 1471 1406 } ··· 1479 1414 } 1480 1415 rapl_init_domains(rp); 1481 1416 1482 - for (rd = rp->domains; rd < rp->domains + rp->nr_domains; rd++) { 1483 - /* check if the domain is locked by BIOS */ 1484 - ret = rapl_read_data_raw(rd, FW_LOCK, false, &locked); 1485 - if (ret) 1486 - return ret; 1487 - if (locked) { 1488 - pr_info("RAPL package %d domain %s locked by BIOS\n", 1489 - rp->id, rd->name); 1490 - rd->state |= DOMAIN_STATE_BIOS_LOCKED; 1491 - } 1492 - } 1417 + for (rd = rp->domains; rd < rp->domains + rp->nr_domains; rd++) 1418 + rapl_detect_powerlimit(rd); 1419 + 1493 1420 1494 1421 1495 1422 done:
+20 -6
drivers/thermal/cpu_cooling.c

···
			const struct cpumask *clip_cpus, u32 capacitance,
			get_static_t plat_static_func)
{
+	struct cpufreq_policy *policy;
	struct thermal_cooling_device *cool_dev;
	struct cpufreq_cooling_device *cpufreq_dev;
	char dev_name[THERMAL_NAME_LENGTH];
	struct cpufreq_frequency_table *pos, *table;
+	struct cpumask temp_mask;
	unsigned int freq, i, num_cpus;
	int ret;

-	table = cpufreq_frequency_get_table(cpumask_first(clip_cpus));
-	if (!table) {
-		pr_debug("%s: CPUFreq table not found\n", __func__);
+	cpumask_and(&temp_mask, clip_cpus, cpu_online_mask);
+	policy = cpufreq_cpu_get(cpumask_first(&temp_mask));
+	if (!policy) {
+		pr_debug("%s: CPUFreq policy not found\n", __func__);
		return ERR_PTR(-EPROBE_DEFER);
	}

+	table = policy->freq_table;
+	if (!table) {
+		pr_debug("%s: CPUFreq table not found\n", __func__);
+		cool_dev = ERR_PTR(-ENODEV);
+		goto put_policy;
+	}
+
	cpufreq_dev = kzalloc(sizeof(*cpufreq_dev), GFP_KERNEL);
-	if (!cpufreq_dev)
-		return ERR_PTR(-ENOMEM);
+	if (!cpufreq_dev) {
+		cool_dev = ERR_PTR(-ENOMEM);
+		goto put_policy;
+	}

	num_cpus = cpumask_weight(clip_cpus);
	cpufreq_dev->time_in_idle = kcalloc(num_cpus,
···
				  CPUFREQ_POLICY_NOTIFIER);
	mutex_unlock(&cooling_cpufreq_lock);

-	return cool_dev;
+	goto put_policy;

remove_idr:
	release_idr(&cpufreq_idr, cpufreq_dev->id);
···
	kfree(cpufreq_dev->time_in_idle);
free_cdev:
	kfree(cpufreq_dev);
+put_policy:
+	cpufreq_cpu_put(policy);

	return cool_dev;
}
+272 -17
include/linux/cpufreq.h
··· 36 36 37 37 struct cpufreq_governor; 38 38 39 + enum cpufreq_table_sorting { 40 + CPUFREQ_TABLE_UNSORTED, 41 + CPUFREQ_TABLE_SORTED_ASCENDING, 42 + CPUFREQ_TABLE_SORTED_DESCENDING 43 + }; 44 + 39 45 struct cpufreq_freqs { 40 46 unsigned int cpu; /* cpu nr */ 41 47 unsigned int old; ··· 93 87 94 88 struct cpufreq_user_policy user_policy; 95 89 struct cpufreq_frequency_table *freq_table; 90 + enum cpufreq_table_sorting freq_table_sorted; 96 91 97 92 struct list_head policy_list; 98 93 struct kobject kobj; ··· 119 112 */ 120 113 bool fast_switch_possible; 121 114 bool fast_switch_enabled; 115 + 116 + /* Cached frequency lookup from cpufreq_driver_resolve_freq. */ 117 + unsigned int cached_target_freq; 118 + int cached_resolved_idx; 122 119 123 120 /* Synchronization for frequency transitions */ 124 121 bool transition_ongoing; /* Tracks transition status */ ··· 196 185 static inline void disable_cpufreq(void) { } 197 186 #endif 198 187 188 + #ifdef CONFIG_CPU_FREQ_STAT 189 + void cpufreq_stats_create_table(struct cpufreq_policy *policy); 190 + void cpufreq_stats_free_table(struct cpufreq_policy *policy); 191 + void cpufreq_stats_record_transition(struct cpufreq_policy *policy, 192 + unsigned int new_freq); 193 + #else 194 + static inline void cpufreq_stats_create_table(struct cpufreq_policy *policy) { } 195 + static inline void cpufreq_stats_free_table(struct cpufreq_policy *policy) { } 196 + static inline void cpufreq_stats_record_transition(struct cpufreq_policy *policy, 197 + unsigned int new_freq) { } 198 + #endif /* CONFIG_CPU_FREQ_STAT */ 199 + 199 200 /********************************************************************* 200 201 * CPUFREQ DRIVER INTERFACE * 201 202 *********************************************************************/ ··· 274 251 unsigned int index); 275 252 unsigned int (*fast_switch)(struct cpufreq_policy *policy, 276 253 unsigned int target_freq); 254 + 255 + /* 256 + * Caches and returns the lowest driver-supported frequency greater 
than 257 + * or equal to the target frequency, subject to any driver limitations. 258 + * Does not set the frequency. Only to be implemented for drivers with 259 + * target(). 260 + */ 261 + unsigned int (*resolve_freq)(struct cpufreq_policy *policy, 262 + unsigned int target_freq); 263 + 277 264 /* 278 265 * Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION 279 266 * unset. ··· 488 455 #define MIN_LATENCY_MULTIPLIER (20) 489 456 #define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000) 490 457 491 - /* Governor Events */ 492 - #define CPUFREQ_GOV_START 1 493 - #define CPUFREQ_GOV_STOP 2 494 - #define CPUFREQ_GOV_LIMITS 3 495 - #define CPUFREQ_GOV_POLICY_INIT 4 496 - #define CPUFREQ_GOV_POLICY_EXIT 5 497 - 498 458 struct cpufreq_governor { 499 459 char name[CPUFREQ_NAME_LEN]; 500 - int initialized; 501 - int (*governor) (struct cpufreq_policy *policy, 502 - unsigned int event); 460 + int (*init)(struct cpufreq_policy *policy); 461 + void (*exit)(struct cpufreq_policy *policy); 462 + int (*start)(struct cpufreq_policy *policy); 463 + void (*stop)(struct cpufreq_policy *policy); 464 + void (*limits)(struct cpufreq_policy *policy); 503 465 ssize_t (*show_setspeed) (struct cpufreq_policy *policy, 504 466 char *buf); 505 467 int (*store_setspeed) (struct cpufreq_policy *policy, ··· 515 487 int __cpufreq_driver_target(struct cpufreq_policy *policy, 516 488 unsigned int target_freq, 517 489 unsigned int relation); 490 + unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy, 491 + unsigned int target_freq); 518 492 int cpufreq_register_governor(struct cpufreq_governor *governor); 519 493 void cpufreq_unregister_governor(struct cpufreq_governor *governor); 520 494 521 495 struct cpufreq_governor *cpufreq_default_governor(void); 522 496 struct cpufreq_governor *cpufreq_fallback_governor(void); 497 + 498 + static inline void cpufreq_policy_apply_limits(struct cpufreq_policy *policy) 499 + { 500 + if (policy->max < policy->cur) 501 + 
__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H); 502 + else if (policy->min > policy->cur) 503 + __cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L); 504 + } 523 505 524 506 /* Governor attribute set */ 525 507 struct gov_attr_set { ··· 620 582 struct cpufreq_frequency_table *table); 621 583 int cpufreq_generic_frequency_table_verify(struct cpufreq_policy *policy); 622 584 623 - int cpufreq_frequency_table_target(struct cpufreq_policy *policy, 624 - struct cpufreq_frequency_table *table, 625 - unsigned int target_freq, 626 - unsigned int relation, 627 - unsigned int *index); 585 + int cpufreq_table_index_unsorted(struct cpufreq_policy *policy, 586 + unsigned int target_freq, 587 + unsigned int relation); 628 588 int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy, 629 589 unsigned int freq); 630 590 ··· 633 597 int cpufreq_boost_enabled(void); 634 598 int cpufreq_enable_boost_support(void); 635 599 bool policy_has_boost_freq(struct cpufreq_policy *policy); 600 + 601 + /* Find lowest freq at or above target in a table in ascending order */ 602 + static inline int cpufreq_table_find_index_al(struct cpufreq_policy *policy, 603 + unsigned int target_freq) 604 + { 605 + struct cpufreq_frequency_table *table = policy->freq_table; 606 + unsigned int freq; 607 + int i, best = -1; 608 + 609 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 610 + freq = table[i].frequency; 611 + 612 + if (freq >= target_freq) 613 + return i; 614 + 615 + best = i; 616 + } 617 + 618 + return best; 619 + } 620 + 621 + /* Find lowest freq at or above target in a table in descending order */ 622 + static inline int cpufreq_table_find_index_dl(struct cpufreq_policy *policy, 623 + unsigned int target_freq) 624 + { 625 + struct cpufreq_frequency_table *table = policy->freq_table; 626 + unsigned int freq; 627 + int i, best = -1; 628 + 629 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 630 + freq = table[i].frequency; 631 + 632 + 
if (freq == target_freq) 633 + return i; 634 + 635 + if (freq > target_freq) { 636 + best = i; 637 + continue; 638 + } 639 + 640 + /* No freq found above target_freq */ 641 + if (best == -1) 642 + return i; 643 + 644 + return best; 645 + } 646 + 647 + return best; 648 + } 649 + 650 + /* Works only on sorted freq-tables */ 651 + static inline int cpufreq_table_find_index_l(struct cpufreq_policy *policy, 652 + unsigned int target_freq) 653 + { 654 + target_freq = clamp_val(target_freq, policy->min, policy->max); 655 + 656 + if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING) 657 + return cpufreq_table_find_index_al(policy, target_freq); 658 + else 659 + return cpufreq_table_find_index_dl(policy, target_freq); 660 + } 661 + 662 + /* Find highest freq at or below target in a table in ascending order */ 663 + static inline int cpufreq_table_find_index_ah(struct cpufreq_policy *policy, 664 + unsigned int target_freq) 665 + { 666 + struct cpufreq_frequency_table *table = policy->freq_table; 667 + unsigned int freq; 668 + int i, best = -1; 669 + 670 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 671 + freq = table[i].frequency; 672 + 673 + if (freq == target_freq) 674 + return i; 675 + 676 + if (freq < target_freq) { 677 + best = i; 678 + continue; 679 + } 680 + 681 + /* No freq found below target_freq */ 682 + if (best == -1) 683 + return i; 684 + 685 + return best; 686 + } 687 + 688 + return best; 689 + } 690 + 691 + /* Find highest freq at or below target in a table in descending order */ 692 + static inline int cpufreq_table_find_index_dh(struct cpufreq_policy *policy, 693 + unsigned int target_freq) 694 + { 695 + struct cpufreq_frequency_table *table = policy->freq_table; 696 + unsigned int freq; 697 + int i, best = -1; 698 + 699 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 700 + freq = table[i].frequency; 701 + 702 + if (freq <= target_freq) 703 + return i; 704 + 705 + best = i; 706 + } 707 + 708 + return best; 709 + } 710 
+ 711 + /* Works only on sorted freq-tables */ 712 + static inline int cpufreq_table_find_index_h(struct cpufreq_policy *policy, 713 + unsigned int target_freq) 714 + { 715 + target_freq = clamp_val(target_freq, policy->min, policy->max); 716 + 717 + if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING) 718 + return cpufreq_table_find_index_ah(policy, target_freq); 719 + else 720 + return cpufreq_table_find_index_dh(policy, target_freq); 721 + } 722 + 723 + /* Find closest freq to target in a table in ascending order */ 724 + static inline int cpufreq_table_find_index_ac(struct cpufreq_policy *policy, 725 + unsigned int target_freq) 726 + { 727 + struct cpufreq_frequency_table *table = policy->freq_table; 728 + unsigned int freq; 729 + int i, best = -1; 730 + 731 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 732 + freq = table[i].frequency; 733 + 734 + if (freq == target_freq) 735 + return i; 736 + 737 + if (freq < target_freq) { 738 + best = i; 739 + continue; 740 + } 741 + 742 + /* No freq found below target_freq */ 743 + if (best == -1) 744 + return i; 745 + 746 + /* Choose the closest freq */ 747 + if (target_freq - table[best].frequency > freq - target_freq) 748 + return i; 749 + 750 + return best; 751 + } 752 + 753 + return best; 754 + } 755 + 756 + /* Find closest freq to target in a table in descending order */ 757 + static inline int cpufreq_table_find_index_dc(struct cpufreq_policy *policy, 758 + unsigned int target_freq) 759 + { 760 + struct cpufreq_frequency_table *table = policy->freq_table; 761 + unsigned int freq; 762 + int i, best = -1; 763 + 764 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 765 + freq = table[i].frequency; 766 + 767 + if (freq == target_freq) 768 + return i; 769 + 770 + if (freq > target_freq) { 771 + best = i; 772 + continue; 773 + } 774 + 775 + /* No freq found above target_freq */ 776 + if (best == -1) 777 + return i; 778 + 779 + /* Choose the closest freq */ 780 + if 
(table[best].frequency - target_freq > target_freq - freq) 781 + return i; 782 + 783 + return best; 784 + } 785 + 786 + return best; 787 + } 788 + 789 + /* Works only on sorted freq-tables */ 790 + static inline int cpufreq_table_find_index_c(struct cpufreq_policy *policy, 791 + unsigned int target_freq) 792 + { 793 + target_freq = clamp_val(target_freq, policy->min, policy->max); 794 + 795 + if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING) 796 + return cpufreq_table_find_index_ac(policy, target_freq); 797 + else 798 + return cpufreq_table_find_index_dc(policy, target_freq); 799 + } 800 + 801 + static inline int cpufreq_frequency_table_target(struct cpufreq_policy *policy, 802 + unsigned int target_freq, 803 + unsigned int relation) 804 + { 805 + if (unlikely(policy->freq_table_sorted == CPUFREQ_TABLE_UNSORTED)) 806 + return cpufreq_table_index_unsorted(policy, target_freq, 807 + relation); 808 + 809 + switch (relation) { 810 + case CPUFREQ_RELATION_L: 811 + return cpufreq_table_find_index_l(policy, target_freq); 812 + case CPUFREQ_RELATION_H: 813 + return cpufreq_table_find_index_h(policy, target_freq); 814 + case CPUFREQ_RELATION_C: 815 + return cpufreq_table_find_index_c(policy, target_freq); 816 + default: 817 + pr_err("%s: Invalid relation: %d\n", __func__, relation); 818 + return -EINVAL; 819 + } 820 + } 636 821 #else 637 822 static inline int cpufreq_boost_trigger_state(int state) 638 823 { ··· 874 617 return false; 875 618 } 876 619 #endif 877 - /* the following funtion is for cpufreq core use only */ 878 - struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu); 879 620 880 621 /* the following are really really optional */ 881 622 extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs;
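Editorial note on the new cpufreq.h table-lookup helpers above: for sorted tables, CPUFREQ_RELATION_L picks the lowest frequency at or above the target and CPUFREQ_RELATION_H the highest at or below it, each falling back to the nearest available entry when the target lies outside the table. The following userspace sketch models the two ascending-table searches; all names here are illustrative stand-ins, not the kernel API.

```c
#include <assert.h>

#define TABLE_END ~0u

struct freq_entry { unsigned int frequency; };

/* CPUFREQ_RELATION_L on an ascending table: lowest frequency at or
 * above the target; if the target exceeds every entry, fall back to
 * the highest one. */
static int find_index_al(const struct freq_entry *table, unsigned int target)
{
	int i, best = -1;

	for (i = 0; table[i].frequency != TABLE_END; i++) {
		if (table[i].frequency >= target)
			return i;
		best = i;	/* highest entry seen so far */
	}
	return best;
}

/* CPUFREQ_RELATION_H on an ascending table: highest frequency at or
 * below the target; if the target is below every entry, fall back to
 * the lowest one. */
static int find_index_ah(const struct freq_entry *table, unsigned int target)
{
	int i, best = -1;

	for (i = 0; table[i].frequency != TABLE_END; i++) {
		if (table[i].frequency == target)
			return i;
		if (table[i].frequency < target) {
			best = i;
			continue;
		}
		/* First entry above the target: use it only when
		 * nothing lies below the target. */
		return best == -1 ? i : best;
	}
	return best;
}

/* Sample ascending table in kHz (illustrative values). */
static const struct freq_entry sample_table[] = {
	{ 500000 }, { 1000000 }, { 1500000 }, { TABLE_END }
};
```

Because these helpers stop at the first match in a sorted table, they replace the old unsorted full scan (now kept only as the `cpufreq_table_index_unsorted()` fallback).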
+1
include/linux/pm_clock.h
··· 42 42 extern void pm_clk_destroy(struct device *dev); 43 43 extern int pm_clk_add(struct device *dev, const char *con_id); 44 44 extern int pm_clk_add_clk(struct device *dev, struct clk *clk); 45 + extern int of_pm_clk_add_clk(struct device *dev, const char *name); 45 46 extern int of_pm_clk_add_clks(struct device *dev); 46 47 extern void pm_clk_remove(struct device *dev, const char *con_id); 47 48 extern void pm_clk_remove_clk(struct device *dev, struct clk *clk);
+5 -5
include/linux/pm_domain.h
··· 57 57 unsigned int device_count; /* Number of devices */ 58 58 unsigned int suspended_count; /* System suspend device counter */ 59 59 unsigned int prepared_count; /* Suspend counter of prepared devices */ 60 - bool suspend_power_off; /* Power status before system suspend */ 61 60 int (*power_off)(struct generic_pm_domain *domain); 62 61 int (*power_on)(struct generic_pm_domain *domain); 63 62 struct gpd_dev_ops dev_ops; ··· 127 128 struct generic_pm_domain *new_subdomain); 128 129 extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, 129 130 struct generic_pm_domain *target); 130 - extern void pm_genpd_init(struct generic_pm_domain *genpd, 131 - struct dev_power_governor *gov, bool is_off); 131 + extern int pm_genpd_init(struct generic_pm_domain *genpd, 132 + struct dev_power_governor *gov, bool is_off); 132 133 133 134 extern struct dev_power_governor simple_qos_governor; 134 135 extern struct dev_power_governor pm_domain_always_on_gov; ··· 163 164 { 164 165 return -ENOSYS; 165 166 } 166 - static inline void pm_genpd_init(struct generic_pm_domain *genpd, 167 - struct dev_power_governor *gov, bool is_off) 167 + static inline int pm_genpd_init(struct generic_pm_domain *genpd, 168 + struct dev_power_governor *gov, bool is_off) 168 169 { 170 + return -ENOSYS; 169 171 } 170 172 #endif 171 173
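Editorial note: pm_genpd_init() changes from void to int above (returning -ENOSYS in the !CONFIG_PM_GENERIC_DOMAINS stub), so platform setup code can no longer assume domain initialization succeeded. A minimal userspace model of the new calling convention; the structure and function names are hypothetical:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct genpd_model { const char *name; int is_off; };

/* Model of the void -> int change: invalid arguments now produce an
 * error code instead of being silently ignored. */
static int genpd_model_init(struct genpd_model *pd, const char *name, int is_off)
{
	if (!pd || !name)
		return -EINVAL;
	pd->name = name;
	pd->is_off = is_off;
	return 0;
}
```

Callers updated for this interface check the result and propagate it, rather than registering devices against a domain that was never set up.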
+2 -3
include/linux/suspend.h
··· 18 18 #endif 19 19 20 20 #ifdef CONFIG_VT_CONSOLE_SLEEP 21 - extern int pm_prepare_console(void); 21 + extern void pm_prepare_console(void); 22 22 extern void pm_restore_console(void); 23 23 #else 24 - static inline int pm_prepare_console(void) 24 + static inline void pm_prepare_console(void) 25 25 { 26 - return 0; 27 26 } 28 27 29 28 static inline void pm_restore_console(void)
+2
kernel/power/Makefile
··· 1 1 2 2 ccflags-$(CONFIG_PM_DEBUG) := -DDEBUG 3 3 4 + KASAN_SANITIZE_snapshot.o := n 5 + 4 6 obj-y += qos.o 5 7 obj-$(CONFIG_PM) += main.o 6 8 obj-$(CONFIG_VT_CONSOLE_SLEEP) += console.o
+4 -4
kernel/power/console.c
··· 126 126 return ret; 127 127 } 128 128 129 - int pm_prepare_console(void) 129 + void pm_prepare_console(void) 130 130 { 131 131 if (!pm_vt_switch()) 132 - return 0; 132 + return; 133 133 134 134 orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1); 135 135 if (orig_fgconsole < 0) 136 - return 1; 136 + return; 137 137 138 138 orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE); 139 - return 0; 139 + return; 140 140 } 141 141 142 142 void pm_restore_console(void)
+68 -33
kernel/power/hibernate.c
··· 52 52 #ifdef CONFIG_SUSPEND 53 53 HIBERNATION_SUSPEND, 54 54 #endif 55 + HIBERNATION_TEST_RESUME, 55 56 /* keep last */ 56 57 __HIBERNATION_AFTER_LAST 57 58 }; ··· 410 409 goto Close; 411 410 } 412 411 412 + int __weak hibernate_resume_nonboot_cpu_disable(void) 413 + { 414 + return disable_nonboot_cpus(); 415 + } 416 + 413 417 /** 414 418 * resume_target_kernel - Restore system state from a hibernation image. 415 419 * @platform_mode: Whether or not to use the platform driver. ··· 439 433 if (error) 440 434 goto Cleanup; 441 435 442 - error = disable_nonboot_cpus(); 436 + error = hibernate_resume_nonboot_cpu_disable(); 443 437 if (error) 444 438 goto Enable_cpus; 445 439 ··· 648 642 cpu_relax(); 649 643 } 650 644 645 + static int load_image_and_restore(void) 646 + { 647 + int error; 648 + unsigned int flags; 649 + 650 + pr_debug("PM: Loading hibernation image.\n"); 651 + 652 + lock_device_hotplug(); 653 + error = create_basic_memory_bitmaps(); 654 + if (error) 655 + goto Unlock; 656 + 657 + error = swsusp_read(&flags); 658 + swsusp_close(FMODE_READ); 659 + if (!error) 660 + hibernation_restore(flags & SF_PLATFORM_MODE); 661 + 662 + printk(KERN_ERR "PM: Failed to load hibernation image, recovering.\n"); 663 + swsusp_free(); 664 + free_basic_memory_bitmaps(); 665 + Unlock: 666 + unlock_device_hotplug(); 667 + 668 + return error; 669 + } 670 + 651 671 /** 652 672 * hibernate - Carry out system hibernation, including saving the image. 
653 673 */ 654 674 int hibernate(void) 655 675 { 656 - int error; 676 + int error, nr_calls = 0; 677 + bool snapshot_test = false; 657 678 658 679 if (!hibernation_available()) { 659 680 pr_debug("PM: Hibernation not available.\n"); ··· 695 662 } 696 663 697 664 pm_prepare_console(); 698 - error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE); 699 - if (error) 665 + error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); 666 + if (error) { 667 + nr_calls--; 700 668 goto Exit; 669 + } 701 670 702 671 printk(KERN_INFO "PM: Syncing filesystems ... "); 703 672 sys_sync(); ··· 732 697 pr_debug("PM: writing image.\n"); 733 698 error = swsusp_write(flags); 734 699 swsusp_free(); 735 - if (!error) 736 - power_down(); 700 + if (!error) { 701 + if (hibernation_mode == HIBERNATION_TEST_RESUME) 702 + snapshot_test = true; 703 + else 704 + power_down(); 705 + } 737 706 in_suspend = 0; 738 707 pm_restore_gfp_mask(); 739 708 } else { ··· 748 709 free_basic_memory_bitmaps(); 749 710 Thaw: 750 711 unlock_device_hotplug(); 712 + if (snapshot_test) { 713 + pr_debug("PM: Checking hibernation image\n"); 714 + error = swsusp_check(); 715 + if (!error) 716 + error = load_image_and_restore(); 717 + } 751 718 thaw_processes(); 752 719 753 720 /* Don't bother checking whether freezer_test_done is true */ 754 721 freezer_test_done = false; 755 722 Exit: 756 - pm_notifier_call_chain(PM_POST_HIBERNATION); 723 + __pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL); 757 724 pm_restore_console(); 758 725 atomic_inc(&snapshot_device_available); 759 726 Unlock: ··· 785 740 */ 786 741 static int software_resume(void) 787 742 { 788 - int error; 789 - unsigned int flags; 743 + int error, nr_calls = 0; 790 744 791 745 /* 792 746 * If the user said "noresume".. bail out early. 
··· 871 827 } 872 828 873 829 pm_prepare_console(); 874 - error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 875 - if (error) 830 + error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls); 831 + if (error) { 832 + nr_calls--; 876 833 goto Close_Finish; 834 + } 877 835 878 836 pr_debug("PM: Preparing processes for restore.\n"); 879 837 error = freeze_processes(); 880 838 if (error) 881 839 goto Close_Finish; 882 - 883 - pr_debug("PM: Loading hibernation image.\n"); 884 - 885 - lock_device_hotplug(); 886 - error = create_basic_memory_bitmaps(); 887 - if (error) 888 - goto Thaw; 889 - 890 - error = swsusp_read(&flags); 891 - swsusp_close(FMODE_READ); 892 - if (!error) 893 - hibernation_restore(flags & SF_PLATFORM_MODE); 894 - 895 - printk(KERN_ERR "PM: Failed to load hibernation image, recovering.\n"); 896 - swsusp_free(); 897 - free_basic_memory_bitmaps(); 898 - Thaw: 899 - unlock_device_hotplug(); 840 + error = load_image_and_restore(); 900 841 thaw_processes(); 901 842 Finish: 902 - pm_notifier_call_chain(PM_POST_RESTORE); 843 + __pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL); 903 844 pm_restore_console(); 904 845 atomic_inc(&snapshot_device_available); 905 846 /* For success case, the suspend path will release the lock */ ··· 907 878 #ifdef CONFIG_SUSPEND 908 879 [HIBERNATION_SUSPEND] = "suspend", 909 880 #endif 881 + [HIBERNATION_TEST_RESUME] = "test_resume", 910 882 }; 911 883 912 884 /* ··· 954 924 #ifdef CONFIG_SUSPEND 955 925 case HIBERNATION_SUSPEND: 956 926 #endif 927 + case HIBERNATION_TEST_RESUME: 957 928 break; 958 929 case HIBERNATION_PLATFORM: 959 930 if (hibernation_ops) ··· 1001 970 #ifdef CONFIG_SUSPEND 1002 971 case HIBERNATION_SUSPEND: 1003 972 #endif 973 + case HIBERNATION_TEST_RESUME: 1004 974 hibernation_mode = mode; 1005 975 break; 1006 976 case HIBERNATION_PLATFORM: ··· 1147 1115 1148 1116 static int __init hibernate_setup(char *str) 1149 1117 { 1150 - if (!strncmp(str, "noresume", 8)) 1118 + if (!strncmp(str, 
"noresume", 8)) { 1151 1119 noresume = 1; 1152 - else if (!strncmp(str, "nocompress", 10)) 1120 + } else if (!strncmp(str, "nocompress", 10)) { 1153 1121 nocompress = 1; 1154 - else if (!strncmp(str, "no", 2)) { 1122 + } else if (!strncmp(str, "no", 2)) { 1155 1123 noresume = 1; 1156 1124 nohibernate = 1; 1125 + } else if (IS_ENABLED(CONFIG_DEBUG_RODATA) 1126 + && !strncmp(str, "protect_image", 13)) { 1127 + enable_restore_image_protection(); 1157 1128 } 1158 1129 return 1; 1159 1130 }
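Editorial note on the new `test_resume` hibernation mode above: after the image is written, HIBERNATION_TEST_RESUME skips power_down() and instead re-reads the image via swsusp_check() and the new load_image_and_restore() helper, exercising the resume path without an actual power cycle. A small userspace model of that decision point; enum and function names are illustrative, not the kernel's:

```c
#include <assert.h>

enum hib_mode   { HIB_PLATFORM, HIB_SHUTDOWN, HIB_TEST_RESUME };
enum hib_action { ACT_POWER_DOWN, ACT_CHECK_IMAGE, ACT_RECOVER };

/* What hibernate() does once swsusp_write() has returned. */
static enum hib_action after_image_write(enum hib_mode mode, int write_error)
{
	if (write_error)
		return ACT_RECOVER;	/* swsusp_free() and bail out */
	if (mode == HIB_TEST_RESUME)
		return ACT_CHECK_IMAGE;	/* verify the image instead of powering off */
	return ACT_POWER_DOWN;		/* normal hibernation */
}
```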
+9 -2
kernel/power/main.c
··· 38 38 } 39 39 EXPORT_SYMBOL_GPL(unregister_pm_notifier); 40 40 41 - int pm_notifier_call_chain(unsigned long val) 41 + int __pm_notifier_call_chain(unsigned long val, int nr_to_call, int *nr_calls) 42 42 { 43 - int ret = blocking_notifier_call_chain(&pm_chain_head, val, NULL); 43 + int ret; 44 + 45 + ret = __blocking_notifier_call_chain(&pm_chain_head, val, NULL, 46 + nr_to_call, nr_calls); 44 47 45 48 return notifier_to_errno(ret); 49 + } 50 + int pm_notifier_call_chain(unsigned long val) 51 + { 52 + return __pm_notifier_call_chain(val, -1, NULL); 46 53 } 47 54 48 55 /* If set, devices may be suspended and resumed asynchronously. */
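Editorial note on __pm_notifier_call_chain() above: the nr_to_call/nr_calls pair lets hibernate() record how many PM_HIBERNATION_PREPARE callbacks actually ran, then deliver PM_POST_HIBERNATION only to those, minus the one that returned the error (hence the `nr_calls--` in the caller). A userspace model of that partial-notification pattern, with hypothetical names:

```c
#include <assert.h>

#define MAX_CB 8
enum { PM_PREPARE, PM_POST };

typedef int (*pm_cb)(int event);

static pm_cb chain[MAX_CB];
static int chain_len;
static int post_seen[2];

static int cb0(int event)
{
	if (event == PM_POST)
		post_seen[0]++;
	return 0;
}

static int cb1(int event)
{
	if (event == PM_POST)
		post_seen[1]++;
	return event == PM_PREPARE ? -1 : 0;	/* cb1 vetoes PREPARE */
}

/* Run up to nr_to_call callbacks (-1 means all), counting in *nr_calls
 * how many were actually invoked, and stop at the first error. */
static int call_chain(int event, int nr_to_call, int *nr_calls)
{
	int i, ret = 0;

	for (i = 0; i < chain_len && nr_to_call != 0; i++, nr_to_call--) {
		if (nr_calls)
			(*nr_calls)++;
		ret = chain[i](event);
		if (ret)
			break;
	}
	return ret;
}

static void setup_chain(void)
{
	chain[0] = cb0;
	chain[1] = cb1;
	chain_len = 2;
}
```

On failure, the caller decrements the recorded count before the "post" pass, so the callback that rejected the transition never sees a cleanup event for work it did not do.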
+11
kernel/power/power.h
··· 38 38 } 39 39 #endif /* CONFIG_ARCH_HIBERNATION_HEADER */ 40 40 41 + extern int hibernate_resume_nonboot_cpu_disable(void); 42 + 41 43 /* 42 44 * Keep some memory free so that I/O operations can succeed without paging 43 45 * [Might this be more than 4 MB?] ··· 60 58 extern int hibernation_snapshot(int platform_mode); 61 59 extern int hibernation_restore(int platform_mode); 62 60 extern int hibernation_platform_enter(void); 61 + 62 + #ifdef CONFIG_DEBUG_RODATA 63 + /* kernel/power/snapshot.c */ 64 + extern void enable_restore_image_protection(void); 65 + #else 66 + static inline void enable_restore_image_protection(void) {} 67 + #endif /* CONFIG_DEBUG_RODATA */ 63 68 64 69 #else /* !CONFIG_HIBERNATION */ 65 70 ··· 209 200 210 201 #ifdef CONFIG_PM_SLEEP 211 202 /* kernel/power/main.c */ 203 + extern int __pm_notifier_call_chain(unsigned long val, int nr_to_call, 204 + int *nr_calls); 212 205 extern int pm_notifier_call_chain(unsigned long val); 213 206 #endif 214 207
+3
kernel/power/process.c
··· 89 89 elapsed_msecs / 1000, elapsed_msecs % 1000, 90 90 todo - wq_busy, wq_busy); 91 91 92 + if (wq_busy) 93 + show_workqueue_state(); 94 + 92 95 if (!wakeup) { 93 96 read_lock(&tasklist_lock); 94 97 for_each_process_thread(g, p) {
+521 -431
kernel/power/snapshot.c
··· 38 38 39 39 #include "power.h" 40 40 41 + #ifdef CONFIG_DEBUG_RODATA 42 + static bool hibernate_restore_protection; 43 + static bool hibernate_restore_protection_active; 44 + 45 + void enable_restore_image_protection(void) 46 + { 47 + hibernate_restore_protection = true; 48 + } 49 + 50 + static inline void hibernate_restore_protection_begin(void) 51 + { 52 + hibernate_restore_protection_active = hibernate_restore_protection; 53 + } 54 + 55 + static inline void hibernate_restore_protection_end(void) 56 + { 57 + hibernate_restore_protection_active = false; 58 + } 59 + 60 + static inline void hibernate_restore_protect_page(void *page_address) 61 + { 62 + if (hibernate_restore_protection_active) 63 + set_memory_ro((unsigned long)page_address, 1); 64 + } 65 + 66 + static inline void hibernate_restore_unprotect_page(void *page_address) 67 + { 68 + if (hibernate_restore_protection_active) 69 + set_memory_rw((unsigned long)page_address, 1); 70 + } 71 + #else 72 + static inline void hibernate_restore_protection_begin(void) {} 73 + static inline void hibernate_restore_protection_end(void) {} 74 + static inline void hibernate_restore_protect_page(void *page_address) {} 75 + static inline void hibernate_restore_unprotect_page(void *page_address) {} 76 + #endif /* CONFIG_DEBUG_RODATA */ 77 + 41 78 static int swsusp_page_is_free(struct page *); 42 79 static void swsusp_set_page_forbidden(struct page *); 43 80 static void swsusp_unset_page_forbidden(struct page *); ··· 104 67 image_size = ((totalram_pages * 2) / 5) * PAGE_SIZE; 105 68 } 106 69 107 - /* List of PBEs needed for restoring the pages that were allocated before 70 + /* 71 + * List of PBEs needed for restoring the pages that were allocated before 108 72 * the suspend and included in the suspend image, but have also been 109 73 * allocated by the "resume" kernel, so their contents cannot be written 110 74 * directly to their "original" page frames. 
111 75 */ 112 76 struct pbe *restore_pblist; 113 77 78 + /* struct linked_page is used to build chains of pages */ 79 + 80 + #define LINKED_PAGE_DATA_SIZE (PAGE_SIZE - sizeof(void *)) 81 + 82 + struct linked_page { 83 + struct linked_page *next; 84 + char data[LINKED_PAGE_DATA_SIZE]; 85 + } __packed; 86 + 87 + /* 88 + * List of "safe" pages (ie. pages that were not used by the image kernel 89 + * before hibernation) that may be used as temporary storage for image kernel 90 + * memory contents. 91 + */ 92 + static struct linked_page *safe_pages_list; 93 + 114 94 /* Pointer to an auxiliary buffer (1 page) */ 115 95 static void *buffer; 116 - 117 - /** 118 - * @safe_needed - on resume, for storing the PBE list and the image, 119 - * we can only use memory pages that do not conflict with the pages 120 - * used before suspend. The unsafe pages have PageNosaveFree set 121 - * and we count them using unsafe_pages. 122 - * 123 - * Each allocated image page is marked as PageNosave and PageNosaveFree 124 - * so that swsusp_free() can release it. 125 - */ 126 96 127 97 #define PG_ANY 0 128 98 #define PG_SAFE 1 ··· 138 94 139 95 static unsigned int allocated_unsafe_pages; 140 96 97 + /** 98 + * get_image_page - Allocate a page for a hibernation image. 99 + * @gfp_mask: GFP mask for the allocation. 100 + * @safe_needed: Get pages that were not used before hibernation (restore only) 101 + * 102 + * During image restoration, for storing the PBE list and the image data, we can 103 + * only use memory pages that do not conflict with the pages used before 104 + * hibernation. The "unsafe" pages have PageNosaveFree set and we count them 105 + * using allocated_unsafe_pages. 106 + * 107 + * Each allocated image page is marked as PageNosave and PageNosaveFree so that 108 + * swsusp_free() can release it. 
109 + */ 141 110 static void *get_image_page(gfp_t gfp_mask, int safe_needed) 142 111 { 143 112 void *res; ··· 170 113 return res; 171 114 } 172 115 116 + static void *__get_safe_page(gfp_t gfp_mask) 117 + { 118 + if (safe_pages_list) { 119 + void *ret = safe_pages_list; 120 + 121 + safe_pages_list = safe_pages_list->next; 122 + memset(ret, 0, PAGE_SIZE); 123 + return ret; 124 + } 125 + return get_image_page(gfp_mask, PG_SAFE); 126 + } 127 + 173 128 unsigned long get_safe_page(gfp_t gfp_mask) 174 129 { 175 - return (unsigned long)get_image_page(gfp_mask, PG_SAFE); 130 + return (unsigned long)__get_safe_page(gfp_mask); 176 131 } 177 132 178 133 static struct page *alloc_image_page(gfp_t gfp_mask) ··· 199 130 return page; 200 131 } 201 132 202 - /** 203 - * free_image_page - free page represented by @addr, allocated with 204 - * get_image_page (page flags set by it must be cleared) 205 - */ 133 + static void recycle_safe_page(void *page_address) 134 + { 135 + struct linked_page *lp = page_address; 206 136 137 + lp->next = safe_pages_list; 138 + safe_pages_list = lp; 139 + } 140 + 141 + /** 142 + * free_image_page - Free a page allocated for hibernation image. 143 + * @addr: Address of the page to free. 144 + * @clear_nosave_free: If set, clear the PageNosaveFree bit for the page. 145 + * 146 + * The page to free should have been allocated by get_image_page() (page flags 147 + * set by it are affected). 
148 + */ 207 149 static inline void free_image_page(void *addr, int clear_nosave_free) 208 150 { 209 151 struct page *page; ··· 230 150 __free_page(page); 231 151 } 232 152 233 - /* struct linked_page is used to build chains of pages */ 234 - 235 - #define LINKED_PAGE_DATA_SIZE (PAGE_SIZE - sizeof(void *)) 236 - 237 - struct linked_page { 238 - struct linked_page *next; 239 - char data[LINKED_PAGE_DATA_SIZE]; 240 - } __packed; 241 - 242 - static inline void 243 - free_list_of_pages(struct linked_page *list, int clear_page_nosave) 153 + static inline void free_list_of_pages(struct linked_page *list, 154 + int clear_page_nosave) 244 155 { 245 156 while (list) { 246 157 struct linked_page *lp = list->next; ··· 241 170 } 242 171 } 243 172 244 - /** 245 - * struct chain_allocator is used for allocating small objects out of 246 - * a linked list of pages called 'the chain'. 247 - * 248 - * The chain grows each time when there is no room for a new object in 249 - * the current page. The allocated objects cannot be freed individually. 250 - * It is only possible to free them all at once, by freeing the entire 251 - * chain. 252 - * 253 - * NOTE: The chain allocator may be inefficient if the allocated objects 254 - * are not much smaller than PAGE_SIZE. 255 - */ 256 - 173 + /* 174 + * struct chain_allocator is used for allocating small objects out of 175 + * a linked list of pages called 'the chain'. 176 + * 177 + * The chain grows each time when there is no room for a new object in 178 + * the current page. The allocated objects cannot be freed individually. 179 + * It is only possible to free them all at once, by freeing the entire 180 + * chain. 181 + * 182 + * NOTE: The chain allocator may be inefficient if the allocated objects 183 + * are not much smaller than PAGE_SIZE. 
184 + */ 257 185 struct chain_allocator { 258 186 struct linked_page *chain; /* the chain */ 259 187 unsigned int used_space; /* total size of objects allocated out 260 - * of the current page 261 - */ 188 + of the current page */ 262 189 gfp_t gfp_mask; /* mask for allocating pages */ 263 190 int safe_needed; /* if set, only "safe" pages are allocated */ 264 191 }; 265 192 266 - static void 267 - chain_init(struct chain_allocator *ca, gfp_t gfp_mask, int safe_needed) 193 + static void chain_init(struct chain_allocator *ca, gfp_t gfp_mask, 194 + int safe_needed) 268 195 { 269 196 ca->chain = NULL; 270 197 ca->used_space = LINKED_PAGE_DATA_SIZE; ··· 277 208 if (LINKED_PAGE_DATA_SIZE - ca->used_space < size) { 278 209 struct linked_page *lp; 279 210 280 - lp = get_image_page(ca->gfp_mask, ca->safe_needed); 211 + lp = ca->safe_needed ? __get_safe_page(ca->gfp_mask) : 212 + get_image_page(ca->gfp_mask, PG_ANY); 281 213 if (!lp) 282 214 return NULL; 283 215 ··· 292 222 } 293 223 294 224 /** 295 - * Data types related to memory bitmaps. 225 + * Data types related to memory bitmaps. 296 226 * 297 - * Memory bitmap is a structure consiting of many linked lists of 298 - * objects. The main list's elements are of type struct zone_bitmap 299 - * and each of them corresonds to one zone. For each zone bitmap 300 - * object there is a list of objects of type struct bm_block that 301 - * represent each blocks of bitmap in which information is stored. 227 + * Memory bitmap is a structure consiting of many linked lists of 228 + * objects. The main list's elements are of type struct zone_bitmap 229 + * and each of them corresonds to one zone. For each zone bitmap 230 + * object there is a list of objects of type struct bm_block that 231 + * represent each blocks of bitmap in which information is stored. 
302 232 * 303 - * struct memory_bitmap contains a pointer to the main list of zone 304 - * bitmap objects, a struct bm_position used for browsing the bitmap, 305 - * and a pointer to the list of pages used for allocating all of the 306 - * zone bitmap objects and bitmap block objects. 233 + * struct memory_bitmap contains a pointer to the main list of zone 234 + * bitmap objects, a struct bm_position used for browsing the bitmap, 235 + * and a pointer to the list of pages used for allocating all of the 236 + * zone bitmap objects and bitmap block objects. 307 237 * 308 - * NOTE: It has to be possible to lay out the bitmap in memory 309 - * using only allocations of order 0. Additionally, the bitmap is 310 - * designed to work with arbitrary number of zones (this is over the 311 - * top for now, but let's avoid making unnecessary assumptions ;-). 238 + * NOTE: It has to be possible to lay out the bitmap in memory 239 + * using only allocations of order 0. Additionally, the bitmap is 240 + * designed to work with arbitrary number of zones (this is over the 241 + * top for now, but let's avoid making unnecessary assumptions ;-). 312 242 * 313 - * struct zone_bitmap contains a pointer to a list of bitmap block 314 - * objects and a pointer to the bitmap block object that has been 315 - * most recently used for setting bits. Additionally, it contains the 316 - * pfns that correspond to the start and end of the represented zone. 243 + * struct zone_bitmap contains a pointer to a list of bitmap block 244 + * objects and a pointer to the bitmap block object that has been 245 + * most recently used for setting bits. Additionally, it contains the 246 + * PFNs that correspond to the start and end of the represented zone. 317 247 * 318 - * struct bm_block contains a pointer to the memory page in which 319 - * information is stored (in the form of a block of bitmap) 320 - * It also contains the pfns that correspond to the start and end of 321 - * the represented memory area. 
248 + * struct bm_block contains a pointer to the memory page in which 249 + * information is stored (in the form of a block of bitmap) 250 + * It also contains the pfns that correspond to the start and end of 251 + * the represented memory area. 322 252 * 323 - * The memory bitmap is organized as a radix tree to guarantee fast random 324 - * access to the bits. There is one radix tree for each zone (as returned 325 - * from create_mem_extents). 253 + * The memory bitmap is organized as a radix tree to guarantee fast random 254 + * access to the bits. There is one radix tree for each zone (as returned 255 + * from create_mem_extents). 326 256 * 327 - * One radix tree is represented by one struct mem_zone_bm_rtree. There are 328 - * two linked lists for the nodes of the tree, one for the inner nodes and 329 - * one for the leave nodes. The linked leave nodes are used for fast linear 330 - * access of the memory bitmap. 257 + * One radix tree is represented by one struct mem_zone_bm_rtree. There are 258 + * two linked lists for the nodes of the tree, one for the inner nodes and 259 + * one for the leave nodes. The linked leave nodes are used for fast linear 260 + * access of the memory bitmap. 331 261 * 332 - * The struct rtree_node represents one node of the radix tree. 262 + * The struct rtree_node represents one node of the radix tree. 333 263 */ 334 264 335 265 #define BM_END_OF_MAP (~0UL) ··· 375 305 struct memory_bitmap { 376 306 struct list_head zones; 377 307 struct linked_page *p_list; /* list of pages used to store zone 378 - * bitmap objects and bitmap block 379 - * objects 380 - */ 308 + bitmap objects and bitmap block 309 + objects */ 381 310 struct bm_position cur; /* most recently used bit position */ 382 311 }; 383 312 ··· 390 321 #endif 391 322 #define BM_RTREE_LEVEL_MASK ((1UL << BM_RTREE_LEVEL_SHIFT) - 1) 392 323 393 - /* 394 - * alloc_rtree_node - Allocate a new node and add it to the radix tree. 
+ /**
+ * alloc_rtree_node - Allocate a new node and add it to the radix tree.
 *
- * This function is used to allocate inner nodes as well as the
- * leave nodes of the radix tree.  It also adds the node to the
- * corresponding linked list passed in by the *list parameter.
+ * This function is used to allocate inner nodes as well as the
+ * leaf nodes of the radix tree.  It also adds the node to the
+ * corresponding linked list passed in by the *list parameter.
 */
static struct rtree_node *alloc_rtree_node(gfp_t gfp_mask, int safe_needed,
                                           struct chain_allocator *ca,
···
        return node;
}

- /*
- * add_rtree_block - Add a new leave node to the radix tree
+ /**
+ * add_rtree_block - Add a new leaf node to the radix tree.
 *
- * The leave nodes need to be allocated in order to keep the leaves
- * linked list in order. This is guaranteed by the zone->blocks
- * counter.
+ * The leaf nodes need to be allocated in order to keep the leaves
+ * linked list in order.  This is guaranteed by the zone->blocks
+ * counter.
 */
static int add_rtree_block(struct mem_zone_bm_rtree *zone, gfp_t gfp_mask,
                           int safe_needed, struct chain_allocator *ca)
···
static void free_zone_bm_rtree(struct mem_zone_bm_rtree *zone,
                               int clear_nosave_free);

- /*
- * create_zone_bm_rtree - create a radix tree for one zone
+ /**
+ * create_zone_bm_rtree - Create a radix tree for one zone.
 *
- * Allocated the mem_zone_bm_rtree structure and initializes it.
- * This function also allocated and builds the radix tree for the
- * zone.
+ * Allocates the mem_zone_bm_rtree structure and initializes it.
+ * This function also allocates and builds the radix tree for the
+ * zone.
 */
- static struct mem_zone_bm_rtree *
- create_zone_bm_rtree(gfp_t gfp_mask, int safe_needed,
-                      struct chain_allocator *ca,
-                      unsigned long start, unsigned long end)
+ static struct mem_zone_bm_rtree *create_zone_bm_rtree(gfp_t gfp_mask,
+                                                       int safe_needed,
+                                                       struct chain_allocator *ca,
+                                                       unsigned long start,
+                                                       unsigned long end)
{
        struct mem_zone_bm_rtree *zone;
        unsigned int i, nr_blocks;
···
        return zone;
}

- /*
- * free_zone_bm_rtree - Free the memory of the radix tree
+ /**
+ * free_zone_bm_rtree - Free the memory of the radix tree.
 *
- * Free all node pages of the radix tree.  The mem_zone_bm_rtree
- * structure itself is not freed here nor are the rtree_node
- * structs.
+ * Free all node pages of the radix tree.  The mem_zone_bm_rtree
+ * structure itself is not freed here nor are the rtree_node
+ * structs.
 */
static void free_zone_bm_rtree(struct mem_zone_bm_rtree *zone,
                               int clear_nosave_free)
···
};

/**
- * free_mem_extents - free a list of memory extents
- * @list - list of extents to empty
+ * free_mem_extents - Free a list of memory extents.
+ * @list: List of extents to free.
 */
static void free_mem_extents(struct list_head *list)
{
···
}

/**
- * create_mem_extents - create a list of memory extents representing
- * contiguous ranges of PFNs
- * @list - list to put the extents into
- * @gfp_mask - mask to use for memory allocations
+ * create_mem_extents - Create a list of memory extents.
+ * @list: List to put the extents into.
+ * @gfp_mask: Mask to use for memory allocations.
+ *
+ * The extents represent contiguous ranges of PFNs.
 */
static int create_mem_extents(struct list_head *list, gfp_t gfp_mask)
{
···
}

/**
- * memory_bm_create - allocate memory for a memory bitmap
- */
- static int
- memory_bm_create(struct memory_bitmap *bm, gfp_t gfp_mask, int safe_needed)
+ * memory_bm_create - Allocate memory for a memory bitmap.
+ */
+ static int memory_bm_create(struct memory_bitmap *bm, gfp_t gfp_mask,
+                             int safe_needed)
{
        struct chain_allocator ca;
        struct list_head mem_extents;
···
}

/**
- * memory_bm_free - free memory occupied by the memory bitmap @bm
- */
+ * memory_bm_free - Free memory occupied by the memory bitmap.
+ * @bm: Memory bitmap.
+ */
static void memory_bm_free(struct memory_bitmap *bm, int clear_nosave_free)
{
        struct mem_zone_bm_rtree *zone;
···
}

/**
- * memory_bm_find_bit - Find the bit for pfn in the memory
- * bitmap
+ * memory_bm_find_bit - Find the bit for a given PFN in a memory bitmap.
 *
- * Find the bit in the bitmap @bm that corresponds to given pfn.
- * The cur.zone, cur.block and cur.node_pfn member of @bm are
- * updated.
- * It walks the radix tree to find the page which contains the bit for
- * pfn and returns the bit position in **addr and *bit_nr.
+ * Find the bit in memory bitmap @bm that corresponds to the given PFN.
+ * The cur.zone, cur.block and cur.node_pfn members of @bm are updated.
+ *
+ * Walk the radix tree to find the page containing the bit that represents @pfn
+ * and return the position of the bit in @addr and @bit_nr.
 */
static int memory_bm_find_bit(struct memory_bitmap *bm, unsigned long pfn,
                              void **addr, unsigned int *bit_nr)
···

zone_found:
        /*
-        * We have a zone. Now walk the radix tree to find the leave
-        * node for our pfn.
+        * We have found the zone.  Now walk the radix tree to find the leaf
+        * node for our PFN.
         */
-
        node = bm->cur.node;
        if (((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
                goto node_found;
···
}

/*
- * rtree_next_node - Jumps to the next leave node
+ * rtree_next_node - Jump to the next leaf node.
 *
- * Sets the position to the beginning of the next node in the
- * memory bitmap. This is either the next node in the current
- * zone's radix tree or the first node in the radix tree of the
- * next zone.
+ * Set the position to the beginning of the next node in the
+ * memory bitmap.  This is either the next node in the current
+ * zone's radix tree or the first node in the radix tree of the
+ * next zone.
 *
- * Returns true if there is a next node, false otherwise.
+ * Return true if there is a next node, false otherwise.
 */
static bool rtree_next_node(struct memory_bitmap *bm)
{
···
}

/**
- * memory_bm_rtree_next_pfn - Find the next set bit in the bitmap @bm
+ * memory_bm_rtree_next_pfn - Find the next set bit in a memory bitmap.
+ * @bm: Memory bitmap.
 *
- * Starting from the last returned position this function searches
- * for the next set bit in the memory bitmap and returns its
- * number. If no more bit is set BM_END_OF_MAP is returned.
+ * Starting from the last returned position this function searches for the next
+ * set bit in @bm and returns the PFN represented by it.  If no more bits are
+ * set, BM_END_OF_MAP is returned.
 *
- * It is required to run memory_bm_position_reset() before the
- * first call to this function.
+ * It is required to run memory_bm_position_reset() before the first call to
+ * this function for the given memory bitmap.
 */
static unsigned long memory_bm_next_pfn(struct memory_bitmap *bm)
{
···
        return BM_END_OF_MAP;
}

- /**
- * This structure represents a range of page frames the contents of which
- * should not be saved during the suspend.
+ /*
+ * This structure represents a range of page frames the contents of which
+ * should not be saved during hibernation.
 */
-
struct nosave_region {
        struct list_head list;
        unsigned long start_pfn;
···
static LIST_HEAD(nosave_regions);

- /**
- * register_nosave_region - register a range of page frames the contents
- * of which should not be saved during the suspend (to be used in the early
- * initialization code)
- */
+ static void recycle_zone_bm_rtree(struct mem_zone_bm_rtree *zone)
+ {
+         struct rtree_node *node;

- void __init
- __register_nosave_region(unsigned long start_pfn, unsigned long end_pfn,
-                          int use_kmalloc)
+         list_for_each_entry(node, &zone->nodes, list)
+                 recycle_safe_page(node->data);
+
+         list_for_each_entry(node, &zone->leaves, list)
+                 recycle_safe_page(node->data);
+ }
+
+ static void memory_bm_recycle(struct memory_bitmap *bm)
+ {
+         struct mem_zone_bm_rtree *zone;
+         struct linked_page *p_list;
+
+         list_for_each_entry(zone, &bm->zones, list)
+                 recycle_zone_bm_rtree(zone);
+
+         p_list = bm->p_list;
+         while (p_list) {
+                 struct linked_page *lp = p_list;
+
+                 p_list = lp->next;
+                 recycle_safe_page(lp);
+         }
+ }
+
+ /**
+ * register_nosave_region - Register a region of unsaveable memory.
+ *
+ * Register a range of page frames the contents of which should not be saved
+ * during hibernation (to be used in the early initialization code).
+ */
+ void __init __register_nosave_region(unsigned long start_pfn,
+                                      unsigned long end_pfn, int use_kmalloc)
{
        struct nosave_region *region;
···
                }
        }
        if (use_kmalloc) {
-               /* during init, this shouldn't fail */
+               /* During init, this shouldn't fail */
                region = kmalloc(sizeof(struct nosave_region), GFP_KERNEL);
                BUG_ON(!region);
-       } else
+       } else {
                /* This allocation cannot fail */
                region = memblock_virt_alloc(sizeof(struct nosave_region), 0);
+       }
        region->start_pfn = start_pfn;
        region->end_pfn = end_pfn;
        list_add_tail(&region->list, &nosave_regions);
···
}

/**
- * mark_nosave_pages - set bits corresponding to the page frames the
- * contents of which should not be saved in a given bitmap.
+ * mark_nosave_pages - Mark pages that should not be saved.
+ * @bm: Memory bitmap.
+ *
+ * Set the bits in @bm that correspond to the page frames the contents of which
+ * should not be saved.
 */
-
static void mark_nosave_pages(struct memory_bitmap *bm)
{
        struct nosave_region *region;
···
}

/**
- * create_basic_memory_bitmaps - create bitmaps needed for marking page
- * frames that should not be saved and free page frames.  The pointers
- * forbidden_pages_map and free_pages_map are only modified if everything
- * goes well, because we don't want the bits to be used before both bitmaps
- * are set up.
+ * create_basic_memory_bitmaps - Create bitmaps to hold basic page information.
+ *
+ * Create bitmaps needed for marking page frames that should not be saved and
+ * free page frames.  The forbidden_pages_map and free_pages_map pointers are
+ * only modified if everything goes well, because we don't want the bits to be
+ * touched before both bitmaps are set up.
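For illustration of what mark_nosave_pages() decides per page frame: a frame is excluded exactly when it falls inside one of the registered nosave regions. A minimal userspace sketch of that membership test (`struct pfn_range` and `pfn_in_nosave_range` are hypothetical names, not kernel API; the real code walks a list_head of struct nosave_region):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct nosave_region: [start_pfn, end_pfn). */
struct pfn_range {
        unsigned long start_pfn;
        unsigned long end_pfn;
};

/*
 * Return nonzero if @pfn lies inside any registered range -- the per-frame
 * test applied before setting the corresponding bit in the bitmap.
 */
static int pfn_in_nosave_range(const struct pfn_range *regions, size_t n,
                               unsigned long pfn)
{
        size_t i;

        for (i = 0; i < n; i++)
                if (pfn >= regions[i].start_pfn && pfn < regions[i].end_pfn)
                        return 1;
        return 0;
}
```

Note the half-open interval: end_pfn itself is the first frame *not* in the region.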
 */
-
int create_basic_memory_bitmaps(void)
{
        struct memory_bitmap *bm1, *bm2;
···
}

/**
- * free_basic_memory_bitmaps - free memory bitmaps allocated by
- * create_basic_memory_bitmaps().  The auxiliary pointers are necessary
- * so that the bitmaps themselves are not referred to while they are being
- * freed.
+ * free_basic_memory_bitmaps - Free memory bitmaps holding basic information.
+ *
+ * Free memory bitmaps allocated by create_basic_memory_bitmaps().  The
+ * auxiliary pointers are necessary so that the bitmaps themselves are not
+ * referred to while they are being freed.
 */
-
void free_basic_memory_bitmaps(void)
{
        struct memory_bitmap *bm1, *bm2;
···
}

/**
- * snapshot_additional_pages - estimate the number of additional pages
- * be needed for setting up the suspend image data structures for given
- * zone (usually the returned value is greater than the exact number)
+ * snapshot_additional_pages - Estimate the number of extra pages needed.
+ * @zone: Memory zone to carry out the computation for.
+ *
+ * Estimate the number of additional pages needed for setting up the
+ * hibernation image data structures for @zone (usually, the returned value is
+ * greater than the exact number).
 */
-
unsigned int snapshot_additional_pages(struct zone *zone)
{
        unsigned int rtree, nodes;
···

#ifdef CONFIG_HIGHMEM
/**
- * count_free_highmem_pages - compute the total number of free highmem
- * pages, system-wide.
+ * count_free_highmem_pages - Compute the total number of free highmem pages.
+ *
+ * The returned number is system-wide.
 */
-
static unsigned int count_free_highmem_pages(void)
{
        struct zone *zone;
···
}

/**
- * saveable_highmem_page - Determine whether a highmem page should be
- * included in the suspend image.
+ * saveable_highmem_page - Check if a highmem page is saveable.
 *
- * We should save the page if it isn't Nosave or NosaveFree, or Reserved,
- * and it isn't a part of a free chunk of pages.
+ * Determine whether a highmem page should be included in a hibernation image.
+ *
+ * We should save the page if it isn't Nosave or NosaveFree, or Reserved,
+ * and it isn't part of a free chunk of pages.
 */
static struct page *saveable_highmem_page(struct zone *zone, unsigned long pfn)
{
···
}

/**
- * count_highmem_pages - compute the total number of saveable highmem
- * pages.
+ * count_highmem_pages - Compute the total number of saveable highmem pages.
 */
-
static unsigned int count_highmem_pages(void)
{
        struct zone *zone;
···
#endif /* CONFIG_HIGHMEM */

/**
- * saveable_page - Determine whether a non-highmem page should be included
- * in the suspend image.
+ * saveable_page - Check if the given page is saveable.
 *
- * We should save the page if it isn't Nosave, and is not in the range
- * of pages statically defined as 'unsaveable', and it isn't a part of
- * a free chunk of pages.
+ * Determine whether a non-highmem page should be included in a hibernation
+ * image.
+ *
+ * We should save the page if it isn't Nosave, and is not in the range
+ * of pages statically defined as 'unsaveable', and it isn't part of
+ * a free chunk of pages.
 */
static struct page *saveable_page(struct zone *zone, unsigned long pfn)
{
···
}

/**
- * count_data_pages - compute the total number of saveable non-highmem
- * pages.
+ * count_data_pages - Compute the total number of saveable non-highmem pages.
 */
-
static unsigned int count_data_pages(void)
{
        struct zone *zone;
···
        return n;
}

- /* This is needed, because copy_page and memcpy are not usable for copying
+ /*
+ * This is needed, because copy_page and memcpy are not usable for copying
 * task structs.
 */
static inline void do_copy_page(long *dst, long *src)
···
                *dst++ = *src++;
}

-
/**
- * safe_copy_page - check if the page we are going to copy is marked as
- * present in the kernel page tables (this always is the case if
- * CONFIG_DEBUG_PAGEALLOC is not set and in that case
- * kernel_page_present() always returns 'true').
+ * safe_copy_page - Copy a page in a safe way.
+ *
+ * Check if the page we are going to copy is marked as present in the kernel
+ * page tables (this always is the case if CONFIG_DEBUG_PAGEALLOC is not set
+ * and in that case kernel_page_present() always returns 'true').
 */
static void safe_copy_page(void *dst, struct page *s_page)
{
···
        }
}

-
#ifdef CONFIG_HIGHMEM
- static inline struct page *
- page_is_saveable(struct zone *zone, unsigned long pfn)
+ static inline struct page *page_is_saveable(struct zone *zone, unsigned long pfn)
{
        return is_highmem(zone) ?
                saveable_highmem_page(zone, pfn) : saveable_page(zone, pfn);
···
                kunmap_atomic(src);
        } else {
                if (PageHighMem(d_page)) {
-                       /* Page pointed to by src may contain some kernel
+                       /*
+                        * The page pointed to by src may contain some kernel
                         * data modified by kmap_atomic()
                         */
                        safe_copy_page(buffer, s_page);
···
}
#endif /* CONFIG_HIGHMEM */

- static void
- copy_data_pages(struct memory_bitmap *copy_bm, struct memory_bitmap *orig_bm)
+ static void copy_data_pages(struct memory_bitmap *copy_bm,
+                             struct memory_bitmap *orig_bm)
{
        struct zone *zone;
        unsigned long pfn;
···
static struct memory_bitmap copy_bm;

/**
- * swsusp_free - free pages allocated for the suspend.
+ * swsusp_free - Free pages allocated for hibernation image.
 *
- * Suspend pages are alocated before the atomic copy is made, so we
- * need to release them after the resume.
+ * Image pages are allocated before snapshot creation, so they need to be
+ * released after resume.
 */
-
void swsusp_free(void)
{
        unsigned long fb_pfn, fr_pfn;
···

                memory_bm_clear_current(forbidden_pages_map);
                memory_bm_clear_current(free_pages_map);
+               hibernate_restore_unprotect_page(page_address(page));
                __free_page(page);
                goto loop;
        }
···
        buffer = NULL;
        alloc_normal = 0;
        alloc_highmem = 0;
+       hibernate_restore_protection_end();
}

/* Helper functions used for the shrinking of memory. */
···
#define GFP_IMAGE       (GFP_KERNEL | __GFP_NOWARN)

/**
- * preallocate_image_pages - Allocate a number of pages for hibernation image
+ * preallocate_image_pages - Allocate a number of pages for hibernation image.
 * @nr_pages: Number of page frames to allocate.
 * @mask: GFP flags to use for the allocation.
 *
···
}

/**
- * __fraction - Compute (an approximation of) x * (multiplier / base)
+ * __fraction - Compute (an approximation of) x * (multiplier / base).
 */
static unsigned long __fraction(u64 x, u64 multiplier, u64 base)
{
···
}

static unsigned long preallocate_highmem_fraction(unsigned long nr_pages,
-                                               unsigned long highmem,
-                                               unsigned long total)
+                                                 unsigned long highmem,
+                                                 unsigned long total)
{
        unsigned long alloc = __fraction(nr_pages, highmem, total);

···
}

static inline unsigned long preallocate_highmem_fraction(unsigned long nr_pages,
-                                               unsigned long highmem,
-                                               unsigned long total)
+                                                        unsigned long highmem,
+                                                        unsigned long total)
{
        return 0;
}
#endif /* CONFIG_HIGHMEM */

/**
- * free_unnecessary_pages - Release preallocated pages not needed for the image
+ * free_unnecessary_pages - Release preallocated pages not needed for the image.
 */
static unsigned long free_unnecessary_pages(void)
{
···
}

/**
- * minimum_image_size - Estimate the minimum acceptable size of an image
+ * minimum_image_size - Estimate the minimum acceptable size of an image.
 * @saveable: Number of saveable pages in the system.
 *
 * We want to avoid attempting to free too much memory too hard, so estimate the
···
}

/**
- * hibernate_preallocate_memory - Preallocate memory for hibernation image
+ * hibernate_preallocate_memory - Preallocate memory for hibernation image.
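As an aside to the hunk above: the highmem preallocation split leans on __fraction() for integer scaling. A minimal equivalent is sketched below; the kernel version operates on u64 and uses do_div() for the 64-bit divide, here replaced by a plain C divide:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Approximate x * (multiplier / base) in integer arithmetic, in the
 * spirit of __fraction(): multiply first to keep precision, then do a
 * truncating divide -- hence "an approximation of".
 */
static uint64_t fraction(uint64_t x, uint64_t multiplier, uint64_t base)
{
        x *= multiplier;
        return x / base;
}
```

Multiplying before dividing matters: `fraction(7, 2, 3)` gives 14/3 = 4, while dividing first would give 2 * (7/3) = 4 here but loses precision in general (e.g. 2 * (2/3) = 0 versus 4/3 = 1).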
 *
 * To create a hibernation image it is necessary to make a copy of every page
 * frame in use.  We also need a number of page frames to be free during
···

#ifdef CONFIG_HIGHMEM
/**
- * count_pages_for_highmem - compute the number of non-highmem pages
- * that will be necessary for creating copies of highmem pages.
- */
-
+ * count_pages_for_highmem - Count non-highmem pages needed for copying highmem.
+ *
+ * Compute the number of non-highmem pages that will be necessary for creating
+ * copies of highmem pages.
+ */
static unsigned int count_pages_for_highmem(unsigned int nr_highmem)
{
        unsigned int free_highmem = count_free_highmem_pages() + alloc_highmem;
···
        return nr_highmem;
}
#else
- static unsigned int
- count_pages_for_highmem(unsigned int nr_highmem) { return 0; }
+ static unsigned int count_pages_for_highmem(unsigned int nr_highmem) { return 0; }
#endif /* CONFIG_HIGHMEM */

/**
- * enough_free_mem - Make sure we have enough free memory for the
- * snapshot image.
+ * enough_free_mem - Check if there is enough free memory for the image.
 */
-
static int enough_free_mem(unsigned int nr_pages, unsigned int nr_highmem)
{
        struct zone *zone;
···

#ifdef CONFIG_HIGHMEM
/**
- * get_highmem_buffer - if there are some highmem pages in the suspend
- * image, we may need the buffer to copy them and/or load their data.
+ * get_highmem_buffer - Allocate a buffer for highmem pages.
+ *
+ * If there are some highmem pages in the hibernation image, we may need a
+ * buffer to copy them and/or load their data.
 */
-
static inline int get_highmem_buffer(int safe_needed)
{
        buffer = get_image_page(GFP_ATOMIC | __GFP_COLD, safe_needed);
···
}

/**
- * alloc_highmem_image_pages - allocate some highmem pages for the image.
- * Try to allocate as many pages as needed, but if the number of free
- * highmem pages is lesser than that, allocate them all.
+ * alloc_highmem_image_pages - Allocate some highmem pages for the image.
+ *
+ * Try to allocate as many pages as needed, but if the number of free highmem
+ * pages is less than that, allocate them all.
 */
-
- static inline unsigned int
- alloc_highmem_pages(struct memory_bitmap *bm, unsigned int nr_highmem)
+ static inline unsigned int alloc_highmem_pages(struct memory_bitmap *bm,
+                                                unsigned int nr_highmem)
{
        unsigned int to_alloc = count_free_highmem_pages();

···
#else
static inline int get_highmem_buffer(int safe_needed) { return 0; }

- static inline unsigned int
- alloc_highmem_pages(struct memory_bitmap *bm, unsigned int n) { return 0; }
+ static inline unsigned int alloc_highmem_pages(struct memory_bitmap *bm,
+                                                unsigned int n) { return 0; }
#endif /* CONFIG_HIGHMEM */

/**
- * swsusp_alloc - allocate memory for the suspend image
+ * swsusp_alloc - Allocate memory for hibernation image.
 *
- * We first try to allocate as many highmem pages as there are
- * saveable highmem pages in the system.  If that fails, we allocate
- * non-highmem pages for the copies of the remaining highmem ones.
+ * We first try to allocate as many highmem pages as there are
+ * saveable highmem pages in the system.  If that fails, we allocate
+ * non-highmem pages for the copies of the remaining highmem ones.
 *
- * In this approach it is likely that the copies of highmem pages will
- * also be located in the high memory, because of the way in which
- * copy_data_pages() works.
+ * In this approach it is likely that the copies of highmem pages will
+ * also be located in the high memory, because of the way in which
+ * copy_data_pages() works.
 */
-
- static int
- swsusp_alloc(struct memory_bitmap *orig_bm, struct memory_bitmap *copy_bm,
-              unsigned int nr_pages, unsigned int nr_highmem)
+ static int swsusp_alloc(struct memory_bitmap *orig_bm,
+                         struct memory_bitmap *copy_bm,
+                         unsigned int nr_pages, unsigned int nr_highmem)
{
        if (nr_highmem > 0) {
                if (get_highmem_buffer(PG_ANY))
···
                return -ENOMEM;
        }

-       /* During allocating of suspend pagedir, new cold pages may appear.
+       /*
+        * During allocating of suspend pagedir, new cold pages may appear.
         * Kill them.
         */
        drain_local_pages(NULL);
···
}

/**
- * pack_pfns - pfns corresponding to the set bits found in the bitmap @bm
- * are stored in the array @buf[] (1 page at a time)
+ * pack_pfns - Prepare PFNs for saving.
+ * @bm: Memory bitmap.
+ * @buf: Memory buffer to store the PFNs in.
+ *
+ * PFNs corresponding to set bits in @bm are stored in the area of memory
+ * pointed to by @buf (1 page at a time).
 */
-
- static inline void
- pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
+ static inline void pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
{
        int j;

···
}

/**
- * snapshot_read_next - used for reading the system memory snapshot.
+ * snapshot_read_next - Get the address to read the next image page from.
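As an aside to the pack_pfns() hunk above: the function drains memory_bm_next_pfn() into a page-sized buffer and stores BM_END_OF_MAP when the bitmap runs out, so the reader can detect the end of the PFN stream. A toy version over a single bitmap word (`pack_pfns_sketch` is a hypothetical helper, not the kernel function):

```c
#include <assert.h>
#include <stddef.h>

#define BM_END_OF_MAP (~0UL)

/*
 * Store the positions of set bits in @bitmap_word into @buf, at most
 * @buf_len entries, mirroring how pack_pfns() fills one page worth of
 * PFNs.  If the bits run out before the buffer does, an END marker is
 * stored so a consumer knows where the stream stops.
 */
static void pack_pfns_sketch(unsigned long *buf, size_t buf_len,
                             unsigned long bitmap_word)
{
        size_t j = 0;
        unsigned long pfn;

        for (pfn = 0; pfn < 8 * sizeof(bitmap_word) && j < buf_len; pfn++)
                if (bitmap_word & (1UL << pfn))
                        buf[j++] = pfn;
        if (j < buf_len)
                buf[j] = BM_END_OF_MAP;   /* end-of-stream marker */
}
```

unpack_orig_pfns() later in this diff performs the inverse: it walks such a buffer and re-sets the corresponding bits in a bitmap.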
+ * @handle: Snapshot handle to be used for the reading.
 *
- * On the first call to it @handle should point to a zeroed
- * snapshot_handle structure.  The structure gets updated and a pointer
- * to it should be passed to this function every next time.
+ * On the first call, @handle should point to a zeroed snapshot_handle
+ * structure.  The structure gets populated then and a pointer to it should be
+ * passed to this function every next time.
 *
- * On success the function returns a positive number.  Then, the caller
- * is allowed to read up to the returned number of bytes from the memory
- * location computed by the data_of() macro.
+ * On success, the function returns a positive number.  Then, the caller
+ * is allowed to read up to the returned number of bytes from the memory
+ * location computed by the data_of() macro.
 *
- * The function returns 0 to indicate the end of data stream condition,
- * and a negative number is returned on error.  In such cases the
- * structure pointed to by @handle is not updated and should not be used
- * any more.
+ * The function returns 0 to indicate the end of the data stream condition,
+ * and negative numbers are returned on errors.  If that happens, the structure
+ * pointed to by @handle is not updated and should not be used any more.
 */
-
int snapshot_read_next(struct snapshot_handle *handle)
{
        if (handle->cur > nr_meta_pages + nr_copy_pages)
···

                page = pfn_to_page(memory_bm_next_pfn(&copy_bm));
                if (PageHighMem(page)) {
-                       /* Highmem pages are copied to the buffer,
+                       /*
+                        * Highmem pages are copied to the buffer,
                         * because we can't return with a kmapped
                         * highmem page (we may not be called again).
                         */
···
        return PAGE_SIZE;
}

- /**
- * mark_unsafe_pages - mark the pages that cannot be used for storing
- * the image during resume, because they conflict with the pages that
- * had been used before suspend
- */
-
- static int mark_unsafe_pages(struct memory_bitmap *bm)
- {
-       struct zone *zone;
-       unsigned long pfn, max_zone_pfn;
-
-       /* Clear page flags */
-       for_each_populated_zone(zone) {
-               max_zone_pfn = zone_end_pfn(zone);
-               for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
-                       if (pfn_valid(pfn))
-                               swsusp_unset_page_free(pfn_to_page(pfn));
-       }
-
-       /* Mark pages that correspond to the "original" pfns as "unsafe" */
-       memory_bm_position_reset(bm);
-       do {
-               pfn = memory_bm_next_pfn(bm);
-               if (likely(pfn != BM_END_OF_MAP)) {
-                       if (likely(pfn_valid(pfn)))
-                               swsusp_set_page_free(pfn_to_page(pfn));
-                       else
-                               return -EFAULT;
-               }
-       } while (pfn != BM_END_OF_MAP);
-
-       allocated_unsafe_pages = 0;
-
-       return 0;
- }
-
- static void
- duplicate_memory_bitmap(struct memory_bitmap *dst, struct memory_bitmap *src)
+ static void duplicate_memory_bitmap(struct memory_bitmap *dst,
+                                     struct memory_bitmap *src)
{
        unsigned long pfn;

···
                memory_bm_set_bit(dst, pfn);
                pfn = memory_bm_next_pfn(src);
        }
+ }
+
+ /**
+ * mark_unsafe_pages - Mark pages that were used before hibernation.
+ *
+ * Mark the pages that cannot be used for storing the image during restoration,
+ * because they conflict with the pages that had been used before hibernation.
+ */
+ static void mark_unsafe_pages(struct memory_bitmap *bm)
+ {
+         unsigned long pfn;
+
+         /* Clear the "free"/"unsafe" bit for all PFNs */
+         memory_bm_position_reset(free_pages_map);
+         pfn = memory_bm_next_pfn(free_pages_map);
+         while (pfn != BM_END_OF_MAP) {
+                 memory_bm_clear_current(free_pages_map);
+                 pfn = memory_bm_next_pfn(free_pages_map);
+         }
+
+         /* Mark pages that correspond to the "original" PFNs as "unsafe" */
+         duplicate_memory_bitmap(free_pages_map, bm);
+
+         allocated_unsafe_pages = 0;
}

static int check_header(struct swsusp_info *info)
···
}

/**
- * load header - check the image header and copy data from it
+ * load_header - Check the image header and copy the data from it.
 */
-
- static int
- load_header(struct swsusp_info *info)
+ static int load_header(struct swsusp_info *info)
{
        int error;

···
}

/**
- * unpack_orig_pfns - for each element of @buf[] (1 page at a time) set
- * the corresponding bit in the memory bitmap @bm
+ * unpack_orig_pfns - Set bits corresponding to given PFNs in a memory bitmap.
+ * @bm: Memory bitmap.
+ * @buf: Area of memory containing the PFNs.
+ *
+ * For each element of the array pointed to by @buf (1 page at a time), set the
+ * corresponding bit in @bm.
 */
static int unpack_orig_pfns(unsigned long *buf, struct memory_bitmap *bm)
{
···
                /* Extract and buffer page key for data page (s390 only). */
                page_key_memorize(buf + j);

-               if (memory_bm_pfn_present(bm, buf[j]))
+               if (pfn_valid(buf[j]) && memory_bm_pfn_present(bm, buf[j]))
                        memory_bm_set_bit(bm, buf[j]);
                else
                        return -EFAULT;
···
        return 0;
}

- /* List of "safe" pages that may be used to store data loaded from the suspend
- * image
- */
- static struct linked_page *safe_pages_list;
-
#ifdef CONFIG_HIGHMEM
- /* struct highmem_pbe is used for creating the list of highmem pages that
+ /*
+ * struct highmem_pbe is used for creating the list of highmem pages that
 * should be restored atomically during the resume from disk, because the page
 * frames they have occupied before the suspend are in use.
 */
···
        struct highmem_pbe *next;
};

- /* List of highmem PBEs needed for restoring the highmem pages that were
+ /*
+ * List of highmem PBEs needed for restoring the highmem pages that were
 * allocated before the suspend and included in the suspend image, but have
 * also been allocated by the "resume" kernel, so their contents cannot be
 * written directly to their "original" page frames.
 */
···
static struct highmem_pbe *highmem_pblist;

/**
- * count_highmem_image_pages - compute the number of highmem pages in the
- * suspend image.  The bits in the memory bitmap @bm that correspond to the
- * image pages are assumed to be set.
+ * count_highmem_image_pages - Compute the number of highmem pages in the image.
+ * @bm: Memory bitmap.
+ *
+ * The bits in @bm that correspond to image pages are assumed to be set.
2224 2135 */ 2225 - 2226 2136 static unsigned int count_highmem_image_pages(struct memory_bitmap *bm) 2227 2137 { 2228 2138 unsigned long pfn; ··· 2239 2149 return cnt; 2240 2150 } 2241 2151 2242 - /** 2243 - * prepare_highmem_image - try to allocate as many highmem pages as 2244 - * there are highmem image pages (@nr_highmem_p points to the variable 2245 - * containing the number of highmem image pages). The pages that are 2246 - * "safe" (ie. will not be overwritten when the suspend image is 2247 - * restored) have the corresponding bits set in @bm (it must be 2248 - * unitialized). 2249 - * 2250 - * NOTE: This function should not be called if there are no highmem 2251 - * image pages. 2252 - */ 2253 - 2254 2152 static unsigned int safe_highmem_pages; 2255 2153 2256 2154 static struct memory_bitmap *safe_highmem_bm; 2257 2155 2258 - static int 2259 - prepare_highmem_image(struct memory_bitmap *bm, unsigned int *nr_highmem_p) 2156 + /** 2157 + * prepare_highmem_image - Allocate memory for loading highmem data from image. 2158 + * @bm: Pointer to an uninitialized memory bitmap structure. 2159 + * @nr_highmem_p: Pointer to the number of highmem image pages. 2160 + * 2161 + * Try to allocate as many highmem pages as there are highmem image pages 2162 + * (@nr_highmem_p points to the variable containing the number of highmem image 2163 + * pages). The pages that are "safe" (ie. will not be overwritten when the 2164 + * hibernation image is restored entirely) have the corresponding bits set in 2165 + * @bm (it must be uninitialized). 2166 + * 2167 + * NOTE: This function should not be called if there are no highmem image pages.
2168 + */ 2169 + static int prepare_highmem_image(struct memory_bitmap *bm, 2170 + unsigned int *nr_highmem_p) 2260 2171 { 2261 2172 unsigned int to_alloc; 2262 2173 ··· 2292 2201 return 0; 2293 2202 } 2294 2203 2295 - /** 2296 - * get_highmem_page_buffer - for given highmem image page find the buffer 2297 - * that suspend_write_next() should set for its caller to write to. 2298 - * 2299 - * If the page is to be saved to its "original" page frame or a copy of 2300 - * the page is to be made in the highmem, @buffer is returned. Otherwise, 2301 - * the copy of the page is to be made in normal memory, so the address of 2302 - * the copy is returned. 2303 - * 2304 - * If @buffer is returned, the caller of suspend_write_next() will write 2305 - * the page's contents to @buffer, so they will have to be copied to the 2306 - * right location on the next call to suspend_write_next() and it is done 2307 - * with the help of copy_last_highmem_page(). For this purpose, if 2308 - * @buffer is returned, @last_highmem page is set to the page to which 2309 - * the data will have to be copied from @buffer. 2310 - */ 2311 - 2312 2204 static struct page *last_highmem_page; 2313 2205 2314 - static void * 2315 - get_highmem_page_buffer(struct page *page, struct chain_allocator *ca) 2206 + /** 2207 + * get_highmem_page_buffer - Prepare a buffer to store a highmem image page. 2208 + * 2209 + * For a given highmem image page get a buffer that suspend_write_next() should 2210 + * return to its caller to write to. 2211 + * 2212 + * If the page is to be saved to its "original" page frame or a copy of 2213 + * the page is to be made in the highmem, @buffer is returned. Otherwise, 2214 + * the copy of the page is to be made in normal memory, so the address of 2215 + * the copy is returned. 
2216 + * 2217 + * If @buffer is returned, the caller of suspend_write_next() will write 2218 + * the page's contents to @buffer, so they will have to be copied to the 2219 + * right location on the next call to suspend_write_next() and it is done 2220 + * with the help of copy_last_highmem_page(). For this purpose, if 2221 + * @buffer is returned, @last_highmem_page is set to the page to which 2222 + * the data will have to be copied from @buffer. 2223 + */ 2224 + static void *get_highmem_page_buffer(struct page *page, 2225 + struct chain_allocator *ca) 2316 2226 { 2317 2227 struct highmem_pbe *pbe; 2318 2228 void *kaddr; 2319 2229 2320 2230 if (swsusp_page_is_forbidden(page) && swsusp_page_is_free(page)) { 2321 - /* We have allocated the "original" page frame and we can 2231 + /* 2232 + * We have allocated the "original" page frame and we can 2322 2233 * use it directly to store the loaded page. 2323 2234 */ 2324 2235 last_highmem_page = page; 2325 2236 return buffer; 2326 2237 } 2327 - /* The "original" page frame has not been allocated and we have to 2238 + /* 2239 + * The "original" page frame has not been allocated and we have to 2328 2240 * use a "safe" page frame to store the loaded page. 2329 2241 */ 2330 2242 pbe = chain_alloc(ca, sizeof(struct highmem_pbe)); ··· 2357 2263 } 2358 2264 2359 2265 /** 2360 - * copy_last_highmem_page - copy the contents of a highmem image from 2361 - * @buffer, where the caller of snapshot_write_next() has place them, 2362 - * to the right location represented by @last_highmem_page . 2266 + * copy_last_highmem_page - Copy the most recent highmem image page. 2267 + * 2268 + * Copy the contents of a highmem image from @buffer, where the caller of 2269 + * snapshot_write_next() has stored them, to the right location represented by 2270 + * @last_highmem_page.
2363 2271 */ 2364 - 2365 2272 static void copy_last_highmem_page(void) 2366 2273 { 2367 2274 if (last_highmem_page) { ··· 2389 2294 free_image_page(buffer, PG_UNSAFE_CLEAR); 2390 2295 } 2391 2296 #else 2392 - static unsigned int 2393 - count_highmem_image_pages(struct memory_bitmap *bm) { return 0; } 2297 + static unsigned int count_highmem_image_pages(struct memory_bitmap *bm) { return 0; } 2394 2298 2395 - static inline int 2396 - prepare_highmem_image(struct memory_bitmap *bm, unsigned int *nr_highmem_p) 2397 - { 2398 - return 0; 2399 - } 2299 + static inline int prepare_highmem_image(struct memory_bitmap *bm, 2300 + unsigned int *nr_highmem_p) { return 0; } 2400 2301 2401 - static inline void * 2402 - get_highmem_page_buffer(struct page *page, struct chain_allocator *ca) 2302 + static inline void *get_highmem_page_buffer(struct page *page, 2303 + struct chain_allocator *ca) 2403 2304 { 2404 2305 return ERR_PTR(-EINVAL); 2405 2306 } ··· 2405 2314 static inline void free_highmem_data(void) {} 2406 2315 #endif /* CONFIG_HIGHMEM */ 2407 2316 2408 - /** 2409 - * prepare_image - use the memory bitmap @bm to mark the pages that will 2410 - * be overwritten in the process of restoring the system memory state 2411 - * from the suspend image ("unsafe" pages) and allocate memory for the 2412 - * image. 2413 - * 2414 - * The idea is to allocate a new memory bitmap first and then allocate 2415 - * as many pages as needed for the image data, but not to assign these 2416 - * pages to specific tasks initially. Instead, we just mark them as 2417 - * allocated and create a lists of "safe" pages that will be used 2418 - * later. On systems with high memory a list of "safe" highmem pages is 2419 - * also created. 
2420 - */ 2421 - 2422 2317 #define PBES_PER_LINKED_PAGE (LINKED_PAGE_DATA_SIZE / sizeof(struct pbe)) 2423 2318 2424 - static int 2425 - prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm) 2319 + /** 2320 + * prepare_image - Make room for loading hibernation image. 2321 + * @new_bm: Uninitialized memory bitmap structure. 2322 + * @bm: Memory bitmap with unsafe pages marked. 2323 + * 2324 + * Use @bm to mark the pages that will be overwritten in the process of 2325 + * restoring the system memory state from the suspend image ("unsafe" pages) 2326 + * and allocate memory for the image. 2327 + * 2328 + * The idea is to allocate a new memory bitmap first and then allocate 2329 + * as many pages as needed for image data, but without specifying what those 2330 + * pages will be used for just yet. Instead, we mark them all as allocated and 2331 + * create lists of "safe" pages to be used later. On systems with high 2332 + * memory a list of "safe" highmem pages is created too. 2333 + */ 2334 + static int prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm) 2426 2335 { 2427 2336 unsigned int nr_pages, nr_highmem; 2428 - struct linked_page *sp_list, *lp; 2337 + struct linked_page *lp; 2429 2338 int error; 2430 2339 2431 2340 /* If there is no highmem, the buffer will not be necessary */ ··· 2433 2342 buffer = NULL; 2434 2343 2435 2344 nr_highmem = count_highmem_image_pages(bm); 2436 - error = mark_unsafe_pages(bm); 2437 - if (error) 2438 - goto Free; 2345 + mark_unsafe_pages(bm); 2439 2346 2440 2347 error = memory_bm_create(new_bm, GFP_ATOMIC, PG_SAFE); 2441 2348 if (error) ··· 2446 2357 if (error) 2447 2358 goto Free; 2448 2359 } 2449 - /* Reserve some safe pages for potential later use. 2360 + /* 2361 + * Reserve some safe pages for potential later use. 2450 2362 * 2451 2363 * NOTE: This way we make sure there will be enough safe pages for the 2452 2364 * chain_alloc() in get_buffer().
It is a bit wasteful, but 2453 2365 * nr_copy_pages cannot be greater than 50% of the memory anyway. 2366 + * 2367 + * nr_copy_pages cannot be less than allocated_unsafe_pages too. 2454 2368 */ 2455 - sp_list = NULL; 2456 - /* nr_copy_pages cannot be lesser than allocated_unsafe_pages */ 2457 2369 nr_pages = nr_copy_pages - nr_highmem - allocated_unsafe_pages; 2458 2370 nr_pages = DIV_ROUND_UP(nr_pages, PBES_PER_LINKED_PAGE); 2459 2371 while (nr_pages > 0) { ··· 2463 2373 error = -ENOMEM; 2464 2374 goto Free; 2465 2375 } 2466 - lp->next = sp_list; 2467 - sp_list = lp; 2376 + lp->next = safe_pages_list; 2377 + safe_pages_list = lp; 2468 2378 nr_pages--; 2469 2379 } 2470 2380 /* Preallocate memory for the image */ 2471 - safe_pages_list = NULL; 2472 2381 nr_pages = nr_copy_pages - nr_highmem - allocated_unsafe_pages; 2473 2382 while (nr_pages > 0) { 2474 2383 lp = (struct linked_page *)get_zeroed_page(GFP_ATOMIC); ··· 2485 2396 swsusp_set_page_free(virt_to_page(lp)); 2486 2397 nr_pages--; 2487 2398 } 2488 - /* Free the reserved safe pages so that chain_alloc() can use them */ 2489 - while (sp_list) { 2490 - lp = sp_list->next; 2491 - free_image_page(sp_list, PG_UNSAFE_CLEAR); 2492 - sp_list = lp; 2493 - } 2494 2399 return 0; 2495 2400 2496 2401 Free: ··· 2493 2410 } 2494 2411 2495 2412 /** 2496 - * get_buffer - compute the address that snapshot_write_next() should 2497 - * set for its caller to write to. 2413 + * get_buffer - Get the address to store the next image data page. 2414 + * 2415 + * Get the address that snapshot_write_next() should return to its caller to 2416 + * write to. 
2498 2417 */ 2499 - 2500 2418 static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca) 2501 2419 { 2502 2420 struct pbe *pbe; ··· 2512 2428 return get_highmem_page_buffer(page, ca); 2513 2429 2514 2430 if (swsusp_page_is_forbidden(page) && swsusp_page_is_free(page)) 2515 - /* We have allocated the "original" page frame and we can 2431 + /* 2432 + * We have allocated the "original" page frame and we can 2516 2433 * use it directly to store the loaded page. 2517 2434 */ 2518 2435 return page_address(page); 2519 2436 2520 - /* The "original" page frame has not been allocated and we have to 2437 + /* 2438 + * The "original" page frame has not been allocated and we have to 2521 2439 * use a "safe" page frame to store the loaded page. 2522 2440 */ 2523 2441 pbe = chain_alloc(ca, sizeof(struct pbe)); ··· 2536 2450 } 2537 2451 2538 2452 /** 2539 - * snapshot_write_next - used for writing the system memory snapshot. 2453 + * snapshot_write_next - Get the address to store the next image page. 2454 + * @handle: Snapshot handle structure to guide the writing. 2540 2455 * 2541 - * On the first call to it @handle should point to a zeroed 2542 - * snapshot_handle structure. The structure gets updated and a pointer 2543 - * to it should be passed to this function every next time. 2456 + * On the first call, @handle should point to a zeroed snapshot_handle 2457 + * structure. The structure gets populated then and a pointer to it should be 2458 + * passed to this function every next time. 2544 2459 * 2545 - * On success the function returns a positive number. Then, the caller 2546 - * is allowed to write up to the returned number of bytes to the memory 2547 - * location computed by the data_of() macro. 2460 + * On success, the function returns a positive number. Then, the caller 2461 + * is allowed to write up to the returned number of bytes to the memory 2462 + * location computed by the data_of() macro. 
2548 2463 * 2549 - * The function returns 0 to indicate the "end of file" condition, 2550 - * and a negative number is returned on error. In such cases the 2551 - * structure pointed to by @handle is not updated and should not be used 2552 - * any more. 2464 + * The function returns 0 to indicate the "end of file" condition. Negative 2465 + * numbers are returned on errors, in which cases the structure pointed to by 2466 + * @handle is not updated and should not be used any more. 2553 2467 */ 2554 - 2555 2468 int snapshot_write_next(struct snapshot_handle *handle) 2556 2469 { 2557 2470 static struct chain_allocator ca; ··· 2576 2491 if (error) 2577 2492 return error; 2578 2493 2494 + safe_pages_list = NULL; 2495 + 2579 2496 error = memory_bm_create(&copy_bm, GFP_ATOMIC, PG_ANY); 2580 2497 if (error) 2581 2498 return error; ··· 2587 2500 if (error) 2588 2501 return error; 2589 2502 2503 + hibernate_restore_protection_begin(); 2590 2504 } else if (handle->cur <= nr_meta_pages + 1) { 2591 2505 error = unpack_orig_pfns(buffer, &copy_bm); 2592 2506 if (error) ··· 2610 2522 copy_last_highmem_page(); 2611 2523 /* Restore page key for data page (s390 only). */ 2612 2524 page_key_write(handle->buffer); 2525 + hibernate_restore_protect_page(handle->buffer); 2613 2526 handle->buffer = get_buffer(&orig_bm, &ca); 2614 2527 if (IS_ERR(handle->buffer)) 2615 2528 return PTR_ERR(handle->buffer); ··· 2622 2533 } 2623 2534 2624 2535 /** 2625 - * snapshot_write_finalize - must be called after the last call to 2626 - * snapshot_write_next() in case the last page in the image happens 2627 - * to be a highmem page and its contents should be stored in the 2628 - * highmem. Additionally, it releases the memory that will not be 2629 - * used any more. 2536 + * snapshot_write_finalize - Complete the loading of a hibernation image. 
2537 + * 2538 + * Must be called after the last call to snapshot_write_next() in case the last 2539 + * page in the image happens to be a highmem page and its contents should be 2540 + * stored in highmem. Additionally, it recycles bitmap memory that's not 2541 + * necessary any more. 2630 2542 */ 2631 - 2632 2543 void snapshot_write_finalize(struct snapshot_handle *handle) 2633 2544 { 2634 2545 copy_last_highmem_page(); 2635 2546 /* Restore page key for data page (s390 only). */ 2636 2547 page_key_write(handle->buffer); 2637 2548 page_key_free(); 2638 - /* Free only if we have loaded the image entirely */ 2549 + hibernate_restore_protect_page(handle->buffer); 2550 + /* Do that only if we have loaded the image entirely */ 2639 2551 if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages) { 2640 - memory_bm_free(&orig_bm, PG_UNSAFE_CLEAR); 2552 + memory_bm_recycle(&orig_bm); 2641 2553 free_highmem_data(); 2642 2554 } 2643 2555 } ··· 2651 2561 2652 2562 #ifdef CONFIG_HIGHMEM 2653 2563 /* Assumes that @buf is ready and points to a "safe" page */ 2654 - static inline void 2655 - swap_two_pages_data(struct page *p1, struct page *p2, void *buf) 2564 + static inline void swap_two_pages_data(struct page *p1, struct page *p2, 2565 + void *buf) 2656 2566 { 2657 2567 void *kaddr1, *kaddr2; 2658 2568 ··· 2666 2576 } 2667 2577 2668 2578 /** 2669 - * restore_highmem - for each highmem page that was allocated before 2670 - * the suspend and included in the suspend image, and also has been 2671 - * allocated by the "resume" kernel swap its current (ie. "before 2672 - * resume") contents with the previous (ie. "before suspend") one. 2579 + * restore_highmem - Put highmem image pages into their original locations. 2673 2580 * 2674 - * If the resume eventually fails, we can call this function once 2675 - * again and restore the "before resume" highmem state. 
2581 + * For each highmem page that was in use before hibernation and is included in 2582 + * the image, and also has been allocated by the "restore" kernel, swap its 2583 + * current contents with the previous (ie. "before hibernation") ones. 2584 + * 2585 + * If the restore eventually fails, we can call this function once again and 2586 + * restore the highmem state as seen by the restore kernel. 2676 2587 */ 2677 - 2678 2588 int restore_highmem(void) 2679 2589 { 2680 2590 struct highmem_pbe *pbe = highmem_pblist;
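The snapshot.c rework above boils mark_unsafe_pages() down to two bitmap operations: clear every bit in free_pages_map, then copy the set bits over from @bm via duplicate_memory_bitmap(). A minimal userspace sketch of that pattern, with plain word-array bitmaps standing in for the kernel's struct memory_bitmap (whose real implementation is more elaborate):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BM_WORDS 4                      /* 4 * 64 = 256 "page frames" */

struct bitmap { uint64_t w[BM_WORDS]; };

/* Clear every bit, as mark_unsafe_pages() does for free_pages_map */
static void bitmap_clear_all(struct bitmap *bm)
{
	memset(bm->w, 0, sizeof(bm->w));
}

static void bitmap_set(struct bitmap *bm, unsigned int pfn)
{
	bm->w[pfn / 64] |= (uint64_t)1 << (pfn % 64);
}

static int bitmap_test(const struct bitmap *bm, unsigned int pfn)
{
	return (bm->w[pfn / 64] >> (pfn % 64)) & 1;
}

/* Counterpart of duplicate_memory_bitmap(): copy src's set bits into dst */
static void bitmap_duplicate(struct bitmap *dst, const struct bitmap *src)
{
	for (int i = 0; i < BM_WORDS; i++)
		dst->w[i] |= src->w[i];
}

/* The mark_unsafe_pages() pattern: wipe dst, then mirror "original" PFNs */
static void mark_unsafe(struct bitmap *free_map, const struct bitmap *orig)
{
	bitmap_clear_all(free_map);
	bitmap_duplicate(free_map, orig);
}
```

After mark_unsafe(), exactly the PFNs set in the "original" bitmap are flagged unsafe, and any stale bits from a previous run are gone, which is why the new version needs no error return.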
+6 -4
kernel/power/suspend.c
··· 266 266 */ 267 267 static int suspend_prepare(suspend_state_t state) 268 268 { 269 - int error; 269 + int error, nr_calls = 0; 270 270 271 271 if (!sleep_state_supported(state)) 272 272 return -EPERM; 273 273 274 274 pm_prepare_console(); 275 275 276 - error = pm_notifier_call_chain(PM_SUSPEND_PREPARE); 277 - if (error) 276 + error = __pm_notifier_call_chain(PM_SUSPEND_PREPARE, -1, &nr_calls); 277 + if (error) { 278 + nr_calls--; 278 279 goto Finish; 280 + } 279 281 280 282 trace_suspend_resume(TPS("freeze_processes"), 0, true); 281 283 error = suspend_freeze_processes(); ··· 288 286 suspend_stats.failed_freeze++; 289 287 dpm_save_failed_step(SUSPEND_FREEZE); 290 288 Finish: 291 - pm_notifier_call_chain(PM_POST_SUSPEND); 289 + __pm_notifier_call_chain(PM_POST_SUSPEND, nr_calls, NULL); 292 290 pm_restore_console(); 293 291 return error; 294 292 }
+6
kernel/power/swap.c
··· 350 350 if (res < 0) 351 351 blkdev_put(hib_resume_bdev, FMODE_WRITE); 352 352 353 + /* 354 + * Update the resume device to the one actually used, 355 + * so the test_resume mode can use it in case it is 356 + * invoked from hibernate() to test the snapshot. 357 + */ 358 + swsusp_resume_device = hib_resume_bdev->bd_dev; 353 359 return res; 354 360 } 355 361
+8 -6
kernel/power/user.c
··· 47 47 static int snapshot_open(struct inode *inode, struct file *filp) 48 48 { 49 49 struct snapshot_data *data; 50 - int error; 50 + int error, nr_calls = 0; 51 51 52 52 if (!hibernation_available()) 53 53 return -EPERM; ··· 74 74 swap_type_of(swsusp_resume_device, 0, NULL) : -1; 75 75 data->mode = O_RDONLY; 76 76 data->free_bitmaps = false; 77 - error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE); 77 + error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); 78 78 if (error) 79 - pm_notifier_call_chain(PM_POST_HIBERNATION); 79 + __pm_notifier_call_chain(PM_POST_HIBERNATION, --nr_calls, NULL); 80 80 } else { 81 81 /* 82 82 * Resuming. We may need to wait for the image device to ··· 86 86 87 87 data->swap = -1; 88 88 data->mode = O_WRONLY; 89 - error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 89 + error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls); 90 90 if (!error) { 91 91 error = create_basic_memory_bitmaps(); 92 92 data->free_bitmaps = !error; 93 - } 93 + } else 94 + nr_calls--; 95 + 94 96 if (error) 95 - pm_notifier_call_chain(PM_POST_RESTORE); 97 + __pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL); 96 98 } 97 99 if (error) 98 100 atomic_inc(&snapshot_device_available);
+32 -42
kernel/sched/cpufreq_schedutil.c
··· 47 47 struct update_util_data update_util; 48 48 struct sugov_policy *sg_policy; 49 49 50 + unsigned int cached_raw_freq; 51 + 50 52 /* The fields below are only needed when sharing a policy. */ 51 53 unsigned long util; 52 54 unsigned long max; ··· 108 106 109 107 /** 110 108 * get_next_freq - Compute a new frequency for a given cpufreq policy. 111 - * @policy: cpufreq policy object to compute the new frequency for. 109 + * @sg_cpu: schedutil cpu object to compute the new frequency for. 112 110 * @util: Current CPU utilization. 113 111 * @max: CPU capacity. 114 112 * ··· 123 121 * next_freq = C * curr_freq * util_raw / max 124 122 * 125 123 * Take C = 1.25 for the frequency tipping point at (util / max) = 0.8. 124 + * 125 + * The lowest driver-supported frequency which is equal or greater than the raw 126 + * next_freq (as calculated above) is returned, subject to policy min/max and 127 + * cpufreq driver limitations. 126 128 */ 127 - static unsigned int get_next_freq(struct cpufreq_policy *policy, 128 - unsigned long util, unsigned long max) 129 + static unsigned int get_next_freq(struct sugov_cpu *sg_cpu, unsigned long util, 130 + unsigned long max) 129 131 { 132 + struct sugov_policy *sg_policy = sg_cpu->sg_policy; 133 + struct cpufreq_policy *policy = sg_policy->policy; 130 134 unsigned int freq = arch_scale_freq_invariant() ? 131 135 policy->cpuinfo.max_freq : policy->cur; 132 136 133 - return (freq + (freq >> 2)) * util / max; 137 + freq = (freq + (freq >> 2)) * util / max; 138 + 139 + if (freq == sg_cpu->cached_raw_freq && sg_policy->next_freq != UINT_MAX) 140 + return sg_policy->next_freq; 141 + sg_cpu->cached_raw_freq = freq; 142 + return cpufreq_driver_resolve_freq(policy, freq); 134 143 } 135 144 136 145 static void sugov_update_single(struct update_util_data *hook, u64 time, ··· 156 143 return; 157 144 158 145 next_f = util == ULONG_MAX ? 
policy->cpuinfo.max_freq : 159 - get_next_freq(policy, util, max); 146 + get_next_freq(sg_cpu, util, max); 160 147 sugov_update_commit(sg_policy, time, next_f); 161 148 } 162 149 163 - static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy, 150 + static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, 164 151 unsigned long util, unsigned long max) 165 152 { 153 + struct sugov_policy *sg_policy = sg_cpu->sg_policy; 166 154 struct cpufreq_policy *policy = sg_policy->policy; 167 155 unsigned int max_f = policy->cpuinfo.max_freq; 168 156 u64 last_freq_update_time = sg_policy->last_freq_update_time; ··· 203 189 } 204 190 } 205 191 206 - return get_next_freq(policy, util, max); 192 + return get_next_freq(sg_cpu, util, max); 207 193 } 208 194 209 195 static void sugov_update_shared(struct update_util_data *hook, u64 time, ··· 220 206 sg_cpu->last_update = time; 221 207 222 208 if (sugov_should_update_freq(sg_policy, time)) { 223 - next_f = sugov_next_freq_shared(sg_policy, util, max); 209 + next_f = sugov_next_freq_shared(sg_cpu, util, max); 224 210 sugov_update_commit(sg_policy, time, next_f); 225 211 } 226 212 ··· 408 394 return ret; 409 395 } 410 396 411 - static int sugov_exit(struct cpufreq_policy *policy) 397 + static void sugov_exit(struct cpufreq_policy *policy) 412 398 { 413 399 struct sugov_policy *sg_policy = policy->governor_data; 414 400 struct sugov_tunables *tunables = sg_policy->tunables; ··· 426 412 mutex_unlock(&global_tunables_lock); 427 413 428 414 sugov_policy_free(sg_policy); 429 - return 0; 430 415 } 431 416 432 417 static int sugov_start(struct cpufreq_policy *policy) ··· 447 434 sg_cpu->util = ULONG_MAX; 448 435 sg_cpu->max = 0; 449 436 sg_cpu->last_update = 0; 437 + sg_cpu->cached_raw_freq = 0; 450 438 cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util, 451 439 sugov_update_shared); 452 440 } else { ··· 458 444 return 0; 459 445 } 460 446 461 - static int sugov_stop(struct cpufreq_policy *policy) 447 + static 
void sugov_stop(struct cpufreq_policy *policy) 462 448 { 463 449 struct sugov_policy *sg_policy = policy->governor_data; 464 450 unsigned int cpu; ··· 470 456 471 457 irq_work_sync(&sg_policy->irq_work); 472 458 cancel_work_sync(&sg_policy->work); 473 - return 0; 474 459 } 475 460 476 - static int sugov_limits(struct cpufreq_policy *policy) 461 + static void sugov_limits(struct cpufreq_policy *policy) 477 462 { 478 463 struct sugov_policy *sg_policy = policy->governor_data; 479 464 480 465 if (!policy->fast_switch_enabled) { 481 466 mutex_lock(&sg_policy->work_lock); 482 - 483 - if (policy->max < policy->cur) 484 - __cpufreq_driver_target(policy, policy->max, 485 - CPUFREQ_RELATION_H); 486 - else if (policy->min > policy->cur) 487 - __cpufreq_driver_target(policy, policy->min, 488 - CPUFREQ_RELATION_L); 489 - 467 + cpufreq_policy_apply_limits(policy); 490 468 mutex_unlock(&sg_policy->work_lock); 491 469 } 492 470 493 471 sg_policy->need_freq_update = true; 494 - return 0; 495 - } 496 - 497 - int sugov_governor(struct cpufreq_policy *policy, unsigned int event) 498 - { 499 - if (event == CPUFREQ_GOV_POLICY_INIT) { 500 - return sugov_init(policy); 501 - } else if (policy->governor_data) { 502 - switch (event) { 503 - case CPUFREQ_GOV_POLICY_EXIT: 504 - return sugov_exit(policy); 505 - case CPUFREQ_GOV_START: 506 - return sugov_start(policy); 507 - case CPUFREQ_GOV_STOP: 508 - return sugov_stop(policy); 509 - case CPUFREQ_GOV_LIMITS: 510 - return sugov_limits(policy); 511 - } 512 - } 513 - return -EINVAL; 514 472 } 515 473 516 474 static struct cpufreq_governor schedutil_gov = { 517 475 .name = "schedutil", 518 - .governor = sugov_governor, 519 476 .owner = THIS_MODULE, 477 + .init = sugov_init, 478 + .exit = sugov_exit, 479 + .start = sugov_start, 480 + .stop = sugov_stop, 481 + .limits = sugov_limits, 520 482 }; 521 483 522 484 static int __init sugov_module_init(void)
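The reworked get_next_freq() above computes next_freq = 1.25 * max_freq * util / max, writing the 1.25 factor as freq + (freq >> 2) so the tipping point lands at util/max = 0.8, and now caches the raw result so the driver's frequency resolution is consulted only when the raw value changes. A standalone sketch of that computation and cache, where resolve_freq() is a made-up stand-in for cpufreq_driver_resolve_freq() that snaps to a 100 MHz grid:

```c
#include <assert.h>

#define FREQ_UNSET 0xFFFFFFFFu        /* plays the role of UINT_MAX */

static unsigned int cached_raw_freq;
static unsigned int next_freq = FREQ_UNSET;
static unsigned int resolve_calls;    /* counts "driver" lookups */

/* Stand-in driver table lookup: round up to a 100000 kHz step */
static unsigned int resolve_freq(unsigned int raw_khz)
{
	resolve_calls++;
	return (raw_khz + 99999) / 100000 * 100000;
}

/*
 * next_freq = 1.25 * max_freq * util / max; skip the driver lookup
 * when the raw value matches the cached one from the previous call.
 */
static unsigned int get_next_freq(unsigned int max_freq,
				  unsigned long util, unsigned long max)
{
	unsigned int freq = max_freq;

	freq = (freq + (freq >> 2)) * util / max;
	if (freq == cached_raw_freq && next_freq != FREQ_UNSET)
		return next_freq;
	cached_raw_freq = freq;
	next_freq = resolve_freq(freq);
	return next_freq;
}
```

Repeating a call with unchanged utilization returns the cached frequency without touching the "driver" at all, which is exactly the wakeup-reduction point of the cached_raw_freq field.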
+2 -2
kernel/workqueue.c
··· 4369 4369 /** 4370 4370 * show_workqueue_state - dump workqueue state 4371 4371 * 4372 - * Called from a sysrq handler and prints out all busy workqueues and 4373 - * pools. 4372 + * Called from a sysrq handler or try_to_freeze_tasks() and prints out 4373 + * all busy workqueues and pools. 4374 4374 */ 4375 4375 void show_workqueue_state(void) 4376 4376 {
+2474 -1167
scripts/analyze_suspend.py
··· 19 19 # Authors: 20 20 # Todd Brandt <todd.e.brandt@linux.intel.com> 21 21 # 22 + # Links: 23 + # Home Page 24 + # https://01.org/suspendresume 25 + # Source repo 26 + # https://github.com/01org/suspendresume 27 + # Documentation 28 + # Getting Started 29 + # https://01.org/suspendresume/documentation/getting-started 30 + # Command List: 31 + # https://01.org/suspendresume/documentation/command-list 32 + # 22 33 # Description: 23 34 # This tool is designed to assist kernel and OS developers in optimizing 24 35 # their linux stack's suspend/resume time. Using a kernel image built ··· 46 35 # CONFIG_FTRACE=y 47 36 # CONFIG_FUNCTION_TRACER=y 48 37 # CONFIG_FUNCTION_GRAPH_TRACER=y 38 + # CONFIG_KPROBES=y 39 + # CONFIG_KPROBES_ON_FTRACE=y 49 40 # 50 41 # For kernel versions older than 3.15: 51 42 # The following additional kernel parameters are required: ··· 65 52 import platform 66 53 from datetime import datetime 67 54 import struct 55 + import ConfigParser 68 56 69 57 # ----------------- CLASSES -------------------- 70 58 ··· 74 60 # A global, single-instance container used to 75 61 # store system values and test parameters 76 62 class SystemValues: 77 - version = 3.0 63 + ansi = False 64 + version = '4.2' 78 65 verbose = False 66 + addlogs = False 67 + mindevlen = 0.001 68 + mincglen = 1.0 69 + srgap = 0 70 + cgexp = False 71 + outdir = '' 79 72 testdir = '.' 
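The new initdmesg()/getdmesg() methods above filter the dmesg log by parsing the `[ seconds.micros]` prefix and keeping only lines stamped after the recorded start time. The same check can be sketched in C, with sscanf() standing in for the Python regex (the helper names are illustrative only):

```c
#include <assert.h>
#include <stdio.h>

/* Parse the "[ 12.345678] msg" dmesg prefix; returns 1 on success */
static int parse_ktime(const char *line, double *ktime)
{
	return sscanf(line, " [ %lf]", ktime) == 1;
}

/* Keep only lines stamped after @start, as getdmesg() does */
static int is_new_line(const char *line, double start)
{
	double t;

	return parse_ktime(line, &t) && t > start;
}
```

Lines without a bracketed timestamp simply fail the parse and are dropped, matching the `if(not m): continue` branch in the Python code.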
80 73 tpath = '/sys/kernel/debug/tracing/' 81 74 fpdtpath = '/sys/firmware/acpi/tables/FPDT' ··· 92 71 'device_pm_callback_end', 93 72 'device_pm_callback_start' 94 73 ] 95 - modename = { 96 - 'freeze': 'Suspend-To-Idle (S0)', 97 - 'standby': 'Power-On Suspend (S1)', 98 - 'mem': 'Suspend-to-RAM (S3)', 99 - 'disk': 'Suspend-to-disk (S4)' 100 - } 74 + testcommand = '' 101 75 mempath = '/dev/mem' 102 76 powerfile = '/sys/power/state' 103 77 suspendmode = 'mem' 104 78 hostname = 'localhost' 105 79 prefix = 'test' 106 80 teststamp = '' 81 + dmesgstart = 0.0 107 82 dmesgfile = '' 108 83 ftracefile = '' 109 84 htmlfile = '' 85 + embedded = False 110 86 rtcwake = False 111 87 rtcwaketime = 10 112 88 rtcpath = '' 113 - android = False 114 - adb = 'adb' 115 89 devicefilter = [] 116 90 stamp = 0 117 91 execcount = 1 ··· 114 98 usecallgraph = False 115 99 usetraceevents = False 116 100 usetraceeventsonly = False 101 + usetracemarkers = True 102 + usekprobes = True 103 + usedevsrc = False 117 104 notestrun = False 118 - altdevname = dict() 105 + devprops = dict() 119 106 postresumetime = 0 107 + devpropfmt = '# Device Properties: .*' 120 108 tracertypefmt = '# tracer: (?P<t>.*)' 121 109 firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$' 122 110 postresumefmt = '# post resume time (?P<t>[0-9]*)$' 123 111 stampfmt = '# suspend-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\ 124 112 '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\ 125 113 ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$' 114 + kprobecolor = 'rgba(204,204,204,0.5)' 115 + synccolor = 'rgba(204,204,204,0.5)' 116 + debugfuncs = [] 117 + tracefuncs = { 118 + 'sys_sync': dict(), 119 + 'pm_prepare_console': dict(), 120 + 'pm_notifier_call_chain': dict(), 121 + 'freeze_processes': dict(), 122 + 'freeze_kernel_threads': dict(), 123 + 'pm_restrict_gfp_mask': dict(), 124 + 'acpi_suspend_begin': dict(), 125 + 'suspend_console': dict(), 126 + 'acpi_pm_prepare': dict(), 127 + 'syscore_suspend': dict(), 128 + 
'arch_enable_nonboot_cpus_end': dict(), 129 + 'syscore_resume': dict(), 130 + 'acpi_pm_finish': dict(), 131 + 'resume_console': dict(), 132 + 'acpi_pm_end': dict(), 133 + 'pm_restore_gfp_mask': dict(), 134 + 'thaw_processes': dict(), 135 + 'pm_restore_console': dict(), 136 + 'CPU_OFF': { 137 + 'func':'_cpu_down', 138 + 'args_x86_64': {'cpu':'%di:s32'}, 139 + 'format': 'CPU_OFF[{cpu}]', 140 + 'mask': 'CPU_.*_DOWN' 141 + }, 142 + 'CPU_ON': { 143 + 'func':'_cpu_up', 144 + 'args_x86_64': {'cpu':'%di:s32'}, 145 + 'format': 'CPU_ON[{cpu}]', 146 + 'mask': 'CPU_.*_UP' 147 + }, 148 + } 149 + dev_tracefuncs = { 150 + # general wait/delay/sleep 151 + 'msleep': { 'args_x86_64': {'time':'%di:s32'} }, 152 + 'udelay': { 'func':'__const_udelay', 'args_x86_64': {'loops':'%di:s32'} }, 153 + 'acpi_os_stall': dict(), 154 + # ACPI 155 + 'acpi_resume_power_resources': dict(), 156 + 'acpi_ps_parse_aml': dict(), 157 + # filesystem 158 + 'ext4_sync_fs': dict(), 159 + # ATA 160 + 'ata_eh_recover': { 'args_x86_64': {'port':'+36(%di):s32'} }, 161 + # i915 162 + 'i915_gem_restore_gtt_mappings': dict(), 163 + 'intel_opregion_setup': dict(), 164 + 'intel_dp_detect': dict(), 165 + 'intel_hdmi_detect': dict(), 166 + 'intel_opregion_init': dict(), 167 + } 168 + kprobes_postresume = [ 169 + { 170 + 'name': 'ataportrst', 171 + 'func': 'ata_eh_recover', 172 + 'args': {'port':'+36(%di):s32'}, 173 + 'format': 'ata{port}_port_reset', 174 + 'mask': 'ata.*_port_reset' 175 + } 176 + ] 177 + kprobes = dict() 178 + timeformat = '%.3f' 126 179 def __init__(self): 180 + # if this is a phoronix test run, set some default options 181 + if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ): 182 + self.embedded = True 183 + self.addlogs = True 184 + self.htmlfile = os.environ['LOG_FILE'] 127 185 self.hostname = platform.node() 128 186 if(self.hostname == ''): 129 187 self.hostname = 'localhost' ··· 208 118 if os.path.exists(rtc) and os.path.exists(rtc+'/date') and \ 209 119 
os.path.exists(rtc+'/time') and os.path.exists(rtc+'/wakealarm'): 210 120 self.rtcpath = rtc 121 + if (hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()): 122 + self.ansi = True 123 + def setPrecision(self, num): 124 + if num < 0 or num > 6: 125 + return 126 + self.timeformat = '%.{0}f'.format(num) 211 127 def setOutputFile(self): 212 128 if((self.htmlfile == '') and (self.dmesgfile != '')): 213 129 m = re.match('(?P<name>.*)_dmesg\.txt$', self.dmesgfile) ··· 225 129 self.htmlfile = m.group('name')+'.html' 226 130 if(self.htmlfile == ''): 227 131 self.htmlfile = 'output.html' 228 - def initTestOutput(self, subdir): 229 - if(not self.android): 230 - self.prefix = self.hostname 231 - v = open('/proc/version', 'r').read().strip() 232 - kver = string.split(v)[2] 233 - else: 234 - self.prefix = 'android' 235 - v = os.popen(self.adb+' shell cat /proc/version').read().strip() 236 - kver = string.split(v)[2] 237 - testtime = datetime.now().strftime('suspend-%m%d%y-%H%M%S') 132 + def initTestOutput(self, subdir, testpath=''): 133 + self.prefix = self.hostname 134 + v = open('/proc/version', 'r').read().strip() 135 + kver = string.split(v)[2] 136 + n = datetime.now() 137 + testtime = n.strftime('suspend-%m%d%y-%H%M%S') 138 + if not testpath: 139 + testpath = n.strftime('suspend-%y%m%d-%H%M%S') 238 140 if(subdir != "."): 239 - self.testdir = subdir+"/"+testtime 141 + self.testdir = subdir+"/"+testpath 240 142 else: 241 - self.testdir = testtime 143 + self.testdir = testpath 242 144 self.teststamp = \ 243 145 '# '+testtime+' '+self.prefix+' '+self.suspendmode+' '+kver 146 + if(self.embedded): 147 + self.dmesgfile = \ 148 + '/tmp/'+testtime+'_'+self.suspendmode+'_dmesg.txt' 149 + self.ftracefile = \ 150 + '/tmp/'+testtime+'_'+self.suspendmode+'_ftrace.txt' 151 + return 244 152 self.dmesgfile = \ 245 153 self.testdir+'/'+self.prefix+'_'+self.suspendmode+'_dmesg.txt' 246 154 self.ftracefile = \ 247 155 self.testdir+'/'+self.prefix+'_'+self.suspendmode+'_ftrace.txt' 248 156 
self.htmlfile = \ 249 157 self.testdir+'/'+self.prefix+'_'+self.suspendmode+'.html' 250 - os.mkdir(self.testdir) 158 + if not os.path.isdir(self.testdir): 159 + os.mkdir(self.testdir) 251 160 def setDeviceFilter(self, devnames): 252 161 self.devicefilter = string.split(devnames) 253 - def rtcWakeAlarm(self): 162 + def rtcWakeAlarmOn(self): 254 163 os.system('echo 0 > '+self.rtcpath+'/wakealarm') 255 164 outD = open(self.rtcpath+'/date', 'r').read().strip() 256 165 outT = open(self.rtcpath+'/time', 'r').read().strip() ··· 273 172 nowtime = int(datetime.now().strftime('%s')) 274 173 alarm = nowtime + self.rtcwaketime 275 174 os.system('echo %d > %s/wakealarm' % (alarm, self.rtcpath)) 175 + def rtcWakeAlarmOff(self): 176 + os.system('echo 0 > %s/wakealarm' % self.rtcpath) 177 + def initdmesg(self): 178 + # get the latest time stamp from the dmesg log 179 + fp = os.popen('dmesg') 180 + ktime = '0' 181 + for line in fp: 182 + line = line.replace('\r\n', '') 183 + idx = line.find('[') 184 + if idx > 1: 185 + line = line[idx:] 186 + m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line) 187 + if(m): 188 + ktime = m.group('ktime') 189 + fp.close() 190 + self.dmesgstart = float(ktime) 191 + def getdmesg(self): 192 + # store all new dmesg lines since initdmesg was called 193 + fp = os.popen('dmesg') 194 + op = open(self.dmesgfile, 'a') 195 + for line in fp: 196 + line = line.replace('\r\n', '') 197 + idx = line.find('[') 198 + if idx > 1: 199 + line = line[idx:] 200 + m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line) 201 + if(not m): 202 + continue 203 + ktime = float(m.group('ktime')) 204 + if ktime > self.dmesgstart: 205 + op.write(line) 206 + fp.close() 207 + op.close() 208 + def addFtraceFilterFunctions(self, file): 209 + fp = open(file) 210 + list = fp.read().split('\n') 211 + fp.close() 212 + for i in list: 213 + if len(i) < 2: 214 + continue 215 + self.tracefuncs[i] = dict() 216 + def getFtraceFilterFunctions(self, current): 217 + 
rootCheck(True) 218 + if not current: 219 + os.system('cat '+self.tpath+'available_filter_functions') 220 + return 221 + fp = open(self.tpath+'available_filter_functions') 222 + master = fp.read().split('\n') 223 + fp.close() 224 + if len(self.debugfuncs) > 0: 225 + for i in self.debugfuncs: 226 + if i in master: 227 + print i 228 + else: 229 + print self.colorText(i) 230 + else: 231 + for i in self.tracefuncs: 232 + if 'func' in self.tracefuncs[i]: 233 + i = self.tracefuncs[i]['func'] 234 + if i in master: 235 + print i 236 + else: 237 + print self.colorText(i) 238 + def setFtraceFilterFunctions(self, list): 239 + fp = open(self.tpath+'available_filter_functions') 240 + master = fp.read().split('\n') 241 + fp.close() 242 + flist = '' 243 + for i in list: 244 + if i not in master: 245 + continue 246 + if ' [' in i: 247 + flist += i.split(' ')[0]+'\n' 248 + else: 249 + flist += i+'\n' 250 + fp = open(self.tpath+'set_graph_function', 'w') 251 + fp.write(flist) 252 + fp.close() 253 + def kprobeMatch(self, name, target): 254 + if name not in self.kprobes: 255 + return False 256 + if re.match(self.kprobes[name]['mask'], target): 257 + return True 258 + return False 259 + def basicKprobe(self, name): 260 + self.kprobes[name] = {'name': name,'func': name,'args': dict(),'format': name,'mask': name} 261 + def defaultKprobe(self, name, kdata): 262 + k = kdata 263 + for field in ['name', 'format', 'mask', 'func']: 264 + if field not in k: 265 + k[field] = name 266 + archargs = 'args_'+platform.machine() 267 + if archargs in k: 268 + k['args'] = k[archargs] 269 + else: 270 + k['args'] = dict() 271 + k['format'] = name 272 + self.kprobes[name] = k 273 + def kprobeColor(self, name): 274 + if name not in self.kprobes or 'color' not in self.kprobes[name]: 275 + return '' 276 + return self.kprobes[name]['color'] 277 + def kprobeDisplayName(self, name, dataraw): 278 + if name not in self.kprobes: 279 + self.basicKprobe(name) 280 + data = '' 281 + quote=0 282 + # first remove any 
spaces inside quotes, and the quotes 283 + for c in dataraw: 284 + if c == '"': 285 + quote = (quote + 1) % 2 286 + if quote and c == ' ': 287 + data += '_' 288 + elif c != '"': 289 + data += c 290 + fmt, args = self.kprobes[name]['format'], self.kprobes[name]['args'] 291 + arglist = dict() 292 + # now process the args 293 + for arg in sorted(args): 294 + arglist[arg] = '' 295 + m = re.match('.* '+arg+'=(?P<arg>.*) ', data); 296 + if m: 297 + arglist[arg] = m.group('arg') 298 + else: 299 + m = re.match('.* '+arg+'=(?P<arg>.*)', data); 300 + if m: 301 + arglist[arg] = m.group('arg') 302 + out = fmt.format(**arglist) 303 + out = out.replace(' ', '_').replace('"', '') 304 + return out 305 + def kprobeText(self, kprobe): 306 + name, fmt, func, args = kprobe['name'], kprobe['format'], kprobe['func'], kprobe['args'] 307 + if re.findall('{(?P<n>[a-z,A-Z,0-9]*)}', func): 308 + doError('Kprobe "%s" has format info in the function name "%s"' % (name, func), False) 309 + for arg in re.findall('{(?P<n>[a-z,A-Z,0-9]*)}', fmt): 310 + if arg not in args: 311 + doError('Kprobe "%s" is missing argument "%s"' % (name, arg), False) 312 + val = 'p:%s_cal %s' % (name, func) 313 + for i in sorted(args): 314 + val += ' %s=%s' % (i, args[i]) 315 + val += '\nr:%s_ret %s $retval\n' % (name, func) 316 + return val 317 + def addKprobes(self): 318 + # first test each kprobe 319 + print('INITIALIZING KPROBES...') 320 + rejects = [] 321 + for name in sorted(self.kprobes): 322 + if not self.testKprobe(self.kprobes[name]): 323 + rejects.append(name) 324 + # remove all failed ones from the list 325 + for name in rejects: 326 + vprint('Skipping KPROBE: %s' % name) 327 + self.kprobes.pop(name) 328 + self.fsetVal('', 'kprobe_events') 329 + kprobeevents = '' 330 + # set the kprobes all at once 331 + for kp in self.kprobes: 332 + val = self.kprobeText(self.kprobes[kp]) 333 + vprint('Adding KPROBE: %s\n%s' % (kp, val.strip())) 334 + kprobeevents += self.kprobeText(self.kprobes[kp]) 335 + 
self.fsetVal(kprobeevents, 'kprobe_events') 336 + # verify that the kprobes were set as ordered 337 + check = self.fgetVal('kprobe_events') 338 + linesout = len(kprobeevents.split('\n')) 339 + linesack = len(check.split('\n')) 340 + if linesack < linesout: 341 + # if not, try appending the kprobes 1 by 1 342 + for kp in self.kprobes: 343 + kprobeevents = self.kprobeText(self.kprobes[kp]) 344 + self.fsetVal(kprobeevents, 'kprobe_events', 'a') 345 + self.fsetVal('1', 'events/kprobes/enable') 346 + def testKprobe(self, kprobe): 347 + kprobeevents = self.kprobeText(kprobe) 348 + if not kprobeevents: 349 + return False 350 + try: 351 + self.fsetVal(kprobeevents, 'kprobe_events') 352 + check = self.fgetVal('kprobe_events') 353 + except: 354 + return False 355 + linesout = len(kprobeevents.split('\n')) 356 + linesack = len(check.split('\n')) 357 + if linesack < linesout: 358 + return False 359 + return True 360 + def fsetVal(self, val, path, mode='w'): 361 + file = self.tpath+path 362 + if not os.path.exists(file): 363 + return False 364 + try: 365 + fp = open(file, mode) 366 + fp.write(val) 367 + fp.close() 368 + except: 369 + pass 370 + return True 371 + def fgetVal(self, path): 372 + file = self.tpath+path 373 + res = '' 374 + if not os.path.exists(file): 375 + return res 376 + try: 377 + fp = open(file, 'r') 378 + res = fp.read() 379 + fp.close() 380 + except: 381 + pass 382 + return res 383 + def cleanupFtrace(self): 384 + if(self.usecallgraph or self.usetraceevents): 385 + self.fsetVal('0', 'events/kprobes/enable') 386 + self.fsetVal('', 'kprobe_events') 387 + def setupAllKprobes(self): 388 + for name in self.tracefuncs: 389 + self.defaultKprobe(name, self.tracefuncs[name]) 390 + for name in self.dev_tracefuncs: 391 + self.defaultKprobe(name, self.dev_tracefuncs[name]) 392 + def isCallgraphFunc(self, name): 393 + if len(self.debugfuncs) < 1 and self.suspendmode == 'command': 394 + return True 395 + if name in self.debugfuncs: 396 + return True 397 + funclist = [] 
398 + for i in self.tracefuncs: 399 + if 'func' in self.tracefuncs[i]: 400 + funclist.append(self.tracefuncs[i]['func']) 401 + else: 402 + funclist.append(i) 403 + if name in funclist: 404 + return True 405 + return False 406 + def initFtrace(self, testing=False): 407 + tp = self.tpath 408 + print('INITIALIZING FTRACE...') 409 + # turn trace off 410 + self.fsetVal('0', 'tracing_on') 411 + self.cleanupFtrace() 412 + # set the trace clock to global 413 + self.fsetVal('global', 'trace_clock') 414 + # set trace buffer to a huge value 415 + self.fsetVal('nop', 'current_tracer') 416 + self.fsetVal('100000', 'buffer_size_kb') 417 + # go no further if this is just a status check 418 + if testing: 419 + return 420 + if self.usekprobes: 421 + # add tracefunc kprobes so long as we're not using full callgraph 422 + if(not self.usecallgraph or len(self.debugfuncs) > 0): 423 + for name in self.tracefuncs: 424 + self.defaultKprobe(name, self.tracefuncs[name]) 425 + if self.usedevsrc: 426 + for name in self.dev_tracefuncs: 427 + self.defaultKprobe(name, self.dev_tracefuncs[name]) 428 + else: 429 + self.usedevsrc = False 430 + self.addKprobes() 431 + # initialize the callgraph trace, unless this is an x2 run 432 + if(self.usecallgraph): 433 + # set trace type 434 + self.fsetVal('function_graph', 'current_tracer') 435 + self.fsetVal('', 'set_ftrace_filter') 436 + # set trace format options 437 + self.fsetVal('print-parent', 'trace_options') 438 + self.fsetVal('funcgraph-abstime', 'trace_options') 439 + self.fsetVal('funcgraph-cpu', 'trace_options') 440 + self.fsetVal('funcgraph-duration', 'trace_options') 441 + self.fsetVal('funcgraph-proc', 'trace_options') 442 + self.fsetVal('funcgraph-tail', 'trace_options') 443 + self.fsetVal('nofuncgraph-overhead', 'trace_options') 444 + self.fsetVal('context-info', 'trace_options') 445 + self.fsetVal('graph-time', 'trace_options') 446 + self.fsetVal('0', 'max_graph_depth') 447 + if len(self.debugfuncs) > 0: 448 + 
self.setFtraceFilterFunctions(self.debugfuncs) 449 + elif self.suspendmode == 'command': 450 + self.fsetVal('', 'set_graph_function') 451 + else: 452 + cf = ['dpm_run_callback'] 453 + if(self.usetraceeventsonly): 454 + cf += ['dpm_prepare', 'dpm_complete'] 455 + for fn in self.tracefuncs: 456 + if 'func' in self.tracefuncs[fn]: 457 + cf.append(self.tracefuncs[fn]['func']) 458 + else: 459 + cf.append(fn) 460 + self.setFtraceFilterFunctions(cf) 461 + if(self.usetraceevents): 462 + # turn trace events on 463 + events = iter(self.traceevents) 464 + for e in events: 465 + self.fsetVal('1', 'events/power/'+e+'/enable') 466 + # clear the trace buffer 467 + self.fsetVal('', 'trace') 468 + def verifyFtrace(self): 469 + # files needed for any trace data 470 + files = ['buffer_size_kb', 'current_tracer', 'trace', 'trace_clock', 471 + 'trace_marker', 'trace_options', 'tracing_on'] 472 + # files needed for callgraph trace data 473 + tp = self.tpath 474 + if(self.usecallgraph): 475 + files += [ 476 + 'available_filter_functions', 477 + 'set_ftrace_filter', 478 + 'set_graph_function' 479 + ] 480 + for f in files: 481 + if(os.path.exists(tp+f) == False): 482 + return False 483 + return True 484 + def verifyKprobes(self): 485 + # files needed for kprobes to work 486 + files = ['kprobe_events', 'events'] 487 + tp = self.tpath 488 + for f in files: 489 + if(os.path.exists(tp+f) == False): 490 + return False 491 + return True 492 + def colorText(self, str): 493 + if not self.ansi: 494 + return str 495 + return '\x1B[31;40m'+str+'\x1B[m' 276 496 277 497 sysvals = SystemValues() 498 + 499 + # Class: DevProps 500 + # Description: 501 + # Simple class which holds property values collected 502 + # for all the devices used in the timeline. 
503 + class DevProps: 504 + syspath = '' 505 + altname = '' 506 + async = True 507 + xtraclass = '' 508 + xtrainfo = '' 509 + def out(self, dev): 510 + return '%s,%s,%d;' % (dev, self.altname, self.async) 511 + def debug(self, dev): 512 + print '%s:\n\taltname = %s\n\t async = %s' % (dev, self.altname, self.async) 513 + def altName(self, dev): 514 + if not self.altname or self.altname == dev: 515 + return dev 516 + return '%s [%s]' % (self.altname, dev) 517 + def xtraClass(self): 518 + if self.xtraclass: 519 + return ' '+self.xtraclass 520 + if not self.async: 521 + return ' sync' 522 + return '' 523 + def xtraInfo(self): 524 + if self.xtraclass: 525 + return ' '+self.xtraclass 526 + if self.async: 527 + return ' async' 528 + return ' sync' 278 529 279 530 # Class: DeviceNode 280 531 # Description: ··· 681 228 html_device_id = 0 682 229 stamp = 0 683 230 outfile = '' 231 + dev_ubiquitous = ['msleep', 'udelay'] 684 232 def __init__(self, num): 685 233 idchar = 'abcdefghijklmnopqrstuvwxyz' 686 234 self.testnumber = num ··· 711 257 'row': 0, 'color': '#FFFFCC', 'order': 9} 712 258 } 713 259 self.phases = self.sortedPhases() 260 + self.devicegroups = [] 261 + for phase in self.phases: 262 + self.devicegroups.append([phase]) 714 263 def getStart(self): 715 264 return self.dmesg[self.phases[0]]['start'] 716 265 def setStart(self, time): ··· 730 273 for dev in list: 731 274 d = list[dev] 732 275 if(d['pid'] == pid and time >= d['start'] and 733 - time <= d['end']): 276 + time < d['end']): 734 277 return False 735 278 return True 736 - def addIntraDevTraceEvent(self, action, name, pid, time): 737 - if(action == 'mutex_lock_try'): 738 - color = 'red' 739 - elif(action == 'mutex_lock_pass'): 740 - color = 'green' 741 - elif(action == 'mutex_unlock'): 742 - color = 'blue' 743 - else: 744 - # create separate colors based on the name 745 - v1 = len(name)*10 % 256 746 - v2 = string.count(name, 'e')*100 % 256 747 - v3 = ord(name[0])*20 % 256 748 - color = '#%06X' % ((v1*0x10000) 
+ (v2*0x100) + v3) 749 - for phase in self.phases: 279 + def targetDevice(self, phaselist, start, end, pid=-1): 280 + tgtdev = '' 281 + for phase in phaselist: 750 282 list = self.dmesg[phase]['list'] 751 - for dev in list: 752 - d = list[dev] 753 - if(d['pid'] == pid and time >= d['start'] and 754 - time <= d['end']): 755 - e = TraceEvent(action, name, color, time) 756 - if('traceevents' not in d): 757 - d['traceevents'] = [] 758 - d['traceevents'].append(e) 759 - return d 760 - break 761 - return 0 762 - def capIntraDevTraceEvent(self, action, name, pid, time): 763 - for phase in self.phases: 764 - list = self.dmesg[phase]['list'] 765 - for dev in list: 766 - d = list[dev] 767 - if(d['pid'] == pid and time >= d['start'] and 768 - time <= d['end']): 769 - if('traceevents' not in d): 770 - return 771 - for e in d['traceevents']: 772 - if(e.action == action and 773 - e.name == name and not e.ready): 774 - e.length = time - e.time 775 - e.ready = True 776 - break 777 - return 283 + for devname in list: 284 + dev = list[devname] 285 + if(pid >= 0 and dev['pid'] != pid): 286 + continue 287 + devS = dev['start'] 288 + devE = dev['end'] 289 + if(start < devS or start >= devE or end <= devS or end > devE): 290 + continue 291 + tgtdev = dev 292 + break 293 + return tgtdev 294 + def addDeviceFunctionCall(self, displayname, kprobename, proc, pid, start, end, cdata, rdata): 295 + machstart = self.dmesg['suspend_machine']['start'] 296 + machend = self.dmesg['resume_machine']['end'] 297 + tgtdev = self.targetDevice(self.phases, start, end, pid) 298 + if not tgtdev and start >= machstart and end < machend: 299 + # device calls in machine phases should be serial 300 + tgtdev = self.targetDevice(['suspend_machine', 'resume_machine'], start, end) 301 + if not tgtdev: 302 + if 'scsi_eh' in proc: 303 + self.newActionGlobal(proc, start, end, pid) 304 + self.addDeviceFunctionCall(displayname, kprobename, proc, pid, start, end, cdata, rdata) 305 + else: 306 + vprint('IGNORE: %s[%s](%d) 
[%f - %f] | %s | %s | %s' % (displayname, kprobename, 307 + pid, start, end, cdata, rdata, proc)) 308 + return False 309 + # detail block fits within tgtdev 310 + if('src' not in tgtdev): 311 + tgtdev['src'] = [] 312 + title = cdata+' '+rdata 313 + mstr = '\(.*\) *(?P<args>.*) *\((?P<caller>.*)\+.* arg1=(?P<ret>.*)' 314 + m = re.match(mstr, title) 315 + if m: 316 + c = m.group('caller') 317 + a = m.group('args').strip() 318 + r = m.group('ret') 319 + if len(r) > 6: 320 + r = '' 321 + else: 322 + r = 'ret=%s ' % r 323 + l = '%0.3fms' % ((end - start) * 1000) 324 + if kprobename in self.dev_ubiquitous: 325 + title = '%s(%s) <- %s, %s(%s)' % (displayname, a, c, r, l) 326 + else: 327 + title = '%s(%s) %s(%s)' % (displayname, a, r, l) 328 + e = TraceEvent(title, kprobename, start, end - start) 329 + tgtdev['src'].append(e) 330 + return True 778 331 def trimTimeVal(self, t, t0, dT, left): 779 332 if left: 780 333 if(t > t0): ··· 820 353 cg.end = self.trimTimeVal(cg.end, t0, dT, left) 821 354 for line in cg.list: 822 355 line.time = self.trimTimeVal(line.time, t0, dT, left) 823 - if('traceevents' in d): 824 - for e in d['traceevents']: 356 + if('src' in d): 357 + for e in d['src']: 825 358 e.time = self.trimTimeVal(e.time, t0, dT, left) 826 359 def normalizeTime(self, tZero): 827 - # first trim out any standby or freeze clock time 360 + # trim out any standby or freeze clock time 828 361 if(self.tSuspended != self.tResumed): 829 362 if(self.tResumed > tZero): 830 363 self.trimTime(self.tSuspended, \ ··· 832 365 else: 833 366 self.trimTime(self.tSuspended, \ 834 367 self.tResumed-self.tSuspended, False) 835 - # shift the timeline so that tZero is the new 0 836 - self.tSuspended -= tZero 837 - self.tResumed -= tZero 838 - self.start -= tZero 839 - self.end -= tZero 840 - for phase in self.phases: 841 - p = self.dmesg[phase] 842 - p['start'] -= tZero 843 - p['end'] -= tZero 844 - list = p['list'] 845 - for name in list: 846 - d = list[name] 847 - d['start'] -= tZero 848 - 
d['end'] -= tZero 849 - if('ftrace' in d): 850 - cg = d['ftrace'] 851 - cg.start -= tZero 852 - cg.end -= tZero 853 - for line in cg.list: 854 - line.time -= tZero 855 - if('traceevents' in d): 856 - for e in d['traceevents']: 857 - e.time -= tZero 858 368 def newPhaseWithSingleAction(self, phasename, devname, start, end, color): 859 369 for phase in self.phases: 860 370 self.dmesg[phase]['order'] += 1 ··· 861 417 {'list': list, 'start': start, 'end': end, 862 418 'row': 0, 'color': color, 'order': order} 863 419 self.phases = self.sortedPhases() 420 + self.devicegroups.append([phasename]) 864 421 def setPhase(self, phase, ktime, isbegin): 865 422 if(isbegin): 866 423 self.dmesg[phase]['start'] = ktime ··· 887 442 for devname in phaselist: 888 443 dev = phaselist[devname] 889 444 if(dev['end'] < 0): 890 - dev['end'] = end 445 + for p in self.phases: 446 + if self.dmesg[p]['end'] > dev['start']: 447 + dev['end'] = self.dmesg[p]['end'] 448 + break 891 449 vprint('%s (%s): callback didnt return' % (devname, phase)) 892 450 def deviceFilter(self, devicefilter): 893 451 # remove all but the relatives of the filter devnames ··· 920 472 # if any calls never returned, clip them at system resume end 921 473 for phase in self.phases: 922 474 self.fixupInitcalls(phase, self.getEnd()) 923 - def newActionGlobal(self, name, start, end): 475 + def isInsideTimeline(self, start, end): 476 + if(self.start <= start and self.end > start): 477 + return True 478 + return False 479 + def phaseOverlap(self, phases): 480 + rmgroups = [] 481 + newgroup = [] 482 + for group in self.devicegroups: 483 + for phase in phases: 484 + if phase not in group: 485 + continue 486 + for p in group: 487 + if p not in newgroup: 488 + newgroup.append(p) 489 + if group not in rmgroups: 490 + rmgroups.append(group) 491 + for group in rmgroups: 492 + self.devicegroups.remove(group) 493 + self.devicegroups.append(newgroup) 494 + def newActionGlobal(self, name, start, end, pid=-1, color=''): 495 + # if event 
starts before timeline start, expand timeline 496 + if(start < self.start): 497 + self.setStart(start) 498 + # if event ends after timeline end, expand the timeline 499 + if(end > self.end): 500 + self.setEnd(end) 924 501 # which phase is this device callback or action "in" 925 502 targetphase = "none" 503 + htmlclass = '' 926 504 overlap = 0.0 505 + phases = [] 927 506 for phase in self.phases: 928 507 pstart = self.dmesg[phase]['start'] 929 508 pend = self.dmesg[phase]['end'] 930 509 o = max(0, min(end, pend) - max(start, pstart)) 931 - if(o > overlap): 510 + if o > 0: 511 + phases.append(phase) 512 + if o > overlap: 513 + if overlap > 0 and phase == 'post_resume': 514 + continue 932 515 targetphase = phase 933 516 overlap = o 517 + if pid == -2: 518 + htmlclass = ' bg' 519 + if len(phases) > 1: 520 + htmlclass = ' bg' 521 + self.phaseOverlap(phases) 934 522 if targetphase in self.phases: 935 - self.newAction(targetphase, name, -1, '', start, end, '') 936 - return True 523 + newname = self.newAction(targetphase, name, pid, '', start, end, '', htmlclass, color) 524 + return (targetphase, newname) 937 525 return False 938 - def newAction(self, phase, name, pid, parent, start, end, drv): 526 + def newAction(self, phase, name, pid, parent, start, end, drv, htmlclass='', color=''): 939 527 # new device callback for a specific phase 940 528 self.html_device_id += 1 941 529 devid = '%s%d' % (self.idstr, self.html_device_id) ··· 979 495 length = -1.0 980 496 if(start >= 0 and end >= 0): 981 497 length = end - start 498 + if pid == -2: 499 + i = 2 500 + origname = name 501 + while(name in list): 502 + name = '%s[%d]' % (origname, i) 503 + i += 1 982 504 list[name] = {'start': start, 'end': end, 'pid': pid, 'par': parent, 983 505 'length': length, 'row': 0, 'id': devid, 'drv': drv } 506 + if htmlclass: 507 + list[name]['htmlclass'] = htmlclass 508 + if color: 509 + list[name]['color'] = color 510 + return name 984 511 def deviceIDs(self, devlist, phase): 985 512 idlist = 
[] 986 513 list = self.dmesg[phase]['list'] ··· 1031 536 vprint(' %16s: %f - %f (%d devices)' % (phase, \ 1032 537 self.dmesg[phase]['start'], self.dmesg[phase]['end'], dc)) 1033 538 vprint(' test end: %f' % self.end) 539 + def deviceChildrenAllPhases(self, devname): 540 + devlist = [] 541 + for phase in self.phases: 542 + list = self.deviceChildren(devname, phase) 543 + for dev in list: 544 + if dev not in devlist: 545 + devlist.append(dev) 546 + return devlist 1034 547 def masterTopology(self, name, list, depth): 1035 548 node = DeviceNode(name, depth) 1036 549 for cname in list: 1037 - clist = self.deviceChildren(cname, 'resume') 550 + # avoid recursions 551 + if name == cname: 552 + continue 553 + clist = self.deviceChildrenAllPhases(cname) 1038 554 cnode = self.masterTopology(cname, clist, depth+1) 1039 555 node.children.append(cnode) 1040 556 return node ··· 1086 580 list = self.dmesg[phase]['list'] 1087 581 for dev in list: 1088 582 pdev = list[dev]['par'] 1089 - if(re.match('[0-9]*-[0-9]*\.[0-9]*[\.0-9]*\:[\.0-9]*$', pdev)): 583 + pid = list[dev]['pid'] 584 + if(pid < 0 or re.match('[0-9]*-[0-9]*\.[0-9]*[\.0-9]*\:[\.0-9]*$', pdev)): 1090 585 continue 1091 586 if pdev and pdev not in real and pdev not in rootlist: 1092 587 rootlist.append(pdev) ··· 1096 589 rootlist = self.rootDeviceList() 1097 590 master = self.masterTopology('', rootlist, 0) 1098 591 return self.printTopology(master) 592 + def selectTimelineDevices(self, widfmt, tTotal, mindevlen): 593 + # only select devices that will actually show up in html 594 + self.tdevlist = dict() 595 + for phase in self.dmesg: 596 + devlist = [] 597 + list = self.dmesg[phase]['list'] 598 + for dev in list: 599 + length = (list[dev]['end'] - list[dev]['start']) * 1000 600 + width = widfmt % (((list[dev]['end']-list[dev]['start'])*100)/tTotal) 601 + if width != '0.000000' and length >= mindevlen: 602 + devlist.append(dev) 603 + self.tdevlist[phase] = devlist 1099 604 1100 605 # Class: TraceEvent 1101 606 # 
Description: 1102 607 # A container for trace event data found in the ftrace file 1103 608 class TraceEvent: 1104 - ready = False 1105 - name = '' 609 + text = '' 1106 610 time = 0.0 1107 - color = '#FFFFFF' 1108 611 length = 0.0 1109 - action = '' 1110 - def __init__(self, a, n, c, t): 1111 - self.action = a 1112 - self.name = n 1113 - self.color = c 612 + title = '' 613 + row = 0 614 + def __init__(self, a, n, t, l): 615 + self.title = a 616 + self.text = n 1114 617 self.time = t 618 + self.length = l 1115 619 1116 620 # Class: FTraceLine 1117 621 # Description: ··· 1141 623 fcall = False 1142 624 freturn = False 1143 625 fevent = False 626 + fkprobe = False 1144 627 depth = 0 1145 628 name = '' 1146 629 type = '' 1147 - def __init__(self, t, m, d): 630 + def __init__(self, t, m='', d=''): 1148 631 self.time = float(t) 632 + if not m and not d: 633 + return 1149 634 # is this a trace event 1150 635 if(d == 'traceevent' or re.match('^ *\/\* *(?P<msg>.*) \*\/ *$', m)): 1151 636 if(d == 'traceevent'): ··· 1165 644 self.type = emm.group('call') 1166 645 else: 1167 646 self.name = msg 647 + km = re.match('^(?P<n>.*)_cal$', self.type) 648 + if km: 649 + self.fcall = True 650 + self.fkprobe = True 651 + self.type = km.group('n') 652 + return 653 + km = re.match('^(?P<n>.*)_ret$', self.type) 654 + if km: 655 + self.freturn = True 656 + self.fkprobe = True 657 + self.type = km.group('n') 658 + return 1168 659 self.fevent = True 1169 660 return 1170 661 # convert the duration to seconds ··· 1195 662 # includes comment with function name 1196 663 match = re.match('^} *\/\* *(?P<n>.*) *\*\/$', m) 1197 664 if(match): 1198 - self.name = match.group('n') 665 + self.name = match.group('n').strip() 1199 666 # function call 1200 667 else: 1201 668 self.fcall = True ··· 1203 670 if(m[-1] == '{'): 1204 671 match = re.match('^(?P<n>.*) *\(.*', m) 1205 672 if(match): 1206 - self.name = match.group('n') 673 + self.name = match.group('n').strip() 1207 674 # function call with no 
children (leaf) 1208 675 elif(m[-1] == ';'): 1209 676 self.freturn = True 1210 677 match = re.match('^(?P<n>.*) *\(.*', m) 1211 678 if(match): 1212 - self.name = match.group('n') 679 + self.name = match.group('n').strip() 1213 680 # something else (possibly a trace marker) 1214 681 else: 1215 682 self.name = m 1216 683 def getDepth(self, str): 1217 684 return len(str)/2 1218 - def debugPrint(self, dev): 685 + def debugPrint(self, dev=''): 1219 686 if(self.freturn and self.fcall): 1220 687 print('%s -- %f (%02d): %s(); (%.3f us)' % (dev, self.time, \ 1221 688 self.depth, self.name, self.length*1000000)) ··· 1225 692 else: 1226 693 print('%s -- %f (%02d): %s() { (%.3f us)' % (dev, self.time, \ 1227 694 self.depth, self.name, self.length*1000000)) 695 + def startMarker(self): 696 + global sysvals 697 + # Is this the starting line of a suspend? 698 + if not self.fevent: 699 + return False 700 + if sysvals.usetracemarkers: 701 + if(self.name == 'SUSPEND START'): 702 + return True 703 + return False 704 + else: 705 + if(self.type == 'suspend_resume' and 706 + re.match('suspend_enter\[.*\] begin', self.name)): 707 + return True 708 + return False 709 + def endMarker(self): 710 + # Is this the ending line of a resume? 
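The startMarker()/endMarker() pair above bounds the trace window either with explicit trace_marker writes ('SUSPEND START' / 'RESUME COMPLETE') or, as a fallback, with suspend_resume tracepoint events. A standalone sketch of that matching logic, with hypothetical function names:

```python
import re

def is_suspend_begin(etype, name, usetracemarkers=True):
    # with trace markers: an explicit 'SUSPEND START' write opens the window
    if usetracemarkers:
        return name == 'SUSPEND START'
    # fallback: the suspend_enter begin event from the suspend_resume tracepoint
    return (etype == 'suspend_resume' and
            re.match(r'suspend_enter\[.*\] begin', name) is not None)

def is_resume_end(etype, name, usetracemarkers=True):
    # with trace markers: 'RESUME COMPLETE' closes the window
    if usetracemarkers:
        return name == 'RESUME COMPLETE'
    # fallback: thaw_processes end is the last suspend_resume event of interest
    return (etype == 'suspend_resume' and
            re.match(r'thaw_processes\[.*\] end', name) is not None)
```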
711 + if not self.fevent: 712 + return False 713 + if sysvals.usetracemarkers: 714 + if(self.name == 'RESUME COMPLETE'): 715 + return True 716 + return False 717 + else: 718 + if(self.type == 'suspend_resume' and 719 + re.match('thaw_processes\[.*\] end', self.name)): 720 + return True 721 + return False 1228 722 1229 723 # Class: FTraceCallGraph 1230 724 # Description: ··· 1265 705 list = [] 1266 706 invalid = False 1267 707 depth = 0 1268 - def __init__(self): 708 + pid = 0 709 + def __init__(self, pid): 1269 710 self.start = -1.0 1270 711 self.end = -1.0 1271 712 self.list = [] 1272 713 self.depth = 0 1273 - def setDepth(self, line): 714 + self.pid = pid 715 + def addLine(self, line, debug=False): 716 + # if this is already invalid, just leave 717 + if(self.invalid): 718 + return False 719 + # invalidate on too much data or bad depth 720 + if(len(self.list) >= 1000000 or self.depth < 0): 721 + self.invalidate(line) 722 + return False 723 + # compare current depth with this lines pre-call depth 724 + prelinedep = line.depth 725 + if(line.freturn and not line.fcall): 726 + prelinedep += 1 727 + last = 0 728 + lasttime = line.time 729 + virtualfname = 'execution_misalignment' 730 + if len(self.list) > 0: 731 + last = self.list[-1] 732 + lasttime = last.time 733 + # handle low misalignments by inserting returns 734 + if prelinedep < self.depth: 735 + if debug and last: 736 + print '-------- task %d --------' % self.pid 737 + last.debugPrint() 738 + idx = 0 739 + # add return calls to get the depth down 740 + while prelinedep < self.depth: 741 + if debug: 742 + print 'MISALIGN LOW (add returns): C%d - eC%d' % (self.depth, prelinedep) 743 + self.depth -= 1 744 + if idx == 0 and last and last.fcall and not last.freturn: 745 + # special case, turn last call into a leaf 746 + last.depth = self.depth 747 + last.freturn = True 748 + last.length = line.time - last.time 749 + if debug: 750 + last.debugPrint() 751 + else: 752 + vline = FTraceLine(lasttime) 753 + vline.depth = 
+					vline.depth = self.depth
+					vline.name = virtualfname
+					vline.freturn = True
+					self.list.append(vline)
+					if debug:
+						vline.debugPrint()
+				idx += 1
+			if debug:
+				line.debugPrint()
+				print ''
+		# handle high misalignments by inserting calls
+		elif prelinedep > self.depth:
+			if debug and last:
+				print '-------- task %d --------' % self.pid
+				last.debugPrint()
+			idx = 0
+			# add calls to get the depth up
+			while prelinedep > self.depth:
+				if debug:
+					print 'MISALIGN HIGH (add calls): C%d - eC%d' % (self.depth, prelinedep)
+				if idx == 0 and line.freturn and not line.fcall:
+					# special case, turn this return into a leaf
+					line.fcall = True
+					prelinedep -= 1
+				else:
+					vline = FTraceLine(lasttime)
+					vline.depth = self.depth
+					vline.name = virtualfname
+					vline.fcall = True
+					if debug:
+						vline.debugPrint()
+					self.list.append(vline)
+					self.depth += 1
+					if not last:
+						self.start = vline.time
+				idx += 1
+			if debug:
+				line.debugPrint()
+				print ''
+		# process the call and set the new depth
 		if(line.fcall and not line.freturn):
-			line.depth = self.depth
 			self.depth += 1
 		elif(line.freturn and not line.fcall):
 			self.depth -= 1
-			line.depth = self.depth
-		else:
-			line.depth = self.depth
-	def addLine(self, line, match):
-		if(not self.invalid):
-			self.setDepth(line)
+		if len(self.list) < 1:
+			self.start = line.time
+		self.list.append(line)
 		if(line.depth == 0 and line.freturn):
 			if(self.start < 0):
 				self.start = line.time
 			self.end = line.time
-			self.list.append(line)
+			if line.fcall:
+				self.end += line.length
+			if self.list[0].name == virtualfname:
+				self.invalid = True
 			return True
-		if(self.invalid):
-			return False
-		if(len(self.list) >= 1000000 or self.depth < 0):
-			if(len(self.list) > 0):
-				first = self.list[0]
-				self.list = []
-				self.list.append(first)
-			self.invalid = True
-			if(not match):
-				return False
-			id = 'task %s cpu %s' % (match.group('pid'), match.group('cpu'))
-			window = '(%f - %f)' % (self.start, line.time)
-			if(self.depth < 0):
-				print('Too much data for '+id+\
-					' (buffer overflow), ignoring this callback')
-			else:
-				print('Too much data for '+id+\
-					' '+window+', ignoring this callback')
-			return False
-		self.list.append(line)
-		if(self.start < 0):
-			self.start = line.time
 		return False
+	def invalidate(self, line):
+		if(len(self.list) > 0):
+			first = self.list[0]
+			self.list = []
+			self.list.append(first)
+		self.invalid = True
+		id = 'task %s' % (self.pid)
+		window = '(%f - %f)' % (self.start, line.time)
+		if(self.depth < 0):
+			vprint('Too much data for '+id+\
+				' (buffer overflow), ignoring this callback')
+		else:
+			vprint('Too much data for '+id+\
+				' '+window+', ignoring this callback')
 	def slice(self, t0, tN):
-		minicg = FTraceCallGraph()
+		minicg = FTraceCallGraph(0)
 		count = -1
 		firstdepth = 0
 		for l in self.list:
···
 			firstdepth = l.depth
 			count = 0
 			l.depth -= firstdepth
-			minicg.addLine(l, 0)
+			minicg.addLine(l)
 			if((count == 0 and l.freturn and l.fcall) or
 				(count > 0 and l.depth <= 0)):
 				break
 			count += 1
 		return minicg
-	def sanityCheck(self):
+	def repair(self, enddepth):
+		# bring the depth back to 0 with additional returns
+		fixed = False
+		last = self.list[-1]
+		for i in reversed(range(enddepth)):
+			t = FTraceLine(last.time)
+			t.depth = i
+			t.freturn = True
+			fixed = self.addLine(t)
+			if fixed:
+				self.end = last.time
+				return True
+		return False
+	def postProcess(self, debug=False):
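The new `repair()` method above closes out a call graph whose trace buffer was cut off before every call returned, by appending one synthetic return per still-open depth level. A standalone sketch of that idea follows; the `Line` class is a hypothetical stand-in for the script's `FTraceLine`, not the real class:

```python
# Minimal sketch: close an unfinished call tree by appending
# synthetic returns, one per still-open depth level.
class Line(object):
    def __init__(self, time, depth, fcall=False, freturn=False):
        self.time, self.depth = time, depth
        self.fcall, self.freturn = fcall, freturn

def repair(lines, enddepth):
    # append returns for depths enddepth-1 .. 0, mirroring the
    # reversed(range(enddepth)) loop in FTraceCallGraph.repair
    last = lines[-1]
    for d in reversed(range(enddepth)):
        lines.append(Line(last.time, d, freturn=True))
    return lines

calls = [Line(0.0, 0, fcall=True), Line(0.1, 1, fcall=True)]
fixed = repair(calls, 2)
```

After the repair, the two synthetic returns unwind depth 1 and then depth 0, all stamped with the last seen timestamp, which is also what the real method records as the graph's end time.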
 		stack = dict()
 		cnt = 0
 		for l in self.list:
···
 				cnt += 1
 			elif(l.freturn and not l.fcall):
 				if(l.depth not in stack):
+					if debug:
+						print 'Post Process Error: Depth missing'
+						l.debugPrint()
 					return False
+				# transfer total time from return line to call line
 				stack[l.depth].length = l.length
-				stack[l.depth] = 0
+				stack.pop(l.depth)
 				l.length = 0
 				cnt -= 1
 		if(cnt == 0):
+			# trace caught the whole call tree
 			return True
-		return False
-	def debugPrint(self, filename):
-		if(filename == 'stdout'):
-			print('[%f - %f]') % (self.start, self.end)
-			for l in self.list:
-				if(l.freturn and l.fcall):
-					print('%f (%02d): %s(); (%.3f us)' % (l.time, \
-						l.depth, l.name, l.length*1000000))
-				elif(l.freturn):
-					print('%f (%02d): %s} (%.3f us)' % (l.time, \
-						l.depth, l.name, l.length*1000000))
-				else:
-					print('%f (%02d): %s() { (%.3f us)' % (l.time, \
-						l.depth, l.name, l.length*1000000))
-			print(' ')
-		else:
-			fp = open(filename, 'w')
-			print(filename)
-			for l in self.list:
-				if(l.freturn and l.fcall):
-					fp.write('%f (%02d): %s(); (%.3f us)\n' % (l.time, \
-						l.depth, l.name, l.length*1000000))
-				elif(l.freturn):
-					fp.write('%f (%02d): %s} (%.3f us)\n' % (l.time, \
-						l.depth, l.name, l.length*1000000))
-				else:
-					fp.write('%f (%02d): %s() { (%.3f us)\n' % (l.time, \
-						l.depth, l.name, l.length*1000000))
-			fp.close()
+		elif(cnt < 0):
+			if debug:
+				print 'Post Process Error: Depth is less than 0'
+			return False
+		# trace ended before call tree finished
+		return self.repair(cnt)
+	def deviceMatch(self, pid, data):
+		found = False
+		# add the callgraph data to the device hierarchy
+		borderphase = {
+			'dpm_prepare': 'suspend_prepare',
+			'dpm_complete': 'resume_complete'
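`postProcess()` above pairs each call line with the return at the same depth through a dict keyed by depth, moving the measured duration from the return onto the call. A simplified sketch of the same pairing on plain tuples; `pair_durations` is a hypothetical helper, not part of the script:

```python
# Sketch of the postProcess pairing: match each function-entry line
# with the return at the same depth, moving the measured duration
# from the return line onto the call line.
def pair_durations(lines):
    # lines: (kind, depth, length) tuples; kind is 'call' or 'return'
    stack = {}       # depth -> the still-open call record at that depth
    out = []
    open_calls = 0
    for kind, depth, length in lines:
        if kind == 'call':
            rec = {'depth': depth, 'length': 0.0}
            stack[depth] = rec
            out.append(rec)
            open_calls += 1
        else:
            if depth not in stack:
                return None  # depth missing: malformed trace
            stack.pop(depth)['length'] = length
            open_calls -= 1
    # only a fully balanced trace is returned; the real method
    # instead hands an unbalanced count to repair()
    return out if open_calls == 0 else None

paired = pair_durations([('call', 0, 0), ('call', 1, 0),
    ('return', 1, 0.002), ('return', 0, 0.005)])
```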
+		}
+		if(self.list[0].name in borderphase):
+			p = borderphase[self.list[0].name]
+			list = data.dmesg[p]['list']
+			for devname in list:
+				dev = list[devname]
+				if(pid == dev['pid'] and
+					self.start <= dev['start'] and
+					self.end >= dev['end']):
+					dev['ftrace'] = self.slice(dev['start'], dev['end'])
+					found = True
+			return found
+		for p in data.phases:
+			if(data.dmesg[p]['start'] <= self.start and
+				self.start <= data.dmesg[p]['end']):
+				list = data.dmesg[p]['list']
+				for devname in list:
+					dev = list[devname]
+					if(pid == dev['pid'] and
+						self.start <= dev['start'] and
+						self.end >= dev['end']):
+						dev['ftrace'] = self
+						found = True
+						break
+				break
+		return found
+	def newActionFromFunction(self, data):
+		name = self.list[0].name
+		if name in ['dpm_run_callback', 'dpm_prepare', 'dpm_complete']:
+			return
+		fs = self.start
+		fe = self.end
+		if fs < data.start or fe > data.end:
+			return
+		phase = ''
+		for p in data.phases:
+			if(data.dmesg[p]['start'] <= self.start and
+				self.start < data.dmesg[p]['end']):
+				phase = p
+				break
+		if not phase:
+			return
+		out = data.newActionGlobal(name, fs, fe, -2)
+		if out:
+			phase, myname = out
+			data.dmesg[phase]['list'][myname]['ftrace'] = self
+	def debugPrint(self):
+		print('[%f - %f] %s (%d)') % (self.start, self.end, self.list[0].name, self.pid)
+		for l in self.list:
+			if(l.freturn and l.fcall):
+				print('%f (%02d): %s(); (%.3f us)' % (l.time, \
+					l.depth, l.name, l.length*1000000))
+			elif(l.freturn):
+				print('%f (%02d): %s} (%.3f us)' % (l.time, \
+					l.depth, l.name, l.length*1000000))
+			else:
+				print('%f (%02d): %s() { (%.3f us)' % (l.time, \
+					l.depth, l.name, l.length*1000000))
+		print(' ')

 # Class: Timeline
 # Description:
-#	 A container for a suspend/resume html timeline. In older versions
-#	 of the script there were multiple timelines, but in the latest
-#	 there is only one.
+#	 A container for a device timeline which calculates
+#	 all the html properties to display it correctly
 class Timeline:
 	html = {}
-	scaleH = 0.0 # height of the row as a percent of the timeline height
-	rowH = 0.0 # height of each row in percent of the timeline height
-	row_height_pixels = 30
-	maxrows = 0
-	height = 0
-	def __init__(self):
+	height = 0	# total timeline height
+	scaleH = 20	# timescale (top) row height
+	rowH = 30	# device row height
+	bodyH = 0	# body height
+	rows = 0	# total timeline rows
+	phases = []
+	rowmaxlines = dict()
+	rowcount = dict()
+	rowheight = dict()
+	def __init__(self, rowheight):
+		self.rowH = rowheight
 		self.html = {
+			'header': '',
 			'timeline': '',
 			'legend': '',
-			'scale': ''
 		}
-	def setRows(self, rows):
-		self.maxrows = int(rows)
-		self.scaleH = 100.0/float(self.maxrows)
-		self.height = self.maxrows*self.row_height_pixels
-		r = float(self.maxrows - 1)
-		if(r < 1.0):
-			r = 1.0
-		self.rowH = (100.0 - self.scaleH)/r
+	# Function: getDeviceRows
+	# Description:
+	#	 determine how may rows the device funcs will take
+	# Arguments:
+	#	 rawlist: the list of devices/actions for a single phase
+	# Output:
+	#	 The total number of rows needed to display this phase of the timeline
+	def getDeviceRows(self, rawlist):
+		# clear all rows and set them to undefined
+		lendict = dict()
+		for item in rawlist:
+			item.row = -1
+			lendict[item] = item.length
+		list = []
+		for i in sorted(lendict, key=lendict.get, reverse=True):
+			list.append(i)
+		remaining = len(list)
+		rowdata = dict()
+		row = 1
+		# try to pack each row with as many ranges as possible
+		while(remaining > 0):
+			if(row not in rowdata):
+				rowdata[row] = []
+			for i in list:
+				if(i.row >= 0):
+					continue
+				s = i.time
+				e = i.time + i.length
+				valid = True
+				for ritem in rowdata[row]:
+					rs = ritem.time
+					re = ritem.time + ritem.length
+					if(not (((s <= rs) and (e <= rs)) or
+						((s >= re) and (e >= re)))):
+						valid = False
+						break
+				if(valid):
+					rowdata[row].append(i)
+					i.row = row
+					remaining -= 1
+			row += 1
+		return row
+	# Function: getPhaseRows
+	# Description:
+	#	 Organize the timeline entries into the smallest
+	#	 number of rows possible, with no entry overlapping
+	# Arguments:
+	#	 list: the list of devices/actions for a single phase
+	#	 devlist: string list of device names to use
+	# Output:
+	#	 The total number of rows needed to display this phase of the timeline
+	def getPhaseRows(self, dmesg, devlist):
+		# clear all rows and set them to undefined
+		remaining = len(devlist)
+		rowdata = dict()
+		row = 0
+		lendict = dict()
+		myphases = []
+		for item in devlist:
+			if item[0] not in self.phases:
+				self.phases.append(item[0])
+			if item[0] not in myphases:
+				myphases.append(item[0])
+				self.rowmaxlines[item[0]] = dict()
+				self.rowheight[item[0]] = dict()
+			dev = dmesg[item[0]]['list'][item[1]]
+			dev['row'] = -1
+			lendict[item] = float(dev['end']) - float(dev['start'])
+			if 'src' in dev:
+				dev['devrows'] = self.getDeviceRows(dev['src'])
+		lenlist = []
+		for i in sorted(lendict, key=lendict.get, reverse=True):
+			lenlist.append(i)
+		orderedlist = []
+		for item in lenlist:
+			dev = dmesg[item[0]]['list'][item[1]]
+			if dev['pid'] == -2:
+				orderedlist.append(item)
+		for item in lenlist:
+			if item not in orderedlist:
+				orderedlist.append(item)
+		# try to pack each row with as many ranges as possible
+		while(remaining > 0):
+			rowheight = 1
+			if(row not in rowdata):
+				rowdata[row] = []
+			for item in orderedlist:
+				dev = dmesg[item[0]]['list'][item[1]]
+				if(dev['row'] < 0):
+					s = dev['start']
+					e = dev['end']
+					valid = True
+					for ritem in rowdata[row]:
+						rs = ritem['start']
+						re = ritem['end']
+						if(not (((s <= rs) and (e <= rs)) or
+							((s >= re) and (e >= re)))):
+							valid = False
+							break
+					if(valid):
+						rowdata[row].append(dev)
+						dev['row'] = row
+						remaining -= 1
+						if 'devrows' in dev and dev['devrows'] > rowheight:
+							rowheight = dev['devrows']
+			for phase in myphases:
+				self.rowmaxlines[phase][row] = rowheight
+				self.rowheight[phase][row] = rowheight * self.rowH
+			row += 1
+		if(row > self.rows):
+			self.rows = int(row)
+		for phase in myphases:
+			self.rowcount[phase] = row
+		return row
+	def phaseRowHeight(self, phase, row):
+		return self.rowheight[phase][row]
+	def phaseRowTop(self, phase, row):
+		top = 0
+		for i in sorted(self.rowheight[phase]):
+			if i >= row:
+				break
+			top += self.rowheight[phase][i]
+		return top
+	# Function: calcTotalRows
+	# Description:
+	#	 Calculate the heights and offsets for the header and rows
+	def calcTotalRows(self):
+		maxrows = 0
+		standardphases = []
+		for phase in self.phases:
+			total = 0
+			for i in sorted(self.rowmaxlines[phase]):
+				total += self.rowmaxlines[phase][i]
+			if total > maxrows:
+				maxrows = total
+			if total == self.rowcount[phase]:
+				standardphases.append(phase)
+		self.height = self.scaleH + (maxrows*self.rowH)
+		self.bodyH = self.height - self.scaleH
+		for phase in standardphases:
+			for i in sorted(self.rowheight[phase]):
+				self.rowheight[phase][i] = self.bodyH/self.rowcount[phase]
+	# Function: createTimeScale
+	# Description:
+	#	 Create the timescale for a timeline block
+	# Arguments:
+	#	 m0: start time (mode begin)
+	#	 mMax: end time (mode end)
+	#	 tTotal: total timeline time
+	#	 mode: suspend or resume
+	# Output:
+	#	 The html code needed to display the time scale
+	def createTimeScale(self, m0, mMax, tTotal, mode):
+		timescale = '<div class="t" style="right:{0}%">{1}</div>\n'
+		rline = '<div class="t" style="left:0;border-left:1px solid black;border-right:0;">Resume</div>\n'
+		output = '<div class="timescale">\n'
+		# set scale for timeline
+		mTotal = mMax - m0
+		tS = 0.1
+		if(tTotal <= 0):
+			return output+'</div>\n'
+		if(tTotal > 4):
+			tS = 1
+		divTotal = int(mTotal/tS) + 1
+		divEdge = (mTotal - tS*(divTotal-1))*100/mTotal
+		for i in range(divTotal):
+			htmlline = ''
+			if(mode == 'resume'):
+				pos = '%0.3f' % (100 - ((float(i)*tS*100)/mTotal))
+				val = '%0.fms' % (float(i)*tS*1000)
+				htmlline = timescale.format(pos, val)
+				if(i == 0):
+					htmlline = rline
+			else:
+				pos = '%0.3f' % (100 - ((float(i)*tS*100)/mTotal) - divEdge)
+				val = '%0.fms' % (float(i-divTotal+1)*tS*1000)
+				if(i == divTotal - 1):
+					val = 'Suspend'
+				htmlline = timescale.format(pos, val)
+			output += htmlline
+		output += '</div>\n'
+		return output

-# Class: TestRun
+# Class: TestProps
 # Description:
-#	 A container for a suspend/resume test run. This is necessary as
-#	 there could be more than one, and they need to be separate.
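`getDeviceRows()` and `getPhaseRows()` above share one packing strategy: sort the entries longest-first, then greedily drop each one into the first row where it overlaps nothing already placed. A minimal sketch of that strategy on bare `(start, length)` tuples; `pack_rows` is a hypothetical stand-in, not the script's method:

```python
# Sketch of the greedy packing used by getDeviceRows: sort entries
# longest-first, then fill each row with as many non-overlapping
# time ranges as will fit before opening a new row.
def pack_rows(items):
    # items: list of (start, length) tuples; returns (row count, {index: row})
    order = sorted(range(len(items)), key=lambda i: items[i][1], reverse=True)
    rows = []       # each row holds the (start, end) spans already placed
    placement = {}  # item index -> row number
    for i in order:
        s = items[i][0]
        e = s + items[i][1]
        for r, spans in enumerate(rows):
            # a span fits if it does not overlap anything already in the row
            if all(e <= a or s >= b for a, b in spans):
                spans.append((s, e))
                placement[i] = r
                break
        else:
            rows.append([(s, e)])
            placement[i] = len(rows) - 1
    return len(rows), placement

nrows, placement = pack_rows([(0.0, 2.0), (0.5, 1.0), (3.0, 1.0)])
```

Here the long range and the disjoint short one share row 0, while the overlapping short range opens row 1, which is exactly how the timeline keeps each phase as flat as possible.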
-class TestRun:
+#	 A list of values describing the properties of these test runs
+class TestProps:
+	stamp = ''
+	tracertype = ''
+	S0i3 = False
+	fwdata = []
 	ftrace_line_fmt_fg = \
 		'^ *(?P<time>[0-9\.]*) *\| *(?P<cpu>[0-9]*)\)'+\
 		' *(?P<proc>.*)-(?P<pid>[0-9]*) *\|'+\
-		'[ +!]*(?P<dur>[0-9\.]*) .*\| (?P<msg>.*)'
+		'[ +!#\*@$]*(?P<dur>[0-9\.]*) .*\| (?P<msg>.*)'
 	ftrace_line_fmt_nop = \
 		' *(?P<proc>.*)-(?P<pid>[0-9]*) *\[(?P<cpu>[0-9]*)\] *'+\
 		'(?P<flags>.{4}) *(?P<time>[0-9\.]*): *'+\
 		'(?P<msg>.*)'
 	ftrace_line_fmt = ftrace_line_fmt_nop
 	cgformat = False
-	ftemp = dict()
-	ttemp = dict()
-	inthepipe = False
-	tracertype = ''
 	data = 0
-	def __init__(self, dataobj):
-		self.data = dataobj
-		self.ftemp = dict()
-		self.ttemp = dict()
-	def isReady(self):
-		if(tracertype == '' or not data):
-			return False
-		return True
+	ktemp = dict()
+	def __init__(self):
+		self.ktemp = dict()
 	def setTracerType(self, tracer):
 		self.tracertype = tracer
 		if(tracer == 'function_graph'):
···
 			self.ftrace_line_fmt = self.ftrace_line_fmt_nop
 		else:
 			doError('Invalid tracer format: [%s]' % tracer, False)
+
+# Class: TestRun
+# Description:
+#	 A container for a suspend/resume test run. This is necessary as
+#	 there could be more than one, and they need to be separate.
+class TestRun:
+	ftemp = dict()
+	ttemp = dict()
+	data = 0
+	def __init__(self, dataobj):
+		self.data = dataobj
+		self.ftemp = dict()
+		self.ttemp = dict()

 # ----------------- FUNCTIONS --------------------
···
 	if(sysvals.verbose):
 		print(msg)

-# Function: initFtrace
-# Description:
-#	 Configure ftrace to use trace events and/or a callgraph
-def initFtrace():
-	global sysvals
-
-	tp = sysvals.tpath
-	cf = 'dpm_run_callback'
-	if(sysvals.usetraceeventsonly):
-		cf = '-e dpm_prepare -e dpm_complete -e dpm_run_callback'
-	if(sysvals.usecallgraph or sysvals.usetraceevents):
-		print('INITIALIZING FTRACE...')
-		# turn trace off
-		os.system('echo 0 > '+tp+'tracing_on')
-		# set the trace clock to global
-		os.system('echo global > '+tp+'trace_clock')
-		# set trace buffer to a huge value
-		os.system('echo nop > '+tp+'current_tracer')
-		os.system('echo 100000 > '+tp+'buffer_size_kb')
-		# initialize the callgraph trace, unless this is an x2 run
-		if(sysvals.usecallgraph and sysvals.execcount == 1):
-			# set trace type
-			os.system('echo function_graph > '+tp+'current_tracer')
-			os.system('echo "" > '+tp+'set_ftrace_filter')
-			# set trace format options
-			os.system('echo funcgraph-abstime > '+tp+'trace_options')
-			os.system('echo funcgraph-proc > '+tp+'trace_options')
-			# focus only on device suspend and resume
-			os.system('cat '+tp+'available_filter_functions | grep '+\
-				cf+' > '+tp+'set_graph_function')
-		if(sysvals.usetraceevents):
-			# turn trace events on
-			events = iter(sysvals.traceevents)
-			for e in events:
-				os.system('echo 1 > '+sysvals.epath+e+'/enable')
-		# clear the trace buffer
-		os.system('echo "" > '+tp+'trace')
-
-# Function: initFtraceAndroid
-# Description:
-#	 Configure ftrace to capture trace events
-def initFtraceAndroid():
-	global sysvals
-
-	tp = sysvals.tpath
-	if(sysvals.usetraceevents):
-		print('INITIALIZING FTRACE...')
-		# turn trace off
-		os.system(sysvals.adb+" shell 'echo 0 > "+tp+"tracing_on'")
-		# set the trace clock to global
-		os.system(sysvals.adb+" shell 'echo global > "+tp+"trace_clock'")
-		# set trace buffer to a huge value
-		os.system(sysvals.adb+" shell 'echo nop > "+tp+"current_tracer'")
-		os.system(sysvals.adb+" shell 'echo 10000 > "+tp+"buffer_size_kb'")
-		# turn trace events on
-		events = iter(sysvals.traceevents)
-		for e in events:
-			os.system(sysvals.adb+" shell 'echo 1 > "+\
-				sysvals.epath+e+"/enable'")
-		# clear the trace buffer
-		os.system(sysvals.adb+" shell 'echo \"\" > "+tp+"trace'")
-
-# Function: verifyFtrace
-# Description:
-#	 Check that ftrace is working on the system
-# Output:
-#	 True or False
-def verifyFtrace():
-	global sysvals
-	# files needed for any trace data
-	files = ['buffer_size_kb', 'current_tracer', 'trace', 'trace_clock',
-		'trace_marker', 'trace_options', 'tracing_on']
-	# files needed for callgraph trace data
-	tp = sysvals.tpath
-	if(sysvals.usecallgraph):
-		files += [
-			'available_filter_functions',
-			'set_ftrace_filter',
-			'set_graph_function'
-		]
-	for f in files:
-		if(sysvals.android):
-			out = os.popen(sysvals.adb+' shell ls '+tp+f).read().strip()
-			if(out != tp+f):
-				return False
-		else:
-			if(os.path.exists(tp+f) == False):
-				return False
-	return True
-
 # Function: parseStamp
 # Description:
 #	 Pull in the stamp comment line from the data file(s),
 #	 create the stamp, and add it to the global sysvals object
 # Arguments:
 #	 m: the valid re.match output for the stamp line
-def parseStamp(m, data):
+def parseStamp(line, data):
 	global sysvals
+
+	m = re.match(sysvals.stampfmt, line)
 	data.stamp = {'time': '', 'host': '', 'mode': ''}
 	dt = datetime(int(m.group('y'))+2000, int(m.group('m')),
 		int(m.group('d')), int(m.group('H')), int(m.group('M')),
···
 	data.stamp['host'] = m.group('host')
 	data.stamp['mode'] = m.group('mode')
 	data.stamp['kernel'] = m.group('kernel')
+	sysvals.hostname = data.stamp['host']
 	sysvals.suspendmode = data.stamp['mode']
 	if not sysvals.stamp:
 		sysvals.stamp = data.stamp
···
 def doesTraceLogHaveTraceEvents():
 	global sysvals

+	# check for kprobes
+	sysvals.usekprobes = False
+	out = os.system('grep -q "_cal: (" '+sysvals.ftracefile)
+	if(out == 0):
+		sysvals.usekprobes = True
+	# check for callgraph data on trace event blocks
+	out = os.system('grep -q "_cpu_down()" '+sysvals.ftracefile)
+	if(out == 0):
+		sysvals.usekprobes = True
+	out = os.popen('head -1 '+sysvals.ftracefile).read().replace('\n', '')
+	m = re.match(sysvals.stampfmt, out)
+	if m and m.group('mode') == 'command':
+		sysvals.usetraceeventsonly = True
+		sysvals.usetraceevents = True
+		return
+	# figure out what level of trace events are supported
 	sysvals.usetraceeventsonly = True
 	sysvals.usetraceevents = False
 	for e in sysvals.traceevents:
-		out = os.popen('cat '+sysvals.ftracefile+' | grep "'+e+': "').read()
-		if(not out):
+		out = os.system('grep -q "'+e+': " '+sysvals.ftracefile)
+		if(out != 0):
 			sysvals.usetraceeventsonly = False
-		if(e == 'suspend_resume' and out):
+		if(e == 'suspend_resume' and out == 0):
 			sysvals.usetraceevents = True
+	# determine is this log is properly formatted
+	for e in ['SUSPEND START', 'RESUME COMPLETE']:
+		out = os.system('grep -q "'+e+'" '+sysvals.ftracefile)
+		if(out != 0):
+			sysvals.usetracemarkers = False

 # Function: appendIncompleteTraceLog
 # Description:
···
 	# create TestRun vessels for ftrace parsing
 	testcnt = len(testruns)
-	testidx = -1
+	testidx = 0
 	testrun = []
 	for data in testruns:
 		testrun.append(TestRun(data))

 	# extract the callgraph and traceevent data
 	vprint('Analyzing the ftrace data...')
+	tp = TestProps()
 	tf = open(sysvals.ftracefile, 'r')
+	data = 0
 	for line in tf:
 		# remove any latent carriage returns
 		line = line.replace('\r\n', '')
-		# grab the time stamp first (signifies the start of the test run)
+		# grab the time stamp
 		m = re.match(sysvals.stampfmt, line)
 		if(m):
-			testidx += 1
-			parseStamp(m, testrun[testidx].data)
-			continue
-		# pull out any firmware data
-		if(re.match(sysvals.firmwarefmt, line)):
-			continue
-		# if we havent found a test time stamp yet keep spinning til we do
-		if(testidx < 0):
+			tp.stamp = line
 			continue
 		# determine the trace data type (required for further parsing)
 		m = re.match(sysvals.tracertypefmt, line)
 		if(m):
-			tracer = m.group('t')
-			testrun[testidx].setTracerType(tracer)
+			tp.setTracerType(m.group('t'))
 			continue
-		# parse only valid lines, if this isnt one move on
-		m = re.match(testrun[testidx].ftrace_line_fmt, line)
+		# device properties line
+		if(re.match(sysvals.devpropfmt, line)):
+			devProps(line)
+			continue
+		# parse only valid lines, if this is not one move on
+		m = re.match(tp.ftrace_line_fmt, line)
 		if(not m):
 			continue
 		# gather the basic message data from the line
 		m_time = m.group('time')
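All later parsing keys off `TestProps.ftrace_line_fmt`. The following sketch exercises the nop-format pattern from the class against a hand-written sample line to show which fields it captures; the sample line itself is invented for illustration:

```python
import re

# The nop-format ftrace line pattern, as assembled in TestProps
ftrace_line_fmt_nop = \
    r' *(?P<proc>.*)-(?P<pid>[0-9]*) *\[(?P<cpu>[0-9]*)\] *' + \
    r'(?P<flags>.{4}) *(?P<time>[0-9\.]*): *' + \
    r'(?P<msg>.*)'

# A hand-written sample line (invented for illustration)
sample = '  kworker/u16:4-202   [002] ....  1001.634066: ' \
    'suspend_resume: suspend_enter[3] begin'
m = re.match(ftrace_line_fmt_nop, sample)
```

The greedy `proc` group backtracks to the last `-` before the pid, so task names containing dashes or colons still parse; `flags` is simply the four-character latency field.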
 		m_pid = m.group('pid')
 		m_msg = m.group('msg')
-		if(testrun[testidx].cgformat):
+		if(tp.cgformat):
 			m_param3 = m.group('dur')
 		else:
 			m_param3 = 'traceevent'
···
 		# the line should be a call, return, or event
 		if(not t.fcall and not t.freturn and not t.fevent):
 			continue
-		# only parse the ftrace data during suspend/resume
-		data = testrun[testidx].data
-		if(not testrun[testidx].inthepipe):
-			# look for the suspend start marker
-			if(t.fevent):
-				if(t.name == 'SUSPEND START'):
-					testrun[testidx].inthepipe = True
-					data.setStart(t.time)
+		# look for the suspend start marker
+		if(t.startMarker()):
+			data = testrun[testidx].data
+			parseStamp(tp.stamp, data)
+			data.setStart(t.time)
+			continue
+		if(not data):
+			continue
+		# find the end of resume
+		if(t.endMarker()):
+			data.setEnd(t.time)
+			testidx += 1
+			if(testidx >= testcnt):
+				break
+			continue
+		# trace event processing
+		if(t.fevent):
+			# general trace events have two types, begin and end
+			if(re.match('(?P<name>.*) begin$', t.name)):
+				isbegin = True
+			elif(re.match('(?P<name>.*) end$', t.name)):
+				isbegin = False
+			else:
 				continue
-		else:
-			# trace event processing
-			if(t.fevent):
-				if(t.name == 'RESUME COMPLETE'):
-					testrun[testidx].inthepipe = False
-					data.setEnd(t.time)
-					if(testidx == testcnt - 1):
-						break
-					continue
-				# general trace events have two types, begin and end
-				if(re.match('(?P<name>.*) begin$', t.name)):
-					isbegin = True
-				elif(re.match('(?P<name>.*) end$', t.name)):
-					isbegin = False
-				else:
-					continue
-				m = re.match('(?P<name>.*)\[(?P<val>[0-9]*)\] .*', t.name)
-				if(m):
-					val = m.group('val')
-					if val == '0':
-						name = m.group('name')
-					else:
-						name = m.group('name')+'['+val+']'
-				else:
-					m = re.match('(?P<name>.*) .*', t.name)
+			m = re.match('(?P<name>.*)\[(?P<val>[0-9]*)\] .*', t.name)
+			if(m):
+				val = m.group('val')
+				if val == '0':
 					name = m.group('name')
-				# special processing for trace events
-				if re.match('dpm_prepare\[.*', name):
-					continue
-				elif re.match('machine_suspend.*', name):
-					continue
-				elif re.match('suspend_enter\[.*', name):
-					if(not isbegin):
-						data.dmesg['suspend_prepare']['end'] = t.time
-					continue
-				elif re.match('dpm_suspend\[.*', name):
-					if(not isbegin):
-						data.dmesg['suspend']['end'] = t.time
-					continue
-				elif re.match('dpm_suspend_late\[.*', name):
-					if(isbegin):
-						data.dmesg['suspend_late']['start'] = t.time
-					else:
-						data.dmesg['suspend_late']['end'] = t.time
-					continue
-				elif re.match('dpm_suspend_noirq\[.*', name):
-					if(isbegin):
-						data.dmesg['suspend_noirq']['start'] = t.time
-					else:
-						data.dmesg['suspend_noirq']['end'] = t.time
-					continue
-				elif re.match('dpm_resume_noirq\[.*', name):
-					if(isbegin):
-						data.dmesg['resume_machine']['end'] = t.time
-						data.dmesg['resume_noirq']['start'] = t.time
-					else:
-						data.dmesg['resume_noirq']['end'] = t.time
-					continue
-				elif re.match('dpm_resume_early\[.*', name):
-					if(isbegin):
-						data.dmesg['resume_early']['start'] = t.time
-					else:
-						data.dmesg['resume_early']['end'] = t.time
-					continue
-				elif re.match('dpm_resume\[.*', name):
-					if(isbegin):
-						data.dmesg['resume']['start'] = t.time
-					else:
-						data.dmesg['resume']['end'] = t.time
-					continue
-				elif re.match('dpm_complete\[.*', name):
-					if(isbegin):
-						data.dmesg['resume_complete']['start'] = t.time
-					else:
-						data.dmesg['resume_complete']['end'] = t.time
-					continue
-				# is this trace event outside of the devices calls
-				if(data.isTraceEventOutsideDeviceCalls(pid, t.time)):
-					# global events (outside device calls) are simply graphed
-					if(isbegin):
-						# store each trace event in ttemp
-						if(name not in testrun[testidx].ttemp):
-							testrun[testidx].ttemp[name] = []
-						testrun[testidx].ttemp[name].append(\
-							{'begin': t.time, 'end': t.time})
-					else:
-						# finish off matching trace event in ttemp
-						if(name in testrun[testidx].ttemp):
-							testrun[testidx].ttemp[name][-1]['end'] = t.time
 				else:
-					if(isbegin):
-						data.addIntraDevTraceEvent('', name, pid, t.time)
-					else:
-						data.capIntraDevTraceEvent('', name, pid, t.time)
-		# call/return processing
-		elif sysvals.usecallgraph:
-			# create a callgraph object for the data
-			if(pid not in testrun[testidx].ftemp):
-				testrun[testidx].ftemp[pid] = []
-				testrun[testidx].ftemp[pid].append(FTraceCallGraph())
-			# when the call is finished, see which device matches it
-			cg = testrun[testidx].ftemp[pid][-1]
-			if(cg.addLine(t, m)):
-				testrun[testidx].ftemp[pid].append(FTraceCallGraph())
+					name = m.group('name')+'['+val+']'
+			else:
+				m = re.match('(?P<name>.*) .*', t.name)
+				name = m.group('name')
+			# special processing for trace events
+			if re.match('dpm_prepare\[.*', name):
+				continue
+			elif re.match('machine_suspend.*', name):
+				continue
+			elif re.match('suspend_enter\[.*', name):
+				if(not isbegin):
+					data.dmesg['suspend_prepare']['end'] = t.time
+				continue
+			elif re.match('dpm_suspend\[.*', name):
+				if(not isbegin):
+					data.dmesg['suspend']['end'] = t.time
+				continue
+			elif re.match('dpm_suspend_late\[.*', name):
+				if(isbegin):
+					data.dmesg['suspend_late']['start'] = t.time
+				else:
+					data.dmesg['suspend_late']['end'] = t.time
+				continue
+			elif re.match('dpm_suspend_noirq\[.*', name):
+				if(isbegin):
+					data.dmesg['suspend_noirq']['start'] = t.time
+				else:
+					data.dmesg['suspend_noirq']['end'] = t.time
+				continue
+			elif re.match('dpm_resume_noirq\[.*', name):
+				if(isbegin):
+					data.dmesg['resume_machine']['end'] = t.time
+					data.dmesg['resume_noirq']['start'] = t.time
+				else:
+					data.dmesg['resume_noirq']['end'] = t.time
+				continue
+			elif re.match('dpm_resume_early\[.*', name):
+				if(isbegin):
+					data.dmesg['resume_early']['start'] = t.time
+				else:
+					data.dmesg['resume_early']['end'] = t.time
+				continue
+			elif re.match('dpm_resume\[.*', name):
+				if(isbegin):
+					data.dmesg['resume']['start'] = t.time
+				else:
+					data.dmesg['resume']['end'] = t.time
+				continue
+			elif re.match('dpm_complete\[.*', name):
+				if(isbegin):
+					data.dmesg['resume_complete']['start'] = t.time
+				else:
+					data.dmesg['resume_complete']['end'] = t.time
+				continue
+			# skip trace events inside devices calls
+			if(not data.isTraceEventOutsideDeviceCalls(pid, t.time)):
+				continue
+			# global events (outside device calls) are simply graphed
+			if(isbegin):
+				# store each trace event in ttemp
+				if(name not in testrun[testidx].ttemp):
+					testrun[testidx].ttemp[name] = []
+				testrun[testidx].ttemp[name].append(\
+					{'begin': t.time, 'end': t.time})
+			else:
+				# finish off matching trace event in ttemp
+				if(name in testrun[testidx].ttemp):
+					testrun[testidx].ttemp[name][-1]['end'] = t.time
+		# call/return processing
+		elif sysvals.usecallgraph:
+			# create a callgraph object for the data
+			if(pid not in testrun[testidx].ftemp):
+				testrun[testidx].ftemp[pid] = []
+				testrun[testidx].ftemp[pid].append(FTraceCallGraph(pid))
+			# when the call is finished, see which device matches it
+			cg = testrun[testidx].ftemp[pid][-1]
+			if(cg.addLine(t)):
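The `dpm_*` elif chain above stamps phase start and end times from the `begin`/`end` trace events. The same bookkeeping can be sketched as a lookup from `(event, isbegin)` to the field it sets; `phase_marks` here is a hypothetical condensation covering two of the events, not code from the script:

```python
# Sketch of the begin/end bookkeeping behind the dpm_* elif chain:
# a table maps each event to the (phase, field) it stamps, instead
# of one branch per event.  Event and phase names follow the script.
phase_marks = {
    ('dpm_suspend_late', True):  ('suspend_late', 'start'),
    ('dpm_suspend_late', False): ('suspend_late', 'end'),
    ('dpm_resume_early', True):  ('resume_early', 'start'),
    ('dpm_resume_early', False): ('resume_early', 'end'),
}

def mark(dmesg, name, isbegin, time):
    # stamp the phase boundary this event represents, if any
    key = (name, isbegin)
    if key in phase_marks:
        phase, field = phase_marks[key]
        dmesg.setdefault(phase, {})[field] = time
    return dmesg

d = mark({}, 'dpm_suspend_late', True, 100.25)
d = mark(d, 'dpm_suspend_late', False, 100.75)
```

The table form makes the symmetry explicit: every phase boundary is just a timestamped field write, with `dpm_resume_noirq` being the one event in the real chain that writes two fields at once.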
+				testrun[testidx].ftemp[pid].append(FTraceCallGraph(pid))
 	tf.close()

 	for test in testrun:
···
 		if(sysvals.usetraceevents):
 			for name in test.ttemp:
 				for event in test.ttemp[name]:
-					begin = event['begin']
-					end = event['end']
-					# if event starts before timeline start, expand timeline
-					if(begin < test.data.start):
-						test.data.setStart(begin)
-					# if event ends after timeline end, expand the timeline
-					if(end > test.data.end):
-						test.data.setEnd(end)
-					test.data.newActionGlobal(name, begin, end)
+					test.data.newActionGlobal(name, event['begin'], event['end'])

 		# add the callgraph data to the device hierarchy
 		for pid in test.ftemp:
 			for cg in test.ftemp[pid]:
-				if(not cg.sanityCheck()):
+				if len(cg.list) < 1 or cg.invalid:
+					continue
+				if(not cg.postProcess()):
 					id = 'task %s cpu %s' % (pid, m.group('cpu'))
 					vprint('Sanity check failed for '+\
 						id+', ignoring this callback')
···
 		if(sysvals.verbose):
 			test.data.printDetails()

-
-	# add the time in between the tests as a new phase so we can see it
-	if(len(testruns) > 1):
-		t1e = testruns[0].getEnd()
-		t2s = testruns[-1].getStart()
-		testruns[-1].newPhaseWithSingleAction('user mode', \
-			'user mode', t1e, t2s, '#FF9966')
-
 # Function: parseTraceLog
 # Description:
 #	 Analyze an ftrace log output file generated from this app during
···

 	vprint('Analyzing the ftrace data...')
 	if(os.path.exists(sysvals.ftracefile) == False):
-		doError('%s doesnt exist' % sysvals.ftracefile, False)
+		doError('%s does not exist' % sysvals.ftracefile, False)
+
+	sysvals.setupAllKprobes()
+	tracewatch = ['suspend_enter']
+	if sysvals.usekprobes:
+		tracewatch += ['sync_filesystems', 'freeze_processes', 'syscore_suspend',
+			'syscore_resume', 'resume_console', 'thaw_processes', 'CPU_ON', 'CPU_OFF']

 	# extract the callgraph and traceevent data
+	tp = TestProps()
 	testruns = []
 	testdata = []
 	testrun = 0
···
 		# stamp line: each stamp means a new test run
 		m = re.match(sysvals.stampfmt, line)
 		if(m):
-			data = Data(len(testdata))
-			testdata.append(data)
-			testrun = TestRun(data)
-			testruns.append(testrun)
-			parseStamp(m, data)
-			continue
-		if(not data):
+			tp.stamp = line
 			continue
 		# firmware line: pull out any firmware data
 		m = re.match(sysvals.firmwarefmt, line)
 		if(m):
-			data.fwSuspend = int(m.group('s'))
-			data.fwResume = int(m.group('r'))
-			if(data.fwSuspend > 0 or data.fwResume > 0):
-				data.fwValid = True
+			tp.fwdata.append((int(m.group('s')), int(m.group('r'))))
 			continue
 		# tracer type line: determine the trace data type
 		m = re.match(sysvals.tracertypefmt, line)
 		if(m):
-			tracer = m.group('t')
-			testrun.setTracerType(tracer)
+			tp.setTracerType(m.group('t'))
 			continue
 		# post resume time line: did this test run include post-resume data
 		m = re.match(sysvals.postresumefmt, line)
···
 			if(t > 0):
 				sysvals.postresumetime = t
 			continue
+		# device properties line
+		if(re.match(sysvals.devpropfmt, line)):
+			devProps(line)
+			continue
 		# ftrace line: parse only valid lines
-		m = re.match(testrun.ftrace_line_fmt, line)
+		m = re.match(tp.ftrace_line_fmt, line)
 		if(not m):
 			continue
 		# gather the basic message data from the line
 		m_time = m.group('time')
+		m_proc = m.group('proc')
 		m_pid = m.group('pid')
 		m_msg = m.group('msg')
-		if(testrun.cgformat):
if(tp.cgformat): 2121 1341 m_param3 = m.group('dur') 2122 1342 else: 2123 1343 m_param3 = 'traceevent' ··· 2134 1344 # the line should be a call, return, or event 2135 1345 if(not t.fcall and not t.freturn and not t.fevent): 2136 1346 continue 2137 - # only parse the ftrace data during suspend/resume 2138 - if(not testrun.inthepipe): 2139 - # look for the suspend start marker 2140 - if(t.fevent): 2141 - if(t.name == 'SUSPEND START'): 2142 - testrun.inthepipe = True 2143 - data.setStart(t.time) 1347 + # find the start of suspend 1348 + if(t.startMarker()): 1349 + phase = 'suspend_prepare' 1350 + data = Data(len(testdata)) 1351 + testdata.append(data) 1352 + testrun = TestRun(data) 1353 + testruns.append(testrun) 1354 + parseStamp(tp.stamp, data) 1355 + if len(tp.fwdata) > data.testnumber: 1356 + data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber] 1357 + if(data.fwSuspend > 0 or data.fwResume > 0): 1358 + data.fwValid = True 1359 + data.setStart(t.time) 1360 + continue 1361 + if(not data): 1362 + continue 1363 + # find the end of resume 1364 + if(t.endMarker()): 1365 + if(sysvals.usetracemarkers and sysvals.postresumetime > 0): 1366 + phase = 'post_resume' 1367 + data.newPhase(phase, t.time, t.time, '#F0F0F0', -1) 1368 + data.setEnd(t.time) 1369 + if(not sysvals.usetracemarkers): 1370 + # no trace markers? 
then quit and be sure to finish recording 1371 + # the event we used to trigger resume end 1372 + if(len(testrun.ttemp['thaw_processes']) > 0): 1373 + # if an entry exists, assume this is its end 1374 + testrun.ttemp['thaw_processes'][-1]['end'] = t.time 1375 + break 2144 1376 continue 2145 1377 # trace event processing 2146 1378 if(t.fevent): 2147 - if(t.name == 'RESUME COMPLETE'): 2148 - if(sysvals.postresumetime > 0): 2149 - phase = 'post_resume' 2150 - data.newPhase(phase, t.time, t.time, '#FF9966', -1) 2151 - else: 2152 - testrun.inthepipe = False 2153 - data.setEnd(t.time) 2154 - continue 2155 1379 if(phase == 'post_resume'): 2156 1380 data.setEnd(t.time) 2157 1381 if(t.type == 'suspend_resume'): ··· 2187 1383 m = re.match('(?P<name>.*) .*', t.name) 2188 1384 name = m.group('name') 2189 1385 # ignore these events 2190 - if(re.match('acpi_suspend\[.*', t.name) or 2191 - re.match('suspend_enter\[.*', name)): 1386 + if(name.split('[')[0] in tracewatch): 2192 1387 continue 2193 1388 # -- phase changes -- 2194 1389 # suspend_prepare start ··· 2221 1418 data.dmesg[phase]['end'] = t.time 2222 1419 data.tSuspended = t.time 2223 1420 else: 2224 - if(sysvals.suspendmode in ['mem', 'disk']): 1421 + if(sysvals.suspendmode in ['mem', 'disk'] and not tp.S0i3): 2225 1422 data.dmesg['suspend_machine']['end'] = t.time 2226 1423 data.tSuspended = t.time 2227 1424 phase = 'resume_machine' 2228 1425 data.dmesg[phase]['start'] = t.time 2229 1426 data.tResumed = t.time 2230 1427 data.tLow = data.tResumed - data.tSuspended 1428 + continue 1429 + # acpi_suspend 1430 + elif(re.match('acpi_suspend\[.*', t.name)): 1431 + # acpi_suspend[0] S0i3 1432 + if(re.match('acpi_suspend\[0\] begin', t.name)): 1433 + if(sysvals.suspendmode == 'mem'): 1434 + tp.S0i3 = True 1435 + data.dmesg['suspend_machine']['end'] = t.time 1436 + data.tSuspended = t.time 2231 1437 continue 2232 1438 # resume_noirq start 2233 1439 elif(re.match('dpm_resume_noirq\[.*', t.name)): ··· 2261 1449 if(isbegin): 2262 1450 
data.dmesg[phase]['start'] = t.time 2263 1451 continue 2264 - 2265 - # is this trace event outside of the devices calls 2266 - if(data.isTraceEventOutsideDeviceCalls(pid, t.time)): 2267 - # global events (outside device calls) are simply graphed 2268 - if(name not in testrun.ttemp): 2269 - testrun.ttemp[name] = [] 2270 - if(isbegin): 2271 - # create a new list entry 2272 - testrun.ttemp[name].append(\ 2273 - {'begin': t.time, 'end': t.time}) 2274 - else: 2275 - if(len(testrun.ttemp[name]) > 0): 2276 - # if an antry exists, assume this is its end 2277 - testrun.ttemp[name][-1]['end'] = t.time 2278 - elif(phase == 'post_resume'): 2279 - # post resume events can just have ends 2280 - testrun.ttemp[name].append({ 2281 - 'begin': data.dmesg[phase]['start'], 2282 - 'end': t.time}) 1452 + # skip trace events inside devices calls 1453 + if(not data.isTraceEventOutsideDeviceCalls(pid, t.time)): 1454 + continue 1455 + # global events (outside device calls) are graphed 1456 + if(name not in testrun.ttemp): 1457 + testrun.ttemp[name] = [] 1458 + if(isbegin): 1459 + # create a new list entry 1460 + testrun.ttemp[name].append(\ 1461 + {'begin': t.time, 'end': t.time, 'pid': pid}) 2283 1462 else: 2284 - if(isbegin): 2285 - data.addIntraDevTraceEvent('', name, pid, t.time) 2286 - else: 2287 - data.capIntraDevTraceEvent('', name, pid, t.time) 1463 + if(len(testrun.ttemp[name]) > 0): 1464 + # if an entry exists, assume this is its end 1465 + testrun.ttemp[name][-1]['end'] = t.time 1466 + elif(phase == 'post_resume'): 1467 + # post resume events can just have ends 1468 + testrun.ttemp[name].append({ 1469 + 'begin': data.dmesg[phase]['start'], 1470 + 'end': t.time}) 2288 1471 # device callback start 2289 1472 elif(t.type == 'device_pm_callback_start'): 2290 1473 m = re.match('(?P<drv>.*) (?P<d>.*), parent: *(?P<p>.*), .*',\ ··· 2302 1495 dev = list[n] 2303 1496 dev['length'] = t.time - dev['start'] 2304 1497 dev['end'] = t.time 1498 + # kprobe event processing 1499 + elif(t.fkprobe): 
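Both parsers gate global trace events on `isTraceEventOutsideDeviceCalls()`: an event is only graphed globally when its timestamp does not fall inside any device callback interval recorded for the same pid. A hypothetical sketch of that containment test, where the interval-list format and the function name are assumptions for illustration rather than the tool's actual `Data` class API:

```python
# Return True when event time t for this pid lies outside every recorded
# device callback interval; such events get graphed as global events.
def outside_device_calls(devcalls, pid, t):
    # devcalls: list of {'pid': ..., 'start': ..., 'end': ...} dicts
    for d in devcalls:
        if d['pid'] == pid and d['start'] <= t <= d['end']:
            return False
    return True
```

An event inside a device call for the same pid is skipped; events for other pids, or outside every interval, pass through.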
+			kprobename = t.type
+			kprobedata = t.name
+			key = (kprobename, pid)
+			# displayname is generated from kprobe data
+			displayname = ''
+			if(t.fcall):
+				displayname = sysvals.kprobeDisplayName(kprobename, kprobedata)
+				if not displayname:
+					continue
+				if(key not in tp.ktemp):
+					tp.ktemp[key] = []
+				tp.ktemp[key].append({
+					'pid': pid,
+					'begin': t.time,
+					'end': t.time,
+					'name': displayname,
+					'cdata': kprobedata,
+					'proc': m_proc,
+				})
+			elif(t.freturn):
+				if(key not in tp.ktemp) or len(tp.ktemp[key]) < 1:
+					continue
+				e = tp.ktemp[key][-1]
+				if e['begin'] < 0.0 or t.time - e['begin'] < 0.000001:
+					tp.ktemp[key].pop()
+				else:
+					e['end'] = t.time
+					e['rdata'] = kprobedata
 		# callgraph processing
 		elif sysvals.usecallgraph:
-			# this shouldn't happen, but JIC, ignore callgraph data post-res
-			if(phase == 'post_resume'):
-				continue
 			# create a callgraph object for the data
-			if(pid not in testrun.ftemp):
-				testrun.ftemp[pid] = []
-				testrun.ftemp[pid].append(FTraceCallGraph())
+			key = (m_proc, pid)
+			if(key not in testrun.ftemp):
+				testrun.ftemp[key] = []
+				testrun.ftemp[key].append(FTraceCallGraph(pid))
 			# when the call is finished, see which device matches it
-			cg = testrun.ftemp[pid][-1]
-			if(cg.addLine(t, m)):
-				testrun.ftemp[pid].append(FTraceCallGraph())
+			cg = testrun.ftemp[key][-1]
+			if(cg.addLine(t)):
+				testrun.ftemp[key].append(FTraceCallGraph(pid))
 	tf.close()
+
+	if sysvals.suspendmode == 'command':
+		for test in testruns:
+			for p in test.data.phases:
+				if p == 'resume_complete':
+					test.data.dmesg[p]['start'] = test.data.start
+					test.data.dmesg[p]['end'] = test.data.end
+				else:
+					test.data.dmesg[p]['start'] = test.data.start
+					test.data.dmesg[p]['end'] = test.data.start
+			test.data.tSuspended = test.data.start
+			test.data.tResumed = test.data.start
+			test.data.tLow = 0
+			test.data.fwValid = False

 	for test in testruns:
 		# add the traceevent data to the device hierarchy
 		if(sysvals.usetraceevents):
+			# add actual trace funcs
 			for name in test.ttemp:
 				for event in test.ttemp[name]:
-					begin = event['begin']
-					end = event['end']
-					# if event starts before timeline start, expand timeline
-					if(begin < test.data.start):
-						test.data.setStart(begin)
-					# if event ends after timeline end, expand the timeline
-					if(end > test.data.end):
-						test.data.setEnd(end)
-					test.data.newActionGlobal(name, begin, end)
+					test.data.newActionGlobal(name, event['begin'], event['end'], event['pid'])
+			# add the kprobe based virtual tracefuncs as actual devices
+			for key in tp.ktemp:
+				name, pid = key
+				if name not in sysvals.tracefuncs:
+					continue
+				for e in tp.ktemp[key]:
+					kb, ke = e['begin'], e['end']
+					if kb == ke or not test.data.isInsideTimeline(kb, ke):
+						continue
+					test.data.newActionGlobal(e['name'], kb, ke, pid)
+			# add config base kprobes and dev kprobes
+			for key in tp.ktemp:
+				name, pid = key
+				if name in sysvals.tracefuncs:
+					continue
+				for e in tp.ktemp[key]:
+					kb, ke = e['begin'], e['end']
+					if kb == ke or not test.data.isInsideTimeline(kb, ke):
+						continue
+					color = sysvals.kprobeColor(e['name'])
+					if name not in sysvals.dev_tracefuncs:
+						# config base kprobe
+						test.data.newActionGlobal(e['name'], kb, ke, -2, color)
+					elif sysvals.usedevsrc:
+						# dev kprobe
+						data.addDeviceFunctionCall(e['name'], name, e['proc'], pid, kb,
+							ke, e['cdata'], e['rdata'])
+		if sysvals.usecallgraph:
+			# add the callgraph data to the device hierarchy
+			sortlist = dict()
+			for key in test.ftemp:
+				proc, pid = key
+				for cg in test.ftemp[key]:
+					if len(cg.list) < 1 or cg.invalid:
+						continue
+					if(not cg.postProcess()):
+						id = 'task %s' % (pid)
+						vprint('Sanity check failed for '+\
+							id+', ignoring this callback')
+						continue
+					# match cg data to devices
+					if sysvals.suspendmode == 'command' or not cg.deviceMatch(pid, test.data):
+						sortkey = '%f%f%d' % (cg.start, cg.end, pid)
+						sortlist[sortkey] = cg
+			# create blocks for orphan cg data
+			for sortkey in sorted(sortlist):
+				cg = sortlist[sortkey]
+				name = cg.list[0].name
+				if sysvals.isCallgraphFunc(name):
+					vprint('Callgraph found for task %d: %.3fms, %s' % (cg.pid, (cg.end - cg.start)*1000, name))
+					cg.newActionFromFunction(test.data)

-		# add the callgraph data to the device hierarchy
-		borderphase = {
-			'dpm_prepare': 'suspend_prepare',
-			'dpm_complete': 'resume_complete'
-		}
-		for pid in test.ftemp:
-			for cg in test.ftemp[pid]:
-				if len(cg.list) < 2:
-					continue
-				if(not cg.sanityCheck()):
-					id = 'task %s cpu %s' % (pid, m.group('cpu'))
-					vprint('Sanity check failed for '+\
-						id+', ignoring this callback')
-					continue
-				callstart = cg.start
-				callend = cg.end
-				if(cg.list[0].name in borderphase):
-					p = borderphase[cg.list[0].name]
-					list = test.data.dmesg[p]['list']
-					for devname in list:
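The orphan callgraphs above are ordered by sorting string keys of the form `'%f%f%d' % (cg.start, cg.end, pid)`. A small self-contained illustration with made-up intervals; lexicographic order matches chronological order here because the `'%f'`-formatted start times all have the same number of integer digits:

```python
# Build the same style of sort key and recover the entries in time order.
sortlist = dict()
for start, end, pid in [(16.4, 16.9, 12), (12.1, 12.5, 30), (14.0, 14.2, 7)]:
    sortkey = '%f%f%d' % (start, end, pid)
    sortlist[sortkey] = (start, end, pid)
ordered = [sortlist[k] for k in sorted(sortlist)]
# ordered is now [(12.1, 12.5, 30), (14.0, 14.2, 7), (16.4, 16.9, 12)]
```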
-							dev = list[devname]
-							if(pid == dev['pid'] and
-								callstart <= dev['start'] and
-								callend >= dev['end']):
-								dev['ftrace'] = cg
-								break
+	if sysvals.suspendmode == 'command':
+		if(sysvals.verbose):
+			for data in testdata:
+				data.printDetails()
+		return testdata

 	# fill in any missing phases
 	for data in testdata:
···
 		if(sysvals.verbose):
 			data.printDetails()

-	# add the time in between the tests as a new phase so we can see it
-	if(len(testdata) > 1):
-		t1e = testdata[0].getEnd()
-		t2s = testdata[-1].getStart()
-		testdata[-1].newPhaseWithSingleAction('user mode', \
-			'user mode', t1e, t2s, '#FF9966')
 	return testdata
+
+# Function: loadRawKernelLog
+# Description:
+#	 Load a raw kernel log that wasn't created by this tool, it might be
+#	 possible to extract a valid suspend/resume log
+def loadRawKernelLog(dmesgfile):
+	global sysvals
+
+	stamp = {'time': '', 'host': '', 'mode': 'mem', 'kernel': ''}
+	stamp['time'] = datetime.now().strftime('%B %d %Y, %I:%M:%S %p')
+	stamp['host'] = sysvals.hostname
+
+	testruns = []
+	data = 0
+	lf = open(dmesgfile, 'r')
+	for line in lf:
+		line = line.replace('\r\n', '')
+		idx = line.find('[')
+		if idx > 1:
+			line = line[idx:]
+		m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
+		if(not m):
+			continue
+		msg = m.group("msg")
+		m = re.match('PM: Syncing filesystems.*', msg)
+		if(m):
+			if(data):
+				testruns.append(data)
+			data = Data(len(testruns))
+			data.stamp = stamp
+		if(data):
+			m = re.match('.* *(?P<k>[0-9]\.[0-9]{2}\.[0-9]-.*) .*', msg)
+			if(m):
+				stamp['kernel'] = m.group('k')
+			m = re.match('PM: Preparing system for (?P<m>.*) sleep', msg)
+			if(m):
+				stamp['mode'] = m.group('m')
+			data.dmesgtext.append(line)
+	if(data):
+		testruns.append(data)
+	sysvals.stamp = stamp
+	sysvals.suspendmode = stamp['mode']
+	lf.close()
+	return testruns

 # Function: loadKernelLog
 # Description:
···

 	vprint('Analyzing the dmesg data...')
 	if(os.path.exists(sysvals.dmesgfile) == False):
-		doError('%s doesnt exist' % sysvals.dmesgfile, False)
+		doError('%s does not exist' % sysvals.dmesgfile, False)

-	# there can be multiple test runs in a single file delineated by stamps
+	# there can be multiple test runs in a single file
+	tp = TestProps()
 	testruns = []
 	data = 0
 	lf = open(sysvals.dmesgfile, 'r')
···
 			line = line[idx:]
 		m = re.match(sysvals.stampfmt, line)
 		if(m):
-			if(data):
-				testruns.append(data)
-			data = Data(len(testruns))
-			parseStamp(m, data)
-			continue
-		if(not data):
+			tp.stamp = line
 			continue
 		m = re.match(sysvals.firmwarefmt, line)
 		if(m):
-			data.fwSuspend = int(m.group('s'))
-			data.fwResume = int(m.group('r'))
-			if(data.fwSuspend > 0 or data.fwResume > 0):
-				data.fwValid = True
+			tp.fwdata.append((int(m.group('s')), int(m.group('r'))))
 			continue
 		m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
-		if(m):
-			data.dmesgtext.append(line)
-			if(re.match('ACPI: resume from mwait', m.group('msg'))):
-				print('NOTE: This suspend appears to be freeze rather than'+\
-					' %s, it will be treated as such' % sysvals.suspendmode)
-				sysvals.suspendmode = 'freeze'
-		else:
-			vprint('ignoring dmesg line: %s' % line.replace('\n', ''))
-	testruns.append(data)
+		if(not m):
+			continue
+		msg = m.group("msg")
+		if(re.match('PM: Syncing filesystems.*', msg)):
+			if(data):
+				testruns.append(data)
+			data = Data(len(testruns))
+			parseStamp(tp.stamp, data)
+			if len(tp.fwdata) > data.testnumber:
+				data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber]
+				if(data.fwSuspend > 0 or data.fwResume > 0):
+					data.fwValid = True
+		if(re.match('ACPI: resume from mwait', msg)):
+			print('NOTE: This suspend appears to be freeze rather than'+\
+				' %s, it will be treated as such' % sysvals.suspendmode)
+			sysvals.suspendmode = 'freeze'
+		if(not data):
+			continue
+		data.dmesgtext.append(line)
+	if(data):
+		testruns.append(data)
 	lf.close()

-	if(not data):
-		print('ERROR: analyze_suspend header missing from dmesg log')
-		sys.exit()
+	if(len(testruns) < 1):
+		# bad log, but see if you can extract something meaningful anyway
+		testruns = loadRawKernelLog(sysvals.dmesgfile)
+
+	if(len(testruns) < 1):
+		doError(' dmesg log is completely unreadable: %s' \
+			% sysvals.dmesgfile, False)

 	# fix lines with same timestamp/function with the call and return swapped
 	for data in testruns:
···
 					actions[a] = []
 				actions[a].append({'begin': ktime, 'end': ktime})
 			if(re.match(at[a]['emsg'], msg)):
-				actions[a][-1]['end'] = ktime
+				if(a in actions):
+					actions[a][-1]['end'] = ktime
 		# now look for CPU on/off events
 		if(re.match('Disabling non-boot CPUs .*', msg)):
 			# start of first cpu suspend
···
 	# fill in any actions we've found
 	for name in actions:
 		for event in actions[name]:
-			begin = event['begin']
-			end = event['end']
-			# if event starts before timeline start, expand timeline
-			if(begin < data.start):
-				data.setStart(begin)
-			# if event ends after timeline end, expand the timeline
-			if(end > data.end):
-				data.setEnd(end)
-			data.newActionGlobal(name, begin, end)
+			data.newActionGlobal(name, event['begin'], event['end'])

 	if(sysvals.verbose):
 		data.printDetails()
···
 		data.deviceFilter(sysvals.devicefilter)
 		data.fixupInitcallsThatDidntReturn()
 		return True
-
-# Function: setTimelineRows
-# Description:
-#	 Organize the timeline entries into the smallest
-#	 number of rows possible, with no entry overlapping
-# Arguments:
-#	 list: the list of devices/actions for a single phase
-#	 sortedkeys: cronologically sorted key list to use
-# Output:
-#	 The total number of rows needed to display this phase of the timeline
-def setTimelineRows(list, sortedkeys):
-
-	# clear all rows and set them to undefined
-	remaining = len(list)
-	rowdata = dict()
-	row = 0
-	for item in list:
-		list[item]['row'] = -1
-
-	# try to pack each row with as many ranges as possible
-	while(remaining > 0):
-		if(row not in rowdata):
-			rowdata[row] = []
-		for item in sortedkeys:
-			if(list[item]['row'] < 0):
-				s = list[item]['start']
-				e = list[item]['end']
-				valid = True
-				for ritem in rowdata[row]:
-					rs = ritem['start']
-					re = ritem['end']
-					if(not (((s <= rs) and (e <= rs)) or
-						((s >= re) and (e >= re)))):
-						valid = False
-						break
-				if(valid):
-					rowdata[row].append(list[item])
-					list[item]['row'] = row
-					remaining -= 1
-		row += 1
-	return row
-
-# Function: createTimeScale
-# Description:
-#	 Create the timescale header for the html timeline
-# Arguments:
-#	 t0: start time (suspend begin)
-#	 tMax: end time (resume end)
-#	 tSuspend: time when suspend occurs, i.e. the zero time
-# Output:
-#	 The html code needed to display the time scale
-def createTimeScale(t0, tMax, tSuspended):
-	timescale = '<div class="t" style="right:{0}%">{1}</div>\n'
-	output = '<div id="timescale">\n'
-
-	# set scale for timeline
-	tTotal = tMax - t0
-	tS = 0.1
-	if(tTotal <= 0):
-		return output
-	if(tTotal > 4):
-		tS = 1
-	if(tSuspended < 0):
-		for i in range(int(tTotal/tS)+1):
-			pos = '%0.3f' % (100 - ((float(i)*tS*100)/tTotal))
-			if(i > 0):
-				val = '%0.fms' % (float(i)*tS*1000)
-			else:
-				val = ''
-			output += timescale.format(pos, val)
-	else:
-		tSuspend = tSuspended - t0
-		divTotal = int(tTotal/tS) + 1
-		divSuspend = int(tSuspend/tS)
-		s0 = (tSuspend - tS*divSuspend)*100/tTotal
-		for i in range(divTotal):
-			pos = '%0.3f' % (100 - ((float(i)*tS*100)/tTotal) - s0)
-			if((i == 0) and (s0 < 3)):
-				val = ''
-			elif(i == divSuspend):
-				val = 'S/R'
-			else:
-				val = '%0.fms' % (float(i-divSuspend)*tS*1000)
-			output += timescale.format(pos, val)
-	output += '</div>\n'
-	return output

 # Function: createHTMLSummarySimple
 # Description:
···
 	hf.write('</body>\n</html>\n')
 	hf.close()

+def htmlTitle():
+	global sysvals
+	modename = {
+		'freeze': 'Freeze (S0)',
+		'standby': 'Standby (S1)',
+		'mem': 'Suspend (S3)',
+		'disk': 'Hibernate (S4)'
+	}
+	kernel = sysvals.stamp['kernel']
+	host = sysvals.hostname[0].upper()+sysvals.hostname[1:]
+	mode = sysvals.suspendmode
+	if sysvals.suspendmode in modename:
+		mode = modename[sysvals.suspendmode]
+	return host+' '+mode+' '+kernel
+
+def ordinal(value):
+	suffix = 'th'
+	if value < 10 or value > 19:
+		if value % 10 == 1:
+			suffix = 'st'
+		elif value % 10 == 2:
+			suffix = 'nd'
+		elif value % 10 == 3:
+			suffix = 'rd'
+	return '%d%s' % (value, suffix)
+
 # Function: createHTML
 # Description:
 #	 Create the output html file from the resident test data
···
 def createHTML(testruns):
 	global sysvals

+	if len(testruns) < 1:
+		print('ERROR: Not enough test data to build a timeline')
+		return
+
 	for data in testruns:
 		data.normalizeTime(testruns[-1].tSuspended)

···
 	if len(testruns) > 1:
 		x2changes = ['1', 'relative']
 	# html function templates
+	headline_version = '<div class="version"><a href="https://01.org/suspendresume">AnalyzeSuspend v%s</a></div>' % sysvals.version
 	headline_stamp = '<div class="stamp">{0} {1} {2} {3}</div>\n'
 	html_devlist1 = '<button id="devlist1" class="devlist" style="float:left;">Device Detail%s</button>' % x2changes[0]
 	html_zoombox = '<center><button id="zoomin">ZOOM IN</button><button id="zoomout">ZOOM OUT</button><button id="zoomdef">ZOOM 1:1</button></center>\n'
 	html_devlist2 = '<button id="devlist2" class="devlist" style="float:right;">Device Detail2</button>\n'
 	html_timeline = '<div id="dmesgzoombox" class="zoombox">\n<div id="{0}" class="timeline" style="height:{1}px">\n'
-	html_device = '<div id="{0}" title="{1}" class="thread" style="left:{2}%;top:{3}%;height:{4}%;width:{5}%;">{6}</div>\n'
-	html_traceevent = '<div title="{0}" class="traceevent" style="left:{1}%;top:{2}%;height:{3}%;width:{4}%;border:1px solid {5};background-color:{5}">{6}</div>\n'
-	html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}%;height:{3}%;background-color:{4}">{5}</div>\n'
+	html_tblock = '<div id="block{0}" class="tblock" style="left:{1}%;width:{2}%;">\n'
+	html_device = '<div id="{0}" title="{1}" class="thread{7}" style="left:{2}%;top:{3}px;height:{4}px;width:{5}%;{8}">{6}</div>\n'
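The `ordinal()` helper added above feeds the per-run labels in the timeline header ('1st', '2nd', ...), replacing the old two-entry `textnum` list. Copied standalone for a quick check of the suffix logic, which deliberately treats 10 through 19 as 'th':

```python
# Append the English ordinal suffix to a number: 1st, 2nd, 3rd, 4th, ...
# with the teens (10-19) always taking 'th'.
def ordinal(value):
    suffix = 'th'
    if value < 10 or value > 19:
        if value % 10 == 1:
            suffix = 'st'
        elif value % 10 == 2:
            suffix = 'nd'
        elif value % 10 == 3:
            suffix = 'rd'
    return '%d%s' % (value, suffix)

# ordinal(1) -> '1st', ordinal(12) -> '12th', ordinal(23) -> '23rd'
```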
+	html_traceevent = '<div title="{0}" class="traceevent" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;">{5}</div>\n'
+	html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}px;height:{3}px;background-color:{4}">{5}</div>\n'
 	html_phaselet = '<div id="{0}" class="phaselet" style="left:{1}%;width:{2}%;background-color:{3}"></div>\n'
-	html_legend = '<div class="square" style="left:{0}%;background-color:{1}">&nbsp;{2}</div>\n'
+	html_legend = '<div id="p{3}" class="square" style="left:{0}%;background-color:{1}">&nbsp;{2}</div>\n'
 	html_timetotal = '<table class="time1">\n<tr>'\
 		'<td class="green">{2} Suspend Time: <b>{0} ms</b></td>'\
 		'<td class="yellow">{2} Resume Time: <b>{1} ms</b></td>'\
···
 		'<td class="gray">'+sysvals.suspendmode+' time: <b>{1} ms</b></td>'\
 		'<td class="yellow">{3} Resume Time: <b>{2} ms</b></td>'\
 		'</tr>\n</table>\n'
+	html_timetotal3 = '<table class="time1">\n<tr>'\
+		'<td class="green">Execution Time: <b>{0} ms</b></td>'\
+		'<td class="yellow">Command: <b>{1}</b></td>'\
+		'</tr>\n</table>\n'
 	html_timegroups = '<table class="time2">\n<tr>'\
 		'<td class="green">{4}Kernel Suspend: {0} ms</td>'\
 		'<td class="purple">{4}Firmware Suspend: {1} ms</td>'\
···
 		'<td class="yellow">{4}Kernel Resume: {3} ms</td>'\
 		'</tr>\n</table>\n'

+	# html format variables
+	rowheight = 30
+	devtextS = '14px'
+	devtextH = '30px'
+	hoverZ = 'z-index:10;'
+
+	if sysvals.usedevsrc:
+		hoverZ = ''
+
 	# device timeline
 	vprint('Creating Device Timeline...')
-	devtl = Timeline()
+
+	devtl = Timeline(rowheight)

 	# Generate the header for this timeline
-	textnum = ['First', 'Second']
 	for data in testruns:
 		tTotal = data.end - data.start
 		tEnd = data.dmesg['resume_complete']['end']
···
 			sys.exit()
 		if(data.tLow > 0):
 			low_time = '%.0f'%(data.tLow*1000)
-		if data.fwValid:
+		if sysvals.suspendmode == 'command':
+			run_time = '%.0f'%((data.end-data.start)*1000)
+			if sysvals.testcommand:
+				testdesc = sysvals.testcommand
+			else:
+				testdesc = 'unknown'
+			if(len(testruns) > 1):
+				testdesc = ordinal(data.testnumber+1)+' '+testdesc
+			thtml = html_timetotal3.format(run_time, testdesc)
+			devtl.html['header'] += thtml
+		elif data.fwValid:
 			suspend_time = '%.0f'%((data.tSuspended-data.start)*1000 + \
 				(data.fwSuspend/1000000.0))
 			resume_time = '%.0f'%((tEnd-data.tSuspended)*1000 + \
···
 			testdesc1 = 'Total'
 			testdesc2 = ''
 			if(len(testruns) > 1):
-				testdesc1 = testdesc2 = textnum[data.testnumber]
+				testdesc1 = testdesc2 = ordinal(data.testnumber+1)
 				testdesc2 += ' '
 			if(data.tLow == 0):
 				thtml = html_timetotal.format(suspend_time, \
···
 			else:
 				thtml = html_timetotal2.format(suspend_time, low_time, \
 					resume_time, testdesc1)
-			devtl.html['timeline'] += thtml
+			devtl.html['header'] += thtml
 			sktime = '%.3f'%((data.dmesg['suspend_machine']['end'] - \
 				data.getStart())*1000)
 			sftime = '%.3f'%(data.fwSuspend / 1000000.0)
 			rftime = '%.3f'%(data.fwResume / 1000000.0)
-			rktime = '%.3f'%((data.getEnd() - \
+			rktime = '%.3f'%((data.dmesg['resume_complete']['end'] - \
 				data.dmesg['resume_machine']['start'])*1000)
-			devtl.html['timeline'] += html_timegroups.format(sktime, \
+			devtl.html['header'] += html_timegroups.format(sktime, \
 				sftime, rftime, rktime, testdesc2)
 		else:
 			suspend_time = '%.0f'%((data.tSuspended-data.start)*1000)
 			resume_time = '%.0f'%((tEnd-data.tSuspended)*1000)
 			testdesc = 'Kernel'
 			if(len(testruns) > 1):
-				testdesc = textnum[data.testnumber]+' '+testdesc
+				testdesc = ordinal(data.testnumber+1)+' '+testdesc
 			if(data.tLow == 0):
 				thtml = html_timetotal.format(suspend_time, \
 					resume_time, testdesc)
 			else:
 				thtml = html_timetotal2.format(suspend_time, low_time, \
 					resume_time, testdesc)
-			devtl.html['timeline'] += thtml
+			devtl.html['header'] += thtml

 	# time scale for potentially multiple datasets
 	t0 = testruns[0].start
···
 	tTotal = tMax - t0

 	# determine the maximum number of rows we need to draw
-	timelinerows = 0
 	for data in testruns:
-		for phase in data.dmesg:
-			list = data.dmesg[phase]['list']
-			rows = setTimelineRows(list, list)
-			data.dmesg[phase]['row'] = rows
-			if(rows > timelinerows):
-				timelinerows = rows
+		data.selectTimelineDevices('%f', tTotal, sysvals.mindevlen)
+		for group in data.devicegroups:
+			devlist = []
+			for phase in group:
+				for devname in data.tdevlist[phase]:
+					devlist.append((phase,devname))
+			devtl.getPhaseRows(data.dmesg, devlist)
+	devtl.calcTotalRows()

-	# calculate the timeline height and create bounding box, add buttons
-	devtl.setRows(timelinerows + 1)
-	devtl.html['timeline'] += html_devlist1
-	if len(testruns) > 1:
-		devtl.html['timeline'] += html_devlist2
+	# create bounding box, add buttons
+	if sysvals.suspendmode != 'command':
+		devtl.html['timeline'] += html_devlist1
+		if len(testruns) > 1:
+			devtl.html['timeline'] += html_devlist2
 	devtl.html['timeline'] += html_zoombox
 	devtl.html['timeline'] += html_timeline.format('dmesg', devtl.height)

-	# draw the colored boxes for each of the phases
-	for data in testruns:
-		for b in data.dmesg:
-			phase = data.dmesg[b]
-			length = phase['end']-phase['start']
-			left = '%.3f' % (((phase['start']-t0)*100.0)/tTotal)
-			width = '%.3f' % ((length*100.0)/tTotal)
-			devtl.html['timeline'] += html_phase.format(left, width, \
-				'%.3f'%devtl.scaleH, '%.3f'%(100-devtl.scaleH), \
-				data.dmesg[b]['color'], '')
+	# draw the full timeline
+	phases = {'suspend':[],'resume':[]}
+	for phase in data.dmesg:
+		if 'resume' in phase:
+			phases['resume'].append(phase)
+		else:
+			phases['suspend'].append(phase)

-	# draw the time scale, try to make the number of labels readable
-	devtl.html['scale'] = createTimeScale(t0, tMax, tSuspended)
-	devtl.html['timeline'] += devtl.html['scale']
+	# draw each test run chronologically
 	for data in testruns:
-		for b in data.dmesg:
-			phaselist = data.dmesg[b]['list']
-			for d in phaselist:
-				name = d
-				drv = ''
-				dev = phaselist[d]
-				if(d in sysvals.altdevname):
-					name = sysvals.altdevname[d]
-				if('drv' in dev and dev['drv']):
-					drv = ' {%s}' % dev['drv']
-				height = (100.0 - devtl.scaleH)/data.dmesg[b]['row']
-				top = '%.3f' % ((dev['row']*height) + devtl.scaleH)
-				left = '%.3f' % (((dev['start']-t0)*100)/tTotal)
-				width = '%.3f' % (((dev['end']-dev['start'])*100)/tTotal)
-				length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000)
-				color = 'rgba(204,204,204,0.5)'
-				devtl.html['timeline'] += html_device.format(dev['id'], \
-					d+drv+length+b, left, top, '%.3f'%height, width, name+drv)
-
-	# draw any trace events found
-	for data in testruns:
-		for b in data.dmesg:
-			phaselist = data.dmesg[b]['list']
-			for name in phaselist:
-				dev = phaselist[name]
-				if('traceevents' in dev):
-					vprint('Debug trace events found for device %s' % name)
-					vprint('%20s %20s %10s %8s' % ('action', \
+		# if more than one test, draw a block to represent user mode
+		if(data.testnumber > 0):
+			m0 = testruns[data.testnumber-1].end
+			mMax = testruns[data.testnumber].start
+			mTotal = mMax - m0
+			name = 'usermode%d' % data.testnumber
+			top = '%d' % devtl.scaleH
+			left = '%f' % (((m0-t0)*100.0)/tTotal)
+			width = '%f' % ((mTotal*100.0)/tTotal)
+			title = 'user mode (%0.3f ms) ' % (mTotal*1000)
+			devtl.html['timeline'] += html_device.format(name, \
+				title, left, top, '%d'%devtl.bodyH, width, '', '', '')
+		# now draw the actual timeline blocks
+		for dir in phases:
+			# draw suspend and resume blocks separately
+			bname = '%s%d' % (dir[0], data.testnumber)
+			if dir == 'suspend':
+				m0 = testruns[data.testnumber].start
+				mMax = testruns[data.testnumber].tSuspended
+				mTotal = mMax - m0
+				left = '%f' % (((m0-t0)*100.0)/tTotal)
+			else:
+				m0 = testruns[data.testnumber].tSuspended
+				mMax = testruns[data.testnumber].end
+				mTotal = mMax - m0
+				left = '%f' % ((((m0-t0)*100.0)+sysvals.srgap/2)/tTotal)
+			# if a timeline block is 0 length, skip altogether
+			if mTotal == 0:
+				continue
+			width = '%f' % (((mTotal*100.0)-sysvals.srgap/2)/tTotal)
+			devtl.html['timeline'] += html_tblock.format(bname, left, width)
+			for b in sorted(phases[dir]):
+				# draw the phase color background
+				phase = data.dmesg[b]
+				length = phase['end']-phase['start']
+				left = '%f' % (((phase['start']-m0)*100.0)/mTotal)
+				width = '%f' % ((length*100.0)/mTotal)
+				devtl.html['timeline'] += html_phase.format(left, width, \
+					'%.3f'%devtl.scaleH, '%.3f'%devtl.bodyH, \
+					data.dmesg[b]['color'], '')
+				# draw the devices for this phase
+				phaselist = data.dmesg[b]['list']
+				for d in data.tdevlist[b]:
+					name = d
+					drv = ''
+					dev = phaselist[d]
+					xtraclass = ''
+					xtrainfo = ''
+					xtrastyle = ''
+					if 'htmlclass' in dev:
+						xtraclass = dev['htmlclass']
+ xtrainfo = dev['htmlclass'] 2332 + if 'color' in dev: 2333 + xtrastyle = 'background-color:%s;' % dev['color'] 2334 + if(d in sysvals.devprops): 2335 + name = sysvals.devprops[d].altName(d) 2336 + xtraclass = sysvals.devprops[d].xtraClass() 2337 + xtrainfo = sysvals.devprops[d].xtraInfo() 2338 + if('drv' in dev and dev['drv']): 2339 + drv = ' {%s}' % dev['drv'] 2340 + rowheight = devtl.phaseRowHeight(b, dev['row']) 2341 + rowtop = devtl.phaseRowTop(b, dev['row']) 2342 + top = '%.3f' % (rowtop + devtl.scaleH) 2343 + left = '%f' % (((dev['start']-m0)*100)/mTotal) 2344 + width = '%f' % (((dev['end']-dev['start'])*100)/mTotal) 2345 + length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000) 2346 + if sysvals.suspendmode == 'command': 2347 + title = name+drv+xtrainfo+length+'cmdexec' 2348 + else: 2349 + title = name+drv+xtrainfo+length+b 2350 + devtl.html['timeline'] += html_device.format(dev['id'], \ 2351 + title, left, top, '%.3f'%rowheight, width, \ 2352 + d+drv, xtraclass, xtrastyle) 2353 + if('src' not in dev): 2354 + continue 2355 + # draw any trace events for this device 2356 + vprint('Debug trace events found for device %s' % d) 2357 + vprint('%20s %20s %10s %8s' % ('title', \ 3180 2358 'name', 'time(ms)', 'length(ms)')) 3181 - for e in dev['traceevents']: 3182 - vprint('%20s %20s %10.3f %8.3f' % (e.action, \ 3183 - e.name, e.time*1000, e.length*1000)) 3184 - height = (100.0 - devtl.scaleH)/data.dmesg[b]['row'] 3185 - top = '%.3f' % ((dev['row']*height) + devtl.scaleH) 3186 - left = '%.3f' % (((e.time-t0)*100)/tTotal) 3187 - width = '%.3f' % (e.length*100/tTotal) 2359 + for e in dev['src']: 2360 + vprint('%20s %20s %10.3f %8.3f' % (e.title, \ 2361 + e.text, e.time*1000, e.length*1000)) 2362 + height = devtl.rowH 2363 + top = '%.3f' % (rowtop + devtl.scaleH + (e.row*devtl.rowH)) 2364 + left = '%f' % (((e.time-m0)*100)/mTotal) 2365 + width = '%f' % (e.length*100/mTotal) 3188 2366 color = 'rgba(204,204,204,0.5)' 3189 2367 devtl.html['timeline'] += \ 3190 - 
html_traceevent.format(e.action+' '+e.name, \ 2368 + html_traceevent.format(e.title, \ 3191 2369 left, top, '%.3f'%height, \ 3192 - width, e.color, '') 2370 + width, e.text) 2371 + # draw the time scale, try to make the number of labels readable 2372 + devtl.html['timeline'] += devtl.createTimeScale(m0, mMax, tTotal, dir) 2373 + devtl.html['timeline'] += '</div>\n' 3193 2374 3194 2375 # timeline is finished 3195 2376 devtl.html['timeline'] += '</div>\n</div>\n' 3196 2377 3197 2378 # draw a legend which describes the phases by color 3198 - data = testruns[-1] 3199 - devtl.html['legend'] = '<div class="legend">\n' 3200 - pdelta = 100.0/len(data.phases) 3201 - pmargin = pdelta / 4.0 3202 - for phase in data.phases: 3203 - order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin) 3204 - name = string.replace(phase, '_', ' &nbsp;') 3205 - devtl.html['legend'] += html_legend.format(order, \ 3206 - data.dmesg[phase]['color'], name) 3207 - devtl.html['legend'] += '</div>\n' 2379 + if sysvals.suspendmode != 'command': 2380 + data = testruns[-1] 2381 + devtl.html['legend'] = '<div class="legend">\n' 2382 + pdelta = 100.0/len(data.phases) 2383 + pmargin = pdelta / 4.0 2384 + for phase in data.phases: 2385 + tmp = phase.split('_') 2386 + id = tmp[0][0] 2387 + if(len(tmp) > 1): 2388 + id += tmp[1][0] 2389 + order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin) 2390 + name = string.replace(phase, '_', ' &nbsp;') 2391 + devtl.html['legend'] += html_legend.format(order, \ 2392 + data.dmesg[phase]['color'], name, id) 2393 + devtl.html['legend'] += '</div>\n' 3208 2394 3209 2395 hf = open(sysvals.htmlfile, 'w') 3210 - thread_height = 0 2396 + 2397 + if not sysvals.cgexp: 2398 + cgchk = 'checked' 2399 + cgnchk = 'not(:checked)' 2400 + else: 2401 + cgchk = 'not(:checked)' 2402 + cgnchk = 'checked' 3211 2403 3212 2404 # write the html header first (html head, css code, up to body start) 3213 2405 html_header = '<!DOCTYPE html>\n<html>\n<head>\n\ 3214 2406 <meta 
http-equiv="content-type" content="text/html; charset=UTF-8">\n\ 3215 - <title>AnalyzeSuspend</title>\n\ 2407 + <title>'+htmlTitle()+'</title>\n\ 3216 2408 <style type=\'text/css\'>\n\ 3217 - body {overflow-y: scroll;}\n\ 3218 - .stamp {width: 100%;text-align:center;background-color:gray;line-height:30px;color:white;font: 25px Arial;}\n\ 3219 - .callgraph {margin-top: 30px;box-shadow: 5px 5px 20px black;}\n\ 3220 - .callgraph article * {padding-left: 28px;}\n\ 3221 - h1 {color:black;font: bold 30px Times;}\n\ 3222 - t0 {color:black;font: bold 30px Times;}\n\ 3223 - t1 {color:black;font: 30px Times;}\n\ 3224 - t2 {color:black;font: 25px Times;}\n\ 3225 - t3 {color:black;font: 20px Times;white-space:nowrap;}\n\ 3226 - t4 {color:black;font: bold 30px Times;line-height:60px;white-space:nowrap;}\n\ 2409 + body {overflow-y:scroll;}\n\ 2410 + .stamp {width:100%;text-align:center;background-color:gray;line-height:30px;color:white;font:25px Arial;}\n\ 2411 + .callgraph {margin-top:30px;box-shadow:5px 5px 20px black;}\n\ 2412 + .callgraph article * {padding-left:28px;}\n\ 2413 + h1 {color:black;font:bold 30px Times;}\n\ 2414 + t0 {color:black;font:bold 30px Times;}\n\ 2415 + t1 {color:black;font:30px Times;}\n\ 2416 + t2 {color:black;font:25px Times;}\n\ 2417 + t3 {color:black;font:20px Times;white-space:nowrap;}\n\ 2418 + t4 {color:black;font:bold 30px Times;line-height:60px;white-space:nowrap;}\n\ 2419 + cS {color:blue;font:bold 11px Times;}\n\ 2420 + cR {color:red;font:bold 11px Times;}\n\ 3227 2421 table {width:100%;}\n\ 3228 2422 .gray {background-color:rgba(80,80,80,0.1);}\n\ 3229 2423 .green {background-color:rgba(204,255,204,0.4);}\n\ 3230 2424 .purple {background-color:rgba(128,0,128,0.2);}\n\ 3231 2425 .yellow {background-color:rgba(255,255,204,0.4);}\n\ 3232 - .time1 {font: 22px Arial;border:1px solid;}\n\ 3233 - .time2 {font: 15px Arial;border-bottom:1px solid;border-left:1px solid;border-right:1px solid;}\n\ 3234 - td {text-align: center;}\n\ 2426 + .time1 
{font:22px Arial;border:1px solid;}\n\ 2427 + .time2 {font:15px Arial;border-bottom:1px solid;border-left:1px solid;border-right:1px solid;}\n\ 2428 + td {text-align:center;}\n\ 3235 2429 r {color:#500000;font:15px Tahoma;}\n\ 3236 2430 n {color:#505050;font:15px Tahoma;}\n\ 3237 - .tdhl {color: red;}\n\ 3238 - .hide {display: none;}\n\ 3239 - .pf {display: none;}\n\ 3240 - .pf:checked + label {background: url(\'data:image/svg+xml;utf,<?xml version="1.0" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" height="18" width="18" version="1.1"><circle cx="9" cy="9" r="8" stroke="black" stroke-width="1" fill="white"/><rect x="4" y="8" width="10" height="2" style="fill:black;stroke-width:0"/><rect x="8" y="4" width="2" height="10" style="fill:black;stroke-width:0"/></svg>\') no-repeat left center;}\n\ 3241 - .pf:not(:checked) ~ label {background: url(\'data:image/svg+xml;utf,<?xml version="1.0" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" height="18" width="18" version="1.1"><circle cx="9" cy="9" r="8" stroke="black" stroke-width="1" fill="white"/><rect x="4" y="8" width="10" height="2" style="fill:black;stroke-width:0"/></svg>\') no-repeat left center;}\n\ 3242 - .pf:checked ~ *:not(:nth-child(2)) {display: none;}\n\ 3243 - .zoombox {position: relative; width: 100%; overflow-x: scroll;}\n\ 3244 - .timeline {position: relative; font-size: 14px;cursor: pointer;width: 100%; overflow: hidden; background-color:#dddddd;}\n\ 3245 - .thread {position: absolute; height: '+'%.3f'%thread_height+'%; overflow: hidden; line-height: 30px; border:1px solid;text-align:center;white-space:nowrap;background-color:rgba(204,204,204,0.5);}\n\ 3246 - .thread:hover {background-color:white;border:1px solid red;z-index:10;}\n\ 3247 - .hover {background-color:white;border:1px solid red;z-index:10;}\n\ 3248 - .traceevent {position: absolute;opacity: 0.3;height: '+'%.3f'%thread_height+'%;width:0;overflow:hidden;line-height:30px;text-align:center;white-space:nowrap;}\n\ 3249 - 
.phase {position: absolute;overflow: hidden;border:0px;text-align:center;}\n\ 2431 + .tdhl {color:red;}\n\ 2432 + .hide {display:none;}\n\ 2433 + .pf {display:none;}\n\ 2434 + .pf:'+cgchk+' + label {background:url(\'data:image/svg+xml;utf,<?xml version="1.0" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" height="18" width="18" version="1.1"><circle cx="9" cy="9" r="8" stroke="black" stroke-width="1" fill="white"/><rect x="4" y="8" width="10" height="2" style="fill:black;stroke-width:0"/><rect x="8" y="4" width="2" height="10" style="fill:black;stroke-width:0"/></svg>\') no-repeat left center;}\n\ 2435 + .pf:'+cgnchk+' ~ label {background:url(\'data:image/svg+xml;utf,<?xml version="1.0" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" height="18" width="18" version="1.1"><circle cx="9" cy="9" r="8" stroke="black" stroke-width="1" fill="white"/><rect x="4" y="8" width="10" height="2" style="fill:black;stroke-width:0"/></svg>\') no-repeat left center;}\n\ 2436 + .pf:'+cgchk+' ~ *:not(:nth-child(2)) {display:none;}\n\ 2437 + .zoombox {position:relative;width:100%;overflow-x:scroll;}\n\ 2438 + .timeline {position:relative;font-size:14px;cursor:pointer;width:100%; overflow:hidden;background:linear-gradient(#cccccc, white);}\n\ 2439 + .thread {position:absolute;height:0%;overflow:hidden;line-height:'+devtextH+';font-size:'+devtextS+';border:1px solid;text-align:center;white-space:nowrap;background-color:rgba(204,204,204,0.5);}\n\ 2440 + .thread.sync {background-color:'+sysvals.synccolor+';}\n\ 2441 + .thread.bg {background-color:'+sysvals.kprobecolor+';}\n\ 2442 + .thread:hover {background-color:white;border:1px solid red;'+hoverZ+'}\n\ 2443 + .hover {background-color:white;border:1px solid red;'+hoverZ+'}\n\ 2444 + .hover.sync {background-color:white;}\n\ 2445 + .hover.bg {background-color:white;}\n\ 2446 + .traceevent {position:absolute;font-size:10px;overflow:hidden;color:black;text-align:center;white-space:nowrap;border-radius:5px;border:1px solid 
black;background:linear-gradient(to bottom right,rgba(204,204,204,1),rgba(150,150,150,1));}\n\ 2447 + .traceevent:hover {background:white;}\n\ 2448 + .phase {position:absolute;overflow:hidden;border:0px;text-align:center;}\n\ 3250 2449 .phaselet {position:absolute;overflow:hidden;border:0px;text-align:center;height:100px;font-size:24px;}\n\ 3251 - .t {position:absolute;top:0%;height:100%;border-right:1px solid black;}\n\ 3252 - .legend {position: relative; width: 100%; height: 40px; text-align: center;margin-bottom:20px}\n\ 3253 - .legend .square {position:absolute;top:10px; width: 0px;height: 20px;border:1px solid;padding-left:20px;}\n\ 2450 + .t {z-index:2;position:absolute;pointer-events:none;top:0%;height:100%;border-right:1px solid black;}\n\ 2451 + .legend {position:relative; width:100%; height:40px; text-align:center;margin-bottom:20px}\n\ 2452 + .legend .square {position:absolute;cursor:pointer;top:10px; width:0px;height:20px;border:1px solid;padding-left:20px;}\n\ 3254 2453 button {height:40px;width:200px;margin-bottom:20px;margin-top:20px;font-size:24px;}\n\ 2454 + .logbtn {position:relative;float:right;height:25px;width:50px;margin-top:3px;margin-bottom:0;font-size:10px;text-align:center;}\n\ 3255 2455 .devlist {position:'+x2changes[1]+';width:190px;}\n\ 3256 - #devicedetail {height:100px;box-shadow: 5px 5px 20px black;}\n\ 2456 + a:link {color:white;text-decoration:none;}\n\ 2457 + a:visited {color:white;}\n\ 2458 + a:hover {color:white;}\n\ 2459 + a:active {color:white;}\n\ 2460 + .version {position:relative;float:left;color:white;font-size:10px;line-height:30px;margin-left:10px;}\n\ 2461 + #devicedetail {height:100px;box-shadow:5px 5px 20px black;}\n\ 2462 + .tblock {position:absolute;height:100%;}\n\ 2463 + .bg {z-index:1;}\n\ 3257 2464 </style>\n</head>\n<body>\n' 3258 - hf.write(html_header) 2465 + 2466 + # no header or css if its embedded 2467 + if(sysvals.embedded): 2468 + hf.write('pass True tSus %.3f tRes %.3f tLow %.3f fwvalid %s tSus %.3f 
tRes %.3f\n' % 2469 + (data.tSuspended-data.start, data.end-data.tSuspended, data.tLow, data.fwValid, \ 2470 + data.fwSuspend/1000000, data.fwResume/1000000)) 2471 + else: 2472 + hf.write(html_header) 3259 2473 3260 2474 # write the test title and general info header 3261 2475 if(sysvals.stamp['time'] != ""): 2476 + hf.write(headline_version) 2477 + if sysvals.addlogs and sysvals.dmesgfile: 2478 + hf.write('<button id="showdmesg" class="logbtn">dmesg</button>') 2479 + if sysvals.addlogs and sysvals.ftracefile: 2480 + hf.write('<button id="showftrace" class="logbtn">ftrace</button>') 3262 2481 hf.write(headline_stamp.format(sysvals.stamp['host'], 3263 2482 sysvals.stamp['kernel'], sysvals.stamp['mode'], \ 3264 2483 sysvals.stamp['time'])) 3265 2484 3266 2485 # write the device timeline 2486 + hf.write(devtl.html['header']) 3267 2487 hf.write(devtl.html['timeline']) 3268 2488 hf.write(devtl.html['legend']) 3269 2489 hf.write('<div id="devicedetailtitle"></div>\n') ··· 3366 2410 width = '%.3f' % ((length*100.0)/tTotal) 3367 2411 hf.write(html_phaselet.format(b, left, width, \ 3368 2412 data.dmesg[b]['color'])) 2413 + if sysvals.suspendmode == 'command': 2414 + hf.write(html_phaselet.format('cmdexec', '0', '0', \ 2415 + data.dmesg['resume_complete']['color'])) 3369 2416 hf.write('</div>\n') 3370 2417 hf.write('</div>\n') 3371 2418 3372 2419 # write the ftrace data (callgraph) 3373 2420 data = testruns[-1] 3374 - if(sysvals.usecallgraph): 2421 + if(sysvals.usecallgraph and not sysvals.embedded): 3375 2422 hf.write('<section id="callgraphs" class="callgraph">\n') 3376 2423 # write out the ftrace data converted to html 3377 2424 html_func_top = '<article id="{0}" class="atop" style="background-color:{1}">\n<input type="checkbox" class="pf" id="f{2}" checked/><label for="f{2}">{3} {4}</label>\n' ··· 3387 2428 for devname in data.sortedDevices(p): 3388 2429 if('ftrace' not in list[devname]): 3389 2430 continue 3390 - name = devname 3391 - if(devname in sysvals.altdevname): 
3392 - name = sysvals.altdevname[devname] 3393 2431 devid = list[devname]['id'] 3394 2432 cg = list[devname]['ftrace'] 3395 - flen = '<r>(%.3f ms @ %.3f to %.3f)</r>' % \ 3396 - ((cg.end - cg.start)*1000, cg.start*1000, cg.end*1000) 2433 + clen = (cg.end - cg.start) * 1000 2434 + if clen < sysvals.mincglen: 2435 + continue 2436 + fmt = '<r>(%.3f ms @ '+sysvals.timeformat+' to '+sysvals.timeformat+')</r>' 2437 + flen = fmt % (clen, cg.start, cg.end) 2438 + name = devname 2439 + if(devname in sysvals.devprops): 2440 + name = sysvals.devprops[devname].altName(devname) 2441 + if sysvals.suspendmode == 'command': 2442 + ftitle = name 2443 + else: 2444 + ftitle = name+' '+p 3397 2445 hf.write(html_func_top.format(devid, data.dmesg[p]['color'], \ 3398 - num, name+' '+p, flen)) 2446 + num, ftitle, flen)) 3399 2447 num += 1 3400 2448 for line in cg.list: 3401 2449 if(line.length < 0.000000001): 3402 2450 flen = '' 3403 2451 else: 3404 - flen = '<n>(%.3f ms @ %.3f)</n>' % (line.length*1000, \ 3405 - line.time*1000) 2452 + fmt = '<n>(%.3f ms @ '+sysvals.timeformat+')</n>' 2453 + flen = fmt % (line.length*1000, line.time) 3406 2454 if(line.freturn and line.fcall): 3407 2455 hf.write(html_func_leaf.format(line.name, flen)) 3408 2456 elif(line.freturn): ··· 3419 2453 num += 1 3420 2454 hf.write(html_func_end) 3421 2455 hf.write('\n\n </section>\n') 3422 - # write the footer and close 3423 - addScriptCode(hf, testruns) 3424 - hf.write('</body>\n</html>\n') 2456 + 2457 + # add the dmesg log as a hidden div 2458 + if sysvals.addlogs and sysvals.dmesgfile: 2459 + hf.write('<div id="dmesglog" style="display:none;">\n') 2460 + lf = open(sysvals.dmesgfile, 'r') 2461 + for line in lf: 2462 + hf.write(line) 2463 + lf.close() 2464 + hf.write('</div>\n') 2465 + # add the ftrace log as a hidden div 2466 + if sysvals.addlogs and sysvals.ftracefile: 2467 + hf.write('<div id="ftracelog" style="display:none;">\n') 2468 + lf = open(sysvals.ftracefile, 'r') 2469 + for line in lf: 2470 + 
hf.write(line) 2471 + lf.close() 2472 + hf.write('</div>\n') 2473 + 2474 + if(not sysvals.embedded): 2475 + # write the footer and close 2476 + addScriptCode(hf, testruns) 2477 + hf.write('</body>\n</html>\n') 2478 + else: 2479 + # embedded out will be loaded in a page, skip the js 2480 + t0 = (testruns[0].start - testruns[-1].tSuspended) * 1000 2481 + tMax = (testruns[-1].end - testruns[-1].tSuspended) * 1000 2482 + # add js code in a div entry for later evaluation 2483 + detail = 'var bounds = [%f,%f];\n' % (t0, tMax) 2484 + detail += 'var devtable = [\n' 2485 + for data in testruns: 2486 + topo = data.deviceTopology() 2487 + detail += '\t"%s",\n' % (topo) 2488 + detail += '];\n' 2489 + hf.write('<div id=customcode style=display:none>\n'+detail+'</div>\n') 3425 2490 hf.close() 3426 2491 return True 3427 2492 ··· 3463 2466 # hf: the open html file pointer 3464 2467 # testruns: array of Data objects from parseKernelLog or parseTraceLog 3465 2468 def addScriptCode(hf, testruns): 3466 - t0 = (testruns[0].start - testruns[-1].tSuspended) * 1000 3467 - tMax = (testruns[-1].end - testruns[-1].tSuspended) * 1000 2469 + t0 = testruns[0].start * 1000 2470 + tMax = testruns[-1].end * 1000 3468 2471 # create an array in javascript memory with the device details 3469 2472 detail = ' var devtable = [];\n' 3470 2473 for data in testruns: ··· 3474 2477 # add the code which will manipulate the data in the browser 3475 2478 script_code = \ 3476 2479 '<script type="text/javascript">\n'+detail+\ 2480 + ' var resolution = -1;\n'\ 2481 + ' function redrawTimescale(t0, tMax, tS) {\n'\ 2482 + ' var rline = \'<div class="t" style="left:0;border-left:1px solid black;border-right:0;"><cR><-R</cR></div>\';\n'\ 2483 + ' var tTotal = tMax - t0;\n'\ 2484 + ' var list = document.getElementsByClassName("tblock");\n'\ 2485 + ' for (var i = 0; i < list.length; i++) {\n'\ 2486 + ' var timescale = list[i].getElementsByClassName("timescale")[0];\n'\ 2487 + ' var m0 = t0 + 
(tTotal*parseFloat(list[i].style.left)/100);\n'\ 2488 + ' var mTotal = tTotal*parseFloat(list[i].style.width)/100;\n'\ 2489 + ' var mMax = m0 + mTotal;\n'\ 2490 + ' var html = "";\n'\ 2491 + ' var divTotal = Math.floor(mTotal/tS) + 1;\n'\ 2492 + ' if(divTotal > 1000) continue;\n'\ 2493 + ' var divEdge = (mTotal - tS*(divTotal-1))*100/mTotal;\n'\ 2494 + ' var pos = 0.0, val = 0.0;\n'\ 2495 + ' for (var j = 0; j < divTotal; j++) {\n'\ 2496 + ' var htmlline = "";\n'\ 2497 + ' if(list[i].id[5] == "r") {\n'\ 2498 + ' pos = 100 - (((j)*tS*100)/mTotal);\n'\ 2499 + ' val = (j)*tS;\n'\ 2500 + ' htmlline = \'<div class="t" style="right:\'+pos+\'%">\'+val+\'ms</div>\';\n'\ 2501 + ' if(j == 0)\n'\ 2502 + ' htmlline = rline;\n'\ 2503 + ' } else {\n'\ 2504 + ' pos = 100 - (((j)*tS*100)/mTotal) - divEdge;\n'\ 2505 + ' val = (j-divTotal+1)*tS;\n'\ 2506 + ' if(j == divTotal - 1)\n'\ 2507 + ' htmlline = \'<div class="t" style="right:\'+pos+\'%"><cS>S-></cS></div>\';\n'\ 2508 + ' else\n'\ 2509 + ' htmlline = \'<div class="t" style="right:\'+pos+\'%">\'+val+\'ms</div>\';\n'\ 2510 + ' }\n'\ 2511 + ' html += htmlline;\n'\ 2512 + ' }\n'\ 2513 + ' timescale.innerHTML = html;\n'\ 2514 + ' }\n'\ 2515 + ' }\n'\ 3477 2516 ' function zoomTimeline() {\n'\ 3478 - ' var timescale = document.getElementById("timescale");\n'\ 3479 2517 ' var dmesg = document.getElementById("dmesg");\n'\ 3480 2518 ' var zoombox = document.getElementById("dmesgzoombox");\n'\ 3481 2519 ' var val = parseFloat(dmesg.style.width);\n'\ ··· 3518 2486 ' var sh = window.outerWidth / 2;\n'\ 3519 2487 ' if(this.id == "zoomin") {\n'\ 3520 2488 ' newval = val * 1.2;\n'\ 3521 - ' if(newval > 40000) newval = 40000;\n'\ 2489 + ' if(newval > 910034) newval = 910034;\n'\ 3522 2490 ' dmesg.style.width = newval+"%";\n'\ 3523 2491 ' zoombox.scrollLeft = ((zoombox.scrollLeft + sh) * newval / val) - sh;\n'\ 3524 2492 ' } else if (this.id == "zoomout") {\n'\ ··· 3530 2498 ' zoombox.scrollLeft = 0;\n'\ 3531 2499 ' dmesg.style.width = 
"100%";\n'\ 3532 2500 ' }\n'\ 3533 - ' var html = "";\n'\ 2501 + ' var tS = [10000, 5000, 2000, 1000, 500, 200, 100, 50, 20, 10, 5, 2, 1];\n'\ 3534 2502 ' var t0 = bounds[0];\n'\ 3535 2503 ' var tMax = bounds[1];\n'\ 3536 2504 ' var tTotal = tMax - t0;\n'\ 3537 2505 ' var wTotal = tTotal * 100.0 / newval;\n'\ 3538 - ' for(var tS = 1000; (wTotal / tS) < 3; tS /= 10);\n'\ 3539 - ' if(tS < 1) tS = 1;\n'\ 3540 - ' for(var s = ((t0 / tS)|0) * tS; s < tMax; s += tS) {\n'\ 3541 - ' var pos = (tMax - s) * 100.0 / tTotal;\n'\ 3542 - ' var name = (s == 0)?"S/R":(s+"ms");\n'\ 3543 - ' html += "<div class=\\"t\\" style=\\"right:"+pos+"%\\">"+name+"</div>";\n'\ 3544 - ' }\n'\ 3545 - ' timescale.innerHTML = html;\n'\ 2506 + ' var idx = 7*window.innerWidth/1100;\n'\ 2507 + ' for(var i = 0; (i < tS.length)&&((wTotal / tS[i]) < idx); i++);\n'\ 2508 + ' if(i >= tS.length) i = tS.length - 1;\n'\ 2509 + ' if(tS[i] == resolution) return;\n'\ 2510 + ' resolution = tS[i];\n'\ 2511 + ' redrawTimescale(t0, tMax, tS[i]);\n'\ 3546 2512 ' }\n'\ 3547 2513 ' function deviceHover() {\n'\ 3548 2514 ' var name = this.title.slice(0, this.title.indexOf(" ("));\n'\ ··· 3553 2523 ' cpu = parseInt(name.slice(8));\n'\ 3554 2524 ' for (var i = 0; i < dev.length; i++) {\n'\ 3555 2525 ' dname = dev[i].title.slice(0, dev[i].title.indexOf(" ("));\n'\ 2526 + ' var cname = dev[i].className.slice(dev[i].className.indexOf("thread"));\n'\ 3556 2527 ' if((cpu >= 0 && dname.match("CPU_O[NF]*\\\[*"+cpu+"\\\]")) ||\n'\ 3557 2528 ' (name == dname))\n'\ 3558 2529 ' {\n'\ 3559 - ' dev[i].className = "thread hover";\n'\ 2530 + ' dev[i].className = "hover "+cname;\n'\ 3560 2531 ' } else {\n'\ 3561 - ' dev[i].className = "thread";\n'\ 2532 + ' dev[i].className = cname;\n'\ 3562 2533 ' }\n'\ 3563 2534 ' }\n'\ 3564 2535 ' }\n'\ ··· 3567 2536 ' var dmesg = document.getElementById("dmesg");\n'\ 3568 2537 ' var dev = dmesg.getElementsByClassName("thread");\n'\ 3569 2538 ' for (var i = 0; i < dev.length; i++) {\n'\ 3570 - ' 
dev[i].className = "thread";\n'\ 2539 + ' dev[i].className = dev[i].className.slice(dev[i].className.indexOf("thread"));\n'\ 3571 2540 ' }\n'\ 3572 2541 ' }\n'\ 3573 2542 ' function deviceTitle(title, total, cpu) {\n'\ ··· 3578 2547 ' total[2] = (total[2]+total[4])/2;\n'\ 3579 2548 ' }\n'\ 3580 2549 ' var devtitle = document.getElementById("devicedetailtitle");\n'\ 3581 - ' var name = title.slice(0, title.indexOf(" "));\n'\ 2550 + ' var name = title.slice(0, title.indexOf(" ("));\n'\ 3582 2551 ' if(cpu >= 0) name = "CPU"+cpu;\n'\ 3583 2552 ' var driver = "";\n'\ 3584 2553 ' var tS = "<t2>(</t2>";\n'\ ··· 3610 2579 ' var dev = dmesg.getElementsByClassName("thread");\n'\ 3611 2580 ' var idlist = [];\n'\ 3612 2581 ' var pdata = [[]];\n'\ 2582 + ' if(document.getElementById("devicedetail1"))\n'\ 2583 + ' pdata = [[], []];\n'\ 3613 2584 ' var pd = pdata[0];\n'\ 3614 2585 ' var total = [0.0, 0.0, 0.0];\n'\ 3615 2586 ' for (var i = 0; i < dev.length; i++) {\n'\ ··· 3667 2634 ' var cglist = document.getElementById("callgraphs");\n'\ 3668 2635 ' if(!cglist) return;\n'\ 3669 2636 ' var cg = cglist.getElementsByClassName("atop");\n'\ 2637 + ' if(cg.length < 10) return;\n'\ 3670 2638 ' for (var i = 0; i < cg.length; i++) {\n'\ 3671 2639 ' if(idlist.indexOf(cg[i].id) >= 0) {\n'\ 3672 2640 ' cg[i].style.display = "block";\n'\ ··· 3692 2658 ' dt = devtable[1];\n'\ 3693 2659 ' win.document.write(html+dt);\n'\ 3694 2660 ' }\n'\ 2661 + ' function logWindow(e) {\n'\ 2662 + ' var name = e.target.id.slice(4);\n'\ 2663 + ' var win = window.open();\n'\ 2664 + ' var log = document.getElementById(name+"log");\n'\ 2665 + ' var title = "<title>"+document.title.split(" ")[0]+" "+name+" log</title>";\n'\ 2666 + ' win.document.write(title+"<pre>"+log.innerHTML+"</pre>");\n'\ 2667 + ' win.document.close();\n'\ 2668 + ' }\n'\ 2669 + ' function onClickPhase(e) {\n'\ 2670 + ' }\n'\ 2671 + ' window.addEventListener("resize", function () {zoomTimeline();});\n'\ 3695 2672 ' 
window.addEventListener("load", function () {\n'\ 3696 2673 ' var dmesg = document.getElementById("dmesg");\n'\ 3697 2674 ' dmesg.style.width = "100%"\n'\ 3698 2675 ' document.getElementById("zoomin").onclick = zoomTimeline;\n'\ 3699 2676 ' document.getElementById("zoomout").onclick = zoomTimeline;\n'\ 3700 2677 ' document.getElementById("zoomdef").onclick = zoomTimeline;\n'\ 3701 - ' var devlist = document.getElementsByClassName("devlist");\n'\ 3702 - ' for (var i = 0; i < devlist.length; i++)\n'\ 3703 - ' devlist[i].onclick = devListWindow;\n'\ 2678 + ' var list = document.getElementsByClassName("square");\n'\ 2679 + ' for (var i = 0; i < list.length; i++)\n'\ 2680 + ' list[i].onclick = onClickPhase;\n'\ 2681 + ' var list = document.getElementsByClassName("logbtn");\n'\ 2682 + ' for (var i = 0; i < list.length; i++)\n'\ 2683 + ' list[i].onclick = logWindow;\n'\ 2684 + ' list = document.getElementsByClassName("devlist");\n'\ 2685 + ' for (var i = 0; i < list.length; i++)\n'\ 2686 + ' list[i].onclick = devListWindow;\n'\ 3704 2687 ' var dev = dmesg.getElementsByClassName("thread");\n'\ 3705 2688 ' for (var i = 0; i < dev.length; i++) {\n'\ 3706 2689 ' dev[i].onclick = deviceDetail;\n'\ ··· 3736 2685 def executeSuspend(): 3737 2686 global sysvals 3738 2687 3739 - detectUSB(False) 3740 2688 t0 = time.time()*1000 3741 2689 tp = sysvals.tpath 2690 + fwdata = [] 2691 + # mark the start point in the kernel ring buffer just as we start 2692 + sysvals.initdmesg() 2693 + # start ftrace 2694 + if(sysvals.usecallgraph or sysvals.usetraceevents): 2695 + print('START TRACING') 2696 + sysvals.fsetVal('1', 'tracing_on') 3742 2697 # execute however many s/r runs requested 3743 2698 for count in range(1,sysvals.execcount+1): 3744 - # clear the kernel ring buffer just as we start 3745 - os.system('dmesg -C') 3746 - # enable callgraph ftrace only for the second run 3747 - if(sysvals.usecallgraph and count == 2): 3748 - # set trace type 3749 - os.system('echo function_graph > 
'+tp+'current_tracer') 3750 - os.system('echo "" > '+tp+'set_ftrace_filter') 3751 - # set trace format options 3752 - os.system('echo funcgraph-abstime > '+tp+'trace_options') 3753 - os.system('echo funcgraph-proc > '+tp+'trace_options') 3754 - # focus only on device suspend and resume 3755 - os.system('cat '+tp+'available_filter_functions | '+\ 3756 - 'grep dpm_run_callback > '+tp+'set_graph_function') 3757 2699 # if this is test2 and there's a delay, start here 3758 2700 if(count > 1 and sysvals.x2delay > 0): 3759 2701 tN = time.time()*1000 3760 2702 while (tN - t0) < sysvals.x2delay: 3761 2703 tN = time.time()*1000 3762 2704 time.sleep(0.001) 3763 - # start ftrace 3764 - if(sysvals.usecallgraph or sysvals.usetraceevents): 3765 - print('START TRACING') 3766 - os.system('echo 1 > '+tp+'tracing_on') 3767 2705 # initiate suspend 3768 2706 if(sysvals.usecallgraph or sysvals.usetraceevents): 3769 - os.system('echo SUSPEND START > '+tp+'trace_marker') 3770 - if(sysvals.rtcwake): 3771 - print('SUSPEND START') 3772 - print('will autoresume in %d seconds' % sysvals.rtcwaketime) 3773 - sysvals.rtcWakeAlarm() 2707 + sysvals.fsetVal('SUSPEND START', 'trace_marker') 2708 + if sysvals.suspendmode == 'command': 2709 + print('COMMAND START') 2710 + if(sysvals.rtcwake): 2711 + print('will issue an rtcwake in %d seconds' % sysvals.rtcwaketime) 2712 + sysvals.rtcWakeAlarmOn() 2713 + os.system(sysvals.testcommand) 3774 2714 else: 3775 - print('SUSPEND START (press a key to resume)') 3776 - pf = open(sysvals.powerfile, 'w') 3777 - pf.write(sysvals.suspendmode) 3778 - # execution will pause here 3779 - pf.close() 2715 + if(sysvals.rtcwake): 2716 + print('SUSPEND START') 2717 + print('will autoresume in %d seconds' % sysvals.rtcwaketime) 2718 + sysvals.rtcWakeAlarmOn() 2719 + else: 2720 + print('SUSPEND START (press a key to resume)') 2721 + pf = open(sysvals.powerfile, 'w') 2722 + pf.write(sysvals.suspendmode) 2723 + # execution will pause here 2724 + try: 2725 + pf.close() 2726 + 
except: 2727 + pass 3780 2728 t0 = time.time()*1000 2729 + if(sysvals.rtcwake): 2730 + sysvals.rtcWakeAlarmOff() 3781 2731 # return from suspend 3782 2732 print('RESUME COMPLETE') 3783 2733 if(sysvals.usecallgraph or sysvals.usetraceevents): 3784 - os.system('echo RESUME COMPLETE > '+tp+'trace_marker') 3785 - # see if there's firmware timing data to be had 3786 - t = sysvals.postresumetime 3787 - if(t > 0): 3788 - print('Waiting %d seconds for POST-RESUME trace events...' % t) 3789 - time.sleep(t) 3790 - # stop ftrace 3791 - if(sysvals.usecallgraph or sysvals.usetraceevents): 3792 - os.system('echo 0 > '+tp+'tracing_on') 3793 - print('CAPTURING TRACE') 3794 - writeDatafileHeader(sysvals.ftracefile) 3795 - os.system('cat '+tp+'trace >> '+sysvals.ftracefile) 3796 - os.system('echo "" > '+tp+'trace') 3797 - # grab a copy of the dmesg output 3798 - print('CAPTURING DMESG') 3799 - writeDatafileHeader(sysvals.dmesgfile) 3800 - os.system('dmesg -c >> '+sysvals.dmesgfile) 2734 + sysvals.fsetVal('RESUME COMPLETE', 'trace_marker') 2735 + if(sysvals.suspendmode == 'mem'): 2736 + fwdata.append(getFPDT(False)) 2737 + # look for post resume events after the last test run 2738 + t = sysvals.postresumetime 2739 + if(t > 0): 2740 + print('Waiting %d seconds for POST-RESUME trace events...' 
% t) 2741 + time.sleep(t) 2742 + # stop ftrace 2743 + if(sysvals.usecallgraph or sysvals.usetraceevents): 2744 + sysvals.fsetVal('0', 'tracing_on') 2745 + print('CAPTURING TRACE') 2746 + writeDatafileHeader(sysvals.ftracefile, fwdata) 2747 + os.system('cat '+tp+'trace >> '+sysvals.ftracefile) 2748 + sysvals.fsetVal('', 'trace') 2749 + devProps() 2750 + # grab a copy of the dmesg output 2751 + print('CAPTURING DMESG') 2752 + writeDatafileHeader(sysvals.dmesgfile, fwdata) 2753 + sysvals.getdmesg() 3801 2754 3802 - def writeDatafileHeader(filename): 2755 + def writeDatafileHeader(filename, fwdata): 3803 2756 global sysvals 3804 2757 3805 - fw = getFPDT(False) 3806 2758 prt = sysvals.postresumetime 3807 2759 fp = open(filename, 'a') 3808 2760 fp.write(sysvals.teststamp+'\n') 3809 - if(fw): 3810 - fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1])) 2761 + if(sysvals.suspendmode == 'mem'): 2762 + for fw in fwdata: 2763 + if(fw): 2764 + fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1])) 3811 2765 if(prt > 0): 3812 2766 fp.write('# post resume time %u\n' % prt) 3813 2767 fp.close() 3814 - 3815 - # Function: executeAndroidSuspend 3816 - # Description: 3817 - # Execute system suspend through the sysfs interface 3818 - # on a remote android device, then transfer the output 3819 - # dmesg and ftrace files to the local output directory. 
-def executeAndroidSuspend():
-	global sysvals
-
-	# check to see if the display is currently off
-	tp = sysvals.tpath
-	out = os.popen(sysvals.adb+\
-		' shell dumpsys power | grep mScreenOn').read().strip()
-	# if so we need to turn it on so we can issue a new suspend
-	if(out.endswith('false')):
-		print('Waking the device up for the test...')
-		# send the KEYPAD_POWER keyevent to wake it up
-		os.system(sysvals.adb+' shell input keyevent 26')
-		# wait a few seconds so the user can see the device wake up
-		time.sleep(3)
-	# execute however many s/r runs requested
-	for count in range(1,sysvals.execcount+1):
-		# clear the kernel ring buffer just as we start
-		os.system(sysvals.adb+' shell dmesg -c > /dev/null 2>&1')
-		# start ftrace
-		if(sysvals.usetraceevents):
-			print('START TRACING')
-			os.system(sysvals.adb+" shell 'echo 1 > "+tp+"tracing_on'")
-		# initiate suspend
-		if(sysvals.usetraceevents):
-			os.system(sysvals.adb+\
-				" shell 'echo SUSPEND START > "+tp+"trace_marker'")
-		print('SUSPEND START (press a key on the device to resume)')
-		os.system(sysvals.adb+" shell 'echo "+sysvals.suspendmode+\
-			" > "+sysvals.powerfile+"'")
-		# execution will pause here, then adb will exit
-		while(True):
-			check = os.popen(sysvals.adb+\
-				' shell pwd 2>/dev/null').read().strip()
-			if(len(check) > 0):
-				break
-			time.sleep(1)
-		if(sysvals.usetraceevents):
-			os.system(sysvals.adb+" shell 'echo RESUME COMPLETE > "+tp+\
-				"trace_marker'")
-		# return from suspend
-		print('RESUME COMPLETE')
-		# stop ftrace
-		if(sysvals.usetraceevents):
-			os.system(sysvals.adb+" shell 'echo 0 > "+tp+"tracing_on'")
-			print('CAPTURING TRACE')
-			os.system('echo "'+sysvals.teststamp+'" > '+sysvals.ftracefile)
-			os.system(sysvals.adb+' shell cat '+tp+\
-				'trace >> '+sysvals.ftracefile)
-		# grab a copy of the dmesg output
-		print('CAPTURING DMESG')
-		os.system('echo "'+sysvals.teststamp+'" > '+sysvals.dmesgfile)
-		os.system(sysvals.adb+' shell dmesg >> '+sysvals.dmesgfile)
 
 # Function: setUSBDevicesAuto
 # Description:
···
 def setUSBDevicesAuto():
 	global sysvals
 
-	rootCheck()
+	rootCheck(True)
 	for dirname, dirnames, filenames in os.walk('/sys/devices'):
 		if(re.match('.*/usb[0-9]*.*', dirname) and
 			'idVendor' in filenames and 'idProduct' in filenames):
···
 # Description:
 #	 Detect all the USB hosts and devices currently connected and add
 #	 a list of USB device names to sysvals for better timeline readability
-# Arguments:
-#	 output: True to output the info to stdout, False otherwise
-def detectUSB(output):
+def detectUSB():
 	global sysvals
 
 	field = {'idVendor':'', 'idProduct':'', 'product':'', 'speed':''}
···
 		'runtime_suspended_time':'',
 		'active_duration':'',
 		'connected_duration':''}
-	if(output):
-		print('LEGEND')
-		print('---------------------------------------------------------------------------------------------')
-		print('  A = async/sync PM queue Y/N                       D = autosuspend delay (seconds)')
-		print('  S = autosuspend Y/N                         rACTIVE = runtime active (min/sec)')
-		print('  P = persist across suspend Y/N              rSUSPEN = runtime suspend (min/sec)')
-		print('  E = runtime suspend enabled/forbidden Y/N    ACTIVE = active duration (min/sec)')
-		print('  R = runtime status active/suspended Y/N     CONNECT = connected duration (min/sec)')
-		print('  U = runtime usage count')
-		print('---------------------------------------------------------------------------------------------')
-		print('  NAME       ID      DESCRIPTION          SPEED A S P E R U D rACTIVE rSUSPEN ACTIVE CONNECT')
-		print('---------------------------------------------------------------------------------------------')
+
+	print('LEGEND')
+	print('---------------------------------------------------------------------------------------------')
+	print('  A = async/sync PM queue Y/N                       D = autosuspend delay (seconds)')
+	print('  S = autosuspend Y/N                         rACTIVE = runtime active (min/sec)')
+	print('  P = persist across suspend Y/N              rSUSPEN = runtime suspend (min/sec)')
+	print('  E = runtime suspend enabled/forbidden Y/N    ACTIVE = active duration (min/sec)')
+	print('  R = runtime status active/suspended Y/N     CONNECT = connected duration (min/sec)')
+	print('  U = runtime usage count')
+	print('---------------------------------------------------------------------------------------------')
+	print('  NAME       ID      DESCRIPTION          SPEED A S P E R U D rACTIVE rSUSPEN ACTIVE CONNECT')
+	print('---------------------------------------------------------------------------------------------')
 
 	for dirname, dirnames, filenames in os.walk('/sys/devices'):
 		if(re.match('.*/usb[0-9]*.*', dirname) and
···
 				field[i] = os.popen('cat %s/%s 2>/dev/null' % \
 					(dirname, i)).read().replace('\n', '')
 			name = dirname.split('/')[-1]
-			if(len(field['product']) > 0):
-				sysvals.altdevname[name] = \
-					'%s [%s]' % (field['product'], name)
+			for i in power:
+				power[i] = os.popen('cat %s/power/%s 2>/dev/null' % \
+					(dirname, i)).read().replace('\n', '')
+			if(re.match('usb[0-9]*', name)):
+				first = '%-8s' % name
 			else:
-				sysvals.altdevname[name] = \
-					'%s:%s [%s]' % (field['idVendor'], \
-						field['idProduct'], name)
-			if(output):
-				for i in power:
-					power[i] = os.popen('cat %s/power/%s 2>/dev/null' % \
-						(dirname, i)).read().replace('\n', '')
-				if(re.match('usb[0-9]*', name)):
-					first = '%-8s' % name
-				else:
-					first = '%8s' % name
-				print('%s [%s:%s] %-20s %-4s %1s %1s %1s %1s %1s %1s %1s %s %s %s %s' % \
-					(first, field['idVendor'], field['idProduct'], \
-					field['product'][0:20], field['speed'], \
-					yesno(power['async']), \
-					yesno(power['control']), \
-					yesno(power['persist']), \
-					yesno(power['runtime_enabled']), \
-					yesno(power['runtime_status']), \
-					power['runtime_usage'], \
-					power['autosuspend'], \
-					ms2nice(power['runtime_active_time']), \
-					ms2nice(power['runtime_suspended_time']), \
-					ms2nice(power['active_duration']), \
-					ms2nice(power['connected_duration'])))
+				first = '%8s' % name
+			print('%s [%s:%s] %-20s %-4s %1s %1s %1s %1s %1s %1s %1s %s %s %s %s' % \
+				(first, field['idVendor'], field['idProduct'], \
+				field['product'][0:20], field['speed'], \
+				yesno(power['async']), \
+				yesno(power['control']), \
+				yesno(power['persist']), \
+				yesno(power['runtime_enabled']), \
+				yesno(power['runtime_status']), \
+				power['runtime_usage'], \
+				power['autosuspend'], \
+				ms2nice(power['runtime_active_time']), \
+				ms2nice(power['runtime_suspended_time']), \
+				ms2nice(power['active_duration']), \
+				ms2nice(power['connected_duration'])))
+
+# Function: devProps
+# Description:
+#	 Retrieve a list of properties for all devices in the trace log
+def devProps(data=0):
+	global sysvals
+	props = dict()
+
+	if data:
+		idx = data.index(': ') + 2
+		if idx >= len(data):
+			return
+		devlist = data[idx:].split(';')
+		for dev in devlist:
+			f = dev.split(',')
+			if len(f) < 3:
+				continue
+			dev = f[0]
+			props[dev] = DevProps()
+			props[dev].altname = f[1]
+			if int(f[2]):
+				props[dev].async = True
+			else:
+				props[dev].async = False
+		sysvals.devprops = props
+		if sysvals.suspendmode == 'command' and 'testcommandstring' in props:
+			sysvals.testcommand = props['testcommandstring'].altname
+		return
+
+	if(os.path.exists(sysvals.ftracefile) == False):
+		doError('%s does not exist' % sysvals.ftracefile, False)
+
+	# first get the list of devices we need properties for
+	msghead = 'Additional data added by AnalyzeSuspend'
+	alreadystamped = False
+	tp = TestProps()
+	tf = open(sysvals.ftracefile, 'r')
+	for line in tf:
+		if msghead in line:
+			alreadystamped = True
+			continue
+		# determine the trace data type (required for further parsing)
+		m = re.match(sysvals.tracertypefmt, line)
+		if(m):
+			tp.setTracerType(m.group('t'))
+			continue
+		# parse only valid lines, if this is not one move on
+		m = re.match(tp.ftrace_line_fmt, line)
+		if(not m or 'device_pm_callback_start' not in line):
+			continue
+		m = re.match('.*: (?P<drv>.*) (?P<d>.*), parent: *(?P<p>.*), .*', m.group('msg'));
+		if(not m):
+			continue
+		drv, dev, par = m.group('drv'), m.group('d'), m.group('p')
+		if dev not in props:
+			props[dev] = DevProps()
+	tf.close()
+
+	if not alreadystamped and sysvals.suspendmode == 'command':
+		out = '#\n# '+msghead+'\n# Device Properties: '
+		out += 'testcommandstring,%s,0;' % (sysvals.testcommand)
+		with open(sysvals.ftracefile, 'a') as fp:
+			fp.write(out+'\n')
+		sysvals.devprops = props
+		return
+
+	# now get the syspath for each of our target devices
+	for dirname, dirnames, filenames in os.walk('/sys/devices'):
+		if(re.match('.*/power', dirname) and 'async' in filenames):
+			dev = dirname.split('/')[-2]
+			if dev in props and (not props[dev].syspath or len(dirname) < len(props[dev].syspath)):
+				props[dev].syspath = dirname[:-6]
+
+	# now fill in the properties for our target devices
+	for dev in props:
+		dirname = props[dev].syspath
+		if not dirname or not os.path.exists(dirname):
+			continue
+		with open(dirname+'/power/async') as fp:
+			text = fp.read()
+		props[dev].async = False
+		if 'enabled' in text:
+			props[dev].async = True
+		fields = os.listdir(dirname)
+		if 'product' in fields:
+			with open(dirname+'/product') as fp:
+				props[dev].altname = fp.read()
+		elif 'name' in fields:
+			with open(dirname+'/name') as fp:
+				props[dev].altname = fp.read()
+		elif 'model' in fields:
+			with open(dirname+'/model') as fp:
+				props[dev].altname = fp.read()
+		elif 'description' in fields:
+			with open(dirname+'/description') as fp:
+				props[dev].altname = fp.read()
+		elif 'id' in fields:
+			with open(dirname+'/id') as fp:
+				props[dev].altname = fp.read()
+		elif 'idVendor' in fields and 'idProduct' in fields:
+			idv, idp = '', ''
+			with open(dirname+'/idVendor') as fp:
+				idv = fp.read().strip()
+			with open(dirname+'/idProduct') as fp:
+				idp = fp.read().strip()
+			props[dev].altname = '%s:%s' % (idv, idp)
+
+		if props[dev].altname:
+			out = props[dev].altname.strip().replace('\n', ' ')
+			out = out.replace(',', ' ')
+			out = out.replace(';', ' ')
+			props[dev].altname = out
+
+	# and now write the data to the ftrace file
+	if not alreadystamped:
+		out = '#\n# '+msghead+'\n# Device Properties: '
+		for dev in sorted(props):
+			out += props[dev].out(dev)
+		with open(sysvals.ftracefile, 'a') as fp:
+			fp.write(out+'\n')
+
+	sysvals.devprops = props
 
 # Function: getModes
 # Description:
···
 def getModes():
 	global sysvals
 	modes = ''
-	if(not sysvals.android):
-		if(os.path.exists(sysvals.powerfile)):
-			fp = open(sysvals.powerfile, 'r')
-			modes = string.split(fp.read())
-			fp.close()
-	else:
-		line = os.popen(sysvals.adb+' shell cat '+\
-			sysvals.powerfile).read().strip()
-		modes = string.split(line)
+	if(os.path.exists(sysvals.powerfile)):
+		fp = open(sysvals.powerfile, 'r')
+		modes = string.split(fp.read())
+		fp.close()
 	return modes
 
 # Function: getFPDT
···
 	prectype[0] = 'Basic S3 Resume Performance Record'
 	prectype[1] = 'Basic S3 Suspend Performance Record'
 
-	rootCheck()
+	rootCheck(True)
 	if(not os.path.exists(sysvals.fpdtpath)):
 		if(output):
-			doError('file doesnt exist: %s' % sysvals.fpdtpath, False)
+			doError('file does not exist: %s' % sysvals.fpdtpath, False)
 		return False
 	if(not os.access(sysvals.fpdtpath, os.R_OK)):
 		if(output):
-			doError('file isnt readable: %s' % sysvals.fpdtpath, False)
+			doError('file is not readable: %s' % sysvals.fpdtpath, False)
 		return False
 	if(not os.path.exists(sysvals.mempath)):
 		if(output):
-			doError('file doesnt exist: %s' % sysvals.mempath, False)
+			doError('file does not exist: %s' % sysvals.mempath, False)
 		return False
 	if(not os.access(sysvals.mempath, os.R_OK)):
 		if(output):
-			doError('file isnt readable: %s' % sysvals.mempath, False)
+			doError('file is not readable: %s' % sysvals.mempath, False)
 		return False
 
 	fp = open(sysvals.fpdtpath, 'rb')
···
 	while(i < len(records)):
 		header = struct.unpack('HBB', records[i:i+4])
 		if(header[0] not in rectype):
+			i += header[1]
 			continue
 		if(header[1] != 16):
+			i += header[1]
 			continue
 		addr = struct.unpack('Q', records[i+8:i+16])[0]
 		try:
 			fp.seek(addr)
 			first = fp.read(8)
 		except:
-			doError('Bad address 0x%x in %s' % (addr, sysvals.mempath), False)
+			if(output):
+				print('Bad address 0x%x in %s' % (addr, sysvals.mempath))
+			return [0, 0]
 		rechead = struct.unpack('4sI', first)
 		recdata = fp.read(rechead[1]-8)
 		if(rechead[0] == 'FBPT'):
···
 #	 print the results to the terminal
 # Output:
 #	 True if the test will work, False if not
-def statusCheck():
+def statusCheck(probecheck=False):
 	global sysvals
 	status = True
 
-	if(sysvals.android):
-		print('Checking the android system ...')
-	else:
-		print('Checking this system (%s)...' % platform.node())
-
-	# check if adb is connected to a device
-	if(sysvals.android):
-		res = 'NO'
-		out = os.popen(sysvals.adb+' get-state').read().strip()
-		if(out == 'device'):
-			res = 'YES'
-		print('    is android device connected: %s' % res)
-		if(res != 'YES'):
-			print('    Please connect the device before using this tool')
-			return False
+	print('Checking this system (%s)...' % platform.node())
 
 	# check we have root access
-	res = 'NO (No features of this tool will work!)'
-	if(sysvals.android):
-		out = os.popen(sysvals.adb+' shell id').read().strip()
-		if('root' in out):
-			res = 'YES'
-	else:
-		if(os.environ['USER'] == 'root'):
-			res = 'YES'
+	res = sysvals.colorText('NO (No features of this tool will work!)')
+	if(rootCheck(False)):
+		res = 'YES'
 	print('    have root access: %s' % res)
 	if(res != 'YES'):
-		if(sysvals.android):
-			print('    Try running "adb root" to restart the daemon as root')
-		else:
-			print('    Try running this script with sudo')
+		print('    Try running this script with sudo')
 		return False
 
 	# check sysfs is mounted
-	res = 'NO (No features of this tool will work!)'
-	if(sysvals.android):
-		out = os.popen(sysvals.adb+' shell ls '+\
-			sysvals.powerfile).read().strip()
-		if(out == sysvals.powerfile):
-			res = 'YES'
-	else:
-		if(os.path.exists(sysvals.powerfile)):
-			res = 'YES'
+	res = sysvals.colorText('NO (No features of this tool will work!)')
+	if(os.path.exists(sysvals.powerfile)):
+		res = 'YES'
 	print('    is sysfs mounted: %s' % res)
 	if(res != 'YES'):
 		return False
 
 	# check target mode is a valid mode
-	res = 'NO'
-	modes = getModes()
-	if(sysvals.suspendmode in modes):
-		res = 'YES'
-	else:
-		status = False
-	print('    is "%s" a valid power mode: %s' % (sysvals.suspendmode, res))
-	if(res == 'NO'):
-		print('      valid power modes are: %s' % modes)
-		print('      please choose one with -m')
-
-	# check if the tool can unlock the device
-	if(sysvals.android):
-		res = 'YES'
-		out1 = os.popen(sysvals.adb+\
-			' shell dumpsys power | grep mScreenOn').read().strip()
-		out2 = os.popen(sysvals.adb+\
-			' shell input').read().strip()
-		if(not out1.startswith('mScreenOn') or not out2.startswith('usage')):
-			res = 'NO (wake the android device up before running the test)'
-		print('    can I unlock the screen: %s' % res)
+	if sysvals.suspendmode != 'command':
+		res = sysvals.colorText('NO')
+		modes = getModes()
+		if(sysvals.suspendmode in modes):
+			res = 'YES'
+		else:
+			status = False
+		print('    is "%s" a valid power mode: %s' % (sysvals.suspendmode, res))
+		if(res == 'NO'):
+			print('      valid power modes are: %s' % modes)
+			print('      please choose one with -m')
 
 	# check if ftrace is available
-	res = 'NO'
-	ftgood = verifyFtrace()
+	res = sysvals.colorText('NO')
+	ftgood = sysvals.verifyFtrace()
 	if(ftgood):
 		res = 'YES'
 	elif(sysvals.usecallgraph):
 		status = False
 	print('    is ftrace supported: %s' % res)
+
+	# check if kprobes are available
+	res = sysvals.colorText('NO')
+	sysvals.usekprobes = sysvals.verifyKprobes()
+	if(sysvals.usekprobes):
+		res = 'YES'
+	else:
+		sysvals.usedevsrc = False
+	print('    are kprobes supported: %s' % res)
 
 	# what data source are we using
 	res = 'DMESG'
···
 	sysvals.usetraceevents = False
 	for e in sysvals.traceevents:
 		check = False
-		if(sysvals.android):
-			out = os.popen(sysvals.adb+' shell ls -d '+\
-				sysvals.epath+e).read().strip()
-			if(out == sysvals.epath+e):
-				check = True
-		else:
-			if(os.path.exists(sysvals.epath+e)):
-				check = True
+		if(os.path.exists(sysvals.epath+e)):
+			check = True
 		if(not check):
 			sysvals.usetraceeventsonly = False
 		if(e == 'suspend_resume' and check):
···
 	print('    timeline data source: %s' % res)
 
 	# check if rtcwake
-	res = 'NO'
+	res = sysvals.colorText('NO')
 	if(sysvals.rtcpath != ''):
 		res = 'YES'
 	elif(sysvals.rtcwake):
 		status = False
 	print('    is rtcwake supported: %s' % res)
+
+	if not probecheck:
+		return status
+
+	if (sysvals.usecallgraph and len(sysvals.debugfuncs) > 0) or len(sysvals.kprobes) > 0:
+		sysvals.initFtrace(True)
+
+	# verify callgraph debugfuncs
+	if sysvals.usecallgraph and len(sysvals.debugfuncs) > 0:
+		print('    verifying these ftrace callgraph functions work:')
+		sysvals.setFtraceFilterFunctions(sysvals.debugfuncs)
+		fp = open(sysvals.tpath+'set_graph_function', 'r')
+		flist = fp.read().split('\n')
+		fp.close()
+		for func in sysvals.debugfuncs:
+			res = sysvals.colorText('NO')
+			if func in flist:
+				res = 'YES'
+			else:
+				for i in flist:
+					if ' [' in i and func == i.split(' ')[0]:
+						res = 'YES'
+						break
+			print('         %s: %s' % (func, res))
+
+	# verify kprobes
+	if len(sysvals.kprobes) > 0:
+		print('    verifying these kprobes work:')
+		for name in sorted(sysvals.kprobes):
+			if name in sysvals.tracefuncs:
+				continue
+			res = sysvals.colorText('NO')
+			if sysvals.testKprobe(sysvals.kprobes[name]):
+				res = 'YES'
+			print('         %s: %s' % (name, res))
 
 	return status
···
 # Arguments:
 #	 msg: the warning message to print
 #	 file: If not empty, a filename to request be sent to the owner for debug
-def doWarning(msg, file):
+def doWarning(msg, file=''):
 	print('/* %s */') % msg
 	if(file):
 		print('/* For a fix, please send this'+\
···
 # Function: rootCheck
 # Description:
 #	 quick check to see if we have root access
-def rootCheck():
-	if(os.environ['USER'] != 'root'):
-		doError('This script must be run as root', False)
+def rootCheck(fatal):
+	global sysvals
+	if(os.access(sysvals.powerfile, os.W_OK)):
+		return True
+	if fatal:
+		doError('This command must be run as root', False)
+	return False
 
 # Function: getArgInt
 # Description:
 #	 pull out an integer argument from the command line with checks
-def getArgInt(name, args, min, max):
-	try:
-		arg = args.next()
-	except:
-		doError(name+': no argument supplied', True)
+def getArgInt(name, args, min, max, main=True):
+	if main:
+		try:
+			arg = args.next()
+		except:
+			doError(name+': no argument supplied', True)
+	else:
+		arg = args
 	try:
 		val = int(arg)
 	except:
 		doError(name+': non-integer value given', True)
 	if(val < min or val > max):
 		doError(name+': value should be between %d and %d' % (min, max), True)
+	return val
+
+# Function: getArgFloat
+# Description:
+#	 pull out a float argument from the command line with checks
+def getArgFloat(name, args, min, max, main=True):
+	if main:
+		try:
+			arg = args.next()
+		except:
+			doError(name+': no argument supplied', True)
+	else:
+		arg = args
+	try:
+		val = float(arg)
+	except:
+		doError(name+': non-numerical value given', True)
+	if(val < min or val > max):
+		doError(name+': value should be between %f and %f' % (min, max), True)
 	return val
 
 # Function: rerunTest
···
 # Function: runTest
 # Description:
 #	 execute a suspend/resume, gather the logs, and generate the output
-def runTest(subdir):
+def runTest(subdir, testpath=''):
 	global sysvals
 
 	# prepare for the test
-	if(not sysvals.android):
-		initFtrace()
-	else:
-		initFtraceAndroid()
-	sysvals.initTestOutput(subdir)
+	sysvals.initFtrace()
+	sysvals.initTestOutput(subdir, testpath)
 
 	vprint('Output files:\n    %s' % sysvals.dmesgfile)
 	if(sysvals.usecallgraph or
···
 	vprint('    %s' % sysvals.htmlfile)
 
 	# execute the test
-	if(not sysvals.android):
-		executeSuspend()
-	else:
-		executeAndroidSuspend()
+	executeSuspend()
+	sysvals.cleanupFtrace()
 
 	# analyze the data and create the html output
 	print('PROCESSING DATA')
···
 	createHTMLSummarySimple(testruns, subdir+'/summary.html')
 
+# Function: checkArgBool
+# Description:
+#	 check if a boolean string value is true or false
+def checkArgBool(value):
+	yes = ['1', 'true', 'yes', 'on']
+	if value.lower() in yes:
+		return True
+	return False
+
+# Function: configFromFile
+# Description:
+#	 Configure the script via the info in a config file
+def configFromFile(file):
+	global sysvals
+	Config = ConfigParser.ConfigParser()
+
+	ignorekprobes = False
+	Config.read(file)
+	sections = Config.sections()
+	if 'Settings' in sections:
+		for opt in Config.options('Settings'):
+			value = Config.get('Settings', opt).lower()
+			if(opt.lower() == 'verbose'):
+				sysvals.verbose = checkArgBool(value)
+			elif(opt.lower() == 'addlogs'):
+				sysvals.addlogs = checkArgBool(value)
+			elif(opt.lower() == 'dev'):
+				sysvals.usedevsrc = checkArgBool(value)
+			elif(opt.lower() == 'ignorekprobes'):
+				ignorekprobes = checkArgBool(value)
+			elif(opt.lower() == 'x2'):
+				if checkArgBool(value):
+					sysvals.execcount = 2
+			elif(opt.lower() == 'callgraph'):
+				sysvals.usecallgraph = checkArgBool(value)
+			elif(opt.lower() == 'callgraphfunc'):
+				sysvals.debugfuncs = []
+				if value:
+					value = value.split(',')
+				for i in value:
+					sysvals.debugfuncs.append(i.strip())
+			elif(opt.lower() == 'expandcg'):
+				sysvals.cgexp = checkArgBool(value)
+			elif(opt.lower() == 'srgap'):
+				if checkArgBool(value):
+					sysvals.srgap = 5
+			elif(opt.lower() == 'mode'):
+				sysvals.suspendmode = value
+			elif(opt.lower() == 'command'):
+				sysvals.testcommand = value
+			elif(opt.lower() == 'x2delay'):
+				sysvals.x2delay = getArgInt('-x2delay', value, 0, 60000, False)
+			elif(opt.lower() == 'postres'):
+				sysvals.postresumetime = getArgInt('-postres', value, 0, 3600, False)
+			elif(opt.lower() == 'rtcwake'):
+				sysvals.rtcwake = True
+				sysvals.rtcwaketime = getArgInt('-rtcwake', value, 0, 3600, False)
+			elif(opt.lower() == 'timeprec'):
+				sysvals.setPrecision(getArgInt('-timeprec', value, 0, 6, False))
+			elif(opt.lower() == 'mindev'):
+				sysvals.mindevlen = getArgFloat('-mindev', value, 0.0, 10000.0, False)
+			elif(opt.lower() == 'mincg'):
+				sysvals.mincglen = getArgFloat('-mincg', value, 0.0, 10000.0, False)
+			elif(opt.lower() == 'kprobecolor'):
+				try:
+					val = int(value, 16)
+					sysvals.kprobecolor = '#'+value
+				except:
+					sysvals.kprobecolor = value
+			elif(opt.lower() == 'synccolor'):
+				try:
+					val = int(value, 16)
+					sysvals.synccolor = '#'+value
+				except:
+					sysvals.synccolor = value
+			elif(opt.lower() == 'output-dir'):
+				args = dict()
+				n = datetime.now()
+				args['date'] = n.strftime('%y%m%d')
+				args['time'] = n.strftime('%H%M%S')
+				args['hostname'] = sysvals.hostname
+				sysvals.outdir = value.format(**args)
+
+	if sysvals.suspendmode == 'command' and not sysvals.testcommand:
+		doError('No command supplied for mode "command"', False)
+	if sysvals.usedevsrc and sysvals.usecallgraph:
+		doError('dev and callgraph cannot both be true', False)
+	if sysvals.usecallgraph and sysvals.execcount > 1:
+		doError('-x2 is not compatible with -f', False)
+
+	if ignorekprobes:
+		return
+
+	kprobes = dict()
+	archkprobe = 'Kprobe_'+platform.machine()
+	if archkprobe in sections:
+		for name in Config.options(archkprobe):
+			kprobes[name] = Config.get(archkprobe, name)
+	if 'Kprobe' in sections:
+		for name in Config.options('Kprobe'):
+			kprobes[name] = Config.get('Kprobe', name)
+
+	for name in kprobes:
+		function = name
+		format = name
+		color = ''
+		args = dict()
+		data = kprobes[name].split()
+		i = 0
+		for val in data:
+			# bracketted strings are special formatting, read them separately
+			if val[0] == '[' and val[-1] == ']':
+				for prop in val[1:-1].split(','):
+					p = prop.split('=')
+					if p[0] == 'color':
+						try:
+							color = int(p[1], 16)
+							color = '#'+p[1]
+						except:
+							color = p[1]
+				continue
+			# first real arg should be the format string
+			if i == 0:
+				format = val
+			# all other args are actual function args
+			else:
+				d = val.split('=')
+				args[d[0]] = d[1]
+			i += 1
+		if not function or not format:
+			doError('Invalid kprobe: %s' % name, False)
+		for arg in re.findall('{(?P<n>[a-z,A-Z,0-9]*)}', format):
+			if arg not in args:
+				doError('Kprobe "%s" is missing argument "%s"' % (name, arg), False)
+		if name in sysvals.kprobes:
+			doError('Duplicate kprobe found "%s"' % (name), False)
+		vprint('Adding KPROBE: %s %s %s %s' % (name, function, format, args))
+		sysvals.kprobes[name] = {
+			'name': name,
+			'func': function,
+			'format': format,
+			'args': args,
+			'mask': re.sub('{(?P<n>[a-z,A-Z,0-9]*)}', '.*', format)
+		}
+		if color:
+			sysvals.kprobes[name]['color'] = color
+
 # Function: printHelp
 # Description:
 #	 print out the help text
···
 	modes = getModes()
 
 	print('')
-	print('AnalyzeSuspend v%.1f' % sysvals.version)
+	print('AnalyzeSuspend v%s' % sysvals.version)
 	print('Usage: sudo analyze_suspend.py <options>')
 	print('')
 	print('Description:')
···
 	print('  [general]')
 	print('   -h           Print this help text')
 	print('   -v           Print the current tool version')
+	print('   -config file Pull arguments and config options from a file')
 	print('   -verbose     Print extra information during execution and analysis')
 	print('   -status      Test to see if the system is enabled to run this tool')
 	print('   -modes       List available suspend modes')
 	print('   -m mode      Mode to initiate for suspend %s (default: %s)') % (modes, sysvals.suspendmode)
-	print('   -rtcwake t   Use rtcwake to autoresume after <t> seconds (default: disabled)')
+	print('   -o subdir    Override the output subdirectory')
 	print('  [advanced]')
+	print('   -rtcwake t   Use rtcwake to autoresume after <t> seconds (default: disabled)')
+	print('   -addlogs     Add the dmesg and ftrace logs to the html output')
+	print('   -multi n d   Execute <n> consecutive tests at <d> seconds intervals. The outputs will')
+	print('                be created in a new subdirectory with a summary page.')
+	print('   -srgap       Add a visible gap in the timeline between sus/res (default: disabled)')
+	print('   -cmd {s}     Instead of suspend/resume, run a command, e.g. "sync -d"')
+	print('   -mindev ms   Discard all device blocks shorter than ms milliseconds (e.g. 0.001 for us)')
+	print('   -mincg ms    Discard all callgraphs shorter than ms milliseconds (e.g. 0.001 for us)')
+	print('   -timeprec N  Number of significant digits in timestamps (0:S, [3:ms], 6:us)')
+	print('  [debug]')
 	print('   -f           Use ftrace to create device callgraphs (default: disabled)')
-	print('   -filter "d1 d2 ..." Filter out all but this list of dev names')
+	print('   -expandcg    pre-expand the callgraph data in the html output (default: disabled)')
+	print('   -flist       Print the list of functions currently being captured in ftrace')
+	print('   -flistall    Print all functions capable of being captured in ftrace')
+	print('   -fadd file   Add functions to be graphed in the timeline from a list in a text file')
+	print('   -filter "d1 d2 ..." Filter out all but this list of device names')
+	print('   -dev         Display common low level functions in the timeline')
+	print('  [post-resume task analysis]')
 	print('   -x2          Run two suspend/resumes back to back (default: disabled)')
 	print('   -x2delay t   Minimum millisecond delay <t> between the two test runs (default: 0 ms)')
 	print('   -postres t   Time after resume completion to wait for post-resume events (default: 0 S)')
-	print('   -multi n d   Execute <n> consecutive tests at <d> seconds intervals. The outputs will')
-	print('                be created in a new subdirectory with a summary page.')
 	print('  [utilities]')
 	print('   -fpdt        Print out the contents of the ACPI Firmware Performance Data Table')
 	print('   -usbtopo     Print out the current USB topology with power info')
 	print('   -usbauto     Enable autosuspend for all connected USB devices')
-	print('  [android testing]')
-	print('   -adb binary  Use the given adb binary to run the test on an android device.')
-	print('                The device should already be connected and with root access.')
-	print('                Commands will be executed on the device using "adb shell"')
 	print('  [re-analyze data from previous runs]')
 	print('   -ftrace ftracefile  Create HTML output using ftrace input')
 	print('   -dmesg dmesgfile    Create HTML output using dmesg (not needed for kernel >= 3.15)')
···
 	cmd = ''
 	cmdarg = ''
 	multitest = {'run': False, 'count': 0, 'delay': 0}
+	simplecmds = ['-modes', '-fpdt', '-flist', '-flistall', '-usbtopo', '-usbauto', '-status']
 	# loop through the command line arguments
 	args = iter(sys.argv[1:])
 	for arg in args:
···
 				val = args.next()
 			except:
 				doError('No mode supplied', True)
+			if val == 'command' and not sysvals.testcommand:
+				doError('No command supplied for mode "command"', True)
 			sysvals.suspendmode = val
-		elif(arg == '-adb'):
-			try:
-				val = args.next()
-			except:
-				doError('No adb binary supplied', True)
-			if(not os.path.exists(val)):
-				doError('file doesnt exist: %s' % val, False)
-			if(not os.access(val, os.X_OK)):
-				doError('file isnt executable: %s' % val, False)
-			try:
-				check = os.popen(val+' version').read().strip()
-			except:
-				doError('adb version failed to execute', False)
-			if(not re.match('Android Debug Bridge .*', check)):
-				doError('adb version failed to execute', False)
-			sysvals.adb = val
-			sysvals.android = True
+		elif(arg in simplecmds):
+			cmd = arg[1:]
+		elif(arg == '-h'):
+			printHelp()
+			sys.exit()
+		elif(arg == '-v'):
+			print("Version %s" % sysvals.version)
+			sys.exit()
 		elif(arg == '-x2'):
-			if(sysvals.postresumetime > 0):
-				doError('-x2 is not compatible with -postres', False)
 			sysvals.execcount = 2
+			if(sysvals.usecallgraph):
+				doError('-x2 is not compatible with -f', False)
 		elif(arg == '-x2delay'):
 			sysvals.x2delay = getArgInt('-x2delay', args, 0, 60000)
 		elif(arg == '-postres'):
-			if(sysvals.execcount != 1):
-				doError('-x2 is not compatible with -postres', False)
 			sysvals.postresumetime = getArgInt('-postres', args, 0, 3600)
 		elif(arg == '-f'):
 			sysvals.usecallgraph = True
-		elif(arg == '-modes'):
-			cmd = 'modes'
-		elif(arg == '-fpdt'):
-			cmd = 'fpdt'
-		elif(arg == '-usbtopo'):
-			cmd = 'usbtopo'
-		elif(arg == '-usbauto'):
-			cmd = 'usbauto'
-		elif(arg == '-status'):
-			cmd = 'status'
+			if(sysvals.execcount > 1):
+				doError('-x2 is not compatible with -f', False)
+			if(sysvals.usedevsrc):
+				doError('-dev is not compatible with -f', False)
+		elif(arg == '-addlogs'):
+			sysvals.addlogs = True
 		elif(arg == '-verbose'):
 			sysvals.verbose = True
-		elif(arg == '-v'):
-			print("Version %.1f" % sysvals.version)
-			sys.exit()
+		elif(arg == '-dev'):
+			sysvals.usedevsrc = True
+			if(sysvals.usecallgraph):
+				doError('-dev is not compatible with -f', False)
 		elif(arg == '-rtcwake'):
 			sysvals.rtcwake = True
 			sysvals.rtcwaketime = getArgInt('-rtcwake', args, 0, 3600)
+		elif(arg == '-timeprec'):
+			sysvals.setPrecision(getArgInt('-timeprec', args, 0, 6))
+		elif(arg == '-mindev'):
+			sysvals.mindevlen = getArgFloat('-mindev', args, 0.0, 10000.0)
+		elif(arg == '-mincg'):
+			sysvals.mincglen = getArgFloat('-mincg', args, 0.0, 10000.0)
+		elif(arg == '-cmd'):
+			try:
+				val = args.next()
+			except:
+				doError('No command string supplied', True)
+			sysvals.testcommand = val
+			sysvals.suspendmode = 'command'
+		elif(arg == '-expandcg'):
+			sysvals.cgexp = True
+		elif(arg == '-srgap'):
+			sysvals.srgap = 5
 		elif(arg == '-multi'):
 			multitest['run'] = True
 			multitest['count'] = getArgInt('-multi n (exec count)', args, 2, 1000000)
 			multitest['delay'] = getArgInt('-multi d (delay between tests)', args, 0, 3600)
+		elif(arg == '-o'):
+			try:
+				val = args.next()
+			except:
+				doError('No subdirectory name supplied', True)
+			sysvals.outdir = val
+		elif(arg == '-config'):
+			try:
+				val = args.next()
+			except:
+				doError('No text file supplied', True)
if(os.path.exists(val) == False): 3510 + doError('%s does not exist' % val, False) 3511 + configFromFile(val) 3512 + elif(arg == '-fadd'): 3513 + try: 3514 + val = args.next() 3515 + except: 3516 + doError('No text file supplied', True) 3517 + if(os.path.exists(val) == False): 3518 + doError('%s does not exist' % val, False) 3519 + sysvals.addFtraceFilterFunctions(val) 4781 3520 elif(arg == '-dmesg'): 4782 3521 try: 4783 3522 val = args.next() ··· 4813 3498 sysvals.notestrun = True 4814 3499 sysvals.dmesgfile = val 4815 3500 if(os.path.exists(sysvals.dmesgfile) == False): 4816 - doError('%s doesnt exist' % sysvals.dmesgfile, False) 3501 + doError('%s does not exist' % sysvals.dmesgfile, False) 4817 3502 elif(arg == '-ftrace'): 4818 3503 try: 4819 3504 val = args.next() 4820 3505 except: 4821 3506 doError('No ftrace file supplied', True) 4822 3507 sysvals.notestrun = True 4823 - sysvals.usecallgraph = True 4824 3508 sysvals.ftracefile = val 4825 3509 if(os.path.exists(sysvals.ftracefile) == False): 4826 - doError('%s doesnt exist' % sysvals.ftracefile, False) 3510 + doError('%s does not exist' % sysvals.ftracefile, False) 4827 3511 elif(arg == '-summary'): 4828 3512 try: 4829 3513 val = args.next() ··· 4832 3518 cmdarg = val 4833 3519 sysvals.notestrun = True 4834 3520 if(os.path.isdir(val) == False): 4835 - doError('%s isnt accesible' % val, False) 3521 + doError('%s is not accesible' % val, False) 4836 3522 elif(arg == '-filter'): 4837 3523 try: 4838 3524 val = args.next() 4839 3525 except: 4840 3526 doError('No devnames supplied', True) 4841 3527 sysvals.setDeviceFilter(val) 4842 - elif(arg == '-h'): 4843 - printHelp() 4844 - sys.exit() 4845 3528 else: 4846 3529 doError('Invalid argument: '+arg, True) 3530 + 3531 + # callgraph size cannot exceed device size 3532 + if sysvals.mincglen < sysvals.mindevlen: 3533 + sysvals.mincglen = sysvals.mindevlen 4847 3534 4848 3535 # just run a utility command and exit 4849 3536 if(cmd != ''): 4850 3537 if(cmd == 'status'): 
4851 - statusCheck() 3538 + statusCheck(True) 4852 3539 elif(cmd == 'fpdt'): 4853 - if(sysvals.android): 4854 - doError('cannot read FPDT on android device', False) 4855 3540 getFPDT(True) 4856 3541 elif(cmd == 'usbtopo'): 4857 - if(sysvals.android): 4858 - doError('cannot read USB topology '+\ 4859 - 'on an android device', False) 4860 - detectUSB(True) 3542 + detectUSB() 4861 3543 elif(cmd == 'modes'): 4862 3544 modes = getModes() 4863 3545 print modes 3546 + elif(cmd == 'flist'): 3547 + sysvals.getFtraceFilterFunctions(True) 3548 + elif(cmd == 'flistall'): 3549 + sysvals.getFtraceFilterFunctions(False) 4864 3550 elif(cmd == 'usbauto'): 4865 3551 setUSBDevicesAuto() 4866 3552 elif(cmd == 'summary'): 4867 3553 print("Generating a summary of folder \"%s\"" % cmdarg) 4868 3554 runSummary(cmdarg, True) 4869 3555 sys.exit() 4870 - 4871 - # run test on android device 4872 - if(sysvals.android): 4873 - if(sysvals.usecallgraph): 4874 - doError('ftrace (-f) is not yet supported '+\ 4875 - 'in the android kernel', False) 4876 - if(sysvals.notestrun): 4877 - doError('cannot analyze test files on the '+\ 4878 - 'android device', False) 4879 3556 4880 3557 # if instructed, re-analyze existing data files 4881 3558 if(sysvals.notestrun): ··· 4879 3574 sys.exit() 4880 3575 4881 3576 if multitest['run']: 4882 - # run multiple tests in a separte subdirectory 3577 + # run multiple tests in a separate subdirectory 4883 3578 s = 'x%d' % multitest['count'] 4884 - subdir = datetime.now().strftime('suspend-'+s+'-%m%d%y-%H%M%S') 4885 - os.mkdir(subdir) 3579 + if not sysvals.outdir: 3580 + sysvals.outdir = datetime.now().strftime('suspend-'+s+'-%m%d%y-%H%M%S') 3581 + if not os.path.isdir(sysvals.outdir): 3582 + os.mkdir(sysvals.outdir) 4886 3583 for i in range(multitest['count']): 4887 3584 if(i != 0): 4888 3585 print('Waiting %d seconds...' 
% (multitest['delay'])) 4889 3586 time.sleep(multitest['delay']) 4890 3587 print('TEST (%d/%d) START' % (i+1, multitest['count'])) 4891 - runTest(subdir) 3588 + runTest(sysvals.outdir) 4892 3589 print('TEST (%d/%d) COMPLETE' % (i+1, multitest['count'])) 4893 - runSummary(subdir, False) 3590 + runSummary(sysvals.outdir, False) 4894 3591 else: 4895 3592 # run the test in the current directory 4896 - runTest(".") 3593 + runTest('.', sysvals.outdir)
+2 -2
tools/power/x86/turbostat/Makefile
···
  CC = $(CROSS_COMPILE)gcc
  BUILD_OUTPUT := $(CURDIR)
- PREFIX := /usr
- DESTDIR :=
+ PREFIX ?= /usr
+ DESTDIR ?=

  ifeq ("$(origin O)", "command line")
  BUILD_OUTPUT := $(O)
+1 -1
tools/power/x86/turbostat/turbostat.8
···
  35 * 100 = 3500 MHz TSC frequency
  cpu0: MSR_IA32_POWER_CTL: 0x0004005d (C1E auto-promotion: DISabled)
  cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x1e000400 (UNdemote-C3, UNdemote-C1, demote-C3, demote-C1, UNlocked: pkg-cstate-limit=0: pc0)
- cpu0: MSR_NHM_TURBO_RATIO_LIMIT: 0x25262727
+ cpu0: MSR_TURBO_RATIO_LIMIT: 0x25262727
  37 * 100 = 3700 MHz max turbo 4 active cores
  38 * 100 = 3800 MHz max turbo 3 active cores
  39 * 100 = 3900 MHz max turbo 2 active cores
+1 -1
tools/power/x86/turbostat/turbostat.c
···
  unsigned int cores[buckets_no];
  unsigned int ratio[buckets_no];

- get_msr(base_cpu, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
+ get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT, &msr);

  fprintf(outf, "cpu%d: MSR_TURBO_RATIO_LIMIT: 0x%08llx\n",
  	base_cpu, msr);
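The renamed MSR_TURBO_RATIO_LIMIT register packs one turbo ratio per byte: byte N holds the maximum ratio with N+1 active cores, and turbostat multiplies each ratio by the 100 MHz bus clock, which is how the man-page excerpt above gets its numbers. A sketch of that decoding in Python (the helper name and the fixed 4-core bucket count are illustrative; turbostat does this in C, and some CPU models define more buckets):

```python
def decode_turbo_ratio_limit(msr, bus_mhz=100):
    # Byte N of the MSR is the max turbo ratio with N+1 active cores;
    # multiply by the bus clock to get the frequency in MHz.
    limits = []
    for cores in range(1, 5):
        ratio = (msr >> (8 * (cores - 1))) & 0xff
        limits.append((cores, ratio * bus_mhz))
    return limits

# The 0x25262727 value from the man-page example above:
for cores, mhz in reversed(decode_turbo_ratio_limit(0x25262727)):
    print('%d * 100 = %d MHz max turbo %d active cores' % (mhz // 100, mhz, cores))
```

Running this reproduces the man-page lines, e.g. `37 * 100 = 3700 MHz max turbo 4 active cores` from the low-order bytes 0x27, 0x27, 0x26, 0x25.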