Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-3.16-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more ACPI and power management updates from Rafael Wysocki:
"These are fixups on top of the previous PM+ACPI pull request,
regression fixes (ACPI hotplug, cpufreq ppc-corenet), other bug fixes
(ACPI reset, cpufreq), new PM trace points for system suspend
profiling and a copyright notice update.

Specifics:

- I didn't remember correctly that Hans de Goede's ACPI video
patches actually didn't flip the video.use_native_backlight
default, although we had discussed that and decided to do it.
Since I said we would do that in the previous PM+ACPI pull request,
make that change for real now.

- ACPI bus check notifications for PCI host bridges don't cause the
bus below the host bridge to be checked for changes as they should,
because the ACPI-based PCI hotplug (ACPIPHP) subsystem mistakenly
omits hotplug contexts for PCI host bridge ACPI device objects.
Create hotplug contexts for PCI host bridges too, as appropriate.

- Revert recent cpufreq commit related to the big.LITTLE cpufreq
driver that breaks arm64 builds.

- Fix for a regression in the ppc-corenet cpufreq driver introduced
during the 3.15 cycle and causing the driver to use the remainder
from do_div instead of the quotient. From Ed Swarthout.

- Resets triggered by panic activate a BUG_ON() in vmalloc.c on
systems where the ACPI reset register is located in memory address
space. Fix from Randy Wright.

- Fix for a problem with cpufreq governors whose decisions may be
suboptimal because they use deferrable timers for CPU load
sampling. From Srivatsa S Bhat.

- Fix for a problem with the Tegra cpufreq driver where the CPU
frequency is temporarily switched to a "stable" level that differs
from both the initial and target frequencies during transitions,
which sometimes causes udelay() to expire earlier than it should.
From Viresh Kumar.

- New trace points and rework of some existing trace points for
system suspend/resume profiling from Todd Brandt.

- Assorted cpufreq fixes and cleanups from Stratos Karafotis and
Viresh Kumar.

- Copyright notice update for suspend-and-cpuhotplug.txt from
Srivatsa S Bhat"
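
The intermediate ("stable") frequency support mentioned above works in two steps: the core asks the driver for a stable intermediate frequency, switches to it, then completes the jump to the target, saving the old frequency for error recovery. The following is a minimal sketch in plain C with hypothetical simplified types; the names mirror the kernel's callbacks, but none of this is the real kernel API:

```c
#include <assert.h>

struct policy {
    unsigned int cur;          /* current frequency (kHz) */
    unsigned int restore_freq; /* saved before a transition, for error paths */
};

/* Driver callback stand-in: a platform whose stable parent clock is 216 MHz */
static unsigned int get_intermediate(struct policy *p, unsigned int target)
{
    unsigned int ifreq = 216000;

    /* Returning 0 means "no intermediate step needed" */
    if (target == ifreq || p->cur == ifreq)
        return 0;
    return ifreq;
}

static int target_index(struct policy *p, unsigned int freq)
{
    p->cur = freq; /* a real driver would reprogram PLLs here */
    return 0;
}

/* Core-side sequence, loosely modeled on the new __target_index() flow */
static int set_target(struct policy *p, unsigned int target)
{
    unsigned int ifreq;

    if (target == p->cur)
        return 0;
    p->restore_freq = p->cur;       /* for error recovery */

    ifreq = get_intermediate(p, target);
    if (ifreq)
        target_index(p, ifreq);     /* step 1: stable intermediate */
    return target_index(p, target); /* step 2: final target */
}
```

On failure after the intermediate hop, the real core expects the driver to fall back to policy->restore_freq; this sketch omits that error path for brevity.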

* tag 'pm+acpi-3.16-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI / hotplug / PCI: Add hotplug contexts to PCI host bridges
PM / sleep: trace events for device PM callbacks
cpufreq: cpufreq-cpu0: remove dependency on THERMAL and REGULATOR
cpufreq: tegra: update comment for clarity
cpufreq: intel_pstate: Remove duplicate CPU ID check
cpufreq: Mark CPU0 driver with CPUFREQ_NEED_INITIAL_FREQ_CHECK flag
PM / Documentation: Update copyright in suspend-and-cpuhotplug.txt
cpufreq: governor: remove copy_prev_load from 'struct cpu_dbs_common_info'
cpufreq: governor: Be friendly towards latency-sensitive bursty workloads
PM / sleep: trace events for suspend/resume
cpufreq: ppc-corenet-cpu-freq: do_div use quotient
Revert "cpufreq: Enable big.LITTLE cpufreq driver on arm64"
cpufreq: Tegra: implement intermediate frequency callbacks
cpufreq: add support for intermediate (stable) frequencies
ACPI / video: Change the default for video.use_native_backlight to 1
ACPI: Fix bug when ACPI reset register is implemented in system memory

+446 -122
+27 -2
Documentation/cpu-freq/cpu-drivers.txt
···
 1.4 target/target_index or setpolicy?
 1.5 target/target_index
 1.6 setpolicy
+1.7 get_intermediate and target_intermediate
 2. Frequency Table Helpers


···
 cpufreq_driver.attr -		A pointer to a NULL-terminated list of
 				"struct freq_attr" which allow to
 				export values to sysfs.
+
+cpufreq_driver.get_intermediate
+and target_intermediate		Used to switch to stable frequency while
+				changing CPU frequency.


 1.2 Per-CPU Initialization
···
 limits on their own. These shall use the ->setpolicy call


-1.4. target/target_index
+1.5. target/target_index
 -------------

 The target_index call has two arguments: struct cpufreq_policy *policy,
···

 The CPUfreq driver must set the new frequency when called here. The
 actual frequency must be determined by freq_table[index].frequency.
+
+It should always restore to earlier frequency (i.e. policy->restore_freq) in
+case of errors, even if we switched to intermediate frequency earlier.

 Deprecated:
 ----------
···
 for details.


-1.5 setpolicy
+1.6 setpolicy
 ---------------

 The setpolicy call only takes a struct cpufreq_policy *policy as
···
 powersaving-oriented setting when CPUFREQ_POLICY_POWERSAVE. Also check
 the reference implementation in drivers/cpufreq/longrun.c

+1.7 get_intermediate and target_intermediate
+--------------------------------------------
+
+Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION unset.
+
+get_intermediate should return a stable intermediate frequency platform wants to
+switch to, and target_intermediate() should set CPU to that frequency, before
+jumping to the frequency corresponding to 'index'. Core will take care of
+sending notifications and driver doesn't have to handle them in
+target_intermediate() or target_index().
+
+Drivers can return '0' from get_intermediate() in case they don't wish to switch
+to intermediate frequency for some target frequency. In that case core will
+directly call ->target_index().
+
+NOTE: ->target_index() should restore to policy->restore_freq in case of
+failures as core would send notifications for that.


 2. Frequency Table Helpers
+1 -1
Documentation/power/suspend-and-cpuhotplug.txt
···
 Interaction of Suspend code (S3) with the CPU hotplug infrastructure

-(C) 2011 Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
+(C) 2011 - 2014 Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>


 I. How does the regular CPU hotplug code differ from how the Suspend-to-RAM
+12
drivers/acpi/osl.c
···
 	acpi_os_map_generic_address(&acpi_gbl_FADT.xpm1b_event_block);
 	acpi_os_map_generic_address(&acpi_gbl_FADT.xgpe0_block);
 	acpi_os_map_generic_address(&acpi_gbl_FADT.xgpe1_block);
+	if (acpi_gbl_FADT.flags & ACPI_FADT_RESET_REGISTER) {
+		/*
+		 * Use acpi_os_map_generic_address to pre-map the reset
+		 * register if it's in system memory.
+		 */
+		int rv;
+
+		rv = acpi_os_map_generic_address(&acpi_gbl_FADT.reset_register);
+		pr_debug(PREFIX "%s: map reset_reg status %d\n", __func__, rv);
+	}

 	return AE_OK;
 }
···
 	acpi_os_unmap_generic_address(&acpi_gbl_FADT.xgpe0_block);
 	acpi_os_unmap_generic_address(&acpi_gbl_FADT.xpm1b_event_block);
 	acpi_os_unmap_generic_address(&acpi_gbl_FADT.xpm1a_event_block);
+	if (acpi_gbl_FADT.flags & ACPI_FADT_RESET_REGISTER)
+		acpi_os_unmap_generic_address(&acpi_gbl_FADT.reset_register);

 	destroy_workqueue(kacpid_wq);
 	destroy_workqueue(kacpi_notify_wq);
+3
drivers/acpi/sleep.c
···
 #include <linux/acpi.h>
 #include <linux/module.h>
 #include <asm/io.h>
+#include <trace/events/power.h>

 #include "internal.h"
 #include "sleep.h"
···

 	ACPI_FLUSH_CPU_CACHE();

+	trace_suspend_resume(TPS("acpi_suspend"), acpi_state, true);
 	switch (acpi_state) {
 	case ACPI_STATE_S1:
 		barrier();
···
 		pr_info(PREFIX "Low-level resume complete\n");
 		break;
 	}
+	trace_suspend_resume(TPS("acpi_suspend"), acpi_state, false);

 	/* This violates the spec but is required for bug compatibility. */
 	acpi_write_bit_register(ACPI_BITREG_SCI_ENABLE, 1);
+1 -1
drivers/acpi/video.c
···
 /*
  * For Windows 8 systems: used to decide if video module
  * should skip registering backlight interface of its own.
  */
-static int use_native_backlight_param = -1;
+static int use_native_backlight_param = 1;
 module_param_named(use_native_backlight, use_native_backlight_param, int, 0444);
 static bool use_native_backlight_dmi = false;
+26 -4
drivers/base/power/main.c
···
 		pr_info("call %s+ returned %d after %Ld usecs\n", dev_name(dev),
 			error, (unsigned long long)nsecs >> 10);
 	}
-
-	trace_device_pm_report_time(dev, info, nsecs, pm_verb(state.event),
-				    error);
 }

 /**
···
 	calltime = initcall_debug_start(dev);

 	pm_dev_dbg(dev, state, info);
+	trace_device_pm_callback_start(dev, info, state.event);
 	error = cb(dev);
+	trace_device_pm_callback_end(dev, error);
 	suspend_report_result(cb, error);

 	initcall_debug_report(dev, calltime, error, state, info);
···
 	struct device *dev;
 	ktime_t starttime = ktime_get();

+	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, true);
 	mutex_lock(&dpm_list_mtx);
 	pm_transition = state;
···
 	dpm_show_time(starttime, state, "noirq");
 	resume_device_irqs();
 	cpuidle_resume();
+	trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
 }

 /**
···
 	struct device *dev;
 	ktime_t starttime = ktime_get();

+	trace_suspend_resume(TPS("dpm_resume_early"), state.event, true);
 	mutex_lock(&dpm_list_mtx);
 	pm_transition = state;
···
 	mutex_unlock(&dpm_list_mtx);
 	async_synchronize_full();
 	dpm_show_time(starttime, state, "early");
+	trace_suspend_resume(TPS("dpm_resume_early"), state.event, false);
 }

 /**
···
 	struct device *dev;
 	ktime_t starttime = ktime_get();

+	trace_suspend_resume(TPS("dpm_resume"), state.event, true);
 	might_sleep();

 	mutex_lock(&dpm_list_mtx);
···
 	dpm_show_time(starttime, state, NULL);

 	cpufreq_resume();
+	trace_suspend_resume(TPS("dpm_resume"), state.event, false);
 }

 /**
···

 	if (callback) {
 		pm_dev_dbg(dev, state, info);
+		trace_device_pm_callback_start(dev, info, state.event);
 		callback(dev);
+		trace_device_pm_callback_end(dev, 0);
 	}

 	device_unlock(dev);
···
 {
 	struct list_head list;

+	trace_suspend_resume(TPS("dpm_complete"), state.event, true);
 	might_sleep();

 	INIT_LIST_HEAD(&list);
···
 	}
 	list_splice(&list, &dpm_list);
 	mutex_unlock(&dpm_list_mtx);
+	trace_suspend_resume(TPS("dpm_complete"), state.event, false);
 }

 /**
···
 	ktime_t starttime = ktime_get();
 	int error = 0;

+	trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true);
 	cpuidle_pause();
 	suspend_device_irqs();
 	mutex_lock(&dpm_list_mtx);
···
 	} else {
 		dpm_show_time(starttime, state, "noirq");
 	}
+	trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, false);
 	return error;
 }

···
 	ktime_t starttime = ktime_get();
 	int error = 0;

+	trace_suspend_resume(TPS("dpm_suspend_late"), state.event, true);
 	mutex_lock(&dpm_list_mtx);
 	pm_transition = state;
 	async_error = 0;
···
 	} else {
 		dpm_show_time(starttime, state, "late");
 	}
+	trace_suspend_resume(TPS("dpm_suspend_late"), state.event, false);
 	return error;
 }

···

 	calltime = initcall_debug_start(dev);

+	trace_device_pm_callback_start(dev, info, state.event);
 	error = cb(dev, state);
+	trace_device_pm_callback_end(dev, error);
 	suspend_report_result(cb, error);

 	initcall_debug_report(dev, calltime, error, state, info);
···
 	ktime_t starttime = ktime_get();
 	int error = 0;

+	trace_suspend_resume(TPS("dpm_suspend"), state.event, true);
 	might_sleep();

 	cpufreq_suspend();
···
 		dpm_save_failed_step(SUSPEND_SUSPEND);
 	} else
 		dpm_show_time(starttime, state, NULL);
+	trace_suspend_resume(TPS("dpm_suspend"), state.event, false);
 	return error;
 }

···
 		callback = dev->driver->pm->prepare;
 	}

-	if (callback)
+	if (callback) {
+		trace_device_pm_callback_start(dev, info, state.event);
 		ret = callback(dev);
+		trace_device_pm_callback_end(dev, ret);
+	}

 	device_unlock(dev);

···
 {
 	int error = 0;

+	trace_suspend_resume(TPS("dpm_prepare"), state.event, true);
 	might_sleep();

 	mutex_lock(&dpm_list_mtx);
···
 		put_device(dev);
 	}
 	mutex_unlock(&dpm_list_mtx);
+	trace_suspend_resume(TPS("dpm_prepare"), state.event, false);
 	return error;
 }
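
The diff above brackets each suspend/resume phase with paired trace_suspend_resume(..., true/false) events, so a userspace tool can pair begin/end records and compute per-phase durations. A toy, non-kernel model of that pairing (the struct and function names are made up for illustration):

```c
#include <assert.h>
#include <string.h>

struct trace_rec {
    const char *phase;  /* e.g. "dpm_suspend" */
    int begin;          /* 1 = "begin" event, 0 = "end" event */
    unsigned long t_us; /* timestamp, microseconds */
};

/* Duration of the first begin/end pair for 'phase', or 0 if unpaired */
static unsigned long phase_duration(const struct trace_rec *log, int n,
                                    const char *phase)
{
    unsigned long start = 0;
    int have_start = 0;

    for (int i = 0; i < n; i++) {
        if (strcmp(log[i].phase, phase) != 0)
            continue; /* phases can nest; match only our own records */
        if (log[i].begin) {
            start = log[i].t_us;
            have_start = 1;
        } else if (have_start) {
            return log[i].t_us - start;
        }
    }
    return 0;
}
```

This is essentially what suspend/resume profiling tools built on these trace points do with the ftrace event stream.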
+5
drivers/base/syscore.c
···
 #include <linux/mutex.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
+#include <trace/events/power.h>

 static LIST_HEAD(syscore_ops_list);
 static DEFINE_MUTEX(syscore_ops_lock);
···
 	struct syscore_ops *ops;
 	int ret = 0;

+	trace_suspend_resume(TPS("syscore_suspend"), 0, true);
 	pr_debug("Checking wakeup interrupts\n");

 	/* Return error code if there are any wakeup interrupts pending. */
···
 			"Interrupts enabled after %pF\n", ops->suspend);
 	}

+	trace_suspend_resume(TPS("syscore_suspend"), 0, false);
 	return 0;

 err_out:
···
 {
 	struct syscore_ops *ops;

+	trace_suspend_resume(TPS("syscore_resume"), 0, true);
 	WARN_ONCE(!irqs_disabled(),
 		"Interrupts enabled before system core resume.\n");
···
 		WARN_ONCE(!irqs_disabled(),
 			"Interrupts enabled after %pF\n", ops->resume);
 	}
+	trace_suspend_resume(TPS("syscore_resume"), 0, false);
 }
 EXPORT_SYMBOL_GPL(syscore_resume);
 #endif /* CONFIG_PM_SLEEP */
+1 -1
drivers/cpufreq/Kconfig
···

 config GENERIC_CPUFREQ_CPU0
 	tristate "Generic CPU0 cpufreq driver"
-	depends on HAVE_CLK && REGULATOR && OF && THERMAL && CPU_THERMAL
+	depends on HAVE_CLK && OF
 	select PM_OPP
 	help
 	  This adds a generic cpufreq driver for CPU0 frequency management.
+1 -2
drivers/cpufreq/Kconfig.arm
···
 # big LITTLE core layer and glue drivers
 config ARM_BIG_LITTLE_CPUFREQ
 	tristate "Generic ARM big LITTLE CPUfreq driver"
-	depends on (BIG_LITTLE && ARM_CPU_TOPOLOGY) || (ARM64 && SMP)
-	depends on HAVE_CLK
+	depends on ARM && BIG_LITTLE && ARM_CPU_TOPOLOGY && HAVE_CLK
 	select PM_OPP
 	help
 	  This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
+1 -1
drivers/cpufreq/cpufreq-cpu0.c
···
 }

 static struct cpufreq_driver cpu0_cpufreq_driver = {
-	.flags = CPUFREQ_STICKY,
+	.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify = cpufreq_generic_frequency_table_verify,
 	.target_index = cpu0_set_target,
 	.get = cpufreq_generic_get,
+60 -7
drivers/cpufreq/cpufreq.c
···
  *                              GOVERNORS                            *
  *********************************************************************/

+/* Must set freqs->new to intermediate frequency */
+static int __target_intermediate(struct cpufreq_policy *policy,
+				 struct cpufreq_freqs *freqs, int index)
+{
+	int ret;
+
+	freqs->new = cpufreq_driver->get_intermediate(policy, index);
+
+	/* We don't need to switch to intermediate freq */
+	if (!freqs->new)
+		return 0;
+
+	pr_debug("%s: cpu: %d, switching to intermediate freq: oldfreq: %u, intermediate freq: %u\n",
+		 __func__, policy->cpu, freqs->old, freqs->new);
+
+	cpufreq_freq_transition_begin(policy, freqs);
+	ret = cpufreq_driver->target_intermediate(policy, index);
+	cpufreq_freq_transition_end(policy, freqs, ret);
+
+	if (ret)
+		pr_err("%s: Failed to change to intermediate frequency: %d\n",
+		       __func__, ret);
+
+	return ret;
+}
+
 static int __target_index(struct cpufreq_policy *policy,
 			  struct cpufreq_frequency_table *freq_table, int index)
 {
-	struct cpufreq_freqs freqs;
+	struct cpufreq_freqs freqs = {.old = policy->cur, .flags = 0};
+	unsigned int intermediate_freq = 0;
 	int retval = -EINVAL;
 	bool notify;

 	notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION);
-
 	if (notify) {
-		freqs.old = policy->cur;
-		freqs.new = freq_table[index].frequency;
-		freqs.flags = 0;
+		/* Handle switching to intermediate frequency */
+		if (cpufreq_driver->get_intermediate) {
+			retval = __target_intermediate(policy, &freqs, index);
+			if (retval)
+				return retval;

+			intermediate_freq = freqs.new;
+			/* Set old freq to intermediate */
+			if (intermediate_freq)
+				freqs.old = freqs.new;
+		}
+
+		freqs.new = freq_table[index].frequency;
 		pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n",
 			 __func__, policy->cpu, freqs.old, freqs.new);

···
 		pr_err("%s: Failed to change cpu frequency: %d\n", __func__,
 		       retval);

-	if (notify)
+	if (notify) {
 		cpufreq_freq_transition_end(policy, &freqs, retval);
+
+		/*
+		 * Failed after setting to intermediate freq? Driver should have
+		 * reverted back to initial frequency and so should we. Check
+		 * here for intermediate_freq instead of get_intermediate, in
+		 * case we haven't switched to intermediate freq at all.
+		 */
+		if (unlikely(retval && intermediate_freq)) {
+			freqs.old = intermediate_freq;
+			freqs.new = policy->restore_freq;
+			cpufreq_freq_transition_begin(policy, &freqs);
+			cpufreq_freq_transition_end(policy, &freqs, 0);
+		}
+	}

 	return retval;
 }
···
 	 */
 	if (target_freq == policy->cur)
 		return 0;
+
+	/* Save last value to restore later on errors */
+	policy->restore_freq = policy->cur;

 	if (cpufreq_driver->target)
 		retval = cpufreq_driver->target(policy, target_freq, relation);
···
 	    !(driver_data->setpolicy || driver_data->target_index ||
 		    driver_data->target) ||
 	    (driver_data->setpolicy && (driver_data->target_index ||
-		    driver_data->target)))
+		    driver_data->target)) ||
+	    (!!driver_data->get_intermediate != !!driver_data->target_intermediate))
 		return -EINVAL;

 	pr_debug("trying to register driver %s\n", driver_data->name);
+64 -3
drivers/cpufreq/cpufreq_governor.c
···
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 	struct cpufreq_policy *policy;
+	unsigned int sampling_rate;
 	unsigned int max_load = 0;
 	unsigned int ignore_nice;
 	unsigned int j;

-	if (dbs_data->cdata->governor == GOV_ONDEMAND)
+	if (dbs_data->cdata->governor == GOV_ONDEMAND) {
+		struct od_cpu_dbs_info_s *od_dbs_info =
+				dbs_data->cdata->get_cpu_dbs_info_s(cpu);
+
+		/*
+		 * Sometimes, the ondemand governor uses an additional
+		 * multiplier to give long delays. So apply this multiplier to
+		 * the 'sampling_rate', so as to keep the wake-up-from-idle
+		 * detection logic a bit conservative.
+		 */
+		sampling_rate = od_tuners->sampling_rate;
+		sampling_rate *= od_dbs_info->rate_mult;
+
 		ignore_nice = od_tuners->ignore_nice_load;
-	else
+	} else {
+		sampling_rate = cs_tuners->sampling_rate;
 		ignore_nice = cs_tuners->ignore_nice_load;
+	}

 	policy = cdbs->cur_policy;
···
 		if (unlikely(!wall_time || wall_time < idle_time))
 			continue;

-		load = 100 * (wall_time - idle_time) / wall_time;
+		/*
+		 * If the CPU had gone completely idle, and a task just woke up
+		 * on this CPU now, it would be unfair to calculate 'load' the
+		 * usual way for this elapsed time-window, because it will show
+		 * near-zero load, irrespective of how CPU intensive that task
+		 * actually is. This is undesirable for latency-sensitive bursty
+		 * workloads.
+		 *
+		 * To avoid this, we reuse the 'load' from the previous
+		 * time-window and give this task a chance to start with a
+		 * reasonably high CPU frequency. (However, we shouldn't over-do
+		 * this copy, lest we get stuck at a high load (high frequency)
+		 * for too long, even when the current system load has actually
+		 * dropped down. So we perform the copy only once, upon the
+		 * first wake-up from idle.)
+		 *
+		 * Detecting this situation is easy: the governor's deferrable
+		 * timer would not have fired during CPU-idle periods. Hence
+		 * an unusually large 'wall_time' (as compared to the sampling
+		 * rate) indicates this scenario.
+		 *
+		 * prev_load can be zero in two cases and we must recalculate it
+		 * for both cases:
+		 * - during long idle intervals
+		 * - explicitly set to zero
+		 */
+		if (unlikely(wall_time > (2 * sampling_rate) &&
+			     j_cdbs->prev_load)) {
+			load = j_cdbs->prev_load;
+
+			/*
+			 * Perform a destructive copy, to ensure that we copy
+			 * the previous load only once, upon the first wake-up
+			 * from idle.
+			 */
+			j_cdbs->prev_load = 0;
+		} else {
+			load = 100 * (wall_time - idle_time) / wall_time;
+			j_cdbs->prev_load = load;
+		}

 		if (load > max_load)
 			max_load = load;
···
 	for_each_cpu(j, policy->cpus) {
 		struct cpu_dbs_common_info *j_cdbs =
 			dbs_data->cdata->get_cpu_cdbs(j);
+		unsigned int prev_load;

 		j_cdbs->cpu = j;
 		j_cdbs->cur_policy = policy;
 		j_cdbs->prev_cpu_idle = get_cpu_idle_time(j,
 			&j_cdbs->prev_cpu_wall, io_busy);
+
+		prev_load = (unsigned int)
+			(j_cdbs->prev_cpu_wall - j_cdbs->prev_cpu_idle);
+		j_cdbs->prev_load = 100 * prev_load /
+				(unsigned int) j_cdbs->prev_cpu_wall;
+
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice =
 				kcpustat_cpu(j).cpustat[CPUTIME_NICE];
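
The heuristic in the patch above boils down to a small pure function: when the sampling window is unusually long (the deferrable timer slept through idle), reuse the previous load once instead of reporting near-zero. A minimal sketch, not kernel code (the struct and function names are invented for illustration):

```c
#include <assert.h>

struct cpu_sample {
    unsigned int prev_load; /* 0 doubles as an "already consumed" flag */
};

static unsigned int compute_load(struct cpu_sample *s,
                                 unsigned int wall_time,
                                 unsigned int idle_time,
                                 unsigned int sampling_rate)
{
    unsigned int load;

    if (wall_time > 2 * sampling_rate && s->prev_load) {
        /* Woke up from a long idle: reuse the previous load, but only once */
        load = s->prev_load;
        s->prev_load = 0;
    } else {
        load = 100 * (wall_time - idle_time) / wall_time;
        s->prev_load = load;
    }
    return load;
}
```

The destructive copy (zeroing prev_load after one reuse) is what keeps a bursty task from pinning the CPU at a high frequency indefinitely.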
+7
drivers/cpufreq/cpufreq_governor.h
···
 	u64 prev_cpu_idle;
 	u64 prev_cpu_wall;
 	u64 prev_cpu_nice;
+	/*
+	 * Used to keep track of load in the previous interval. However, when
+	 * explicitly set to zero, it is used as a flag to ensure that we copy
+	 * the previous load to the current interval only once, upon the first
+	 * wake-up from idle.
+	 */
+	unsigned int prev_load;
 	struct cpufreq_policy *cur_policy;
 	struct delayed_work work;
 	/*
-6
drivers/cpufreq/intel_pstate.c
···

 static int intel_pstate_init_cpu(unsigned int cpunum)
 {
-
-	const struct x86_cpu_id *id;
 	struct cpudata *cpu;
-
-	id = x86_match_cpu(intel_pstate_cpu_ids);
-	if (!id)
-		return -ENODEV;

 	all_cpu_data[cpunum] = kzalloc(sizeof(struct cpudata), GFP_KERNEL);
 	if (!all_cpu_data[cpunum])
+5 -4
drivers/cpufreq/ppc-corenet-cpufreq.c
···
 	struct cpufreq_frequency_table *table;
 	struct cpu_data *data;
 	unsigned int cpu = policy->cpu;
-	u64 transition_latency_hz;
+	u64 u64temp;

 	np = of_get_cpu_node(cpu, NULL);
 	if (!np)
···
 	for_each_cpu(i, per_cpu(cpu_mask, cpu))
 		per_cpu(cpu_data, i) = data;

-	transition_latency_hz = 12ULL * NSEC_PER_SEC;
-	policy->cpuinfo.transition_latency =
-		do_div(transition_latency_hz, fsl_get_sys_freq());
+	/* Minimum transition latency is 12 platform clocks */
+	u64temp = 12ULL * NSEC_PER_SEC;
+	do_div(u64temp, fsl_get_sys_freq());
+	policy->cpuinfo.transition_latency = u64temp + 1;

 	of_node_put(np);
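
The bug fixed above comes from the kernel's do_div(n, base) contract: it divides n in place (n becomes the quotient) and *returns the remainder*, so using the return value as the result is wrong. The stand-in below mimics that contract in plain, userspace C (do_div_emul and transition_latency_ns are hypothetical names, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Hypothetical stand-in for the kernel's do_div macro: same contract */
static uint32_t do_div_emul(uint64_t *n, uint32_t base)
{
    uint32_t rem = (uint32_t)(*n % base);
    *n /= base; /* quotient is left in *n */
    return rem; /* the return value is only the remainder */
}

/* 12 platform clocks at sys_freq Hz, expressed in nanoseconds */
static uint64_t transition_latency_ns(uint32_t sys_freq)
{
    uint64_t tmp = 12ULL * NSEC_PER_SEC;

    do_div_emul(&tmp, sys_freq); /* use tmp (quotient), not the return value */
    return tmp + 1;              /* round up, as the fix does */
}
```

The original code assigned the return value (the remainder) to transition_latency, which is why the fix switches to the in-place quotient.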
+65 -35
drivers/cpufreq/tegra-cpufreq.c
···
 static struct clk *pll_x_clk;
 static struct clk *pll_p_clk;
 static struct clk *emc_clk;
+static bool pll_x_prepared;

-static int tegra_cpu_clk_set_rate(unsigned long rate)
+static unsigned int tegra_get_intermediate(struct cpufreq_policy *policy,
+					   unsigned int index)
+{
+	unsigned int ifreq = clk_get_rate(pll_p_clk) / 1000;
+
+	/*
+	 * Don't switch to intermediate freq if:
+	 * - we are already at it, i.e. policy->cur == ifreq
+	 * - index corresponds to ifreq
+	 */
+	if ((freq_table[index].frequency == ifreq) || (policy->cur == ifreq))
+		return 0;
+
+	return ifreq;
+}
+
+static int tegra_target_intermediate(struct cpufreq_policy *policy,
+				     unsigned int index)
 {
 	int ret;

 	/*
 	 * Take an extra reference to the main pll so it doesn't turn
-	 * off when we move the cpu off of it
+	 * off when we move the cpu off of it as enabling it again while we
+	 * switch to it from tegra_target() would take additional time.
+	 *
+	 * When target-freq is equal to intermediate freq we don't need to
+	 * switch to an intermediate freq and so this routine isn't called.
+	 * Also, we wouldn't be using pll_x anymore and must not take extra
+	 * reference to it, as it can be disabled now to save some power.
 	 */
 	clk_prepare_enable(pll_x_clk);

 	ret = clk_set_parent(cpu_clk, pll_p_clk);
-	if (ret) {
-		pr_err("Failed to switch cpu to clock pll_p\n");
-		goto out;
-	}
+	if (ret)
+		clk_disable_unprepare(pll_x_clk);
+	else
+		pll_x_prepared = true;

-	if (rate == clk_get_rate(pll_p_clk))
-		goto out;
-
-	ret = clk_set_rate(pll_x_clk, rate);
-	if (ret) {
-		pr_err("Failed to change pll_x to %lu\n", rate);
-		goto out;
-	}
-
-	ret = clk_set_parent(cpu_clk, pll_x_clk);
-	if (ret) {
-		pr_err("Failed to switch cpu to clock pll_x\n");
-		goto out;
-	}
-
-out:
-	clk_disable_unprepare(pll_x_clk);
 	return ret;
 }

 static int tegra_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	unsigned long rate = freq_table[index].frequency;
+	unsigned int ifreq = clk_get_rate(pll_p_clk) / 1000;
 	int ret = 0;

 	/*
···
 	else
 		clk_set_rate(emc_clk, 100000000);  /* emc 50Mhz */

-	ret = tegra_cpu_clk_set_rate(rate * 1000);
+	/*
+	 * target freq == pll_p, don't need to take extra reference to pll_x_clk
+	 * as it isn't used anymore.
+	 */
+	if (rate == ifreq)
+		return clk_set_parent(cpu_clk, pll_p_clk);
+
+	ret = clk_set_rate(pll_x_clk, rate * 1000);
+	/* Restore to earlier frequency on error, i.e. pll_x */
 	if (ret)
-		pr_err("cpu-tegra: Failed to set cpu frequency to %lu kHz\n",
-			rate);
+		pr_err("Failed to change pll_x to %lu\n", rate);
+
+	ret = clk_set_parent(cpu_clk, pll_x_clk);
+	/* This shouldn't fail while changing or restoring */
+	WARN_ON(ret);
+
+	/*
+	 * Drop count to pll_x clock only if we switched to intermediate freq
+	 * earlier while transitioning to a target frequency.
+	 */
+	if (pll_x_prepared) {
+		clk_disable_unprepare(pll_x_clk);
+		pll_x_prepared = false;
+	}

 	return ret;
 }
···
 }

 static struct cpufreq_driver tegra_cpufreq_driver = {
-	.flags		= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
-	.verify		= cpufreq_generic_frequency_table_verify,
-	.target_index	= tegra_target,
-	.get		= cpufreq_generic_get,
-	.init		= tegra_cpu_init,
-	.exit		= tegra_cpu_exit,
-	.name		= "tegra",
-	.attr		= cpufreq_generic_attr,
+	.flags			= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+	.verify			= cpufreq_generic_frequency_table_verify,
+	.get_intermediate	= tegra_get_intermediate,
+	.target_intermediate	= tegra_target_intermediate,
+	.target_index		= tegra_target,
+	.get			= cpufreq_generic_get,
+	.init			= tegra_cpu_init,
+	.exit			= tegra_cpu_exit,
+	.name			= "tegra",
+	.attr			= cpufreq_generic_attr,
 #ifdef CONFIG_PM
-	.suspend	= cpufreq_generic_suspend,
+	.suspend		= cpufreq_generic_suspend,
 #endif
 };
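
The decision tegra_get_intermediate() makes above can be reduced to a small pure function: skip the intermediate hop when the CPU is already on the stable clock, or when the target *is* the stable clock. A toy, non-kernel sketch (pick_intermediate is an invented name; all values in kHz):

```c
#include <assert.h>

/* Returns the intermediate frequency to hop through, or 0 to skip the hop */
static unsigned int pick_intermediate(unsigned int cur_khz,
                                      unsigned int target_khz,
                                      unsigned int stable_khz)
{
    if (target_khz == stable_khz || cur_khz == stable_khz)
        return 0; /* no intermediate switch needed */
    return stable_khz;
}
```

Returning 0 here is the contract the cpufreq core defines: the core then calls ->target_index() directly without an intermediate transition.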
+10
drivers/pci/hotplug/acpiphp.h
···
 	return func_to_acpi_device(func)->handle;
 }

+struct acpiphp_root_context {
+	struct acpi_hotplug_context hp;
+	struct acpiphp_bridge *root_bridge;
+};
+
+static inline struct acpiphp_root_context *to_acpiphp_root_context(struct acpi_hotplug_context *hp)
+{
+	return container_of(hp, struct acpiphp_root_context, hp);
+}
+
 /*
  * struct acpiphp_attention_info - device specific attention registration
  *
+42 -18
drivers/pci/hotplug/acpiphp_glue.c
···

 static struct acpiphp_bridge *acpiphp_dev_to_bridge(struct acpi_device *adev)
 {
-	struct acpiphp_context *context;
 	struct acpiphp_bridge *bridge = NULL;

 	acpi_lock_hp_context();
-	context = acpiphp_get_context(adev);
-	if (context) {
-		bridge = context->bridge;
+	if (adev->hp) {
+		bridge = to_acpiphp_root_context(adev->hp)->root_bridge;
 		if (bridge)
 			get_bridge(bridge);
-
-		acpiphp_put_context(context);
 	}
 	acpi_unlock_hp_context();
 	return bridge;
···
 	 */
 	get_device(&bus->dev);

-	if (!pci_is_root_bus(bridge->pci_bus)) {
+	acpi_lock_hp_context();
+	if (pci_is_root_bus(bridge->pci_bus)) {
+		struct acpiphp_root_context *root_context;
+
+		root_context = kzalloc(sizeof(*root_context), GFP_KERNEL);
+		if (!root_context)
+			goto err;
+
+		root_context->root_bridge = bridge;
+		acpi_set_hp_context(adev, &root_context->hp, NULL, NULL, NULL);
+	} else {
 		struct acpiphp_context *context;

 		/*
···
 		 * parent is going to be handled by pciehp, in which case this
 		 * bridge is not interesting to us either.
 		 */
-		acpi_lock_hp_context();
 		context = acpiphp_get_context(adev);
-		if (!context) {
-			acpi_unlock_hp_context();
-			put_device(&bus->dev);
-			pci_dev_put(bridge->pci_dev);
-			kfree(bridge);
-			return;
-		}
+		if (!context)
+			goto err;
+
 		bridge->context = context;
 		context->bridge = bridge;
 		/* Get a reference to the parent bridge. */
 		get_bridge(context->func.parent);
-		acpi_unlock_hp_context();
 	}
+	acpi_unlock_hp_context();

 	/* Must be added to the list prior to calling acpiphp_add_context(). */
 	mutex_lock(&bridge_mutex);
···
 		cleanup_bridge(bridge);
 		put_bridge(bridge);
 	}
+	return;
+
+err:
+	acpi_unlock_hp_context();
+	put_device(&bus->dev);
+	pci_dev_put(bridge->pci_dev);
+	kfree(bridge);
+}
+
+void acpiphp_drop_bridge(struct acpiphp_bridge *bridge)
+{
+	if (pci_is_root_bus(bridge->pci_bus)) {
+		struct acpiphp_root_context *root_context;
+		struct acpi_device *adev;
+
+		acpi_lock_hp_context();
+		adev = ACPI_COMPANION(bridge->pci_bus->bridge);
+		root_context = to_acpiphp_root_context(adev->hp);
+		adev->hp = NULL;
+		acpi_unlock_hp_context();
+		kfree(root_context);
+	}
+	cleanup_bridge(bridge);
+	put_bridge(bridge);
 }

 /**
···
 	list_for_each_entry(bridge, &bridge_list, list)
 		if (bridge->pci_bus == bus) {
 			mutex_unlock(&bridge_mutex);
-			cleanup_bridge(bridge);
-			put_bridge(bridge);
+			acpiphp_drop_bridge(bridge);
 			return;
 		}

+25
include/linux/cpufreq.h
···
 	unsigned int		max;    /* in kHz */
 	unsigned int		cur;    /* in kHz, only needed if cpufreq
 					 * governors are used */
+	unsigned int		restore_freq; /* = policy->cur before transition */
 	unsigned int		suspend_freq; /* freq to set during suspend */
 
 	unsigned int		policy; /* see above */
···
 
 	/* define one out of two */
 	int	(*setpolicy)	(struct cpufreq_policy *policy);
+
+	/*
+	 * On failure, should always restore frequency to policy->restore_freq
+	 * (i.e. old freq).
+	 */
 	int	(*target)	(struct cpufreq_policy *policy,	/* Deprecated */
 				 unsigned int target_freq,
 				 unsigned int relation);
 	int	(*target_index)	(struct cpufreq_policy *policy,
 				 unsigned int index);
+	/*
+	 * Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION
+	 * unset.
+	 *
+	 * get_intermediate should return a stable intermediate frequency
+	 * platform wants to switch to and target_intermediate() should set CPU
+	 * to that frequency, before jumping to the frequency corresponding
+	 * to 'index'. Core will take care of sending notifications and driver
+	 * doesn't have to handle them in target_intermediate() or
+	 * target_index().
+	 *
+	 * Drivers can return '0' from get_intermediate() in case they don't
+	 * wish to switch to intermediate frequency for some target frequency.
+	 * In that case core will directly call ->target_index().
+	 */
+	unsigned int (*get_intermediate)(struct cpufreq_policy *policy,
+					 unsigned int index);
+	int	(*target_intermediate)(struct cpufreq_policy *policy,
+				       unsigned int index);
 
 	/* should be defined, if possible */
 	unsigned int	(*get)	(unsigned int cpu);
+67 -35
include/trace/events/power.h
···
 #include <linux/ktime.h>
 #include <linux/pm_qos.h>
 #include <linux/tracepoint.h>
+#include <linux/ftrace_event.h>
+
+#define TPS(x)  tracepoint_string(x)
 
 DECLARE_EVENT_CLASS(cpu,
 
···
 #define PWR_EVENT_EXIT -1
 #endif
 
+#define pm_verb_symbolic(event) \
+	__print_symbolic(event, \
+		{ PM_EVENT_SUSPEND, "suspend" }, \
+		{ PM_EVENT_RESUME, "resume" }, \
+		{ PM_EVENT_FREEZE, "freeze" }, \
+		{ PM_EVENT_QUIESCE, "quiesce" }, \
+		{ PM_EVENT_HIBERNATE, "hibernate" }, \
+		{ PM_EVENT_THAW, "thaw" }, \
+		{ PM_EVENT_RESTORE, "restore" }, \
+		{ PM_EVENT_RECOVER, "recover" })
+
 DEFINE_EVENT(cpu, cpu_frequency,
 
 	TP_PROTO(unsigned int frequency, unsigned int cpu_id),
···
 	TP_ARGS(frequency, cpu_id)
 );
 
-TRACE_EVENT(machine_suspend,
+TRACE_EVENT(device_pm_callback_start,
 
-	TP_PROTO(unsigned int state),
+	TP_PROTO(struct device *dev, const char *pm_ops, int event),
 
-	TP_ARGS(state),
-
-	TP_STRUCT__entry(
-		__field( u32, state )
-	),
-
-	TP_fast_assign(
-		__entry->state = state;
-	),
-
-	TP_printk("state=%lu", (unsigned long)__entry->state)
-);
-
-TRACE_EVENT(device_pm_report_time,
-
-	TP_PROTO(struct device *dev, const char *pm_ops, s64 ops_time,
-		 char *pm_event_str, int error),
-
-	TP_ARGS(dev, pm_ops, ops_time, pm_event_str, error),
+	TP_ARGS(dev, pm_ops, event),
 
 	TP_STRUCT__entry(
 		__string(device, dev_name(dev))
 		__string(driver, dev_driver_string(dev))
 		__string(parent, dev->parent ? dev_name(dev->parent) : "none")
 		__string(pm_ops, pm_ops ? pm_ops : "none ")
-		__string(pm_event_str, pm_event_str)
-		__field(s64, ops_time)
+		__field(int, event)
+	),
+
+	TP_fast_assign(
+		__assign_str(device, dev_name(dev));
+		__assign_str(driver, dev_driver_string(dev));
+		__assign_str(parent,
+			dev->parent ? dev_name(dev->parent) : "none");
+		__assign_str(pm_ops, pm_ops ? pm_ops : "none ");
+		__entry->event = event;
+	),
+
+	TP_printk("%s %s, parent: %s, %s[%s]", __get_str(driver),
+		__get_str(device), __get_str(parent), __get_str(pm_ops),
+		pm_verb_symbolic(__entry->event))
+);
+
+TRACE_EVENT(device_pm_callback_end,
+
+	TP_PROTO(struct device *dev, int error),
+
+	TP_ARGS(dev, error),
+
+	TP_STRUCT__entry(
+		__string(device, dev_name(dev))
+		__string(driver, dev_driver_string(dev))
 		__field(int, error)
 	),
 
 	TP_fast_assign(
-		const char *tmp = dev->parent ? dev_name(dev->parent) : "none";
-		const char *tmp_i = pm_ops ? pm_ops : "none ";
-
 		__assign_str(device, dev_name(dev));
 		__assign_str(driver, dev_driver_string(dev));
-		__assign_str(parent, tmp);
-		__assign_str(pm_ops, tmp_i);
-		__assign_str(pm_event_str, pm_event_str);
-		__entry->ops_time = ops_time;
 		__entry->error = error;
 	),
 
-	/* ops_str has an extra space at the end */
-	TP_printk("%s %s parent=%s state=%s ops=%snsecs=%lld err=%d",
-		__get_str(driver), __get_str(device), __get_str(parent),
-		__get_str(pm_event_str), __get_str(pm_ops),
-		__entry->ops_time, __entry->error)
+	TP_printk("%s %s, err=%d",
+		__get_str(driver), __get_str(device), __entry->error)
+);
+
+TRACE_EVENT(suspend_resume,
+
+	TP_PROTO(const char *action, int val, bool start),
+
+	TP_ARGS(action, val, start),
+
+	TP_STRUCT__entry(
+		__field(const char *, action)
+		__field(int, val)
+		__field(bool, start)
+	),
+
+	TP_fast_assign(
+		__entry->action = action;
+		__entry->val = val;
+		__entry->start = start;
+	),
+
+	TP_printk("%s[%u] %s", __entry->action, (unsigned int)__entry->val,
+		(__entry->start)?"begin":"end")
 );
 
 DECLARE_EVENT_CLASS(wakeup_source,
+5
kernel/cpu.c
···
 #include <linux/gfp.h>
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
+#include <trace/events/power.h>
 
 #include "smpboot.h"
 
···
 	for_each_online_cpu(cpu) {
 		if (cpu == first_cpu)
 			continue;
+		trace_suspend_resume(TPS("CPU_OFF"), cpu, true);
 		error = _cpu_down(cpu, 1);
+		trace_suspend_resume(TPS("CPU_OFF"), cpu, false);
 		if (!error)
 			cpumask_set_cpu(cpu, frozen_cpus);
 		else {
···
 	arch_enable_nonboot_cpus_begin();
 
 	for_each_cpu(cpu, frozen_cpus) {
+		trace_suspend_resume(TPS("CPU_ON"), cpu, true);
 		error = _cpu_up(cpu, 1);
+		trace_suspend_resume(TPS("CPU_ON"), cpu, false);
 		if (!error) {
 			pr_info("CPU%d is up\n", cpu);
 			continue;
+3
kernel/power/hibernate.c
···
 #include <linux/syscore_ops.h>
 #include <linux/ctype.h>
 #include <linux/genhd.h>
+#include <trace/events/power.h>
 
 #include "power.h"
 
···
 
 	in_suspend = 1;
 	save_processor_state();
+	trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, true);
 	error = swsusp_arch_suspend();
+	trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, false);
 	if (error)
 		printk(KERN_ERR "PM: Error %d creating hibernation image\n",
 			error);
+3
kernel/power/process.c
···
 #include <linux/delay.h>
 #include <linux/workqueue.h>
 #include <linux/kmod.h>
+#include <trace/events/power.h>
 
 /*
  * Timeout for stopping processes
···
 	struct task_struct *g, *p;
 	struct task_struct *curr = current;
 
+	trace_suspend_resume(TPS("thaw_processes"), 0, true);
 	if (pm_freezing)
 		atomic_dec(&system_freezing_cnt);
 	pm_freezing = false;
···
 
 	schedule();
 	printk("done.\n");
+	trace_suspend_resume(TPS("thaw_processes"), 0, false);
 }
 
 void thaw_kernel_threads(void)
+12 -2
kernel/power/suspend.c
···
 	if (error)
 		goto Finish;
 
+	trace_suspend_resume(TPS("freeze_processes"), 0, true);
 	error = suspend_freeze_processes();
+	trace_suspend_resume(TPS("freeze_processes"), 0, false);
 	if (!error)
 		return 0;
 
···
 	 * all the devices are suspended.
 	 */
 	if (state == PM_SUSPEND_FREEZE) {
+		trace_suspend_resume(TPS("machine_suspend"), state, true);
 		freeze_enter();
+		trace_suspend_resume(TPS("machine_suspend"), state, false);
 		goto Platform_wake;
 	}
 
···
 	if (!error) {
 		*wakeup = pm_wakeup_pending();
 		if (!(suspend_test(TEST_CORE) || *wakeup)) {
+			trace_suspend_resume(TPS("machine_suspend"),
+				state, true);
 			error = suspend_ops->enter(state);
+			trace_suspend_resume(TPS("machine_suspend"),
+				state, false);
 			events_check_enabled = false;
 		}
 		syscore_resume();
···
 	if (need_suspend_ops(state) && !suspend_ops)
 		return -ENOSYS;
 
-	trace_machine_suspend(state);
 	if (need_suspend_ops(state) && suspend_ops->begin) {
 		error = suspend_ops->begin(state);
 		if (error)
···
 	else if (state == PM_SUSPEND_FREEZE && freeze_ops->end)
 		freeze_ops->end();
 
-	trace_machine_suspend(PWR_EVENT_EXIT);
 	return error;
 
  Recover_platform:
···
 {
 	int error;
 
+	trace_suspend_resume(TPS("suspend_enter"), state, true);
 	if (state == PM_SUSPEND_FREEZE) {
 #ifdef CONFIG_PM_DEBUG
 	if (pm_test_level != TEST_NONE && pm_test_level <= TEST_CPUS) {
···
 	if (state == PM_SUSPEND_FREEZE)
 		freeze_begin();
 
+	trace_suspend_resume(TPS("sync_filesystems"), 0, true);
 	printk(KERN_INFO "PM: Syncing filesystems ... ");
 	sys_sync();
 	printk("done.\n");
+	trace_suspend_resume(TPS("sync_filesystems"), 0, false);
 
 	pr_debug("PM: Preparing system for %s sleep\n", pm_states[state].label);
 	error = suspend_prepare(state);
···
 	if (suspend_test(TEST_FREEZER))
 		goto Finish;
 
+	trace_suspend_resume(TPS("suspend_enter"), state, false);
 	pr_debug("PM: Entering %s sleep\n", pm_states[state].label);
 	pm_restrict_gfp_mask();
 	error = suspend_devices_and_enter(state);