Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"This time (again) cpufreq gets the majority of changes which mostly
are driver updates (including a major consolidation of intel_pstate),
some schedutil governor modifications and core cleanups.

There also are some changes in the system suspend area, mostly related
to diagnostics and debug messages plus some renames of things related
to suspend-to-idle. One major change here is that suspend-to-idle is
now going to be preferred over S3 on systems where the ACPI tables
indicate to do so and provide requisite support (the Low Power Idle S0
_DSM in particular). The system sleep documentation and the tools
related to it are updated too.

The rest is a few cpuidle changes (nothing major), devfreq updates,
generic power domains (genpd) framework updates and a few assorted
modifications elsewhere.

Specifics:

- Drop the P-state selection algorithm based on a PID controller from
intel_pstate and make it use the same P-state selection method
(based on the CPU load) for all types of systems in the active mode
(Rafael Wysocki, Srinivas Pandruvada).

- Rework the cpufreq core and governors to make it possible to take
cross-CPU utilization updates into account and modify the schedutil
governor to actually do so (Viresh Kumar).

- Clean up the handling of transition latency information in the
cpufreq core and untangle it from the information on which drivers
cannot do dynamic frequency switching (Viresh Kumar).

- Add support for new SoCs (MT2701/MT7623 and MT7622) to the mediatek
cpufreq driver and update its DT bindings (Sean Wang).

- Modify the cpufreq dt-platdev driver to automatically create
cpufreq devices for the new (v2) Operating Performance Points (OPP)
DT bindings and update its whitelist of supported systems (Viresh
Kumar, Shubhrajyoti Datta, Marc Gonzalez, Khiem Nguyen, Finley
Xiao).

- Add support for Ux500 to the cpufreq-dt driver and drop the
obsolete dbx500 cpufreq driver (Linus Walleij, Arnd Bergmann).

- Add new SoC (R8A7795) support to the cpufreq rcar driver (Khiem
Nguyen).

- Fix and clean up assorted issues in the cpufreq drivers and core
(Arvind Yadav, Christophe Jaillet, Colin Ian King, Gustavo Silva,
Julia Lawall, Leonard Crestez, Rob Herring, Sudeep Holla).

- Update the IO-wait boost handling in the schedutil governor to make
it less aggressive (Joel Fernandes).

- Rework system suspend diagnostics to make it print fewer messages
to the kernel log by default, add a sysfs knob to allow more
suspend-related messages to be printed and add Low Power S0 Idle
constraints checks to the ACPI suspend-to-idle code (Rafael
Wysocki, Srinivas Pandruvada).

- Prefer suspend-to-idle over S3 on ACPI-based systems with the
ACPI_FADT_LOW_POWER_S0 flag set and the Low Power Idle S0 _DSM
interface present in the ACPI tables (Rafael Wysocki).

- Update documentation related to system sleep and rename a number of
items in the code to make it clearer that they are related to
suspend-to-idle (Rafael Wysocki).

- Export a variable allowing device drivers to check the target
system sleep state from the core system suspend code (Florian
Fainelli).

- Clean up the cpuidle subsystem to handle the polling state on x86
in a more straightforward way and to use %pOF instead of full_name
(Rafael Wysocki, Rob Herring).

- Update the devfreq framework to fix and clean up a few minor issues
(Chanwoo Choi, Rob Herring).

- Extend diagnostics in the generic power domains (genpd) framework
and clean it up slightly (Thara Gopinath, Rob Herring).

- Fix and clean up a couple of issues in the operating performance
points (OPP) framework (Viresh Kumar, Waldemar Rymarkiewicz).

- Add support for RV1108 to the rockchip-io Adaptive Voltage Scaling
(AVS) driver (David Wu).

- Fix the usage of notifiers in CPU power management on some
platforms (Alex Shi).

- Update the pm-graph system suspend/hibernation and boot profiling
utility (Todd Brandt).

- Make it possible to run the cpupower utility without CPU0 (Prarit
Bhargava)"

* tag 'pm-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (87 commits)
cpuidle: Make drivers initialize polling state
cpuidle: Move polling state initialization code to separate file
cpuidle: Eliminate the CPUIDLE_DRIVER_STATE_START symbol
cpufreq: imx6q: Fix imx6sx low frequency support
cpufreq: speedstep-lib: make several arrays static, makes code smaller
PM: docs: Delete the obsolete states.txt document
PM: docs: Describe high-level PM strategies and sleep states
PM / devfreq: Fix memory leak when fail to register device
PM / devfreq: Add dependency on PM_OPP
PM / devfreq: Move private devfreq_update_stats() into devfreq
PM / devfreq: Convert to using %pOF instead of full_name
PM / AVS: rockchip-io: add io selectors and supplies for RV1108
cpufreq: ti: Fix 'of_node_put' being called twice in error handling path
cpufreq: dt-platdev: Drop few entries from whitelist
cpufreq: dt-platdev: Automatically create cpufreq device with OPP v2
ARM: ux500: don't select CPUFREQ_DT
cpuidle: Convert to using %pOF instead of full_name
cpufreq: Convert to using %pOF instead of full_name
PM / Domains: Convert to using %pOF instead of full_name
cpufreq: Cap the default transition delay value to 10 ms
...

+2835 -1746
+12
Documentation/ABI/testing/sysfs-power
···
     This output is useful for system wakeup diagnostics of spurious
     wakeup interrupts.
+
+ What:		/sys/power/pm_debug_messages
+ Date:		July 2017
+ Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
+ Description:
+	The /sys/power/pm_debug_messages file controls the printing
+	of debug messages from the system suspend/hibernation
+	infrastructure to the kernel log.
+
+	Writing a "1" to this file enables the debug messages and
+	writing a "0" (default) to it disables them.  Reads from
+	this file return the current value.
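The new knob can be exercised as below — a minimal sketch using a stand-in temp file, since writing the real /sys/power/pm_debug_messages requires root on a live system:

```shell
# Stand-in for /sys/power/pm_debug_messages (real writes require root).
knob=$(mktemp)
echo 0 > "$knob"   # "0" (default): suspend/hibernation debug messages disabled
echo 1 > "$knob"   # "1": debug messages enabled
cat "$knob"        # reads return the current value
```

On a real system, substitute the sysfs path and watch the kernel log (dmesg) across a suspend/resume cycle to see the extra messages.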
-8
Documentation/admin-guide/pm/cpufreq.rst
···
     # echo `$(($(cat cpuinfo_transition_latency) * 750 / 1000)) > ondemand/sampling_rate

-
- ``min_sampling_rate``
-     The minimum value of ``sampling_rate``.
-
-     Equal to 10000 (10 ms) if :c:macro:`CONFIG_NO_HZ_COMMON` and
-     :c:data:`tick_nohz_active` are both set or to 20 times the value of
-     :c:data:`jiffies` in microseconds otherwise.
-
 ``up_threshold``
     If the estimated CPU load is above this value (in percent), the governor
     will set the frequency to the maximum value allowed for the policy.
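The sampling-rate rule of thumb shown in the hunk above can be sketched with a stand-in latency value (``cpuinfo_transition_latency`` is reported in nanoseconds, ``sampling_rate`` is taken in microseconds):

```shell
# Hypothetical transition latency; on a real system read it from
# /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_transition_latency.
latency_ns=500000
# Same arithmetic as the doc's echo command: ~750x the latency, ns -> us.
sampling_rate=$(( latency_ns * 750 / 1000 ))
echo "$sampling_rate"
```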
+3 -9
Documentation/admin-guide/pm/index.rst
···
 .. toctree::
    :maxdepth: 2

-   cpufreq
-   intel_pstate
-
-.. only:: subproject and html
-
-   Indices
-   =======
-
-   * :ref:`genindex`
+   strategies
+   system-wide
+   working-state
+7 -52
Documentation/admin-guide/pm/intel_pstate.rst
···
 ``powersave``
 .............

-Without HWP, this P-state selection algorithm generally depends on the
-processor model and/or the system profile setting in the ACPI tables and there
-are two variants of it.
-
-One of them is used with processors from the Atom line and (regardless of the
-processor model) on platforms with the system profile in the ACPI tables set to
-"mobile" (laptops mostly), "tablet", "appliance PC", "desktop", or
-"workstation".  It is also used with processors supporting the HWP feature if
-that feature has not been enabled (that is, with the ``intel_pstate=no_hwp``
-argument in the kernel command line).  It is similar to the algorithm
+Without HWP, this P-state selection algorithm is similar to the algorithm
 implemented by the generic ``schedutil`` scaling governor except that the
 utilization metric used by it is based on numbers coming from feedback
 registers of the CPU.  It generally selects P-states proportional to the
-current CPU utilization, so it is referred to as the "proportional" algorithm.
+current CPU utilization.

-The second variant of the ``powersave`` P-state selection algorithm, used in all
-of the other cases (generally, on processors from the Core line, so it is
-referred to as the "Core" algorithm), is based on the values read from the APERF
-and MPERF feedback registers and the previously requested target P-state.
-It does not really take CPU utilization into account explicitly, but as a rule
-it causes the CPU P-state to ramp up very quickly in response to increased
-utilization which is generally desirable in server environments.
-
-Regardless of the variant, this algorithm is run by the driver's utilization
-update callback for the given CPU when it is invoked by the CPU scheduler, but
-not more often than every 10 ms (that can be tweaked via ``debugfs`` in `this
-particular case <Tuning Interface in debugfs_>`_).  Like in the ``performance``
-case, the hardware configuration is not touched if the new P-state turns out to
-be the same as the current one.
+This algorithm is run by the driver's utilization update callback for the
+given CPU when it is invoked by the CPU scheduler, but not more often than
+every 10 ms.  Like in the ``performance`` case, the hardware configuration
+is not touched if the new P-state turns out to be the same as the current
+one.

 This is the default P-state selection algorithm if the
 :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option

···
 gnome-shell-3409  [001] ..s.  2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func
      <idle>-0     [000] ..s.  2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func

-Tuning Interface in ``debugfs``
--------------------------------
-
-The ``powersave`` algorithm provided by ``intel_pstate`` for `the Core line of
-processors in the active mode <powersave_>`_ is based on a `PID controller`_
-whose parameters were chosen to address a number of different use cases at the
-same time.  However, it still is possible to fine-tune it to a specific workload
-and the ``debugfs`` interface under ``/sys/kernel/debug/pstate_snb/`` is
-provided for this purpose.  [Note that the ``pstate_snb`` directory will be
-present only if the specific P-state selection algorithm matching the interface
-in it actually is in use.]
-
-The following files present in that directory can be used to modify the PID
-controller parameters at run time:
-
-| ``deadband``
-| ``d_gain_pct``
-| ``i_gain_pct``
-| ``p_gain_pct``
-| ``sample_rate_ms``
-| ``setpoint``
-
-Note, however, that achieving desirable results this way generally requires
-expert-level understanding of the power vs performance tradeoff, so extra care
-is recommended when attempting to do that.
-

 .. _LCEU2015: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf
 .. _SDM: http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html
 .. _ACPI specification: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf
-.. _PID controller: https://en.wikipedia.org/wiki/PID_controller
+245
Documentation/admin-guide/pm/sleep-states.rst
··· 1 + =================== 2 + System Sleep States 3 + =================== 4 + 5 + :: 6 + 7 + Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com> 8 + 9 + Sleep states are global low-power states of the entire system in which user 10 + space code cannot be executed and the overall system activity is significantly 11 + reduced. 12 + 13 + 14 + Sleep States That Can Be Supported 15 + ================================== 16 + 17 + Depending on its configuration and the capabilities of the platform it runs on, 18 + the Linux kernel can support up to four system sleep states, including 19 + hibernation and up to three variants of system suspend. The sleep states that 20 + can be supported by the kernel are listed below. 21 + 22 + .. _s2idle: 23 + 24 + Suspend-to-Idle 25 + --------------- 26 + 27 + This is a generic, pure software, light-weight variant of system suspend (also 28 + referred to as S2I or S2Idle). It allows more energy to be saved relative to 29 + runtime idle by freezing user space, suspending the timekeeping and putting all 30 + I/O devices into low-power states (possibly lower-power than available in the 31 + working state), such that the processors can spend time in their deepest idle 32 + states while the system is suspended. 33 + 34 + The system is woken up from this state by in-band interrupts, so theoretically 35 + any devices that can cause interrupts to be generated in the working state can 36 + also be set up as wakeup devices for S2Idle. 37 + 38 + This state can be used on platforms without support for :ref:`standby <standby>` 39 + or :ref:`suspend-to-RAM <s2ram>`, or it can be used in addition to any of the 40 + deeper system suspend variants to provide reduced resume latency. It is always 41 + supported if the :c:macro:`CONFIG_SUSPEND` kernel configuration option is set. 42 + 43 + .. 
_standby: 44 + 45 + Standby 46 + ------- 47 + 48 + This state, if supported, offers moderate, but real, energy savings, while 49 + providing a relatively straightforward transition back to the working state. No 50 + operating state is lost (the system core logic retains power), so the system can 51 + go back to where it left off easily enough. 52 + 53 + In addition to freezing user space, suspending the timekeeping and putting all 54 + I/O devices into low-power states, which is done for :ref:`suspend-to-idle 55 + <s2idle>` too, nonboot CPUs are taken offline and all low-level system functions 56 + are suspended during transitions into this state. For this reason, it should 57 + allow more energy to be saved relative to :ref:`suspend-to-idle <s2idle>`, but 58 + the resume latency will generally be greater than for that state. 59 + 60 + The set of devices that can wake up the system from this state usually is 61 + reduced relative to :ref:`suspend-to-idle <s2idle>` and it may be necessary to 62 + rely on the platform for setting up the wakeup functionality as appropriate. 63 + 64 + This state is supported if the :c:macro:`CONFIG_SUSPEND` kernel configuration 65 + option is set and the support for it is registered by the platform with the 66 + core system suspend subsystem. On ACPI-based systems this state is mapped to 67 + the S1 system state defined by ACPI. 68 + 69 + .. _s2ram: 70 + 71 + Suspend-to-RAM 72 + -------------- 73 + 74 + This state (also referred to as STR or S2RAM), if supported, offers significant 75 + energy savings as everything in the system is put into a low-power state, except 76 + for memory, which should be placed into the self-refresh mode to retain its 77 + contents. All of the steps carried out when entering :ref:`standby <standby>` 78 + are also carried out during transitions to S2RAM. Additional operations may 79 + take place depending on the platform capabilities. 
In particular, on ACPI-based 80 + systems the kernel passes control to the platform firmware (BIOS) as the last 81 + step during S2RAM transitions and that usually results in powering down some 82 + more low-level components that are not directly controlled by the kernel. 83 + 84 + The state of devices and CPUs is saved and held in memory. All devices are 85 + suspended and put into low-power states. In many cases, all peripheral buses 86 + lose power when entering S2RAM, so devices must be able to handle the transition 87 + back to the "on" state. 88 + 89 + On ACPI-based systems S2RAM requires some minimal boot-strapping code in the 90 + platform firmware to resume the system from it. This may be the case on other 91 + platforms too. 92 + 93 + The set of devices that can wake up the system from S2RAM usually is reduced 94 + relative to :ref:`suspend-to-idle <s2idle>` and :ref:`standby <standby>` and it 95 + may be necessary to rely on the platform for setting up the wakeup functionality 96 + as appropriate. 97 + 98 + S2RAM is supported if the :c:macro:`CONFIG_SUSPEND` kernel configuration option 99 + is set and the support for it is registered by the platform with the core system 100 + suspend subsystem. On ACPI-based systems it is mapped to the S3 system state 101 + defined by ACPI. 102 + 103 + .. _hibernation: 104 + 105 + Hibernation 106 + ----------- 107 + 108 + This state (also referred to as Suspend-to-Disk or STD) offers the greatest 109 + energy savings and can be used even in the absence of low-level platform support 110 + for system suspend. However, it requires some low-level code for resuming the 111 + system to be present for the underlying CPU architecture. 112 + 113 + Hibernation is significantly different from any of the system suspend variants. 114 + It takes three system state changes to put it into hibernation and two system 115 + state changes to resume it. 
116 + 117 + First, when hibernation is triggered, the kernel stops all system activity and 118 + creates a snapshot image of memory to be written into persistent storage. Next, 119 + the system goes into a state in which the snapshot image can be saved, the image 120 + is written out and finally the system goes into the target low-power state in 121 + which power is cut from almost all of its hardware components, including memory, 122 + except for a limited set of wakeup devices. 123 + 124 + Once the snapshot image has been written out, the system may either enter a 125 + special low-power state (like ACPI S4), or it may simply power down itself. 126 + Powering down means minimum power draw and it allows this mechanism to work on 127 + any system. However, entering a special low-power state may allow additional 128 + means of system wakeup to be used (e.g. pressing a key on the keyboard or 129 + opening a laptop lid). 130 + 131 + After wakeup, control goes to the platform firmware that runs a boot loader 132 + which boots a fresh instance of the kernel (control may also go directly to 133 + the boot loader, depending on the system configuration, but anyway it causes 134 + a fresh instance of the kernel to be booted). That new instance of the kernel 135 + (referred to as the ``restore kernel``) looks for a hibernation image in 136 + persistent storage and if one is found, it is loaded into memory. Next, all 137 + activity in the system is stopped and the restore kernel overwrites itself with 138 + the image contents and jumps into a special trampoline area in the original 139 + kernel stored in the image (referred to as the ``image kernel``), which is where 140 + the special architecture-specific low-level code is needed. Finally, the 141 + image kernel restores the system to the pre-hibernation state and allows user 142 + space to run again. 143 + 144 + Hibernation is supported if the :c:macro:`CONFIG_HIBERNATION` kernel 145 + configuration option is set. 
However, this option can only be set if support 146 + for the given CPU architecture includes the low-level code for system resume. 147 + 148 + 149 + Basic ``sysfs`` Interfaces for System Suspend and Hibernation 150 + ============================================================= 151 + 152 + The following files located in the :file:`/sys/power/` directory can be used by 153 + user space for sleep states control. 154 + 155 + ``state`` 156 + This file contains a list of strings representing sleep states supported 157 + by the kernel. Writing one of these strings into it causes the kernel 158 + to start a transition of the system into the sleep state represented by 159 + that string. 160 + 161 + In particular, the strings "disk", "freeze" and "standby" represent the 162 + :ref:`hibernation <hibernation>`, :ref:`suspend-to-idle <s2idle>` and 163 + :ref:`standby <standby>` sleep states, respectively. The string "mem" 164 + is interpreted in accordance with the contents of the ``mem_sleep`` file 165 + described below. 166 + 167 + If the kernel does not support any system sleep states, this file is 168 + not present. 169 + 170 + ``mem_sleep`` 171 + This file contains a list of strings representing supported system 172 + suspend variants and allows user space to select the variant to be 173 + associated with the "mem" string in the ``state`` file described above. 174 + 175 + The strings that may be present in this file are "s2idle", "shallow" 176 + and "deep". The string "s2idle" always represents :ref:`suspend-to-idle 177 + <s2idle>` and, by convention, "shallow" and "deep" represent 178 + :ref:`standby <standby>` and :ref:`suspend-to-RAM <s2ram>`, 179 + respectively. 180 + 181 + Writing one of the listed strings into this file causes the system 182 + suspend variant represented by it to be associated with the "mem" string 183 + in the ``state`` file. 
The string representing the suspend variant 184 + currently associated with the "mem" string in the ``state`` file 185 + is listed in square brackets. 186 + 187 + If the kernel does not support system suspend, this file is not present. 188 + 189 + ``disk`` 190 + This file contains a list of strings representing different operations 191 + that can be carried out after the hibernation image has been saved. The 192 + possible options are as follows: 193 + 194 + ``platform`` 195 + Put the system into a special low-power state (e.g. ACPI S4) to 196 + make additional wakeup options available and possibly allow the 197 + platform firmware to take a simplified initialization path after 198 + wakeup. 199 + 200 + ``shutdown`` 201 + Power off the system. 202 + 203 + ``reboot`` 204 + Reboot the system (useful for diagnostics mostly). 205 + 206 + ``suspend`` 207 + Hybrid system suspend. Put the system into the suspend sleep 208 + state selected through the ``mem_sleep`` file described above. 209 + If the system is successfully woken up from that state, discard 210 + the hibernation image and continue. Otherwise, use the image 211 + to restore the previous state of the system. 212 + 213 + ``test_resume`` 214 + Diagnostic operation. Load the image as though the system had 215 + just woken up from hibernation and the currently running kernel 216 + instance was a restore kernel and follow up with full system 217 + resume. 218 + 219 + Writing one of the listed strings into this file causes the option 220 + represented by it to be selected. 221 + 222 + The currently selected option is shown in square brackets which means 223 + that the operation represented by it will be carried out after creating 224 + and saving the image next time hibernation is triggered by writing 225 + ``disk`` to :file:`/sys/power/state`. 226 + 227 + If the kernel does not support hibernation, this file is not present. 
228 + 229 + According to the above, there are two ways to make the system go into the 230 + :ref:`suspend-to-idle <s2idle>` state. The first one is to write "freeze" 231 + directly to :file:`/sys/power/state`. The second one is to write "s2idle" to 232 + :file:`/sys/power/mem_sleep` and then to write "mem" to 233 + :file:`/sys/power/state`. Likewise, there are two ways to make the system go 234 + into the :ref:`standby <standby>` state (the strings to write to the control 235 + files in that case are "standby" or "shallow" and "mem", respectively) if that 236 + state is supported by the platform. However, there is only one way to make the 237 + system go into the :ref:`suspend-to-RAM <s2ram>` state (write "deep" into 238 + :file:`/sys/power/mem_sleep` and "mem" into :file:`/sys/power/state`). 239 + 240 + The default suspend variant (ie. the one to be used without writing anything 241 + into :file:`/sys/power/mem_sleep`) is either "deep" (on the majority of systems 242 + supporting :ref:`suspend-to-RAM <s2ram>`) or "s2idle", but it can be overridden 243 + by the value of the "mem_sleep_default" parameter in the kernel command line. 244 + On some ACPI-based systems, depending on the information in the ACPI tables, the 245 + default may be "s2idle" even if :ref:`suspend-to-RAM <s2ram>` is supported.
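The square-bracket convention in ``mem_sleep`` described above can be read back programmatically. A sketch against a stand-in file (the real path, /sys/power/mem_sleep, is only present when the kernel supports system suspend):

```shell
# Stand-in contents mirroring the documented format: the variant shown in
# square brackets is the one currently bound to "mem" in /sys/power/state.
mem_sleep=$(mktemp)
echo "s2idle [deep]" > "$mem_sleep"
# Split tokens onto separate lines, then print the one wrapped in brackets.
current=$(tr ' ' '\n' < "$mem_sleep" | sed -n 's/^\[\(.*\)\]$/\1/p')
echo "$current"
```

On a real system, writing one of the unbracketed strings to /sys/power/mem_sleep (as root) moves the brackets, changing which variant "mem" selects.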
+52
Documentation/admin-guide/pm/strategies.rst
··· 1 + =========================== 2 + Power Management Strategies 3 + =========================== 4 + 5 + :: 6 + 7 + Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com> 8 + 9 + The Linux kernel supports two major high-level power management strategies. 10 + 11 + One of them is based on using global low-power states of the whole system in 12 + which user space code cannot be executed and the overall system activity is 13 + significantly reduced, referred to as :doc:`sleep states <sleep-states>`. The 14 + kernel puts the system into one of these states when requested by user space 15 + and the system stays in it until a special signal is received from one of 16 + designated devices, triggering a transition to the ``working state`` in which 17 + user space code can run. Because sleep states are global and the whole system 18 + is affected by the state changes, this strategy is referred to as the 19 + :doc:`system-wide power management <system-wide>`. 20 + 21 + The other strategy, referred to as the :doc:`working-state power management 22 + <working-state>`, is based on adjusting the power states of individual hardware 23 + components of the system, as needed, in the working state. In consequence, if 24 + this strategy is in use, the working state of the system usually does not 25 + correspond to any particular physical configuration of it, but can be treated as 26 + a metastate covering a range of different power states of the system in which 27 + the individual components of it can be either ``active`` (in use) or 28 + ``inactive`` (idle). If they are active, they have to be in power states 29 + allowing them to process data and to be accessed by software. In turn, if they 30 + are inactive, ideally, they should be in low-power states in which they may not 31 + be accessible. 
32 + 33 + If all of the system components are active, the system as a whole is regarded as 34 + "runtime active" and that situation typically corresponds to the maximum power 35 + draw (or maximum energy usage) of it. If all of them are inactive, the system 36 + as a whole is regarded as "runtime idle" which may be very close to a sleep 37 + state from the physical system configuration and power draw perspective, but 38 + then it takes much less time and effort to start executing user space code than 39 + for the same system in a sleep state. However, transitions from sleep states 40 + back to the working state can only be started by a limited set of devices, so 41 + typically the system can spend much more time in a sleep state than it can be 42 + runtime idle in one go. For this reason, systems usually use less energy in 43 + sleep states than when they are runtime idle most of the time. 44 + 45 + Moreover, the two power management strategies address different usage scenarios. 46 + Namely, if the user indicates that the system will not be in use going forward, 47 + for example by closing its lid (if the system is a laptop), it probably should 48 + go into a sleep state at that point. On the other hand, if the user simply goes 49 + away from the laptop keyboard, it probably should stay in the working state and 50 + use the working-state power management in case it becomes idle, because the user 51 + may come back to it at any time and then may want the system to be immediately 52 + accessible.
+8
Documentation/admin-guide/pm/system-wide.rst
···
+============================
+System-Wide Power Management
+============================
+
+.. toctree::
+   :maxdepth: 2
+
+   sleep-states
+9
Documentation/admin-guide/pm/working-state.rst
···
+==============================
+Working-State Power Management
+==============================
+
+.. toctree::
+   :maxdepth: 2
+
+   cpufreq
+   intel_pstate
-83
Documentation/devicetree/bindings/clock/mt8173-cpu-dvfs.txt
··· 1 - Device Tree Clock bindins for CPU DVFS of Mediatek MT8173 SoC 2 - 3 - Required properties: 4 - - clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock names. 5 - - clock-names: Should contain the following: 6 - "cpu" - The multiplexer for clock input of CPU cluster. 7 - "intermediate" - A parent of "cpu" clock which is used as "intermediate" clock 8 - source (usually MAINPLL) when the original CPU PLL is under 9 - transition and not stable yet. 10 - Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for 11 - generic clock consumer properties. 12 - - proc-supply: Regulator for Vproc of CPU cluster. 13 - 14 - Optional properties: 15 - - sram-supply: Regulator for Vsram of CPU cluster. When present, the cpufreq driver 16 - needs to do "voltage tracking" to step by step scale up/down Vproc and 17 - Vsram to fit SoC specific needs. When absent, the voltage scaling 18 - flow is handled by hardware, hence no software "voltage tracking" is 19 - needed. 
20 - 21 - Example: 22 - -------- 23 - cpu0: cpu@0 { 24 - device_type = "cpu"; 25 - compatible = "arm,cortex-a53"; 26 - reg = <0x000>; 27 - enable-method = "psci"; 28 - cpu-idle-states = <&CPU_SLEEP_0>; 29 - clocks = <&infracfg CLK_INFRA_CA53SEL>, 30 - <&apmixedsys CLK_APMIXED_MAINPLL>; 31 - clock-names = "cpu", "intermediate"; 32 - }; 33 - 34 - cpu1: cpu@1 { 35 - device_type = "cpu"; 36 - compatible = "arm,cortex-a53"; 37 - reg = <0x001>; 38 - enable-method = "psci"; 39 - cpu-idle-states = <&CPU_SLEEP_0>; 40 - clocks = <&infracfg CLK_INFRA_CA53SEL>, 41 - <&apmixedsys CLK_APMIXED_MAINPLL>; 42 - clock-names = "cpu", "intermediate"; 43 - }; 44 - 45 - cpu2: cpu@100 { 46 - device_type = "cpu"; 47 - compatible = "arm,cortex-a57"; 48 - reg = <0x100>; 49 - enable-method = "psci"; 50 - cpu-idle-states = <&CPU_SLEEP_0>; 51 - clocks = <&infracfg CLK_INFRA_CA57SEL>, 52 - <&apmixedsys CLK_APMIXED_MAINPLL>; 53 - clock-names = "cpu", "intermediate"; 54 - }; 55 - 56 - cpu3: cpu@101 { 57 - device_type = "cpu"; 58 - compatible = "arm,cortex-a57"; 59 - reg = <0x101>; 60 - enable-method = "psci"; 61 - cpu-idle-states = <&CPU_SLEEP_0>; 62 - clocks = <&infracfg CLK_INFRA_CA57SEL>, 63 - <&apmixedsys CLK_APMIXED_MAINPLL>; 64 - clock-names = "cpu", "intermediate"; 65 - }; 66 - 67 - &cpu0 { 68 - proc-supply = <&mt6397_vpca15_reg>; 69 - }; 70 - 71 - &cpu1 { 72 - proc-supply = <&mt6397_vpca15_reg>; 73 - }; 74 - 75 - &cpu2 { 76 - proc-supply = <&da9211_vcpu_reg>; 77 - sram-supply = <&mt6397_vsramca7_reg>; 78 - }; 79 - 80 - &cpu3 { 81 - proc-supply = <&da9211_vcpu_reg>; 82 - sram-supply = <&mt6397_vsramca7_reg>; 83 - };
+247
Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt
··· 1 + Binding for MediaTek's CPUFreq driver 2 + ===================================== 3 + 4 + Required properties: 5 + - clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock names. 6 + - clock-names: Should contain the following: 7 + "cpu" - The multiplexer for clock input of CPU cluster. 8 + "intermediate" - A parent of "cpu" clock which is used as "intermediate" clock 9 + source (usually MAINPLL) when the original CPU PLL is under 10 + transition and not stable yet. 11 + Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for 12 + generic clock consumer properties. 13 + - operating-points-v2: Please refer to Documentation/devicetree/bindings/opp/opp.txt 14 + for detail. 15 + - proc-supply: Regulator for Vproc of CPU cluster. 16 + 17 + Optional properties: 18 + - sram-supply: Regulator for Vsram of CPU cluster. When present, the cpufreq driver 19 + needs to do "voltage tracking" to step by step scale up/down Vproc and 20 + Vsram to fit SoC specific needs. When absent, the voltage scaling 21 + flow is handled by hardware, hence no software "voltage tracking" is 22 + needed. 23 + - #cooling-cells: 24 + - cooling-min-level: 25 + - cooling-max-level: 26 + Please refer to Documentation/devicetree/bindings/thermal/thermal.txt 27 + for detail. 
+
+Example 1 (MT7623 SoC):
+
+	cpu_opp_table: opp_table {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-598000000 {
+			opp-hz = /bits/ 64 <598000000>;
+			opp-microvolt = <1050000>;
+		};
+
+		opp-747500000 {
+			opp-hz = /bits/ 64 <747500000>;
+			opp-microvolt = <1050000>;
+		};
+
+		opp-1040000000 {
+			opp-hz = /bits/ 64 <1040000000>;
+			opp-microvolt = <1150000>;
+		};
+
+		opp-1196000000 {
+			opp-hz = /bits/ 64 <1196000000>;
+			opp-microvolt = <1200000>;
+		};
+
+		opp-1300000000 {
+			opp-hz = /bits/ 64 <1300000000>;
+			opp-microvolt = <1300000>;
+		};
+	};
+
+	cpu0: cpu@0 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a7";
+		reg = <0x0>;
+		clocks = <&infracfg CLK_INFRA_CPUSEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+		operating-points-v2 = <&cpu_opp_table>;
+		#cooling-cells = <2>;
+		cooling-min-level = <0>;
+		cooling-max-level = <7>;
+	};
+	cpu@1 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a7";
+		reg = <0x1>;
+		operating-points-v2 = <&cpu_opp_table>;
+	};
+	cpu@2 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a7";
+		reg = <0x2>;
+		operating-points-v2 = <&cpu_opp_table>;
+	};
+	cpu@3 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a7";
+		reg = <0x3>;
+		operating-points-v2 = <&cpu_opp_table>;
+	};
+
+Example 2 (MT8173 SoC):
+
+	cpu_opp_table_a: opp_table_a {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-507000000 {
+			opp-hz = /bits/ 64 <507000000>;
+			opp-microvolt = <859000>;
+		};
+
+		opp-702000000 {
+			opp-hz = /bits/ 64 <702000000>;
+			opp-microvolt = <908000>;
+		};
+
+		opp-1001000000 {
+			opp-hz = /bits/ 64 <1001000000>;
+			opp-microvolt = <983000>;
+		};
+
+		opp-1105000000 {
+			opp-hz = /bits/ 64 <1105000000>;
+			opp-microvolt = <1009000>;
+		};
+
+		opp-1183000000 {
+			opp-hz = /bits/ 64 <1183000000>;
+			opp-microvolt = <1028000>;
+		};
+
+		opp-1404000000 {
+			opp-hz = /bits/ 64 <1404000000>;
+			opp-microvolt = <1083000>;
+		};
+
+		opp-1508000000 {
+			opp-hz = /bits/ 64 <1508000000>;
+			opp-microvolt = <1109000>;
+		};
+
+		opp-1573000000 {
+			opp-hz = /bits/ 64 <1573000000>;
+			opp-microvolt = <1125000>;
+		};
+	};
+
+	cpu_opp_table_b: opp_table_b {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-507000000 {
+			opp-hz = /bits/ 64 <507000000>;
+			opp-microvolt = <828000>;
+		};
+
+		opp-702000000 {
+			opp-hz = /bits/ 64 <702000000>;
+			opp-microvolt = <867000>;
+		};
+
+		opp-1001000000 {
+			opp-hz = /bits/ 64 <1001000000>;
+			opp-microvolt = <927000>;
+		};
+
+		opp-1209000000 {
+			opp-hz = /bits/ 64 <1209000000>;
+			opp-microvolt = <968000>;
+		};
+
+		opp-1404000000 {
+			opp-hz = /bits/ 64 <1404000000>;
+			opp-microvolt = <1028000>;
+		};
+
+		opp-1612000000 {
+			opp-hz = /bits/ 64 <1612000000>;
+			opp-microvolt = <1049000>;
+		};
+
+		opp-1807000000 {
+			opp-hz = /bits/ 64 <1807000000>;
+			opp-microvolt = <1089000>;
+		};
+
+		opp-1989000000 {
+			opp-hz = /bits/ 64 <1989000000>;
+			opp-microvolt = <1125000>;
+		};
+	};
+
+	cpu0: cpu@0 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a53";
+		reg = <0x000>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA53SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+		operating-points-v2 = <&cpu_opp_table_a>;
+	};
+
+	cpu1: cpu@1 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a53";
+		reg = <0x001>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA53SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+		operating-points-v2 = <&cpu_opp_table_a>;
+	};
+
+	cpu2: cpu@100 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a57";
+		reg = <0x100>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA57SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+		operating-points-v2 = <&cpu_opp_table_b>;
+	};
+
+	cpu3: cpu@101 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a57";
+		reg = <0x101>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA57SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+		operating-points-v2 = <&cpu_opp_table_b>;
+	};
+
+	&cpu0 {
+		proc-supply = <&mt6397_vpca15_reg>;
+	};
+
+	&cpu1 {
+		proc-supply = <&mt6397_vpca15_reg>;
+	};
+
+	&cpu2 {
+		proc-supply = <&da9211_vcpu_reg>;
+		sram-supply = <&mt6397_vsramca7_reg>;
+	};
+
+	&cpu3 {
+		proc-supply = <&da9211_vcpu_reg>;
+		sram-supply = <&mt6397_vsramca7_reg>;
+	};
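The "voltage tracking" behaviour that the sram-supply property enables can be pictured as a step-wise walk of the two rails. The sketch below is an illustrative model only, not the driver's code: the function name, step size, and the simple "Vsram never drops below Vproc" invariant are our assumptions about the general idea, not MT8173-specific rules.

```python
def voltage_track(vproc, vsram, target_proc, step=25000):
    """Illustrative model of step-wise Vproc/Vsram scaling (all values in uV).

    Moves vproc toward target_proc in small steps, dragging vsram along so
    that vsram never falls below vproc. Hypothetical helper; the real
    constraints (step size, Vsram-Vproc window) are SoC-specific.
    """
    steps = []
    while vproc != target_proc:
        if vproc < target_proc:
            # Scaling up: raise Vsram first so it stays ahead of Vproc.
            vsram = min(vsram + step, target_proc + step)
            vproc = min(vproc + step, target_proc, vsram)
        else:
            # Scaling down: lower Vproc first, then let Vsram follow.
            vproc = max(vproc - step, target_proc)
            vsram = max(vsram - step, vproc)
        assert vsram >= vproc
        steps.append((vproc, vsram))
    return steps
```

Running it for a 1.0 V to 1.1 V transition yields a short ladder of (Vproc, Vsram) pairs rather than one large jump, which is the point of the tracking flow.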
+2
Documentation/devicetree/bindings/power/rockchip-io-domain.txt
···
 	- "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
 	- "rockchip,rk3399-io-voltage-domain" for rk3399
 	- "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains
+	- "rockchip,rv1108-io-voltage-domain" for rv1108
+	- "rockchip,rv1108-pmu-io-voltage-domain" for rv1108 pmu-domains
 
 Deprecated properties:
 - rockchip,grf: phandle to the syscon managing the "general register files"
-125
Documentation/power/states.txt
-System Power Management Sleep States
-
-(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
-The kernel supports up to four system sleep states generically, although three
-of them depend on the platform support code to implement the low-level details
-for each state.
-
-The states are represented by strings that can be read or written to the
-/sys/power/state file.  Those strings may be "mem", "standby", "freeze" and
-"disk", where the last three always represent Power-On Suspend (if supported),
-Suspend-To-Idle and hibernation (Suspend-To-Disk), respectively.
-
-The meaning of the "mem" string is controlled by the /sys/power/mem_sleep file.
-It contains strings representing the available modes of system suspend that may
-be triggered by writing "mem" to /sys/power/state.  These modes are "s2idle"
-(Suspend-To-Idle), "shallow" (Power-On Suspend) and "deep" (Suspend-To-RAM).
-The "s2idle" mode is always available, while the other ones are only available
-if supported by the platform (if not supported, the strings representing them
-are not present in /sys/power/mem_sleep).  The string representing the suspend
-mode to be used subsequently is enclosed in square brackets.  Writing one of
-the other strings present in /sys/power/mem_sleep to it causes the suspend mode
-to be used subsequently to change to the one represented by that string.
-
-Consequently, there are two ways to cause the system to go into the
-Suspend-To-Idle sleep state.  The first one is to write "freeze" directly to
-/sys/power/state.  The second one is to write "s2idle" to /sys/power/mem_sleep
-and then to write "mem" to /sys/power/state.  Similarly, there are two ways
-to cause the system to go into the Power-On Suspend sleep state (the strings to
-write to the control files in that case are "standby" or "shallow" and "mem",
-respectively) if that state is supported by the platform.  In turn, there is
-only one way to cause the system to go into the Suspend-To-RAM state (write
-"deep" into /sys/power/mem_sleep and "mem" into /sys/power/state).
-
-The default suspend mode (i.e. the one to be used without writing anything into
-/sys/power/mem_sleep) is either "deep" (if Suspend-To-RAM is supported) or
-"s2idle", but it can be overridden by the value of the "mem_sleep_default"
-parameter in the kernel command line.
-
-The properties of all of the sleep states are described below.
-
-
-State:		Suspend-To-Idle
-ACPI state:	S0
-Label:		"s2idle" ("freeze")
-
-This state is a generic, pure software, light-weight, system sleep state.
-It allows more energy to be saved relative to runtime idle by freezing user
-space and putting all I/O devices into low-power states (possibly
-lower-power than available at run time), such that the processors can
-spend more time in their idle states.
-
-This state can be used for platforms without Power-On Suspend/Suspend-to-RAM
-support, or it can be used in addition to Suspend-to-RAM to provide reduced
-resume latency.  It is always supported.
-
-
-State:		Standby / Power-On Suspend
-ACPI State:	S1
-Label:		"shallow" ("standby")
-
-This state, if supported, offers moderate, though real, power savings, while
-providing a relatively low-latency transition back to a working system.  No
-operating state is lost (the CPU retains power), so the system easily starts up
-again where it left off.
-
-In addition to freezing user space and putting all I/O devices into low-power
-states, which is done for Suspend-To-Idle too, nonboot CPUs are taken offline
-and all low-level system functions are suspended during transitions into this
-state.  For this reason, it should allow more energy to be saved relative to
-Suspend-To-Idle, but the resume latency will generally be greater than for that
-state.
-
-
-State:		Suspend-to-RAM
-ACPI State:	S3
-Label:		"deep"
-
-This state, if supported, offers significant power savings as everything in the
-system is put into a low-power state, except for memory, which should be placed
-into the self-refresh mode to retain its contents.  All of the steps carried out
-when entering Power-On Suspend are also carried out during transitions to STR.
-Additional operations may take place depending on the platform capabilities.  In
-particular, on ACPI systems the kernel passes control to the BIOS (platform
-firmware) as the last step during STR transitions and that usually results in
-powering down some more low-level components that aren't directly controlled by
-the kernel.
-
-System and device state is saved and kept in memory.  All devices are suspended
-and put into low-power states.  In many cases, all peripheral buses lose power
-when entering STR, so devices must be able to handle the transition back to the
-"on" state.
-
-For at least ACPI, STR requires some minimal boot-strapping code to resume the
-system from it.  This may be the case on other platforms too.
-
-
-State:		Suspend-to-disk
-ACPI State:	S4
-Label:		"disk"
-
-This state offers the greatest power savings, and can be used even in
-the absence of low-level platform support for power management.  This
-state operates similarly to Suspend-to-RAM, but includes a final step
-of writing memory contents to disk.  On resume, this is read and memory
-is restored to its pre-suspend state.
-
-STD can be handled by the firmware or the kernel.  If it is handled by
-the firmware, it usually requires a dedicated partition that must be
-set up via another operating system for it to use.  Despite the
-inconvenience, this method requires minimal work by the kernel, since
-the firmware will also handle restoring memory contents on resume.
-
-For suspend-to-disk, a mechanism called 'swsusp' (Swap Suspend) is used
-to write memory contents to free swap space.  swsusp has some restrictive
-requirements, but should work in most cases.  Some, albeit outdated,
-documentation can be found in Documentation/power/swsusp.txt.
-Alternatively, userspace can do most of the actual suspend to disk work,
-see userland-swsusp.txt.
-
-Once memory state is written to disk, the system may either enter a
-low-power state (like ACPI S4), or it may simply power down.  Powering
-down offers greater savings, and allows this mechanism to work on any
-system.  However, entering a real low-power state allows the user to
-trigger wake up events (e.g. pressing a key or opening a laptop lid).
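The /sys/power/mem_sleep format that the removed document describes (a space-separated list of available modes, with the currently selected one in square brackets, e.g. "s2idle [deep]") is simple to parse. The helper below is our own illustration, not a kernel interface:

```python
def parse_mem_sleep(contents: str):
    """Parse /sys/power/mem_sleep contents such as "s2idle [deep]".

    Returns (available_modes, selected_mode), where the selected mode is
    the entry enclosed in square brackets.
    """
    available, selected = [], None
    for tok in contents.split():
        if tok.startswith("[") and tok.endswith("]"):
            tok = tok[1:-1]   # strip the brackets marking the current mode
            selected = tok
        available.append(tok)
    return available, selected

# Typical usage on a live system (requires the file to exist):
# with open("/sys/power/mem_sleep") as f:
#     modes, current = parse_mem_sleep(f.read())
```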
-1
arch/arm/boot/dts/tango4-smp8758.dtsi
···
 		reg = <0>;
 		clocks = <&clkgen CPU_CLK>;
 		clock-latency = <1>;
-		operating-points = <1215000 0 607500 0 405000 0 243000 0 135000 0>;
 	};
 
 	cpu1: cpu@1 {
+2 -2
arch/arm/mach-tegra/cpuidle-tegra114.c
···
 	return index;
 }
 
-static void tegra114_idle_enter_freeze(struct cpuidle_device *dev,
+static void tegra114_idle_enter_s2idle(struct cpuidle_device *dev,
 				       struct cpuidle_driver *drv,
 				       int index)
 {
···
 #ifdef CONFIG_PM_SLEEP
 	[1] = {
 		.enter			= tegra114_idle_power_down,
-		.enter_freeze		= tegra114_idle_enter_freeze,
+		.enter_s2idle		= tegra114_idle_enter_s2idle,
 		.exit_latency		= 500,
 		.target_residency	= 1000,
 		.flags			= CPUIDLE_FLAG_TIMER_STOP,
+16 -7
drivers/acpi/processor_idle.c
···
 #define _COMPONENT              ACPI_PROCESSOR_COMPONENT
 ACPI_MODULE_NAME("processor_idle");
 
+#define ACPI_IDLE_STATE_START	(IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX) ? 1 : 0)
+
 static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER;
 module_param(max_cstate, uint, 0000);
 static unsigned int nocst __read_mostly;
···
 
 	if (cx->type != ACPI_STATE_C1) {
 		if (acpi_idle_fallback_to_c1(pr) && num_online_cpus() > 1) {
-			index = CPUIDLE_DRIVER_STATE_START;
+			index = ACPI_IDLE_STATE_START;
 			cx = per_cpu(acpi_cstate[index], dev->cpu);
 		} else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) {
 			if (cx->bm_sts_skip || !acpi_idle_bm_check()) {
···
 	return index;
 }
 
-static void acpi_idle_enter_freeze(struct cpuidle_device *dev,
+static void acpi_idle_enter_s2idle(struct cpuidle_device *dev,
 				   struct cpuidle_driver *drv, int index)
 {
 	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
···
 static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr,
 					   struct cpuidle_device *dev)
 {
-	int i, count = CPUIDLE_DRIVER_STATE_START;
+	int i, count = ACPI_IDLE_STATE_START;
 	struct acpi_processor_cx *cx;
 
 	if (max_cstate == 0)
···
 
 static int acpi_processor_setup_cstates(struct acpi_processor *pr)
 {
-	int i, count = CPUIDLE_DRIVER_STATE_START;
+	int i, count;
 	struct acpi_processor_cx *cx;
 	struct cpuidle_state *state;
 	struct cpuidle_driver *drv = &acpi_idle_driver;
 
 	if (max_cstate == 0)
 		max_cstate = 1;
+
+	if (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX)) {
+		cpuidle_poll_state_init(drv);
+		count = 1;
+	} else {
+		count = 0;
+	}
 
 	for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) {
 		cx = &pr->power.states[i];
···
 			drv->safe_state_index = count;
 		}
 		/*
-		 * Halt-induced C1 is not good for ->enter_freeze, because it
+		 * Halt-induced C1 is not good for ->enter_s2idle, because it
 		 * re-enables interrupts on exit.  Moreover, C1 is generally not
 		 * particularly interesting from the suspend-to-idle angle, so
 		 * avoid C1 and the situations in which we may need to fall back
 		 * to it altogether.
 		 */
 		if (cx->type != ACPI_STATE_C1 && !acpi_idle_fallback_to_c1(pr))
-			state->enter_freeze = acpi_idle_enter_freeze;
+			state->enter_s2idle = acpi_idle_enter_s2idle;
 
 		count++;
 		if (count == CPUIDLE_STATE_MAX)
···
 		return -EINVAL;
 
 	drv->safe_state_index = -1;
-	for (i = CPUIDLE_DRIVER_STATE_START; i < CPUIDLE_STATE_MAX; i++) {
+	for (i = ACPI_IDLE_STATE_START; i < CPUIDLE_STATE_MAX; i++) {
 		drv->states[i].name[0] = '\0';
 		drv->states[i].desc[0] = '\0';
 	}
+188 -14
drivers/acpi/sleep.c
···
 
 #define ACPI_LPS0_DSM_UUID	"c4eb40a0-6cd2-11e2-bcfd-0800200c9a66"
 
+#define ACPI_LPS0_GET_DEVICE_CONSTRAINTS	1
 #define ACPI_LPS0_SCREEN_OFF	3
 #define ACPI_LPS0_SCREEN_ON	4
 #define ACPI_LPS0_ENTRY		5
···
 static acpi_handle lps0_device_handle;
 static guid_t lps0_dsm_guid;
 static char lps0_dsm_func_mask;
+
+/* Device constraint entry structure */
+struct lpi_device_info {
+	char *name;
+	int enabled;
+	union acpi_object *package;
+};
+
+/* Constraint package structure */
+struct lpi_device_constraint {
+	int uid;
+	int min_dstate;
+	int function_states;
+};
+
+struct lpi_constraints {
+	acpi_handle handle;
+	int min_dstate;
+};
+
+static struct lpi_constraints *lpi_constraints_table;
+static int lpi_constraints_table_size;
+
+static void lpi_device_get_constraints(void)
+{
+	union acpi_object *out_obj;
+	int i;
+
+	out_obj = acpi_evaluate_dsm_typed(lps0_device_handle, &lps0_dsm_guid,
+					  1, ACPI_LPS0_GET_DEVICE_CONSTRAINTS,
+					  NULL, ACPI_TYPE_PACKAGE);
+
+	acpi_handle_debug(lps0_device_handle, "_DSM function 1 eval %s\n",
+			  out_obj ? "successful" : "failed");
+
+	if (!out_obj)
+		return;
+
+	lpi_constraints_table = kcalloc(out_obj->package.count,
+					sizeof(*lpi_constraints_table),
+					GFP_KERNEL);
+	if (!lpi_constraints_table)
+		goto free_acpi_buffer;
+
+	acpi_handle_debug(lps0_device_handle, "LPI: constraints list begin:\n");
+
+	for (i = 0; i < out_obj->package.count; i++) {
+		struct lpi_constraints *constraint;
+		acpi_status status;
+		union acpi_object *package = &out_obj->package.elements[i];
+		struct lpi_device_info info = { };
+		int package_count = 0, j;
+
+		if (!package)
+			continue;
+
+		for (j = 0; j < package->package.count; ++j) {
+			union acpi_object *element =
+					&(package->package.elements[j]);
+
+			switch (element->type) {
+			case ACPI_TYPE_INTEGER:
+				info.enabled = element->integer.value;
+				break;
+			case ACPI_TYPE_STRING:
+				info.name = element->string.pointer;
+				break;
+			case ACPI_TYPE_PACKAGE:
+				package_count = element->package.count;
+				info.package = element->package.elements;
+				break;
+			}
+		}
+
+		if (!info.enabled || !info.package || !info.name)
+			continue;
+
+		constraint = &lpi_constraints_table[lpi_constraints_table_size];
+
+		status = acpi_get_handle(NULL, info.name, &constraint->handle);
+		if (ACPI_FAILURE(status))
+			continue;
+
+		acpi_handle_debug(lps0_device_handle,
+				  "index:%d Name:%s\n", i, info.name);
+
+		constraint->min_dstate = -1;
+
+		for (j = 0; j < package_count; ++j) {
+			union acpi_object *info_obj = &info.package[j];
+			union acpi_object *cnstr_pkg;
+			union acpi_object *obj;
+			struct lpi_device_constraint dev_info;
+
+			switch (info_obj->type) {
+			case ACPI_TYPE_INTEGER:
+				/* version */
+				break;
+			case ACPI_TYPE_PACKAGE:
+				if (info_obj->package.count < 2)
+					break;
+
+				cnstr_pkg = info_obj->package.elements;
+				obj = &cnstr_pkg[0];
+				dev_info.uid = obj->integer.value;
+				obj = &cnstr_pkg[1];
+				dev_info.min_dstate = obj->integer.value;
+
+				acpi_handle_debug(lps0_device_handle,
+					"uid:%d min_dstate:%s\n",
+					dev_info.uid,
+					acpi_power_state_string(dev_info.min_dstate));
+
+				constraint->min_dstate = dev_info.min_dstate;
+				break;
+			}
+		}
+
+		if (constraint->min_dstate < 0) {
+			acpi_handle_debug(lps0_device_handle,
+					  "Incomplete constraint defined\n");
+			continue;
+		}
+
+		lpi_constraints_table_size++;
+	}
+
+	acpi_handle_debug(lps0_device_handle, "LPI: constraints list end\n");
+
+free_acpi_buffer:
+	ACPI_FREE(out_obj);
+}
+
+static void lpi_check_constraints(void)
+{
+	int i;
+
+	for (i = 0; i < lpi_constraints_table_size; ++i) {
+		struct acpi_device *adev;
+
+		if (acpi_bus_get_device(lpi_constraints_table[i].handle, &adev))
+			continue;
+
+		acpi_handle_debug(adev->handle,
+			"LPI: required min power state:%s current power state:%s\n",
+			acpi_power_state_string(lpi_constraints_table[i].min_dstate),
+			acpi_power_state_string(adev->power.state));
+
+		if (!adev->flags.power_manageable) {
+			acpi_handle_info(adev->handle, "LPI: Device not power manageable\n");
+			continue;
+		}
+
+		if (adev->power.state < lpi_constraints_table[i].min_dstate)
+			acpi_handle_info(adev->handle,
+				"LPI: Constraint not met; min power state:%s current power state:%s\n",
+				acpi_power_state_string(lpi_constraints_table[i].min_dstate),
+				acpi_power_state_string(adev->power.state));
+	}
+}
 
 static void acpi_sleep_run_lps0_dsm(unsigned int func)
 {
···
 	if ((bitmask & ACPI_S2IDLE_FUNC_MASK) == ACPI_S2IDLE_FUNC_MASK) {
 		lps0_dsm_func_mask = bitmask;
 		lps0_device_handle = adev->handle;
+		/*
+		 * Use suspend-to-idle by default if the default
+		 * suspend mode was not set from the command line.
+		 */
+		if (mem_sleep_default > PM_SUSPEND_MEM)
+			mem_sleep_current = PM_SUSPEND_TO_IDLE;
 	}
 
 	acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
···
 			  "_DSM function 0 evaluation failed\n");
 	}
 	ACPI_FREE(out_obj);
+
+	lpi_device_get_constraints();
+
 	return 0;
 }
···
 	.attach = lps0_device_attach,
 };
 
-static int acpi_freeze_begin(void)
+static int acpi_s2idle_begin(void)
 {
 	acpi_scan_lock_acquire();
 	s2idle_in_progress = true;
 	return 0;
 }
 
-static int acpi_freeze_prepare(void)
+static int acpi_s2idle_prepare(void)
 {
 	if (lps0_device_handle) {
 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
···
 	return 0;
 }
 
-static void acpi_freeze_wake(void)
+static void acpi_s2idle_wake(void)
 {
+
+	if (pm_debug_messages_on)
+		lpi_check_constraints();
+
 	/*
 	 * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means
 	 * that the SCI has triggered while suspended, so cancel the wakeup in
···
 	}
 }
 
-static void acpi_freeze_sync(void)
+static void acpi_s2idle_sync(void)
 {
 	/*
 	 * Process all pending events in case there are any wakeup ones.
···
 	s2idle_wakeup = false;
 }
 
-static void acpi_freeze_restore(void)
+static void acpi_s2idle_restore(void)
 {
 	if (acpi_sci_irq_valid())
 		disable_irq_wake(acpi_sci_irq);
···
 	}
 }
 
-static void acpi_freeze_end(void)
+static void acpi_s2idle_end(void)
 {
 	s2idle_in_progress = false;
 	acpi_scan_lock_release();
 }
 
-static const struct platform_freeze_ops acpi_freeze_ops = {
-	.begin = acpi_freeze_begin,
-	.prepare = acpi_freeze_prepare,
-	.wake = acpi_freeze_wake,
-	.sync = acpi_freeze_sync,
-	.restore = acpi_freeze_restore,
-	.end = acpi_freeze_end,
+static const struct platform_s2idle_ops acpi_s2idle_ops = {
+	.begin = acpi_s2idle_begin,
+	.prepare = acpi_s2idle_prepare,
+	.wake = acpi_s2idle_wake,
+	.sync = acpi_s2idle_sync,
+	.restore = acpi_s2idle_restore,
+	.end = acpi_s2idle_end,
 };
 
 static void acpi_sleep_suspend_setup(void)
···
 		       &acpi_suspend_ops_old : &acpi_suspend_ops);
 
 	acpi_scan_add_handler(&lps0_handler);
-	freeze_set_ops(&acpi_freeze_ops);
+	s2idle_set_ops(&acpi_s2idle_ops);
 }
 
 #else /* !CONFIG_SUSPEND */
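The constraint check added to the s2idle wake path boils down to comparing each device's current power state against the minimum D-state the LPS0 _DSM constraints require. A rough Python model of that comparison (the function name, dict shapes, and string D-state labels are illustrative, not the kernel's types):

```python
# D-states ordered shallow -> deep: D0 (fully on) ... D3cold (off).
D_STATES = ["D0", "D1", "D2", "D3hot", "D3cold"]

def unmet_constraints(constraints, device_states):
    """Return devices whose current D-state is shallower than required.

    constraints:   {device: minimum D-state required for Low Power S0}
    device_states: {device: current D-state}
    Illustrative model of the kernel's lpi_check_constraints() logic.
    """
    unmet = []
    for dev, min_dstate in constraints.items():
        cur = device_states.get(dev)
        if cur is None:
            continue  # device absent or not power-manageable: skip, as the kernel does
        if D_STATES.index(cur) < D_STATES.index(min_dstate):
            unmet.append(dev)  # device is in too shallow a state for LPS0
    return unmet
```

In the kernel this list is only reported (via debug/info messages when PM debug messages are enabled); it does not block the transition.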
+234 -17
drivers/base/power/domain.c
···
 	smp_mb__after_atomic();
 }
 
+#ifdef CONFIG_DEBUG_FS
+static void genpd_update_accounting(struct generic_pm_domain *genpd)
+{
+	ktime_t delta, now;
+
+	now = ktime_get();
+	delta = ktime_sub(now, genpd->accounting_time);
+
+	/*
+	 * If genpd->status is active, it means we are just
+	 * out of off and so update the idle time and vice
+	 * versa.
+	 */
+	if (genpd->status == GPD_STATE_ACTIVE) {
+		int state_idx = genpd->state_idx;
+
+		genpd->states[state_idx].idle_time =
+			ktime_add(genpd->states[state_idx].idle_time, delta);
+	} else {
+		genpd->on_time = ktime_add(genpd->on_time, delta);
+	}
+
+	genpd->accounting_time = now;
+}
+#else
+static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
+#endif
+
 static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	unsigned int state_idx = genpd->state_idx;
···
 	}
 
 	genpd->status = GPD_STATE_POWER_OFF;
+	genpd_update_accounting(genpd);
 
 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
 		genpd_sd_counter_dec(link->master);
···
 		goto err;
 
 	genpd->status = GPD_STATE_ACTIVE;
+	genpd_update_accounting(genpd);
+
 	return 0;
 
 err:
···
 	genpd->max_off_time_changed = true;
 	genpd->provider = NULL;
 	genpd->has_provider = false;
+	genpd->accounting_time = ktime_get();
 	genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
 	genpd->domain.ops.runtime_resume = genpd_runtime_resume;
 	genpd->domain.ops.prepare = pm_genpd_prepare;
···
 	mutex_lock(&of_genpd_mutex);
 	list_add(&cp->link, &of_genpd_providers);
 	mutex_unlock(&of_genpd_mutex);
-	pr_debug("Added domain provider from %s\n", np->full_name);
+	pr_debug("Added domain provider from %pOF\n", np);
 
 	return 0;
 }
···
 	err = of_property_read_u32(state_node, "entry-latency-us",
 				   &entry_latency);
 	if (err) {
-		pr_debug(" * %s missing entry-latency-us property\n",
-			 state_node->full_name);
+		pr_debug(" * %pOF missing entry-latency-us property\n",
+			 state_node);
 		return -EINVAL;
 	}
 
 	err = of_property_read_u32(state_node, "exit-latency-us",
 				   &exit_latency);
 	if (err) {
-		pr_debug(" * %s missing exit-latency-us property\n",
-			 state_node->full_name);
+		pr_debug(" * %pOF missing exit-latency-us property\n",
+			 state_node);
 		return -EINVAL;
 	}
 
···
 		ret = genpd_parse_state(&st[i++], np);
 		if (ret) {
 			pr_err
-			("Parsing idle state node %s failed with err %d\n",
-							np->full_name, ret);
+			("Parsing idle state node %pOF failed with err %d\n",
+							np, ret);
 			of_node_put(np);
 			kfree(st);
 			return ret;
···
 	return 0;
 }
 
-static int pm_genpd_summary_show(struct seq_file *s, void *data)
+static int genpd_summary_show(struct seq_file *s, void *data)
 {
 	struct generic_pm_domain *genpd;
 	int ret = 0;
···
 	return ret;
 }
 
-static int pm_genpd_summary_open(struct inode *inode, struct file *file)
+static int genpd_status_show(struct seq_file *s, void *data)
 {
-	return single_open(file, pm_genpd_summary_show, NULL);
+	static const char * const status_lookup[] = {
+		[GPD_STATE_ACTIVE] = "on",
+		[GPD_STATE_POWER_OFF] = "off"
+	};
+
+	struct generic_pm_domain *genpd = s->private;
+	int ret = 0;
+
+	ret = genpd_lock_interruptible(genpd);
+	if (ret)
+		return -ERESTARTSYS;
+
+	if (WARN_ON_ONCE(genpd->status >= ARRAY_SIZE(status_lookup)))
+		goto exit;
+
+	if (genpd->status == GPD_STATE_POWER_OFF)
+		seq_printf(s, "%s-%u\n", status_lookup[genpd->status],
+			genpd->state_idx);
+	else
+		seq_printf(s, "%s\n", status_lookup[genpd->status]);
+exit:
+	genpd_unlock(genpd);
+	return ret;
 }
 
-static const struct file_operations pm_genpd_summary_fops = {
-	.open = pm_genpd_summary_open,
-	.read = seq_read,
-	.llseek = seq_lseek,
-	.release = single_release,
-};
+static int genpd_sub_domains_show(struct seq_file *s, void *data)
+{
+	struct generic_pm_domain *genpd = s->private;
+	struct gpd_link *link;
+	int ret = 0;
+
+	ret = genpd_lock_interruptible(genpd);
+	if (ret)
+		return -ERESTARTSYS;
+
+	list_for_each_entry(link, &genpd->master_links, master_node)
+		seq_printf(s, "%s\n", link->slave->name);
+
+	genpd_unlock(genpd);
+	return ret;
+}
+
+static int genpd_idle_states_show(struct seq_file *s, void *data)
+{
+	struct generic_pm_domain *genpd = s->private;
+	unsigned int i;
+	int ret = 0;
+
+	ret = genpd_lock_interruptible(genpd);
+	if (ret)
+		return -ERESTARTSYS;
+
+	seq_puts(s, "State          Time Spent(ms)\n");
+
+	for (i = 0; i < genpd->state_count; i++) {
+		ktime_t delta = 0;
+		s64 msecs;
+
+		if ((genpd->status == GPD_STATE_POWER_OFF) &&
+				(genpd->state_idx == i))
+			delta = ktime_sub(ktime_get(), genpd->accounting_time);
+
+		msecs = ktime_to_ms(
+			ktime_add(genpd->states[i].idle_time, delta));
+		seq_printf(s, "S%-13i %lld\n", i, msecs);
+	}
+
+	genpd_unlock(genpd);
+	return ret;
+}
+
+static int genpd_active_time_show(struct seq_file *s, void *data)
+{
+	struct generic_pm_domain *genpd = s->private;
+	ktime_t delta = 0;
+	int ret = 0;
+
+	ret = genpd_lock_interruptible(genpd);
+	if (ret)
+		return -ERESTARTSYS;
+
+	if (genpd->status == GPD_STATE_ACTIVE)
+		delta = ktime_sub(ktime_get(), genpd->accounting_time);
+
+	seq_printf(s, "%lld ms\n", ktime_to_ms(
+				ktime_add(genpd->on_time, delta)));
+
+	genpd_unlock(genpd);
+	return ret;
+}
+
+static int genpd_total_idle_time_show(struct seq_file *s, void *data)
+{
+	struct generic_pm_domain *genpd = s->private;
+	ktime_t delta = 0, total = 0;
+	unsigned int i;
+	int ret = 0;
+
+	ret = genpd_lock_interruptible(genpd);
+	if (ret)
+		return -ERESTARTSYS;
+
+	for (i = 0; i < genpd->state_count; i++) {
+
+		if ((genpd->status == GPD_STATE_POWER_OFF) &&
+				(genpd->state_idx == i))
+			delta = ktime_sub(ktime_get(), genpd->accounting_time);
+
+		total = ktime_add(total, genpd->states[i].idle_time);
+	}
+	total = ktime_add(total, delta);
+
+	seq_printf(s, "%lld ms\n", ktime_to_ms(total));
+
+	genpd_unlock(genpd);
+	return ret;
+}
+
+
+static int genpd_devices_show(struct seq_file *s, void *data)
+{
+	struct generic_pm_domain *genpd = s->private;
+	struct pm_domain_data *pm_data;
+	const char *kobj_path;
+	int ret = 0;
+
+	ret = genpd_lock_interruptible(genpd);
+	if (ret)
+		return -ERESTARTSYS;
+
+	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd_is_irq_safe(genpd) ?
+				GFP_ATOMIC : GFP_KERNEL);
+		if (kobj_path == NULL)
+			continue;
+
+		seq_printf(s, "%s\n", kobj_path);
+		kfree(kobj_path);
+	}
+
+	genpd_unlock(genpd);
+	return ret;
+}
+
+#define define_genpd_open_function(name) \
+static int genpd_##name##_open(struct inode *inode, struct file *file) \
+{ \
+	return single_open(file, genpd_##name##_show, inode->i_private); \
+}
+
+define_genpd_open_function(summary);
+define_genpd_open_function(status);
+define_genpd_open_function(sub_domains);
+define_genpd_open_function(idle_states);
+define_genpd_open_function(active_time);
+define_genpd_open_function(total_idle_time);
+define_genpd_open_function(devices);
+
+#define define_genpd_debugfs_fops(name) \
+static const struct file_operations genpd_##name##_fops = { \
+	.open = genpd_##name##_open, \
+	.read = seq_read, \
+	.llseek = seq_lseek, \
+	.release = single_release, \
+}
+
+define_genpd_debugfs_fops(summary);
+define_genpd_debugfs_fops(status);
+define_genpd_debugfs_fops(sub_domains);
+define_genpd_debugfs_fops(idle_states);
+define_genpd_debugfs_fops(active_time);
+define_genpd_debugfs_fops(total_idle_time);
+define_genpd_debugfs_fops(devices);
 
 static int __init pm_genpd_debug_init(void)
 {
 	struct dentry *d;
+	struct generic_pm_domain *genpd;
 
 	pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
 
···
 		return -ENOMEM;
 
 	d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
-			pm_genpd_debugfs_dir, NULL, &pm_genpd_summary_fops);
+			pm_genpd_debugfs_dir, NULL, &genpd_summary_fops);
 	if (!d)
 		return -ENOMEM;
+
+	list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
+		d = debugfs_create_dir(genpd->name, pm_genpd_debugfs_dir);
+		if (!d)
+			return -ENOMEM;
+
+		debugfs_create_file("current_state", 0444,
+				d, genpd, &genpd_status_fops);
+		debugfs_create_file("sub_domains", 0444,
+				d, genpd, &genpd_sub_domains_fops);
+		debugfs_create_file("idle_states", 0444,
+				d, genpd, &genpd_idle_states_fops);
+		debugfs_create_file("active_time", 0444,
+				d, genpd, &genpd_active_time_fops);
+		debugfs_create_file("total_idle_time", 0444,
+				d, genpd, &genpd_total_idle_time_fops);
+		debugfs_create_file("devices", 0444,
+				d, genpd, &genpd_devices_fops);
+	}
 
 	return 0;
 }
+64 -39
drivers/base/power/main.c
··· 418 418 dev_name(dev), pm_verb(state.event), info, error); 419 419 } 420 420 421 - #ifdef CONFIG_PM_DEBUG 422 - static void dpm_show_time(ktime_t starttime, pm_message_t state, 421 + static void dpm_show_time(ktime_t starttime, pm_message_t state, int error, 423 422 const char *info) 424 423 { 425 424 ktime_t calltime; ··· 431 432 usecs = usecs64; 432 433 if (usecs == 0) 433 434 usecs = 1; 434 - pr_info("PM: %s%s%s of devices complete after %ld.%03ld msecs\n", 435 - info ?: "", info ? " " : "", pm_verb(state.event), 436 - usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC); 435 + 436 + pm_pr_dbg("%s%s%s of devices %s after %ld.%03ld msecs\n", 437 + info ?: "", info ? " " : "", pm_verb(state.event), 438 + error ? "aborted" : "complete", 439 + usecs / USEC_PER_MSEC, usecs % USEC_PER_MSEC); 437 440 } 438 - #else 439 - static inline void dpm_show_time(ktime_t starttime, pm_message_t state, 440 - const char *info) {} 441 - #endif /* CONFIG_PM_DEBUG */ 442 441 443 442 static int dpm_run_callback(pm_callback_t cb, struct device *dev, 444 443 pm_message_t state, const char *info) ··· 599 602 put_device(dev); 600 603 } 601 604 602 - /** 603 - * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices. 604 - * @state: PM transition of the system being carried out. 605 - * 606 - * Call the "noirq" resume handlers for all devices in dpm_noirq_list and 607 - * enable device drivers to receive interrupts. 
608 - */ 609 - void dpm_resume_noirq(pm_message_t state) 605 + void dpm_noirq_resume_devices(pm_message_t state) 610 606 { 611 607 struct device *dev; 612 608 ktime_t starttime = ktime_get(); ··· 644 654 } 645 655 mutex_unlock(&dpm_list_mtx); 646 656 async_synchronize_full(); 647 - dpm_show_time(starttime, state, "noirq"); 657 + dpm_show_time(starttime, state, 0, "noirq"); 658 + trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false); 659 + } 660 + 661 + void dpm_noirq_end(void) 662 + { 648 663 resume_device_irqs(); 649 664 device_wakeup_disarm_wake_irqs(); 650 665 cpuidle_resume(); 651 - trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false); 666 + } 667 + 668 + /** 669 + * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices. 670 + * @state: PM transition of the system being carried out. 671 + * 672 + * Invoke the "noirq" resume callbacks for all devices in dpm_noirq_list and 673 + * allow device drivers' interrupt handlers to be called. 674 + */ 675 + void dpm_resume_noirq(pm_message_t state) 676 + { 677 + dpm_noirq_resume_devices(state); 678 + dpm_noirq_end(); 652 679 } 653 680 654 681 /** ··· 783 776 } 784 777 mutex_unlock(&dpm_list_mtx); 785 778 async_synchronize_full(); 786 - dpm_show_time(starttime, state, "early"); 779 + dpm_show_time(starttime, state, 0, "early"); 787 780 trace_suspend_resume(TPS("dpm_resume_early"), state.event, false); 788 781 } 789 782 ··· 955 948 } 956 949 mutex_unlock(&dpm_list_mtx); 957 950 async_synchronize_full(); 958 - dpm_show_time(starttime, state, NULL); 951 + dpm_show_time(starttime, state, 0, NULL); 959 952 960 953 cpufreq_resume(); 961 954 trace_suspend_resume(TPS("dpm_resume"), state.event, false); ··· 1105 1098 if (async_error) 1106 1099 goto Complete; 1107 1100 1101 + if (pm_wakeup_pending()) { 1102 + async_error = -EBUSY; 1103 + goto Complete; 1104 + } 1105 + 1108 1106 if (dev->power.syscore || dev->power.direct_complete) 1109 1107 goto Complete; 1110 1108 ··· 1170 1158 return 
__device_suspend_noirq(dev, pm_transition, false); 1171 1159 } 1172 1160 1173 - /** 1174 - * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices. 1175 - * @state: PM transition of the system being carried out. 1176 - * 1177 - * Prevent device drivers from receiving interrupts and call the "noirq" suspend 1178 - * handlers for all non-sysdev devices. 1179 - */ 1180 - int dpm_suspend_noirq(pm_message_t state) 1161 + void dpm_noirq_begin(void) 1162 + { 1163 + cpuidle_pause(); 1164 + device_wakeup_arm_wake_irqs(); 1165 + suspend_device_irqs(); 1166 + } 1167 + 1168 + int dpm_noirq_suspend_devices(pm_message_t state) 1181 1169 { 1182 1170 ktime_t starttime = ktime_get(); 1183 1171 int error = 0; 1184 1172 1185 1173 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true); 1186 - cpuidle_pause(); 1187 - device_wakeup_arm_wake_irqs(); 1188 - suspend_device_irqs(); 1189 1174 mutex_lock(&dpm_list_mtx); 1190 1175 pm_transition = state; 1191 1176 async_error = 0; ··· 1217 1208 if (error) { 1218 1209 suspend_stats.failed_suspend_noirq++; 1219 1210 dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ); 1220 - dpm_resume_noirq(resume_event(state)); 1221 - } else { 1222 - dpm_show_time(starttime, state, "noirq"); 1223 1211 } 1212 + dpm_show_time(starttime, state, error, "noirq"); 1224 1213 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, false); 1225 1214 return error; 1215 + } 1216 + 1217 + /** 1218 + * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices. 1219 + * @state: PM transition of the system being carried out. 1220 + * 1221 + * Prevent device drivers' interrupt handlers from being called and invoke 1222 + * "noirq" suspend callbacks for all non-sysdev devices. 
1223 + */ 1224 + int dpm_suspend_noirq(pm_message_t state) 1225 + { 1226 + int ret; 1227 + 1228 + dpm_noirq_begin(); 1229 + ret = dpm_noirq_suspend_devices(state); 1230 + if (ret) 1231 + dpm_resume_noirq(resume_event(state)); 1232 + 1233 + return ret; 1226 1234 } 1227 1235 1228 1236 /** ··· 1376 1350 suspend_stats.failed_suspend_late++; 1377 1351 dpm_save_failed_step(SUSPEND_SUSPEND_LATE); 1378 1352 dpm_resume_early(resume_event(state)); 1379 - } else { 1380 - dpm_show_time(starttime, state, "late"); 1381 1353 } 1354 + dpm_show_time(starttime, state, error, "late"); 1382 1355 trace_suspend_resume(TPS("dpm_suspend_late"), state.event, false); 1383 1356 return error; 1384 1357 } ··· 1643 1618 if (error) { 1644 1619 suspend_stats.failed_suspend++; 1645 1620 dpm_save_failed_step(SUSPEND_SUSPEND); 1646 - } else 1647 - dpm_show_time(starttime, state, NULL); 1621 + } 1622 + dpm_show_time(starttime, state, error, NULL); 1648 1623 trace_suspend_resume(TPS("dpm_suspend"), state.event, false); 1649 1624 return error; 1650 1625 }
+23 -14
drivers/base/power/opp/of.c
··· 248 248 } 249 249 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table); 250 250 251 - /* Returns opp descriptor node for a device, caller must do of_node_put() */ 252 - struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev) 251 + /* Returns opp descriptor node for a device node, caller must 252 + * do of_node_put() */ 253 + static struct device_node *_opp_of_get_opp_desc_node(struct device_node *np) 253 254 { 254 255 /* 255 256 * There should be only ONE phandle present in "operating-points-v2" 256 257 * property. 257 258 */ 258 259 259 - return of_parse_phandle(dev->of_node, "operating-points-v2", 0); 260 + return of_parse_phandle(np, "operating-points-v2", 0); 261 + } 262 + 263 + /* Returns opp descriptor node for a device, caller must do of_node_put() */ 264 + struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev) 265 + { 266 + return _opp_of_get_opp_desc_node(dev->of_node); 260 267 } 261 268 EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node); 262 269 ··· 546 539 547 540 ret = dev_pm_opp_of_add_table(cpu_dev); 548 541 if (ret) { 549 - pr_err("%s: couldn't find opp table for cpu:%d, %d\n", 550 - __func__, cpu, ret); 542 + /* 543 + * OPP may get registered dynamically, don't print error 544 + * message here. 
545 + */ 546 + pr_debug("%s: couldn't find opp table for cpu:%d, %d\n", 547 + __func__, cpu, ret); 551 548 552 549 /* Free all other OPPs */ 553 550 dev_pm_opp_of_cpumask_remove_table(cpumask); ··· 583 572 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, 584 573 struct cpumask *cpumask) 585 574 { 586 - struct device_node *np, *tmp_np; 587 - struct device *tcpu_dev; 575 + struct device_node *np, *tmp_np, *cpu_np; 588 576 int cpu, ret = 0; 589 577 590 578 /* Get OPP descriptor node */ ··· 603 593 if (cpu == cpu_dev->id) 604 594 continue; 605 595 606 - tcpu_dev = get_cpu_device(cpu); 607 - if (!tcpu_dev) { 608 - dev_err(cpu_dev, "%s: failed to get cpu%d device\n", 596 + cpu_np = of_get_cpu_node(cpu, NULL); 597 + if (!cpu_np) { 598 + dev_err(cpu_dev, "%s: failed to get cpu%d node\n", 609 599 __func__, cpu); 610 - ret = -ENODEV; 600 + ret = -ENOENT; 611 601 goto put_cpu_node; 612 602 } 613 603 614 604 /* Get OPP descriptor node */ 615 - tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev); 605 + tmp_np = _opp_of_get_opp_desc_node(cpu_np); 616 606 if (!tmp_np) { 617 - dev_err(tcpu_dev, "%s: Couldn't find opp node.\n", 618 - __func__); 607 + pr_err("%pOF: Couldn't find opp node\n", cpu_np); 619 608 ret = -ENOENT; 620 609 goto put_cpu_node; 621 610 }
+6 -4
drivers/base/power/wakeup.c
··· 412 412 if (!!dev->power.can_wakeup == !!capable) 413 413 return; 414 414 415 + dev->power.can_wakeup = capable; 415 416 if (device_is_registered(dev) && !list_empty(&dev->power.entry)) { 416 417 if (capable) { 417 - if (wakeup_sysfs_add(dev)) 418 - return; 418 + int ret = wakeup_sysfs_add(dev); 419 + 420 + if (ret) 421 + dev_info(dev, "Wakeup sysfs attributes not added\n"); 419 422 } else { 420 423 wakeup_sysfs_remove(dev); 421 424 } 422 425 } 423 - dev->power.can_wakeup = capable; 424 426 } 425 427 EXPORT_SYMBOL_GPL(device_set_wakeup_capable); 426 428 ··· 865 863 void pm_system_wakeup(void) 866 864 { 867 865 atomic_inc(&pm_abort_suspend); 868 - freeze_wake(); 866 + s2idle_wake(); 869 867 } 870 868 EXPORT_SYMBOL_GPL(pm_system_wakeup); 871 869
+8 -13
drivers/cpufreq/Kconfig.arm
··· 71 71 72 72 If in doubt, say N. 73 73 74 - config ARM_DB8500_CPUFREQ 75 - tristate "ST-Ericsson DB8500 cpufreq" if COMPILE_TEST && !ARCH_U8500 76 - default ARCH_U8500 77 - depends on HAS_IOMEM 78 - depends on !CPU_THERMAL || THERMAL 79 - help 80 - This adds the CPUFreq driver for ST-Ericsson Ux500 (DB8500) SoC 81 - series. 82 - 83 74 config ARM_IMX6Q_CPUFREQ 84 75 tristate "Freescale i.MX6 cpufreq support" 85 76 depends on ARCH_MXC ··· 87 96 This adds the CPUFreq driver for Marvell Kirkwood 88 97 SoCs. 89 98 90 - config ARM_MT8173_CPUFREQ 91 - tristate "Mediatek MT8173 CPUFreq support" 99 + config ARM_MEDIATEK_CPUFREQ 100 + tristate "CPU Frequency scaling support for MediaTek SoCs" 92 101 depends on ARCH_MEDIATEK && REGULATOR 93 - depends on ARM64 || (ARM_CPU_TOPOLOGY && COMPILE_TEST) 94 102 depends on !CPU_THERMAL || THERMAL 95 103 select PM_OPP 96 104 help 97 - This adds the CPUFreq driver support for Mediatek MT8173 SoC. 105 + This adds the CPUFreq driver support for MediaTek SoCs. 98 106 99 107 config ARM_OMAP2PLUS_CPUFREQ 100 108 bool "TI OMAP2+" ··· 231 241 we will fall-back so safe-values contained in Device Tree. Enable 232 242 this config option if you wish to add CPUFreq support for STi based 233 243 SoCs. 244 + 245 + config ARM_TANGO_CPUFREQ 246 + bool 247 + depends on CPUFREQ_DT && ARCH_TANGO 248 + default y 234 249 235 250 config ARM_TEGRA20_CPUFREQ 236 251 bool "Tegra20 CPUFreq support"
+2 -2
drivers/cpufreq/Makefile
··· 53 53 54 54 obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o 55 55 obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 56 - obj-$(CONFIG_ARM_DB8500_CPUFREQ) += dbx500-cpufreq.o 57 56 obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o 58 57 obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o 59 58 obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o 60 59 obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o 61 - obj-$(CONFIG_ARM_MT8173_CPUFREQ) += mt8173-cpufreq.o 60 + obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ) += mediatek-cpufreq.o 62 61 obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o 63 62 obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o 64 63 obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o ··· 74 75 obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o 75 76 obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o 76 77 obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o 78 + obj-$(CONFIG_ARM_TANGO_CPUFREQ) += tango-cpufreq.o 77 79 obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o 78 80 obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o 79 81 obj-$(CONFIG_ARM_TEGRA186_CPUFREQ) += tegra186-cpufreq.o
+4 -6
drivers/cpufreq/arm_big_little.c
··· 483 483 return ret; 484 484 } 485 485 486 - if (arm_bL_ops->get_transition_latency) 487 - policy->cpuinfo.transition_latency = 488 - arm_bL_ops->get_transition_latency(cpu_dev); 489 - else 490 - policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 486 + policy->cpuinfo.transition_latency = 487 + arm_bL_ops->get_transition_latency(cpu_dev); 491 488 492 489 if (is_bL_switching_enabled()) 493 490 per_cpu(cpu_last_req_freq, policy->cpu) = clk_get_cpu_rate(policy->cpu); ··· 619 622 return -EBUSY; 620 623 } 621 624 622 - if (!ops || !strlen(ops->name) || !ops->init_opp_table) { 625 + if (!ops || !strlen(ops->name) || !ops->init_opp_table || 626 + !ops->get_transition_latency) { 623 627 pr_err("%s: Invalid arm_bL_ops, exiting\n", __func__); 624 628 return -ENODEV; 625 629 }
-1
drivers/cpufreq/cppc_cpufreq.c
··· 172 172 return -EFAULT; 173 173 } 174 174 175 - cpumask_set_cpu(policy->cpu, policy->cpus); 176 175 cpu->cur_policy = policy; 177 176 178 177 /* Set policy->cur to max now. The governors will adjust later. */
+50 -19
drivers/cpufreq/cpufreq-dt-platdev.c
··· 9 9 10 10 #include <linux/err.h> 11 11 #include <linux/of.h> 12 + #include <linux/of_device.h> 12 13 #include <linux/platform_device.h> 13 14 14 15 #include "cpufreq-dt.h" 15 16 16 - static const struct of_device_id machines[] __initconst = { 17 + /* 18 + * Machines for which the cpufreq device is *always* created, mostly used for 19 + * platforms using "operating-points" (V1) property. 20 + */ 21 + static const struct of_device_id whitelist[] __initconst = { 17 22 { .compatible = "allwinner,sun4i-a10", }, 18 23 { .compatible = "allwinner,sun5i-a10s", }, 19 24 { .compatible = "allwinner,sun5i-a13", }, ··· 27 22 { .compatible = "allwinner,sun6i-a31s", }, 28 23 { .compatible = "allwinner,sun7i-a20", }, 29 24 { .compatible = "allwinner,sun8i-a23", }, 30 - { .compatible = "allwinner,sun8i-a33", }, 31 25 { .compatible = "allwinner,sun8i-a83t", }, 32 26 { .compatible = "allwinner,sun8i-h3", }, 33 27 ··· 36 32 { .compatible = "arm,integrator-cp", }, 37 33 38 34 { .compatible = "hisilicon,hi3660", }, 39 - { .compatible = "hisilicon,hi6220", }, 40 35 41 36 { .compatible = "fsl,imx27", }, 42 37 { .compatible = "fsl,imx51", }, ··· 49 46 { .compatible = "samsung,exynos3250", }, 50 47 { .compatible = "samsung,exynos4210", }, 51 48 { .compatible = "samsung,exynos4212", }, 52 - { .compatible = "samsung,exynos4412", }, 53 49 { .compatible = "samsung,exynos5250", }, 54 50 #ifndef CONFIG_BL_SWITCHER 55 - { .compatible = "samsung,exynos5420", }, 56 - { .compatible = "samsung,exynos5433", }, 57 51 { .compatible = "samsung,exynos5800", }, 58 52 #endif 59 53 ··· 67 67 { .compatible = "renesas,r8a7792", }, 68 68 { .compatible = "renesas,r8a7793", }, 69 69 { .compatible = "renesas,r8a7794", }, 70 + { .compatible = "renesas,r8a7795", }, 71 + { .compatible = "renesas,r8a7796", }, 70 72 { .compatible = "renesas,sh73a0", }, 71 73 72 74 { .compatible = "rockchip,rk2928", }, ··· 78 76 { .compatible = "rockchip,rk3188", }, 79 77 { .compatible = "rockchip,rk3228", }, 80 78 { .compatible = 
"rockchip,rk3288", }, 79 + { .compatible = "rockchip,rk3328", }, 81 80 { .compatible = "rockchip,rk3366", }, 82 81 { .compatible = "rockchip,rk3368", }, 83 82 { .compatible = "rockchip,rk3399", }, 84 83 85 - { .compatible = "sigma,tango4" }, 86 - 87 - { .compatible = "socionext,uniphier-pro5", }, 88 - { .compatible = "socionext,uniphier-pxs2", }, 89 84 { .compatible = "socionext,uniphier-ld6b", }, 90 - { .compatible = "socionext,uniphier-ld11", }, 91 - { .compatible = "socionext,uniphier-ld20", }, 85 + 86 + { .compatible = "st-ericsson,u8500", }, 87 + { .compatible = "st-ericsson,u8540", }, 88 + { .compatible = "st-ericsson,u9500", }, 89 + { .compatible = "st-ericsson,u9540", }, 92 90 93 91 { .compatible = "ti,omap2", }, 94 92 { .compatible = "ti,omap3", }, ··· 96 94 { .compatible = "ti,omap5", }, 97 95 98 96 { .compatible = "xlnx,zynq-7000", }, 99 - 100 - { .compatible = "zte,zx296718", }, 97 + { .compatible = "xlnx,zynqmp", }, 101 98 102 99 { } 103 100 }; 101 + 102 + /* 103 + * Machines for which the cpufreq device is *not* created, mostly used for 104 + * platforms using "operating-points-v2" property. 
105 + */ 106 + static const struct of_device_id blacklist[] __initconst = { 107 + { } 108 + }; 109 + 110 + static bool __init cpu0_node_has_opp_v2_prop(void) 111 + { 112 + struct device_node *np = of_cpu_device_node_get(0); 113 + bool ret = false; 114 + 115 + if (of_get_property(np, "operating-points-v2", NULL)) 116 + ret = true; 117 + 118 + of_node_put(np); 119 + return ret; 120 + } 104 121 105 122 static int __init cpufreq_dt_platdev_init(void) 106 123 { 107 124 struct device_node *np = of_find_node_by_path("/"); 108 125 const struct of_device_id *match; 126 + const void *data = NULL; 109 127 110 128 if (!np) 111 129 return -ENODEV; 112 130 113 - match = of_match_node(machines, np); 114 - of_node_put(np); 115 - if (!match) 116 - return -ENODEV; 131 + match = of_match_node(whitelist, np); 132 + if (match) { 133 + data = match->data; 134 + goto create_pdev; 135 + } 117 136 137 + if (cpu0_node_has_opp_v2_prop() && !of_match_node(blacklist, np)) 138 + goto create_pdev; 139 + 140 + of_node_put(np); 141 + return -ENODEV; 142 + 143 + create_pdev: 144 + of_node_put(np); 118 145 return PTR_ERR_OR_ZERO(platform_device_register_data(NULL, "cpufreq-dt", 119 - -1, match->data, 146 + -1, data, 120 147 sizeof(struct cpufreq_dt_platform_data))); 121 148 } 122 149 device_initcall(cpufreq_dt_platdev_init);
+1
drivers/cpufreq/cpufreq-dt.c
··· 274 274 transition_latency = CPUFREQ_ETERNAL; 275 275 276 276 policy->cpuinfo.transition_latency = transition_latency; 277 + policy->dvfs_possible_from_any_cpu = true; 277 278 278 279 return 0; 279 280
+1 -1
drivers/cpufreq/cpufreq-nforce2.c
··· 357 357 /* cpuinfo and default policy values */ 358 358 policy->min = policy->cpuinfo.min_freq = min_fsb * fid * 100; 359 359 policy->max = policy->cpuinfo.max_freq = max_fsb * fid * 100; 360 - policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 361 360 362 361 return 0; 363 362 } ··· 368 369 369 370 static struct cpufreq_driver nforce2_driver = { 370 371 .name = "nforce2", 372 + .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING, 371 373 .verify = nforce2_verify, 372 374 .target = nforce2_target, 373 375 .get = nforce2_get,
+34 -7
drivers/cpufreq/cpufreq.c
··· 524 524 } 525 525 EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq); 526 526 527 + unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy) 528 + { 529 + unsigned int latency; 530 + 531 + if (policy->transition_delay_us) 532 + return policy->transition_delay_us; 533 + 534 + latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC; 535 + if (latency) { 536 + /* 537 + * For platforms that can change the frequency very fast (< 10 538 + * us), the above formula gives a decent transition delay. But 539 + * for platforms where transition_latency is in milliseconds, it 540 + * ends up giving unrealistic values. 541 + * 542 + * Cap the default transition delay to 10 ms, which seems to be 543 + * a reasonable amount of time after which we should reevaluate 544 + * the frequency. 545 + */ 546 + return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000); 547 + } 548 + 549 + return LATENCY_MULTIPLIER; 550 + } 551 + EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us); 552 + 527 553 /********************************************************************* 528 554 * SYSFS INTERFACE * 529 555 *********************************************************************/ ··· 1843 1817 * twice in parallel for the same policy and that it will never be called in 1844 1818 * parallel with either ->target() or ->target_index() for the same policy. 1845 1819 * 1846 - * If CPUFREQ_ENTRY_INVALID is returned by the driver's ->fast_switch() 1847 - * callback to indicate an error condition, the hardware configuration must be 1848 - * preserved. 1820 + * Returns the actual frequency set for the CPU. 1821 + * 1822 + * If 0 is returned by the driver's ->fast_switch() callback to indicate an 1823 + * error condition, the hardware configuration must be preserved. 
1849 1824 */ 1850 1825 unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy, 1851 1826 unsigned int target_freq) ··· 2015 1988 if (!policy->governor) 2016 1989 return -EINVAL; 2017 1990 2018 - if (policy->governor->max_transition_latency && 2019 - policy->cpuinfo.transition_latency > 2020 - policy->governor->max_transition_latency) { 1991 + /* Platform doesn't want dynamic frequency switching ? */ 1992 + if (policy->governor->dynamic_switching && 1993 + cpufreq_driver->flags & CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING) { 2021 1994 struct cpufreq_governor *gov = cpufreq_fallback_governor(); 2022 1995 2023 1996 if (gov) { 2024 - pr_warn("%s governor failed, too long transition latency of HW, fallback to %s governor\n", 1997 + pr_warn("Can't use %s governor as dynamic switching is disallowed. Fallback to %s governor\n", 2025 1998 policy->governor->name, gov->name); 2026 1999 policy->governor = gov; 2027 2000 } else {
-6
drivers/cpufreq/cpufreq_conservative.c
··· 246 246 gov_show_one_common(sampling_down_factor); 247 247 gov_show_one_common(up_threshold); 248 248 gov_show_one_common(ignore_nice_load); 249 - gov_show_one_common(min_sampling_rate); 250 249 gov_show_one(cs, down_threshold); 251 250 gov_show_one(cs, freq_step); 252 251 ··· 253 254 gov_attr_rw(sampling_down_factor); 254 255 gov_attr_rw(up_threshold); 255 256 gov_attr_rw(ignore_nice_load); 256 - gov_attr_ro(min_sampling_rate); 257 257 gov_attr_rw(down_threshold); 258 258 gov_attr_rw(freq_step); 259 259 260 260 static struct attribute *cs_attributes[] = { 261 - &min_sampling_rate.attr, 262 261 &sampling_rate.attr, 263 262 &sampling_down_factor.attr, 264 263 &up_threshold.attr, ··· 294 297 dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; 295 298 dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR; 296 299 dbs_data->ignore_nice_load = 0; 297 - 298 300 dbs_data->tuners = tuners; 299 - dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO * 300 - jiffies_to_usecs(10); 301 301 302 302 return 0; 303 303 }
+5 -15
drivers/cpufreq/cpufreq_governor.c
··· 47 47 { 48 48 struct dbs_data *dbs_data = to_dbs_data(attr_set); 49 49 struct policy_dbs_info *policy_dbs; 50 - unsigned int rate; 51 50 int ret; 52 - ret = sscanf(buf, "%u", &rate); 51 + ret = sscanf(buf, "%u", &dbs_data->sampling_rate); 53 52 if (ret != 1) 54 53 return -EINVAL; 55 - 56 - dbs_data->sampling_rate = max(rate, dbs_data->min_sampling_rate); 57 54 58 55 /* 59 56 * We are operating under dbs_data->mutex and so the list and its ··· 272 275 struct policy_dbs_info *policy_dbs = cdbs->policy_dbs; 273 276 u64 delta_ns, lst; 274 277 278 + if (!cpufreq_can_do_remote_dvfs(policy_dbs->policy)) 279 + return; 280 + 275 281 /* 276 282 * The work may not be allowed to be queued up right now. 277 283 * Possible reasons: ··· 392 392 struct dbs_governor *gov = dbs_governor_of(policy); 393 393 struct dbs_data *dbs_data; 394 394 struct policy_dbs_info *policy_dbs; 395 - unsigned int latency; 396 395 int ret = 0; 397 396 398 397 /* State should be equivalent to EXIT */ ··· 430 431 if (ret) 431 432 goto free_policy_dbs_info; 432 433 433 - /* policy latency is in ns. Convert it to us first */ 434 - latency = policy->cpuinfo.transition_latency / 1000; 435 - if (latency == 0) 436 - latency = 1; 437 - 438 - /* Bring kernel and HW constraints together */ 439 - dbs_data->min_sampling_rate = max(dbs_data->min_sampling_rate, 440 - MIN_LATENCY_MULTIPLIER * latency); 441 - dbs_data->sampling_rate = max(dbs_data->min_sampling_rate, 442 - LATENCY_MULTIPLIER * latency); 434 + dbs_data->sampling_rate = cpufreq_policy_transition_delay_us(policy); 443 435 444 436 if (!have_governor_per_policy()) 445 437 gov->gdbs_data = dbs_data;
+1 -2
drivers/cpufreq/cpufreq_governor.h
··· 41 41 struct dbs_data { 42 42 struct gov_attr_set attr_set; 43 43 void *tuners; 44 - unsigned int min_sampling_rate; 45 44 unsigned int ignore_nice_load; 46 45 unsigned int sampling_rate; 47 46 unsigned int sampling_down_factor; ··· 159 160 #define CPUFREQ_DBS_GOVERNOR_INITIALIZER(_name_) \ 160 161 { \ 161 162 .name = _name_, \ 162 - .max_transition_latency = TRANSITION_LATENCY_LIMIT, \ 163 + .dynamic_switching = true, \ 163 164 .owner = THIS_MODULE, \ 164 165 .init = cpufreq_dbs_governor_init, \ 165 166 .exit = cpufreq_dbs_governor_exit, \
-12
drivers/cpufreq/cpufreq_ondemand.c
··· 319 319 gov_show_one_common(up_threshold); 320 320 gov_show_one_common(sampling_down_factor); 321 321 gov_show_one_common(ignore_nice_load); 322 - gov_show_one_common(min_sampling_rate); 323 322 gov_show_one_common(io_is_busy); 324 323 gov_show_one(od, powersave_bias); 325 324 ··· 328 329 gov_attr_rw(sampling_down_factor); 329 330 gov_attr_rw(ignore_nice_load); 330 331 gov_attr_rw(powersave_bias); 331 - gov_attr_ro(min_sampling_rate); 332 332 333 333 static struct attribute *od_attributes[] = { 334 - &min_sampling_rate.attr, 335 334 &sampling_rate.attr, 336 335 &up_threshold.attr, 337 336 &sampling_down_factor.attr, ··· 370 373 if (idle_time != -1ULL) { 371 374 /* Idle micro accounting is supported. Use finer thresholds */ 372 375 dbs_data->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD; 373 - /* 374 - * In nohz/micro accounting case we set the minimum frequency 375 - * not depending on HZ, but fixed (very low). 376 - */ 377 - dbs_data->min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE; 378 376 } else { 379 377 dbs_data->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; 380 - 381 - /* For correct statistics, we need 10 ticks for each measure */ 382 - dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO * 383 - jiffies_to_usecs(10); 384 378 } 385 379 386 380 dbs_data->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
-103
drivers/cpufreq/dbx500-cpufreq.c
··· 1 - /* 2 - * Copyright (C) STMicroelectronics 2009 3 - * Copyright (C) ST-Ericsson SA 2010-2012 4 - * 5 - * License Terms: GNU General Public License v2 6 - * Author: Sundar Iyer <sundar.iyer@stericsson.com> 7 - * Author: Martin Persson <martin.persson@stericsson.com> 8 - * Author: Jonas Aaberg <jonas.aberg@stericsson.com> 9 - */ 10 - 11 - #include <linux/module.h> 12 - #include <linux/kernel.h> 13 - #include <linux/cpufreq.h> 14 - #include <linux/cpu_cooling.h> 15 - #include <linux/delay.h> 16 - #include <linux/slab.h> 17 - #include <linux/platform_device.h> 18 - #include <linux/clk.h> 19 - 20 - static struct cpufreq_frequency_table *freq_table; 21 - static struct clk *armss_clk; 22 - static struct thermal_cooling_device *cdev; 23 - 24 - static int dbx500_cpufreq_target(struct cpufreq_policy *policy, 25 - unsigned int index) 26 - { 27 - /* update armss clk frequency */ 28 - return clk_set_rate(armss_clk, freq_table[index].frequency * 1000); 29 - } 30 - 31 - static int dbx500_cpufreq_init(struct cpufreq_policy *policy) 32 - { 33 - policy->clk = armss_clk; 34 - return cpufreq_generic_init(policy, freq_table, 20 * 1000); 35 - } 36 - 37 - static int dbx500_cpufreq_exit(struct cpufreq_policy *policy) 38 - { 39 - if (!IS_ERR(cdev)) 40 - cpufreq_cooling_unregister(cdev); 41 - return 0; 42 - } 43 - 44 - static void dbx500_cpufreq_ready(struct cpufreq_policy *policy) 45 - { 46 - cdev = cpufreq_cooling_register(policy); 47 - if (IS_ERR(cdev)) 48 - pr_err("Failed to register cooling device %ld\n", PTR_ERR(cdev)); 49 - else 50 - pr_info("Cooling device registered: %s\n", cdev->type); 51 - } 52 - 53 - static struct cpufreq_driver dbx500_cpufreq_driver = { 54 - .flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS | 55 - CPUFREQ_NEED_INITIAL_FREQ_CHECK, 56 - .verify = cpufreq_generic_frequency_table_verify, 57 - .target_index = dbx500_cpufreq_target, 58 - .get = cpufreq_generic_get, 59 - .init = dbx500_cpufreq_init, 60 - .exit = dbx500_cpufreq_exit, 61 - .ready = 
dbx500_cpufreq_ready, 62 - .name = "DBX500", 63 - .attr = cpufreq_generic_attr, 64 - }; 65 - 66 - static int dbx500_cpufreq_probe(struct platform_device *pdev) 67 - { 68 - struct cpufreq_frequency_table *pos; 69 - 70 - freq_table = dev_get_platdata(&pdev->dev); 71 - if (!freq_table) { 72 - pr_err("dbx500-cpufreq: Failed to fetch cpufreq table\n"); 73 - return -ENODEV; 74 - } 75 - 76 - armss_clk = clk_get(&pdev->dev, "armss"); 77 - if (IS_ERR(armss_clk)) { 78 - pr_err("dbx500-cpufreq: Failed to get armss clk\n"); 79 - return PTR_ERR(armss_clk); 80 - } 81 - 82 - pr_info("dbx500-cpufreq: Available frequencies:\n"); 83 - cpufreq_for_each_entry(pos, freq_table) 84 - pr_info(" %d Mhz\n", pos->frequency / 1000); 85 - 86 - return cpufreq_register_driver(&dbx500_cpufreq_driver); 87 - } 88 - 89 - static struct platform_driver dbx500_cpufreq_plat_driver = { 90 - .driver = { 91 - .name = "cpufreq-ux500", 92 - }, 93 - .probe = dbx500_cpufreq_probe, 94 - }; 95 - 96 - static int __init dbx500_cpufreq_register(void) 97 - { 98 - return platform_driver_register(&dbx500_cpufreq_plat_driver); 99 - } 100 - device_initcall(dbx500_cpufreq_register); 101 - 102 - MODULE_LICENSE("GPL v2"); 103 - MODULE_DESCRIPTION("cpufreq driver for DBX500");
+1 -3
drivers/cpufreq/elanfreq.c
··· 165 165 if (pos->frequency > max_freq) 166 166 pos->frequency = CPUFREQ_ENTRY_INVALID; 167 167 168 - /* cpuinfo and default policy values */ 169 - policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 170 - 171 168 return cpufreq_table_validate_and_show(policy, elanfreq_table); 172 169 } 173 170 ··· 193 196 194 197 static struct cpufreq_driver elanfreq_driver = { 195 198 .get = elanfreq_get_cpu_frequency, 199 + .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING, 196 200 .verify = cpufreq_generic_frequency_table_verify, 197 201 .target_index = elanfreq_target, 198 202 .init = elanfreq_cpu_init,
+1 -1
drivers/cpufreq/gx-suspmod.c
···
428 428     policy->max = maxfreq;
429 429     policy->cpuinfo.min_freq = maxfreq / max_duration;
430 430     policy->cpuinfo.max_freq = maxfreq;
431 -       policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
432 431 
433 432     return 0;
434 433 }
···
437 438  * MediaGX/Geode GX initialize cpufreq driver
438 439  */
439 440 static struct cpufreq_driver gx_suspmod_driver = {
441 +       .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
440 442     .get = gx_get_cpuspeed,
441 443     .verify = cpufreq_gx_verify,
442 444     .target = cpufreq_gx_target,
+9
drivers/cpufreq/imx6q-cpufreq.c
···
 47  47     struct dev_pm_opp *opp;
 48  48     unsigned long freq_hz, volt, volt_old;
 49  49     unsigned int old_freq, new_freq;
 50 +        bool pll1_sys_temp_enabled = false;
 50  51     int ret;
 51  52 
 52  53     new_freq = freq_table[index].frequency;
···
125 124     if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) {
126 125         clk_set_rate(pll1_sys_clk, new_freq * 1000);
127 126         clk_set_parent(pll1_sw_clk, pll1_sys_clk);
127 +        } else {
128 +            /* pll1_sys needs to be enabled for divider rate change to work. */
129 +            pll1_sys_temp_enabled = true;
130 +            clk_prepare_enable(pll1_sys_clk);
128 131     }
129 132 }
130 133 
···
139 134         regulator_set_voltage_tol(arm_reg, volt_old, 0);
140 135         return ret;
141 136     }
137 + 
138 +    /* PLL1 is only needed until after ARM-PODF is set. */
139 +    if (pll1_sys_temp_enabled)
140 +        clk_disable_unprepare(pll1_sys_clk);
142 141 
143 142     /* scaling down?  scale voltage after frequency */
144 143     if (new_freq < old_freq) {
+24 -298
drivers/cpufreq/intel_pstate.c
··· 37 37 #include <asm/cpufeature.h> 38 38 #include <asm/intel-family.h> 39 39 40 - #define INTEL_PSTATE_DEFAULT_SAMPLING_INTERVAL (10 * NSEC_PER_MSEC) 41 - #define INTEL_PSTATE_HWP_SAMPLING_INTERVAL (50 * NSEC_PER_MSEC) 40 + #define INTEL_PSTATE_SAMPLING_INTERVAL (10 * NSEC_PER_MSEC) 42 41 43 42 #define INTEL_CPUFREQ_TRANSITION_LATENCY 20000 44 43 #define INTEL_CPUFREQ_TRANSITION_DELAY 500 ··· 172 173 }; 173 174 174 175 /** 175 - * struct _pid - Stores PID data 176 - * @setpoint: Target set point for busyness or performance 177 - * @integral: Storage for accumulated error values 178 - * @p_gain: PID proportional gain 179 - * @i_gain: PID integral gain 180 - * @d_gain: PID derivative gain 181 - * @deadband: PID deadband 182 - * @last_err: Last error storage for integral part of PID calculation 183 - * 184 - * Stores PID coefficients and last error for PID controller. 185 - */ 186 - struct _pid { 187 - int setpoint; 188 - int32_t integral; 189 - int32_t p_gain; 190 - int32_t i_gain; 191 - int32_t d_gain; 192 - int deadband; 193 - int32_t last_err; 194 - }; 195 - 196 - /** 197 176 * struct global_params - Global parameters, mostly tunable via sysfs. 198 177 * @no_turbo: Whether or not to use turbo P-states. 199 178 * @turbo_disabled: Whethet or not turbo P-states are available at all, ··· 200 223 * @last_update: Time of the last update. 
201 224 * @pstate: Stores P state limits for this CPU 202 225 * @vid: Stores VID limits for this CPU 203 - * @pid: Stores PID parameters for this CPU 204 226 * @last_sample_time: Last Sample time 205 227 * @aperf_mperf_shift: Number of clock cycles after aperf, merf is incremented 206 228 * This shift is a multiplier to mperf delta to ··· 234 258 235 259 struct pstate_data pstate; 236 260 struct vid_data vid; 237 - struct _pid pid; 238 261 239 262 u64 last_update; 240 263 u64 last_sample_time; ··· 259 284 static struct cpudata **all_cpu_data; 260 285 261 286 /** 262 - * struct pstate_adjust_policy - Stores static PID configuration data 263 - * @sample_rate_ms: PID calculation sample rate in ms 264 - * @sample_rate_ns: Sample rate calculation in ns 265 - * @deadband: PID deadband 266 - * @setpoint: PID Setpoint 267 - * @p_gain_pct: PID proportional gain 268 - * @i_gain_pct: PID integral gain 269 - * @d_gain_pct: PID derivative gain 270 - * 271 - * Stores per CPU model static PID configuration data. 272 - */ 273 - struct pstate_adjust_policy { 274 - int sample_rate_ms; 275 - s64 sample_rate_ns; 276 - int deadband; 277 - int setpoint; 278 - int p_gain_pct; 279 - int d_gain_pct; 280 - int i_gain_pct; 281 - }; 282 - 283 - /** 284 287 * struct pstate_funcs - Per CPU model specific callbacks 285 288 * @get_max: Callback to get maximum non turbo effective P state 286 289 * @get_max_physical: Callback to get maximum non turbo physical P state ··· 267 314 * @get_scaling: Callback to get frequency scaling factor 268 315 * @get_val: Callback to convert P state to actual MSR write value 269 316 * @get_vid: Callback to get VID data for Atom platforms 270 - * @update_util: Active mode utilization update callback. 271 317 * 272 318 * Core and Atom CPU models have different way to get P State limits. This 273 319 * structure is used to store those callbacks. 
··· 280 328 int (*get_aperf_mperf_shift)(void); 281 329 u64 (*get_val)(struct cpudata*, int pstate); 282 330 void (*get_vid)(struct cpudata *); 283 - void (*update_util)(struct update_util_data *data, u64 time, 284 - unsigned int flags); 285 331 }; 286 332 287 333 static struct pstate_funcs pstate_funcs __read_mostly; 288 - static struct pstate_adjust_policy pid_params __read_mostly = { 289 - .sample_rate_ms = 10, 290 - .sample_rate_ns = 10 * NSEC_PER_MSEC, 291 - .deadband = 0, 292 - .setpoint = 97, 293 - .p_gain_pct = 20, 294 - .d_gain_pct = 0, 295 - .i_gain_pct = 0, 296 - }; 297 334 298 335 static int hwp_active __read_mostly; 299 336 static bool per_cpu_limits __read_mostly; ··· 449 508 { 450 509 } 451 510 #endif 452 - 453 - static signed int pid_calc(struct _pid *pid, int32_t busy) 454 - { 455 - signed int result; 456 - int32_t pterm, dterm, fp_error; 457 - int32_t integral_limit; 458 - 459 - fp_error = pid->setpoint - busy; 460 - 461 - if (abs(fp_error) <= pid->deadband) 462 - return 0; 463 - 464 - pterm = mul_fp(pid->p_gain, fp_error); 465 - 466 - pid->integral += fp_error; 467 - 468 - /* 469 - * We limit the integral here so that it will never 470 - * get higher than 30. This prevents it from becoming 471 - * too large an input over long periods of time and allows 472 - * it to get factored out sooner. 473 - * 474 - * The value of 30 was chosen through experimentation. 
475 - */ 476 - integral_limit = int_tofp(30); 477 - if (pid->integral > integral_limit) 478 - pid->integral = integral_limit; 479 - if (pid->integral < -integral_limit) 480 - pid->integral = -integral_limit; 481 - 482 - dterm = mul_fp(pid->d_gain, fp_error - pid->last_err); 483 - pid->last_err = fp_error; 484 - 485 - result = pterm + mul_fp(pid->integral, pid->i_gain) + dterm; 486 - result = result + (1 << (FRAC_BITS-1)); 487 - return (signed int)fp_toint(result); 488 - } 489 - 490 - static inline void intel_pstate_pid_reset(struct cpudata *cpu) 491 - { 492 - struct _pid *pid = &cpu->pid; 493 - 494 - pid->p_gain = percent_fp(pid_params.p_gain_pct); 495 - pid->d_gain = percent_fp(pid_params.d_gain_pct); 496 - pid->i_gain = percent_fp(pid_params.i_gain_pct); 497 - pid->setpoint = int_tofp(pid_params.setpoint); 498 - pid->last_err = pid->setpoint - int_tofp(100); 499 - pid->deadband = int_tofp(pid_params.deadband); 500 - pid->integral = 0; 501 - } 502 511 503 512 static inline void update_turbo_state(void) 504 513 { ··· 801 910 for_each_possible_cpu(cpu) 802 911 cpufreq_update_policy(cpu); 803 912 } 804 - 805 - /************************** debugfs begin ************************/ 806 - static int pid_param_set(void *data, u64 val) 807 - { 808 - unsigned int cpu; 809 - 810 - *(u32 *)data = val; 811 - pid_params.sample_rate_ns = pid_params.sample_rate_ms * NSEC_PER_MSEC; 812 - for_each_possible_cpu(cpu) 813 - if (all_cpu_data[cpu]) 814 - intel_pstate_pid_reset(all_cpu_data[cpu]); 815 - 816 - return 0; 817 - } 818 - 819 - static int pid_param_get(void *data, u64 *val) 820 - { 821 - *val = *(u32 *)data; 822 - return 0; 823 - } 824 - DEFINE_SIMPLE_ATTRIBUTE(fops_pid_param, pid_param_get, pid_param_set, "%llu\n"); 825 - 826 - static struct dentry *debugfs_parent; 827 - 828 - struct pid_param { 829 - char *name; 830 - void *value; 831 - struct dentry *dentry; 832 - }; 833 - 834 - static struct pid_param pid_files[] = { 835 - {"sample_rate_ms", &pid_params.sample_rate_ms, }, 
836 - {"d_gain_pct", &pid_params.d_gain_pct, }, 837 - {"i_gain_pct", &pid_params.i_gain_pct, }, 838 - {"deadband", &pid_params.deadband, }, 839 - {"setpoint", &pid_params.setpoint, }, 840 - {"p_gain_pct", &pid_params.p_gain_pct, }, 841 - {NULL, NULL, } 842 - }; 843 - 844 - static void intel_pstate_debug_expose_params(void) 845 - { 846 - int i; 847 - 848 - debugfs_parent = debugfs_create_dir("pstate_snb", NULL); 849 - if (IS_ERR_OR_NULL(debugfs_parent)) 850 - return; 851 - 852 - for (i = 0; pid_files[i].name; i++) { 853 - struct dentry *dentry; 854 - 855 - dentry = debugfs_create_file(pid_files[i].name, 0660, 856 - debugfs_parent, pid_files[i].value, 857 - &fops_pid_param); 858 - if (!IS_ERR(dentry)) 859 - pid_files[i].dentry = dentry; 860 - } 861 - } 862 - 863 - static void intel_pstate_debug_hide_params(void) 864 - { 865 - int i; 866 - 867 - if (IS_ERR_OR_NULL(debugfs_parent)) 868 - return; 869 - 870 - for (i = 0; pid_files[i].name; i++) { 871 - debugfs_remove(pid_files[i].dentry); 872 - pid_files[i].dentry = NULL; 873 - } 874 - 875 - debugfs_remove(debugfs_parent); 876 - debugfs_parent = NULL; 877 - } 878 - 879 - /************************** debugfs end ************************/ 880 913 881 914 /************************** sysfs begin ************************/ 882 915 #define show_one(file_name, object) \ ··· 1437 1622 cpu->sample.core_avg_perf); 1438 1623 } 1439 1624 1440 - static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu) 1625 + static inline int32_t get_target_pstate(struct cpudata *cpu) 1441 1626 { 1442 1627 struct sample *sample = &cpu->sample; 1443 1628 int32_t busy_frac, boost; ··· 1475 1660 return target; 1476 1661 } 1477 1662 1478 - static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu) 1479 - { 1480 - int32_t perf_scaled, max_pstate, current_pstate, sample_ratio; 1481 - u64 duration_ns; 1482 - 1483 - /* 1484 - * perf_scaled is the ratio of the average P-state during the last 1485 - * sampling period to the 
P-state requested last time (in percent). 1486 - * 1487 - * That measures the system's response to the previous P-state 1488 - * selection. 1489 - */ 1490 - max_pstate = cpu->pstate.max_pstate_physical; 1491 - current_pstate = cpu->pstate.current_pstate; 1492 - perf_scaled = mul_ext_fp(cpu->sample.core_avg_perf, 1493 - div_fp(100 * max_pstate, current_pstate)); 1494 - 1495 - /* 1496 - * Since our utilization update callback will not run unless we are 1497 - * in C0, check if the actual elapsed time is significantly greater (3x) 1498 - * than our sample interval. If it is, then we were idle for a long 1499 - * enough period of time to adjust our performance metric. 1500 - */ 1501 - duration_ns = cpu->sample.time - cpu->last_sample_time; 1502 - if ((s64)duration_ns > pid_params.sample_rate_ns * 3) { 1503 - sample_ratio = div_fp(pid_params.sample_rate_ns, duration_ns); 1504 - perf_scaled = mul_fp(perf_scaled, sample_ratio); 1505 - } else { 1506 - sample_ratio = div_fp(100 * (cpu->sample.mperf << cpu->aperf_mperf_shift), 1507 - cpu->sample.tsc); 1508 - if (sample_ratio < int_tofp(1)) 1509 - perf_scaled = 0; 1510 - } 1511 - 1512 - cpu->sample.busy_scaled = perf_scaled; 1513 - return cpu->pstate.current_pstate - pid_calc(&cpu->pid, perf_scaled); 1514 - } 1515 - 1516 1663 static int intel_pstate_prepare_request(struct cpudata *cpu, int pstate) 1517 1664 { 1518 1665 int max_pstate = intel_pstate_get_base_pstate(cpu); ··· 1494 1717 wrmsrl(MSR_IA32_PERF_CTL, pstate_funcs.get_val(cpu, pstate)); 1495 1718 } 1496 1719 1497 - static void intel_pstate_adjust_pstate(struct cpudata *cpu, int target_pstate) 1720 + static void intel_pstate_adjust_pstate(struct cpudata *cpu) 1498 1721 { 1499 1722 int from = cpu->pstate.current_pstate; 1500 1723 struct sample *sample; 1724 + int target_pstate; 1501 1725 1502 1726 update_turbo_state(); 1503 1727 1728 + target_pstate = get_target_pstate(cpu); 1504 1729 target_pstate = intel_pstate_prepare_request(cpu, target_pstate); 1505 1730 
trace_cpu_frequency(target_pstate * cpu->pstate.scaling, cpu->cpu); 1506 1731 intel_pstate_update_pstate(cpu, target_pstate); ··· 1519 1740 fp_toint(cpu->iowait_boost * 100)); 1520 1741 } 1521 1742 1522 - static void intel_pstate_update_util_pid(struct update_util_data *data, 1523 - u64 time, unsigned int flags) 1524 - { 1525 - struct cpudata *cpu = container_of(data, struct cpudata, update_util); 1526 - u64 delta_ns = time - cpu->sample.time; 1527 - 1528 - if ((s64)delta_ns < pid_params.sample_rate_ns) 1529 - return; 1530 - 1531 - if (intel_pstate_sample(cpu, time)) { 1532 - int target_pstate; 1533 - 1534 - target_pstate = get_target_pstate_use_performance(cpu); 1535 - intel_pstate_adjust_pstate(cpu, target_pstate); 1536 - } 1537 - } 1538 - 1539 1743 static void intel_pstate_update_util(struct update_util_data *data, u64 time, 1540 1744 unsigned int flags) 1541 1745 { 1542 1746 struct cpudata *cpu = container_of(data, struct cpudata, update_util); 1543 1747 u64 delta_ns; 1544 1748 1749 + /* Don't allow remote callbacks */ 1750 + if (smp_processor_id() != cpu->cpu) 1751 + return; 1752 + 1545 1753 if (flags & SCHED_CPUFREQ_IOWAIT) { 1546 1754 cpu->iowait_boost = int_tofp(1); 1755 + cpu->last_update = time; 1756 + /* 1757 + * The last time the busy was 100% so P-state was max anyway 1758 + * so avoid overhead of computation. 1759 + */ 1760 + if (fp_toint(cpu->sample.busy_scaled) == 100) 1761 + return; 1762 + 1763 + goto set_pstate; 1547 1764 } else if (cpu->iowait_boost) { 1548 1765 /* Clear iowait_boost if the CPU may have been idle. 
*/ 1549 1766 delta_ns = time - cpu->last_update; ··· 1548 1773 } 1549 1774 cpu->last_update = time; 1550 1775 delta_ns = time - cpu->sample.time; 1551 - if ((s64)delta_ns < INTEL_PSTATE_DEFAULT_SAMPLING_INTERVAL) 1776 + if ((s64)delta_ns < INTEL_PSTATE_SAMPLING_INTERVAL) 1552 1777 return; 1553 1778 1554 - if (intel_pstate_sample(cpu, time)) { 1555 - int target_pstate; 1556 - 1557 - target_pstate = get_target_pstate_use_cpu_load(cpu); 1558 - intel_pstate_adjust_pstate(cpu, target_pstate); 1559 - } 1779 + set_pstate: 1780 + if (intel_pstate_sample(cpu, time)) 1781 + intel_pstate_adjust_pstate(cpu); 1560 1782 } 1561 1783 1562 1784 static struct pstate_funcs core_funcs = { ··· 1563 1791 .get_turbo = core_get_turbo_pstate, 1564 1792 .get_scaling = core_get_scaling, 1565 1793 .get_val = core_get_val, 1566 - .update_util = intel_pstate_update_util_pid, 1567 1794 }; 1568 1795 1569 1796 static const struct pstate_funcs silvermont_funcs = { ··· 1573 1802 .get_val = atom_get_val, 1574 1803 .get_scaling = silvermont_get_scaling, 1575 1804 .get_vid = atom_get_vid, 1576 - .update_util = intel_pstate_update_util, 1577 1805 }; 1578 1806 1579 1807 static const struct pstate_funcs airmont_funcs = { ··· 1583 1813 .get_val = atom_get_val, 1584 1814 .get_scaling = airmont_get_scaling, 1585 1815 .get_vid = atom_get_vid, 1586 - .update_util = intel_pstate_update_util, 1587 1816 }; 1588 1817 1589 1818 static const struct pstate_funcs knl_funcs = { ··· 1593 1824 .get_aperf_mperf_shift = knl_get_aperf_mperf_shift, 1594 1825 .get_scaling = core_get_scaling, 1595 1826 .get_val = core_get_val, 1596 - .update_util = intel_pstate_update_util_pid, 1597 1827 }; 1598 1828 1599 1829 static const struct pstate_funcs bxt_funcs = { ··· 1602 1834 .get_turbo = core_get_turbo_pstate, 1603 1835 .get_scaling = core_get_scaling, 1604 1836 .get_val = core_get_val, 1605 - .update_util = intel_pstate_update_util, 1606 1837 }; 1607 1838 1608 1839 #define ICPU(model, policy) \ ··· 1645 1878 {} 1646 1879 }; 1647 
1880 1648 - static bool pid_in_use(void); 1649 - 1650 1881 static int intel_pstate_init_cpu(unsigned int cpunum) 1651 1882 { 1652 1883 struct cpudata *cpu; ··· 1675 1910 intel_pstate_disable_ee(cpunum); 1676 1911 1677 1912 intel_pstate_hwp_enable(cpu); 1678 - } else if (pid_in_use()) { 1679 - intel_pstate_pid_reset(cpu); 1680 1913 } 1681 1914 1682 1915 intel_pstate_get_cpu_pstates(cpu); ··· 1697 1934 /* Prevent intel_pstate_update_util() from using stale data. */ 1698 1935 cpu->sample.time = 0; 1699 1936 cpufreq_add_update_util_hook(cpu_num, &cpu->update_util, 1700 - pstate_funcs.update_util); 1937 + intel_pstate_update_util); 1701 1938 cpu->update_util_set = true; 1702 1939 } 1703 1940 ··· 1895 2132 policy->cpuinfo.max_freq *= cpu->pstate.scaling; 1896 2133 1897 2134 intel_pstate_init_acpi_perf_limits(policy); 1898 - cpumask_set_cpu(policy->cpu, policy->cpus); 1899 2135 1900 2136 policy->fast_switch_possible = true; 1901 2137 ··· 1908 2146 if (ret) 1909 2147 return ret; 1910 2148 1911 - policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 1912 2149 if (IS_ENABLED(CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE)) 1913 2150 policy->policy = CPUFREQ_POLICY_PERFORMANCE; 1914 2151 else ··· 2022 2261 2023 2262 static struct cpufreq_driver *default_driver = &intel_pstate; 2024 2263 2025 - static bool pid_in_use(void) 2026 - { 2027 - return intel_pstate_driver == &intel_pstate && 2028 - pstate_funcs.update_util == intel_pstate_update_util_pid; 2029 - } 2030 - 2031 2264 static void intel_pstate_driver_cleanup(void) 2032 2265 { 2033 2266 unsigned int cpu; ··· 2056 2301 2057 2302 global.min_perf_pct = min_perf_pct_min(); 2058 2303 2059 - if (pid_in_use()) 2060 - intel_pstate_debug_expose_params(); 2061 - 2062 2304 return 0; 2063 2305 } 2064 2306 ··· 2063 2311 { 2064 2312 if (hwp_active) 2065 2313 return -EBUSY; 2066 - 2067 - if (pid_in_use()) 2068 - intel_pstate_debug_hide_params(); 2069 2314 2070 2315 cpufreq_unregister_driver(intel_pstate_driver); 2071 2316 
intel_pstate_driver_cleanup(); ··· 2131 2382 return 0; 2132 2383 } 2133 2384 2134 - #ifdef CONFIG_ACPI 2135 - static void intel_pstate_use_acpi_profile(void) 2136 - { 2137 - switch (acpi_gbl_FADT.preferred_profile) { 2138 - case PM_MOBILE: 2139 - case PM_TABLET: 2140 - case PM_APPLIANCE_PC: 2141 - case PM_DESKTOP: 2142 - case PM_WORKSTATION: 2143 - pstate_funcs.update_util = intel_pstate_update_util; 2144 - } 2145 - } 2146 - #else 2147 - static void intel_pstate_use_acpi_profile(void) 2148 - { 2149 - } 2150 - #endif 2151 - 2152 2385 static void __init copy_cpu_funcs(struct pstate_funcs *funcs) 2153 2386 { 2154 2387 pstate_funcs.get_max = funcs->get_max; ··· 2140 2409 pstate_funcs.get_scaling = funcs->get_scaling; 2141 2410 pstate_funcs.get_val = funcs->get_val; 2142 2411 pstate_funcs.get_vid = funcs->get_vid; 2143 - pstate_funcs.update_util = funcs->update_util; 2144 2412 pstate_funcs.get_aperf_mperf_shift = funcs->get_aperf_mperf_shift; 2145 - 2146 - intel_pstate_use_acpi_profile(); 2147 2413 } 2148 2414 2149 2415 #ifdef CONFIG_ACPI ··· 2284 2556 2285 2557 if (x86_match_cpu(hwp_support_ids)) { 2286 2558 copy_cpu_funcs(&core_funcs); 2287 - if (no_hwp) { 2288 - pstate_funcs.update_util = intel_pstate_update_util; 2289 - } else { 2559 + if (!no_hwp) { 2290 2560 hwp_active++; 2291 2561 intel_pstate.attr = hwp_cpufreq_attrs; 2292 2562 goto hwp_cpu_matched;
-1
drivers/cpufreq/longrun.c
···
270 270     /* cpuinfo and default policy values */
271 271     policy->cpuinfo.min_freq = longrun_low_freq;
272 272     policy->cpuinfo.max_freq = longrun_high_freq;
273 -       policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
274 273     longrun_get_policy(policy);
275 274 
276 275     return 0;
+1 -1
drivers/cpufreq/loongson2_cpufreq.c
···
114 114     .attr = cpufreq_generic_attr,
115 115 };
116 116 
117 -   static struct platform_device_id platform_device_ids[] = {
117 +   static const struct platform_device_id platform_device_ids[] = {
118 118 {
119 119     .name = "loongson2_cpufreq",
120 120 },
+16 -13
drivers/cpufreq/mt8173-cpufreq.c drivers/cpufreq/mediatek-cpufreq.c
··· 507 507 return 0; 508 508 } 509 509 510 - static struct cpufreq_driver mt8173_cpufreq_driver = { 510 + static struct cpufreq_driver mtk_cpufreq_driver = { 511 511 .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK | 512 512 CPUFREQ_HAVE_GOVERNOR_PER_POLICY, 513 513 .verify = cpufreq_generic_frequency_table_verify, ··· 520 520 .attr = cpufreq_generic_attr, 521 521 }; 522 522 523 - static int mt8173_cpufreq_probe(struct platform_device *pdev) 523 + static int mtk_cpufreq_probe(struct platform_device *pdev) 524 524 { 525 525 struct mtk_cpu_dvfs_info *info, *tmp; 526 526 int cpu, ret; ··· 547 547 list_add(&info->list_head, &dvfs_info_list); 548 548 } 549 549 550 - ret = cpufreq_register_driver(&mt8173_cpufreq_driver); 550 + ret = cpufreq_register_driver(&mtk_cpufreq_driver); 551 551 if (ret) { 552 552 dev_err(&pdev->dev, "failed to register mtk cpufreq driver\n"); 553 553 goto release_dvfs_info_list; ··· 564 564 return ret; 565 565 } 566 566 567 - static struct platform_driver mt8173_cpufreq_platdrv = { 567 + static struct platform_driver mtk_cpufreq_platdrv = { 568 568 .driver = { 569 - .name = "mt8173-cpufreq", 569 + .name = "mtk-cpufreq", 570 570 }, 571 - .probe = mt8173_cpufreq_probe, 571 + .probe = mtk_cpufreq_probe, 572 572 }; 573 573 574 574 /* List of machines supported by this driver */ 575 - static const struct of_device_id mt8173_cpufreq_machines[] __initconst = { 575 + static const struct of_device_id mtk_cpufreq_machines[] __initconst = { 576 + { .compatible = "mediatek,mt2701", }, 577 + { .compatible = "mediatek,mt7622", }, 578 + { .compatible = "mediatek,mt7623", }, 576 579 { .compatible = "mediatek,mt817x", }, 577 580 { .compatible = "mediatek,mt8173", }, 578 581 { .compatible = "mediatek,mt8176", }, ··· 583 580 { } 584 581 }; 585 582 586 - static int __init mt8173_cpufreq_driver_init(void) 583 + static int __init mtk_cpufreq_driver_init(void) 587 584 { 588 585 struct device_node *np; 589 586 const struct of_device_id *match; ··· 594 591 if 
(!np) 595 592 return -ENODEV; 596 593 597 - match = of_match_node(mt8173_cpufreq_machines, np); 594 + match = of_match_node(mtk_cpufreq_machines, np); 598 595 of_node_put(np); 599 596 if (!match) { 600 - pr_warn("Machine is not compatible with mt8173-cpufreq\n"); 597 + pr_warn("Machine is not compatible with mtk-cpufreq\n"); 601 598 return -ENODEV; 602 599 } 603 600 604 - err = platform_driver_register(&mt8173_cpufreq_platdrv); 601 + err = platform_driver_register(&mtk_cpufreq_platdrv); 605 602 if (err) 606 603 return err; 607 604 ··· 611 608 * and the device registration codes are put here to handle defer 612 609 * probing. 613 610 */ 614 - pdev = platform_device_register_simple("mt8173-cpufreq", -1, NULL, 0); 611 + pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0); 615 612 if (IS_ERR(pdev)) { 616 613 pr_err("failed to register mtk-cpufreq platform device\n"); 617 614 return PTR_ERR(pdev); ··· 619 616 620 617 return 0; 621 618 } 622 - device_initcall(mt8173_cpufreq_driver_init); 619 + device_initcall(mtk_cpufreq_driver_init);
+5 -2
drivers/cpufreq/pmac32-cpufreq.c
···
442 442     .init = pmac_cpufreq_cpu_init,
443 443     .suspend = pmac_cpufreq_suspend,
444 444     .resume = pmac_cpufreq_resume,
445 -       .flags = CPUFREQ_PM_NO_WARN,
445 +       .flags = CPUFREQ_PM_NO_WARN |
446 +           CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
446 447     .attr = cpufreq_generic_attr,
447 448     .name = "powermac",
448 449 };
···
627 626     if (!value)
628 627         goto out;
629 628     cur_freq = (*value) / 1000;
630 -       transition_latency = CPUFREQ_ETERNAL;
631 629 
632 630     /* Check for 7447A based MacRISC3 */
633 631     if (of_machine_is_compatible("MacRISC3") &&
634 632         of_get_property(cpunode, "dynamic-power-step", NULL) &&
635 633         PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
636 634         pmac_cpufreq_init_7447A(cpunode);
635 + 
636 +        /* Allow dynamic switching */
637 637         transition_latency = 8000000;
638 +        pmac_cpufreq_driver.flags &= ~CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING;
638 639     /* Check for other MacRISC3 machines */
639 640     } else if (of_machine_is_compatible("PowerBook3,4") ||
640 641            of_machine_is_compatible("PowerBook3,5") ||
+1 -1
drivers/cpufreq/pmac64-cpufreq.c
···
516 516         goto bail;
517 517     }
518 518 
519 -       DBG("cpufreq: i2c clock chip found: %s\n", hwclock->full_name);
519 +       DBG("cpufreq: i2c clock chip found: %pOF\n", hwclock);
520 520 
521 521     /* Now get all the platform functions */
522 522     pfunc_cpu_getfreq =
+3
drivers/cpufreq/s5pv210-cpufreq.c
···
602 602     }
603 603 
604 604     clk_base = of_iomap(np, 0);
605 +        of_node_put(np);
605 606     if (!clk_base) {
606 607         pr_err("%s: failed to map clock registers\n", __func__);
607 608         return -EFAULT;
···
613 612         if (id < 0 || id >= ARRAY_SIZE(dmc_base)) {
614 613             pr_err("%s: failed to get alias of dmc node '%s'\n",
615 614                    __func__, np->name);
615 +                of_node_put(np);
616 616             return id;
617 617         }
618 618 
···
621 619         if (!dmc_base[id]) {
622 620             pr_err("%s: failed to map dmc%d registers\n",
623 621                    __func__, id);
622 +                of_node_put(np);
624 623             return -EFAULT;
625 624         }
626 625     }
+3 -2
drivers/cpufreq/sa1100-cpufreq.c
···
197 197 
198 198 static int __init sa1100_cpu_init(struct cpufreq_policy *policy)
199 199 {
200 -       return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL);
200 +       return cpufreq_generic_init(policy, sa11x0_freq_table, 0);
201 201 }
202 202 
203 203 static struct cpufreq_driver sa1100_driver __refdata = {
204 -       .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
204 +       .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
205 +           CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
205 206     .verify = cpufreq_generic_frequency_table_verify,
206 207     .target_index = sa1100_target,
207 208     .get = sa11x0_getspeed,
+3 -2
drivers/cpufreq/sa1110-cpufreq.c
···
306 306 
307 307 static int __init sa1110_cpu_init(struct cpufreq_policy *policy)
308 308 {
309 -       return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL);
309 +       return cpufreq_generic_init(policy, sa11x0_freq_table, 0);
310 310 }
311 311 
312 312 /* sa1110_driver needs __refdata because it must remain after init registers
313 313  * it with cpufreq_register_driver() */
314 314 static struct cpufreq_driver sa1110_driver __refdata = {
315 -       .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
315 +       .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
316 +           CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
316 317     .verify = cpufreq_generic_frequency_table_verify,
317 318     .target_index = sa1110_target,
318 319     .get = sa11x0_getspeed,
+1 -2
drivers/cpufreq/sh-cpufreq.c
···
137 137         (clk_round_rate(cpuclk, ~0UL) + 500) / 1000;
138 138     }
139 139 
140 -       policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
141 - 
142 140     dev_info(dev, "CPU Frequencies - Minimum %u.%03u MHz, "
143 141          "Maximum %u.%03u MHz.\n",
144 142          policy->min / 1000, policy->min % 1000,
···
157 159 
158 160 static struct cpufreq_driver sh_cpufreq_driver = {
159 161     .name = "sh",
162 +        .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
160 163     .get = sh_cpufreq_get,
161 164     .target = sh_cpufreq_target,
162 165     .verify = sh_cpufreq_verify,
+1 -1
drivers/cpufreq/speedstep-ich.c
···
207 207      * 8100 which use a pretty old revision of the 82815
208 208      * host bridge. Abort on these systems.
209 209      */
210 -       static struct pci_dev *hostbridge;
210 +       struct pci_dev *hostbridge;
211 211 
212 212     hostbridge = pci_get_subsys(PCI_VENDOR_ID_INTEL,
213 213                     PCI_DEVICE_ID_INTEL_82815_MC,
+2 -2
drivers/cpufreq/speedstep-lib.c
···
 35  35 static unsigned int pentium3_get_frequency(enum speedstep_processor processor)
 36  36 {
 37  37     /* See table 14 of p3_ds.pdf and table 22 of 29834003.pdf */
 38 -        struct {
 38 +        static const struct {
 39  39         unsigned int ratio; /* Frequency Multiplier (x10) */
 40  40         u8 bitmap;          /* power on configuration bits
 41  41                        [27, 25:22] (in MSR 0x2a) */
···
 58  58     };
 59  59 
 60  60     /* PIII(-M) FSB settings: see table b1-b of 24547206.pdf */
 61 -        struct {
 61 +        static const struct {
 62  62         unsigned int value; /* Front Side Bus speed in MHz */
 63  63         u8 bitmap;          /* power on configuration bits [18: 19]
 64  64                        (in MSR 0x2a) */
+1 -1
drivers/cpufreq/speedstep-smi.c
···
266 266         pr_debug("workaround worked.\n");
267 267     }
268 268 
269 -       policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
270 269     return cpufreq_table_validate_and_show(policy, speedstep_freqs);
271 270 }
272 271 
···
289 290 
290 291 static struct cpufreq_driver speedstep_driver = {
291 292     .name = "speedstep-smi",
293 +        .flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
292 294     .verify = cpufreq_generic_frequency_table_verify,
293 295     .target_index = speedstep_target,
294 296     .init = speedstep_cpu_init,
+4 -4
drivers/cpufreq/sti-cpufreq.c
···
 65  65     ret = of_property_read_u32_index(np, "st,syscfg",
 66  66                      MAJOR_ID_INDEX, &major_offset);
 67  67     if (ret) {
 68 -            dev_err(dev, "No major number offset provided in %s [%d]\n",
 69 -                np->full_name, ret);
 68 +            dev_err(dev, "No major number offset provided in %pOF [%d]\n",
 69 +                np, ret);
 70  70         return ret;
 71  71     }
 72  72 
···
 92  92                      MINOR_ID_INDEX, &minor_offset);
 93  93     if (ret) {
 94  94         dev_err(dev,
 95 -                "No minor number offset provided %s [%d]\n",
 96 -                np->full_name, ret);
 95 +                "No minor number offset provided %pOF [%d]\n",
 96 +                np, ret);
 97  97         return ret;
 98  98     }
 99  99 
+38
drivers/cpufreq/tango-cpufreq.c
···
 1 + #include <linux/of.h>
 2 + #include <linux/cpu.h>
 3 + #include <linux/clk.h>
 4 + #include <linux/pm_opp.h>
 5 + #include <linux/platform_device.h>
 6 + 
 7 + static const struct of_device_id machines[] __initconst = {
 8 +     { .compatible = "sigma,tango4" },
 9 +     { /* sentinel */ }
10 + };
11 + 
12 + static int __init tango_cpufreq_init(void)
13 + {
14 +     struct device *cpu_dev = get_cpu_device(0);
15 +     unsigned long max_freq;
16 +     struct clk *cpu_clk;
17 +     void *res;
18 + 
19 +     if (!of_match_node(machines, of_root))
20 +         return -ENODEV;
21 + 
22 +     cpu_clk = clk_get(cpu_dev, NULL);
23 +     if (IS_ERR(cpu_clk))
24 +         return -ENODEV;
25 + 
26 +     max_freq = clk_get_rate(cpu_clk);
27 + 
28 +     dev_pm_opp_add(cpu_dev, max_freq / 1, 0);
29 +     dev_pm_opp_add(cpu_dev, max_freq / 2, 0);
30 +     dev_pm_opp_add(cpu_dev, max_freq / 3, 0);
31 +     dev_pm_opp_add(cpu_dev, max_freq / 5, 0);
32 +     dev_pm_opp_add(cpu_dev, max_freq / 9, 0);
33 + 
34 +     res = platform_device_register_data(NULL, "cpufreq-dt", -1, NULL, 0);
35 + 
36 +     return PTR_ERR_OR_ZERO(res);
37 + }
38 + device_initcall(tango_cpufreq_init);
+2 -2
drivers/cpufreq/ti-cpufreq.c
···
245 245     if (ret)
246 246         goto fail_put_node;
247 247 
248 -       of_node_put(opp_data->opp_node);
249 - 
250 248     ret = PTR_ERR_OR_ZERO(dev_pm_opp_set_supported_hw(opp_data->cpu_dev,
251 249                               version, VERSION_COUNT));
252 250     if (ret) {
···
252 254             "Failed to set supported hardware\n");
253 255         goto fail_put_node;
254 256     }
257 + 
258 +    of_node_put(opp_data->opp_node);
255 259 
256 260 register_cpufreq_dt:
257 261     platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
+1 -2
drivers/cpufreq/unicore2-cpufreq.c
···
 58  58 
 59  59     policy->min = policy->cpuinfo.min_freq = 250000;
 60  60     policy->max = policy->cpuinfo.max_freq = 1000000;
 61 -        policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
 62  61     policy->clk = clk_get(NULL, "MAIN_CLK");
 63  62     return PTR_ERR_OR_ZERO(policy->clk);
 64  63 }
 65  64 
 66  65 static struct cpufreq_driver ucv2_driver = {
 67 -        .flags = CPUFREQ_STICKY,
 66 +        .flags = CPUFREQ_STICKY | CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
 68  67     .verify = ucv2_verify_speed,
 69  68     .target = ucv2_target,
 70  69     .get = cpufreq_generic_get,
+1
drivers/cpuidle/Makefile
···
 5  5 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
 6  6 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
 7  7 obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o
 8 +  obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o
 8  9 
 9 10 ##################################################################################
10 11 # ARM SoC drivers
+9 -9
drivers/cpuidle/cpuidle.c
···
 77  77                   struct cpuidle_device *dev,
 78  78                   unsigned int max_latency,
 79  79                   unsigned int forbidden_flags,
 80 -                    bool freeze)
 80 +                    bool s2idle)
 81  81 {
 82  82     unsigned int latency_req = 0;
 83  83     int i, ret = 0;
···
 89  89         if (s->disabled || su->disable || s->exit_latency <= latency_req
 90  90             || s->exit_latency > max_latency
 91  91             || (s->flags & forbidden_flags)
 92 -                || (freeze && !s->enter_freeze))
 92 +                || (s2idle && !s->enter_s2idle))
 93  93             continue;
 94  94 
 95  95         latency_req = s->exit_latency;
···
128 128 }
129 129 
130 130 #ifdef CONFIG_SUSPEND
131 -   static void enter_freeze_proper(struct cpuidle_driver *drv,
131 +   static void enter_s2idle_proper(struct cpuidle_driver *drv,
132 132                 struct cpuidle_device *dev, int index)
133 133 {
134 134     /*
···
143 143      * suspended is generally unsafe.
144 144      */
145 145     stop_critical_timings();
146 -       drv->states[index].enter_freeze(dev, drv, index);
146 +       drv->states[index].enter_s2idle(dev, drv, index);
147 147     WARN_ON(!irqs_disabled());
148 148     /*
149 149      * timekeeping_resume() that will be called by tick_unfreeze() for the
···
155 155 }
156 156 
157 157 /**
158 -    * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.
158 +    * cpuidle_enter_s2idle - Enter an idle state suitable for suspend-to-idle.
159 159  * @drv: cpuidle driver for the given CPU.
160 160  * @dev: cpuidle device for the given CPU.
161 161  *
162 -    * If there are states with the ->enter_freeze callback, find the deepest of
162 +    * If there are states with the ->enter_s2idle callback, find the deepest of
163 163  * them and enter it with frozen tick.
164 164  */
165 -   int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)
165 +   int cpuidle_enter_s2idle(struct cpuidle_driver *drv, struct cpuidle_device *dev)
166 166 {
167 167     int index;
168 168 
169 169     /*
170 -        * Find the deepest state with ->enter_freeze present, which guarantees
170 +        * Find the deepest state with ->enter_s2idle present, which guarantees
171 171      * that interrupts won't be enabled when it exits and allows the tick to
172 172      * be frozen safely.
173 173      */
174 174     index = find_deepest_state(drv, dev, UINT_MAX, 0, true);
175 175     if (index > 0)
176 -            enter_freeze_proper(drv, dev, index);
176 +            enter_s2idle_proper(drv, dev, index);
177 177 
178 178     return index;
179 179 }
-32
drivers/cpuidle/driver.c
···
 	}
 }
 
-#ifdef CONFIG_ARCH_HAS_CPU_RELAX
-static int __cpuidle poll_idle(struct cpuidle_device *dev,
-			       struct cpuidle_driver *drv, int index)
-{
-	local_irq_enable();
-	if (!current_set_polling_and_test()) {
-		while (!need_resched())
-			cpu_relax();
-	}
-	current_clr_polling();
-
-	return index;
-}
-
-static void poll_idle_init(struct cpuidle_driver *drv)
-{
-	struct cpuidle_state *state = &drv->states[0];
-
-	snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
-	snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
-	state->exit_latency = 0;
-	state->target_residency = 0;
-	state->power_usage = -1;
-	state->enter = poll_idle;
-	state->disabled = false;
-}
-#else
-static void poll_idle_init(struct cpuidle_driver *drv) {}
-#endif /* !CONFIG_ARCH_HAS_CPU_RELAX */
-
 /**
  * __cpuidle_register_driver: register the driver
  * @drv: a valid pointer to a struct cpuidle_driver
···
 	if (drv->bctimer)
 		on_each_cpu_mask(drv->cpumask, cpuidle_setup_broadcast_timer,
 				 (void *)1, 1);
-
-	poll_idle_init(drv);
 
 	return 0;
 }
+12 -12
drivers/cpuidle/dt_idle_states.c
···
 	/*
 	 * Since this is not a "coupled" state, it's safe to assume interrupts
 	 * won't be enabled when it exits allowing the tick to be frozen
-	 * safely. So enter() can be also enter_freeze() callback.
+	 * safely. So enter() can be also enter_s2idle() callback.
 	 */
-	idle_state->enter_freeze = match_id->data;
+	idle_state->enter_s2idle = match_id->data;
 
 	err = of_property_read_u32(state_node, "wakeup-latency-us",
 				   &idle_state->exit_latency);
···
 	err = of_property_read_u32(state_node, "entry-latency-us",
 				   &entry_latency);
 	if (err) {
-		pr_debug(" * %s missing entry-latency-us property\n",
-			 state_node->full_name);
+		pr_debug(" * %pOF missing entry-latency-us property\n",
+			 state_node);
 		return -EINVAL;
 	}
 
 	err = of_property_read_u32(state_node, "exit-latency-us",
 				   &exit_latency);
 	if (err) {
-		pr_debug(" * %s missing exit-latency-us property\n",
-			 state_node->full_name);
+		pr_debug(" * %pOF missing exit-latency-us property\n",
+			 state_node);
 		return -EINVAL;
 	}
 	/*
···
 	err = of_property_read_u32(state_node, "min-residency-us",
 				   &idle_state->target_residency);
 	if (err) {
-		pr_debug(" * %s missing min-residency-us property\n",
-			 state_node->full_name);
+		pr_debug(" * %pOF missing min-residency-us property\n",
+			 state_node);
 		return -EINVAL;
 	}
 
···
 	}
 
 	if (!idle_state_valid(state_node, i, cpumask)) {
-		pr_warn("%s idle state not valid, bailing out\n",
-			state_node->full_name);
+		pr_warn("%pOF idle state not valid, bailing out\n",
+			state_node);
 		err = -EINVAL;
 		break;
 	}
···
 	idle_state = &drv->states[state_idx++];
 	err = init_state_node(idle_state, matches, state_node);
 	if (err) {
-		pr_err("Parsing idle state node %s failed with err %d\n",
-		       state_node->full_name, err);
+		pr_err("Parsing idle state node %pOF failed with err %d\n",
+		       state_node, err);
 		err = -EINVAL;
 		break;
 	}
+8 -6
drivers/cpuidle/governors/ladder.c
···
 	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
 	struct ladder_device_state *last_state;
 	int last_residency, last_idx = ldev->last_state_idx;
+	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
 
 	/* Special case when user has set very strict latency requirement */
···
 	}
 
 	/* consider demotion */
-	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
+	if (last_idx > first_idx &&
 	    (drv->states[last_idx].disabled ||
 	     dev->states_usage[last_idx].disable ||
 	     drv->states[last_idx].exit_latency > latency_req)) {
 		int i;
 
-		for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) {
+		for (i = last_idx - 1; i > first_idx; i--) {
 			if (drv->states[i].exit_latency <= latency_req)
 				break;
 		}
···
 		return i;
 	}
 
-	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
+	if (last_idx > first_idx &&
 	    last_residency < last_state->threshold.demotion_time) {
 		last_state->stats.demotion_count++;
 		last_state->stats.promotion_count = 0;
···
 			       struct cpuidle_device *dev)
 {
 	int i;
+	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu);
 	struct ladder_device_state *lstate;
 	struct cpuidle_state *state;
 
-	ldev->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+	ldev->last_state_idx = first_idx;
 
-	for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) {
+	for (i = first_idx; i < drv->state_count; i++) {
 		state = &drv->states[i];
 		lstate = &ldev->states[i];
 
···
 
 		if (i < drv->state_count - 1)
 			lstate->threshold.promotion_time = state->exit_latency;
-		if (i > CPUIDLE_DRIVER_STATE_START)
+		if (i > first_idx)
 			lstate->threshold.demotion_time = state->exit_latency;
 	}
 
+5 -8
drivers/cpuidle/governors/menu.c
···
 	expected_interval = get_typical_interval(data);
 	expected_interval = min(expected_interval, data->next_timer_us);
 
-	if (CPUIDLE_DRIVER_STATE_START > 0) {
-		struct cpuidle_state *s = &drv->states[CPUIDLE_DRIVER_STATE_START];
+	first_idx = 0;
+	if (drv->states[0].flags & CPUIDLE_FLAG_POLLING) {
+		struct cpuidle_state *s = &drv->states[1];
 		unsigned int polling_threshold;
 
 		/*
···
 		polling_threshold = max_t(unsigned int, 20, s->target_residency);
 		if (data->next_timer_us > polling_threshold &&
 		    latency_req > s->exit_latency && !s->disabled &&
-		    !dev->states_usage[CPUIDLE_DRIVER_STATE_START].disable)
-			first_idx = CPUIDLE_DRIVER_STATE_START;
-		else
-			first_idx = CPUIDLE_DRIVER_STATE_START - 1;
-	} else {
-		first_idx = 0;
+		    !dev->states_usage[1].disable)
+			first_idx = 1;
 	}
 
 	/*
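Both governors now probe state 0 at runtime instead of relying on the compile-time CPUIDLE_DRIVER_STATE_START constant: if the driver installed the polling state, real idle-state selection starts at index 1, otherwise at 0. A minimal standalone sketch of that check (the helper name is illustrative, not a kernel symbol; the flag value is assumed to match BIT(0) as in this kernel version):

```c
#define CPUIDLE_FLAG_POLLING 0x01	/* assumed value, for illustration */

/*
 * With poll_state.c installed as state 0, governors skip the polling
 * state by starting at index 1; drivers without one start at index 0.
 */
int first_candidate_index(unsigned int state0_flags)
{
	return (state0_flags & CPUIDLE_FLAG_POLLING) ? 1 : 0;
}
```

This replaces an `#ifdef`-style distinction between x86-like and ARM-like drivers with a per-driver runtime property, which is what lets poll_state.c be shared code.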
+37
drivers/cpuidle/poll_state.c
···
+/*
+ * poll_state.c - Polling idle state
+ *
+ * This file is released under the GPLv2.
+ */
+
+#include <linux/cpuidle.h>
+#include <linux/sched.h>
+#include <linux/sched/idle.h>
+
+static int __cpuidle poll_idle(struct cpuidle_device *dev,
+			       struct cpuidle_driver *drv, int index)
+{
+	local_irq_enable();
+	if (!current_set_polling_and_test()) {
+		while (!need_resched())
+			cpu_relax();
+	}
+	current_clr_polling();
+
+	return index;
+}
+
+void cpuidle_poll_state_init(struct cpuidle_driver *drv)
+{
+	struct cpuidle_state *state = &drv->states[0];
+
+	snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
+	snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
+	state->exit_latency = 0;
+	state->target_residency = 0;
+	state->power_usage = -1;
+	state->enter = poll_idle;
+	state->disabled = false;
+	state->flags = CPUIDLE_FLAG_POLLING;
+}
+EXPORT_SYMBOL_GPL(cpuidle_poll_state_init);
+1
drivers/devfreq/Kconfig
···
 menuconfig PM_DEVFREQ
 	bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support"
 	select SRCU
+	select PM_OPP
 	help
 	  A device may have a list of frequencies and voltages available.
 	  devfreq, a generic DVFS framework can be registered for a device
+2 -2
drivers/devfreq/devfreq-event.c
···
 					sizeof(u32));
 	if (count < 0) {
 		dev_err(dev,
-			"failed to get the count of devfreq-event in %s node\n",
-			dev->of_node->full_name);
+			"failed to get the count of devfreq-event in %pOF node\n",
+			dev->of_node);
 		return count;
 	}
 
+4 -1
drivers/devfreq/devfreq.c
···
 	err = device_register(&devfreq->dev);
 	if (err) {
 		mutex_unlock(&devfreq->lock);
-		goto err_out;
+		goto err_dev;
 	}
 
 	devfreq->trans_table = devm_kzalloc(&devfreq->dev,
···
 	mutex_unlock(&devfreq_list_lock);
 
 	device_unregister(&devfreq->dev);
+err_dev:
+	if (devfreq)
+		kfree(devfreq);
 err_out:
 	return ERR_PTR(err);
 }
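The devfreq fix above reroutes the device_register() failure path through a new label so the half-initialized structure is freed rather than leaked. The goto-based unwind idiom behind it, reduced to a standalone sketch (function and failure flag are hypothetical stand-ins, not devfreq APIs):

```c
#include <stdlib.h>

/*
 * Returns 0 on success, negative on failure. Each failure jumps to the
 * label that undoes exactly the steps completed so far, in reverse
 * order -- jumping too far (err_out instead of err_obj) would leak obj.
 */
int setup_object(int fail_register)
{
	int err = 0;
	char *obj = malloc(64);		/* step 1: allocate */

	if (!obj) {
		err = -1;
		goto err_out;		/* nothing to undo yet */
	}
	if (fail_register) {		/* step 2: stands in for device_register() failing */
		err = -2;
		goto err_obj;		/* must free obj on the way out */
	}
	free(obj);			/* normal teardown for this sketch */
	return 0;

err_obj:
	free(obj);
err_out:
	return err;
}
```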
+4
drivers/devfreq/governor.h
···
 
 extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
 
+static inline int devfreq_update_stats(struct devfreq *df)
+{
+	return df->profile->get_dev_status(df->dev.parent, &df->last_status);
+}
 #endif /* _GOVERNOR_H */
+91 -90
drivers/idle/intel_idle.c
··· 97 97 static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; 98 98 static int intel_idle(struct cpuidle_device *dev, 99 99 struct cpuidle_driver *drv, int index); 100 - static void intel_idle_freeze(struct cpuidle_device *dev, 100 + static void intel_idle_s2idle(struct cpuidle_device *dev, 101 101 struct cpuidle_driver *drv, int index); 102 102 static struct cpuidle_state *cpuidle_state_table; 103 103 ··· 132 132 .exit_latency = 3, 133 133 .target_residency = 6, 134 134 .enter = &intel_idle, 135 - .enter_freeze = intel_idle_freeze, }, 135 + .enter_s2idle = intel_idle_s2idle, }, 136 136 { 137 137 .name = "C1E", 138 138 .desc = "MWAIT 0x01", ··· 140 140 .exit_latency = 10, 141 141 .target_residency = 20, 142 142 .enter = &intel_idle, 143 - .enter_freeze = intel_idle_freeze, }, 143 + .enter_s2idle = intel_idle_s2idle, }, 144 144 { 145 145 .name = "C3", 146 146 .desc = "MWAIT 0x10", ··· 148 148 .exit_latency = 20, 149 149 .target_residency = 80, 150 150 .enter = &intel_idle, 151 - .enter_freeze = intel_idle_freeze, }, 151 + .enter_s2idle = intel_idle_s2idle, }, 152 152 { 153 153 .name = "C6", 154 154 .desc = "MWAIT 0x20", ··· 156 156 .exit_latency = 200, 157 157 .target_residency = 800, 158 158 .enter = &intel_idle, 159 - .enter_freeze = intel_idle_freeze, }, 159 + .enter_s2idle = intel_idle_s2idle, }, 160 160 { 161 161 .enter = NULL } 162 162 }; ··· 169 169 .exit_latency = 2, 170 170 .target_residency = 2, 171 171 .enter = &intel_idle, 172 - .enter_freeze = intel_idle_freeze, }, 172 + .enter_s2idle = intel_idle_s2idle, }, 173 173 { 174 174 .name = "C1E", 175 175 .desc = "MWAIT 0x01", ··· 177 177 .exit_latency = 10, 178 178 .target_residency = 20, 179 179 .enter = &intel_idle, 180 - .enter_freeze = intel_idle_freeze, }, 180 + .enter_s2idle = intel_idle_s2idle, }, 181 181 { 182 182 .name = "C3", 183 183 .desc = "MWAIT 0x10", ··· 185 185 .exit_latency = 80, 186 186 .target_residency = 211, 187 187 .enter = &intel_idle, 188 - .enter_freeze = 
intel_idle_freeze, }, 188 + .enter_s2idle = intel_idle_s2idle, }, 189 189 { 190 190 .name = "C6", 191 191 .desc = "MWAIT 0x20", ··· 193 193 .exit_latency = 104, 194 194 .target_residency = 345, 195 195 .enter = &intel_idle, 196 - .enter_freeze = intel_idle_freeze, }, 196 + .enter_s2idle = intel_idle_s2idle, }, 197 197 { 198 198 .name = "C7", 199 199 .desc = "MWAIT 0x30", ··· 201 201 .exit_latency = 109, 202 202 .target_residency = 345, 203 203 .enter = &intel_idle, 204 - .enter_freeze = intel_idle_freeze, }, 204 + .enter_s2idle = intel_idle_s2idle, }, 205 205 { 206 206 .enter = NULL } 207 207 }; ··· 214 214 .exit_latency = 1, 215 215 .target_residency = 1, 216 216 .enter = &intel_idle, 217 - .enter_freeze = intel_idle_freeze, }, 217 + .enter_s2idle = intel_idle_s2idle, }, 218 218 { 219 219 .name = "C6N", 220 220 .desc = "MWAIT 0x58", ··· 222 222 .exit_latency = 300, 223 223 .target_residency = 275, 224 224 .enter = &intel_idle, 225 - .enter_freeze = intel_idle_freeze, }, 225 + .enter_s2idle = intel_idle_s2idle, }, 226 226 { 227 227 .name = "C6S", 228 228 .desc = "MWAIT 0x52", ··· 230 230 .exit_latency = 500, 231 231 .target_residency = 560, 232 232 .enter = &intel_idle, 233 - .enter_freeze = intel_idle_freeze, }, 233 + .enter_s2idle = intel_idle_s2idle, }, 234 234 { 235 235 .name = "C7", 236 236 .desc = "MWAIT 0x60", ··· 238 238 .exit_latency = 1200, 239 239 .target_residency = 4000, 240 240 .enter = &intel_idle, 241 - .enter_freeze = intel_idle_freeze, }, 241 + .enter_s2idle = intel_idle_s2idle, }, 242 242 { 243 243 .name = "C7S", 244 244 .desc = "MWAIT 0x64", ··· 246 246 .exit_latency = 10000, 247 247 .target_residency = 20000, 248 248 .enter = &intel_idle, 249 - .enter_freeze = intel_idle_freeze, }, 249 + .enter_s2idle = intel_idle_s2idle, }, 250 250 { 251 251 .enter = NULL } 252 252 }; ··· 259 259 .exit_latency = 1, 260 260 .target_residency = 1, 261 261 .enter = &intel_idle, 262 - .enter_freeze = intel_idle_freeze, }, 262 + .enter_s2idle = intel_idle_s2idle, 
}, 263 263 { 264 264 .name = "C6N", 265 265 .desc = "MWAIT 0x58", ··· 267 267 .exit_latency = 80, 268 268 .target_residency = 275, 269 269 .enter = &intel_idle, 270 - .enter_freeze = intel_idle_freeze, }, 270 + .enter_s2idle = intel_idle_s2idle, }, 271 271 { 272 272 .name = "C6S", 273 273 .desc = "MWAIT 0x52", ··· 275 275 .exit_latency = 200, 276 276 .target_residency = 560, 277 277 .enter = &intel_idle, 278 - .enter_freeze = intel_idle_freeze, }, 278 + .enter_s2idle = intel_idle_s2idle, }, 279 279 { 280 280 .name = "C7", 281 281 .desc = "MWAIT 0x60", ··· 283 283 .exit_latency = 1200, 284 284 .target_residency = 4000, 285 285 .enter = &intel_idle, 286 - .enter_freeze = intel_idle_freeze, }, 286 + .enter_s2idle = intel_idle_s2idle, }, 287 287 { 288 288 .name = "C7S", 289 289 .desc = "MWAIT 0x64", ··· 291 291 .exit_latency = 10000, 292 292 .target_residency = 20000, 293 293 .enter = &intel_idle, 294 - .enter_freeze = intel_idle_freeze, }, 294 + .enter_s2idle = intel_idle_s2idle, }, 295 295 { 296 296 .enter = NULL } 297 297 }; ··· 304 304 .exit_latency = 1, 305 305 .target_residency = 1, 306 306 .enter = &intel_idle, 307 - .enter_freeze = intel_idle_freeze, }, 307 + .enter_s2idle = intel_idle_s2idle, }, 308 308 { 309 309 .name = "C1E", 310 310 .desc = "MWAIT 0x01", ··· 312 312 .exit_latency = 10, 313 313 .target_residency = 20, 314 314 .enter = &intel_idle, 315 - .enter_freeze = intel_idle_freeze, }, 315 + .enter_s2idle = intel_idle_s2idle, }, 316 316 { 317 317 .name = "C3", 318 318 .desc = "MWAIT 0x10", ··· 320 320 .exit_latency = 59, 321 321 .target_residency = 156, 322 322 .enter = &intel_idle, 323 - .enter_freeze = intel_idle_freeze, }, 323 + .enter_s2idle = intel_idle_s2idle, }, 324 324 { 325 325 .name = "C6", 326 326 .desc = "MWAIT 0x20", ··· 328 328 .exit_latency = 80, 329 329 .target_residency = 300, 330 330 .enter = &intel_idle, 331 - .enter_freeze = intel_idle_freeze, }, 331 + .enter_s2idle = intel_idle_s2idle, }, 332 332 { 333 333 .name = "C7", 334 334 
.desc = "MWAIT 0x30", ··· 336 336 .exit_latency = 87, 337 337 .target_residency = 300, 338 338 .enter = &intel_idle, 339 - .enter_freeze = intel_idle_freeze, }, 339 + .enter_s2idle = intel_idle_s2idle, }, 340 340 { 341 341 .enter = NULL } 342 342 }; ··· 349 349 .exit_latency = 1, 350 350 .target_residency = 1, 351 351 .enter = &intel_idle, 352 - .enter_freeze = intel_idle_freeze, }, 352 + .enter_s2idle = intel_idle_s2idle, }, 353 353 { 354 354 .name = "C1E", 355 355 .desc = "MWAIT 0x01", ··· 357 357 .exit_latency = 10, 358 358 .target_residency = 80, 359 359 .enter = &intel_idle, 360 - .enter_freeze = intel_idle_freeze, }, 360 + .enter_s2idle = intel_idle_s2idle, }, 361 361 { 362 362 .name = "C3", 363 363 .desc = "MWAIT 0x10", ··· 365 365 .exit_latency = 59, 366 366 .target_residency = 156, 367 367 .enter = &intel_idle, 368 - .enter_freeze = intel_idle_freeze, }, 368 + .enter_s2idle = intel_idle_s2idle, }, 369 369 { 370 370 .name = "C6", 371 371 .desc = "MWAIT 0x20", ··· 373 373 .exit_latency = 82, 374 374 .target_residency = 300, 375 375 .enter = &intel_idle, 376 - .enter_freeze = intel_idle_freeze, }, 376 + .enter_s2idle = intel_idle_s2idle, }, 377 377 { 378 378 .enter = NULL } 379 379 }; ··· 386 386 .exit_latency = 1, 387 387 .target_residency = 1, 388 388 .enter = &intel_idle, 389 - .enter_freeze = intel_idle_freeze, }, 389 + .enter_s2idle = intel_idle_s2idle, }, 390 390 { 391 391 .name = "C1E", 392 392 .desc = "MWAIT 0x01", ··· 394 394 .exit_latency = 10, 395 395 .target_residency = 250, 396 396 .enter = &intel_idle, 397 - .enter_freeze = intel_idle_freeze, }, 397 + .enter_s2idle = intel_idle_s2idle, }, 398 398 { 399 399 .name = "C3", 400 400 .desc = "MWAIT 0x10", ··· 402 402 .exit_latency = 59, 403 403 .target_residency = 300, 404 404 .enter = &intel_idle, 405 - .enter_freeze = intel_idle_freeze, }, 405 + .enter_s2idle = intel_idle_s2idle, }, 406 406 { 407 407 .name = "C6", 408 408 .desc = "MWAIT 0x20", ··· 410 410 .exit_latency = 84, 411 411 
.target_residency = 400, 412 412 .enter = &intel_idle, 413 - .enter_freeze = intel_idle_freeze, }, 413 + .enter_s2idle = intel_idle_s2idle, }, 414 414 { 415 415 .enter = NULL } 416 416 }; ··· 423 423 .exit_latency = 1, 424 424 .target_residency = 1, 425 425 .enter = &intel_idle, 426 - .enter_freeze = intel_idle_freeze, }, 426 + .enter_s2idle = intel_idle_s2idle, }, 427 427 { 428 428 .name = "C1E", 429 429 .desc = "MWAIT 0x01", ··· 431 431 .exit_latency = 10, 432 432 .target_residency = 500, 433 433 .enter = &intel_idle, 434 - .enter_freeze = intel_idle_freeze, }, 434 + .enter_s2idle = intel_idle_s2idle, }, 435 435 { 436 436 .name = "C3", 437 437 .desc = "MWAIT 0x10", ··· 439 439 .exit_latency = 59, 440 440 .target_residency = 600, 441 441 .enter = &intel_idle, 442 - .enter_freeze = intel_idle_freeze, }, 442 + .enter_s2idle = intel_idle_s2idle, }, 443 443 { 444 444 .name = "C6", 445 445 .desc = "MWAIT 0x20", ··· 447 447 .exit_latency = 88, 448 448 .target_residency = 700, 449 449 .enter = &intel_idle, 450 - .enter_freeze = intel_idle_freeze, }, 450 + .enter_s2idle = intel_idle_s2idle, }, 451 451 { 452 452 .enter = NULL } 453 453 }; ··· 460 460 .exit_latency = 2, 461 461 .target_residency = 2, 462 462 .enter = &intel_idle, 463 - .enter_freeze = intel_idle_freeze, }, 463 + .enter_s2idle = intel_idle_s2idle, }, 464 464 { 465 465 .name = "C1E", 466 466 .desc = "MWAIT 0x01", ··· 468 468 .exit_latency = 10, 469 469 .target_residency = 20, 470 470 .enter = &intel_idle, 471 - .enter_freeze = intel_idle_freeze, }, 471 + .enter_s2idle = intel_idle_s2idle, }, 472 472 { 473 473 .name = "C3", 474 474 .desc = "MWAIT 0x10", ··· 476 476 .exit_latency = 33, 477 477 .target_residency = 100, 478 478 .enter = &intel_idle, 479 - .enter_freeze = intel_idle_freeze, }, 479 + .enter_s2idle = intel_idle_s2idle, }, 480 480 { 481 481 .name = "C6", 482 482 .desc = "MWAIT 0x20", ··· 484 484 .exit_latency = 133, 485 485 .target_residency = 400, 486 486 .enter = &intel_idle, 487 - .enter_freeze = 
intel_idle_freeze, }, 487 + .enter_s2idle = intel_idle_s2idle, }, 488 488 { 489 489 .name = "C7s", 490 490 .desc = "MWAIT 0x32", ··· 492 492 .exit_latency = 166, 493 493 .target_residency = 500, 494 494 .enter = &intel_idle, 495 - .enter_freeze = intel_idle_freeze, }, 495 + .enter_s2idle = intel_idle_s2idle, }, 496 496 { 497 497 .name = "C8", 498 498 .desc = "MWAIT 0x40", ··· 500 500 .exit_latency = 300, 501 501 .target_residency = 900, 502 502 .enter = &intel_idle, 503 - .enter_freeze = intel_idle_freeze, }, 503 + .enter_s2idle = intel_idle_s2idle, }, 504 504 { 505 505 .name = "C9", 506 506 .desc = "MWAIT 0x50", ··· 508 508 .exit_latency = 600, 509 509 .target_residency = 1800, 510 510 .enter = &intel_idle, 511 - .enter_freeze = intel_idle_freeze, }, 511 + .enter_s2idle = intel_idle_s2idle, }, 512 512 { 513 513 .name = "C10", 514 514 .desc = "MWAIT 0x60", ··· 516 516 .exit_latency = 2600, 517 517 .target_residency = 7700, 518 518 .enter = &intel_idle, 519 - .enter_freeze = intel_idle_freeze, }, 519 + .enter_s2idle = intel_idle_s2idle, }, 520 520 { 521 521 .enter = NULL } 522 522 }; ··· 528 528 .exit_latency = 2, 529 529 .target_residency = 2, 530 530 .enter = &intel_idle, 531 - .enter_freeze = intel_idle_freeze, }, 531 + .enter_s2idle = intel_idle_s2idle, }, 532 532 { 533 533 .name = "C1E", 534 534 .desc = "MWAIT 0x01", ··· 536 536 .exit_latency = 10, 537 537 .target_residency = 20, 538 538 .enter = &intel_idle, 539 - .enter_freeze = intel_idle_freeze, }, 539 + .enter_s2idle = intel_idle_s2idle, }, 540 540 { 541 541 .name = "C3", 542 542 .desc = "MWAIT 0x10", ··· 544 544 .exit_latency = 40, 545 545 .target_residency = 100, 546 546 .enter = &intel_idle, 547 - .enter_freeze = intel_idle_freeze, }, 547 + .enter_s2idle = intel_idle_s2idle, }, 548 548 { 549 549 .name = "C6", 550 550 .desc = "MWAIT 0x20", ··· 552 552 .exit_latency = 133, 553 553 .target_residency = 400, 554 554 .enter = &intel_idle, 555 - .enter_freeze = intel_idle_freeze, }, 555 + .enter_s2idle = 
intel_idle_s2idle, }, 556 556 { 557 557 .name = "C7s", 558 558 .desc = "MWAIT 0x32", ··· 560 560 .exit_latency = 166, 561 561 .target_residency = 500, 562 562 .enter = &intel_idle, 563 - .enter_freeze = intel_idle_freeze, }, 563 + .enter_s2idle = intel_idle_s2idle, }, 564 564 { 565 565 .name = "C8", 566 566 .desc = "MWAIT 0x40", ··· 568 568 .exit_latency = 300, 569 569 .target_residency = 900, 570 570 .enter = &intel_idle, 571 - .enter_freeze = intel_idle_freeze, }, 571 + .enter_s2idle = intel_idle_s2idle, }, 572 572 { 573 573 .name = "C9", 574 574 .desc = "MWAIT 0x50", ··· 576 576 .exit_latency = 600, 577 577 .target_residency = 1800, 578 578 .enter = &intel_idle, 579 - .enter_freeze = intel_idle_freeze, }, 579 + .enter_s2idle = intel_idle_s2idle, }, 580 580 { 581 581 .name = "C10", 582 582 .desc = "MWAIT 0x60", ··· 584 584 .exit_latency = 2600, 585 585 .target_residency = 7700, 586 586 .enter = &intel_idle, 587 - .enter_freeze = intel_idle_freeze, }, 587 + .enter_s2idle = intel_idle_s2idle, }, 588 588 { 589 589 .enter = NULL } 590 590 }; ··· 597 597 .exit_latency = 2, 598 598 .target_residency = 2, 599 599 .enter = &intel_idle, 600 - .enter_freeze = intel_idle_freeze, }, 600 + .enter_s2idle = intel_idle_s2idle, }, 601 601 { 602 602 .name = "C1E", 603 603 .desc = "MWAIT 0x01", ··· 605 605 .exit_latency = 10, 606 606 .target_residency = 20, 607 607 .enter = &intel_idle, 608 - .enter_freeze = intel_idle_freeze, }, 608 + .enter_s2idle = intel_idle_s2idle, }, 609 609 { 610 610 .name = "C3", 611 611 .desc = "MWAIT 0x10", ··· 613 613 .exit_latency = 70, 614 614 .target_residency = 100, 615 615 .enter = &intel_idle, 616 - .enter_freeze = intel_idle_freeze, }, 616 + .enter_s2idle = intel_idle_s2idle, }, 617 617 { 618 618 .name = "C6", 619 619 .desc = "MWAIT 0x20", ··· 621 621 .exit_latency = 85, 622 622 .target_residency = 200, 623 623 .enter = &intel_idle, 624 - .enter_freeze = intel_idle_freeze, }, 624 + .enter_s2idle = intel_idle_s2idle, }, 625 625 { 626 626 .name = 
"C7s", 627 627 .desc = "MWAIT 0x33", ··· 629 629 .exit_latency = 124, 630 630 .target_residency = 800, 631 631 .enter = &intel_idle, 632 - .enter_freeze = intel_idle_freeze, }, 632 + .enter_s2idle = intel_idle_s2idle, }, 633 633 { 634 634 .name = "C8", 635 635 .desc = "MWAIT 0x40", ··· 637 637 .exit_latency = 200, 638 638 .target_residency = 800, 639 639 .enter = &intel_idle, 640 - .enter_freeze = intel_idle_freeze, }, 640 + .enter_s2idle = intel_idle_s2idle, }, 641 641 { 642 642 .name = "C9", 643 643 .desc = "MWAIT 0x50", ··· 645 645 .exit_latency = 480, 646 646 .target_residency = 5000, 647 647 .enter = &intel_idle, 648 - .enter_freeze = intel_idle_freeze, }, 648 + .enter_s2idle = intel_idle_s2idle, }, 649 649 { 650 650 .name = "C10", 651 651 .desc = "MWAIT 0x60", ··· 653 653 .exit_latency = 890, 654 654 .target_residency = 5000, 655 655 .enter = &intel_idle, 656 - .enter_freeze = intel_idle_freeze, }, 656 + .enter_s2idle = intel_idle_s2idle, }, 657 657 { 658 658 .enter = NULL } 659 659 }; ··· 666 666 .exit_latency = 2, 667 667 .target_residency = 2, 668 668 .enter = &intel_idle, 669 - .enter_freeze = intel_idle_freeze, }, 669 + .enter_s2idle = intel_idle_s2idle, }, 670 670 { 671 671 .name = "C1E", 672 672 .desc = "MWAIT 0x01", ··· 674 674 .exit_latency = 10, 675 675 .target_residency = 20, 676 676 .enter = &intel_idle, 677 - .enter_freeze = intel_idle_freeze, }, 677 + .enter_s2idle = intel_idle_s2idle, }, 678 678 { 679 679 .name = "C6", 680 680 .desc = "MWAIT 0x20", ··· 682 682 .exit_latency = 133, 683 683 .target_residency = 600, 684 684 .enter = &intel_idle, 685 - .enter_freeze = intel_idle_freeze, }, 685 + .enter_s2idle = intel_idle_s2idle, }, 686 686 { 687 687 .enter = NULL } 688 688 }; ··· 695 695 .exit_latency = 10, 696 696 .target_residency = 20, 697 697 .enter = &intel_idle, 698 - .enter_freeze = intel_idle_freeze, }, 698 + .enter_s2idle = intel_idle_s2idle, }, 699 699 { 700 700 .name = "C2", 701 701 .desc = "MWAIT 0x10", ··· 703 703 .exit_latency = 20, 
704 704 .target_residency = 80, 705 705 .enter = &intel_idle, 706 - .enter_freeze = intel_idle_freeze, }, 706 + .enter_s2idle = intel_idle_s2idle, }, 707 707 { 708 708 .name = "C4", 709 709 .desc = "MWAIT 0x30", ··· 711 711 .exit_latency = 100, 712 712 .target_residency = 400, 713 713 .enter = &intel_idle, 714 - .enter_freeze = intel_idle_freeze, }, 714 + .enter_s2idle = intel_idle_s2idle, }, 715 715 { 716 716 .name = "C6", 717 717 .desc = "MWAIT 0x52", ··· 719 719 .exit_latency = 140, 720 720 .target_residency = 560, 721 721 .enter = &intel_idle, 722 - .enter_freeze = intel_idle_freeze, }, 722 + .enter_s2idle = intel_idle_s2idle, }, 723 723 { 724 724 .enter = NULL } 725 725 }; ··· 731 731 .exit_latency = 1, 732 732 .target_residency = 4, 733 733 .enter = &intel_idle, 734 - .enter_freeze = intel_idle_freeze, }, 734 + .enter_s2idle = intel_idle_s2idle, }, 735 735 { 736 736 .name = "C4", 737 737 .desc = "MWAIT 0x30", ··· 739 739 .exit_latency = 100, 740 740 .target_residency = 400, 741 741 .enter = &intel_idle, 742 - .enter_freeze = intel_idle_freeze, }, 742 + .enter_s2idle = intel_idle_s2idle, }, 743 743 { 744 744 .name = "C6", 745 745 .desc = "MWAIT 0x52", ··· 747 747 .exit_latency = 140, 748 748 .target_residency = 560, 749 749 .enter = &intel_idle, 750 - .enter_freeze = intel_idle_freeze, }, 750 + .enter_s2idle = intel_idle_s2idle, }, 751 751 { 752 752 .name = "C7", 753 753 .desc = "MWAIT 0x60", ··· 755 755 .exit_latency = 1200, 756 756 .target_residency = 4000, 757 757 .enter = &intel_idle, 758 - .enter_freeze = intel_idle_freeze, }, 758 + .enter_s2idle = intel_idle_s2idle, }, 759 759 { 760 760 .name = "C9", 761 761 .desc = "MWAIT 0x64", ··· 763 763 .exit_latency = 10000, 764 764 .target_residency = 20000, 765 765 .enter = &intel_idle, 766 - .enter_freeze = intel_idle_freeze, }, 766 + .enter_s2idle = intel_idle_s2idle, }, 767 767 { 768 768 .enter = NULL } 769 769 }; ··· 775 775 .exit_latency = 2, 776 776 .target_residency = 2, 777 777 .enter = &intel_idle, 778 - 
.enter_freeze = intel_idle_freeze, }, 778 + .enter_s2idle = intel_idle_s2idle, }, 779 779 { 780 780 .name = "C6", 781 781 .desc = "MWAIT 0x51", ··· 783 783 .exit_latency = 15, 784 784 .target_residency = 45, 785 785 .enter = &intel_idle, 786 - .enter_freeze = intel_idle_freeze, }, 786 + .enter_s2idle = intel_idle_s2idle, }, 787 787 { 788 788 .enter = NULL } 789 789 }; ··· 795 795 .exit_latency = 1, 796 796 .target_residency = 2, 797 797 .enter = &intel_idle, 798 - .enter_freeze = intel_idle_freeze }, 798 + .enter_s2idle = intel_idle_s2idle }, 799 799 { 800 800 .name = "C6", 801 801 .desc = "MWAIT 0x10", ··· 803 803 .exit_latency = 120, 804 804 .target_residency = 500, 805 805 .enter = &intel_idle, 806 - .enter_freeze = intel_idle_freeze }, 806 + .enter_s2idle = intel_idle_s2idle }, 807 807 { 808 808 .enter = NULL } 809 809 }; ··· 816 816 .exit_latency = 2, 817 817 .target_residency = 2, 818 818 .enter = &intel_idle, 819 - .enter_freeze = intel_idle_freeze, }, 819 + .enter_s2idle = intel_idle_s2idle, }, 820 820 { 821 821 .name = "C1E", 822 822 .desc = "MWAIT 0x01", ··· 824 824 .exit_latency = 10, 825 825 .target_residency = 20, 826 826 .enter = &intel_idle, 827 - .enter_freeze = intel_idle_freeze, }, 827 + .enter_s2idle = intel_idle_s2idle, }, 828 828 { 829 829 .name = "C6", 830 830 .desc = "MWAIT 0x20", ··· 832 832 .exit_latency = 133, 833 833 .target_residency = 133, 834 834 .enter = &intel_idle, 835 - .enter_freeze = intel_idle_freeze, }, 835 + .enter_s2idle = intel_idle_s2idle, }, 836 836 { 837 837 .name = "C7s", 838 838 .desc = "MWAIT 0x31", ··· 840 840 .exit_latency = 155, 841 841 .target_residency = 155, 842 842 .enter = &intel_idle, 843 - .enter_freeze = intel_idle_freeze, }, 843 + .enter_s2idle = intel_idle_s2idle, }, 844 844 { 845 845 .name = "C8", 846 846 .desc = "MWAIT 0x40", ··· 848 848 .exit_latency = 1000, 849 849 .target_residency = 1000, 850 850 .enter = &intel_idle, 851 - .enter_freeze = intel_idle_freeze, }, 851 + .enter_s2idle = 
intel_idle_s2idle, }, 852 852 { 853 853 .name = "C9", 854 854 .desc = "MWAIT 0x50", ··· 856 856 .exit_latency = 2000, 857 857 .target_residency = 2000, 858 858 .enter = &intel_idle, 859 - .enter_freeze = intel_idle_freeze, }, 859 + .enter_s2idle = intel_idle_s2idle, }, 860 860 { 861 861 .name = "C10", 862 862 .desc = "MWAIT 0x60", ··· 864 864 .exit_latency = 10000, 865 865 .target_residency = 10000, 866 866 .enter = &intel_idle, 867 - .enter_freeze = intel_idle_freeze, }, 867 + .enter_s2idle = intel_idle_s2idle, }, 868 868 { 869 869 .enter = NULL } 870 870 }; ··· 877 877 .exit_latency = 2, 878 878 .target_residency = 2, 879 879 .enter = &intel_idle, 880 - .enter_freeze = intel_idle_freeze, }, 880 + .enter_s2idle = intel_idle_s2idle, }, 881 881 { 882 882 .name = "C1E", 883 883 .desc = "MWAIT 0x01", ··· 885 885 .exit_latency = 10, 886 886 .target_residency = 20, 887 887 .enter = &intel_idle, 888 - .enter_freeze = intel_idle_freeze, }, 888 + .enter_s2idle = intel_idle_s2idle, }, 889 889 { 890 890 .name = "C6", 891 891 .desc = "MWAIT 0x20", ··· 893 893 .exit_latency = 50, 894 894 .target_residency = 500, 895 895 .enter = &intel_idle, 896 - .enter_freeze = intel_idle_freeze, }, 896 + .enter_s2idle = intel_idle_s2idle, }, 897 897 { 898 898 .enter = NULL } 899 899 }; ··· 935 935 } 936 936 937 937 /** 938 - * intel_idle_freeze - simplified "enter" callback routine for suspend-to-idle 938 + * intel_idle_s2idle - simplified "enter" callback routine for suspend-to-idle 939 939 * @dev: cpuidle_device 940 940 * @drv: cpuidle driver 941 941 * @index: state index 942 942 */ 943 - static void intel_idle_freeze(struct cpuidle_device *dev, 943 + static void intel_idle_s2idle(struct cpuidle_device *dev, 944 944 struct cpuidle_driver *drv, int index) 945 945 { 946 946 unsigned long ecx = 1; /* break on interrupt flag */ ··· 1330 1330 1331 1331 intel_idle_state_table_update(); 1332 1332 1333 + cpuidle_poll_state_init(drv); 1333 1334 drv->state_count = 1; 1334 1335 1335 1336 for (cstate 
= 0; cstate < CPUIDLE_STATE_MAX; ++cstate) { 1336 1337 int num_substates, mwait_hint, mwait_cstate; 1337 1338 1338 1339 if ((cpuidle_state_table[cstate].enter == NULL) && 1339 - (cpuidle_state_table[cstate].enter_freeze == NULL)) 1340 + (cpuidle_state_table[cstate].enter_s2idle == NULL)) 1340 1341 break; 1341 1342 1342 1343 if (cstate + 1 > max_cstate) {
+21 -41
drivers/mfd/db8500-prcmu.c
···
 #include <linux/mfd/abx500/ab8500.h>
 #include <linux/regulator/db8500-prcmu.h>
 #include <linux/regulator/machine.h>
-#include <linux/cpufreq.h>
 #include <linux/platform_data/ux500_wdt.h>
 #include <linux/platform_data/db8500_thermal.h>
 #include "dbx500-prcmu-regs.h"
···
 	return rounded_rate;
 }
 
-/* CPU FREQ table, may be changed due to if MAX_OPP is supported. */
-static struct cpufreq_frequency_table db8500_cpufreq_table[] = {
-	{ .frequency = 200000, .driver_data = ARM_EXTCLK,},
-	{ .frequency = 400000, .driver_data = ARM_50_OPP,},
-	{ .frequency = 800000, .driver_data = ARM_100_OPP,},
-	{ .frequency = CPUFREQ_TABLE_END,}, /* To be used for MAX_OPP. */
-	{ .frequency = CPUFREQ_TABLE_END,},
+static const unsigned long armss_freqs[] = {
+	200000000,
+	400000000,
+	800000000,
+	998400000
 };
 
 static long round_armss_rate(unsigned long rate)
 {
-	struct cpufreq_frequency_table *pos;
-	long freq = 0;
-
-	/* cpufreq table frequencies is in KHz. */
-	rate = rate / 1000;
+	unsigned long freq = 0;
+	int i;
 
 	/* Find the corresponding arm opp from the cpufreq table. */
-	cpufreq_for_each_entry(pos, db8500_cpufreq_table) {
-		freq = pos->frequency;
-		if (freq == rate)
+	for (i = 0; i < ARRAY_SIZE(armss_freqs); i++) {
+		freq = armss_freqs[i];
+		if (rate <= freq)
 			break;
 	}
 
 	/* Return the last valid value, even if a match was not found. */
-	return freq * 1000;
+	return freq;
 }
 
 #define MIN_PLL_VCO_RATE 600000000ULL
···
 
 static int set_armss_rate(unsigned long rate)
 {
-	struct cpufreq_frequency_table *pos;
-
-	/* cpufreq table frequencies is in KHz. */
-	rate = rate / 1000;
+	unsigned long freq;
+	u8 opps[] = { ARM_EXTCLK, ARM_50_OPP, ARM_100_OPP, ARM_MAX_OPP };
+	int i;
 
 	/* Find the corresponding arm opp from the cpufreq table. */
-	cpufreq_for_each_entry(pos, db8500_cpufreq_table)
-		if (pos->frequency == rate)
+	for (i = 0; i < ARRAY_SIZE(armss_freqs); i++) {
+		freq = armss_freqs[i];
+		if (rate == freq)
 			break;
+	}
 
-	if (pos->frequency != rate)
+	if (rate != freq)
 		return -EINVAL;
 
 	/* Set the new arm opp. */
-	return db8500_prcmu_set_arm_opp(pos->driver_data);
+	pr_debug("SET ARM OPP 0x%02x\n", opps[i]);
+	return db8500_prcmu_set_arm_opp(opps[i]);
 }
 
 static int set_plldsi_rate(unsigned long rate)
···
 		.pdata_size = sizeof(db8500_regulators),
 	},
 	{
-		.name = "cpufreq-ux500",
-		.of_compatible = "stericsson,cpufreq-ux500",
-		.platform_data = &db8500_cpufreq_table,
-		.pdata_size = sizeof(db8500_cpufreq_table),
-	},
-	{
 		.name = "cpuidle-dbx500",
 		.of_compatible = "stericsson,cpuidle-dbx500",
 	},
···
 		.pdata_size = sizeof(db8500_thsens_data),
 	},
 };
-
-static void db8500_prcmu_update_cpufreq(void)
-{
-	if (prcmu_has_arm_maxopp()) {
-		db8500_cpufreq_table[3].frequency = 1000000;
-		db8500_cpufreq_table[3].driver_data = ARM_MAX_OPP;
-	}
-}
 
 static int db8500_prcmu_register_ab8500(struct device *parent)
 {
···
 	db8500_irq_init(np);
 
 	prcmu_config_esram0_deep_sleep(ESRAM0_DEEP_SLEEP_STATE_RET);
-
-	db8500_prcmu_update_cpufreq();
 
 	err = mfd_add_devices(&pdev->dev, 0, common_prcmu_devs,
 			      ARRAY_SIZE(common_prcmu_devs), NULL, 0, db8500_irq_domain);
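The new round_armss_rate() changes the lookup from an exact-match against a kHz cpufreq table to rounding a Hz request up to the nearest supported frequency, falling back to the highest entry when the request exceeds the table. A standalone sketch of that rule (frequency values copied from the patch; this is plain userspace C, not the driver itself):

```c
#include <assert.h>
#include <stddef.h>

/* Supported ARMSS rates in Hz, lowest first (values from the patch). */
static const unsigned long armss_freqs[] = {
	200000000, 400000000, 800000000, 998400000
};

/* Round the requested rate up to the first supported frequency; like
 * the driver, return the last table entry even when nothing matches. */
static unsigned long round_armss_rate(unsigned long rate)
{
	unsigned long freq = 0;
	size_t i;

	for (i = 0; i < sizeof(armss_freqs) / sizeof(armss_freqs[0]); i++) {
		freq = armss_freqs[i];
		if (rate <= freq)
			break;
	}
	return freq;
}
```

This is why set_armss_rate() can then insist on an exact match: the clock framework has already rounded the request to a table entry.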
+14 -3
drivers/platform/x86/intel-hid.c
···
 	acpi_status status;
 
 	if (priv->wakeup_mode) {
+		/*
+		 * Needed for wakeup from suspend-to-idle to work on some
+		 * platforms that don't expose the 5-button array, but still
+		 * send notifies with the power button event code to this
+		 * device object on power button actions while suspended.
+		 */
+		if (event == 0xce)
+			goto wakeup;
+
 		/* Wake up on 5-button array events only. */
 		if (event == 0xc0 || !priv->array)
 			return;
 
-		if (sparse_keymap_entry_from_scancode(priv->array, event))
-			pm_wakeup_hard_event(&device->dev);
-		else
+		if (!sparse_keymap_entry_from_scancode(priv->array, event)) {
 			dev_info(&device->dev, "unknown event 0x%x\n", event);
+			return;
+		}
 
+wakeup:
+		pm_wakeup_hard_event(&device->dev);
 		return;
 	}
 
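The patched handler now implements a small decision table: in wakeup mode, the power-button notify (0xce) always wakes the system even without the 5-button array; array events are checked against the keymap; everything else is ignored or logged. Distilled into a pure function (userspace C, names invented for the sketch):

```c
#include <assert.h>
#include <stdbool.h>

enum hid_action { HID_IGNORE, HID_WAKEUP, HID_UNKNOWN };

/* Decision logic distilled from the patched notify handler above. */
static enum hid_action wakeup_action(unsigned int event, bool have_array,
				     bool known_scancode)
{
	if (event == 0xce)		/* power button notify: always wake */
		return HID_WAKEUP;
	if (event == 0xc0 || !have_array)
		return HID_IGNORE;	/* not a 5-button array event */
	if (!known_scancode)
		return HID_UNKNOWN;	/* logged, no wakeup */
	return HID_WAKEUP;
}
```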
+38
drivers/power/avs/rockchip-io-domain.c
···
 	.init = rk3399_pmu_iodomain_init,
 };
 
+static const struct rockchip_iodomain_soc_data soc_data_rv1108 = {
+	.grf_offset = 0x404,
+	.supply_names = {
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		"vccio1",
+		"vccio2",
+		"vccio3",
+		"vccio5",
+		"vccio6",
+	},
+
+};
+
+static const struct rockchip_iodomain_soc_data soc_data_rv1108_pmu = {
+	.grf_offset = 0x104,
+	.supply_names = {
+		"pmu",
+	},
+};
+
 static const struct of_device_id rockchip_iodomain_match[] = {
 	{
 		.compatible = "rockchip,rk3188-io-voltage-domain",
···
 	{
 		.compatible = "rockchip,rk3399-pmu-io-voltage-domain",
 		.data = (void *)&soc_data_rk3399_pmu
+	},
+	{
+		.compatible = "rockchip,rv1108-io-voltage-domain",
+		.data = (void *)&soc_data_rv1108
+	},
+	{
+		.compatible = "rockchip,rv1108-pmu-io-voltage-domain",
+		.data = (void *)&soc_data_rv1108_pmu
 	},
 	{ /* sentinel */ },
 };
+1 -1
drivers/regulator/of_regulator.c
···
 		suspend_state = &constraints->state_disk;
 		break;
 	case PM_SUSPEND_ON:
-	case PM_SUSPEND_FREEZE:
+	case PM_SUSPEND_TO_IDLE:
 	case PM_SUSPEND_STANDBY:
 	default:
 		continue;
+29 -9
include/linux/cpufreq.h
···
 	 */
 	unsigned int transition_delay_us;
 
+	/*
+	 * Remote DVFS flag (Not added to the driver structure as we don't want
+	 * to access another structure from scheduler hotpath).
+	 *
+	 * Should be set if CPUs can do DVFS on behalf of other CPUs from
+	 * different cpufreq policies.
+	 */
+	bool dvfs_possible_from_any_cpu;
+
 	/* Cached frequency lookup from cpufreq_driver_resolve_freq. */
 	unsigned int cached_target_freq;
 	int cached_resolved_idx;
···
  */
 #define CPUFREQ_NEED_INITIAL_FREQ_CHECK	(1 << 5)
 
+/*
+ * Set by drivers to disallow use of governors with "dynamic_switching" flag
+ * set.
+ */
+#define CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING	(1 << 6)
+
 int cpufreq_register_driver(struct cpufreq_driver *driver_data);
 int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
···
  * polling frequency is 1000 times the transition latency of the processor. The
  * ondemand governor will work on any processor with transition latency <= 10ms,
  * using appropriate sampling rate.
- *
- * For CPUs with transition latency > 10ms (mostly drivers with CPUFREQ_ETERNAL)
- * the ondemand governor will not work. All times here are in us (microseconds).
  */
-#define MIN_SAMPLING_RATE_RATIO		(2)
 #define LATENCY_MULTIPLIER		(1000)
-#define MIN_LATENCY_MULTIPLIER		(20)
-#define TRANSITION_LATENCY_LIMIT	(10 * 1000 * 1000)
 
 struct cpufreq_governor {
 	char name[CPUFREQ_NAME_LEN];
···
 			 char *buf);
 	int	(*store_setspeed)	(struct cpufreq_policy *policy,
 					 unsigned int freq);
-	unsigned int max_transition_latency; /* HW must be able to switch to
-			next freq faster than this value in nano secs or we
-			will fallback to performance governor */
+	/* For governors which change frequency dynamically by themselves */
+	bool			dynamic_switching;
 	struct list_head	governor_list;
 	struct module		*owner;
 };
···
 				   unsigned int relation);
 unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
 					 unsigned int target_freq);
+unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy);
 int cpufreq_register_governor(struct cpufreq_governor *governor);
 void cpufreq_unregister_governor(struct cpufreq_governor *governor);
···
 	ssize_t (*store)(struct gov_attr_set *attr_set, const char *buf,
 			 size_t count);
 };
+
+static inline bool cpufreq_can_do_remote_dvfs(struct cpufreq_policy *policy)
+{
+	/*
+	 * Allow remote callbacks if:
+	 * - dvfs_possible_from_any_cpu flag is set
+	 * - the local and remote CPUs share cpufreq policy
+	 */
+	return policy->dvfs_possible_from_any_cpu ||
+		cpumask_test_cpu(smp_processor_id(), policy->cpus);
+}
 
 /*********************************************************************
  *                     FREQUENCY TABLE HELPERS                       *
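The new cpufreq_can_do_remote_dvfs() is what lets the schedutil governor act on cross-CPU utilization updates: a CPU may run the frequency update either when the driver sets dvfs_possible_from_any_cpu or when it shares a policy with the CPU being updated. A userspace sketch of the check, with the policy's CPU set as a plain bitmask and the current CPU passed in explicitly (illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for cpufreq_can_do_remote_dvfs(). */
static bool can_do_remote_dvfs(unsigned long policy_cpus,
			       bool dvfs_possible_from_any_cpu,
			       int this_cpu)
{
	/* Allow a remote utilization-update callback to run either when
	 * the driver says any CPU may do DVFS, or when the updating CPU
	 * belongs to the same policy as the CPU being updated. */
	return dvfs_possible_from_any_cpu ||
	       (policy_cpus & (1UL << this_cpu)) != 0;
}
```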
+11 -10
include/linux/cpuidle.h
···
 	int (*enter_dead) (struct cpuidle_device *dev, int index);
 
 	/*
-	 * CPUs execute ->enter_freeze with the local tick or entire timekeeping
+	 * CPUs execute ->enter_s2idle with the local tick or entire timekeeping
 	 * suspended, so it must not re-enable interrupts at any point (even
 	 * temporarily) or attempt to change states of clock event devices.
 	 */
-	void (*enter_freeze) (struct cpuidle_device *dev,
+	void (*enter_s2idle) (struct cpuidle_device *dev,
 			      struct cpuidle_driver *drv,
 			      int index);
 };
 
 /* Idle State Flags */
 #define CPUIDLE_FLAG_NONE       (0x00)
+#define CPUIDLE_FLAG_POLLING	(0x01) /* polling state */
 #define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
 #define CPUIDLE_FLAG_TIMER_STOP (0x04)  /* timer is stopped on this state */
···
 #ifdef CONFIG_CPU_IDLE
 extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
 				      struct cpuidle_device *dev);
-extern int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+extern int cpuidle_enter_s2idle(struct cpuidle_driver *drv,
 				struct cpuidle_device *dev);
 extern void cpuidle_use_deepest_state(bool enable);
 #else
 static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
 					     struct cpuidle_device *dev)
 {return -ENODEV; }
-static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+static inline int cpuidle_enter_s2idle(struct cpuidle_driver *drv,
 				       struct cpuidle_device *dev)
 {return -ENODEV; }
 static inline void cpuidle_use_deepest_state(bool enable)
···
 static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a)
 {
 }
+#endif
+
+#ifdef CONFIG_ARCH_HAS_CPU_RELAX
+void cpuidle_poll_state_init(struct cpuidle_driver *drv);
+#else
+static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
 #endif
 
 /******************************
···
 #else
 static inline int cpuidle_register_governor(struct cpuidle_governor *gov)
 {return 0;}
-#endif
-
-#ifdef CONFIG_ARCH_HAS_CPU_RELAX
-#define CPUIDLE_DRIVER_STATE_START	1
-#else
-#define CPUIDLE_DRIVER_STATE_START	0
 #endif
 
 #define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx)	\
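With CPUIDLE_DRIVER_STATE_START gone, the polling state is no longer special-cased by index arithmetic: drivers that can poll call cpuidle_poll_state_init() and the poll state simply occupies index 0 with CPUIDLE_FLAG_POLLING set. A toy governor-style selection over such a table, assuming states sorted by depth (plain userspace C, illustrative types only):

```c
#include <assert.h>

/* Illustrative idle-state entry: index 0 is the polling state that
 * cpuidle_poll_state_init() installs; deeper states follow. */
struct idle_state {
	unsigned int exit_latency;	/* worst-case wakeup cost, in us */
	unsigned int flags;		/* e.g. a POLLING bit */
};

#define FLAG_POLLING 0x01

/* Pick the deepest state whose exit latency fits the QoS constraint;
 * if nothing fits, fall back to the polling state at index 0. */
static int pick_state(const struct idle_state *t, int n,
		      unsigned int latency_req)
{
	int i, best = 0;

	for (i = 1; i < n; i++)
		if (t[i].exit_latency <= latency_req)
			best = i;
	return best;
}
```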
-13
include/linux/devfreq.h
···
 extern struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
 						int index);
 
-/**
- * devfreq_update_stats() - update the last_status pointer in struct devfreq
- * @df: the devfreq instance whose status needs updating
- *
- * Governors are recommended to use this function along with last_status,
- * which allows other entities to reuse the last_status without affecting
- * the values fetched later by governors.
- */
-static inline int devfreq_update_stats(struct devfreq *df)
-{
-	return df->profile->get_dev_status(df->dev.parent, &df->last_status);
-}
-
 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
 /**
  * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq
+4
include/linux/pm.h
···
 extern void device_pm_lock(void);
 extern void dpm_resume_start(pm_message_t state);
 extern void dpm_resume_end(pm_message_t state);
+extern void dpm_noirq_resume_devices(pm_message_t state);
+extern void dpm_noirq_end(void);
 extern void dpm_resume_noirq(pm_message_t state);
 extern void dpm_resume_early(pm_message_t state);
 extern void dpm_resume(pm_message_t state);
···
 extern void device_pm_unlock(void);
 extern int dpm_suspend_end(pm_message_t state);
 extern int dpm_suspend_start(pm_message_t state);
+extern void dpm_noirq_begin(void);
+extern int dpm_noirq_suspend_devices(pm_message_t state);
 extern int dpm_suspend_noirq(pm_message_t state);
 extern int dpm_suspend_late(pm_message_t state);
 extern int dpm_suspend(pm_message_t state);
+3
include/linux/pm_domain.h
···
 	s64 power_on_latency_ns;
 	s64 residency_ns;
 	struct fwnode_handle *fwnode;
+	ktime_t idle_time;
 };
 
 struct genpd_lock_ops;
···
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
 	void *free; /* Free the state that was allocated for default */
+	ktime_t on_time;
+	ktime_t accounting_time;
 	const struct genpd_lock_ops *lock_ops;
 	union {
 		struct mutex mlock;
+33 -15
include/linux/suspend.h
···
 typedef int __bitwise suspend_state_t;
 
 #define PM_SUSPEND_ON		((__force suspend_state_t) 0)
-#define PM_SUSPEND_FREEZE	((__force suspend_state_t) 1)
+#define PM_SUSPEND_TO_IDLE	((__force suspend_state_t) 1)
 #define PM_SUSPEND_STANDBY	((__force suspend_state_t) 2)
 #define PM_SUSPEND_MEM		((__force suspend_state_t) 3)
-#define PM_SUSPEND_MIN		PM_SUSPEND_FREEZE
+#define PM_SUSPEND_MIN		PM_SUSPEND_TO_IDLE
 #define PM_SUSPEND_MAX		((__force suspend_state_t) 4)
 
 enum suspend_stat_step {
···
 	void (*recover)(void);
 };
 
-struct platform_freeze_ops {
+struct platform_s2idle_ops {
 	int (*begin)(void);
 	int (*prepare)(void);
 	void (*wake)(void);
···
 };
 
 #ifdef CONFIG_SUSPEND
+extern suspend_state_t mem_sleep_current;
+extern suspend_state_t mem_sleep_default;
+
 /**
  * suspend_set_ops - set platform dependent suspend operations
  * @ops: The new suspend operations to set.
···
 }
 
 /* Suspend-to-idle state machnine. */
-enum freeze_state {
-	FREEZE_STATE_NONE,      /* Not suspended/suspending. */
-	FREEZE_STATE_ENTER,     /* Enter suspend-to-idle. */
-	FREEZE_STATE_WAKE,      /* Wake up from suspend-to-idle. */
+enum s2idle_states {
+	S2IDLE_STATE_NONE,      /* Not suspended/suspending. */
+	S2IDLE_STATE_ENTER,     /* Enter suspend-to-idle. */
+	S2IDLE_STATE_WAKE,      /* Wake up from suspend-to-idle. */
 };
 
-extern enum freeze_state __read_mostly suspend_freeze_state;
+extern enum s2idle_states __read_mostly s2idle_state;
 
-static inline bool idle_should_freeze(void)
+static inline bool idle_should_enter_s2idle(void)
 {
-	return unlikely(suspend_freeze_state == FREEZE_STATE_ENTER);
+	return unlikely(s2idle_state == S2IDLE_STATE_ENTER);
 }
 
 extern void __init pm_states_init(void);
-extern void freeze_set_ops(const struct platform_freeze_ops *ops);
-extern void freeze_wake(void);
+extern void s2idle_set_ops(const struct platform_s2idle_ops *ops);
+extern void s2idle_wake(void);
 
 /**
  * arch_suspend_disable_irqs - disable IRQs for suspend
···
 
 static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
 static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
-static inline bool idle_should_freeze(void) { return false; }
+static inline bool idle_should_enter_s2idle(void) { return false; }
 static inline void __init pm_states_init(void) {}
-static inline void freeze_set_ops(const struct platform_freeze_ops *ops) {}
-static inline void freeze_wake(void) {}
+static inline void s2idle_set_ops(const struct platform_s2idle_ops *ops) {}
+static inline void s2idle_wake(void) {}
 #endif /* !CONFIG_SUSPEND */
 
 /* struct pbe is used for creating lists of pages that should be restored
···
 /* drivers/base/power/wakeup.c */
 extern bool events_check_enabled;
 extern unsigned int pm_wakeup_irq;
+extern suspend_state_t pm_suspend_target_state;
 
 extern bool pm_wakeup_pending(void);
 extern void pm_system_wakeup(void);
···
 
 #ifdef CONFIG_PM_SLEEP_DEBUG
 extern bool pm_print_times_enabled;
+extern bool pm_debug_messages_on;
+extern __printf(2, 3) void __pm_pr_dbg(bool defer, const char *fmt, ...);
 #else
 #define pm_print_times_enabled	(false)
+#define pm_debug_messages_on	(false)
+
+#include <linux/printk.h>
+
+#define __pm_pr_dbg(defer, fmt, ...) \
+	no_printk(KERN_DEBUG fmt, ##__VA_ARGS__)
 #endif
+
+#define pm_pr_dbg(fmt, ...) \
+	__pm_pr_dbg(false, fmt, ##__VA_ARGS__)
+
+#define pm_deferred_pr_dbg(fmt, ...) \
+	__pm_pr_dbg(true, fmt, ##__VA_ARGS__)
 
 #ifdef CONFIG_PM_AUTOSLEEP
+13 -37
kernel/cpu_pm.c
···
 #include <linux/spinlock.h>
 #include <linux/syscore_ops.h>
 
-static DEFINE_RWLOCK(cpu_pm_notifier_lock);
-static RAW_NOTIFIER_HEAD(cpu_pm_notifier_chain);
+static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
 
 static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
 {
 	int ret;
 
-	ret = __raw_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
+	/*
+	 * __atomic_notifier_call_chain has a RCU read critical section, which
+	 * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
+	 * RCU know this.
+	 */
+	rcu_irq_enter_irqson();
+	ret = __atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
 		nr_to_call, nr_calls);
+	rcu_irq_exit_irqson();
 
 	return notifier_to_errno(ret);
 }
···
  */
 int cpu_pm_register_notifier(struct notifier_block *nb)
 {
-	unsigned long flags;
-	int ret;
-
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
-	ret = raw_notifier_chain_register(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
-
-	return ret;
+	return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
···
  */
 int cpu_pm_unregister_notifier(struct notifier_block *nb)
 {
-	unsigned long flags;
-	int ret;
-
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
-	ret = raw_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
-
-	return ret;
+	return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
···
 	int nr_calls;
 	int ret = 0;
 
-	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
···
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
 
 	return ret;
 }
···
  */
 int cpu_pm_exit(void)
 {
-	int ret;
-
-	read_lock(&cpu_pm_notifier_lock);
-	ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
-
-	return ret;
+	return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_exit);
···
 	int nr_calls;
 	int ret = 0;
 
-	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
···
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
 
 	return ret;
 }
···
  */
 int cpu_cluster_pm_exit(void)
 {
-	int ret;
-
-	read_lock(&cpu_pm_notifier_lock);
-	ret = cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
-
-	return ret;
+	return cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
 }
 EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit);
 
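The switch to an atomic notifier chain removes the rwlock, but the partial-rollback convention stays: when a callback vetoes CPU_PM_ENTER, only the callbacks that already succeeded are re-notified with CPU_PM_ENTER_FAILED. A self-contained simulation of that convention (callback ids and the failing index are made up for the demonstration):

```c
#include <assert.h>

enum { PM_ENTER, PM_ENTER_FAILED };

#define NCB 3
static int entered[NCB];	/* which callbacks currently hold state */

static int callback(int idx, int event)
{
	if (event == PM_ENTER) {
		if (idx == 2)	/* third callback refuses low power */
			return -1;
		entered[idx] = 1;
	} else {		/* PM_ENTER_FAILED: undo preparation */
		entered[idx] = 0;
	}
	return 0;
}

/* Mirrors cpu_pm_enter()'s error path: on failure, re-notify only the
 * nr_calls callbacks that were already told to enter. */
static int pm_enter(void)
{
	int i, j, nr_calls = 0;

	for (i = 0; i < NCB; i++) {
		if (callback(i, PM_ENTER)) {
			for (j = 0; j < nr_calls; j++)
				callback(j, PM_ENTER_FAILED);
			return -1;
		}
		nr_calls++;
	}
	return 0;
}
```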
+17 -12
kernel/power/hibernate.c
···
 	int error;
 	unsigned int flags;
 
-	pr_debug("Loading hibernation image.\n");
+	pm_pr_dbg("Loading hibernation image.\n");
 
 	lock_device_hotplug();
 	error = create_basic_memory_bitmaps();
···
 	bool snapshot_test = false;
 
 	if (!hibernation_available()) {
-		pr_debug("Hibernation not available.\n");
+		pm_pr_dbg("Hibernation not available.\n");
 		return -EPERM;
 	}
···
 		goto Unlock;
 	}
 
+	pr_info("hibernation entry\n");
 	pm_prepare_console();
 	error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
 	if (error) {
···
 	else
 		flags |= SF_CRC32_MODE;
 
-	pr_debug("Writing image.\n");
+	pm_pr_dbg("Writing image.\n");
 	error = swsusp_write(flags);
 	swsusp_free();
 	if (!error) {
···
 		in_suspend = 0;
 		pm_restore_gfp_mask();
 	} else {
-		pr_debug("Image restored successfully.\n");
+		pm_pr_dbg("Image restored successfully.\n");
 	}
 
  Free_bitmaps:
···
  Thaw:
 	unlock_device_hotplug();
 	if (snapshot_test) {
-		pr_debug("Checking hibernation image\n");
+		pm_pr_dbg("Checking hibernation image\n");
 		error = swsusp_check();
 		if (!error)
 			error = load_image_and_restore();
···
 	atomic_inc(&snapshot_device_available);
  Unlock:
 	unlock_system_sleep();
+	pr_info("hibernation exit\n");
+
 	return error;
 }
···
 		goto Unlock;
 	}
 
-	pr_debug("Checking hibernation image partition %s\n", resume_file);
+	pm_pr_dbg("Checking hibernation image partition %s\n", resume_file);
 
 	if (resume_delay) {
 		pr_info("Waiting %dsec before reading resume device ...\n",
···
 	}
 
  Check_image:
-	pr_debug("Hibernation image partition %d:%d present\n",
+	pm_pr_dbg("Hibernation image partition %d:%d present\n",
 		MAJOR(swsusp_resume_device), MINOR(swsusp_resume_device));
 
-	pr_debug("Looking for hibernation image.\n");
+	pm_pr_dbg("Looking for hibernation image.\n");
 	error = swsusp_check();
 	if (error)
 		goto Unlock;
···
 		goto Unlock;
 	}
 
+	pr_info("resume from hibernation\n");
 	pm_prepare_console();
 	error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
 	if (error) {
···
 		goto Close_Finish;
 	}
 
-	pr_debug("Preparing processes for restore.\n");
+	pm_pr_dbg("Preparing processes for restore.\n");
 	error = freeze_processes();
 	if (error)
 		goto Close_Finish;
···
  Finish:
 	__pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
 	pm_restore_console();
+	pr_info("resume from hibernation failed (%d)\n", error);
 	atomic_inc(&snapshot_device_available);
 	/* For success case, the suspend path will release the lock */
  Unlock:
 	mutex_unlock(&pm_mutex);
-	pr_debug("Hibernation image not present or could not be loaded.\n");
+	pm_pr_dbg("Hibernation image not present or could not be loaded.\n");
 	return error;
  Close_Finish:
 	swsusp_close(FMODE_READ);
···
 		error = -EINVAL;
 
 	if (!error)
-		pr_debug("Hibernation mode set to '%s'\n",
-			 hibernation_modes[mode]);
+		pm_pr_dbg("Hibernation mode set to '%s'\n",
+			       hibernation_modes[mode]);
 	unlock_system_sleep();
 	return error ? error : n;
 }
+59 -5
kernel/power/main.c
···
 power_attr(mem_sleep);
 #endif /* CONFIG_SUSPEND */
 
-#ifdef CONFIG_PM_DEBUG
+#ifdef CONFIG_PM_SLEEP_DEBUG
 int pm_test_level = TEST_NONE;
 
 static const char * const pm_tests[__TEST_AFTER_LAST] = {
···
 }
 
 power_attr(pm_test);
-#endif /* CONFIG_PM_DEBUG */
+#endif /* CONFIG_PM_SLEEP_DEBUG */
 
 #ifdef CONFIG_DEBUG_FS
 static char *suspend_step_name(enum suspend_stat_step step)
···
 }
 
 power_attr_ro(pm_wakeup_irq);
+
+bool pm_debug_messages_on __read_mostly;
+
+static ssize_t pm_debug_messages_show(struct kobject *kobj,
+				      struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", pm_debug_messages_on);
+}
+
+static ssize_t pm_debug_messages_store(struct kobject *kobj,
+				       struct kobj_attribute *attr,
+				       const char *buf, size_t n)
+{
+	unsigned long val;
+
+	if (kstrtoul(buf, 10, &val))
+		return -EINVAL;
+
+	if (val > 1)
+		return -EINVAL;
+
+	pm_debug_messages_on = !!val;
+	return n;
+}
+
+power_attr(pm_debug_messages);
+
+/**
+ * __pm_pr_dbg - Print a suspend debug message to the kernel log.
+ * @defer: Whether or not to use printk_deferred() to print the message.
+ * @fmt: Message format.
+ *
+ * The message will be emitted if enabled through the pm_debug_messages
+ * sysfs attribute.
+ */
+void __pm_pr_dbg(bool defer, const char *fmt, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	if (!pm_debug_messages_on)
+		return;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	if (defer)
+		printk_deferred(KERN_DEBUG "PM: %pV", &vaf);
+	else
+		printk(KERN_DEBUG "PM: %pV", &vaf);
+
+	va_end(args);
+}
 
 #else /* !CONFIG_PM_SLEEP_DEBUG */
 static inline void pm_print_times_init(void) {}
···
 	&wake_lock_attr.attr,
 	&wake_unlock_attr.attr,
 #endif
-#ifdef CONFIG_PM_DEBUG
-	&pm_test_attr.attr,
-#endif
 #ifdef CONFIG_PM_SLEEP_DEBUG
+	&pm_test_attr.attr,
 	&pm_print_times_attr.attr,
 	&pm_wakeup_irq_attr.attr,
+	&pm_debug_messages_attr.attr,
 #endif
 #endif
 #ifdef CONFIG_FREEZER
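The new __pm_pr_dbg() is a runtime-gated debug printer: nothing is emitted unless the pm_debug_messages sysfs switch is on, and whatever is emitted gets a "PM: " prefix. A userspace sketch of the gate, with the switch passed as an argument and output going to a caller-supplied buffer instead of printk (illustrative only):

```c
#include <assert.h>
#include <stdio.h>

/* Userspace sketch of the pm_debug_messages gate: returns the number
 * of characters produced, 0 when the switch is off. */
static int pm_dbg_format(int pm_debug_messages_on, char *dst, int n,
			 const char *msg)
{
	if (!pm_debug_messages_on)
		return 0;
	/* Same "PM: " prefix that __pm_pr_dbg() passes to printk(). */
	return snprintf(dst, (size_t)n, "PM: %s", msg);
}
```

At runtime the real switch is flipped by writing 0 or 1 to /sys/power/pm_debug_messages, as the store callback above shows.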
+4 -1
kernel/power/power.h
···
 extern const char * const pm_labels[];
 extern const char *pm_states[];
 extern const char *mem_sleep_states[];
-extern suspend_state_t mem_sleep_current;
 
 extern int suspend_devices_and_enter(suspend_state_t state);
 #else /* !CONFIG_SUSPEND */
···
 #define TEST_FIRST	TEST_NONE
 #define TEST_MAX	(__TEST_AFTER_LAST - 1)
 
+#ifdef CONFIG_PM_SLEEP_DEBUG
 extern int pm_test_level;
+#else
+#define pm_test_level	(TEST_NONE)
+#endif
 
 #ifdef CONFIG_SUSPEND_FREEZER
 static inline int suspend_freeze_processes(void)
+103 -81
kernel/power/suspend.c
···
  * This file is released under the GPLv2.
  */
 
+#define pr_fmt(fmt) "PM: " fmt
+
 #include <linux/string.h>
 #include <linux/delay.h>
 #include <linux/errno.h>
···
 #include "power.h"
 
 const char * const pm_labels[] = {
-	[PM_SUSPEND_FREEZE] = "freeze",
+	[PM_SUSPEND_TO_IDLE] = "freeze",
 	[PM_SUSPEND_STANDBY] = "standby",
 	[PM_SUSPEND_MEM] = "mem",
 };
 const char *pm_states[PM_SUSPEND_MAX];
 static const char * const mem_sleep_labels[] = {
-	[PM_SUSPEND_FREEZE] = "s2idle",
+	[PM_SUSPEND_TO_IDLE] = "s2idle",
 	[PM_SUSPEND_STANDBY] = "shallow",
 	[PM_SUSPEND_MEM] = "deep",
 };
 const char *mem_sleep_states[PM_SUSPEND_MAX];
 
-suspend_state_t mem_sleep_current = PM_SUSPEND_FREEZE;
-static suspend_state_t mem_sleep_default = PM_SUSPEND_MEM;
+suspend_state_t mem_sleep_current = PM_SUSPEND_TO_IDLE;
+suspend_state_t mem_sleep_default = PM_SUSPEND_MAX;
+suspend_state_t pm_suspend_target_state;
+EXPORT_SYMBOL_GPL(pm_suspend_target_state);
 
 unsigned int pm_suspend_global_flags;
 EXPORT_SYMBOL_GPL(pm_suspend_global_flags);
 
 static const struct platform_suspend_ops *suspend_ops;
-static const struct platform_freeze_ops *freeze_ops;
-static DECLARE_WAIT_QUEUE_HEAD(suspend_freeze_wait_head);
+static const struct platform_s2idle_ops *s2idle_ops;
+static DECLARE_WAIT_QUEUE_HEAD(s2idle_wait_head);
 
-enum freeze_state __read_mostly suspend_freeze_state;
-static DEFINE_SPINLOCK(suspend_freeze_lock);
+enum s2idle_states __read_mostly s2idle_state;
+static DEFINE_SPINLOCK(s2idle_lock);
 
-void freeze_set_ops(const struct platform_freeze_ops *ops)
+void s2idle_set_ops(const struct platform_s2idle_ops *ops)
 {
 	lock_system_sleep();
-	freeze_ops = ops;
+	s2idle_ops = ops;
 	unlock_system_sleep();
 }
 
-static void freeze_begin(void)
+static void s2idle_begin(void)
 {
-	suspend_freeze_state = FREEZE_STATE_NONE;
+	s2idle_state = S2IDLE_STATE_NONE;
 }
 
-static void freeze_enter(void)
+static void s2idle_enter(void)
 {
-	trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_FREEZE, true);
+	trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, true);
 
-	spin_lock_irq(&suspend_freeze_lock);
+	spin_lock_irq(&s2idle_lock);
 	if (pm_wakeup_pending())
 		goto out;
 
-	suspend_freeze_state = FREEZE_STATE_ENTER;
-	spin_unlock_irq(&suspend_freeze_lock);
+	s2idle_state = S2IDLE_STATE_ENTER;
+	spin_unlock_irq(&s2idle_lock);
 
 	get_online_cpus();
 	cpuidle_resume();
···
 	/* Push all the CPUs into the idle loop. */
 	wake_up_all_idle_cpus();
 	/* Make the current CPU wait so it can enter the idle loop too. */
-	wait_event(suspend_freeze_wait_head,
-		   suspend_freeze_state == FREEZE_STATE_WAKE);
+	wait_event(s2idle_wait_head,
+		   s2idle_state == S2IDLE_STATE_WAKE);
 
 	cpuidle_pause();
 	put_online_cpus();
 
-	spin_lock_irq(&suspend_freeze_lock);
+	spin_lock_irq(&s2idle_lock);
 
  out:
-	suspend_freeze_state = FREEZE_STATE_NONE;
-	spin_unlock_irq(&suspend_freeze_lock);
+	s2idle_state = S2IDLE_STATE_NONE;
+	spin_unlock_irq(&s2idle_lock);
 
-	trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_FREEZE, false);
+	trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, false);
 }
 
 static void s2idle_loop(void)
 {
-	pr_debug("PM: suspend-to-idle\n");
+	pm_pr_dbg("suspend-to-idle\n");
 
-	do {
-		freeze_enter();
+	for (;;) {
+		int error;
 
-		if (freeze_ops && freeze_ops->wake)
-			freeze_ops->wake();
+		dpm_noirq_begin();
 
-		dpm_resume_noirq(PMSG_RESUME);
-		if (freeze_ops && freeze_ops->sync)
-			freeze_ops->sync();
+		/*
+		 * Suspend-to-idle equals
+		 * frozen processes + suspended devices + idle processors.
+		 * Thus s2idle_enter() should be called right after
+		 * all devices have been suspended.
+		 */
+		error = dpm_noirq_suspend_devices(PMSG_SUSPEND);
+		if (!error)
+			s2idle_enter();
+
+		dpm_noirq_resume_devices(PMSG_RESUME);
+		if (error && (error != -EBUSY || !pm_wakeup_pending())) {
+			dpm_noirq_end();
+			break;
+		}
+
+		if (s2idle_ops && s2idle_ops->wake)
+			s2idle_ops->wake();
+
+		dpm_noirq_end();
+
+		if (s2idle_ops && s2idle_ops->sync)
+			s2idle_ops->sync();
 
 		if (pm_wakeup_pending())
 			break;
 
 		pm_wakeup_clear(false);
-	} while (!dpm_suspend_noirq(PMSG_SUSPEND));
+	}
 
-	pr_debug("PM: resume from suspend-to-idle\n");
+	pm_pr_dbg("resume from suspend-to-idle\n");
 }
 
-void freeze_wake(void)
+void s2idle_wake(void)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&suspend_freeze_lock, flags);
-	if (suspend_freeze_state > FREEZE_STATE_NONE) {
-		suspend_freeze_state = FREEZE_STATE_WAKE;
-		wake_up(&suspend_freeze_wait_head);
+	spin_lock_irqsave(&s2idle_lock, flags);
+	if (s2idle_state > S2IDLE_STATE_NONE) {
+		s2idle_state = S2IDLE_STATE_WAKE;
+		wake_up(&s2idle_wait_head);
 	}
-	spin_unlock_irqrestore(&suspend_freeze_lock, flags);
+	spin_unlock_irqrestore(&s2idle_lock, flags);
 }
-EXPORT_SYMBOL_GPL(freeze_wake);
+EXPORT_SYMBOL_GPL(s2idle_wake);
 
 static bool valid_state(suspend_state_t state)
 {
···
 {
 	/* "mem" and "freeze" are always present in /sys/power/state. */
 	pm_states[PM_SUSPEND_MEM] = pm_labels[PM_SUSPEND_MEM];
-	pm_states[PM_SUSPEND_FREEZE] = pm_labels[PM_SUSPEND_FREEZE];
+	pm_states[PM_SUSPEND_TO_IDLE] = pm_labels[PM_SUSPEND_TO_IDLE];
 	/*
 	 * Suspend-to-idle should be supported even without any suspend_ops,
 	 * initialize mem_sleep_states[] accordingly here.
 	 */
-	mem_sleep_states[PM_SUSPEND_FREEZE] = mem_sleep_labels[PM_SUSPEND_FREEZE];
+	mem_sleep_states[PM_SUSPEND_TO_IDLE] = mem_sleep_labels[PM_SUSPEND_TO_IDLE];
 }
 
 static int __init mem_sleep_default_setup(char *str)
 {
 	suspend_state_t state;
 
-	for (state = PM_SUSPEND_FREEZE; state <= PM_SUSPEND_MEM; state++)
+	for (state = PM_SUSPEND_TO_IDLE; state <= PM_SUSPEND_MEM; state++)
 		if (mem_sleep_labels[state] &&
 		    !strcmp(str, mem_sleep_labels[state])) {
 			mem_sleep_default = state;
···
 	}
 	if (valid_state(PM_SUSPEND_MEM)) {
 		mem_sleep_states[PM_SUSPEND_MEM] = mem_sleep_labels[PM_SUSPEND_MEM];
-		if (mem_sleep_default == PM_SUSPEND_MEM)
+		if (mem_sleep_default >= PM_SUSPEND_MEM)
 			mem_sleep_current = PM_SUSPEND_MEM;
 	}
 
···
 
 static bool sleep_state_supported(suspend_state_t state)
 {
-	return state == PM_SUSPEND_FREEZE || (suspend_ops && suspend_ops->enter);
+	return state == PM_SUSPEND_TO_IDLE || (suspend_ops && suspend_ops->enter);
 }
 
 static int platform_suspend_prepare(suspend_state_t state)
 {
-	return state != PM_SUSPEND_FREEZE && suspend_ops->prepare ?
+	return state != PM_SUSPEND_TO_IDLE && suspend_ops->prepare ?
 		suspend_ops->prepare() : 0;
 }
 
 static int platform_suspend_prepare_late(suspend_state_t state)
 {
-	return state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->prepare ?
254 - freeze_ops->prepare() : 0; 230 + return state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->prepare ? 231 + s2idle_ops->prepare() : 0; 255 232 } 256 233 257 234 static int platform_suspend_prepare_noirq(suspend_state_t state) 258 235 { 259 - return state != PM_SUSPEND_FREEZE && suspend_ops->prepare_late ? 236 + return state != PM_SUSPEND_TO_IDLE && suspend_ops->prepare_late ? 260 237 suspend_ops->prepare_late() : 0; 261 238 } 262 239 263 240 static void platform_resume_noirq(suspend_state_t state) 264 241 { 265 - if (state != PM_SUSPEND_FREEZE && suspend_ops->wake) 242 + if (state != PM_SUSPEND_TO_IDLE && suspend_ops->wake) 266 243 suspend_ops->wake(); 267 244 } 268 245 269 246 static void platform_resume_early(suspend_state_t state) 270 247 { 271 - if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->restore) 272 - freeze_ops->restore(); 248 + if (state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->restore) 249 + s2idle_ops->restore(); 273 250 } 274 251 275 252 static void platform_resume_finish(suspend_state_t state) 276 253 { 277 - if (state != PM_SUSPEND_FREEZE && suspend_ops->finish) 254 + if (state != PM_SUSPEND_TO_IDLE && suspend_ops->finish) 278 255 suspend_ops->finish(); 279 256 } 280 257 281 258 static int platform_suspend_begin(suspend_state_t state) 282 259 { 283 - if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->begin) 284 - return freeze_ops->begin(); 260 + if (state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->begin) 261 + return s2idle_ops->begin(); 285 262 else if (suspend_ops && suspend_ops->begin) 286 263 return suspend_ops->begin(state); 287 264 else ··· 290 267 291 268 static void platform_resume_end(suspend_state_t state) 292 269 { 293 - if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end) 294 - freeze_ops->end(); 270 + if (state == PM_SUSPEND_TO_IDLE && s2idle_ops && s2idle_ops->end) 271 + s2idle_ops->end(); 295 272 else if (suspend_ops && suspend_ops->end) 296 273 suspend_ops->end(); 297 
274 } 298 275 299 276 static void platform_recover(suspend_state_t state) 300 277 { 301 - if (state != PM_SUSPEND_FREEZE && suspend_ops->recover) 278 + if (state != PM_SUSPEND_TO_IDLE && suspend_ops->recover) 302 279 suspend_ops->recover(); 303 280 } 304 281 305 282 static bool platform_suspend_again(suspend_state_t state) 306 283 { 307 - return state != PM_SUSPEND_FREEZE && suspend_ops->suspend_again ? 284 + return state != PM_SUSPEND_TO_IDLE && suspend_ops->suspend_again ? 308 285 suspend_ops->suspend_again() : false; 309 286 } 310 287 ··· 393 370 394 371 error = dpm_suspend_late(PMSG_SUSPEND); 395 372 if (error) { 396 - pr_err("PM: late suspend of devices failed\n"); 373 + pr_err("late suspend of devices failed\n"); 397 374 goto Platform_finish; 398 375 } 399 376 error = platform_suspend_prepare_late(state); 400 377 if (error) 401 378 goto Devices_early_resume; 402 379 380 + if (state == PM_SUSPEND_TO_IDLE && pm_test_level != TEST_PLATFORM) { 381 + s2idle_loop(); 382 + goto Platform_early_resume; 383 + } 384 + 403 385 error = dpm_suspend_noirq(PMSG_SUSPEND); 404 386 if (error) { 405 - pr_err("PM: noirq suspend of devices failed\n"); 387 + pr_err("noirq suspend of devices failed\n"); 406 388 goto Platform_early_resume; 407 389 } 408 390 error = platform_suspend_prepare_noirq(state); ··· 416 388 417 389 if (suspend_test(TEST_PLATFORM)) 418 390 goto Platform_wake; 419 - 420 - /* 421 - * PM_SUSPEND_FREEZE equals 422 - * frozen processes + suspended devices + idle processors. 423 - * Thus we should invoke freeze_enter() soon after 424 - * all the devices are suspended. 
425 - */ 426 - if (state == PM_SUSPEND_FREEZE) { 427 - s2idle_loop(); 428 - goto Platform_early_resume; 429 - } 430 391 431 392 error = disable_nonboot_cpus(); 432 393 if (error || suspend_test(TEST_CPUS)) ··· 473 456 if (!sleep_state_supported(state)) 474 457 return -ENOSYS; 475 458 459 + pm_suspend_target_state = state; 460 + 476 461 error = platform_suspend_begin(state); 477 462 if (error) 478 463 goto Close; ··· 483 464 suspend_test_start(); 484 465 error = dpm_suspend_start(PMSG_SUSPEND); 485 466 if (error) { 486 - pr_err("PM: Some devices failed to suspend, or early wake event detected\n"); 467 + pr_err("Some devices failed to suspend, or early wake event detected\n"); 487 468 goto Recover_platform; 488 469 } 489 470 suspend_test_finish("suspend devices"); ··· 504 485 505 486 Close: 506 487 platform_resume_end(state); 488 + pm_suspend_target_state = PM_SUSPEND_ON; 507 489 return error; 508 490 509 491 Recover_platform: ··· 538 518 int error; 539 519 540 520 trace_suspend_resume(TPS("suspend_enter"), state, true); 541 - if (state == PM_SUSPEND_FREEZE) { 521 + if (state == PM_SUSPEND_TO_IDLE) { 542 522 #ifdef CONFIG_PM_DEBUG 543 523 if (pm_test_level != TEST_NONE && pm_test_level <= TEST_CPUS) { 544 - pr_warn("PM: Unsupported test mode for suspend to idle, please choose none/freezer/devices/platform.\n"); 524 + pr_warn("Unsupported test mode for suspend to idle, please choose none/freezer/devices/platform.\n"); 545 525 return -EAGAIN; 546 526 } 547 527 #endif ··· 551 531 if (!mutex_trylock(&pm_mutex)) 552 532 return -EBUSY; 553 533 554 - if (state == PM_SUSPEND_FREEZE) 555 - freeze_begin(); 534 + if (state == PM_SUSPEND_TO_IDLE) 535 + s2idle_begin(); 556 536 557 537 #ifndef CONFIG_SUSPEND_SKIP_SYNC 558 538 trace_suspend_resume(TPS("sync_filesystems"), 0, true); 559 - pr_info("PM: Syncing filesystems ... "); 539 + pr_info("Syncing filesystems ... 
"); 560 540 sys_sync(); 561 541 pr_cont("done.\n"); 562 542 trace_suspend_resume(TPS("sync_filesystems"), 0, false); 563 543 #endif 564 544 565 - pr_debug("PM: Preparing system for sleep (%s)\n", pm_states[state]); 545 + pm_pr_dbg("Preparing system for sleep (%s)\n", mem_sleep_labels[state]); 566 546 pm_suspend_clear_flags(); 567 547 error = suspend_prepare(state); 568 548 if (error) ··· 572 552 goto Finish; 573 553 574 554 trace_suspend_resume(TPS("suspend_enter"), state, false); 575 - pr_debug("PM: Suspending system (%s)\n", pm_states[state]); 555 + pm_pr_dbg("Suspending system (%s)\n", mem_sleep_labels[state]); 576 556 pm_restrict_gfp_mask(); 577 557 error = suspend_devices_and_enter(state); 578 558 pm_restore_gfp_mask(); 579 559 580 560 Finish: 581 - pr_debug("PM: Finishing wakeup.\n"); 561 + pm_pr_dbg("Finishing wakeup.\n"); 582 562 suspend_finish(); 583 563 Unlock: 584 564 mutex_unlock(&pm_mutex); ··· 599 579 if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX) 600 580 return -EINVAL; 601 581 582 + pr_info("suspend entry (%s)\n", mem_sleep_labels[state]); 602 583 error = enter_state(state); 603 584 if (error) { 604 585 suspend_stats.fail++; ··· 607 586 } else { 608 587 suspend_stats.success++; 609 588 } 589 + pr_info("suspend exit\n"); 610 590 return error; 611 591 } 612 592 EXPORT_SYMBOL(pm_suspend);
+2 -2
kernel/power/suspend_test.c
··· 104 104 printk(info_test, pm_states[state]); 105 105 status = pm_suspend(state); 106 106 if (status < 0) 107 - state = PM_SUSPEND_FREEZE; 107 + state = PM_SUSPEND_TO_IDLE; 108 108 } 109 - if (state == PM_SUSPEND_FREEZE) { 109 + if (state == PM_SUSPEND_TO_IDLE) { 110 110 printk(info_test, pm_states[state]); 111 111 status = pm_suspend(state); 112 112 }
+73 -25
kernel/sched/cpufreq_schedutil.c
··· 52 52 struct sugov_cpu { 53 53 struct update_util_data update_util; 54 54 struct sugov_policy *sg_policy; 55 + unsigned int cpu; 55 56 56 - unsigned long iowait_boost; 57 - unsigned long iowait_boost_max; 57 + bool iowait_boost_pending; 58 + unsigned int iowait_boost; 59 + unsigned int iowait_boost_max; 58 60 u64 last_update; 59 61 60 62 /* The fields below are only needed when sharing a policy. */ ··· 77 75 static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) 78 76 { 79 77 s64 delta_ns; 78 + 79 + /* 80 + * Since cpufreq_update_util() is called with rq->lock held for 81 + * the @target_cpu, our per-cpu data is fully serialized. 82 + * 83 + * However, drivers cannot in general deal with cross-cpu 84 + * requests, so while get_next_freq() will work, our 85 + * sugov_update_commit() call may not for the fast switching platforms. 86 + * 87 + * Hence stop here for remote requests if they aren't supported 88 + * by the hardware, as calculating the frequency is pointless if 89 + * we cannot in fact act on it. 90 + * 91 + * For the slow switching platforms, the kthread is always scheduled on 92 + * the right set of CPUs and any CPU can find the next frequency and 93 + * schedule the kthread. 
94 + */ 95 + if (sg_policy->policy->fast_switch_enabled && 96 + !cpufreq_can_do_remote_dvfs(sg_policy->policy)) 97 + return false; 80 98 81 99 if (sg_policy->work_in_progress) 82 100 return false; ··· 128 106 129 107 if (policy->fast_switch_enabled) { 130 108 next_freq = cpufreq_driver_fast_switch(policy, next_freq); 131 - if (next_freq == CPUFREQ_ENTRY_INVALID) 109 + if (!next_freq) 132 110 return; 133 111 134 112 policy->cur = next_freq; ··· 176 154 return cpufreq_driver_resolve_freq(policy, freq); 177 155 } 178 156 179 - static void sugov_get_util(unsigned long *util, unsigned long *max) 157 + static void sugov_get_util(unsigned long *util, unsigned long *max, int cpu) 180 158 { 181 - struct rq *rq = this_rq(); 159 + struct rq *rq = cpu_rq(cpu); 182 160 unsigned long cfs_max; 183 161 184 - cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id()); 162 + cfs_max = arch_scale_cpu_capacity(NULL, cpu); 185 163 186 164 *util = min(rq->cfs.avg.util_avg, cfs_max); 187 165 *max = cfs_max; ··· 191 169 unsigned int flags) 192 170 { 193 171 if (flags & SCHED_CPUFREQ_IOWAIT) { 194 - sg_cpu->iowait_boost = sg_cpu->iowait_boost_max; 172 + if (sg_cpu->iowait_boost_pending) 173 + return; 174 + 175 + sg_cpu->iowait_boost_pending = true; 176 + 177 + if (sg_cpu->iowait_boost) { 178 + sg_cpu->iowait_boost <<= 1; 179 + if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max) 180 + sg_cpu->iowait_boost = sg_cpu->iowait_boost_max; 181 + } else { 182 + sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min; 183 + } 195 184 } else if (sg_cpu->iowait_boost) { 196 185 s64 delta_ns = time - sg_cpu->last_update; 197 186 198 187 /* Clear iowait_boost if the CPU apprears to have been idle. 
*/ 199 - if (delta_ns > TICK_NSEC) 188 + if (delta_ns > TICK_NSEC) { 200 189 sg_cpu->iowait_boost = 0; 190 + sg_cpu->iowait_boost_pending = false; 191 + } 201 192 } 202 193 } 203 194 204 195 static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util, 205 196 unsigned long *max) 206 197 { 207 - unsigned long boost_util = sg_cpu->iowait_boost; 208 - unsigned long boost_max = sg_cpu->iowait_boost_max; 198 + unsigned int boost_util, boost_max; 209 199 210 - if (!boost_util) 200 + if (!sg_cpu->iowait_boost) 211 201 return; 202 + 203 + if (sg_cpu->iowait_boost_pending) { 204 + sg_cpu->iowait_boost_pending = false; 205 + } else { 206 + sg_cpu->iowait_boost >>= 1; 207 + if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) { 208 + sg_cpu->iowait_boost = 0; 209 + return; 210 + } 211 + } 212 + 213 + boost_util = sg_cpu->iowait_boost; 214 + boost_max = sg_cpu->iowait_boost_max; 212 215 213 216 if (*util * boost_max < *max * boost_util) { 214 217 *util = boost_util; 215 218 *max = boost_max; 216 219 } 217 - sg_cpu->iowait_boost >>= 1; 218 220 } 219 221 220 222 #ifdef CONFIG_NO_HZ_COMMON ··· 275 229 if (flags & SCHED_CPUFREQ_RT_DL) { 276 230 next_f = policy->cpuinfo.max_freq; 277 231 } else { 278 - sugov_get_util(&util, &max); 232 + sugov_get_util(&util, &max, sg_cpu->cpu); 279 233 sugov_iowait_boost(sg_cpu, &util, &max); 280 234 next_f = get_next_freq(sg_policy, util, max); 281 235 /* ··· 310 264 delta_ns = time - j_sg_cpu->last_update; 311 265 if (delta_ns > TICK_NSEC) { 312 266 j_sg_cpu->iowait_boost = 0; 267 + j_sg_cpu->iowait_boost_pending = false; 313 268 continue; 314 269 } 315 270 if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL) ··· 337 290 unsigned long util, max; 338 291 unsigned int next_f; 339 292 340 - sugov_get_util(&util, &max); 293 + sugov_get_util(&util, &max, sg_cpu->cpu); 341 294 342 295 raw_spin_lock(&sg_policy->update_lock); 343 296 ··· 492 445 } 493 446 494 447 sg_policy->thread = thread; 495 - kthread_bind_mask(thread, 
policy->related_cpus); 448 + 449 + /* Kthread is bound to all CPUs by default */ 450 + if (!policy->dvfs_possible_from_any_cpu) 451 + kthread_bind_mask(thread, policy->related_cpus); 452 + 496 453 init_irq_work(&sg_policy->irq_work, sugov_irq_work); 497 454 mutex_init(&sg_policy->work_lock); 498 455 ··· 579 528 goto stop_kthread; 580 529 } 581 530 582 - if (policy->transition_delay_us) { 583 - tunables->rate_limit_us = policy->transition_delay_us; 584 - } else { 585 - unsigned int lat; 586 - 587 - tunables->rate_limit_us = LATENCY_MULTIPLIER; 588 - lat = policy->cpuinfo.transition_latency / NSEC_PER_USEC; 589 - if (lat) 590 - tunables->rate_limit_us *= lat; 591 - } 531 + tunables->rate_limit_us = cpufreq_policy_transition_delay_us(policy); 592 532 593 533 policy->governor_data = sg_policy; 594 534 sg_policy->tunables = tunables; ··· 697 655 static struct cpufreq_governor schedutil_gov = { 698 656 .name = "schedutil", 699 657 .owner = THIS_MODULE, 658 + .dynamic_switching = true, 700 659 .init = sugov_init, 701 660 .exit = sugov_exit, 702 661 .start = sugov_start, ··· 714 671 715 672 static int __init sugov_register(void) 716 673 { 674 + int cpu; 675 + 676 + for_each_possible_cpu(cpu) 677 + per_cpu(sugov_cpu, cpu).cpu = cpu; 678 + 717 679 return cpufreq_register_governor(&schedutil_gov); 718 680 } 719 681 fs_initcall(sugov_register);
+1 -1
kernel/sched/deadline.c
··· 1136 1136 } 1137 1137 1138 1138 /* kick cpufreq (see the comment in kernel/sched/sched.h). */ 1139 - cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL); 1139 + cpufreq_update_util(rq, SCHED_CPUFREQ_DL); 1140 1140 1141 1141 schedstat_set(curr->se.statistics.exec_max, 1142 1142 max(curr->se.statistics.exec_max, delta_exec));
+5 -3
kernel/sched/fair.c
··· 2803 2803 2804 2804 static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq) 2805 2805 { 2806 - if (&this_rq()->cfs == cfs_rq) { 2806 + struct rq *rq = rq_of(cfs_rq); 2807 + 2808 + if (&rq->cfs == cfs_rq) { 2807 2809 /* 2808 2810 * There are a few boundary cases this might miss but it should 2809 2811 * get called often enough that that should (hopefully) not be ··· 2822 2820 * 2823 2821 * See cpu_util(). 2824 2822 */ 2825 - cpufreq_update_util(rq_of(cfs_rq), 0); 2823 + cpufreq_update_util(rq, 0); 2826 2824 } 2827 2825 } 2828 2826 ··· 4899 4897 * passed. 4900 4898 */ 4901 4899 if (p->in_iowait) 4902 - cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_IOWAIT); 4900 + cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT); 4903 4901 4904 4902 for_each_sched_entity(se) { 4905 4903 if (se->on_rq)
+4 -4
kernel/sched/idle.c
··· 158 158 } 159 159 160 160 /* 161 - * Suspend-to-idle ("freeze") is a system state in which all user space 161 + * Suspend-to-idle ("s2idle") is a system state in which all user space 162 162 * has been frozen, all I/O devices have been suspended and the only 163 163 * activity happens here and in interrupts (if any). In that case bypass 164 164 * the cpuidle governor and go straight for the deepest idle state ··· 167 167 * until a proper wakeup interrupt happens. 168 168 */ 169 169 170 - if (idle_should_freeze() || dev->use_deepest_state) { 171 - if (idle_should_freeze()) { 172 - entered_state = cpuidle_enter_freeze(drv, dev); 170 + if (idle_should_enter_s2idle() || dev->use_deepest_state) { 171 + if (idle_should_enter_s2idle()) { 172 + entered_state = cpuidle_enter_s2idle(drv, dev); 173 173 if (entered_state > 0) { 174 174 local_irq_enable(); 175 175 goto exit_idle;
+1 -1
kernel/sched/rt.c
··· 970 970 return; 971 971 972 972 /* Kick cpufreq (see the comment in kernel/sched/sched.h). */ 973 - cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT); 973 + cpufreq_update_util(rq, SCHED_CPUFREQ_RT); 974 974 975 975 schedstat_set(curr->se.statistics.exec_max, 976 976 max(curr->se.statistics.exec_max, delta_exec));
+2 -8
kernel/sched/sched.h
··· 2074 2074 { 2075 2075 struct update_util_data *data; 2076 2076 2077 - data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data)); 2077 + data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, 2078 + cpu_of(rq))); 2078 2079 if (data) 2079 2080 data->func(data, rq_clock(rq), flags); 2080 2081 } 2081 - 2082 - static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) 2083 - { 2084 - if (cpu_of(rq) == smp_processor_id()) 2085 - cpufreq_update_util(rq, flags); 2086 - } 2087 2082 #else 2088 2083 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {} 2089 - static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) {} 2090 2084 #endif /* CONFIG_CPU_FREQ */ 2091 2085 2092 2086 #ifdef arch_scale_freq_capacity
+3 -2
kernel/time/timekeeping_debug.c
··· 19 19 #include <linux/init.h> 20 20 #include <linux/kernel.h> 21 21 #include <linux/seq_file.h> 22 + #include <linux/suspend.h> 22 23 #include <linux/time.h> 23 24 24 25 #include "timekeeping_internal.h" ··· 76 75 int bin = min(fls(t->tv_sec), NUM_BINS-1); 77 76 78 77 sleep_time_bin[bin]++; 79 - printk_deferred(KERN_INFO "Suspended for %lld.%03lu seconds\n", 80 - (s64)t->tv_sec, t->tv_nsec / NSEC_PER_MSEC); 78 + pm_deferred_pr_dbg("Timekeeping suspended for %lld.%03lu seconds\n", 79 + (s64)t->tv_sec, t->tv_nsec / NSEC_PER_MSEC); 81 80 } 82 81
+12 -3
tools/power/cpupower/utils/cpupower.c
··· 12 12 #include <string.h> 13 13 #include <unistd.h> 14 14 #include <errno.h> 15 + #include <sched.h> 15 16 #include <sys/types.h> 16 17 #include <sys/stat.h> 17 18 #include <sys/utsname.h> ··· 32 31 */ 33 32 struct cpupower_cpu_info cpupower_cpu_info; 34 33 int run_as_root; 34 + int base_cpu; 35 35 /* Affected cpus chosen by -c/--cpu param */ 36 36 struct bitmask *cpus_chosen; 37 37 ··· 176 174 unsigned int i, ret; 177 175 struct stat statbuf; 178 176 struct utsname uts; 177 + char pathname[32]; 179 178 180 179 cpus_chosen = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF)); 181 180 ··· 201 198 argv[0] = cmd = "help"; 202 199 } 203 200 204 - get_cpu_info(0, &cpupower_cpu_info); 201 + base_cpu = sched_getcpu(); 202 + if (base_cpu < 0) { 203 + fprintf(stderr, _("No valid cpus found.\n")); 204 + return EXIT_FAILURE; 205 + } 206 + 207 + get_cpu_info(&cpupower_cpu_info); 205 208 run_as_root = !geteuid(); 206 209 if (run_as_root) { 207 210 ret = uname(&uts); 211 + sprintf(pathname, "/dev/cpu/%d/msr", base_cpu); 208 212 if (!ret && !strcmp(uts.machine, "x86_64") && 209 - stat("/dev/cpu/0/msr", &statbuf) != 0) { 213 + stat(pathname, &statbuf) != 0) { 210 214 if (system("modprobe msr") == -1) 211 215 fprintf(stderr, _("MSR access not available.\n")); 212 216 } 213 217 } 214 - 215 218 216 219 for (i = 0; i < ARRAY_SIZE(commands); i++) { 217 220 struct cmd_struct *p = commands + i;
+2 -2
tools/power/cpupower/utils/helpers/cpuid.c
··· 42 42 * 43 43 * TBD: Should there be a cpuid alternative for this if /proc is not mounted? 44 44 */ 45 - int get_cpu_info(unsigned int cpu, struct cpupower_cpu_info *cpu_info) 45 + int get_cpu_info(struct cpupower_cpu_info *cpu_info) 46 46 { 47 47 FILE *fp; 48 48 char value[64]; ··· 70 70 if (!strncmp(value, "processor\t: ", 12)) 71 71 sscanf(value, "processor\t: %u", &proc); 72 72 73 - if (proc != cpu) 73 + if (proc != (unsigned int)base_cpu) 74 74 continue; 75 75 76 76 /* Get CPU vendor */
+3 -2
tools/power/cpupower/utils/helpers/helpers.h
··· 34 34 /* Internationalization ****************************/ 35 35 36 36 extern int run_as_root; 37 + extern int base_cpu; 37 38 extern struct bitmask *cpus_chosen; 38 39 39 40 /* Global verbose (-d) stuff *********************************/ ··· 88 87 * 89 88 * Extract CPU vendor, family, model, stepping info from /proc/cpuinfo 90 89 * 91 - * Returns 0 on success or a negativ error code 90 + * Returns 0 on success or a negative error code 92 91 * Only used on x86, below global's struct values are zero/unknown on 93 92 * other archs 94 93 */ 95 - extern int get_cpu_info(unsigned int cpu, struct cpupower_cpu_info *cpu_info); 94 + extern int get_cpu_info(struct cpupower_cpu_info *cpu_info); 96 95 extern struct cpupower_cpu_info cpupower_cpu_info; 97 96 /* cpuid and cpuinfo helpers **************************/ 98 97
+1 -1
tools/power/cpupower/utils/helpers/misc.c
··· 13 13 14 14 *support = *active = *states = 0; 15 15 16 - ret = get_cpu_info(0, &cpu_info); 16 + ret = get_cpu_info(&cpu_info); 17 17 if (ret) 18 18 return ret; 19 19
+2 -2
tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c
··· 123 123 previous_count[num][cpu] = val; 124 124 } 125 125 } 126 - hsw_ext_get_count(TSC, &tsc_at_measure_start, 0); 126 + hsw_ext_get_count(TSC, &tsc_at_measure_start, base_cpu); 127 127 return 0; 128 128 } 129 129 ··· 132 132 unsigned long long val; 133 133 int num, cpu; 134 134 135 - hsw_ext_get_count(TSC, &tsc_at_measure_end, 0); 135 + hsw_ext_get_count(TSC, &tsc_at_measure_end, base_cpu); 136 136 137 137 for (num = 0; num < HSW_EXT_CSTATE_COUNT; num++) { 138 138 for (cpu = 0; cpu < cpu_count; cpu++) {
+2 -1
tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
··· 80 80 static int mperf_get_tsc(unsigned long long *tsc) 81 81 { 82 82 int ret; 83 - ret = read_msr(0, MSR_TSC, tsc); 83 + 84 + ret = read_msr(base_cpu, MSR_TSC, tsc); 84 85 if (ret) 85 86 dprint("Reading TSC MSR failed, returning %llu\n", *tsc); 86 87 return ret;
+4 -4
tools/power/cpupower/utils/idle_monitor/nhm_idle.c
··· 129 129 int num, cpu; 130 130 unsigned long long dbg, val; 131 131 132 - nhm_get_count(TSC, &tsc_at_measure_start, 0); 132 + nhm_get_count(TSC, &tsc_at_measure_start, base_cpu); 133 133 134 134 for (num = 0; num < NHM_CSTATE_COUNT; num++) { 135 135 for (cpu = 0; cpu < cpu_count; cpu++) { ··· 137 137 previous_count[num][cpu] = val; 138 138 } 139 139 } 140 - nhm_get_count(TSC, &dbg, 0); 140 + nhm_get_count(TSC, &dbg, base_cpu); 141 141 dprint("TSC diff: %llu\n", dbg - tsc_at_measure_start); 142 142 return 0; 143 143 } ··· 148 148 unsigned long long dbg; 149 149 int num, cpu; 150 150 151 - nhm_get_count(TSC, &tsc_at_measure_end, 0); 151 + nhm_get_count(TSC, &tsc_at_measure_end, base_cpu); 152 152 153 153 for (num = 0; num < NHM_CSTATE_COUNT; num++) { 154 154 for (cpu = 0; cpu < cpu_count; cpu++) { ··· 156 156 current_count[num][cpu] = val; 157 157 } 158 158 } 159 - nhm_get_count(TSC, &dbg, 0); 159 + nhm_get_count(TSC, &dbg, base_cpu); 160 160 dprint("TSC diff: %llu\n", dbg - tsc_at_measure_end); 161 161 162 162 return 0;
+2 -2
tools/power/cpupower/utils/idle_monitor/snb_idle.c
··· 120 120 previous_count[num][cpu] = val; 121 121 } 122 122 } 123 - snb_get_count(TSC, &tsc_at_measure_start, 0); 123 + snb_get_count(TSC, &tsc_at_measure_start, base_cpu); 124 124 return 0; 125 125 } 126 126 ··· 129 129 unsigned long long val; 130 130 int num, cpu; 131 131 132 - snb_get_count(TSC, &tsc_at_measure_end, 0); 132 + snb_get_count(TSC, &tsc_at_measure_end, base_cpu); 133 133 134 134 for (num = 0; num < SNB_CSTATE_COUNT; num++) { 135 135 for (cpu = 0; cpu < cpu_count; cpu++) {
+11 -8
tools/power/pm-graph/Makefile
··· 4 4 all: 5 5 @echo "Nothing to build" 6 6 7 - install : 7 + install : uninstall 8 8 install -d $(DESTDIR)$(PREFIX)/lib/pm-graph 9 9 install analyze_suspend.py $(DESTDIR)$(PREFIX)/lib/pm-graph 10 10 install analyze_boot.py $(DESTDIR)$(PREFIX)/lib/pm-graph ··· 17 17 install sleepgraph.8 $(DESTDIR)$(PREFIX)/share/man/man8 18 18 19 19 uninstall : 20 - rm $(DESTDIR)$(PREFIX)/share/man/man8/bootgraph.8 21 - rm $(DESTDIR)$(PREFIX)/share/man/man8/sleepgraph.8 20 + rm -f $(DESTDIR)$(PREFIX)/share/man/man8/bootgraph.8 21 + rm -f $(DESTDIR)$(PREFIX)/share/man/man8/sleepgraph.8 22 22 23 - rm $(DESTDIR)$(PREFIX)/bin/bootgraph 24 - rm $(DESTDIR)$(PREFIX)/bin/sleepgraph 23 + rm -f $(DESTDIR)$(PREFIX)/bin/bootgraph 24 + rm -f $(DESTDIR)$(PREFIX)/bin/sleepgraph 25 25 26 - rm $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_boot.py 27 - rm $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_suspend.py 28 - rmdir $(DESTDIR)$(PREFIX)/lib/pm-graph 26 + rm -f $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_boot.py 27 + rm -f $(DESTDIR)$(PREFIX)/lib/pm-graph/analyze_suspend.py 28 + rm -f $(DESTDIR)$(PREFIX)/lib/pm-graph/*.pyc 29 + if [ -d $(DESTDIR)$(PREFIX)/lib/pm-graph ] ; then \ 30 + rmdir $(DESTDIR)$(PREFIX)/lib/pm-graph; \ 31 + fi;
+386 -198
tools/power/pm-graph/analyze_boot.py
··· 42 42 # store system values and test parameters 43 43 class SystemValues(aslib.SystemValues): 44 44 title = 'BootGraph' 45 - version = 2.0 45 + version = '2.1' 46 46 hostname = 'localhost' 47 47 testtime = '' 48 48 kernel = '' ··· 50 50 ftracefile = '' 51 51 htmlfile = 'bootgraph.html' 52 52 outfile = '' 53 - phoronix = False 54 - addlogs = False 53 + testdir = '' 54 + testdirprefix = 'boot' 55 + embedded = False 56 + testlog = False 57 + dmesglog = False 58 + ftracelog = False 55 59 useftrace = False 60 + usecallgraph = False 56 61 usedevsrc = True 57 62 suspendmode = 'boot' 58 63 max_graph_depth = 2 ··· 66 61 manual = False 67 62 iscronjob = False 68 63 timeformat = '%.6f' 64 + bootloader = 'grub' 65 + blexec = [] 69 66 def __init__(self): 70 67 if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ): 71 - self.phoronix = True 72 - self.addlogs = True 68 + self.embedded = True 69 + self.dmesglog = True 73 70 self.outfile = os.environ['LOG_FILE'] 74 71 self.htmlfile = os.environ['LOG_FILE'] 75 72 self.hostname = platform.node() ··· 83 76 self.kernel = self.kernelVersion(val) 84 77 else: 85 78 self.kernel = 'unknown' 79 + self.testdir = datetime.now().strftime('boot-%y%m%d-%H%M%S') 86 80 def kernelVersion(self, msg): 87 81 return msg.split()[2] 82 + def checkFtraceKernelVersion(self): 83 + val = tuple(map(int, self.kernel.split('-')[0].split('.'))) 84 + if val >= (4, 10, 0): 85 + return True 86 + return False 88 87 def kernelParams(self): 89 88 cmdline = 'initcall_debug log_buf_len=32M' 90 89 if self.useftrace: 91 - cmdline += ' trace_buf_size=128M trace_clock=global '\ 90 + if self.cpucount > 0: 91 + bs = min(self.memtotal / 2, 2*1024*1024) / self.cpucount 92 + else: 93 + bs = 131072 94 + cmdline += ' trace_buf_size=%dK trace_clock=global '\ 92 95 'trace_options=nooverwrite,funcgraph-abstime,funcgraph-cpu,'\ 93 96 'funcgraph-duration,funcgraph-proc,funcgraph-tail,'\ 94 97 'nofuncgraph-overhead,context-info,graph-time '\ 95 98 
            'ftrace=function_graph '\
            'ftrace_graph_max_depth=%d '\
            'ftrace_graph_filter=%s' % \
-               (self.max_graph_depth, self.graph_filter)
+               (bs, self.max_graph_depth, self.graph_filter)
        return cmdline
    def setGraphFilter(self, val):
-       fp = open(self.tpath+'available_filter_functions')
-       master = fp.read().split('\n')
-       fp.close()
+       master = self.getBootFtraceFilterFunctions()
+       fs = ''
        for i in val.split(','):
            func = i.strip()
+           if func == '':
+               doError('badly formatted filter function string')
+           if '[' in func or ']' in func:
+               doError('loadable module functions not allowed - "%s"' % func)
+           if ' ' in func:
+               doError('spaces found in filter functions - "%s"' % func)
            if func not in master:
                doError('function "%s" not available for ftrace' % func)
-       self.graph_filter = val
+           if not fs:
+               fs = func
+           else:
+               fs += ','+func
+       if not fs:
+           doError('badly formatted filter function string')
+       self.graph_filter = fs
+   def getBootFtraceFilterFunctions(self):
+       self.rootCheck(True)
+       fp = open(self.tpath+'available_filter_functions')
+       fulllist = fp.read().split('\n')
+       fp.close()
+       list = []
+       for i in fulllist:
+           if not i or ' ' in i or '[' in i or ']' in i:
+               continue
+           list.append(i)
+       return list
+   def myCronJob(self, line):
+       if '@reboot' not in line:
+           return False
+       if 'bootgraph' in line or 'analyze_boot.py' in line or '-cronjob' in line:
+           return True
+       return False
    def cronjobCmdString(self):
        cmdline = '%s -cronjob' % os.path.abspath(sys.argv[0])
        args = iter(sys.argv[1:])
        for arg in args:
            if arg in ['-h', '-v', '-cronjob', '-reboot']:
                continue
-           elif arg in ['-o', '-dmesg', '-ftrace', '-filter']:
+           elif arg in ['-o', '-dmesg', '-ftrace', '-func']:
                args.next()
                continue
            cmdline += ' '+arg
        if self.graph_filter != 'do_one_initcall':
-           cmdline += ' -filter "%s"' % self.graph_filter
-       cmdline += ' -o "%s"' % os.path.abspath(self.htmlfile)
+           cmdline += ' -func "%s"' % self.graph_filter
+       cmdline += ' -o "%s"' % os.path.abspath(self.testdir)
        return cmdline
    def manualRebootRequired(self):
        cmdline = self.kernelParams()
···
        print '3. After reboot, re-run this tool with the same arguments but no command (w/o -reboot or -manual).\n'
        print 'CMDLINE="%s"' % cmdline
        sys.exit()
+   def getExec(self, cmd):
+       dirlist = ['/sbin', '/bin', '/usr/sbin', '/usr/bin',
+           '/usr/local/sbin', '/usr/local/bin']
+       for path in dirlist:
+           cmdfull = os.path.join(path, cmd)
+           if os.path.exists(cmdfull):
+               return cmdfull
+       return ''
+   def blGrub(self):
+       blcmd = ''
+       for cmd in ['update-grub', 'grub-mkconfig', 'grub2-mkconfig']:
+           if blcmd:
+               break
+           blcmd = self.getExec(cmd)
+       if not blcmd:
+           doError('[GRUB] missing update command')
+       if not os.path.exists('/etc/default/grub'):
+           doError('[GRUB] missing /etc/default/grub')
+       if 'grub2' in blcmd:
+           cfg = '/boot/grub2/grub.cfg'
+       else:
+           cfg = '/boot/grub/grub.cfg'
+       if not os.path.exists(cfg):
+           doError('[GRUB] missing %s' % cfg)
+       if 'update-grub' in blcmd:
+           self.blexec = [blcmd]
+       else:
+           self.blexec = [blcmd, '-o', cfg]
+   def getBootLoader(self):
+       if self.bootloader == 'grub':
+           self.blGrub()
+       else:
+           doError('unknown boot loader: %s' % self.bootloader)

 sysvals = SystemValues()

···
    idstr = ''
    html_device_id = 0
    valid = False
-   initstart = 0.0
+   tUserMode = 0.0
    boottime = ''
-   phases = ['boot']
+   phases = ['kernel', 'user']
    do_one_initcall = False
    def __init__(self, num):
        self.testnumber = num
        self.idstr = 'a'
        self.dmesgtext = []
        self.dmesg = {
-           'boot': {'list': dict(), 'start': -1.0, 'end': -1.0, 'row': 0, 'color': '#dddddd'}
+           'kernel': {'list': dict(), 'start': -1.0, 'end': -1.0, 'row': 0,
+               'order': 0, 'color': 'linear-gradient(to bottom, #fff, #bcf)'},
+           'user': {'list': dict(), 'start': -1.0, 'end': -1.0, 'row': 0,
+               'order': 1, 'color': '#fff'}
        }
    def deviceTopology(self):
        return ''
-   def newAction(self, phase, name, start, end, ret, ulen):
+   def newAction(self, phase, name, pid, start, end, ret, ulen):
        # new device callback for a specific phase
        self.html_device_id += 1
        devid = '%s%d' % (self.idstr, self.html_device_id)
···
            name = '%s[%d]' % (origname, i)
            i += 1
        list[name] = {'name': name, 'start': start, 'end': end,
-           'pid': 0, 'length': length, 'row': 0, 'id': devid,
+           'pid': pid, 'length': length, 'row': 0, 'id': devid,
            'ret': ret, 'ulen': ulen }
        return name
-   def deviceMatch(self, cg):
+   def deviceMatch(self, pid, cg):
        if cg.end - cg.start == 0:
            return True
-       list = self.dmesg['boot']['list']
-       for devname in list:
-           dev = list[devname]
-           if cg.name == 'do_one_initcall':
-               if(cg.start <= dev['start'] and cg.end >= dev['end'] and dev['length'] > 0):
-                   dev['ftrace'] = cg
-                   self.do_one_initcall = True
-                   return True
-           else:
-               if(cg.start > dev['start'] and cg.end < dev['end']):
-                   if 'ftraces' not in dev:
-                       dev['ftraces'] = []
-                   dev['ftraces'].append(cg)
-                   return True
+       for p in data.phases:
+           list = self.dmesg[p]['list']
+           for devname in list:
+               dev = list[devname]
+               if pid != dev['pid']:
+                   continue
+               if cg.name == 'do_one_initcall':
+                   if(cg.start <= dev['start'] and cg.end >= dev['end'] and dev['length'] > 0):
+                       dev['ftrace'] = cg
+                       self.do_one_initcall = True
+                       return True
+               else:
+                   if(cg.start > dev['start'] and cg.end < dev['end']):
+                       if 'ftraces' not in dev:
+                           dev['ftraces'] = []
+                       dev['ftraces'].append(cg)
+                       return True
        return False

 # ----------------- FUNCTIONS --------------------

-# Function: loadKernelLog
+# Function: parseKernelLog
 # Description:
-#    Load a raw kernel log from dmesg
-def loadKernelLog():
+#    parse a kernel log for boot data
+def parseKernelLog():
+   phase = 'kernel'
    data = Data(0)
-   data.dmesg['boot']['start'] = data.start = ktime = 0.0
+   data.dmesg['kernel']['start'] = data.start = ktime = 0.0
    sysvals.stamp = {
        'time': datetime.now().strftime('%B %d %Y, %I:%M:%S %p'),
        'host': sysvals.hostname,
        'mode': 'boot', 'kernel': ''}

+   tp = aslib.TestProps()
    devtemp = dict()
    if(sysvals.dmesgfile):
        lf = open(sysvals.dmesgfile, 'r')
···
        lf = Popen('dmesg', stdout=PIPE).stdout
    for line in lf:
        line = line.replace('\r\n', '')
+       # grab the stamp and sysinfo
+       if re.match(tp.stampfmt, line):
+           tp.stamp = line
+           continue
+       elif re.match(tp.sysinfofmt, line):
+           tp.sysinfo = line
+           continue
        idx = line.find('[')
        if idx > 1:
            line = line[idx:]
···
        if(ktime > 120):
            break
        msg = m.group('msg')
-       data.end = data.initstart = ktime
        data.dmesgtext.append(line)
        if(ktime == 0.0 and re.match('^Linux version .*', msg)):
            if(not sysvals.stamp['kernel']):
···
            data.boottime = bt.strftime('%Y-%m-%d_%H:%M:%S')
            sysvals.stamp['time'] = bt.strftime('%B %d %Y, %I:%M:%S %p')
            continue
-       m = re.match('^calling *(?P<f>.*)\+.*', msg)
+       m = re.match('^calling *(?P<f>.*)\+.* @ (?P<p>[0-9]*)', msg)
        if(m):
-           devtemp[m.group('f')] = ktime
+           func = m.group('f')
+           pid = int(m.group('p'))
+           devtemp[func] = (ktime, pid)
            continue
        m = re.match('^initcall *(?P<f>.*)\+.* returned (?P<r>.*) after (?P<t>.*) usecs', msg)
        if(m):
            data.valid = True
+           data.end = ktime
            f, r, t = m.group('f', 'r', 't')
            if(f in devtemp):
-               data.newAction('boot', f, devtemp[f], ktime, int(r), int(t))
-               data.end = ktime
+               start, pid = devtemp[f]
+               data.newAction(phase, f, pid, start, ktime, int(r), int(t))
                del devtemp[f]
            continue
        if(re.match('^Freeing unused kernel memory.*', msg)):
-           break
+           data.tUserMode = ktime
+           data.dmesg['kernel']['end'] = ktime
+           data.dmesg['user']['start'] = ktime
+           phase = 'user'

-   data.dmesg['boot']['end'] = data.end
+   if tp.stamp:
+       sysvals.stamp = 0
+       tp.parseStamp(data, sysvals)
+   data.dmesg['user']['end'] = data.end
    lf.close()
    return data

-# Function: loadTraceLog
+# Function: parseTraceLog
 # Description:
 #   Check if trace is available and copy to a temp file
-def loadTraceLog(data):
-   # load the data to a temp file if none given
-   if not sysvals.ftracefile:
-       lib = aslib.sysvals
-       aslib.rootCheck(True)
-       if not lib.verifyFtrace():
-           doError('ftrace not available')
-       if lib.fgetVal('current_tracer').strip() != 'function_graph':
-           doError('ftrace not configured for a boot callgraph')
-       sysvals.ftracefile = '/tmp/boot_ftrace.%s.txt' % os.getpid()
-       call('cat '+lib.tpath+'trace > '+sysvals.ftracefile, shell=True)
-   if not sysvals.ftracefile:
-       doError('No trace data available')
-
+def parseTraceLog(data):
    # parse the trace log
    ftemp = dict()
    tp = aslib.TestProps()
···
            print('Sanity check failed for %s-%d' % (proc, pid))
            continue
        # match cg data to devices
-       if not data.deviceMatch(cg):
+       if not data.deviceMatch(pid, cg):
            print ' BAD: %s %s-%d [%f - %f]' % (cg.name, proc, pid, cg.start, cg.end)
+
+# Function: retrieveLogs
+# Description:
+#   Create copies of dmesg and/or ftrace for later processing
+def retrieveLogs():
+   # check ftrace is configured first
+   if sysvals.useftrace:
+       tracer = sysvals.fgetVal('current_tracer').strip()
+       if tracer != 'function_graph':
+           doError('ftrace not configured for a boot callgraph')
+   # create the folder and get dmesg
+   sysvals.systemInfo(aslib.dmidecode(sysvals.mempath))
+   sysvals.initTestOutput('boot')
+   sysvals.writeDatafileHeader(sysvals.dmesgfile)
+   call('dmesg >> '+sysvals.dmesgfile, shell=True)
+   if not sysvals.useftrace:
+       return
+   # get ftrace
+   sysvals.writeDatafileHeader(sysvals.ftracefile)
+   call('cat '+sysvals.tpath+'trace >> '+sysvals.ftracefile, shell=True)

 # Function: colorForName
 # Description:
···
 #  testruns: array of Data objects from parseKernelLog or parseTraceLog
 # Output:
 #  True if the html file was created, false if it failed
-def createBootGraph(data, embedded):
+def createBootGraph(data):
    # html function templates
    html_srccall = '<div id={6} title="{5}" class="srccall" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;">{0}</div>\n'
    html_timetotal = '<table class="time1">\n<tr>'\
-       '<td class="blue">Time from Kernel Boot to start of User Mode: <b>{0} ms</b></td>'\
+       '<td class="blue">Init process starts @ <b>{0} ms</b></td>'\
+       '<td class="blue">Last initcall ends @ <b>{1} ms</b></td>'\
        '</tr>\n</table>\n'

    # device timeline
    devtl = aslib.Timeline(100, 20)

    # write the test title and general info header
-   devtl.createHeader(sysvals, 'noftrace')
+   devtl.createHeader(sysvals)

    # Generate the header for this timeline
    t0 = data.start
···
    if(tTotal == 0):
        print('ERROR: No timeline data')
        return False
-   boot_time = '%.0f'%(tTotal*1000)
-   devtl.html += html_timetotal.format(boot_time)
+   user_mode = '%.0f'%(data.tUserMode*1000)
+   last_init = '%.0f'%(tTotal*1000)
+   devtl.html += html_timetotal.format(user_mode, last_init)

    # determine the maximum number of rows we need to draw
-   phase = 'boot'
-   list = data.dmesg[phase]['list']
    devlist = []
-   for devname in list:
-       d = aslib.DevItem(0, phase, list[devname])
-       devlist.append(d)
-   devtl.getPhaseRows(devlist)
+   for p in data.phases:
+       list = data.dmesg[p]['list']
+       for devname in list:
+           d = aslib.DevItem(0, p, list[devname])
+           devlist.append(d)
+   devtl.getPhaseRows(devlist, 0, 'start')
    devtl.calcTotalRows()

    # draw the timeline background
    devtl.createZoomBox()
-   boot = data.dmesg[phase]
-   length = boot['end']-boot['start']
-   left = '%.3f' % (((boot['start']-t0)*100.0)/tTotal)
-   width = '%.3f' % ((length*100.0)/tTotal)
-   devtl.html += devtl.html_tblock.format(phase, left, width, devtl.scaleH)
-   devtl.html += devtl.html_phase.format('0', '100', \
-       '%.3f'%devtl.scaleH, '%.3f'%devtl.bodyH, \
-       'white', '')
+   devtl.html += devtl.html_tblock.format('boot', '0', '100', devtl.scaleH)
+   for p in data.phases:
+       phase = data.dmesg[p]
+       length = phase['end']-phase['start']
+       left = '%.3f' % (((phase['start']-t0)*100.0)/tTotal)
+       width = '%.3f' % ((length*100.0)/tTotal)
+       devtl.html += devtl.html_phase.format(left, width, \
+           '%.3f'%devtl.scaleH, '%.3f'%devtl.bodyH, \
+           phase['color'], '')

    # draw the device timeline
    num = 0
    devstats = dict()
-   for devname in sorted(list):
-       cls, color = colorForName(devname)
-       dev = list[devname]
-       info = '@|%.3f|%.3f|%.3f|%d' % (dev['start']*1000.0, dev['end']*1000.0,
-           dev['ulen']/1000.0, dev['ret'])
-       devstats[dev['id']] = {'info':info}
-       dev['color'] = color
-       height = devtl.phaseRowHeight(0, phase, dev['row'])
-       top = '%.6f' % ((dev['row']*height) + devtl.scaleH)
-       left = '%.6f' % (((dev['start']-t0)*100)/tTotal)
-       width = '%.6f' % (((dev['end']-dev['start'])*100)/tTotal)
-       length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000)
-       devtl.html += devtl.html_device.format(dev['id'],
-           devname+length+'kernel_mode', left, top, '%.3f'%height,
-           width, devname, ' '+cls, '')
-       rowtop = devtl.phaseRowTop(0, phase, dev['row'])
-       height = '%.6f' % (devtl.rowH / 2)
-       top = '%.6f' % (rowtop + devtl.scaleH + (devtl.rowH / 2))
-       if data.do_one_initcall:
-           if('ftrace' not in dev):
+   for phase in data.phases:
+       list = data.dmesg[phase]['list']
+       for devname in sorted(list):
+           cls, color = colorForName(devname)
+           dev = list[devname]
+           info = '@|%.3f|%.3f|%.3f|%d' % (dev['start']*1000.0, dev['end']*1000.0,
+               dev['ulen']/1000.0, dev['ret'])
+           devstats[dev['id']] = {'info':info}
+           dev['color'] = color
+           height = devtl.phaseRowHeight(0, phase, dev['row'])
+           top = '%.6f' % ((dev['row']*height) + devtl.scaleH)
+           left = '%.6f' % (((dev['start']-t0)*100)/tTotal)
+           width = '%.6f' % (((dev['end']-dev['start'])*100)/tTotal)
+           length = ' (%0.3f ms) ' % ((dev['end']-dev['start'])*1000)
+           devtl.html += devtl.html_device.format(dev['id'],
+               devname+length+phase+'_mode', left, top, '%.3f'%height,
+               width, devname, ' '+cls, '')
+           rowtop = devtl.phaseRowTop(0, phase, dev['row'])
+           height = '%.6f' % (devtl.rowH / 2)
+           top = '%.6f' % (rowtop + devtl.scaleH + (devtl.rowH / 2))
+           if data.do_one_initcall:
+               if('ftrace' not in dev):
+                   continue
+               cg = dev['ftrace']
+               large, stats = cgOverview(cg, 0.001)
+               devstats[dev['id']]['fstat'] = stats
+               for l in large:
+                   left = '%f' % (((l.time-t0)*100)/tTotal)
+                   width = '%f' % (l.length*100/tTotal)
+                   title = '%s (%0.3fms)' % (l.name, l.length * 1000.0)
+                   devtl.html += html_srccall.format(l.name, left,
+                       top, height, width, title, 'x%d'%num)
+                   num += 1
                continue
-           cg = dev['ftrace']
-           large, stats = cgOverview(cg, 0.001)
-           devstats[dev['id']]['fstat'] = stats
-           for l in large:
-               left = '%f' % (((l.time-t0)*100)/tTotal)
-               width = '%f' % (l.length*100/tTotal)
-               title = '%s (%0.3fms)' % (l.name, l.length * 1000.0)
-               devtl.html += html_srccall.format(l.name, left,
-                   top, height, width, title, 'x%d'%num)
+           if('ftraces' not in dev):
+               continue
+           for cg in dev['ftraces']:
+               left = '%f' % (((cg.start-t0)*100)/tTotal)
+               width = '%f' % ((cg.end-cg.start)*100/tTotal)
+               cglen = (cg.end - cg.start) * 1000.0
+               title = '%s (%0.3fms)' % (cg.name, cglen)
+               cg.id = 'x%d' % num
+               devtl.html += html_srccall.format(cg.name, left,
+                   top, height, width, title, dev['id']+cg.id)
                num += 1
-           continue
-       if('ftraces' not in dev):
-           continue
-       for cg in dev['ftraces']:
-           left = '%f' % (((cg.start-t0)*100)/tTotal)
-           width = '%f' % ((cg.end-cg.start)*100/tTotal)
-           cglen = (cg.end - cg.start) * 1000.0
-           title = '%s (%0.3fms)' % (cg.name, cglen)
-           cg.id = 'x%d' % num
-           devtl.html += html_srccall.format(cg.name, left,
-               top, height, width, title, dev['id']+cg.id)
-           num += 1

    # draw the time scale, try to make the number of labels readable
-   devtl.createTimeScale(t0, tMax, tTotal, phase)
+   devtl.createTimeScale(t0, tMax, tTotal, 'boot')
    devtl.html += '</div>\n'

    # timeline is finished
    devtl.html += '</div>\n</div>\n'
+
+   # draw a legend which describes the phases by color
+   devtl.html += '<div class="legend">\n'
+   pdelta = 20.0
+   pmargin = 36.0
+   for phase in data.phases:
+       order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin)
+       devtl.html += devtl.html_legend.format(order, \
+           data.dmesg[phase]['color'], phase+'_mode', phase[0])
+   devtl.html += '</div>\n'

    if(sysvals.outfile == sysvals.htmlfile):
        hf = open(sysvals.htmlfile, 'a')
···
        .fstat td {text-align:left;width:35px;}\n\
        .srccall {position:absolute;font-size:10px;z-index:7;overflow:hidden;color:black;text-align:center;white-space:nowrap;border-radius:5px;border:1px solid black;background:linear-gradient(to bottom right,#CCC,#969696);}\n\
        .srccall:hover {color:white;font-weight:bold;border:1px solid white;}\n'
-   if(not embedded):
+   if(not sysvals.embedded):
        aslib.addCSS(hf, sysvals, 1, False, extra)

    # write the device timeline
···
    html = \
        '<div id="devicedetailtitle"></div>\n'\
        '<div id="devicedetail" style="display:none;">\n'\
-       '<div id="devicedetail0">\n'\
-       '<div id="kernel_mode" class="phaselet" style="left:0%;width:100%;background:#DDDDDD"></div>\n'\
-       '</div>\n</div>\n'\
+       '<div id="devicedetail0">\n'
+   for p in data.phases:
+       phase = data.dmesg[p]
+       html += devtl.html_phaselet.format(p+'_mode', '0', '100', phase['color'])
+   html += '</div>\n</div>\n'\
        '<script type="text/javascript">\n'+statinfo+\
        '</script>\n'
    hf.write(html)
···
        aslib.addCallgraphs(sysvals, hf, data)

    # add the dmesg log as a hidden div
-   if sysvals.addlogs:
+   if sysvals.dmesglog:
        hf.write('<div id="dmesglog" style="display:none;">\n')
        for line in data.dmesgtext:
            line = line.replace('<', '&lt').replace('>', '&gt')
            hf.write(line)
        hf.write('</div>\n')

-   if(not embedded):
+   if(not sysvals.embedded):
        # write the footer and close
        aslib.addScriptCode(hf, [data])
        hf.write('</body>\n</html>\n')
    else:
        # embedded out will be loaded in a page, skip the js
        hf.write('<div id=bounds style=display:none>%f,%f</div>' % \
-           (data.start*1000, data.initstart*1000))
+           (data.start*1000, data.end*1000))
    hf.close()
    return True

···
    if not restore:
        sysvals.rootUser(True)
    crondir = '/var/spool/cron/crontabs/'
-   cronfile = crondir+'root'
-   backfile = crondir+'root-analyze_boot-backup'
+   if not os.path.exists(crondir):
+       crondir = '/var/spool/cron/'
    if not os.path.exists(crondir):
        doError('%s not found' % crondir)
-   out = Popen(['which', 'crontab'], stdout=PIPE).stdout.read()
-   if not out:
+   cronfile = crondir+'root'
+   backfile = crondir+'root-analyze_boot-backup'
+   cmd = sysvals.getExec('crontab')
+   if not cmd:
        doError('crontab not found')
    # on restore: move the backup cron back into place
    if restore:
        if os.path.exists(backfile):
            shutil.move(backfile, cronfile)
+           call([cmd, cronfile])
        return
    # backup current cron and install new one with reboot
    if os.path.exists(cronfile):
···
        fp = open(backfile, 'r')
        op = open(cronfile, 'w')
        for line in fp:
-           if '@reboot' not in line:
+           if not sysvals.myCronJob(line):
                op.write(line)
                continue
        fp.close()
        op.write('@reboot python %s\n' % sysvals.cronjobCmdString())
        op.close()
-       res = call('crontab %s' % cronfile, shell=True)
+       res = call([cmd, cronfile])
    except Exception, e:
        print 'Exception: %s' % str(e)
        shutil.move(backfile, cronfile)
···
    # call update-grub on restore
    if restore:
        try:
-           call(['update-grub'], stderr=PIPE, stdout=PIPE,
+           call(sysvals.blexec, stderr=PIPE, stdout=PIPE,
                env={'PATH': '.:/sbin:/usr/sbin:/usr/bin:/sbin:/bin'})
        except Exception, e:
            print 'Exception: %s\n' % str(e)
        return
-   # verify we can do this
-   sysvals.rootUser(True)
-   grubfile = '/etc/default/grub'
-   if not os.path.exists(grubfile):
-       print 'ERROR: Unable to set the kernel parameters via grub.\n'
-       sysvals.manualRebootRequired()
-   out = Popen(['which', 'update-grub'], stdout=PIPE).stdout.read()
-   if not out:
-       print 'ERROR: Unable to set the kernel parameters via grub.\n'
-       sysvals.manualRebootRequired()
-
    # extract the option and create a grub config without it
+   sysvals.rootUser(True)
    tgtopt = 'GRUB_CMDLINE_LINUX_DEFAULT'
    cmdline = ''
+   grubfile = '/etc/default/grub'
    tempfile = '/etc/default/grub.analyze_boot'
    shutil.move(grubfile, tempfile)
    res = -1
···
        # if the target option value is in quotes, strip them
        sp = '"'
        val = cmdline.strip()
-       if val[0] == '\'' or val[0] == '"':
+       if val and (val[0] == '\'' or val[0] == '"'):
            sp = val[0]
            val = val.strip(sp)
            cmdline = val
···
        # write out the updated target option
        op.write('\n%s=%s%s%s\n' % (tgtopt, sp, cmdline, sp))
        op.close()
-       res = call('update-grub')
+       res = call(sysvals.blexec)
        os.remove(grubfile)
    except Exception, e:
        print 'Exception: %s' % str(e)
···
    # cleanup
    shutil.move(tempfile, grubfile)
    if res != 0:
-       doError('update-grub failed')
+       doError('update grub failed')

-# Function: doError
+# Function: updateKernelParams
 # Description:
+#  update boot conf for all kernels with our parameters
+def updateKernelParams(restore=False):
+   # find the boot loader
+   sysvals.getBootLoader()
+   if sysvals.bootloader == 'grub':
+       updateGrub(restore)
+
+# Function: doError
+# Description:
 #  generic error function for catastrphic failures
 # Arguments:
 #  msg: the error message to print
···
 # print out the help text
 def printHelp():
    print('')
-   print('%s v%.1f' % (sysvals.title, sysvals.version))
+   print('%s v%s' % (sysvals.title, sysvals.version))
    print('Usage: bootgraph <options> <command>')
    print('')
    print('Description:')
···
    print('  the start of the init process.')
    print('')
    print('  If no specific command is given the tool reads the current dmesg')
-   print('  and/or ftrace log and outputs bootgraph.html')
+   print('  and/or ftrace log and creates a timeline')
+   print('')
+   print('  Generates output files in subdirectory: boot-yymmdd-HHMMSS')
+   print('   HTML output:                    <hostname>_boot.html')
+   print('   raw dmesg output:               <hostname>_boot_dmesg.txt')
+   print('   raw ftrace output:              <hostname>_boot_ftrace.txt')
    print('')
    print('Options:')
    print('  -h            Print this help text')
    print('  -v            Print the current tool version')
    print('  -addlogs      Add the dmesg log to the html output')
-   print('  -o file       Html timeline name (default: bootgraph.html)')
+   print('  -o name       Overrides the output subdirectory name when running a new test')
+   print('                default: boot-{date}-{time}')
    print('  [advanced]')
    print('  -f            Use ftrace to add function detail (default: disabled)')
    print('  -callgraph    Add callgraph detail, can be very large (default: disabled)')
···
    print('  -mincg ms     Discard all callgraphs shorter than ms milliseconds (e.g. 0.001 for us)')
    print('  -timeprec N   Number of significant digits in timestamps (0:S, 3:ms, [6:us])')
    print('  -expandcg     pre-expand the callgraph data in the html output (default: disabled)')
-   print('  -filter list  Limit ftrace to comma-delimited list of functions (default: do_one_initcall)')
-   print('  [commands]')
+   print('  -func list    Limit ftrace to comma-delimited list of functions (default: do_one_initcall)')
+   print('  -cgfilter S   Filter the callgraph output in the timeline')
+   print('  -bl name      Use the following boot loader for kernel params (default: grub)')
    print('  -reboot       Reboot the machine automatically and generate a new timeline')
-   print('  -manual       Show the requirements to generate a new timeline manually')
-   print('  -dmesg file   Load a stored dmesg file (used with -ftrace)')
-   print('  -ftrace file  Load a stored ftrace file (used with -dmesg)')
+   print('  -manual       Show the steps to generate a new timeline manually (used with -reboot)')
+   print('')
+   print('Other commands:')
    print('  -flistall     Print all functions capable of being captured in ftrace')
+   print('  -sysinfo      Print out system info extracted from BIOS')
+   print('  [redo]')
+   print('  -dmesg file   Create HTML output using dmesg input (used with -ftrace)')
+   print('  -ftrace file  Create HTML output using ftrace input (used with -dmesg)')
    print('')
    return True

···
 if __name__ == '__main__':
    # loop through the command line arguments
    cmd = ''
-   simplecmds = ['-updategrub', '-flistall']
+   testrun = True
+   simplecmds = ['-sysinfo', '-kpupdate', '-flistall', '-checkbl']
    args = iter(sys.argv[1:])
    for arg in args:
        if(arg == '-h'):
            printHelp()
            sys.exit()
        elif(arg == '-v'):
-           print("Version %.1f" % sysvals.version)
+           print("Version %s" % sysvals.version)
            sys.exit()
        elif(arg in simplecmds):
            cmd = arg[1:]
···
            sysvals.usecallgraph = True
        elif(arg == '-mincg'):
            sysvals.mincglen = aslib.getArgFloat('-mincg', args, 0.0, 10000.0)
+       elif(arg == '-cgfilter'):
+           try:
+               val = args.next()
+           except:
+               doError('No callgraph functions supplied', True)
+           sysvals.setDeviceFilter(val)
+       elif(arg == '-bl'):
+           try:
+               val = args.next()
+           except:
+               doError('No boot loader name supplied', True)
+           if val.lower() not in ['grub']:
+               doError('Unknown boot loader: %s' % val, True)
+           sysvals.bootloader = val.lower()
        elif(arg == '-timeprec'):
            sysvals.setPrecision(aslib.getArgInt('-timeprec', args, 0, 6))
        elif(arg == '-maxdepth'):
            sysvals.max_graph_depth = aslib.getArgInt('-maxdepth', args, 0, 1000)
-       elif(arg == '-filter'):
+       elif(arg == '-func'):
            try:
                val = args.next()
            except:
                doError('No filter functions supplied', True)
-           aslib.rootCheck(True)
+           sysvals.useftrace = True
+           sysvals.usecallgraph = True
+           sysvals.rootCheck(True)
            sysvals.setGraphFilter(val)
        elif(arg == '-ftrace'):
            try:
···
                doError('No ftrace file supplied', True)
            if(os.path.exists(val) == False):
                doError('%s does not exist' % val)
+           testrun = False
            sysvals.ftracefile = val
        elif(arg == '-addlogs'):
-           sysvals.addlogs = True
+           sysvals.dmesglog = True
        elif(arg == '-expandcg'):
            sysvals.cgexp = True
        elif(arg == '-dmesg'):
···
                doError('%s does not exist' % val)
            if(sysvals.htmlfile == val or sysvals.outfile == val):
                doError('Output filename collision')
+           testrun = False
            sysvals.dmesgfile = val
        elif(arg == '-o'):
            try:
                val = args.next()
            except:
-               doError('No HTML filename supplied', True)
-           if(sysvals.dmesgfile == val or sysvals.ftracefile == val):
-               doError('Output filename collision')
-           sysvals.htmlfile = val
+               doError('No subdirectory name supplied', True)
+           sysvals.testdir = sysvals.setOutputFolder(val)
        elif(arg == '-reboot'):
-           if sysvals.iscronjob:
-               doError('-reboot and -cronjob are incompatible')
            sysvals.reboot = True
        elif(arg == '-manual'):
            sysvals.reboot = True
···
        # remaining options are only for cron job use
        elif(arg == '-cronjob'):
            sysvals.iscronjob = True
-           if sysvals.reboot:
-               doError('-reboot and -cronjob are incompatible')
        else:
            doError('Invalid argument: '+arg, True)

+   # compatibility errors and access checks
+   if(sysvals.iscronjob and (sysvals.reboot or \
+       sysvals.dmesgfile or sysvals.ftracefile or cmd)):
+       doError('-cronjob is meant for batch purposes only')
+   if(sysvals.reboot and (sysvals.dmesgfile or sysvals.ftracefile)):
+       doError('-reboot and -dmesg/-ftrace are incompatible')
+   if cmd or sysvals.reboot or sysvals.iscronjob or testrun:
+       sysvals.rootCheck(True)
+   if (testrun and sysvals.useftrace) or cmd == 'flistall':
+       if not sysvals.verifyFtrace():
+           doError('Ftrace is not properly enabled')
+
+   # run utility commands
+   sysvals.cpuInfo()
    if cmd != '':
-       if cmd == 'updategrub':
-           updateGrub()
+       if cmd == 'kpupdate':
+           updateKernelParams()
        elif cmd == 'flistall':
-           sysvals.getFtraceFilterFunctions(False)
+           for f in sysvals.getBootFtraceFilterFunctions():
+               print f
+       elif cmd == 'checkbl':
+           sysvals.getBootLoader()
+           print 'Boot Loader: %s\n%s' % (sysvals.bootloader, sysvals.blexec)
+       elif(cmd == 'sysinfo'):
+           sysvals.printSystemInfo()
        sys.exit()

-   # update grub, setup a cronjob, and reboot
+   # reboot: update grub, setup a cronjob, and reboot
    if sysvals.reboot:
+       if (sysvals.useftrace or sysvals.usecallgraph) and \
+           not sysvals.checkFtraceKernelVersion():
+           doError('Ftrace functionality requires kernel v4.10 or newer')
        if not sysvals.manual:
-           updateGrub()
+           updateKernelParams()
            updateCron()
            call('reboot')
        else:
            sysvals.manualRebootRequired()
        sys.exit()

-   # disable the cronjob
+   # cronjob: remove the cronjob, grub changes, and disable ftrace
    if sysvals.iscronjob:
        updateCron(True)
-       updateGrub(True)
+       updateKernelParams(True)
+       try:
+           sysvals.fsetVal('0', 'tracing_on')
+       except:
+           pass

-   data = loadKernelLog()
-   if sysvals.useftrace:
-       loadTraceLog(data)
-       if sysvals.iscronjob:
-           try:
-               sysvals.fsetVal('0', 'tracing_on')
-           except:
-               pass
+   # testrun: generate copies of the logs
+   if testrun:
+       retrieveLogs()
+   else:
+       sysvals.setOutputFile()

-   if(sysvals.outfile and sysvals.phoronix):
-       fp = open(sysvals.outfile, 'w')
-       fp.write('pass %s initstart %.3f end %.3f boot %s\n' %
-           (data.valid, data.initstart*1000, data.end*1000, data.boottime))
-       fp.close()
-   if(not data.valid):
-       if sysvals.dmesgfile:
+   # process the log data
+   if sysvals.dmesgfile:
+       data = parseKernelLog()
+       if(not data.valid):
            doError('No initcall data found in %s' % sysvals.dmesgfile)
-       else:
-           doError('No initcall data found, is initcall_debug enabled?')
+       if sysvals.useftrace and sysvals.ftracefile:
+           parseTraceLog(data)
+   else:
+       doError('dmesg file required')

    print('          Host: %s' % sysvals.hostname)
    print('     Test time: %s' % sysvals.testtime)
    print('     Boot time: %s' % data.boottime)
    print('Kernel Version: %s' % sysvals.kernel)
    print('  Kernel start: %.3f' % (data.start * 1000))
-   print('    init start: %.3f' % (data.initstart * 1000))
+   print('Usermode start: %.3f' % (data.tUserMode * 1000))
+   print('Last Init Call: %.3f' % (data.end * 1000))

-   createBootGraph(data, sysvals.phoronix)
+   # handle embedded output logs
+   if(sysvals.outfile and sysvals.embedded):
+       fp = open(sysvals.outfile, 'w')
+       fp.write('pass %s initstart %.3f end %.3f boot %s\n' %
+           (data.valid, data.tUserMode*1000, data.end*1000, data.boottime))
+       fp.close()
+
+   createBootGraph(data)
+
+   # if running as root, change output dir owner to sudo_user
+   if testrun and os.path.isdir(sysvals.testdir) and \
+       os.getuid() == 0 and 'SUDO_USER' in os.environ:
+       cmd = 'chown -R {0}:{0} {1} > /dev/null 2>&1'
+       call(cmd.format(os.environ['SUDO_USER'], sysvals.testdir), shell=True)
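The parseKernelLog() changes above key off two dmesg patterns produced by booting with initcall_debug: a "calling <fn> @ <pid>" line marks the start of an initcall, and an "initcall <fn> returned <r> after <t> usecs" line marks its end. A standalone sketch of those two regexes (not part of the patch; the helper name is made up for illustration):

```python
import re

# the "calling"/"initcall ... returned" patterns used by parseKernelLog();
# the pid after "@" is new in this update and lets callgraphs be matched per-process
calling_re = re.compile(r'^calling *(?P<f>.*)\+.* @ (?P<p>[0-9]*)')
return_re = re.compile(r'^initcall *(?P<f>.*)\+.* returned (?P<r>.*) after (?P<t>.*) usecs')

def parse_initcall_lines(lines):
    # pair each "calling" line with its "returned" line, keeping
    # (pid, return value, duration in usecs) per initcall
    started, finished = {}, {}
    for msg in lines:
        m = calling_re.match(msg)
        if m:
            started[m.group('f')] = int(m.group('p'))
            continue
        m = return_re.match(msg)
        if m and m.group('f') in started:
            finished[m.group('f')] = (started[m.group('f')],
                                      int(m.group('r')), int(m.group('t')))
    return finished
```

As in the patch, an initcall whose "calling" line was never seen is simply dropped rather than reported with a bogus start time.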
tools/power/pm-graph/analyze_suspend.py (+379, -155)
···
 # store system values and test parameters
 class SystemValues:
    title = 'SleepGraph'
-   version = '4.6'
+   version = '4.7'
    ansi = False
    verbose = False
-   addlogs = False
+   testlog = True
+   dmesglog = False
+   ftracelog = False
    mindevlen = 0.0
    mincglen = 0.0
    cgphase = ''
···
    max_graph_depth = 0
    callloopmaxgap = 0.0001
    callloopmaxlen = 0.005
+   cpucount = 0
+   memtotal = 204800
    srgap = 0
    cgexp = False
-   outdir = ''
-   testdir = '.'
+   testdir = ''
    tpath = '/sys/kernel/debug/tracing/'
    fpdtpath = '/sys/firmware/acpi/tables/FPDT'
    epath = '/sys/kernel/debug/tracing/events/power/'
···
    testcommand = ''
    mempath = '/dev/mem'
    powerfile = '/sys/power/state'
+   mempowerfile = '/sys/power/mem_sleep'
    suspendmode = 'mem'
+   memmode = ''
    hostname = 'localhost'
    prefix = 'test'
    teststamp = ''
+   sysstamp = ''
    dmesgstart = 0.0
    dmesgfile = ''
    ftracefile = ''
-   htmlfile = ''
+   htmlfile = 'output.html'
    embedded = False
    rtcwake = True
    rtcwaketime = 15
···
    devpropfmt = '# Device Properties: .*'
    tracertypefmt = '# tracer: (?P<t>.*)'
    firmwarefmt = '# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$'
-   stampfmt = '# suspend-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\
-       '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\
-       ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$'
    tracefuncs = {
        'sys_sync': dict(),
        'pm_prepare_console': dict(),
···
        # if this is a phoronix test run, set some default options
        if('LOG_FILE' in os.environ and 'TEST_RESULTS_IDENTIFIER' in os.environ):
            self.embedded = True
-           self.addlogs = True
+           self.dmesglog = self.ftracelog = True
            self.htmlfile = os.environ['LOG_FILE']
        self.archargs = 'args_'+platform.machine()
        self.hostname = platform.node()
···
            self.rtcpath = rtc
        if (hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()):
            self.ansi = True
+       self.testdir = datetime.now().strftime('suspend-%y%m%d-%H%M%S')
+   def rootCheck(self, fatal=True):
+       if(os.access(self.powerfile, os.W_OK)):
+           return True
+       if fatal:
+           doError('This command requires sysfs mount and root access')
+       return False
    def rootUser(self, fatal=False):
        if 'USER' in os.environ and os.environ['USER'] == 'root':
            return True
···
        args['date'] = n.strftime('%y%m%d')
        args['time'] = n.strftime('%H%M%S')
        args['hostname'] = self.hostname
-       self.outdir = value.format(**args)
+       return value.format(**args)
    def setOutputFile(self):
-       if((self.htmlfile == '') and (self.dmesgfile != '')):
+       if self.dmesgfile != '':
            m = re.match('(?P<name>.*)_dmesg\.txt$', self.dmesgfile)
            if(m):
                self.htmlfile = m.group('name')+'.html'
-       if((self.htmlfile == '') and (self.ftracefile != '')):
+       if self.ftracefile != '':
            m = re.match('(?P<name>.*)_ftrace\.txt$', self.ftracefile)
            if(m):
                self.htmlfile = m.group('name')+'.html'
-       if(self.htmlfile == ''):
-           self.htmlfile = 'output.html'
-   def initTestOutput(self, subdir, testpath=''):
+   def systemInfo(self, info):
+       p = c = m = b = ''
+       if 'baseboard-manufacturer' in info:
+           m = info['baseboard-manufacturer']
+       elif 'system-manufacturer' in info:
+           m = info['system-manufacturer']
+       if 'baseboard-product-name' in info:
+           p = info['baseboard-product-name']
+       elif 'system-product-name' in info:
+           p = info['system-product-name']
+       if 'processor-version' in info:
+           c = info['processor-version']
+       if 'bios-version' in info:
+           b = info['bios-version']
+       self.sysstamp = '# sysinfo | man:%s | plat:%s | cpu:%s | bios:%s | numcpu:%d | memsz:%d' % \
+           (m, p, c, b, self.cpucount, self.memtotal)
+   def printSystemInfo(self):
+       self.rootCheck(True)
+       out = dmidecode(self.mempath, True)
+       fmt = '%-24s: %s'
+       for name in sorted(out):
+           print fmt % (name, out[name])
+       print fmt % ('cpucount', ('%d' % self.cpucount))
+       print fmt % ('memtotal', ('%d kB' % self.memtotal))
+   def cpuInfo(self):
+       self.cpucount = 0
+       fp = open('/proc/cpuinfo', 'r')
+       for line in fp:
+           if re.match('^processor[ \t]*:[ \t]*[0-9]*', line):
+               self.cpucount += 1
+       fp.close()
+       fp = open('/proc/meminfo', 'r')
+       for line in fp:
+           m = re.match('^MemTotal:[ \t]*(?P<sz>[0-9]*) *kB', line)
+           if m:
+               self.memtotal = int(m.group('sz'))
+               break
+       fp.close()
+   def initTestOutput(self, name):
        self.prefix = self.hostname
        v = open('/proc/version', 'r').read().strip()
        kver = string.split(v)[2]
-       n = datetime.now()
-       testtime = n.strftime('suspend-%m%d%y-%H%M%S')
-       if not testpath:
-           testpath = n.strftime('suspend-%y%m%d-%H%M%S')
-       if(subdir != "."):
-           self.testdir = subdir+"/"+testpath
-       else:
-           self.testdir = testpath
+       fmt = name+'-%m%d%y-%H%M%S'
+       testtime = datetime.now().strftime(fmt)
        self.teststamp = \
            '# '+testtime+' '+self.prefix+' '+self.suspendmode+' '+kver
        if(self.embedded):
···
                continue
            self.tracefuncs[i] = dict()
    def getFtraceFilterFunctions(self, current):
-       rootCheck(True)
+       self.rootCheck(True)
        if not current:
            call('cat '+self.tpath+'available_filter_functions', shell=True)
            return
···
            val += '\nr:%s_ret %s $retval\n' % (name, func)
        return val
    def addKprobes(self, output=False):
-       if len(sysvals.kprobes) < 1:
+       if len(self.kprobes) < 1:
            return
        if output:
            print('    kprobe functions in this kernel:')
···
            fp.flush()
            fp.close()
        except:
-           pass
+
return False 569 529 return True 570 530 def fgetVal(self, path): 571 531 file = self.tpath+path ··· 606 566 self.cleanupFtrace() 607 567 # set the trace clock to global 608 568 self.fsetVal('global', 'trace_clock') 609 - # set trace buffer to a huge value 610 569 self.fsetVal('nop', 'current_tracer') 611 - self.fsetVal('131073', 'buffer_size_kb') 570 + # set trace buffer to a huge value 571 + if self.usecallgraph or self.usedevsrc: 572 + tgtsize = min(self.memtotal / 2, 2*1024*1024) 573 + maxbuf = '%d' % (tgtsize / max(1, self.cpucount)) 574 + if self.cpucount < 1 or not self.fsetVal(maxbuf, 'buffer_size_kb'): 575 + self.fsetVal('131072', 'buffer_size_kb') 576 + else: 577 + self.fsetVal('16384', 'buffer_size_kb') 612 578 # go no further if this is just a status check 613 579 if testing: 614 580 return ··· 687 641 if not self.ansi: 688 642 return str 689 643 return '\x1B[%d;40m%s\x1B[m' % (color, str) 644 + def writeDatafileHeader(self, filename, fwdata=[]): 645 + fp = open(filename, 'w') 646 + fp.write(self.teststamp+'\n') 647 + fp.write(self.sysstamp+'\n') 648 + if(self.suspendmode == 'mem' or self.suspendmode == 'command'): 649 + for fw in fwdata: 650 + if(fw): 651 + fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1])) 652 + fp.close() 690 653 691 654 sysvals = SystemValues() 692 655 suspendmodename = { ··· 1063 1008 else: 1064 1009 self.trimTime(self.tSuspended, \ 1065 1010 self.tResumed-self.tSuspended, False) 1011 + def getTimeValues(self): 1012 + sktime = (self.dmesg['suspend_machine']['end'] - \ 1013 + self.tKernSus) * 1000 1014 + rktime = (self.dmesg['resume_complete']['end'] - \ 1015 + self.dmesg['resume_machine']['start']) * 1000 1016 + return (sktime, rktime) 1066 1017 def setPhase(self, phase, ktime, isbegin): 1067 1018 if(isbegin): 1068 1019 self.dmesg[phase]['start'] = ktime ··· 1578 1517 prelinedep += 1 1579 1518 last = 0 1580 1519 lasttime = line.time 1581 - virtualfname = 'execution_misalignment' 1520 + virtualfname = 'missing_function_name' 
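The buffer sizing added above is worth restating: `buffer_size_kb` is a per-CPU ftrace setting, so the patch computes a total budget (half of system RAM, capped at 2 GB, with `memtotal` in kB from /proc/meminfo) and divides it by the CPU count, falling back to the old 128 MB value when the CPU count is unknown. A minimal Python 3 sketch of that arithmetic (the script itself is Python 2; `ftrace_buffer_kb` is a hypothetical name, not a function in the tool):

```python
def ftrace_buffer_kb(memtotal_kb, cpucount, callgraph=True):
    # Sketch of the new buffer_size_kb sizing: the target total is
    # half of RAM capped at 2 GB (values in kB), split across CPUs
    # because buffer_size_kb is a per-CPU value.
    if not callgraph:
        return 16384            # without callgraph/devsrc data, 16 MB/CPU suffices
    tgtsize = min(memtotal_kb // 2, 2 * 1024 * 1024)
    if cpucount < 1:
        return 131072           # fallback the script uses when cpucount is unknown
    return tgtsize // cpucount
```

For example, on an 8 GB, 4-CPU box the 2 GB cap applies and each CPU gets a 512 MB buffer.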
1582 1521 if len(self.list) > 0: 1583 1522 last = self.list[-1] 1584 1523 lasttime = last.time ··· 1834 1773 html_device = '<div id="{0}" title="{1}" class="thread{7}" style="left:{2}%;top:{3}px;height:{4}px;width:{5}%;{8}">{6}</div>\n' 1835 1774 html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}px;height:{3}px;background:{4}">{5}</div>\n' 1836 1775 html_phaselet = '<div id="{0}" class="phaselet" style="left:{1}%;width:{2}%;background:{3}"></div>\n' 1776 + html_legend = '<div id="p{3}" class="square" style="left:{0}%;background:{1}">&nbsp;{2}</div>\n' 1837 1777 def __init__(self, rowheight, scaleheight): 1838 1778 self.rowH = rowheight 1839 1779 self.scaleH = scaleheight 1840 1780 self.html = '' 1841 - def createHeader(self, sv, suppress=''): 1781 + def createHeader(self, sv): 1842 1782 if(not sv.stamp['time']): 1843 1783 return 1844 1784 self.html += '<div class="version"><a href="https://01.org/suspendresume">%s v%s</a></div>' \ 1845 1785 % (sv.title, sv.version) 1846 - if sv.logmsg and 'log' not in suppress: 1847 - self.html += '<button id="showtest" class="logbtn">log</button>' 1848 - if sv.addlogs and 'dmesg' not in suppress: 1849 - self.html += '<button id="showdmesg" class="logbtn">dmesg</button>' 1850 - if sv.addlogs and sv.ftracefile and 'ftrace' not in suppress: 1851 - self.html += '<button id="showftrace" class="logbtn">ftrace</button>' 1786 + if sv.logmsg and sv.testlog: 1787 + self.html += '<button id="showtest" class="logbtn btnfmt">log</button>' 1788 + if sv.dmesglog: 1789 + self.html += '<button id="showdmesg" class="logbtn btnfmt">dmesg</button>' 1790 + if sv.ftracelog: 1791 + self.html += '<button id="showftrace" class="logbtn btnfmt">ftrace</button>' 1852 1792 headline_stamp = '<div class="stamp">{0} {1} {2} {3}</div>\n' 1853 1793 self.html += headline_stamp.format(sv.stamp['host'], sv.stamp['kernel'], 1854 1794 sv.stamp['mode'], sv.stamp['time']) 1795 + if 'man' in sv.stamp and 'plat' in sv.stamp and 'cpu' in sv.stamp: 1796 + 
headline_sysinfo = '<div class="stamp sysinfo">{0} {1} <i>with</i> {2}</div>\n' 1797 + self.html += headline_sysinfo.format(sv.stamp['man'], 1798 + sv.stamp['plat'], sv.stamp['cpu']) 1799 + 1855 1800 # Function: getDeviceRows 1856 1801 # Description: 1857 1802 # determine how may rows the device funcs will take ··· 1906 1839 # devlist: the list of devices/actions in a group of contiguous phases 1907 1840 # Output: 1908 1841 # The total number of rows needed to display this phase of the timeline 1909 - def getPhaseRows(self, devlist, row=0): 1842 + def getPhaseRows(self, devlist, row=0, sortby='length'): 1910 1843 # clear all rows and set them to undefined 1911 1844 remaining = len(devlist) 1912 1845 rowdata = dict() ··· 1919 1852 if tp not in myphases: 1920 1853 myphases.append(tp) 1921 1854 dev['row'] = -1 1922 - # sort by length 1st, then name 2nd 1923 - sortdict[item] = (float(dev['end']) - float(dev['start']), item.dev['name']) 1855 + if sortby == 'start': 1856 + # sort by start 1st, then length 2nd 1857 + sortdict[item] = (-1*float(dev['start']), float(dev['end']) - float(dev['start'])) 1858 + else: 1859 + # sort by length 1st, then name 2nd 1860 + sortdict[item] = (float(dev['end']) - float(dev['start']), item.dev['name']) 1924 1861 if 'src' in dev: 1925 1862 dev['devrows'] = self.getDeviceRows(dev['src']) 1926 1863 # sort the devlist by length so that large items graph on top ··· 2066 1995 # A list of values describing the properties of these test runs 2067 1996 class TestProps: 2068 1997 stamp = '' 1998 + sysinfo = '' 2069 1999 S0i3 = False 2070 2000 fwdata = [] 2001 + stampfmt = '# [a-z]*-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-'+\ 2002 + '(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})'+\ 2003 + ' (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$' 2004 + sysinfofmt = '^# sysinfo .*' 2071 2005 ftrace_line_fmt_fg = \ 2072 2006 '^ *(?P<time>[0-9\.]*) *\| *(?P<cpu>[0-9]*)\)'+\ 2073 2007 ' *(?P<proc>.*)-(?P<pid>[0-9]*) *\|'+\ ··· 2095 2019 self.ftrace_line_fmt = 
self.ftrace_line_fmt_nop 2096 2020 else: 2097 2021 doError('Invalid tracer format: [%s]' % tracer) 2022 + def parseStamp(self, data, sv): 2023 + m = re.match(self.stampfmt, self.stamp) 2024 + data.stamp = {'time': '', 'host': '', 'mode': ''} 2025 + dt = datetime(int(m.group('y'))+2000, int(m.group('m')), 2026 + int(m.group('d')), int(m.group('H')), int(m.group('M')), 2027 + int(m.group('S'))) 2028 + data.stamp['time'] = dt.strftime('%B %d %Y, %I:%M:%S %p') 2029 + data.stamp['host'] = m.group('host') 2030 + data.stamp['mode'] = m.group('mode') 2031 + data.stamp['kernel'] = m.group('kernel') 2032 + if re.match(self.sysinfofmt, self.sysinfo): 2033 + for f in self.sysinfo.split('|'): 2034 + if '#' in f: 2035 + continue 2036 + tmp = f.strip().split(':', 1) 2037 + key = tmp[0] 2038 + val = tmp[1] 2039 + data.stamp[key] = val 2040 + sv.hostname = data.stamp['host'] 2041 + sv.suspendmode = data.stamp['mode'] 2042 + if sv.suspendmode == 'command' and sv.ftracefile != '': 2043 + modes = ['on', 'freeze', 'standby', 'mem'] 2044 + out = Popen(['grep', 'suspend_enter', sv.ftracefile], 2045 + stderr=PIPE, stdout=PIPE).stdout.read() 2046 + m = re.match('.* suspend_enter\[(?P<mode>.*)\]', out) 2047 + if m and m.group('mode') in ['1', '2', '3']: 2048 + sv.suspendmode = modes[int(m.group('mode'))] 2049 + data.stamp['mode'] = sv.suspendmode 2050 + if not sv.stamp: 2051 + sv.stamp = data.stamp 2098 2052 2099 2053 # Class: TestRun 2100 2054 # Description: ··· 2196 2090 if(sysvals.verbose): 2197 2091 print(msg) 2198 2092 2199 - # Function: parseStamp 2200 - # Description: 2201 - # Pull in the stamp comment line from the data file(s), 2202 - # create the stamp, and add it to the global sysvals object 2203 - # Arguments: 2204 - # m: the valid re.match output for the stamp line 2205 - def parseStamp(line, data): 2206 - m = re.match(sysvals.stampfmt, line) 2207 - data.stamp = {'time': '', 'host': '', 'mode': ''} 2208 - dt = datetime(int(m.group('y'))+2000, int(m.group('m')), 2209 - 
int(m.group('d')), int(m.group('H')), int(m.group('M')), 2210 - int(m.group('S'))) 2211 - data.stamp['time'] = dt.strftime('%B %d %Y, %I:%M:%S %p') 2212 - data.stamp['host'] = m.group('host') 2213 - data.stamp['mode'] = m.group('mode') 2214 - data.stamp['kernel'] = m.group('kernel') 2215 - sysvals.hostname = data.stamp['host'] 2216 - sysvals.suspendmode = data.stamp['mode'] 2217 - if sysvals.suspendmode == 'command' and sysvals.ftracefile != '': 2218 - modes = ['on', 'freeze', 'standby', 'mem'] 2219 - out = Popen(['grep', 'suspend_enter', sysvals.ftracefile], 2220 - stderr=PIPE, stdout=PIPE).stdout.read() 2221 - m = re.match('.* suspend_enter\[(?P<mode>.*)\]', out) 2222 - if m and m.group('mode') in ['1', '2', '3']: 2223 - sysvals.suspendmode = modes[int(m.group('mode'))] 2224 - data.stamp['mode'] = sysvals.suspendmode 2225 - if not sysvals.stamp: 2226 - sysvals.stamp = data.stamp 2227 - 2228 2093 # Function: doesTraceLogHaveTraceEvents 2229 2094 # Description: 2230 2095 # Quickly determine if the ftrace log has some or all of the trace events ··· 2213 2136 sysvals.usekprobes = True 2214 2137 out = Popen(['head', '-1', sysvals.ftracefile], 2215 2138 stderr=PIPE, stdout=PIPE).stdout.read().replace('\n', '') 2216 - m = re.match(sysvals.stampfmt, out) 2217 - if m and m.group('mode') == 'command': 2218 - sysvals.usetraceeventsonly = True 2219 - sysvals.usetraceevents = True 2220 - return 2221 2139 # figure out what level of trace events are supported 2222 2140 sysvals.usetraceeventsonly = True 2223 2141 sysvals.usetraceevents = False ··· 2254 2182 for line in tf: 2255 2183 # remove any latent carriage returns 2256 2184 line = line.replace('\r\n', '') 2257 - # grab the time stamp 2258 - m = re.match(sysvals.stampfmt, line) 2259 - if(m): 2185 + # grab the stamp and sysinfo 2186 + if re.match(tp.stampfmt, line): 2260 2187 tp.stamp = line 2188 + continue 2189 + elif re.match(tp.sysinfofmt, line): 2190 + tp.sysinfo = line 2261 2191 continue 2262 2192 # determine the trace 
data type (required for further parsing) 2263 2193 m = re.match(sysvals.tracertypefmt, line) ··· 2293 2219 # look for the suspend start marker 2294 2220 if(t.startMarker()): 2295 2221 data = testrun[testidx].data 2296 - parseStamp(tp.stamp, data) 2222 + tp.parseStamp(data, sysvals) 2297 2223 data.setStart(t.time) 2298 2224 continue 2299 2225 if(not data): ··· 2463 2389 for line in tf: 2464 2390 # remove any latent carriage returns 2465 2391 line = line.replace('\r\n', '') 2466 - # stamp line: each stamp means a new test run 2467 - m = re.match(sysvals.stampfmt, line) 2468 - if(m): 2392 + # stamp and sysinfo lines 2393 + if re.match(tp.stampfmt, line): 2469 2394 tp.stamp = line 2395 + continue 2396 + elif re.match(tp.sysinfofmt, line): 2397 + tp.sysinfo = line 2470 2398 continue 2471 2399 # firmware line: pull out any firmware data 2472 2400 m = re.match(sysvals.firmwarefmt, line) ··· 2515 2439 testdata.append(data) 2516 2440 testrun = TestRun(data) 2517 2441 testruns.append(testrun) 2518 - parseStamp(tp.stamp, data) 2442 + tp.parseStamp(data, sysvals) 2519 2443 data.setStart(t.time) 2520 2444 data.tKernSus = t.time 2521 2445 continue ··· 2896 2820 idx = line.find('[') 2897 2821 if idx > 1: 2898 2822 line = line[idx:] 2899 - m = re.match(sysvals.stampfmt, line) 2900 - if(m): 2823 + # grab the stamp and sysinfo 2824 + if re.match(tp.stampfmt, line): 2901 2825 tp.stamp = line 2826 + continue 2827 + elif re.match(tp.sysinfofmt, line): 2828 + tp.sysinfo = line 2902 2829 continue 2903 2830 m = re.match(sysvals.firmwarefmt, line) 2904 2831 if(m): ··· 2918 2839 if(data): 2919 2840 testruns.append(data) 2920 2841 data = Data(len(testruns)) 2921 - parseStamp(tp.stamp, data) 2842 + tp.parseStamp(data, sysvals) 2922 2843 if len(tp.fwdata) > data.testnumber: 2923 2844 data.fwSuspend, data.fwResume = tp.fwdata[data.testnumber] 2924 2845 if(data.fwSuspend > 0 or data.fwResume > 0): ··· 3249 3170 continue 3250 3171 list = data.dmesg[p]['list'] 3251 3172 for devname in 
data.sortedDevices(p): 3173 + if len(sv.devicefilter) > 0 and devname not in sv.devicefilter: 3174 + continue 3252 3175 dev = list[devname] 3253 3176 color = 'white' 3254 3177 if 'color' in data.dmesg[p]: ··· 3390 3309 html_error = '<div id="{1}" title="kernel error/warning" class="err" style="right:{0}%">ERROR&rarr;</div>\n' 3391 3310 html_traceevent = '<div title="{0}" class="traceevent{6}" style="left:{1}%;top:{2}px;height:{3}px;width:{4}%;line-height:{3}px;{7}">{5}</div>\n' 3392 3311 html_cpuexec = '<div class="jiffie" style="left:{0}%;top:{1}px;height:{2}px;width:{3}%;background:{4};"></div>\n' 3393 - html_legend = '<div id="p{3}" class="square" style="left:{0}%;background:{1}">&nbsp;{2}</div>\n' 3394 3312 html_timetotal = '<table class="time1">\n<tr>'\ 3395 3313 '<td class="green" title="{3}">{2} Suspend Time: <b>{0} ms</b></td>'\ 3396 3314 '<td class="yellow" title="{4}">{2} Resume Time: <b>{1} ms</b></td>'\ ··· 3426 3346 # Generate the header for this timeline 3427 3347 for data in testruns: 3428 3348 tTotal = data.end - data.start 3429 - sktime = (data.dmesg['suspend_machine']['end'] - \ 3430 - data.tKernSus) * 1000 3431 - rktime = (data.dmesg['resume_complete']['end'] - \ 3432 - data.dmesg['resume_machine']['start']) * 1000 3349 + sktime, rktime = data.getTimeValues() 3433 3350 if(tTotal == 0): 3434 3351 print('ERROR: No timeline data') 3435 3352 sys.exit() ··· 3658 3581 id += tmp[1][0] 3659 3582 order = '%.2f' % ((data.dmesg[phase]['order'] * pdelta) + pmargin) 3660 3583 name = string.replace(phase, '_', ' &nbsp;') 3661 - devtl.html += html_legend.format(order, \ 3584 + devtl.html += devtl.html_legend.format(order, \ 3662 3585 data.dmesg[phase]['color'], name, id) 3663 3586 devtl.html += '</div>\n' 3664 3587 ··· 3705 3628 addCallgraphs(sysvals, hf, data) 3706 3629 3707 3630 # add the test log as a hidden div 3708 - if sysvals.logmsg: 3631 + if sysvals.testlog and sysvals.logmsg: 3709 3632 hf.write('<div id="testlog" 
style="display:none;">\n'+sysvals.logmsg+'</div>\n') 3710 3633 # add the dmesg log as a hidden div 3711 - if sysvals.addlogs and sysvals.dmesgfile: 3634 + if sysvals.dmesglog and sysvals.dmesgfile: 3712 3635 hf.write('<div id="dmesglog" style="display:none;">\n') 3713 3636 lf = open(sysvals.dmesgfile, 'r') 3714 3637 for line in lf: ··· 3717 3640 lf.close() 3718 3641 hf.write('</div>\n') 3719 3642 # add the ftrace log as a hidden div 3720 - if sysvals.addlogs and sysvals.ftracefile: 3643 + if sysvals.ftracelog and sysvals.ftracefile: 3721 3644 hf.write('<div id="ftracelog" style="display:none;">\n') 3722 3645 lf = open(sysvals.ftracefile, 'r') 3723 3646 for line in lf: ··· 3778 3701 <style type=\'text/css\'>\n\ 3779 3702 body {overflow-y:scroll;}\n\ 3780 3703 .stamp {width:100%;text-align:center;background:gray;line-height:30px;color:white;font:25px Arial;}\n\ 3704 + .stamp.sysinfo {font:10px Arial;}\n\ 3781 3705 .callgraph {margin-top:30px;box-shadow:5px 5px 20px black;}\n\ 3782 3706 .callgraph article * {padding-left:28px;}\n\ 3783 3707 h1 {color:black;font:bold 30px Times;}\n\ ··· 3824 3746 .legend {position:relative; width:100%; height:40px; text-align:center;margin-bottom:20px}\n\ 3825 3747 .legend .square {position:absolute;cursor:pointer;top:10px; width:0px;height:20px;border:1px solid;padding-left:20px;}\n\ 3826 3748 button {height:40px;width:200px;margin-bottom:20px;margin-top:20px;font-size:24px;}\n\ 3827 - .logbtn {position:relative;float:right;height:25px;width:50px;margin-top:3px;margin-bottom:0;font-size:10px;text-align:center;}\n\ 3749 + .btnfmt {position:relative;float:right;height:25px;width:auto;margin-top:3px;margin-bottom:0;font-size:10px;text-align:center;}\n\ 3828 3750 .devlist {position:'+devlistpos+';width:190px;}\n\ 3829 3751 a:link {color:white;text-decoration:none;}\n\ 3830 3752 a:visited {color:white;}\n\ ··· 4162 4084 ' win.document.write(title+"<pre>"+log.innerHTML+"</pre>");\n'\ 4163 4085 ' win.document.close();\n'\ 4164 4086 ' }\n'\ 
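The log embedding above writes each raw log into a hidden `<div>` whose id matches a `logbtn` button, which the page's JavaScript later opens in a popup window. A hedged Python 3 sketch of that pattern (`embed_log_div` is a hypothetical helper, not a function in the script, and since this hunk elides the per-line write body, the HTML escaping here is a defensive assumption rather than confirmed behavior):

```python
def embed_log_div(div_id, lines):
    # Build a hidden <div> holding raw log lines, matching the
    # id="dmesglog"/"ftracelog" divs the tool writes above.
    html = '<div id="%s" style="display:none;">\n' % div_id
    for line in lines:
        # Escape markup defensively so log text cannot break the page
        # (assumption: the real script may handle this elsewhere).
        html += line.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
        html += '\n'
    html += '</div>\n'
    return html
```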
4165 - ' function onClickPhase(e) {\n'\ 4166 - ' }\n'\ 4167 4087 ' function onMouseDown(e) {\n'\ 4168 4088 ' dragval[0] = e.clientX;\n'\ 4169 4089 ' dragval[1] = document.getElementById("dmesgzoombox").scrollLeft;\n'\ ··· 4196 4120 ' document.getElementById("zoomin").onclick = zoomTimeline;\n'\ 4197 4121 ' document.getElementById("zoomout").onclick = zoomTimeline;\n'\ 4198 4122 ' document.getElementById("zoomdef").onclick = zoomTimeline;\n'\ 4199 - ' var list = document.getElementsByClassName("square");\n'\ 4200 - ' for (var i = 0; i < list.length; i++)\n'\ 4201 - ' list[i].onclick = onClickPhase;\n'\ 4202 4123 ' var list = document.getElementsByClassName("err");\n'\ 4203 4124 ' for (var i = 0; i < list.length; i++)\n'\ 4204 4125 ' list[i].onclick = errWindow;\n'\ ··· 4266 4193 if sysvals.testcommand != '': 4267 4194 call(sysvals.testcommand+' 2>&1', shell=True); 4268 4195 else: 4196 + mode = sysvals.suspendmode 4197 + if sysvals.memmode and os.path.exists(sysvals.mempowerfile): 4198 + mode = 'mem' 4199 + pf = open(sysvals.mempowerfile, 'w') 4200 + pf.write(sysvals.memmode) 4201 + pf.close() 4269 4202 pf = open(sysvals.powerfile, 'w') 4270 - pf.write(sysvals.suspendmode) 4203 + pf.write(mode) 4271 4204 # execution will pause here 4272 4205 try: 4273 4206 pf.close() ··· 4298 4219 pm.stop() 4299 4220 sysvals.fsetVal('0', 'tracing_on') 4300 4221 print('CAPTURING TRACE') 4301 - writeDatafileHeader(sysvals.ftracefile, fwdata) 4222 + sysvals.writeDatafileHeader(sysvals.ftracefile, fwdata) 4302 4223 call('cat '+tp+'trace >> '+sysvals.ftracefile, shell=True) 4303 4224 sysvals.fsetVal('', 'trace') 4304 4225 devProps() 4305 4226 # grab a copy of the dmesg output 4306 4227 print('CAPTURING DMESG') 4307 - writeDatafileHeader(sysvals.dmesgfile, fwdata) 4228 + sysvals.writeDatafileHeader(sysvals.dmesgfile, fwdata) 4308 4229 sysvals.getdmesg() 4309 - 4310 - def writeDatafileHeader(filename, fwdata): 4311 - fp = open(filename, 'a') 4312 - fp.write(sysvals.teststamp+'\n') 4313 - 
if(sysvals.suspendmode == 'mem' or sysvals.suspendmode == 'command'): 4314 - for fw in fwdata: 4315 - if(fw): 4316 - fp.write('# fwsuspend %u fwresume %u\n' % (fw[0], fw[1])) 4317 - fp.close() 4318 4230 4319 4231 # Function: setUSBDevicesAuto 4320 4232 # Description: ··· 4314 4244 # to always-on since the kernel cant determine if the device can 4315 4245 # properly autosuspend 4316 4246 def setUSBDevicesAuto(): 4317 - rootCheck(True) 4247 + sysvals.rootCheck(True) 4318 4248 for dirname, dirnames, filenames in os.walk('/sys/devices'): 4319 4249 if(re.match('.*/usb[0-9]*.*', dirname) and 4320 4250 'idVendor' in filenames and 'idProduct' in filenames): ··· 4537 4467 # Output: 4538 4468 # A string list of the available modes 4539 4469 def getModes(): 4540 - modes = '' 4470 + modes = [] 4541 4471 if(os.path.exists(sysvals.powerfile)): 4542 4472 fp = open(sysvals.powerfile, 'r') 4543 4473 modes = string.split(fp.read()) 4544 4474 fp.close() 4475 + if(os.path.exists(sysvals.mempowerfile)): 4476 + deep = False 4477 + fp = open(sysvals.mempowerfile, 'r') 4478 + for m in string.split(fp.read()): 4479 + memmode = m.strip('[]') 4480 + if memmode == 'deep': 4481 + deep = True 4482 + else: 4483 + modes.append('mem-%s' % memmode) 4484 + fp.close() 4485 + if 'mem' in modes and not deep: 4486 + modes.remove('mem') 4545 4487 return modes 4488 + 4489 + # Function: dmidecode 4490 + # Description: 4491 + # Read the bios tables and pull out system info 4492 + # Arguments: 4493 + # mempath: /dev/mem or custom mem path 4494 + # fatal: True to exit on error, False to return empty dict 4495 + # Output: 4496 + # A dict object with all available key/values 4497 + def dmidecode(mempath, fatal=False): 4498 + out = dict() 4499 + 4500 + # the list of values to retrieve, with hardcoded (type, idx) 4501 + info = { 4502 + 'bios-vendor': (0, 4), 4503 + 'bios-version': (0, 5), 4504 + 'bios-release-date': (0, 8), 4505 + 'system-manufacturer': (1, 4), 4506 + 'system-product-name': (1, 5), 4507 + 
'system-version': (1, 6), 4508 + 'system-serial-number': (1, 7), 4509 + 'baseboard-manufacturer': (2, 4), 4510 + 'baseboard-product-name': (2, 5), 4511 + 'baseboard-version': (2, 6), 4512 + 'baseboard-serial-number': (2, 7), 4513 + 'chassis-manufacturer': (3, 4), 4514 + 'chassis-type': (3, 5), 4515 + 'chassis-version': (3, 6), 4516 + 'chassis-serial-number': (3, 7), 4517 + 'processor-manufacturer': (4, 7), 4518 + 'processor-version': (4, 16), 4519 + } 4520 + if(not os.path.exists(mempath)): 4521 + if(fatal): 4522 + doError('file does not exist: %s' % mempath) 4523 + return out 4524 + if(not os.access(mempath, os.R_OK)): 4525 + if(fatal): 4526 + doError('file is not readable: %s' % mempath) 4527 + return out 4528 + 4529 + # by default use legacy scan, but try to use EFI first 4530 + memaddr = 0xf0000 4531 + memsize = 0x10000 4532 + for ep in ['/sys/firmware/efi/systab', '/proc/efi/systab']: 4533 + if not os.path.exists(ep) or not os.access(ep, os.R_OK): 4534 + continue 4535 + fp = open(ep, 'r') 4536 + buf = fp.read() 4537 + fp.close() 4538 + i = buf.find('SMBIOS=') 4539 + if i >= 0: 4540 + try: 4541 + memaddr = int(buf[i+7:], 16) 4542 + memsize = 0x20 4543 + except: 4544 + continue 4545 + 4546 + # read in the memory for scanning 4547 + fp = open(mempath, 'rb') 4548 + try: 4549 + fp.seek(memaddr) 4550 + buf = fp.read(memsize) 4551 + except: 4552 + if(fatal): 4553 + doError('DMI table is unreachable, sorry') 4554 + else: 4555 + return out 4556 + fp.close() 4557 + 4558 + # search for either an SM table or DMI table 4559 + i = base = length = num = 0 4560 + while(i < memsize): 4561 + if buf[i:i+4] == '_SM_' and i < memsize - 16: 4562 + length = struct.unpack('H', buf[i+22:i+24])[0] 4563 + base, num = struct.unpack('IH', buf[i+24:i+30]) 4564 + break 4565 + elif buf[i:i+5] == '_DMI_': 4566 + length = struct.unpack('H', buf[i+6:i+8])[0] 4567 + base, num = struct.unpack('IH', buf[i+8:i+14]) 4568 + break 4569 + i += 16 4570 + if base == 0 and length == 0 and num == 0: 4571 + 
if(fatal): 4572 + doError('Neither SMBIOS nor DMI were found') 4573 + else: 4574 + return out 4575 + 4576 + # read in the SM or DMI table 4577 + fp = open(mempath, 'rb') 4578 + try: 4579 + fp.seek(base) 4580 + buf = fp.read(length) 4581 + except: 4582 + if(fatal): 4583 + doError('DMI table is unreachable, sorry') 4584 + else: 4585 + return out 4586 + fp.close() 4587 + 4588 + # scan the table for the values we want 4589 + count = i = 0 4590 + while(count < num and i <= len(buf) - 4): 4591 + type, size, handle = struct.unpack('BBH', buf[i:i+4]) 4592 + n = i + size 4593 + while n < len(buf) - 1: 4594 + if 0 == struct.unpack('H', buf[n:n+2])[0]: 4595 + break 4596 + n += 1 4597 + data = buf[i+size:n+2].split('\0') 4598 + for name in info: 4599 + itype, idxadr = info[name] 4600 + if itype == type: 4601 + idx = struct.unpack('B', buf[i+idxadr])[0] 4602 + if idx > 0 and idx < len(data) - 1: 4603 + s = data[idx-1].strip() 4604 + if s and s.lower() != 'to be filled by o.e.m.': 4605 + out[name] = data[idx-1] 4606 + i = n + 2 4607 + count += 1 4608 + return out 4546 4609 4547 4610 # Function: getFPDT 4548 4611 # Description: ··· 4690 4487 prectype[0] = 'Basic S3 Resume Performance Record' 4691 4488 prectype[1] = 'Basic S3 Suspend Performance Record' 4692 4489 4693 - rootCheck(True) 4490 + sysvals.rootCheck(True) 4694 4491 if(not os.path.exists(sysvals.fpdtpath)): 4695 4492 if(output): 4696 4493 doError('file does not exist: %s' % sysvals.fpdtpath) ··· 4820 4617 4821 4618 # check we have root access 4822 4619 res = sysvals.colorText('NO (No features of this tool will work!)') 4823 - if(rootCheck(False)): 4620 + if(sysvals.rootCheck(False)): 4824 4621 res = 'YES' 4825 4622 print(' have root access: %s' % res) 4826 4623 if(res != 'YES'): ··· 4919 4716 print('ERROR: %s\n') % msg 4920 4717 sys.exit() 4921 4718 4922 - # Function: rootCheck 4923 - # Description: 4924 - # quick check to see if we have root access 4925 - def rootCheck(fatal): 4926 - if(os.access(sysvals.powerfile, 
os.W_OK)): 4927 - return True 4928 - if fatal: 4929 - doError('This command requires sysfs mount and root access') 4930 - return False 4931 - 4932 4719 # Function: getArgInt 4933 4720 # Description: 4934 4721 # pull out an integer argument from the command line with checks ··· 4972 4779 if(sysvals.ftracefile and (sysvals.usecallgraph or sysvals.usetraceevents)): 4973 4780 appendIncompleteTraceLog(testruns) 4974 4781 createHTML(testruns) 4782 + return testruns 4975 4783 4976 4784 # Function: rerunTest 4977 4785 # Description: ··· 4984 4790 doError('recreating this html output requires a dmesg file') 4985 4791 sysvals.setOutputFile() 4986 4792 vprint('Output file: %s' % sysvals.htmlfile) 4987 - if(os.path.exists(sysvals.htmlfile) and not os.access(sysvals.htmlfile, os.W_OK)): 4988 - doError('missing permission to write to %s' % sysvals.htmlfile) 4989 - processData() 4793 + if os.path.exists(sysvals.htmlfile): 4794 + if not os.path.isfile(sysvals.htmlfile): 4795 + doError('a directory already exists with this name: %s' % sysvals.htmlfile) 4796 + elif not os.access(sysvals.htmlfile, os.W_OK): 4797 + doError('missing permission to write to %s' % sysvals.htmlfile) 4798 + return processData() 4990 4799 4991 4800 # Function: runTest 4992 4801 # Description: 4993 4802 # execute a suspend/resume, gather the logs, and generate the output 4994 - def runTest(subdir, testpath=''): 4803 + def runTest(): 4995 4804 # prepare for the test 4996 4805 sysvals.initFtrace() 4997 - sysvals.initTestOutput(subdir, testpath) 4806 + sysvals.initTestOutput('suspend') 4998 4807 vprint('Output files:\n\t%s\n\t%s\n\t%s' % \ 4999 4808 (sysvals.dmesgfile, sysvals.ftracefile, sysvals.htmlfile)) 5000 4809 ··· 5094 4897 if(opt.lower() == 'verbose'): 5095 4898 sysvals.verbose = checkArgBool(value) 5096 4899 elif(opt.lower() == 'addlogs'): 5097 - sysvals.addlogs = checkArgBool(value) 4900 + sysvals.dmesglog = sysvals.ftracelog = checkArgBool(value) 5098 4901 elif(opt.lower() == 'dev'): 5099 4902 
sysvals.usedevsrc = checkArgBool(value) 5100 4903 elif(opt.lower() == 'proc'): ··· 5144 4947 elif(opt.lower() == 'mincg'): 5145 4948 sysvals.mincglen = getArgFloat('-mincg', value, 0.0, 10000.0, False) 5146 4949 elif(opt.lower() == 'output-dir'): 5147 - sysvals.setOutputFolder(value) 4950 + sysvals.testdir = sysvals.setOutputFolder(value) 5148 4951 5149 4952 if sysvals.suspendmode == 'command' and not sysvals.testcommand: 5150 4953 doError('No command supplied for mode "command"') ··· 5227 5030 # Description: 5228 5031 # print out the help text 5229 5032 def printHelp(): 5230 - modes = getModes() 5231 - 5232 5033 print('') 5233 5034 print('%s v%s' % (sysvals.title, sysvals.version)) 5234 5035 print('Usage: sudo sleepgraph <options> <commands>') ··· 5243 5048 print(' If no specific command is given, the default behavior is to initiate') 5244 5049 print(' a suspend/resume and capture the dmesg/ftrace output as an html timeline.') 5245 5050 print('') 5246 - print(' Generates output files in subdirectory: suspend-mmddyy-HHMMSS') 5051 + print(' Generates output files in subdirectory: suspend-yymmdd-HHMMSS') 5247 5052 print(' HTML output: <hostname>_<mode>.html') 5248 5053 print(' raw dmesg output: <hostname>_<mode>_dmesg.txt') 5249 5054 print(' raw ftrace output: <hostname>_<mode>_ftrace.txt') ··· 5253 5058 print(' -v Print the current tool version') 5254 5059 print(' -config fn Pull arguments and config options from file fn') 5255 5060 print(' -verbose Print extra information during execution and analysis') 5256 - print(' -m mode Mode to initiate for suspend %s (default: %s)') % (modes, sysvals.suspendmode) 5257 - print(' -o subdir Override the output subdirectory') 5061 + print(' -m mode Mode to initiate for suspend (default: %s)') % (sysvals.suspendmode) 5062 + print(' -o name Overrides the output subdirectory name when running a new test') 5063 + print(' default: suspend-{date}-{time}') 5258 5064 print(' -rtcwake t Wakeup t seconds after suspend, set t to "off" to 
disable (default: 15)') 5259 5065 print(' -addlogs Add the dmesg and ftrace logs to the html output') 5260 5066 print(' -srgap Add a visible gap in the timeline between sus/res (default: disabled)') ··· 5280 5084 print(' -cgphase P Only show callgraph data for phase P (e.g. suspend_late)') 5281 5085 print(' -cgtest N Only show callgraph data for test N (e.g. 0 or 1 in an x2 run)') 5282 5086 print(' -timeprec N Number of significant digits in timestamps (0:S, [3:ms], 6:us)') 5283 - print(' [commands]') 5284 - print(' -ftrace ftracefile Create HTML output using ftrace input (used with -dmesg)') 5285 - print(' -dmesg dmesgfile Create HTML output using dmesg (used with -ftrace)') 5286 - print(' -summary directory Create a summary of all test in this dir') 5087 + print('') 5088 + print('Other commands:') 5287 5089 print(' -modes List available suspend modes') 5288 5090 print(' -status Test to see if the system is enabled to run this tool') 5289 5091 print(' -fpdt Print out the contents of the ACPI Firmware Performance Data Table') 5092 + print(' -sysinfo Print out system info extracted from BIOS') 5290 5093 print(' -usbtopo Print out the current USB topology with power info') 5291 5094 print(' -usbauto Enable autosuspend for all connected USB devices') 5292 5095 print(' -flist Print the list of functions currently being captured in ftrace') 5293 5096 print(' -flistall Print all functions capable of being captured in ftrace') 5097 + print(' -summary directory Create a summary of all test in this dir') 5098 + print(' [redo]') 5099 + print(' -ftrace ftracefile Create HTML output using ftrace input (used with -dmesg)') 5100 + print(' -dmesg dmesgfile Create HTML output using dmesg (used with -ftrace)') 5294 5101 print('') 5295 5102 return True 5296 5103 ··· 5301 5102 # exec start (skipped if script is loaded as library) 5302 5103 if __name__ == '__main__': 5303 5104 cmd = '' 5304 - cmdarg = '' 5105 + outdir = '' 5305 5106 multitest = {'run': False, 'count': 0, 'delay': 0} 
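The option loop that follows uses a plain iterator over `sys.argv[1:]`: flags set booleans directly, while value-taking options pull their argument from the same iterator and report a friendly error when it is missing. A self-contained Python 3 sketch of that pattern (the script is Python 2 and calls `args.next()`; the option subset here is illustrative):

```python
def parse_args(argv):
    # Mirror the script's iterator-based option loop (Python 3 uses
    # next(args) where the Python 2 original calls args.next()).
    opts = {'mode': 'mem', 'outdir': '', 'dmesglog': False, 'ftracelog': False}
    args = iter(argv)
    for arg in args:
        if arg == '-m':
            try:
                opts['mode'] = next(args)      # value option: consume next token
            except StopIteration:
                raise SystemExit('No mode supplied')
        elif arg == '-o':
            try:
                opts['outdir'] = next(args)
            except StopIteration:
                raise SystemExit('No subdirectory name supplied')
        elif arg == '-addlogs':
            # flag option: -addlogs now sets both log booleans at once
            opts['dmesglog'] = opts['ftracelog'] = True
    return opts
```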
5306 - simplecmds = ['-modes', '-fpdt', '-flist', '-flistall', '-usbtopo', '-usbauto', '-status'] 5107 + simplecmds = ['-sysinfo', '-modes', '-fpdt', '-flist', '-flistall', '-usbtopo', '-usbauto', '-status'] 5307 5108 # loop through the command line arguments 5308 5109 args = iter(sys.argv[1:]) 5309 5110 for arg in args: ··· 5334 5135 elif(arg == '-f'): 5335 5136 sysvals.usecallgraph = True 5336 5137 elif(arg == '-addlogs'): 5337 - sysvals.addlogs = True 5138 + sysvals.dmesglog = sysvals.ftracelog = True 5338 5139 elif(arg == '-verbose'): 5339 5140 sysvals.verbose = True 5340 5141 elif(arg == '-proc'): ··· 5394 5195 val = args.next() 5395 5196 except: 5396 5197 doError('No subdirectory name supplied', True) 5397 - sysvals.setOutputFolder(val) 5198 + outdir = sysvals.setOutputFolder(val) 5398 5199 elif(arg == '-config'): 5399 5200 try: 5400 5201 val = args.next() ··· 5435 5236 except: 5436 5237 doError('No directory supplied', True) 5437 5238 cmd = 'summary' 5438 - cmdarg = val 5239 + outdir = val 5439 5240 sysvals.notestrun = True 5440 5241 if(os.path.isdir(val) == False): 5441 5242 doError('%s is not accesible' % val) ··· 5459 5260 sysvals.mincglen = sysvals.mindevlen 5460 5261 5461 5262 # just run a utility command and exit 5263 + sysvals.cpuInfo() 5462 5264 if(cmd != ''): 5463 5265 if(cmd == 'status'): 5464 5266 statusCheck(True) 5465 5267 elif(cmd == 'fpdt'): 5466 5268 getFPDT(True) 5269 + elif(cmd == 'sysinfo'): 5270 + sysvals.printSystemInfo() 5467 5271 elif(cmd == 'usbtopo'): 5468 5272 detectUSB() 5469 5273 elif(cmd == 'modes'): ··· 5478 5276 elif(cmd == 'usbauto'): 5479 5277 setUSBDevicesAuto() 5480 5278 elif(cmd == 'summary'): 5481 - runSummary(cmdarg, True) 5279 + runSummary(outdir, True) 5482 5280 sys.exit() 5483 5281 5484 5282 # if instructed, re-analyze existing data files ··· 5491 5289 print('Check FAILED, aborting the test run!') 5492 5290 sys.exit() 5493 5291 5292 + # extract mem modes and convert 5293 + mode = sysvals.suspendmode 5294 + if 'mem' == 
mode[:3]: 5295 + if '-' in mode: 5296 + memmode = mode.split('-')[-1] 5297 + else: 5298 + memmode = 'deep' 5299 + if memmode == 'shallow': 5300 + mode = 'standby' 5301 + elif memmode == 's2idle': 5302 + mode = 'freeze' 5303 + else: 5304 + mode = 'mem' 5305 + sysvals.memmode = memmode 5306 + sysvals.suspendmode = mode 5307 + 5308 + sysvals.systemInfo(dmidecode(sysvals.mempath)) 5309 + 5494 5310 if multitest['run']: 5495 5311 # run multiple tests in a separate subdirectory 5496 - s = 'x%d' % multitest['count'] 5497 - if not sysvals.outdir: 5498 - sysvals.outdir = datetime.now().strftime('suspend-'+s+'-%m%d%y-%H%M%S') 5499 - if not os.path.isdir(sysvals.outdir): 5500 - os.mkdir(sysvals.outdir) 5312 + if not outdir: 5313 + s = 'suspend-x%d' % multitest['count'] 5314 + outdir = datetime.now().strftime(s+'-%y%m%d-%H%M%S') 5315 + if not os.path.isdir(outdir): 5316 + os.mkdir(outdir) 5501 5317 for i in range(multitest['count']): 5502 5318 if(i != 0): 5503 5319 print('Waiting %d seconds...' % (multitest['delay'])) 5504 5320 time.sleep(multitest['delay']) 5505 5321 print('TEST (%d/%d) START' % (i+1, multitest['count'])) 5506 - runTest(sysvals.outdir) 5322 + fmt = 'suspend-%y%m%d-%H%M%S' 5323 + sysvals.testdir = os.path.join(outdir, datetime.now().strftime(fmt)) 5324 + runTest() 5507 5325 print('TEST (%d/%d) COMPLETE' % (i+1, multitest['count'])) 5508 - runSummary(sysvals.outdir, False) 5326 + runSummary(outdir, False) 5509 5327 else: 5328 + if outdir: 5329 + sysvals.testdir = outdir 5510 5330 # run the test in the current directory 5511 - runTest('.', sysvals.outdir) 5331 + runTest()
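The "extract mem modes and convert" hunk above is easier to follow in isolation. Here is that same mapping as a standalone sketch (the function name `parse_mem_mode` is ours, not part of sleepgraph.py): an extended `mem-<variant>` mode string is split into the base suspend mode written to /sys/power/state and a memmode variant, with plain `mem` defaulting to `deep`.

```python
def parse_mem_mode(mode):
	# Mirror of the mode normalization in the diff above: split an
	# extended 'mem-<variant>' string into (suspendmode, memmode).
	memmode = ''
	if 'mem' == mode[:3]:
		# 'mem-shallow' -> 'shallow'; a bare 'mem' defaults to 'deep'
		memmode = mode.split('-')[-1] if '-' in mode else 'deep'
		if memmode == 'shallow':
			mode = 'standby'
		elif memmode == 's2idle':
			mode = 'freeze'
		else:
			mode = 'mem'
	return mode, memmode
```

Non-mem modes pass through untouched, e.g. `parse_mem_mode('freeze')` returns `('freeze', '')`.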
tools/power/pm-graph/bootgraph.8 (+46 -15)
···
 .RB [ COMMAND ]
 .SH DESCRIPTION
 \fBbootgraph \fP reads the dmesg log from kernel boot and
-creates an html representation of the initcall timeline up to the start
-of the init process.
+creates an html representation of the initcall timeline. It graphs
+every module init call found, through both kernel and user modes. The
+timeline is split into two phases: kernel mode & user mode. kernel mode
+represents a single process run on a single cpu with serial init calls.
+Once user mode begins, the init process is called, and the init calls
+start working in parallel.
 .PP
 If no specific command is given, the tool reads the current dmesg log and
-outputs bootgraph.html.
+outputs a new timeline.
 .PP
 The tool can also augment the timeline with ftrace data on custom target
 functions as well as full trace callgraphs.
+.PP
+Generates output files in subdirectory: boot-yymmdd-HHMMSS
+   html timeline   :  <hostname>_boot.html
+   raw dmesg file  :  <hostname>_boot_dmesg.txt
+   raw ftrace file :  <hostname>_boot_ftrace.txt
 .SH OPTIONS
 .TP
 \fB-h\fR
···
 Add the dmesg log to the html output. It will be viewable by
 clicking a button in the timeline.
 .TP
-\fB-o \fIfile\fR
-Override the HTML output filename (default: bootgraph.html)
-.SS "Ftrace Debug"
+\fB-o \fIname\fR
+Overrides the output subdirectory name when running a new test.
+Use {date}, {time}, {hostname} for current values.
+.sp
+e.g. boot-{hostname}-{date}-{time}
+.SS "advanced"
 .TP
 \fB-f\fR
 Use ftrace to add function detail (default: disabled)
 .TP
 \fB-callgraph\fR
-Use ftrace to create initcall callgraphs (default: disabled). If -filter
+Use ftrace to create initcall callgraphs (default: disabled). If -func
 is not used there will be one callgraph per initcall. This can produce
 very large outputs, i.e. 10MB - 100MB.
 .TP
···
 which are barely visible in the timeline.
 The value is a float: e.g. 0.001 represents 1 us.
 .TP
+\fB-cgfilter \fI"func1,func2,..."\fR
+Reduce callgraph output in the timeline by limiting it to a list of calls. The
+argument can be a single function name or a comma delimited list.
+(default: none)
+.TP
 \fB-timeprec \fIn\fR
 Number of significant digits in timestamps (0:S, 3:ms, [6:us])
 .TP
 \fB-expandcg\fR
 pre-expand the callgraph data in the html output (default: disabled)
 .TP
-\fB-filter \fI"func1,func2,..."\fR
+\fB-func \fI"func1,func2,..."\fR
 Instead of tracing each initcall, trace a custom list of functions (default: do_one_initcall)
-
-.SH COMMANDS
 .TP
 \fB-reboot\fR
 Reboot the machine and generate a new timeline automatically. Works in 4 steps.
···
 1. append the string to the kernel command line via your native boot manager.
 2. reboot the system
 3. after startup, re-run the tool with the same arguments and no command
+
+.SH COMMANDS
+.SS "rebuild"
 .TP
 \fB-dmesg \fIfile\fR
 Create HTML output from an existing dmesg file.
 .TP
 \fB-ftrace \fIfile\fR
 Create HTML output from an existing ftrace file (used with -dmesg).
+.SS "other"
 .TP
 \fB-flistall\fR
 Print all ftrace functions capable of being captured. These are all the
-possible values you can add to trace via the -filter argument.
+possible values you can add to trace via the -func argument.
+.TP
+\fB-sysinfo\fR
+Print out system info extracted from BIOS. Reads /dev/mem directly instead of going through dmidecode.
 
 .SH EXAMPLES
 Create a timeline using the current dmesg log.
···
 .IP
 \f(CW$ bootgraph -callgraph\fR
 .PP
-Create a timeline using the current dmesg, add the log to the html and change the name.
+Create a timeline using the current dmesg, add the log to the html and change the folder.
 .IP
-\f(CW$ bootgraph -addlogs -o myboot.html\fR
+\f(CW$ bootgraph -addlogs -o "myboot-{date}-{time}"\fR
 .PP
 Capture a new boot timeline by automatically rebooting the machine.
 .IP
-\f(CW$ sudo bootgraph -reboot -addlogs -o latestboot.html\fR
+\f(CW$ sudo bootgraph -reboot -addlogs -o "latest-{hostname}"\fR
 .PP
 Capture a new boot timeline with function trace data.
 .IP
···
 .PP
 Capture a new boot timeline with callgraph data over custom functions.
 .IP
-\f(CW$ sudo bootgraph -reboot -callgraph -filter "acpi_ps_parse_aml,msleep"\fR
+\f(CW$ sudo bootgraph -reboot -callgraph -func "acpi_ps_parse_aml,msleep"\fR
 .PP
 Capture a brand new boot timeline with manual reboot.
 .IP
···
 .IP
 \f(CW$ sudo bootgraph -callgraph # re-run the tool after restart\fR
 .PP
+.SS "rebuild timeline from logs"
+.PP
+Rebuild the html from a previous run's logs, using the same options.
+.IP
+\f(CW$ bootgraph -dmesg dmesg.txt -ftrace ftrace.txt -callgraph\fR
+.PP
+Rebuild the html with different options.
+.IP
+\f(CW$ bootgraph -dmesg dmesg.txt -ftrace ftrace.txt -addlogs\fR
 
 .SH "SEE ALSO"
 dmesg(1), update-grub(8), crontab(1), reboot(8)
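The man pages now document `{date}`, `{time}` and `{hostname}` token substitution in the `-o` argument. A minimal sketch of that documented behavior is below; the helper name `format_output_folder` is hypothetical (the real logic lives in `setOutputFolder()` inside the tools, which may differ), but the token semantics match what the pages describe.

```python
import platform
from datetime import datetime

def format_output_folder(template):
	# Hypothetical illustration of the documented '-o name' templating:
	# {hostname}, {date} and {time} are replaced with current values,
	# using the same yymmdd / HHMMSS forms as the default folder names.
	now = datetime.now()
	return (template
		.replace('{hostname}', platform.node())
		.replace('{date}', now.strftime('%y%m%d'))
		.replace('{time}', now.strftime('%H%M%S')))
```

For example, `format_output_folder('boot-{hostname}-{date}-{time}')` yields a folder name like the one in the bootgraph example above.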
tools/power/pm-graph/sleepgraph.8 (+29 -19)
···
 \fB-m \fImode\fR
 Mode to initiate for suspend e.g. standby, freeze, mem (default: mem).
 .TP
-\fB-o \fIsubdir\fR
-Override the output subdirectory. Use {date}, {time}, {hostname} for current values.
+\fB-o \fIname\fR
+Overrides the output subdirectory name when running a new test.
+Use {date}, {time}, {hostname} for current values.
 .sp
 e.g. suspend-{hostname}-{date}-{time}
 .TP
···
 Add the dmesg and ftrace logs to the html output. They will be viewable by
 clicking buttons in the timeline.
 
-.SS "Advanced"
+.SS "advanced"
 .TP
 \fB-cmd \fIstr\fR
 Run the timeline over a custom suspend command, e.g. pm-suspend. By default
···
 Execute \fIn\fR consecutive tests at \fId\fR seconds intervals. The outputs will
 be created in a new subdirectory with a summary page: suspend-xN-{date}-{time}.
 
-.SS "Ftrace Debug"
+.SS "ftrace debug"
 .TP
 \fB-f\fR
 Use ftrace to create device callgraphs (default: disabled). This can produce
···
 
 .SH COMMANDS
 .TP
-\fB-ftrace \fIfile\fR
-Create HTML output from an existing ftrace file.
-.TP
-\fB-dmesg \fIfile\fR
-Create HTML output from an existing dmesg file.
-.TP
 \fB-summary \fIindir\fR
 Create a summary page of all tests in \fIindir\fR. Creates summary.html
 in the current folder. The output page is a table of tests with
···
 .TP
 \fB-fpdt\fR
 Print out the contents of the ACPI Firmware Performance Data Table.
+.TP
+\fB-sysinfo\fR
+Print out system info extracted from BIOS. Reads /dev/mem directly instead of going through dmidecode.
 .TP
 \fB-usbtopo\fR
 Print out the current USB topology with power info.
···
 \fB-flistall\fR
 Print all ftrace functions capable of being captured. These are all the
 possible values you can add to trace via the -fadd argument.
+.SS "rebuild"
+.TP
+\fB-ftrace \fIfile\fR
+Create HTML output from an existing ftrace file.
+.TP
+\fB-dmesg \fIfile\fR
+Create HTML output from an existing dmesg file.
 
 .SH EXAMPLES
-.SS "Simple Commands"
+.SS "simple commands"
 Check which suspend modes are currently supported.
 .IP
 \f(CW$ sleepgraph -modes\fR
···
 .IP
 \f(CW$ sleepgraph -summary ~/workspace/myresults/\fR
 .PP
-Re-generate the html output from a previous run's dmesg and ftrace log.
-.IP
-\f(CW$ sleepgraph -dmesg myhost_mem_dmesg.txt -ftrace myhost_mem_ftrace.txt\fR
-.PP
 
-.SS "Capturing Simple Timelines"
+.SS "capturing basic timelines"
 Execute a mem suspend with a 15 second wakeup. Include the logs in the html.
 .IP
 \f(CW$ sudo sleepgraph -rtcwake 15 -addlogs\fR
···
 \f(CW$ sudo sleepgraph -m freeze -rtcwake off -o "freeze-{hostname}-{date}-{time}"\fR
 .PP
 
-.SS "Capturing Advanced Timelines"
+.SS "capturing advanced timelines"
 Execute a suspend & include dev mode source calls, limit callbacks to 5ms or larger.
 .IP
 \f(CW$ sudo sleepgraph -m mem -rtcwake 15 -dev -mindev 5\fR
···
 \f(CW$ sudo sleepgraph -cmd "echo mem > /sys/power/state" -rtcwake 15\fR
 .PP
 
-
-.SS "Capturing Timelines with Callgraph Data"
+.SS "adding callgraph data"
 Add device callgraphs. Limit the trace depth and only show callgraphs 10ms or larger.
 .IP
 \f(CW$ sudo sleepgraph -m mem -rtcwake 15 -f -maxdepth 5 -mincg 10\fR
···
 .IP
 \f(CW$ sleepgraph -dmesg host_mem_dmesg.txt -ftrace host_mem_ftrace.txt -f -cgphase resume\fR
 .PP
+
+.SS "rebuild timeline from logs"
+.PP
+Rebuild the html from a previous run's logs, using the same options.
+.IP
+\f(CW$ sleepgraph -dmesg dmesg.txt -ftrace ftrace.txt -callgraph\fR
+.PP
+Rebuild the html with different options.
+.IP
+\f(CW$ sleepgraph -dmesg dmesg.txt -ftrace ftrace.txt -addlogs -srgap\fR
 
 .SH "SEE ALSO"
 dmesg(1)