Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"These clean up and rework the PM QoS API, address a suspend-to-idle
wakeup regression on some ACPI-based platforms, clean up and extend a
few cpuidle drivers, update multiple cpufreq drivers and cpufreq
documentation, and fix a number of issues in devfreq and several other
things all over.

Specifics:

- Clean up and rework the PM QoS API to simplify the code and reduce
the size of it (Rafael Wysocki).

- Fix a suspend-to-idle wakeup regression on Dell XPS13 9370 and
similar platforms where the USB plug/unplug events are handled by
the EC (Rafael Wysocki).

- Clean up the intel_idle and PSCI cpuidle drivers (Rafael Wysocki,
Ulf Hansson).

- Extend the haltpoll cpuidle driver so that it can be forced to run
on some systems where it refused to load (Maciej Szmigiero).

- Convert several cpufreq documents to the .rst format and move the
legacy driver documentation into one common file (Mauro Carvalho
Chehab, Rafael Wysocki).

- Update several cpufreq drivers:

* Extend and fix the imx-cpufreq-dt driver (Anson Huang).

* Improve the -EPROBE_DEFER handling and fix unwanted CPU
overclocking on i.MX6ULL in imx6q-cpufreq (Anson Huang,
Christoph Niedermaier).

* Add support for Krait based SoCs to the qcom driver (Ansuel
Smith).

* Add support for OPP_PLUS to ti-cpufreq (Lokesh Vutla).

* Add platform specific intermediate callbacks support to
cpufreq-dt and update the imx6q driver (Peng Fan).

* Simplify and consolidate some pieces of the intel_pstate
driver and update its documentation (Rafael Wysocki, Alex
Hung).

- Fix several devfreq issues:

* Remove unneeded extern keyword from a devfreq header file and
use the DEVFREQ_GOV_UPDATE_INTERVAL event name instead of
DEVFREQ_GOV_INTERVAL (Chanwoo Choi).

* Fix the handling of dev_pm_qos_remove_request() result
(Leonard Crestez).

* Use constant name for userspace governor (Pierre Kuo).

* Get rid of doc warnings and fix a typo (Christophe JAILLET).

- Use built-in RCU list checking in some places in the PM core to
avoid false-positive RCU usage warnings (Madhuparna Bhowmik).

- Add explicit READ_ONCE()/WRITE_ONCE() annotations to low-level PM
QoS routines (Qian Cai).

- Fix removal of wakeup sources to avoid NULL pointer dereferences in
a corner case (Neeraj Upadhyay).

- Clean up the handling of hibernate compat ioctls and fix the
related documentation (Eric Biggers).

- Update the idle_inject power capping driver to use variable-length
arrays instead of zero-length arrays (Gustavo Silva).

- Fix list format in a PM QoS document (Randy Dunlap).

- Make the cpufreq stats module use scnprintf() to avoid potential
buffer overflows (Takashi Iwai).

- Add pm_runtime_get_if_active() to PM-runtime API (Sakari Ailus).

- Allow no domain-idle-states DT property in generic PM domains (Ulf
Hansson).

- Fix a broken y-axis scale in the intel_pstate_tracer utility (Doug
Smythies)"

* tag 'pm-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (78 commits)
cpufreq: intel_pstate: Simplify intel_pstate_cpu_init()
tools/power/x86/intel_pstate_tracer: fix a broken y-axis scale
ACPI: PM: s2idle: Refine active GPEs check
ACPICA: Allow acpi_any_gpe_status_set() to skip one GPE
PM: sleep: wakeup: Skip wakeup_source_sysfs_remove() if device is not there
PM / devfreq: Get rid of some doc warnings
PM / devfreq: Fix handling dev_pm_qos_remove_request result
PM / devfreq: Fix a typo in a comment
PM / devfreq: Change to DEVFREQ_GOV_UPDATE_INTERVAL event name
PM / devfreq: Remove unneeded extern keyword
PM / devfreq: Use constant name of userspace governor
ACPI: PM: s2idle: Fix comment in acpi_s2idle_prepare_late()
cpufreq: qcom: Add support for krait based socs
cpufreq: imx6q-cpufreq: Improve the logic of -EPROBE_DEFER handling
cpufreq: Use scnprintf() for avoiding potential buffer overflow
cpuidle: psci: Split psci_dt_cpu_init_idle()
PM / Domains: Allow no domain-idle-states DT property in genpd when parsing
PM / hibernate: Remove unnecessary compat ioctl overrides
PM: hibernate: fix docs for ioctls that return loff_t via pointer
Documentation: intel_pstate: update links for references
...

+1667 -1676
+274
Documentation/admin-guide/pm/cpufreq_drivers.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+
+=======================================================
+Legacy Documentation of CPU Performance Scaling Drivers
+=======================================================
+
+Included below are historic documents describing assorted
+:doc:`CPU performance scaling <cpufreq>` drivers.  They are reproduced verbatim,
+with the original white space formatting and indentation preserved, except for
+the added leading space character in every line of text.
+
+
+AMD PowerNow! Drivers
+=====================
+
+::
+
+ PowerNow! and Cool'n'Quiet are AMD names for frequency
+ management capabilities in AMD processors. As the hardware
+ implementation changes in new generations of the processors,
+ there is a different cpu-freq driver for each generation.
+
+ Note that the driver's will not load on the "wrong" hardware,
+ so it is safe to try each driver in turn when in doubt as to
+ which is the correct driver.
+
+ Note that the functionality to change frequency (and voltage)
+ is not available in all processors. The drivers will refuse
+ to load on processors without this capability. The capability
+ is detected with the cpuid instruction.
+
+ The drivers use BIOS supplied tables to obtain frequency and
+ voltage information appropriate for a particular platform.
+ Frequency transitions will be unavailable if the BIOS does
+ not supply these tables.
+
+ 6th Generation: powernow-k6
+
+ 7th Generation: powernow-k7: Athlon, Duron, Geode.
+
+ 8th Generation: powernow-k8: Athlon, Athlon 64, Opteron, Sempron.
+ Documentation on this functionality in 8th generation processors
+ is available in the "BIOS and Kernel Developer's Guide", publication
+ 26094, in chapter 9, available for download from www.amd.com.
+
+ BIOS supplied data, for powernow-k7 and for powernow-k8, may be
+ from either the PSB table or from ACPI objects. The ACPI support
+ is only available if the kernel config sets CONFIG_ACPI_PROCESSOR.
+ The powernow-k8 driver will attempt to use ACPI if so configured,
+ and fall back to PST if that fails.
+ The powernow-k7 driver will try to use the PSB support first, and
+ fall back to ACPI if the PSB support fails. A module parameter,
+ acpi_force, is provided to force ACPI support to be used instead
+ of PSB support.
+
+
+``cpufreq-nforce2``
+===================
+
+::
+
+ The cpufreq-nforce2 driver changes the FSB on nVidia nForce2 platforms.
+
+ This works better than on other platforms, because the FSB of the CPU
+ can be controlled independently from the PCI/AGP clock.
+
+ The module has two options:
+
+ fid: multiplier * 10 (for example 8.5 = 85)
+ min_fsb: minimum FSB
+
+ If not set, fid is calculated from the current CPU speed and the FSB.
+ min_fsb defaults to FSB at boot time - 50 MHz.
+
+ IMPORTANT: The available range is limited downwards!
+ Also the minimum available FSB can differ, for systems
+ booting with 200 MHz, 150 should always work.
+
+
+``pcc-cpufreq``
+===============
+
+::
+
+ /*
+ * pcc-cpufreq.txt - PCC interface documentation
+ *
+ * Copyright (C) 2009 Red Hat, Matthew Garrett <mjg@redhat.com>
+ * Copyright (C) 2009 Hewlett-Packard Development Company, L.P.
+ *     Nagananda Chumbalkar <nagananda.chumbalkar@hp.com>
+ */
+
+
+ Processor Clocking Control Driver
+ ---------------------------------
+
+ Contents:
+ ---------
+ 1. Introduction
+ 1.1 PCC interface
+ 1.1.1 Get Average Frequency
+ 1.1.2 Set Desired Frequency
+ 1.2 Platforms affected
+ 2. Driver and /sys details
+ 2.1 scaling_available_frequencies
+ 2.2 cpuinfo_transition_latency
+ 2.3 cpuinfo_cur_freq
+ 2.4 related_cpus
+ 3. Caveats
+
+ 1. Introduction:
+ ----------------
+ Processor Clocking Control (PCC) is an interface between the platform
+ firmware and OSPM. It is a mechanism for coordinating processor
+ performance (ie: frequency) between the platform firmware and the OS.
+
+ The PCC driver (pcc-cpufreq) allows OSPM to take advantage of the PCC
+ interface.
+
+ OS utilizes the PCC interface to inform platform firmware what frequency the
+ OS wants for a logical processor. The platform firmware attempts to achieve
+ the requested frequency. If the request for the target frequency could not be
+ satisfied by platform firmware, then it usually means that power budget
+ conditions are in place, and "power capping" is taking place.
+
+ 1.1 PCC interface:
+ ------------------
+ The complete PCC specification is available here:
+ https://acpica.org/sites/acpica/files/Processor-Clocking-Control-v1p0.pdf
+
+ PCC relies on a shared memory region that provides a channel for communication
+ between the OS and platform firmware. PCC also implements a "doorbell" that
+ is used by the OS to inform the platform firmware that a command has been
+ sent.
+
+ The ACPI PCCH() method is used to discover the location of the PCC shared
+ memory region. The shared memory region header contains the "command" and
+ "status" interface. PCCH() also contains details on how to access the platform
+ doorbell.
+
+ The following commands are supported by the PCC interface:
+ * Get Average Frequency
+ * Set Desired Frequency
+
+ The ACPI PCCP() method is implemented for each logical processor and is
+ used to discover the offsets for the input and output buffers in the shared
+ memory region.
+
+ When PCC mode is enabled, the platform will not expose processor performance
+ or throttle states (_PSS, _TSS and related ACPI objects) to OSPM. Therefore,
+ the native P-state driver (such as acpi-cpufreq for Intel, powernow-k8 for
+ AMD) will not load.
+
+ However, OSPM remains in control of policy. The governor (eg: "ondemand")
+ computes the required performance for each processor based on server workload.
+ The PCC driver fills in the command interface, and the input buffer and
+ communicates the request to the platform firmware. The platform firmware is
+ responsible for delivering the requested performance.
+
+ Each PCC command is "global" in scope and can affect all the logical CPUs in
+ the system. Therefore, PCC is capable of performing "group" updates. With PCC
+ the OS is capable of getting/setting the frequency of all the logical CPUs in
+ the system with a single call to the BIOS.
+
+ 1.1.1 Get Average Frequency:
+ ----------------------------
+ This command is used by the OSPM to query the running frequency of the
+ processor since the last time this command was completed. The output buffer
+ indicates the average unhalted frequency of the logical processor expressed as
+ a percentage of the nominal (ie: maximum) CPU frequency. The output buffer
+ also signifies if the CPU frequency is limited by a power budget condition.
+
+ 1.1.2 Set Desired Frequency:
+ ----------------------------
+ This command is used by the OSPM to communicate to the platform firmware the
+ desired frequency for a logical processor. The output buffer is currently
+ ignored by OSPM. The next invocation of "Get Average Frequency" will inform
+ OSPM if the desired frequency was achieved or not.
+
+ 1.2 Platforms affected:
+ -----------------------
+ The PCC driver will load on any system where the platform firmware:
+ * supports the PCC interface, and the associated PCCH() and PCCP() methods
+ * assumes responsibility for managing the hardware clocking controls in order
+   to deliver the requested processor performance
+
+ Currently, certain HP ProLiant platforms implement the PCC interface. On those
+ platforms PCC is the "default" choice.
+
+ However, it is possible to disable this interface via a BIOS setting. In
+ such an instance, as is also the case on platforms where the PCC interface
+ is not implemented, the PCC driver will fail to load silently.
+
+ 2. Driver and /sys details:
+ ---------------------------
+ When the driver loads, it merely prints the lowest and the highest CPU
+ frequencies supported by the platform firmware.
+
+ The PCC driver loads with a message such as:
+ pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2933
+ MHz
+
+ This means that the OPSM can request the CPU to run at any frequency in
+ between the limits (1600 MHz, and 2933 MHz) specified in the message.
+
+ Internally, there is no need for the driver to convert the "target" frequency
+ to a corresponding P-state.
+
+ The VERSION number for the driver will be of the format v.xy.ab.
+ eg: 1.00.02
+    ----- --
+     |  |
+     |  -- this will increase with bug fixes/enhancements to the driver
+     |-- this is the version of the PCC specification the driver adheres to
+
+
+ The following is a brief discussion on some of the fields exported via the
+ /sys filesystem and how their values are affected by the PCC driver:
+
+ 2.1 scaling_available_frequencies:
+ ----------------------------------
+ scaling_available_frequencies is not created in /sys. No intermediate
+ frequencies need to be listed because the BIOS will try to achieve any
+ frequency, within limits, requested by the governor. A frequency does not have
+ to be strictly associated with a P-state.
+
+ 2.2 cpuinfo_transition_latency:
+ -------------------------------
+ The cpuinfo_transition_latency field is 0. The PCC specification does
+ not include a field to expose this value currently.
+
+ 2.3 cpuinfo_cur_freq:
+ ---------------------
+ A) Often cpuinfo_cur_freq will show a value different than what is declared
+ in the scaling_available_frequencies or scaling_cur_freq, or scaling_max_freq.
+ This is due to "turbo boost" available on recent Intel processors. If certain
+ conditions are met the BIOS can achieve a slightly higher speed than requested
+ by OSPM. An example:
+
+ scaling_cur_freq : 2933000
+ cpuinfo_cur_freq : 3196000
+
+ B) There is a round-off error associated with the cpuinfo_cur_freq value.
+ Since the driver obtains the current frequency as a "percentage" (%) of the
+ nominal frequency from the BIOS, sometimes, the values displayed by
+ scaling_cur_freq and cpuinfo_cur_freq may not match. An example:
+
+ scaling_cur_freq : 1600000
+ cpuinfo_cur_freq : 1583000
+
+ In this example, the nominal frequency is 2933 MHz. The driver obtains the
+ current frequency, cpuinfo_cur_freq, as 54% of the nominal frequency:
+
+ 54% of 2933 MHz = 1583 MHz
+
+ Nominal frequency is the maximum frequency of the processor, and it usually
+ corresponds to the frequency of the P0 P-state.
+
+ 2.4 related_cpus:
+ -----------------
+ The related_cpus field is identical to affected_cpus.
+
+ affected_cpus : 4
+ related_cpus : 4
+
+ Currently, the PCC driver does not evaluate _PSD. The platforms that support
+ PCC do not implement SW_ALL. So OSPM doesn't need to perform any coordination
+ to ensure that the same frequency is requested of all dependent CPUs.
+
+ 3. Caveats:
+ -----------
+ The "cpufreq_stats" module in its present form cannot be loaded and
+ expected to work with the PCC driver. Since the "cpufreq_stats" module
+ provides information wrt each P-state, it is not applicable to the PCC driver.
+36 -37
Documentation/admin-guide/pm/cpuidle.rst
···
 The power management quality of service (PM QoS) framework in the Linux kernel
 allows kernel code and user space processes to set constraints on various
 energy-efficiency features of the kernel to prevent performance from dropping
-below a required level. The PM QoS constraints can be set globally, in
-predefined categories referred to as PM QoS classes, or against individual
-devices.
+below a required level.
 
 CPU idle time management can be affected by PM QoS in two ways, through the
-global constraint in the ``PM_QOS_CPU_DMA_LATENCY`` class and through the
-resume latency constraints for individual CPUs. Kernel code (e.g. device
-drivers) can set both of them with the help of special internal interfaces
-provided by the PM QoS framework. User space can modify the former by opening
-the :file:`cpu_dma_latency` special device file under :file:`/dev/` and writing
-a binary value (interpreted as a signed 32-bit integer) to it. In turn, the
-resume latency constraint for a CPU can be modified by user space by writing a
-string (representing a signed 32-bit integer) to the
-:file:`power/pm_qos_resume_latency_us` file under
+global CPU latency limit and through the resume latency constraints for
+individual CPUs. Kernel code (e.g. device drivers) can set both of them with
+the help of special internal interfaces provided by the PM QoS framework. User
+space can modify the former by opening the :file:`cpu_dma_latency` special
+device file under :file:`/dev/` and writing a binary value (interpreted as a
+signed 32-bit integer) to it. In turn, the resume latency constraint for a CPU
+can be modified from user space by writing a string (representing a signed
+32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
 :file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
 ``<N>`` is allocated at the system initialization time. Negative values
 will be rejected in both cases and, also in both cases, the written integer
···
 The requested value is not automatically applied as a new constraint, however,
 as it may be less restrictive (greater in this particular case) than another
 constraint previously requested by someone else. For this reason, the PM QoS
-framework maintains a list of requests that have been made so far in each
-global class and for each device, aggregates them and applies the effective
-(minimum in this particular case) value as the new constraint.
+framework maintains a list of requests that have been made so far for the
+global CPU latency limit and for each individual CPU, aggregates them and
+applies the effective (minimum in this particular case) value as the new
+constraint.
 
 In fact, opening the :file:`cpu_dma_latency` special device file causes a new
-PM QoS request to be created and added to the priority list of requests in the
-``PM_QOS_CPU_DMA_LATENCY`` class and the file descriptor coming from the
-"open" operation represents that request. If that file descriptor is then
-used for writing, the number written to it will be associated with the PM QoS
-request represented by it as a new requested constraint value. Next, the
-priority list mechanism will be used to determine the new effective value of
-the entire list of requests and that effective value will be set as a new
-constraint. Thus setting a new requested constraint value will only change the
-real constraint if the effective "list" value is affected by it. In particular,
-for the ``PM_QOS_CPU_DMA_LATENCY`` class it only affects the real constraint if
-it is the minimum of the requested constraints in the list. The process holding
-a file descriptor obtained by opening the :file:`cpu_dma_latency` special device
-file controls the PM QoS request associated with that file descriptor, but it
-controls this particular PM QoS request only.
+PM QoS request to be created and added to a global priority list of CPU latency
+limit requests and the file descriptor coming from the "open" operation
+represents that request. If that file descriptor is then used for writing, the
+number written to it will be associated with the PM QoS request represented by
+it as a new requested limit value. Next, the priority list mechanism will be
+used to determine the new effective value of the entire list of requests and
+that effective value will be set as a new CPU latency limit. Thus requesting a
+new limit value will only change the real limit if the effective "list" value is
+affected by it, which is the case if it is the minimum of the requested values
+in the list.
+
+The process holding a file descriptor obtained by opening the
+:file:`cpu_dma_latency` special device file controls the PM QoS request
+associated with that file descriptor, but it controls this particular PM QoS
+request only.
 
 Closing the :file:`cpu_dma_latency` special device file or, more precisely, the
 file descriptor obtained while opening it, causes the PM QoS request associated
-with that file descriptor to be removed from the ``PM_QOS_CPU_DMA_LATENCY``
-class priority list and destroyed. If that happens, the priority list mechanism
-will be used, again, to determine the new effective value for the whole list
-and that value will become the new real constraint.
+with that file descriptor to be removed from the global priority list of CPU
+latency limit requests and destroyed. If that happens, the priority list
+mechanism will be used again, to determine the new effective value for the whole
+list and that value will become the new limit.
 
 In turn, for each CPU there is one resume latency PM QoS request associated with
 the :file:`power/pm_qos_resume_latency_us` file under
···
 (there may be other requests coming from kernel code in that list).
 
 CPU idle time governors are expected to regard the minimum of the global
-effective ``PM_QOS_CPU_DMA_LATENCY`` class constraint and the effective
-resume latency constraint for the given CPU as the upper limit for the exit
-latency of the idle states they can select for that CPU. They should never
-select any idle states with exit latency beyond that limit.
+(effective) CPU latency limit and the effective resume latency constraint for
+the given CPU as the upper limit for the exit latency of the idle states that
+they are allowed to select for that CPU. They should never select any idle
+states with exit latency beyond that limit.
 
 
 Idle States Control Via Kernel Command Line
+2 -2
Documentation/admin-guide/pm/intel_pstate.rst
···
 ==========
 
 .. [1] Kristen Accardi, *Balancing Power and Performance in the Linux Kernel*,
-       http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf
+       https://events.static.linuxfound.org/sites/events/files/slides/LinuxConEurope_2015.pdf
 
 .. [2] *Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3: System Programming Guide*,
-       http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html
+       https://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html
 
 .. [3] *Advanced Configuration and Power Interface Specification*,
        https://uefi.org/sites/default/files/resources/ACPI_6_3_final_Jan30.pdf
+1
Documentation/admin-guide/pm/working-state.rst
···
    intel_idle
    cpufreq
    intel_pstate
+   cpufreq_drivers
    intel_epb
-38
Documentation/cpu-freq/amd-powernow.txt
···
-
-PowerNow! and Cool'n'Quiet are AMD names for frequency
-management capabilities in AMD processors. As the hardware
-implementation changes in new generations of the processors,
-there is a different cpu-freq driver for each generation.
-
-Note that the driver's will not load on the "wrong" hardware,
-so it is safe to try each driver in turn when in doubt as to
-which is the correct driver.
-
-Note that the functionality to change frequency (and voltage)
-is not available in all processors. The drivers will refuse
-to load on processors without this capability. The capability
-is detected with the cpuid instruction.
-
-The drivers use BIOS supplied tables to obtain frequency and
-voltage information appropriate for a particular platform.
-Frequency transitions will be unavailable if the BIOS does
-not supply these tables.
-
-6th Generation: powernow-k6
-
-7th Generation: powernow-k7: Athlon, Duron, Geode.
-
-8th Generation: powernow-k8: Athlon, Athlon 64, Opteron, Sempron.
-Documentation on this functionality in 8th generation processors
-is available in the "BIOS and Kernel Developer's Guide", publication
-26094, in chapter 9, available for download from www.amd.com.
-
-BIOS supplied data, for powernow-k7 and for powernow-k8, may be
-from either the PSB table or from ACPI objects. The ACPI support
-is only available if the kernel config sets CONFIG_ACPI_PROCESSOR.
-The powernow-k8 driver will attempt to use ACPI if so configured,
-and fall back to PST if that fails.
-The powernow-k7 driver will try to use the PSB support first, and
-fall back to ACPI if the PSB support fails. A module parameter,
-acpi_force, is provided to force ACPI support to be used instead
-of PSB support.
+33 -32
Documentation/cpu-freq/core.txt Documentation/cpu-freq/core.rst
··· 1 - CPU frequency and voltage scaling code in the Linux(TM) kernel 1 + .. SPDX-License-Identifier: GPL-2.0 2 2 3 + ============================================================= 4 + General description of the CPUFreq core and CPUFreq notifiers 5 + ============================================================= 3 6 4 - L i n u x C P U F r e q 7 + Authors: 8 + - Dominik Brodowski <linux@brodo.de> 9 + - David Kimdon <dwhedon@debian.org> 10 + - Rafael J. Wysocki <rafael.j.wysocki@intel.com> 11 + - Viresh Kumar <viresh.kumar@linaro.org> 5 12 6 - C P U F r e q C o r e 13 + .. Contents: 7 14 8 - 9 - Dominik Brodowski <linux@brodo.de> 10 - David Kimdon <dwhedon@debian.org> 11 - Rafael J. Wysocki <rafael.j.wysocki@intel.com> 12 - Viresh Kumar <viresh.kumar@linaro.org> 13 - 14 - 15 - 16 - Clock scaling allows you to change the clock speed of the CPUs on the 17 - fly. This is a nice method to save battery power, because the lower 18 - the clock speed, the less power the CPU consumes. 19 - 20 - 21 - Contents: 22 - --------- 23 - 1. CPUFreq core and interfaces 24 - 2. CPUFreq notifiers 25 - 3. CPUFreq Table Generation with Operating Performance Point (OPP) 15 + 1. CPUFreq core and interfaces 16 + 2. CPUFreq notifiers 17 + 3. CPUFreq Table Generation with Operating Performance Point (OPP) 26 18 27 19 1. General Information 28 - ======================= 20 + ====================== 29 21 30 22 The CPUFreq core code is located in drivers/cpufreq/cpufreq.c. This 31 23 cpufreq code offers a standardized interface for the CPUFreq ··· 55 63 CPUFREQ_CREATE_POLICY when the policy is first created and it is 56 64 CPUFREQ_REMOVE_POLICY when the policy is removed. 57 65 58 - The third argument, a void *pointer, points to a struct cpufreq_policy 66 + The third argument, a ``void *pointer``, points to a struct cpufreq_policy 59 67 consisting of several values, including min, max (the lower and upper 60 68 frequencies (in kHz) of the new policy). 
61 69 ··· 72 80 73 81 The third argument is a struct cpufreq_freqs with the following 74 82 values: 75 - cpu - number of the affected CPU 76 - old - old frequency 77 - new - new frequency 78 - flags - flags of the cpufreq driver 83 + 84 + ===== =========================== 85 + cpu number of the affected CPU 86 + old old frequency 87 + new new frequency 88 + flags flags of the cpufreq driver 89 + ===== =========================== 79 90 80 91 3. CPUFreq Table Generation with Operating Performance Point (OPP) 81 92 ================================================================== ··· 89 94 the OPP layer's internal information about the available frequencies 90 95 into a format readily providable to cpufreq. 91 96 92 - WARNING: Do not use this function in interrupt context. 97 + .. Warning:: 93 98 94 - Example: 99 + Do not use this function in interrupt context. 100 + 101 + Example:: 102 + 95 103 soc_pm_init() 96 104 { 97 105 /* Do things */ ··· 104 106 /* Do other things */ 105 107 } 106 108 107 - NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in 108 - addition to CONFIG_PM_OPP. 109 + .. note:: 109 110 110 - dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table 111 + This function is available only if CONFIG_CPU_FREQ is enabled in 112 + addition to CONFIG_PM_OPP. 113 + 114 + dev_pm_opp_free_cpufreq_table 115 + Free up the table allocated by dev_pm_opp_init_cpufreq_table
+65 -68
Documentation/cpu-freq/cpu-drivers.txt Documentation/cpu-freq/cpu-drivers.rst
··· 1 - CPU frequency and voltage scaling code in the Linux(TM) kernel 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =============================================== 4 + How to Implement a new CPUFreq Processor Driver 5 + =============================================== 6 + 7 + Authors: 2 8 3 9 4 - L i n u x C P U F r e q 10 + - Dominik Brodowski <linux@brodo.de> 11 + - Rafael J. Wysocki <rafael.j.wysocki@intel.com> 12 + - Viresh Kumar <viresh.kumar@linaro.org> 5 13 6 - C P U D r i v e r s 14 + .. Contents 7 15 8 - - information for developers - 9 - 10 - 11 - Dominik Brodowski <linux@brodo.de> 12 - Rafael J. Wysocki <rafael.j.wysocki@intel.com> 13 - Viresh Kumar <viresh.kumar@linaro.org> 14 - 15 - 16 - 17 - Clock scaling allows you to change the clock speed of the CPUs on the 18 - fly. This is a nice method to save battery power, because the lower 19 - the clock speed, the less power the CPU consumes. 20 - 21 - 22 - Contents: 23 - --------- 24 - 1. What To Do? 25 - 1.1 Initialization 26 - 1.2 Per-CPU Initialization 27 - 1.3 verify 28 - 1.4 target/target_index or setpolicy? 29 - 1.5 target/target_index 30 - 1.6 setpolicy 31 - 1.7 get_intermediate and target_intermediate 32 - 2. Frequency Table Helpers 16 + 1. What To Do? 17 + 1.1 Initialization 18 + 1.2 Per-CPU Initialization 19 + 1.3 verify 20 + 1.4 target/target_index or setpolicy? 21 + 1.5 target/target_index 22 + 1.6 setpolicy 23 + 1.7 get_intermediate and target_intermediate 24 + 2. Frequency Table Helpers 33 25 34 26 35 27 ··· 41 49 chipset. If so, register a struct cpufreq_driver with the CPUfreq core 42 50 using cpufreq_register_driver() 43 51 44 - What shall this struct cpufreq_driver contain? 52 + What shall this struct cpufreq_driver contain? 45 53 46 54 .name - The name of this driver. 47 55 ··· 100 108 cpufreq driver registers itself, the per-policy initialization function 101 109 cpufreq_driver.init is called if no cpufreq policy existed for the CPU. 
102 110 Note that the .init() and .exit() routines are called only once for the 103 - policy and not for each CPU managed by the policy. It takes a struct 104 - cpufreq_policy *policy as argument. What to do now? 111 + policy and not for each CPU managed by the policy. It takes a ``struct 112 + cpufreq_policy *policy`` as argument. What to do now? 105 113 106 114 If necessary, activate the CPUfreq support on your CPU. 107 115 108 116 Then, the driver must fill in the following values: 109 117 110 - policy->cpuinfo.min_freq _and_ 111 - policy->cpuinfo.max_freq - the minimum and maximum frequency 112 - (in kHz) which is supported by 113 - this CPU 114 - policy->cpuinfo.transition_latency the time it takes on this CPU to 115 - switch between two frequencies in 116 - nanoseconds (if appropriate, else 117 - specify CPUFREQ_ETERNAL) 118 - 119 - policy->cur The current operating frequency of 120 - this CPU (if appropriate) 121 - policy->min, 122 - policy->max, 123 - policy->policy and, if necessary, 124 - policy->governor must contain the "default policy" for 125 - this CPU. A few moments later, 126 - cpufreq_driver.verify and either 127 - cpufreq_driver.setpolicy or 128 - cpufreq_driver.target/target_index is called 129 - with these values. 130 - policy->cpus Update this with the masks of the 131 - (online + offline) CPUs that do DVFS 132 - along with this CPU (i.e. that share 133 - clock/voltage rails with it). 
118 + +-----------------------------------+--------------------------------------+ 119 + |policy->cpuinfo.min_freq _and_ | | 120 + |policy->cpuinfo.max_freq | the minimum and maximum frequency | 121 + | | (in kHz) which is supported by | 122 + | | this CPU | 123 + +-----------------------------------+--------------------------------------+ 124 + |policy->cpuinfo.transition_latency | the time it takes on this CPU to | 125 + | | switch between two frequencies in | 126 + | | nanoseconds (if appropriate, else | 127 + | | specify CPUFREQ_ETERNAL) | 128 + +-----------------------------------+--------------------------------------+ 129 + |policy->cur | The current operating frequency of | 130 + | | this CPU (if appropriate) | 131 + +-----------------------------------+--------------------------------------+ 132 + |policy->min, | | 133 + |policy->max, | | 134 + |policy->policy and, if necessary, | | 135 + |policy->governor | must contain the "default policy" for| 136 + | | this CPU. A few moments later, | 137 + | | cpufreq_driver.verify and either | 138 + | | cpufreq_driver.setpolicy or | 139 + | | cpufreq_driver.target/target_index is| 140 + | | called with these values. | 141 + +-----------------------------------+--------------------------------------+ 142 + |policy->cpus | Update this with the masks of the | 143 + | | (online + offline) CPUs that do DVFS | 144 + | | along with this CPU (i.e. that share| 145 + | | clock/voltage rails with it). | 146 + +-----------------------------------+--------------------------------------+ 134 147 135 148 For setting some of these values (cpuinfo.min[max]_freq, policy->min[max]), the 136 149 frequency table helpers might be helpful. See the section 2 for more information ··· 148 151 When the user decides a new policy (consisting of 149 152 "policy,governor,min,max") shall be set, this policy must be validated 150 153 so that incompatible values can be corrected. 
For verifying these 151 - values cpufreq_verify_within_limits(struct cpufreq_policy *policy, 152 - unsigned int min_freq, unsigned int max_freq) function might be helpful. 154 + values cpufreq_verify_within_limits(``struct cpufreq_policy *policy``, 155 + ``unsigned int min_freq``, ``unsigned int max_freq``) function might be helpful. 153 156 See section 2 for details on frequency table helpers. 154 157 155 158 You need to make sure that at least one valid frequency (or operating ··· 160 163 1.4 target or target_index or setpolicy or fast_switch? 161 164 ------------------------------------------------------- 162 165 163 - Most cpufreq drivers or even most cpu frequency scaling algorithms 166 + Most cpufreq drivers or even most cpu frequency scaling algorithms 164 167 only allow the CPU frequency to be set to predefined fixed values. For 165 168 these, you use the ->target(), ->target_index() or ->fast_switch() 166 169 callbacks. ··· 172 175 1.5. target/target_index 173 176 ------------------------ 174 177 175 - The target_index call has two arguments: struct cpufreq_policy *policy, 176 - and unsigned int index (into the exposed frequency table). 178 + The target_index call has two arguments: ``struct cpufreq_policy *policy``, 179 + and ``unsigned int`` index (into the exposed frequency table). 177 180 178 181 The CPUfreq driver must set the new frequency when called here. The 179 182 actual frequency must be determined by freq_table[index].frequency. ··· 181 184 It should always restore to earlier frequency (i.e. policy->restore_freq) in 182 185 case of errors, even if we switched to intermediate frequency earlier. 183 186 184 - Deprecated: 187 + Deprecated 185 188 ---------- 186 - The target call has three arguments: struct cpufreq_policy *policy, 189 + The target call has three arguments: ``struct cpufreq_policy *policy``, 187 190 unsigned int target_frequency, unsigned int relation. 188 191 189 192 The CPUfreq driver must set the new frequency when called here. 
The ··· 207 210 this callback isn't allowed. This callback must be highly optimized to 208 211 do switching as fast as possible. 209 212 210 - This function has two arguments: struct cpufreq_policy *policy and 211 - unsigned int target_frequency. 213 + This function has two arguments: ``struct cpufreq_policy *policy`` and 214 + ``unsigned int target_frequency``. 212 215 213 216 214 217 1.7 setpolicy 215 218 ------------- 216 219 217 - The setpolicy call only takes a struct cpufreq_policy *policy as 220 + The setpolicy call only takes a ``struct cpufreq_policy *policy`` as 218 221 argument. You need to set the lower limit of the in-processor or 219 222 in-chipset dynamic frequency switching to policy->min, the upper limit 220 223 to policy->max, and -if supported- select a performance-oriented ··· 275 278 276 279 cpufreq_for_each_valid_entry(pos, table) - iterates over all entries, 277 280 excluding CPUFREQ_ENTRY_INVALID frequencies. 278 - Use arguments "pos" - a cpufreq_frequency_table * as a loop cursor and 279 - "table" - the cpufreq_frequency_table * you want to iterate over. 281 + Use arguments "pos" - a ``cpufreq_frequency_table *`` as a loop cursor and 282 + "table" - the ``cpufreq_frequency_table *`` you want to iterate over. 280 283 281 - For example: 284 + For example:: 282 285 283 286 struct cpufreq_frequency_table *pos, *driver_freq_table; 284 287
-19
Documentation/cpu-freq/cpufreq-nforce2.txt
··· 1 - 2 - The cpufreq-nforce2 driver changes the FSB on nVidia nForce2 platforms. 3 - 4 - This works better than on other platforms, because the FSB of the CPU 5 - can be controlled independently from the PCI/AGP clock. 6 - 7 - The module has two options: 8 - 9 - fid: multiplier * 10 (for example 8.5 = 85) 10 - min_fsb: minimum FSB 11 - 12 - If not set, fid is calculated from the current CPU speed and the FSB. 13 - min_fsb defaults to FSB at boot time - 50 MHz. 14 - 15 - IMPORTANT: The available range is limited downwards! 16 - Also the minimum available FSB can differ, for systems 17 - booting with 200 MHz, 150 should always work. 18 - 19 -
+136
Documentation/cpu-freq/cpufreq-stats.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ========================================== 4 + General Description of sysfs CPUFreq Stats 5 + ========================================== 6 + 7 + information for users 8 + 9 + 10 + Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> 11 + 12 + .. Contents 13 + 14 + 1. Introduction 15 + 2. Statistics Provided (with example) 16 + 3. Configuring cpufreq-stats 17 + 18 + 19 + 1. Introduction 20 + =============== 21 + 22 + cpufreq-stats is a driver that provides CPU frequency statistics for each CPU. 23 + These statistics are provided in /sysfs as a bunch of read_only interfaces. This 24 + interface (when configured) will appear in a separate directory under cpufreq 25 + in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU. 26 + Various statistics will form read_only files under this directory. 27 + 28 + This driver is designed to be independent of any particular cpufreq_driver 29 + that may be running on your CPU. So, it will work with any cpufreq_driver. 30 + 31 + 32 + 2. Statistics Provided (with example) 33 + ===================================== 34 + 35 + cpufreq stats provides following statistics (explained in detail below). 36 + 37 + - time_in_state 38 + - total_trans 39 + - trans_table 40 + 41 + All the statistics will be from the time the stats driver has been inserted 42 + (or the time the stats were reset) to the time when a read of a particular 43 + statistic is done. Obviously, stats driver will not have any information 44 + about the frequency transitions before the stats driver insertion. 45 + 46 + :: 47 + 48 + <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l 49 + total 0 50 + drwxr-xr-x 2 root root 0 May 14 16:06 . 51 + drwxr-xr-x 3 root root 0 May 14 15:58 .. 
52 + --w------- 1 root root 4096 May 14 16:06 reset 53 + -r--r--r-- 1 root root 4096 May 14 16:06 time_in_state 54 + -r--r--r-- 1 root root 4096 May 14 16:06 total_trans 55 + -r--r--r-- 1 root root 4096 May 14 16:06 trans_table 56 + 57 + - **reset** 58 + 59 + Write-only attribute that can be used to reset the stat counters. This can be 60 + useful for evaluating system behaviour under different governors without the 61 + need for a reboot. 62 + 63 + - **time_in_state** 64 + 65 + This gives the amount of time spent in each of the frequencies supported by 66 + this CPU. The cat output will have "<frequency> <time>" pair in each line, which 67 + will mean this CPU spent <time> usertime units of time at <frequency>. Output 68 + will have one line for each of the supported frequencies. usertime units here 69 + is 10mS (similar to other time exported in /proc). 70 + 71 + :: 72 + 73 + <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat time_in_state 74 + 3600000 2089 75 + 3400000 136 76 + 3200000 34 77 + 3000000 67 78 + 2800000 172488 79 + 80 + 81 + - **total_trans** 82 + 83 + This gives the total number of frequency transitions on this CPU. The cat 84 + output will have a single count which is the total number of frequency 85 + transitions. 86 + 87 + :: 88 + 89 + <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat total_trans 90 + 20 91 + 92 + - **trans_table** 93 + 94 + This will give a fine grained information about all the CPU frequency 95 + transitions. The cat output here is a two dimensional matrix, where an entry 96 + <i,j> (row i, column j) represents the count of number of transitions from 97 + Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in 98 + which the driver has provided the frequency table initially to the cpufreq core 99 + and so can be sorted (ascending or descending) or unsorted. The output here 100 + also contains the actual freq values for each row and column for better 101 + readability. 
102 + 103 + If the transition table is bigger than PAGE_SIZE, reading this will 104 + return an -EFBIG error. 105 + 106 + :: 107 + 108 + <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table 109 + From : To 110 + : 3600000 3400000 3200000 3000000 2800000 111 + 3600000: 0 5 0 0 0 112 + 3400000: 4 0 2 0 0 113 + 3200000: 0 1 0 2 0 114 + 3000000: 0 0 1 0 3 115 + 2800000: 0 0 0 2 0 116 + 117 + 3. Configuring cpufreq-stats 118 + ============================ 119 + 120 + To configure cpufreq-stats in your kernel:: 121 + 122 + Config Main Menu 123 + Power management options (ACPI, APM) ---> 124 + CPU Frequency scaling ---> 125 + [*] CPU Frequency scaling 126 + [*] CPU frequency translation statistics 127 + 128 + 129 + "CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure 130 + cpufreq-stats. 131 + 132 + "CPU frequency translation statistics" (CONFIG_CPU_FREQ_STAT) provides the 133 + statistics which includes time_in_state, total_trans and trans_table. 134 + 135 + Once this option is enabled and your CPU supports cpufrequency, you 136 + will be able to see the CPU frequency statistics in /sysfs.
-127
Documentation/cpu-freq/cpufreq-stats.txt
··· 1 - 2 - CPU frequency and voltage scaling statistics in the Linux(TM) kernel 3 - 4 - 5 - L i n u x c p u f r e q - s t a t s d r i v e r 6 - 7 - - information for users - 8 - 9 - 10 - Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> 11 - 12 - Contents 13 - 1. Introduction 14 - 2. Statistics Provided (with example) 15 - 3. Configuring cpufreq-stats 16 - 17 - 18 - 1. Introduction 19 - 20 - cpufreq-stats is a driver that provides CPU frequency statistics for each CPU. 21 - These statistics are provided in /sysfs as a bunch of read_only interfaces. This 22 - interface (when configured) will appear in a separate directory under cpufreq 23 - in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU. 24 - Various statistics will form read_only files under this directory. 25 - 26 - This driver is designed to be independent of any particular cpufreq_driver 27 - that may be running on your CPU. So, it will work with any cpufreq_driver. 28 - 29 - 30 - 2. Statistics Provided (with example) 31 - 32 - cpufreq stats provides following statistics (explained in detail below). 33 - - time_in_state 34 - - total_trans 35 - - trans_table 36 - 37 - All the statistics will be from the time the stats driver has been inserted 38 - (or the time the stats were reset) to the time when a read of a particular 39 - statistic is done. Obviously, stats driver will not have any information 40 - about the frequency transitions before the stats driver insertion. 41 - 42 - -------------------------------------------------------------------------------- 43 - <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l 44 - total 0 45 - drwxr-xr-x 2 root root 0 May 14 16:06 . 46 - drwxr-xr-x 3 root root 0 May 14 15:58 .. 
47 - --w------- 1 root root 4096 May 14 16:06 reset 48 - -r--r--r-- 1 root root 4096 May 14 16:06 time_in_state 49 - -r--r--r-- 1 root root 4096 May 14 16:06 total_trans 50 - -r--r--r-- 1 root root 4096 May 14 16:06 trans_table 51 - -------------------------------------------------------------------------------- 52 - 53 - - reset 54 - Write-only attribute that can be used to reset the stat counters. This can be 55 - useful for evaluating system behaviour under different governors without the 56 - need for a reboot. 57 - 58 - - time_in_state 59 - This gives the amount of time spent in each of the frequencies supported by 60 - this CPU. The cat output will have "<frequency> <time>" pair in each line, which 61 - will mean this CPU spent <time> usertime units of time at <frequency>. Output 62 - will have one line for each of the supported frequencies. usertime units here 63 - is 10mS (similar to other time exported in /proc). 64 - 65 - -------------------------------------------------------------------------------- 66 - <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat time_in_state 67 - 3600000 2089 68 - 3400000 136 69 - 3200000 34 70 - 3000000 67 71 - 2800000 172488 72 - -------------------------------------------------------------------------------- 73 - 74 - 75 - - total_trans 76 - This gives the total number of frequency transitions on this CPU. The cat 77 - output will have a single count which is the total number of frequency 78 - transitions. 79 - 80 - -------------------------------------------------------------------------------- 81 - <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat total_trans 82 - 20 83 - -------------------------------------------------------------------------------- 84 - 85 - - trans_table 86 - This will give a fine grained information about all the CPU frequency 87 - transitions. 
The cat output here is a two dimensional matrix, where an entry 88 - <i,j> (row i, column j) represents the count of number of transitions from 89 - Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in 90 - which the driver has provided the frequency table initially to the cpufreq core 91 - and so can be sorted (ascending or descending) or unsorted. The output here 92 - also contains the actual freq values for each row and column for better 93 - readability. 94 - 95 - If the transition table is bigger than PAGE_SIZE, reading this will 96 - return an -EFBIG error. 97 - 98 - -------------------------------------------------------------------------------- 99 - <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table 100 - From : To 101 - : 3600000 3400000 3200000 3000000 2800000 102 - 3600000: 0 5 0 0 0 103 - 3400000: 4 0 2 0 0 104 - 3200000: 0 1 0 2 0 105 - 3000000: 0 0 1 0 3 106 - 2800000: 0 0 0 2 0 107 - -------------------------------------------------------------------------------- 108 - 109 - 110 - 3. Configuring cpufreq-stats 111 - 112 - To configure cpufreq-stats in your kernel 113 - Config Main Menu 114 - Power management options (ACPI, APM) ---> 115 - CPU Frequency scaling ---> 116 - [*] CPU Frequency scaling 117 - [*] CPU frequency translation statistics 118 - 119 - 120 - "CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure 121 - cpufreq-stats. 122 - 123 - "CPU frequency translation statistics" (CONFIG_CPU_FREQ_STAT) provides the 124 - statistics which includes time_in_state, total_trans and trans_table. 125 - 126 - Once this option is enabled and your CPU supports cpufrequency, you 127 - will be able to see the CPU frequency statistics in /sysfs.
+39
Documentation/cpu-freq/index.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================================================================== 4 + Linux CPUFreq - CPU frequency and voltage scaling code in the Linux(TM) kernel 5 + ============================================================================== 6 + 7 + Author: Dominik Brodowski <linux@brodo.de> 8 + 9 + Clock scaling allows you to change the clock speed of the CPUs on the 10 + fly. This is a nice method to save battery power, because the lower 11 + the clock speed, the less power the CPU consumes. 12 + 13 + 14 + .. toctree:: 15 + :maxdepth: 1 16 + 17 + core 18 + cpu-drivers 19 + cpufreq-stats 20 + 21 + Mailing List 22 + ------------ 23 + There is a CPU frequency changing CVS commit and general list where 24 + you can report bugs, problems or submit patches. To post a message, 25 + send an email to linux-pm@vger.kernel.org. 26 + 27 + Links 28 + ----- 29 + the FTP archives: 30 + * ftp://ftp.linux.org.uk/pub/linux/cpufreq/ 31 + 32 + how to access the CVS repository: 33 + * http://cvs.arm.linux.org.uk/ 34 + 35 + the CPUFreq Mailing list: 36 + * http://vger.kernel.org/vger-lists.html#linux-pm 37 + 38 + Clock and voltage scaling for the SA-1100: 39 + * http://www.lartmaker.nl/projects/scaling
-56
Documentation/cpu-freq/index.txt
··· 1 - CPU frequency and voltage scaling code in the Linux(TM) kernel 2 - 3 - 4 - L i n u x C P U F r e q 5 - 6 - 7 - 8 - 9 - Dominik Brodowski <linux@brodo.de> 10 - 11 - 12 - 13 - Clock scaling allows you to change the clock speed of the CPUs on the 14 - fly. This is a nice method to save battery power, because the lower 15 - the clock speed, the less power the CPU consumes. 16 - 17 - 18 - 19 - Documents in this directory: 20 - ---------------------------- 21 - 22 - amd-powernow.txt - AMD powernow driver specific file. 23 - 24 - core.txt - General description of the CPUFreq core and 25 - of CPUFreq notifiers. 26 - 27 - cpu-drivers.txt - How to implement a new cpufreq processor driver. 28 - 29 - cpufreq-nforce2.txt - nVidia nForce2 platform specific file. 30 - 31 - cpufreq-stats.txt - General description of sysfs cpufreq stats. 32 - 33 - index.txt - File index, Mailing list and Links (this document) 34 - 35 - pcc-cpufreq.txt - PCC cpufreq driver specific file. 36 - 37 - 38 - Mailing List 39 - ------------ 40 - There is a CPU frequency changing CVS commit and general list where 41 - you can report bugs, problems or submit patches. To post a message, 42 - send an email to linux-pm@vger.kernel.org. 43 - 44 - Links 45 - ----- 46 - the FTP archives: 47 - * ftp://ftp.linux.org.uk/pub/linux/cpufreq/ 48 - 49 - how to access the CVS repository: 50 - * http://cvs.arm.linux.org.uk/ 51 - 52 - the CPUFreq Mailing list: 53 - * http://vger.kernel.org/vger-lists.html#linux-pm 54 - 55 - Clock and voltage scaling for the SA-1100: 56 - * http://www.lartmaker.nl/projects/scaling
-207
Documentation/cpu-freq/pcc-cpufreq.txt
··· 1 - /* 2 - * pcc-cpufreq.txt - PCC interface documentation 3 - * 4 - * Copyright (C) 2009 Red Hat, Matthew Garrett <mjg@redhat.com> 5 - * Copyright (C) 2009 Hewlett-Packard Development Company, L.P. 6 - * Nagananda Chumbalkar <nagananda.chumbalkar@hp.com> 7 - * 8 - * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License as published by 12 - * the Free Software Foundation; version 2 of the License. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or NON 17 - * INFRINGEMENT. See the GNU General Public License for more details. 18 - * 19 - * You should have received a copy of the GNU General Public License along 20 - * with this program; if not, write to the Free Software Foundation, Inc., 21 - * 675 Mass Ave, Cambridge, MA 02139, USA. 22 - * 23 - * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 24 - */ 25 - 26 - 27 - Processor Clocking Control Driver 28 - --------------------------------- 29 - 30 - Contents: 31 - --------- 32 - 1. Introduction 33 - 1.1 PCC interface 34 - 1.1.1 Get Average Frequency 35 - 1.1.2 Set Desired Frequency 36 - 1.2 Platforms affected 37 - 2. Driver and /sys details 38 - 2.1 scaling_available_frequencies 39 - 2.2 cpuinfo_transition_latency 40 - 2.3 cpuinfo_cur_freq 41 - 2.4 related_cpus 42 - 3. Caveats 43 - 44 - 1. Introduction: 45 - ---------------- 46 - Processor Clocking Control (PCC) is an interface between the platform 47 - firmware and OSPM. It is a mechanism for coordinating processor 48 - performance (ie: frequency) between the platform firmware and the OS. 49 - 50 - The PCC driver (pcc-cpufreq) allows OSPM to take advantage of the PCC 51 - interface. 
52 - 53 - OS utilizes the PCC interface to inform platform firmware what frequency the 54 - OS wants for a logical processor. The platform firmware attempts to achieve 55 - the requested frequency. If the request for the target frequency could not be 56 - satisfied by platform firmware, then it usually means that power budget 57 - conditions are in place, and "power capping" is taking place. 58 - 59 - 1.1 PCC interface: 60 - ------------------ 61 - The complete PCC specification is available here: 62 - http://www.acpica.org/download/Processor-Clocking-Control-v1p0.pdf 63 - 64 - PCC relies on a shared memory region that provides a channel for communication 65 - between the OS and platform firmware. PCC also implements a "doorbell" that 66 - is used by the OS to inform the platform firmware that a command has been 67 - sent. 68 - 69 - The ACPI PCCH() method is used to discover the location of the PCC shared 70 - memory region. The shared memory region header contains the "command" and 71 - "status" interface. PCCH() also contains details on how to access the platform 72 - doorbell. 73 - 74 - The following commands are supported by the PCC interface: 75 - * Get Average Frequency 76 - * Set Desired Frequency 77 - 78 - The ACPI PCCP() method is implemented for each logical processor and is 79 - used to discover the offsets for the input and output buffers in the shared 80 - memory region. 81 - 82 - When PCC mode is enabled, the platform will not expose processor performance 83 - or throttle states (_PSS, _TSS and related ACPI objects) to OSPM. Therefore, 84 - the native P-state driver (such as acpi-cpufreq for Intel, powernow-k8 for 85 - AMD) will not load. 86 - 87 - However, OSPM remains in control of policy. The governor (eg: "ondemand") 88 - computes the required performance for each processor based on server workload. 89 - The PCC driver fills in the command interface, and the input buffer and 90 - communicates the request to the platform firmware. 
The platform firmware is 91 - responsible for delivering the requested performance. 92 - 93 - Each PCC command is "global" in scope and can affect all the logical CPUs in 94 - the system. Therefore, PCC is capable of performing "group" updates. With PCC 95 - the OS is capable of getting/setting the frequency of all the logical CPUs in 96 - the system with a single call to the BIOS. 97 - 98 - 1.1.1 Get Average Frequency: 99 - ---------------------------- 100 - This command is used by the OSPM to query the running frequency of the 101 - processor since the last time this command was completed. The output buffer 102 - indicates the average unhalted frequency of the logical processor expressed as 103 - a percentage of the nominal (ie: maximum) CPU frequency. The output buffer 104 - also signifies if the CPU frequency is limited by a power budget condition. 105 - 106 - 1.1.2 Set Desired Frequency: 107 - ---------------------------- 108 - This command is used by the OSPM to communicate to the platform firmware the 109 - desired frequency for a logical processor. The output buffer is currently 110 - ignored by OSPM. The next invocation of "Get Average Frequency" will inform 111 - OSPM if the desired frequency was achieved or not. 112 - 113 - 1.2 Platforms affected: 114 - ----------------------- 115 - The PCC driver will load on any system where the platform firmware: 116 - * supports the PCC interface, and the associated PCCH() and PCCP() methods 117 - * assumes responsibility for managing the hardware clocking controls in order 118 - to deliver the requested processor performance 119 - 120 - Currently, certain HP ProLiant platforms implement the PCC interface. On those 121 - platforms PCC is the "default" choice. 122 - 123 - However, it is possible to disable this interface via a BIOS setting. In 124 - such an instance, as is also the case on platforms where the PCC interface 125 - is not implemented, the PCC driver will fail to load silently. 126 - 127 - 2. 
Driver and /sys details: 128 - --------------------------- 129 - When the driver loads, it merely prints the lowest and the highest CPU 130 - frequencies supported by the platform firmware. 131 - 132 - The PCC driver loads with a message such as: 133 - pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2933 134 - MHz 135 - 136 - This means that the OPSM can request the CPU to run at any frequency in 137 - between the limits (1600 MHz, and 2933 MHz) specified in the message. 138 - 139 - Internally, there is no need for the driver to convert the "target" frequency 140 - to a corresponding P-state. 141 - 142 - The VERSION number for the driver will be of the format v.xy.ab. 143 - eg: 1.00.02 144 - ----- -- 145 - | | 146 - | -- this will increase with bug fixes/enhancements to the driver 147 - |-- this is the version of the PCC specification the driver adheres to 148 - 149 - 150 - The following is a brief discussion on some of the fields exported via the 151 - /sys filesystem and how their values are affected by the PCC driver: 152 - 153 - 2.1 scaling_available_frequencies: 154 - ---------------------------------- 155 - scaling_available_frequencies is not created in /sys. No intermediate 156 - frequencies need to be listed because the BIOS will try to achieve any 157 - frequency, within limits, requested by the governor. A frequency does not have 158 - to be strictly associated with a P-state. 159 - 160 - 2.2 cpuinfo_transition_latency: 161 - ------------------------------- 162 - The cpuinfo_transition_latency field is 0. The PCC specification does 163 - not include a field to expose this value currently. 164 - 165 - 2.3 cpuinfo_cur_freq: 166 - --------------------- 167 - A) Often cpuinfo_cur_freq will show a value different than what is declared 168 - in the scaling_available_frequencies or scaling_cur_freq, or scaling_max_freq. 169 - This is due to "turbo boost" available on recent Intel processors. 
If certain 170 - conditions are met the BIOS can achieve a slightly higher speed than requested 171 - by OSPM. An example: 172 - 173 - scaling_cur_freq : 2933000 174 - cpuinfo_cur_freq : 3196000 175 - 176 - B) There is a round-off error associated with the cpuinfo_cur_freq value. 177 - Since the driver obtains the current frequency as a "percentage" (%) of the 178 - nominal frequency from the BIOS, sometimes, the values displayed by 179 - scaling_cur_freq and cpuinfo_cur_freq may not match. An example: 180 - 181 - scaling_cur_freq : 1600000 182 - cpuinfo_cur_freq : 1583000 183 - 184 - In this example, the nominal frequency is 2933 MHz. The driver obtains the 185 - current frequency, cpuinfo_cur_freq, as 54% of the nominal frequency: 186 - 187 - 54% of 2933 MHz = 1583 MHz 188 - 189 - Nominal frequency is the maximum frequency of the processor, and it usually 190 - corresponds to the frequency of the P0 P-state. 191 - 192 - 2.4 related_cpus: 193 - ----------------- 194 - The related_cpus field is identical to affected_cpus. 195 - 196 - affected_cpus : 4 197 - related_cpus : 4 198 - 199 - Currently, the PCC driver does not evaluate _PSD. The platforms that support 200 - PCC do not implement SW_ALL. So OSPM doesn't need to perform any coordination 201 - to ensure that the same frequency is requested of all dependent CPUs. 202 - 203 - 3. Caveats: 204 - ----------- 205 - The "cpufreq_stats" module in its present form cannot be loaded and 206 - expected to work with the PCC driver. Since the "cpufreq_stats" module 207 - provides information wrt each P-state, it is not applicable to the PCC driver.
+2 -1
Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
··· 19 19 20 20 In 'operating-points-v2' table: 21 21 - compatible: Should be 22 - - 'operating-points-v2-kryo-cpu' for apq8096 and msm8996. 22 + - 'operating-points-v2-kryo-cpu' for apq8096, msm8996, msm8974, 23 + apq8064, ipq8064, msm8960 and ipq8074. 23 24 24 25 Optional properties: 25 26 --------------------
+1
Documentation/index.rst
··· 99 99 accounting/index 100 100 block/index 101 101 cdrom/index 102 + cpu-freq/index 102 103 ide/index 103 104 fb/index 104 105 fpga/index
+39 -47
Documentation/power/pm_qos_interface.rst
··· 7 7 one of the parameters. 8 8 9 9 Two different PM QoS frameworks are available: 10 - 1. PM QoS classes for cpu_dma_latency 11 - 2. The per-device PM QoS framework provides the API to manage the 10 + * CPU latency QoS. 11 + * The per-device PM QoS framework provides the API to manage the 12 12 per-device latency constraints and PM QoS flags. 13 13 14 - Each parameters have defined units: 15 - 16 - * latency: usec 17 - * timeout: usec 18 - * throughput: kbs (kilo bit / sec) 19 - * memory bandwidth: mbs (mega bit / sec) 14 + The latency unit used in the PM QoS framework is the microsecond (usec). 20 15 21 16 22 17 1. PM QoS framework 23 18 =================== 24 19 25 - The infrastructure exposes multiple misc device nodes one per implemented 26 - parameter. The set of parameters implement is defined by pm_qos_power_init() 27 - and pm_qos_params.h. This is done because having the available parameters 28 - being runtime configurable or changeable from a driver was seen as too easy to 29 - abuse. 20 + A global list of CPU latency QoS requests is maintained along with an aggregated 21 + (effective) target value. The aggregated target value is updated with changes 22 + to the request list or elements of the list. For CPU latency QoS, the 23 + aggregated target value is simply the min of the request values held in the list 24 + elements. 30 25 31 - For each parameter a list of performance requests is maintained along with 32 - an aggregated target value. The aggregated target value is updated with 33 - changes to the request list or elements of the list. Typically the 34 - aggregated target value is simply the max or min of the request values held 35 - in the parameter list elements. 36 26 Note: the aggregated target value is implemented as an atomic variable so that 37 27 reading the aggregated value does not require any locking mechanism. 
38 28 29 + From kernel space the use of this interface is simple: 39 30 40 - From kernel mode the use of this interface is simple: 31 + void cpu_latency_qos_add_request(handle, target_value): 32 + Will insert an element into the CPU latency QoS list with the target value. 33 + Upon change to this list the new target is recomputed and any registered 34 + notifiers are called only if the target value is now different. 35 + Clients of PM QoS need to save the returned handle for future use in other 36 + PM QoS API functions. 41 37 42 - void pm_qos_add_request(handle, param_class, target_value): 43 - Will insert an element into the list for that identified PM QoS class with the 44 - target value. Upon change to this list the new target is recomputed and any 45 - registered notifiers are called only if the target value is now different. 46 - Clients of pm_qos need to save the returned handle for future use in other 47 - pm_qos API functions. 48 - 49 - void pm_qos_update_request(handle, new_target_value): 38 + void cpu_latency_qos_update_request(handle, new_target_value): 50 39 Will update the list element pointed to by the handle with the new target 51 40 value and recompute the new aggregated target, calling the notification tree 52 41 if the target is changed. 53 42 54 - void pm_qos_remove_request(handle): 43 + void cpu_latency_qos_remove_request(handle): 55 44 Will remove the element. After removal it will update the aggregate target 56 45 and call the notification tree if the target was changed as a result of 57 46 removing the request. 58 47 59 - int pm_qos_request(param_class): 60 - Returns the aggregated value for a given PM QoS class. 48 + int cpu_latency_qos_limit(): 49 + Returns the aggregated value for the CPU latency QoS. 61 50 62 - int pm_qos_request_active(handle): 63 - Returns if the request is still active, i.e. it has not been removed from a 64 - PM QoS class constraints list. 
51 + int cpu_latency_qos_request_active(handle): 52 + Returns if the request is still active, i.e. it has not been removed from the 53 + CPU latency QoS list. 65 54 66 - int pm_qos_add_notifier(param_class, notifier): 67 - Adds a notification callback function to the PM QoS class. The callback is 68 - called when the aggregated value for the PM QoS class is changed. 55 + int cpu_latency_qos_add_notifier(notifier): 56 + Adds a notification callback function to the CPU latency QoS. The callback is 57 + called when the aggregated value for the CPU latency QoS is changed. 69 58 70 - int pm_qos_remove_notifier(int param_class, notifier): 71 - Removes the notification callback function for the PM QoS class. 59 + int cpu_latency_qos_remove_notifier(notifier): 60 + Removes the notification callback function from the CPU latency QoS. 72 61 73 62 74 - From user mode: 63 + From user space: 75 64 76 - Only processes can register a pm_qos request. To provide for automatic 65 + The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU 66 + latency QoS. 67 + 68 + Only processes can register a PM QoS request. To provide for automatic 77 69 cleanup of a process, the interface requires the process to register its 78 - parameter requests in the following way: 70 + parameter requests as follows. 79 71 80 - To register the default pm_qos target for the specific parameter, the process 81 - must open /dev/cpu_dma_latency 72 + To register the default PM QoS target for the CPU latency QoS, the process must 73 + open /dev/cpu_dma_latency. 82 74 83 75 As long as the device node is held open that process has a registered 84 76 request on the parameter. 85 77 86 - To change the requested target value the process needs to write an s32 value to 87 - the open device node. Alternatively the user mode program could write a hex 88 - string for the value using 10 char long format e.g. "0x12345678". This 89 - translates to a pm_qos_update_request call. 
78 + To change the requested target value, the process needs to write an s32 value to 79 + the open device node. Alternatively, it can write a hex string for the value 80 + using the 10 char long format e.g. "0x12345678". This translates to a 81 + cpu_latency_qos_update_request() call. 90 82 91 83 To remove the user mode request for a target value simply close the device 92 84 node.
+6
Documentation/power/runtime_pm.rst
··· 382 382 nonzero, increment the counter and return 1; otherwise return 0 without 383 383 changing the counter 384 384 385 + `int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count);` 386 + - return -EINVAL if 'power.disable_depth' is nonzero; otherwise, if the 387 + runtime PM status is RPM_ACTIVE, and either ign_usage_count is true 388 + or the device's usage_count is non-zero, increment the counter and 389 + return 1; otherwise return 0 without changing the counter 390 + 385 391 `void pm_runtime_put_noidle(struct device *dev);` 386 392 - decrement the device's usage counter 387 393
+5 -3
Documentation/power/userland-swsusp.rst
··· 69 69 70 70 SNAPSHOT_GET_IMAGE_SIZE 71 71 return the actual size of the hibernation image 72 + (the last argument should be a pointer to a loff_t variable that 73 + will contain the result if the call is successful) 72 74 73 75 SNAPSHOT_AVAIL_SWAP_SIZE 74 - return the amount of available swap in bytes (the 75 - last argument should be a pointer to an unsigned int variable that will 76 - contain the result if the call is successful). 76 + return the amount of available swap in bytes 77 + (the last argument should be a pointer to a loff_t variable that 78 + will contain the result if the call is successful) 77 79 78 80 SNAPSHOT_ALLOC_SWAP_PAGE 79 81 allocate a swap page from the resume partition
+10 -11
Documentation/trace/events-power.rst
··· 75 75 target/flags update. 76 76 :: 77 77 78 - pm_qos_add_request "pm_qos_class=%s value=%d" 79 - pm_qos_update_request "pm_qos_class=%s value=%d" 80 - pm_qos_remove_request "pm_qos_class=%s value=%d" 81 - pm_qos_update_request_timeout "pm_qos_class=%s value=%d, timeout_us=%ld" 82 - 83 - The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY"). 84 - The second parameter is value to be added/updated/removed. 85 - The third parameter is timeout value in usec. 86 - :: 87 - 88 78 pm_qos_update_target "action=%s prev_value=%d curr_value=%d" 89 79 pm_qos_update_flags "action=%s prev_value=0x%x curr_value=0x%x" 90 80 ··· 82 92 The second parameter is the previous QoS value. 83 93 The third parameter is the current QoS value to update. 84 94 85 - And, there are also events used for device PM QoS add/update/remove request. 95 + There are also events used for device PM QoS add/update/remove request. 86 96 :: 87 97 88 98 dev_pm_qos_add_request "device=%s type=%s new_value=%d" ··· 93 103 QoS requests. 94 104 The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY"). 95 105 The third parameter is value to be added/updated/removed. 106 + 107 + And, there are events used for CPU latency QoS add/update/remove request. 108 + :: 109 + 110 + pm_qos_add_request "value=%d" 111 + pm_qos_update_request "value=%d" 112 + pm_qos_remove_request "value=%d" 113 + 114 + The parameter is the value to be added/updated/removed.
+6 -7
arch/x86/platform/intel/iosf_mbi.c
··· 265 265 iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT)) 266 266 dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n"); 267 267 268 - pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE); 268 + cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE); 269 269 270 270 blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier, 271 271 MBI_PMIC_BUS_ACCESS_END, NULL); ··· 301 301 * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC 302 302 * if this happens while the kernel itself is accessing the PMIC I2C bus 303 303 * the SoC hangs. 304 - * As the third step we call pm_qos_update_request() to disallow the CPU 305 - * to enter C6 or C7. 304 + * As the third step we call cpu_latency_qos_update_request() to disallow the 305 + * CPU to enter C6 or C7. 306 306 * 307 307 * 5) The P-Unit has a PMIC bus semaphore which we can request to stop 308 308 * autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it. ··· 338 338 * requires the P-Unit to talk to the PMIC and if this happens while 339 339 * we're holding the semaphore, the SoC hangs. 340 340 */ 341 - pm_qos_update_request(&iosf_mbi_pm_qos, 0); 341 + cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0); 342 342 343 343 /* host driver writes to side band semaphore register */ 344 344 ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE, ··· 547 547 { 548 548 iosf_debugfs_init(); 549 549 550 - pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY, 551 - PM_QOS_DEFAULT_VALUE); 550 + cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE); 552 551 553 552 return pci_register_driver(&iosf_mbi_pci_driver); 554 553 } ··· 560 561 pci_dev_put(mbi_pdev); 561 562 mbi_pdev = NULL; 562 563 563 - pm_qos_remove_request(&iosf_mbi_pm_qos); 564 + cpu_latency_qos_remove_request(&iosf_mbi_pm_qos); 564 565 } 565 566 566 567 module_init(iosf_mbi_init);
+1 -1
drivers/acpi/acpica/achware.h
··· 101 101 102 102 acpi_status acpi_hw_enable_all_wakeup_gpes(void); 103 103 104 - u8 acpi_hw_check_all_gpes(void); 104 + u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number); 105 105 106 106 acpi_status 107 107 acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+12 -5
drivers/acpi/acpica/evxfgpe.c
··· 799 799 * 800 800 * FUNCTION: acpi_any_gpe_status_set 801 801 * 802 - * PARAMETERS: None 802 + * PARAMETERS: gpe_skip_number - Number of the GPE to skip 803 803 * 804 804 * RETURN: Whether or not the status bit is set for any GPE 805 805 * 806 - * DESCRIPTION: Check the status bits of all enabled GPEs and return TRUE if any 807 - * of them is set or FALSE otherwise. 806 + * DESCRIPTION: Check the status bits of all enabled GPEs, except for the one 807 + * represented by the "skip" argument, and return TRUE if any of 808 + * them is set or FALSE otherwise. 808 809 * 809 810 ******************************************************************************/ 810 - u32 acpi_any_gpe_status_set(void) 811 + u32 acpi_any_gpe_status_set(u32 gpe_skip_number) 811 812 { 812 813 acpi_status status; 814 + acpi_handle gpe_device; 813 815 u8 ret; 814 816 815 817 ACPI_FUNCTION_TRACE(acpi_any_gpe_status_set); ··· 821 819 return (FALSE); 822 820 } 823 821 824 - ret = acpi_hw_check_all_gpes(); 822 + status = acpi_get_gpe_device(gpe_skip_number, &gpe_device); 823 + if (ACPI_FAILURE(status)) { 824 + gpe_device = NULL; 825 + } 826 + 827 + ret = acpi_hw_check_all_gpes(gpe_device, gpe_skip_number); 825 828 (void)acpi_ut_release_mutex(ACPI_MTX_EVENTS); 826 829 827 830 return (ret);
+38 -9
drivers/acpi/acpica/hwgpe.c
··· 444 444 return (AE_OK); 445 445 } 446 446 447 + struct acpi_gpe_block_status_context { 448 + struct acpi_gpe_register_info *gpe_skip_register_info; 449 + u8 gpe_skip_mask; 450 + u8 retval; 451 + }; 452 + 447 453 /****************************************************************************** 448 454 * 449 455 * FUNCTION: acpi_hw_get_gpe_block_status 450 456 * 451 457 * PARAMETERS: gpe_xrupt_info - GPE Interrupt info 452 458 * gpe_block - Gpe Block info 459 + * context - GPE list walk context data 453 460 * 454 461 * RETURN: Success 455 462 * ··· 467 460 static acpi_status 468 461 acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info, 469 462 struct acpi_gpe_block_info *gpe_block, 470 - void *ret_ptr) 463 + void *context) 471 464 { 465 + struct acpi_gpe_block_status_context *c = context; 472 466 struct acpi_gpe_register_info *gpe_register_info; 473 467 u64 in_enable, in_status; 474 468 acpi_status status; 475 - u8 *ret = ret_ptr; 469 + u8 ret_mask; 476 470 u32 i; 477 471 478 472 /* Examine each GPE Register within the block */ ··· 493 485 continue; 494 486 } 495 487 496 - *ret |= in_enable & in_status; 488 + ret_mask = in_enable & in_status; 489 + if (ret_mask && c->gpe_skip_register_info == gpe_register_info) { 490 + ret_mask &= ~c->gpe_skip_mask; 491 + } 492 + c->retval |= ret_mask; 497 493 } 498 494 499 495 return (AE_OK); ··· 573 561 * 574 562 * FUNCTION: acpi_hw_check_all_gpes 575 563 * 576 - * PARAMETERS: None 564 + * PARAMETERS: gpe_skip_device - GPE device of the GPE to skip 565 + * gpe_skip_number - Number of the GPE to skip 577 566 * 578 567 * RETURN: Combined status of all GPEs 579 568 * 580 - * DESCRIPTION: Check all enabled GPEs in all GPE blocks and return TRUE if the 569 + * DESCRIPTION: Check all enabled GPEs in all GPE blocks, except for the one 570 + * represented by the "skip" arguments, and return TRUE if the 581 571 * status bit is set for at least one of them or FALSE otherwise

582 572 * 583 573 ******************************************************************************/ 584 574 585 - u8 acpi_hw_check_all_gpes(void) 575 + u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number) 586 576 { 587 - u8 ret = 0; 577 + struct acpi_gpe_block_status_context context = { 578 + .gpe_skip_register_info = NULL, 579 + .retval = 0, 580 + }; 581 + struct acpi_gpe_event_info *gpe_event_info; 582 + acpi_cpu_flags flags; 588 583 589 584 ACPI_FUNCTION_TRACE(acpi_hw_check_all_gpes); 590 585 591 - (void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &ret); 586 + flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock); 592 587 593 - return (ret != 0); 588 + gpe_event_info = acpi_ev_get_gpe_event_info(gpe_skip_device, 589 + gpe_skip_number); 590 + if (gpe_event_info) { 591 + context.gpe_skip_register_info = gpe_event_info->register_info; 592 + context.gpe_skip_mask = acpi_hw_get_gpe_register_bit(gpe_event_info); 593 + } 594 + 595 + acpi_os_release_lock(acpi_gbl_gpe_lock, flags); 596 + 597 + (void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &context); 598 + return (context.retval != 0); 594 599 } 595 600 596 601 #endif /* !ACPI_REDUCED_HARDWARE */
+5
drivers/acpi/ec.c
··· 2037 2037 acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action); 2038 2038 } 2039 2039 2040 + bool acpi_ec_other_gpes_active(void) 2041 + { 2042 + return acpi_any_gpe_status_set(first_ec ? first_ec->gpe : U32_MAX); 2043 + } 2044 + 2040 2045 bool acpi_ec_dispatch_gpe(void) 2041 2046 { 2042 2047 u32 ret;
+1
drivers/acpi/internal.h
··· 202 202 203 203 #ifdef CONFIG_PM_SLEEP 204 204 void acpi_ec_flush_work(void); 205 + bool acpi_ec_other_gpes_active(void); 205 206 bool acpi_ec_dispatch_gpe(void); 206 207 #endif 207 208
+11 -13
drivers/acpi/sleep.c
··· 982 982 983 983 static void acpi_s2idle_sync(void) 984 984 { 985 - /* 986 - * The EC driver uses the system workqueue and an additional special 987 - * one, so those need to be flushed too. 988 - */ 985 + /* The EC driver uses special workqueues that need to be flushed. */ 989 986 acpi_ec_flush_work(); 990 987 acpi_os_wait_events_complete(); /* synchronize Notify handling */ 991 988 } ··· 1010 1013 return true; 1011 1014 1012 1015 /* 1013 - * If there are no EC events to process and at least one of the 1014 - * other enabled GPEs is active, the wakeup is regarded as a 1015 - * genuine one. 1016 - * 1017 - * Note that the checks below must be carried out in this order 1018 - * to avoid returning prematurely due to a change of the EC GPE 1019 - * status bit from unset to set between the checks with the 1020 - * status bits of all the other GPEs unset. 1016 + * If the status bit is set for any enabled GPE other than the 1017 + * EC one, the wakeup is regarded as a genuine one. 1021 1018 */ 1022 - if (acpi_any_gpe_status_set() && !acpi_ec_dispatch_gpe()) 1019 + if (acpi_ec_other_gpes_active()) 1023 1020 return true; 1021 + 1022 + /* 1023 + * If the EC GPE status bit has not been set, the wakeup is 1024 + * regarded as a spurious one. 1025 + */ 1026 + if (!acpi_ec_dispatch_gpe()) 1027 + return false; 1024 1028 1025 1029 /* 1026 1030 * Cancel the wakeup and process all pending events in case
+1 -1
drivers/base/power/domain.c
··· 2653 2653 2654 2654 ret = of_count_phandle_with_args(dn, "domain-idle-states", NULL); 2655 2655 if (ret <= 0) 2656 - return ret; 2656 + return ret == -ENOENT ? 0 : ret; 2657 2657 2658 2658 /* Loop over the phandles until all the requested entry is found */ 2659 2659 of_for_each_phandle(&it, ret, dn, "domain-idle-states", NULL, 0) {
+8 -4
drivers/base/power/main.c
··· 40 40 41 41 typedef int (*pm_callback_t)(struct device *); 42 42 43 + #define list_for_each_entry_rcu_locked(pos, head, member) \ 44 + list_for_each_entry_rcu(pos, head, member, \ 45 + device_links_read_lock_held()) 46 + 43 47 /* 44 48 * The entries in the dpm_list list are in a depth first order, simply 45 49 * because children are guaranteed to be discovered after parents, and ··· 270 266 * callbacks freeing the link objects for the links in the list we're 271 267 * walking. 272 268 */ 273 - list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) 269 + list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) 274 270 if (READ_ONCE(link->status) != DL_STATE_DORMANT) 275 271 dpm_wait(link->supplier, async); 276 272 ··· 327 323 * continue instead of trying to continue in parallel with its 328 324 * unregistration). 329 325 */ 330 - list_for_each_entry_rcu(link, &dev->links.consumers, s_node) 326 + list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node) 331 327 if (READ_ONCE(link->status) != DL_STATE_DORMANT) 332 328 dpm_wait(link->consumer, async); 333 329 ··· 1239 1235 1240 1236 idx = device_links_read_lock(); 1241 1237 1242 - list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) 1238 + list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) 1243 1239 link->supplier->power.must_resume = true; 1244 1240 1245 1241 device_links_read_unlock(idx); ··· 1699 1695 1700 1696 idx = device_links_read_lock(); 1701 1697 1702 - list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) { 1698 + list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) { 1703 1699 spin_lock_irq(&link->supplier->power.lock); 1704 1700 link->supplier->power.direct_complete = false; 1705 1701 spin_unlock_irq(&link->supplier->power.lock);
+27 -9
drivers/base/power/runtime.c
··· 1087 1087 EXPORT_SYMBOL_GPL(__pm_runtime_resume); 1088 1088 1089 1089 /** 1090 - * pm_runtime_get_if_in_use - Conditionally bump up the device's usage counter. 1090 + * pm_runtime_get_if_active - Conditionally bump up the device's usage counter. 1091 1091 * @dev: Device to handle. 1092 1092 * 1093 1093 * Return -EINVAL if runtime PM is disabled for the device. 1094 1094 * 1095 - * If that's not the case and if the device's runtime PM status is RPM_ACTIVE 1096 - * and the runtime PM usage counter is nonzero, increment the counter and 1097 - * return 1. Otherwise return 0 without changing the counter. 1095 + * Otherwise, if the device's runtime PM status is RPM_ACTIVE and either 1096 + * ign_usage_count is true or the device's usage_count is non-zero, increment 1097 + * the counter and return 1. Otherwise return 0 without changing the counter. 1098 + * 1099 + * If ign_usage_count is true, the function can be used to prevent suspending 1100 + * the device when its runtime PM status is RPM_ACTIVE. 1101 + * 1102 + * If ign_usage_count is false, the function can be used to prevent suspending 1103 + * the device when both its runtime PM status is RPM_ACTIVE and its usage_count 1104 + * is non-zero. 1105 + * 1106 + * The caller is responsible for putting the device's usage count when the 1107 + * return value is greater than zero. 1098 1108 */ 1099 - int pm_runtime_get_if_in_use(struct device *dev) 1109 + int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count) 1100 1110 { 1101 1111 unsigned long flags; 1102 1112 int retval; 1103 1113 1104 1114 spin_lock_irqsave(&dev->power.lock, flags); 1105 - retval = dev->power.disable_depth > 0 ?
-EINVAL : 1106 - dev->power.runtime_status == RPM_ACTIVE 1107 - && atomic_inc_not_zero(&dev->power.usage_count); 1115 + if (dev->power.disable_depth > 0) { 1116 + retval = -EINVAL; 1117 + } else if (dev->power.runtime_status != RPM_ACTIVE) { 1118 + retval = 0; 1119 + } else if (ign_usage_count) { 1120 + retval = 1; 1121 + atomic_inc(&dev->power.usage_count); 1122 + } else { 1123 + retval = atomic_inc_not_zero(&dev->power.usage_count); 1124 + } 1108 1125 trace_rpm_usage_rcuidle(dev, 0); 1109 1126 spin_unlock_irqrestore(&dev->power.lock, flags); 1127 + 1110 1128 return retval; 1111 1129 } 1112 - EXPORT_SYMBOL_GPL(pm_runtime_get_if_in_use); 1130 + EXPORT_SYMBOL_GPL(pm_runtime_get_if_active); 1113 1131 1114 1132 /** 1115 1133 * __pm_runtime_set_status - Set runtime PM status of a device.
+11 -6
drivers/base/power/wakeup.c
··· 24 24 #define pm_suspend_target_state (PM_SUSPEND_ON) 25 25 #endif 26 26 27 + #define list_for_each_entry_rcu_locked(pos, head, member) \ 28 + list_for_each_entry_rcu(pos, head, member, \ 29 + srcu_read_lock_held(&wakeup_srcu)) 27 30 /* 28 31 * If set, the suspend/hibernate code will abort transitions to a sleep state 29 32 * if wakeup events are registered during or immediately before the transition. ··· 244 241 { 245 242 if (ws) { 246 243 wakeup_source_remove(ws); 247 - wakeup_source_sysfs_remove(ws); 244 + if (ws->dev) 245 + wakeup_source_sysfs_remove(ws); 246 + 248 247 wakeup_source_destroy(ws); 249 248 } 250 249 } ··· 410 405 int srcuidx; 411 406 412 407 srcuidx = srcu_read_lock(&wakeup_srcu); 413 - list_for_each_entry_rcu(ws, &wakeup_sources, entry) 408 + list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) 414 409 dev_pm_arm_wake_irq(ws->wakeirq); 415 410 srcu_read_unlock(&wakeup_srcu, srcuidx); 416 411 } ··· 426 421 int srcuidx; 427 422 428 423 srcuidx = srcu_read_lock(&wakeup_srcu); 429 - list_for_each_entry_rcu(ws, &wakeup_sources, entry) 424 + list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) 430 425 dev_pm_disarm_wake_irq(ws->wakeirq); 431 426 srcu_read_unlock(&wakeup_srcu, srcuidx); 432 427 } ··· 879 874 struct wakeup_source *last_activity_ws = NULL; 880 875 881 876 srcuidx = srcu_read_lock(&wakeup_srcu); 882 - list_for_each_entry_rcu(ws, &wakeup_sources, entry) { 877 + list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) { 883 878 if (ws->active) { 884 879 pm_pr_dbg("active wakeup source: %s\n", ws->name); 885 880 active = 1; ··· 1030 1025 int srcuidx; 1031 1026 1032 1027 srcuidx = srcu_read_lock(&wakeup_srcu); 1033 - list_for_each_entry_rcu(ws, &wakeup_sources, entry) { 1028 + list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) { 1034 1029 spin_lock_irq(&ws->lock); 1035 1030 if (ws->autosleep_enabled != set) { 1036 1031 ws->autosleep_enabled = set; ··· 1109 1104 } 1110 1105 1111 1106 *srcuidx = 
srcu_read_lock(&wakeup_srcu); 1112 - list_for_each_entry_rcu(ws, &wakeup_sources, entry) { 1107 + list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) { 1113 1108 if (n-- <= 0) 1114 1109 return ws; 1115 1110 }
+1 -1
drivers/cpufreq/Kconfig.arm
··· 128 128 129 129 config ARM_QCOM_CPUFREQ_NVMEM 130 130 tristate "Qualcomm nvmem based CPUFreq" 131 - depends on ARM64 131 + depends on ARCH_QCOM 132 132 depends on QCOM_QFPROM 133 133 depends on QCOM_SMEM 134 134 select PM_OPP
+1 -1
drivers/cpufreq/Kconfig.x86
··· 25 25 This driver adds support for the PCC interface. 26 26 27 27 For details, take a look at: 28 - <file:Documentation/cpu-freq/pcc-cpufreq.txt>. 28 + <file:Documentation/admin-guide/pm/cpufreq_drivers.rst>. 29 29 30 30 To compile this driver as a module, choose M here: the 31 31 module will be called pcc-cpufreq.
+5
drivers/cpufreq/cpufreq-dt-platdev.c
··· 141 141 { .compatible = "ti,dra7", }, 142 142 { .compatible = "ti,omap3", }, 143 143 144 + { .compatible = "qcom,ipq8064", }, 145 + { .compatible = "qcom,apq8064", }, 146 + { .compatible = "qcom,msm8974", }, 147 + { .compatible = "qcom,msm8960", }, 148 + 144 149 { } 145 150 }; 146 151
+4
drivers/cpufreq/cpufreq-dt.c
··· 363 363 dt_cpufreq_driver.resume = data->resume; 364 364 if (data->suspend) 365 365 dt_cpufreq_driver.suspend = data->suspend; 366 + if (data->get_intermediate) { 367 + dt_cpufreq_driver.target_intermediate = data->target_intermediate; 368 + dt_cpufreq_driver.get_intermediate = data->get_intermediate; 369 + } 366 370 } 367 371 368 372 ret = cpufreq_register_driver(&dt_cpufreq_driver);
+4
drivers/cpufreq/cpufreq-dt.h
··· 14 14 struct cpufreq_dt_platform_data { 15 15 bool have_governor_per_policy; 16 16 17 + unsigned int (*get_intermediate)(struct cpufreq_policy *policy, 18 + unsigned int index); 19 + int (*target_intermediate)(struct cpufreq_policy *policy, 20 + unsigned int index); 17 21 int (*suspend)(struct cpufreq_policy *policy); 18 22 int (*resume)(struct cpufreq_policy *policy); 19 23 };
+7 -7
drivers/cpufreq/cpufreq_stats.c
··· 90 90 if (policy->fast_switch_enabled) 91 91 return 0; 92 92 93 - len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 94 - len += snprintf(buf + len, PAGE_SIZE - len, " : "); 93 + len += scnprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 94 + len += scnprintf(buf + len, PAGE_SIZE - len, " : "); 95 95 for (i = 0; i < stats->state_num; i++) { 96 96 if (len >= PAGE_SIZE) 97 97 break; 98 - len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 98 + len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ", 99 99 stats->freq_table[i]); 100 100 } 101 101 if (len >= PAGE_SIZE) 102 102 return PAGE_SIZE; 103 103 104 - len += snprintf(buf + len, PAGE_SIZE - len, "\n"); 104 + len += scnprintf(buf + len, PAGE_SIZE - len, "\n"); 105 105 106 106 for (i = 0; i < stats->state_num; i++) { 107 107 if (len >= PAGE_SIZE) 108 108 break; 109 109 110 - len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ", 110 + len += scnprintf(buf + len, PAGE_SIZE - len, "%9u: ", 111 111 stats->freq_table[i]); 112 112 113 113 for (j = 0; j < stats->state_num; j++) { 114 114 if (len >= PAGE_SIZE) 115 115 break; 116 - len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 116 + len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ", 117 117 stats->trans_table[i*stats->max_state+j]); 118 118 } 119 119 if (len >= PAGE_SIZE) 120 120 break; 121 - len += snprintf(buf + len, PAGE_SIZE - len, "\n"); 121 + len += scnprintf(buf + len, PAGE_SIZE - len, "\n"); 122 122 } 123 123 124 124 if (len >= PAGE_SIZE) {
+12 -1
drivers/cpufreq/imx-cpufreq-dt.c
··· 19 19 #define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK (0xf << 8) 20 20 #define OCOTP_CFG3_MKT_SEGMENT_SHIFT 6 21 21 #define OCOTP_CFG3_MKT_SEGMENT_MASK (0x3 << 6) 22 + #define IMX8MP_OCOTP_CFG3_MKT_SEGMENT_SHIFT 5 23 + #define IMX8MP_OCOTP_CFG3_MKT_SEGMENT_MASK (0x3 << 5) 22 24 23 25 /* cpufreq-dt device registered by imx-cpufreq-dt */ 24 26 static struct platform_device *cpufreq_dt_pdev; ··· 33 31 int speed_grade, mkt_segment; 34 32 int ret; 35 33 34 + if (!of_find_property(cpu_dev->of_node, "cpu-supply", NULL)) 35 + return -ENODEV; 36 + 36 37 ret = nvmem_cell_read_u32(cpu_dev, "speed_grade", &cell_value); 37 38 if (ret) 38 39 return ret; ··· 47 42 else 48 43 speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK) 49 44 >> OCOTP_CFG3_SPEED_GRADE_SHIFT; 50 - mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT; 45 + 46 + if (of_machine_is_compatible("fsl,imx8mp")) 47 + mkt_segment = (cell_value & IMX8MP_OCOTP_CFG3_MKT_SEGMENT_MASK) 48 + >> IMX8MP_OCOTP_CFG3_MKT_SEGMENT_SHIFT; 49 + else 50 + mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) 51 + >> OCOTP_CFG3_MKT_SEGMENT_SHIFT; 51 52 52 53 /* 53 54 * Early samples without fuses written report "0 0" which may NOT
+41 -30
drivers/cpufreq/imx6q-cpufreq.c
··· 216 216 #define OCOTP_CFG3_SPEED_996MHZ 0x2 217 217 #define OCOTP_CFG3_SPEED_852MHZ 0x1 218 218 219 - static void imx6q_opp_check_speed_grading(struct device *dev) 219 + static int imx6q_opp_check_speed_grading(struct device *dev) 220 220 { 221 221 struct device_node *np; 222 222 void __iomem *base; 223 223 u32 val; 224 + int ret; 224 225 225 - np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp"); 226 - if (!np) 227 - return; 226 + if (of_find_property(dev->of_node, "nvmem-cells", NULL)) { 227 + ret = nvmem_cell_read_u32(dev, "speed_grade", &val); 228 + if (ret) 229 + return ret; 230 + } else { 231 + np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp"); 232 + if (!np) 233 + return -ENOENT; 228 234 229 - base = of_iomap(np, 0); 230 - if (!base) { 231 - dev_err(dev, "failed to map ocotp\n"); 232 - goto put_node; 235 + base = of_iomap(np, 0); 236 + of_node_put(np); 237 + if (!base) { 238 + dev_err(dev, "failed to map ocotp\n"); 239 + return -EFAULT; 240 + } 241 + 242 + /* 243 + * SPEED_GRADING[1:0] defines the max speed of ARM: 244 + * 2b'11: 1200000000Hz; 245 + * 2b'10: 996000000Hz; 246 + * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz. 247 + * 2b'00: 792000000Hz; 248 + * We need to set the max speed of ARM according to fuse map. 249 + */ 250 + val = readl_relaxed(base + OCOTP_CFG3); 251 + iounmap(base); 233 252 } 234 253 235 - /* 236 - * SPEED_GRADING[1:0] defines the max speed of ARM: 237 - * 2b'11: 1200000000Hz; 238 - * 2b'10: 996000000Hz; 239 - * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz. 240 - * 2b'00: 792000000Hz; 241 - * We need to set the max speed of ARM according to fuse map. 
242 - */ 243 - val = readl_relaxed(base + OCOTP_CFG3); 244 254 val >>= OCOTP_CFG3_SPEED_SHIFT; 245 255 val &= 0x3; 246 256 ··· 267 257 if (dev_pm_opp_disable(dev, 1200000000)) 268 258 dev_warn(dev, "failed to disable 1.2GHz OPP\n"); 269 259 } 270 - iounmap(base); 271 - put_node: 272 - of_node_put(np); 260 + 261 + return 0; 273 262 } 274 263 275 264 #define OCOTP_CFG3_6UL_SPEED_696MHZ 0x2 ··· 289 280 void __iomem *base; 290 281 291 282 np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp"); 283 + if (!np) 284 + np = of_find_compatible_node(NULL, NULL, 285 + "fsl,imx6ull-ocotp"); 292 286 if (!np) 293 287 return -ENOENT; 294 288 ··· 390 378 goto put_reg; 391 379 } 392 380 381 + /* Because we have added the OPPs here, we must free them */ 382 + free_opp = true; 383 + 393 384 if (of_machine_is_compatible("fsl,imx6ul") || 394 385 of_machine_is_compatible("fsl,imx6ull")) { 395 386 ret = imx6ul_opp_check_speed_grading(cpu_dev); 396 - if (ret) { 397 - if (ret == -EPROBE_DEFER) 398 - goto put_node; 399 - 387 + } else { 388 + ret = imx6q_opp_check_speed_grading(cpu_dev); 389 + } 390 + if (ret) { 391 + if (ret != -EPROBE_DEFER) 400 392 dev_err(cpu_dev, "failed to read ocotp: %d\n", 401 393 ret); 402 - goto put_node; 403 - } 404 - } else { 405 - imx6q_opp_check_speed_grading(cpu_dev); 394 + goto out_free_opp; 406 395 } 407 396 408 - /* Because we have added the OPPs here, we must free them */ 409 - free_opp = true; 410 397 num = dev_pm_opp_get_opp_count(cpu_dev); 411 398 if (num < 0) { 412 399 ret = num;
+13 -13
drivers/cpufreq/intel_pstate.c
··· 2155 2155 } 2156 2156 } 2157 2157 2158 - static int intel_pstate_verify_policy(struct cpufreq_policy_data *policy) 2158 + static void intel_pstate_verify_cpu_policy(struct cpudata *cpu, 2159 + struct cpufreq_policy_data *policy) 2159 2160 { 2160 - struct cpudata *cpu = all_cpu_data[policy->cpu]; 2161 - 2162 2161 update_turbo_state(); 2163 2162 cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, 2164 2163 intel_pstate_get_max_freq(cpu)); 2165 2164 2166 2165 intel_pstate_adjust_policy_max(cpu, policy); 2166 + } 2167 + 2168 + static int intel_pstate_verify_policy(struct cpufreq_policy_data *policy) 2169 + { 2170 + intel_pstate_verify_cpu_policy(all_cpu_data[policy->cpu], policy); 2167 2171 2168 2172 return 0; 2169 2173 } ··· 2247 2243 if (ret) 2248 2244 return ret; 2249 2245 2250 - if (IS_ENABLED(CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE)) 2251 - policy->policy = CPUFREQ_POLICY_PERFORMANCE; 2252 - else 2253 - policy->policy = CPUFREQ_POLICY_POWERSAVE; 2246 + /* 2247 + * Set the policy to powersave to provide a valid fallback value in case 2248 + * the default cpufreq governor is neither powersave nor performance. 2249 + */ 2250 + policy->policy = CPUFREQ_POLICY_POWERSAVE; 2254 2251 2255 2252 return 0; 2256 2253 } ··· 2273 2268 { 2274 2269 struct cpudata *cpu = all_cpu_data[policy->cpu]; 2275 2270 2276 - update_turbo_state(); 2277 - cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, 2278 - intel_pstate_get_max_freq(cpu)); 2279 - 2280 - intel_pstate_adjust_policy_max(cpu, policy); 2281 - 2271 + intel_pstate_verify_cpu_policy(cpu, policy); 2282 2272 intel_pstate_update_perf_limits(cpu, policy->min, policy->max); 2283 2273 2284 2274 return 0;
+175 -16
drivers/cpufreq/qcom-cpufreq-nvmem.c
··· 49 49 struct qcom_cpufreq_match_data { 50 50 int (*get_version)(struct device *cpu_dev, 51 51 struct nvmem_cell *speedbin_nvmem, 52 + char **pvs_name, 52 53 struct qcom_cpufreq_drv *drv); 53 54 const char **genpd_names; 54 55 }; 55 56 56 57 struct qcom_cpufreq_drv { 57 - struct opp_table **opp_tables; 58 + struct opp_table **names_opp_tables; 59 + struct opp_table **hw_opp_tables; 58 60 struct opp_table **genpd_opp_tables; 59 61 u32 versions; 60 62 const struct qcom_cpufreq_match_data *data; 61 63 }; 62 64 63 65 static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev; 66 + 67 + static void get_krait_bin_format_a(struct device *cpu_dev, 68 + int *speed, int *pvs, int *pvs_ver, 69 + struct nvmem_cell *pvs_nvmem, u8 *buf) 70 + { 71 + u32 pte_efuse; 72 + 73 + pte_efuse = *((u32 *)buf); 74 + 75 + *speed = pte_efuse & 0xf; 76 + if (*speed == 0xf) 77 + *speed = (pte_efuse >> 4) & 0xf; 78 + 79 + if (*speed == 0xf) { 80 + *speed = 0; 81 + dev_warn(cpu_dev, "Speed bin: Defaulting to %d\n", *speed); 82 + } else { 83 + dev_dbg(cpu_dev, "Speed bin: %d\n", *speed); 84 + } 85 + 86 + *pvs = (pte_efuse >> 10) & 0x7; 87 + if (*pvs == 0x7) 88 + *pvs = (pte_efuse >> 13) & 0x7; 89 + 90 + if (*pvs == 0x7) { 91 + *pvs = 0; 92 + dev_warn(cpu_dev, "PVS bin: Defaulting to %d\n", *pvs); 93 + } else { 94 + dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs); 95 + } 96 + } 97 + 98 + static void get_krait_bin_format_b(struct device *cpu_dev, 99 + int *speed, int *pvs, int *pvs_ver, 100 + struct nvmem_cell *pvs_nvmem, u8 *buf) 101 + { 102 + u32 pte_efuse, redundant_sel; 103 + 104 + pte_efuse = *((u32 *)buf); 105 + redundant_sel = (pte_efuse >> 24) & 0x7; 106 + 107 + *pvs_ver = (pte_efuse >> 4) & 0x3; 108 + 109 + switch (redundant_sel) { 110 + case 1: 111 + *pvs = ((pte_efuse >> 28) & 0x8) | ((pte_efuse >> 6) & 0x7); 112 + *speed = (pte_efuse >> 27) & 0xf; 113 + break; 114 + case 2: 115 + *pvs = (pte_efuse >> 27) & 0xf; 116 + *speed = pte_efuse & 0x7; 117 + break; 118 + default: 119 + /* 4 bits of 
PVS are in efuse register bits 31, 8-6. */ 120 + *pvs = ((pte_efuse >> 28) & 0x8) | ((pte_efuse >> 6) & 0x7); 121 + *speed = pte_efuse & 0x7; 122 + } 123 + 124 + /* Check SPEED_BIN_BLOW_STATUS */ 125 + if (pte_efuse & BIT(3)) { 126 + dev_dbg(cpu_dev, "Speed bin: %d\n", *speed); 127 + } else { 128 + dev_warn(cpu_dev, "Speed bin not set. Defaulting to 0!\n"); 129 + *speed = 0; 130 + } 131 + 132 + /* Check PVS_BLOW_STATUS */ 133 + pte_efuse = *(((u32 *)buf) + 4); 134 + pte_efuse &= BIT(21); 135 + if (pte_efuse) { 136 + dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs); 137 + } else { 138 + dev_warn(cpu_dev, "PVS bin not set. Defaulting to 0!\n"); 139 + *pvs = 0; 140 + } 141 + 142 + dev_dbg(cpu_dev, "PVS version: %d\n", *pvs_ver); 143 + } 64 144 65 145 static enum _msm8996_version qcom_cpufreq_get_msm_id(void) 66 146 { ··· 173 93 174 94 static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev, 175 95 struct nvmem_cell *speedbin_nvmem, 96 + char **pvs_name, 176 97 struct qcom_cpufreq_drv *drv) 177 98 { 178 99 size_t len; 179 100 u8 *speedbin; 180 101 enum _msm8996_version msm8996_version; 102 + *pvs_name = NULL; 181 103 182 104 msm8996_version = qcom_cpufreq_get_msm_id(); 183 105 if (NUM_OF_MSM8996_VERSIONS == msm8996_version) { ··· 207 125 return 0; 208 126 } 209 127 128 + static int qcom_cpufreq_krait_name_version(struct device *cpu_dev, 129 + struct nvmem_cell *speedbin_nvmem, 130 + char **pvs_name, 131 + struct qcom_cpufreq_drv *drv) 132 + { 133 + int speed = 0, pvs = 0, pvs_ver = 0; 134 + u8 *speedbin; 135 + size_t len; 136 + 137 + speedbin = nvmem_cell_read(speedbin_nvmem, &len); 138 + 139 + if (IS_ERR(speedbin)) 140 + return PTR_ERR(speedbin); 141 + 142 + switch (len) { 143 + case 4: 144 + get_krait_bin_format_a(cpu_dev, &speed, &pvs, &pvs_ver, 145 + speedbin_nvmem, speedbin); 146 + break; 147 + case 8: 148 + get_krait_bin_format_b(cpu_dev, &speed, &pvs, &pvs_ver, 149 + speedbin_nvmem, speedbin); 150 + break; 151 + default: 152 + dev_err(cpu_dev, "Unable to read 
nvmem data. Defaulting to 0!\n"); 153 + return -ENODEV; 154 + } 155 + 156 + snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d", 157 + speed, pvs, pvs_ver); 158 + 159 + drv->versions = (1 << speed); 160 + 161 + kfree(speedbin); 162 + return 0; 163 + } 164 + 210 165 static const struct qcom_cpufreq_match_data match_data_kryo = { 211 166 .get_version = qcom_cpufreq_kryo_name_version, 167 + }; 168 + 169 + static const struct qcom_cpufreq_match_data match_data_krait = { 170 + .get_version = qcom_cpufreq_krait_name_version, 212 171 }; 213 172 214 173 static const char *qcs404_genpd_names[] = { "cpr", NULL }; ··· 264 141 struct nvmem_cell *speedbin_nvmem; 265 142 struct device_node *np; 266 143 struct device *cpu_dev; 144 + char *pvs_name = "speedXX-pvsXX-vXX"; 267 145 unsigned cpu; 268 146 const struct of_device_id *match; 269 147 int ret; ··· 277 153 if (!np) 278 154 return -ENOENT; 279 155 280 - ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu"); 156 + ret = of_device_is_compatible(np, "operating-points-v2-qcom-cpu"); 281 157 if (!ret) { 282 158 of_node_put(np); 283 159 return -ENOENT; ··· 305 181 goto free_drv; 306 182 } 307 183 308 - ret = drv->data->get_version(cpu_dev, speedbin_nvmem, drv); 184 + ret = drv->data->get_version(cpu_dev, 185 + speedbin_nvmem, &pvs_name, drv); 309 186 if (ret) { 310 187 nvmem_cell_put(speedbin_nvmem); 311 188 goto free_drv; ··· 315 190 } 316 191 of_node_put(np); 317 192 318 - drv->opp_tables = kcalloc(num_possible_cpus(), sizeof(*drv->opp_tables), 193 + drv->names_opp_tables = kcalloc(num_possible_cpus(), 194 + sizeof(*drv->names_opp_tables), 319 195 GFP_KERNEL); 320 - if (!drv->opp_tables) { 196 + if (!drv->names_opp_tables) { 321 197 ret = -ENOMEM; 322 198 goto free_drv; 199 + } 200 + drv->hw_opp_tables = kcalloc(num_possible_cpus(), 201 + sizeof(*drv->hw_opp_tables), 202 + GFP_KERNEL); 203 + if (!drv->hw_opp_tables) { 204 + ret = -ENOMEM; 205 + goto free_opp_names; 323 206 } 324 207 325 208 
drv->genpd_opp_tables = kcalloc(num_possible_cpus(), ··· 346 213 } 347 214 348 215 if (drv->data->get_version) { 349 - drv->opp_tables[cpu] = 350 - dev_pm_opp_set_supported_hw(cpu_dev, 351 - &drv->versions, 1); 352 - if (IS_ERR(drv->opp_tables[cpu])) { 353 - ret = PTR_ERR(drv->opp_tables[cpu]); 216 + 217 + if (pvs_name) { 218 + drv->names_opp_tables[cpu] = dev_pm_opp_set_prop_name( 219 + cpu_dev, 220 + pvs_name); 221 + if (IS_ERR(drv->names_opp_tables[cpu])) { 222 + ret = PTR_ERR(drv->names_opp_tables[cpu]); 223 + dev_err(cpu_dev, "Failed to add OPP name %s\n", 224 + pvs_name); 225 + goto free_opp; 226 + } 227 + } 228 + 229 + drv->hw_opp_tables[cpu] = dev_pm_opp_set_supported_hw( 230 + cpu_dev, &drv->versions, 1); 231 + if (IS_ERR(drv->hw_opp_tables[cpu])) { 232 + ret = PTR_ERR(drv->hw_opp_tables[cpu]); 354 233 dev_err(cpu_dev, 355 234 "Failed to set supported hardware\n"); 356 235 goto free_genpd_opp; ··· 404 259 kfree(drv->genpd_opp_tables); 405 260 free_opp: 406 261 for_each_possible_cpu(cpu) { 407 - if (IS_ERR_OR_NULL(drv->opp_tables[cpu])) 262 + if (IS_ERR_OR_NULL(drv->names_opp_tables[cpu])) 408 263 break; 409 - dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]); 264 + dev_pm_opp_put_prop_name(drv->names_opp_tables[cpu]); 410 265 } 411 - kfree(drv->opp_tables); 266 + for_each_possible_cpu(cpu) { 267 + if (IS_ERR_OR_NULL(drv->hw_opp_tables[cpu])) 268 + break; 269 + dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]); 270 + } 271 + kfree(drv->hw_opp_tables); 272 + free_opp_names: 273 + kfree(drv->names_opp_tables); 412 274 free_drv: 413 275 kfree(drv); 414 276 ··· 430 278 platform_device_unregister(cpufreq_dt_pdev); 431 279 432 280 for_each_possible_cpu(cpu) { 433 - if (drv->opp_tables[cpu]) 434 - dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]); 281 + if (drv->names_opp_tables[cpu]) 282 + dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]); 283 + if (drv->hw_opp_tables[cpu]) 284 + dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]); 435 285 if 
(drv->genpd_opp_tables[cpu]) 436 286 dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]); 437 287 } 438 288 439 - kfree(drv->opp_tables); 289 + kfree(drv->names_opp_tables); 290 + kfree(drv->hw_opp_tables); 440 291 kfree(drv->genpd_opp_tables); 441 292 kfree(drv); 442 293 ··· 458 303 { .compatible = "qcom,apq8096", .data = &match_data_kryo }, 459 304 { .compatible = "qcom,msm8996", .data = &match_data_kryo }, 460 305 { .compatible = "qcom,qcs404", .data = &match_data_qcs404 }, 306 + { .compatible = "qcom,ipq8064", .data = &match_data_krait }, 307 + { .compatible = "qcom,apq8064", .data = &match_data_krait }, 308 + { .compatible = "qcom,msm8974", .data = &match_data_krait }, 309 + { .compatible = "qcom,msm8960", .data = &match_data_krait }, 461 310 {}, 462 311 }; 463 312
+7
drivers/cpufreq/ti-cpufreq.c
··· 25 25 26 26 #define DRA7_EFUSE_HAS_OD_MPU_OPP 11 27 27 #define DRA7_EFUSE_HAS_HIGH_MPU_OPP 15 28 + #define DRA76_EFUSE_HAS_PLUS_MPU_OPP 18 28 29 #define DRA7_EFUSE_HAS_ALL_MPU_OPP 23 30 + #define DRA76_EFUSE_HAS_ALL_MPU_OPP 24 29 31 30 32 #define DRA7_EFUSE_NOM_MPU_OPP BIT(0) 31 33 #define DRA7_EFUSE_OD_MPU_OPP BIT(1) 32 34 #define DRA7_EFUSE_HIGH_MPU_OPP BIT(2) 35 + #define DRA76_EFUSE_PLUS_MPU_OPP BIT(3) 33 36 34 37 #define OMAP3_CONTROL_DEVICE_STATUS 0x4800244C 35 38 #define OMAP3_CONTROL_IDCODE 0x4830A204 ··· 83 80 */ 84 81 85 82 switch (efuse) { 83 + case DRA76_EFUSE_HAS_PLUS_MPU_OPP: 84 + case DRA76_EFUSE_HAS_ALL_MPU_OPP: 85 + calculated_efuse |= DRA76_EFUSE_PLUS_MPU_OPP; 86 + /* Fall through */ 86 87 case DRA7_EFUSE_HAS_ALL_MPU_OPP: 87 88 case DRA7_EFUSE_HAS_HIGH_MPU_OPP: 88 89 calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP;
+10 -2
drivers/cpuidle/cpuidle-haltpoll.c
··· 18 18 #include <linux/kvm_para.h> 19 19 #include <linux/cpuidle_haltpoll.h> 20 20 21 + static bool force __read_mostly; 22 + module_param(force, bool, 0444); 23 + MODULE_PARM_DESC(force, "Load unconditionally"); 24 + 21 25 static struct cpuidle_device __percpu *haltpoll_cpuidle_devices; 22 26 static enum cpuhp_state haltpoll_hp_state; 23 27 ··· 94 90 haltpoll_cpuidle_devices = NULL; 95 91 } 96 92 93 + static bool haltpoll_want(void) 94 + { 95 + return kvm_para_has_hint(KVM_HINTS_REALTIME) || force; 96 + } 97 + 97 98 static int __init haltpoll_init(void) 98 99 { 99 100 int ret; ··· 110 101 111 102 cpuidle_poll_state_init(drv); 112 103 113 - if (!kvm_para_available() || 114 - !kvm_para_has_hint(KVM_HINTS_REALTIME)) 104 + if (!kvm_para_available() || !haltpoll_want()) 115 105 return -ENODEV; 116 106 117 107 ret = cpuidle_register_driver(drv);
+27 -19
drivers/cpuidle/cpuidle-psci.c
··· 160 160 return 0; 161 161 } 162 162 163 + static int __init psci_dt_cpu_init_topology(struct cpuidle_driver *drv, 164 + struct psci_cpuidle_data *data, 165 + unsigned int state_count, int cpu) 166 + { 167 + /* Currently limit the hierarchical topology to be used in OSI mode. */ 168 + if (!psci_has_osi_support()) 169 + return 0; 170 + 171 + data->dev = psci_dt_attach_cpu(cpu); 172 + if (IS_ERR_OR_NULL(data->dev)) 173 + return PTR_ERR_OR_ZERO(data->dev); 174 + 175 + /* 176 + * Using the deepest state for the CPU to trigger a potential selection 177 + * of a shared state for the domain, assumes the domain states are all 178 + * deeper states. 179 + */ 180 + drv->states[state_count - 1].enter = psci_enter_domain_idle_state; 181 + psci_cpuidle_use_cpuhp = true; 182 + 183 + return 0; 184 + } 185 + 163 186 static int __init psci_dt_cpu_init_idle(struct cpuidle_driver *drv, 164 187 struct device_node *cpu_node, 165 188 unsigned int state_count, int cpu) ··· 216 193 goto free_mem; 217 194 } 218 195 219 - /* Currently limit the hierarchical topology to be used in OSI mode. */ 220 - if (psci_has_osi_support()) { 221 - data->dev = psci_dt_attach_cpu(cpu); 222 - if (IS_ERR(data->dev)) { 223 - ret = PTR_ERR(data->dev); 224 - goto free_mem; 225 - } 226 - 227 - /* 228 - * Using the deepest state for the CPU to trigger a potential 229 - * selection of a shared state for the domain, assumes the 230 - * domain states are all deeper states. 231 - */ 232 - if (data->dev) { 233 - drv->states[state_count - 1].enter = 234 - psci_enter_domain_idle_state; 235 - psci_cpuidle_use_cpuhp = true; 236 - } 237 - } 196 + /* Initialize optional data, used for the hierarchical topology. */ 197 + ret = psci_dt_cpu_init_topology(drv, data, state_count, cpu); 198 + if (ret < 0) 199 + goto free_mem; 238 200 239 201 /* Idle states parsed correctly, store them in the per-cpu struct. */ 240 202 data->psci_states = psci_states;
+1 -39
drivers/cpuidle/cpuidle.c
··· 736 736 } 737 737 EXPORT_SYMBOL_GPL(cpuidle_register); 738 738 739 - #ifdef CONFIG_SMP 740 - 741 - /* 742 - * This function gets called when a part of the kernel has a new latency 743 - * requirement. This means we need to get all processors out of their C-state, 744 - * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that 745 - * wakes them all right up. 746 - */ 747 - static int cpuidle_latency_notify(struct notifier_block *b, 748 - unsigned long l, void *v) 749 - { 750 - wake_up_all_idle_cpus(); 751 - return NOTIFY_OK; 752 - } 753 - 754 - static struct notifier_block cpuidle_latency_notifier = { 755 - .notifier_call = cpuidle_latency_notify, 756 - }; 757 - 758 - static inline void latency_notifier_init(struct notifier_block *n) 759 - { 760 - pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, n); 761 - } 762 - 763 - #else /* CONFIG_SMP */ 764 - 765 - #define latency_notifier_init(x) do { } while (0) 766 - 767 - #endif /* CONFIG_SMP */ 768 - 769 739 /** 770 740 * cpuidle_init - core initializer 771 741 */ 772 742 static int __init cpuidle_init(void) 773 743 { 774 - int ret; 775 - 776 744 if (cpuidle_disabled()) 777 745 return -ENODEV; 778 746 779 - ret = cpuidle_add_interface(cpu_subsys.dev_root); 780 - if (ret) 781 - return ret; 782 - 783 - latency_notifier_init(&cpuidle_latency_notifier); 784 - 785 - return 0; 747 + return cpuidle_add_interface(cpu_subsys.dev_root); 786 748 } 787 749 788 750 module_param(off, int, 0444);
+1 -1
drivers/cpuidle/governor.c
··· 109 109 */ 110 110 s64 cpuidle_governor_latency_req(unsigned int cpu) 111 111 { 112 - int global_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY); 113 112 struct device *device = get_cpu_device(cpu); 114 113 int device_req = dev_pm_qos_raw_resume_latency(device); 114 + int global_req = cpu_latency_qos_limit(); 115 115 116 116 if (device_req > global_req) 117 117 device_req = global_req;
+7 -7
drivers/devfreq/devfreq.c
··· 550 550 EXPORT_SYMBOL(devfreq_monitor_resume); 551 551 552 552 /** 553 - * devfreq_interval_update() - Update device devfreq monitoring interval 553 + * devfreq_update_interval() - Update device devfreq monitoring interval 554 554 * @devfreq: the devfreq instance. 555 555 * @delay: new polling interval to be set. 556 556 * 557 557 * Helper function to set new load monitoring polling interval. Function 558 - * to be called from governor in response to DEVFREQ_GOV_INTERVAL event. 558 + * to be called from governor in response to DEVFREQ_GOV_UPDATE_INTERVAL event. 559 559 */ 560 - void devfreq_interval_update(struct devfreq *devfreq, unsigned int *delay) 560 + void devfreq_update_interval(struct devfreq *devfreq, unsigned int *delay) 561 561 { 562 562 unsigned int cur_delay = devfreq->profile->polling_ms; 563 563 unsigned int new_delay = *delay; ··· 597 597 out: 598 598 mutex_unlock(&devfreq->lock); 599 599 } 600 - EXPORT_SYMBOL(devfreq_interval_update); 600 + EXPORT_SYMBOL(devfreq_update_interval); 601 601 602 602 /** 603 603 * devfreq_notifier_call() - Notify that the device frequency requirements ··· 705 705 706 706 if (dev_pm_qos_request_active(&devfreq->user_max_freq_req)) { 707 707 err = dev_pm_qos_remove_request(&devfreq->user_max_freq_req); 708 - if (err) 708 + if (err < 0) 709 709 dev_warn(dev->parent, 710 710 "Failed to remove max_freq request: %d\n", err); 711 711 } 712 712 if (dev_pm_qos_request_active(&devfreq->user_min_freq_req)) { 713 713 err = dev_pm_qos_remove_request(&devfreq->user_min_freq_req); 714 - if (err) 714 + if (err < 0) 715 715 dev_warn(dev->parent, 716 716 "Failed to remove min_freq request: %d\n", err); 717 717 } ··· 1424 1424 if (ret != 1) 1425 1425 return -EINVAL; 1426 1426 1427 - df->governor->event_handler(df, DEVFREQ_GOV_INTERVAL, &value); 1427 + df->governor->event_handler(df, DEVFREQ_GOV_UPDATE_INTERVAL, &value); 1428 1428 ret = count; 1429 1429 1430 1430 return ret;
+10 -11
drivers/devfreq/governor.h
··· 18 18 /* Devfreq events */ 19 19 #define DEVFREQ_GOV_START 0x1 20 20 #define DEVFREQ_GOV_STOP 0x2 21 - #define DEVFREQ_GOV_INTERVAL 0x3 21 + #define DEVFREQ_GOV_UPDATE_INTERVAL 0x3 22 22 #define DEVFREQ_GOV_SUSPEND 0x4 23 23 #define DEVFREQ_GOV_RESUME 0x5 24 24 ··· 30 30 * @node: list node - contains registered devfreq governors 31 31 * @name: Governor's name 32 32 * @immutable: Immutable flag for governor. If the value is 1, 33 - * this govenror is never changeable to other governor. 33 + * this governor is never changeable to other governor. 34 34 * @interrupt_driven: Devfreq core won't schedule polling work for this 35 35 * governor if value is set to 1. 36 36 * @get_target_freq: Returns desired operating frequency for the device. ··· 57 57 unsigned int event, void *data); 58 58 }; 59 59 60 - extern void devfreq_monitor_start(struct devfreq *devfreq); 61 - extern void devfreq_monitor_stop(struct devfreq *devfreq); 62 - extern void devfreq_monitor_suspend(struct devfreq *devfreq); 63 - extern void devfreq_monitor_resume(struct devfreq *devfreq); 64 - extern void devfreq_interval_update(struct devfreq *devfreq, 65 - unsigned int *delay); 60 + void devfreq_monitor_start(struct devfreq *devfreq); 61 + void devfreq_monitor_stop(struct devfreq *devfreq); 62 + void devfreq_monitor_suspend(struct devfreq *devfreq); 63 + void devfreq_monitor_resume(struct devfreq *devfreq); 64 + void devfreq_update_interval(struct devfreq *devfreq, unsigned int *delay); 66 65 67 - extern int devfreq_add_governor(struct devfreq_governor *governor); 68 - extern int devfreq_remove_governor(struct devfreq_governor *governor); 66 + int devfreq_add_governor(struct devfreq_governor *governor); 67 + int devfreq_remove_governor(struct devfreq_governor *governor); 69 68 70 - extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq); 69 + int devfreq_update_status(struct devfreq *devfreq, unsigned long freq); 71 70 72 71 static inline int devfreq_update_stats(struct devfreq 
*df) 73 72 {
+2 -2
drivers/devfreq/governor_simpleondemand.c
··· 96 96 devfreq_monitor_stop(devfreq); 97 97 break; 98 98 99 - case DEVFREQ_GOV_INTERVAL: 100 - devfreq_interval_update(devfreq, (unsigned int *)data); 99 + case DEVFREQ_GOV_UPDATE_INTERVAL: 100 + devfreq_update_interval(devfreq, (unsigned int *)data); 101 101 break; 102 102 103 103 case DEVFREQ_GOV_SUSPEND:
+1 -1
drivers/devfreq/governor_userspace.c
··· 131 131 } 132 132 133 133 static struct devfreq_governor devfreq_userspace = { 134 - .name = "userspace", 134 + .name = DEVFREQ_GOV_USERSPACE, 135 135 .get_target_freq = devfreq_userspace_func, 136 136 .event_handler = devfreq_userspace_handler, 137 137 };
+2 -2
drivers/devfreq/tegra30-devfreq.c
··· 734 734 devfreq_monitor_stop(devfreq); 735 735 break; 736 736 737 - case DEVFREQ_GOV_INTERVAL: 737 + case DEVFREQ_GOV_UPDATE_INTERVAL: 738 738 /* 739 739 * ACTMON hardware supports up to 256 milliseconds for the 740 740 * sampling period. ··· 745 745 } 746 746 747 747 tegra_actmon_pause(tegra); 748 - devfreq_interval_update(devfreq, new_delay); 748 + devfreq_update_interval(devfreq, new_delay); 749 749 ret = tegra_actmon_resume(tegra); 750 750 break; 751 751
+2 -2
drivers/gpu/drm/i915/display/intel_dp.c
··· 1360 1360 * lowest possible wakeup latency and so prevent the cpu from going into 1361 1361 * deep sleep states. 1362 1362 */ 1363 - pm_qos_update_request(&i915->pm_qos, 0); 1363 + cpu_latency_qos_update_request(&i915->pm_qos, 0); 1364 1364 1365 1365 intel_dp_check_edp(intel_dp); 1366 1366 ··· 1488 1488 1489 1489 ret = recv_bytes; 1490 1490 out: 1491 - pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE); 1491 + cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE); 1492 1492 1493 1493 if (vdd) 1494 1494 edp_panel_vdd_off(intel_dp, false);
+5 -7
drivers/gpu/drm/i915/i915_drv.c
··· 505 505 mutex_init(&dev_priv->backlight_lock); 506 506 507 507 mutex_init(&dev_priv->sb_lock); 508 - pm_qos_add_request(&dev_priv->sb_qos, 509 - PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE); 508 + cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE); 510 509 511 510 mutex_init(&dev_priv->av_mutex); 512 511 mutex_init(&dev_priv->wm.wm_mutex); ··· 570 571 vlv_free_s0ix_state(dev_priv); 571 572 i915_workqueues_cleanup(dev_priv); 572 573 573 - pm_qos_remove_request(&dev_priv->sb_qos); 574 + cpu_latency_qos_remove_request(&dev_priv->sb_qos); 574 575 mutex_destroy(&dev_priv->sb_lock); 575 576 } 576 577 ··· 1228 1229 } 1229 1230 } 1230 1231 1231 - pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY, 1232 - PM_QOS_DEFAULT_VALUE); 1232 + cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE); 1233 1233 1234 1234 intel_gt_init_workarounds(dev_priv); 1235 1235 ··· 1274 1276 err_msi: 1275 1277 if (pdev->msi_enabled) 1276 1278 pci_disable_msi(pdev); 1277 - pm_qos_remove_request(&dev_priv->pm_qos); 1279 + cpu_latency_qos_remove_request(&dev_priv->pm_qos); 1278 1280 err_mem_regions: 1279 1281 intel_memory_regions_driver_release(dev_priv); 1280 1282 err_ggtt: ··· 1297 1299 if (pdev->msi_enabled) 1298 1300 pci_disable_msi(pdev); 1299 1301 1300 - pm_qos_remove_request(&dev_priv->pm_qos); 1302 + cpu_latency_qos_remove_request(&dev_priv->pm_qos); 1301 1303 } 1302 1304 1303 1305 /**
+3 -2
drivers/gpu/drm/i915/intel_sideband.c
··· 60 60 * to the Valleyview P-unit and not all sideband communications. 61 61 */ 62 62 if (IS_VALLEYVIEW(i915)) { 63 - pm_qos_update_request(&i915->sb_qos, 0); 63 + cpu_latency_qos_update_request(&i915->sb_qos, 0); 64 64 on_each_cpu(ping, NULL, 1); 65 65 } 66 66 } ··· 68 68 static void __vlv_punit_put(struct drm_i915_private *i915) 69 69 { 70 70 if (IS_VALLEYVIEW(i915)) 71 - pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE); 71 + cpu_latency_qos_update_request(&i915->sb_qos, 72 + PM_QOS_DEFAULT_VALUE); 72 73 73 74 iosf_mbi_punit_release(); 74 75 }
+4 -5
drivers/hsi/clients/cmt_speech.c
··· 965 965 966 966 if (old_state != hi->iface_state) { 967 967 if (hi->iface_state == CS_STATE_CONFIGURED) { 968 - pm_qos_add_request(&hi->pm_qos_req, 969 - PM_QOS_CPU_DMA_LATENCY, 968 + cpu_latency_qos_add_request(&hi->pm_qos_req, 970 969 CS_QOS_LATENCY_FOR_DATA_USEC); 971 970 local_bh_disable(); 972 971 cs_hsi_read_on_data(hi); 973 972 local_bh_enable(); 974 973 } else if (old_state == CS_STATE_CONFIGURED) { 975 - pm_qos_remove_request(&hi->pm_qos_req); 974 + cpu_latency_qos_remove_request(&hi->pm_qos_req); 976 975 } 977 976 } 978 977 return r; ··· 1074 1075 WARN_ON(!cs_state_idle(hi->control_state)); 1075 1076 WARN_ON(!cs_state_idle(hi->data_state)); 1076 1077 1077 - if (pm_qos_request_active(&hi->pm_qos_req)) 1078 - pm_qos_remove_request(&hi->pm_qos_req); 1078 + if (cpu_latency_qos_request_active(&hi->pm_qos_req)) 1079 + cpu_latency_qos_remove_request(&hi->pm_qos_req); 1079 1080 1080 1081 spin_lock_bh(&hi->lock); 1081 1082 cs_hsi_free_data(hi);
+157 -145
drivers/idle/intel_idle.c
··· 2 2 /* 3 3 * intel_idle.c - native hardware idle loop for modern Intel processors 4 4 * 5 - * Copyright (c) 2013, Intel Corporation. 5 + * Copyright (c) 2013 - 2020, Intel Corporation. 6 6 * Len Brown <len.brown@intel.com> 7 + * Rafael J. Wysocki <rafael.j.wysocki@intel.com> 7 8 */ 8 9 9 10 /* ··· 25 24 26 25 /* 27 26 * Known limitations 28 - * 29 - * The driver currently initializes for_each_online_cpu() upon modprobe. 30 - * It it unaware of subsequent processors hot-added to the system. 31 - * This means that if you boot with maxcpus=n and later online 32 - * processors above n, those processors will use C1 only. 33 27 * 34 28 * ACPI has a .suspend hack to turn off deep c-statees during suspend 35 29 * to avoid complications with the lapic timer workaround. ··· 51 55 #include <asm/mwait.h> 52 56 #include <asm/msr.h> 53 57 54 - #define INTEL_IDLE_VERSION "0.4.1" 58 + #define INTEL_IDLE_VERSION "0.5.1" 55 59 56 60 static struct cpuidle_driver intel_idle_driver = { 57 61 .name = "intel_idle", ··· 61 65 static int max_cstate = CPUIDLE_STATE_MAX - 1; 62 66 static unsigned int disabled_states_mask; 63 67 64 - static unsigned int mwait_substates; 68 + static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; 65 69 66 - #define LAPIC_TIMER_ALWAYS_RELIABLE 0xFFFFFFFF 67 - /* Reliable LAPIC Timer States, bit 1 for C1 etc. 
*/ 68 - static unsigned int lapic_timer_reliable_states = (1 << 1); /* Default to only C1 */ 70 + static unsigned long auto_demotion_disable_flags; 71 + static bool disable_promotion_to_c1e; 72 + 73 + static bool lapic_timer_always_reliable; 69 74 70 75 struct idle_cpu { 71 76 struct cpuidle_state *state_table; ··· 81 84 bool use_acpi; 82 85 }; 83 86 84 - static const struct idle_cpu *icpu; 85 - static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; 86 - static int intel_idle(struct cpuidle_device *dev, 87 - struct cpuidle_driver *drv, int index); 88 - static void intel_idle_s2idle(struct cpuidle_device *dev, 89 - struct cpuidle_driver *drv, int index); 90 - static struct cpuidle_state *cpuidle_state_table; 87 + static const struct idle_cpu *icpu __initdata; 88 + static struct cpuidle_state *cpuidle_state_table __initdata; 89 + 90 + static unsigned int mwait_substates __initdata; 91 91 92 92 /* 93 93 * Enable this state by default even if the ACPI _CST does not list it. ··· 97 103 * If this flag is set, SW flushes the TLB, so even if the 98 104 * HW doesn't do the flushing, this flag is safe to use. 99 105 */ 100 - #define CPUIDLE_FLAG_TLB_FLUSHED 0x10000 106 + #define CPUIDLE_FLAG_TLB_FLUSHED BIT(16) 101 107 102 108 /* 103 109 * MWAIT takes an 8-bit "hint" in EAX "suggesting" ··· 109 115 #define flg2MWAIT(flags) (((flags) >> 24) & 0xFF) 110 116 #define MWAIT2flg(eax) ((eax & 0xFF) << 24) 111 117 118 + /** 119 + * intel_idle - Ask the processor to enter the given idle state. 120 + * @dev: cpuidle device of the target CPU. 121 + * @drv: cpuidle driver (assumed to point to intel_idle_driver). 122 + * @index: Target idle state index. 123 + * 124 + * Use the MWAIT instruction to notify the processor that the CPU represented by 125 + * @dev is idle and it can try to enter the idle state corresponding to @index. 
126 + * 127 + * If the local APIC timer is not known to be reliable in the target idle state, 128 + * enable one-shot tick broadcasting for the target CPU before executing MWAIT. 129 + * 130 + * Optionally call leave_mm() for the target CPU upfront to avoid wakeups due to 131 + * flushing user TLBs. 132 + * 133 + * Must be called under local_irq_disable(). 134 + */ 135 + static __cpuidle int intel_idle(struct cpuidle_device *dev, 136 + struct cpuidle_driver *drv, int index) 137 + { 138 + struct cpuidle_state *state = &drv->states[index]; 139 + unsigned long eax = flg2MWAIT(state->flags); 140 + unsigned long ecx = 1; /* break on interrupt flag */ 141 + bool uninitialized_var(tick); 142 + int cpu = smp_processor_id(); 143 + 144 + /* 145 + * leave_mm() to avoid costly and often unnecessary wakeups 146 + * for flushing the user TLB's associated with the active mm. 147 + */ 148 + if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED) 149 + leave_mm(cpu); 150 + 151 + if (!static_cpu_has(X86_FEATURE_ARAT) && !lapic_timer_always_reliable) { 152 + /* 153 + * Switch over to one-shot tick broadcast if the target C-state 154 + * is deeper than C1. 155 + */ 156 + if ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) { 157 + tick = true; 158 + tick_broadcast_enter(); 159 + } else { 160 + tick = false; 161 + } 162 + } 163 + 164 + mwait_idle_with_hints(eax, ecx); 165 + 166 + if (!static_cpu_has(X86_FEATURE_ARAT) && tick) 167 + tick_broadcast_exit(); 168 + 169 + return index; 170 + } 171 + 172 + /** 173 + * intel_idle_s2idle - Ask the processor to enter the given idle state. 174 + * @dev: cpuidle device of the target CPU. 175 + * @drv: cpuidle driver (assumed to point to intel_idle_driver). 176 + * @index: Target idle state index. 177 + * 178 + * Use the MWAIT instruction to notify the processor that the CPU represented by 179 + * @dev is idle and it can try to enter the idle state corresponding to @index. 
180 + * 181 + * Invoked as a suspend-to-idle callback routine with frozen user space, frozen 182 + * scheduler tick and suspended scheduler clock on the target CPU. 183 + */ 184 + static __cpuidle void intel_idle_s2idle(struct cpuidle_device *dev, 185 + struct cpuidle_driver *drv, int index) 186 + { 187 + unsigned long eax = flg2MWAIT(drv->states[index].flags); 188 + unsigned long ecx = 1; /* break on interrupt flag */ 189 + 190 + mwait_idle_with_hints(eax, ecx); 191 + } 192 + 112 193 /* 113 194 * States are indexed by the cstate number, 114 195 * which is also the index into the MWAIT hint array. 115 196 * Thus C0 is a dummy. 116 197 */ 117 - static struct cpuidle_state nehalem_cstates[] = { 198 + static struct cpuidle_state nehalem_cstates[] __initdata = { 118 199 { 119 200 .name = "C1", 120 201 .desc = "MWAIT 0x00", ··· 226 157 .enter = NULL } 227 158 }; 228 159 229 - static struct cpuidle_state snb_cstates[] = { 160 + static struct cpuidle_state snb_cstates[] __initdata = { 230 161 { 231 162 .name = "C1", 232 163 .desc = "MWAIT 0x00", ··· 271 202 .enter = NULL } 272 203 }; 273 204 274 - static struct cpuidle_state byt_cstates[] = { 205 + static struct cpuidle_state byt_cstates[] __initdata = { 275 206 { 276 207 .name = "C1", 277 208 .desc = "MWAIT 0x00", ··· 316 247 .enter = NULL } 317 248 }; 318 249 319 - static struct cpuidle_state cht_cstates[] = { 250 + static struct cpuidle_state cht_cstates[] __initdata = { 320 251 { 321 252 .name = "C1", 322 253 .desc = "MWAIT 0x00", ··· 361 292 .enter = NULL } 362 293 }; 363 294 364 - static struct cpuidle_state ivb_cstates[] = { 295 + static struct cpuidle_state ivb_cstates[] __initdata = { 365 296 { 366 297 .name = "C1", 367 298 .desc = "MWAIT 0x00", ··· 406 337 .enter = NULL } 407 338 }; 408 339 409 - static struct cpuidle_state ivt_cstates[] = { 340 + static struct cpuidle_state ivt_cstates[] __initdata = { 410 341 { 411 342 .name = "C1", 412 343 .desc = "MWAIT 0x00", ··· 443 374 .enter = NULL } 444 375 }; 445 376 
446 - static struct cpuidle_state ivt_cstates_4s[] = { 377 + static struct cpuidle_state ivt_cstates_4s[] __initdata = { 447 378 { 448 379 .name = "C1", 449 380 .desc = "MWAIT 0x00", ··· 480 411 .enter = NULL } 481 412 }; 482 413 483 - static struct cpuidle_state ivt_cstates_8s[] = { 414 + static struct cpuidle_state ivt_cstates_8s[] __initdata = { 484 415 { 485 416 .name = "C1", 486 417 .desc = "MWAIT 0x00", ··· 517 448 .enter = NULL } 518 449 }; 519 450 520 - static struct cpuidle_state hsw_cstates[] = { 451 + static struct cpuidle_state hsw_cstates[] __initdata = { 521 452 { 522 453 .name = "C1", 523 454 .desc = "MWAIT 0x00", ··· 585 516 { 586 517 .enter = NULL } 587 518 }; 588 - static struct cpuidle_state bdw_cstates[] = { 519 + static struct cpuidle_state bdw_cstates[] __initdata = { 589 520 { 590 521 .name = "C1", 591 522 .desc = "MWAIT 0x00", ··· 654 585 .enter = NULL } 655 586 }; 656 587 657 - static struct cpuidle_state skl_cstates[] = { 588 + static struct cpuidle_state skl_cstates[] __initdata = { 658 589 { 659 590 .name = "C1", 660 591 .desc = "MWAIT 0x00", ··· 723 654 .enter = NULL } 724 655 }; 725 656 726 - static struct cpuidle_state skx_cstates[] = { 657 + static struct cpuidle_state skx_cstates[] __initdata = { 727 658 { 728 659 .name = "C1", 729 660 .desc = "MWAIT 0x00", ··· 752 683 .enter = NULL } 753 684 }; 754 685 755 - static struct cpuidle_state atom_cstates[] = { 686 + static struct cpuidle_state atom_cstates[] __initdata = { 756 687 { 757 688 .name = "C1E", 758 689 .desc = "MWAIT 0x00", ··· 788 719 { 789 720 .enter = NULL } 790 721 }; 791 - static struct cpuidle_state tangier_cstates[] = { 722 + static struct cpuidle_state tangier_cstates[] __initdata = { 792 723 { 793 724 .name = "C1", 794 725 .desc = "MWAIT 0x00", ··· 832 763 { 833 764 .enter = NULL } 834 765 }; 835 - static struct cpuidle_state avn_cstates[] = { 766 + static struct cpuidle_state avn_cstates[] __initdata = { 836 767 { 837 768 .name = "C1", 838 769 .desc = "MWAIT 0x00", 
··· 852 783 { 853 784 .enter = NULL } 854 785 }; 855 - static struct cpuidle_state knl_cstates[] = { 786 + static struct cpuidle_state knl_cstates[] __initdata = { 856 787 { 857 788 .name = "C1", 858 789 .desc = "MWAIT 0x00", ··· 873 804 .enter = NULL } 874 805 }; 875 806 876 - static struct cpuidle_state bxt_cstates[] = { 807 + static struct cpuidle_state bxt_cstates[] __initdata = { 877 808 { 878 809 .name = "C1", 879 810 .desc = "MWAIT 0x00", ··· 934 865 .enter = NULL } 935 866 }; 936 867 937 - static struct cpuidle_state dnv_cstates[] = { 868 + static struct cpuidle_state dnv_cstates[] __initdata = { 938 869 { 939 870 .name = "C1", 940 871 .desc = "MWAIT 0x00", ··· 963 894 .enter = NULL } 964 895 }; 965 896 966 - /** 967 - * intel_idle 968 - * @dev: cpuidle_device 969 - * @drv: cpuidle driver 970 - * @index: index of cpuidle state 971 - * 972 - * Must be called under local_irq_disable(). 973 - */ 974 - static __cpuidle int intel_idle(struct cpuidle_device *dev, 975 - struct cpuidle_driver *drv, int index) 976 - { 977 - unsigned long ecx = 1; /* break on interrupt flag */ 978 - struct cpuidle_state *state = &drv->states[index]; 979 - unsigned long eax = flg2MWAIT(state->flags); 980 - unsigned int cstate; 981 - bool uninitialized_var(tick); 982 - int cpu = smp_processor_id(); 983 - 984 - /* 985 - * leave_mm() to avoid costly and often unnecessary wakeups 986 - * for flushing the user TLB's associated with the active mm. 
987 - */ 988 - if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED) 989 - leave_mm(cpu); 990 - 991 - if (!static_cpu_has(X86_FEATURE_ARAT)) { 992 - cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) & 993 - MWAIT_CSTATE_MASK) + 1; 994 - tick = false; 995 - if (!(lapic_timer_reliable_states & (1 << (cstate)))) { 996 - tick = true; 997 - tick_broadcast_enter(); 998 - } 999 - } 1000 - 1001 - mwait_idle_with_hints(eax, ecx); 1002 - 1003 - if (!static_cpu_has(X86_FEATURE_ARAT) && tick) 1004 - tick_broadcast_exit(); 1005 - 1006 - return index; 1007 - } 1008 - 1009 - /** 1010 - * intel_idle_s2idle - simplified "enter" callback routine for suspend-to-idle 1011 - * @dev: cpuidle_device 1012 - * @drv: cpuidle driver 1013 - * @index: state index 1014 - */ 1015 - static void intel_idle_s2idle(struct cpuidle_device *dev, 1016 - struct cpuidle_driver *drv, int index) 1017 - { 1018 - unsigned long ecx = 1; /* break on interrupt flag */ 1019 - unsigned long eax = flg2MWAIT(drv->states[index].flags); 1020 - 1021 - mwait_idle_with_hints(eax, ecx); 1022 - } 1023 - 1024 - static const struct idle_cpu idle_cpu_nehalem = { 897 + static const struct idle_cpu idle_cpu_nehalem __initconst = { 1025 898 .state_table = nehalem_cstates, 1026 899 .auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE, 1027 900 .disable_promotion_to_c1e = true, 1028 901 }; 1029 902 1030 - static const struct idle_cpu idle_cpu_nhx = { 903 + static const struct idle_cpu idle_cpu_nhx __initconst = { 1031 904 .state_table = nehalem_cstates, 1032 905 .auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE, 1033 906 .disable_promotion_to_c1e = true, 1034 907 .use_acpi = true, 1035 908 }; 1036 909 1037 - static const struct idle_cpu idle_cpu_atom = { 910 + static const struct idle_cpu idle_cpu_atom __initconst = { 1038 911 .state_table = atom_cstates, 1039 912 }; 1040 913 1041 - static const struct idle_cpu idle_cpu_tangier = { 914 + static const struct idle_cpu idle_cpu_tangier __initconst = { 1042 915 
.state_table = tangier_cstates, 1043 916 }; 1044 917 1045 - static const struct idle_cpu idle_cpu_lincroft = { 918 + static const struct idle_cpu idle_cpu_lincroft __initconst = { 1046 919 .state_table = atom_cstates, 1047 920 .auto_demotion_disable_flags = ATM_LNC_C6_AUTO_DEMOTE, 1048 921 }; 1049 922 1050 - static const struct idle_cpu idle_cpu_snb = { 923 + static const struct idle_cpu idle_cpu_snb __initconst = { 1051 924 .state_table = snb_cstates, 1052 925 .disable_promotion_to_c1e = true, 1053 926 }; 1054 927 1055 - static const struct idle_cpu idle_cpu_snx = { 928 + static const struct idle_cpu idle_cpu_snx __initconst = { 1056 929 .state_table = snb_cstates, 1057 930 .disable_promotion_to_c1e = true, 1058 931 .use_acpi = true, 1059 932 }; 1060 933 1061 - static const struct idle_cpu idle_cpu_byt = { 934 + static const struct idle_cpu idle_cpu_byt __initconst = { 1062 935 .state_table = byt_cstates, 1063 936 .disable_promotion_to_c1e = true, 1064 937 .byt_auto_demotion_disable_flag = true, 1065 938 }; 1066 939 1067 - static const struct idle_cpu idle_cpu_cht = { 940 + static const struct idle_cpu idle_cpu_cht __initconst = { 1068 941 .state_table = cht_cstates, 1069 942 .disable_promotion_to_c1e = true, 1070 943 .byt_auto_demotion_disable_flag = true, 1071 944 }; 1072 945 1073 - static const struct idle_cpu idle_cpu_ivb = { 946 + static const struct idle_cpu idle_cpu_ivb __initconst = { 1074 947 .state_table = ivb_cstates, 1075 948 .disable_promotion_to_c1e = true, 1076 949 }; 1077 950 1078 - static const struct idle_cpu idle_cpu_ivt = { 951 + static const struct idle_cpu idle_cpu_ivt __initconst = { 1079 952 .state_table = ivt_cstates, 1080 953 .disable_promotion_to_c1e = true, 1081 954 .use_acpi = true, 1082 955 }; 1083 956 1084 - static const struct idle_cpu idle_cpu_hsw = { 957 + static const struct idle_cpu idle_cpu_hsw __initconst = { 1085 958 .state_table = hsw_cstates, 1086 959 .disable_promotion_to_c1e = true, 1087 960 }; 1088 961 1089 - static 
const struct idle_cpu idle_cpu_hsx = { 962 + static const struct idle_cpu idle_cpu_hsx __initconst = { 1090 963 .state_table = hsw_cstates, 1091 964 .disable_promotion_to_c1e = true, 1092 965 .use_acpi = true, 1093 966 }; 1094 967 1095 - static const struct idle_cpu idle_cpu_bdw = { 968 + static const struct idle_cpu idle_cpu_bdw __initconst = { 1096 969 .state_table = bdw_cstates, 1097 970 .disable_promotion_to_c1e = true, 1098 971 }; 1099 972 1100 - static const struct idle_cpu idle_cpu_bdx = { 973 + static const struct idle_cpu idle_cpu_bdx __initconst = { 1101 974 .state_table = bdw_cstates, 1102 975 .disable_promotion_to_c1e = true, 1103 976 .use_acpi = true, 1104 977 }; 1105 978 1106 - static const struct idle_cpu idle_cpu_skl = { 979 + static const struct idle_cpu idle_cpu_skl __initconst = { 1107 980 .state_table = skl_cstates, 1108 981 .disable_promotion_to_c1e = true, 1109 982 }; 1110 983 1111 - static const struct idle_cpu idle_cpu_skx = { 984 + static const struct idle_cpu idle_cpu_skx __initconst = { 1112 985 .state_table = skx_cstates, 1113 986 .disable_promotion_to_c1e = true, 1114 987 .use_acpi = true, 1115 988 }; 1116 989 1117 - static const struct idle_cpu idle_cpu_avn = { 990 + static const struct idle_cpu idle_cpu_avn __initconst = { 1118 991 .state_table = avn_cstates, 1119 992 .disable_promotion_to_c1e = true, 1120 993 .use_acpi = true, 1121 994 }; 1122 995 1123 - static const struct idle_cpu idle_cpu_knl = { 996 + static const struct idle_cpu idle_cpu_knl __initconst = { 1124 997 .state_table = knl_cstates, 1125 998 .use_acpi = true, 1126 999 }; 1127 1000 1128 - static const struct idle_cpu idle_cpu_bxt = { 1001 + static const struct idle_cpu idle_cpu_bxt __initconst = { 1129 1002 .state_table = bxt_cstates, 1130 1003 .disable_promotion_to_c1e = true, 1131 1004 }; 1132 1005 1133 - static const struct idle_cpu idle_cpu_dnv = { 1006 + static const struct idle_cpu idle_cpu_dnv __initconst = { 1134 1007 .state_table = dnv_cstates, 1135 1008 
.disable_promotion_to_c1e = true, 1136 1009 .use_acpi = true, ··· 1284 1273 static inline bool intel_idle_off_by_default(u32 mwait_hint) { return false; } 1285 1274 #endif /* !CONFIG_ACPI_PROCESSOR_CSTATE */ 1286 1275 1287 - /* 1288 - * ivt_idle_state_table_update(void) 1276 + /** 1277 + * ivt_idle_state_table_update - Tune the idle states table for Ivy Town. 1289 1278 * 1290 - * Tune IVT multi-socket targets 1291 - * Assumption: num_sockets == (max_package_num + 1) 1279 + * Tune IVT multi-socket targets. 1280 + * Assumption: num_sockets == (max_package_num + 1). 1292 1281 */ 1293 1282 static void __init ivt_idle_state_table_update(void) 1294 1283 { ··· 1334 1323 return div_u64((irtl & 0x3FF) * ns, NSEC_PER_USEC); 1335 1324 } 1336 1325 1337 - /* 1338 - * bxt_idle_state_table_update(void) 1326 + /** 1327 + * bxt_idle_state_table_update - Fix up the Broxton idle states table. 1339 1328 * 1340 - * On BXT, we trust the IRTL to show the definitive maximum latency 1341 - * We use the same value for target_residency. 1329 + * On BXT, trust the IRTL (Interrupt Response Time Limit) MSR to show the 1330 + * definitive maximum latency and use the same value for target_residency. 1342 1331 */ 1343 1332 static void __init bxt_idle_state_table_update(void) 1344 1333 { ··· 1381 1370 } 1382 1371 1383 1372 } 1384 - /* 1385 - * sklh_idle_state_table_update(void) 1373 + 1374 + /** 1375 + * sklh_idle_state_table_update - Fix up the Sky Lake idle states table. 1386 1376 * 1387 - * On SKL-H (model 0x5e) disable C8 and C9 if: 1388 - * C10 is enabled and SGX disabled 1377 + * On SKL-H (model 0x5e) skip C8 and C9 if C10 is enabled and SGX disabled. 1389 1378 */ 1390 1379 static void __init sklh_idle_state_table_update(void) 1391 1380 { ··· 1496 1485 } 1497 1486 } 1498 1487 1499 - /* 1500 - * intel_idle_cpuidle_driver_init() 1501 - * allocate, initialize cpuidle_states 1488 + /** 1489 + * intel_idle_cpuidle_driver_init - Create the list of available idle states. 
1490 + * @drv: cpuidle driver structure to initialize. 1502 1491 */ 1503 1492 static void __init intel_idle_cpuidle_driver_init(struct cpuidle_driver *drv) 1504 1493 { ··· 1520 1509 unsigned long long msr_bits; 1521 1510 1522 1511 rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr_bits); 1523 - msr_bits &= ~(icpu->auto_demotion_disable_flags); 1512 + msr_bits &= ~auto_demotion_disable_flags; 1524 1513 wrmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr_bits); 1525 1514 } 1526 1515 ··· 1533 1522 wrmsrl(MSR_IA32_POWER_CTL, msr_bits); 1534 1523 } 1535 1524 1536 - /* 1537 - * intel_idle_cpu_init() 1538 - * allocate, initialize, register cpuidle_devices 1539 - * @cpu: cpu/core to initialize 1525 + /** 1526 + * intel_idle_cpu_init - Register the target CPU with the cpuidle core. 1527 + * @cpu: CPU to initialize. 1528 + * 1529 + * Register a cpuidle device object for @cpu and update its MSRs in accordance 1530 + * with the processor model flags. 1540 1531 */ 1541 1532 static int intel_idle_cpu_init(unsigned int cpu) 1542 1533 { ··· 1552 1539 return -EIO; 1553 1540 } 1554 1541 1555 - if (!icpu) 1556 - return 0; 1557 - 1558 - if (icpu->auto_demotion_disable_flags) 1542 + if (auto_demotion_disable_flags) 1559 1543 auto_demotion_disable(); 1560 1544 1561 - if (icpu->disable_promotion_to_c1e) 1545 + if (disable_promotion_to_c1e) 1562 1546 c1e_promotion_disable(); 1563 1547 1564 1548 return 0; ··· 1565 1555 { 1566 1556 struct cpuidle_device *dev; 1567 1557 1568 - if (lapic_timer_reliable_states != LAPIC_TIMER_ALWAYS_RELIABLE) 1558 + if (!lapic_timer_always_reliable) 1569 1559 tick_broadcast_enable(); 1570 1560 1571 1561 /* ··· 1633 1623 icpu = (const struct idle_cpu *)id->driver_data; 1634 1624 if (icpu) { 1635 1625 cpuidle_state_table = icpu->state_table; 1626 + auto_demotion_disable_flags = icpu->auto_demotion_disable_flags; 1627 + disable_promotion_to_c1e = icpu->disable_promotion_to_c1e; 1636 1628 if (icpu->use_acpi || force_use_acpi) 1637 1629 intel_idle_acpi_cst_extract(); 1638 1630 } else if 
(!intel_idle_acpi_cst_extract()) { ··· 1659 1647 } 1660 1648 1661 1649 if (boot_cpu_has(X86_FEATURE_ARAT)) /* Always Reliable APIC Timer */ 1662 - lapic_timer_reliable_states = LAPIC_TIMER_ALWAYS_RELIABLE; 1650 + lapic_timer_always_reliable = true; 1663 1651 1664 1652 retval = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "idle/intel:online", 1665 1653 intel_idle_cpu_online, NULL); 1666 1654 if (retval < 0) 1667 1655 goto hp_setup_fail; 1668 1656 1669 - pr_debug("lapic_timer_reliable_states 0x%x\n", 1670 - lapic_timer_reliable_states); 1657 + pr_debug("Local APIC timer is reliable in %s\n", 1658 + lapic_timer_always_reliable ? "all C-states" : "C1"); 1671 1659 1672 1660 return 0; 1673 1661
+2 -3
drivers/media/pci/saa7134/saa7134-video.c
··· 1008 1008 */ 1009 1009 if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) || 1010 1010 (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq))) 1011 - pm_qos_add_request(&dev->qos_request, 1012 - PM_QOS_CPU_DMA_LATENCY, 20); 1011 + cpu_latency_qos_add_request(&dev->qos_request, 20); 1013 1012 dmaq->seq_nr = 0; 1014 1013 1015 1014 return 0; ··· 1023 1024 1024 1025 if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) || 1025 1026 (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq))) 1026 - pm_qos_remove_request(&dev->qos_request); 1027 + cpu_latency_qos_remove_request(&dev->qos_request); 1027 1028 } 1028 1029 1029 1030 static const struct vb2_ops vb2_qops = {
+2 -2
drivers/media/platform/via-camera.c
··· 646 646 * requirement which will keep the CPU out of the deeper sleep 647 647 * states. 648 648 */ 649 - pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50); 649 + cpu_latency_qos_add_request(&cam->qos_request, 50); 650 650 viacam_start_engine(cam); 651 651 return 0; 652 652 out: ··· 662 662 struct via_camera *cam = vb2_get_drv_priv(vq); 663 663 struct via_buffer *buf, *tmp; 664 664 665 - pm_qos_remove_request(&cam->qos_request); 665 + cpu_latency_qos_remove_request(&cam->qos_request); 666 666 viacam_stop_engine(cam); 667 667 668 668 list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {
+6 -8
drivers/mmc/host/sdhci-esdhc-imx.c
··· 1452 1452 pdev->id_entry->driver_data; 1453 1453 1454 1454 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1455 - pm_qos_add_request(&imx_data->pm_qos_req, 1456 - PM_QOS_CPU_DMA_LATENCY, 0); 1455 + cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0); 1457 1456 1458 1457 imx_data->clk_ipg = devm_clk_get(&pdev->dev, "ipg"); 1459 1458 if (IS_ERR(imx_data->clk_ipg)) { ··· 1571 1572 clk_disable_unprepare(imx_data->clk_per); 1572 1573 free_sdhci: 1573 1574 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1574 - pm_qos_remove_request(&imx_data->pm_qos_req); 1575 + cpu_latency_qos_remove_request(&imx_data->pm_qos_req); 1575 1576 sdhci_pltfm_free(pdev); 1576 1577 return err; 1577 1578 } ··· 1594 1595 clk_disable_unprepare(imx_data->clk_ahb); 1595 1596 1596 1597 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1597 - pm_qos_remove_request(&imx_data->pm_qos_req); 1598 + cpu_latency_qos_remove_request(&imx_data->pm_qos_req); 1598 1599 1599 1600 sdhci_pltfm_free(pdev); 1600 1601 ··· 1666 1667 clk_disable_unprepare(imx_data->clk_ahb); 1667 1668 1668 1669 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1669 - pm_qos_remove_request(&imx_data->pm_qos_req); 1670 + cpu_latency_qos_remove_request(&imx_data->pm_qos_req); 1670 1671 1671 1672 return ret; 1672 1673 } ··· 1679 1680 int err; 1680 1681 1681 1682 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1682 - pm_qos_add_request(&imx_data->pm_qos_req, 1683 - PM_QOS_CPU_DMA_LATENCY, 0); 1683 + cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0); 1684 1684 1685 1685 err = clk_prepare_enable(imx_data->clk_ahb); 1686 1686 if (err) ··· 1712 1714 clk_disable_unprepare(imx_data->clk_ahb); 1713 1715 remove_pm_qos_request: 1714 1716 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1715 - pm_qos_remove_request(&imx_data->pm_qos_req); 1717 + cpu_latency_qos_remove_request(&imx_data->pm_qos_req); 1716 1718 return err; 1717 1719 } 1718 1720 #endif
+6 -7
drivers/net/ethernet/intel/e1000e/netdev.c
··· 3280 3280 3281 3281 dev_info(&adapter->pdev->dev, 3282 3282 "Some CPU C-states have been disabled in order to enable jumbo frames\n"); 3283 - pm_qos_update_request(&adapter->pm_qos_req, lat); 3283 + cpu_latency_qos_update_request(&adapter->pm_qos_req, lat); 3284 3284 } else { 3285 - pm_qos_update_request(&adapter->pm_qos_req, 3286 - PM_QOS_DEFAULT_VALUE); 3285 + cpu_latency_qos_update_request(&adapter->pm_qos_req, 3286 + PM_QOS_DEFAULT_VALUE); 3287 3287 } 3288 3288 3289 3289 /* Enable Receives */ ··· 4636 4636 e1000_update_mng_vlan(adapter); 4637 4637 4638 4638 /* DMA latency requirement to workaround jumbo issue */ 4639 - pm_qos_add_request(&adapter->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 4640 - PM_QOS_DEFAULT_VALUE); 4639 + cpu_latency_qos_add_request(&adapter->pm_qos_req, PM_QOS_DEFAULT_VALUE); 4641 4640 4642 4641 /* before we allocate an interrupt, we must be ready to handle it. 4643 4642 * Setting DEBUG_SHIRQ in the kernel makes it fire an interrupt ··· 4678 4679 return 0; 4679 4680 4680 4681 err_req_irq: 4681 - pm_qos_remove_request(&adapter->pm_qos_req); 4682 + cpu_latency_qos_remove_request(&adapter->pm_qos_req); 4682 4683 e1000e_release_hw_control(adapter); 4683 4684 e1000_power_down_phy(adapter); 4684 4685 e1000e_free_rx_resources(adapter->rx_ring); ··· 4742 4743 !test_bit(__E1000_TESTING, &adapter->state)) 4743 4744 e1000e_release_hw_control(adapter); 4744 4745 4745 - pm_qos_remove_request(&adapter->pm_qos_req); 4746 + cpu_latency_qos_remove_request(&adapter->pm_qos_req); 4746 4747 4747 4748 pm_runtime_put_sync(&pdev->dev); 4748 4749
+2 -2
drivers/net/wireless/ath/ath10k/core.c
··· 1052 1052 } 1053 1053 1054 1054 memset(&latency_qos, 0, sizeof(latency_qos)); 1055 - pm_qos_add_request(&latency_qos, PM_QOS_CPU_DMA_LATENCY, 0); 1055 + cpu_latency_qos_add_request(&latency_qos, 0); 1056 1056 1057 1057 ret = ath10k_bmi_fast_download(ar, address, data, data_len); 1058 1058 1059 - pm_qos_remove_request(&latency_qos); 1059 + cpu_latency_qos_remove_request(&latency_qos); 1060 1060 1061 1061 return ret; 1062 1062 }
+5 -5
drivers/net/wireless/intel/ipw2x00/ipw2100.c
··· 1730 1730 /* the ipw2100 hardware really doesn't want power management delays 1731 1731 * longer than 175usec 1732 1732 */ 1733 - pm_qos_update_request(&ipw2100_pm_qos_req, 175); 1733 + cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 175); 1734 1734 1735 1735 /* If the interrupt is enabled, turn it off... */ 1736 1736 spin_lock_irqsave(&priv->low_lock, flags); ··· 1875 1875 ipw2100_disable_interrupts(priv); 1876 1876 spin_unlock_irqrestore(&priv->low_lock, flags); 1877 1877 1878 - pm_qos_update_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE); 1878 + cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 1879 + PM_QOS_DEFAULT_VALUE); 1879 1880 1880 1881 /* We have to signal any supplicant if we are disassociating */ 1881 1882 if (associated) ··· 6567 6566 printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION); 6568 6567 printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT); 6569 6568 6570 - pm_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 6571 - PM_QOS_DEFAULT_VALUE); 6569 + cpu_latency_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE); 6572 6570 6573 6571 ret = pci_register_driver(&ipw2100_pci_driver); 6574 6572 if (ret) ··· 6594 6594 &driver_attr_debug_level); 6595 6595 #endif 6596 6596 pci_unregister_driver(&ipw2100_pci_driver); 6597 - pm_qos_remove_request(&ipw2100_pm_qos_req); 6597 + cpu_latency_qos_remove_request(&ipw2100_pm_qos_req); 6598 6598 } 6599 6599 6600 6600 module_init(ipw2100_init);
+1 -1
drivers/powercap/idle_inject.c
··· 67 67 struct hrtimer timer; 68 68 unsigned int idle_duration_us; 69 69 unsigned int run_duration_us; 70 - unsigned long int cpumask[0]; 70 + unsigned long cpumask[]; 71 71 }; 72 72 73 73 static DEFINE_PER_CPU(struct idle_inject_thread, idle_inject_thread);
+2 -2
drivers/spi/spi-fsl-qspi.c
··· 484 484 } 485 485 486 486 if (needs_wakeup_wait_mode(q)) 487 - pm_qos_add_request(&q->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 0); 487 + cpu_latency_qos_add_request(&q->pm_qos_req, 0); 488 488 489 489 return 0; 490 490 } ··· 492 492 static void fsl_qspi_clk_disable_unprep(struct fsl_qspi *q) 493 493 { 494 494 if (needs_wakeup_wait_mode(q)) 495 - pm_qos_remove_request(&q->pm_qos_req); 495 + cpu_latency_qos_remove_request(&q->pm_qos_req); 496 496 497 497 clk_disable_unprepare(q->clk); 498 498 clk_disable_unprepare(q->clk_en);
+6 -7
drivers/tty/serial/8250/8250_omap.c
··· 569 569 struct omap8250_priv *priv; 570 570 571 571 priv = container_of(work, struct omap8250_priv, qos_work); 572 - pm_qos_update_request(&priv->pm_qos_request, priv->latency); 572 + cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency); 573 573 } 574 574 575 575 #ifdef CONFIG_SERIAL_8250_DMA ··· 1222 1222 DEFAULT_CLK_SPEED); 1223 1223 } 1224 1224 1225 - priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE; 1226 - priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE; 1227 - pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY, 1228 - priv->latency); 1225 + priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE; 1226 + priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE; 1227 + cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency); 1229 1228 INIT_WORK(&priv->qos_work, omap8250_uart_qos_work); 1230 1229 1231 1230 spin_lock_init(&priv->rx_dma_lock); ··· 1294 1295 pm_runtime_put_sync(&pdev->dev); 1295 1296 pm_runtime_disable(&pdev->dev); 1296 1297 serial8250_unregister_port(priv->line); 1297 - pm_qos_remove_request(&priv->pm_qos_request); 1298 + cpu_latency_qos_remove_request(&priv->pm_qos_request); 1298 1299 device_init_wakeup(&pdev->dev, false); 1299 1300 return 0; 1300 1301 } ··· 1444 1445 if (up->dma && up->dma->rxchan) 1445 1446 omap_8250_rx_dma_flush(up); 1446 1447 1447 - priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE; 1448 + priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE; 1448 1449 schedule_work(&priv->qos_work); 1449 1450 1450 1451 return 0;
+7 -8
drivers/tty/serial/omap-serial.c
··· 831 831 struct uart_omap_port *up = container_of(work, struct uart_omap_port, 832 832 qos_work); 833 833 834 - pm_qos_update_request(&up->pm_qos_request, up->latency); 834 + cpu_latency_qos_update_request(&up->pm_qos_request, up->latency); 835 835 } 836 836 837 837 static void ··· 1722 1722 DEFAULT_CLK_SPEED); 1723 1723 } 1724 1724 1725 - up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE; 1726 - up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE; 1727 - pm_qos_add_request(&up->pm_qos_request, 1728 - PM_QOS_CPU_DMA_LATENCY, up->latency); 1725 + up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE; 1726 + up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE; 1727 + cpu_latency_qos_add_request(&up->pm_qos_request, up->latency); 1729 1728 INIT_WORK(&up->qos_work, serial_omap_uart_qos_work); 1730 1729 1731 1730 platform_set_drvdata(pdev, up); ··· 1758 1759 pm_runtime_dont_use_autosuspend(&pdev->dev); 1759 1760 pm_runtime_put_sync(&pdev->dev); 1760 1761 pm_runtime_disable(&pdev->dev); 1761 - pm_qos_remove_request(&up->pm_qos_request); 1762 + cpu_latency_qos_remove_request(&up->pm_qos_request); 1762 1763 device_init_wakeup(up->dev, false); 1763 1764 err_rs485: 1764 1765 err_port_line: ··· 1776 1777 pm_runtime_dont_use_autosuspend(up->dev); 1777 1778 pm_runtime_put_sync(up->dev); 1778 1779 pm_runtime_disable(up->dev); 1779 - pm_qos_remove_request(&up->pm_qos_request); 1780 + cpu_latency_qos_remove_request(&up->pm_qos_request); 1780 1781 device_init_wakeup(&dev->dev, false); 1781 1782 1782 1783 return 0; ··· 1868 1869 1869 1870 serial_omap_enable_wakeup(up, true); 1870 1871 1871 - up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE; 1872 + up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE; 1872 1873 schedule_work(&up->qos_work); 1873 1874 1874 1875 return 0;
+5 -7
drivers/usb/chipidea/ci_hdrc_imx.c
··· 393 393 } 394 394 395 395 if (pdata.flags & CI_HDRC_PMQOS) 396 - pm_qos_add_request(&data->pm_qos_req, 397 - PM_QOS_CPU_DMA_LATENCY, 0); 396 + cpu_latency_qos_add_request(&data->pm_qos_req, 0); 398 397 399 398 ret = imx_get_clks(dev); 400 399 if (ret) ··· 477 478 /* don't overwrite original ret (cf. EPROBE_DEFER) */ 478 479 regulator_disable(data->hsic_pad_regulator); 479 480 if (pdata.flags & CI_HDRC_PMQOS) 480 - pm_qos_remove_request(&data->pm_qos_req); 481 + cpu_latency_qos_remove_request(&data->pm_qos_req); 481 482 data->ci_pdev = NULL; 482 483 return ret; 483 484 } ··· 498 499 if (data->ci_pdev) { 499 500 imx_disable_unprepare_clks(&pdev->dev); 500 501 if (data->plat_data->flags & CI_HDRC_PMQOS) 501 - pm_qos_remove_request(&data->pm_qos_req); 502 + cpu_latency_qos_remove_request(&data->pm_qos_req); 502 503 if (data->hsic_pad_regulator) 503 504 regulator_disable(data->hsic_pad_regulator); 504 505 } ··· 526 527 527 528 imx_disable_unprepare_clks(dev); 528 529 if (data->plat_data->flags & CI_HDRC_PMQOS) 529 - pm_qos_remove_request(&data->pm_qos_req); 530 + cpu_latency_qos_remove_request(&data->pm_qos_req); 530 531 531 532 data->in_lpm = true; 532 533 ··· 546 547 } 547 548 548 549 if (data->plat_data->flags & CI_HDRC_PMQOS) 549 - pm_qos_add_request(&data->pm_qos_req, 550 - PM_QOS_CPU_DMA_LATENCY, 0); 550 + cpu_latency_qos_add_request(&data->pm_qos_req, 0); 551 551 552 552 ret = imx_prepare_enable_clks(dev); 553 553 if (ret)
+1 -1
include/acpi/acpixf.h
··· 752 752 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void)) 753 753 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void)) 754 754 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void)) 755 - ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(void)) 755 + ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(u32 gpe_skip_number)) 756 756 ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_fixed_event_status_set(void)) 757 757 758 758 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+52 -54
include/linux/devfreq.h
··· 158 158 * functions except for the context of callbacks defined in struct 159 159 * devfreq_governor, the governor should protect its access with the 160 160 * struct mutex lock in struct devfreq. A governor may use this mutex 161 - * to protect its own private data in void *data as well. 161 + * to protect its own private data in ``void *data`` as well. 162 162 */ 163 163 struct devfreq { 164 164 struct list_head node; ··· 201 201 }; 202 202 203 203 #if defined(CONFIG_PM_DEVFREQ) 204 - extern struct devfreq *devfreq_add_device(struct device *dev, 205 - struct devfreq_dev_profile *profile, 206 - const char *governor_name, 207 - void *data); 208 - extern int devfreq_remove_device(struct devfreq *devfreq); 209 - extern struct devfreq *devm_devfreq_add_device(struct device *dev, 210 - struct devfreq_dev_profile *profile, 211 - const char *governor_name, 212 - void *data); 213 - extern void devm_devfreq_remove_device(struct device *dev, 214 - struct devfreq *devfreq); 204 + struct devfreq *devfreq_add_device(struct device *dev, 205 + struct devfreq_dev_profile *profile, 206 + const char *governor_name, 207 + void *data); 208 + int devfreq_remove_device(struct devfreq *devfreq); 209 + struct devfreq *devm_devfreq_add_device(struct device *dev, 210 + struct devfreq_dev_profile *profile, 211 + const char *governor_name, 212 + void *data); 213 + void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq); 215 214 216 215 /* Supposed to be called by PM callbacks */ 217 - extern int devfreq_suspend_device(struct devfreq *devfreq); 218 - extern int devfreq_resume_device(struct devfreq *devfreq); 216 + int devfreq_suspend_device(struct devfreq *devfreq); 217 + int devfreq_resume_device(struct devfreq *devfreq); 219 218 220 - extern void devfreq_suspend(void); 221 - extern void devfreq_resume(void); 219 + void devfreq_suspend(void); 220 + void devfreq_resume(void); 222 221 223 222 /** 224 223 * update_devfreq() - Reevaluate the device and configure 
frequency ··· 225 226 * 226 227 * Note: devfreq->lock must be held 227 228 */ 228 - extern int update_devfreq(struct devfreq *devfreq); 229 + int update_devfreq(struct devfreq *devfreq); 229 230 230 231 /* Helper functions for devfreq user device driver with OPP. */ 231 - extern struct dev_pm_opp *devfreq_recommended_opp(struct device *dev, 232 - unsigned long *freq, u32 flags); 233 - extern int devfreq_register_opp_notifier(struct device *dev, 234 - struct devfreq *devfreq); 235 - extern int devfreq_unregister_opp_notifier(struct device *dev, 236 - struct devfreq *devfreq); 237 - extern int devm_devfreq_register_opp_notifier(struct device *dev, 238 - struct devfreq *devfreq); 239 - extern void devm_devfreq_unregister_opp_notifier(struct device *dev, 240 - struct devfreq *devfreq); 241 - extern int devfreq_register_notifier(struct devfreq *devfreq, 242 - struct notifier_block *nb, 243 - unsigned int list); 244 - extern int devfreq_unregister_notifier(struct devfreq *devfreq, 245 - struct notifier_block *nb, 246 - unsigned int list); 247 - extern int devm_devfreq_register_notifier(struct device *dev, 232 + struct dev_pm_opp *devfreq_recommended_opp(struct device *dev, 233 + unsigned long *freq, u32 flags); 234 + int devfreq_register_opp_notifier(struct device *dev, 235 + struct devfreq *devfreq); 236 + int devfreq_unregister_opp_notifier(struct device *dev, 237 + struct devfreq *devfreq); 238 + int devm_devfreq_register_opp_notifier(struct device *dev, 239 + struct devfreq *devfreq); 240 + void devm_devfreq_unregister_opp_notifier(struct device *dev, 241 + struct devfreq *devfreq); 242 + int devfreq_register_notifier(struct devfreq *devfreq, 243 + struct notifier_block *nb, 244 + unsigned int list); 245 + int devfreq_unregister_notifier(struct devfreq *devfreq, 246 + struct notifier_block *nb, 247 + unsigned int list); 248 + int devm_devfreq_register_notifier(struct device *dev, 248 249 struct devfreq *devfreq, 249 250 struct notifier_block *nb, 250 251 unsigned int 
list); 251 - extern void devm_devfreq_unregister_notifier(struct device *dev, 252 + void devm_devfreq_unregister_notifier(struct device *dev, 252 253 struct devfreq *devfreq, 253 254 struct notifier_block *nb, 254 255 unsigned int list); 255 - extern struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, 256 - int index); 256 + struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index); 257 257 258 258 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND) 259 259 /** 260 - * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq 260 + * struct devfreq_simple_ondemand_data - ``void *data`` fed to struct devfreq 261 261 * and devfreq_add_device 262 262 * @upthreshold: If the load is over this value, the frequency jumps. 263 263 * Specify 0 to use the default. Valid value = 0 to 100. ··· 276 278 277 279 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_PASSIVE) 278 280 /** 279 - * struct devfreq_passive_data - void *data fed to struct devfreq 281 + * struct devfreq_passive_data - ``void *data`` fed to struct devfreq 280 282 * and devfreq_add_device 281 283 * @parent: the devfreq instance of parent device. 
282 284 * @get_target_freq: Optional callback, Returns desired operating frequency ··· 309 311 310 312 #else /* !CONFIG_PM_DEVFREQ */ 311 313 static inline struct devfreq *devfreq_add_device(struct device *dev, 312 - struct devfreq_dev_profile *profile, 313 - const char *governor_name, 314 - void *data) 314 + struct devfreq_dev_profile *profile, 315 + const char *governor_name, 316 + void *data) 315 317 { 316 318 return ERR_PTR(-ENOSYS); 317 319 } ··· 348 350 static inline void devfreq_resume(void) {} 349 351 350 352 static inline struct dev_pm_opp *devfreq_recommended_opp(struct device *dev, 351 - unsigned long *freq, u32 flags) 353 + unsigned long *freq, u32 flags) 352 354 { 353 355 return ERR_PTR(-EINVAL); 354 356 } 355 357 356 358 static inline int devfreq_register_opp_notifier(struct device *dev, 357 - struct devfreq *devfreq) 359 + struct devfreq *devfreq) 358 360 { 359 361 return -EINVAL; 360 362 } 361 363 362 364 static inline int devfreq_unregister_opp_notifier(struct device *dev, 363 - struct devfreq *devfreq) 365 + struct devfreq *devfreq) 364 366 { 365 367 return -EINVAL; 366 368 } 367 369 368 370 static inline int devm_devfreq_register_opp_notifier(struct device *dev, 369 - struct devfreq *devfreq) 371 + struct devfreq *devfreq) 370 372 { 371 373 return -EINVAL; 372 374 } 373 375 374 376 static inline void devm_devfreq_unregister_opp_notifier(struct device *dev, 375 - struct devfreq *devfreq) 377 + struct devfreq *devfreq) 376 378 { 377 379 } 378 380 ··· 391 393 } 392 394 393 395 static inline int devm_devfreq_register_notifier(struct device *dev, 394 - struct devfreq *devfreq, 395 - struct notifier_block *nb, 396 - unsigned int list) 396 + struct devfreq *devfreq, 397 + struct notifier_block *nb, 398 + unsigned int list) 397 399 { 398 400 return 0; 399 401 } 400 402 401 403 static inline void devm_devfreq_unregister_notifier(struct device *dev, 402 - struct devfreq *devfreq, 403 - struct notifier_block *nb, 404 - unsigned int list) 404 + struct 
devfreq *devfreq, 405 + struct notifier_block *nb, 406 + unsigned int list) 405 407 { 406 408 } 407 409 408 410 static inline struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, 409 - int index) 411 + int index) 410 412 { 411 413 return ERR_PTR(-ENODEV); 412 414 }
+41 -38
include/linux/pm_qos.h
···
1 1 /* SPDX-License-Identifier: GPL-2.0 */
2 + /*
3 + * Definitions related to Power Management Quality of Service (PM QoS).
4 + *
5 + * Copyright (C) 2020 Intel Corporation
6 + *
7 + * Authors:
8 + * Mark Gross <mgross@linux.intel.com>
9 + * Rafael J. Wysocki <rafael.j.wysocki@intel.com>
10 + */
11 +
2 12 #ifndef _LINUX_PM_QOS_H
3 13 #define _LINUX_PM_QOS_H
4 - /* interface for the pm_qos_power infrastructure of the linux kernel.
5 - *
6 - * Mark Gross <mgross@linux.intel.com>
7 - */
14 +
8 15 #include <linux/plist.h>
9 16 #include <linux/notifier.h>
10 17 #include <linux/device.h>
11 - #include <linux/workqueue.h>
12 -
13 - enum {
14 - PM_QOS_RESERVED = 0,
15 - PM_QOS_CPU_DMA_LATENCY,
16 -
17 - /* insert new class ID */
18 - PM_QOS_NUM_CLASSES,
19 - };
20 18
21 19 enum pm_qos_flags_status {
22 20 PM_QOS_FLAGS_UNDEFINED = -1,
···
27 29 #define PM_QOS_LATENCY_ANY S32_MAX
28 30 #define PM_QOS_LATENCY_ANY_NS ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
29 31
30 - #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
32 + #define PM_QOS_CPU_LATENCY_DEFAULT_VALUE (2000 * USEC_PER_SEC)
31 33 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE PM_QOS_LATENCY_ANY
32 34 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT PM_QOS_LATENCY_ANY
33 35 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS PM_QOS_LATENCY_ANY_NS
···
38 40
39 41 #define PM_QOS_FLAG_NO_POWER_OFF (1 << 0)
40 42
41 - struct pm_qos_request {
42 - struct plist_node node;
43 - int pm_qos_class;
44 - struct delayed_work work; /* for pm_qos_update_request_timeout */
45 - };
46 -
47 - struct pm_qos_flags_request {
48 - struct list_head node;
49 - s32 flags; /* Do not change to 64 bit */
50 - };
51 -
52 43 enum pm_qos_type {
53 44 PM_QOS_UNITIALIZED,
54 45 PM_QOS_MAX, /* return the largest value */
55 46 PM_QOS_MIN, /* return the smallest value */
56 - PM_QOS_SUM /* return the sum */
57 47 };
58 48
59 49 /*
···
56 70 s32 no_constraint_value;
57 71 enum pm_qos_type type;
58 72 struct blocking_notifier_head *notifiers;
73 + };
74 +
75 + struct pm_qos_request {
76 + struct plist_node node;
77 + struct pm_qos_constraints *qos;
78 + };
79 +
80 + struct pm_qos_flags_request {
81 + struct list_head node;
82 + s32 flags; /* Do not change to 64 bit */
59 83 };
60 84
61 85 struct pm_qos_flags {
···
136 140 return req->dev != NULL;
137 141 }
138 142
143 + s32 pm_qos_read_value(struct pm_qos_constraints *c);
139 144 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
140 145 enum pm_qos_req_action action, int value);
141 146 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
142 147 struct pm_qos_flags_request *req,
143 148 enum pm_qos_req_action action, s32 val);
144 - void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
145 - s32 value);
146 - void pm_qos_update_request(struct pm_qos_request *req,
147 - s32 new_value);
148 - void pm_qos_update_request_timeout(struct pm_qos_request *req,
149 - s32 new_value, unsigned long timeout_us);
150 - void pm_qos_remove_request(struct pm_qos_request *req);
151 149
152 - int pm_qos_request(int pm_qos_class);
153 - int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
154 - int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
155 - int pm_qos_request_active(struct pm_qos_request *req);
156 - s32 pm_qos_read_value(struct pm_qos_constraints *c);
150 + #ifdef CONFIG_CPU_IDLE
151 + s32 cpu_latency_qos_limit(void);
152 + bool cpu_latency_qos_request_active(struct pm_qos_request *req);
153 + void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
154 + void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
155 + void cpu_latency_qos_remove_request(struct pm_qos_request *req);
156 + #else
157 + static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
158 + static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
159 + {
160 + return false;
161 + }
162 + static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
163 + s32 value) {}
164 + static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
165 + s32 new_value) {}
166 + static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
167 + #endif
157 168
158 169 #ifdef CONFIG_PM
159 170 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
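The `#else` branch above gives drivers compilable no-ops when CONFIG_CPU_IDLE is unset. That fallback behavior can be checked outside the kernel; below is a user-space re-creation of the stubs (with `s32` and a dummy `struct pm_qos_request` as stand-ins for the kernel types): a request never becomes active and the reported limit stays at INT_MAX.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>

typedef int32_t s32;
struct pm_qos_request { int dummy; };	/* stand-in for the kernel struct */

/* The !CONFIG_CPU_IDLE stubs from the patch, re-created verbatim: */
static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
{
	return false;
}
static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
					       s32 value) {}
static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
						  s32 new_value) {}
static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
```

With these stubs a driver can call the API unconditionally; when cpuidle is compiled out, the calls collapse to nothing and `cpu_latency_qos_limit()` reports "no limit".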
+11 -1
include/linux/pm_runtime.h
···
38 38 extern int __pm_runtime_idle(struct device *dev, int rpmflags);
39 39 extern int __pm_runtime_suspend(struct device *dev, int rpmflags);
40 40 extern int __pm_runtime_resume(struct device *dev, int rpmflags);
41 - extern int pm_runtime_get_if_in_use(struct device *dev);
41 + extern int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count);
42 42 extern int pm_schedule_suspend(struct device *dev, unsigned int delay);
43 43 extern int __pm_runtime_set_status(struct device *dev, unsigned int status);
44 44 extern int pm_runtime_barrier(struct device *dev);
···
59 59 extern void pm_runtime_put_suppliers(struct device *dev);
60 60 extern void pm_runtime_new_link(struct device *dev);
61 61 extern void pm_runtime_drop_link(struct device *dev);
62 +
63 + static inline int pm_runtime_get_if_in_use(struct device *dev)
64 + {
65 + return pm_runtime_get_if_active(dev, false);
66 + }
62 67
63 68 static inline void pm_suspend_ignore_children(struct device *dev, bool enable)
64 69 {
···
145 140 return -ENOSYS;
146 141 }
147 142 static inline int pm_runtime_get_if_in_use(struct device *dev)
143 + {
144 + return -EINVAL;
145 + }
146 + static inline int pm_runtime_get_if_active(struct device *dev,
147 + bool ign_usage_count)
148 148 {
149 149 return -EINVAL;
150 150 }
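The hunk above turns `pm_runtime_get_if_in_use()` into a thin wrapper around the new `pm_runtime_get_if_active()`. A minimal user-space mock of the intended semantics follows; the `struct device` here is a toy stand-in (not the kernel's), and the behavior modeled is an assumption based on the helper's documented contract: a reference is taken only if the device is runtime-active, and, unless `ign_usage_count` is set, only if its usage count is already nonzero.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in: a device with a runtime-PM state and a usage count. */
struct device {
	bool rpm_active;	/* models runtime-PM status == RPM_ACTIVE */
	int usage_count;
};

/* Sketch of pm_runtime_get_if_active(): take a reference and return 1
 * only if the device is runtime-active and, unless ign_usage_count is
 * set, only if its usage count is nonzero. */
static int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count)
{
	if (!dev->rpm_active)
		return 0;
	if (!ign_usage_count && dev->usage_count == 0)
		return 0;
	dev->usage_count++;
	return 1;
}

/* The old helper becomes a wrapper, mirroring the patch. */
static int pm_runtime_get_if_in_use(struct device *dev)
{
	return pm_runtime_get_if_active(dev, false);
}
```

The point of the new flag: a caller that merely needs the device to be powered (active), regardless of whether anyone else is using it, passes `true`; the historical "in use" behavior is preserved by the wrapper.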
+17 -42
include/trace/events/power.h
···
359 359 );
360 360
361 361 /*
362 - * The pm qos events are used for pm qos update
362 + * CPU latency QoS events used for global CPU latency QoS list updates
363 363 */
364 - DECLARE_EVENT_CLASS(pm_qos_request,
364 + DECLARE_EVENT_CLASS(cpu_latency_qos_request,
365 365
366 - TP_PROTO(int pm_qos_class, s32 value),
366 + TP_PROTO(s32 value),
367 367
368 - TP_ARGS(pm_qos_class, value),
368 + TP_ARGS(value),
369 369
370 370 TP_STRUCT__entry(
371 - __field( int, pm_qos_class )
372 371 __field( s32, value )
373 372 ),
374 373
375 374 TP_fast_assign(
376 - __entry->pm_qos_class = pm_qos_class;
377 375 __entry->value = value;
378 376 ),
379 377
380 - TP_printk("pm_qos_class=%s value=%d",
381 - __print_symbolic(__entry->pm_qos_class,
382 - { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
378 + TP_printk("CPU_DMA_LATENCY value=%d",
383 379 __entry->value)
384 380 );
385 381
386 - DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
382 + DEFINE_EVENT(cpu_latency_qos_request, pm_qos_add_request,
387 383
388 - TP_PROTO(int pm_qos_class, s32 value),
384 + TP_PROTO(s32 value),
389 385
390 - TP_ARGS(pm_qos_class, value)
386 + TP_ARGS(value)
391 387 );
392 388
393 - DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
389 + DEFINE_EVENT(cpu_latency_qos_request, pm_qos_update_request,
394 390
395 - TP_PROTO(int pm_qos_class, s32 value),
391 + TP_PROTO(s32 value),
396 392
397 - TP_ARGS(pm_qos_class, value)
393 + TP_ARGS(value)
398 394 );
399 395
400 - DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
396 + DEFINE_EVENT(cpu_latency_qos_request, pm_qos_remove_request,
401 397
402 - TP_PROTO(int pm_qos_class, s32 value),
398 + TP_PROTO(s32 value),
403 399
404 - TP_ARGS(pm_qos_class, value)
400 + TP_ARGS(value)
405 401 );
406 402
407 - TRACE_EVENT(pm_qos_update_request_timeout,
408 -
409 - TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
410 -
411 - TP_ARGS(pm_qos_class, value, timeout_us),
412 -
413 - TP_STRUCT__entry(
414 - __field( int, pm_qos_class )
415 - __field( s32, value )
416 - __field( unsigned long, timeout_us )
417 - ),
418 -
419 - TP_fast_assign(
420 - __entry->pm_qos_class = pm_qos_class;
421 - __entry->value = value;
422 - __entry->timeout_us = timeout_us;
423 - ),
424 -
425 - TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
426 - __print_symbolic(__entry->pm_qos_class,
427 - { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
428 - __entry->value, __entry->timeout_us)
429 - );
430 -
403 + /*
404 + * General PM QoS events used for updates of PM QoS request lists
405 + */
431 406 DECLARE_EVENT_CLASS(pm_qos_update,
432 407
433 408 TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+197 -424
kernel/power/qos.c
···
1 1 // SPDX-License-Identifier: GPL-2.0-only
2 2 /*
3 - * This module exposes the interface to kernel space for specifying
4 - * QoS dependencies. It provides infrastructure for registration of:
3 + * Power Management Quality of Service (PM QoS) support base.
5 4 *
6 - * Dependents on a QoS value : register requests
7 - * Watchers of QoS value : get notified when target QoS value changes
5 + * Copyright (C) 2020 Intel Corporation
8 6 *
9 - * This QoS design is best effort based. Dependents register their QoS needs.
10 - * Watchers register to keep track of the current QoS needs of the system.
7 + * Authors:
8 + * Mark Gross <mgross@linux.intel.com>
9 + * Rafael J. Wysocki <rafael.j.wysocki@intel.com>
11 10 *
12 - * There are 3 basic classes of QoS parameter: latency, timeout, throughput
13 - * each have defined units:
14 - * latency: usec
15 - * timeout: usec <-- currently not used.
16 - * throughput: kbs (kilo byte / sec)
11 + * Provided here is an interface for specifying PM QoS dependencies. It allows
12 + * entities depending on QoS constraints to register their requests which are
13 + * aggregated as appropriate to produce effective constraints (target values)
14 + * that can be monitored by entities needing to respect them, either by polling
15 + * or through a built-in notification mechanism.
17 16 *
18 - * There are lists of pm_qos_objects each one wrapping requests, notifiers
19 - *
20 - * User mode requests on a QOS parameter register themselves to the
21 - * subsystem by opening the device node /dev/... and writing there request to
22 - * the node. As long as the process holds a file handle open to the node the
23 - * client continues to be accounted for. Upon file release the usermode
24 - * request is removed and a new qos target is computed. This way when the
25 - * request that the application has is cleaned up when closes the file
26 - * pointer or exits the pm_qos_object will get an opportunity to clean up.
27 - *
28 - * Mark Gross <mgross@linux.intel.com>
17 + * In addition to the basic functionality, more specific interfaces for managing
18 + * global CPU latency QoS requests and frequency QoS requests are provided.
29 19 */
30 20
31 21 /*#define DEBUG*/
···
44 54 * or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
45 55 * held, taken with _irqsave. One lock to rule them all
46 56 */
47 - struct pm_qos_object {
48 - struct pm_qos_constraints *constraints;
49 - struct miscdevice pm_qos_power_miscdev;
50 - char *name;
51 - };
52 -
53 57 static DEFINE_SPINLOCK(pm_qos_lock);
54 58
55 - static struct pm_qos_object null_pm_qos;
56 -
57 - static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
58 - static struct pm_qos_constraints cpu_dma_constraints = {
59 - .list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
60 - .target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
61 - .default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
62 - .no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
63 - .type = PM_QOS_MIN,
64 - .notifiers = &cpu_dma_lat_notifier,
65 - };
66 - static struct pm_qos_object cpu_dma_pm_qos = {
67 - .constraints = &cpu_dma_constraints,
68 - .name = "cpu_dma_latency",
69 - };
70 -
71 - static struct pm_qos_object *pm_qos_array[] = {
72 - &null_pm_qos,
73 - &cpu_dma_pm_qos,
74 - };
75 -
76 - static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
77 - size_t count, loff_t *f_pos);
78 - static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
79 - size_t count, loff_t *f_pos);
80 - static int pm_qos_power_open(struct inode *inode, struct file *filp);
81 - static int pm_qos_power_release(struct inode *inode, struct file *filp);
82 -
83 - static const struct file_operations pm_qos_power_fops = {
84 - .write = pm_qos_power_write,
85 - .read = pm_qos_power_read,
86 - .open = pm_qos_power_open,
87 - .release = pm_qos_power_release,
88 - .llseek = noop_llseek,
89 - };
90 -
91 - /* unlocked internal variant */
92 - static inline int pm_qos_get_value(struct pm_qos_constraints *c)
59 + /**
60 + * pm_qos_read_value - Return the current effective constraint value.
61 + * @c: List of PM QoS constraint requests.
62 + */
63 + s32 pm_qos_read_value(struct pm_qos_constraints *c)
93 64 {
94 - struct plist_node *node;
95 - int total_value = 0;
65 + return READ_ONCE(c->target_value);
66 + }
96 67
68 + static int pm_qos_get_value(struct pm_qos_constraints *c)
69 + {
97 70 if (plist_head_empty(&c->list))
98 71 return c->no_constraint_value;
99 72
···
67 114 case PM_QOS_MAX:
68 115 return plist_last(&c->list)->prio;
69 116
70 - case PM_QOS_SUM:
71 - plist_for_each(node, &c->list)
72 - total_value += node->prio;
73 -
74 - return total_value;
75 -
76 117 default:
77 - /* runtime check for not using enum */
78 - BUG();
118 + WARN(1, "Unknown PM QoS type in %s\n", __func__);
79 119 return PM_QOS_DEFAULT_VALUE;
80 120 }
81 121 }
82 122
83 - s32 pm_qos_read_value(struct pm_qos_constraints *c)
123 + static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
84 124 {
85 - return c->target_value;
125 + WRITE_ONCE(c->target_value, value);
86 126 }
87 -
88 - static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
89 - {
90 - c->target_value = value;
91 - }
92 -
93 - static int pm_qos_debug_show(struct seq_file *s, void *unused)
94 - {
95 - struct pm_qos_object *qos = (struct pm_qos_object *)s->private;
96 - struct pm_qos_constraints *c;
97 - struct pm_qos_request *req;
98 - char *type;
99 - unsigned long flags;
100 - int tot_reqs = 0;
101 - int active_reqs = 0;
102 -
103 - if (IS_ERR_OR_NULL(qos)) {
104 - pr_err("%s: bad qos param!\n", __func__);
105 - return -EINVAL;
106 - }
107 - c = qos->constraints;
108 - if (IS_ERR_OR_NULL(c)) {
109 - pr_err("%s: Bad constraints on qos?\n", __func__);
110 - return -EINVAL;
111 - }
112 -
113 - /* Lock to ensure we have a snapshot */
114 - spin_lock_irqsave(&pm_qos_lock, flags);
115 - if (plist_head_empty(&c->list)) {
116 - seq_puts(s, "Empty!\n");
117 - goto out;
118 - }
119 -
120 - switch (c->type) {
121 - case PM_QOS_MIN:
122 - type = "Minimum";
123 - break;
124 - case PM_QOS_MAX:
125 - type = "Maximum";
126 - break;
127 - case PM_QOS_SUM:
128 - type = "Sum";
129 - break;
130 - default:
131 - type = "Unknown";
132 - }
133 -
134 - plist_for_each_entry(req, &c->list, node) {
135 - char *state = "Default";
136 -
137 - if ((req->node).prio != c->default_value) {
138 - active_reqs++;
139 - state = "Active";
140 - }
141 - tot_reqs++;
142 - seq_printf(s, "%d: %d: %s\n", tot_reqs,
143 - (req->node).prio, state);
144 - }
145 -
146 - seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n",
147 - type, pm_qos_get_value(c), active_reqs, tot_reqs);
148 -
149 - out:
150 - spin_unlock_irqrestore(&pm_qos_lock, flags);
151 - return 0;
152 - }
153 -
154 - DEFINE_SHOW_ATTRIBUTE(pm_qos_debug);
155 127
156 128 /**
157 - * pm_qos_update_target - manages the constraints list and calls the notifiers
158 - * if needed
159 - * @c: constraints data struct
160 - * @node: request to add to the list, to update or to remove
161 - * @action: action to take on the constraints list
162 - * @value: value of the request to add or update
129 + * pm_qos_update_target - Update a list of PM QoS constraint requests.
130 + * @c: List of PM QoS requests.
131 + * @node: Target list entry.
132 + * @action: Action to carry out (add, update or remove).
133 + * @value: New request value for the target list entry.
163 134 *
164 - * This function returns 1 if the aggregated constraint value has changed, 0
165 - * otherwise.
135 + * Update the given list of PM QoS constraint requests, @c, by carrying an
136 + * @action involving the @node list entry and @value on it.
137 + *
138 + * The recognized values of @action are PM_QOS_ADD_REQ (store @value in @node
139 + * and add it to the list), PM_QOS_UPDATE_REQ (remove @node from the list, store
140 + * @value in it and add it to the list again), and PM_QOS_REMOVE_REQ (remove
141 + * @node from the list, ignore @value).
142 + *
143 + * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
166 144 */
167 145 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
168 146 enum pm_qos_req_action action, int value)
169 147 {
170 - unsigned long flags;
171 148 int prev_value, curr_value, new_value;
172 - int ret;
149 + unsigned long flags;
173 150
174 151 spin_lock_irqsave(&pm_qos_lock, flags);
152 +
175 153 prev_value = pm_qos_get_value(c);
176 154 if (value == PM_QOS_DEFAULT_VALUE)
177 155 new_value = c->default_value;
···
115 231 break;
116 232 case PM_QOS_UPDATE_REQ:
117 233 /*
118 - * to change the list, we atomically remove, reinit
119 - * with new value and add, then see if the extremal
120 - * changed
234 + * To change the list, atomically remove, reinit with new value
235 + * and add, then see if the aggregate has changed.
121 236 */
122 237 plist_del(node, &c->list);
123 238 /* fall through */
···
135 252 spin_unlock_irqrestore(&pm_qos_lock, flags);
136 253
137 254 trace_pm_qos_update_target(action, prev_value, curr_value);
138 - if (prev_value != curr_value) {
139 - ret = 1;
140 - if (c->notifiers)
141 - blocking_notifier_call_chain(c->notifiers,
142 - (unsigned long)curr_value,
143 - NULL);
144 - } else {
145 - ret = 0;
146 - }
147 - return ret;
255 +
256 + if (prev_value == curr_value)
257 + return 0;
258 +
259 + if (c->notifiers)
260 + blocking_notifier_call_chain(c->notifiers, curr_value, NULL);
261 +
262 + return 1;
148 263 }
149 264
150 265 /**
···
164 283
165 284 /**
166 285 * pm_qos_update_flags - Update a set of PM QoS flags.
167 - * @pqf: Set of flags to update.
286 + * @pqf: Set of PM QoS flags to update.
168 287 * @req: Request to add to the set, to modify, or to remove from the set.
169 288 * @action: Action to take on the set.
170 289 * @val: Value of the request to add or modify.
171 290 *
172 - * Update the given set of PM QoS flags and call notifiers if the aggregate
173 - * value has changed. Returns 1 if the aggregate constraint value has changed,
174 - * 0 otherwise.
291 + * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
175 292 */
176 293 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
177 294 struct pm_qos_flags_request *req,
···
205 326 spin_unlock_irqrestore(&pm_qos_lock, irqflags);
206 327
207 328 trace_pm_qos_update_flags(action, prev_value, curr_value);
329 +
208 330 return prev_value != curr_value;
209 331 }
210 332
333 + #ifdef CONFIG_CPU_IDLE
334 + /* Definitions related to the CPU latency QoS. */
335 +
336 + static struct pm_qos_constraints cpu_latency_constraints = {
337 + .list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
338 + .target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
339 + .default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
340 + .no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
341 + .type = PM_QOS_MIN,
342 + };
343 +
211 344 /**
212 - * pm_qos_request - returns current system wide qos expectation
213 - * @pm_qos_class: identification of which qos value is requested
214 - *
215 - * This function returns the current target value.
345 + * cpu_latency_qos_limit - Return current system-wide CPU latency QoS limit.
216 346 */
217 - int pm_qos_request(int pm_qos_class)
347 + s32 cpu_latency_qos_limit(void)
218 348 {
219 - return pm_qos_read_value(pm_qos_array[pm_qos_class]->constraints);
220 - }
221 - EXPORT_SYMBOL_GPL(pm_qos_request);
222 -
223 - int pm_qos_request_active(struct pm_qos_request *req)
224 - {
225 - return req->pm_qos_class != 0;
226 - }
227 - EXPORT_SYMBOL_GPL(pm_qos_request_active);
228 -
229 - static void __pm_qos_update_request(struct pm_qos_request *req,
230 - s32 new_value)
231 - {
232 - trace_pm_qos_update_request(req->pm_qos_class, new_value);
233 -
234 - if (new_value != req->node.prio)
235 - pm_qos_update_target(
236 - pm_qos_array[req->pm_qos_class]->constraints,
237 - &req->node, PM_QOS_UPDATE_REQ, new_value);
349 + return pm_qos_read_value(&cpu_latency_constraints);
238 350 }
239 351
240 352 /**
241 - * pm_qos_work_fn - the timeout handler of pm_qos_update_request_timeout
242 - * @work: work struct for the delayed work (timeout)
353 + * cpu_latency_qos_request_active - Check the given PM QoS request.
354 + * @req: PM QoS request to check.
243 355 *
244 - * This cancels the timeout request by falling back to the default at timeout.
356 + * Return: 'true' if @req has been added to the CPU latency QoS list, 'false'
357 + * otherwise.
245 358 */
246 - static void pm_qos_work_fn(struct work_struct *work)
359 + bool cpu_latency_qos_request_active(struct pm_qos_request *req)
247 360 {
248 - struct pm_qos_request *req = container_of(to_delayed_work(work),
249 - struct pm_qos_request,
250 - work);
361 + return req->qos == &cpu_latency_constraints;
362 + }
363 + EXPORT_SYMBOL_GPL(cpu_latency_qos_request_active);
251 364
252 - __pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
365 + static void cpu_latency_qos_apply(struct pm_qos_request *req,
366 + enum pm_qos_req_action action, s32 value)
367 + {
368 + int ret = pm_qos_update_target(req->qos, &req->node, action, value);
369 + if (ret > 0)
370 + wake_up_all_idle_cpus();
253 371 }
254 372
255 373 /**
256 - * pm_qos_add_request - inserts new qos request into the list
257 - * @req: pointer to a preallocated handle
258 - * @pm_qos_class: identifies which list of qos request to use
259 - * @value: defines the qos request
374 + * cpu_latency_qos_add_request - Add new CPU latency QoS request.
375 + * @req: Pointer to a preallocated handle.
376 + * @value: Requested constraint value.
260 377 *
261 - * This function inserts a new entry in the pm_qos_class list of requested qos
262 - * performance characteristics. It recomputes the aggregate QoS expectations
263 - * for the pm_qos_class of parameters and initializes the pm_qos_request
264 - * handle. Caller needs to save this handle for later use in updates and
265 - * removal.
378 + * Use @value to initialize the request handle pointed to by @req, insert it as
379 + * a new entry to the CPU latency QoS list and recompute the effective QoS
380 + * constraint for that list.
381 + *
382 + * Callers need to save the handle for later use in updates and removal of the
383 + * QoS request represented by it.
266 384 */
267 -
268 - void pm_qos_add_request(struct pm_qos_request *req,
269 - int pm_qos_class, s32 value)
270 - {
271 - if (!req) /*guard against callers passing in null */
272 - return;
273 -
274 - if (pm_qos_request_active(req)) {
275 - WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
276 - return;
277 - }
278 - req->pm_qos_class = pm_qos_class;
279 - INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
280 - trace_pm_qos_add_request(pm_qos_class, value);
281 - pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
282 - &req->node, PM_QOS_ADD_REQ, value);
283 - }
284 - EXPORT_SYMBOL_GPL(pm_qos_add_request);
285 -
286 - /**
287 - * pm_qos_update_request - modifies an existing qos request
288 - * @req : handle to list element holding a pm_qos request to use
289 - * @value: defines the qos request
290 - *
291 - * Updates an existing qos request for the pm_qos_class of parameters along
292 - * with updating the target pm_qos_class value.
293 - *
294 - * Attempts are made to make this code callable on hot code paths.
295 - */
296 - void pm_qos_update_request(struct pm_qos_request *req,
297 - s32 new_value)
298 - {
299 - if (!req) /*guard against callers passing in null */
300 - return;
301 -
302 - if (!pm_qos_request_active(req)) {
303 - WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
304 - return;
305 - }
306 -
307 - cancel_delayed_work_sync(&req->work);
308 - __pm_qos_update_request(req, new_value);
309 - }
310 - EXPORT_SYMBOL_GPL(pm_qos_update_request);
311 -
312 - /**
313 - * pm_qos_update_request_timeout - modifies an existing qos request temporarily.
314 - * @req : handle to list element holding a pm_qos request to use
315 - * @new_value: defines the temporal qos request
316 - * @timeout_us: the effective duration of this qos request in usecs.
317 - *
318 - * After timeout_us, this qos request is cancelled automatically.
319 - */
320 - void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
321 - unsigned long timeout_us)
385 + void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
322 386 {
323 387 if (!req)
324 388 return;
325 - if (WARN(!pm_qos_request_active(req),
326 - "%s called for unknown object.", __func__))
327 - return;
328 389
329 - cancel_delayed_work_sync(&req->work);
330 -
331 - trace_pm_qos_update_request_timeout(req->pm_qos_class,
332 - new_value, timeout_us);
333 - if (new_value != req->node.prio)
334 - pm_qos_update_target(
335 - pm_qos_array[req->pm_qos_class]->constraints,
336 - &req->node, PM_QOS_UPDATE_REQ, new_value);
337 -
338 - schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
339 - }
340 -
341 - /**
342 - * pm_qos_remove_request - modifies an existing qos request
343 - * @req: handle to request list element
344 - *
345 - * Will remove pm qos request from the list of constraints and
346 - * recompute the current target value for the pm_qos_class. Call this
347 - * on slow code paths.
348 - */
349 - void pm_qos_remove_request(struct pm_qos_request *req)
350 - {
351 - if (!req) /*guard against callers passing in null */
352 - return;
353 - /* silent return to keep pcm code cleaner */
354 -
355 - if (!pm_qos_request_active(req)) {
356 - WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
390 + if (cpu_latency_qos_request_active(req)) {
391 + WARN(1, KERN_ERR "%s called for already added request\n", __func__);
357 392 return;
358 393 }
359 394
360 - cancel_delayed_work_sync(&req->work);
395 + trace_pm_qos_add_request(value);
361 396
362 - trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
363 - pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
364 - &req->node, PM_QOS_REMOVE_REQ,
365 - PM_QOS_DEFAULT_VALUE);
397 + req->qos = &cpu_latency_constraints;
398 + cpu_latency_qos_apply(req, PM_QOS_ADD_REQ, value);
399 + }
400 + EXPORT_SYMBOL_GPL(cpu_latency_qos_add_request);
401 +
402 + /**
403 + * cpu_latency_qos_update_request - Modify existing CPU latency QoS request.
404 + * @req : QoS request to update.
405 + * @new_value: New requested constraint value.
406 + *
407 + * Use @new_value to update the QoS request represented by @req in the CPU
408 + * latency QoS list along with updating the effective constraint value for that
409 + * list.
410 + */
411 + void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value)
412 + {
413 + if (!req)
414 + return;
415 +
416 + if (!cpu_latency_qos_request_active(req)) {
417 + WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
418 + return;
419 + }
420 +
421 + trace_pm_qos_update_request(new_value);
422 +
423 + if (new_value == req->node.prio)
424 + return;
425 +
426 + cpu_latency_qos_apply(req, PM_QOS_UPDATE_REQ, new_value);
427 + }
428 + EXPORT_SYMBOL_GPL(cpu_latency_qos_update_request);
429 +
430 + /**
431 + * cpu_latency_qos_remove_request - Remove existing CPU latency QoS request.
432 + * @req: QoS request to remove.
433 + *
434 + * Remove the CPU latency QoS request represented by @req from the CPU latency
435 + * QoS list along with updating the effective constraint value for that list.
436 + */
437 + void cpu_latency_qos_remove_request(struct pm_qos_request *req)
438 + {
439 + if (!req)
440 + return;
441 +
442 + if (!cpu_latency_qos_request_active(req)) {
443 + WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
444 + return;
445 + }
446 +
447 + trace_pm_qos_remove_request(PM_QOS_DEFAULT_VALUE);
448 +
449 + cpu_latency_qos_apply(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
366 450 memset(req, 0, sizeof(*req));
367 451 }
368 - EXPORT_SYMBOL_GPL(pm_qos_remove_request);
452 + EXPORT_SYMBOL_GPL(cpu_latency_qos_remove_request);
369 453
370 - /**
371 - * pm_qos_add_notifier - sets notification entry for changes to target value
372 - * @pm_qos_class: identifies which qos target changes should be notified.
373 - * @notifier: notifier block managed by caller.
374 - *
375 - * will register the notifier into a notification chain that gets called
376 - * upon changes to the pm_qos_class target value.
377 - */
378 - int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
379 - {
380 - int retval;
454 + /* User space interface to the CPU latency QoS via misc device. */
381 455
382 - retval = blocking_notifier_chain_register(
383 - pm_qos_array[pm_qos_class]->constraints->notifiers,
384 - notifier);
385 -
386 - return retval;
387 - }
388 - EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
389 -
390 - /**
391 - * pm_qos_remove_notifier - deletes notification entry from chain.
392 - * @pm_qos_class: identifies which qos target changes are notified.
393 - * @notifier: notifier block to be removed.
394 - *
395 - * will remove the notifier from the notification chain that gets called
396 - * upon changes to the pm_qos_class target value.
397 - */
398 - int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
399 - {
400 - int retval;
401 -
402 - retval = blocking_notifier_chain_unregister(
403 - pm_qos_array[pm_qos_class]->constraints->notifiers,
404 - notifier);
405 -
406 - return retval;
407 - }
408 - EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
409 -
410 - /* User space interface to PM QoS classes via misc devices */
411 - static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
412 - {
413 - qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
414 - qos->pm_qos_power_miscdev.name = qos->name;
415 - qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
416 -
417 - debugfs_create_file(qos->name, S_IRUGO, d, (void *)qos,
418 - &pm_qos_debug_fops);
419 -
420 - return misc_register(&qos->pm_qos_power_miscdev);
421 - }
422 -
423 - static int find_pm_qos_object_by_minor(int minor)
424 - {
425 - int pm_qos_class;
426 -
427 - for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
428 - pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
429 - if (minor ==
430 - pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
431 - return pm_qos_class;
432 - }
433 - return -1;
434 - }
435 -
436 - static int pm_qos_power_open(struct inode *inode, struct file *filp)
437 - {
438 - long pm_qos_class;
439 -
440 - pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
441 - if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
442 - struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
443 - if (!req)
444 - return -ENOMEM;
445 -
446 - pm_qos_add_request(req, pm_qos_class, PM_QOS_DEFAULT_VALUE);
447 - filp->private_data = req;
448 -
449 - return 0;
450 - }
451 - return -EPERM;
452 - }
453 -
454 - static int pm_qos_power_release(struct inode *inode, struct file *filp)
456 + static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
455 457 {
456 458 struct pm_qos_request *req;
457 459
458 - req = filp->private_data;
459 - pm_qos_remove_request(req);
460 + req = kzalloc(sizeof(*req), GFP_KERNEL);
461 + if (!req)
462 + return -ENOMEM;
463 +
464 + cpu_latency_qos_add_request(req, PM_QOS_DEFAULT_VALUE);
465 + filp->private_data = req;
466 +
467 + return 0;
468 + }
469 +
470 + static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
471 + {
472 + struct pm_qos_request *req = filp->private_data;
473 +
474 + filp->private_data = NULL;
475 +
476 + cpu_latency_qos_remove_request(req);
460 477 kfree(req);
461 478
462 479 return 0;
463 480 }
464 481
465 -
466 - static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
467 - size_t count, loff_t *f_pos)
482 + static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
483 + size_t count, loff_t *f_pos)
468 484 {
469 - s32 value;
470 - unsigned long flags;
471 485 struct pm_qos_request *req = filp->private_data;
486 + unsigned long flags;
487 + s32 value;
472 488
473 - if (!req)
474 - return -EINVAL;
475 - if (!pm_qos_request_active(req))
489 + if (!req || !cpu_latency_qos_request_active(req))
476 490 return -EINVAL;
477 491
478 492 spin_lock_irqsave(&pm_qos_lock, flags);
479 - value = pm_qos_get_value(pm_qos_array[req->pm_qos_class]->constraints);
493 + value = pm_qos_get_value(&cpu_latency_constraints);
480 494 spin_unlock_irqrestore(&pm_qos_lock, flags);
481 495
482 496 return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
483 497 }
484 498
485 - static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
486 - size_t count, loff_t *f_pos)
499 + static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
500 + size_t count, loff_t *f_pos)
487 501 {
488 502 s32 value;
489 - struct pm_qos_request *req;
490 503
491 504 if (count == sizeof(s32)) {
492 505 if (copy_from_user(&value, buf, sizeof(s32)))
···
391 620 return ret;
392 621 }
393 622
394 - req = filp->private_data;
395 - pm_qos_update_request(req, value);
623 + cpu_latency_qos_update_request(filp->private_data, value);
396 625 return count;
397 626 }
398 627
628 + static const struct file_operations cpu_latency_qos_fops = {
629 + .write = cpu_latency_qos_write,
630 + .read = cpu_latency_qos_read,
631 + .open = cpu_latency_qos_open,
632 + .release = cpu_latency_qos_release,
633 + .llseek = noop_llseek,
634 + };
400 635
401 - static int __init pm_qos_power_init(void)
636 + static struct miscdevice cpu_latency_qos_miscdev = {
637 + .minor = MISC_DYNAMIC_MINOR,
638 + .name = "cpu_dma_latency",
639 + .fops = &cpu_latency_qos_fops,
640 + };
641 +
642 + static int __init cpu_latency_qos_init(void)
402 643 {
403 - int ret = 0;
404 - int i;
405 - struct dentry *d;
644 + int ret;
406 645
407 - BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
408 -
409 - d = debugfs_create_dir("pm_qos", NULL);
410 -
411 - for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
412 - ret = register_pm_qos_misc(pm_qos_array[i], d);
413 - if (ret < 0) {
414 - pr_err("%s: %s setup failed\n",
415 - __func__, pm_qos_array[i]->name);
416 - return ret;
417 - }
418 - }
646 + ret = misc_register(&cpu_latency_qos_miscdev);
647 + if (ret < 0)
648 + pr_err("%s: %s setup failed\n", __func__,
649 + cpu_latency_qos_miscdev.name);
419 650
420 651 return ret;
421 652 }
422 -
423 - late_initcall(pm_qos_power_init);
653 + late_initcall(cpu_latency_qos_init);
654 + #endif /* CONFIG_CPU_IDLE */
424 655
425 656 /* Definitions related to the frequency QoS below. */
426 657
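The qos.c rework keeps `pm_qos_update_target()` as the generic aggregation engine; the CPU latency list uses PM_QOS_MIN, so the smallest request wins, the `no_constraint_value` is reported when the list is empty, and callers are notified (and idle CPUs woken) only when the aggregate actually changes. A toy user-space sketch of that rule follows, using a plain array in place of the kernel's plist; the `toy_*` names are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef int32_t s32;

#define PM_QOS_DEFAULT_VALUE (-1)

/* Toy constraint list: a fixed-size array instead of the kernel's plist. */
struct toy_constraints {
	s32 req[8];
	size_t nreq;
	s32 target;		/* effective (aggregate) value */
	s32 no_constraint;	/* reported when the list is empty */
	s32 default_value;	/* substituted for PM_QOS_DEFAULT_VALUE */
};

/* PM_QOS_MIN aggregation: the smallest request wins. */
static s32 toy_get_value(const struct toy_constraints *c)
{
	s32 min;
	size_t i;

	if (c->nreq == 0)
		return c->no_constraint;

	min = c->req[0];
	for (i = 1; i < c->nreq; i++)
		if (c->req[i] < min)
			min = c->req[i];
	return min;
}

/* Add a request and, like pm_qos_update_target(), return 1 iff the
 * aggregate value changed (that is when notifiers would run). */
static int toy_add_request(struct toy_constraints *c, s32 value)
{
	s32 prev = toy_get_value(c);

	c->req[c->nreq++] = value == PM_QOS_DEFAULT_VALUE ?
		c->default_value : value;
	c->target = toy_get_value(c);
	return c->target != prev;
}
```

With an empty list the target is the no-constraint value (2000000 us for the CPU latency list, per PM_QOS_CPU_LATENCY_DEFAULT_VALUE); adding a 100 us request tightens the aggregate, while a later 200 us request leaves it untouched and triggers no notification.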
+1 -15
kernel/power/user.c
···
409 409 switch (cmd) {
410 410 case SNAPSHOT_GET_IMAGE_SIZE:
411 411 case SNAPSHOT_AVAIL_SWAP_SIZE:
412 - case SNAPSHOT_ALLOC_SWAP_PAGE: {
413 - compat_loff_t __user *uoffset = compat_ptr(arg);
414 - loff_t offset;
415 - mm_segment_t old_fs;
416 - int err;
417 -
418 - old_fs = get_fs();
419 - set_fs(KERNEL_DS);
420 - err = snapshot_ioctl(file, cmd, (unsigned long) &offset);
421 - set_fs(old_fs);
422 - if (!err && put_user(offset, uoffset))
423 - err = -EFAULT;
424 - return err;
425 - }
426 -
412 + case SNAPSHOT_ALLOC_SWAP_PAGE:
427 413 case SNAPSHOT_CREATE_IMAGE:
428 414 return snapshot_ioctl(file, cmd,
429 415 (unsigned long) compat_ptr(arg));
+7 -7
sound/core/pcm_native.c
···
748 748 snd_pcm_timer_resolution_change(substream);
749 749 snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
750 750
751 - if (pm_qos_request_active(&substream->latency_pm_qos_req))
752 - pm_qos_remove_request(&substream->latency_pm_qos_req);
751 + if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
752 + cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
753 753 if ((usecs = period_to_usecs(runtime)) >= 0)
754 - pm_qos_add_request(&substream->latency_pm_qos_req,
755 - PM_QOS_CPU_DMA_LATENCY, usecs);
754 + cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
755 + usecs);
756 756 return 0;
757 757 _error:
758 758 /* hardware might be unusable from this time,
···
821 821 return -EBADFD;
822 822 result = do_hw_free(substream);
823 823 snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
824 - pm_qos_remove_request(&substream->latency_pm_qos_req);
824 + cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
825 825 return result;
826 826 }
···
2599 2599 substream->ops->close(substream);
2600 2600 substream->hw_opened = 0;
2601 2601 }
2602 - if (pm_qos_request_active(&substream->latency_pm_qos_req))
2603 - pm_qos_remove_request(&substream->latency_pm_qos_req);
2602 - if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
2603 + cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
2604 2604 if (substream->pcm_release) {
2605 2605 substream->pcm_release(substream);
2606 2606 substream->pcm_release = NULL;
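In-kernel users like the PCM code above hold a `struct pm_qos_request` directly; user space gets the same service through the `/dev/cpu_dma_latency` misc device kept by the rework in kernel/power/qos.c: write a 32-bit microsecond value and keep the file descriptor open for as long as the constraint should hold (closing it drops the request). A sketch of such a client follows; the device path is passed in as a parameter so the helper can also be exercised against an ordinary file.

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Open the QoS device (normally "/dev/cpu_dma_latency") and submit a
 * latency request in microseconds. The request stays in force until the
 * returned fd is closed, matching cpu_latency_qos_open()/_release().
 * Returns the fd on success, -1 on error. */
static int request_cpu_latency(const char *path, int32_t usec)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	/* The device expects exactly sizeof(s32) bytes per request. */
	if (write(fd, &usec, sizeof(usec)) != sizeof(usec)) {
		close(fd);
		return -1;
	}
	return fd;	/* keep open for as long as the constraint is needed */
}
```

Typical use: `int fd = request_cpu_latency("/dev/cpu_dma_latency", 20);` before a latency-critical phase, then `close(fd);` afterwards to let deeper C-states back in.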
sound/soc/intel/atom/sst/sst.c (+2 -3)

···
 		ret = -ENOMEM;
 		goto do_free_mem;
 	}
-	pm_qos_add_request(ctx->qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);

 	dev_dbg(ctx->dev, "Requesting FW %s now...\n", ctx->firmware_name);
 	ret = request_firmware_nowait(THIS_MODULE, true, ctx->firmware_name,
···
 	sysfs_remove_group(&ctx->dev->kobj, &sst_fw_version_attr_group);
 	flush_scheduled_work();
 	destroy_workqueue(ctx->post_msg_wq);
-	pm_qos_remove_request(ctx->qos);
+	cpu_latency_qos_remove_request(ctx->qos);
 	kfree(ctx->fw_sg_list.src);
 	kfree(ctx->fw_sg_list.dst);
 	ctx->fw_sg_list.list_len = 0;
sound/soc/intel/atom/sst/sst_loader.c (+2 -2)

···
 		return -ENOMEM;

 	/* Prevent C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, 0);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, 0);

 	sst_drv_ctx->sst_state = SST_FW_LOADING;

···

 restore:
 	/* Re-enable Deeper C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
 	sst_free_block(sst_drv_ctx, block);
 	dev_dbg(sst_drv_ctx->dev, "fw load successful!!!\n");
sound/soc/ti/omap-dmic.c (+4 -3)

···

 	mutex_lock(&dmic->mutex);

-	pm_qos_remove_request(&dmic->pm_qos_req);
+	cpu_latency_qos_remove_request(&dmic->pm_qos_req);

 	if (!dai->active)
 		dmic->active = 0;
···
 	struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
 	u32 ctrl;

-	if (pm_qos_request_active(&dmic->pm_qos_req))
-		pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+	if (cpu_latency_qos_request_active(&dmic->pm_qos_req))
+		cpu_latency_qos_update_request(&dmic->pm_qos_req,
+					       dmic->latency);

 	/* Configure uplink threshold */
 	omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
sound/soc/ti/omap-mcbsp.c (+8 -8)

···
 	int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;

 	if (mcbsp->latency[stream2])
-		pm_qos_update_request(&mcbsp->pm_qos_req,
-				      mcbsp->latency[stream2]);
+		cpu_latency_qos_update_request(&mcbsp->pm_qos_req,
+					       mcbsp->latency[stream2]);
 	else if (mcbsp->latency[stream1])
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);

 	mcbsp->latency[stream1] = 0;

···
 	if (!latency || mcbsp->latency[stream1] < latency)
 		latency = mcbsp->latency[stream1];

-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);

 	return 0;
 }
···
 	if (mcbsp->pdata->ops && mcbsp->pdata->ops->free)
 		mcbsp->pdata->ops->free(mcbsp->id);

-	if (pm_qos_request_active(&mcbsp->pm_qos_req))
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcbsp->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);

 	if (mcbsp->pdata->buffer_size)
 		sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
sound/soc/ti/omap-mcpdm.c (+8 -8)

···
 	}

 	if (mcpdm->latency[stream2])
-		pm_qos_update_request(&mcpdm->pm_qos_req,
-				      mcpdm->latency[stream2]);
+		cpu_latency_qos_update_request(&mcpdm->pm_qos_req,
+					       mcpdm->latency[stream2]);
 	else if (mcpdm->latency[stream1])
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);

 	mcpdm->latency[stream1] = 0;

···
 	if (!latency || mcpdm->latency[stream1] < latency)
 		latency = mcpdm->latency[stream1];

-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);

 	if (!omap_mcpdm_active(mcpdm)) {
 		omap_mcpdm_start(mcpdm);
···
 	free_irq(mcpdm->irq, (void *)mcpdm);
 	pm_runtime_disable(mcpdm->dev);

-	if (pm_qos_request_active(&mcpdm->pm_qos_req))
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcpdm->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);

 	return 0;
 }
tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py (-1)

···
     output_png = 'all_cpu_durations.png'
     g_plot = common_all_gnuplot_settings(output_png)
     # autoscale this one, no set y range
-    g_plot('set ytics 0, 500')
     g_plot('set ylabel "Timer Duration (MilliSeconds)"')
     g_plot('set title "{} : cpu durations : {:%F %H:%M}"'.format(testname, datetime.now()))
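The sound hunks above all apply the same mechanical conversion from the old PM QoS calls to the new CPU latency QoS helpers introduced by this series. A sketch of the mapping, using a hypothetical `struct pm_qos_request req` and latency value `usecs` for illustration:

```diff
-	pm_qos_add_request(&req, PM_QOS_CPU_DMA_LATENCY, usecs);
+	cpu_latency_qos_add_request(&req, usecs);

-	pm_qos_update_request(&req, usecs);
+	cpu_latency_qos_update_request(&req, usecs);

-	pm_qos_request_active(&req)
+	cpu_latency_qos_request_active(&req)

-	pm_qos_remove_request(&req);
+	cpu_latency_qos_remove_request(&req);
```

The new helpers drop the `PM_QOS_CPU_DMA_LATENCY` class argument because they target the CPU latency QoS exclusively; everything else about each call site is unchanged.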