Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"There are no real big-ticket items here this time.

The most noticeable change is probably the relocation of the OPP
(Operating Performance Points) framework to its own directory under
drivers/ as it has grown big enough for that. Also Viresh is now going
to maintain it and send pull requests for it to me, so you will see
this change in the git history going forward (but still not right
now).

Another noticeable set of changes modifies the PM core, the PCI
subsystem and the ACPI PM domain to allow for more integration
between system-wide suspend/resume and runtime PM. For now it's
just a way to avoid resuming devices from runtime suspend
unnecessarily during system suspend (if the driver sets a flag to
indicate its readiness for that), and an analogous mechanism to
allow devices to stay suspended after system resume is in the
works.

In addition to that, we have some changes related to supporting
frequency-invariant CPU utilization metrics in the scheduler and in
the schedutil cpufreq governor on ARM and changes to add support for
device performance states to the generic power domains (genpd)
framework.

The rest is mostly fixes and cleanups of various sorts.

Specifics:

- Relocate the OPP (Operating Performance Points) framework to its
own directory under drivers/ and add support for power domain
performance states to it (Viresh Kumar).

- Modify the PM core, the PCI bus type and the ACPI PM domain to
support power management driver flags allowing device drivers to
specify their capabilities and preferences regarding the handling
of devices with enabled runtime PM during system suspend/resume and
clean up that code somewhat (Rafael Wysocki, Ulf Hansson).
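
The new flags are set once at probe time. A minimal sketch of how a
driver might opt in (the driver and its probe routine here are
hypothetical, for illustration only; dev_pm_set_driver_flags() and the
DPM_FLAG_* constants are the ones added by this series):

```c
#include <linux/pci.h>
#include <linux/pm.h>

/* Hypothetical PCI driver probe routine, for illustration only. */
static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/*
	 * Let the PM core leave this device in runtime suspend during
	 * system suspend (SMART_SUSPEND) and make the PCI bus type honor
	 * the driver's ->prepare() return value (SMART_PREPARE).
	 */
	dev_pm_set_driver_flags(&pdev->dev,
				DPM_FLAG_SMART_SUSPEND | DPM_FLAG_SMART_PREPARE);

	return 0;
}
```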

- Add frequency-invariant accounting support to the task scheduler on
ARM and ARM64 (Dietmar Eggemann).

- Fix the PM QoS device resume latency framework to prevent "no
restriction" requests from overriding requests with specific
requirements and drop the confusing PM_QOS_FLAG_REMOTE_WAKEUP
device PM QoS flag (Rafael Wysocki).

- Drop legacy class suspend/resume operations from the PM core and
drop legacy bus type suspend and resume callbacks from ARM/locomo
(Rafael Wysocki).

- Add min/max frequency support to devfreq and clean it up somewhat
(Chanwoo Choi).

- Rework wakeup support in the generic power domains (genpd)
framework and update some of its users accordingly (Geert
Uytterhoeven).
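
As the shmobile/rockchip/mediatek updates in the shortlog below show,
the per-device gpd_dev_ops.active_wakeup() callback is replaced by a
domain-wide flag. A sketch for a hypothetical genpd provider:

```c
#include <linux/pm_domain.h>

/* Hypothetical genpd provider setup, for illustration only. */
static int foo_pd_setup(struct generic_pm_domain *genpd)
{
	/* Keep devices in this domain powered while they are wakeup sources. */
	genpd->flags |= GENPD_FLAG_ACTIVE_WAKEUP;

	return pm_genpd_init(genpd, NULL, false);
}
```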

- Convert timers in the PM core to use timer_setup() (Kees Cook).
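
The timer_setup() conversion drops the old (function, unsigned long
data) pair in favor of a callback that receives the timer itself; a
minimal sketch with a hypothetical driver structure:

```c
#include <linux/timer.h>

struct foo {				/* hypothetical driver state */
	struct timer_list timer;
	bool expired;
};

/* New-style callback: gets the timer, not an unsigned long cookie. */
static void foo_timer_fn(struct timer_list *t)
{
	struct foo *foo = from_timer(foo, t, timer);

	foo->expired = true;
}

static void foo_init(struct foo *foo)
{
	/* Replaces setup_timer(&foo->timer, ..., (unsigned long)foo); */
	timer_setup(&foo->timer, foo_timer_fn, 0);
}
```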

- Add support for exposing the SLP_S0 (Low Power S0 Idle) residency
counter based on the LPIT ACPI table on Intel platforms (Srinivas
Pandruvada).

- Add per-CPU PM QoS resume latency support to the ladder cpuidle
governor (Ramesh Thomas).

- Fix a deadlock between the wakeup notify handler and the notifier
removal in the ACPI core (Ville Syrjälä).

- Fix a cpufreq schedutil governor issue causing it to use stale
cached frequency values sometimes (Viresh Kumar).

- Fix an issue in the system suspend core support code causing wakeup
event detection to fail in some cases (Rajat Jain).

- Fix the generic power domains (genpd) framework to prevent the PM
core from using the direct-complete optimization with it as that is
guaranteed to fail (Ulf Hansson).

- Fix a minor issue in the cpuidle core and clean it up a bit (Gaurav
Jindal, Nicholas Piggin).

- Fix and clean up the intel_idle and ARM cpuidle drivers (Jason
Baron, Len Brown, Leo Yan).

- Fix a couple of minor issues in the OPP framework and clean it up
(Arvind Yadav, Fabio Estevam, Sudeep Holla, Tobias Jordan).

- Fix and clean up some cpufreq drivers and fix a minor issue in the
cpufreq statistics code (Arvind Yadav, Bhumika Goyal, Fabio
Estevam, Gautham Shenoy, Gustavo Silva, Marek Szyprowski, Masahiro
Yamada, Robert Jarzmik, Zumeng Chen).

- Fix minor issues in the system suspend and hibernation core, in
power management documentation and in the AVS (Adaptive Voltage
Scaling) framework (Helge Deller, Himanshu Jha, Joe Perches, Rafael
Wysocki).

- Fix some issues in the cpupower utility and document that Shuah
Khan is going to maintain it going forward (Prarit Bhargava, Shuah
Khan)"

* tag 'pm-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (88 commits)
tools/power/cpupower: add libcpupower.so.0.0.1 to .gitignore
tools/power/cpupower: Add 64 bit library detection
intel_idle: Graceful probe failure when MWAIT is disabled
cpufreq: schedutil: Reset cached_raw_freq when not in sync with next_freq
freezer: Fix typo in freezable_schedule_timeout() comment
PM / s2idle: Clear the events_check_enabled flag
cpufreq: stats: Handle the case when trans_table goes beyond PAGE_SIZE
cpufreq: arm_big_little: make cpufreq_arm_bL_ops structures const
cpufreq: arm_big_little: make function arguments and structure pointer const
cpuidle: Avoid assignment in if () argument
cpuidle: Clean up cpuidle_enable_device() error handling a bit
ACPI / PM: Fix acpi_pm_notifier_lock vs flush_workqueue() deadlock
PM / Domains: Fix genpd to deal with drivers returning 1 from ->prepare()
cpuidle: ladder: Add per CPU PM QoS resume latency support
PM / QoS: Fix device resume latency framework
PM / domains: Rework governor code to be more consistent
PM / Domains: Remove gpd_dev_ops.active_wakeup() callback
soc: rockchip: power-domain: Use GENPD_FLAG_ACTIVE_WAKEUP
soc: mediatek: Use GENPD_FLAG_ACTIVE_WAKEUP
ARM: shmobile: pm-rmobile: Use GENPD_FLAG_ACTIVE_WAKEUP
...

+1765 -1070
+3 -17
Documentation/ABI/testing/sysfs-devices-power
··· 211 211 device, after it has been suspended at run time, from a resume 212 212 request to the moment the device will be ready to process I/O, 213 213 in microseconds. If it is equal to 0, however, this means that 214 - the PM QoS resume latency may be arbitrary. 214 + the PM QoS resume latency may be arbitrary and the special value 215 + "n/a" means that user space cannot accept any resume latency at 216 + all for the given device. 215 217 216 218 Not all drivers support this attribute. If it isn't supported, 217 219 it is not present. ··· 254 252 is used for manipulating the PM QoS "no power off" flag. If 255 253 set, this flag indicates to the kernel that power should not 256 254 be removed entirely from the device. 257 - 258 - Not all drivers support this attribute. If it isn't supported, 259 - it is not present. 260 - 261 - This attribute has no effect on system-wide suspend/resume and 262 - hibernation. 263 - 264 - What: /sys/devices/.../power/pm_qos_remote_wakeup 265 - Date: September 2012 266 - Contact: Rafael J. Wysocki <rjw@rjwysocki.net> 267 - Description: 268 - The /sys/devices/.../power/pm_qos_remote_wakeup attribute 269 - is used for manipulating the PM QoS "remote wakeup required" 270 - flag. If set, this flag indicates to the kernel that the 271 - device is a source of user events that have to be signaled from 272 - its low-power states. 273 255 274 256 Not all drivers support this attribute. If it isn't supported, 275 257 it is not present.
+25
Documentation/acpi/lpit.txt
··· 1 + To enumerate platform Low Power Idle states, Intel platforms are using 2 + “Low Power Idle Table” (LPIT). More details about this table can be 3 + downloaded from: 4 + http://www.uefi.org/sites/default/files/resources/Intel_ACPI_Low_Power_S0_Idle.pdf 5 + 6 + Residencies for each low power state can be read via FFH 7 + (Function fixed hardware) or a memory mapped interface. 8 + 9 + On platforms supporting S0ix sleep states, there can be two types of 10 + residencies: 11 + - CPU PKG C10 (Read via FFH interface) 12 + - Platform Controller Hub (PCH) SLP_S0 (Read via memory mapped interface) 13 + 14 + The following attributes are added dynamically to the cpuidle 15 + sysfs attribute group: 16 + /sys/devices/system/cpu/cpuidle/low_power_idle_cpu_residency_us 17 + /sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us 18 + 19 + The "low_power_idle_cpu_residency_us" attribute shows time spent 20 + by the CPU package in PKG C10 21 + 22 + The "low_power_idle_system_residency_us" attribute shows SLP_S0 23 + residency, or system time spent with the SLP_S0# signal asserted. 24 + This is the lowest possible system power state, achieved only when CPU is in 25 + PKG C10 and all functional blocks in PCH are in a low power state.
+3
Documentation/cpu-freq/cpufreq-stats.txt
··· 90 90 Freq_j is in descending order with increasing columns. The output here also 91 91 contains the actual freq values for each row and column for better readability. 92 92 93 + If the transition table is bigger than PAGE_SIZE, reading this will 94 + return an -EFBIG error. 95 + 93 96 -------------------------------------------------------------------------------- 94 97 <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table 95 98 From : To
+59 -2
Documentation/driver-api/pm/devices.rst
··· 274 274 executing callbacks for every device before the next phase begins. Not all 275 275 buses or classes support all these callbacks and not all drivers use all the 276 276 callbacks. The various phases always run after tasks have been frozen and 277 - before they are unfrozen. Furthermore, the ``*_noirq phases`` run at a time 277 + before they are unfrozen. Furthermore, the ``*_noirq`` phases run at a time 278 278 when IRQ handlers have been disabled (except for those marked with the 279 279 IRQF_NO_SUSPEND flag). 280 280 ··· 328 328 After the ``->prepare`` callback method returns, no new children may be 329 329 registered below the device. The method may also prepare the device or 330 330 driver in some way for the upcoming system power transition, but it 331 - should not put the device into a low-power state. 331 + should not put the device into a low-power state. Moreover, if the 332 + device supports runtime power management, the ``->prepare`` callback 333 + method must not update its state in case it is necessary to resume it 334 + from runtime suspend later on. 332 335 333 336 For devices supporting runtime power management, the return value of the 334 337 prepare callback can be used to indicate to the PM core that it may ··· 354 351 is because all such devices are initially set to runtime-suspended with 355 352 runtime PM disabled. 356 353 354 + This feature also can be controlled by device drivers by using the 355 + ``DPM_FLAG_NEVER_SKIP`` and ``DPM_FLAG_SMART_PREPARE`` driver power 356 + management flags. [Typically, they are set at the time the driver is 357 + probed against the device in question by passing them to the 358 + :c:func:`dev_pm_set_driver_flags` helper function.] If the first of 359 + these flags is set, the PM core will not apply the direct-complete 360 + procedure described above to the given device and, consequenty, to any 361 + of its ancestors. 
The second flag, when set, informs the middle layer 362 + code (bus types, device types, PM domains, classes) that it should take 363 + the return value of the ``->prepare`` callback provided by the driver 364 + into account and it may only return a positive value from its own 365 + ``->prepare`` callback if the driver's one also has returned a positive 366 + value. 367 + 357 368 2. The ``->suspend`` methods should quiesce the device to stop it from 358 369 performing I/O. They also may save the device registers and put it into 359 370 the appropriate low-power state, depending on the bus type the device is 360 371 on, and they may enable wakeup events. 372 + 373 + However, for devices supporting runtime power management, the 374 + ``->suspend`` methods provided by subsystems (bus types and PM domains 375 + in particular) must follow an additional rule regarding what can be done 376 + to the devices before their drivers' ``->suspend`` methods are called. 377 + Namely, they can only resume the devices from runtime suspend by 378 + calling :c:func:`pm_runtime_resume` for them, if that is necessary, and 379 + they must not update the state of the devices in any other way at that 380 + time (in case the drivers need to resume the devices from runtime 381 + suspend in their ``->suspend`` methods). 361 382 362 383 3. For a number of devices it is convenient to split suspend into the 363 384 "quiesce device" and "save device state" phases, in which cases ··· 755 728 state temporarily, for example so that its system wakeup capability can be 756 729 disabled. This all depends on the hardware and the design of the subsystem and 757 730 device driver in question. 
731 + 732 + If it is necessary to resume a device from runtime suspend during a system-wide 733 + transition into a sleep state, that can be done by calling 734 + :c:func:`pm_runtime_resume` for it from the ``->suspend`` callback (or its 735 + couterpart for transitions related to hibernation) of either the device's driver 736 + or a subsystem responsible for it (for example, a bus type or a PM domain). 737 + That is guaranteed to work by the requirement that subsystems must not change 738 + the state of devices (possibly except for resuming them from runtime suspend) 739 + from their ``->prepare`` and ``->suspend`` callbacks (or equivalent) *before* 740 + invoking device drivers' ``->suspend`` callbacks (or equivalent). 741 + 742 + Some bus types and PM domains have a policy to resume all devices from runtime 743 + suspend upfront in their ``->suspend`` callbacks, but that may not be really 744 + necessary if the driver of the device can cope with runtime-suspended devices. 745 + The driver can indicate that by setting ``DPM_FLAG_SMART_SUSPEND`` in 746 + :c:member:`power.driver_flags` at the probe time, by passing it to the 747 + :c:func:`dev_pm_set_driver_flags` helper. That also may cause middle-layer code 748 + (bus types, PM domains etc.) to skip the ``->suspend_late`` and 749 + ``->suspend_noirq`` callbacks provided by the driver if the device remains in 750 + runtime suspend at the beginning of the ``suspend_late`` phase of system-wide 751 + suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM 752 + has been disabled for it, under the assumption that its state should not change 753 + after that point until the system-wide transition is over. 
If that happens, the 754 + driver's system-wide resume callbacks, if present, may still be invoked during 755 + the subsequent system-wide resume transition and the device's runtime power 756 + management status may be set to "active" before enabling runtime PM for it, 757 + so the driver must be prepared to cope with the invocation of its system-wide 758 + resume callbacks back-to-back with its ``->runtime_suspend`` one (without the 759 + intervening ``->runtime_resume`` and so on) and the final state of the device 760 + must reflect the "active" status for runtime PM in that case. 758 761 759 762 During system-wide resume from a sleep state it's easiest to put devices into 760 763 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
+33
Documentation/power/pci.txt
··· 961 961 .suspend(), .freeze(), and .poweroff() members and one resume routine is to 962 962 be pointed to by the .resume(), .thaw(), and .restore() members. 963 963 964 + 3.1.19. Driver Flags for Power Management 965 + 966 + The PM core allows device drivers to set flags that influence the handling of 967 + power management for the devices by the core itself and by middle layer code 968 + including the PCI bus type. The flags should be set once at the driver probe 969 + time with the help of the dev_pm_set_driver_flags() function and they should not 970 + be updated directly afterwards. 971 + 972 + The DPM_FLAG_NEVER_SKIP flag prevents the PM core from using the direct-complete 973 + mechanism allowing device suspend/resume callbacks to be skipped if the device 974 + is in runtime suspend when the system suspend starts. That also affects all of 975 + the ancestors of the device, so this flag should only be used if absolutely 976 + necessary. 977 + 978 + The DPM_FLAG_SMART_PREPARE flag instructs the PCI bus type to only return a 979 + positive value from pci_pm_prepare() if the ->prepare callback provided by the 980 + driver of the device returns a positive value. That allows the driver to opt 981 + out from using the direct-complete mechanism dynamically. 982 + 983 + The DPM_FLAG_SMART_SUSPEND flag tells the PCI bus type that from the driver's 984 + perspective the device can be safely left in runtime suspend during system 985 + suspend. That causes pci_pm_suspend(), pci_pm_freeze() and pci_pm_poweroff() 986 + to skip resuming the device from runtime suspend unless there are PCI-specific 987 + reasons for doing that. Also, it causes pci_pm_suspend_late/noirq(), 988 + pci_pm_freeze_late/noirq() and pci_pm_poweroff_late/noirq() to return early 989 + if the device remains in runtime suspend in the beginning of the "late" phase 990 + of the system-wide transition under way. 
Moreover, if the device is in 991 + runtime suspend in pci_pm_resume_noirq() or pci_pm_restore_noirq(), its runtime 992 + power management status will be changed to "active" (as it is going to be put 993 + into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(), 994 + the function will set the power.direct_complete flag for it (to make the PM core 995 + skip the subsequent "thaw" callbacks for it) and return. 996 + 964 997 3.2. Device Runtime Power Management 965 998 ------------------------------------ 966 999 In addition to providing device power management callbacks PCI device drivers
+6 -7
Documentation/power/pm_qos_interface.txt
··· 98 98 The target values of resume latency and active state latency tolerance are 99 99 simply the minimum of the request values held in the parameter list elements. 100 100 The PM QoS flags aggregate value is a gather (bitwise OR) of all list elements' 101 - values. Two device PM QoS flags are defined currently: PM_QOS_FLAG_NO_POWER_OFF 102 - and PM_QOS_FLAG_REMOTE_WAKEUP. 101 + values. One device PM QoS flag is defined currently: PM_QOS_FLAG_NO_POWER_OFF. 103 102 104 103 Note: The aggregated target values are implemented in such a way that reading 105 104 the aggregated value does not require any locking mechanism. ··· 152 153 pm_qos_resume_latency_us from the device's power directory. 153 154 154 155 int dev_pm_qos_expose_flags(device, value) 155 - Add a request to the device's PM QoS list of flags and create sysfs attributes 156 - pm_qos_no_power_off and pm_qos_remote_wakeup under the device's power directory 157 - allowing user space to change these flags' value. 156 + Add a request to the device's PM QoS list of flags and create sysfs attribute 157 + pm_qos_no_power_off under the device's power directory allowing user space to 158 + change the value of the PM_QOS_FLAG_NO_POWER_OFF flag. 158 159 159 160 void dev_pm_qos_hide_flags(device) 160 161 Drop the request added by dev_pm_qos_expose_flags() from the device's PM QoS list 161 - of flags and remove sysfs attributes pm_qos_no_power_off and pm_qos_remote_wakeup 162 - under the device's power directory. 162 + of flags and remove sysfs attribute pm_qos_no_power_off from the device's power 163 + directory. 163 164 164 165 Notification mechanisms: 165 166 The per-device PM QoS framework has a per-device notification tree.
+3 -1
MAINTAINERS
··· 3637 3637 3638 3638 CPU POWER MONITORING SUBSYSTEM 3639 3639 M: Thomas Renninger <trenn@suse.com> 3640 + M: Shuah Khan <shuahkh@osg.samsung.com> 3641 + M: Shuah Khan <shuah@kernel.org> 3640 3642 L: linux-pm@vger.kernel.org 3641 3643 S: Maintained 3642 3644 F: tools/power/cpupower/ ··· 10062 10060 L: linux-pm@vger.kernel.org 10063 10061 S: Maintained 10064 10062 T: git git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm.git 10065 - F: drivers/base/power/opp/ 10063 + F: drivers/opp/ 10066 10064 F: include/linux/pm_opp.h 10067 10065 F: Documentation/power/opp.txt 10068 10066 F: Documentation/devicetree/bindings/opp/
-24
arch/arm/common/locomo.c
··· 826 826 return dev->devid == drv->devid; 827 827 } 828 828 829 - static int locomo_bus_suspend(struct device *dev, pm_message_t state) 830 - { 831 - struct locomo_dev *ldev = LOCOMO_DEV(dev); 832 - struct locomo_driver *drv = LOCOMO_DRV(dev->driver); 833 - int ret = 0; 834 - 835 - if (drv && drv->suspend) 836 - ret = drv->suspend(ldev, state); 837 - return ret; 838 - } 839 - 840 - static int locomo_bus_resume(struct device *dev) 841 - { 842 - struct locomo_dev *ldev = LOCOMO_DEV(dev); 843 - struct locomo_driver *drv = LOCOMO_DRV(dev->driver); 844 - int ret = 0; 845 - 846 - if (drv && drv->resume) 847 - ret = drv->resume(ldev); 848 - return ret; 849 - } 850 - 851 829 static int locomo_bus_probe(struct device *dev) 852 830 { 853 831 struct locomo_dev *ldev = LOCOMO_DEV(dev); ··· 853 875 .match = locomo_match, 854 876 .probe = locomo_bus_probe, 855 877 .remove = locomo_bus_remove, 856 - .suspend = locomo_bus_suspend, 857 - .resume = locomo_bus_resume, 858 878 }; 859 879 860 880 int locomo_driver_register(struct locomo_driver *driver)
-2
arch/arm/include/asm/hardware/locomo.h
··· 189 189 unsigned int devid; 190 190 int (*probe)(struct locomo_dev *); 191 191 int (*remove)(struct locomo_dev *); 192 - int (*suspend)(struct locomo_dev *, pm_message_t); 193 - int (*resume)(struct locomo_dev *); 194 192 }; 195 193 196 194 #define LOCOMO_DRV(_d) container_of((_d), struct locomo_driver, drv)
+8
arch/arm/include/asm/topology.h
··· 25 25 void store_cpu_topology(unsigned int cpuid); 26 26 const struct cpumask *cpu_coregroup_mask(int cpu); 27 27 28 + #include <linux/arch_topology.h> 29 + 30 + /* Replace task scheduler's default frequency-invariant accounting */ 31 + #define arch_scale_freq_capacity topology_get_freq_scale 32 + 33 + /* Replace task scheduler's default cpu-invariant accounting */ 34 + #define arch_scale_cpu_capacity topology_get_cpu_scale 35 + 28 36 #else 29 37 30 38 static inline void init_cpu_topology(void) { }
+2 -86
arch/arm/mach-imx/mach-imx6q.c
··· 286 286 imx6q_axi_init(); 287 287 } 288 288 289 - #define OCOTP_CFG3 0x440 290 - #define OCOTP_CFG3_SPEED_SHIFT 16 291 - #define OCOTP_CFG3_SPEED_1P2GHZ 0x3 292 - #define OCOTP_CFG3_SPEED_996MHZ 0x2 293 - #define OCOTP_CFG3_SPEED_852MHZ 0x1 294 - 295 - static void __init imx6q_opp_check_speed_grading(struct device *cpu_dev) 296 - { 297 - struct device_node *np; 298 - void __iomem *base; 299 - u32 val; 300 - 301 - np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp"); 302 - if (!np) { 303 - pr_warn("failed to find ocotp node\n"); 304 - return; 305 - } 306 - 307 - base = of_iomap(np, 0); 308 - if (!base) { 309 - pr_warn("failed to map ocotp\n"); 310 - goto put_node; 311 - } 312 - 313 - /* 314 - * SPEED_GRADING[1:0] defines the max speed of ARM: 315 - * 2b'11: 1200000000Hz; 316 - * 2b'10: 996000000Hz; 317 - * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz. 318 - * 2b'00: 792000000Hz; 319 - * We need to set the max speed of ARM according to fuse map. 320 - */ 321 - val = readl_relaxed(base + OCOTP_CFG3); 322 - val >>= OCOTP_CFG3_SPEED_SHIFT; 323 - val &= 0x3; 324 - 325 - if ((val != OCOTP_CFG3_SPEED_1P2GHZ) && cpu_is_imx6q()) 326 - if (dev_pm_opp_disable(cpu_dev, 1200000000)) 327 - pr_warn("failed to disable 1.2 GHz OPP\n"); 328 - if (val < OCOTP_CFG3_SPEED_996MHZ) 329 - if (dev_pm_opp_disable(cpu_dev, 996000000)) 330 - pr_warn("failed to disable 996 MHz OPP\n"); 331 - if (cpu_is_imx6q()) { 332 - if (val != OCOTP_CFG3_SPEED_852MHZ) 333 - if (dev_pm_opp_disable(cpu_dev, 852000000)) 334 - pr_warn("failed to disable 852 MHz OPP\n"); 335 - } 336 - iounmap(base); 337 - put_node: 338 - of_node_put(np); 339 - } 340 - 341 - static void __init imx6q_opp_init(void) 342 - { 343 - struct device_node *np; 344 - struct device *cpu_dev = get_cpu_device(0); 345 - 346 - if (!cpu_dev) { 347 - pr_warn("failed to get cpu0 device\n"); 348 - return; 349 - } 350 - np = of_node_get(cpu_dev->of_node); 351 - if (!np) { 352 - pr_warn("failed to find cpu0 node\n"); 353 - 
return; 354 - } 355 - 356 - if (dev_pm_opp_of_add_table(cpu_dev)) { 357 - pr_warn("failed to init OPP table\n"); 358 - goto put_node; 359 - } 360 - 361 - imx6q_opp_check_speed_grading(cpu_dev); 362 - 363 - put_node: 364 - of_node_put(np); 365 - } 366 - 367 - static struct platform_device imx6q_cpufreq_pdev = { 368 - .name = "imx6q-cpufreq", 369 - }; 370 - 371 289 static void __init imx6q_init_late(void) 372 290 { 373 291 /* ··· 295 377 if (imx_get_soc_revision() > IMX_CHIP_REVISION_1_1) 296 378 imx6q_cpuidle_init(); 297 379 298 - if (IS_ENABLED(CONFIG_ARM_IMX6Q_CPUFREQ)) { 299 - imx6q_opp_init(); 300 - platform_device_register(&imx6q_cpufreq_pdev); 301 - } 380 + if (IS_ENABLED(CONFIG_ARM_IMX6Q_CPUFREQ)) 381 + platform_device_register_simple("imx6q-cpufreq", -1, NULL, 0); 302 382 } 303 383 304 384 static void __init imx6q_map_io(void)
+1 -7
arch/arm/mach-shmobile/pm-rmobile.c
··· 120 120 return __rmobile_pd_power_up(to_rmobile_pd(genpd), true); 121 121 } 122 122 123 - static bool rmobile_pd_active_wakeup(struct device *dev) 124 - { 125 - return true; 126 - } 127 - 128 123 static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd) 129 124 { 130 125 struct generic_pm_domain *genpd = &rmobile_pd->genpd; 131 126 struct dev_power_governor *gov = rmobile_pd->gov; 132 127 133 - genpd->flags |= GENPD_FLAG_PM_CLK; 134 - genpd->dev_ops.active_wakeup = rmobile_pd_active_wakeup; 128 + genpd->flags |= GENPD_FLAG_PM_CLK | GENPD_FLAG_ACTIVE_WAKEUP; 135 129 genpd->power_off = rmobile_pd_power_down; 136 130 genpd->power_on = rmobile_pd_power_up; 137 131 genpd->attach_dev = cpg_mstp_attach_dev;
+8
arch/arm64/include/asm/topology.h
··· 33 33 34 34 #endif /* CONFIG_NUMA */ 35 35 36 + #include <linux/arch_topology.h> 37 + 38 + /* Replace task scheduler's default frequency-invariant accounting */ 39 + #define arch_scale_freq_capacity topology_get_freq_scale 40 + 41 + /* Replace task scheduler's default cpu-invariant accounting */ 42 + #define arch_scale_cpu_capacity topology_get_cpu_scale 43 + 36 44 #include <asm-generic/topology.h> 37 45 38 46 #endif /* _ASM_ARM_TOPOLOGY_H */
+2
drivers/Kconfig
··· 209 209 210 210 source "drivers/mux/Kconfig" 211 211 212 + source "drivers/opp/Kconfig" 213 + 212 214 endmenu
+1
drivers/Makefile
··· 126 126 obj-$(CONFIG_ISDN) += isdn/ 127 127 obj-$(CONFIG_EDAC) += edac/ 128 128 obj-$(CONFIG_EISA) += eisa/ 129 + obj-$(CONFIG_PM_OPP) += opp/ 129 130 obj-$(CONFIG_CPU_FREQ) += cpufreq/ 130 131 obj-$(CONFIG_CPU_IDLE) += cpuidle/ 131 132 obj-y += mmc/
+5
drivers/acpi/Kconfig
··· 81 81 config ACPI_SPCR_TABLE 82 82 bool 83 83 84 + config ACPI_LPIT 85 + bool 86 + depends on X86_64 87 + default y 88 + 84 89 config ACPI_SLEEP 85 90 bool 86 91 depends on SUSPEND || HIBERNATION
+1
drivers/acpi/Makefile
··· 57 57 acpi-$(CONFIG_ACPI_NUMA) += numa.o 58 58 acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o 59 59 acpi-y += acpi_lpat.o 60 + acpi-$(CONFIG_ACPI_LPIT) += acpi_lpit.o 60 61 acpi-$(CONFIG_ACPI_GENERIC_GSI) += irq.o 61 62 acpi-$(CONFIG_ACPI_WATCHDOG) += acpi_watchdog.o 62 63
+162
drivers/acpi/acpi_lpit.c
··· 1 + 2 + /* 3 + * acpi_lpit.c - LPIT table processing functions 4 + * 5 + * Copyright (C) 2017 Intel Corporation. All rights reserved. 6 + * 7 + * This program is free software; you can redistribute it and/or 8 + * modify it under the terms of the GNU General Public License version 9 + * 2 as published by the Free Software Foundation. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + */ 16 + 17 + #include <linux/cpu.h> 18 + #include <linux/acpi.h> 19 + #include <asm/msr.h> 20 + #include <asm/tsc.h> 21 + 22 + struct lpit_residency_info { 23 + struct acpi_generic_address gaddr; 24 + u64 frequency; 25 + void __iomem *iomem_addr; 26 + }; 27 + 28 + /* Storage for an memory mapped and FFH based entries */ 29 + static struct lpit_residency_info residency_info_mem; 30 + static struct lpit_residency_info residency_info_ffh; 31 + 32 + static int lpit_read_residency_counter_us(u64 *counter, bool io_mem) 33 + { 34 + int err; 35 + 36 + if (io_mem) { 37 + u64 count = 0; 38 + int error; 39 + 40 + error = acpi_os_read_iomem(residency_info_mem.iomem_addr, &count, 41 + residency_info_mem.gaddr.bit_width); 42 + if (error) 43 + return error; 44 + 45 + *counter = div64_u64(count * 1000000ULL, residency_info_mem.frequency); 46 + return 0; 47 + } 48 + 49 + err = rdmsrl_safe(residency_info_ffh.gaddr.address, counter); 50 + if (!err) { 51 + u64 mask = GENMASK_ULL(residency_info_ffh.gaddr.bit_offset + 52 + residency_info_ffh.gaddr. 
				bit_width - 1,
53 +				residency_info_ffh.gaddr.bit_offset);
54 +
55 +		*counter &= mask;
56 +		*counter >>= residency_info_ffh.gaddr.bit_offset;
57 +		*counter = div64_u64(*counter * 1000000ULL, residency_info_ffh.frequency);
58 +		return 0;
59 +	}
60 +
61 +	return -ENODATA;
62 + }
63 +
64 + static ssize_t low_power_idle_system_residency_us_show(struct device *dev,
65 +						       struct device_attribute *attr,
66 +						       char *buf)
67 + {
68 +	u64 counter;
69 +	int ret;
70 +
71 +	ret = lpit_read_residency_counter_us(&counter, true);
72 +	if (ret)
73 +		return ret;
74 +
75 +	return sprintf(buf, "%llu\n", counter);
76 + }
77 + static DEVICE_ATTR_RO(low_power_idle_system_residency_us);
78 +
79 + static ssize_t low_power_idle_cpu_residency_us_show(struct device *dev,
80 +						    struct device_attribute *attr,
81 +						    char *buf)
82 + {
83 +	u64 counter;
84 +	int ret;
85 +
86 +	ret = lpit_read_residency_counter_us(&counter, false);
87 +	if (ret)
88 +		return ret;
89 +
90 +	return sprintf(buf, "%llu\n", counter);
91 + }
92 + static DEVICE_ATTR_RO(low_power_idle_cpu_residency_us);
93 +
94 + int lpit_read_residency_count_address(u64 *address)
95 + {
96 +	if (!residency_info_mem.gaddr.address)
97 +		return -EINVAL;
98 +
99 +	*address = residency_info_mem.gaddr.address;
100 +
101 +	return 0;
102 + }
103 +
104 + static void lpit_update_residency(struct lpit_residency_info *info,
105 +				  struct acpi_lpit_native *lpit_native)
106 + {
107 +	info->frequency = lpit_native->counter_frequency ?
108 +			lpit_native->counter_frequency : tsc_khz * 1000;
109 +	if (!info->frequency)
110 +		info->frequency = 1;
111 +
112 +	info->gaddr = lpit_native->residency_counter;
113 +	if (info->gaddr.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
114 +		info->iomem_addr = ioremap_nocache(info->gaddr.address,
115 +						   info->gaddr.bit_width / 8);
116 +		if (!info->iomem_addr)
117 +			return;
118 +
119 +		/* Silently fail, if cpuidle attribute group is not present */
120 +		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
121 +					&dev_attr_low_power_idle_system_residency_us.attr,
122 +					"cpuidle");
123 +	} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
124 +		/* Silently fail, if cpuidle attribute group is not present */
125 +		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
126 +					&dev_attr_low_power_idle_cpu_residency_us.attr,
127 +					"cpuidle");
128 +	}
129 + }
130 +
131 + static void lpit_process(u64 begin, u64 end)
132 + {
133 +	while (begin + sizeof(struct acpi_lpit_native) < end) {
134 +		struct acpi_lpit_native *lpit_native = (struct acpi_lpit_native *)begin;
135 +
136 +		if (!lpit_native->header.type && !lpit_native->header.flags) {
137 +			if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY &&
138 +			    !residency_info_mem.gaddr.address) {
139 +				lpit_update_residency(&residency_info_mem, lpit_native);
140 +			} else if (lpit_native->residency_counter.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE &&
141 +				   !residency_info_ffh.gaddr.address) {
142 +				lpit_update_residency(&residency_info_ffh, lpit_native);
143 +			}
144 +		}
145 +		begin += lpit_native->header.length;
146 +	}
147 + }
148 +
149 + void acpi_init_lpit(void)
150 + {
151 +	acpi_status status;
152 +	u64 lpit_begin;
153 +	struct acpi_table_lpit *lpit;
154 +
155 +	status = acpi_get_table(ACPI_SIG_LPIT, 0, (struct acpi_table_header **)&lpit);
156 +
157 +	if (ACPI_FAILURE(status))
158 +		return;
159 +
160 +	lpit_begin = (u64)lpit + sizeof(*lpit);
161 +	lpit_process(lpit_begin, lpit_begin + lpit->header.length);
162 + }
+49 -46
drivers/acpi/acpi_lpss.c
···
693 693		struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
694 694		int ret;
695 695
696 -	ret = acpi_dev_runtime_resume(dev);
696 +	ret = acpi_dev_resume(dev);
697 697		if (ret)
698 698			return ret;
699 699
···
713 713
714 714	static void acpi_lpss_dismiss(struct device *dev)
715 715	{
716 -	acpi_dev_runtime_suspend(dev);
716 +	acpi_dev_suspend(dev, false);
717 717	}
718 -
719 - #ifdef CONFIG_PM_SLEEP
720 - static int acpi_lpss_suspend_late(struct device *dev)
721 - {
722 -	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
723 -	int ret;
724 -
725 -	ret = pm_generic_suspend_late(dev);
726 -	if (ret)
727 -		return ret;
728 -
729 -	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
730 -		acpi_lpss_save_ctx(dev, pdata);
731 -
732 -	return acpi_dev_suspend_late(dev);
733 - }
734 -
735 - static int acpi_lpss_resume_early(struct device *dev)
736 - {
737 -	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
738 -	int ret;
739 -
740 -	ret = acpi_dev_resume_early(dev);
741 -	if (ret)
742 -		return ret;
743 -
744 -	acpi_lpss_d3_to_d0_delay(pdata);
745 -
746 -	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
747 -		acpi_lpss_restore_ctx(dev, pdata);
748 -
749 -	return pm_generic_resume_early(dev);
750 - }
751 - #endif /* CONFIG_PM_SLEEP */
752 718
753 719	/* IOSF SB for LPSS island */
754 720	#define LPSS_IOSF_UNIT_LPIOEP		0xA0
···
801 835		mutex_unlock(&lpss_iosf_mutex);
802 836	}
803 837
804 - static int acpi_lpss_runtime_suspend(struct device *dev)
838 + static int acpi_lpss_suspend(struct device *dev, bool wakeup)
805 839	{
806 840		struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
807 841		int ret;
808 842
809 -	ret = pm_generic_runtime_suspend(dev);
810 -	if (ret)
811 -		return ret;
812 -
813 843		if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
814 844			acpi_lpss_save_ctx(dev, pdata);
815 845
816 -	ret = acpi_dev_runtime_suspend(dev);
846 +	ret = acpi_dev_suspend(dev, wakeup);
817 847
818 848		/*
819 849		 * This call must be last in the sequence, otherwise PMC will return
···
822 860		return ret;
823 861	}
824 862
825 - static int acpi_lpss_runtime_resume(struct device *dev)
863 + static int acpi_lpss_resume(struct device *dev)
826 864	{
827 865		struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
828 866		int ret;
···
834 872		if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
835 873			lpss_iosf_exit_d3_state();
836 874
837 -	ret = acpi_dev_runtime_resume(dev);
875 +	ret = acpi_dev_resume(dev);
838 876		if (ret)
839 877			return ret;
840 878
···
843 881		if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
844 882			acpi_lpss_restore_ctx(dev, pdata);
845 883
846 -	return pm_generic_runtime_resume(dev);
884 +	return 0;
885 + }
886 +
887 + #ifdef CONFIG_PM_SLEEP
888 + static int acpi_lpss_suspend_late(struct device *dev)
889 + {
890 +	int ret;
891 +
892 +	if (dev_pm_smart_suspend_and_suspended(dev))
893 +		return 0;
894 +
895 +	ret = pm_generic_suspend_late(dev);
896 +	return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
897 + }
898 +
899 + static int acpi_lpss_resume_early(struct device *dev)
900 + {
901 +	int ret = acpi_lpss_resume(dev);
902 +
903 +	return ret ? ret : pm_generic_resume_early(dev);
904 + }
905 + #endif /* CONFIG_PM_SLEEP */
906 +
907 + static int acpi_lpss_runtime_suspend(struct device *dev)
908 + {
909 +	int ret = pm_generic_runtime_suspend(dev);
910 +
911 +	return ret ? ret : acpi_lpss_suspend(dev, true);
912 + }
913 +
914 + static int acpi_lpss_runtime_resume(struct device *dev)
915 + {
916 +	int ret = acpi_lpss_resume(dev);
917 +
918 +	return ret ? ret : pm_generic_runtime_resume(dev);
847 919	}
848 920	#endif /* CONFIG_PM */
849 921
···
890 894	#ifdef CONFIG_PM
891 895	#ifdef CONFIG_PM_SLEEP
892 896		.prepare = acpi_subsys_prepare,
893 -	.complete = pm_complete_with_resume_check,
897 +	.complete = acpi_subsys_complete,
894 898		.suspend = acpi_subsys_suspend,
895 899		.suspend_late = acpi_lpss_suspend_late,
900 +	.suspend_noirq = acpi_subsys_suspend_noirq,
901 +	.resume_noirq = acpi_subsys_resume_noirq,
896 902		.resume_early = acpi_lpss_resume_early,
897 903		.freeze = acpi_subsys_freeze,
904 +	.freeze_late = acpi_subsys_freeze_late,
905 +	.freeze_noirq = acpi_subsys_freeze_noirq,
906 +	.thaw_noirq = acpi_subsys_thaw_noirq,
898 907		.poweroff = acpi_subsys_suspend,
899 908		.poweroff_late = acpi_lpss_suspend_late,
909 +	.poweroff_noirq = acpi_subsys_suspend_noirq,
910 +	.restore_noirq = acpi_subsys_resume_noirq,
900 911		.restore_early = acpi_lpss_resume_early,
901 912	#endif
902 913		.runtime_suspend = acpi_lpss_runtime_suspend,
+176 -103
drivers/acpi/device_pm.c
···
387 387
388 388 #ifdef CONFIG_PM
389 389 static DEFINE_MUTEX(acpi_pm_notifier_lock);
390 + static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
390 391
391 392 void acpi_pm_wakeup_event(struct device *dev)
392 393 {
···
444 443	if (!dev && !func)
445 444		return AE_BAD_PARAMETER;
446 445
447 -	mutex_lock(&acpi_pm_notifier_lock);
446 +	mutex_lock(&acpi_pm_notifier_install_lock);
448 447
449 448	if (adev->wakeup.flags.notifier_present)
450 449		goto out;
451 -
452 -	adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
453 -	adev->wakeup.context.dev = dev;
454 -	adev->wakeup.context.func = func;
455 450
456 451	status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
457 452					     acpi_pm_notify_handler, NULL);
458 453	if (ACPI_FAILURE(status))
459 454		goto out;
460 455
456 +	mutex_lock(&acpi_pm_notifier_lock);
457 +	adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
458 +	adev->wakeup.context.dev = dev;
459 +	adev->wakeup.context.func = func;
461 460	adev->wakeup.flags.notifier_present = true;
461 +	mutex_unlock(&acpi_pm_notifier_lock);
462 462
463 463  out:
464 -	mutex_unlock(&acpi_pm_notifier_lock);
464 +	mutex_unlock(&acpi_pm_notifier_install_lock);
465 465	return status;
466 466 }
···
474 472 {
475 473	acpi_status status = AE_BAD_PARAMETER;
476 474
477 -	mutex_lock(&acpi_pm_notifier_lock);
475 +	mutex_lock(&acpi_pm_notifier_install_lock);
478 476
479 477	if (!adev->wakeup.flags.notifier_present)
480 478		goto out;
···
485 483	if (ACPI_FAILURE(status))
486 484		goto out;
487 485
486 +	mutex_lock(&acpi_pm_notifier_lock);
488 487	adev->wakeup.context.func = NULL;
489 488	adev->wakeup.context.dev = NULL;
490 489	wakeup_source_unregister(adev->wakeup.ws);
491 -
492 490	adev->wakeup.flags.notifier_present = false;
491 +	mutex_unlock(&acpi_pm_notifier_lock);
493 492
494 493  out:
495 -	mutex_unlock(&acpi_pm_notifier_lock);
494 +	mutex_unlock(&acpi_pm_notifier_install_lock);
496 495	return status;
497 496 }
···
584 581		d_min = ret;
585 582		wakeup = device_may_wakeup(dev) && adev->wakeup.flags.valid
586 583			&& adev->wakeup.sleep_state >= target_state;
587 -	} else if (dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) !=
588 -			PM_QOS_FLAGS_NONE) {
584 +	} else {
589 585		wakeup = adev->wakeup.flags.valid;
590 586	}
591 587
···
850 848 }
851 849
852 850 /**
853 - * acpi_dev_runtime_suspend - Put device into a low-power state using ACPI.
851 + * acpi_dev_suspend - Put device into a low-power state using ACPI.
854 852  * @dev: Device to put into a low-power state.
853 + * @wakeup: Whether or not to enable wakeup for the device.
855 854  *
856 - * Put the given device into a runtime low-power state using the standard ACPI
855 + * Put the given device into a low-power state using the standard ACPI
857 856  * mechanism. Set up remote wakeup if desired, choose the state to put the
858 857  * device into (this checks if remote wakeup is expected to work too), and set
859 858  * the power state of the device.
860 859  */
861 - int acpi_dev_runtime_suspend(struct device *dev)
860 + int acpi_dev_suspend(struct device *dev, bool wakeup)
862 861 {
863 862	struct acpi_device *adev = ACPI_COMPANION(dev);
864 -	bool remote_wakeup;
863 +	u32 target_state = acpi_target_system_state();
865 864	int error;
866 865
867 866	if (!adev)
868 867		return 0;
869 868
870 -	remote_wakeup = dev_pm_qos_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP) >
871 -				PM_QOS_FLAGS_NONE;
872 -	if (remote_wakeup) {
873 -		error = acpi_device_wakeup_enable(adev, ACPI_STATE_S0);
869 +	if (wakeup && acpi_device_can_wakeup(adev)) {
870 +		error = acpi_device_wakeup_enable(adev, target_state);
874 871		if (error)
875 872			return -EAGAIN;
873 +	} else {
874 +		wakeup = false;
876 875	}
877 876
878 -	error = acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
879 -	if (error && remote_wakeup)
877 +	error = acpi_dev_pm_low_power(dev, adev, target_state);
878 +	if (error && wakeup)
880 879		acpi_device_wakeup_disable(adev);
881 880
882 881	return error;
883 882 }
884 - EXPORT_SYMBOL_GPL(acpi_dev_runtime_suspend);
883 + EXPORT_SYMBOL_GPL(acpi_dev_suspend);
885 884
886 885 /**
887 - * acpi_dev_runtime_resume - Put device into the full-power state using ACPI.
886 + * acpi_dev_resume - Put device into the full-power state using ACPI.
888 887  * @dev: Device to put into the full-power state.
889 888  *
890 889  * Put the given device into the full-power state using the standard ACPI
891 - * mechanism at run time. Set the power state of the device to ACPI D0 and
892 - * disable remote wakeup.
890 + * mechanism. Set the power state of the device to ACPI D0 and disable wakeup.
893 891  */
894 - int acpi_dev_runtime_resume(struct device *dev)
892 + int acpi_dev_resume(struct device *dev)
895 893 {
896 894	struct acpi_device *adev = ACPI_COMPANION(dev);
897 895	int error;
···
903 901	acpi_device_wakeup_disable(adev);
904 902	return error;
905 903 }
906 - EXPORT_SYMBOL_GPL(acpi_dev_runtime_resume);
904 + EXPORT_SYMBOL_GPL(acpi_dev_resume);
907 905
908 906 /**
909 907  * acpi_subsys_runtime_suspend - Suspend device using ACPI.
···
915 913 int acpi_subsys_runtime_suspend(struct device *dev)
916 914 {
917 915	int ret = pm_generic_runtime_suspend(dev);
918 -	return ret ? ret : acpi_dev_runtime_suspend(dev);
916 +	return ret ? ret : acpi_dev_suspend(dev, true);
919 917 }
920 918 EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
···
928 926  */
929 927 int acpi_subsys_runtime_resume(struct device *dev)
930 928 {
931 -	int ret = acpi_dev_runtime_resume(dev);
929 +	int ret = acpi_dev_resume(dev);
932 930	return ret ? ret : pm_generic_runtime_resume(dev);
933 931 }
934 932 EXPORT_SYMBOL_GPL(acpi_subsys_runtime_resume);
935 933
936 934 #ifdef CONFIG_PM_SLEEP
937 - /**
938 - * acpi_dev_suspend_late - Put device into a low-power state using ACPI.
939 - * @dev: Device to put into a low-power state.
940 - *
941 - * Put the given device into a low-power state during system transition to a
942 - * sleep state using the standard ACPI mechanism. Set up system wakeup if
943 - * desired, choose the state to put the device into (this checks if system
944 - * wakeup is expected to work too), and set the power state of the device.
945 - */
946 - int acpi_dev_suspend_late(struct device *dev)
935 + static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
947 936 {
948 -	struct acpi_device *adev = ACPI_COMPANION(dev);
949 -	u32 target_state;
950 -	bool wakeup;
951 -	int error;
937 +	u32 sys_target = acpi_target_system_state();
938 +	int ret, state;
952 939
953 -	if (!adev)
954 -		return 0;
940 +	if (!pm_runtime_suspended(dev) || !adev ||
941 +	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
942 +		return true;
955 943
956 -	target_state = acpi_target_system_state();
957 -	wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev);
958 -	if (wakeup) {
959 -		error = acpi_device_wakeup_enable(adev, target_state);
960 -		if (error)
961 -			return error;
962 -	}
944 +	if (sys_target == ACPI_STATE_S0)
945 +		return false;
963 946
964 -	error = acpi_dev_pm_low_power(dev, adev, target_state);
965 -	if (error && wakeup)
966 -		acpi_device_wakeup_disable(adev);
947 +	if (adev->power.flags.dsw_present)
948 +		return true;
967 949
968 -	return error;
950 +	ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
951 +	if (ret)
952 +		return true;
953 +
954 +	return state != adev->power.state;
969 955 }
970 - EXPORT_SYMBOL_GPL(acpi_dev_suspend_late);
971 -
972 - /**
973 - * acpi_dev_resume_early - Put device into the full-power state using ACPI.
974 - * @dev: Device to put into the full-power state.
975 - *
976 - * Put the given device into the full-power state using the standard ACPI
977 - * mechanism during system transition to the working state. Set the power
978 - * state of the device to ACPI D0 and disable remote wakeup.
979 - */
980 - int acpi_dev_resume_early(struct device *dev)
981 - {
982 -	struct acpi_device *adev = ACPI_COMPANION(dev);
983 -	int error;
984 -
985 -	if (!adev)
986 -		return 0;
987 -
988 -	error = acpi_dev_pm_full_power(adev);
989 -	acpi_device_wakeup_disable(adev);
990 -	return error;
991 - }
992 - EXPORT_SYMBOL_GPL(acpi_dev_resume_early);
993 956
994 957 /**
995 958  * acpi_subsys_prepare - Prepare device for system transition to a sleep state.
···
963 996 int acpi_subsys_prepare(struct device *dev)
964 997 {
965 998	struct acpi_device *adev = ACPI_COMPANION(dev);
966 -	u32 sys_target;
967 -	int ret, state;
968 999
969 -	ret = pm_generic_prepare(dev);
970 -	if (ret < 0)
971 -		return ret;
1000 +	if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) {
1001 +		int ret = dev->driver->pm->prepare(dev);
972 1002
973 -	if (!adev || !pm_runtime_suspended(dev)
974 -	    || device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
975 -		return 0;
1003 +		if (ret < 0)
1004 +			return ret;
976 1005
977 -	sys_target = acpi_target_system_state();
978 -	if (sys_target == ACPI_STATE_S0)
979 -		return 1;
1006 +		if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
1007 +			return 0;
1008 +	}
980 1009
981 -	if (adev->power.flags.dsw_present)
982 -		return 0;
983 -
984 -	ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
985 -	return !ret && state == adev->power.state;
1010 +	return !acpi_dev_needs_resume(dev, adev);
986 1011 }
987 1012 EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
1013 +
1014 + /**
1015 +  * acpi_subsys_complete - Finalize device's resume during system resume.
1016 +  * @dev: Device to handle.
1017 +  */
1018 + void acpi_subsys_complete(struct device *dev)
1019 + {
1020 +	pm_generic_complete(dev);
1021 +	/*
1022 +	 * If the device had been runtime-suspended before the system went into
1023 +	 * the sleep state it is going out of and it has never been resumed till
1024 +	 * now, resume it in case the firmware powered it up.
1025 +	 */
1026 +	if (dev->power.direct_complete && pm_resume_via_firmware())
1027 +		pm_request_resume(dev);
1028 + }
1029 + EXPORT_SYMBOL_GPL(acpi_subsys_complete);
988 1030
989 1031 /**
990 1032  * acpi_subsys_suspend - Run the device driver's suspend callback.
991 1033  * @dev: Device to handle.
992 1034  *
993 - * Follow PCI and resume devices suspended at run time before running their
994 - * system suspend callbacks.
1035 + * Follow PCI and resume devices from runtime suspend before running their
1036 + * system suspend callbacks, unless the driver can cope with runtime-suspended
1037 + * devices during system suspend and there are no ACPI-specific reasons for
1038 + * resuming them.
995 1039  */
996 1040 int acpi_subsys_suspend(struct device *dev)
997 1041 {
998 -	pm_runtime_resume(dev);
1042 +	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
1043 +	    acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
1044 +		pm_runtime_resume(dev);
1045 +
999 1046	return pm_generic_suspend(dev);
1000 1047 }
1001 1048 EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
···
1023 1042  */
1024 1043 int acpi_subsys_suspend_late(struct device *dev)
1025 1044 {
1026 -	int ret = pm_generic_suspend_late(dev);
1027 -	return ret ? ret : acpi_dev_suspend_late(dev);
1045 +	int ret;
1046 +
1047 +	if (dev_pm_smart_suspend_and_suspended(dev))
1048 +		return 0;
1049 +
1050 +	ret = pm_generic_suspend_late(dev);
1051 +	return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev));
1028 1052 }
1029 1053 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
1054 +
1055 + /**
1056 +  * acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback.
1057 +  * @dev: Device to suspend.
1058 +  */
1059 + int acpi_subsys_suspend_noirq(struct device *dev)
1060 + {
1061 +	if (dev_pm_smart_suspend_and_suspended(dev))
1062 +		return 0;
1063 +
1064 +	return pm_generic_suspend_noirq(dev);
1065 + }
1066 + EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
1067 +
1068 + /**
1069 +  * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
1070 +  * @dev: Device to handle.
1071 +  */
1072 + int acpi_subsys_resume_noirq(struct device *dev)
1073 + {
1074 +	/*
1075 +	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
1076 +	 * during system suspend, so update their runtime PM status to "active"
1077 +	 * as they will be put into D0 going forward.
1078 +	 */
1079 +	if (dev_pm_smart_suspend_and_suspended(dev))
1080 +		pm_runtime_set_active(dev);
1081 +
1082 +	return pm_generic_resume_noirq(dev);
1083 + }
1084 + EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);
1030 1085
1031 1086 /**
1032 1087  * acpi_subsys_resume_early - Resume device using ACPI.
···
1074 1057  */
1075 1058 int acpi_subsys_resume_early(struct device *dev)
1076 1059 {
1077 -	int ret = acpi_dev_resume_early(dev);
1060 +	int ret = acpi_dev_resume(dev);
1078 1061	return ret ? ret : pm_generic_resume_early(dev);
1079 1062 }
1080 1063 EXPORT_SYMBOL_GPL(acpi_subsys_resume_early);
···
1091 1074	 * runtime-suspended devices should not be touched during freeze/thaw
1092 1075	 * transitions.
1093 1076	 */
1094 -	pm_runtime_resume(dev);
1077 +	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
1078 +		pm_runtime_resume(dev);
1079 +
1095 1080	return pm_generic_freeze(dev);
1096 1081 }
1097 1082 EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
1098 1083
1084 + /**
1085 +  * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback.
1086 +  * @dev: Device to handle.
1087 +  */
1088 + int acpi_subsys_freeze_late(struct device *dev)
1089 + {
1090 +
1091 +	if (dev_pm_smart_suspend_and_suspended(dev))
1092 +		return 0;
1093 +
1094 +	return pm_generic_freeze_late(dev);
1095 + }
1096 + EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late);
1097 +
1098 + /**
1099 +  * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback.
1100 +  * @dev: Device to handle.
1101 +  */
1102 + int acpi_subsys_freeze_noirq(struct device *dev)
1103 + {
1104 +
1105 +	if (dev_pm_smart_suspend_and_suspended(dev))
1106 +		return 0;
1107 +
1108 +	return pm_generic_freeze_noirq(dev);
1109 + }
1110 + EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq);
1111 +
1112 + /**
1113 +  * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback.
1114 +  * @dev: Device to handle.
1115 +  */
1116 + int acpi_subsys_thaw_noirq(struct device *dev)
1117 + {
1118 +	/*
1119 +	 * If the device is in runtime suspend, the "thaw" code may not work
1120 +	 * correctly with it, so skip the driver callback and make the PM core
1121 +	 * skip all of the subsequent "thaw" callbacks for the device.
1122 +	 */
1123 +	if (dev_pm_smart_suspend_and_suspended(dev)) {
1124 +		dev->power.direct_complete = true;
1125 +		return 0;
1126 +	}
1127 +
1128 +	return pm_generic_thaw_noirq(dev);
1129 + }
1130 + EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq);
1099 1131 #endif /* CONFIG_PM_SLEEP */
1100 1132
1101 1133 static struct dev_pm_domain acpi_general_pm_domain = {
···
1153 1087	.runtime_resume = acpi_subsys_runtime_resume,
1154 1088 #ifdef CONFIG_PM_SLEEP
1155 1089	.prepare = acpi_subsys_prepare,
1156 -	.complete = pm_complete_with_resume_check,
1090 +	.complete = acpi_subsys_complete,
1157 1091	.suspend = acpi_subsys_suspend,
1158 1092	.suspend_late = acpi_subsys_suspend_late,
1093 +	.suspend_noirq = acpi_subsys_suspend_noirq,
1094 +	.resume_noirq = acpi_subsys_resume_noirq,
1159 1095	.resume_early = acpi_subsys_resume_early,
1160 1096	.freeze = acpi_subsys_freeze,
1097 +	.freeze_late = acpi_subsys_freeze_late,
1098 +	.freeze_noirq = acpi_subsys_freeze_noirq,
1099 +	.thaw_noirq = acpi_subsys_thaw_noirq,
1161 1100	.poweroff = acpi_subsys_suspend,
1162 1101	.poweroff_late = acpi_subsys_suspend_late,
1102 +	.poweroff_noirq = acpi_subsys_suspend_noirq,
1103 +	.restore_noirq = acpi_subsys_resume_noirq,
1163 1104	.restore_early = acpi_subsys_resume_early,
1164 1105 #endif
1165 1106 },
+6
drivers/acpi/internal.h
···
248 248 static inline void acpi_watchdog_init(void) {}
249 249 #endif
250 250
251 + #ifdef CONFIG_ACPI_LPIT
252 + void acpi_init_lpit(void);
253 + #else
254 + static inline void acpi_init_lpit(void) { }
255 + #endif
256 +
251 257 #endif /* _ACPI_INTERNAL_H_ */
+30 -20
drivers/acpi/osl.c
···
663 663
664 664 EXPORT_SYMBOL(acpi_os_write_port);
665 665
666 - acpi_status
667 - acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
666 + int acpi_os_read_iomem(void __iomem *virt_addr, u64 *value, u32 width)
668 667 {
669 -	void __iomem *virt_addr;
670 -	unsigned int size = width / 8;
671 -	bool unmap = false;
672 -	u64 dummy;
673 -
674 -	rcu_read_lock();
675 -	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
676 -	if (!virt_addr) {
677 -		rcu_read_unlock();
678 -		virt_addr = acpi_os_ioremap(phys_addr, size);
679 -		if (!virt_addr)
680 -			return AE_BAD_ADDRESS;
681 -		unmap = true;
682 -	}
683 -
684 -	if (!value)
685 -		value = &dummy;
686 668
687 669	switch (width) {
688 670	case 8:
···
680 698		*(u64 *) value = readq(virt_addr);
681 699		break;
682 700	default:
683 -		BUG();
701 +		return -EINVAL;
684 702	}
703 +
704 +	return 0;
705 + }
706 +
707 + acpi_status
708 + acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
709 + {
710 +	void __iomem *virt_addr;
711 +	unsigned int size = width / 8;
712 +	bool unmap = false;
713 +	u64 dummy;
714 +	int error;
715 +
716 +	rcu_read_lock();
717 +	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
718 +	if (!virt_addr) {
719 +		rcu_read_unlock();
720 +		virt_addr = acpi_os_ioremap(phys_addr, size);
721 +		if (!virt_addr)
722 +			return AE_BAD_ADDRESS;
723 +		unmap = true;
724 +	}
725 +
726 +	if (!value)
727 +		value = &dummy;
728 +
729 +	error = acpi_os_read_iomem(virt_addr, value, width);
730 +	BUG_ON(error);
685 731
686 732	if (unmap)
687 733		iounmap(virt_addr);
+1
drivers/acpi/scan.c
···
2122 2122	acpi_int340x_thermal_init();
2123 2123	acpi_amba_init();
2124 2124	acpi_watchdog_init();
2125 +	acpi_init_lpit();
2125 2126
2126 2127	acpi_scan_add_handler(&generic_device_handler);
2127 2128
+23 -6
drivers/base/arch_topology.c
···
22 22 #include <linux/string.h>
23 23 #include <linux/sched/topology.h>
24 24
25 - static DEFINE_MUTEX(cpu_scale_mutex);
26 - static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
25 + DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
27 26
28 - unsigned long topology_get_cpu_scale(struct sched_domain *sd, int cpu)
27 + void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
28 +			 unsigned long max_freq)
29 29 {
30 -	return per_cpu(cpu_scale, cpu);
30 +	unsigned long scale;
31 +	int i;
32 +
33 +	scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
34 +
35 +	for_each_cpu(i, cpus)
36 +		per_cpu(freq_scale, i) = scale;
31 37 }
38 +
39 + static DEFINE_MUTEX(cpu_scale_mutex);
40 + DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
32 41
33 42 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
34 43 {
···
221 212
222 213 static int __init register_cpufreq_notifier(void)
223 214 {
215 +	int ret;
216 +
224 217	/*
225 218	 * on ACPI-based systems we need to use the default cpu capacity
226 219	 * until we have the necessary code to parse the cpu capacity, so
···
238 227
239 228	cpumask_copy(cpus_to_visit, cpu_possible_mask);
240 229
241 -	return cpufreq_register_notifier(&init_cpu_capacity_notifier,
242 -					 CPUFREQ_POLICY_NOTIFIER);
230 +	ret = cpufreq_register_notifier(&init_cpu_capacity_notifier,
231 +					CPUFREQ_POLICY_NOTIFIER);
232 +
233 +	if (ret)
234 +		free_cpumask_var(cpus_to_visit);
235 +
236 +	return ret;
243 237 }
244 238 core_initcall(register_cpufreq_notifier);
245 239
···
252 236 {
253 237	cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
254 238				    CPUFREQ_POLICY_NOTIFIER);
239 +	free_cpumask_var(cpus_to_visit);
255 240 }
256 241
257 242 #else
+2 -1
drivers/base/cpu.c
···
386 386
387 387	per_cpu(cpu_sys_devices, num) = &cpu->dev;
388 388	register_cpu_under_node(num, cpu_to_node(num));
389 -	dev_pm_qos_expose_latency_limit(&cpu->dev, 0);
389 +	dev_pm_qos_expose_latency_limit(&cpu->dev,
390 +					PM_QOS_RESUME_LATENCY_NO_CONSTRAINT);
390 391
391 392	return 0;
392 393 }
+2
drivers/base/dd.c
···
464 464		if (dev->pm_domain && dev->pm_domain->dismiss)
465 465			dev->pm_domain->dismiss(dev);
466 466		pm_runtime_reinit(dev);
467 +		dev_pm_set_driver_flags(dev, 0);
467 468
468 469		switch (ret) {
469 470		case -EPROBE_DEFER:
···
870 869		if (dev->pm_domain && dev->pm_domain->dismiss)
871 870			dev->pm_domain->dismiss(dev);
872 871		pm_runtime_reinit(dev);
872 +		dev_pm_set_driver_flags(dev, 0);
873 873
874 874		klist_remove(&dev->p->knode_driver);
875 875		device_pm_check_callbacks(dev);
-1
drivers/base/power/Makefile
···
2 2 obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
3 3 obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
4 4 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
5 - obj-$(CONFIG_PM_OPP)	+= opp/
6 5 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o
7 6 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
8 7
+157 -69
drivers/base/power/domain.c
···
124 124 #define genpd_status_on(genpd)		(genpd->status == GPD_STATE_ACTIVE)
125 125 #define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
126 126 #define genpd_is_always_on(genpd)	(genpd->flags & GENPD_FLAG_ALWAYS_ON)
127 + #define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
127 128
128 129 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
129 130		const struct generic_pm_domain *genpd)
···
238 237 static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
239 238 #endif
240 239
240 + /**
241 +  * dev_pm_genpd_set_performance_state- Set performance state of device's power
242 +  * domain.
243 +  *
244 +  * @dev: Device for which the performance-state needs to be set.
245 +  * @state: Target performance state of the device. This can be set as 0 when the
246 +  *	   device doesn't have any performance state constraints left (And so
247 +  *	   the device wouldn't participate anymore to find the target
248 +  *	   performance state of the genpd).
249 +  *
250 +  * It is assumed that the users guarantee that the genpd wouldn't be detached
251 +  * while this routine is getting called.
252 +  *
253 +  * Returns 0 on success and negative error values on failures.
254 +  */
255 + int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
256 + {
257 +	struct generic_pm_domain *genpd;
258 +	struct generic_pm_domain_data *gpd_data, *pd_data;
259 +	struct pm_domain_data *pdd;
260 +	unsigned int prev;
261 +	int ret = 0;
262 +
263 +	genpd = dev_to_genpd(dev);
264 +	if (IS_ERR(genpd))
265 +		return -ENODEV;
266 +
267 +	if (unlikely(!genpd->set_performance_state))
268 +		return -EINVAL;
269 +
270 +	if (unlikely(!dev->power.subsys_data ||
271 +		     !dev->power.subsys_data->domain_data)) {
272 +		WARN_ON(1);
273 +		return -EINVAL;
274 +	}
275 +
276 +	genpd_lock(genpd);
277 +
278 +	gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
279 +	prev = gpd_data->performance_state;
280 +	gpd_data->performance_state = state;
281 +
282 +	/* New requested state is same as Max requested state */
283 +	if (state == genpd->performance_state)
284 +		goto unlock;
285 +
286 +	/* New requested state is higher than Max requested state */
287 +	if (state > genpd->performance_state)
288 +		goto update_state;
289 +
290 +	/* Traverse all devices within the domain */
291 +	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
292 +		pd_data = to_gpd_data(pdd);
293 +
294 +		if (pd_data->performance_state > state)
295 +			state = pd_data->performance_state;
296 +	}
297 +
298 +	if (state == genpd->performance_state)
299 +		goto unlock;
300 +
301 +	/*
302 +	 * We aren't propagating performance state changes of a subdomain to its
303 +	 * masters as we don't have hardware that needs it. Over that, the
304 +	 * performance states of subdomain and its masters may not have
305 +	 * one-to-one mapping and would require additional information. We can
306 +	 * get back to this once we have hardware that needs it. For that
307 +	 * reason, we don't have to consider performance state of the subdomains
308 +	 * of genpd here.
309 +	 */
310 +
311 + update_state:
312 +	if (genpd_status_on(genpd)) {
313 +		ret = genpd->set_performance_state(genpd, state);
314 +		if (ret) {
315 +			gpd_data->performance_state = prev;
316 +			goto unlock;
317 +		}
318 +	}
319 +
320 +	genpd->performance_state = state;
321 +
322 + unlock:
323 +	genpd_unlock(genpd);
324 +
325 +	return ret;
326 + }
327 + EXPORT_SYMBOL_GPL(dev_pm_genpd_set_performance_state);
328 +
241 329 static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
242 330 {
243 331	unsigned int state_idx = genpd->state_idx;
···
346 256		return ret;
347 257
348 258	elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
259 +
260 +	if (unlikely(genpd->set_performance_state)) {
261 +		ret = genpd->set_performance_state(genpd, genpd->performance_state);
262 +		if (ret) {
263 +			pr_warn("%s: Failed to set performance state %d (%d)\n",
264 +				genpd->name, genpd->performance_state, ret);
265 +		}
266 +	}
267 +
349 268	if (elapsed_ns <= genpd->states[state_idx].power_on_latency_ns)
350 269		return ret;
···
445 346	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
446 347		enum pm_qos_flags_status stat;
447 348
448 -		stat = dev_pm_qos_flags(pdd->dev,
449 -					PM_QOS_FLAG_NO_POWER_OFF
450 -					| PM_QOS_FLAG_REMOTE_WAKEUP);
349 +		stat = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_NO_POWER_OFF);
451 350		if (stat > PM_QOS_FLAGS_NONE)
452 351			return -EBUSY;
453 352
···
846 749
847 750 #if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_GENERIC_DOMAINS_OF)
848 751
849 - /**
850 -  * pm_genpd_present - Check if the given PM domain has been initialized.
851 -  * @genpd: PM domain to check.
852 -  */
853 - static bool pm_genpd_present(const struct generic_pm_domain *genpd)
752 + static bool genpd_present(const struct generic_pm_domain *genpd)
854 753 {
855 754	const struct generic_pm_domain *gpd;
856 755
···
863 770 #endif
864 771
865 772 #ifdef CONFIG_PM_SLEEP
866 -
867 - static bool genpd_dev_active_wakeup(const struct generic_pm_domain *genpd,
868 -				    struct device *dev)
869 - {
870 -	return GENPD_DEV_CALLBACK(genpd, bool, active_wakeup, dev);
871 - }
872 773
873 774 /**
874 775  * genpd_sync_power_off - Synchronously power off a PM domain and its masters.
···
950 863  * @genpd: PM domain the device belongs to.
951 864  *
952 865  * There are two cases in which a device that can wake up the system from sleep
953 - * states should be resumed by pm_genpd_prepare(): (1) if the device is enabled
866 + * states should be resumed by genpd_prepare(): (1) if the device is enabled
954 867  * to wake up the system and it has to remain active for this purpose while the
955 868  * system is in the sleep state and (2) if the device is not enabled to wake up
956 869  * the system from sleep states and it generally doesn't generate wakeup signals
···
968 881	if (!device_can_wakeup(dev))
969 882		return false;
970 883
971 -	active_wakeup = genpd_dev_active_wakeup(genpd, dev);
884 +	active_wakeup = genpd_is_active_wakeup(genpd);
972 885	return device_may_wakeup(dev) ? active_wakeup : !active_wakeup;
973 886 }
974 887
975 888 /**
976 - * pm_genpd_prepare - Start power transition of a device in a PM domain.
889 + * genpd_prepare - Start power transition of a device in a PM domain.
977 890  * @dev: Device to start the transition of.
978 891  *
979 892  * Start a power transition of a device (during a system-wide power transition)
···
981 894  * an object of type struct generic_pm_domain representing a PM domain
982 895  * consisting of I/O devices.
983 896  */
984 - static int pm_genpd_prepare(struct device *dev)
897 + static int genpd_prepare(struct device *dev)
985 898 {
986 899	struct generic_pm_domain *genpd;
987 900	int ret;
···
1008 921	genpd_unlock(genpd);
1009 922
1010 923	ret = pm_generic_prepare(dev);
1011 -	if (ret) {
924 +	if (ret < 0) {
1012 925		genpd_lock(genpd);
1013 926
1014 927		genpd->prepared_count--;
···
1016 929		genpd_unlock(genpd);
1017 930	}
1018 931
1019 -	return ret;
932 +	/* Never return 1, as genpd don't cope with the direct_complete path. */
933 +	return ret >= 0 ? 0 : ret;
1020 934 }
1021 935
1022 936 /**
···
1038 950	if (IS_ERR(genpd))
1039 951		return -EINVAL;
1040 952
1041 -	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
953 +	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
1042 954		return 0;
1043 955
1044 956	if (poweroff)
···
1063 975 }
1064 976
1065 977 /**
1066 - * pm_genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
978 + * genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
1067 979  * @dev: Device to suspend.
1068 980  *
1069 981  * Stop the device and remove power from the domain if all devices in it have
1070 982  * been stopped.
1071 983  */
1072 - static int pm_genpd_suspend_noirq(struct device *dev)
984 + static int genpd_suspend_noirq(struct device *dev)
1073 985 {
1074 986	dev_dbg(dev, "%s()\n", __func__);
1075 987
···
1077 989 }
1078 990
1079 991 /**
1080 - * pm_genpd_resume_noirq - Start of resume of device in an I/O PM domain.
992 + * genpd_resume_noirq - Start of resume of device in an I/O PM domain.
1081 993  * @dev: Device to resume.
1082 994  *
1083 995  * Restore power to the device's PM domain, if necessary, and start the device.
1084 996  */
1085 - static int pm_genpd_resume_noirq(struct device *dev)
997 + static int genpd_resume_noirq(struct device *dev)
1086 998 {
1087 999	struct generic_pm_domain *genpd;
1088 1000	int ret = 0;
···
1093 1005	if (IS_ERR(genpd))
1094 1006		return -EINVAL;
1095 1007
1096 -	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
1008 +	if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
1097 1009		return 0;
1098 1010
1099 1011	genpd_lock(genpd);
···
1112 1024 }
1113 1025
1114 1026 /**
1115 - * pm_genpd_freeze_noirq - Completion of freezing a device in an I/O PM domain.
1027 + * genpd_freeze_noirq - Completion of freezing a device in an I/O PM domain.
1116 1028  * @dev: Device to freeze.
1117 1029  *
1118 1030  * Carry out a late freeze of a device under the assumption that its
···
1120 1032  * struct generic_pm_domain representing a power domain consisting of I/O
1121 1033  * devices.
1122 1034  */
1123 - static int pm_genpd_freeze_noirq(struct device *dev)
1035 + static int genpd_freeze_noirq(struct device *dev)
1124 1036 {
1125 1037	const struct generic_pm_domain *genpd;
1126 1038	int ret = 0;
···
1142 1054 }
1143 1055
1144 1056 /**
1145 - * pm_genpd_thaw_noirq - Early thaw of device in an I/O PM domain.
1057 + * genpd_thaw_noirq - Early thaw of device in an I/O PM domain.
1146 1058  * @dev: Device to thaw.
1147 1059  *
1148 1060  * Start the device, unless power has been removed from the domain already
1149 1061  * before the system transition.
1150 1062  */
1151 - static int pm_genpd_thaw_noirq(struct device *dev)
1063 + static int genpd_thaw_noirq(struct device *dev)
1152 1064 {
1153 1065	const struct generic_pm_domain *genpd;
1154 1066	int ret = 0;
···
1169 1081 }
1170 1082
1171 1083 /**
1172 - * pm_genpd_poweroff_noirq - Completion of hibernation of device in an
1084 + * genpd_poweroff_noirq - Completion of hibernation of device in an
1173 1085  * I/O PM domain.
1174 1086  * @dev: Device to poweroff.
1175 1087 * 1176 1088 * Stop the device and remove power from the domain if all devices in it have 1177 1089 * been stopped. 1178 1090 */ 1179 - static int pm_genpd_poweroff_noirq(struct device *dev) 1091 + static int genpd_poweroff_noirq(struct device *dev) 1180 1092 { 1181 1093 dev_dbg(dev, "%s()\n", __func__); 1182 1094 ··· 1184 1096 } 1185 1097 1186 1098 /** 1187 - * pm_genpd_restore_noirq - Start of restore of device in an I/O PM domain. 1099 + * genpd_restore_noirq - Start of restore of device in an I/O PM domain. 1188 1100 * @dev: Device to resume. 1189 1101 * 1190 1102 * Make sure the domain will be in the same power state as before the 1191 1103 * hibernation the system is resuming from and start the device if necessary. 1192 1104 */ 1193 - static int pm_genpd_restore_noirq(struct device *dev) 1105 + static int genpd_restore_noirq(struct device *dev) 1194 1106 { 1195 1107 struct generic_pm_domain *genpd; 1196 1108 int ret = 0; ··· 1227 1139 } 1228 1140 1229 1141 /** 1230 - * pm_genpd_complete - Complete power transition of a device in a power domain. 1142 + * genpd_complete - Complete power transition of a device in a power domain. 1231 1143 * @dev: Device to complete the transition of. 1232 1144 * 1233 1145 * Complete a power transition of a device (during a system-wide power ··· 1235 1147 * domain member of an object of type struct generic_pm_domain representing 1236 1148 * a power domain consisting of I/O devices. 
1237 1149 */ 1238 - static void pm_genpd_complete(struct device *dev) 1150 + static void genpd_complete(struct device *dev) 1239 1151 { 1240 1152 struct generic_pm_domain *genpd; 1241 1153 ··· 1268 1180 struct generic_pm_domain *genpd; 1269 1181 1270 1182 genpd = dev_to_genpd(dev); 1271 - if (!pm_genpd_present(genpd)) 1183 + if (!genpd_present(genpd)) 1272 1184 return; 1273 1185 1274 1186 if (suspend) { ··· 1294 1206 1295 1207 #else /* !CONFIG_PM_SLEEP */ 1296 1208 1297 - #define pm_genpd_prepare NULL 1298 - #define pm_genpd_suspend_noirq NULL 1299 - #define pm_genpd_resume_noirq NULL 1300 - #define pm_genpd_freeze_noirq NULL 1301 - #define pm_genpd_thaw_noirq NULL 1302 - #define pm_genpd_poweroff_noirq NULL 1303 - #define pm_genpd_restore_noirq NULL 1304 - #define pm_genpd_complete NULL 1209 + #define genpd_prepare NULL 1210 + #define genpd_suspend_noirq NULL 1211 + #define genpd_resume_noirq NULL 1212 + #define genpd_freeze_noirq NULL 1213 + #define genpd_thaw_noirq NULL 1214 + #define genpd_poweroff_noirq NULL 1215 + #define genpd_restore_noirq NULL 1216 + #define genpd_complete NULL 1305 1217 1306 1218 #endif /* CONFIG_PM_SLEEP */ 1307 1219 ··· 1327 1239 1328 1240 gpd_data->base.dev = dev; 1329 1241 gpd_data->td.constraint_changed = true; 1330 - gpd_data->td.effective_constraint_ns = -1; 1242 + gpd_data->td.effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS; 1331 1243 gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier; 1332 1244 1333 1245 spin_lock_irq(&dev->power.lock); ··· 1662 1574 genpd->accounting_time = ktime_get(); 1663 1575 genpd->domain.ops.runtime_suspend = genpd_runtime_suspend; 1664 1576 genpd->domain.ops.runtime_resume = genpd_runtime_resume; 1665 - genpd->domain.ops.prepare = pm_genpd_prepare; 1666 - genpd->domain.ops.suspend_noirq = pm_genpd_suspend_noirq; 1667 - genpd->domain.ops.resume_noirq = pm_genpd_resume_noirq; 1668 - genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq; 1669 - genpd->domain.ops.thaw_noirq = 
pm_genpd_thaw_noirq; 1670 - genpd->domain.ops.poweroff_noirq = pm_genpd_poweroff_noirq; 1671 - genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq; 1672 - genpd->domain.ops.complete = pm_genpd_complete; 1577 + genpd->domain.ops.prepare = genpd_prepare; 1578 + genpd->domain.ops.suspend_noirq = genpd_suspend_noirq; 1579 + genpd->domain.ops.resume_noirq = genpd_resume_noirq; 1580 + genpd->domain.ops.freeze_noirq = genpd_freeze_noirq; 1581 + genpd->domain.ops.thaw_noirq = genpd_thaw_noirq; 1582 + genpd->domain.ops.poweroff_noirq = genpd_poweroff_noirq; 1583 + genpd->domain.ops.restore_noirq = genpd_restore_noirq; 1584 + genpd->domain.ops.complete = genpd_complete; 1673 1585 1674 1586 if (genpd->flags & GENPD_FLAG_PM_CLK) { 1675 1587 genpd->dev_ops.stop = pm_clk_suspend; ··· 1883 1795 1884 1796 mutex_lock(&gpd_list_lock); 1885 1797 1886 - if (pm_genpd_present(genpd)) { 1798 + if (genpd_present(genpd)) { 1887 1799 ret = genpd_add_provider(np, genpd_xlate_simple, genpd); 1888 1800 if (!ret) { 1889 1801 genpd->provider = &np->fwnode; ··· 1919 1831 for (i = 0; i < data->num_domains; i++) { 1920 1832 if (!data->domains[i]) 1921 1833 continue; 1922 - if (!pm_genpd_present(data->domains[i])) 1834 + if (!genpd_present(data->domains[i])) 1923 1835 goto error; 1924 1836 1925 1837 data->domains[i]->provider = &np->fwnode; ··· 2362 2274 #include <linux/seq_file.h> 2363 2275 #include <linux/init.h> 2364 2276 #include <linux/kobject.h> 2365 - static struct dentry *pm_genpd_debugfs_dir; 2277 + static struct dentry *genpd_debugfs_dir; 2366 2278 2367 2279 /* 2368 2280 * TODO: This function is a slightly modified version of rtpm_status_show ··· 2390 2302 seq_puts(s, p); 2391 2303 } 2392 2304 2393 - static int pm_genpd_summary_one(struct seq_file *s, 2394 - struct generic_pm_domain *genpd) 2305 + static int genpd_summary_one(struct seq_file *s, 2306 + struct generic_pm_domain *genpd) 2395 2307 { 2396 2308 static const char * const status_lookup[] = { 2397 2309 [GPD_STATE_ACTIVE] = 
"on", ··· 2461 2373 return -ERESTARTSYS; 2462 2374 2463 2375 list_for_each_entry(genpd, &gpd_list, gpd_list_node) { 2464 - ret = pm_genpd_summary_one(s, genpd); 2376 + ret = genpd_summary_one(s, genpd); 2465 2377 if (ret) 2466 2378 break; 2467 2379 } ··· 2647 2559 define_genpd_debugfs_fops(total_idle_time); 2648 2560 define_genpd_debugfs_fops(devices); 2649 2561 2650 - static int __init pm_genpd_debug_init(void) 2562 + static int __init genpd_debug_init(void) 2651 2563 { 2652 2564 struct dentry *d; 2653 2565 struct generic_pm_domain *genpd; 2654 2566 2655 - pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL); 2567 + genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL); 2656 2568 2657 - if (!pm_genpd_debugfs_dir) 2569 + if (!genpd_debugfs_dir) 2658 2570 return -ENOMEM; 2659 2571 2660 2572 d = debugfs_create_file("pm_genpd_summary", S_IRUGO, 2661 - pm_genpd_debugfs_dir, NULL, &genpd_summary_fops); 2573 + genpd_debugfs_dir, NULL, &genpd_summary_fops); 2662 2574 if (!d) 2663 2575 return -ENOMEM; 2664 2576 2665 2577 list_for_each_entry(genpd, &gpd_list, gpd_list_node) { 2666 - d = debugfs_create_dir(genpd->name, pm_genpd_debugfs_dir); 2578 + d = debugfs_create_dir(genpd->name, genpd_debugfs_dir); 2667 2579 if (!d) 2668 2580 return -ENOMEM; 2669 2581 ··· 2683 2595 2684 2596 return 0; 2685 2597 } 2686 - late_initcall(pm_genpd_debug_init); 2598 + late_initcall(genpd_debug_init); 2687 2599 2688 - static void __exit pm_genpd_debug_exit(void) 2600 + static void __exit genpd_debug_exit(void) 2689 2601 { 2690 - debugfs_remove_recursive(pm_genpd_debugfs_dir); 2602 + debugfs_remove_recursive(genpd_debugfs_dir); 2691 2603 } 2692 - __exitcall(pm_genpd_debug_exit); 2604 + __exitcall(genpd_debug_exit); 2693 2605 #endif /* CONFIG_DEBUG_FS */
+46 -27
drivers/base/power/domain_governor.c
··· 14 14 static int dev_update_qos_constraint(struct device *dev, void *data) 15 15 { 16 16 s64 *constraint_ns_p = data; 17 - s32 constraint_ns = -1; 17 + s64 constraint_ns; 18 18 19 - if (dev->power.subsys_data && dev->power.subsys_data->domain_data) 19 + if (dev->power.subsys_data && dev->power.subsys_data->domain_data) { 20 + /* 21 + * Only take suspend-time QoS constraints of devices into 22 + * account, because constraints updated after the device has 23 + * been suspended are not guaranteed to be taken into account 24 + * anyway. In order for them to take effect, the device has to 25 + * be resumed and suspended again. 26 + */ 20 27 constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns; 21 - 22 - if (constraint_ns < 0) { 28 + } else { 29 + /* 30 + * The child is not in a domain and there's no info on its 31 + * suspend/resume latencies, so assume them to be negligible and 32 + * take its current PM QoS constraint (that's the only thing 33 + * known at this point anyway). 34 + */ 23 35 constraint_ns = dev_pm_qos_read_value(dev); 24 36 constraint_ns *= NSEC_PER_USEC; 25 37 } 26 - if (constraint_ns == 0) 27 - return 0; 28 38 29 - /* 30 - * constraint_ns cannot be negative here, because the device has been 31 - * suspended. 
32 - */ 33 - if (constraint_ns < *constraint_ns_p || *constraint_ns_p == 0) 39 + if (constraint_ns < *constraint_ns_p) 34 40 *constraint_ns_p = constraint_ns; 35 41 36 42 return 0; ··· 64 58 } 65 59 td->constraint_changed = false; 66 60 td->cached_suspend_ok = false; 67 - td->effective_constraint_ns = -1; 61 + td->effective_constraint_ns = 0; 68 62 constraint_ns = __dev_pm_qos_read_value(dev); 69 63 70 64 spin_unlock_irqrestore(&dev->power.lock, flags); 71 65 72 - if (constraint_ns < 0) 66 + if (constraint_ns == 0) 73 67 return false; 74 68 75 69 constraint_ns *= NSEC_PER_USEC; ··· 82 76 device_for_each_child(dev, &constraint_ns, 83 77 dev_update_qos_constraint); 84 78 85 - if (constraint_ns > 0) { 79 + if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS) { 80 + /* "No restriction", so the device is allowed to suspend. */ 81 + td->effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS; 82 + td->cached_suspend_ok = true; 83 + } else if (constraint_ns == 0) { 84 + /* 85 + * This triggers if one of the children that don't belong to a 86 + * domain has a zero PM QoS constraint and it's better not to 87 + * suspend then. effective_constraint_ns is zero already and 88 + * cached_suspend_ok is false, so bail out. 89 + */ 90 + return false; 91 + } else { 86 92 constraint_ns -= td->suspend_latency_ns + 87 93 td->resume_latency_ns; 88 - if (constraint_ns == 0) 94 + /* 95 + * effective_constraint_ns is zero already and cached_suspend_ok 96 + * is false, so if the computed value is not positive, return 97 + * right away. 
98 + */ 99 + if (constraint_ns <= 0) 89 100 return false; 101 + 102 + td->effective_constraint_ns = constraint_ns; 103 + td->cached_suspend_ok = true; 90 104 } 91 - td->effective_constraint_ns = constraint_ns; 92 - td->cached_suspend_ok = constraint_ns >= 0; 93 105 94 106 /* 95 107 * The children have been suspended already, so we don't need to take ··· 168 144 */ 169 145 td = &to_gpd_data(pdd)->td; 170 146 constraint_ns = td->effective_constraint_ns; 171 - /* default_suspend_ok() need not be called before us. */ 172 - if (constraint_ns < 0) { 173 - constraint_ns = dev_pm_qos_read_value(pdd->dev); 174 - constraint_ns *= NSEC_PER_USEC; 175 - } 176 - if (constraint_ns == 0) 147 + /* 148 + * Zero means "no suspend at all" and this runs only when all 149 + * devices in the domain are suspended, so it must be positive. 150 + */ 151 + if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS) 177 152 continue; 178 153 179 - /* 180 - * constraint_ns cannot be negative here, because the device has 181 - * been suspended. 182 - */ 183 154 if (constraint_ns <= off_on_time_ns) 184 155 return false; 185 156
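The governor rework above changes what the constraint values mean: PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS now means "no restriction" (suspend freely), zero means "no suspend at all", and any other value must still leave positive headroom after subtracting the device's suspend and resume latencies. A small sketch of that three-way decision; `NO_CONSTRAINT_NS` here is an arbitrary sentinel, not the kernel's actual value:

```c
#include <assert.h>
#include <stdbool.h>

/* Arbitrary sentinel standing in for PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS. */
#define NO_CONSTRAINT_NS	(1LL << 62)

/* Models the classification in default_suspend_ok() after this patch. */
static bool suspend_ok(long long constraint_ns,
		       long long suspend_latency_ns,
		       long long resume_latency_ns)
{
	if (constraint_ns == NO_CONSTRAINT_NS)
		return true;		/* "no restriction" */
	if (constraint_ns == 0)
		return false;		/* zero now means "don't suspend" */
	/* Otherwise the transition latencies must fit within the budget. */
	return constraint_ns - suspend_latency_ns - resume_latency_ns > 0;
}
```

Before this series, a negative value carried the "no constraint" meaning, which is why the hunks above replace the `constraint_ns < 0` checks with explicit comparisons against zero and the new sentinel.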
-23
drivers/base/power/generic_ops.c
··· 9 9 #include <linux/pm.h> 10 10 #include <linux/pm_runtime.h> 11 11 #include <linux/export.h> 12 - #include <linux/suspend.h> 13 12 14 13 #ifdef CONFIG_PM 15 14 /** ··· 297 298 if (drv && drv->pm && drv->pm->complete) 298 299 drv->pm->complete(dev); 299 300 } 300 - 301 - /** 302 - * pm_complete_with_resume_check - Complete a device power transition. 303 - * @dev: Device to handle. 304 - * 305 - * Complete a device power transition during a system-wide power transition and 306 - * optionally schedule a runtime resume of the device if the system resume in 307 - * progress has been initated by the platform firmware and the device had its 308 - * power.direct_complete flag set. 309 - */ 310 - void pm_complete_with_resume_check(struct device *dev) 311 - { 312 - pm_generic_complete(dev); 313 - /* 314 - * If the device had been runtime-suspended before the system went into 315 - * the sleep state it is going out of and it has never been resumed till 316 - * now, resume it in case the firmware powered it up. 317 - */ 318 - if (dev->power.direct_complete && pm_resume_via_firmware()) 319 - pm_request_resume(dev); 320 - } 321 - EXPORT_SYMBOL_GPL(pm_complete_with_resume_check); 322 301 #endif /* CONFIG_PM_SLEEP */
+25 -28
drivers/base/power/main.c
··· 526 526 /*------------------------- Resume routines -------------------------*/ 527 527 528 528 /** 529 - * device_resume_noirq - Execute an "early resume" callback for given device. 529 + * device_resume_noirq - Execute a "noirq resume" callback for given device. 530 530 * @dev: Device to handle. 531 531 * @state: PM transition of the system being carried out. 532 532 * @async: If true, the device is being resumed asynchronously. ··· 846 846 goto Driver; 847 847 } 848 848 849 - if (dev->class) { 850 - if (dev->class->pm) { 851 - info = "class "; 852 - callback = pm_op(dev->class->pm, state); 853 - goto Driver; 854 - } else if (dev->class->resume) { 855 - info = "legacy class "; 856 - callback = dev->class->resume; 857 - goto End; 858 - } 849 + if (dev->class && dev->class->pm) { 850 + info = "class "; 851 + callback = pm_op(dev->class->pm, state); 852 + goto Driver; 859 853 } 860 854 861 855 if (dev->bus) { ··· 1075 1081 } 1076 1082 1077 1083 /** 1078 - * device_suspend_noirq - Execute a "late suspend" callback for given device. 1084 + * __device_suspend_noirq - Execute a "noirq suspend" callback for given device. 1079 1085 * @dev: Device to handle. 1080 1086 * @state: PM transition of the system being carried out. 1081 1087 * @async: If true, the device is being suspended asynchronously. ··· 1235 1241 } 1236 1242 1237 1243 /** 1238 - * device_suspend_late - Execute a "late suspend" callback for given device. 1244 + * __device_suspend_late - Execute a "late suspend" callback for given device. 1239 1245 * @dev: Device to handle. 1240 1246 * @state: PM transition of the system being carried out. 1241 1247 * @async: If true, the device is being suspended asynchronously. ··· 1437 1443 } 1438 1444 1439 1445 /** 1440 - * device_suspend - Execute "suspend" callbacks for given device. 1446 + * __device_suspend - Execute "suspend" callbacks for given device. 1441 1447 * @dev: Device to handle. 1442 1448 * @state: PM transition of the system being carried out. 
1443 1449 * @async: If true, the device is being suspended asynchronously. ··· 1500 1506 goto Run; 1501 1507 } 1502 1508 1503 - if (dev->class) { 1504 - if (dev->class->pm) { 1505 - info = "class "; 1506 - callback = pm_op(dev->class->pm, state); 1507 - goto Run; 1508 - } else if (dev->class->suspend) { 1509 - pm_dev_dbg(dev, state, "legacy class "); 1510 - error = legacy_suspend(dev, state, dev->class->suspend, 1511 - "legacy class "); 1512 - goto End; 1513 - } 1509 + if (dev->class && dev->class->pm) { 1510 + info = "class "; 1511 + callback = pm_op(dev->class->pm, state); 1512 + goto Run; 1514 1513 } 1515 1514 1516 1515 if (dev->bus) { ··· 1650 1663 if (dev->power.syscore) 1651 1664 return 0; 1652 1665 1666 + WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) && 1667 + !pm_runtime_enabled(dev)); 1668 + 1653 1669 /* 1654 1670 * If a device's parent goes into runtime suspend at the wrong time, 1655 1671 * it won't be possible to resume the device. To prevent this we ··· 1701 1711 * applies to suspend transitions, however. 
1702 1712 */ 1703 1713 spin_lock_irq(&dev->power.lock); 1704 - dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND; 1714 + dev->power.direct_complete = state.event == PM_EVENT_SUSPEND && 1715 + pm_runtime_suspended(dev) && ret > 0 && 1716 + !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP); 1705 1717 spin_unlock_irq(&dev->power.lock); 1706 1718 return 0; 1707 1719 } ··· 1852 1860 dev->power.no_pm_callbacks = 1853 1861 (!dev->bus || (pm_ops_is_empty(dev->bus->pm) && 1854 1862 !dev->bus->suspend && !dev->bus->resume)) && 1855 - (!dev->class || (pm_ops_is_empty(dev->class->pm) && 1856 - !dev->class->suspend && !dev->class->resume)) && 1863 + (!dev->class || pm_ops_is_empty(dev->class->pm)) && 1857 1864 (!dev->type || pm_ops_is_empty(dev->type->pm)) && 1858 1865 (!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) && 1859 1866 (!dev->driver || (pm_ops_is_empty(dev->driver->pm) && 1860 1867 !dev->driver->suspend && !dev->driver->resume)); 1861 1868 spin_unlock_irq(&dev->power.lock); 1869 + } 1870 + 1871 + bool dev_pm_smart_suspend_and_suspended(struct device *dev) 1872 + { 1873 + return dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) && 1874 + pm_runtime_status_suspended(dev); 1862 1875 }
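The device_prepare() hunk tightens when the direct_complete optimization may be used: besides a positive return from the ->prepare() callback, the transition must be a suspend, the device must already be runtime-suspended, and the driver must not have set DPM_FLAG_NEVER_SKIP. As a pure-logic sketch (the event constant is an illustrative stand-in and the flag is reduced to a bool):

```c
#include <assert.h>
#include <stdbool.h>

#define PM_EVENT_SUSPEND	2	/* illustrative stand-in */

/* Models the direct_complete assignment in device_prepare() after this patch. */
static bool direct_complete_ok(int event, bool runtime_suspended,
			       int prepare_ret, bool never_skip)
{
	return event == PM_EVENT_SUSPEND && runtime_suspended &&
	       prepare_ret > 0 && !never_skip;
}
```

All four conditions must hold; previously only the event type and the ->prepare() return value were consulted.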
drivers/base/power/opp/Makefile drivers/opp/Makefile
+138 -5
drivers/base/power/opp/core.c drivers/opp/core.c
··· 19 19 #include <linux/slab.h> 20 20 #include <linux/device.h> 21 21 #include <linux/export.h> 22 + #include <linux/pm_domain.h> 22 23 #include <linux/regulator/consumer.h> 23 24 24 25 #include "opp.h" ··· 297 296 opp_table = _find_opp_table(dev); 298 297 if (IS_ERR(opp_table)) { 299 298 count = PTR_ERR(opp_table); 300 - dev_err(dev, "%s: OPP table not found (%d)\n", 299 + dev_dbg(dev, "%s: OPP table not found (%d)\n", 301 300 __func__, count); 302 301 return count; 303 302 } ··· 536 535 return ret; 537 536 } 538 537 538 + static inline int 539 + _generic_set_opp_domain(struct device *dev, struct clk *clk, 540 + unsigned long old_freq, unsigned long freq, 541 + unsigned int old_pstate, unsigned int new_pstate) 542 + { 543 + int ret; 544 + 545 + /* Scaling up? Scale domain performance state before frequency */ 546 + if (freq > old_freq) { 547 + ret = dev_pm_genpd_set_performance_state(dev, new_pstate); 548 + if (ret) 549 + return ret; 550 + } 551 + 552 + ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq); 553 + if (ret) 554 + goto restore_domain_state; 555 + 556 + /* Scaling down? 
Scale domain performance state after frequency */ 557 + if (freq < old_freq) { 558 + ret = dev_pm_genpd_set_performance_state(dev, new_pstate); 559 + if (ret) 560 + goto restore_freq; 561 + } 562 + 563 + return 0; 564 + 565 + restore_freq: 566 + if (_generic_set_opp_clk_only(dev, clk, freq, old_freq)) 567 + dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n", 568 + __func__, old_freq); 569 + restore_domain_state: 570 + if (freq > old_freq) 571 + dev_pm_genpd_set_performance_state(dev, old_pstate); 572 + 573 + return ret; 574 + } 575 + 539 576 static int _generic_set_opp_regulator(const struct opp_table *opp_table, 540 577 struct device *dev, 541 578 unsigned long old_freq, ··· 692 653 693 654 /* Only frequency scaling */ 694 655 if (!opp_table->regulators) { 695 - ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq); 656 + /* 657 + * We don't support devices with both regulator and 658 + * domain performance-state for now. 659 + */ 660 + if (opp_table->genpd_performance_state) 661 + ret = _generic_set_opp_domain(dev, clk, old_freq, freq, 662 + IS_ERR(old_opp) ? 0 : old_opp->pstate, 663 + opp->pstate); 664 + else 665 + ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq); 696 666 } else if (!opp_table->set_opp) { 697 667 ret = _generic_set_opp_regulator(opp_table, dev, old_freq, freq, 698 668 IS_ERR(old_opp) ? 
NULL : old_opp->supplies, ··· 1035 987 mutex_unlock(&opp_table->lock); 1036 988 return ret; 1037 989 } 990 + 991 + if (opp_table->get_pstate) 992 + new_opp->pstate = opp_table->get_pstate(dev, new_opp->rate); 1038 993 1039 994 list_add(&new_opp->node, head); 1040 995 mutex_unlock(&opp_table->lock); ··· 1527 1476 EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper); 1528 1477 1529 1478 /** 1530 - * dev_pm_opp_register_put_opp_helper() - Releases resources blocked for 1479 + * dev_pm_opp_unregister_set_opp_helper() - Releases resources blocked for 1531 1480 * set_opp helper 1532 1481 * @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper(). 1533 1482 * 1534 1483 * Release resources blocked for platform specific set_opp helper. 1535 1484 */ 1536 - void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table) 1485 + void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table) 1537 1486 { 1538 1487 if (!opp_table->set_opp) { 1539 1488 pr_err("%s: Doesn't have custom set_opp helper set\n", ··· 1548 1497 1549 1498 dev_pm_opp_put_opp_table(opp_table); 1550 1499 } 1551 - EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper); 1500 + EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_set_opp_helper); 1501 + 1502 + /** 1503 + * dev_pm_opp_register_get_pstate_helper() - Register get_pstate() helper. 1504 + * @dev: Device for which the helper is getting registered. 1505 + * @get_pstate: Helper. 1506 + * 1507 + * TODO: Remove this callback after the same information is available via Device 1508 + * Tree. 1509 + * 1510 + * This allows a platform to initialize the performance states of individual 1511 + * OPPs for its devices, until we get similar information directly from DT. 1512 + * 1513 + * This must be called before the OPPs are initialized for the device. 
1514 + */ 1515 + struct opp_table *dev_pm_opp_register_get_pstate_helper(struct device *dev, 1516 + int (*get_pstate)(struct device *dev, unsigned long rate)) 1517 + { 1518 + struct opp_table *opp_table; 1519 + int ret; 1520 + 1521 + if (!get_pstate) 1522 + return ERR_PTR(-EINVAL); 1523 + 1524 + opp_table = dev_pm_opp_get_opp_table(dev); 1525 + if (!opp_table) 1526 + return ERR_PTR(-ENOMEM); 1527 + 1528 + /* This should be called before OPPs are initialized */ 1529 + if (WARN_ON(!list_empty(&opp_table->opp_list))) { 1530 + ret = -EBUSY; 1531 + goto err; 1532 + } 1533 + 1534 + /* Already have genpd_performance_state set */ 1535 + if (WARN_ON(opp_table->genpd_performance_state)) { 1536 + ret = -EBUSY; 1537 + goto err; 1538 + } 1539 + 1540 + opp_table->genpd_performance_state = true; 1541 + opp_table->get_pstate = get_pstate; 1542 + 1543 + return opp_table; 1544 + 1545 + err: 1546 + dev_pm_opp_put_opp_table(opp_table); 1547 + 1548 + return ERR_PTR(ret); 1549 + } 1550 + EXPORT_SYMBOL_GPL(dev_pm_opp_register_get_pstate_helper); 1551 + 1552 + /** 1553 + * dev_pm_opp_unregister_get_pstate_helper() - Releases resources blocked for 1554 + * get_pstate() helper 1555 + * @opp_table: OPP table returned from dev_pm_opp_register_get_pstate_helper(). 1556 + * 1557 + * Release resources blocked for platform specific get_pstate() helper. 
1558 + */ 1559 + void dev_pm_opp_unregister_get_pstate_helper(struct opp_table *opp_table) 1560 + { 1561 + if (!opp_table->genpd_performance_state) { 1562 + pr_err("%s: Doesn't have performance states set\n", 1563 + __func__); 1564 + return; 1565 + } 1566 + 1567 + /* Make sure there are no concurrent readers while updating opp_table */ 1568 + WARN_ON(!list_empty(&opp_table->opp_list)); 1569 + 1570 + opp_table->genpd_performance_state = false; 1571 + opp_table->get_pstate = NULL; 1572 + 1573 + dev_pm_opp_put_opp_table(opp_table); 1574 + } 1575 + EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_get_pstate_helper); 1552 1576 1553 1577 /** 1554 1578 * dev_pm_opp_add() - Add an OPP table from a table definitions ··· 1832 1706 if (remove_all || !opp->dynamic) 1833 1707 dev_pm_opp_put(opp); 1834 1708 } 1709 + 1710 + /* 1711 + * The OPP table is getting removed, drop the performance state 1712 + * constraints. 1713 + */ 1714 + if (opp_table->genpd_performance_state) 1715 + dev_pm_genpd_set_performance_state(dev, 0); 1835 1716 } else { 1836 1717 _remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table); 1837 1718 }
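The new _generic_set_opp_domain() above encodes the usual DVFS ordering rule: raise the domain performance state before the clock when scaling up, and lower it after the clock when scaling down, unwinding on failure. A userspace model that records the order of the two operations (all names hypothetical, failure paths omitted):

```c
#include <assert.h>
#include <string.h>

/* Records which operation ran in which order; stand-ins for the real calls. */
static char op_log[64];

static int set_freq(long freq)
{
	(void)freq;
	strcat(op_log, "F");	/* _generic_set_opp_clk_only() stand-in */
	return 0;
}

static int set_pstate(int state)
{
	(void)state;
	strcat(op_log, "P");	/* dev_pm_genpd_set_performance_state() stand-in */
	return 0;
}

/* Models the ordering in _generic_set_opp_domain(). */
static int set_opp_domain(long old_freq, long freq, int new_pstate)
{
	if (freq > old_freq)		/* scaling up: pstate before frequency */
		set_pstate(new_pstate);
	set_freq(freq);
	if (freq < old_freq)		/* scaling down: pstate after frequency */
		set_pstate(new_pstate);
	return 0;
}
```

The ordering guarantees the domain can always sustain the currently programmed frequency: the performance state is only ever lowered once the clock has already dropped, and only raised before the clock climbs.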
drivers/base/power/opp/cpu.c drivers/opp/cpu.c
+6 -4
drivers/base/power/opp/debugfs.c drivers/opp/debugfs.c
··· 41 41 { 42 42 struct dentry *d; 43 43 int i; 44 - char *name; 45 44 46 45 for (i = 0; i < opp_table->regulator_count; i++) { 47 - name = kasprintf(GFP_KERNEL, "supply-%d", i); 46 + char name[15]; 47 + 48 + snprintf(name, sizeof(name), "supply-%d", i); 48 49 49 50 /* Create per-opp directory */ 50 51 d = debugfs_create_dir(name, pdentry); 51 - 52 - kfree(name); 53 52 54 53 if (!d) 55 54 return false; ··· 97 98 return -ENOMEM; 98 99 99 100 if (!debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend)) 101 + return -ENOMEM; 102 + 103 + if (!debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate)) 100 104 return -ENOMEM; 101 105 102 106 if (!debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate))
+4 -2
drivers/base/power/opp/of.c drivers/opp/of.c
··· 16 16 #include <linux/cpu.h> 17 17 #include <linux/errno.h> 18 18 #include <linux/device.h> 19 - #include <linux/of.h> 19 + #include <linux/of_device.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/export.h> 22 22 ··· 397 397 dev_err(dev, "%s: Failed to add OPP, %d\n", __func__, 398 398 ret); 399 399 _dev_pm_opp_remove_table(opp_table, dev, false); 400 + of_node_put(np); 400 401 goto put_opp_table; 401 402 } 402 403 } ··· 604 603 if (cpu == cpu_dev->id) 605 604 continue; 606 605 607 - cpu_np = of_get_cpu_node(cpu, NULL); 606 + cpu_np = of_cpu_device_node_get(cpu); 608 607 if (!cpu_np) { 609 608 dev_err(cpu_dev, "%s: failed to get cpu%d node\n", 610 609 __func__, cpu); ··· 614 613 615 614 /* Get OPP descriptor node */ 616 615 tmp_np = _opp_of_get_opp_desc_node(cpu_np); 616 + of_node_put(cpu_np); 617 617 if (!tmp_np) { 618 618 pr_err("%pOF: Couldn't find opp node\n", cpu_np); 619 619 ret = -ENOENT;
+6
drivers/base/power/opp/opp.h drivers/opp/opp.h
··· 58 58 * @dynamic: not-created from static DT entries. 59 59 * @turbo: true if turbo (boost) OPP 60 60 * @suspend: true if suspend OPP 61 + * @pstate: Device's power domain's performance state. 61 62 * @rate: Frequency in hertz 62 63 * @supplies: Power supplies voltage/current values 63 64 * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's ··· 77 76 bool dynamic; 78 77 bool turbo; 79 78 bool suspend; 79 + unsigned int pstate; 80 80 unsigned long rate; 81 81 82 82 struct dev_pm_opp_supply *supplies; ··· 137 135 * @clk: Device's clock handle 138 136 * @regulators: Supply regulators 139 137 * @regulator_count: Number of power supply regulators 138 + * @genpd_performance_state: Device's power domain support performance state. 140 139 * @set_opp: Platform specific set_opp callback 141 140 * @set_opp_data: Data to be passed to set_opp callback 141 + * @get_pstate: Platform specific get_pstate callback 142 142 * @dentry: debugfs dentry pointer of the real device directory (not links). 143 143 * @dentry_name: Name of the real dentry. 144 144 * ··· 174 170 struct clk *clk; 175 171 struct regulator **regulators; 176 172 unsigned int regulator_count; 173 + bool genpd_performance_state; 177 174 178 175 int (*set_opp)(struct dev_pm_set_opp_data *data); 179 176 struct dev_pm_set_opp_data *set_opp_data; 177 + int (*get_pstate)(struct device *dev, unsigned long rate); 180 178 181 179 #ifdef CONFIG_DEBUG_FS 182 180 struct dentry *dentry;
+4 -1
drivers/base/power/qos.c
··· 139 139 140 140 switch(req->type) { 141 141 case DEV_PM_QOS_RESUME_LATENCY: 142 + if (WARN_ON(action != PM_QOS_REMOVE_REQ && value < 0)) 143 + value = 0; 144 + 142 145 ret = pm_qos_update_target(&qos->resume_latency, 143 146 &req->data.pnode, action, value); 144 147 break; ··· 192 189 plist_head_init(&c->list); 193 190 c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE; 194 191 c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE; 195 - c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE; 192 + c->no_constraint_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 196 193 c->type = PM_QOS_MIN; 197 194 c->notifiers = n; 198 195
+4 -5
drivers/base/power/runtime.c
··· 253 253 || (dev->power.request_pending 254 254 && dev->power.request == RPM_REQ_RESUME)) 255 255 retval = -EAGAIN; 256 - else if (__dev_pm_qos_read_value(dev) < 0) 256 + else if (__dev_pm_qos_read_value(dev) == 0) 257 257 retval = -EPERM; 258 258 else if (dev->power.runtime_status == RPM_SUSPENDED) 259 259 retval = 1; ··· 894 894 * 895 895 * Check if the time is right and queue a suspend request. 896 896 */ 897 - static void pm_suspend_timer_fn(unsigned long data) 897 + static void pm_suspend_timer_fn(struct timer_list *t) 898 898 { 899 - struct device *dev = (struct device *)data; 899 + struct device *dev = from_timer(dev, t, power.suspend_timer); 900 900 unsigned long flags; 901 901 unsigned long expires; 902 902 ··· 1499 1499 INIT_WORK(&dev->power.work, pm_runtime_work); 1500 1500 1501 1501 dev->power.timer_expires = 0; 1502 - setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn, 1503 - (unsigned long)dev); 1502 + timer_setup(&dev->power.suspend_timer, pm_suspend_timer_fn, 0); 1504 1503 1505 1504 init_waitqueue_head(&dev->power.wait_queue); 1506 1505 }
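The runtime.c hunk is part of the tree-wide timer API conversion: the callback now receives the struct timer_list pointer itself and recovers the containing structure with from_timer() instead of casting an unsigned long back to a pointer. from_timer() is container_of() underneath, which a standalone C model shows; `fake_device` below mimics the dev->power.suspend_timer embedding and is not kernel API:

```c
#include <assert.h>
#include <stddef.h>

struct timer_list { int pending; };	/* toy stand-in */

struct fake_device {
	int id;
	struct {
		struct timer_list suspend_timer;
	} power;
};

/* from_timer(dev, t, power.suspend_timer) boils down to this pointer math. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Models pm_suspend_timer_fn()'s recovery of the device from the timer. */
static struct fake_device *dev_from_timer(struct timer_list *t)
{
	return container_of(t, struct fake_device, power.suspend_timer);
}
```

Dropping the `(unsigned long)dev` cast lets the compiler type-check the callback, which is the point of the timer_setup()/from_timer() conversion.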
+21 -32
drivers/base/power/sysfs.c
··· 218 218 struct device_attribute *attr, 219 219 char *buf) 220 220 { 221 - return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev)); 221 + s32 value = dev_pm_qos_requested_resume_latency(dev); 222 + 223 + if (value == 0) 224 + return sprintf(buf, "n/a\n"); 225 + else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 226 + value = 0; 227 + 228 + return sprintf(buf, "%d\n", value); 222 229 } 223 230 224 231 static ssize_t pm_qos_resume_latency_store(struct device *dev, ··· 235 228 s32 value; 236 229 int ret; 237 230 238 - if (kstrtos32(buf, 0, &value)) 239 - return -EINVAL; 231 + if (!kstrtos32(buf, 0, &value)) { 232 + /* 233 + * Prevent users from writing negative or "no constraint" values 234 + * directly. 235 + */ 236 + if (value < 0 || value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 237 + return -EINVAL; 240 238 241 - if (value < 0) 239 + if (value == 0) 240 + value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 241 + } else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) { 242 + value = 0; 243 + } else { 242 244 return -EINVAL; 245 + } 243 246 244 247 ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req, 245 248 value); ··· 325 308 326 309 static DEVICE_ATTR(pm_qos_no_power_off, 0644, 327 310 pm_qos_no_power_off_show, pm_qos_no_power_off_store); 328 - 329 - static ssize_t pm_qos_remote_wakeup_show(struct device *dev, 330 - struct device_attribute *attr, 331 - char *buf) 332 - { 333 - return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev) 334 - & PM_QOS_FLAG_REMOTE_WAKEUP)); 335 - } 336 - 337 - static ssize_t pm_qos_remote_wakeup_store(struct device *dev, 338 - struct device_attribute *attr, 339 - const char *buf, size_t n) 340 - { 341 - int ret; 342 - 343 - if (kstrtoint(buf, 0, &ret)) 344 - return -EINVAL; 345 - 346 - if (ret != 0 && ret != 1) 347 - return -EINVAL; 348 - 349 - ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP, ret); 350 - return ret < 0 ? 
ret : n; 351 - } 352 - 353 - static DEVICE_ATTR(pm_qos_remote_wakeup, 0644, 354 - pm_qos_remote_wakeup_show, pm_qos_remote_wakeup_store); 355 311 356 312 #ifdef CONFIG_PM_SLEEP 357 313 static const char _enabled[] = "enabled"; ··· 661 671 662 672 static struct attribute *pm_qos_flags_attrs[] = { 663 673 &dev_attr_pm_qos_no_power_off.attr, 664 - &dev_attr_pm_qos_remote_wakeup.attr, 665 674 NULL, 666 675 }; 667 676 static const struct attribute_group pm_qos_flags_attr_group = {
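The rewritten store handler gives the sysfs attribute three input cases: a positive integer is taken literally, `0` now maps to the "no constraint" sentinel, and the literal string `"n/a"` maps to an internal 0 (runtime suspend prohibited). A user-space sketch of the same parse, assuming the sentinel is `S32_MAX` as in the kernel header:

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>

#define RESUME_LATENCY_NO_CONSTRAINT INT32_MAX  /* assumed sentinel value */

/* Parse a pm_qos_resume_latency write: returns the value to store
 * internally, or -1 for invalid input. Mirrors the three sysfs cases. */
long parse_resume_latency(const char *buf)
{
    char *end;
    long value;

    errno = 0;
    value = strtol(buf, &end, 0);
    if (end != buf && errno == 0 && (*end == '\0' || *end == '\n')) {
        /* Reject negatives and direct writes of the sentinel. */
        if (value < 0 || value == RESUME_LATENCY_NO_CONSTRAINT)
            return -1;
        return value ? value : RESUME_LATENCY_NO_CONSTRAINT;
    }
    if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n"))
        return 0;  /* "n/a": runtime suspend prohibited */
    return -1;
}
```

The show path inverts the mapping, which is why it prints `n/a` for an internal 0 and `0` for the sentinel.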
+5 -6
drivers/base/power/wakeup.c
··· 54 54 55 55 static DEFINE_SPINLOCK(events_lock); 56 56 57 - static void pm_wakeup_timer_fn(unsigned long data); 57 + static void pm_wakeup_timer_fn(struct timer_list *t); 58 58 59 59 static LIST_HEAD(wakeup_sources); 60 60 ··· 176 176 return; 177 177 178 178 spin_lock_init(&ws->lock); 179 - setup_timer(&ws->timer, pm_wakeup_timer_fn, (unsigned long)ws); 179 + timer_setup(&ws->timer, pm_wakeup_timer_fn, 0); 180 180 ws->active = false; 181 181 ws->last_time = ktime_get(); 182 182 ··· 481 481 * Use timer struct to check if the given source is initialized 482 482 * by wakeup_source_add. 483 483 */ 484 - return ws->timer.function != pm_wakeup_timer_fn || 485 - ws->timer.data != (unsigned long)ws; 484 + return ws->timer.function != (TIMER_FUNC_TYPE)pm_wakeup_timer_fn; 486 485 } 487 486 488 487 /* ··· 723 724 * in @data if it is currently active and its timer has not been canceled and 724 725 * the expiration time of the timer is not in future. 725 726 */ 726 - static void pm_wakeup_timer_fn(unsigned long data) 727 + static void pm_wakeup_timer_fn(struct timer_list *t) 727 728 { 728 - struct wakeup_source *ws = (struct wakeup_source *)data; 729 + struct wakeup_source *ws = from_timer(ws, t, timer); 729 730 unsigned long flags; 730 731 731 732 spin_lock_irqsave(&ws->lock, flags);
+12 -4
drivers/cpufreq/arm_big_little.c
··· 57 57 #define VIRT_FREQ(cluster, freq) ((cluster == A7_CLUSTER) ? freq >> 1 : freq) 58 58 59 59 static struct thermal_cooling_device *cdev[MAX_CLUSTERS]; 60 - static struct cpufreq_arm_bL_ops *arm_bL_ops; 60 + static const struct cpufreq_arm_bL_ops *arm_bL_ops; 61 61 static struct clk *clk[MAX_CLUSTERS]; 62 62 static struct cpufreq_frequency_table *freq_table[MAX_CLUSTERS + 1]; 63 63 static atomic_t cluster_usage[MAX_CLUSTERS + 1]; ··· 213 213 { 214 214 u32 cpu = policy->cpu, cur_cluster, new_cluster, actual_cluster; 215 215 unsigned int freqs_new; 216 + int ret; 216 217 217 218 cur_cluster = cpu_to_cluster(cpu); 218 219 new_cluster = actual_cluster = per_cpu(physical_cluster, cpu); ··· 230 229 } 231 230 } 232 231 233 - return bL_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new); 232 + ret = bL_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new); 233 + 234 + if (!ret) { 235 + arch_set_freq_scale(policy->related_cpus, freqs_new, 236 + policy->cpuinfo.max_freq); 237 + } 238 + 239 + return ret; 234 240 } 235 241 236 242 static inline u32 get_table_count(struct cpufreq_frequency_table *table) ··· 617 609 static int __bLs_unregister_notifier(void) { return 0; } 618 610 #endif 619 611 620 - int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops) 612 + int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops) 621 613 { 622 614 int ret, i; 623 615 ··· 661 653 } 662 654 EXPORT_SYMBOL_GPL(bL_cpufreq_register); 663 655 664 - void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops) 656 + void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops) 665 657 { 666 658 if (arm_bL_ops != ops) { 667 659 pr_err("%s: Registered with: %s, can't unregister, exiting\n",
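The `arch_set_freq_scale()` call added after a successful rate change feeds the scheduler a frequency-invariance ratio: current frequency over maximum, in fixed point with a 10-bit shift so that full speed reads as 1024 (`SCHED_CAPACITY_SCALE`). A sketch of that computation; the shift matches the kernel's constant, the helper name is ours:

```c
/* 1 << 10 == 1024 == SCHED_CAPACITY_SCALE: fixed-point "1.0". */
#define SCHED_CAPACITY_SHIFT 10

/* Ratio of current to maximum frequency, in 1/1024 units, as an
 * arch_set_freq_scale() implementation would compute it. */
unsigned long freq_scale(unsigned long cur_freq, unsigned long max_freq)
{
    return (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
}
```

The scheduler multiplies per-entity utilization by this factor so that work done at half clock speed is not mistaken for half the load.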
+2 -2
drivers/cpufreq/arm_big_little.h
··· 37 37 void (*free_opp_table)(const struct cpumask *cpumask); 38 38 }; 39 39 40 - int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops); 41 - void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops); 40 + int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops); 41 + void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops); 42 42 43 43 #endif /* CPUFREQ_ARM_BIG_LITTLE_H */
+1 -1
drivers/cpufreq/arm_big_little_dt.c
··· 61 61 return transition_latency; 62 62 } 63 63 64 - static struct cpufreq_arm_bL_ops dt_bL_ops = { 64 + static const struct cpufreq_arm_bL_ops dt_bL_ops = { 65 65 .name = "dt-bl", 66 66 .get_transition_latency = dt_get_transition_latency, 67 67 .init_opp_table = dev_pm_opp_of_cpumask_add_table,
-3
drivers/cpufreq/cpufreq-dt-platdev.c
··· 48 48 49 49 { .compatible = "samsung,exynos3250", }, 50 50 { .compatible = "samsung,exynos4210", }, 51 - { .compatible = "samsung,exynos4212", }, 52 51 { .compatible = "samsung,exynos5250", }, 53 52 #ifndef CONFIG_BL_SWITCHER 54 53 { .compatible = "samsung,exynos5800", }, ··· 81 82 { .compatible = "rockchip,rk3366", }, 82 83 { .compatible = "rockchip,rk3368", }, 83 84 { .compatible = "rockchip,rk3399", }, 84 - 85 - { .compatible = "socionext,uniphier-ld6b", }, 86 85 87 86 { .compatible = "st-ericsson,u8500", }, 88 87 { .compatible = "st-ericsson,u8540", },
+10 -2
drivers/cpufreq/cpufreq-dt.c
··· 43 43 static int set_target(struct cpufreq_policy *policy, unsigned int index) 44 44 { 45 45 struct private_data *priv = policy->driver_data; 46 + unsigned long freq = policy->freq_table[index].frequency; 47 + int ret; 46 48 47 - return dev_pm_opp_set_rate(priv->cpu_dev, 48 - policy->freq_table[index].frequency * 1000); 49 + ret = dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000); 50 + 51 + if (!ret) { 52 + arch_set_freq_scale(policy->related_cpus, freq, 53 + policy->cpuinfo.max_freq); 54 + } 55 + 56 + return ret; 49 57 } 50 58 51 59 /*
+6
drivers/cpufreq/cpufreq.c
··· 161 161 } 162 162 EXPORT_SYMBOL_GPL(get_cpu_idle_time); 163 163 164 + __weak void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq, 165 + unsigned long max_freq) 166 + { 167 + } 168 + EXPORT_SYMBOL_GPL(arch_set_freq_scale); 169 + 164 170 /* 165 171 * This is a generic cpufreq init() routine which can be used by cpufreq 166 172 * drivers of SMP systems. It will do following:
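The empty `__weak` definition lets architectures override `arch_set_freq_scale()` at link time while every other build silently gets the no-op. The mechanism is the GCC/clang `weak` attribute: a weak definition is used only if no strong definition of the same symbol is linked in. A minimal sketch with an illustrative symbol name:

```c
/* Weak default: overridden at link time if a strong definition of the
 * same symbol exists in another object file, the way an architecture
 * supplies its own arch_set_freq_scale(). */
__attribute__((weak)) int arch_freq_scale_hook(void)
{
    return 0;   /* no-op default */
}

int call_hook(void)
{
    /* With no strong override linked in, the weak no-op is used. */
    return arch_freq_scale_hook();
}
```

Callers need no `#ifdef`: the same call site works whether or not an architecture provides its own implementation.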
+5 -2
drivers/cpufreq/cpufreq_stats.c
··· 118 118 break; 119 119 len += snprintf(buf + len, PAGE_SIZE - len, "\n"); 120 120 } 121 - if (len >= PAGE_SIZE) 122 - return PAGE_SIZE; 121 + 122 + if (len >= PAGE_SIZE) { 123 + pr_warn_once("cpufreq transition table exceeds PAGE_SIZE. Disabling\n"); 124 + return -EFBIG; 125 + } 123 126 return len; 124 127 } 125 128 cpufreq_freq_attr_ro(trans_table);
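The stats fix relies on `snprintf()` semantics: it returns the length the output *would* have needed, even when the buffer is too small, so accumulating those return values and checking `len >= PAGE_SIZE` detects truncation after the fact. A sketch of the two pieces of that check:

```c
#include <stdio.h>
#include <stddef.h>

/* snprintf(NULL, 0, ...) returns the length the output would need;
 * cpufreq_stats sums such lengths per table entry. */
size_t entry_len(unsigned int freq)
{
    return (size_t)snprintf(NULL, 0, "%9u ", freq);
}

/* The overflow check itself: does the accumulated length still fit?
 * When it does not, the driver now returns -EFBIG instead of
 * handing userspace a silently truncated table. */
int fits_in_page(size_t len, size_t page_size)
{
    return len < page_size;
}
```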
+65 -20
drivers/cpufreq/imx6q-cpufreq.c
··· 12 12 #include <linux/err.h> 13 13 #include <linux/module.h> 14 14 #include <linux/of.h> 15 + #include <linux/of_address.h> 15 16 #include <linux/pm_opp.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/regulator/consumer.h> ··· 192 191 .suspend = cpufreq_generic_suspend, 193 192 }; 194 193 194 + #define OCOTP_CFG3 0x440 195 + #define OCOTP_CFG3_SPEED_SHIFT 16 196 + #define OCOTP_CFG3_SPEED_1P2GHZ 0x3 197 + #define OCOTP_CFG3_SPEED_996MHZ 0x2 198 + #define OCOTP_CFG3_SPEED_852MHZ 0x1 199 + 200 + static void imx6q_opp_check_speed_grading(struct device *dev) 201 + { 202 + struct device_node *np; 203 + void __iomem *base; 204 + u32 val; 205 + 206 + np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp"); 207 + if (!np) 208 + return; 209 + 210 + base = of_iomap(np, 0); 211 + if (!base) { 212 + dev_err(dev, "failed to map ocotp\n"); 213 + goto put_node; 214 + } 215 + 216 + /* 217 + * SPEED_GRADING[1:0] defines the max speed of ARM: 218 + * 2b'11: 1200000000Hz; 219 + * 2b'10: 996000000Hz; 220 + * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz. 221 + * 2b'00: 792000000Hz; 222 + * We need to set the max speed of ARM according to fuse map. 
223 + */ 224 + val = readl_relaxed(base + OCOTP_CFG3); 225 + val >>= OCOTP_CFG3_SPEED_SHIFT; 226 + val &= 0x3; 227 + 228 + if ((val != OCOTP_CFG3_SPEED_1P2GHZ) && 229 + of_machine_is_compatible("fsl,imx6q")) 230 + if (dev_pm_opp_disable(dev, 1200000000)) 231 + dev_warn(dev, "failed to disable 1.2GHz OPP\n"); 232 + if (val < OCOTP_CFG3_SPEED_996MHZ) 233 + if (dev_pm_opp_disable(dev, 996000000)) 234 + dev_warn(dev, "failed to disable 996MHz OPP\n"); 235 + if (of_machine_is_compatible("fsl,imx6q")) { 236 + if (val != OCOTP_CFG3_SPEED_852MHZ) 237 + if (dev_pm_opp_disable(dev, 852000000)) 238 + dev_warn(dev, "failed to disable 852MHz OPP\n"); 239 + } 240 + iounmap(base); 241 + put_node: 242 + of_node_put(np); 243 + } 244 + 195 245 static int imx6q_cpufreq_probe(struct platform_device *pdev) 196 246 { 197 247 struct device_node *np; ··· 304 252 goto put_reg; 305 253 } 306 254 307 - /* 308 - * We expect an OPP table supplied by platform. 309 - * Just, incase the platform did not supply the OPP 310 - * table, it will try to get it. 
311 - */ 255 + ret = dev_pm_opp_of_add_table(cpu_dev); 256 + if (ret < 0) { 257 + dev_err(cpu_dev, "failed to init OPP table: %d\n", ret); 258 + goto put_reg; 259 + } 260 + 261 + imx6q_opp_check_speed_grading(cpu_dev); 262 + 263 + /* Because we have added the OPPs here, we must free them */ 264 + free_opp = true; 312 265 num = dev_pm_opp_get_opp_count(cpu_dev); 313 266 if (num < 0) { 314 - ret = dev_pm_opp_of_add_table(cpu_dev); 315 - if (ret < 0) { 316 - dev_err(cpu_dev, "failed to init OPP table: %d\n", ret); 317 - goto put_reg; 318 - } 319 - 320 - /* Because we have added the OPPs here, we must free them */ 321 - free_opp = true; 322 - 323 - num = dev_pm_opp_get_opp_count(cpu_dev); 324 - if (num < 0) { 325 - ret = num; 326 - dev_err(cpu_dev, "no OPP table is found: %d\n", ret); 327 - goto out_free_opp; 328 - } 267 + ret = num; 268 + dev_err(cpu_dev, "no OPP table is found: %d\n", ret); 269 + goto out_free_opp; 329 270 } 330 271 331 272 ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
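The speed-grading fuse packs the maximum ARM clock into bits [17:16] of OCOTP_CFG3, and the driver disables every OPP above what the fuse allows. Extracting and interpreting the field, using the encodings listed in the comment above (the helper name is ours):

```c
#define OCOTP_CFG3_SPEED_SHIFT 16

/* Decode SPEED_GRADING[1:0] of OCOTP_CFG3 into a maximum rate in Hz. */
unsigned long speed_grade_to_hz(unsigned int cfg3)
{
    static const unsigned long grade_hz[] = {
        792000000,   /* 2'b00 */
        852000000,   /* 2'b01, i.MX6Q only */
        996000000,   /* 2'b10 */
        1200000000,  /* 2'b11 */
    };

    return grade_hz[(cfg3 >> OCOTP_CFG3_SPEED_SHIFT) & 0x3];
}
```

Note the driver deliberately adds the full OPP table first and then prunes it, which is why `free_opp` is now set unconditionally.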
+1 -1
drivers/cpufreq/powernow-k8.c
··· 1043 1043 1044 1044 data = kzalloc(sizeof(*data), GFP_KERNEL); 1045 1045 if (!data) { 1046 - pr_err("unable to alloc powernow_k8_data"); 1046 + pr_err("unable to alloc powernow_k8_data\n"); 1047 1047 return -ENOMEM; 1048 1048 } 1049 1049
+39 -152
drivers/cpufreq/pxa2xx-cpufreq.c
··· 58 58 MODULE_PARM_DESC(pxa27x_maxfreq, "Set the pxa27x maxfreq in MHz" 59 59 "(typically 624=>pxa270, 416=>pxa271, 520=>pxa272)"); 60 60 61 + struct pxa_cpufreq_data { 62 + struct clk *clk_core; 63 + }; 64 + static struct pxa_cpufreq_data pxa_cpufreq_data; 65 + 61 66 struct pxa_freqs { 62 67 unsigned int khz; 63 - unsigned int membus; 64 - unsigned int cccr; 65 - unsigned int div2; 66 - unsigned int cclkcfg; 67 68 int vmin; 68 69 int vmax; 69 70 }; 70 71 71 - /* Define the refresh period in mSec for the SDRAM and the number of rows */ 72 - #define SDRAM_TREF 64 /* standard 64ms SDRAM */ 73 - static unsigned int sdram_rows; 74 - 75 - #define CCLKCFG_TURBO 0x1 76 - #define CCLKCFG_FCS 0x2 77 - #define CCLKCFG_HALFTURBO 0x4 78 - #define CCLKCFG_FASTBUS 0x8 79 - #define MDREFR_DB2_MASK (MDREFR_K2DB2 | MDREFR_K1DB2) 80 - #define MDREFR_DRI_MASK 0xFFF 81 - 82 - #define MDCNFG_DRAC2(mdcnfg) (((mdcnfg) >> 21) & 0x3) 83 - #define MDCNFG_DRAC0(mdcnfg) (((mdcnfg) >> 5) & 0x3) 84 - 85 72 /* 86 73 * PXA255 definitions 87 74 */ 88 - /* Use the run mode frequencies for the CPUFREQ_POLICY_PERFORMANCE policy */ 89 - #define CCLKCFG CCLKCFG_TURBO | CCLKCFG_FCS 90 - 91 75 static const struct pxa_freqs pxa255_run_freqs[] = 92 76 { 93 - /* CPU MEMBUS CCCR DIV2 CCLKCFG run turbo PXbus SDRAM */ 94 - { 99500, 99500, 0x121, 1, CCLKCFG, -1, -1}, /* 99, 99, 50, 50 */ 95 - {132700, 132700, 0x123, 1, CCLKCFG, -1, -1}, /* 133, 133, 66, 66 */ 96 - {199100, 99500, 0x141, 0, CCLKCFG, -1, -1}, /* 199, 199, 99, 99 */ 97 - {265400, 132700, 0x143, 1, CCLKCFG, -1, -1}, /* 265, 265, 133, 66 */ 98 - {331800, 165900, 0x145, 1, CCLKCFG, -1, -1}, /* 331, 331, 166, 83 */ 99 - {398100, 99500, 0x161, 0, CCLKCFG, -1, -1}, /* 398, 398, 196, 99 */ 77 + /* CPU MEMBUS run turbo PXbus SDRAM */ 78 + { 99500, -1, -1}, /* 99, 99, 50, 50 */ 79 + {132700, -1, -1}, /* 133, 133, 66, 66 */ 80 + {199100, -1, -1}, /* 199, 199, 99, 99 */ 81 + {265400, -1, -1}, /* 265, 265, 133, 66 */ 82 + {331800, -1, -1}, /* 331, 331, 
166, 83 */ 83 + {398100, -1, -1}, /* 398, 398, 196, 99 */ 100 84 }; 101 85 102 86 /* Use the turbo mode frequencies for the CPUFREQ_POLICY_POWERSAVE policy */ 103 87 static const struct pxa_freqs pxa255_turbo_freqs[] = 104 88 { 105 - /* CPU MEMBUS CCCR DIV2 CCLKCFG run turbo PXbus SDRAM */ 106 - { 99500, 99500, 0x121, 1, CCLKCFG, -1, -1}, /* 99, 99, 50, 50 */ 107 - {199100, 99500, 0x221, 0, CCLKCFG, -1, -1}, /* 99, 199, 50, 99 */ 108 - {298500, 99500, 0x321, 0, CCLKCFG, -1, -1}, /* 99, 287, 50, 99 */ 109 - {298600, 99500, 0x1c1, 0, CCLKCFG, -1, -1}, /* 199, 287, 99, 99 */ 110 - {398100, 99500, 0x241, 0, CCLKCFG, -1, -1}, /* 199, 398, 99, 99 */ 89 + /* CPU run turbo PXbus SDRAM */ 90 + { 99500, -1, -1}, /* 99, 99, 50, 50 */ 91 + {199100, -1, -1}, /* 99, 199, 50, 99 */ 92 + {298500, -1, -1}, /* 99, 287, 50, 99 */ 93 + {298600, -1, -1}, /* 199, 287, 99, 99 */ 94 + {398100, -1, -1}, /* 199, 398, 99, 99 */ 111 95 }; 112 96 113 97 #define NUM_PXA25x_RUN_FREQS ARRAY_SIZE(pxa255_run_freqs) ··· 106 122 module_param(pxa255_turbo_table, uint, 0); 107 123 MODULE_PARM_DESC(pxa255_turbo_table, "Selects the frequency table (0 = run table, !0 = turbo table)"); 108 124 109 - /* 110 - * PXA270 definitions 111 - * 112 - * For the PXA27x: 113 - * Control variables are A, L, 2N for CCCR; B, HT, T for CLKCFG. 114 - * 115 - * A = 0 => memory controller clock from table 3-7, 116 - * A = 1 => memory controller clock = system bus clock 117 - * Run mode frequency = 13 MHz * L 118 - * Turbo mode frequency = 13 MHz * L * N 119 - * System bus frequency = 13 MHz * L / (B + 1) 120 - * 121 - * In CCCR: 122 - * A = 1 123 - * L = 16 oscillator to run mode ratio 124 - * 2N = 6 2 * (turbo mode to run mode ratio) 125 - * 126 - * In CCLKCFG: 127 - * B = 1 Fast bus mode 128 - * HT = 0 Half-Turbo mode 129 - * T = 1 Turbo mode 130 - * 131 - * For now, just support some of the combinations in table 3-7 of 132 - * PXA27x Processor Family Developer's Manual to simplify frequency 133 - * change sequences. 
134 - */ 135 - #define PXA27x_CCCR(A, L, N2) (A << 25 | N2 << 7 | L) 136 - #define CCLKCFG2(B, HT, T) \ 137 - (CCLKCFG_FCS | \ 138 - ((B) ? CCLKCFG_FASTBUS : 0) | \ 139 - ((HT) ? CCLKCFG_HALFTURBO : 0) | \ 140 - ((T) ? CCLKCFG_TURBO : 0)) 141 - 142 125 static struct pxa_freqs pxa27x_freqs[] = { 143 - {104000, 104000, PXA27x_CCCR(1, 8, 2), 0, CCLKCFG2(1, 0, 1), 900000, 1705000 }, 144 - {156000, 104000, PXA27x_CCCR(1, 8, 3), 0, CCLKCFG2(1, 0, 1), 1000000, 1705000 }, 145 - {208000, 208000, PXA27x_CCCR(0, 16, 2), 1, CCLKCFG2(0, 0, 1), 1180000, 1705000 }, 146 - {312000, 208000, PXA27x_CCCR(1, 16, 3), 1, CCLKCFG2(1, 0, 1), 1250000, 1705000 }, 147 - {416000, 208000, PXA27x_CCCR(1, 16, 4), 1, CCLKCFG2(1, 0, 1), 1350000, 1705000 }, 148 - {520000, 208000, PXA27x_CCCR(1, 16, 5), 1, CCLKCFG2(1, 0, 1), 1450000, 1705000 }, 149 - {624000, 208000, PXA27x_CCCR(1, 16, 6), 1, CCLKCFG2(1, 0, 1), 1550000, 1705000 } 126 + {104000, 900000, 1705000 }, 127 + {156000, 1000000, 1705000 }, 128 + {208000, 1180000, 1705000 }, 129 + {312000, 1250000, 1705000 }, 130 + {416000, 1350000, 1705000 }, 131 + {520000, 1450000, 1705000 }, 132 + {624000, 1550000, 1705000 } 150 133 }; 151 134 152 135 #define NUM_PXA27x_FREQS ARRAY_SIZE(pxa27x_freqs) ··· 192 241 } 193 242 } 194 243 195 - static void init_sdram_rows(void) 196 - { 197 - uint32_t mdcnfg = __raw_readl(MDCNFG); 198 - unsigned int drac2 = 0, drac0 = 0; 199 - 200 - if (mdcnfg & (MDCNFG_DE2 | MDCNFG_DE3)) 201 - drac2 = MDCNFG_DRAC2(mdcnfg); 202 - 203 - if (mdcnfg & (MDCNFG_DE0 | MDCNFG_DE1)) 204 - drac0 = MDCNFG_DRAC0(mdcnfg); 205 - 206 - sdram_rows = 1 << (11 + max(drac0, drac2)); 207 - } 208 - 209 - static u32 mdrefr_dri(unsigned int freq) 210 - { 211 - u32 interval = freq * SDRAM_TREF / sdram_rows; 212 - 213 - return (interval - (cpu_is_pxa27x() ? 
31 : 0)) / 32; 214 - } 215 - 216 244 static unsigned int pxa_cpufreq_get(unsigned int cpu) 217 245 { 218 - return get_clk_frequency_khz(0); 246 + struct pxa_cpufreq_data *data = cpufreq_get_driver_data(); 247 + 248 + return (unsigned int) clk_get_rate(data->clk_core) / 1000; 219 249 } 220 250 221 251 static int pxa_set_target(struct cpufreq_policy *policy, unsigned int idx) 222 252 { 223 253 struct cpufreq_frequency_table *pxa_freqs_table; 224 254 const struct pxa_freqs *pxa_freq_settings; 225 - unsigned long flags; 226 - unsigned int new_freq_cpu, new_freq_mem; 227 - unsigned int unused, preset_mdrefr, postset_mdrefr, cclkcfg; 255 + struct pxa_cpufreq_data *data = cpufreq_get_driver_data(); 256 + unsigned int new_freq_cpu; 228 257 int ret = 0; 229 258 230 259 /* Get the current policy */ 231 260 find_freq_tables(&pxa_freqs_table, &pxa_freq_settings); 232 261 233 262 new_freq_cpu = pxa_freq_settings[idx].khz; 234 - new_freq_mem = pxa_freq_settings[idx].membus; 235 263 236 264 if (freq_debug) 237 - pr_debug("Changing CPU frequency to %d Mhz, (SDRAM %d Mhz)\n", 238 - new_freq_cpu / 1000, (pxa_freq_settings[idx].div2) ? 239 - (new_freq_mem / 2000) : (new_freq_mem / 1000)); 265 + pr_debug("Changing CPU frequency from %d Mhz to %d Mhz\n", 266 + policy->cur / 1000, new_freq_cpu / 1000); 240 267 241 268 if (vcc_core && new_freq_cpu > policy->cur) { 242 269 ret = pxa_cpufreq_change_voltage(&pxa_freq_settings[idx]); ··· 222 293 return ret; 223 294 } 224 295 225 - /* Calculate the next MDREFR. If we're slowing down the SDRAM clock 226 - * we need to preset the smaller DRI before the change. If we're 227 - * speeding up we need to set the larger DRI value after the change. 
228 - */ 229 - preset_mdrefr = postset_mdrefr = __raw_readl(MDREFR); 230 - if ((preset_mdrefr & MDREFR_DRI_MASK) > mdrefr_dri(new_freq_mem)) { 231 - preset_mdrefr = (preset_mdrefr & ~MDREFR_DRI_MASK); 232 - preset_mdrefr |= mdrefr_dri(new_freq_mem); 233 - } 234 - postset_mdrefr = 235 - (postset_mdrefr & ~MDREFR_DRI_MASK) | mdrefr_dri(new_freq_mem); 236 - 237 - /* If we're dividing the memory clock by two for the SDRAM clock, this 238 - * must be set prior to the change. Clearing the divide must be done 239 - * after the change. 240 - */ 241 - if (pxa_freq_settings[idx].div2) { 242 - preset_mdrefr |= MDREFR_DB2_MASK; 243 - postset_mdrefr |= MDREFR_DB2_MASK; 244 - } else { 245 - postset_mdrefr &= ~MDREFR_DB2_MASK; 246 - } 247 - 248 - local_irq_save(flags); 249 - 250 - /* Set new the CCCR and prepare CCLKCFG */ 251 - writel(pxa_freq_settings[idx].cccr, CCCR); 252 - cclkcfg = pxa_freq_settings[idx].cclkcfg; 253 - 254 - asm volatile(" \n\ 255 - ldr r4, [%1] /* load MDREFR */ \n\ 256 - b 2f \n\ 257 - .align 5 \n\ 258 - 1: \n\ 259 - str %3, [%1] /* preset the MDREFR */ \n\ 260 - mcr p14, 0, %2, c6, c0, 0 /* set CCLKCFG[FCS] */ \n\ 261 - str %4, [%1] /* postset the MDREFR */ \n\ 262 - \n\ 263 - b 3f \n\ 264 - 2: b 1b \n\ 265 - 3: nop \n\ 266 - " 267 - : "=&r" (unused) 268 - : "r" (MDREFR), "r" (cclkcfg), 269 - "r" (preset_mdrefr), "r" (postset_mdrefr) 270 - : "r4", "r5"); 271 - local_irq_restore(flags); 296 + clk_set_rate(data->clk_core, new_freq_cpu * 1000); 272 297 273 298 /* 274 299 * Even if voltage setting fails, we don't report it, as the frequency ··· 251 368 pxa27x_guess_max_freq(); 252 369 253 370 pxa_cpufreq_init_voltages(); 254 - 255 - init_sdram_rows(); 256 371 257 372 /* set default policy and cpuinfo */ 258 373 policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */ ··· 310 429 .init = pxa_cpufreq_init, 311 430 .get = pxa_cpufreq_get, 312 431 .name = "PXA2xx", 432 + .driver_data = &pxa_cpufreq_data, 313 433 }; 314 434 315 435 static int __init 
pxa_cpu_init(void) 316 436 { 317 437 int ret = -ENODEV; 438 + 439 + pxa_cpufreq_data.clk_core = clk_get_sys(NULL, "core"); 440 + if (IS_ERR(pxa_cpufreq_data.clk_core)) 441 + return PTR_ERR(pxa_cpufreq_data.clk_core); 442 + 318 443 if (cpu_is_pxa25x() || cpu_is_pxa27x()) 319 444 ret = cpufreq_register_driver(&pxa_cpufreq_driver); 320 445 return ret;
+1 -1
drivers/cpufreq/scpi-cpufreq.c
··· 53 53 return ret; 54 54 } 55 55 56 - static struct cpufreq_arm_bL_ops scpi_cpufreq_ops = { 56 + static const struct cpufreq_arm_bL_ops scpi_cpufreq_ops = { 57 57 .name = "scpi", 58 58 .get_transition_latency = scpi_get_transition_latency, 59 59 .init_opp_table = scpi_init_opp_table,
+2 -2
drivers/cpufreq/spear-cpufreq.c
··· 177 177 178 178 np = of_cpu_device_node_get(0); 179 179 if (!np) { 180 - pr_err("No cpu node found"); 180 + pr_err("No cpu node found\n"); 181 181 return -ENODEV; 182 182 } 183 183 ··· 187 187 188 188 prop = of_find_property(np, "cpufreq_tbl", NULL); 189 189 if (!prop || !prop->value) { 190 - pr_err("Invalid cpufreq_tbl"); 190 + pr_err("Invalid cpufreq_tbl\n"); 191 191 ret = -ENODEV; 192 192 goto out_put_node; 193 193 }
+1 -1
drivers/cpufreq/speedstep-lib.c
··· 367 367 } else 368 368 return SPEEDSTEP_CPU_PIII_C; 369 369 } 370 - 370 + /* fall through */ 371 371 default: 372 372 return 0; 373 373 }
+5 -1
drivers/cpufreq/ti-cpufreq.c
··· 205 205 206 206 np = of_find_node_by_path("/"); 207 207 match = of_match_node(ti_cpufreq_of_match, np); 208 + of_node_put(np); 208 209 if (!match) 209 210 return -ENODEV; 210 211 ··· 218 217 opp_data->cpu_dev = get_cpu_device(0); 219 218 if (!opp_data->cpu_dev) { 220 219 pr_err("%s: Failed to get device for CPU0\n", __func__); 221 - return -ENODEV; 220 + ret = ENODEV; 221 + goto free_opp_data; 222 222 } 223 223 224 224 opp_data->opp_node = dev_pm_opp_of_get_opp_desc_node(opp_data->cpu_dev); ··· 264 262 265 263 fail_put_node: 266 264 of_node_put(opp_data->opp_node); 265 + free_opp_data: 266 + kfree(opp_data); 267 267 268 268 return ret; 269 269 }
+1 -1
drivers/cpufreq/vexpress-spc-cpufreq.c
··· 42 42 return 1000000; /* 1 ms */ 43 43 } 44 44 45 - static struct cpufreq_arm_bL_ops ve_spc_cpufreq_ops = { 45 + static const struct cpufreq_arm_bL_ops ve_spc_cpufreq_ops = { 46 46 .name = "vexpress-spc", 47 47 .get_transition_latency = ve_spc_get_transition_latency, 48 48 .init_opp_table = ve_spc_init_opp_table,
+88 -65
drivers/cpuidle/cpuidle-arm.c
··· 72 72 }; 73 73 74 74 /* 75 - * arm_idle_init 75 + * arm_idle_init_cpu 76 76 * 77 77 * Registers the arm specific cpuidle driver with the cpuidle 78 78 * framework. It relies on core code to parse the idle states 79 79 * and initialize them using driver data structures accordingly. 80 + */ 81 + static int __init arm_idle_init_cpu(int cpu) 82 + { 83 + int ret; 84 + struct cpuidle_driver *drv; 85 + struct cpuidle_device *dev; 86 + 87 + drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL); 88 + if (!drv) 89 + return -ENOMEM; 90 + 91 + drv->cpumask = (struct cpumask *)cpumask_of(cpu); 92 + 93 + /* 94 + * Initialize idle states data, starting at index 1. This 95 + * driver is DT only, if no DT idle states are detected (ret 96 + * == 0) let the driver initialization fail accordingly since 97 + * there is no reason to initialize the idle driver if only 98 + * wfi is supported. 99 + */ 100 + ret = dt_init_idle_driver(drv, arm_idle_state_match, 1); 101 + if (ret <= 0) { 102 + ret = ret ? : -ENODEV; 103 + goto out_kfree_drv; 104 + } 105 + 106 + ret = cpuidle_register_driver(drv); 107 + if (ret) { 108 + pr_err("Failed to register cpuidle driver\n"); 109 + goto out_kfree_drv; 110 + } 111 + 112 + /* 113 + * Call arch CPU operations in order to initialize 114 + * idle states suspend back-end specific data 115 + */ 116 + ret = arm_cpuidle_init(cpu); 117 + 118 + /* 119 + * Skip the cpuidle device initialization if the reported 120 + * failure is a HW misconfiguration/breakage (-ENXIO). 
121 + */ 122 + if (ret == -ENXIO) 123 + return 0; 124 + 125 + if (ret) { 126 + pr_err("CPU %d failed to init idle CPU ops\n", cpu); 127 + goto out_unregister_drv; 128 + } 129 + 130 + dev = kzalloc(sizeof(*dev), GFP_KERNEL); 131 + if (!dev) { 132 + pr_err("Failed to allocate cpuidle device\n"); 133 + ret = -ENOMEM; 134 + goto out_unregister_drv; 135 + } 136 + dev->cpu = cpu; 137 + 138 + ret = cpuidle_register_device(dev); 139 + if (ret) { 140 + pr_err("Failed to register cpuidle device for CPU %d\n", 141 + cpu); 142 + goto out_kfree_dev; 143 + } 144 + 145 + return 0; 146 + 147 + out_kfree_dev: 148 + kfree(dev); 149 + out_unregister_drv: 150 + cpuidle_unregister_driver(drv); 151 + out_kfree_drv: 152 + kfree(drv); 153 + return ret; 154 + } 155 + 156 + /* 157 + * arm_idle_init - Initializes arm cpuidle driver 158 + * 159 + * Initializes arm cpuidle driver for all CPUs, if any CPU fails 160 + * to register cpuidle driver then rollback to cancel all CPUs 161 + * registeration. 80 162 */ 81 163 static int __init arm_idle_init(void) 82 164 { ··· 167 85 struct cpuidle_device *dev; 168 86 169 87 for_each_possible_cpu(cpu) { 170 - 171 - drv = kmemdup(&arm_idle_driver, sizeof(*drv), GFP_KERNEL); 172 - if (!drv) { 173 - ret = -ENOMEM; 88 + ret = arm_idle_init_cpu(cpu); 89 + if (ret) 174 90 goto out_fail; 175 - } 176 - 177 - drv->cpumask = (struct cpumask *)cpumask_of(cpu); 178 - 179 - /* 180 - * Initialize idle states data, starting at index 1. This 181 - * driver is DT only, if no DT idle states are detected (ret 182 - * == 0) let the driver initialization fail accordingly since 183 - * there is no reason to initialize the idle driver if only 184 - * wfi is supported. 185 - */ 186 - ret = dt_init_idle_driver(drv, arm_idle_state_match, 1); 187 - if (ret <= 0) { 188 - ret = ret ? 
: -ENODEV; 189 - goto init_fail; 190 - } 191 - 192 - ret = cpuidle_register_driver(drv); 193 - if (ret) { 194 - pr_err("Failed to register cpuidle driver\n"); 195 - goto init_fail; 196 - } 197 - 198 - /* 199 - * Call arch CPU operations in order to initialize 200 - * idle states suspend back-end specific data 201 - */ 202 - ret = arm_cpuidle_init(cpu); 203 - 204 - /* 205 - * Skip the cpuidle device initialization if the reported 206 - * failure is a HW misconfiguration/breakage (-ENXIO). 207 - */ 208 - if (ret == -ENXIO) 209 - continue; 210 - 211 - if (ret) { 212 - pr_err("CPU %d failed to init idle CPU ops\n", cpu); 213 - goto out_fail; 214 - } 215 - 216 - dev = kzalloc(sizeof(*dev), GFP_KERNEL); 217 - if (!dev) { 218 - pr_err("Failed to allocate cpuidle device\n"); 219 - ret = -ENOMEM; 220 - goto out_fail; 221 - } 222 - dev->cpu = cpu; 223 - 224 - ret = cpuidle_register_device(dev); 225 - if (ret) { 226 - pr_err("Failed to register cpuidle device for CPU %d\n", 227 - cpu); 228 - kfree(dev); 229 - goto out_fail; 230 - } 231 91 } 232 92 233 93 return 0; 234 - init_fail: 235 - kfree(drv); 94 + 236 95 out_fail: 237 96 while (--cpu >= 0) { 238 97 dev = per_cpu(cpuidle_devices, cpu); 98 + drv = cpuidle_get_cpu_driver(dev); 239 99 cpuidle_unregister_device(dev); 240 - kfree(dev); 241 - drv = cpuidle_get_driver(); 242 100 cpuidle_unregister_driver(drv); 101 + kfree(dev); 243 102 kfree(drv); 244 103 } 245 104
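Splitting per-CPU registration into `arm_idle_init_cpu()` makes the error handling uniform: each helper unwinds only its own partial work, and the caller rolls back fully initialized CPUs in reverse order on the first failure. The shape of that rollback loop, with stub init/teardown functions standing in for the cpuidle calls:

```c
#include <string.h>

#define NCPUS 4

static int initialized[NCPUS];

/* Stub per-CPU init: fails at 'fail_at' to exercise the rollback. */
static int init_cpu(int cpu, int fail_at)
{
    if (cpu == fail_at)
        return -1;
    initialized[cpu] = 1;
    return 0;
}

static void teardown_cpu(int cpu)
{
    initialized[cpu] = 0;
}

/* Mirrors arm_idle_init(): init all CPUs, roll back on first failure. */
int init_all(int fail_at)
{
    int cpu, ret = 0;

    memset(initialized, 0, sizeof(initialized));
    for (cpu = 0; cpu < NCPUS; cpu++) {
        ret = init_cpu(cpu, fail_at);
        if (ret)
            goto out_fail;
    }
    return 0;

out_fail:
    while (--cpu >= 0)
        teardown_cpu(cpu);
    return ret;
}

/* Count CPUs left initialized: all of them on success, none on failure. */
int count_initialized(void)
{
    int cpu, n = 0;

    for (cpu = 0; cpu < NCPUS; cpu++)
        n += initialized[cpu];
    return n;
}
```

The fix in the real rollback path is also visible above: the per-CPU driver is looked up via `cpuidle_get_cpu_driver(dev)` before the device is unregistered, since `cpuidle_get_driver()` would return the current CPU's driver, not the one being torn down.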
+10 -4
drivers/cpuidle/cpuidle.c
··· 208 208 return -EBUSY; 209 209 } 210 210 target_state = &drv->states[index]; 211 + broadcast = false; 211 212 } 212 213 213 214 /* Take note of the planned idle state. */ ··· 388 387 if (dev->enabled) 389 388 return 0; 390 389 390 + if (!cpuidle_curr_governor) 391 + return -EIO; 392 + 391 393 drv = cpuidle_get_cpu_driver(dev); 392 394 393 - if (!drv || !cpuidle_curr_governor) 395 + if (!drv) 394 396 return -EIO; 395 397 396 398 if (!dev->registered) ··· 403 399 if (ret) 404 400 return ret; 405 401 406 - if (cpuidle_curr_governor->enable && 407 - (ret = cpuidle_curr_governor->enable(drv, dev))) 408 - goto fail_sysfs; 402 + if (cpuidle_curr_governor->enable) { 403 + ret = cpuidle_curr_governor->enable(drv, dev); 404 + if (ret) 405 + goto fail_sysfs; 406 + } 409 407 410 408 smp_wmb(); 411 409
+7
drivers/cpuidle/governors/ladder.c
··· 17 17 #include <linux/pm_qos.h> 18 18 #include <linux/jiffies.h> 19 19 #include <linux/tick.h> 20 + #include <linux/cpu.h> 20 21 21 22 #include <asm/io.h> 22 23 #include <linux/uaccess.h> ··· 68 67 struct cpuidle_device *dev) 69 68 { 70 69 struct ladder_device *ldev = this_cpu_ptr(&ladder_devices); 70 + struct device *device = get_cpu_device(dev->cpu); 71 71 struct ladder_device_state *last_state; 72 72 int last_residency, last_idx = ldev->last_state_idx; 73 73 int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0; 74 74 int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY); 75 + int resume_latency = dev_pm_qos_raw_read_value(device); 76 + 77 + if (resume_latency < latency_req && 78 + resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 79 + latency_req = resume_latency; 75 80 76 81 /* Special case when user has set very strict latency requirement */ 77 82 if (unlikely(latency_req == 0)) {
+2 -2
drivers/cpuidle/governors/menu.c
··· 298 298 data->needs_update = 0; 299 299 } 300 300 301 - /* resume_latency is 0 means no restriction */ 302 - if (resume_latency && resume_latency < latency_req) 301 + if (resume_latency < latency_req && 302 + resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 303 303 latency_req = resume_latency; 304 304 305 305 /* Special case when user has set very strict latency requirement */
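Both governors now treat the sentinel, not 0, as "no constraint": a per-device resume latency tightens the global requirement only when it is smaller *and* is not the sentinel, so a device value of 0 ("don't power off") becomes the strictest possible constraint. As a pure function, again assuming the sentinel is `S32_MAX`:

```c
#include <stdint.h>

#define RESUME_LATENCY_NO_CONSTRAINT INT32_MAX  /* assumed sentinel value */

/* Effective latency requirement after applying a per-device limit,
 * mirroring the menu and ladder governor checks. */
int32_t effective_latency(int32_t global_req, int32_t resume_latency)
{
    if (resume_latency < global_req &&
        resume_latency != RESUME_LATENCY_NO_CONSTRAINT)
        return resume_latency;
    return global_req;
}
```

Under the old convention, `resume_latency == 0` was skipped as "no restriction"; under the new one it correctly forces the latency requirement to 0.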
+101 -40
drivers/devfreq/devfreq.c
··· 28 28 #include <linux/of.h> 29 29 #include "governor.h" 30 30 31 + #define MAX(a,b) ((a > b) ? a : b) 32 + #define MIN(a,b) ((a < b) ? a : b) 33 + 31 34 static struct class *devfreq_class; 32 35 33 36 /* ··· 72 69 return ERR_PTR(-ENODEV); 73 70 } 74 71 72 + static unsigned long find_available_min_freq(struct devfreq *devfreq) 73 + { 74 + struct dev_pm_opp *opp; 75 + unsigned long min_freq = 0; 76 + 77 + opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &min_freq); 78 + if (IS_ERR(opp)) 79 + min_freq = 0; 80 + else 81 + dev_pm_opp_put(opp); 82 + 83 + return min_freq; 84 + } 85 + 86 + static unsigned long find_available_max_freq(struct devfreq *devfreq) 87 + { 88 + struct dev_pm_opp *opp; 89 + unsigned long max_freq = ULONG_MAX; 90 + 91 + opp = dev_pm_opp_find_freq_floor(devfreq->dev.parent, &max_freq); 92 + if (IS_ERR(opp)) 93 + max_freq = 0; 94 + else 95 + dev_pm_opp_put(opp); 96 + 97 + return max_freq; 98 + } 99 + 75 100 /** 76 101 * devfreq_get_freq_level() - Lookup freq_table for the frequency 77 102 * @devfreq: the devfreq instance ··· 116 85 return -EINVAL; 117 86 } 118 87 119 - /** 120 - * devfreq_set_freq_table() - Initialize freq_table for the frequency 121 - * @devfreq: the devfreq instance 122 - */ 123 - static void devfreq_set_freq_table(struct devfreq *devfreq) 88 + static int set_freq_table(struct devfreq *devfreq) 124 89 { 125 90 struct devfreq_dev_profile *profile = devfreq->profile; 126 91 struct dev_pm_opp *opp; ··· 126 99 /* Initialize the freq_table from OPP table */ 127 100 count = dev_pm_opp_get_opp_count(devfreq->dev.parent); 128 101 if (count <= 0) 129 - return; 102 + return -EINVAL; 130 103 131 104 profile->max_state = count; 132 105 profile->freq_table = devm_kcalloc(devfreq->dev.parent, ··· 135 108 GFP_KERNEL); 136 109 if (!profile->freq_table) { 137 110 profile->max_state = 0; 138 - return; 111 + return -ENOMEM; 139 112 } 140 113 141 114 for (i = 0, freq = 0; i < profile->max_state; i++, freq++) { ··· 143 116 if (IS_ERR(opp)) { 144 
117 devm_kfree(devfreq->dev.parent, profile->freq_table); 145 118 profile->max_state = 0; 146 - return; 119 + return PTR_ERR(opp); 147 120 } 148 121 dev_pm_opp_put(opp); 149 122 profile->freq_table[i] = freq; 150 123 } 124 + 125 + return 0; 151 126 } 152 127 153 128 /** ··· 256 227 int update_devfreq(struct devfreq *devfreq) 257 228 { 258 229 struct devfreq_freqs freqs; 259 - unsigned long freq, cur_freq; 230 + unsigned long freq, cur_freq, min_freq, max_freq; 260 231 int err = 0; 261 232 u32 flags = 0; 262 233 ··· 274 245 return err; 275 246 276 247 /* 277 - * Adjust the frequency with user freq and QoS. 248 + * Adjust the frequency with user freq, QoS and available freq. 278 249 * 279 250 * List from the highest priority 280 251 * max_freq 281 252 * min_freq 282 253 */ 254 + max_freq = MIN(devfreq->scaling_max_freq, devfreq->max_freq); 255 + min_freq = MAX(devfreq->scaling_min_freq, devfreq->min_freq); 283 256 284 - if (devfreq->min_freq && freq < devfreq->min_freq) { 285 - freq = devfreq->min_freq; 257 + if (min_freq && freq < min_freq) { 258 + freq = min_freq; 286 259 flags &= ~DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use GLB */ 287 260 } 288 - if (devfreq->max_freq && freq > devfreq->max_freq) { 289 - freq = devfreq->max_freq; 261 + if (max_freq && freq > max_freq) { 262 + freq = max_freq; 290 263 flags |= DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use LUB */ 291 264 } 292 265 ··· 311 280 freqs.new = freq; 312 281 devfreq_notify_transition(devfreq, &freqs, DEVFREQ_POSTCHANGE); 313 282 314 - if (devfreq->profile->freq_table) 315 - if (devfreq_update_status(devfreq, freq)) 316 - dev_err(&devfreq->dev, 317 - "Couldn't update frequency transition information.\n"); 283 + if (devfreq_update_status(devfreq, freq)) 284 + dev_err(&devfreq->dev, 285 + "Couldn't update frequency transition information.\n"); 318 286 319 287 devfreq->previous_freq = freq; 320 288 return err; ··· 496 466 int ret; 497 467 498 468 mutex_lock(&devfreq->lock); 469 + 470 + devfreq->scaling_min_freq = 
find_available_min_freq(devfreq); 471 + if (!devfreq->scaling_min_freq) { 472 + mutex_unlock(&devfreq->lock); 473 + return -EINVAL; 474 + } 475 + 476 + devfreq->scaling_max_freq = find_available_max_freq(devfreq); 477 + if (!devfreq->scaling_max_freq) { 478 + mutex_unlock(&devfreq->lock); 479 + return -EINVAL; 480 + } 481 + 499 482 ret = update_devfreq(devfreq); 500 483 mutex_unlock(&devfreq->lock); 501 484 ··· 598 555 599 556 if (!devfreq->profile->max_state && !devfreq->profile->freq_table) { 600 557 mutex_unlock(&devfreq->lock); 601 - devfreq_set_freq_table(devfreq); 558 + err = set_freq_table(devfreq); 559 + if (err < 0) 560 + goto err_out; 602 561 mutex_lock(&devfreq->lock); 603 562 } 563 + 564 + devfreq->min_freq = find_available_min_freq(devfreq); 565 + if (!devfreq->min_freq) { 566 + mutex_unlock(&devfreq->lock); 567 + err = -EINVAL; 568 + goto err_dev; 569 + } 570 + devfreq->scaling_min_freq = devfreq->min_freq; 571 + 572 + devfreq->max_freq = find_available_max_freq(devfreq); 573 + if (!devfreq->max_freq) { 574 + mutex_unlock(&devfreq->lock); 575 + err = -EINVAL; 576 + goto err_dev; 577 + } 578 + devfreq->scaling_max_freq = devfreq->max_freq; 604 579 605 580 dev_set_name(&devfreq->dev, "devfreq%d", 606 581 atomic_inc_return(&devfreq_no)); ··· 1143 1082 return ret; 1144 1083 } 1145 1084 1085 + static ssize_t min_freq_show(struct device *dev, struct device_attribute *attr, 1086 + char *buf) 1087 + { 1088 + struct devfreq *df = to_devfreq(dev); 1089 + 1090 + return sprintf(buf, "%lu\n", MAX(df->scaling_min_freq, df->min_freq)); 1091 + } 1092 + 1146 1093 static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr, 1147 1094 const char *buf, size_t count) 1148 1095 { ··· 1177 1108 mutex_unlock(&df->lock); 1178 1109 return ret; 1179 1110 } 1180 - 1181 - #define show_one(name) \ 1182 - static ssize_t name##_show \ 1183 - (struct device *dev, struct device_attribute *attr, char *buf) \ 1184 - { \ 1185 - return sprintf(buf, "%lu\n", 
to_devfreq(dev)->name); \ 1186 - } 1187 - show_one(min_freq); 1188 - show_one(max_freq); 1189 - 1190 1111 static DEVICE_ATTR_RW(min_freq); 1112 + 1113 + static ssize_t max_freq_show(struct device *dev, struct device_attribute *attr, 1114 + char *buf) 1115 + { 1116 + struct devfreq *df = to_devfreq(dev); 1117 + 1118 + return sprintf(buf, "%lu\n", MIN(df->scaling_max_freq, df->max_freq)); 1119 + } 1191 1120 static DEVICE_ATTR_RW(max_freq); 1192 1121 1193 1122 static ssize_t available_frequencies_show(struct device *d, ··· 1193 1126 char *buf) 1194 1127 { 1195 1128 struct devfreq *df = to_devfreq(d); 1196 - struct device *dev = df->dev.parent; 1197 - struct dev_pm_opp *opp; 1198 1129 ssize_t count = 0; 1199 - unsigned long freq = 0; 1130 + int i; 1200 1131 1201 - do { 1202 - opp = dev_pm_opp_find_freq_ceil(dev, &freq); 1203 - if (IS_ERR(opp)) 1204 - break; 1132 + mutex_lock(&df->lock); 1205 1133 1206 - dev_pm_opp_put(opp); 1134 + for (i = 0; i < df->profile->max_state; i++) 1207 1135 count += scnprintf(&buf[count], (PAGE_SIZE - count - 2), 1208 - "%lu ", freq); 1209 - freq++; 1210 - } while (1); 1136 + "%lu ", df->profile->freq_table[i]); 1211 1137 1138 + mutex_unlock(&df->lock); 1212 1139 /* Truncate the trailing space */ 1213 1140 if (count) 1214 1141 count--;
+3 -2
drivers/devfreq/exynos-bus.c
··· 436 436 ondemand_data->downdifferential = 5; 437 437 438 438 /* Add devfreq device to monitor and handle the exynos bus */ 439 - bus->devfreq = devm_devfreq_add_device(dev, profile, "simple_ondemand", 439 + bus->devfreq = devm_devfreq_add_device(dev, profile, 440 + DEVFREQ_GOV_SIMPLE_ONDEMAND, 440 441 ondemand_data); 441 442 if (IS_ERR(bus->devfreq)) { 442 443 dev_err(dev, "failed to add devfreq device\n"); ··· 489 488 passive_data->parent = parent_devfreq; 490 489 491 490 /* Add devfreq device for exynos bus with passive governor */ 492 - bus->devfreq = devm_devfreq_add_device(dev, profile, "passive", 491 + bus->devfreq = devm_devfreq_add_device(dev, profile, DEVFREQ_GOV_PASSIVE, 493 492 passive_data); 494 493 if (IS_ERR(bus->devfreq)) { 495 494 dev_err(dev,
+1 -1
drivers/devfreq/governor_passive.c
··· 183 183 } 184 184 185 185 static struct devfreq_governor devfreq_passive = { 186 - .name = "passive", 186 + .name = DEVFREQ_GOV_PASSIVE, 187 187 .immutable = 1, 188 188 .get_target_freq = devfreq_passive_get_target_freq, 189 189 .event_handler = devfreq_passive_event_handler,
+1 -1
drivers/devfreq/governor_performance.c
··· 42 42 } 43 43 44 44 static struct devfreq_governor devfreq_performance = { 45 - .name = "performance", 45 + .name = DEVFREQ_GOV_PERFORMANCE, 46 46 .get_target_freq = devfreq_performance_func, 47 47 .event_handler = devfreq_performance_handler, 48 48 };
+1 -1
drivers/devfreq/governor_powersave.c
··· 39 39 } 40 40 41 41 static struct devfreq_governor devfreq_powersave = { 42 - .name = "powersave", 42 + .name = DEVFREQ_GOV_POWERSAVE, 43 43 .get_target_freq = devfreq_powersave_func, 44 44 .event_handler = devfreq_powersave_handler, 45 45 };
+1 -1
drivers/devfreq/governor_simpleondemand.c
··· 125 125 } 126 126 127 127 static struct devfreq_governor devfreq_simple_ondemand = { 128 - .name = "simple_ondemand", 128 + .name = DEVFREQ_GOV_SIMPLE_ONDEMAND, 129 129 .get_target_freq = devfreq_simple_ondemand_func, 130 130 .event_handler = devfreq_simple_ondemand_handler, 131 131 };
+1 -1
drivers/devfreq/governor_userspace.c
··· 87 87 NULL, 88 88 }; 89 89 static const struct attribute_group dev_attr_group = { 90 - .name = "userspace", 90 + .name = DEVFREQ_GOV_USERSPACE, 91 91 .attrs = dev_entries, 92 92 }; 93 93
+1 -1
drivers/devfreq/rk3399_dmc.c
··· 431 431 432 432 data->devfreq = devm_devfreq_add_device(dev, 433 433 &rk3399_devfreq_dmc_profile, 434 - "simple_ondemand", 434 + DEVFREQ_GOV_SIMPLE_ONDEMAND, 435 435 &data->ondemand_data); 436 436 if (IS_ERR(data->devfreq)) 437 437 return PTR_ERR(data->devfreq);
+1 -1
drivers/gpu/drm/i915/i915_drv.c
··· 1304 1304 * becaue the HDA driver may require us to enable the audio power 1305 1305 * domain during system suspend. 1306 1306 */ 1307 - pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME; 1307 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 1308 1308 1309 1309 ret = i915_driver_init_early(dev_priv, ent); 1310 1310 if (ret < 0)
+17 -6
drivers/idle/intel_idle.c
··· 913 913 struct cpuidle_state *state = &drv->states[index]; 914 914 unsigned long eax = flg2MWAIT(state->flags); 915 915 unsigned int cstate; 916 + bool uninitialized_var(tick); 916 917 int cpu = smp_processor_id(); 917 - 918 - cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1; 919 918 920 919 /* 921 920 * leave_mm() to avoid costly and often unnecessary wakeups ··· 923 924 if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED) 924 925 leave_mm(cpu); 925 926 926 - if (!(lapic_timer_reliable_states & (1 << (cstate)))) 927 - tick_broadcast_enter(); 927 + if (!static_cpu_has(X86_FEATURE_ARAT)) { 928 + cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) & 929 + MWAIT_CSTATE_MASK) + 1; 930 + tick = false; 931 + if (!(lapic_timer_reliable_states & (1 << (cstate)))) { 932 + tick = true; 933 + tick_broadcast_enter(); 934 + } 935 + } 928 936 929 937 mwait_idle_with_hints(eax, ecx); 930 938 931 - if (!(lapic_timer_reliable_states & (1 << (cstate)))) 939 + if (!static_cpu_has(X86_FEATURE_ARAT) && tick) 932 940 tick_broadcast_exit(); 933 941 934 942 return index; ··· 1067 1061 }; 1068 1062 1069 1063 #define ICPU(model, cpu) \ 1070 - { X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu } 1064 + { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&cpu } 1071 1065 1072 1066 static const struct x86_cpu_id intel_idle_ids[] __initconst = { 1073 1067 ICPU(INTEL_FAM6_NEHALEM_EP, idle_cpu_nehalem), ··· 1128 1122 boot_cpu_data.x86 == 6) 1129 1123 pr_debug("does not run on family %d model %d\n", 1130 1124 boot_cpu_data.x86, boot_cpu_data.x86_model); 1125 + return -ENODEV; 1126 + } 1127 + 1128 + if (!boot_cpu_has(X86_FEATURE_MWAIT)) { 1129 + pr_debug("Please enable MWAIT in BIOS SETUP\n"); 1131 1130 return -ENODEV; 1132 1131 } 1133 1132
+1 -1
drivers/misc/mei/pci-me.c
··· 225 225 * MEI requires to resume from runtime suspend mode 226 226 * in order to perform link reset flow upon system suspend. 227 227 */ 228 - pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME; 228 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 229 229 230 230 /* 231 231 * ME maps runtime suspend/resume to D0i states,
+1 -1
drivers/misc/mei/pci-txe.c
··· 141 141 * MEI requires to resume from runtime suspend mode 142 142 * in order to perform link reset flow upon system suspend. 143 143 */ 144 - pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME; 144 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 145 145 146 146 /* 147 147 * TXE maps runtime suspend/resume to own power gating states,
+13
drivers/opp/Kconfig
··· 1 + config PM_OPP 2 + bool 3 + select SRCU 4 + ---help--- 5 + SOCs have a standard set of tuples consisting of frequency and 6 + voltage pairs that the device will support per voltage domain. This 7 + is called Operating Performance Point or OPP. The actual definitions 8 + of OPP varies over silicon within the same family of devices. 9 + 10 + OPP layer organizes the data internally using device pointers 11 + representing individual voltage domains and provides SOC 12 + implementations a ready to use framework to manage OPPs. 13 + For more information, read <file:Documentation/power/opp.txt>
+93 -43
drivers/pci/pci-driver.c
··· 680 680 { 681 681 struct device_driver *drv = dev->driver; 682 682 683 - /* 684 - * Devices having power.ignore_children set may still be necessary for 685 - * suspending their children in the next phase of device suspend. 686 - */ 687 - if (dev->power.ignore_children) 688 - pm_runtime_resume(dev); 689 - 690 683 if (drv && drv->pm && drv->pm->prepare) { 691 684 int error = drv->pm->prepare(dev); 692 - if (error) 685 + if (error < 0) 693 686 return error; 687 + 688 + if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE)) 689 + return 0; 694 690 } 695 691 return pci_dev_keep_suspended(to_pci_dev(dev)); 696 692 } ··· 727 731 728 732 if (!pm) { 729 733 pci_pm_default_suspend(pci_dev); 730 - goto Fixup; 734 + return 0; 731 735 } 732 736 733 737 /* 734 - * PCI devices suspended at run time need to be resumed at this point, 735 - * because in general it is necessary to reconfigure them for system 736 - * suspend. Namely, if the device is supposed to wake up the system 737 - * from the sleep state, we may need to reconfigure it for this purpose. 738 - * In turn, if the device is not supposed to wake up the system from the 739 - * sleep state, we'll have to prevent it from signaling wake-up. 738 + * PCI devices suspended at run time may need to be resumed at this 739 + * point, because in general it may be necessary to reconfigure them for 740 + * system suspend. Namely, if the device is expected to wake up the 741 + * system from the sleep state, it may have to be reconfigured for this 742 + * purpose, or if the device is not expected to wake up the system from 743 + * the sleep state, it should be prevented from signaling wakeup events 744 + * going forward. 745 + * 746 + * Also if the driver of the device does not indicate that its system 747 + * suspend callbacks can cope with runtime-suspended devices, it is 748 + * better to resume the device from runtime suspend here. 
740 749 */ 741 - pm_runtime_resume(dev); 750 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) || 751 + !pci_dev_keep_suspended(pci_dev)) 752 + pm_runtime_resume(dev); 742 753 743 754 pci_dev->state_saved = false; 744 755 if (pm->suspend) { ··· 765 762 } 766 763 } 767 764 768 - Fixup: 769 - pci_fixup_device(pci_fixup_suspend, pci_dev); 770 - 771 765 return 0; 766 + } 767 + 768 + static int pci_pm_suspend_late(struct device *dev) 769 + { 770 + if (dev_pm_smart_suspend_and_suspended(dev)) 771 + return 0; 772 + 773 + pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev)); 774 + 775 + return pm_generic_suspend_late(dev); 772 776 } 773 777 774 778 static int pci_pm_suspend_noirq(struct device *dev) 775 779 { 776 780 struct pci_dev *pci_dev = to_pci_dev(dev); 777 781 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 782 + 783 + if (dev_pm_smart_suspend_and_suspended(dev)) 784 + return 0; 778 785 779 786 if (pci_has_legacy_pm_support(pci_dev)) 780 787 return pci_legacy_suspend_late(dev, PMSG_SUSPEND); ··· 818 805 pci_prepare_to_sleep(pci_dev); 819 806 } 820 807 808 + dev_dbg(dev, "PCI PM: Suspend power state: %s\n", 809 + pci_power_name(pci_dev->current_state)); 810 + 821 811 pci_pm_set_unknown_state(pci_dev); 822 812 823 813 /* ··· 846 830 struct pci_dev *pci_dev = to_pci_dev(dev); 847 831 struct device_driver *drv = dev->driver; 848 832 int error = 0; 833 + 834 + /* 835 + * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend 836 + * during system suspend, so update their runtime PM status to "active" 837 + * as they are going to be put into D0 shortly. 
838 + */ 839 + if (dev_pm_smart_suspend_and_suspended(dev)) 840 + pm_runtime_set_active(dev); 849 841 850 842 pci_pm_default_resume_early(pci_dev); 851 843 ··· 897 873 #else /* !CONFIG_SUSPEND */ 898 874 899 875 #define pci_pm_suspend NULL 876 + #define pci_pm_suspend_late NULL 900 877 #define pci_pm_suspend_noirq NULL 901 878 #define pci_pm_resume NULL 902 879 #define pci_pm_resume_noirq NULL ··· 932 907 * devices should not be touched during freeze/thaw transitions, 933 908 * however. 934 909 */ 935 - pm_runtime_resume(dev); 910 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND)) 911 + pm_runtime_resume(dev); 936 912 937 913 pci_dev->state_saved = false; 938 914 if (pm->freeze) { ··· 945 919 return error; 946 920 } 947 921 948 - if (pcibios_pm_ops.freeze) 949 - return pcibios_pm_ops.freeze(dev); 950 - 951 922 return 0; 923 + } 924 + 925 + static int pci_pm_freeze_late(struct device *dev) 926 + { 927 + if (dev_pm_smart_suspend_and_suspended(dev)) 928 + return 0; 929 + 930 + return pm_generic_freeze_late(dev);; 952 931 } 953 932 954 933 static int pci_pm_freeze_noirq(struct device *dev) 955 934 { 956 935 struct pci_dev *pci_dev = to_pci_dev(dev); 957 936 struct device_driver *drv = dev->driver; 937 + 938 + if (dev_pm_smart_suspend_and_suspended(dev)) 939 + return 0; 958 940 959 941 if (pci_has_legacy_pm_support(pci_dev)) 960 942 return pci_legacy_suspend_late(dev, PMSG_FREEZE); ··· 993 959 struct device_driver *drv = dev->driver; 994 960 int error = 0; 995 961 962 + /* 963 + * If the device is in runtime suspend, the code below may not work 964 + * correctly with it, so skip that code and make the PM core skip all of 965 + * the subsequent "thaw" callbacks for the device. 
966 + */ 967 + if (dev_pm_smart_suspend_and_suspended(dev)) { 968 + dev->power.direct_complete = true; 969 + return 0; 970 + } 971 + 996 972 if (pcibios_pm_ops.thaw_noirq) { 997 973 error = pcibios_pm_ops.thaw_noirq(dev); 998 974 if (error) ··· 1026 982 struct pci_dev *pci_dev = to_pci_dev(dev); 1027 983 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1028 984 int error = 0; 1029 - 1030 - if (pcibios_pm_ops.thaw) { 1031 - error = pcibios_pm_ops.thaw(dev); 1032 - if (error) 1033 - return error; 1034 - } 1035 985 1036 986 if (pci_has_legacy_pm_support(pci_dev)) 1037 987 return pci_legacy_resume(dev); ··· 1052 1014 1053 1015 if (!pm) { 1054 1016 pci_pm_default_suspend(pci_dev); 1055 - goto Fixup; 1017 + return 0; 1056 1018 } 1057 1019 1058 1020 /* The reason to do that is the same as in pci_pm_suspend(). */ 1059 - pm_runtime_resume(dev); 1021 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) || 1022 + !pci_dev_keep_suspended(pci_dev)) 1023 + pm_runtime_resume(dev); 1060 1024 1061 1025 pci_dev->state_saved = false; 1062 1026 if (pm->poweroff) { ··· 1070 1030 return error; 1071 1031 } 1072 1032 1073 - Fixup: 1074 - pci_fixup_device(pci_fixup_suspend, pci_dev); 1075 - 1076 - if (pcibios_pm_ops.poweroff) 1077 - return pcibios_pm_ops.poweroff(dev); 1078 - 1079 1033 return 0; 1034 + } 1035 + 1036 + static int pci_pm_poweroff_late(struct device *dev) 1037 + { 1038 + if (dev_pm_smart_suspend_and_suspended(dev)) 1039 + return 0; 1040 + 1041 + pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev)); 1042 + 1043 + return pm_generic_poweroff_late(dev); 1080 1044 } 1081 1045 1082 1046 static int pci_pm_poweroff_noirq(struct device *dev) 1083 1047 { 1084 1048 struct pci_dev *pci_dev = to_pci_dev(dev); 1085 1049 struct device_driver *drv = dev->driver; 1050 + 1051 + if (dev_pm_smart_suspend_and_suspended(dev)) 1052 + return 0; 1086 1053 1087 1054 if (pci_has_legacy_pm_support(to_pci_dev(dev))) 1088 1055 return pci_legacy_suspend_late(dev, PMSG_HIBERNATE); 
··· 1132 1085 struct device_driver *drv = dev->driver; 1133 1086 int error = 0; 1134 1087 1088 + /* This is analogous to the pci_pm_resume_noirq() case. */ 1089 + if (dev_pm_smart_suspend_and_suspended(dev)) 1090 + pm_runtime_set_active(dev); 1091 + 1135 1092 if (pcibios_pm_ops.restore_noirq) { 1136 1093 error = pcibios_pm_ops.restore_noirq(dev); 1137 1094 if (error) ··· 1158 1107 struct pci_dev *pci_dev = to_pci_dev(dev); 1159 1108 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1160 1109 int error = 0; 1161 - 1162 - if (pcibios_pm_ops.restore) { 1163 - error = pcibios_pm_ops.restore(dev); 1164 - if (error) 1165 - return error; 1166 - } 1167 1110 1168 1111 /* 1169 1112 * This is necessary for the hibernation error path in which restore is ··· 1184 1139 #else /* !CONFIG_HIBERNATE_CALLBACKS */ 1185 1140 1186 1141 #define pci_pm_freeze NULL 1142 + #define pci_pm_freeze_late NULL 1187 1143 #define pci_pm_freeze_noirq NULL 1188 1144 #define pci_pm_thaw NULL 1189 1145 #define pci_pm_thaw_noirq NULL 1190 1146 #define pci_pm_poweroff NULL 1147 + #define pci_pm_poweroff_late NULL 1191 1148 #define pci_pm_poweroff_noirq NULL 1192 1149 #define pci_pm_restore NULL 1193 1150 #define pci_pm_restore_noirq NULL ··· 1305 1258 .prepare = pci_pm_prepare, 1306 1259 .complete = pci_pm_complete, 1307 1260 .suspend = pci_pm_suspend, 1261 + .suspend_late = pci_pm_suspend_late, 1308 1262 .resume = pci_pm_resume, 1309 1263 .freeze = pci_pm_freeze, 1264 + .freeze_late = pci_pm_freeze_late, 1310 1265 .thaw = pci_pm_thaw, 1311 1266 .poweroff = pci_pm_poweroff, 1267 + .poweroff_late = pci_pm_poweroff_late, 1312 1268 .restore = pci_pm_restore, 1313 1269 .suspend_noirq = pci_pm_suspend_noirq, 1314 1270 .resume_noirq = pci_pm_resume_noirq,
+1 -2
drivers/pci/pci.c
··· 2166 2166 2167 2167 if (!pm_runtime_suspended(dev) 2168 2168 || pci_target_state(pci_dev, wakeup) != pci_dev->current_state 2169 - || platform_pci_need_resume(pci_dev) 2170 - || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME)) 2169 + || platform_pci_need_resume(pci_dev)) 2171 2170 return false; 2172 2171 2173 2172 /*
+5 -5
drivers/power/avs/smartreflex.c
··· 355 355 u8 senp_shift, senn_shift; 356 356 357 357 if (!sr) { 358 - pr_warn("%s: NULL omap_sr from %pF\n", 358 + pr_warn("%s: NULL omap_sr from %pS\n", 359 359 __func__, (void *)_RET_IP_); 360 360 return -EINVAL; 361 361 } ··· 422 422 u32 vpboundint_en, vpboundint_st; 423 423 424 424 if (!sr) { 425 - pr_warn("%s: NULL omap_sr from %pF\n", 425 + pr_warn("%s: NULL omap_sr from %pS\n", 426 426 __func__, (void *)_RET_IP_); 427 427 return -EINVAL; 428 428 } ··· 477 477 u8 senp_shift, senn_shift; 478 478 479 479 if (!sr) { 480 - pr_warn("%s: NULL omap_sr from %pF\n", 480 + pr_warn("%s: NULL omap_sr from %pS\n", 481 481 __func__, (void *)_RET_IP_); 482 482 return -EINVAL; 483 483 } ··· 562 562 int ret; 563 563 564 564 if (!sr) { 565 - pr_warn("%s: NULL omap_sr from %pF\n", 565 + pr_warn("%s: NULL omap_sr from %pS\n", 566 566 __func__, (void *)_RET_IP_); 567 567 return -EINVAL; 568 568 } ··· 614 614 void sr_disable(struct omap_sr *sr) 615 615 { 616 616 if (!sr) { 617 - pr_warn("%s: NULL omap_sr from %pF\n", 617 + pr_warn("%s: NULL omap_sr from %pS\n", 618 618 __func__, (void *)_RET_IP_); 619 619 return; 620 620 }
+2 -12
drivers/soc/mediatek/mtk-scpsys.c
··· 361 361 return ret; 362 362 } 363 363 364 - static bool scpsys_active_wakeup(struct device *dev) 365 - { 366 - struct generic_pm_domain *genpd; 367 - struct scp_domain *scpd; 368 - 369 - genpd = pd_to_genpd(dev->pm_domain); 370 - scpd = container_of(genpd, struct scp_domain, genpd); 371 - 372 - return scpd->data->active_wakeup; 373 - } 374 - 375 364 static void init_clks(struct platform_device *pdev, struct clk **clk) 376 365 { 377 366 int i; ··· 455 466 genpd->name = data->name; 456 467 genpd->power_off = scpsys_power_off; 457 468 genpd->power_on = scpsys_power_on; 458 - genpd->dev_ops.active_wakeup = scpsys_active_wakeup; 469 + if (scpd->data->active_wakeup) 470 + genpd->flags |= GENPD_FLAG_ACTIVE_WAKEUP; 459 471 } 460 472 461 473 return scp;
+2 -12
drivers/soc/rockchip/pm_domains.c
··· 358 358 pm_clk_destroy(dev); 359 359 } 360 360 361 - static bool rockchip_active_wakeup(struct device *dev) 362 - { 363 - struct generic_pm_domain *genpd; 364 - struct rockchip_pm_domain *pd; 365 - 366 - genpd = pd_to_genpd(dev->pm_domain); 367 - pd = container_of(genpd, struct rockchip_pm_domain, genpd); 368 - 369 - return pd->info->active_wakeup; 370 - } 371 - 372 361 static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu, 373 362 struct device_node *node) 374 363 { ··· 478 489 pd->genpd.power_on = rockchip_pd_power_on; 479 490 pd->genpd.attach_dev = rockchip_pd_attach_dev; 480 491 pd->genpd.detach_dev = rockchip_pd_detach_dev; 481 - pd->genpd.dev_ops.active_wakeup = rockchip_active_wakeup; 482 492 pd->genpd.flags = GENPD_FLAG_PM_CLK; 493 + if (pd_info->active_wakeup) 494 + pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP; 483 495 pm_genpd_init(&pd->genpd, NULL, false); 484 496 485 497 pmu->genpd_data.domains[id] = &pd->genpd;
+2
include/acpi/acpiosxf.h
··· 287 287 /* 288 288 * Platform and hardware-independent physical memory interfaces 289 289 */ 290 + int acpi_os_read_iomem(void __iomem *virt_addr, u64 *value, u32 width); 291 + 290 292 #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_read_memory 291 293 acpi_status 292 294 acpi_os_read_memory(acpi_physical_address address, u64 *value, u32 width);
+21 -9
include/linux/acpi.h
··· 864 864 #endif 865 865 866 866 #if defined(CONFIG_ACPI) && defined(CONFIG_PM) 867 - int acpi_dev_runtime_suspend(struct device *dev); 868 - int acpi_dev_runtime_resume(struct device *dev); 867 + int acpi_dev_suspend(struct device *dev, bool wakeup); 868 + int acpi_dev_resume(struct device *dev); 869 869 int acpi_subsys_runtime_suspend(struct device *dev); 870 870 int acpi_subsys_runtime_resume(struct device *dev); 871 - struct acpi_device *acpi_dev_pm_get_node(struct device *dev); 872 871 int acpi_dev_pm_attach(struct device *dev, bool power_on); 873 872 #else 874 873 static inline int acpi_dev_runtime_suspend(struct device *dev) { return 0; } 875 874 static inline int acpi_dev_runtime_resume(struct device *dev) { return 0; } 876 875 static inline int acpi_subsys_runtime_suspend(struct device *dev) { return 0; } 877 876 static inline int acpi_subsys_runtime_resume(struct device *dev) { return 0; } 878 - static inline struct acpi_device *acpi_dev_pm_get_node(struct device *dev) 879 - { 880 - return NULL; 881 - } 882 877 static inline int acpi_dev_pm_attach(struct device *dev, bool power_on) 883 878 { 884 879 return -ENODEV; ··· 882 887 883 888 #if defined(CONFIG_ACPI) && defined(CONFIG_PM_SLEEP) 884 889 int acpi_dev_suspend_late(struct device *dev); 885 - int acpi_dev_resume_early(struct device *dev); 886 890 int acpi_subsys_prepare(struct device *dev); 887 891 void acpi_subsys_complete(struct device *dev); 888 892 int acpi_subsys_suspend_late(struct device *dev); 893 + int acpi_subsys_suspend_noirq(struct device *dev); 894 + int acpi_subsys_resume_noirq(struct device *dev); 889 895 int acpi_subsys_resume_early(struct device *dev); 890 896 int acpi_subsys_suspend(struct device *dev); 891 897 int acpi_subsys_freeze(struct device *dev); 898 + int acpi_subsys_freeze_late(struct device *dev); 899 + int acpi_subsys_freeze_noirq(struct device *dev); 900 + int acpi_subsys_thaw_noirq(struct device *dev); 892 901 #else 893 - static inline int acpi_dev_suspend_late(struct 
device *dev) { return 0; } 894 902 static inline int acpi_dev_resume_early(struct device *dev) { return 0; } 895 903 static inline int acpi_subsys_prepare(struct device *dev) { return 0; } 896 904 static inline void acpi_subsys_complete(struct device *dev) {} 897 905 static inline int acpi_subsys_suspend_late(struct device *dev) { return 0; } 906 + static inline int acpi_subsys_suspend_noirq(struct device *dev) { return 0; } 907 + static inline int acpi_subsys_resume_noirq(struct device *dev) { return 0; } 898 908 static inline int acpi_subsys_resume_early(struct device *dev) { return 0; } 899 909 static inline int acpi_subsys_suspend(struct device *dev) { return 0; } 900 910 static inline int acpi_subsys_freeze(struct device *dev) { return 0; } 911 + static inline int acpi_subsys_freeze_late(struct device *dev) { return 0; } 912 + static inline int acpi_subsys_freeze_noirq(struct device *dev) { return 0; } 913 + static inline int acpi_subsys_thaw_noirq(struct device *dev) { return 0; } 901 914 #endif 902 915 903 916 #ifdef CONFIG_ACPI ··· 1252 1249 #else 1253 1250 static inline 1254 1251 int acpi_irq_get(acpi_handle handle, unsigned int index, struct resource *res) 1252 + { 1253 + return -EINVAL; 1254 + } 1255 + #endif 1256 + 1257 + #ifdef CONFIG_ACPI_LPIT 1258 + int lpit_read_residency_count_address(u64 *address); 1259 + #else 1260 + static inline int lpit_read_residency_count_address(u64 *address) 1255 1261 { 1256 1262 return -EINVAL; 1257 1263 }
+16 -1
include/linux/arch_topology.h
··· 6 6 #define _LINUX_ARCH_TOPOLOGY_H_ 7 7 8 8 #include <linux/types.h> 9 + #include <linux/percpu.h> 9 10 10 11 void topology_normalize_cpu_scale(void); 11 12 12 13 struct device_node; 13 14 bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu); 14 15 16 + DECLARE_PER_CPU(unsigned long, cpu_scale); 17 + 15 18 struct sched_domain; 16 - unsigned long topology_get_cpu_scale(struct sched_domain *sd, int cpu); 19 + static inline 20 + unsigned long topology_get_cpu_scale(struct sched_domain *sd, int cpu) 21 + { 22 + return per_cpu(cpu_scale, cpu); 23 + } 17 24 18 25 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity); 26 + 27 + DECLARE_PER_CPU(unsigned long, freq_scale); 28 + 29 + static inline 30 + unsigned long topology_get_freq_scale(struct sched_domain *sd, int cpu) 31 + { 32 + return per_cpu(freq_scale, cpu); 33 + } 19 34 20 35 #endif /* _LINUX_ARCH_TOPOLOGY_H_ */
+3
include/linux/cpufreq.h
··· 919 919 920 920 extern unsigned int arch_freq_get_on_cpu(int cpu); 921 921 922 + extern void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq, 923 + unsigned long max_freq); 924 + 922 925 /* the following are really really optional */ 923 926 extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs; 924 927 extern struct freq_attr cpufreq_freq_attr_scaling_boost_freqs;
+14 -2
include/linux/devfreq.h
··· 19 19 20 20 #define DEVFREQ_NAME_LEN 16 21 21 22 + /* DEVFREQ governor name */ 23 + #define DEVFREQ_GOV_SIMPLE_ONDEMAND "simple_ondemand" 24 + #define DEVFREQ_GOV_PERFORMANCE "performance" 25 + #define DEVFREQ_GOV_POWERSAVE "powersave" 26 + #define DEVFREQ_GOV_USERSPACE "userspace" 27 + #define DEVFREQ_GOV_PASSIVE "passive" 28 + 22 29 /* DEVFREQ notifier interface */ 23 30 #define DEVFREQ_TRANSITION_NOTIFIER (0) 24 31 ··· 91 84 * from devfreq_remove_device() call. If the user 92 85 * has registered devfreq->nb at a notifier-head, 93 86 * this is the time to unregister it. 94 - * @freq_table: Optional list of frequencies to support statistics. 95 - * @max_state: The size of freq_table. 87 + * @freq_table: Optional list of frequencies to support statistics 88 + * and freq_table must be generated in ascending order. 89 + * @max_state: The size of freq_table. 96 90 */ 97 91 struct devfreq_dev_profile { 98 92 unsigned long initial_freq; ··· 128 120 * touch this. 129 121 * @min_freq: Limit minimum frequency requested by user (0: none) 130 122 * @max_freq: Limit maximum frequency requested by user (0: none) 123 + * @scaling_min_freq: Limit minimum frequency requested by OPP interface 124 + * @scaling_max_freq: Limit maximum frequency requested by OPP interface 131 125 * @stop_polling: devfreq polling status of a device. 132 126 * @total_trans: Number of devfreq transitions 133 127 * @trans_table: Statistics of devfreq transitions ··· 163 153 164 154 unsigned long min_freq; 165 155 unsigned long max_freq; 156 + unsigned long scaling_min_freq; 157 + unsigned long scaling_max_freq; 166 158 bool stop_polling; 167 159 168 160 /* information for device frequency transition */
+10 -5
include/linux/device.h
··· 370 370 * @devnode: Callback to provide the devtmpfs. 371 371 * @class_release: Called to release this class. 372 372 * @dev_release: Called to release the device. 373 - * @suspend: Used to put the device to sleep mode, usually to a low power 374 - * state. 375 - * @resume: Used to bring the device from the sleep mode. 376 373 * @shutdown_pre: Called at shut-down time before driver shutdown. 377 374 * @ns_type: Callbacks so sysfs can detemine namespaces. 378 375 * @namespace: Namespace of the device belongs to this class. ··· 397 400 void (*class_release)(struct class *class); 398 401 void (*dev_release)(struct device *dev); 399 402 400 - int (*suspend)(struct device *dev, pm_message_t state); 401 - int (*resume)(struct device *dev); 402 403 int (*shutdown_pre)(struct device *dev); 403 404 404 405 const struct kobj_ns_type_operations *ns_type; ··· 1068 1073 #ifdef CONFIG_PM_SLEEP 1069 1074 dev->power.syscore = val; 1070 1075 #endif 1076 + } 1077 + 1078 + static inline void dev_pm_set_driver_flags(struct device *dev, u32 flags) 1079 + { 1080 + dev->power.driver_flags = flags; 1081 + } 1082 + 1083 + static inline bool dev_pm_test_driver_flags(struct device *dev, u32 flags) 1084 + { 1085 + return !!(dev->power.driver_flags & flags); 1071 1086 } 1072 1087 1073 1088 static inline void device_lock(struct device *dev)
+1 -1
include/linux/freezer.h
··· 182 182 } 183 183 184 184 /* 185 - * Like freezable_schedule_timeout(), but should not block the freezer. Do not 185 + * Like schedule_timeout(), but should not block the freezer. Do not 186 186 * call this with locks held. 187 187 */ 188 188 static inline long freezable_schedule_timeout(long timeout)
+1 -6
include/linux/pci.h
··· 206 206 PCI_DEV_FLAGS_BRIDGE_XLATE_ROOT = (__force pci_dev_flags_t) (1 << 9), 207 207 /* Do not use FLR even if device advertises PCI_AF_CAP */ 208 208 PCI_DEV_FLAGS_NO_FLR_RESET = (__force pci_dev_flags_t) (1 << 10), 209 - /* 210 - * Resume before calling the driver's system suspend hooks, disabling 211 - * the direct_complete optimization. 212 - */ 213 - PCI_DEV_FLAGS_NEEDS_RESUME = (__force pci_dev_flags_t) (1 << 11), 214 209 /* Don't use Relaxed Ordering for TLPs directed at this device */ 215 - PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 12), 210 + PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11), 216 211 }; 217 212 218 213 enum pci_irq_reroute_variant {
+30 -1
include/linux/pm.h
··· 550 550 #endif 551 551 }; 552 552 553 + /* 554 + * Driver flags to control system suspend/resume behavior. 555 + * 556 + * These flags can be set by device drivers at the probe time. They need not be 557 + * cleared by the drivers as the driver core will take care of that. 558 + * 559 + * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device. 560 + * SMART_PREPARE: Check the return value of the driver's ->prepare callback. 561 + * SMART_SUSPEND: No need to resume the device from runtime suspend. 562 + * 563 + * Setting SMART_PREPARE instructs bus types and PM domains which may want 564 + * system suspend/resume callbacks to be skipped for the device to return 0 from 565 + * their ->prepare callbacks if the driver's ->prepare callback returns 0 (in 566 + * other words, the system suspend/resume callbacks can only be skipped for the 567 + * device if its driver doesn't object against that). This flag has no effect 568 + * if NEVER_SKIP is set. 569 + * 570 + * Setting SMART_SUSPEND instructs bus types and PM domains which may want to 571 + * runtime resume the device upfront during system suspend that doing so is not 572 + * necessary from the driver's perspective. It also may cause them to skip 573 + * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by 574 + * the driver if they decide to leave the device in runtime suspend. 
575 + */ 576 + #define DPM_FLAG_NEVER_SKIP BIT(0) 577 + #define DPM_FLAG_SMART_PREPARE BIT(1) 578 + #define DPM_FLAG_SMART_SUSPEND BIT(2) 579 + 553 580 struct dev_pm_info { 554 581 pm_message_t power_state; 555 582 unsigned int can_wakeup:1; ··· 588 561 bool is_late_suspended:1; 589 562 bool early_init:1; /* Owned by the PM core */ 590 563 bool direct_complete:1; /* Owned by the PM core */ 564 + u32 driver_flags; 591 565 spinlock_t lock; 592 566 #ifdef CONFIG_PM_SLEEP 593 567 struct list_head entry; ··· 764 736 extern int pm_generic_poweroff_late(struct device *dev); 765 737 extern int pm_generic_poweroff(struct device *dev); 766 738 extern void pm_generic_complete(struct device *dev); 767 - extern void pm_complete_with_resume_check(struct device *dev); 739 + 740 + extern bool dev_pm_smart_suspend_and_suspended(struct device *dev); 768 741 769 742 #else /* !CONFIG_PM_SLEEP */ 770 743
+16 -4
include/linux/pm_domain.h
··· 18 18 #include <linux/spinlock.h> 19 19 20 20 /* Defines used for the flags field in the struct generic_pm_domain */ 21 - #define GENPD_FLAG_PM_CLK (1U << 0) /* PM domain uses PM clk */ 22 - #define GENPD_FLAG_IRQ_SAFE (1U << 1) /* PM domain operates in atomic */ 23 - #define GENPD_FLAG_ALWAYS_ON (1U << 2) /* PM domain is always powered on */ 21 + #define GENPD_FLAG_PM_CLK (1U << 0) /* PM domain uses PM clk */ 22 + #define GENPD_FLAG_IRQ_SAFE (1U << 1) /* PM domain operates in atomic */ 23 + #define GENPD_FLAG_ALWAYS_ON (1U << 2) /* PM domain is always powered on */ 24 + #define GENPD_FLAG_ACTIVE_WAKEUP (1U << 3) /* Keep devices active if wakeup */ 24 25 25 26 enum gpd_status { 26 27 GPD_STATE_ACTIVE = 0, /* PM domain is active */ ··· 36 35 struct gpd_dev_ops { 37 36 int (*start)(struct device *dev); 38 37 int (*stop)(struct device *dev); 39 - bool (*active_wakeup)(struct device *dev); 40 38 }; 41 39 42 40 struct genpd_power_state { ··· 64 64 unsigned int device_count; /* Number of devices */ 65 65 unsigned int suspended_count; /* System suspend device counter */ 66 66 unsigned int prepared_count; /* Suspend counter of prepared devices */ 67 + unsigned int performance_state; /* Aggregated max performance state */ 67 68 int (*power_off)(struct generic_pm_domain *domain); 68 69 int (*power_on)(struct generic_pm_domain *domain); 70 + int (*set_performance_state)(struct generic_pm_domain *genpd, 71 + unsigned int state); 69 72 struct gpd_dev_ops dev_ops; 70 73 s64 max_off_time_ns; /* Maximum allowed "suspended" time. 
*/ 71 74 bool max_off_time_changed; ··· 124 121 struct pm_domain_data base; 125 122 struct gpd_timing_data td; 126 123 struct notifier_block nb; 124 + unsigned int performance_state; 127 125 void *data; 128 126 }; 129 127 ··· 152 148 extern int pm_genpd_init(struct generic_pm_domain *genpd, 153 149 struct dev_power_governor *gov, bool is_off); 154 150 extern int pm_genpd_remove(struct generic_pm_domain *genpd); 151 + extern int dev_pm_genpd_set_performance_state(struct device *dev, 152 + unsigned int state); 155 153 156 154 extern struct dev_power_governor simple_qos_governor; 157 155 extern struct dev_power_governor pm_domain_always_on_gov; ··· 190 184 return -ENOSYS; 191 185 } 192 186 static inline int pm_genpd_remove(struct generic_pm_domain *genpd) 187 + { 188 + return -ENOTSUPP; 189 + } 190 + 191 + static inline int dev_pm_genpd_set_performance_state(struct device *dev, 192 + unsigned int state) 193 193 { 194 194 return -ENOTSUPP; 195 195 }
+12 -2
include/linux/pm_opp.h
··· 124 124 struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name); 125 125 void dev_pm_opp_put_clkname(struct opp_table *opp_table); 126 126 struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data)); 127 - void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table); 127 + void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table); 128 + struct opp_table *dev_pm_opp_register_get_pstate_helper(struct device *dev, int (*get_pstate)(struct device *dev, unsigned long rate)); 129 + void dev_pm_opp_unregister_get_pstate_helper(struct opp_table *opp_table); 128 130 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq); 129 131 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask); 130 132 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask); ··· 245 243 return ERR_PTR(-ENOTSUPP); 246 244 } 247 245 248 - static inline void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table) {} 246 + static inline void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table) {} 247 + 248 + static inline struct opp_table *dev_pm_opp_register_get_pstate_helper(struct device *dev, 249 + int (*get_pstate)(struct device *dev, unsigned long rate)) 250 + { 251 + return ERR_PTR(-ENOTSUPP); 252 + } 253 + 254 + static inline void dev_pm_opp_unregister_get_pstate_helper(struct opp_table *opp_table) {} 249 255 250 256 static inline struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name) 251 257 {
+18 -9
include/linux/pm_qos.h
··· 28 28 PM_QOS_FLAGS_ALL, 29 29 }; 30 30 31 - #define PM_QOS_DEFAULT_VALUE -1 31 + #define PM_QOS_DEFAULT_VALUE (-1) 32 + #define PM_QOS_LATENCY_ANY S32_MAX 33 + #define PM_QOS_LATENCY_ANY_NS ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC) 32 34 33 35 #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 34 36 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 35 37 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0 36 38 #define PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE 0 37 - #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE 0 39 + #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE PM_QOS_LATENCY_ANY 40 + #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT PM_QOS_LATENCY_ANY 41 + #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS PM_QOS_LATENCY_ANY_NS 38 42 #define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE 0 39 43 #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1) 40 - #define PM_QOS_LATENCY_ANY ((s32)(~(__u32)0 >> 1)) 41 44 42 45 #define PM_QOS_FLAG_NO_POWER_OFF (1 << 0) 43 - #define PM_QOS_FLAG_REMOTE_WAKEUP (1 << 1) 44 46 45 47 struct pm_qos_request { 46 48 struct plist_node node; ··· 177 175 static inline s32 dev_pm_qos_raw_read_value(struct device *dev) 178 176 { 179 177 return IS_ERR_OR_NULL(dev->power.qos) ? 
180 - 0 : pm_qos_read_value(&dev->power.qos->resume_latency); 178 + PM_QOS_RESUME_LATENCY_NO_CONSTRAINT : 179 + pm_qos_read_value(&dev->power.qos->resume_latency); 181 180 } 182 181 #else 183 182 static inline enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, ··· 188 185 s32 mask) 189 186 { return PM_QOS_FLAGS_UNDEFINED; } 190 187 static inline s32 __dev_pm_qos_read_value(struct device *dev) 191 - { return 0; } 188 + { return PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; } 192 189 static inline s32 dev_pm_qos_read_value(struct device *dev) 193 - { return 0; } 190 + { return PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; } 194 191 static inline int dev_pm_qos_add_request(struct device *dev, 195 192 struct dev_pm_qos_request *req, 196 193 enum dev_pm_qos_req_type type, ··· 236 233 { return 0; } 237 234 static inline void dev_pm_qos_hide_latency_tolerance(struct device *dev) {} 238 235 239 - static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; } 236 + static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) 237 + { 238 + return PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 239 + } 240 240 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; } 241 - static inline s32 dev_pm_qos_raw_read_value(struct device *dev) { return 0; } 241 + static inline s32 dev_pm_qos_raw_read_value(struct device *dev) 242 + { 243 + return PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 244 + } 242 245 #endif 243 246 244 247 #endif
-14
kernel/power/Kconfig
··· 259 259 anything, try disabling/enabling this option (or disabling/enabling 260 260 APM in your BIOS). 261 261 262 - config PM_OPP 263 - bool 264 - select SRCU 265 - ---help--- 266 - SOCs have a standard set of tuples consisting of frequency and 267 - voltage pairs that the device will support per voltage domain. This 268 - is called Operating Performance Point or OPP. The actual definitions 269 - of OPP varies over silicon within the same family of devices. 270 - 271 - OPP layer organizes the data internally using device pointers 272 - representing individual voltage domains and provides SOC 273 - implementations a ready to use framework to manage OPPs. 274 - For more information, read <file:Documentation/power/opp.txt> 275 - 276 262 config PM_CLK 277 263 def_bool y 278 264 depends on PM && HAVE_CLK
+2 -2
kernel/power/qos.c
··· 701 701 for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) { 702 702 ret = register_pm_qos_misc(pm_qos_array[i], d); 703 703 if (ret < 0) { 704 - printk(KERN_ERR "pm_qos_param: %s setup failed\n", 705 - pm_qos_array[i]->name); 704 + pr_err("%s: %s setup failed\n", 705 + __func__, pm_qos_array[i]->name); 706 706 return ret; 707 707 } 708 708 }
+18 -17
kernel/power/snapshot.c
··· 10 10 * 11 11 */ 12 12 13 + #define pr_fmt(fmt) "PM: " fmt 14 + 13 15 #include <linux/version.h> 14 16 #include <linux/module.h> 15 17 #include <linux/mm.h> ··· 969 967 region->end_pfn = end_pfn; 970 968 list_add_tail(&region->list, &nosave_regions); 971 969 Report: 972 - printk(KERN_INFO "PM: Registered nosave memory: [mem %#010llx-%#010llx]\n", 970 + pr_info("Registered nosave memory: [mem %#010llx-%#010llx]\n", 973 971 (unsigned long long) start_pfn << PAGE_SHIFT, 974 972 ((unsigned long long) end_pfn << PAGE_SHIFT) - 1); 975 973 } ··· 1041 1039 list_for_each_entry(region, &nosave_regions, list) { 1042 1040 unsigned long pfn; 1043 1041 1044 - pr_debug("PM: Marking nosave pages: [mem %#010llx-%#010llx]\n", 1042 + pr_debug("Marking nosave pages: [mem %#010llx-%#010llx]\n", 1045 1043 (unsigned long long) region->start_pfn << PAGE_SHIFT, 1046 1044 ((unsigned long long) region->end_pfn << PAGE_SHIFT) 1047 1045 - 1); ··· 1097 1095 free_pages_map = bm2; 1098 1096 mark_nosave_pages(forbidden_pages_map); 1099 1097 1100 - pr_debug("PM: Basic memory bitmaps created\n"); 1098 + pr_debug("Basic memory bitmaps created\n"); 1101 1099 1102 1100 return 0; 1103 1101 ··· 1133 1131 memory_bm_free(bm2, PG_UNSAFE_CLEAR); 1134 1132 kfree(bm2); 1135 1133 1136 - pr_debug("PM: Basic memory bitmaps freed\n"); 1134 + pr_debug("Basic memory bitmaps freed\n"); 1137 1135 } 1138 1136 1139 1137 void clear_free_pages(void) ··· 1154 1152 pfn = memory_bm_next_pfn(bm); 1155 1153 } 1156 1154 memory_bm_position_reset(bm); 1157 - pr_info("PM: free pages cleared after restore\n"); 1155 + pr_info("free pages cleared after restore\n"); 1158 1156 #endif /* PAGE_POISONING_ZERO */ 1159 1157 } 1160 1158 ··· 1692 1690 ktime_t start, stop; 1693 1691 int error; 1694 1692 1695 - printk(KERN_INFO "PM: Preallocating image memory... "); 1693 + pr_info("Preallocating image memory... 
"); 1696 1694 start = ktime_get(); 1697 1695 1698 1696 error = memory_bm_create(&orig_bm, GFP_IMAGE, PG_ANY); ··· 1823 1821 1824 1822 out: 1825 1823 stop = ktime_get(); 1826 - printk(KERN_CONT "done (allocated %lu pages)\n", pages); 1824 + pr_cont("done (allocated %lu pages)\n", pages); 1827 1825 swsusp_show_speed(start, stop, pages, "Allocated"); 1828 1826 1829 1827 return 0; 1830 1828 1831 1829 err_out: 1832 - printk(KERN_CONT "\n"); 1830 + pr_cont("\n"); 1833 1831 swsusp_free(); 1834 1832 return -ENOMEM; 1835 1833 } ··· 1869 1867 free += zone_page_state(zone, NR_FREE_PAGES); 1870 1868 1871 1869 nr_pages += count_pages_for_highmem(nr_highmem); 1872 - pr_debug("PM: Normal pages needed: %u + %u, available pages: %u\n", 1873 - nr_pages, PAGES_FOR_IO, free); 1870 + pr_debug("Normal pages needed: %u + %u, available pages: %u\n", 1871 + nr_pages, PAGES_FOR_IO, free); 1874 1872 1875 1873 return free > nr_pages + PAGES_FOR_IO; 1876 1874 } ··· 1963 1961 { 1964 1962 unsigned int nr_pages, nr_highmem; 1965 1963 1966 - printk(KERN_INFO "PM: Creating hibernation image:\n"); 1964 + pr_info("Creating hibernation image:\n"); 1967 1965 1968 1966 drain_local_pages(NULL); 1969 1967 nr_pages = count_data_pages(); 1970 1968 nr_highmem = count_highmem_pages(); 1971 - printk(KERN_INFO "PM: Need to copy %u pages\n", nr_pages + nr_highmem); 1969 + pr_info("Need to copy %u pages\n", nr_pages + nr_highmem); 1972 1970 1973 1971 if (!enough_free_mem(nr_pages, nr_highmem)) { 1974 - printk(KERN_ERR "PM: Not enough free memory\n"); 1972 + pr_err("Not enough free memory\n"); 1975 1973 return -ENOMEM; 1976 1974 } 1977 1975 1978 1976 if (swsusp_alloc(&copy_bm, nr_pages, nr_highmem)) { 1979 - printk(KERN_ERR "PM: Memory allocation failed\n"); 1977 + pr_err("Memory allocation failed\n"); 1980 1978 return -ENOMEM; 1981 1979 } 1982 1980 ··· 1997 1995 nr_copy_pages = nr_pages; 1998 1996 nr_meta_pages = DIV_ROUND_UP(nr_pages * sizeof(long), PAGE_SIZE); 1999 1997 2000 - printk(KERN_INFO "PM: Hibernation 
image created (%d pages copied)\n", 2001 - nr_pages); 1998 + pr_info("Hibernation image created (%d pages copied)\n", nr_pages); 2002 1999 2003 2000 return 0; 2004 2001 } ··· 2171 2170 if (!reason && info->num_physpages != get_num_physpages()) 2172 2171 reason = "memory size"; 2173 2172 if (reason) { 2174 - printk(KERN_ERR "PM: Image mismatch: %s\n", reason); 2173 + pr_err("Image mismatch: %s\n", reason); 2175 2174 return -EPERM; 2176 2175 } 2177 2176 return 0;
+1 -1
kernel/power/suspend.c
··· 437 437 error = suspend_ops->enter(state); 438 438 trace_suspend_resume(TPS("machine_suspend"), 439 439 state, false); 440 - events_check_enabled = false; 441 440 } else if (*wakeup) { 442 441 error = -EBUSY; 443 442 } ··· 581 582 pm_restore_gfp_mask(); 582 583 583 584 Finish: 585 + events_check_enabled = false; 584 586 pm_pr_dbg("Finishing wakeup.\n"); 585 587 suspend_finish(); 586 588 Unlock:
+57 -71
kernel/power/swap.c
··· 12 12 * 13 13 */ 14 14 15 + #define pr_fmt(fmt) "PM: " fmt 16 + 15 17 #include <linux/module.h> 16 18 #include <linux/file.h> 17 19 #include <linux/delay.h> ··· 243 241 struct page *page = bio->bi_io_vec[0].bv_page; 244 242 245 243 if (bio->bi_status) { 246 - printk(KERN_ALERT "Read-error on swap-device (%u:%u:%Lu)\n", 247 - MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)), 248 - (unsigned long long)bio->bi_iter.bi_sector); 244 + pr_alert("Read-error on swap-device (%u:%u:%Lu)\n", 245 + MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)), 246 + (unsigned long long)bio->bi_iter.bi_sector); 249 247 } 250 248 251 249 if (bio_data_dir(bio) == WRITE) ··· 275 273 bio_set_op_attrs(bio, op, op_flags); 276 274 277 275 if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) { 278 - printk(KERN_ERR "PM: Adding page to bio failed at %llu\n", 279 - (unsigned long long)bio->bi_iter.bi_sector); 276 + pr_err("Adding page to bio failed at %llu\n", 277 + (unsigned long long)bio->bi_iter.bi_sector); 280 278 bio_put(bio); 281 279 return -EFAULT; 282 280 } ··· 321 319 error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC, 322 320 swsusp_resume_block, swsusp_header, NULL); 323 321 } else { 324 - printk(KERN_ERR "PM: Swap header not found!\n"); 322 + pr_err("Swap header not found!\n"); 325 323 error = -ENODEV; 326 324 } 327 325 return error; ··· 415 413 ret = swsusp_swap_check(); 416 414 if (ret) { 417 415 if (ret != -ENOSPC) 418 - printk(KERN_ERR "PM: Cannot find swap device, try " 419 - "swapon -a.\n"); 416 + pr_err("Cannot find swap device, try swapon -a\n"); 420 417 return ret; 421 418 } 422 419 handle->cur = (struct swap_map_page *)get_zeroed_page(GFP_KERNEL); ··· 492 491 { 493 492 if (!error) { 494 493 flush_swap_writer(handle); 495 - printk(KERN_INFO "PM: S"); 494 + pr_info("S"); 496 495 error = mark_swapfiles(handle, flags); 497 - printk("|\n"); 496 + pr_cont("|\n"); 498 497 } 499 498 500 499 if (error) ··· 543 542 544 543 hib_init_batch(&hb); 545 544 546 - printk(KERN_INFO "PM: Saving image data pages 
(%u pages)...\n", 545 + pr_info("Saving image data pages (%u pages)...\n", 547 546 nr_to_write); 548 547 m = nr_to_write / 10; 549 548 if (!m) ··· 558 557 if (ret) 559 558 break; 560 559 if (!(nr_pages % m)) 561 - printk(KERN_INFO "PM: Image saving progress: %3d%%\n", 562 - nr_pages / m * 10); 560 + pr_info("Image saving progress: %3d%%\n", 561 + nr_pages / m * 10); 563 562 nr_pages++; 564 563 } 565 564 err2 = hib_wait_io(&hb); ··· 567 566 if (!ret) 568 567 ret = err2; 569 568 if (!ret) 570 - printk(KERN_INFO "PM: Image saving done.\n"); 569 + pr_info("Image saving done\n"); 571 570 swsusp_show_speed(start, stop, nr_to_write, "Wrote"); 572 571 return ret; 573 572 } ··· 693 692 694 693 page = (void *)__get_free_page(__GFP_RECLAIM | __GFP_HIGH); 695 694 if (!page) { 696 - printk(KERN_ERR "PM: Failed to allocate LZO page\n"); 695 + pr_err("Failed to allocate LZO page\n"); 697 696 ret = -ENOMEM; 698 697 goto out_clean; 699 698 } 700 699 701 700 data = vmalloc(sizeof(*data) * nr_threads); 702 701 if (!data) { 703 - printk(KERN_ERR "PM: Failed to allocate LZO data\n"); 702 + pr_err("Failed to allocate LZO data\n"); 704 703 ret = -ENOMEM; 705 704 goto out_clean; 706 705 } ··· 709 708 710 709 crc = kmalloc(sizeof(*crc), GFP_KERNEL); 711 710 if (!crc) { 712 - printk(KERN_ERR "PM: Failed to allocate crc\n"); 711 + pr_err("Failed to allocate crc\n"); 713 712 ret = -ENOMEM; 714 713 goto out_clean; 715 714 } ··· 727 726 "image_compress/%u", thr); 728 727 if (IS_ERR(data[thr].thr)) { 729 728 data[thr].thr = NULL; 730 - printk(KERN_ERR 731 - "PM: Cannot start compression threads\n"); 729 + pr_err("Cannot start compression threads\n"); 732 730 ret = -ENOMEM; 733 731 goto out_clean; 734 732 } ··· 749 749 crc->thr = kthread_run(crc32_threadfn, crc, "image_crc32"); 750 750 if (IS_ERR(crc->thr)) { 751 751 crc->thr = NULL; 752 - printk(KERN_ERR "PM: Cannot start CRC32 thread\n"); 752 + pr_err("Cannot start CRC32 thread\n"); 753 753 ret = -ENOMEM; 754 754 goto out_clean; 755 755 } ··· 
760 760 */ 761 761 handle->reqd_free_pages = reqd_free_pages(); 762 762 763 - printk(KERN_INFO 764 - "PM: Using %u thread(s) for compression.\n" 765 - "PM: Compressing and saving image data (%u pages)...\n", 766 - nr_threads, nr_to_write); 763 + pr_info("Using %u thread(s) for compression\n", nr_threads); 764 + pr_info("Compressing and saving image data (%u pages)...\n", 765 + nr_to_write); 767 766 m = nr_to_write / 10; 768 767 if (!m) 769 768 m = 1; ··· 782 783 data_of(*snapshot), PAGE_SIZE); 783 784 784 785 if (!(nr_pages % m)) 785 - printk(KERN_INFO 786 - "PM: Image saving progress: " 787 - "%3d%%\n", 788 - nr_pages / m * 10); 786 + pr_info("Image saving progress: %3d%%\n", 787 + nr_pages / m * 10); 789 788 nr_pages++; 790 789 } 791 790 if (!off) ··· 810 813 ret = data[thr].ret; 811 814 812 815 if (ret < 0) { 813 - printk(KERN_ERR "PM: LZO compression failed\n"); 816 + pr_err("LZO compression failed\n"); 814 817 goto out_finish; 815 818 } 816 819 817 820 if (unlikely(!data[thr].cmp_len || 818 821 data[thr].cmp_len > 819 822 lzo1x_worst_compress(data[thr].unc_len))) { 820 - printk(KERN_ERR 821 - "PM: Invalid LZO compressed length\n"); 823 + pr_err("Invalid LZO compressed length\n"); 822 824 ret = -1; 823 825 goto out_finish; 824 826 } ··· 853 857 if (!ret) 854 858 ret = err2; 855 859 if (!ret) 856 - printk(KERN_INFO "PM: Image saving done.\n"); 860 + pr_info("Image saving done\n"); 857 861 swsusp_show_speed(start, stop, nr_to_write, "Wrote"); 858 862 out_clean: 859 863 if (crc) { ··· 884 888 unsigned int free_swap = count_swap_pages(root_swap, 1); 885 889 unsigned int required; 886 890 887 - pr_debug("PM: Free swap pages: %u\n", free_swap); 891 + pr_debug("Free swap pages: %u\n", free_swap); 888 892 889 893 required = PAGES_FOR_IO + nr_pages; 890 894 return free_swap > required; ··· 911 915 pages = snapshot_get_image_size(); 912 916 error = get_swap_writer(&handle); 913 917 if (error) { 914 - printk(KERN_ERR "PM: Cannot get swap writer\n"); 918 + pr_err("Cannot 
get swap writer\n"); 915 919 return error; 916 920 } 917 921 if (flags & SF_NOCOMPRESS_MODE) { 918 922 if (!enough_swap(pages, flags)) { 919 - printk(KERN_ERR "PM: Not enough free swap\n"); 923 + pr_err("Not enough free swap\n"); 920 924 error = -ENOSPC; 921 925 goto out_finish; 922 926 } ··· 1064 1068 hib_init_batch(&hb); 1065 1069 1066 1070 clean_pages_on_read = true; 1067 - printk(KERN_INFO "PM: Loading image data pages (%u pages)...\n", 1068 - nr_to_read); 1071 + pr_info("Loading image data pages (%u pages)...\n", nr_to_read); 1069 1072 m = nr_to_read / 10; 1070 1073 if (!m) 1071 1074 m = 1; ··· 1082 1087 if (ret) 1083 1088 break; 1084 1089 if (!(nr_pages % m)) 1085 - printk(KERN_INFO "PM: Image loading progress: %3d%%\n", 1086 - nr_pages / m * 10); 1090 + pr_info("Image loading progress: %3d%%\n", 1091 + nr_pages / m * 10); 1087 1092 nr_pages++; 1088 1093 } 1089 1094 err2 = hib_wait_io(&hb); ··· 1091 1096 if (!ret) 1092 1097 ret = err2; 1093 1098 if (!ret) { 1094 - printk(KERN_INFO "PM: Image loading done.\n"); 1099 + pr_info("Image loading done\n"); 1095 1100 snapshot_write_finalize(snapshot); 1096 1101 if (!snapshot_image_loaded(snapshot)) 1097 1102 ret = -ENODATA; ··· 1185 1190 1186 1191 page = vmalloc(sizeof(*page) * LZO_MAX_RD_PAGES); 1187 1192 if (!page) { 1188 - printk(KERN_ERR "PM: Failed to allocate LZO page\n"); 1193 + pr_err("Failed to allocate LZO page\n"); 1189 1194 ret = -ENOMEM; 1190 1195 goto out_clean; 1191 1196 } 1192 1197 1193 1198 data = vmalloc(sizeof(*data) * nr_threads); 1194 1199 if (!data) { 1195 - printk(KERN_ERR "PM: Failed to allocate LZO data\n"); 1200 + pr_err("Failed to allocate LZO data\n"); 1196 1201 ret = -ENOMEM; 1197 1202 goto out_clean; 1198 1203 } ··· 1201 1206 1202 1207 crc = kmalloc(sizeof(*crc), GFP_KERNEL); 1203 1208 if (!crc) { 1204 - printk(KERN_ERR "PM: Failed to allocate crc\n"); 1209 + pr_err("Failed to allocate crc\n"); 1205 1210 ret = -ENOMEM; 1206 1211 goto out_clean; 1207 1212 } ··· 1221 1226 
"image_decompress/%u", thr); 1222 1227 if (IS_ERR(data[thr].thr)) { 1223 1228 data[thr].thr = NULL; 1224 - printk(KERN_ERR 1225 - "PM: Cannot start decompression threads\n"); 1229 + pr_err("Cannot start decompression threads\n"); 1226 1230 ret = -ENOMEM; 1227 1231 goto out_clean; 1228 1232 } ··· 1243 1249 crc->thr = kthread_run(crc32_threadfn, crc, "image_crc32"); 1244 1250 if (IS_ERR(crc->thr)) { 1245 1251 crc->thr = NULL; 1246 - printk(KERN_ERR "PM: Cannot start CRC32 thread\n"); 1252 + pr_err("Cannot start CRC32 thread\n"); 1247 1253 ret = -ENOMEM; 1248 1254 goto out_clean; 1249 1255 } ··· 1268 1274 if (!page[i]) { 1269 1275 if (i < LZO_CMP_PAGES) { 1270 1276 ring_size = i; 1271 - printk(KERN_ERR 1272 - "PM: Failed to allocate LZO pages\n"); 1277 + pr_err("Failed to allocate LZO pages\n"); 1273 1278 ret = -ENOMEM; 1274 1279 goto out_clean; 1275 1280 } else { ··· 1278 1285 } 1279 1286 want = ring_size = i; 1280 1287 1281 - printk(KERN_INFO 1282 - "PM: Using %u thread(s) for decompression.\n" 1283 - "PM: Loading and decompressing image data (%u pages)...\n", 1284 - nr_threads, nr_to_read); 1288 + pr_info("Using %u thread(s) for decompression\n", nr_threads); 1289 + pr_info("Loading and decompressing image data (%u pages)...\n", 1290 + nr_to_read); 1285 1291 m = nr_to_read / 10; 1286 1292 if (!m) 1287 1293 m = 1; ··· 1340 1348 if (unlikely(!data[thr].cmp_len || 1341 1349 data[thr].cmp_len > 1342 1350 lzo1x_worst_compress(LZO_UNC_SIZE))) { 1343 - printk(KERN_ERR 1344 - "PM: Invalid LZO compressed length\n"); 1351 + pr_err("Invalid LZO compressed length\n"); 1345 1352 ret = -1; 1346 1353 goto out_finish; 1347 1354 } ··· 1391 1400 ret = data[thr].ret; 1392 1401 1393 1402 if (ret < 0) { 1394 - printk(KERN_ERR 1395 - "PM: LZO decompression failed\n"); 1403 + pr_err("LZO decompression failed\n"); 1396 1404 goto out_finish; 1397 1405 } 1398 1406 1399 1407 if (unlikely(!data[thr].unc_len || 1400 1408 data[thr].unc_len > LZO_UNC_SIZE || 1401 1409 data[thr].unc_len & 
(PAGE_SIZE - 1))) { 1402 - printk(KERN_ERR 1403 - "PM: Invalid LZO uncompressed length\n"); 1410 + pr_err("Invalid LZO uncompressed length\n"); 1404 1411 ret = -1; 1405 1412 goto out_finish; 1406 1413 } ··· 1409 1420 data[thr].unc + off, PAGE_SIZE); 1410 1421 1411 1422 if (!(nr_pages % m)) 1412 - printk(KERN_INFO 1413 - "PM: Image loading progress: " 1414 - "%3d%%\n", 1415 - nr_pages / m * 10); 1423 + pr_info("Image loading progress: %3d%%\n", 1424 + nr_pages / m * 10); 1416 1425 nr_pages++; 1417 1426 1418 1427 ret = snapshot_write_next(snapshot); ··· 1435 1448 } 1436 1449 stop = ktime_get(); 1437 1450 if (!ret) { 1438 - printk(KERN_INFO "PM: Image loading done.\n"); 1451 + pr_info("Image loading done\n"); 1439 1452 snapshot_write_finalize(snapshot); 1440 1453 if (!snapshot_image_loaded(snapshot)) 1441 1454 ret = -ENODATA; 1442 1455 if (!ret) { 1443 1456 if (swsusp_header->flags & SF_CRC32_MODE) { 1444 1457 if(handle->crc32 != swsusp_header->crc32) { 1445 - printk(KERN_ERR 1446 - "PM: Invalid image CRC32!\n"); 1458 + pr_err("Invalid image CRC32!\n"); 1447 1459 ret = -ENODATA; 1448 1460 } 1449 1461 } ··· 1499 1513 swap_reader_finish(&handle); 1500 1514 end: 1501 1515 if (!error) 1502 - pr_debug("PM: Image successfully loaded\n"); 1516 + pr_debug("Image successfully loaded\n"); 1503 1517 else 1504 - pr_debug("PM: Error %d resuming\n", error); 1518 + pr_debug("Error %d resuming\n", error); 1505 1519 return error; 1506 1520 } 1507 1521 ··· 1538 1552 if (error) 1539 1553 blkdev_put(hib_resume_bdev, FMODE_READ); 1540 1554 else 1541 - pr_debug("PM: Image signature found, resuming\n"); 1555 + pr_debug("Image signature found, resuming\n"); 1542 1556 } else { 1543 1557 error = PTR_ERR(hib_resume_bdev); 1544 1558 } 1545 1559 1546 1560 if (error) 1547 - pr_debug("PM: Image not found (code %d)\n", error); 1561 + pr_debug("Image not found (code %d)\n", error); 1548 1562 1549 1563 return error; 1550 1564 } ··· 1556 1570 void swsusp_close(fmode_t mode) 1557 1571 { 1558 1572 if 
(IS_ERR(hib_resume_bdev)) { 1559 - pr_debug("PM: Image device not initialised\n"); 1573 + pr_debug("Image device not initialised\n"); 1560 1574 return; 1561 1575 } 1562 1576 ··· 1580 1594 swsusp_resume_block, 1581 1595 swsusp_header, NULL); 1582 1596 } else { 1583 - printk(KERN_ERR "PM: Cannot find swsusp signature!\n"); 1597 + pr_err("Cannot find swsusp signature!\n"); 1584 1598 error = -ENODEV; 1585 1599 } 1586 1600
+5 -1
kernel/sched/cpufreq_schedutil.c
··· 282 282 * Do not reduce the frequency if the CPU has not been idle 283 283 * recently, as the reduction is likely to be premature then. 284 284 */ 285 - if (busy && next_f < sg_policy->next_freq) 285 + if (busy && next_f < sg_policy->next_freq) { 286 286 next_f = sg_policy->next_freq; 287 + 288 + /* Reset cached freq as next_freq has changed */ 289 + sg_policy->cached_raw_freq = 0; 290 + } 287 291 } 288 292 sugov_update_commit(sg_policy, time, next_f); 289 293 }
+1 -2
tools/power/cpupower/.gitignore
··· 1 1 .libs 2 2 libcpupower.so 3 - libcpupower.so.0 4 - libcpupower.so.0.0.0 3 + libcpupower.so.* 5 4 build/ccdv 6 5 cpufreq-info 7 6 cpufreq-set
+6
tools/power/cpupower/Makefile
··· 30 30 $(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist)) 31 31 endif 32 32 33 + include ../../scripts/Makefile.arch 34 + 33 35 # --- CONFIGURATION BEGIN --- 34 36 35 37 # Set the following to `true' to make a unstripped, unoptimized ··· 81 79 sbindir ?= /usr/sbin 82 80 mandir ?= /usr/man 83 81 includedir ?= /usr/include 82 + ifeq ($(IS_64_BIT), 1) 83 + libdir ?= /usr/lib64 84 + else 84 85 libdir ?= /usr/lib 86 + endif 85 87 localedir ?= /usr/share/locale 86 88 docdir ?= /usr/share/doc/packages/cpupower 87 89 confdir ?= /etc/
-2
tools/power/cpupower/utils/cpufreq-info.c
··· 93 93 if (speed > 1000000) 94 94 printf("%u.%06u GHz", ((unsigned int) speed/1000000), 95 95 ((unsigned int) speed%1000000)); 96 - else if (speed > 100000) 97 - printf("%u MHz", (unsigned int) speed); 98 96 else if (speed > 1000) 99 97 printf("%u.%03u MHz", ((unsigned int) speed/1000), 100 98 (unsigned int) (speed%1000));