Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge back system-wide PM material for v5.8.

+323 -415
+125 -70
Documentation/driver-api/pm/devices.rst
··· 349 349 PM core will skip the ``suspend``, ``suspend_late`` and 350 350 ``suspend_noirq`` phases as well as all of the corresponding phases of 351 351 the subsequent device resume for all of these devices. In that case, 352 - the ``->complete`` callback will be invoked directly after the 352 + the ``->complete`` callback will be the next one invoked after the 353 353 ``->prepare`` callback and is entirely responsible for putting the 354 354 device into a consistent state as appropriate. 355 355 ··· 361 361 runtime PM disabled. 362 362 363 363 This feature also can be controlled by device drivers by using the 364 - ``DPM_FLAG_NEVER_SKIP`` and ``DPM_FLAG_SMART_PREPARE`` driver power 365 - management flags. [Typically, they are set at the time the driver is 366 - probed against the device in question by passing them to the 364 + ``DPM_FLAG_NO_DIRECT_COMPLETE`` and ``DPM_FLAG_SMART_PREPARE`` driver 365 + power management flags. [Typically, they are set at the time the driver 366 + is probed against the device in question by passing them to the 367 367 :c:func:`dev_pm_set_driver_flags` helper function.] If the first of 368 368 these flags is set, the PM core will not apply the direct-complete 369 369 procedure described above to the given device and, consequenty, to any ··· 383 383 ``->suspend`` methods provided by subsystems (bus types and PM domains 384 384 in particular) must follow an additional rule regarding what can be done 385 385 to the devices before their drivers' ``->suspend`` methods are called. 
386 - Namely, they can only resume the devices from runtime suspend by 387 - calling :c:func:`pm_runtime_resume` for them, if that is necessary, and 386 + Namely, they may resume the devices from runtime suspend by 387 + calling :c:func:`pm_runtime_resume` for them, if that is necessary, but 388 388 they must not update the state of the devices in any other way at that 389 389 time (in case the drivers need to resume the devices from runtime 390 - suspend in their ``->suspend`` methods). 390 + suspend in their ``->suspend`` methods). In fact, the PM core prevents 391 + subsystems or drivers from putting devices into runtime suspend at 392 + these times by calling :c:func:`pm_runtime_get_noresume` before issuing 393 + the ``->prepare`` callback (and calling :c:func:`pm_runtime_put` after 394 + issuing the ``->complete`` callback). 391 395 392 396 3. For a number of devices it is convenient to split suspend into the 393 397 "quiesce device" and "save device state" phases, in which cases ··· 463 459 464 460 Note, however, that new children may be registered below the device as 465 461 soon as the ``->resume`` callbacks occur; it's not necessary to wait 466 - until the ``complete`` phase with that. 462 + until the ``complete`` phase runs. 467 463 468 464 Moreover, if the preceding ``->prepare`` callback returned a positive 469 465 number, the device may have been left in runtime suspend throughout the 470 - whole system suspend and resume (the ``suspend``, ``suspend_late``, 471 - ``suspend_noirq`` phases of system suspend and the ``resume_noirq``, 472 - ``resume_early``, ``resume`` phases of system resume may have been 473 - skipped for it). In that case, the ``->complete`` callback is entirely 466 + whole system suspend and resume (its ``->suspend``, ``->suspend_late``, 467 + ``->suspend_noirq``, ``->resume_noirq``, 468 + ``->resume_early``, and ``->resume`` callbacks may have been 469 + skipped). 
In that case, the ``->complete`` callback is entirely 474 470 responsible for putting the device into a consistent state after system 475 471 suspend if necessary. [For example, it may need to queue up a runtime 476 472 resume request for the device for this purpose.] To check if that is 477 473 the case, the ``->complete`` callback can consult the device's 478 - ``power.direct_complete`` flag. Namely, if that flag is set when the 479 - ``->complete`` callback is being run, it has been called directly after 480 - the preceding ``->prepare`` and special actions may be required 481 - to make the device work correctly afterward. 474 + ``power.direct_complete`` flag. If that flag is set when the 475 + ``->complete`` callback is being run then the direct-complete mechanism 476 + was used, and special actions may be required to make the device work 477 + correctly afterward. 482 478 483 479 At the end of these phases, drivers should be as functional as they were before 484 480 suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are ··· 579 575 580 576 The ``->poweroff``, ``->poweroff_late`` and ``->poweroff_noirq`` callbacks 581 577 should do essentially the same things as the ``->suspend``, ``->suspend_late`` 582 - and ``->suspend_noirq`` callbacks, respectively. The only notable difference is 578 + and ``->suspend_noirq`` callbacks, respectively. A notable difference is 583 579 that they need not store the device register values, because the registers 584 580 should already have been stored during the ``freeze``, ``freeze_late`` or 585 - ``freeze_noirq`` phases. 581 + ``freeze_noirq`` phases. Also, on many machines the firmware will power-down 582 + the entire system, so it is not necessary for the callback to put the device in 583 + a low-power state. 
586 584 587 585 588 586 Leaving Hibernation ··· 770 764 771 765 If it is necessary to resume a device from runtime suspend during a system-wide 772 766 transition into a sleep state, that can be done by calling 773 - :c:func:`pm_runtime_resume` for it from the ``->suspend`` callback (or its 774 - couterpart for transitions related to hibernation) of either the device's driver 775 - or a subsystem responsible for it (for example, a bus type or a PM domain). 776 - That is guaranteed to work by the requirement that subsystems must not change 777 - the state of devices (possibly except for resuming them from runtime suspend) 767 + :c:func:`pm_runtime_resume` from the ``->suspend`` callback (or the ``->freeze`` 768 + or ``->poweroff`` callback for transitions related to hibernation) of either the 769 + device's driver or its subsystem (for example, a bus type or a PM domain). 770 + However, subsystems must not otherwise change the runtime status of devices 778 771 from their ``->prepare`` and ``->suspend`` callbacks (or equivalent) *before* 779 772 invoking device drivers' ``->suspend`` callbacks (or equivalent). 780 773 774 + .. _smart_suspend_flag: 775 + 776 + The ``DPM_FLAG_SMART_SUSPEND`` Driver Flag 777 + ------------------------------------------ 778 + 781 779 Some bus types and PM domains have a policy to resume all devices from runtime 782 780 suspend upfront in their ``->suspend`` callbacks, but that may not be really 783 - necessary if the driver of the device can cope with runtime-suspended devices. 784 - The driver can indicate that by setting ``DPM_FLAG_SMART_SUSPEND`` in 785 - :c:member:`power.driver_flags` at the probe time, by passing it to the 786 - :c:func:`dev_pm_set_driver_flags` helper. That also may cause middle-layer code 781 + necessary if the device's driver can cope with runtime-suspended devices. 
782 + The driver can indicate this by setting ``DPM_FLAG_SMART_SUSPEND`` in 783 + :c:member:`power.driver_flags` at probe time, with the assistance of the 784 + :c:func:`dev_pm_set_driver_flags` helper routine. 785 + 786 + Setting that flag causes the PM core and middle-layer code 787 787 (bus types, PM domains etc.) to skip the ``->suspend_late`` and 788 788 ``->suspend_noirq`` callbacks provided by the driver if the device remains in 789 - runtime suspend at the beginning of the ``suspend_late`` phase of system-wide 790 - suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM 791 - has been disabled for it, under the assumption that its state should not change 792 - after that point until the system-wide transition is over (the PM core itself 793 - does that for devices whose "noirq", "late" and "early" system-wide PM callbacks 794 - are executed directly by it). If that happens, the driver's system-wide resume 795 - callbacks, if present, may still be invoked during the subsequent system-wide 796 - resume transition and the device's runtime power management status may be set 797 - to "active" before enabling runtime PM for it, so the driver must be prepared to 798 - cope with the invocation of its system-wide resume callbacks back-to-back with 799 - its ``->runtime_suspend`` one (without the intervening ``->runtime_resume`` and 800 - so on) and the final state of the device must reflect the "active" runtime PM 801 - status in that case. 789 + runtime suspend throughout those phases of the system-wide suspend (and 790 + similarly for the "freeze" and "poweroff" parts of system hibernation). 791 + [Otherwise the same driver 792 + callback might be executed twice in a row for the same device, which would not 793 + be valid in general.] If the middle-layer system-wide PM callbacks are present 794 + for the device then they are responsible for skipping these driver callbacks; 795 + if not then the PM core skips them. 
The subsystem callback routines can 796 + determine whether they need to skip the driver callbacks by testing the return 797 + value from the :c:func:`dev_pm_skip_suspend` helper function. 798 + 799 + In addition, with ``DPM_FLAG_SMART_SUSPEND`` set, the driver's ``->thaw_noirq`` 800 + and ``->thaw_early`` callbacks are skipped in hibernation if the device remained 801 + in runtime suspend throughout the preceding "freeze" transition. Again, if the 802 + middle-layer callbacks are present for the device, they are responsible for 803 + doing this, otherwise the PM core takes care of it. 804 + 805 + 806 + The ``DPM_FLAG_MAY_SKIP_RESUME`` Driver Flag 807 + -------------------------------------------- 802 808 803 809 During system-wide resume from a sleep state it's easiest to put devices into 804 810 the full-power state, as explained in :file:`Documentation/power/runtime_pm.rst`. 805 811 [Refer to that document for more information regarding this particular issue as 806 812 well as for information on the device runtime power management framework in 807 - general.] 808 - 809 - However, it often is desirable to leave devices in suspend after system 810 - transitions to the working state, especially if those devices had been in 813 + general.] However, it often is desirable to leave devices in suspend after 814 + system transitions to the working state, especially if those devices had been in 811 815 runtime suspend before the preceding system-wide suspend (or analogous) 812 - transition. Device drivers can use the ``DPM_FLAG_LEAVE_SUSPENDED`` flag to 813 - indicate to the PM core (and middle-layer code) that they prefer the specific 814 - devices handled by them to be left suspended and they have no problems with 815 - skipping their system-wide resume callbacks for this reason. 
Whether or not the 816 - devices will actually be left in suspend may depend on their state before the 817 - given system suspend-resume cycle and on the type of the system transition under 818 - way. In particular, devices are not left suspended if that transition is a 819 - restore from hibernation, as device states are not guaranteed to be reflected 820 - by the information stored in the hibernation image in that case. 816 + transition. 821 817 822 - The middle-layer code involved in the handling of the device is expected to 823 - indicate to the PM core if the device may be left in suspend by setting its 824 - :c:member:`power.may_skip_resume` status bit which is checked by the PM core 825 - during the "noirq" phase of the preceding system-wide suspend (or analogous) 826 - transition. The middle layer is then responsible for handling the device as 827 - appropriate in its "noirq" resume callback, which is executed regardless of 828 - whether or not the device is left suspended, but the other resume callbacks 829 - (except for ``->complete``) will be skipped automatically by the PM core if the 830 - device really can be left in suspend. 818 + To that end, device drivers can use the ``DPM_FLAG_MAY_SKIP_RESUME`` flag to 819 + indicate to the PM core and middle-layer code that they allow their "noirq" and 820 + "early" resume callbacks to be skipped if the device can be left in suspend 821 + after system-wide PM transitions to the working state. Whether or not that is 822 + the case generally depends on the state of the device before the given system 823 + suspend-resume cycle and on the type of the system transition under way. 824 + In particular, the "thaw" and "restore" transitions related to hibernation are 825 + not affected by ``DPM_FLAG_MAY_SKIP_RESUME`` at all. 
[All callbacks are 826 + issued during the "restore" transition regardless of the flag settings, 827 + and whether or not any driver callbacks 828 + are skipped during the "thaw" transition depends on whether or not the 829 + ``DPM_FLAG_SMART_SUSPEND`` flag is set (see `above <smart_suspend_flag_>`_). 830 + In addition, a device is not allowed to remain in runtime suspend if any of its 831 + children will be returned to full power.] 831 832
842 + 843 + Setting the :c:member:`power.may_skip_resume` status bit along with the 844 + ``DPM_FLAG_MAY_SKIP_RESUME`` flag is necessary, but generally not sufficient, 845 + for the driver's "noirq" and "early" resume callbacks to be skipped. Whether or 846 + not they should be skipped can be determined by evaluating the 847 + :c:func:`dev_pm_skip_resume` helper function. 848 + 849 + If that function returns ``true``, the driver's "noirq" and "early" resume 850 + callbacks should be skipped and the device's runtime PM status will be set to 851 + "suspended" by the PM core. Otherwise, if the device was runtime-suspended 852 + during the preceding system-wide suspend transition and its 853 + ``DPM_FLAG_SMART_SUSPEND`` is set, its runtime PM status will be set to 854 + "active" by the PM core. [Hence, the drivers that do not set 855 + ``DPM_FLAG_SMART_SUSPEND`` should not expect the runtime PM status of their 856 + devices to be changed from "suspended" to "active" by the PM core during 857 + system-wide resume-type transitions.] 858 + 859 + If the ``DPM_FLAG_MAY_SKIP_RESUME`` flag is not set for a device, but 860 + ``DPM_FLAG_SMART_SUSPEND`` is set and the driver's "late" and "noirq" suspend 861 + callbacks are skipped, its system-wide "noirq" and "early" resume callbacks, if 862 + present, are invoked as usual and the device's runtime PM status is set to 863 + "active" by the PM core before enabling runtime PM for it. In that case, the 864 + driver must be prepared to cope with the invocation of its system-wide resume 865 + callbacks back-to-back with its ``->runtime_suspend`` one (without the 866 + intervening ``->runtime_resume`` and system-wide suspend callbacks) and the 867 + final state of the device must reflect the "active" runtime PM status in that 868 + case. 
[Note that this is not a problem at all if the driver's 869 + ``->suspend_late`` callback pointer points to the same function as its 870 + ``->runtime_suspend`` one and its ``->resume_early`` callback pointer points to 871 + the same function as the ``->runtime_resume`` one, while none of the other 872 + system-wide suspend-resume callbacks of the driver are present, for example.] 873 + 874 + Likewise, if ``DPM_FLAG_MAY_SKIP_RESUME`` is set for a device, its driver's 875 + system-wide "noirq" and "early" resume callbacks may be skipped while its "late" 876 + and "noirq" suspend callbacks may have been executed (in principle, regardless 877 + of whether or not ``DPM_FLAG_SMART_SUSPEND`` is set). In that case, the driver 878 + needs to be able to cope with the invocation of its ``->runtime_resume`` 879 + callback back-to-back with its "late" and "noirq" suspend ones. [For instance, 880 + that is not a concern if the driver sets both ``DPM_FLAG_SMART_SUSPEND`` and 881 + ``DPM_FLAG_MAY_SKIP_RESUME`` and uses the same pair of suspend/resume callback 882 + functions for runtime PM and system-wide suspend/resume.]
+26 -28
Documentation/power/pci.rst
··· 1004 1004 time with the help of the dev_pm_set_driver_flags() function and they should not 1005 1005 be updated directly afterwards. 1006 1006 1007 - The DPM_FLAG_NEVER_SKIP flag prevents the PM core from using the direct-complete 1008 - mechanism allowing device suspend/resume callbacks to be skipped if the device 1009 - is in runtime suspend when the system suspend starts. That also affects all of 1010 - the ancestors of the device, so this flag should only be used if absolutely 1011 - necessary. 1007 + The DPM_FLAG_NO_DIRECT_COMPLETE flag prevents the PM core from using the 1008 + direct-complete mechanism allowing device suspend/resume callbacks to be skipped 1009 + if the device is in runtime suspend when the system suspend starts. That also 1010 + affects all of the ancestors of the device, so this flag should only be used if 1011 + absolutely necessary. 1012 1012 1013 - The DPM_FLAG_SMART_PREPARE flag instructs the PCI bus type to only return a 1014 - positive value from pci_pm_prepare() if the ->prepare callback provided by the 1013 + The DPM_FLAG_SMART_PREPARE flag causes the PCI bus type to return a positive 1014 + value from pci_pm_prepare() only if the ->prepare callback provided by the 1015 1015 driver of the device returns a positive value. That allows the driver to opt 1016 - out from using the direct-complete mechanism dynamically. 1016 + out from using the direct-complete mechanism dynamically (whereas setting 1017 + DPM_FLAG_NO_DIRECT_COMPLETE means permanent opt-out). 1017 1018 1018 1019 The DPM_FLAG_SMART_SUSPEND flag tells the PCI bus type that from the driver's 1019 1020 perspective the device can be safely left in runtime suspend during system 1020 1021 suspend. That causes pci_pm_suspend(), pci_pm_freeze() and pci_pm_poweroff() 1021 - to skip resuming the device from runtime suspend unless there are PCI-specific 1022 - reasons for doing that. 
Also, it causes pci_pm_suspend_late/noirq(), 1023 - pci_pm_freeze_late/noirq() and pci_pm_poweroff_late/noirq() to return early 1024 - if the device remains in runtime suspend in the beginning of the "late" phase 1025 - of the system-wide transition under way. Moreover, if the device is in 1026 - runtime suspend in pci_pm_resume_noirq() or pci_pm_restore_noirq(), its runtime 1027 - power management status will be changed to "active" (as it is going to be put 1028 - into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(), 1029 - the function will set the power.direct_complete flag for it (to make the PM core 1030 - skip the subsequent "thaw" callbacks for it) and return. 1022 + to avoid resuming the device from runtime suspend unless there are PCI-specific 1023 + reasons for doing that. Also, it causes pci_pm_suspend_late/noirq() and 1024 + pci_pm_poweroff_late/noirq() to return early if the device remains in runtime 1025 + suspend during the "late" phase of the system-wide transition under way. 1026 + Moreover, if the device is in runtime suspend in pci_pm_resume_noirq() or 1027 + pci_pm_restore_noirq(), its runtime PM status will be changed to "active" (as it 1028 + is going to be put into D0 going forward). 1031 1029 1032 - Setting the DPM_FLAG_LEAVE_SUSPENDED flag means that the driver prefers the 1033 - device to be left in suspend after system-wide transitions to the working state. 
1034 - This flag is checked by the PM core, but the PCI bus type informs the PM core 1035 - which devices may be left in suspend from its perspective (that happens during 1036 - the "noirq" phase of system-wide suspend and analogous transitions) and next it 1037 - uses the dev_pm_may_skip_resume() helper to decide whether or not to return from 1038 - pci_pm_resume_noirq() early, as the PM core will skip the remaining resume 1039 - callbacks for the device during the transition under way and will set its 1040 - runtime PM status to "suspended" if dev_pm_may_skip_resume() returns "true" for 1041 - it. 1030 + Setting the DPM_FLAG_MAY_SKIP_RESUME flag means that the driver allows its 1031 + "noirq" and "early" resume callbacks to be skipped if the device can be left 1032 + in suspend after a system-wide transition into the working state. This flag is 1033 + taken into consideration by the PM core along with the power.may_skip_resume 1034 + status bit of the device which is set by pci_pm_suspend_noirq() in certain 1035 + situations. If the PM core determines that the driver's "noirq" and "early" 1036 + resume callbacks should be skipped, the dev_pm_skip_resume() helper function 1037 + will return "true" and that will cause pci_pm_resume_noirq() and 1038 + pci_pm_resume_early() to return upfront without touching the device and 1039 + executing the driver callbacks. 1042 1040 1043 1041 3.2. Device Runtime Power Management 1044 1042 ------------------------------------
+7 -7
drivers/acpi/acpi_lpss.c
··· 1041 1041 { 1042 1042 int ret; 1043 1043 1044 - if (dev_pm_smart_suspend_and_suspended(dev)) 1044 + if (dev_pm_skip_suspend(dev)) 1045 1045 return 0; 1046 1046 1047 1047 ret = pm_generic_suspend_late(dev); ··· 1093 1093 if (pdata->dev_desc->resume_from_noirq) 1094 1094 return 0; 1095 1095 1096 + if (dev_pm_skip_resume(dev)) 1097 + return 0; 1098 + 1096 1099 return acpi_lpss_do_resume_early(dev); 1097 1100 } 1098 1101 ··· 1105 1102 int ret; 1106 1103 1107 1104 /* Follow acpi_subsys_resume_noirq(). */ 1108 - if (dev_pm_may_skip_resume(dev)) 1105 + if (dev_pm_skip_resume(dev)) 1109 1106 return 0; 1110 - 1111 - if (dev_pm_smart_suspend_and_suspended(dev)) 1112 - pm_runtime_set_active(dev); 1113 1107 1114 1108 ret = pm_generic_resume_noirq(dev); 1115 1109 if (ret) ··· 1169 1169 { 1170 1170 struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev)); 1171 1171 1172 - if (dev_pm_smart_suspend_and_suspended(dev)) 1172 + if (dev_pm_skip_suspend(dev)) 1173 1173 return 0; 1174 1174 1175 1175 if (pdata->dev_desc->resume_from_noirq) ··· 1182 1182 { 1183 1183 struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev)); 1184 1184 1185 - if (dev_pm_smart_suspend_and_suspended(dev)) 1185 + if (dev_pm_skip_suspend(dev)) 1186 1186 return 0; 1187 1187 1188 1188 if (pdata->dev_desc->resume_from_noirq) {
+1 -1
drivers/acpi/acpi_tad.c
··· 624 624 */ 625 625 device_init_wakeup(dev, true); 626 626 dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND | 627 - DPM_FLAG_LEAVE_SUSPENDED); 627 + DPM_FLAG_MAY_SKIP_RESUME); 628 628 /* 629 629 * The platform bus type layer tells the ACPI PM domain powers up the 630 630 * device, so set the runtime PM status of it to "active".
+13 -18
drivers/acpi/device_pm.c
··· 1084 1084 { 1085 1085 int ret; 1086 1086 1087 - if (dev_pm_smart_suspend_and_suspended(dev)) 1087 + if (dev_pm_skip_suspend(dev)) 1088 1088 return 0; 1089 1089 1090 1090 ret = pm_generic_suspend_late(dev); ··· 1100 1100 { 1101 1101 int ret; 1102 1102 1103 - if (dev_pm_smart_suspend_and_suspended(dev)) { 1104 - dev->power.may_skip_resume = true; 1103 + if (dev_pm_skip_suspend(dev)) 1105 1104 return 0; 1106 - } 1107 1105 1108 1106 ret = pm_generic_suspend_noirq(dev); 1109 1107 if (ret) ··· 1114 1116 * acpi_subsys_complete() to take care of fixing up the device's state 1115 1117 * anyway, if need be. 1116 1118 */ 1117 - dev->power.may_skip_resume = device_may_wakeup(dev) || 1118 - !device_can_wakeup(dev); 1119 + if (device_can_wakeup(dev) && !device_may_wakeup(dev)) 1120 + dev->power.may_skip_resume = false; 1119 1121 1120 1122 return 0; 1121 1123 } ··· 1127 1129 */ 1128 1130 static int acpi_subsys_resume_noirq(struct device *dev) 1129 1131 { 1130 - if (dev_pm_may_skip_resume(dev)) 1132 + if (dev_pm_skip_resume(dev)) 1131 1133 return 0; 1132 - 1133 - /* 1134 - * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend 1135 - * during system suspend, so update their runtime PM status to "active" 1136 - * as they will be put into D0 going forward. 1137 - */ 1138 - if (dev_pm_smart_suspend_and_suspended(dev)) 1139 - pm_runtime_set_active(dev); 1140 1134 1141 1135 return pm_generic_resume_noirq(dev); 1142 1136 } ··· 1143 1153 */ 1144 1154 static int acpi_subsys_resume_early(struct device *dev) 1145 1155 { 1146 - int ret = acpi_dev_resume(dev); 1156 + int ret; 1157 + 1158 + if (dev_pm_skip_resume(dev)) 1159 + return 0; 1160 + 1161 + ret = acpi_dev_resume(dev); 1147 1162 return ret ? 
ret : pm_generic_resume_early(dev); 1148 1163 } 1149 1164 ··· 1213 1218 { 1214 1219 int ret; 1215 1220 1216 - if (dev_pm_smart_suspend_and_suspended(dev)) 1221 + if (dev_pm_skip_suspend(dev)) 1217 1222 return 0; 1218 1223 1219 1224 ret = pm_generic_poweroff_late(dev); ··· 1229 1234 */ 1230 1235 static int acpi_subsys_poweroff_noirq(struct device *dev) 1231 1236 { 1232 - if (dev_pm_smart_suspend_and_suspended(dev)) 1237 + if (dev_pm_skip_suspend(dev)) 1233 1238 return 0; 1234 1239 1235 1240 return pm_generic_poweroff_noirq(dev);
+113 -239
drivers/base/power/main.c
··· 562 562 /*------------------------- Resume routines -------------------------*/ 563 563 564 564 /** 565 - * suspend_event - Return a "suspend" message for given "resume" one. 566 - * @resume_msg: PM message representing a system-wide resume transition. 567 - */ 568 - static pm_message_t suspend_event(pm_message_t resume_msg) 569 - { 570 - switch (resume_msg.event) { 571 - case PM_EVENT_RESUME: 572 - return PMSG_SUSPEND; 573 - case PM_EVENT_THAW: 574 - case PM_EVENT_RESTORE: 575 - return PMSG_FREEZE; 576 - case PM_EVENT_RECOVER: 577 - return PMSG_HIBERNATE; 578 - } 579 - return PMSG_ON; 580 - } 581 - 582 - /** 583 - * dev_pm_may_skip_resume - System-wide device resume optimization check. 565 + * dev_pm_skip_resume - System-wide device resume optimization check. 584 566 * @dev: Target device. 585 567 * 586 - * Checks whether or not the device may be left in suspend after a system-wide 587 - * transition to the working state. 568 + * Return: 569 + * - %false if the transition under way is RESTORE. 570 + * - Return value of dev_pm_skip_suspend() if the transition under way is THAW. 571 + * - The logical negation of %power.must_resume otherwise (that is, when the 572 + * transition under way is RESUME). 
588 573 */ 589 - bool dev_pm_may_skip_resume(struct device *dev) 574 + bool dev_pm_skip_resume(struct device *dev) 590 575 { 591 - return !dev->power.must_resume && pm_transition.event != PM_EVENT_RESTORE; 576 + if (pm_transition.event == PM_EVENT_RESTORE) 577 + return false; 578 + 579 + if (pm_transition.event == PM_EVENT_THAW) 580 + return dev_pm_skip_suspend(dev); 581 + 582 + return !dev->power.must_resume; 592 583 } 593 - 594 - static pm_callback_t dpm_subsys_resume_noirq_cb(struct device *dev, 595 - pm_message_t state, 596 - const char **info_p) 597 - { 598 - pm_callback_t callback; 599 - const char *info; 600 - 601 - if (dev->pm_domain) { 602 - info = "noirq power domain "; 603 - callback = pm_noirq_op(&dev->pm_domain->ops, state); 604 - } else if (dev->type && dev->type->pm) { 605 - info = "noirq type "; 606 - callback = pm_noirq_op(dev->type->pm, state); 607 - } else if (dev->class && dev->class->pm) { 608 - info = "noirq class "; 609 - callback = pm_noirq_op(dev->class->pm, state); 610 - } else if (dev->bus && dev->bus->pm) { 611 - info = "noirq bus "; 612 - callback = pm_noirq_op(dev->bus->pm, state); 613 - } else { 614 - return NULL; 615 - } 616 - 617 - if (info_p) 618 - *info_p = info; 619 - 620 - return callback; 621 - } 622 - 623 - static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev, 624 - pm_message_t state, 625 - const char **info_p); 626 - 627 - static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev, 628 - pm_message_t state, 629 - const char **info_p); 630 584 631 585 /** 632 586 * device_resume_noirq - Execute a "noirq resume" callback for given device. 
··· 593 639 */ 594 640 static int device_resume_noirq(struct device *dev, pm_message_t state, bool async) 595 641 { 596 - pm_callback_t callback; 597 - const char *info; 642 + pm_callback_t callback = NULL; 643 + const char *info = NULL; 598 644 bool skip_resume; 599 645 int error = 0; 600 646 ··· 610 656 if (!dpm_wait_for_superior(dev, async)) 611 657 goto Out; 612 658 613 - skip_resume = dev_pm_may_skip_resume(dev); 659 + skip_resume = dev_pm_skip_resume(dev); 660 + /* 661 + * If the driver callback is skipped below or by the middle layer 662 + * callback and device_resume_early() also skips the driver callback for 663 + * this device later, it needs to appear as "suspended" to PM-runtime, 664 + * so change its status accordingly. 665 + * 666 + * Otherwise, the device is going to be resumed, so set its PM-runtime 667 + * status to "active", but do that only if DPM_FLAG_SMART_SUSPEND is set 668 + * to avoid confusing drivers that don't use it. 669 + */ 670 + if (skip_resume) 671 + pm_runtime_set_suspended(dev); 672 + else if (dev_pm_skip_suspend(dev)) 673 + pm_runtime_set_active(dev); 614 674 615 - callback = dpm_subsys_resume_noirq_cb(dev, state, &info); 675 + if (dev->pm_domain) { 676 + info = "noirq power domain "; 677 + callback = pm_noirq_op(&dev->pm_domain->ops, state); 678 + } else if (dev->type && dev->type->pm) { 679 + info = "noirq type "; 680 + callback = pm_noirq_op(dev->type->pm, state); 681 + } else if (dev->class && dev->class->pm) { 682 + info = "noirq class "; 683 + callback = pm_noirq_op(dev->class->pm, state); 684 + } else if (dev->bus && dev->bus->pm) { 685 + info = "noirq bus "; 686 + callback = pm_noirq_op(dev->bus->pm, state); 687 + } 616 688 if (callback) 617 689 goto Run; 618 690 619 691 if (skip_resume) 620 692 goto Skip; 621 - 622 - if (dev_pm_smart_suspend_and_suspended(dev)) { 623 - pm_message_t suspend_msg = suspend_event(state); 624 - 625 - /* 626 - * If "freeze" callbacks have been skipped during a transition 627 - * related to 
hibernation, the subsequent "thaw" callbacks must 628 - * be skipped too or bad things may happen. Otherwise, resume 629 - * callbacks are going to be run for the device, so its runtime 630 - * PM status must be changed to reflect the new state after the 631 - * transition under way. 632 - */ 633 - if (!dpm_subsys_suspend_late_cb(dev, suspend_msg, NULL) && 634 - !dpm_subsys_suspend_noirq_cb(dev, suspend_msg, NULL)) { 635 - if (state.event == PM_EVENT_THAW) { 636 - skip_resume = true; 637 - goto Skip; 638 - } else { 639 - pm_runtime_set_active(dev); 640 - } 641 - } 642 - } 643 693 644 694 if (dev->driver && dev->driver->pm) { 645 695 info = "noirq driver "; ··· 655 697 656 698 Skip: 657 699 dev->power.is_noirq_suspended = false; 658 - 659 - if (skip_resume) { 660 - /* Make the next phases of resume skip the device. */ 661 - dev->power.is_late_suspended = false; 662 - dev->power.is_suspended = false; 663 - /* 664 - * The device is going to be left in suspend, but it might not 665 - * have been in runtime suspend before the system suspended, so 666 - * its runtime PM status needs to be updated to avoid confusing 667 - * the runtime PM framework when runtime PM is enabled for the 668 - * device again. 
669 - */ 670 - pm_runtime_set_suspended(dev); 671 - } 672 700 673 701 Out: 674 702 complete_all(&dev->power.completion); ··· 754 810 cpuidle_resume(); 755 811 } 756 812 757 - static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev, 758 - pm_message_t state, 759 - const char **info_p) 760 - { 761 - pm_callback_t callback; 762 - const char *info; 763 - 764 - if (dev->pm_domain) { 765 - info = "early power domain "; 766 - callback = pm_late_early_op(&dev->pm_domain->ops, state); 767 - } else if (dev->type && dev->type->pm) { 768 - info = "early type "; 769 - callback = pm_late_early_op(dev->type->pm, state); 770 - } else if (dev->class && dev->class->pm) { 771 - info = "early class "; 772 - callback = pm_late_early_op(dev->class->pm, state); 773 - } else if (dev->bus && dev->bus->pm) { 774 - info = "early bus "; 775 - callback = pm_late_early_op(dev->bus->pm, state); 776 - } else { 777 - return NULL; 778 - } 779 - 780 - if (info_p) 781 - *info_p = info; 782 - 783 - return callback; 784 - } 785 - 786 813 /** 787 814 * device_resume_early - Execute an "early resume" callback for given device. 788 815 * @dev: Device to handle. 
··· 764 849 */ 765 850 static int device_resume_early(struct device *dev, pm_message_t state, bool async) 766 851 { 767 - pm_callback_t callback; 768 - const char *info; 852 + pm_callback_t callback = NULL; 853 + const char *info = NULL; 769 854 int error = 0; 770 855 771 856 TRACE_DEVICE(dev); ··· 780 865 if (!dpm_wait_for_superior(dev, async)) 781 866 goto Out; 782 867 783 - callback = dpm_subsys_resume_early_cb(dev, state, &info); 868 + if (dev->pm_domain) { 869 + info = "early power domain "; 870 + callback = pm_late_early_op(&dev->pm_domain->ops, state); 871 + } else if (dev->type && dev->type->pm) { 872 + info = "early type "; 873 + callback = pm_late_early_op(dev->type->pm, state); 874 + } else if (dev->class && dev->class->pm) { 875 + info = "early class "; 876 + callback = pm_late_early_op(dev->class->pm, state); 877 + } else if (dev->bus && dev->bus->pm) { 878 + info = "early bus "; 879 + callback = pm_late_early_op(dev->bus->pm, state); 880 + } 881 + if (callback) 882 + goto Run; 784 883 785 - if (!callback && dev->driver && dev->driver->pm) { 884 + if (dev_pm_skip_resume(dev)) 885 + goto Skip; 886 + 887 + if (dev->driver && dev->driver->pm) { 786 888 info = "early driver "; 787 889 callback = pm_late_early_op(dev->driver->pm, state); 788 890 } 789 891 892 + Run: 790 893 error = dpm_run_callback(callback, dev, state, info); 894 + 895 + Skip: 791 896 dev->power.is_late_suspended = false; 792 897 793 - Out: 898 + Out: 794 899 TRACE_RESUME(error); 795 900 796 901 pm_runtime_enable(dev); ··· 1180 1245 device_links_read_unlock(idx); 1181 1246 } 1182 1247 1183 - static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev, 1184 - pm_message_t state, 1185 - const char **info_p) 1186 - { 1187 - pm_callback_t callback; 1188 - const char *info; 1189 - 1190 - if (dev->pm_domain) { 1191 - info = "noirq power domain "; 1192 - callback = pm_noirq_op(&dev->pm_domain->ops, state); 1193 - } else if (dev->type && dev->type->pm) { 1194 - info = "noirq type "; 1195 - 
callback = pm_noirq_op(dev->type->pm, state); 1196 - } else if (dev->class && dev->class->pm) { 1197 - info = "noirq class "; 1198 - callback = pm_noirq_op(dev->class->pm, state); 1199 - } else if (dev->bus && dev->bus->pm) { 1200 - info = "noirq bus "; 1201 - callback = pm_noirq_op(dev->bus->pm, state); 1202 - } else { 1203 - return NULL; 1204 - } 1205 - 1206 - if (info_p) 1207 - *info_p = info; 1208 - 1209 - return callback; 1210 - } 1211 - 1212 - static bool device_must_resume(struct device *dev, pm_message_t state, 1213 - bool no_subsys_suspend_noirq) 1214 - { 1215 - pm_message_t resume_msg = resume_event(state); 1216 - 1217 - /* 1218 - * If all of the device driver's "noirq", "late" and "early" callbacks 1219 - * are invoked directly by the core, the decision to allow the device to 1220 - * stay in suspend can be based on its current runtime PM status and its 1221 - * wakeup settings. 1222 - */ 1223 - if (no_subsys_suspend_noirq && 1224 - !dpm_subsys_suspend_late_cb(dev, state, NULL) && 1225 - !dpm_subsys_resume_early_cb(dev, resume_msg, NULL) && 1226 - !dpm_subsys_resume_noirq_cb(dev, resume_msg, NULL)) 1227 - return !pm_runtime_status_suspended(dev) && 1228 - (resume_msg.event != PM_EVENT_RESUME || 1229 - (device_can_wakeup(dev) && !device_may_wakeup(dev))); 1230 - 1231 - /* 1232 - * The only safe strategy here is to require that if the device may not 1233 - * be left in suspend, resume callbacks must be invoked for it. 1234 - */ 1235 - return !dev->power.may_skip_resume; 1236 - } 1237 - 1238 1248 /** 1239 1249 * __device_suspend_noirq - Execute a "noirq suspend" callback for given device. 1240 1250 * @dev: Device to handle. 
··· 1191 1311 */ 1192 1312 static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async) 1193 1313 { 1194 - pm_callback_t callback; 1195 - const char *info; 1196 - bool no_subsys_cb = false; 1314 + pm_callback_t callback = NULL; 1315 + const char *info = NULL; 1197 1316 int error = 0; 1198 1317 1199 1318 TRACE_DEVICE(dev); ··· 1206 1327 if (dev->power.syscore || dev->power.direct_complete) 1207 1328 goto Complete; 1208 1329 1209 - callback = dpm_subsys_suspend_noirq_cb(dev, state, &info); 1330 + if (dev->pm_domain) { 1331 + info = "noirq power domain "; 1332 + callback = pm_noirq_op(&dev->pm_domain->ops, state); 1333 + } else if (dev->type && dev->type->pm) { 1334 + info = "noirq type "; 1335 + callback = pm_noirq_op(dev->type->pm, state); 1336 + } else if (dev->class && dev->class->pm) { 1337 + info = "noirq class "; 1338 + callback = pm_noirq_op(dev->class->pm, state); 1339 + } else if (dev->bus && dev->bus->pm) { 1340 + info = "noirq bus "; 1341 + callback = pm_noirq_op(dev->bus->pm, state); 1342 + } 1210 1343 if (callback) 1211 1344 goto Run; 1212 1345 1213 - no_subsys_cb = !dpm_subsys_suspend_late_cb(dev, state, NULL); 1214 - 1215 - if (dev_pm_smart_suspend_and_suspended(dev) && no_subsys_cb) 1346 + if (dev_pm_skip_suspend(dev)) 1216 1347 goto Skip; 1217 1348 1218 1349 if (dev->driver && dev->driver->pm) { ··· 1240 1351 Skip: 1241 1352 dev->power.is_noirq_suspended = true; 1242 1353 1243 - if (dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED)) { 1244 - dev->power.must_resume = dev->power.must_resume || 1245 - atomic_read(&dev->power.usage_count) > 1 || 1246 - device_must_resume(dev, state, no_subsys_cb); 1247 - } else { 1354 + /* 1355 + * Skipping the resume of devices that were in use right before the 1356 + * system suspend (as indicated by their PM-runtime usage counters) 1357 + * would be suboptimal. Also resume them if doing that is not allowed 1358 + * to be skipped. 
1359 + */ 1360 + if (atomic_read(&dev->power.usage_count) > 1 || 1361 + !(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) && 1362 + dev->power.may_skip_resume)) 1248 1363 dev->power.must_resume = true; 1249 - } 1250 1364 1251 1365 if (dev->power.must_resume) 1252 1366 dpm_superior_set_must_resume(dev); ··· 1366 1474 spin_unlock_irq(&parent->power.lock); 1367 1475 } 1368 1476 1369 - static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev, 1370 - pm_message_t state, 1371 - const char **info_p) 1372 - { 1373 - pm_callback_t callback; 1374 - const char *info; 1375 - 1376 - if (dev->pm_domain) { 1377 - info = "late power domain "; 1378 - callback = pm_late_early_op(&dev->pm_domain->ops, state); 1379 - } else if (dev->type && dev->type->pm) { 1380 - info = "late type "; 1381 - callback = pm_late_early_op(dev->type->pm, state); 1382 - } else if (dev->class && dev->class->pm) { 1383 - info = "late class "; 1384 - callback = pm_late_early_op(dev->class->pm, state); 1385 - } else if (dev->bus && dev->bus->pm) { 1386 - info = "late bus "; 1387 - callback = pm_late_early_op(dev->bus->pm, state); 1388 - } else { 1389 - return NULL; 1390 - } 1391 - 1392 - if (info_p) 1393 - *info_p = info; 1394 - 1395 - return callback; 1396 - } 1397 - 1398 1477 /** 1399 1478 * __device_suspend_late - Execute a "late suspend" callback for given device. 1400 1479 * @dev: Device to handle. 
··· 1376 1513 */ 1377 1514 static int __device_suspend_late(struct device *dev, pm_message_t state, bool async) 1378 1515 { 1379 - pm_callback_t callback; 1380 - const char *info; 1516 + pm_callback_t callback = NULL; 1517 + const char *info = NULL; 1381 1518 int error = 0; 1382 1519 1383 1520 TRACE_DEVICE(dev); ··· 1398 1535 if (dev->power.syscore || dev->power.direct_complete) 1399 1536 goto Complete; 1400 1537 1401 - callback = dpm_subsys_suspend_late_cb(dev, state, &info); 1538 + if (dev->pm_domain) { 1539 + info = "late power domain "; 1540 + callback = pm_late_early_op(&dev->pm_domain->ops, state); 1541 + } else if (dev->type && dev->type->pm) { 1542 + info = "late type "; 1543 + callback = pm_late_early_op(dev->type->pm, state); 1544 + } else if (dev->class && dev->class->pm) { 1545 + info = "late class "; 1546 + callback = pm_late_early_op(dev->class->pm, state); 1547 + } else if (dev->bus && dev->bus->pm) { 1548 + info = "late bus "; 1549 + callback = pm_late_early_op(dev->bus->pm, state); 1550 + } 1402 1551 if (callback) 1403 1552 goto Run; 1404 1553 1405 - if (dev_pm_smart_suspend_and_suspended(dev) && 1406 - !dpm_subsys_suspend_noirq_cb(dev, state, NULL)) 1554 + if (dev_pm_skip_suspend(dev)) 1407 1555 goto Skip; 1408 1556 1409 1557 if (dev->driver && dev->driver->pm) { ··· 1640 1766 dev->power.direct_complete = false; 1641 1767 } 1642 1768 1643 - dev->power.may_skip_resume = false; 1769 + dev->power.may_skip_resume = true; 1644 1770 dev->power.must_resume = false; 1645 1771 1646 1772 dpm_watchdog_set(&wd, dev); ··· 1844 1970 spin_lock_irq(&dev->power.lock); 1845 1971 dev->power.direct_complete = state.event == PM_EVENT_SUSPEND && 1846 1972 (ret > 0 || dev->power.no_pm_callbacks) && 1847 - !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP); 1973 + !dev_pm_test_driver_flags(dev, DPM_FLAG_NO_DIRECT_COMPLETE); 1848 1974 spin_unlock_irq(&dev->power.lock); 1849 1975 return 0; 1850 1976 } ··· 2002 2128 spin_unlock_irq(&dev->power.lock); 2003 2129 } 2004 2130 
2005 - bool dev_pm_smart_suspend_and_suspended(struct device *dev) 2131 + bool dev_pm_skip_suspend(struct device *dev) 2006 2132 { 2007 2133 return dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) && 2008 2134 pm_runtime_status_suspended(dev);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 191 191 } 192 192 193 193 if (adev->runpm) { 194 - dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP); 194 + dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 195 195 pm_runtime_use_autosuspend(dev->dev); 196 196 pm_runtime_set_autosuspend_delay(dev->dev, 5000); 197 197 pm_runtime_set_active(dev->dev);
+1 -1
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 549 549 * because the HDA driver may require us to enable the audio power 550 550 * domain during system suspend. 551 551 */ 552 - dev_pm_set_driver_flags(kdev, DPM_FLAG_NEVER_SKIP); 552 + dev_pm_set_driver_flags(kdev, DPM_FLAG_NO_DIRECT_COMPLETE); 553 553 554 554 pm_runtime_set_autosuspend_delay(kdev, 10000); /* 10s */ 555 555 pm_runtime_mark_last_busy(kdev);
+1 -1
drivers/gpu/drm/radeon/radeon_kms.c
··· 158 158 } 159 159 160 160 if (radeon_is_px(dev)) { 161 - dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP); 161 + dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 162 162 pm_runtime_use_autosuspend(dev->dev); 163 163 pm_runtime_set_autosuspend_delay(dev->dev, 5000); 164 164 pm_runtime_set_active(dev->dev);
+2 -2
drivers/i2c/busses/i2c-designware-platdrv.c
··· 357 357 if (dev->flags & ACCESS_NO_IRQ_SUSPEND) { 358 358 dev_pm_set_driver_flags(&pdev->dev, 359 359 DPM_FLAG_SMART_PREPARE | 360 - DPM_FLAG_LEAVE_SUSPENDED); 360 + DPM_FLAG_MAY_SKIP_RESUME); 361 361 } else { 362 362 dev_pm_set_driver_flags(&pdev->dev, 363 363 DPM_FLAG_SMART_PREPARE | 364 364 DPM_FLAG_SMART_SUSPEND | 365 - DPM_FLAG_LEAVE_SUSPENDED); 365 + DPM_FLAG_MAY_SKIP_RESUME); 366 366 } 367 367 368 368 /* The code below assumes runtime PM to be disabled. */
+1 -1
drivers/misc/mei/pci-me.c
··· 241 241 * MEI requires to resume from runtime suspend mode 242 242 * in order to perform link reset flow upon system suspend. 243 243 */ 244 - dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 244 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 245 245 246 246 /* 247 247 * ME maps runtime suspend/resume to D0i states,
+1 -1
drivers/misc/mei/pci-txe.c
··· 128 128 * MEI requires to resume from runtime suspend mode 129 129 * in order to perform link reset flow upon system suspend. 130 130 */ 131 - dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 131 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 132 132 133 133 /* 134 134 * TXE maps runtime suspend/resume to own power gating states,
+1 -1
drivers/net/ethernet/intel/e1000e/netdev.c
··· 7549 7549 7550 7550 e1000_print_device_info(adapter); 7551 7551 7552 - dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 7552 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 7553 7553 7554 7554 if (pci_dev_run_wake(pdev) && hw->mac.type < e1000_pch_cnp) 7555 7555 pm_runtime_put_noidle(&pdev->dev);
+1 -1
drivers/net/ethernet/intel/igb/igb_main.c
··· 3445 3445 } 3446 3446 } 3447 3447 3448 - dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 3448 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 3449 3449 3450 3450 pm_runtime_put_noidle(&pdev->dev); 3451 3451 return 0;
+1 -1
drivers/net/ethernet/intel/igc/igc_main.c
··· 4825 4825 pcie_print_link_status(pdev); 4826 4826 netdev_info(netdev, "MAC: %pM\n", netdev->dev_addr); 4827 4827 4828 - dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 4828 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 4829 4829 4830 4830 pm_runtime_put_noidle(&pdev->dev); 4831 4831
+1 -1
drivers/pci/hotplug/pciehp_core.c
··· 275 275 * If the port is already runtime suspended we can keep it that 276 276 * way. 277 277 */ 278 - if (dev_pm_smart_suspend_and_suspended(&dev->port->dev)) 278 + if (dev_pm_skip_suspend(&dev->port->dev)) 279 279 return 0; 280 280 281 281 pciehp_disable_interrupt(dev);
+17 -17
drivers/pci/pci-driver.c
··· 776 776 777 777 static int pci_pm_suspend_late(struct device *dev) 778 778 { 779 - if (dev_pm_smart_suspend_and_suspended(dev)) 779 + if (dev_pm_skip_suspend(dev)) 780 780 return 0; 781 781 782 782 pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev)); ··· 789 789 struct pci_dev *pci_dev = to_pci_dev(dev); 790 790 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 791 791 792 - if (dev_pm_smart_suspend_and_suspended(dev)) { 793 - dev->power.may_skip_resume = true; 792 + if (dev_pm_skip_suspend(dev)) 794 793 return 0; 795 - } 796 794 797 795 if (pci_has_legacy_pm_support(pci_dev)) 798 796 return pci_legacy_suspend_late(dev, PMSG_SUSPEND); ··· 878 880 * pci_pm_complete() to take care of fixing up the device's state 879 881 * anyway, if need be. 880 882 */ 881 - dev->power.may_skip_resume = device_may_wakeup(dev) || 882 - !device_can_wakeup(dev); 883 + if (device_can_wakeup(dev) && !device_may_wakeup(dev)) 884 + dev->power.may_skip_resume = false; 883 885 884 886 return 0; 885 887 } ··· 891 893 pci_power_t prev_state = pci_dev->current_state; 892 894 bool skip_bus_pm = pci_dev->skip_bus_pm; 893 895 894 - if (dev_pm_may_skip_resume(dev)) 896 + if (dev_pm_skip_resume(dev)) 895 897 return 0; 896 - 897 - /* 898 - * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend 899 - * during system suspend, so update their runtime PM status to "active" 900 - * as they are going to be put into D0 shortly. 
901 - */ 902 - if (dev_pm_smart_suspend_and_suspended(dev)) 903 - pm_runtime_set_active(dev); 904 898 905 899 /* 906 900 * In the suspend-to-idle case, devices left in D0 during suspend will ··· 916 926 return pm->resume_noirq(dev); 917 927 918 928 return 0; 929 + } 930 + 931 + static int pci_pm_resume_early(struct device *dev) 932 + { 933 + if (dev_pm_skip_resume(dev)) 934 + return 0; 935 + 936 + return pm_generic_resume_early(dev); 919 937 } 920 938 921 939 static int pci_pm_resume(struct device *dev) ··· 959 961 #define pci_pm_suspend_late NULL 960 962 #define pci_pm_suspend_noirq NULL 961 963 #define pci_pm_resume NULL 964 + #define pci_pm_resume_early NULL 962 965 #define pci_pm_resume_noirq NULL 963 966 964 967 #endif /* !CONFIG_SUSPEND */ ··· 1126 1127 1127 1128 static int pci_pm_poweroff_late(struct device *dev) 1128 1129 { 1129 - if (dev_pm_smart_suspend_and_suspended(dev)) 1130 + if (dev_pm_skip_suspend(dev)) 1130 1131 return 0; 1131 1132 1132 1133 pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev)); ··· 1139 1140 struct pci_dev *pci_dev = to_pci_dev(dev); 1140 1141 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1141 1142 1142 - if (dev_pm_smart_suspend_and_suspended(dev)) 1143 + if (dev_pm_skip_suspend(dev)) 1143 1144 return 0; 1144 1145 1145 1146 if (pci_has_legacy_pm_support(pci_dev)) ··· 1357 1358 .suspend = pci_pm_suspend, 1358 1359 .suspend_late = pci_pm_suspend_late, 1359 1360 .resume = pci_pm_resume, 1361 + .resume_early = pci_pm_resume_early, 1360 1362 .freeze = pci_pm_freeze, 1361 1363 .thaw = pci_pm_thaw, 1362 1364 .poweroff = pci_pm_poweroff,
+1 -1
drivers/pci/pcie/portdrv_pci.c
··· 115 115 116 116 pci_save_state(dev); 117 117 118 - dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NEVER_SKIP | 118 + dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE | 119 119 DPM_FLAG_SMART_SUSPEND); 120 120 121 121 if (pci_bridge_d3_possible(dev)) {
+9 -23
include/linux/pm.h
··· 544 544 * These flags can be set by device drivers at the probe time. They need not be 545 545 * cleared by the drivers as the driver core will take care of that. 546 546 * 547 - * NEVER_SKIP: Do not skip all system suspend/resume callbacks for the device. 548 - * SMART_PREPARE: Check the return value of the driver's ->prepare callback. 549 - * SMART_SUSPEND: No need to resume the device from runtime suspend. 550 - * LEAVE_SUSPENDED: Avoid resuming the device during system resume if possible. 547 + * NO_DIRECT_COMPLETE: Do not apply direct-complete optimization to the device. 548 + * SMART_PREPARE: Take the driver ->prepare callback return value into account. 549 + * SMART_SUSPEND: Avoid resuming the device from runtime suspend. 550 + * MAY_SKIP_RESUME: Allow driver "noirq" and "early" callbacks to be skipped. 551 551 * 552 - * Setting SMART_PREPARE instructs bus types and PM domains which may want 553 - * system suspend/resume callbacks to be skipped for the device to return 0 from 554 - * their ->prepare callbacks if the driver's ->prepare callback returns 0 (in 555 - * other words, the system suspend/resume callbacks can only be skipped for the 556 - * device if its driver doesn't object against that). This flag has no effect 557 - * if NEVER_SKIP is set. 558 - * 559 - * Setting SMART_SUSPEND instructs bus types and PM domains which may want to 560 - * runtime resume the device upfront during system suspend that doing so is not 561 - * necessary from the driver's perspective. It also may cause them to skip 562 - * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by 563 - * the driver if they decide to leave the device in runtime suspend. 564 - * 565 - * Setting LEAVE_SUSPENDED informs the PM core and middle-layer code that the 566 - * driver prefers the device to be left in suspend after system resume. 552 + * See Documentation/driver-api/pm/devices.rst for details. 
567 553 */ 568 - #define DPM_FLAG_NEVER_SKIP BIT(0) 554 + #define DPM_FLAG_NO_DIRECT_COMPLETE BIT(0) 569 555 #define DPM_FLAG_SMART_PREPARE BIT(1) 570 556 #define DPM_FLAG_SMART_SUSPEND BIT(2) 571 - #define DPM_FLAG_LEAVE_SUSPENDED BIT(3) 557 + #define DPM_FLAG_MAY_SKIP_RESUME BIT(3) 572 558 573 559 struct dev_pm_info { 574 560 pm_message_t power_state; ··· 744 758 extern int pm_generic_poweroff(struct device *dev); 745 759 extern void pm_generic_complete(struct device *dev); 746 760 747 - extern bool dev_pm_may_skip_resume(struct device *dev); 748 - extern bool dev_pm_smart_suspend_and_suspended(struct device *dev); 761 + extern bool dev_pm_skip_resume(struct device *dev); 762 + extern bool dev_pm_skip_suspend(struct device *dev); 749 763 750 764 #else /* !CONFIG_PM_SLEEP */ 751 765