Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pm-core'

* pm-core:
ACPI / PM: Take SMART_SUSPEND driver flag into account
PCI / PM: Take SMART_SUSPEND driver flag into account
PCI / PM: Drop unnecessary invocations of pcibios_pm_ops callbacks
PM / core: Add SMART_SUSPEND driver flag
PCI / PM: Use the NEVER_SKIP driver flag
PM / core: Add NEVER_SKIP and SMART_PREPARE driver flags
PM / core: Convert timers to use timer_setup()
PM / core: Fix kerneldoc comments of four functions
PM / core: Drop legacy class suspend/resume operations

+371 -103
+34
Documentation/driver-api/pm/devices.rst
··· 354 354 is because all such devices are initially set to runtime-suspended with 355 355 runtime PM disabled. 356 356 357 + This feature can also be controlled by device drivers by using the 358 + ``DPM_FLAG_NEVER_SKIP`` and ``DPM_FLAG_SMART_PREPARE`` driver power 359 + management flags. [Typically, they are set at the time the driver is 360 + probed against the device in question by passing them to the 361 + :c:func:`dev_pm_set_driver_flags` helper function.] If the first of 362 + these flags is set, the PM core will not apply the direct-complete 363 + procedure described above to the given device and, consequently, to any 364 + of its ancestors. The second flag, when set, informs the middle layer 365 + code (bus types, device types, PM domains, classes) that it should take 366 + the return value of the ``->prepare`` callback provided by the driver 367 + into account and it may only return a positive value from its own 368 + ``->prepare`` callback if the driver's one also has returned a positive 369 + value. 370 + 357 371 2. The ``->suspend`` methods should quiesce the device to stop it from 358 372 performing I/O. They also may save the device registers and put it into 359 373 the appropriate low-power state, depending on the bus type the device is
That also may cause middle-layer code 761 + (bus types, PM domains etc.) to skip the ``->suspend_late`` and 762 + ``->suspend_noirq`` callbacks provided by the driver if the device remains in 763 + runtime suspend at the beginning of the ``suspend_late`` phase of system-wide 764 + suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM 765 + has been disabled for it, under the assumption that its state should not change 766 + after that point until the system-wide transition is over. If that happens, the 767 + driver's system-wide resume callbacks, if present, may still be invoked during 768 + the subsequent system-wide resume transition and the device's runtime power 769 + management status may be set to "active" before enabling runtime PM for it, 770 + so the driver must be prepared to cope with the invocation of its system-wide 771 + resume callbacks back-to-back with its ``->runtime_suspend`` one (without the 772 + intervening ``->runtime_resume`` and so on) and the final state of the device 773 + must reflect the "active" status for runtime PM in that case. 768 774 769 775 During system-wide resume from a sleep state it's easiest to put devices into 770 776 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
+33
Documentation/power/pci.txt
··· 961 961 .suspend(), .freeze(), and .poweroff() members and one resume routine is to 962 962 be pointed to by the .resume(), .thaw(), and .restore() members. 963 963 964 + 3.1.19. Driver Flags for Power Management 965 + 966 + The PM core allows device drivers to set flags that influence the handling of 967 + power management for the devices by the core itself and by middle layer code 968 + including the PCI bus type. The flags should be set once at the driver probe 969 + time with the help of the dev_pm_set_driver_flags() function and they should not 970 + be updated directly afterwards. 971 + 972 + The DPM_FLAG_NEVER_SKIP flag prevents the PM core from using the direct-complete 973 + mechanism allowing device suspend/resume callbacks to be skipped if the device 974 + is in runtime suspend when the system suspend starts. That also affects all of 975 + the ancestors of the device, so this flag should only be used if absolutely 976 + necessary. 977 + 978 + The DPM_FLAG_SMART_PREPARE flag instructs the PCI bus type to only return a 979 + positive value from pci_pm_prepare() if the ->prepare callback provided by the 980 + driver of the device returns a positive value. That allows the driver to opt 981 + out of using the direct-complete mechanism dynamically. 982 + 983 + The DPM_FLAG_SMART_SUSPEND flag tells the PCI bus type that from the driver's 984 + perspective the device can be safely left in runtime suspend during system 985 + suspend. That causes pci_pm_suspend(), pci_pm_freeze() and pci_pm_poweroff() 986 + to skip resuming the device from runtime suspend unless there are PCI-specific 987 + reasons for doing that. Also, it causes pci_pm_suspend_late/noirq(), 988 + pci_pm_freeze_late/noirq() and pci_pm_poweroff_late/noirq() to return early 989 + if the device remains in runtime suspend at the beginning of the "late" phase 990 + of the system-wide transition under way.
Moreover, if the device is in 991 + runtime suspend in pci_pm_resume_noirq() or pci_pm_restore_noirq(), its runtime 992 + power management status will be changed to "active" (as it is going to be put 993 + into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(), 994 + the function will set the power.direct_complete flag for it (to make the PM core 995 + skip the subsequent "thaw" callbacks for it) and return. 996 + 964 997 3.2. Device Runtime Power Management 965 998 ------------------------------------ 966 999 In addition to providing device power management callbacks PCI device drivers
+12 -1
drivers/acpi/acpi_lpss.c
··· 849 849 #ifdef CONFIG_PM_SLEEP 850 850 static int acpi_lpss_suspend_late(struct device *dev) 851 851 { 852 - int ret = pm_generic_suspend_late(dev); 852 + int ret; 853 853 854 + if (dev_pm_smart_suspend_and_suspended(dev)) 855 + return 0; 856 + 857 + ret = pm_generic_suspend_late(dev); 854 858 return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev)); 855 859 } 856 860 ··· 893 889 .complete = acpi_subsys_complete, 894 890 .suspend = acpi_subsys_suspend, 895 891 .suspend_late = acpi_lpss_suspend_late, 892 + .suspend_noirq = acpi_subsys_suspend_noirq, 893 + .resume_noirq = acpi_subsys_resume_noirq, 896 894 .resume_early = acpi_lpss_resume_early, 897 895 .freeze = acpi_subsys_freeze, 896 + .freeze_late = acpi_subsys_freeze_late, 897 + .freeze_noirq = acpi_subsys_freeze_noirq, 898 + .thaw_noirq = acpi_subsys_thaw_noirq, 898 899 .poweroff = acpi_subsys_suspend, 899 900 .poweroff_late = acpi_lpss_suspend_late, 901 + .poweroff_noirq = acpi_subsys_suspend_noirq, 902 + .restore_noirq = acpi_subsys_resume_noirq, 900 903 .restore_early = acpi_lpss_resume_early, 901 904 #endif 902 905 .runtime_suspend = acpi_lpss_runtime_suspend,
+112 -12
drivers/acpi/device_pm.c
··· 939 939 u32 sys_target = acpi_target_system_state(); 940 940 int ret, state; 941 941 942 - if (device_may_wakeup(dev) != !!adev->wakeup.prepare_count) 942 + if (!pm_runtime_suspended(dev) || !adev || 943 + device_may_wakeup(dev) != !!adev->wakeup.prepare_count) 943 944 return true; 944 945 945 946 if (sys_target == ACPI_STATE_S0) ··· 963 962 int acpi_subsys_prepare(struct device *dev) 964 963 { 965 964 struct acpi_device *adev = ACPI_COMPANION(dev); 966 - int ret; 967 965 968 - ret = pm_generic_prepare(dev); 969 - if (ret < 0) 970 - return ret; 966 + if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) { 967 + int ret = dev->driver->pm->prepare(dev); 971 968 972 - if (!adev || !pm_runtime_suspended(dev)) 973 - return 0; 969 + if (ret < 0) 970 + return ret; 971 + 972 + if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE)) 973 + return 0; 974 + } 974 975 975 976 return !acpi_dev_needs_resume(dev, adev); 976 977 } ··· 999 996 * acpi_subsys_suspend - Run the device driver's suspend callback. 1000 997 * @dev: Device to handle. 1001 998 * 1002 - * Follow PCI and resume devices suspended at run time before running their 1003 - * system suspend callbacks. 999 + * Follow PCI and resume devices from runtime suspend before running their 1000 + * system suspend callbacks, unless the driver can cope with runtime-suspended 1001 + * devices during system suspend and there are no ACPI-specific reasons for 1002 + * resuming them. 
1004 1003 */ 1005 1004 int acpi_subsys_suspend(struct device *dev) 1006 1005 { 1007 - pm_runtime_resume(dev); 1006 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) || 1007 + acpi_dev_needs_resume(dev, ACPI_COMPANION(dev))) 1008 + pm_runtime_resume(dev); 1009 + 1008 1010 return pm_generic_suspend(dev); 1009 1011 } 1010 1012 EXPORT_SYMBOL_GPL(acpi_subsys_suspend); ··· 1023 1015 */ 1024 1016 int acpi_subsys_suspend_late(struct device *dev) 1025 1017 { 1026 - int ret = pm_generic_suspend_late(dev); 1018 + int ret; 1019 + 1020 + if (dev_pm_smart_suspend_and_suspended(dev)) 1021 + return 0; 1022 + 1023 + ret = pm_generic_suspend_late(dev); 1027 1024 return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev)); 1028 1025 } 1029 1026 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late); 1027 + 1028 + /** 1029 + * acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback. 1030 + * @dev: Device to suspend. 1031 + */ 1032 + int acpi_subsys_suspend_noirq(struct device *dev) 1033 + { 1034 + if (dev_pm_smart_suspend_and_suspended(dev)) 1035 + return 0; 1036 + 1037 + return pm_generic_suspend_noirq(dev); 1038 + } 1039 + EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq); 1040 + 1041 + /** 1042 + * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback. 1043 + * @dev: Device to handle. 1044 + */ 1045 + int acpi_subsys_resume_noirq(struct device *dev) 1046 + { 1047 + /* 1048 + * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend 1049 + * during system suspend, so update their runtime PM status to "active" 1050 + * as they will be put into D0 going forward. 1051 + */ 1052 + if (dev_pm_smart_suspend_and_suspended(dev)) 1053 + pm_runtime_set_active(dev); 1054 + 1055 + return pm_generic_resume_noirq(dev); 1056 + } 1057 + EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq); 1030 1058 1031 1059 /** 1032 1060 * acpi_subsys_resume_early - Resume device using ACPI. 
··· 1091 1047 * runtime-suspended devices should not be touched during freeze/thaw 1092 1048 * transitions. 1093 1049 */ 1094 - pm_runtime_resume(dev); 1050 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND)) 1051 + pm_runtime_resume(dev); 1052 + 1095 1053 return pm_generic_freeze(dev); 1096 1054 } 1097 1055 EXPORT_SYMBOL_GPL(acpi_subsys_freeze); 1098 1056 1057 + /** 1058 + * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback. 1059 + * @dev: Device to handle. 1060 + */ 1061 + int acpi_subsys_freeze_late(struct device *dev) 1062 + { 1063 + 1064 + if (dev_pm_smart_suspend_and_suspended(dev)) 1065 + return 0; 1066 + 1067 + return pm_generic_freeze_late(dev); 1068 + } 1069 + EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late); 1070 + 1071 + /** 1072 + * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback. 1073 + * @dev: Device to handle. 1074 + */ 1075 + int acpi_subsys_freeze_noirq(struct device *dev) 1076 + { 1077 + 1078 + if (dev_pm_smart_suspend_and_suspended(dev)) 1079 + return 0; 1080 + 1081 + return pm_generic_freeze_noirq(dev); 1082 + } 1083 + EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq); 1084 + 1085 + /** 1086 + * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback. 1087 + * @dev: Device to handle. 1088 + */ 1089 + int acpi_subsys_thaw_noirq(struct device *dev) 1090 + { 1091 + /* 1092 + * If the device is in runtime suspend, the "thaw" code may not work 1093 + * correctly with it, so skip the driver callback and make the PM core 1094 + * skip all of the subsequent "thaw" callbacks for the device. 
1095 + */ 1096 + if (dev_pm_smart_suspend_and_suspended(dev)) { 1097 + dev->power.direct_complete = true; 1098 + return 0; 1099 + } 1100 + 1101 + return pm_generic_thaw_noirq(dev); 1102 + } 1103 + EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq); 1099 1104 #endif /* CONFIG_PM_SLEEP */ 1100 1105 1101 1106 static struct dev_pm_domain acpi_general_pm_domain = { ··· 1156 1063 .complete = acpi_subsys_complete, 1157 1064 .suspend = acpi_subsys_suspend, 1158 1065 .suspend_late = acpi_subsys_suspend_late, 1066 + .suspend_noirq = acpi_subsys_suspend_noirq, 1067 + .resume_noirq = acpi_subsys_resume_noirq, 1159 1068 .resume_early = acpi_subsys_resume_early, 1160 1069 .freeze = acpi_subsys_freeze, 1070 + .freeze_late = acpi_subsys_freeze_late, 1071 + .freeze_noirq = acpi_subsys_freeze_noirq, 1072 + .thaw_noirq = acpi_subsys_thaw_noirq, 1161 1073 .poweroff = acpi_subsys_suspend, 1162 1074 .poweroff_late = acpi_subsys_suspend_late, 1075 + .poweroff_noirq = acpi_subsys_suspend_noirq, 1076 + .restore_noirq = acpi_subsys_resume_noirq, 1163 1077 .restore_early = acpi_subsys_resume_early, 1164 1078 #endif 1165 1079 },
+2
drivers/base/dd.c
··· 464 464 if (dev->pm_domain && dev->pm_domain->dismiss) 465 465 dev->pm_domain->dismiss(dev); 466 466 pm_runtime_reinit(dev); 467 + dev_pm_set_driver_flags(dev, 0); 467 468 468 469 switch (ret) { 469 470 case -EPROBE_DEFER: ··· 870 869 if (dev->pm_domain && dev->pm_domain->dismiss) 871 870 dev->pm_domain->dismiss(dev); 872 871 pm_runtime_reinit(dev); 872 + dev_pm_set_driver_flags(dev, 0); 873 873 874 874 klist_remove(&dev->p->knode_driver); 875 875 device_pm_check_callbacks(dev);
+25 -28
drivers/base/power/main.c
··· 528 528 /*------------------------- Resume routines -------------------------*/ 529 529 530 530 /** 531 - * device_resume_noirq - Execute an "early resume" callback for given device. 531 + * device_resume_noirq - Execute a "noirq resume" callback for given device. 532 532 * @dev: Device to handle. 533 533 * @state: PM transition of the system being carried out. 534 534 * @async: If true, the device is being resumed asynchronously. ··· 848 848 goto Driver; 849 849 } 850 850 851 - if (dev->class) { 852 - if (dev->class->pm) { 853 - info = "class "; 854 - callback = pm_op(dev->class->pm, state); 855 - goto Driver; 856 - } else if (dev->class->resume) { 857 - info = "legacy class "; 858 - callback = dev->class->resume; 859 - goto End; 860 - } 851 + if (dev->class && dev->class->pm) { 852 + info = "class "; 853 + callback = pm_op(dev->class->pm, state); 854 + goto Driver; 861 855 } 862 856 863 857 if (dev->bus) { ··· 1077 1083 } 1078 1084 1079 1085 /** 1080 - * device_suspend_noirq - Execute a "late suspend" callback for given device. 1086 + * __device_suspend_noirq - Execute a "noirq suspend" callback for given device. 1081 1087 * @dev: Device to handle. 1082 1088 * @state: PM transition of the system being carried out. 1083 1089 * @async: If true, the device is being suspended asynchronously. ··· 1237 1243 } 1238 1244 1239 1245 /** 1240 - * device_suspend_late - Execute a "late suspend" callback for given device. 1246 + * __device_suspend_late - Execute a "late suspend" callback for given device. 1241 1247 * @dev: Device to handle. 1242 1248 * @state: PM transition of the system being carried out. 1243 1249 * @async: If true, the device is being suspended asynchronously. ··· 1439 1445 } 1440 1446 1441 1447 /** 1442 - * device_suspend - Execute "suspend" callbacks for given device. 1448 + * __device_suspend - Execute "suspend" callbacks for given device. 1443 1449 * @dev: Device to handle. 1444 1450 * @state: PM transition of the system being carried out. 
1445 1451 * @async: If true, the device is being suspended asynchronously. ··· 1502 1508 goto Run; 1503 1509 } 1504 1510 1505 - if (dev->class) { 1506 - if (dev->class->pm) { 1507 - info = "class "; 1508 - callback = pm_op(dev->class->pm, state); 1509 - goto Run; 1510 - } else if (dev->class->suspend) { 1511 - pm_dev_dbg(dev, state, "legacy class "); 1512 - error = legacy_suspend(dev, state, dev->class->suspend, 1513 - "legacy class "); 1514 - goto End; 1515 - } 1511 + if (dev->class && dev->class->pm) { 1512 + info = "class "; 1513 + callback = pm_op(dev->class->pm, state); 1514 + goto Run; 1516 1515 } 1517 1516 1518 1517 if (dev->bus) { ··· 1652 1665 if (dev->power.syscore) 1653 1666 return 0; 1654 1667 1668 + WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) && 1669 + !pm_runtime_enabled(dev)); 1670 + 1655 1671 /* 1656 1672 * If a device's parent goes into runtime suspend at the wrong time, 1657 1673 * it won't be possible to resume the device. To prevent this we ··· 1703 1713 * applies to suspend transitions, however. 
1704 1714 */ 1705 1715 spin_lock_irq(&dev->power.lock); 1706 - dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND; 1716 + dev->power.direct_complete = state.event == PM_EVENT_SUSPEND && 1717 + pm_runtime_suspended(dev) && ret > 0 && 1718 + !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP); 1707 1719 spin_unlock_irq(&dev->power.lock); 1708 1720 return 0; 1709 1721 } ··· 1854 1862 dev->power.no_pm_callbacks = 1855 1863 (!dev->bus || (pm_ops_is_empty(dev->bus->pm) && 1856 1864 !dev->bus->suspend && !dev->bus->resume)) && 1857 - (!dev->class || (pm_ops_is_empty(dev->class->pm) && 1858 - !dev->class->suspend && !dev->class->resume)) && 1865 + (!dev->class || pm_ops_is_empty(dev->class->pm)) && 1859 1866 (!dev->type || pm_ops_is_empty(dev->type->pm)) && 1860 1867 (!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) && 1861 1868 (!dev->driver || (pm_ops_is_empty(dev->driver->pm) && 1862 1869 !dev->driver->suspend && !dev->driver->resume)); 1863 1870 spin_unlock_irq(&dev->power.lock); 1871 + } 1872 + 1873 + bool dev_pm_smart_suspend_and_suspended(struct device *dev) 1874 + { 1875 + return dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) && 1876 + pm_runtime_status_suspended(dev); 1864 1877 }
+3 -4
drivers/base/power/runtime.c
··· 894 894 * 895 895 * Check if the time is right and queue a suspend request. 896 896 */ 897 - static void pm_suspend_timer_fn(unsigned long data) 897 + static void pm_suspend_timer_fn(struct timer_list *t) 898 898 { 899 - struct device *dev = (struct device *)data; 899 + struct device *dev = from_timer(dev, t, power.suspend_timer); 900 900 unsigned long flags; 901 901 unsigned long expires; 902 902 ··· 1499 1499 INIT_WORK(&dev->power.work, pm_runtime_work); 1500 1500 1501 1501 dev->power.timer_expires = 0; 1502 - setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn, 1503 - (unsigned long)dev); 1502 + timer_setup(&dev->power.suspend_timer, pm_suspend_timer_fn, 0); 1504 1503 1505 1504 init_waitqueue_head(&dev->power.wait_queue); 1506 1505 }
+5 -6
drivers/base/power/wakeup.c
··· 54 54 55 55 static DEFINE_SPINLOCK(events_lock); 56 56 57 - static void pm_wakeup_timer_fn(unsigned long data); 57 + static void pm_wakeup_timer_fn(struct timer_list *t); 58 58 59 59 static LIST_HEAD(wakeup_sources); 60 60 ··· 176 176 return; 177 177 178 178 spin_lock_init(&ws->lock); 179 - setup_timer(&ws->timer, pm_wakeup_timer_fn, (unsigned long)ws); 179 + timer_setup(&ws->timer, pm_wakeup_timer_fn, 0); 180 180 ws->active = false; 181 181 ws->last_time = ktime_get(); 182 182 ··· 481 481 * Use timer struct to check if the given source is initialized 482 482 * by wakeup_source_add. 483 483 */ 484 - return ws->timer.function != pm_wakeup_timer_fn || 485 - ws->timer.data != (unsigned long)ws; 484 + return ws->timer.function != (TIMER_FUNC_TYPE)pm_wakeup_timer_fn; 486 485 } 487 486 488 487 /* ··· 723 724 * in @data if it is currently active and its timer has not been canceled and 724 725 * the expiration time of the timer is not in future. 725 726 */ 726 - static void pm_wakeup_timer_fn(unsigned long data) 727 + static void pm_wakeup_timer_fn(struct timer_list *t) 727 728 { 728 - struct wakeup_source *ws = (struct wakeup_source *)data; 729 + struct wakeup_source *ws = from_timer(ws, t, timer); 729 730 unsigned long flags; 730 731 731 732 spin_lock_irqsave(&ws->lock, flags);
+1 -1
drivers/gpu/drm/i915/i915_drv.c
··· 1304 1304 * because the HDA driver may require us to enable the audio power 1305 1305 * domain during system suspend. 1306 1306 */ 1307 - pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME; 1307 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 1308 1308 1309 1309 ret = i915_driver_init_early(dev_priv, ent); 1310 1310 if (ret < 0)
+1 -1
drivers/misc/mei/pci-me.c
··· 225 225 * MEI requires to resume from runtime suspend mode 226 226 * in order to perform link reset flow upon system suspend. 227 227 */ 228 - pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME; 228 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 229 229 230 230 /* 231 231 * ME maps runtime suspend/resume to D0i states,
+1 -1
drivers/misc/mei/pci-txe.c
··· 141 141 * MEI requires to resume from runtime suspend mode 142 142 * in order to perform link reset flow upon system suspend. 143 143 */ 144 - pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME; 144 + dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP); 145 145 146 146 /* 147 147 * TXE maps runtime suspend/resume to own power gating states,
+90 -36
drivers/pci/pci-driver.c
··· 682 682 683 683 if (drv && drv->pm && drv->pm->prepare) { 684 684 int error = drv->pm->prepare(dev); 685 - if (error) 685 + if (error < 0) 686 686 return error; 687 + 688 + if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE)) 689 + return 0; 687 690 } 688 691 return pci_dev_keep_suspended(to_pci_dev(dev)); 689 692 } ··· 727 724 728 725 if (!pm) { 729 726 pci_pm_default_suspend(pci_dev); 730 - goto Fixup; 727 + return 0; 731 728 } 732 729 733 730 /* 734 - * PCI devices suspended at run time need to be resumed at this point, 735 - * because in general it is necessary to reconfigure them for system 736 - * suspend. Namely, if the device is supposed to wake up the system 737 - * from the sleep state, we may need to reconfigure it for this purpose. 738 - * In turn, if the device is not supposed to wake up the system from the 739 - * sleep state, we'll have to prevent it from signaling wake-up. 731 + * PCI devices suspended at run time may need to be resumed at this 732 + * point, because in general it may be necessary to reconfigure them for 733 + * system suspend. Namely, if the device is expected to wake up the 734 + * system from the sleep state, it may have to be reconfigured for this 735 + * purpose, or if the device is not expected to wake up the system from 736 + * the sleep state, it should be prevented from signaling wakeup events 737 + * going forward. 738 + * 739 + * Also if the driver of the device does not indicate that its system 740 + * suspend callbacks can cope with runtime-suspended devices, it is 741 + * better to resume the device from runtime suspend here. 
740 742 */ 741 - pm_runtime_resume(dev); 743 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) || 744 + !pci_dev_keep_suspended(pci_dev)) 745 + pm_runtime_resume(dev); 742 746 743 747 pci_dev->state_saved = false; 744 748 if (pm->suspend) { ··· 765 755 } 766 756 } 767 757 768 - Fixup: 769 - pci_fixup_device(pci_fixup_suspend, pci_dev); 770 - 771 758 return 0; 759 + } 760 + 761 + static int pci_pm_suspend_late(struct device *dev) 762 + { 763 + if (dev_pm_smart_suspend_and_suspended(dev)) 764 + return 0; 765 + 766 + pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev)); 767 + 768 + return pm_generic_suspend_late(dev); 772 769 } 773 770 774 771 static int pci_pm_suspend_noirq(struct device *dev) 775 772 { 776 773 struct pci_dev *pci_dev = to_pci_dev(dev); 777 774 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 775 + 776 + if (dev_pm_smart_suspend_and_suspended(dev)) 777 + return 0; 778 778 779 779 if (pci_has_legacy_pm_support(pci_dev)) 780 780 return pci_legacy_suspend_late(dev, PMSG_SUSPEND); ··· 847 827 struct device_driver *drv = dev->driver; 848 828 int error = 0; 849 829 830 + /* 831 + * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend 832 + * during system suspend, so update their runtime PM status to "active" 833 + * as they are going to be put into D0 shortly. 834 + */ 835 + if (dev_pm_smart_suspend_and_suspended(dev)) 836 + pm_runtime_set_active(dev); 837 + 850 838 pci_pm_default_resume_early(pci_dev); 851 839 852 840 if (pci_has_legacy_pm_support(pci_dev)) ··· 897 869 #else /* !CONFIG_SUSPEND */ 898 870 899 871 #define pci_pm_suspend NULL 872 + #define pci_pm_suspend_late NULL 900 873 #define pci_pm_suspend_noirq NULL 901 874 #define pci_pm_resume NULL 902 875 #define pci_pm_resume_noirq NULL ··· 932 903 * devices should not be touched during freeze/thaw transitions, 933 904 * however. 
934 905 */ 935 - pm_runtime_resume(dev); 906 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND)) 907 + pm_runtime_resume(dev); 936 908 937 909 pci_dev->state_saved = false; 938 910 if (pm->freeze) { ··· 945 915 return error; 946 916 } 947 917 948 - if (pcibios_pm_ops.freeze) 949 - return pcibios_pm_ops.freeze(dev); 950 - 951 918 return 0; 919 + } 920 + 921 + static int pci_pm_freeze_late(struct device *dev) 922 + { 923 + if (dev_pm_smart_suspend_and_suspended(dev)) 924 + return 0; 925 + 926 + return pm_generic_freeze_late(dev); 952 927 } 953 928 954 929 static int pci_pm_freeze_noirq(struct device *dev) 955 930 { 956 931 struct pci_dev *pci_dev = to_pci_dev(dev); 957 932 struct device_driver *drv = dev->driver; 933 + 934 + if (dev_pm_smart_suspend_and_suspended(dev)) 935 + return 0; 958 936 959 937 if (pci_has_legacy_pm_support(pci_dev)) 960 938 return pci_legacy_suspend_late(dev, PMSG_FREEZE); ··· 993 955 struct device_driver *drv = dev->driver; 994 956 int error = 0; 995 957 958 + /* 959 + * If the device is in runtime suspend, the code below may not work 960 + * correctly with it, so skip that code and make the PM core skip all of 961 + * the subsequent "thaw" callbacks for the device. 962 + */ 963 + if (dev_pm_smart_suspend_and_suspended(dev)) { 964 + dev->power.direct_complete = true; 965 + return 0; 966 + } 967 + 996 968 if (pcibios_pm_ops.thaw_noirq) { 997 969 error = pcibios_pm_ops.thaw_noirq(dev); 998 970 if (error) ··· 1026 978 struct pci_dev *pci_dev = to_pci_dev(dev); 1027 979 const struct dev_pm_ops *pm = dev->driver ?
dev->driver->pm : NULL; 1028 980 int error = 0; 1029 - 1030 - if (pcibios_pm_ops.thaw) { 1031 - error = pcibios_pm_ops.thaw(dev); 1032 - if (error) 1033 - return error; 1034 - } 1035 981 1036 982 if (pci_has_legacy_pm_support(pci_dev)) 1037 983 return pci_legacy_resume(dev); ··· 1052 1010 1053 1011 if (!pm) { 1054 1012 pci_pm_default_suspend(pci_dev); 1055 - goto Fixup; 1013 + return 0; 1056 1014 } 1057 1015 1058 1016 /* The reason to do that is the same as in pci_pm_suspend(). */ 1059 - pm_runtime_resume(dev); 1017 + if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) || 1018 + !pci_dev_keep_suspended(pci_dev)) 1019 + pm_runtime_resume(dev); 1060 1020 1061 1021 pci_dev->state_saved = false; 1062 1022 if (pm->poweroff) { ··· 1070 1026 return error; 1071 1027 } 1072 1028 1073 - Fixup: 1074 - pci_fixup_device(pci_fixup_suspend, pci_dev); 1075 - 1076 - if (pcibios_pm_ops.poweroff) 1077 - return pcibios_pm_ops.poweroff(dev); 1078 - 1079 1029 return 0; 1030 + } 1031 + 1032 + static int pci_pm_poweroff_late(struct device *dev) 1033 + { 1034 + if (dev_pm_smart_suspend_and_suspended(dev)) 1035 + return 0; 1036 + 1037 + pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev)); 1038 + 1039 + return pm_generic_poweroff_late(dev); 1080 1040 } 1081 1041 1082 1042 static int pci_pm_poweroff_noirq(struct device *dev) 1083 1043 { 1084 1044 struct pci_dev *pci_dev = to_pci_dev(dev); 1085 1045 struct device_driver *drv = dev->driver; 1046 + 1047 + if (dev_pm_smart_suspend_and_suspended(dev)) 1048 + return 0; 1086 1049 1087 1050 if (pci_has_legacy_pm_support(to_pci_dev(dev))) 1088 1051 return pci_legacy_suspend_late(dev, PMSG_HIBERNATE); ··· 1132 1081 struct device_driver *drv = dev->driver; 1133 1082 int error = 0; 1134 1083 1084 + /* This is analogous to the pci_pm_resume_noirq() case. 
*/ 1085 + if (dev_pm_smart_suspend_and_suspended(dev)) 1086 + pm_runtime_set_active(dev); 1087 + 1135 1088 if (pcibios_pm_ops.restore_noirq) { 1136 1089 error = pcibios_pm_ops.restore_noirq(dev); 1137 1090 if (error) ··· 1158 1103 struct pci_dev *pci_dev = to_pci_dev(dev); 1159 1104 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1160 1105 int error = 0; 1161 - 1162 - if (pcibios_pm_ops.restore) { 1163 - error = pcibios_pm_ops.restore(dev); 1164 - if (error) 1165 - return error; 1166 - } 1167 1106 1168 1107 /* 1169 1108 * This is necessary for the hibernation error path in which restore is ··· 1184 1135 #else /* !CONFIG_HIBERNATE_CALLBACKS */ 1185 1136 1186 1137 #define pci_pm_freeze NULL 1138 + #define pci_pm_freeze_late NULL 1187 1139 #define pci_pm_freeze_noirq NULL 1188 1140 #define pci_pm_thaw NULL 1189 1141 #define pci_pm_thaw_noirq NULL 1190 1142 #define pci_pm_poweroff NULL 1143 + #define pci_pm_poweroff_late NULL 1191 1144 #define pci_pm_poweroff_noirq NULL 1192 1145 #define pci_pm_restore NULL 1193 1146 #define pci_pm_restore_noirq NULL ··· 1305 1254 .prepare = pci_pm_prepare, 1306 1255 .complete = pci_pm_complete, 1307 1256 .suspend = pci_pm_suspend, 1257 + .suspend_late = pci_pm_suspend_late, 1308 1258 .resume = pci_pm_resume, 1309 1259 .freeze = pci_pm_freeze, 1260 + .freeze_late = pci_pm_freeze_late, 1310 1261 .thaw = pci_pm_thaw, 1311 1262 .poweroff = pci_pm_poweroff, 1263 + .poweroff_late = pci_pm_poweroff_late, 1312 1264 .restore = pci_pm_restore, 1313 1265 .suspend_noirq = pci_pm_suspend_noirq, 1314 1266 .resume_noirq = pci_pm_resume_noirq,
+1 -2
drivers/pci/pci.c
··· 2166 2166 2167 2167 if (!pm_runtime_suspended(dev) 2168 2168 || pci_target_state(pci_dev, wakeup) != pci_dev->current_state 2169 - || platform_pci_need_resume(pci_dev) 2170 - || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME)) 2169 + || platform_pci_need_resume(pci_dev)) 2171 2170 return false; 2172 2171 2173 2172 /*
+10
include/linux/acpi.h
··· 885 885 int acpi_subsys_prepare(struct device *dev); 886 886 void acpi_subsys_complete(struct device *dev); 887 887 int acpi_subsys_suspend_late(struct device *dev); 888 + int acpi_subsys_suspend_noirq(struct device *dev); 889 + int acpi_subsys_resume_noirq(struct device *dev); 888 890 int acpi_subsys_resume_early(struct device *dev); 889 891 int acpi_subsys_suspend(struct device *dev); 890 892 int acpi_subsys_freeze(struct device *dev); 893 + int acpi_subsys_freeze_late(struct device *dev); 894 + int acpi_subsys_freeze_noirq(struct device *dev); 895 + int acpi_subsys_thaw_noirq(struct device *dev); 891 896 #else 892 897 static inline int acpi_dev_resume_early(struct device *dev) { return 0; } 893 898 static inline int acpi_subsys_prepare(struct device *dev) { return 0; } 894 899 static inline void acpi_subsys_complete(struct device *dev) {} 895 900 static inline int acpi_subsys_suspend_late(struct device *dev) { return 0; } 901 + static inline int acpi_subsys_suspend_noirq(struct device *dev) { return 0; } 902 + static inline int acpi_subsys_resume_noirq(struct device *dev) { return 0; } 896 903 static inline int acpi_subsys_resume_early(struct device *dev) { return 0; } 897 904 static inline int acpi_subsys_suspend(struct device *dev) { return 0; } 898 905 static inline int acpi_subsys_freeze(struct device *dev) { return 0; } 906 + static inline int acpi_subsys_freeze_late(struct device *dev) { return 0; } 907 + static inline int acpi_subsys_freeze_noirq(struct device *dev) { return 0; } 908 + static inline int acpi_subsys_thaw_noirq(struct device *dev) { return 0; } 899 909 #endif 900 910 901 911 #ifdef CONFIG_ACPI
+10 -5
include/linux/device.h
··· 370 370 * @devnode: Callback to provide the devtmpfs. 371 371 * @class_release: Called to release this class. 372 372 * @dev_release: Called to release the device. 373 - * @suspend: Used to put the device to sleep mode, usually to a low power 374 - * state. 375 - * @resume: Used to bring the device from the sleep mode. 376 373 * @shutdown_pre: Called at shut-down time before driver shutdown. 377 374 * @ns_type: Callbacks so sysfs can determine namespaces. 378 375 * @namespace: Namespace of the device belongs to this class. ··· 397 400 void (*class_release)(struct class *class); 398 401 void (*dev_release)(struct device *dev); 399 402 400 - int (*suspend)(struct device *dev, pm_message_t state); 401 - int (*resume)(struct device *dev); 402 403 int (*shutdown_pre)(struct device *dev); 403 404 404 405 const struct kobj_ns_type_operations *ns_type; ··· 1068 1073 #ifdef CONFIG_PM_SLEEP 1069 1074 dev->power.syscore = val; 1070 1075 #endif 1076 + } 1077 + 1078 + static inline void dev_pm_set_driver_flags(struct device *dev, u32 flags) 1079 + { 1080 + dev->power.driver_flags = flags; 1081 + } 1082 + 1083 + static inline bool dev_pm_test_driver_flags(struct device *dev, u32 flags) 1084 + { 1085 + return !!(dev->power.driver_flags & flags); 1071 1086 } 1072 1087 1073 1088 static inline void device_lock(struct device *dev)
+1 -6
include/linux/pci.h
··· 206 206 PCI_DEV_FLAGS_BRIDGE_XLATE_ROOT = (__force pci_dev_flags_t) (1 << 9), 207 207 /* Do not use FLR even if device advertises PCI_AF_CAP */ 208 208 PCI_DEV_FLAGS_NO_FLR_RESET = (__force pci_dev_flags_t) (1 << 10), 209 - /* 210 - * Resume before calling the driver's system suspend hooks, disabling 211 - * the direct_complete optimization. 212 - */ 213 - PCI_DEV_FLAGS_NEEDS_RESUME = (__force pci_dev_flags_t) (1 << 11), 214 209 /* Don't use Relaxed Ordering for TLPs directed at this device */ 215 - PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 12), 210 + PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11), 216 211 }; 217 212 218 213 enum pci_irq_reroute_variant {
+30
include/linux/pm.h
··· 550 550 #endif 551 551 }; 552 552 553 + /* 554 + * Driver flags to control system suspend/resume behavior. 555 + * 556 + * These flags can be set by device drivers at the probe time. They need not be 557 + * cleared by the drivers as the driver core will take care of that. 558 + * 559 + * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device. 560 + * SMART_PREPARE: Check the return value of the driver's ->prepare callback. 561 + * SMART_SUSPEND: No need to resume the device from runtime suspend. 562 + * 563 + * Setting SMART_PREPARE instructs bus types and PM domains which may want 564 + * system suspend/resume callbacks to be skipped for the device to return 0 from 565 + * their ->prepare callbacks if the driver's ->prepare callback returns 0 (in 566 + * other words, the system suspend/resume callbacks can only be skipped for the 567 + * device if its driver doesn't object against that). This flag has no effect 568 + * if NEVER_SKIP is set. 569 + * 570 + * Setting SMART_SUSPEND instructs bus types and PM domains which may want to 571 + * runtime resume the device upfront during system suspend that doing so is not 572 + * necessary from the driver's perspective. It also may cause them to skip 573 + * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by 574 + * the driver if they decide to leave the device in runtime suspend. 
575 + */ 576 + #define DPM_FLAG_NEVER_SKIP BIT(0) 577 + #define DPM_FLAG_SMART_PREPARE BIT(1) 578 + #define DPM_FLAG_SMART_SUSPEND BIT(2) 579 + 553 580 struct dev_pm_info { 554 581 pm_message_t power_state; 555 582 unsigned int can_wakeup:1; ··· 588 561 bool is_late_suspended:1; 589 562 bool early_init:1; /* Owned by the PM core */ 590 563 bool direct_complete:1; /* Owned by the PM core */ 564 + u32 driver_flags; 591 565 spinlock_t lock; 592 566 #ifdef CONFIG_PM_SLEEP 593 567 struct list_head entry; ··· 764 736 extern int pm_generic_poweroff_late(struct device *dev); 765 737 extern int pm_generic_poweroff(struct device *dev); 766 738 extern void pm_generic_complete(struct device *dev); 739 + 740 + extern bool dev_pm_smart_suspend_and_suspended(struct device *dev); 767 741 768 742 #else /* !CONFIG_PM_SLEEP */ 769 743