Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branches 'pm-cpuidle', 'pm-core' and 'pm-sleep'

Merge cpuidle updates, PM core updates and one hibernation-related
update for 5.17-rc1:

- Make cpuidle use default_groups in kobj_type (Greg Kroah-Hartman).

- Fix two comments in cpuidle code (Jason Wang, Yang Li).

- Simplify locking in pm_runtime_put_suppliers() (Rafael Wysocki).

- Add safety net to supplier device release in the runtime PM core
code (Rafael Wysocki).

- Capture device status before disabling runtime PM for it (Rafael
Wysocki).

- Add new macros for declaring PM operations to allow drivers to
avoid guarding them with CONFIG_PM #ifdefs or __maybe_unused and
update some drivers to use these macros (Paul Cercueil).

- Allow ACPI hardware signature to be honoured during restore from
hibernation (David Woodhouse).

* pm-cpuidle:
cpuidle: use default_groups in kobj_type
cpuidle: Fix cpuidle_remove_state_sysfs() kerneldoc comment
cpuidle: menu: Fix typo in a comment

* pm-core:
PM: runtime: Simplify locking in pm_runtime_put_suppliers()
mmc: mxc: Use the new PM macros
mmc: jz4740: Use the new PM macros
PM: runtime: Add safety net to supplier device release
PM: runtime: Capture device status before disabling runtime PM
PM: core: Add new *_PM_OPS macros, deprecate old ones
PM: core: Redefine pm_ptr() macro
r8169: Avoid misuse of pm_ptr() macro

* pm-sleep:
PM: hibernate: Allow ACPI hardware signature to be honoured

+198 -95
+12 -3
Documentation/admin-guide/kernel-parameters.txt
···
 	For broken nForce2 BIOS resulting in XT-PIC timer.

 	acpi_sleep=	[HW,ACPI] Sleep options
-			Format: { s3_bios, s3_mode, s3_beep, s4_nohwsig,
-			old_ordering, nonvs, sci_force_enable, nobl }
+			Format: { s3_bios, s3_mode, s3_beep, s4_hwsig,
+			s4_nohwsig, old_ordering, nonvs,
+			sci_force_enable, nobl }
 			See Documentation/power/video.rst for information on
 			s3_bios and s3_mode.
 			s3_beep is for debugging; it makes the PC's speaker beep
 			as soon as the kernel's real-mode entry point is called.
+			s4_hwsig causes the kernel to check the ACPI hardware
+			signature during resume from hibernation, and gracefully
+			refuse to resume if it has changed. This complies with
+			the ACPI specification but not with reality, since
+			Windows does not do this and many laptops do change it
+			on docking. So the default behaviour is to allow resume
+			and simply warn when the signature changes, unless the
+			s4_hwsig option is enabled.
 			s4_nohwsig prevents ACPI hardware signature from being
-			used during resume from hibernation.
+			used (or even warned about) during resume.
 			old_ordering causes the ACPI 1.0 ordering of the _PTS
 			control method, with respect to putting devices into
 			low power states, to be enforced (the ACPI 2.0 ordering
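To opt in to the strict, specification-compliant behaviour described above, the new option is passed on the kernel command line. A hypothetical GRUB configuration fragment (the file path, existing options, and regeneration command are illustrative, not taken from this commit):

```shell
# /etc/default/grub -- refuse to resume from hibernation if the ACPI
# hardware signature has changed (strict, spec-compliant behaviour)
GRUB_CMDLINE_LINUX="acpi_sleep=s4_hwsig"
# Afterwards regenerate the boot config, e.g. with: update-grub
```

Without this option the kernel keeps the default behaviour: resume anyway and only warn about a signature mismatch; `acpi_sleep=s4_nohwsig` suppresses even the warning.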
+10 -4
Documentation/power/runtime_pm.rst
···
 RPM_SUSPENDED, which means that each device is initially regarded by the
 PM core as 'suspended', regardless of its real hardware status

+`enum rpm_status last_status;`
+  - the last runtime PM status of the device captured before disabling runtime
+    PM for it (invalid initially and when disable_depth is 0)
+
 `unsigned int runtime_auto;`
   - if set, indicates that the user space has allowed the device driver to
     power manage the device at run time via the /sys/devices/.../power/control
···
 `int pm_runtime_resume(struct device *dev);`
   - execute the subsystem-level resume callback for the device; returns 0 on
-    success, 1 if the device's runtime PM status was already 'active' or
-    error code on failure, where -EAGAIN means it may be safe to attempt to
-    resume the device again in future, but 'power.runtime_error' should be
-    checked additionally, and -EACCES means that 'power.disable_depth' is
+    success, 1 if the device's runtime PM status is already 'active' (also if
+    'power.disable_depth' is nonzero, but the status was 'active' when it was
+    changing from 0 to 1) or error code on failure, where -EAGAIN means it may
+    be safe to attempt to resume the device again in future, but
+    'power.runtime_error' should be checked additionally, and -EACCES means
+    that the callback could not be run, because 'power.disable_depth' was
     different from 0

 `int pm_runtime_resume_and_get(struct device *dev);`
+3 -1
arch/x86/kernel/acpi/sleep.c
···
 	if (strncmp(str, "s3_beep", 7) == 0)
 		acpi_realmode_flags |= 4;
 #ifdef CONFIG_HIBERNATION
+	if (strncmp(str, "s4_hwsig", 8) == 0)
+		acpi_check_s4_hw_signature(1);
 	if (strncmp(str, "s4_nohwsig", 10) == 0)
-		acpi_no_s4_hw_signature();
+		acpi_check_s4_hw_signature(0);
 #endif
 	if (strncmp(str, "nonvs", 5) == 0)
 		acpi_nvs_nosave();
+21 -5
drivers/acpi/sleep.c
···
 #ifdef CONFIG_HIBERNATION
 static unsigned long s4_hardware_signature;
 static struct acpi_table_facs *facs;
-static bool nosigcheck;
+static int sigcheck = -1; /* Default behaviour is just to warn */

-void __init acpi_no_s4_hw_signature(void)
+void __init acpi_check_s4_hw_signature(int check)
 {
-	nosigcheck = true;
+	sigcheck = check;
 }

 static int acpi_hibernation_begin(pm_message_t stage)
···
 	hibernation_set_ops(old_suspend_ordering ?
 		&acpi_hibernation_ops_old : &acpi_hibernation_ops);
 	sleep_states[ACPI_STATE_S4] = 1;
-	if (nosigcheck)
+	if (!sigcheck)
 		return;

 	acpi_get_table(ACPI_SIG_FACS, 1, (struct acpi_table_header **)&facs);
-	if (facs)
+	if (facs) {
+		/*
+		 * s4_hardware_signature is the local variable which is just
+		 * used to warn about mismatch after we're attempting to
+		 * resume (in violation of the ACPI specification.)
+		 */
 		s4_hardware_signature = facs->hardware_signature;
+
+		if (sigcheck > 0) {
+			/*
+			 * If we're actually obeying the ACPI specification
+			 * then the signature is written out as part of the
+			 * swsusp header, in order to allow the boot kernel
+			 * to gracefully decline to resume.
+			 */
+			swsusp_hardware_signature = facs->hardware_signature;
+		}
+	}
 }
 #else /* !CONFIG_HIBERNATION */
 static inline void acpi_sleep_hibernate_setup(void) {}
+1 -2
drivers/base/core.c
···
 	/* Ensure that all references to the link object have been dropped. */
 	device_link_synchronize_removal();

-	while (refcount_dec_not_one(&link->rpm_active))
-		pm_runtime_put(link->supplier);
+	pm_runtime_release_supplier(link, true);

 	put_device(link->consumer);
 	put_device(link->supplier);
+63 -35
drivers/base/power/runtime.c
···
 	return 0;
 }

+/**
+ * pm_runtime_release_supplier - Drop references to device link's supplier.
+ * @link: Target device link.
+ * @check_idle: Whether or not to check if the supplier device is idle.
+ *
+ * Drop all runtime PM references associated with @link to its supplier device
+ * and if @check_idle is set, check if that device is idle (and so it can be
+ * suspended).
+ */
+void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
+{
+	struct device *supplier = link->supplier;
+
+	/*
+	 * The additional power.usage_count check is a safety net in case
+	 * the rpm_active refcount becomes saturated, in which case
+	 * refcount_dec_not_one() would return true forever, but it is not
+	 * strictly necessary.
+	 */
+	while (refcount_dec_not_one(&link->rpm_active) &&
+	       atomic_read(&supplier->power.usage_count) > 0)
+		pm_runtime_put_noidle(supplier);
+
+	if (check_idle)
+		pm_request_idle(supplier);
+}
+
 static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
 {
 	struct device_link *link;

 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
-				device_links_read_lock_held()) {
-
-		while (refcount_dec_not_one(&link->rpm_active))
-			pm_runtime_put_noidle(link->supplier);
-
-		if (try_to_suspend)
-			pm_request_idle(link->supplier);
-	}
+				device_links_read_lock_held())
+		pm_runtime_release_supplier(link, try_to_suspend);
 }

 static void rpm_put_suppliers(struct device *dev)
···
 	trace_rpm_resume_rcuidle(dev, rpmflags);

 repeat:
-	if (dev->power.runtime_error)
+	if (dev->power.runtime_error) {
 		retval = -EINVAL;
-	else if (dev->power.disable_depth == 1 && dev->power.is_suspended
-	    && dev->power.runtime_status == RPM_ACTIVE)
-		retval = 1;
-	else if (dev->power.disable_depth > 0)
-		retval = -EACCES;
+	} else if (dev->power.disable_depth > 0) {
+		if (dev->power.runtime_status == RPM_ACTIVE &&
+		    dev->power.last_status == RPM_ACTIVE)
+			retval = 1;
+		else
+			retval = -EACCES;
+	}
 	if (retval)
 		goto out;
···
 	/* Update time accounting before disabling PM-runtime. */
 	update_pm_runtime_accounting(dev);

-	if (!dev->power.disable_depth++)
+	if (!dev->power.disable_depth++) {
 		__pm_runtime_barrier(dev);
+		dev->power.last_status = dev->power.runtime_status;
+	}

 out:
 	spin_unlock_irq(&dev->power.lock);
···
 	spin_lock_irqsave(&dev->power.lock, flags);

-	if (dev->power.disable_depth > 0) {
-		dev->power.disable_depth--;
-
-		/* About to enable runtime pm, set accounting_timestamp to now */
-		if (!dev->power.disable_depth)
-			dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
-	} else {
+	if (!dev->power.disable_depth) {
 		dev_warn(dev, "Unbalanced %s!\n", __func__);
+		goto out;
 	}

-	WARN(!dev->power.disable_depth &&
-	     dev->power.runtime_status == RPM_SUSPENDED &&
-	     !dev->power.ignore_children &&
-	     atomic_read(&dev->power.child_count) > 0,
-	     "Enabling runtime PM for inactive device (%s) with active children\n",
-	     dev_name(dev));
+	if (--dev->power.disable_depth > 0)
+		goto out;

+	dev->power.last_status = RPM_INVALID;
+	dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
+
+	if (dev->power.runtime_status == RPM_SUSPENDED &&
+	    !dev->power.ignore_children &&
+	    atomic_read(&dev->power.child_count) > 0)
+		dev_warn(dev, "Enabling runtime PM for inactive device with active children\n");
+
+out:
 	spin_unlock_irqrestore(&dev->power.lock, flags);
 }
 EXPORT_SYMBOL_GPL(pm_runtime_enable);
···
 void pm_runtime_init(struct device *dev)
 {
 	dev->power.runtime_status = RPM_SUSPENDED;
+	dev->power.last_status = RPM_INVALID;
 	dev->power.idle_notification = false;

 	dev->power.disable_depth = 1;
···
 void pm_runtime_put_suppliers(struct device *dev)
 {
 	struct device_link *link;
-	unsigned long flags;
-	bool put;
 	int idx;

 	idx = device_links_read_lock();
···
 	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
 				device_links_read_lock_held())
 		if (link->supplier_preactivated) {
+			bool put;
+
 			link->supplier_preactivated = false;
-			spin_lock_irqsave(&dev->power.lock, flags);
+
+			spin_lock_irq(&dev->power.lock);
+
 			put = pm_runtime_status_suspended(dev) &&
 			      refcount_dec_not_one(&link->rpm_active);
-			spin_unlock_irqrestore(&dev->power.lock, flags);
+
+			spin_unlock_irq(&dev->power.lock);
+
 			if (put)
 				pm_runtime_put(link->supplier);
 		}
···
 		return;

 	pm_runtime_drop_link_count(link->consumer);
-
-	while (refcount_dec_not_one(&link->rpm_active))
-		pm_runtime_put(link->supplier);
+	pm_runtime_release_supplier(link, true);
 }

 static bool pm_runtime_need_not_resume(struct device *dev)
+1 -1
drivers/cpuidle/governors/menu.c
···
  * 1) Energy break even point
  * 2) Performance impact
  * 3) Latency tolerance (from pmqos infrastructure)
- * These these three factors are treated independently.
+ * These three factors are treated independently.
  *
  * Energy break even point
  * -----------------------
+5 -3
drivers/cpuidle/sysfs.c
···
 	&attr_default_status.attr,
 	NULL
 };
+ATTRIBUTE_GROUPS(cpuidle_state_default);

 struct cpuidle_state_kobj {
 	struct cpuidle_state *state;
···
 static struct kobj_type ktype_state_cpuidle = {
 	.sysfs_ops = &cpuidle_state_sysfs_ops,
-	.default_attrs = cpuidle_state_default_attrs,
+	.default_groups = cpuidle_state_default_groups,
 	.release = cpuidle_state_sysfs_release,
 };
···
 }

 /**
- * cpuidle_remove_driver_sysfs - removes the cpuidle states sysfs attributes
+ * cpuidle_remove_state_sysfs - removes the cpuidle states sysfs attributes
  * @device: the target device
  */
 static void cpuidle_remove_state_sysfs(struct cpuidle_device *device)
···
 	&attr_driver_name.attr,
 	NULL
 };
+ATTRIBUTE_GROUPS(cpuidle_driver_default);

 static struct kobj_type ktype_driver_cpuidle = {
 	.sysfs_ops = &cpuidle_driver_sysfs_ops,
-	.default_attrs = cpuidle_driver_default_attrs,
+	.default_groups = cpuidle_driver_default_groups,
 	.release = cpuidle_driver_sysfs_release,
 };
+4 -4
drivers/mmc/host/jz4740_mmc.c
···
 	return 0;
 }

-static int __maybe_unused jz4740_mmc_suspend(struct device *dev)
+static int jz4740_mmc_suspend(struct device *dev)
 {
 	return pinctrl_pm_select_sleep_state(dev);
 }

-static int __maybe_unused jz4740_mmc_resume(struct device *dev)
+static int jz4740_mmc_resume(struct device *dev)
 {
 	return pinctrl_select_default_state(dev);
 }

-static SIMPLE_DEV_PM_OPS(jz4740_mmc_pm_ops, jz4740_mmc_suspend,
+DEFINE_SIMPLE_DEV_PM_OPS(jz4740_mmc_pm_ops, jz4740_mmc_suspend,
 	jz4740_mmc_resume);

 static struct platform_driver jz4740_mmc_driver = {
···
 		.name = "jz4740-mmc",
 		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
 		.of_match_table = of_match_ptr(jz4740_mmc_of_match),
-		.pm = pm_ptr(&jz4740_mmc_pm_ops),
+		.pm = pm_sleep_ptr(&jz4740_mmc_pm_ops),
 	},
 };
+2 -4
drivers/mmc/host/mxcmmc.c
···
 	return 0;
 }

-#ifdef CONFIG_PM_SLEEP
 static int mxcmci_suspend(struct device *dev)
 {
 	struct mmc_host *mmc = dev_get_drvdata(dev);
···
 	return ret;
 }
-#endif

-static SIMPLE_DEV_PM_OPS(mxcmci_pm_ops, mxcmci_suspend, mxcmci_resume);
+DEFINE_SIMPLE_DEV_PM_OPS(mxcmci_pm_ops, mxcmci_suspend, mxcmci_resume);

 static struct platform_driver mxcmci_driver = {
 	.probe = mxcmci_probe,
···
 	.driver = {
 		.name = DRIVER_NAME,
 		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
-		.pm = &mxcmci_pm_ops,
+		.pm = pm_sleep_ptr(&mxcmci_pm_ops),
 		.of_match_table = mxcmci_of_match,
 	}
 };
+3 -1
drivers/net/ethernet/realtek/r8169_main.c
···
 	.probe = rtl_init_one,
 	.remove = rtl_remove_one,
 	.shutdown = rtl_shutdown,
-	.driver.pm = pm_ptr(&rtl8169_pm_ops),
+#ifdef CONFIG_PM
+	.driver.pm = &rtl8169_pm_ops,
+#endif
 };

 module_pci_driver(rtl8169_pci_driver);
+1 -1
include/linux/acpi.h
···
 int acpi_resources_are_enforced(void);

 #ifdef CONFIG_HIBERNATION
-void __init acpi_no_s4_hw_signature(void);
+void __init acpi_check_s4_hw_signature(int check);
 #endif

 #ifdef CONFIG_PM_SLEEP
+53 -29
include/linux/pm.h
···
 	int (*runtime_idle)(struct device *dev);
 };

+#define SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
+	.suspend = pm_sleep_ptr(suspend_fn), \
+	.resume = pm_sleep_ptr(resume_fn), \
+	.freeze = pm_sleep_ptr(suspend_fn), \
+	.thaw = pm_sleep_ptr(resume_fn), \
+	.poweroff = pm_sleep_ptr(suspend_fn), \
+	.restore = pm_sleep_ptr(resume_fn),
+
+#define LATE_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
+	.suspend_late = pm_sleep_ptr(suspend_fn), \
+	.resume_early = pm_sleep_ptr(resume_fn), \
+	.freeze_late = pm_sleep_ptr(suspend_fn), \
+	.thaw_early = pm_sleep_ptr(resume_fn), \
+	.poweroff_late = pm_sleep_ptr(suspend_fn), \
+	.restore_early = pm_sleep_ptr(resume_fn),
+
+#define NOIRQ_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
+	.suspend_noirq = pm_sleep_ptr(suspend_fn), \
+	.resume_noirq = pm_sleep_ptr(resume_fn), \
+	.freeze_noirq = pm_sleep_ptr(suspend_fn), \
+	.thaw_noirq = pm_sleep_ptr(resume_fn), \
+	.poweroff_noirq = pm_sleep_ptr(suspend_fn), \
+	.restore_noirq = pm_sleep_ptr(resume_fn),
+
+#define RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \
+	.runtime_suspend = suspend_fn, \
+	.runtime_resume = resume_fn, \
+	.runtime_idle = idle_fn,
+
 #ifdef CONFIG_PM_SLEEP
 #define SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
-	.suspend = suspend_fn, \
-	.resume = resume_fn, \
-	.freeze = suspend_fn, \
-	.thaw = resume_fn, \
-	.poweroff = suspend_fn, \
-	.restore = resume_fn,
+	SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
 #else
 #define SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
 #endif

 #ifdef CONFIG_PM_SLEEP
 #define SET_LATE_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
-	.suspend_late = suspend_fn, \
-	.resume_early = resume_fn, \
-	.freeze_late = suspend_fn, \
-	.thaw_early = resume_fn, \
-	.poweroff_late = suspend_fn, \
-	.restore_early = resume_fn,
+	LATE_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
 #else
 #define SET_LATE_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
 #endif

 #ifdef CONFIG_PM_SLEEP
 #define SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
-	.suspend_noirq = suspend_fn, \
-	.resume_noirq = resume_fn, \
-	.freeze_noirq = suspend_fn, \
-	.thaw_noirq = resume_fn, \
-	.poweroff_noirq = suspend_fn, \
-	.restore_noirq = resume_fn,
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
 #else
 #define SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)
 #endif

 #ifdef CONFIG_PM
 #define SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \
-	.runtime_suspend = suspend_fn, \
-	.runtime_resume = resume_fn, \
-	.runtime_idle = idle_fn,
+	RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn)
 #else
 #define SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn)
 #endif
···
  * Use this if you want to use the same suspend and resume callbacks for suspend
  * to RAM and hibernation.
  */
-#define SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \
-const struct dev_pm_ops __maybe_unused name = { \
-	SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
+#define DEFINE_SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \
+static const struct dev_pm_ops name = { \
+	SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
 }

 /*
···
  * .resume_early(), to the same routines as .runtime_suspend() and
  * .runtime_resume(), respectively (and analogously for hibernation).
  */
+#define DEFINE_UNIVERSAL_DEV_PM_OPS(name, suspend_fn, resume_fn, idle_fn) \
+static const struct dev_pm_ops name = { \
+	SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
+	RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \
+}
+
+/* Deprecated. Use DEFINE_SIMPLE_DEV_PM_OPS() instead. */
+#define SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \
+const struct dev_pm_ops __maybe_unused name = { \
+	SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
+}
+
+/* Deprecated. Use DEFINE_UNIVERSAL_DEV_PM_OPS() instead. */
 #define UNIVERSAL_DEV_PM_OPS(name, suspend_fn, resume_fn, idle_fn) \
 const struct dev_pm_ops __maybe_unused name = { \
 	SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
 	SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \
 }

-#ifdef CONFIG_PM
-#define pm_ptr(_ptr) (_ptr)
-#else
-#define pm_ptr(_ptr) NULL
-#endif
+#define pm_ptr(_ptr) PTR_IF(IS_ENABLED(CONFIG_PM), (_ptr))
+#define pm_sleep_ptr(_ptr) PTR_IF(IS_ENABLED(CONFIG_PM_SLEEP), (_ptr))

 /*
  * PM_EVENT_ messages
···
 enum rpm_status {
+	RPM_INVALID = -1,
 	RPM_ACTIVE = 0,
 	RPM_RESUMING,
 	RPM_SUSPENDED,
···
 	unsigned int links_count;
 	enum rpm_request request;
 	enum rpm_status runtime_status;
+	enum rpm_status last_status;
 	int runtime_error;
 	int autosuspend_delay;
 	u64 last_busy;
+3
include/linux/pm_runtime.h
···
 extern void pm_runtime_put_suppliers(struct device *dev);
 extern void pm_runtime_new_link(struct device *dev);
 extern void pm_runtime_drop_link(struct device_link *link);
+extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle);

 extern int devm_pm_runtime_enable(struct device *dev);
···
 static inline void pm_runtime_put_suppliers(struct device *dev) {}
 static inline void pm_runtime_new_link(struct device *dev) {}
 static inline void pm_runtime_drop_link(struct device_link *link) {}
+static inline void pm_runtime_release_supplier(struct device_link *link,
+					       bool check_idle) {}

 #endif /* !CONFIG_PM */
+1
include/linux/suspend.h
···
 extern asmlinkage int swsusp_arch_suspend(void);
 extern asmlinkage int swsusp_arch_resume(void);

+extern u32 swsusp_hardware_signature;
 extern void hibernation_set_ops(const struct platform_hibernation_ops *ops);
 extern int hibernate(void);
 extern bool system_entering_hibernation(void);
+1
kernel/power/power.h
···
 #define SF_PLATFORM_MODE	1
 #define SF_NOCOMPRESS_MODE	2
 #define SF_CRC32_MODE		4
+#define SF_HW_SIG		8

 /* kernel/power/hibernate.c */
 extern int swsusp_check(void);
+14 -2
kernel/power/swap.c
···
 #define HIBERNATE_SIG	"S1SUSPEND"

+u32 swsusp_hardware_signature;
+
 /*
  * When reading an {un,}compressed image, we may restore pages in place,
  * in which case some architectures need these pages cleaning before they
···
 struct swsusp_header {
 	char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int) -
-		      sizeof(u32)];
+		      sizeof(u32) - sizeof(u32)];
+	u32	hw_sig;
 	u32	crc32;
 	sector_t image;
 	unsigned int flags;	/* Flags to pass to the "boot" kernel */
···
 /*
  * Saving part
  */
-
 static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
 {
 	int error;
···
 	memcpy(swsusp_header->orig_sig, swsusp_header->sig, 10);
 	memcpy(swsusp_header->sig, HIBERNATE_SIG, 10);
 	swsusp_header->image = handle->first_sector;
+	if (swsusp_hardware_signature) {
+		swsusp_header->hw_sig = swsusp_hardware_signature;
+		flags |= SF_HW_SIG;
+	}
 	swsusp_header->flags = flags;
 	if (flags & SF_CRC32_MODE)
 		swsusp_header->crc32 = handle->crc32;
···
 					swsusp_resume_block,
 					swsusp_header, NULL);
 	} else {
 		error = -EINVAL;
 	}
+	if (!error && swsusp_header->flags & SF_HW_SIG &&
+	    swsusp_header->hw_sig != swsusp_hardware_signature) {
+		pr_info("Suspend image hardware signature mismatch (%08x now %08x); aborting resume.\n",
+			swsusp_header->hw_sig, swsusp_hardware_signature);
+		error = -EINVAL;
+	}