
PM / Sleep: Introduce "late suspend" and "early resume" of devices

The current device suspend/resume phases during system-wide power
transitions appear to be insufficient for some platforms that want
to use the same callback routines for saving device states and
related operations during runtime suspend/resume as well as during
system suspend/resume. In principle, they could point their
.suspend_noirq() and .resume_noirq() to the same callback routines
as their .runtime_suspend() and .runtime_resume(), respectively,
but at least some of them require device interrupts to be enabled
while the code in those routines is running.

It also makes sense to have device suspend-resume callbacks that will
be executed with runtime PM disabled and with device interrupts
enabled in case someone needs to run some special code in that
context during system-wide power transitions.

Apart from this, .suspend_noirq() and .resume_noirq() were introduced
as a workaround for drivers using shared interrupts and failing to
prevent their interrupt handlers from accessing suspended hardware.
It appears to be better not to use them for other purposes, or we may
have to deal with some serious confusion (which seems to be happening
already).

For the above reasons, introduce new device suspend/resume phases,
"late suspend" and "early resume" (and analogously for hibernation)
whose callbacks will be executed with runtime PM disabled and with
device interrupts enabled and whose callback pointers generally may
point to runtime suspend/resume routines.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>

+358 -92
+61 -32
Documentation/power/devices.txt
··· 96 96 int (*thaw)(struct device *dev); 97 97 int (*poweroff)(struct device *dev); 98 98 int (*restore)(struct device *dev); 99 + int (*suspend_late)(struct device *dev); 100 + int (*resume_early)(struct device *dev); 101 + int (*freeze_late)(struct device *dev); 102 + int (*thaw_early)(struct device *dev); 103 + int (*poweroff_late)(struct device *dev); 104 + int (*restore_early)(struct device *dev); 99 105 int (*suspend_noirq)(struct device *dev); 100 106 int (*resume_noirq)(struct device *dev); 101 107 int (*freeze_noirq)(struct device *dev); ··· 311 305 ----------------------- 312 306 When the system goes into the standby or memory sleep state, the phases are: 313 307 314 - prepare, suspend, suspend_noirq. 308 + prepare, suspend, suspend_late, suspend_noirq. 315 309 316 310 1. The prepare phase is meant to prevent races by preventing new devices 317 311 from being registered; the PM core would never know that all the ··· 330 324 appropriate low-power state, depending on the bus type the device is on, 331 325 and they may enable wakeup events. 332 326 333 - 3. The suspend_noirq phase occurs after IRQ handlers have been disabled, 327 + 3. For a number of devices it is convenient to split suspend into the 328 + "quiesce device" and "save device state" phases, in which cases 329 + suspend_late is meant to do the latter. It is always executed after 330 + runtime power management has been disabled for all devices. 331 + 332 + 4. The suspend_noirq phase occurs after IRQ handlers have been disabled, 334 333 which means that the driver's interrupt handler will not be called while 335 334 the callback method is running. The methods should save the values of 336 335 the device's registers that weren't saved previously and finally put the ··· 370 359 ---------------------- 371 360 When resuming from standby or memory sleep, the phases are: 372 361 373 - resume_noirq, resume, complete. 362 + resume_noirq, resume_early, resume, complete. 374 363 375 364 1. 
The resume_noirq callback methods should perform any actions needed 376 365 before the driver's interrupt handlers are invoked. This generally ··· 386 375 device driver's ->pm.resume_noirq() method to perform device-specific 387 376 actions. 388 377 389 - 2. The resume methods should bring the the device back to its operating 378 + 2. The resume_early methods should prepare devices for the execution of 379 + the resume methods. This generally involves undoing the actions of the 380 + preceding suspend_late phase. 381 + 382 + 3. The resume methods should bring the device back to its operating 390 383 state, so that it can perform normal I/O. This generally involves 391 384 undoing the actions of the suspend phase. 392 385 393 - 3. The complete phase uses only a bus callback. The method should undo the 394 - actions of the prepare phase. Note, however, that new children may be 395 - registered below the device as soon as the resume callbacks occur; it's 396 - not necessary to wait until the complete phase. 386 + 4. The complete phase should undo the actions of the prepare phase. Note, 387 + however, that new children may be registered below the device as soon as 388 + the resume callbacks occur; it's not necessary to wait until the 389 + complete phase. 397 390 398 391 At the end of these phases, drivers should be as functional as they were before 399 392 suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are ··· 444 429 devices (thaw), write the image to permanent storage, and finally shut down the 445 430 system (poweroff). The phases used to accomplish this are: 446 431 447 - prepare, freeze, freeze_noirq, thaw_noirq, thaw, complete, 448 - prepare, poweroff, poweroff_noirq 432 + prepare, freeze, freeze_late, freeze_noirq, thaw_noirq, thaw_early, 433 + thaw, complete, prepare, poweroff, poweroff_late, poweroff_noirq 449 434 450 435 1. The prepare phase is discussed in the "Entering System Suspend" section 451 436 above. 
··· 456 441 save time it's best not to do so. Also, the device should not be 457 442 prepared to generate wakeup events. 458 443 459 - 3. The freeze_noirq phase is analogous to the suspend_noirq phase discussed 444 + 3. The freeze_late phase is analogous to the suspend_late phase described 445 + above, except that the device should not be put in a low-power state and 446 + should not be allowed to generate wakeup events by it. 447 + 448 + 4. The freeze_noirq phase is analogous to the suspend_noirq phase discussed 460 449 above, except again that the device should not be put in a low-power 461 450 state and should not be allowed to generate wakeup events. 462 451 ··· 468 449 the contents of memory should remain undisturbed while this happens, so that the 469 450 image forms an atomic snapshot of the system state. 470 451 471 - 4. The thaw_noirq phase is analogous to the resume_noirq phase discussed 452 + 5. The thaw_noirq phase is analogous to the resume_noirq phase discussed 472 453 above. The main difference is that its methods can assume the device is 473 454 in the same state as at the end of the freeze_noirq phase. 474 455 475 - 5. The thaw phase is analogous to the resume phase discussed above. Its 456 + 6. The thaw_early phase is analogous to the resume_early phase described 457 + above. Its methods should undo the actions of the preceding 458 + freeze_late, if necessary. 459 + 460 + 7. The thaw phase is analogous to the resume phase discussed above. Its 476 461 methods should bring the device back to an operating state, so that it 477 462 can be used for saving the image if necessary. 478 463 479 - 6. The complete phase is discussed in the "Leaving System Suspend" section 464 + 8. The complete phase is discussed in the "Leaving System Suspend" section 480 465 above. 
481 466 482 467 At this point the system image is saved, and the devices then need to be ··· 488 465 before putting the system into the standby or memory sleep state, and the phases 489 466 are similar. 490 467 491 - 7. The prepare phase is discussed above. 468 + 9. The prepare phase is discussed above. 492 469 493 - 8. The poweroff phase is analogous to the suspend phase. 470 + 10. The poweroff phase is analogous to the suspend phase. 494 471 495 - 9. The poweroff_noirq phase is analogous to the suspend_noirq phase. 472 + 11. The poweroff_late phase is analogous to the suspend_late phase. 496 473 497 - The poweroff and poweroff_noirq callbacks should do essentially the same things 498 - as the suspend and suspend_noirq callbacks. The only notable difference is that 499 - they need not store the device register values, because the registers should 500 - already have been stored during the freeze or freeze_noirq phases. 474 + 12. The poweroff_noirq phase is analogous to the suspend_noirq phase. 475 + 476 + The poweroff, poweroff_late and poweroff_noirq callbacks should do essentially 477 + the same things as the suspend, suspend_late and suspend_noirq callbacks, 478 + respectively. The only notable difference is that they need not store the 479 + device register values, because the registers should already have been stored 480 + during the freeze, freeze_late or freeze_noirq phases. 501 481 502 482 503 483 Leaving Hibernation ··· 544 518 functionality. The operation is much like waking up from the memory sleep 545 519 state, although it involves different phases: 546 520 547 - restore_noirq, restore, complete 521 + restore_noirq, restore_early, restore, complete 548 522 549 523 1. The restore_noirq phase is analogous to the resume_noirq phase. 550 524 551 - 2. The restore phase is analogous to the resume phase. 525 + 2. The restore_early phase is analogous to the resume_early phase. 552 526 553 - 3. The complete phase is discussed above. 527 + 3. 
The restore phase is analogous to the resume phase. 554 528 555 - The main difference from resume[_noirq] is that restore[_noirq] must assume the 556 - device has been accessed and reconfigured by the boot loader or the boot kernel. 557 - Consequently the state of the device may be different from the state remembered 558 - from the freeze and freeze_noirq phases. The device may even need to be reset 559 - and completely re-initialized. In many cases this difference doesn't matter, so 560 - the resume[_noirq] and restore[_norq] method pointers can be set to the same 561 - routines. Nevertheless, different callback pointers are used in case there is a 562 - situation where it actually matters. 529 + 4. The complete phase is discussed above. 530 + 531 + The main difference from resume[_early|_noirq] is that restore[_early|_noirq] 532 + must assume the device has been accessed and reconfigured by the boot loader or 533 + the boot kernel. Consequently the state of the device may be different from the 534 + state remembered from the freeze, freeze_late and freeze_noirq phases. The 535 + device may even need to be reset and completely re-initialized. In many cases 536 + this difference doesn't matter, so the resume[_early|_noirq] and 537 + restore[_early|_noirq] method pointers can be set to the same routines. 538 + Nevertheless, different callback pointers are used in case there is a situation 539 + where it actually does matter. 563 540 564 541 565 542 Device Power Management Domains
+5 -6
arch/x86/kernel/apm_32.c
··· 1234 1234 struct apm_user *as; 1235 1235 1236 1236 dpm_suspend_start(PMSG_SUSPEND); 1237 - 1238 - dpm_suspend_noirq(PMSG_SUSPEND); 1237 + dpm_suspend_end(PMSG_SUSPEND); 1239 1238 1240 1239 local_irq_disable(); 1241 1240 syscore_suspend(); ··· 1258 1259 syscore_resume(); 1259 1260 local_irq_enable(); 1260 1261 1261 - dpm_resume_noirq(PMSG_RESUME); 1262 - 1262 + dpm_resume_start(PMSG_RESUME); 1263 1263 dpm_resume_end(PMSG_RESUME); 1264 + 1264 1265 queue_event(APM_NORMAL_RESUME, NULL); 1265 1266 spin_lock(&user_list_lock); 1266 1267 for (as = user_list; as != NULL; as = as->next) { ··· 1276 1277 { 1277 1278 int err; 1278 1279 1279 - dpm_suspend_noirq(PMSG_SUSPEND); 1280 + dpm_suspend_end(PMSG_SUSPEND); 1280 1281 1281 1282 local_irq_disable(); 1282 1283 syscore_suspend(); ··· 1290 1291 syscore_resume(); 1291 1292 local_irq_enable(); 1292 1293 1293 - dpm_resume_noirq(PMSG_RESUME); 1294 + dpm_resume_start(PMSG_RESUME); 1294 1295 } 1295 1296 1296 1297 static apm_event_t get_event(void)
+226 -23
drivers/base/power/main.c
··· 47 47 LIST_HEAD(dpm_list); 48 48 LIST_HEAD(dpm_prepared_list); 49 49 LIST_HEAD(dpm_suspended_list); 50 + LIST_HEAD(dpm_late_early_list); 50 51 LIST_HEAD(dpm_noirq_list); 51 52 52 53 struct suspend_stats suspend_stats; ··· 247 246 } 248 247 249 248 /** 249 + * pm_late_early_op - Return the PM operation appropriate for given PM event. 250 + * @ops: PM operations to choose from. 251 + * @state: PM transition of the system being carried out. 252 + * 253 + * Runtime PM is disabled for @dev while this function is being executed. 254 + */ 255 + static pm_callback_t pm_late_early_op(const struct dev_pm_ops *ops, 256 + pm_message_t state) 257 + { 258 + switch (state.event) { 259 + #ifdef CONFIG_SUSPEND 260 + case PM_EVENT_SUSPEND: 261 + return ops->suspend_late; 262 + case PM_EVENT_RESUME: 263 + return ops->resume_early; 264 + #endif /* CONFIG_SUSPEND */ 265 + #ifdef CONFIG_HIBERNATE_CALLBACKS 266 + case PM_EVENT_FREEZE: 267 + case PM_EVENT_QUIESCE: 268 + return ops->freeze_late; 269 + case PM_EVENT_HIBERNATE: 270 + return ops->poweroff_late; 271 + case PM_EVENT_THAW: 272 + case PM_EVENT_RECOVER: 273 + return ops->thaw_early; 274 + case PM_EVENT_RESTORE: 275 + return ops->restore_early; 276 + #endif /* CONFIG_HIBERNATE_CALLBACKS */ 277 + } 278 + 279 + return NULL; 280 + } 281 + 282 + /** 250 283 * pm_noirq_op - Return the PM operation appropriate for given PM event. 251 284 * @ops: PM operations to choose from. 252 285 * @state: PM transition of the system being carried out. 
··· 409 374 TRACE_RESUME(0); 410 375 411 376 if (dev->pm_domain) { 412 - info = "EARLY power domain "; 377 + info = "noirq power domain "; 413 378 callback = pm_noirq_op(&dev->pm_domain->ops, state); 414 379 } else if (dev->type && dev->type->pm) { 415 - info = "EARLY type "; 380 + info = "noirq type "; 416 381 callback = pm_noirq_op(dev->type->pm, state); 417 382 } else if (dev->class && dev->class->pm) { 418 - info = "EARLY class "; 383 + info = "noirq class "; 419 384 callback = pm_noirq_op(dev->class->pm, state); 420 385 } else if (dev->bus && dev->bus->pm) { 421 - info = "EARLY bus "; 386 + info = "noirq bus "; 422 387 callback = pm_noirq_op(dev->bus->pm, state); 423 388 } 424 389 425 390 if (!callback && dev->driver && dev->driver->pm) { 426 - info = "EARLY driver "; 391 + info = "noirq driver "; 427 392 callback = pm_noirq_op(dev->driver->pm, state); 428 393 } 429 394 ··· 434 399 } 435 400 436 401 /** 437 - * dpm_resume_noirq - Execute "early resume" callbacks for non-sysdev devices. 402 + * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices. 438 403 * @state: PM transition of the system being carried out. 439 404 * 440 - * Call the "noirq" resume handlers for all devices marked as DPM_OFF_IRQ and 405 + * Call the "noirq" resume handlers for all devices in dpm_noirq_list and 441 406 * enable device drivers to receive interrupts. 
442 407 */ 443 - void dpm_resume_noirq(pm_message_t state) 408 + static void dpm_resume_noirq(pm_message_t state) 444 409 { 445 410 ktime_t starttime = ktime_get(); 446 411 ··· 450 415 int error; 451 416 452 417 get_device(dev); 453 - list_move_tail(&dev->power.entry, &dpm_suspended_list); 418 + list_move_tail(&dev->power.entry, &dpm_late_early_list); 454 419 mutex_unlock(&dpm_list_mtx); 455 420 456 421 error = device_resume_noirq(dev, state); 457 422 if (error) { 458 423 suspend_stats.failed_resume_noirq++; 459 424 dpm_save_failed_step(SUSPEND_RESUME_NOIRQ); 425 + dpm_save_failed_dev(dev_name(dev)); 426 + pm_dev_err(dev, state, " noirq", error); 427 + } 428 + 429 + mutex_lock(&dpm_list_mtx); 430 + put_device(dev); 431 + } 432 + mutex_unlock(&dpm_list_mtx); 433 + dpm_show_time(starttime, state, "noirq"); 434 + resume_device_irqs(); 435 + } 436 + 437 + /** 438 + * device_resume_early - Execute an "early resume" callback for given device. 439 + * @dev: Device to handle. 440 + * @state: PM transition of the system being carried out. 441 + * 442 + * Runtime PM is disabled for @dev while this function is being executed. 
443 + */ 444 + static int device_resume_early(struct device *dev, pm_message_t state) 445 + { 446 + pm_callback_t callback = NULL; 447 + char *info = NULL; 448 + int error = 0; 449 + 450 + TRACE_DEVICE(dev); 451 + TRACE_RESUME(0); 452 + 453 + if (dev->pm_domain) { 454 + info = "early power domain "; 455 + callback = pm_late_early_op(&dev->pm_domain->ops, state); 456 + } else if (dev->type && dev->type->pm) { 457 + info = "early type "; 458 + callback = pm_late_early_op(dev->type->pm, state); 459 + } else if (dev->class && dev->class->pm) { 460 + info = "early class "; 461 + callback = pm_late_early_op(dev->class->pm, state); 462 + } else if (dev->bus && dev->bus->pm) { 463 + info = "early bus "; 464 + callback = pm_late_early_op(dev->bus->pm, state); 465 + } 466 + 467 + if (!callback && dev->driver && dev->driver->pm) { 468 + info = "early driver "; 469 + callback = pm_late_early_op(dev->driver->pm, state); 470 + } 471 + 472 + error = dpm_run_callback(callback, dev, state, info); 473 + 474 + TRACE_RESUME(error); 475 + return error; 476 + } 477 + 478 + /** 479 + * dpm_resume_early - Execute "early resume" callbacks for all devices. 480 + * @state: PM transition of the system being carried out. 
481 + */ 482 + static void dpm_resume_early(pm_message_t state) 483 + { 484 + ktime_t starttime = ktime_get(); 485 + 486 + mutex_lock(&dpm_list_mtx); 487 + while (!list_empty(&dpm_late_early_list)) { 488 + struct device *dev = to_device(dpm_late_early_list.next); 489 + int error; 490 + 491 + get_device(dev); 492 + list_move_tail(&dev->power.entry, &dpm_suspended_list); 493 + mutex_unlock(&dpm_list_mtx); 494 + 495 + error = device_resume_early(dev, state); 496 + if (error) { 497 + suspend_stats.failed_resume_early++; 498 + dpm_save_failed_step(SUSPEND_RESUME_EARLY); 460 499 dpm_save_failed_dev(dev_name(dev)); 461 500 pm_dev_err(dev, state, " early", error); 462 501 } ··· 540 431 } 541 432 mutex_unlock(&dpm_list_mtx); 542 433 dpm_show_time(starttime, state, "early"); 543 - resume_device_irqs(); 544 434 } 545 - EXPORT_SYMBOL_GPL(dpm_resume_noirq); 435 + 436 + /** 437 + * dpm_resume_start - Execute "noirq" and "early" device callbacks. 438 + * @state: PM transition of the system being carried out. 439 + */ 440 + void dpm_resume_start(pm_message_t state) 441 + { 442 + dpm_resume_noirq(state); 443 + dpm_resume_early(state); 444 + } 445 + EXPORT_SYMBOL_GPL(dpm_resume_start); 546 446 547 447 /** 548 448 * device_resume - Execute "resume" callbacks for given device. 
··· 834 716 char *info = NULL; 835 717 836 718 if (dev->pm_domain) { 837 - info = "LATE power domain "; 719 + info = "noirq power domain "; 838 720 callback = pm_noirq_op(&dev->pm_domain->ops, state); 839 721 } else if (dev->type && dev->type->pm) { 840 - info = "LATE type "; 722 + info = "noirq type "; 841 723 callback = pm_noirq_op(dev->type->pm, state); 842 724 } else if (dev->class && dev->class->pm) { 843 - info = "LATE class "; 725 + info = "noirq class "; 844 726 callback = pm_noirq_op(dev->class->pm, state); 845 727 } else if (dev->bus && dev->bus->pm) { 846 - info = "LATE bus "; 728 + info = "noirq bus "; 847 729 callback = pm_noirq_op(dev->bus->pm, state); 848 730 } 849 731 850 732 if (!callback && dev->driver && dev->driver->pm) { 851 - info = "LATE driver "; 733 + info = "noirq driver "; 852 734 callback = pm_noirq_op(dev->driver->pm, state); 853 735 } 854 736 ··· 856 738 } 857 739 858 740 /** 859 - * dpm_suspend_noirq - Execute "late suspend" callbacks for non-sysdev devices. 741 + * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices. 860 742 * @state: PM transition of the system being carried out. 861 743 * 862 744 * Prevent device drivers from receiving interrupts and call the "noirq" suspend 863 745 * handlers for all non-sysdev devices. 
864 746 */ 865 - int dpm_suspend_noirq(pm_message_t state) 747 + static int dpm_suspend_noirq(pm_message_t state) 866 748 { 867 749 ktime_t starttime = ktime_get(); 868 750 int error = 0; 869 751 870 752 suspend_device_irqs(); 871 753 mutex_lock(&dpm_list_mtx); 872 - while (!list_empty(&dpm_suspended_list)) { 873 - struct device *dev = to_device(dpm_suspended_list.prev); 754 + while (!list_empty(&dpm_late_early_list)) { 755 + struct device *dev = to_device(dpm_late_early_list.prev); 874 756 875 757 get_device(dev); 876 758 mutex_unlock(&dpm_list_mtx); ··· 879 761 880 762 mutex_lock(&dpm_list_mtx); 881 763 if (error) { 882 - pm_dev_err(dev, state, " late", error); 764 + pm_dev_err(dev, state, " noirq", error); 883 765 suspend_stats.failed_suspend_noirq++; 884 766 dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ); 885 767 dpm_save_failed_dev(dev_name(dev)); ··· 894 776 if (error) 895 777 dpm_resume_noirq(resume_event(state)); 896 778 else 897 - dpm_show_time(starttime, state, "late"); 779 + dpm_show_time(starttime, state, "noirq"); 898 780 return error; 899 781 } 900 - EXPORT_SYMBOL_GPL(dpm_suspend_noirq); 782 + 783 + /** 784 + * device_suspend_late - Execute a "late suspend" callback for given device. 785 + * @dev: Device to handle. 786 + * @state: PM transition of the system being carried out. 787 + * 788 + * Runtime PM is disabled for @dev while this function is being executed. 
789 + */ 790 + static int device_suspend_late(struct device *dev, pm_message_t state) 791 + { 792 + pm_callback_t callback = NULL; 793 + char *info = NULL; 794 + 795 + if (dev->pm_domain) { 796 + info = "late power domain "; 797 + callback = pm_late_early_op(&dev->pm_domain->ops, state); 798 + } else if (dev->type && dev->type->pm) { 799 + info = "late type "; 800 + callback = pm_late_early_op(dev->type->pm, state); 801 + } else if (dev->class && dev->class->pm) { 802 + info = "late class "; 803 + callback = pm_late_early_op(dev->class->pm, state); 804 + } else if (dev->bus && dev->bus->pm) { 805 + info = "late bus "; 806 + callback = pm_late_early_op(dev->bus->pm, state); 807 + } 808 + 809 + if (!callback && dev->driver && dev->driver->pm) { 810 + info = "late driver "; 811 + callback = pm_late_early_op(dev->driver->pm, state); 812 + } 813 + 814 + return dpm_run_callback(callback, dev, state, info); 815 + } 816 + 817 + /** 818 + * dpm_suspend_late - Execute "late suspend" callbacks for all devices. 819 + * @state: PM transition of the system being carried out. 
820 + */ 821 + static int dpm_suspend_late(pm_message_t state) 822 + { 823 + ktime_t starttime = ktime_get(); 824 + int error = 0; 825 + 826 + mutex_lock(&dpm_list_mtx); 827 + while (!list_empty(&dpm_suspended_list)) { 828 + struct device *dev = to_device(dpm_suspended_list.prev); 829 + 830 + get_device(dev); 831 + mutex_unlock(&dpm_list_mtx); 832 + 833 + error = device_suspend_late(dev, state); 834 + 835 + mutex_lock(&dpm_list_mtx); 836 + if (error) { 837 + pm_dev_err(dev, state, " late", error); 838 + suspend_stats.failed_suspend_late++; 839 + dpm_save_failed_step(SUSPEND_SUSPEND_LATE); 840 + dpm_save_failed_dev(dev_name(dev)); 841 + put_device(dev); 842 + break; 843 + } 844 + if (!list_empty(&dev->power.entry)) 845 + list_move(&dev->power.entry, &dpm_late_early_list); 846 + put_device(dev); 847 + } 848 + mutex_unlock(&dpm_list_mtx); 849 + if (error) 850 + dpm_resume_early(resume_event(state)); 851 + else 852 + dpm_show_time(starttime, state, "late"); 853 + 854 + return error; 855 + } 856 + 857 + /** 858 + * dpm_suspend_end - Execute "late" and "noirq" device suspend callbacks. 859 + * @state: PM transition of the system being carried out. 860 + */ 861 + int dpm_suspend_end(pm_message_t state) 862 + { 863 + int error = dpm_suspend_late(state); 864 + 865 + return error ? : dpm_suspend_noirq(state); 866 + } 867 + EXPORT_SYMBOL_GPL(dpm_suspend_end); 901 868 902 869 /** 903 870 * legacy_suspend - Execute a legacy (bus or class) suspend callback for device.
+3 -3
drivers/xen/manage.c
··· 129 129 printk(KERN_DEBUG "suspending xenstore...\n"); 130 130 xs_suspend(); 131 131 132 - err = dpm_suspend_noirq(PMSG_FREEZE); 132 + err = dpm_suspend_end(PMSG_FREEZE); 133 133 if (err) { 134 - printk(KERN_ERR "dpm_suspend_noirq failed: %d\n", err); 134 + printk(KERN_ERR "dpm_suspend_end failed: %d\n", err); 135 135 goto out_resume; 136 136 } 137 137 ··· 149 149 150 150 err = stop_machine(xen_suspend, &si, cpumask_of(0)); 151 151 152 - dpm_resume_noirq(si.cancelled ? PMSG_THAW : PMSG_RESTORE); 152 + dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE); 153 153 154 154 if (err) { 155 155 printk(KERN_ERR "failed to start xen_suspend: %d\n", err);
+35 -8
include/linux/pm.h
··· 110 110 * Subsystem-level @suspend() is executed for all devices after invoking 111 111 * subsystem-level @prepare() for all of them. 112 112 * 113 + * @suspend_late: Continue operations started by @suspend(). For a number of 114 + * devices @suspend_late() may point to the same callback routine as the 115 + * runtime suspend callback. 116 + * 113 117 * @resume: Executed after waking the system up from a sleep state in which the 114 118 * contents of main memory were preserved. The exact action to perform 115 119 * depends on the device's subsystem, but generally the driver is expected ··· 126 122 * Subsystem-level @resume() is executed for all devices after invoking 127 123 * subsystem-level @resume_noirq() for all of them. 128 124 * 125 + * @resume_early: Prepare to execute @resume(). For a number of devices 126 + * @resume_early() may point to the same callback routine as the runtime 127 + * resume callback. 128 + * 129 129 * @freeze: Hibernation-specific, executed before creating a hibernation image. 130 130 * Analogous to @suspend(), but it should not enable the device to signal 131 131 * wakeup events or change its power state. The majority of subsystems ··· 138 130 * during the subsequent resume from hibernation. 139 131 * Subsystem-level @freeze() is executed for all devices after invoking 140 132 * subsystem-level @prepare() for all of them. 133 + * 134 + * @freeze_late: Continue operations started by @freeze(). Analogous to 135 + * @suspend_late(), but it should not enable the device to signal wakeup 136 + * events or change its power state. 141 137 * 142 138 * @thaw: Hibernation-specific, executed after creating a hibernation image OR 143 139 * if the creation of an image has failed. Also executed after a failing ··· 152 140 * subsystem-level @thaw_noirq() for all of them. It also may be executed 153 141 * directly after @freeze() in case of a transition error. 154 142 * 143 + * @thaw_early: Prepare to execute @thaw(). 
Undo the changes made by the 144 + * preceding @freeze_late(). 145 + * 155 146 * @poweroff: Hibernation-specific, executed after saving a hibernation image. 156 147 * Analogous to @suspend(), but it need not save the device's settings in 157 148 * memory. 158 149 * Subsystem-level @poweroff() is executed for all devices after invoking 159 150 * subsystem-level @prepare() for all of them. 160 151 * 152 + * @poweroff_late: Continue operations started by @poweroff(). Analogous to 153 + * @suspend_late(), but it need not save the device's settings in memory. 154 + * 161 155 * @restore: Hibernation-specific, executed after restoring the contents of main 162 156 * memory from a hibernation image, analogous to @resume(). 157 + * 158 + * @restore_early: Prepare to execute @restore(), analogous to @resume_early(). 163 159 * 164 160 * @suspend_noirq: Complete the actions started by @suspend(). Carry out any 165 161 * additional operations required for suspending the device that might be ··· 178 158 * @suspend_noirq() has returned successfully. If the device can generate 179 159 * system wakeup signals and is enabled to wake up the system, it should be 180 160 * configured to do so at that time. However, depending on the platform 181 - * and device's subsystem, @suspend() may be allowed to put the device into 182 - * the low-power state and configure it to generate wakeup signals, in 183 - * which case it generally is not necessary to define @suspend_noirq(). 161 + * and device's subsystem, @suspend() or @suspend_late() may be allowed to 162 + * put the device into the low-power state and configure it to generate 163 + * wakeup signals, in which case it generally is not necessary to define 164 + * @suspend_noirq(). 
184 165 * 185 166 * @resume_noirq: Prepare for the execution of @resume() by carrying out any 186 167 * operations required for resuming the device that might be racing with ··· 192 171 * additional operations required for freezing the device that might be 193 172 * racing with its driver's interrupt handler, which is guaranteed not to 194 173 * run while @freeze_noirq() is being executed. 195 - * The power state of the device should not be changed by either @freeze() 196 - * or @freeze_noirq() and it should not be configured to signal system 197 - * wakeup by any of these callbacks. 174 + * The power state of the device should not be changed by either @freeze(), 175 + * or @freeze_late(), or @freeze_noirq() and it should not be configured to 176 + * signal system wakeup by any of these callbacks. 198 177 * 199 178 * @thaw_noirq: Prepare for the execution of @thaw() by carrying out any 200 179 * operations required for thawing the device that might be racing with its ··· 270 249 int (*thaw)(struct device *dev); 271 250 int (*poweroff)(struct device *dev); 272 251 int (*restore)(struct device *dev); 252 + int (*suspend_late)(struct device *dev); 253 + int (*resume_early)(struct device *dev); 254 + int (*freeze_late)(struct device *dev); 255 + int (*thaw_early)(struct device *dev); 256 + int (*poweroff_late)(struct device *dev); 257 + int (*restore_early)(struct device *dev); 273 258 int (*suspend_noirq)(struct device *dev); 274 259 int (*resume_noirq)(struct device *dev); 275 260 int (*freeze_noirq)(struct device *dev); ··· 611 584 612 585 #ifdef CONFIG_PM_SLEEP 613 586 extern void device_pm_lock(void); 614 - extern void dpm_resume_noirq(pm_message_t state); 587 + extern void dpm_resume_start(pm_message_t state); 615 588 extern void dpm_resume_end(pm_message_t state); 616 589 extern void dpm_resume(pm_message_t state); 617 590 extern void dpm_complete(pm_message_t state); 618 591 619 592 extern void device_pm_unlock(void); 620 - extern int 
dpm_suspend_noirq(pm_message_t state); 593 + extern int dpm_suspend_end(pm_message_t state); 621 594 extern int dpm_suspend_start(pm_message_t state); 622 595 extern int dpm_suspend(pm_message_t state); 623 596 extern int dpm_prepare(pm_message_t state);
+4
include/linux/suspend.h
··· 42 42 SUSPEND_FREEZE = 1, 43 43 SUSPEND_PREPARE, 44 44 SUSPEND_SUSPEND, 45 + SUSPEND_SUSPEND_LATE, 45 46 SUSPEND_SUSPEND_NOIRQ, 46 47 SUSPEND_RESUME_NOIRQ, 48 + SUSPEND_RESUME_EARLY, 47 49 SUSPEND_RESUME 48 50 }; 49 51 ··· 55 53 int failed_freeze; 56 54 int failed_prepare; 57 55 int failed_suspend; 56 + int failed_suspend_late; 58 57 int failed_suspend_noirq; 59 58 int failed_resume; 59 + int failed_resume_early; 60 60 int failed_resume_noirq; 61 61 #define REC_FAILED_NUM 2 62 62 int last_failed_dev;
+4 -4
kernel/kexec.c
··· 1546 1546 if (error) 1547 1547 goto Resume_console; 1548 1548 /* At this point, dpm_suspend_start() has been called, 1549 - * but *not* dpm_suspend_noirq(). We *must* call 1550 - * dpm_suspend_noirq() now. Otherwise, drivers for 1549 + * but *not* dpm_suspend_end(). We *must* call 1550 + * dpm_suspend_end() now. Otherwise, drivers for 1551 1551 * some devices (e.g. interrupt controllers) become 1552 1552 * desynchronized with the actual state of the 1553 1553 * hardware at resume time, and evil weirdness ensues. 1554 1554 */ 1555 - error = dpm_suspend_noirq(PMSG_FREEZE); 1555 + error = dpm_suspend_end(PMSG_FREEZE); 1556 1556 if (error) 1557 1557 goto Resume_devices; 1558 1558 error = disable_nonboot_cpus(); ··· 1579 1579 local_irq_enable(); 1580 1580 Enable_cpus: 1581 1581 enable_nonboot_cpus(); 1582 - dpm_resume_noirq(PMSG_RESTORE); 1582 + dpm_resume_start(PMSG_RESTORE); 1583 1583 Resume_devices: 1584 1584 dpm_resume_end(PMSG_RESTORE); 1585 1585 Resume_console:
+12 -12
kernel/power/hibernate.c
···
245 245  * create_image - Create a hibernation image.
246 246  * @platform_mode: Whether or not to use the platform driver.
247 247  *
248 -    * Execute device drivers' .freeze_noirq() callbacks, create a hibernation image
249 -    * and execute the drivers' .thaw_noirq() callbacks.
248 +    * Execute device drivers' "late" and "noirq" freeze callbacks, create a
249 +    * hibernation image and run the drivers' "noirq" and "early" thaw callbacks.
250 250  *
251 251  * Control reappears in this routine after the subsequent restore.
252 252  */
···
254 254 {
255 255 	int error;
256 256 
257 -    	error = dpm_suspend_noirq(PMSG_FREEZE);
257 +    	error = dpm_suspend_end(PMSG_FREEZE);
258 258 	if (error) {
259 259 		printk(KERN_ERR "PM: Some devices failed to power down, "
260 260 			"aborting hibernation\n");
···
306 306  Platform_finish:
307 307 	platform_finish(platform_mode);
308 308 
309 -    	dpm_resume_noirq(in_suspend ?
309 +    	dpm_resume_start(in_suspend ?
310 310 		(error ? PMSG_RECOVER : PMSG_THAW) : PMSG_RESTORE);
311 311 
312 312 	return error;
···
394 394  * resume_target_kernel - Restore system state from a hibernation image.
395 395  * @platform_mode: Whether or not to use the platform driver.
396 396  *
397 -    * Execute device drivers' .freeze_noirq() callbacks, restore the contents of
398 -    * highmem that have not been restored yet from the image and run the low-level
399 -    * code that will restore the remaining contents of memory and switch to the
400 -    * just restored target kernel.
397 +    * Execute device drivers' "noirq" and "late" freeze callbacks, restore the
398 +    * contents of highmem that have not been restored yet from the image and run
399 +    * the low-level code that will restore the remaining contents of memory and
400 +    * switch to the just restored target kernel.
401 401  */
402 402 static int resume_target_kernel(bool platform_mode)
403 403 {
404 404 	int error;
405 405 
406 -    	error = dpm_suspend_noirq(PMSG_QUIESCE);
406 +    	error = dpm_suspend_end(PMSG_QUIESCE);
407 407 	if (error) {
408 408 		printk(KERN_ERR "PM: Some devices failed to power down, "
409 409 			"aborting resume\n");
···
460 460  Cleanup:
461 461 	platform_restore_cleanup(platform_mode);
462 462 
463 -    	dpm_resume_noirq(PMSG_RECOVER);
463 +    	dpm_resume_start(PMSG_RECOVER);
464 464 
465 465 	return error;
466 466 }
···
518 518 		goto Resume_devices;
519 519 	}
520 520 
521 -    	error = dpm_suspend_noirq(PMSG_HIBERNATE);
521 +    	error = dpm_suspend_end(PMSG_HIBERNATE);
522 522 	if (error)
523 523 		goto Resume_devices;
524 524 
···
549 549  Platform_finish:
550 550 	hibernation_ops->finish();
551 551 
552 -    	dpm_resume_noirq(PMSG_RESTORE);
552 +    	dpm_resume_start(PMSG_RESTORE);
553 553 
554 554  Resume_devices:
555 555 	entering_platform_hibernation = false;
+6 -2
kernel/power/main.c
···
165 165 	last_errno %= REC_FAILED_NUM;
166 166 	last_step = suspend_stats.last_failed_step + REC_FAILED_NUM - 1;
167 167 	last_step %= REC_FAILED_NUM;
168 -    	seq_printf(s, "%s: %d\n%s: %d\n%s: %d\n%s: %d\n"
169 -    			"%s: %d\n%s: %d\n%s: %d\n%s: %d\n",
168 +    	seq_printf(s, "%s: %d\n%s: %d\n%s: %d\n%s: %d\n%s: %d\n"
169 +    			"%s: %d\n%s: %d\n%s: %d\n%s: %d\n%s: %d\n",
170 170 		"success", suspend_stats.success,
171 171 		"fail", suspend_stats.fail,
172 172 		"failed_freeze", suspend_stats.failed_freeze,
173 173 		"failed_prepare", suspend_stats.failed_prepare,
174 174 		"failed_suspend", suspend_stats.failed_suspend,
175 +    		"failed_suspend_late",
176 +    			suspend_stats.failed_suspend_late,
175 177 		"failed_suspend_noirq",
176 178 			suspend_stats.failed_suspend_noirq,
177 179 		"failed_resume", suspend_stats.failed_resume,
180 +    		"failed_resume_early",
181 +    			suspend_stats.failed_resume_early,
178 182 		"failed_resume_noirq",
179 183 			suspend_stats.failed_resume_noirq);
180 184 	seq_printf(s, "failures:\n  last_failed_dev:\t%-s\n",
+2 -2
kernel/power/suspend.c
···
147 147 		goto Platform_finish;
148 148 	}
149 149 
150 -    	error = dpm_suspend_noirq(PMSG_SUSPEND);
150 +    	error = dpm_suspend_end(PMSG_SUSPEND);
151 151 	if (error) {
152 152 		printk(KERN_ERR "PM: Some devices failed to power down\n");
153 153 		goto Platform_finish;
···
189 189 	if (suspend_ops->wake)
190 190 		suspend_ops->wake();
191 191 
192 -    	dpm_resume_noirq(PMSG_RESUME);
192 +    	dpm_resume_start(PMSG_RESUME);
193 193 
194 194  Platform_finish:
195 195 	if (suspend_ops->finish)