Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge back earlier PM core material for v4.16.

+254 -132
+25 -2
Documentation/driver-api/pm/devices.rst
···
 
 During system-wide resume from a sleep state it's easiest to put devices into
 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
-Refer to that document for more information regarding this particular issue as
+[Refer to that document for more information regarding this particular issue as
 well as for information on the device runtime power management framework in
-general.
+general.]
+
+However, it often is desirable to leave devices in suspend after system
+transitions to the working state, especially if those devices had been in
+runtime suspend before the preceding system-wide suspend (or analogous)
+transition.  Device drivers can use the ``DPM_FLAG_LEAVE_SUSPENDED`` flag to
+indicate to the PM core (and middle-layer code) that they prefer the specific
+devices handled by them to be left suspended and they have no problems with
+skipping their system-wide resume callbacks for this reason.  Whether or not the
+devices will actually be left in suspend may depend on their state before the
+given system suspend-resume cycle and on the type of the system transition under
+way.  In particular, devices are not left suspended if that transition is a
+restore from hibernation, as device states are not guaranteed to be reflected
+by the information stored in the hibernation image in that case.
+
+The middle-layer code involved in the handling of the device is expected to
+indicate to the PM core if the device may be left in suspend by setting its
+:c:member:`power.may_skip_resume` status bit which is checked by the PM core
+during the "noirq" phase of the preceding system-wide suspend (or analogous)
+transition.  The middle layer is then responsible for handling the device as
+appropriate in its "noirq" resume callback, which is executed regardless of
+whether or not the device is left suspended, but the other resume callbacks
+(except for ``->complete``) will be skipped automatically by the PM core if the
+device really can be left in suspend.
+11
Documentation/power/pci.txt
···
 the function will set the power.direct_complete flag for it (to make the PM core
 skip the subsequent "thaw" callbacks for it) and return.
 
+Setting the DPM_FLAG_LEAVE_SUSPENDED flag means that the driver prefers the
+device to be left in suspend after system-wide transitions to the working state.
+This flag is checked by the PM core, but the PCI bus type informs the PM core
+which devices may be left in suspend from its perspective (that happens during
+the "noirq" phase of system-wide suspend and analogous transitions) and next it
+uses the dev_pm_may_skip_resume() helper to decide whether or not to return from
+pci_pm_resume_noirq() early, as the PM core will skip the remaining resume
+callbacks for the device during the transition under way and will set its
+runtime PM status to "suspended" if dev_pm_may_skip_resume() returns "true" for
+it.
+
 3.2. Device Runtime Power Management
 ------------------------------------
 In addition to providing device power management callbacks PCI device drivers
+25 -4
drivers/acpi/device_pm.c
···
  * the sleep state it is going out of and it has never been resumed till
  * now, resume it in case the firmware powered it up.
  */
-	if (dev->power.direct_complete && pm_resume_via_firmware())
+	if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
 		pm_request_resume(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_complete);
···
  */
 int acpi_subsys_suspend_noirq(struct device *dev)
 {
-	if (dev_pm_smart_suspend_and_suspended(dev))
-		return 0;
+	int ret;
 
-	return pm_generic_suspend_noirq(dev);
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.may_skip_resume = true;
+		return 0;
+	}
+
+	ret = pm_generic_suspend_noirq(dev);
+	if (ret)
+		return ret;
+
+	/*
+	 * If the target system sleep state is suspend-to-idle, it is sufficient
+	 * to check whether or not the device's wakeup settings are good for
+	 * runtime PM.  Otherwise, the pm_resume_via_firmware() check will cause
+	 * acpi_subsys_complete() to take care of fixing up the device's state
+	 * anyway, if need be.
+	 */
+	dev->power.may_skip_resume = device_may_wakeup(dev) ||
+					!device_can_wakeup(dev);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
···
  */
 int acpi_subsys_resume_noirq(struct device *dev)
 {
+	if (dev_pm_may_skip_resume(dev))
+		return 0;
+
 	/*
 	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
 	 * during system suspend, so update their runtime PM status to "active"
+85 -17
drivers/base/power/main.c
···
 /*------------------------- Resume routines -------------------------*/
 
 /**
+ * dev_pm_may_skip_resume - System-wide device resume optimization check.
+ * @dev: Target device.
+ *
+ * Checks whether or not the device may be left in suspend after a system-wide
+ * transition to the working state.
+ */
+bool dev_pm_may_skip_resume(struct device *dev)
+{
+	return !dev->power.must_resume && pm_transition.event != PM_EVENT_RESTORE;
+}
+
+/**
  * device_resume_noirq - Execute a "noirq resume" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
···
 
 	error = dpm_run_callback(callback, dev, state, info);
 	dev->power.is_noirq_suspended = false;
+
+	if (dev_pm_may_skip_resume(dev)) {
+		/*
+		 * The device is going to be left in suspend, but it might not
+		 * have been in runtime suspend before the system suspended, so
+		 * its runtime PM status needs to be updated to avoid confusing
+		 * the runtime PM framework when runtime PM is enabled for the
+		 * device again.
+		 */
+		pm_runtime_set_suspended(dev);
+		dev->power.is_late_suspended = false;
+		dev->power.is_suspended = false;
+	}
 
 Out:
 	complete_all(&dev->power.completion);
···
 	return PMSG_ON;
 }
 
+static void dpm_superior_set_must_resume(struct device *dev)
+{
+	struct device_link *link;
+	int idx;
+
+	if (dev->parent)
+		dev->parent->power.must_resume = true;
+
+	idx = device_links_read_lock();
+
+	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+		link->supplier->power.must_resume = true;
+
+	device_links_read_unlock(idx);
+}
+
 /**
  * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
  * @dev: Device to handle.
···
 	}
 
 	error = dpm_run_callback(callback, dev, state, info);
-	if (!error)
-		dev->power.is_noirq_suspended = true;
-	else
+	if (error) {
 		async_error = error;
+		goto Complete;
+	}
+
+	dev->power.is_noirq_suspended = true;
+
+	if (dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED)) {
+		/*
+		 * The only safe strategy here is to require that if the device
+		 * may not be left in suspend, resume callbacks must be invoked
+		 * for it.
+		 */
+		dev->power.must_resume = dev->power.must_resume ||
+					!dev->power.may_skip_resume ||
+					atomic_read(&dev->power.usage_count) > 1;
+	} else {
+		dev->power.must_resume = true;
+	}
+
+	if (dev->power.must_resume)
+		dpm_superior_set_must_resume(dev);
 
 Complete:
 	complete_all(&dev->power.completion);
···
 	return error;
 }
 
+static void dpm_propagate_to_parent(struct device *dev)
+{
+	struct device *parent = dev->parent;
+
+	if (!parent)
+		return;
+
+	spin_lock_irq(&parent->power.lock);
+
+	parent->power.direct_complete = false;
+	if (dev->power.wakeup_path && !parent->power.ignore_children)
+		parent->power.wakeup_path = true;
+
+	spin_unlock_irq(&parent->power.lock);
+}
+
 static void dpm_clear_suppliers_direct_complete(struct device *dev)
 {
 	struct device_link *link;
···
 		dev->power.direct_complete = false;
 	}
 
+	dev->power.may_skip_resume = false;
+	dev->power.must_resume = false;
+
 	dpm_watchdog_set(&wd, dev);
 	device_lock(dev);
···
 
 End:
 	if (!error) {
-		struct device *parent = dev->parent;
-
 		dev->power.is_suspended = true;
-		if (parent) {
-			spin_lock_irq(&parent->power.lock);
-
-			dev->parent->power.direct_complete = false;
-			if (dev->power.wakeup_path
-			    && !dev->parent->power.ignore_children)
-				dev->parent->power.wakeup_path = true;
-
-			spin_unlock_irq(&parent->power.lock);
-		}
+		dpm_propagate_to_parent(dev);
 		dpm_clear_suppliers_direct_complete(dev);
 	}
···
 	if (dev->power.syscore)
 		return 0;
 
-	WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
-		!pm_runtime_enabled(dev));
+	WARN_ON(!pm_runtime_enabled(dev) &&
+		dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
+					      DPM_FLAG_LEAVE_SUSPENDED));
 
 	/*
 	 * If a device's parent goes into runtime suspend at the wrong time,
+79 -103
drivers/base/power/sysfs.c
···
 static ssize_t control_store(struct device * dev, struct device_attribute *attr,
 			     const char * buf, size_t n)
 {
-	char *cp;
-	int len = n;
-
-	cp = memchr(buf, '\n', n);
-	if (cp)
-		len = cp - buf;
 	device_lock(dev);
-	if (len == sizeof ctrl_auto - 1 && strncmp(buf, ctrl_auto, len) == 0)
+	if (sysfs_streq(buf, ctrl_auto))
 		pm_runtime_allow(dev);
-	else if (len == sizeof ctrl_on - 1 && strncmp(buf, ctrl_on, len) == 0)
+	else if (sysfs_streq(buf, ctrl_on))
 		pm_runtime_forbid(dev);
 	else
 		n = -EINVAL;
···
 	return n;
 }
 
-static DEVICE_ATTR(control, 0644, control_show, control_store);
+static DEVICE_ATTR_RW(control);
 
-static ssize_t rtpm_active_time_show(struct device *dev,
+static ssize_t runtime_active_time_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
 	int ret;
···
 	return ret;
 }
 
-static DEVICE_ATTR(runtime_active_time, 0444, rtpm_active_time_show, NULL);
+static DEVICE_ATTR_RO(runtime_active_time);
 
-static ssize_t rtpm_suspended_time_show(struct device *dev,
+static ssize_t runtime_suspended_time_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
 	int ret;
···
 	return ret;
 }
 
-static DEVICE_ATTR(runtime_suspended_time, 0444, rtpm_suspended_time_show, NULL);
+static DEVICE_ATTR_RO(runtime_suspended_time);
 
-static ssize_t rtpm_status_show(struct device *dev,
+static ssize_t runtime_status_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
 	const char *p;
···
 	return sprintf(buf, p);
 }
 
-static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);
+static DEVICE_ATTR_RO(runtime_status);
 
 static ssize_t autosuspend_delay_ms_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
···
 	return n;
 }
 
-static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
-		autosuspend_delay_ms_store);
+static DEVICE_ATTR_RW(autosuspend_delay_ms);
 
-static ssize_t pm_qos_resume_latency_show(struct device *dev,
-					  struct device_attribute *attr,
-					  char *buf)
+static ssize_t pm_qos_resume_latency_us_show(struct device *dev,
+					     struct device_attribute *attr,
+					     char *buf)
 {
 	s32 value = dev_pm_qos_requested_resume_latency(dev);
 
 	if (value == 0)
 		return sprintf(buf, "n/a\n");
-	else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+	if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
 		value = 0;
 
 	return sprintf(buf, "%d\n", value);
 }
 
-static ssize_t pm_qos_resume_latency_store(struct device *dev,
-					   struct device_attribute *attr,
-					   const char *buf, size_t n)
+static ssize_t pm_qos_resume_latency_us_store(struct device *dev,
+					      struct device_attribute *attr,
+					      const char *buf, size_t n)
 {
 	s32 value;
 	int ret;
···
 		if (value == 0)
 			value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
-	} else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
+	} else if (sysfs_streq(buf, "n/a")) {
 		value = 0;
 	} else {
 		return -EINVAL;
···
 	return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
-		   pm_qos_resume_latency_show, pm_qos_resume_latency_store);
+static DEVICE_ATTR_RW(pm_qos_resume_latency_us);
 
-static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
-					     struct device_attribute *attr,
-					     char *buf)
+static ssize_t pm_qos_latency_tolerance_us_show(struct device *dev,
+						struct device_attribute *attr,
+						char *buf)
 {
 	s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
 
 	if (value < 0)
 		return sprintf(buf, "auto\n");
-	else if (value == PM_QOS_LATENCY_ANY)
+	if (value == PM_QOS_LATENCY_ANY)
 		return sprintf(buf, "any\n");
 
 	return sprintf(buf, "%d\n", value);
 }
 
-static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
-					      struct device_attribute *attr,
-					      const char *buf, size_t n)
+static ssize_t pm_qos_latency_tolerance_us_store(struct device *dev,
+						 struct device_attribute *attr,
+						 const char *buf, size_t n)
 {
 	s32 value;
 	int ret;
···
 		if (value < 0)
 			return -EINVAL;
 	} else {
-		if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
+		if (sysfs_streq(buf, "auto"))
 			value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
-		else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
+		else if (sysfs_streq(buf, "any"))
 			value = PM_QOS_LATENCY_ANY;
 		else
 			return -EINVAL;
···
 	return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
-		   pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
+static DEVICE_ATTR_RW(pm_qos_latency_tolerance_us);
 
 static ssize_t pm_qos_no_power_off_show(struct device *dev,
 					struct device_attribute *attr,
···
 	return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_no_power_off, 0644,
-		   pm_qos_no_power_off_show, pm_qos_no_power_off_store);
+static DEVICE_ATTR_RW(pm_qos_no_power_off);
 
 #ifdef CONFIG_PM_SLEEP
 static const char _enabled[] = "enabled";
 static const char _disabled[] = "disabled";
 
-static ssize_t
-wake_show(struct device * dev, struct device_attribute *attr, char * buf)
+static ssize_t wakeup_show(struct device *dev, struct device_attribute *attr,
+			   char *buf)
 {
 	return sprintf(buf, "%s\n", device_can_wakeup(dev)
 		? (device_may_wakeup(dev) ? _enabled : _disabled)
 		: "");
 }
 
-static ssize_t
-wake_store(struct device * dev, struct device_attribute *attr,
-	const char * buf, size_t n)
+static ssize_t wakeup_store(struct device *dev, struct device_attribute *attr,
+			    const char *buf, size_t n)
 {
-	char *cp;
-	int len = n;
-
 	if (!device_can_wakeup(dev))
 		return -EINVAL;
 
-	cp = memchr(buf, '\n', n);
-	if (cp)
-		len = cp - buf;
-	if (len == sizeof _enabled - 1
-			&& strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
+	if (sysfs_streq(buf, _enabled))
 		device_set_wakeup_enable(dev, 1);
-	else if (len == sizeof _disabled - 1
-			&& strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
+	else if (sysfs_streq(buf, _disabled))
 		device_set_wakeup_enable(dev, 0);
 	else
 		return -EINVAL;
 	return n;
 }
 
-static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);
+static DEVICE_ATTR_RW(wakeup);
 
 static ssize_t wakeup_count_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+				 struct device_attribute *attr, char *buf)
 {
 	unsigned long count = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_count, 0444, wakeup_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_count);
 
 static ssize_t wakeup_active_count_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+					struct device_attribute *attr,
+					char *buf)
 {
 	unsigned long count = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_active_count, 0444, wakeup_active_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_active_count);
 
 static ssize_t wakeup_abort_count_show(struct device *dev,
-					struct device_attribute *attr,
-					char *buf)
+				       struct device_attribute *attr,
+				       char *buf)
 {
 	unsigned long count = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_abort_count, 0444, wakeup_abort_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_abort_count);
 
 static ssize_t wakeup_expire_count_show(struct device *dev,
 					struct device_attribute *attr,
···
 	return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_expire_count, 0444, wakeup_expire_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_expire_count);
 
 static ssize_t wakeup_active_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+				  struct device_attribute *attr, char *buf)
 {
 	unsigned int active = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_active, 0444, wakeup_active_show, NULL);
+static DEVICE_ATTR_RO(wakeup_active);
 
-static ssize_t wakeup_total_time_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t wakeup_total_time_ms_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
 {
 	s64 msec = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_total_time_ms, 0444, wakeup_total_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_total_time_ms);
 
-static ssize_t wakeup_max_time_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t wakeup_max_time_ms_show(struct device *dev,
+				       struct device_attribute *attr, char *buf)
 {
 	s64 msec = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_max_time_ms, 0444, wakeup_max_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_max_time_ms);
 
-static ssize_t wakeup_last_time_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t wakeup_last_time_ms_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
 {
 	s64 msec = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_last_time_ms, 0444, wakeup_last_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_last_time_ms);
 
 #ifdef CONFIG_PM_AUTOSLEEP
-static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
-					      struct device_attribute *attr,
-					      char *buf)
+static ssize_t wakeup_prevent_sleep_time_ms_show(struct device *dev,
+						 struct device_attribute *attr,
+						 char *buf)
 {
 	s64 msec = 0;
 	bool enabled = false;
···
 	return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444,
-		   wakeup_prevent_sleep_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_prevent_sleep_time_ms);
 #endif /* CONFIG_PM_AUTOSLEEP */
 #endif /* CONFIG_PM_SLEEP */
 
 #ifdef CONFIG_PM_ADVANCED_DEBUG
-static ssize_t rtpm_usagecount_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t runtime_usage_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
 {
 	return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
 }
+static DEVICE_ATTR_RO(runtime_usage);
 
-static ssize_t rtpm_children_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t runtime_active_kids_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
 {
 	return sprintf(buf, "%d\n", dev->power.ignore_children ?
 		0 : atomic_read(&dev->power.child_count));
 }
+static DEVICE_ATTR_RO(runtime_active_kids);
 
-static ssize_t rtpm_enabled_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t runtime_enabled_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
 {
-	if ((dev->power.disable_depth) && (dev->power.runtime_auto == false))
+	if (dev->power.disable_depth && (dev->power.runtime_auto == false))
 		return sprintf(buf, "disabled & forbidden\n");
-	else if (dev->power.disable_depth)
+	if (dev->power.disable_depth)
 		return sprintf(buf, "disabled\n");
-	else if (dev->power.runtime_auto == false)
+	if (dev->power.runtime_auto == false)
 		return sprintf(buf, "forbidden\n");
 	return sprintf(buf, "enabled\n");
 }
-
-static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
-static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
-static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
+static DEVICE_ATTR_RO(runtime_enabled);
 
 #ifdef CONFIG_PM_SLEEP
 static ssize_t async_show(struct device *dev, struct device_attribute *attr,
···
 static ssize_t async_store(struct device *dev, struct device_attribute *attr,
 			   const char *buf, size_t n)
 {
-	char *cp;
-	int len = n;
-
-	cp = memchr(buf, '\n', n);
-	if (cp)
-		len = cp - buf;
-	if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
+	if (sysfs_streq(buf, _enabled))
 		device_enable_async_suspend(dev);
-	else if (len == sizeof _disabled - 1 &&
-		 strncmp(buf, _disabled, len) == 0)
+	else if (sysfs_streq(buf, _disabled))
 		device_disable_async_suspend(dev);
 	else
 		return -EINVAL;
 	return n;
 }
 
-static DEVICE_ATTR(async, 0644, async_show, async_store);
+static DEVICE_ATTR_RW(async);
 
 #endif /* CONFIG_PM_SLEEP */
 #endif /* CONFIG_PM_ADVANCED_DEBUG */
+17 -2
drivers/pci/pci-driver.c
···
 	pm_generic_complete(dev);
 
 	/* Resume device if platform firmware has put it in reset-power-on */
-	if (dev->power.direct_complete && pm_resume_via_firmware()) {
+	if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) {
 		pci_power_t pre_sleep_state = pci_dev->current_state;
 
 		pci_update_current_state(pci_dev, pci_dev->current_state);
···
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 
-	if (dev_pm_smart_suspend_and_suspended(dev))
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.may_skip_resume = true;
 		return 0;
+	}
 
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
···
 Fixup:
 	pci_fixup_device(pci_fixup_suspend_late, pci_dev);
 
+	/*
+	 * If the target system sleep state is suspend-to-idle, it is sufficient
+	 * to check whether or not the device's wakeup settings are good for
+	 * runtime PM.  Otherwise, the pm_resume_via_firmware() check will cause
+	 * pci_pm_complete() to take care of fixing up the device's state
+	 * anyway, if need be.
+	 */
+	dev->power.may_skip_resume = device_may_wakeup(dev) ||
+					!device_can_wakeup(dev);
+
 	return 0;
 }
···
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct device_driver *drv = dev->driver;
 	int error = 0;
+
+	if (dev_pm_may_skip_resume(dev))
+		return 0;
 
 	/*
 	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+12 -4
include/linux/pm.h
···
  * These flags can be set by device drivers at the probe time. They need not be
  * cleared by the drivers as the driver core will take care of that.
  *
- * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device.
+ * NEVER_SKIP: Do not skip all system suspend/resume callbacks for the device.
  * SMART_PREPARE: Check the return value of the driver's ->prepare callback.
  * SMART_SUSPEND: No need to resume the device from runtime suspend.
+ * LEAVE_SUSPENDED: Avoid resuming the device during system resume if possible.
  *
  * Setting SMART_PREPARE instructs bus types and PM domains which may want
  * system suspend/resume callbacks to be skipped for the device to return 0 from
···
  * necessary from the driver's perspective. It also may cause them to skip
  * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by
  * the driver if they decide to leave the device in runtime suspend.
+ *
+ * Setting LEAVE_SUSPENDED informs the PM core and middle-layer code that the
+ * driver prefers the device to be left in suspend after system resume.
  */
-#define DPM_FLAG_NEVER_SKIP	BIT(0)
-#define DPM_FLAG_SMART_PREPARE	BIT(1)
-#define DPM_FLAG_SMART_SUSPEND	BIT(2)
+#define DPM_FLAG_NEVER_SKIP		BIT(0)
+#define DPM_FLAG_SMART_PREPARE		BIT(1)
+#define DPM_FLAG_SMART_SUSPEND		BIT(2)
+#define DPM_FLAG_LEAVE_SUSPENDED	BIT(3)
 
 struct dev_pm_info {
 	pm_message_t power_state;
···
 	bool wakeup_path:1;
 	bool syscore:1;
 	bool no_pm_callbacks:1;	/* Owned by the PM core */
+	unsigned int must_resume:1;	/* Owned by the PM core */
+	unsigned int may_skip_resume:1;	/* Set by subsystems */
 #else
 	unsigned int should_wakeup:1;
 #endif
···
 extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
 
+extern bool dev_pm_may_skip_resume(struct device *dev);
 extern bool dev_pm_smart_suspend_and_suspended(struct device *dev);
 
 #else /* !CONFIG_PM_SLEEP */