Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pm-assorted'

* pm-assorted:
PM / QoS: Add pm_qos and dev_pm_qos to events-power.txt
PM / QoS: Add dev_pm_qos_request tracepoints
PM / QoS: Add pm_qos_request tracepoints
PM / QoS: Add pm_qos_update_target/flags tracepoints
PM / QoS: Update Documentation/power/pm_qos_interface.txt
PM / Sleep: Print last wakeup source on failed wakeup_count write
PM / QoS: correct the valid range of pm_qos_class
PM / wakeup: Adjust messaging for wake events during suspend
PM / Runtime: Update .runtime_idle() callback documentation
PM / Runtime: Rework the "runtime idle" helper routine
PM / Hibernate: print physical addresses consistently with other parts of kernel

+315 -113
+43 -7
Documentation/power/pm_qos_interface.txt
···
 Two different PM QoS frameworks are available:
 1. PM QoS classes for cpu_dma_latency, network_latency, network_throughput.
 2. the per-device PM QoS framework provides the API to manage the per-device latency
-   constraints.
+   constraints and PM QoS flags.
 
 Each parameters have defined units:
 * latency: usec
···
 node.
 
 
-2. PM QoS per-device latency framework
+2. PM QoS per-device latency and flags framework
 
-For each device a list of performance requests is maintained along with
-an aggregated target value. The aggregated target value is updated with
-changes to the request list or elements of the list. Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+For each device, there are two lists of PM QoS requests. One is maintained
+along with the aggregated target of latency value and the other is for PM QoS
+flags. Values are updated in response to changes of the request list.
+
+Target latency value is simply the minimum of the request values held in the
+parameter list elements. The PM QoS flags aggregate value is a gather (bitwise
+OR) of all list elements' values. Two device PM QoS flags are defined currently:
+PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
+
 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.
···
 s32 dev_pm_qos_read_value(device):
 Returns the aggregated value for a given device's constraints list.
 
+enum pm_qos_flags_status dev_pm_qos_flags(device, mask)
+Check PM QoS flags of the given device against the given mask of flags.
+The meaning of the return values is as follows:
+	PM_QOS_FLAGS_ALL: All flags from the mask are set
+	PM_QOS_FLAGS_SOME: Some flags from the mask are set
+	PM_QOS_FLAGS_NONE: No flags from the mask are set
+	PM_QOS_FLAGS_UNDEFINED: The device's PM QoS structure has not been
+	initialized or the list of requests is empty.
+
+int dev_pm_qos_add_ancestor_request(dev, handle, value)
+Add a PM QoS request for the first direct ancestor of the given device whose
+power.ignore_children flag is unset.
+
+int dev_pm_qos_expose_latency_limit(device, value)
+Add a request to the device's PM QoS list of latency constraints and create
+a sysfs attribute pm_qos_resume_latency_us under the device's power directory
+allowing user space to manipulate that request.
+
+void dev_pm_qos_hide_latency_limit(device)
+Drop the request added by dev_pm_qos_expose_latency_limit() from the device's
+PM QoS list of latency constraints and remove sysfs attribute
+pm_qos_resume_latency_us from the device's power directory.
+
+int dev_pm_qos_expose_flags(device, value)
+Add a request to the device's PM QoS list of flags and create sysfs attributes
+pm_qos_no_power_off and pm_qos_remote_wakeup under the device's power directory
+allowing user space to change these flags' value.
+
+void dev_pm_qos_hide_flags(device)
+Drop the request added by dev_pm_qos_expose_flags() from the device's PM QoS
+list of flags and remove sysfs attributes pm_qos_no_power_off and
+pm_qos_remote_wakeup under the device's power directory.
 
 Notification mechanisms:
 The per-device PM QoS framework has 2 different and distinct notification trees:
+10 -10
Documentation/power/runtime_pm.txt
···
 (or driver) in question, but the expected and recommended action is to check
 if the device can be suspended (i.e. if all of the conditions necessary for
 suspending the device are satisfied) and to queue up a suspend request for the
-device in that case. The value returned by this callback is ignored by the PM
-core.
+device in that case. If there is no idle callback, or if the callback returns
+0, then the PM core will attempt to carry out a runtime suspend of the device;
+in essence, it will call pm_runtime_suspend() directly. To prevent this (for
+example, if the callback routine has started a delayed suspend), the routine
+should return a non-zero value. Negative error return codes are ignored by the
+PM core.
 
 The helper functions provided by the PM core, described in Section 4, guarantee
 that the following constraints are met with respect to runtime PM callbacks for
···
 removing the device from device hierarchy
 
 int pm_runtime_idle(struct device *dev);
-  - execute the subsystem-level idle callback for the device; returns 0 on
-    success or error code on failure, where -EINPROGRESS means that
-    ->runtime_idle() is already being executed
+  - execute the subsystem-level idle callback for the device; returns an
+    error code on failure, where -EINPROGRESS means that ->runtime_idle() is
+    already being executed; if there is no callback or the callback returns 0
+    then run pm_runtime_suspend(dev) and return its result
 
 int pm_runtime_suspend(struct device *dev);
   - execute the subsystem-level suspend callback for the device; returns 0 on
···
 Subsystems may wish to conserve code space by using the set of generic power
 management callbacks provided by the PM core, defined in
 driver/base/power/generic_ops.c:
-
-  int pm_generic_runtime_idle(struct device *dev);
-    - invoke the ->runtime_idle() callback provided by the driver of this
-      device, if defined, and call pm_runtime_suspend() for this device if the
-      return value is 0 or the callback is not defined
 
   int pm_generic_runtime_suspend(struct device *dev);
     - invoke the ->runtime_suspend() callback provided by the driver of this
+31
Documentation/trace/events-power.txt
···
 The first parameter gives the power domain name (e.g. "mpu_pwrdm").
 The second parameter is the power domain target state.
 
+4. PM QoS events
+================
+The PM QoS events are used for QoS add/update/remove request and for
+target/flags update.
+
+pm_qos_add_request                 "pm_qos_class=%s value=%d"
+pm_qos_update_request              "pm_qos_class=%s value=%d"
+pm_qos_remove_request              "pm_qos_class=%s value=%d"
+pm_qos_update_request_timeout      "pm_qos_class=%s value=%d, timeout_us=%ld"
+
+The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
+The second parameter is value to be added/updated/removed.
+The third parameter is timeout value in usec.
+
+pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
+pm_qos_update_flags                "action=%s prev_value=0x%x curr_value=0x%x"
+
+The first parameter gives the QoS action name (e.g. "ADD_REQ").
+The second parameter is the previous QoS value.
+The third parameter is the current QoS value to update.
+
+And, there are also events used for device PM QoS add/update/remove request.
+
+dev_pm_qos_add_request             "device=%s type=%s new_value=%d"
+dev_pm_qos_update_request          "device=%s type=%s new_value=%d"
+dev_pm_qos_remove_request          "device=%s type=%s new_value=%d"
+
+The first parameter gives the device name which tries to add/update/remove
+QoS requests.
+The second parameter gives the request type (e.g. "DEV_PM_QOS_LATENCY").
+The third parameter is value to be added/updated/removed.
+1 -6
arch/arm/mach-omap2/omap_device.c
···
 	return ret;
 }
 
-static int _od_runtime_idle(struct device *dev)
-{
-	return pm_generic_runtime_idle(dev);
-}
-
 static int _od_runtime_resume(struct device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
···
 struct dev_pm_domain omap_device_pm_domain = {
 	.ops = {
 		SET_RUNTIME_PM_OPS(_od_runtime_suspend, _od_runtime_resume,
-				   _od_runtime_idle)
+				   NULL)
 		USE_PLATFORM_PM_SLEEP_OPS
 		.suspend_noirq = _od_suspend_noirq,
 		.resume_noirq = _od_resume_noirq,
-1
drivers/acpi/device_pm.c
···
 #ifdef CONFIG_PM_RUNTIME
 	.runtime_suspend = acpi_subsys_runtime_suspend,
 	.runtime_resume = acpi_subsys_runtime_resume,
-	.runtime_idle = pm_generic_runtime_idle,
 #endif
 #ifdef CONFIG_PM_SLEEP
 	.prepare = acpi_subsys_prepare,
+1 -1
drivers/amba/bus.c
···
 	SET_RUNTIME_PM_OPS(
 		amba_pm_runtime_suspend,
 		amba_pm_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
+1 -1
drivers/ata/libata-core.c
···
 		return -EBUSY;
 	}
 
-	return pm_runtime_suspend(dev);
+	return 0;
 }
 
 static int ata_port_runtime_suspend(struct device *dev)
-1
drivers/base/platform.c
···
 static const struct dev_pm_ops platform_dev_pm_ops = {
 	.runtime_suspend = pm_generic_runtime_suspend,
 	.runtime_resume = pm_generic_runtime_resume,
-	.runtime_idle = pm_generic_runtime_idle,
 	USE_PLATFORM_PM_SLEEP_OPS
 };
 
-1
drivers/base/power/domain.c
···
 	genpd->max_off_time_changed = true;
 	genpd->domain.ops.runtime_suspend = pm_genpd_runtime_suspend;
 	genpd->domain.ops.runtime_resume = pm_genpd_runtime_resume;
-	genpd->domain.ops.runtime_idle = pm_generic_runtime_idle;
 	genpd->domain.ops.prepare = pm_genpd_prepare;
 	genpd->domain.ops.suspend = pm_genpd_suspend;
 	genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
-23
drivers/base/power/generic_ops.c
···
 
 #ifdef CONFIG_PM_RUNTIME
 /**
- * pm_generic_runtime_idle - Generic runtime idle callback for subsystems.
- * @dev: Device to handle.
- *
- * If PM operations are defined for the @dev's driver and they include
- * ->runtime_idle(), execute it and return its error code, if nonzero.
- * Otherwise, execute pm_runtime_suspend() for the device and return 0.
- */
-int pm_generic_runtime_idle(struct device *dev)
-{
-	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
-
-	if (pm && pm->runtime_idle) {
-		int ret = pm->runtime_idle(dev);
-		if (ret)
-			return ret;
-	}
-
-	pm_runtime_suspend(dev);
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pm_generic_runtime_idle);
-
-/**
  * pm_generic_runtime_suspend - Generic runtime suspend callback for subsystems.
  * @dev: Device to suspend.
  *
+6
drivers/base/power/qos.c
···
 #include <linux/export.h>
 #include <linux/pm_runtime.h>
 #include <linux/err.h>
+#include <trace/events/power.h>
 
 #include "power.h"
 
···
 	else if (!dev->power.qos)
 		ret = dev_pm_qos_constraints_allocate(dev);
 
+	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
 	if (!ret) {
 		req->dev = dev;
 		req->type = type;
···
 		return -EINVAL;
 	}
 
+	trace_dev_pm_qos_update_request(dev_name(req->dev), req->type,
+					new_value);
 	if (curr_value != new_value)
 		ret = apply_constraint(req, PM_QOS_UPDATE_REQ, new_value);
 
···
 	if (IS_ERR_OR_NULL(req->dev->power.qos))
 		return -ENODEV;
 
+	trace_dev_pm_qos_remove_request(dev_name(req->dev), req->type,
+					PM_QOS_DEFAULT_VALUE);
 	ret = apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 	return ret;
+5 -7
drivers/base/power/runtime.c
···
 	/* Pending requests need to be canceled. */
 	dev->power.request = RPM_REQ_NONE;
 
-	if (dev->power.no_callbacks) {
-		/* Assume ->runtime_idle() callback would have suspended. */
-		retval = rpm_suspend(dev, rpmflags);
+	if (dev->power.no_callbacks)
 		goto out;
-	}
 
 	/* Carry out an asynchronous or a synchronous idle notification. */
 	if (rpmflags & RPM_ASYNC) {
···
 			dev->power.request_pending = true;
 			queue_work(pm_wq, &dev->power.work);
 		}
-		goto out;
+		trace_rpm_return_int(dev, _THIS_IP_, 0);
+		return 0;
 	}
 
 	dev->power.idle_notification = true;
···
 		callback = dev->driver->pm->runtime_idle;
 
 	if (callback)
-		__rpm_callback(callback, dev);
+		retval = __rpm_callback(callback, dev);
 
 	dev->power.idle_notification = false;
 	wake_up_all(&dev->power.wait_queue);
 
  out:
 	trace_rpm_return_int(dev, _THIS_IP_, retval);
-	return retval;
+	return retval ? retval : rpm_suspend(dev, rpmflags);
 }
 
 /**
+6 -3
drivers/base/power/wakeup.c
···
 }
 EXPORT_SYMBOL_GPL(pm_wakeup_event);
 
-static void print_active_wakeup_sources(void)
+void pm_print_active_wakeup_sources(void)
 {
 	struct wakeup_source *ws;
 	int active = 0;
···
 			last_activity_ws->name);
 	rcu_read_unlock();
 }
+EXPORT_SYMBOL_GPL(pm_print_active_wakeup_sources);
 
 /**
  * pm_wakeup_pending - Check if power transition in progress should be aborted.
···
 	}
 	spin_unlock_irqrestore(&events_lock, flags);
 
-	if (ret)
-		print_active_wakeup_sources();
+	if (ret) {
+		pr_info("PM: Wakeup pending, aborting suspend\n");
+		pm_print_active_wakeup_sources();
+	}
 
 	return ret;
 }
+1 -1
drivers/dma/intel_mid_dma.c
···
 		return -EAGAIN;
 	}
 
-	return pm_schedule_suspend(dev, 0);
+	return 0;
 }
 
 /******************************************************************************
+1 -5
drivers/gpio/gpio-langwell.c
···
 
 static int lnw_gpio_runtime_idle(struct device *dev)
 {
-	int err = pm_schedule_suspend(dev, 500);
-
-	if (!err)
-		return 0;
-
+	pm_schedule_suspend(dev, 500);
 	return -EBUSY;
 }
 
+1 -1
drivers/i2c/i2c-core.c
···
 	SET_RUNTIME_PM_OPS(
 		pm_generic_runtime_suspend,
 		pm_generic_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
+1 -7
drivers/mfd/ab8500-gpadc.c
···
 	return ret;
 }
 
-static int ab8500_gpadc_runtime_idle(struct device *dev)
-{
-	pm_runtime_suspend(dev);
-	return 0;
-}
-
 static int ab8500_gpadc_suspend(struct device *dev)
 {
 	struct ab8500_gpadc *gpadc = dev_get_drvdata(dev);
···
 static const struct dev_pm_ops ab8500_gpadc_pm_ops = {
 	SET_RUNTIME_PM_OPS(ab8500_gpadc_runtime_suspend,
 			   ab8500_gpadc_runtime_resume,
-			   ab8500_gpadc_runtime_idle)
+			   NULL)
 	SET_SYSTEM_SLEEP_PM_OPS(ab8500_gpadc_suspend,
 				ab8500_gpadc_resume)
 
+1 -1
drivers/mmc/core/bus.c
···
 
 static int mmc_runtime_idle(struct device *dev)
 {
-	return pm_runtime_suspend(dev);
+	return 0;
 }
 
 #endif /* !CONFIG_PM_RUNTIME */
+1 -1
drivers/mmc/core/sdio_bus.c
···
 	SET_RUNTIME_PM_OPS(
 		pm_generic_runtime_suspend,
 		pm_generic_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
+5 -9
drivers/pci/pci-driver.c
···
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+	int ret = 0;
 
 	/*
 	 * If pci_dev->driver is not set (unbound), the device should
 	 * always remain in D0 regardless of the runtime PM status
 	 */
 	if (!pci_dev->driver)
-		goto out;
+		return 0;
 
 	if (!pm)
 		return -ENOSYS;
 
-	if (pm->runtime_idle) {
-		int ret = pm->runtime_idle(dev);
-		if (ret)
-			return ret;
-	}
+	if (pm->runtime_idle)
+		ret = pm->runtime_idle(dev);
 
- out:
-	pm_runtime_suspend(dev);
-	return 0;
+	return ret;
 }
 
 #else /* !CONFIG_PM_RUNTIME */
+3 -8
drivers/scsi/scsi_pm.c
···
 
 static int scsi_runtime_idle(struct device *dev)
 {
-	int err;
-
 	dev_dbg(dev, "scsi_runtime_idle\n");
 
 	/* Insert hooks here for targets, hosts, and transport classes */
···
 
 		if (sdev->request_queue->dev) {
 			pm_runtime_mark_last_busy(dev);
-			err = pm_runtime_autosuspend(dev);
-		} else {
-			err = pm_runtime_suspend(dev);
+			pm_runtime_autosuspend(dev);
+			return -EBUSY;
 		}
-	} else {
-		err = pm_runtime_suspend(dev);
 	}
-	return err;
+	return 0;
 }
 
 int scsi_autopm_get_device(struct scsi_device *sdev)
+1 -1
drivers/sh/pm_runtime.c
···
 static int default_platform_runtime_idle(struct device *dev)
 {
 	/* suspend synchronously to disable clocks immediately */
-	return pm_runtime_suspend(dev);
+	return 0;
 }
 
 static struct dev_pm_domain default_pm_domain = {
+1 -1
drivers/spi/spi.c
···
 	SET_RUNTIME_PM_OPS(
 		pm_generic_runtime_suspend,
 		pm_generic_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
+2 -7
drivers/tty/serial/mfd.c
···
 #ifdef CONFIG_PM_RUNTIME
 static int serial_hsu_runtime_idle(struct device *dev)
 {
-	int err;
-
-	err = pm_schedule_suspend(dev, 500);
-	if (err)
-		return -EBUSY;
-
-	return 0;
+	pm_schedule_suspend(dev, 500);
+	return -EBUSY;
 }
 
 static int serial_hsu_runtime_suspend(struct device *dev)
+2 -1
drivers/usb/core/driver.c
···
 	 */
 	if (autosuspend_check(udev) == 0)
 		pm_runtime_autosuspend(dev);
-	return 0;
+	/* Tell the core not to suspend it, though. */
+	return -EBUSY;
 }
 
 int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable)
-1
drivers/usb/core/port.c
···
 #ifdef CONFIG_PM_RUNTIME
 	.runtime_suspend = usb_port_runtime_suspend,
 	.runtime_resume = usb_port_runtime_resume,
-	.runtime_idle = pm_generic_runtime_idle,
 #endif
 };
 
-2
include/linux/pm_runtime.h
···
 extern void __pm_runtime_disable(struct device *dev, bool check_resume);
 extern void pm_runtime_allow(struct device *dev);
 extern void pm_runtime_forbid(struct device *dev);
-extern int pm_generic_runtime_idle(struct device *dev);
 extern int pm_generic_runtime_suspend(struct device *dev);
 extern int pm_generic_runtime_resume(struct device *dev);
 extern void pm_runtime_no_callbacks(struct device *dev);
···
 static inline bool pm_runtime_status_suspended(struct device *dev) { return false; }
 static inline bool pm_runtime_enabled(struct device *dev) { return false; }
 
-static inline int pm_generic_runtime_idle(struct device *dev) { return 0; }
 static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
 static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
 static inline void pm_runtime_no_callbacks(struct device *dev) {}
+1
include/linux/suspend.h
···
 extern bool pm_get_wakeup_count(unsigned int *count, bool block);
 extern bool pm_save_wakeup_count(unsigned int count);
 extern void pm_wakep_autosleep_enabled(bool set);
+extern void pm_print_active_wakeup_sources(void);
 
 static inline void lock_system_sleep(void)
 {
+173
include/trace/events/power.h
···
 #define _TRACE_POWER_H
 
 #include <linux/ktime.h>
+#include <linux/pm_qos.h>
 #include <linux/tracepoint.h>
 
 DECLARE_EVENT_CLASS(cpu,
···
 	TP_PROTO(const char *name, unsigned int state, unsigned int cpu_id),
 
 	TP_ARGS(name, state, cpu_id)
+);
+
+/*
+ * The pm qos events are used for pm qos update
+ */
+DECLARE_EVENT_CLASS(pm_qos_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value),
+
+	TP_STRUCT__entry(
+		__field( int,	pm_qos_class	)
+		__field( s32,	value		)
+	),
+
+	TP_fast_assign(
+		__entry->pm_qos_class = pm_qos_class;
+		__entry->value = value;
+	),
+
+	TP_printk("pm_qos_class=%s value=%d",
+		  __print_symbolic(__entry->pm_qos_class,
+			{ PM_QOS_CPU_DMA_LATENCY,	"CPU_DMA_LATENCY" },
+			{ PM_QOS_NETWORK_LATENCY,	"NETWORK_LATENCY" },
+			{ PM_QOS_NETWORK_THROUGHPUT,	"NETWORK_THROUGHPUT" }),
+		  __entry->value)
+);
+
+DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value)
+);
+
+DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value)
+);
+
+DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value)
+);
+
+TRACE_EVENT(pm_qos_update_request_timeout,
+
+	TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
+
+	TP_ARGS(pm_qos_class, value, timeout_us),
+
+	TP_STRUCT__entry(
+		__field( int,		pm_qos_class	)
+		__field( s32,		value		)
+		__field( unsigned long,	timeout_us	)
+	),
+
+	TP_fast_assign(
+		__entry->pm_qos_class = pm_qos_class;
+		__entry->value = value;
+		__entry->timeout_us = timeout_us;
+	),
+
+	TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
+		  __print_symbolic(__entry->pm_qos_class,
+			{ PM_QOS_CPU_DMA_LATENCY,	"CPU_DMA_LATENCY" },
+			{ PM_QOS_NETWORK_LATENCY,	"NETWORK_LATENCY" },
+			{ PM_QOS_NETWORK_THROUGHPUT,	"NETWORK_THROUGHPUT" }),
+		  __entry->value, __entry->timeout_us)
+);
+
+DECLARE_EVENT_CLASS(pm_qos_update,
+
+	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+
+	TP_ARGS(action, prev_value, curr_value),
+
+	TP_STRUCT__entry(
+		__field( enum pm_qos_req_action,	action		)
+		__field( int,				prev_value	)
+		__field( int,				curr_value	)
+	),
+
+	TP_fast_assign(
+		__entry->action = action;
+		__entry->prev_value = prev_value;
+		__entry->curr_value = curr_value;
+	),
+
+	TP_printk("action=%s prev_value=%d curr_value=%d",
+		  __print_symbolic(__entry->action,
+			{ PM_QOS_ADD_REQ,	"ADD_REQ" },
+			{ PM_QOS_UPDATE_REQ,	"UPDATE_REQ" },
+			{ PM_QOS_REMOVE_REQ,	"REMOVE_REQ" }),
+		  __entry->prev_value, __entry->curr_value)
+);
+
+DEFINE_EVENT(pm_qos_update, pm_qos_update_target,
+
+	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+
+	TP_ARGS(action, prev_value, curr_value)
+);
+
+DEFINE_EVENT_PRINT(pm_qos_update, pm_qos_update_flags,
+
+	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+
+	TP_ARGS(action, prev_value, curr_value),
+
+	TP_printk("action=%s prev_value=0x%x curr_value=0x%x",
+		  __print_symbolic(__entry->action,
+			{ PM_QOS_ADD_REQ,	"ADD_REQ" },
+			{ PM_QOS_UPDATE_REQ,	"UPDATE_REQ" },
+			{ PM_QOS_REMOVE_REQ,	"REMOVE_REQ" }),
+		  __entry->prev_value, __entry->curr_value)
+);
+
+DECLARE_EVENT_CLASS(dev_pm_qos_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value),
+
+	TP_STRUCT__entry(
+		__string( name,				name		)
+		__field( enum dev_pm_qos_req_type,	type		)
+		__field( s32,				new_value	)
+	),
+
+	TP_fast_assign(
+		__assign_str(name, name);
+		__entry->type = type;
+		__entry->new_value = new_value;
+	),
+
+	TP_printk("device=%s type=%s new_value=%d",
+		  __get_str(name),
+		  __print_symbolic(__entry->type,
+			{ DEV_PM_QOS_LATENCY,	"DEV_PM_QOS_LATENCY" },
+			{ DEV_PM_QOS_FLAGS,	"DEV_PM_QOS_FLAGS" }),
+		  __entry->new_value)
+);
+
+DEFINE_EVENT(dev_pm_qos_request, dev_pm_qos_add_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value)
+);
+
+DEFINE_EVENT(dev_pm_qos_request, dev_pm_qos_update_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value)
+);
+
+DEFINE_EVENT(dev_pm_qos_request, dev_pm_qos_remove_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value)
 );
 #endif /* _TRACE_POWER_H */
 
+2
kernel/power/main.c
···
 	if (sscanf(buf, "%u", &val) == 1) {
 		if (pm_save_wakeup_count(val))
 			error = n;
+		else
+			pm_print_active_wakeup_sources();
 	}
 
  out:
+11 -3
kernel/power/qos.c
···
 
 #include <linux/uaccess.h>
 #include <linux/export.h>
+#include <trace/events/power.h>
 
 /*
  * locking rule: all changes to constraints or notifiers lists
···
 
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
+	trace_pm_qos_update_target(action, prev_value, curr_value);
 	if (prev_value != curr_value) {
 		blocking_notifier_call_chain(c->notifiers,
 					     (unsigned long)curr_value,
···
 
 	spin_unlock_irqrestore(&pm_qos_lock, irqflags);
 
+	trace_pm_qos_update_flags(action, prev_value, curr_value);
 	return prev_value != curr_value;
 }
 
···
 	}
 	req->pm_qos_class = pm_qos_class;
 	INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
+	trace_pm_qos_add_request(pm_qos_class, value);
 	pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
 			     &req->node, PM_QOS_ADD_REQ, value);
 }
···
 
 	cancel_delayed_work_sync(&req->work);
 
+	trace_pm_qos_update_request(req->pm_qos_class, new_value);
 	if (new_value != req->node.prio)
 		pm_qos_update_target(
 			pm_qos_array[req->pm_qos_class]->constraints,
···
 
 	cancel_delayed_work_sync(&req->work);
 
+	trace_pm_qos_update_request_timeout(req->pm_qos_class,
+					    new_value, timeout_us);
 	if (new_value != req->node.prio)
 		pm_qos_update_target(
 			pm_qos_array[req->pm_qos_class]->constraints,
···
 
 	cancel_delayed_work_sync(&req->work);
 
+	trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
 	pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
 			     &req->node, PM_QOS_REMOVE_REQ,
 			     PM_QOS_DEFAULT_VALUE);
···
 {
 	int pm_qos_class;
 
-	for (pm_qos_class = 0;
+	for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
 	     pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
 		if (minor ==
 		    pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
···
 	long pm_qos_class;
 
 	pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
-	if (pm_qos_class >= 0) {
+	if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
 		struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
 		if (!req)
 			return -ENOMEM;
···
 
 	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
 
-	for (i = 1; i < PM_QOS_NUM_CLASSES; i++) {
+	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
 		ret = register_pm_qos_misc(pm_qos_array[i]);
 		if (ret < 0) {
 			printk(KERN_ERR "pm_qos_param: %s setup failed\n",
+3 -2
kernel/power/snapshot.c
···
 	region->end_pfn = end_pfn;
 	list_add_tail(&region->list, &nosave_regions);
  Report:
-	printk(KERN_INFO "PM: Registered nosave memory: %016lx - %016lx\n",
-		start_pfn << PAGE_SHIFT, end_pfn << PAGE_SHIFT);
+	printk(KERN_INFO "PM: Registered nosave memory: [mem %#010llx-%#010llx]\n",
+		(unsigned long long) start_pfn << PAGE_SHIFT,
+		((unsigned long long) end_pfn << PAGE_SHIFT) - 1);
 }
 
 /*
+1 -1
kernel/power/suspend.c
···
 	suspend_test_start();
 	error = dpm_suspend_start(PMSG_SUSPEND);
 	if (error) {
-		printk(KERN_ERR "PM: Some devices failed to suspend\n");
+		pr_err("PM: Some devices failed to suspend, or early wake event detected\n");
 		goto Recover_platform;
 	}
 	suspend_test_finish("suspend devices");