Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

PM / QoS: Rename device resume latency QoS items

Rename symbols, variables, functions and structure fields related to
the resume latency device PM QoS type so that it is clear where they
belong (in particular, to avoid confusion with the latency tolerance
device PM QoS type introduced by a subsequent changeset).

Update the PM QoS documentation to better reflect its current state.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

+73 -77
+16 -21
Documentation/power/pm_qos_interface.txt
···
 89  89 2. PM QoS per-device latency and flags framework
 90  90
 91  91 For each device, there are two lists of PM QoS requests. One is maintained
 92   - along with the aggregated target of latency value and the other is for PM QoS
 93   - flags. Values are updated in response to changes of the request list.
 92   + along with the aggregated target of resume latency value and the other is for
 93   + PM QoS flags. Values are updated in response to changes of the request list.
 94  94
 95   - Target latency value is simply the minimum of the request values held in the
 96   - parameter list elements. The PM QoS flags aggregate value is a gather (bitwise
 97   - OR) of all list elements' values. Two device PM QoS flags are defined currently:
 98   - PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
 95   + Target resume latency value is simply the minimum of the request values held in
 96   + the parameter list elements. The PM QoS flags aggregate value is a gather
 97   + (bitwise OR) of all list elements' values. Two device PM QoS flags are defined
 98   + currently: PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
 99  99
100   - Note: the aggregated target value is implemented as an atomic variable so that
101   - reading the aggregated value does not require any locking mechanism.
100   + Note: the aggregated target value is implemented in such a way that reading the
101   + aggregated value does not require any locking mechanism.
102 102
103 103
104 104 From kernel mode the use of this interface is the following:
···
137 137 power.ignore_children flag is unset.
138 138
139 139 int dev_pm_qos_expose_latency_limit(device, value)
140   - Add a request to the device's PM QoS list of latency constraints and create
141   - a sysfs attribute pm_qos_resume_latency_us under the device's power directory
142   - allowing user space to manipulate that request.
140   + Add a request to the device's PM QoS list of resume latency constraints and
141   + create a sysfs attribute pm_qos_resume_latency_us under the device's power
142   + directory allowing user space to manipulate that request.
143 143
144 144 void dev_pm_qos_hide_latency_limit(device)
145 145 Drop the request added by dev_pm_qos_expose_latency_limit() from the device's
146   - PM QoS list of latency constraints and remove sysfs attribute pm_qos_resume_latency_us
147   - from the device's power directory.
146   + PM QoS list of resume latency constraints and remove sysfs attribute
147   + pm_qos_resume_latency_us from the device's power directory.
148 148
149 149 int dev_pm_qos_expose_flags(device, value)
150 150 Add a request to the device's PM QoS list of flags and create sysfs attributes
···
163 163 int dev_pm_qos_add_notifier(device, notifier):
164 164 Adds a notification callback function for the device.
165 165 The callback is called when the aggregated value of the device constraints list
166   - is changed.
166   + is changed (for resume latency device PM QoS only).
167 167
168 168 int dev_pm_qos_remove_notifier(device, notifier):
169 169 Removes the notification callback function for the device.
···
171 171 int dev_pm_qos_add_global_notifier(notifier):
172 172 Adds a notification callback function in the global notification tree of the
173 173 framework.
174   - The callback is called when the aggregated value for any device is changed.
174   + The callback is called when the aggregated value for any device is changed
175   + (for resume latency device PM QoS only).
175 176
176 177 int dev_pm_qos_remove_global_notifier(notifier):
177 178 Removes the notification callback function from the global notification tree
178 179 of the framework.
179   -
180   -
181   - From user mode:
182   - No API for user space access to the per-device latency constraints is provided
183   - yet - still under discussion.
184   -
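The aggregation rules the documentation above describes (target resume latency is the minimum of all request values; the flags aggregate is a bitwise OR of all requested flags) can be sketched as plain userspace C. The helper names below are hypothetical illustrations, not kernel API:

```c
#include <limits.h>

/* Illustrative model of the per-device PM QoS aggregation rules.
 * The two flag values mirror the ones named in the documentation. */
#define PM_QOS_FLAG_NO_POWER_OFF  (1 << 0)
#define PM_QOS_FLAG_REMOTE_WAKEUP (1 << 1)

/* Target resume latency: the minimum of the request values (PM_QOS_MIN). */
int aggregate_resume_latency(const int *requests, int n)
{
	int target = INT_MAX;	/* no constraint registered yet */

	for (int i = 0; i < n; i++)
		if (requests[i] < target)
			target = requests[i];
	return target;
}

/* Flags aggregate: a gather (bitwise OR) of all list elements' values. */
unsigned int aggregate_flags(const unsigned int *requests, int n)
{
	unsigned int flags = 0;

	for (int i = 0; i < n; i++)
		flags |= requests[i];
	return flags;
}
```

For example, requests of 500, 100 and 250 microseconds aggregate to a target resume latency of 100, and a list holding PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP aggregates to both bits set.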
+1 -1
Documentation/trace/events-power.txt
···
 92  92
 93  93 The first parameter gives the device name which tries to add/update/remove
 94  94 QoS requests.
 95   - The second parameter gives the request type (e.g. "DEV_PM_QOS_LATENCY").
 95   + The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 96  96 The third parameter is value to be added/updated/removed.
+2 -2
drivers/base/power/power.h
···
 89  89 extern void rpm_sysfs_remove(struct device *dev);
 90  90 extern int wakeup_sysfs_add(struct device *dev);
 91  91 extern void wakeup_sysfs_remove(struct device *dev);
 92   - extern int pm_qos_sysfs_add_latency(struct device *dev);
 93   - extern void pm_qos_sysfs_remove_latency(struct device *dev);
 92   + extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
 93   + extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
 94  94 extern int pm_qos_sysfs_add_flags(struct device *dev);
 95  95 extern void pm_qos_sysfs_remove_flags(struct device *dev);
 96  96
+27 -28
drivers/base/power/qos.c
···
105 105 s32 __dev_pm_qos_read_value(struct device *dev)
106 106 {
107 107 	return IS_ERR_OR_NULL(dev->power.qos) ?
108   - 		0 : pm_qos_read_value(&dev->power.qos->latency);
108   + 		0 : pm_qos_read_value(&dev->power.qos->resume_latency);
109 109 }
110 110
111 111 /**
···
141 141 	int ret;
142 142
143 143 	switch(req->type) {
144   - 	case DEV_PM_QOS_LATENCY:
145   - 		ret = pm_qos_update_target(&qos->latency, &req->data.pnode,
146   - 					   action, value);
144   + 	case DEV_PM_QOS_RESUME_LATENCY:
145   + 		ret = pm_qos_update_target(&qos->resume_latency,
146   + 					   &req->data.pnode, action, value);
147 147 		if (ret) {
148   - 			value = pm_qos_read_value(&qos->latency);
148   + 			value = pm_qos_read_value(&qos->resume_latency);
149 149 			blocking_notifier_call_chain(&dev_pm_notifiers,
150 150 						     (unsigned long)value,
151 151 						     req);
···
186 186 	}
187 187 	BLOCKING_INIT_NOTIFIER_HEAD(n);
188 188
189   - 	c = &qos->latency;
189   + 	c = &qos->resume_latency;
190 190 	plist_head_init(&c->list);
191   - 	c->target_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
192   - 	c->default_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
191   + 	c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
192   + 	c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
193 193 	c->type = PM_QOS_MIN;
194 194 	c->notifiers = n;
195 195
···
224 224 	 * If the device's PM QoS resume latency limit or PM QoS flags have been
225 225 	 * exposed to user space, they have to be hidden at this point.
226 226 	 */
227   - 	pm_qos_sysfs_remove_latency(dev);
227   + 	pm_qos_sysfs_remove_resume_latency(dev);
228 228 	pm_qos_sysfs_remove_flags(dev);
229 229
230 230 	mutex_lock(&dev_pm_qos_mtx);
···
237 237 		goto out;
238 238
239 239 	/* Flush the constraints lists for the device. */
240   - 	c = &qos->latency;
240   + 	c = &qos->resume_latency;
241 241 	plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
242 242 		/*
243 243 		 * Update constraints list and call the notification
···
341 341 		return -ENODEV;
342 342
343 343 	switch(req->type) {
344   - 	case DEV_PM_QOS_LATENCY:
344   + 	case DEV_PM_QOS_RESUME_LATENCY:
345 345 		curr_value = req->data.pnode.prio;
346 346 		break;
347 347 	case DEV_PM_QOS_FLAGS:
···
460 460 	ret = dev_pm_qos_constraints_allocate(dev);
461 461
462 462 	if (!ret)
463   - 		ret = blocking_notifier_chain_register(
464   - 			dev->power.qos->latency.notifiers, notifier);
463   + 		ret = blocking_notifier_chain_register(dev->power.qos->resume_latency.notifiers,
464   + 						       notifier);
465 465
466 466 	mutex_unlock(&dev_pm_qos_mtx);
467 467 	return ret;
···
487 487
488 488 	/* Silently return if the constraints object is not present. */
489 489 	if (!IS_ERR_OR_NULL(dev->power.qos))
490   - 		retval = blocking_notifier_chain_unregister(
491   - 			dev->power.qos->latency.notifiers,
492   - 			notifier);
490   + 		retval = blocking_notifier_chain_unregister(dev->power.qos->resume_latency.notifiers,
491   + 							    notifier);
493 492
494 493 	mutex_unlock(&dev_pm_qos_mtx);
495 494 	return retval;
···
542 543
543 544 	if (ancestor)
544 545 		ret = dev_pm_qos_add_request(ancestor, req,
545   - 					     DEV_PM_QOS_LATENCY, value);
546   + 					     DEV_PM_QOS_RESUME_LATENCY, value);
546 547
547 548 	if (ret < 0)
548 549 		req->dev = NULL;
···
558 559 	struct dev_pm_qos_request *req = NULL;
559 560
560 561 	switch(type) {
561   - 	case DEV_PM_QOS_LATENCY:
562   - 		req = dev->power.qos->latency_req;
563   - 		dev->power.qos->latency_req = NULL;
562   + 	case DEV_PM_QOS_RESUME_LATENCY:
563   + 		req = dev->power.qos->resume_latency_req;
564   + 		dev->power.qos->resume_latency_req = NULL;
564 565 		break;
565 566 	case DEV_PM_QOS_FLAGS:
566 567 		req = dev->power.qos->flags_req;
···
596 597 	if (!req)
597 598 		return -ENOMEM;
598 599
599   - 	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY, value);
599   + 	ret = dev_pm_qos_add_request(dev, req,
600   + 				     DEV_PM_QOS_RESUME_LATENCY, value);
600 601 	if (ret < 0) {
601 602 		kfree(req);
602 603 		return ret;
···
608 609
609 610 	if (IS_ERR_OR_NULL(dev->power.qos))
610 611 		ret = -ENODEV;
611   - 	else if (dev->power.qos->latency_req)
612   + 	else if (dev->power.qos->resume_latency_req)
612 613 		ret = -EEXIST;
613 614
614 615 	if (ret < 0) {
···
617 618 		mutex_unlock(&dev_pm_qos_mtx);
618 619 		goto out;
619 620 	}
620   - 	dev->power.qos->latency_req = req;
621   + 	dev->power.qos->resume_latency_req = req;
621 622
622 623 	mutex_unlock(&dev_pm_qos_mtx);
623 624
624   - 	ret = pm_qos_sysfs_add_latency(dev);
625   + 	ret = pm_qos_sysfs_add_resume_latency(dev);
625 626 	if (ret)
626   - 		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
627   + 		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
627 628
628 629 out:
629 630 	mutex_unlock(&dev_pm_qos_sysfs_mtx);
···
633 634
634 635 static void __dev_pm_qos_hide_latency_limit(struct device *dev)
635 636 {
636   - 	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req)
637   - 		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
637   + 	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->resume_latency_req)
638   + 		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
638 639 }
639 640
640 641 /**
···
645 646 {
646 647 	mutex_lock(&dev_pm_qos_sysfs_mtx);
647 648
648   - 	pm_qos_sysfs_remove_latency(dev);
649   + 	pm_qos_sysfs_remove_resume_latency(dev);
649 650
650 651 	mutex_lock(&dev_pm_qos_mtx);
651 652 	__dev_pm_qos_hide_latency_limit(dev);
+17 -15
drivers/base/power/sysfs.c
···
218 218 static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
219 219 		   autosuspend_delay_ms_store);
220 220
221   - static ssize_t pm_qos_latency_show(struct device *dev,
222   - 				   struct device_attribute *attr, char *buf)
221   + static ssize_t pm_qos_resume_latency_show(struct device *dev,
222   + 					  struct device_attribute *attr,
223   + 					  char *buf)
223 224 {
224   - 	return sprintf(buf, "%d\n", dev_pm_qos_requested_latency(dev));
225   + 	return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
225 226 }
226 227
227   - static ssize_t pm_qos_latency_store(struct device *dev,
228   - 				    struct device_attribute *attr,
229   - 				    const char *buf, size_t n)
228   + static ssize_t pm_qos_resume_latency_store(struct device *dev,
229   + 					   struct device_attribute *attr,
230   + 					   const char *buf, size_t n)
230 231 {
231 232 	s32 value;
232 233 	int ret;
···
238 237 	if (value < 0)
239 238 		return -EINVAL;
240 239
241   - 	ret = dev_pm_qos_update_request(dev->power.qos->latency_req, value);
240   + 	ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
241   + 					value);
242 242 	return ret < 0 ? ret : n;
243 243 }
244 244
245 245 static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
246   - 		   pm_qos_latency_show, pm_qos_latency_store);
246   + 		   pm_qos_resume_latency_show, pm_qos_resume_latency_store);
247 247
248 248 static ssize_t pm_qos_no_power_off_show(struct device *dev,
249 249 					struct device_attribute *attr,
···
620 618 	.attrs = runtime_attrs,
621 619 };
622 620
623   - static struct attribute *pm_qos_latency_attrs[] = {
621   + static struct attribute *pm_qos_resume_latency_attrs[] = {
624 622 #ifdef CONFIG_PM_RUNTIME
625 623 	&dev_attr_pm_qos_resume_latency_us.attr,
626 624 #endif /* CONFIG_PM_RUNTIME */
627 625 	NULL,
628 626 };
629   - static struct attribute_group pm_qos_latency_attr_group = {
627   + static struct attribute_group pm_qos_resume_latency_attr_group = {
630 628 	.name = power_group_name,
631   - 	.attrs = pm_qos_latency_attrs,
629   + 	.attrs = pm_qos_resume_latency_attrs,
632 630 };
633 631
634 632 static struct attribute *pm_qos_flags_attrs[] = {
···
683 681 	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
684 682 }
685 683
686   - int pm_qos_sysfs_add_latency(struct device *dev)
684   + int pm_qos_sysfs_add_resume_latency(struct device *dev)
687 685 {
688   - 	return sysfs_merge_group(&dev->kobj, &pm_qos_latency_attr_group);
686   + 	return sysfs_merge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
689 687 }
690 688
691   - void pm_qos_sysfs_remove_latency(struct device *dev)
689   + void pm_qos_sysfs_remove_resume_latency(struct device *dev)
692 690 {
693   - 	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_attr_group);
691   + 	sysfs_unmerge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
694 692 }
695 693
696 694 int pm_qos_sysfs_add_flags(struct device *dev)
+1 -1
drivers/mtd/nand/sh_flctl.c
···
897 897 	if (!flctl->qos_request) {
898 898 		ret = dev_pm_qos_add_request(&flctl->pdev->dev,
899 899 					     &flctl->pm_qos,
900   - 					     DEV_PM_QOS_LATENCY,
900   + 					     DEV_PM_QOS_RESUME_LATENCY,
901 901 					     100);
902 902 		if (ret < 0)
903 903 			dev_err(&flctl->pdev->dev,
+7 -7
include/linux/pm_qos.h
···
 32  32 #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 33  33 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 34  34 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE	0
 35   - #define PM_QOS_DEV_LAT_DEFAULT_VALUE	0
 35   + #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE	0
 36  36
 37  37 #define PM_QOS_FLAG_NO_POWER_OFF	(1 << 0)
 38  38 #define PM_QOS_FLAG_REMOTE_WAKEUP	(1 << 1)
···
 49  49 };
 50  50
 51  51 enum dev_pm_qos_req_type {
 52   - 	DEV_PM_QOS_LATENCY = 1,
 52   + 	DEV_PM_QOS_RESUME_LATENCY = 1,
 53  53 	DEV_PM_QOS_FLAGS,
 54  54 };
 55  55
···
 87  87 };
 88  88
 89  89 struct dev_pm_qos {
 90   - 	struct pm_qos_constraints latency;
 90   + 	struct pm_qos_constraints resume_latency;
 91  91 	struct pm_qos_flags flags;
 92   - 	struct dev_pm_qos_request *latency_req;
 92   + 	struct dev_pm_qos_request *resume_latency_req;
 93  93 	struct dev_pm_qos_request *flags_req;
 94  94 };
 95  95
···
196 196 void dev_pm_qos_hide_flags(struct device *dev);
197 197 int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set);
198 198
199   - static inline s32 dev_pm_qos_requested_latency(struct device *dev)
199   + static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev)
200 200 {
201   - 	return dev->power.qos->latency_req->data.pnode.prio;
201   + 	return dev->power.qos->resume_latency_req->data.pnode.prio;
202 202 }
203 203
204 204 static inline s32 dev_pm_qos_requested_flags(struct device *dev)
···
215 215 static inline int dev_pm_qos_update_flags(struct device *dev, s32 m, bool set)
216 216 	{ return 0; }
217 217
218   - static inline s32 dev_pm_qos_requested_latency(struct device *dev) { return 0; }
218   + static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; }
219 219 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; }
220 220 #endif
221 221
+2 -2
include/trace/events/power.h
···
412 412 	TP_printk("device=%s type=%s new_value=%d",
413 413 		  __get_str(name),
414 414 		  __print_symbolic(__entry->type,
415   - 				   { DEV_PM_QOS_LATENCY, "DEV_PM_QOS_LATENCY" },
416   - 				   { DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }),
415   + 				   { DEV_PM_QOS_RESUME_LATENCY, "DEV_PM_QOS_RESUME_LATENCY" },
416   + 				   { DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }),
417 417 		  __entry->new_value)
418 418 );
419 419