Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branches 'pm-qos', 'pm-domains' and 'pm-drivers'

* pm-qos:
PM / QoS: Add type to dev_pm_qos_add_ancestor_request() arguments
ACPI / LPSS: Support for device latency tolerance PM QoS
ACPI / scan: Add bind/unbind callbacks to struct acpi_scan_handler
PM / QoS: Introduce latency tolerance device PM QoS type
PM / QoS: Add no_constraints_value field to struct pm_qos_constraints
PM / QoS: Rename device resume latency QoS items

* pm-domains:
PM / domains: Turn latency warning into debug message

* pm-drivers:
PM: Add pm_runtime_suspend|resume_force functions
PM / runtime: Fetch runtime PM callbacks using a macro

+590 -167
+26 -1
Documentation/ABI/testing/sysfs-devices-power
···
 	Not all drivers support this attribute.  If it isn't supported,
 	attempts to read or write it will yield I/O errors.
 
-What:		/sys/devices/.../power/pm_qos_latency_us
+What:		/sys/devices/.../power/pm_qos_resume_latency_us
 Date:		March 2012
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
···
 
 	This attribute has no effect on system-wide suspend/resume and
 	hibernation.
+
+What:		/sys/devices/.../power/pm_qos_latency_tolerance_us
+Date:		January 2014
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
+Description:
+	The /sys/devices/.../power/pm_qos_latency_tolerance_us attribute
+	contains the PM QoS active state latency tolerance limit for the
+	given device in microseconds.  That is the maximum memory access
+	latency the device can suffer without any visible adverse
+	effects on user space functionality.  If that value is the
+	string "any", the latency does not matter to user space at all,
+	but hardware should not be allowed to set the latency tolerance
+	for the device automatically.
+
+	Reading "auto" from this file means that the maximum memory
+	access latency for the device may be determined automatically
+	by the hardware as needed.  Writing "auto" to it allows the
+	hardware to be switched to this mode if there are no other
+	latency tolerance requirements from the kernel side.
+
+	This attribute is only present if the feature controlled by it
+	is supported by the hardware.
+
+	This attribute has no effect on runtime suspend and resume of
+	devices and on system-wide suspend/resume and hibernation.
 
 What:		/sys/devices/.../power/pm_qos_no_power_off
 Date:		September 2012
+59 -21
Documentation/power/pm_qos_interface.txt
···
 
 2. PM QoS per-device latency and flags framework
 
-For each device, there are two lists of PM QoS requests. One is maintained
-along with the aggregated target of latency value and the other is for PM QoS
-flags. Values are updated in response to changes of the request list.
+For each device, there are three lists of PM QoS requests. Two of them are
+maintained along with the aggregated targets of resume latency and active
+state latency tolerance (in microseconds) and the third one is for PM QoS flags.
+Values are updated in response to changes of the request list.
 
-Target latency value is simply the minimum of the request values held in the
-parameter list elements. The PM QoS flags aggregate value is a gather (bitwise
-OR) of all list elements' values. Two device PM QoS flags are defined currently:
-PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
+The target values of resume latency and active state latency tolerance are
+simply the minimum of the request values held in the parameter list elements.
+The PM QoS flags aggregate value is a gather (bitwise OR) of all list elements'
+values. Two device PM QoS flags are defined currently: PM_QOS_FLAG_NO_POWER_OFF
+and PM_QOS_FLAG_REMOTE_WAKEUP.
 
-Note: the aggregated target value is implemented as an atomic variable so that
-reading the aggregated value does not require any locking mechanism.
+Note: The aggregated target values are implemented in such a way that reading
+the aggregated value does not require any locking mechanism.
 
 
 From kernel mode the use of this interface is the following:
···
 PM_QOS_FLAGS_UNDEFINED: The device's PM QoS structure has not been
 initialized or the list of requests is empty.
 
-int dev_pm_qos_add_ancestor_request(dev, handle, value)
+int dev_pm_qos_add_ancestor_request(dev, handle, type, value)
 Add a PM QoS request for the first direct ancestor of the given device whose
-power.ignore_children flag is unset.
+power.ignore_children flag is unset (for DEV_PM_QOS_RESUME_LATENCY requests)
+or whose power.set_latency_tolerance callback pointer is not NULL (for
+DEV_PM_QOS_LATENCY_TOLERANCE requests).
 
 int dev_pm_qos_expose_latency_limit(device, value)
-Add a request to the device's PM QoS list of latency constraints and create
-a sysfs attribute pm_qos_resume_latency_us under the device's power directory
-allowing user space to manipulate that request.
+Add a request to the device's PM QoS list of resume latency constraints and
+create a sysfs attribute pm_qos_resume_latency_us under the device's power
+directory allowing user space to manipulate that request.
 
 void dev_pm_qos_hide_latency_limit(device)
 Drop the request added by dev_pm_qos_expose_latency_limit() from the device's
-PM QoS list of latency constraints and remove sysfs attribute pm_qos_resume_latency_us
-from the device's power directory.
+PM QoS list of resume latency constraints and remove sysfs attribute
+pm_qos_resume_latency_us from the device's power directory.
 
 int dev_pm_qos_expose_flags(device, value)
 Add a request to the device's PM QoS list of flags and create sysfs attributes
···
 int dev_pm_qos_add_notifier(device, notifier):
 Adds a notification callback function for the device.
 The callback is called when the aggregated value of the device constraints list
-is changed.
+is changed (for resume latency device PM QoS only).
 
 int dev_pm_qos_remove_notifier(device, notifier):
 Removes the notification callback function for the device.
···
 int dev_pm_qos_add_global_notifier(notifier):
 Adds a notification callback function in the global notification tree of the
 framework.
-The callback is called when the aggregated value for any device is changed.
+The callback is called when the aggregated value for any device is changed
+(for resume latency device PM QoS only).
 
 int dev_pm_qos_remove_global_notifier(notifier):
 Removes the notification callback function from the global notification tree
 of the framework.
 
 
-From user mode:
-No API for user space access to the per-device latency constraints is provided
-yet - still under discussion.
+Active state latency tolerance
 
+This device PM QoS type is used to support systems in which hardware may switch
+to energy-saving operation modes on the fly. In those systems, if the operation
+mode chosen by the hardware attempts to save energy in an overly aggressive way,
+it may cause excess latencies to be visible to software, causing it to miss
+certain protocol requirements or target frame or sample rates etc.
+
+If there is a latency tolerance control mechanism for a given device available
+to software, the .set_latency_tolerance callback in that device's dev_pm_info
+structure should be populated. The routine pointed to by it should implement
+whatever is necessary to transfer the effective requirement value to the
+hardware.
+
+Whenever the effective latency tolerance changes for the device, its
+.set_latency_tolerance() callback will be executed and the effective value will
+be passed to it. If that value is negative, which means that the list of
+latency tolerance requirements for the device is empty, the callback is expected
+to switch the underlying hardware latency tolerance control mechanism to an
+autonomous mode if available. If that value is PM_QOS_LATENCY_ANY, in turn, and
+the hardware supports a special "no requirement" setting, the callback is
+expected to use it. That allows software to prevent the hardware from
+automatically updating the device's latency tolerance in response to its power
+state changes (e.g. during transitions from D3cold to D0), which generally may
+be done in the autonomous latency tolerance control mode.
+
+If .set_latency_tolerance() is present for the device, sysfs attribute
+pm_qos_latency_tolerance_us will be present in the device's power directory.
+Then, user space can use that attribute to specify its latency tolerance
+requirement for the device, if any. Writing "any" to it means "no requirement,
+but do not let the hardware control latency tolerance" and writing "auto" to it
+allows the hardware to be switched to the autonomous mode if there are no other
+requirements from the kernel side in the device's list.
+
+Kernel code can use the functions described above along with the
+DEV_PM_QOS_LATENCY_TOLERANCE device PM QoS type to add, remove and update
+latency tolerance requirements for devices.
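The aggregation rule described in this document (effective tolerance is the minimum of all requests, and an empty list yields a negative "no constraint" value that tells the callback to pick the autonomous mode) can be sketched as a small userspace model. This is an illustrative simplification, not kernel code; the constant name mirrors PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT but the value here is an assumption.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT:
 * any negative value means "no requirement, go autonomous". */
#define NO_CONSTRAINT (-1)

/* Effective latency tolerance: the minimum of all request values,
 * or NO_CONSTRAINT when no requests exist. */
static int effective_tolerance(const int *requests, size_t n)
{
	size_t i;
	int min;

	if (n == 0)
		return NO_CONSTRAINT;	/* callback should switch to autonomous mode */

	min = requests[0];
	for (i = 1; i < n; i++)
		if (requests[i] < min)
			min = requests[i];
	return min;
}
```

A .set_latency_tolerance()-style callback would then treat a negative result as "hand control back to the hardware" and any other value as a limit to program.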
+1 -1
Documentation/trace/events-power.txt
···
 
 The first parameter gives the device name which tries to add/update/remove
 QoS requests.
-The second parameter gives the request type (e.g. "DEV_PM_QOS_LATENCY").
+The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
+70 -1
drivers/acpi/acpi_lpss.c
···
 #define LPSS_GENERAL_UART_RTS_OVRD	BIT(3)
 #define LPSS_SW_LTR			0x10
 #define LPSS_AUTO_LTR			0x14
+#define LPSS_LTR_SNOOP_REQ		BIT(15)
+#define LPSS_LTR_SNOOP_MASK		0x0000FFFF
+#define LPSS_LTR_SNOOP_LAT_1US		0x800
+#define LPSS_LTR_SNOOP_LAT_32US		0xC00
+#define LPSS_LTR_SNOOP_LAT_SHIFT	5
+#define LPSS_LTR_SNOOP_LAT_CUTOFF	3000
+#define LPSS_LTR_MAX_VAL		0x3FF
 #define LPSS_TX_INT			0x20
 #define LPSS_TX_INT_MASK		BIT(1)
···
 	return ret;
 }
 
+static u32 __lpss_reg_read(struct lpss_private_data *pdata, unsigned int reg)
+{
+	return readl(pdata->mmio_base + pdata->dev_desc->prv_offset + reg);
+}
+
+static void __lpss_reg_write(u32 val, struct lpss_private_data *pdata,
+			     unsigned int reg)
+{
+	writel(val, pdata->mmio_base + pdata->dev_desc->prv_offset + reg);
+}
+
 static int lpss_reg_read(struct device *dev, unsigned int reg, u32 *val)
 {
 	struct acpi_device *adev;
···
 		ret = -ENODEV;
 		goto out;
 	}
-	*val = readl(pdata->mmio_base + pdata->dev_desc->prv_offset + reg);
+	*val = __lpss_reg_read(pdata, reg);
 
  out:
 	spin_unlock_irqrestore(&dev->power.lock, flags);
···
 	.name = "lpss_ltr",
 };
 
+static void acpi_lpss_set_ltr(struct device *dev, s32 val)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	u32 ltr_mode, ltr_val;
+
+	ltr_mode = __lpss_reg_read(pdata, LPSS_GENERAL);
+	if (val < 0) {
+		if (ltr_mode & LPSS_GENERAL_LTR_MODE_SW) {
+			ltr_mode &= ~LPSS_GENERAL_LTR_MODE_SW;
+			__lpss_reg_write(ltr_mode, pdata, LPSS_GENERAL);
+		}
+		return;
+	}
+	ltr_val = __lpss_reg_read(pdata, LPSS_SW_LTR) & ~LPSS_LTR_SNOOP_MASK;
+	if (val >= LPSS_LTR_SNOOP_LAT_CUTOFF) {
+		ltr_val |= LPSS_LTR_SNOOP_LAT_32US;
+		val = LPSS_LTR_MAX_VAL;
+	} else if (val > LPSS_LTR_MAX_VAL) {
+		ltr_val |= LPSS_LTR_SNOOP_LAT_32US | LPSS_LTR_SNOOP_REQ;
+		val >>= LPSS_LTR_SNOOP_LAT_SHIFT;
+	} else {
+		ltr_val |= LPSS_LTR_SNOOP_LAT_1US | LPSS_LTR_SNOOP_REQ;
+	}
+	ltr_val |= val;
+	__lpss_reg_write(ltr_val, pdata, LPSS_SW_LTR);
+	if (!(ltr_mode & LPSS_GENERAL_LTR_MODE_SW)) {
+		ltr_mode |= LPSS_GENERAL_LTR_MODE_SW;
+		__lpss_reg_write(ltr_mode, pdata, LPSS_GENERAL);
+	}
+}
+
 static int acpi_lpss_platform_notify(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
···
 	.notifier_call = acpi_lpss_platform_notify,
 };
 
+static void acpi_lpss_bind(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+
+	if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+		return;
+
+	if (pdata->mmio_size >= pdata->dev_desc->prv_offset + LPSS_LTR_SIZE)
+		dev->power.set_latency_tolerance = acpi_lpss_set_ltr;
+	else
+		dev_err(dev, "MMIO size insufficient to access LTR\n");
+}
+
+static void acpi_lpss_unbind(struct device *dev)
+{
+	dev->power.set_latency_tolerance = NULL;
+}
+
 static struct acpi_scan_handler lpss_handler = {
 	.ids = acpi_lpss_device_ids,
 	.attach = acpi_lpss_create_device,
+	.bind = acpi_lpss_bind,
+	.unbind = acpi_lpss_unbind,
 };
 
 void __init acpi_lpss_init(void)
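The value encoding in acpi_lpss_set_ltr() above picks a latency scale based on the requested tolerance: up to LPSS_LTR_MAX_VAL microseconds it uses the 1 us scale, above that the 32 us scale (value shifted right by 5), and at or beyond the 3000 us cutoff it clamps to the maximum with the snoop-requirement bit left clear. A standalone sketch of just that selection, using the same constants but outside the kernel (no register I/O), looks like this:

```c
#include <assert.h>
#include <stdint.h>

#define LTR_SNOOP_REQ		(1u << 15)
#define LTR_SNOOP_LAT_1US	0x800u
#define LTR_SNOOP_LAT_32US	0xC00u
#define LTR_SNOOP_LAT_SHIFT	5
#define LTR_SNOOP_LAT_CUTOFF	3000
#define LTR_MAX_VAL		0x3FFu

/* Mirror of the scale-selection logic in acpi_lpss_set_ltr(),
 * returning only the encoded SW_LTR field bits. */
static uint32_t encode_ltr(int32_t val)
{
	uint32_t ltr_val = 0;

	if (val >= LTR_SNOOP_LAT_CUTOFF) {
		/* At/past the cutoff: clamp to the maximum, 32us scale,
		 * snoop-requirement bit left clear. */
		ltr_val |= LTR_SNOOP_LAT_32US;
		val = LTR_MAX_VAL;
	} else if (val > (int32_t)LTR_MAX_VAL) {
		/* Too large for the 1us scale: switch to 32us units. */
		ltr_val |= LTR_SNOOP_LAT_32US | LTR_SNOOP_REQ;
		val >>= LTR_SNOOP_LAT_SHIFT;
	} else {
		/* Fits in the 10-bit field at 1us granularity. */
		ltr_val |= LTR_SNOOP_LAT_1US | LTR_SNOOP_REQ;
	}
	return ltr_val | (uint32_t)val;
}
```

For example, 2048 us does not fit the 10-bit 1 us field, so it is expressed as 64 units of 32 us.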
+12
drivers/acpi/glue.c
···
 static int acpi_platform_notify(struct device *dev)
 {
 	struct acpi_bus_type *type = acpi_get_bus_type(dev);
+	struct acpi_device *adev;
 	int ret;
 
 	ret = acpi_bind_one(dev, NULL);
···
 		if (ret)
 			goto out;
 	}
+	adev = ACPI_COMPANION(dev);
+	if (!adev)
+		goto out;
 
 	if (type && type->setup)
 		type->setup(dev);
+	else if (adev->handler && adev->handler->bind)
+		adev->handler->bind(dev);
 
  out:
 #if ACPI_GLUE_DEBUG
···
 
 static int acpi_platform_notify_remove(struct device *dev)
 {
+	struct acpi_device *adev = ACPI_COMPANION(dev);
 	struct acpi_bus_type *type;
+
+	if (!adev)
+		return 0;
 
 	type = acpi_get_bus_type(dev);
 	if (type && type->cleanup)
 		type->cleanup(dev);
+	else if (adev->handler && adev->handler->unbind)
+		adev->handler->unbind(dev);
 
 	acpi_unbind_one(dev);
 	return 0;
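The glue.c change above establishes a dispatch order: a bus-type setup (or cleanup) routine, when present, takes precedence over the scan handler's bind (or unbind) callback. That precedence can be modeled in isolation; the names and return codes below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* Result codes for this toy model only. */
enum { RAN_NONE = 0, RAN_BIND = 1, RAN_SETUP = 2 };

/* Model of the notify-time dispatch: type->setup wins over
 * handler->bind; the bind callback runs only as a fallback. */
static int dispatch(void (*type_setup)(void), void (*handler_bind)(void))
{
	if (type_setup) {
		type_setup();
		return RAN_SETUP;
	} else if (handler_bind) {
		handler_bind();
		return RAN_BIND;
	}
	return RAN_NONE;
}

static void noop(void) { }
```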
+5 -4
drivers/acpi/scan.c
···
 
 		handler = acpi_scan_match_handler(hwid->id, &devid);
 		if (handler) {
+			device->handler = handler;
 			ret = handler->attach(device, devid);
-			if (ret > 0) {
-				device->handler = handler;
+			if (ret > 0)
 				break;
-			} else if (ret < 0) {
+
+			device->handler = NULL;
+			if (ret < 0)
 				break;
-			}
 		}
 	}
 	return ret;
+1 -2
drivers/base/power/Makefile
-obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o
+obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o
 obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
-obj-$(CONFIG_PM_RUNTIME)	+= runtime.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp.o
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o
+1 -1
drivers/base/power/domain.c
···
 	struct gpd_timing_data *__td = &dev_gpd_data(dev)->td;		\
 	if (!__retval && __elapsed > __td->field) {			\
 		__td->field = __elapsed;				\
-		dev_warn(dev, name " latency exceeded, new value %lld ns\n", \
+		dev_dbg(dev, name " latency exceeded, new value %lld ns\n", \
 			__elapsed);					\
 		genpd->max_off_time_changed = true;			\
 		__td->constraint_changed = true;			\
+2 -2
drivers/base/power/power.h
···
 extern void rpm_sysfs_remove(struct device *dev);
 extern int wakeup_sysfs_add(struct device *dev);
 extern void wakeup_sysfs_remove(struct device *dev);
-extern int pm_qos_sysfs_add_latency(struct device *dev);
-extern void pm_qos_sysfs_remove_latency(struct device *dev);
+extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
+extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
 extern int pm_qos_sysfs_add_flags(struct device *dev);
 extern void pm_qos_sysfs_remove_flags(struct device *dev);
+166 -54
drivers/base/power/qos.c
···
 s32 __dev_pm_qos_read_value(struct device *dev)
 {
 	return IS_ERR_OR_NULL(dev->power.qos) ?
-		0 : pm_qos_read_value(&dev->power.qos->latency);
+		0 : pm_qos_read_value(&dev->power.qos->resume_latency);
 }
 
···
 	int ret;
 
 	switch(req->type) {
-	case DEV_PM_QOS_LATENCY:
-		ret = pm_qos_update_target(&qos->latency, &req->data.pnode,
-					   action, value);
+	case DEV_PM_QOS_RESUME_LATENCY:
+		ret = pm_qos_update_target(&qos->resume_latency,
+					   &req->data.pnode, action, value);
 		if (ret) {
-			value = pm_qos_read_value(&qos->latency);
+			value = pm_qos_read_value(&qos->resume_latency);
 			blocking_notifier_call_chain(&dev_pm_notifiers,
 						     (unsigned long)value,
 						     req);
+		}
+		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		ret = pm_qos_update_target(&qos->latency_tolerance,
+					   &req->data.pnode, action, value);
+		if (ret) {
+			value = pm_qos_read_value(&qos->latency_tolerance);
+			req->dev->power.set_latency_tolerance(req->dev, value);
 		}
 		break;
 	case DEV_PM_QOS_FLAGS:
···
 	}
 	BLOCKING_INIT_NOTIFIER_HEAD(n);
 
-	c = &qos->latency;
+	c = &qos->resume_latency;
 	plist_head_init(&c->list);
-	c->target_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
-	c->default_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
+	c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+	c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
 	c->type = PM_QOS_MIN;
 	c->notifiers = n;
+
+	c = &qos->latency_tolerance;
+	plist_head_init(&c->list);
+	c->target_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
+	c->default_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
+	c->type = PM_QOS_MIN;
 
 	INIT_LIST_HEAD(&qos->flags.list);
···
  * If the device's PM QoS resume latency limit or PM QoS flags have been
  * exposed to user space, they have to be hidden at this point.
  */
-	pm_qos_sysfs_remove_latency(dev);
+	pm_qos_sysfs_remove_resume_latency(dev);
 	pm_qos_sysfs_remove_flags(dev);
 
 	mutex_lock(&dev_pm_qos_mtx);
···
 		goto out;
 
 	/* Flush the constraints lists for the device. */
-	c = &qos->latency;
+	c = &qos->resume_latency;
 	plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
 		/*
 		 * Update constraints list and call the notification
 		 * callbacks if needed
 		 */
+		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
+		memset(req, 0, sizeof(*req));
+	}
+	c = &qos->latency_tolerance;
+	plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
 		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 		memset(req, 0, sizeof(*req));
 	}
···
 	mutex_unlock(&dev_pm_qos_mtx);
 
 	mutex_unlock(&dev_pm_qos_sysfs_mtx);
 }
+
+static bool dev_pm_qos_invalid_request(struct device *dev,
+				       struct dev_pm_qos_request *req)
+{
+	return !req || (req->type == DEV_PM_QOS_LATENCY_TOLERANCE
+			&& !dev->power.set_latency_tolerance);
+}
+
+static int __dev_pm_qos_add_request(struct device *dev,
+				    struct dev_pm_qos_request *req,
+				    enum dev_pm_qos_req_type type, s32 value)
+{
+	int ret = 0;
+
+	if (!dev || dev_pm_qos_invalid_request(dev, req))
+		return -EINVAL;
+
+	if (WARN(dev_pm_qos_request_active(req),
+		 "%s() called for already added request\n", __func__))
+		return -EINVAL;
+
+	if (IS_ERR(dev->power.qos))
+		ret = -ENODEV;
+	else if (!dev->power.qos)
+		ret = dev_pm_qos_constraints_allocate(dev);
+
+	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
+	if (!ret) {
+		req->dev = dev;
+		req->type = type;
+		ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
+	}
+	return ret;
+}
 
 /**
···
 int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
 			   enum dev_pm_qos_req_type type, s32 value)
 {
-	int ret = 0;
-
-	if (!dev || !req) /*guard against callers passing in null */
-		return -EINVAL;
-
-	if (WARN(dev_pm_qos_request_active(req),
-		 "%s() called for already added request\n", __func__))
-		return -EINVAL;
+	int ret;
 
 	mutex_lock(&dev_pm_qos_mtx);
-
-	if (IS_ERR(dev->power.qos))
-		ret = -ENODEV;
-	else if (!dev->power.qos)
-		ret = dev_pm_qos_constraints_allocate(dev);
-
-	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
-	if (!ret) {
-		req->dev = dev;
-		req->type = type;
-		ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
-	}
-
+	ret = __dev_pm_qos_add_request(dev, req, type, value);
 	mutex_unlock(&dev_pm_qos_mtx);
-
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_add_request);
···
 		return -ENODEV;
 
 	switch(req->type) {
-	case DEV_PM_QOS_LATENCY:
+	case DEV_PM_QOS_RESUME_LATENCY:
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
 		curr_value = req->data.pnode.prio;
 		break;
 	case DEV_PM_QOS_FLAGS:
···
 		ret = dev_pm_qos_constraints_allocate(dev);
 
 	if (!ret)
-		ret = blocking_notifier_chain_register(
-				dev->power.qos->latency.notifiers, notifier);
+		ret = blocking_notifier_chain_register(dev->power.qos->resume_latency.notifiers,
+						       notifier);
 
 	mutex_unlock(&dev_pm_qos_mtx);
 	return ret;
···
 
 	/* Silently return if the constraints object is not present. */
 	if (!IS_ERR_OR_NULL(dev->power.qos))
-		retval = blocking_notifier_chain_unregister(
-				dev->power.qos->latency.notifiers,
-				notifier);
+		retval = blocking_notifier_chain_unregister(dev->power.qos->resume_latency.notifiers,
+							    notifier);
 
 	mutex_unlock(&dev_pm_qos_mtx);
 	return retval;
···
  * dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor.
  * @dev: Device whose ancestor to add the request for.
  * @req: Pointer to the preallocated handle.
+ * @type: Type of the request.
  * @value: Constraint latency value.
  */
 int dev_pm_qos_add_ancestor_request(struct device *dev,
-				    struct dev_pm_qos_request *req, s32 value)
+				    struct dev_pm_qos_request *req,
+				    enum dev_pm_qos_req_type type, s32 value)
 {
 	struct device *ancestor = dev->parent;
 	int ret = -ENODEV;
 
-	while (ancestor && !ancestor->power.ignore_children)
-		ancestor = ancestor->parent;
+	switch (type) {
+	case DEV_PM_QOS_RESUME_LATENCY:
+		while (ancestor && !ancestor->power.ignore_children)
+			ancestor = ancestor->parent;
 
+		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		while (ancestor && !ancestor->power.set_latency_tolerance)
+			ancestor = ancestor->parent;
+
+		break;
+	default:
+		ancestor = NULL;
+	}
 	if (ancestor)
-		ret = dev_pm_qos_add_request(ancestor, req,
-					     DEV_PM_QOS_LATENCY, value);
+		ret = dev_pm_qos_add_request(ancestor, req, type, value);
 
 	if (ret < 0)
 		req->dev = NULL;
···
 	struct dev_pm_qos_request *req = NULL;
 
 	switch(type) {
-	case DEV_PM_QOS_LATENCY:
-		req = dev->power.qos->latency_req;
-		dev->power.qos->latency_req = NULL;
+	case DEV_PM_QOS_RESUME_LATENCY:
+		req = dev->power.qos->resume_latency_req;
+		dev->power.qos->resume_latency_req = NULL;
+		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		req = dev->power.qos->latency_tolerance_req;
+		dev->power.qos->latency_tolerance_req = NULL;
 		break;
 	case DEV_PM_QOS_FLAGS:
 		req = dev->power.qos->flags_req;
···
 	if (!req)
 		return -ENOMEM;
 
-	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY, value);
+	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_RESUME_LATENCY, value);
 	if (ret < 0) {
 		kfree(req);
 		return ret;
···
 
 	if (IS_ERR_OR_NULL(dev->power.qos))
 		ret = -ENODEV;
-	else if (dev->power.qos->latency_req)
+	else if (dev->power.qos->resume_latency_req)
 		ret = -EEXIST;
 
 	if (ret < 0) {
···
 		mutex_unlock(&dev_pm_qos_mtx);
 		goto out;
 	}
-	dev->power.qos->latency_req = req;
+	dev->power.qos->resume_latency_req = req;
 
 	mutex_unlock(&dev_pm_qos_mtx);
 
-	ret = pm_qos_sysfs_add_latency(dev);
+	ret = pm_qos_sysfs_add_resume_latency(dev);
 	if (ret)
-		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
+		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
 
  out:
 	mutex_unlock(&dev_pm_qos_sysfs_mtx);
···
 
 static void __dev_pm_qos_hide_latency_limit(struct device *dev)
 {
-	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req)
-		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
+	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->resume_latency_req)
+		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
 }
 
 /**
···
 {
 	mutex_lock(&dev_pm_qos_sysfs_mtx);
 
-	pm_qos_sysfs_remove_latency(dev);
+	pm_qos_sysfs_remove_resume_latency(dev);
 
 	mutex_lock(&dev_pm_qos_mtx);
 	__dev_pm_qos_hide_latency_limit(dev);
···
  out:
 	mutex_unlock(&dev_pm_qos_mtx);
 	pm_runtime_put(dev);
+	return ret;
+}
+
+/**
+ * dev_pm_qos_get_user_latency_tolerance - Get user space latency tolerance.
+ * @dev: Device to obtain the user space latency tolerance for.
+ */
+s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
+{
+	s32 ret;
+
+	mutex_lock(&dev_pm_qos_mtx);
+	ret = IS_ERR_OR_NULL(dev->power.qos)
+		|| !dev->power.qos->latency_tolerance_req ?
+			PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT :
+			dev->power.qos->latency_tolerance_req->data.pnode.prio;
+	mutex_unlock(&dev_pm_qos_mtx);
+	return ret;
+}
+
+/**
+ * dev_pm_qos_update_user_latency_tolerance - Update user space latency tolerance.
+ * @dev: Device to update the user space latency tolerance for.
+ * @val: New user space latency tolerance for @dev (negative values disable).
+ */
+int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
+{
+	int ret;
+
+	mutex_lock(&dev_pm_qos_mtx);
+
+	if (IS_ERR_OR_NULL(dev->power.qos)
+	    || !dev->power.qos->latency_tolerance_req) {
+		struct dev_pm_qos_request *req;
+
+		if (val < 0) {
+			ret = -EINVAL;
+			goto out;
+		}
+		req = kzalloc(sizeof(*req), GFP_KERNEL);
+		if (!req) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		ret = __dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY_TOLERANCE, val);
+		if (ret < 0) {
+			kfree(req);
+			goto out;
+		}
+		dev->power.qos->latency_tolerance_req = req;
+	} else {
+		if (val < 0) {
+			__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE);
+			ret = 0;
+		} else {
+			ret = __dev_pm_qos_update_request(dev->power.qos->latency_tolerance_req, val);
+		}
+	}
+
+ out:
+	mutex_unlock(&dev_pm_qos_mtx);
 	return ret;
 }
 #else /* !CONFIG_PM_RUNTIME */
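dev_pm_qos_update_user_latency_tolerance() above distinguishes three cases: a negative value with no existing user-space request is an error, a negative value with an existing request drops it (the "auto" path), and a non-negative value adds or updates the request. That state machine can be checked in isolation; this is a toy model with illustrative names, not the kernel function:

```c
#include <assert.h>

#define ERR_INVAL (-22)	/* stand-in for -EINVAL */

/* Minimal model of the user latency tolerance request state. */
struct tolerance_state {
	int has_req;	/* a user-space request currently exists */
	int value;	/* its value, meaningful only when has_req != 0 */
};

/* Mirrors the add / update / drop decision described above. */
static int update_user_tolerance(struct tolerance_state *st, int val)
{
	if (!st->has_req) {
		if (val < 0)
			return ERR_INVAL;	/* nothing to remove */
		st->has_req = 1;		/* add a new request */
		st->value = val;
		return 0;
	}
	if (val < 0) {				/* "auto": drop the request */
		st->has_req = 0;
		return 0;
	}
	st->value = val;			/* update in place */
	return 0;
}
```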
+123 -39
drivers/base/power/runtime.c
··· 13 13 #include <trace/events/rpm.h> 14 14 #include "power.h" 15 15 16 + #define RPM_GET_CALLBACK(dev, cb) \ 17 + ({ \ 18 + int (*__rpm_cb)(struct device *__d); \ 19 + \ 20 + if (dev->pm_domain) \ 21 + __rpm_cb = dev->pm_domain->ops.cb; \ 22 + else if (dev->type && dev->type->pm) \ 23 + __rpm_cb = dev->type->pm->cb; \ 24 + else if (dev->class && dev->class->pm) \ 25 + __rpm_cb = dev->class->pm->cb; \ 26 + else if (dev->bus && dev->bus->pm) \ 27 + __rpm_cb = dev->bus->pm->cb; \ 28 + else \ 29 + __rpm_cb = NULL; \ 30 + \ 31 + if (!__rpm_cb && dev->driver && dev->driver->pm) \ 32 + __rpm_cb = dev->driver->pm->cb; \ 33 + \ 34 + __rpm_cb; \ 35 + }) 36 + 37 + static int (*rpm_get_suspend_cb(struct device *dev))(struct device *) 38 + { 39 + return RPM_GET_CALLBACK(dev, runtime_suspend); 40 + } 41 + 42 + static int (*rpm_get_resume_cb(struct device *dev))(struct device *) 43 + { 44 + return RPM_GET_CALLBACK(dev, runtime_resume); 45 + } 46 + 47 + #ifdef CONFIG_PM_RUNTIME 48 + static int (*rpm_get_idle_cb(struct device *dev))(struct device *) 49 + { 50 + return RPM_GET_CALLBACK(dev, runtime_idle); 51 + } 52 + 16 53 static int rpm_resume(struct device *dev, int rpmflags); 17 54 static int rpm_suspend(struct device *dev, int rpmflags); 18 55 ··· 347 310 348 311 dev->power.idle_notification = true; 349 312 350 - if (dev->pm_domain) 351 - callback = dev->pm_domain->ops.runtime_idle; 352 - else if (dev->type && dev->type->pm) 353 - callback = dev->type->pm->runtime_idle; 354 - else if (dev->class && dev->class->pm) 355 - callback = dev->class->pm->runtime_idle; 356 - else if (dev->bus && dev->bus->pm) 357 - callback = dev->bus->pm->runtime_idle; 358 - else 359 - callback = NULL; 360 - 361 - if (!callback && dev->driver && dev->driver->pm) 362 - callback = dev->driver->pm->runtime_idle; 313 + callback = rpm_get_idle_cb(dev); 363 314 364 315 if (callback) 365 316 retval = __rpm_callback(callback, dev); ··· 517 492 518 493 __update_runtime_status(dev, RPM_SUSPENDING); 519 494 520 
- if (dev->pm_domain) 521 - callback = dev->pm_domain->ops.runtime_suspend; 522 - else if (dev->type && dev->type->pm) 523 - callback = dev->type->pm->runtime_suspend; 524 - else if (dev->class && dev->class->pm) 525 - callback = dev->class->pm->runtime_suspend; 526 - else if (dev->bus && dev->bus->pm) 527 - callback = dev->bus->pm->runtime_suspend; 528 - else 529 - callback = NULL; 530 - 531 - if (!callback && dev->driver && dev->driver->pm) 532 - callback = dev->driver->pm->runtime_suspend; 495 + callback = rpm_get_suspend_cb(dev); 533 496 534 497 retval = rpm_callback(callback, dev); 535 498 if (retval) ··· 737 724 738 725 __update_runtime_status(dev, RPM_RESUMING); 739 726 740 - if (dev->pm_domain) 741 - callback = dev->pm_domain->ops.runtime_resume; 742 - else if (dev->type && dev->type->pm) 743 - callback = dev->type->pm->runtime_resume; 744 - else if (dev->class && dev->class->pm) 745 - callback = dev->class->pm->runtime_resume; 746 - else if (dev->bus && dev->bus->pm) 747 - callback = dev->bus->pm->runtime_resume; 748 - else 749 - callback = NULL; 750 - 751 - if (!callback && dev->driver && dev->driver->pm) 752 - callback = dev->driver->pm->runtime_resume; 727 + callback = rpm_get_resume_cb(dev); 753 728 754 729 retval = rpm_callback(callback, dev); 755 730 if (retval) { ··· 1402 1401 if (dev->power.irq_safe && dev->parent) 1403 1402 pm_runtime_put(dev->parent); 1404 1403 } 1404 + #endif 1405 + 1406 + /** 1407 + * pm_runtime_force_suspend - Force a device into suspend state if needed. 1408 + * @dev: Device to suspend. 1409 + * 1410 + * Disable runtime PM so we safely can check the device's runtime PM status and 1411 + * if it is active, invoke it's .runtime_suspend callback to bring it into 1412 + * suspend state. Keep runtime PM disabled to preserve the state unless we 1413 + * encounter errors. 1414 + * 1415 + * Typically this function may be invoked from a system suspend callback to make 1416 + * sure the device is put into low power state. 
1417 + */
 1418 + int pm_runtime_force_suspend(struct device *dev)
 1419 + {
 1420 + int (*callback)(struct device *);
 1421 + int ret = 0;
 1422 +
 1423 + pm_runtime_disable(dev);
 1424 +
 1425 + /*
 1426 + * Note that pm_runtime_status_suspended() returns false when
 1427 + * CONFIG_PM_RUNTIME is unset, which means the device will be put into
 1428 + * low power state.
 1429 + */
 1430 + if (pm_runtime_status_suspended(dev))
 1431 + return 0;
 1432 +
 1433 + callback = rpm_get_suspend_cb(dev);
 1434 +
 1435 + if (!callback) {
 1436 + ret = -ENOSYS;
 1437 + goto err;
 1438 + }
 1439 +
 1440 + ret = callback(dev);
 1441 + if (ret)
 1442 + goto err;
 1443 +
 1444 + pm_runtime_set_suspended(dev);
 1445 + return 0;
 1446 + err:
 1447 + pm_runtime_enable(dev);
 1448 + return ret;
 1449 + }
 1450 + EXPORT_SYMBOL_GPL(pm_runtime_force_suspend);
 1451 +
 1452 + /**
 1453 + * pm_runtime_force_resume - Force a device into resume state.
 1454 + * @dev: Device to resume.
 1455 + *
 1456 + * Prior to invoking this function we expect the user to have brought the device
 1457 + * into low power state by a call to pm_runtime_force_suspend(). Here we reverse
 1458 + * those actions and bring the device back to full power. We update the runtime
 1459 + * PM status and re-enable runtime PM.
 1460 + *
 1461 + * Typically this function may be invoked from a system resume callback to make
 1462 + * sure the device is put into full power state.
 1463 + */
 1464 + int pm_runtime_force_resume(struct device *dev)
 1465 + {
 1466 + int (*callback)(struct device *);
 1467 + int ret = 0;
 1468 +
 1469 + callback = rpm_get_resume_cb(dev);
 1470 +
 1471 + if (!callback) {
 1472 + ret = -ENOSYS;
 1473 + goto out;
 1474 + }
 1475 +
 1476 + ret = callback(dev);
 1477 + if (ret)
 1478 + goto out;
 1479 +
 1480 + pm_runtime_set_active(dev);
 1481 + pm_runtime_mark_last_busy(dev);
 1482 + out:
 1483 + pm_runtime_enable(dev);
 1484 + return ret;
 1485 + }
 1486 + EXPORT_SYMBOL_GPL(pm_runtime_force_resume);
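The RPM_GET_CALLBACK macro centralizes the lookup the idle/suspend/resume paths previously open-coded: the PM domain wins over type, class, and bus ops, and the driver's own callback is used only when none of those supplied one. The precedence can be sketched in plain userspace C; the mock structs below model only the members the lookup touches (the real kernel types carry many more fields):

```c
#include <stddef.h>

/* Mock stand-ins for the kernel structs, reduced to the fields consulted
 * by the callback lookup. */
struct device;

struct dev_pm_ops { int (*runtime_suspend)(struct device *); };
struct dev_pm_domain { struct dev_pm_ops ops; };
struct device_type { const struct dev_pm_ops *pm; };
struct class { const struct dev_pm_ops *pm; };
struct bus_type { const struct dev_pm_ops *pm; };
struct device_driver { const struct dev_pm_ops *pm; };

struct device {
	struct dev_pm_domain *pm_domain;
	struct device_type *type;
	struct class *class;
	struct bus_type *bus;
	struct device_driver *driver;
};

/* Same precedence RPM_GET_CALLBACK encodes: domain > type > class > bus,
 * with the driver callback used only when the selected provider left the
 * slot NULL (or no provider matched at all). */
static int (*get_suspend_cb(struct device *dev))(struct device *)
{
	int (*cb)(struct device *);

	if (dev->pm_domain)
		cb = dev->pm_domain->ops.runtime_suspend;
	else if (dev->type && dev->type->pm)
		cb = dev->type->pm->runtime_suspend;
	else if (dev->class && dev->class->pm)
		cb = dev->class->pm->runtime_suspend;
	else if (dev->bus && dev->bus->pm)
		cb = dev->bus->pm->runtime_suspend;
	else
		cb = NULL;

	if (!cb && dev->driver && dev->driver->pm)
		cb = dev->driver->pm->runtime_suspend;

	return cb;
}

/* Stub callbacks and providers for demonstration. */
static int domain_cb(struct device *dev) { (void)dev; return 0; }
static int driver_cb(struct device *dev) { (void)dev; return 0; }

static struct dev_pm_domain dom = { { domain_cb } };
static const struct dev_pm_ops drv_ops = { driver_cb };
static struct device_driver drv = { &drv_ops };
```

Note that a PM domain shadows the type/class/bus callbacks entirely, yet the driver fallback still applies when the chosen provider's slot is NULL.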
+75 -22
drivers/base/power/sysfs.c
··· 218 218 static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show, 219 219 autosuspend_delay_ms_store); 220 220 221 - static ssize_t pm_qos_latency_show(struct device *dev, 222 - struct device_attribute *attr, char *buf) 221 + static ssize_t pm_qos_resume_latency_show(struct device *dev, 222 + struct device_attribute *attr, 223 + char *buf) 223 224 { 224 - return sprintf(buf, "%d\n", dev_pm_qos_requested_latency(dev)); 225 + return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev)); 225 226 } 226 227 227 - static ssize_t pm_qos_latency_store(struct device *dev, 228 - struct device_attribute *attr, 229 - const char *buf, size_t n) 228 + static ssize_t pm_qos_resume_latency_store(struct device *dev, 229 + struct device_attribute *attr, 230 + const char *buf, size_t n) 230 231 { 231 232 s32 value; 232 233 int ret; ··· 238 237 if (value < 0) 239 238 return -EINVAL; 240 239 241 - ret = dev_pm_qos_update_request(dev->power.qos->latency_req, value); 240 + ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req, 241 + value); 242 242 return ret < 0 ? 
ret : n; 243 243 } 244 244 245 245 static DEVICE_ATTR(pm_qos_resume_latency_us, 0644, 246 - pm_qos_latency_show, pm_qos_latency_store); 246 + pm_qos_resume_latency_show, pm_qos_resume_latency_store); 247 + 248 + static ssize_t pm_qos_latency_tolerance_show(struct device *dev, 249 + struct device_attribute *attr, 250 + char *buf) 251 + { 252 + s32 value = dev_pm_qos_get_user_latency_tolerance(dev); 253 + 254 + if (value < 0) 255 + return sprintf(buf, "auto\n"); 256 + else if (value == PM_QOS_LATENCY_ANY) 257 + return sprintf(buf, "any\n"); 258 + 259 + return sprintf(buf, "%d\n", value); 260 + } 261 + 262 + static ssize_t pm_qos_latency_tolerance_store(struct device *dev, 263 + struct device_attribute *attr, 264 + const char *buf, size_t n) 265 + { 266 + s32 value; 267 + int ret; 268 + 269 + if (kstrtos32(buf, 0, &value)) { 270 + if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n")) 271 + value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; 272 + else if (!strcmp(buf, "any") || !strcmp(buf, "any\n")) 273 + value = PM_QOS_LATENCY_ANY; 274 + } 275 + ret = dev_pm_qos_update_user_latency_tolerance(dev, value); 276 + return ret < 0 ? 
ret : n; 277 + } 278 + 279 + static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644, 280 + pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store); 247 281 248 282 static ssize_t pm_qos_no_power_off_show(struct device *dev, 249 283 struct device_attribute *attr, ··· 654 618 .attrs = runtime_attrs, 655 619 }; 656 620 657 - static struct attribute *pm_qos_latency_attrs[] = { 621 + static struct attribute *pm_qos_resume_latency_attrs[] = { 658 622 #ifdef CONFIG_PM_RUNTIME 659 623 &dev_attr_pm_qos_resume_latency_us.attr, 660 624 #endif /* CONFIG_PM_RUNTIME */ 661 625 NULL, 662 626 }; 663 - static struct attribute_group pm_qos_latency_attr_group = { 627 + static struct attribute_group pm_qos_resume_latency_attr_group = { 664 628 .name = power_group_name, 665 - .attrs = pm_qos_latency_attrs, 629 + .attrs = pm_qos_resume_latency_attrs, 630 + }; 631 + 632 + static struct attribute *pm_qos_latency_tolerance_attrs[] = { 633 + #ifdef CONFIG_PM_RUNTIME 634 + &dev_attr_pm_qos_latency_tolerance_us.attr, 635 + #endif /* CONFIG_PM_RUNTIME */ 636 + NULL, 637 + }; 638 + static struct attribute_group pm_qos_latency_tolerance_attr_group = { 639 + .name = power_group_name, 640 + .attrs = pm_qos_latency_tolerance_attrs, 666 641 }; 667 642 668 643 static struct attribute *pm_qos_flags_attrs[] = { ··· 701 654 if (rc) 702 655 goto err_out; 703 656 } 704 - 705 657 if (device_can_wakeup(dev)) { 706 658 rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group); 707 - if (rc) { 708 - if (pm_runtime_callbacks_present(dev)) 709 - sysfs_unmerge_group(&dev->kobj, 710 - &pm_runtime_attr_group); 711 - goto err_out; 712 - } 659 + if (rc) 660 + goto err_runtime; 661 + } 662 + if (dev->power.set_latency_tolerance) { 663 + rc = sysfs_merge_group(&dev->kobj, 664 + &pm_qos_latency_tolerance_attr_group); 665 + if (rc) 666 + goto err_wakeup; 713 667 } 714 668 return 0; 715 669 670 + err_wakeup: 671 + sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group); 672 + err_runtime: 673 + 
sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group); 716 674 err_out: 717 675 sysfs_remove_group(&dev->kobj, &pm_attr_group); 718 676 return rc; ··· 733 681 sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group); 734 682 } 735 683 736 - int pm_qos_sysfs_add_latency(struct device *dev) 684 + int pm_qos_sysfs_add_resume_latency(struct device *dev) 737 685 { 738 - return sysfs_merge_group(&dev->kobj, &pm_qos_latency_attr_group); 686 + return sysfs_merge_group(&dev->kobj, &pm_qos_resume_latency_attr_group); 739 687 } 740 688 741 - void pm_qos_sysfs_remove_latency(struct device *dev) 689 + void pm_qos_sysfs_remove_resume_latency(struct device *dev) 742 690 { 743 - sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_attr_group); 691 + sysfs_unmerge_group(&dev->kobj, &pm_qos_resume_latency_attr_group); 744 692 } 745 693 746 694 int pm_qos_sysfs_add_flags(struct device *dev) ··· 760 708 761 709 void dpm_sysfs_remove(struct device *dev) 762 710 { 711 + sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group); 763 712 dev_pm_qos_constraints_destroy(dev); 764 713 rpm_sysfs_remove(dev); 765 714 sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
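The new pm_qos_latency_tolerance_us store handler accepts a plain number, "auto" (no constraint), or "any", with or without a trailing newline. A userspace sketch of that parse, using strtol in place of kstrtos32 and hypothetical local macros mirroring the header constants; the kernel itself funnels the resulting value to dev_pm_qos_update_user_latency_tolerance() rather than returning directly:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Local mirrors of the header's special values (see pm_qos.h):
 * -1 means "auto" (no constraint), S32_MAX means "any". */
#define LAT_TOL_NO_CONSTRAINT (-1)
#define LAT_TOL_ANY ((int32_t)(~(uint32_t)0 >> 1))

/* Fill *out from a sysfs-style buffer; 0 on success, -EINVAL otherwise. */
static int parse_latency_tolerance(const char *buf, int32_t *out)
{
	char *end;
	long v;

	errno = 0;
	v = strtol(buf, &end, 0);
	if (end != buf && !errno && (*end == '\0' || *end == '\n')) {
		*out = (int32_t)v;	/* numeric value, e.g. "100\n" */
		return 0;
	}
	if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n")) {
		*out = LAT_TOL_NO_CONSTRAINT;
		return 0;
	}
	if (!strcmp(buf, "any") || !strcmp(buf, "any\n")) {
		*out = LAT_TOL_ANY;
		return 0;
	}
	return -EINVAL;
}
```

The show side is the mirror image: a negative stored value prints "auto", PM_QOS_LATENCY_ANY prints "any", anything else prints the number.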
+2 -1
drivers/input/touchscreen/st1232.c
··· 134 134 } else if (!ts->low_latency_req.dev) { 135 135 /* First contact, request 100 us latency. */ 136 136 dev_pm_qos_add_ancestor_request(&ts->client->dev, 137 - &ts->low_latency_req, 100); 137 + &ts->low_latency_req, 138 + DEV_PM_QOS_RESUME_LATENCY, 100); 138 139 } 139 140 140 141 /* SYN_REPORT */
+1 -1
drivers/mtd/nand/sh_flctl.c
··· 897 897 if (!flctl->qos_request) { 898 898 ret = dev_pm_qos_add_request(&flctl->pdev->dev, 899 899 &flctl->pm_qos, 900 - DEV_PM_QOS_LATENCY, 900 + DEV_PM_QOS_RESUME_LATENCY, 901 901 100); 902 902 if (ret < 0) 903 903 dev_err(&flctl->pdev->dev,
+2
include/acpi/acpi_bus.h
··· 133 133 struct list_head list_node; 134 134 int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id); 135 135 void (*detach)(struct acpi_device *dev); 136 + void (*bind)(struct device *phys_dev); 137 + void (*unbind)(struct device *phys_dev); 136 138 struct acpi_hotplug_profile hotplug; 137 139 }; 138 140
+1
include/linux/pm.h
··· 582 582 unsigned long accounting_timestamp; 583 583 #endif 584 584 struct pm_subsys_data *subsys_data; /* Owned by the subsystem. */ 585 + void (*set_latency_tolerance)(struct device *, s32); 585 586 struct dev_pm_qos *qos; 586 587 }; 587 588
+25 -9
include/linux/pm_qos.h
··· 32 32 #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 33 33 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 34 34 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0 35 - #define PM_QOS_DEV_LAT_DEFAULT_VALUE 0 35 + #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE 0 36 + #define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE 0 37 + #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1) 38 + #define PM_QOS_LATENCY_ANY ((s32)(~(__u32)0 >> 1)) 36 39 37 40 #define PM_QOS_FLAG_NO_POWER_OFF (1 << 0) 38 41 #define PM_QOS_FLAG_REMOTE_WAKEUP (1 << 1) ··· 52 49 }; 53 50 54 51 enum dev_pm_qos_req_type { 55 - DEV_PM_QOS_LATENCY = 1, 52 + DEV_PM_QOS_RESUME_LATENCY = 1, 53 + DEV_PM_QOS_LATENCY_TOLERANCE, 56 54 DEV_PM_QOS_FLAGS, 57 55 }; 58 56 ··· 81 77 struct plist_head list; 82 78 s32 target_value; /* Do not change to 64 bit */ 83 79 s32 default_value; 80 + s32 no_constraint_value; 84 81 enum pm_qos_type type; 85 82 struct blocking_notifier_head *notifiers; 86 83 }; ··· 92 87 }; 93 88 94 89 struct dev_pm_qos { 95 - struct pm_qos_constraints latency; 90 + struct pm_qos_constraints resume_latency; 91 + struct pm_qos_constraints latency_tolerance; 96 92 struct pm_qos_flags flags; 97 - struct dev_pm_qos_request *latency_req; 93 + struct dev_pm_qos_request *resume_latency_req; 94 + struct dev_pm_qos_request *latency_tolerance_req; 98 95 struct dev_pm_qos_request *flags_req; 99 96 }; 100 97 ··· 149 142 void dev_pm_qos_constraints_init(struct device *dev); 150 143 void dev_pm_qos_constraints_destroy(struct device *dev); 151 144 int dev_pm_qos_add_ancestor_request(struct device *dev, 152 - struct dev_pm_qos_request *req, s32 value); 145 + struct dev_pm_qos_request *req, 146 + enum dev_pm_qos_req_type type, s32 value); 153 147 #else 154 148 static inline enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, 155 149 s32 mask) ··· 193 185 dev->power.power_state = PMSG_INVALID; 194 186 } 195 187 static inline int dev_pm_qos_add_ancestor_request(struct device *dev, 
196 - struct dev_pm_qos_request *req, s32 value) 188 + struct dev_pm_qos_request *req, 189 + enum dev_pm_qos_req_type type, 190 + s32 value) 197 191 { return 0; } 198 192 #endif 199 193 ··· 205 195 int dev_pm_qos_expose_flags(struct device *dev, s32 value); 206 196 void dev_pm_qos_hide_flags(struct device *dev); 207 197 int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set); 198 + s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev); 199 + int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val); 208 200 209 - static inline s32 dev_pm_qos_requested_latency(struct device *dev) 201 + static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) 210 202 { 211 - return dev->power.qos->latency_req->data.pnode.prio; 203 + return dev->power.qos->resume_latency_req->data.pnode.prio; 212 204 } 213 205 214 206 static inline s32 dev_pm_qos_requested_flags(struct device *dev) ··· 226 214 static inline void dev_pm_qos_hide_flags(struct device *dev) {} 227 215 static inline int dev_pm_qos_update_flags(struct device *dev, s32 m, bool set) 228 216 { return 0; } 217 + static inline s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev) 218 + { return PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; } 219 + static inline int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) 220 + { return 0; } 229 221 230 - static inline s32 dev_pm_qos_requested_latency(struct device *dev) { return 0; } 222 + static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; } 231 223 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; } 232 224 #endif 233 225
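PM_QOS_LATENCY_ANY is written as ((s32)(~(__u32)0 >> 1)): all 32 bits set, shifted right once as an unsigned value, which yields the largest positive s32. A standalone check, with stdint typedefs substituted for the kernel's:

```c
#include <stdint.h>

/* Substitute typedefs so the header expressions compile in userspace. */
typedef int32_t s32;
typedef uint32_t __u32;

/* Copied verbatim from the pm_qos.h hunk above. */
#define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1)
#define PM_QOS_LATENCY_ANY ((s32)(~(__u32)0 >> 1))
```

Using the maximum s32 for "any" means an "any" request can never lower the aggregated (PM_QOS_MIN) latency tolerance, while -1 stays distinguishable from every valid tolerance.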
+4
include/linux/pm_runtime.h
··· 26 26 #ifdef CONFIG_PM 27 27 extern int pm_generic_runtime_suspend(struct device *dev); 28 28 extern int pm_generic_runtime_resume(struct device *dev); 29 + extern int pm_runtime_force_suspend(struct device *dev); 30 + extern int pm_runtime_force_resume(struct device *dev); 29 31 #else 30 32 static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; } 31 33 static inline int pm_generic_runtime_resume(struct device *dev) { return 0; } 34 + static inline int pm_runtime_force_suspend(struct device *dev) { return 0; } 35 + static inline int pm_runtime_force_resume(struct device *dev) { return 0; } 32 36 #endif 33 37 34 38 #ifdef CONFIG_PM_RUNTIME
+2 -2
include/trace/events/power.h
··· 407 407 TP_printk("device=%s type=%s new_value=%d", 408 408 __get_str(name), 409 409 __print_symbolic(__entry->type, 410 - { DEV_PM_QOS_LATENCY, "DEV_PM_QOS_LATENCY" }, 411 - { DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }), 410 + { DEV_PM_QOS_RESUME_LATENCY, "DEV_PM_QOS_RESUME_LATENCY" }, 411 + { DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }), 412 412 __entry->new_value) 413 413 ); 414 414
+12 -6
kernel/power/qos.c
··· 66 66 .list = PLIST_HEAD_INIT(cpu_dma_constraints.list), 67 67 .target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE, 68 68 .default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE, 69 + .no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE, 69 70 .type = PM_QOS_MIN, 70 71 .notifiers = &cpu_dma_lat_notifier, 71 72 }; ··· 80 79 .list = PLIST_HEAD_INIT(network_lat_constraints.list), 81 80 .target_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE, 82 81 .default_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE, 82 + .no_constraint_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE, 83 83 .type = PM_QOS_MIN, 84 84 .notifiers = &network_lat_notifier, 85 85 }; ··· 95 93 .list = PLIST_HEAD_INIT(network_tput_constraints.list), 96 94 .target_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE, 97 95 .default_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE, 96 + .no_constraint_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE, 98 97 .type = PM_QOS_MAX, 99 98 .notifiers = &network_throughput_notifier, 100 99 }; ··· 131 128 static inline int pm_qos_get_value(struct pm_qos_constraints *c) 132 129 { 133 130 if (plist_head_empty(&c->list)) 134 - return c->default_value; 131 + return c->no_constraint_value; 135 132 136 133 switch (c->type) { 137 134 case PM_QOS_MIN: ··· 173 170 { 174 171 unsigned long flags; 175 172 int prev_value, curr_value, new_value; 173 + int ret; 176 174 177 175 spin_lock_irqsave(&pm_qos_lock, flags); 178 176 prev_value = pm_qos_get_value(c); ··· 209 205 210 206 trace_pm_qos_update_target(action, prev_value, curr_value); 211 207 if (prev_value != curr_value) { 212 - blocking_notifier_call_chain(c->notifiers, 213 - (unsigned long)curr_value, 214 - NULL); 215 - return 1; 208 + ret = 1; 209 + if (c->notifiers) 210 + blocking_notifier_call_chain(c->notifiers, 211 + (unsigned long)curr_value, 212 + NULL); 216 213 } else { 217 - return 0; 214 + ret = 0; 218 215 } 216 + return ret; 219 217 } 220 218 221 219 /**
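The pm_qos_get_value() change above means an empty constraint list now reports the class's no_constraint_value instead of its default_value; with requests queued, the minimum or maximum wins according to the class type. A compact model of that aggregation, using a plain array where the kernel keeps a sorted plist:

```c
#include <stddef.h>

enum pm_qos_type { PM_QOS_MIN, PM_QOS_MAX };

/* Reduced model of struct pm_qos_constraints: just the aggregation
 * inputs, with an array standing in for the request plist. */
struct constraints {
	enum pm_qos_type type;
	int no_constraint_value;
	const int *reqs;	/* active request values */
	size_t nr_reqs;
};

/* Mirrors the new pm_qos_get_value(): no requests -> no_constraint_value,
 * otherwise min (for latency-like classes) or max (for throughput). */
static int get_value(const struct constraints *c)
{
	size_t i;
	int v;

	if (c->nr_reqs == 0)
		return c->no_constraint_value;

	v = c->reqs[0];
	for (i = 1; i < c->nr_reqs; i++)
		if (c->type == PM_QOS_MIN ? c->reqs[i] < v : c->reqs[i] > v)
			v = c->reqs[i];
	return v;
}
```

For the three global classes the hunk sets no_constraint_value equal to default_value, so behavior there is unchanged; the split matters for the new latency-tolerance device class, where "no requests" (-1, hardware-controlled) must differ from the default of 0.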