Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

PM: Wrap documentation to fit in 80 columns

Wrap to 80 columns. No textual change except to correct some "it's" that
should be "its".

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Authored by Bjorn Helgaas, committed by Rafael J. Wysocki
1992b66d af42d346

+80 -73
Documentation/power/drivers-testing.rst (+4 -3)

···
 d) Attempt to hibernate with the driver compiled directly into the kernel
    in the "reboot", "shutdown" and "platform" modes.
 
-e) Try the test modes of suspend (see: Documentation/power/basic-pm-debugging.rst,
-   2). [As far as the STR tests are concerned, it should not matter whether or
-   not the driver is built as a module.]
+e) Try the test modes of suspend (see:
+   Documentation/power/basic-pm-debugging.rst, 2). [As far as the STR tests are
+   concerned, it should not matter whether or not the driver is built as a
+   module.]
 
 f) Attempt to suspend to RAM using the s2ram tool with the driver loaded
    (see: Documentation/power/basic-pm-debugging.rst, 2).
Documentation/power/freezing-of-tasks.rst (+18 -17)

···
 
 Yes, there are.
 
-First of all, grabbing the 'system_transition_mutex' lock to mutually exclude a piece of code
-from system-wide sleep such as suspend/hibernation is not encouraged.
-If possible, that piece of code must instead hook onto the suspend/hibernation
-notifiers to achieve mutual exclusion. Look at the CPU-Hotplug code
-(kernel/cpu.c) for an example.
+First of all, grabbing the 'system_transition_mutex' lock to mutually exclude a
+piece of code from system-wide sleep such as suspend/hibernation is not
+encouraged. If possible, that piece of code must instead hook onto the
+suspend/hibernation notifiers to achieve mutual exclusion. Look at the
+CPU-Hotplug code (kernel/cpu.c) for an example.
 
-However, if that is not feasible, and grabbing 'system_transition_mutex' is deemed necessary,
-it is strongly discouraged to directly call mutex_[un]lock(&system_transition_mutex) since
-that could lead to freezing failures, because if the suspend/hibernate code
-successfully acquired the 'system_transition_mutex' lock, and hence that other entity failed
-to acquire the lock, then that task would get blocked in TASK_UNINTERRUPTIBLE
-state. As a consequence, the freezer would not be able to freeze that task,
-leading to freezing failure.
+However, if that is not feasible, and grabbing 'system_transition_mutex' is
+deemed necessary, it is strongly discouraged to directly call
+mutex_[un]lock(&system_transition_mutex) since that could lead to freezing
+failures, because if the suspend/hibernate code successfully acquired the
+'system_transition_mutex' lock, and hence that other entity failed to acquire
+the lock, then that task would get blocked in TASK_UNINTERRUPTIBLE state. As a
+consequence, the freezer would not be able to freeze that task, leading to
+freezing failure.
 
 However, the [un]lock_system_sleep() APIs are safe to use in this scenario,
 since they ask the freezer to skip freezing this task, since it is anyway
-"frozen enough" as it is blocked on 'system_transition_mutex', which will be released
-only after the entire suspend/hibernation sequence is complete.
-So, to summarize, use [un]lock_system_sleep() instead of directly using
+"frozen enough" as it is blocked on 'system_transition_mutex', which will be
+released only after the entire suspend/hibernation sequence is complete. So, to
+summarize, use [un]lock_system_sleep() instead of directly using
 mutex_[un]lock(&system_transition_mutex). That would prevent freezing failures.
 
 V. Miscellaneous
 ================
 
 /sys/power/pm_freeze_timeout controls how long it will cost at most to freeze
-all user space processes or all freezable kernel threads, in unit of millisecond.
-The default value is 20000, with range of unsigned integer.
+all user space processes or all freezable kernel threads, in unit of
+millisecond. The default value is 20000, with range of unsigned integer.
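For context on the guidance quoted above, here is a minimal driver-side sketch of the recommended pattern. It is not part of this patch; my_update_fw() is an invented name, the void lock_system_sleep()/unlock_system_sleep() signatures are those of kernels from this document's era, and kernel-side code like this is illustration only (it cannot run standalone).

```c
/* Sketch only (not part of this patch): use the system-sleep lock
 * helpers instead of taking system_transition_mutex directly. */
#include <linux/suspend.h>

static void my_update_fw(void)		/* hypothetical driver path */
{
	/*
	 * Safe: if suspend/hibernation already holds
	 * system_transition_mutex, the freezer is told to skip this
	 * task while it blocks here, so freezing cannot fail.
	 */
	lock_system_sleep();

	/* ... work that must not race with system-wide sleep ... */

	unlock_system_sleep();

	/*
	 * By contrast, calling mutex_lock(&system_transition_mutex)
	 * here could leave this task in TASK_UNINTERRUPTIBLE and make
	 * the freezer fail, exactly as described above.
	 */
}
```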
Documentation/power/opp.rst (+17 -15)

···
 SoC framework might choose to disable a higher frequency OPP to safely continue
 operations until that OPP could be re-enabled if possible.
 
-OPP library facilitates this concept in it's implementation. The following
+OPP library facilitates this concept in its implementation. The following
 operational functions operate only on available opps:
-opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq, dev_pm_opp_get_opp_count
+opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq,
+dev_pm_opp_get_opp_count
 
-dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer which can then
-be used for dev_pm_opp_enable/disable functions to make an opp available as required.
+dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer
+which can then be used for dev_pm_opp_enable/disable functions to make an
+opp available as required.
 
 WARNING: Users of OPP library should refresh their availability count using
-get_opp_count if dev_pm_opp_enable/disable functions are invoked for a device, the
-exact mechanism to trigger these or the notification mechanism to other
-dependent subsystems such as cpufreq are left to the discretion of the SoC
-specific framework which uses the OPP library. Similar care needs to be taken
-care to refresh the cpufreq table in cases of these operations.
+get_opp_count if dev_pm_opp_enable/disable functions are invoked for a
+device, the exact mechanism to trigger these or the notification mechanism
+to other dependent subsystems such as cpufreq are left to the discretion of
+the SoC specific framework which uses the OPP library. Similar care needs
+to be taken care to refresh the cpufreq table in cases of these operations.
 
 2. Initial OPP List Registration
 ================================
···
 dev_pm_opp_add
 	Add a new OPP for a specific domain represented by the device pointer.
 	The OPP is defined using the frequency and voltage. Once added, the OPP
-	is assumed to be available and control of it's availability can be done
-	with the dev_pm_opp_enable/disable functions. OPP library internally stores
-	and manages this information in the opp struct. This function may be
-	used by SoC framework to define a optimal list as per the demands of
-	SoC usage environment.
+	is assumed to be available and control of its availability can be done
+	with the dev_pm_opp_enable/disable functions. OPP library
+	internally stores and manages this information in the opp struct.
+	This function may be used by SoC framework to define a optimal list
+	as per the demands of SoC usage environment.
 
 	WARNING:
 	Do not use this function in interrupt context.
···
 
 struct device
 	This is used to identify a domain to the OPP layer. The
-	nature of the device and it's implementation is left to the user of
+	nature of the device and its implementation is left to the user of
 	OPP library such as the SoC framework.
 
 Overall, in a simplistic view, the data structure operations is represented as
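As context for the WARNING quoted above, a hedged sketch of the disable-then-refresh flow. Not part of this patch: soc_cap_freq() is an invented name, and dev_pm_opp_put() assumes a kernel new enough (4.11+) that OPP lookups take a reference; kernel code like this is illustration only.

```c
/* Sketch only (not part of this patch): make one OPP unavailable and
 * refresh the availability count afterwards. */
#include <linux/pm_opp.h>

static int soc_cap_freq(struct device *dev, unsigned long freq)
{
	struct dev_pm_opp *opp;
	int count;

	/* Find the exact OPP (last argument: consider only available OPPs). */
	opp = dev_pm_opp_find_freq_exact(dev, freq, true);
	if (IS_ERR(opp))
		return PTR_ERR(opp);
	dev_pm_opp_put(opp);

	/* Temporarily make this frequency unavailable. */
	dev_pm_opp_disable(dev, freq);

	/* Refresh the availability count; notifying dependents such as
	 * cpufreq is left to the SoC framework's discretion. */
	count = dev_pm_opp_get_opp_count(dev);
	return count < 0 ? count : 0;
}
```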
Documentation/power/pci.rst (+14 -14)

···
 2.4. System-Wide Power Transitions
 ----------------------------------
 There are a few different types of system-wide power transitions, described in
-Documentation/driver-api/pm/devices.rst. Each of them requires devices to be handled
-in a specific way and the PM core executes subsystem-level power management
-callbacks for this purpose. They are executed in phases such that each phase
-involves executing the same subsystem-level callback for every device belonging
-to the given subsystem before the next phase begins. These phases always run
-after tasks have been frozen.
+Documentation/driver-api/pm/devices.rst. Each of them requires devices to be
+handled in a specific way and the PM core executes subsystem-level power
+management callbacks for this purpose. They are executed in phases such that
+each phase involves executing the same subsystem-level callback for every device
+belonging to the given subsystem before the next phase begins. These phases
+always run after tasks have been frozen.
 
 2.4.1. System Suspend
 ^^^^^^^^^^^^^^^^^^^^^
···
 pre-hibernation memory contents to be restored before the pre-hibernation system
 activity can be resumed.
 
-As described in Documentation/driver-api/pm/devices.rst, the hibernation image is loaded
-into memory by a fresh instance of the kernel, called the boot kernel, which in
-turn is loaded and run by a boot loader in the usual way. After the boot kernel
-has loaded the image, it needs to replace its own code and data with the code
-and data of the "hibernated" kernel stored within the image, called the image
-kernel. For this purpose all devices are frozen just like before creating
+As described in Documentation/driver-api/pm/devices.rst, the hibernation image
+is loaded into memory by a fresh instance of the kernel, called the boot kernel,
+which in turn is loaded and run by a boot loader in the usual way. After the
+boot kernel has loaded the image, it needs to replace its own code and data with
+the code and data of the "hibernated" kernel stored within the image, called the
+image kernel. For this purpose all devices are frozen just like before creating
 the image during hibernation, in the
 
 prepare, freeze, freeze_noirq
···
 
 At the time of this writing there are two ways to define power management
 callbacks for a PCI device driver, the recommended one, based on using a
-dev_pm_ops structure described in Documentation/driver-api/pm/devices.rst, and the
-"legacy" one, in which the .suspend(), .suspend_late(), .resume_early(), and
+dev_pm_ops structure described in Documentation/driver-api/pm/devices.rst, and
+the "legacy" one, in which the .suspend(), .suspend_late(), .resume_early(), and
 .resume() callbacks from struct pci_driver are used. The legacy approach,
 however, doesn't allow one to define runtime power management callbacks and is
 not really suitable for any new drivers. Therefore it is not covered by this
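The last hunk above contrasts the recommended dev_pm_ops style with the legacy struct pci_driver callbacks; a hedged sketch of the recommended form follows. Not part of this patch; all my_* names are invented, and kernel code like this is illustration only.

```c
/* Sketch only (not part of this patch): dev_pm_ops-based PM callbacks
 * for a PCI driver, instead of the legacy .suspend/.resume fields. */
#include <linux/pci.h>
#include <linux/pm.h>

static int my_suspend(struct device *dev) { /* quiesce the device */ return 0; }
static int my_resume(struct device *dev)  { /* reinit the device  */ return 0; }

/* Fills in the system sleep callbacks of a dev_pm_ops structure. */
static SIMPLE_DEV_PM_OPS(my_pm_ops, my_suspend, my_resume);

static struct pci_driver my_driver = {
	.name      = "my_pci_dev",	/* invented */
	.driver.pm = &my_pm_ops,	/* recommended: dev_pm_ops */
	/* legacy .suspend/.resume fields intentionally left unused */
};
```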
Documentation/power/pm_qos_interface.rst (+13 -13)

···
 
 Two different PM QoS frameworks are available:
 1. PM QoS classes for cpu_dma_latency
-2. the per-device PM QoS framework provides the API to manage the per-device latency
-constraints and PM QoS flags.
+2. The per-device PM QoS framework provides the API to manage the
+   per-device latency constraints and PM QoS flags.
 
 Each parameters have defined units:
···
 pm_qos API functions.
 
 void pm_qos_update_request(handle, new_target_value):
-Will update the list element pointed to by the handle with the new target value
-and recompute the new aggregated target, calling the notification tree if the
-target is changed.
+Will update the list element pointed to by the handle with the new target
+value and recompute the new aggregated target, calling the notification tree
+if the target is changed.
 
 void pm_qos_remove_request(handle):
-Will remove the element. After removal it will update the aggregate target and
-call the notification tree if the target was changed as a result of removing
-the request.
+Will remove the element. After removal it will update the aggregate target
+and call the notification tree if the target was changed as a result of
+removing the request.
 
 int pm_qos_request(param_class):
 Returns the aggregated value for a given PM QoS class.
···
 change the value of the PM_QOS_FLAG_NO_POWER_OFF flag.
 
 void dev_pm_qos_hide_flags(device)
-Drop the request added by dev_pm_qos_expose_flags() from the device's PM QoS list
-of flags and remove sysfs attribute pm_qos_no_power_off from the device's power
-directory.
+Drop the request added by dev_pm_qos_expose_flags() from the device's PM QoS
+list of flags and remove sysfs attribute pm_qos_no_power_off from the device's
+power directory.
 
 Notification mechanisms:
···
 Adds a notification callback function for the device for a particular request
 type.
 
-The callback is called when the aggregated value of the device constraints list
-is changed.
+The callback is called when the aggregated value of the device constraints
+list is changed.
 
 int dev_pm_qos_remove_notifier(device, notifier, type):
 Removes the notification callback function for the device.
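A hedged sketch of the request lifecycle summarized in the hunks above (add, update, remove). Not part of this patch; it uses the pm_qos_* class API this document describes (replaced by cpu_latency_qos_* in later kernels), my_req and the latency values are invented, and kernel code like this is illustration only.

```c
/* Sketch only (not part of this patch): cpu_dma_latency request
 * lifecycle with the PM QoS class API. */
#include <linux/pm_qos.h>

static struct pm_qos_request my_req;	/* the "handle" in the text */

static void my_start_io(void)
{
	/* Request at most 20 usec of CPU DMA latency. */
	pm_qos_add_request(&my_req, PM_QOS_CPU_DMA_LATENCY, 20);
}

static void my_relax_io(void)
{
	/* Recomputes the aggregate; notifiers fire only if it changed. */
	pm_qos_update_request(&my_req, 100);
}

static void my_stop_io(void)
{
	/* Removal also updates the aggregate and may fire notifiers. */
	pm_qos_remove_request(&my_req);
}
```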
Documentation/power/runtime_pm.rst (+2 -2)

···
 `unsigned int runtime_auto;`
 - if set, indicates that the user space has allowed the device driver to
   power manage the device at run time via the /sys/devices/.../power/control
-  `interface;` it may only be modified with the help of the pm_runtime_allow()
-  and pm_runtime_forbid() helper functions
+  `interface;` it may only be modified with the help of the
+  pm_runtime_allow() and pm_runtime_forbid() helper functions
 
 `unsigned int no_callbacks;`
 - indicates that the device does not use the runtime PM callbacks (see
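The two helpers named in the hunk above are the only sanctioned way to flip runtime_auto; a brief sketch, not part of this patch (my_set_policy() is an invented name, kernel code shown for illustration only):

```c
/* Sketch only (not part of this patch): runtime_auto is toggled via
 * these helpers, backing /sys/devices/.../power/control. */
#include <linux/pm_runtime.h>

static void my_set_policy(struct device *dev)
{
	pm_runtime_forbid(dev);	/* like writing "on": keep full power */
	pm_runtime_allow(dev);	/* like writing "auto": permit runtime PM */
}
```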
Documentation/power/suspend-and-cpuhotplug.rst (+4 -3)

···
 * Release system_transition_mutex lock.
 
 
-It is to be noted here that the system_transition_mutex lock is acquired at the very
-beginning, when we are just starting out to suspend, and then released only
+It is to be noted here that the system_transition_mutex lock is acquired at the
+very beginning, when we are just starting out to suspend, and then released only
 after the entire cycle is complete (i.e., suspend + resume).
 
 ::
···
 - kernel/power/process.c : freeze_processes(), thaw_processes()
 - kernel/power/suspend.c : suspend_prepare(), suspend_enter(), suspend_finish()
-- kernel/cpu.c: cpu_[up|down](), _cpu_[up|down](), [disable|enable]_nonboot_cpus()
+- kernel/cpu.c: cpu_[up|down](), _cpu_[up|down](),
+  [disable|enable]_nonboot_cpus()
Documentation/power/swsusp.rst (+8 -6)

···
 echo 1 > /proc/acpi/sleep # for standby
 echo 2 > /proc/acpi/sleep # for suspend to ram
-echo 3 > /proc/acpi/sleep # for suspend to ram, but with more power conservative
+echo 3 > /proc/acpi/sleep # for suspend to ram, but with more power
+                          # conservative
 echo 4 > /proc/acpi/sleep # for suspend to disk
 echo 5 > /proc/acpi/sleep # for shutdown unfriendly the system
···
 
 A:
 The freezing of tasks is a mechanism by which user space processes and some
-kernel threads are controlled during hibernation or system-wide suspend (on some
-architectures). See freezing-of-tasks.txt for details.
+kernel threads are controlled during hibernation or system-wide suspend (on
+some architectures). See freezing-of-tasks.txt for details.
 
 Q:
 What is the difference between "platform" and "shutdown"?
···
 suspend(PMSG_FREEZE): devices are frozen so that they don't interfere
 with state snapshot
 
-state snapshot: copy of whole used memory is taken with interrupts disabled
+state snapshot: copy of whole used memory is taken with interrupts
+disabled
 
 resume(): devices are woken up so that we can write image to swap
···
 
 A:
 Generally, yes, you can. However, it requires you to use the "resume=" and
-"resume_offset=" kernel command line parameters, so the resume from a swap file
-cannot be initiated from an initrd or initramfs image. See
+"resume_offset=" kernel command line parameters, so the resume from a swap
+file cannot be initiated from an initrd or initramfs image. See
 swsusp-and-swap-files.txt for details.
 
 Q:
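For the swap-file Q/A in the last hunk: the two parameters go on the kernel command line, e.g. in the boot loader configuration. The device name and offset below are invented placeholders, not values from this patch; the real offset is determined by the swap file's physical location on disk, as explained in swsusp-and-swap-files.txt.

```
resume=/dev/sdXN resume_offset=<offset-of-swap-file-header>
```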