Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-6.18-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
"These are cpufreq fixes and cleanups on top of the material merged
previously, a power management core code fix, and updates of the
runtime PM framework, including unit tests, documentation updates,
and the introduction of auto-cleanup macros for runtime PM "resume
and get" and "get without resuming" operations.

Specifics:

- Make cpufreq drivers that set the default CPU transition latency to
CPUFREQ_ETERNAL specify a proper default transition latency value
instead, which addresses a regression introduced during the 6.6
cycle that broke CPUFREQ_ETERNAL handling (Rafael Wysocki)

- Make the cpufreq CPPC driver use a proper transition delay value
when CPUFREQ_ETERNAL is returned by cppc_get_transition_latency()
to indicate an error condition (Rafael Wysocki)

- Make cppc_get_transition_latency() return a negative error code to
indicate error conditions instead of using CPUFREQ_ETERNAL for this
purpose and drop CPUFREQ_ETERNAL that has no other users (Rafael
Wysocki, Gopi Krishna Menon)

- Fix device leak in the mediatek cpufreq driver (Johan Hovold)

- Set target frequency on all CPUs sharing a policy during frequency
updates in the tegra186 cpufreq driver and make it initialize all
cores to max frequencies (Aaron Kling)

- Rust cpufreq helper cleanup (Thorsten Blum)

- Make the pm_runtime_put*() family of functions return 1 when the
given device is already suspended, which is consistent with the
documentation (Brian Norris)

- Add basic kunit tests for runtime PM API contracts and update
return values in kerneldoc comments for the runtime PM API (Brian
Norris, Dan Carpenter)

- Add auto-cleanup macros for runtime PM "resume and get" and "get
without resuming" operations, use one of them in the PCI core, and
drop the existing "free" macro introduced for a similar purpose but
somewhat cumbersome to use (Rafael Wysocki)

- Make the core power management code avoid waiting on device links
marked as SYNC_STATE_ONLY, which is consistent with the handling of
those device links elsewhere (Pin-yen Lin)"

* tag 'pm-6.18-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
docs/zh_CN: Fix malformed table
docs/zh_TW: Fix malformed table
PM: runtime: Fix error checking for kunit_device_register()
PM: runtime: Introduce one more usage counter guard
cpufreq: Drop unused symbol CPUFREQ_ETERNAL
ACPI: CPPC: Do not use CPUFREQ_ETERNAL as an error value
cpufreq: CPPC: Avoid using CPUFREQ_ETERNAL as transition delay
cpufreq: Make drivers using CPUFREQ_ETERNAL specify transition latency
PM: runtime: Drop DEFINE_FREE() for pm_runtime_put()
PCI/sysfs: Use runtime PM guard macro for auto-cleanup
PM: runtime: Add auto-cleanup macros for "resume and get" operations
cpufreq: tegra186: Initialize all cores to max frequencies
cpufreq: tegra186: Set target frequency for all cpus in policy
rust: cpufreq: streamline find_supply_names
cpufreq: mediatek: fix device leak on probe failure
PM: sleep: Do not wait on SYNC_STATE_ONLY device links
PM: runtime: Update kerneldoc return codes
PM: runtime: Make put{,_sync}() return 1 when already suspended
PM: runtime: Add basic kunit tests for API contracts

+426 -101
-4
Documentation/admin-guide/pm/cpufreq.rst
···
 The time it takes to switch the CPUs belonging to this policy from one
 P-state to another, in nanoseconds.

-If unknown or if known to be so high that the scaling driver does not
-work with the `ondemand`_ governor, -1 (:c:macro:`CPUFREQ_ETERNAL`)
-will be returned by reads from this attribute.
-
 ``related_cpus``
 List of all (online and offline) CPUs belonging to this policy.
+1 -2
Documentation/cpu-freq/cpu-drivers.rst
···
 +-----------------------------------+--------------------------------------+
 |policy->cpuinfo.transition_latency | the time it takes on this CPU to     |
 |                                   | switch between two frequencies in    |
-|                                   | nanoseconds (if appropriate, else    |
-|                                   | specify CPUFREQ_ETERNAL)             |
+|                                   | nanoseconds                          |
 +-----------------------------------+--------------------------------------+
 |policy->cur                        | The current operating frequency of   |
 |                                   | this CPU (if appropriate)            |
+1 -2
Documentation/translations/zh_CN/cpu-freq/cpu-drivers.rst
···
 |                                   |                                      |
 +-----------------------------------+--------------------------------------+
 |policy->cpuinfo.transition_latency | CPU在两个频率之间切换所需的时间,以  |
-|                                   | 纳秒为单位(如不适用,设定为         |
-|                                   | CPUFREQ_ETERNAL)                    |
+|                                   | 纳秒为单位                           |
 |                                   |                                      |
 +-----------------------------------+--------------------------------------+
 |policy->cur                        | 该CPU当前的工作频率(如适用)        |
+1 -2
Documentation/translations/zh_TW/cpu-freq/cpu-drivers.rst
···
 |                                   |                                      |
 +-----------------------------------+--------------------------------------+
 |policy->cpuinfo.transition_latency | CPU在兩個頻率之間切換所需的時間,以  |
-|                                   | 納秒爲單位(如不適用,設定爲         |
-|                                   | CPUFREQ_ETERNAL)                    |
+|                                   | 納秒爲單位                           |
 |                                   |                                      |
 +-----------------------------------+--------------------------------------+
 |policy->cur                        | 該CPU當前的工作頻率(如適用)        |
+7 -9
drivers/acpi/cppc_acpi.c
···
  * If desired_reg is in the SystemMemory or SystemIo ACPI address space,
  * then assume there is no latency.
  */
-unsigned int cppc_get_transition_latency(int cpu_num)
+int cppc_get_transition_latency(int cpu_num)
 {
 	/*
 	 * Expected transition latency is based on the PCCT timing values
···
 	 * completion of a command before issuing the next command,
 	 * in microseconds.
 	 */
-	unsigned int latency_ns = 0;
 	struct cpc_desc *cpc_desc;
 	struct cpc_register_resource *desired_reg;
 	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu_num);
 	struct cppc_pcc_data *pcc_ss_data;
+	int latency_ns = 0;

 	cpc_desc = per_cpu(cpc_desc_ptr, cpu_num);
 	if (!cpc_desc)
-		return CPUFREQ_ETERNAL;
+		return -ENODATA;

 	desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
 	if (CPC_IN_SYSTEM_MEMORY(desired_reg) || CPC_IN_SYSTEM_IO(desired_reg))
 		return 0;
-	else if (!CPC_IN_PCC(desired_reg))
-		return CPUFREQ_ETERNAL;

-	if (pcc_ss_id < 0)
-		return CPUFREQ_ETERNAL;
+	if (!CPC_IN_PCC(desired_reg) || pcc_ss_id < 0)
+		return -ENODATA;

 	pcc_ss_data = pcc_data[pcc_ss_id];
 	if (pcc_ss_data->pcc_mpar)
 		latency_ns = 60 * (1000 * 1000 * 1000 / pcc_ss_data->pcc_mpar);

-	latency_ns = max(latency_ns, pcc_ss_data->pcc_nominal * 1000);
-	latency_ns = max(latency_ns, pcc_ss_data->pcc_mrtt * 1000);
+	latency_ns = max_t(int, latency_ns, pcc_ss_data->pcc_nominal * 1000);
+	latency_ns = max_t(int, latency_ns, pcc_ss_data->pcc_mrtt * 1000);

 	return latency_ns;
 }
+6
drivers/base/Kconfig
···
 	depends on KUNIT=y
 	default KUNIT_ALL_TESTS

+config PM_RUNTIME_KUNIT_TEST
+	tristate "KUnit Tests for runtime PM" if !KUNIT_ALL_TESTS
+	depends on KUNIT
+	depends on PM
+	default KUNIT_ALL_TESTS
+
 config HMEM_REPORTING
 	bool
 	default n
+1
drivers/base/base.h
···
 void device_links_no_driver(struct device *dev);
 bool device_links_busy(struct device *dev);
 void device_links_unbind_consumers(struct device *dev);
+bool device_link_flag_is_sync_state_only(u32 flags);
 void fw_devlink_drivers_done(void);
 void fw_devlink_probing_done(void);
+1 -1
drivers/base/core.c
···
 #define DL_MARKER_FLAGS		(DL_FLAG_INFERRED | \
 				 DL_FLAG_CYCLE | \
 				 DL_FLAG_MANAGED)
-static inline bool device_link_flag_is_sync_state_only(u32 flags)
+bool device_link_flag_is_sync_state_only(u32 flags)
 {
 	return (flags & ~DL_MARKER_FLAGS) == DL_FLAG_SYNC_STATE_ONLY;
 }
+1
drivers/base/power/Makefile
···
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 obj-$(CONFIG_PM_QOS_KUNIT_TEST) += qos-test.o
+obj-$(CONFIG_PM_RUNTIME_KUNIT_TEST) += runtime-test.o

 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
+4 -2
drivers/base/power/main.c
···
 	 * walking.
 	 */
 	dev_for_each_link_to_supplier(link, dev)
-		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
+		if (READ_ONCE(link->status) != DL_STATE_DORMANT &&
+		    !device_link_flag_is_sync_state_only(link->flags))
 			dpm_wait(link->supplier, async);

 	device_links_read_unlock(idx);
···
 	 * unregistration).
 	 */
 	dev_for_each_link_to_consumer(link, dev)
-		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
+		if (READ_ONCE(link->status) != DL_STATE_DORMANT &&
+		    !device_link_flag_is_sync_state_only(link->flags))
 			dpm_wait(link->consumer, async);

 	device_links_read_unlock(idx);
+253
drivers/base/power/runtime-test.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2025 Google, Inc.
+ */
+
+#include <linux/cleanup.h>
+#include <linux/pm_runtime.h>
+#include <kunit/device.h>
+#include <kunit/test.h>
+
+#define DEVICE_NAME "pm_runtime_test_device"
+
+static void pm_runtime_depth_test(struct kunit *test)
+{
+	struct device *dev = kunit_device_register(test, DEVICE_NAME);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+	pm_runtime_enable(dev);
+
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_get_sync(dev)); /* "already active" */
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+}
+
+/* Test pm_runtime_put() and friends when already suspended. */
+static void pm_runtime_already_suspended_test(struct kunit *test)
+{
+	struct device *dev = kunit_device_register(test, DEVICE_NAME);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+	pm_runtime_enable(dev);
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+
+	pm_runtime_get_noresume(dev);
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_barrier(dev)); /* no wakeup needed */
+	pm_runtime_put(dev);
+
+	pm_runtime_get_noresume(dev);
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_put_sync(dev));
+
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_suspend(dev));
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_autosuspend(dev));
+	KUNIT_EXPECT_EQ(test, 1, pm_request_autosuspend(dev));
+
+	pm_runtime_get_noresume(dev);
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_put_sync_autosuspend(dev));
+
+	pm_runtime_get_noresume(dev);
+	pm_runtime_put_autosuspend(dev);
+
+	/* Grab 2 refcounts */
+	pm_runtime_get_noresume(dev);
+	pm_runtime_get_noresume(dev);
+	/* The first put() sees usage_count 1 */
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync_autosuspend(dev));
+	/* The second put() sees usage_count 0 but tells us "already suspended". */
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_put_sync_autosuspend(dev));
+
+	/* Should have remained suspended the whole time. */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+}
+
+static void pm_runtime_idle_test(struct kunit *test)
+{
+	struct device *dev = kunit_device_register(test, DEVICE_NAME);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+	pm_runtime_enable(dev);
+
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_idle(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+	pm_runtime_put_noidle(dev);
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_idle(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_idle(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_request_idle(dev));
+}
+
+static void pm_runtime_disabled_test(struct kunit *test)
+{
+	struct device *dev = kunit_device_register(test, DEVICE_NAME);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+	/* Never called pm_runtime_enable() */
+	KUNIT_EXPECT_FALSE(test, pm_runtime_enabled(dev));
+
+	/* "disabled" is treated as "active" */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+	KUNIT_EXPECT_FALSE(test, pm_runtime_suspended(dev));
+
+	/*
+	 * Note: these "fail", but they still acquire/release refcounts, so
+	 * keep them balanced.
+	 */
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_get(dev));
+	pm_runtime_put(dev);
+
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_get_sync(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_put_sync(dev));
+
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_get(dev));
+	pm_runtime_put_autosuspend(dev);
+
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_resume_and_get(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_idle(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_request_idle(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_request_resume(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_request_autosuspend(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_suspend(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_resume(dev));
+	KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_autosuspend(dev));
+
+	/* Still disabled */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+	KUNIT_EXPECT_FALSE(test, pm_runtime_enabled(dev));
+}
+
+static void pm_runtime_error_test(struct kunit *test)
+{
+	struct device *dev = kunit_device_register(test, DEVICE_NAME);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+	pm_runtime_enable(dev);
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+
+	/* Fake a .runtime_resume() error */
+	dev->power.runtime_error = -EIO;
+
+	/*
+	 * Note: these "fail", but they still acquire/release refcounts, so
+	 * keep them balanced.
+	 */
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get(dev));
+	pm_runtime_put(dev);
+
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get_sync(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_put_sync(dev));
+
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get(dev));
+	pm_runtime_put_autosuspend(dev);
+
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_put_sync_autosuspend(dev));
+
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_resume_and_get(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_idle(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_request_idle(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_request_resume(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_request_autosuspend(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_suspend(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_resume(dev));
+	KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_autosuspend(dev));
+
+	/* Error is still pending */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+	KUNIT_EXPECT_EQ(test, -EIO, dev->power.runtime_error);
+	/* Clear error */
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_set_suspended(dev));
+	KUNIT_EXPECT_EQ(test, 0, dev->power.runtime_error);
+	/* Still suspended */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_get(dev));
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_barrier(dev)); /* resume was pending */
+	pm_runtime_put(dev);
+	pm_runtime_suspend(dev); /* flush the put(), to suspend */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
+
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
+	pm_runtime_put_autosuspend(dev);
+
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_resume_and_get(dev));
+
+	/*
+	 * The following should all return -EAGAIN (usage is non-zero) or 1
+	 * (already resumed).
+	 */
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_idle(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_request_idle(dev));
+	KUNIT_EXPECT_EQ(test, 1, pm_request_resume(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_request_autosuspend(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_suspend(dev));
+	KUNIT_EXPECT_EQ(test, 1, pm_runtime_resume(dev));
+	KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_autosuspend(dev));
+
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
+
+	/* Suspended again */
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+}
+
+/*
+ * Explore a typical probe() sequence in which a device marks itself powered,
+ * but doesn't hold any runtime PM reference, so it suspends as soon as it goes
+ * idle.
+ */
+static void pm_runtime_probe_active_test(struct kunit *test)
+{
+	struct device *dev = kunit_device_register(test, DEVICE_NAME);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+	KUNIT_EXPECT_TRUE(test, pm_runtime_status_suspended(dev));
+
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_set_active(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+
+	pm_runtime_enable(dev);
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+
+	/* Nothing to flush. We stay active. */
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_barrier(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
+
+	/* Ask for idle? Now we suspend. */
+	KUNIT_EXPECT_EQ(test, 0, pm_runtime_idle(dev));
+	KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
+}
+
+static struct kunit_case pm_runtime_test_cases[] = {
+	KUNIT_CASE(pm_runtime_depth_test),
+	KUNIT_CASE(pm_runtime_already_suspended_test),
+	KUNIT_CASE(pm_runtime_idle_test),
+	KUNIT_CASE(pm_runtime_disabled_test),
+	KUNIT_CASE(pm_runtime_error_test),
+	KUNIT_CASE(pm_runtime_probe_active_test),
+	{}
+};
+
+static struct kunit_suite pm_runtime_test_suite = {
+	.name = "pm_runtime_test_cases",
+	.test_cases = pm_runtime_test_cases,
+};
+
+kunit_test_suite(pm_runtime_test_suite);
+MODULE_DESCRIPTION("Runtime power management unit test suite");
+MODULE_LICENSE("GPL");
+5
drivers/base/power/runtime.c
···
 	if (retval < 0)
 		;	/* Conditions are wrong. */

+	else if ((rpmflags & RPM_GET_PUT) && retval == 1)
+		;	/* put() is allowed in RPM_SUSPENDED */
+
 	/* Idle notifications are allowed only in the RPM_ACTIVE state. */
 	else if (dev->power.runtime_status != RPM_ACTIVE)
 		retval = -EAGAIN;
···
 	if (dev->power.runtime_status == RPM_ACTIVE &&
 	    dev->power.last_status == RPM_ACTIVE)
 		retval = 1;
+	else if (rpmflags & RPM_TRANSPARENT)
+		goto out;
 	else
 		retval = -EACCES;
 	}
+4 -4
drivers/cpufreq/amd-pstate.c
···
  */
 static u32 amd_pstate_get_transition_delay_us(unsigned int cpu)
 {
-	u32 transition_delay_ns;
+	int transition_delay_ns;

 	transition_delay_ns = cppc_get_transition_latency(cpu);
-	if (transition_delay_ns == CPUFREQ_ETERNAL) {
+	if (transition_delay_ns < 0) {
 		if (cpu_feature_enabled(X86_FEATURE_AMD_FAST_CPPC))
 			return AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY;
 		else
···
  */
 static u32 amd_pstate_get_transition_latency(unsigned int cpu)
 {
-	u32 transition_latency;
+	int transition_latency;

 	transition_latency = cppc_get_transition_latency(cpu);
-	if (transition_latency == CPUFREQ_ETERNAL)
+	if (transition_latency < 0)
 		return AMD_PSTATE_TRANSITION_LATENCY;

 	return transition_latency;
+12 -2
drivers/cpufreq/cppc_cpufreq.c
···
 	return 0;
 }

+static unsigned int __cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
+{
+	int transition_latency_ns = cppc_get_transition_latency(cpu);
+
+	if (transition_latency_ns < 0)
+		return CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS / NSEC_PER_USEC;
+
+	return transition_latency_ns / NSEC_PER_USEC;
+}
+
 /*
  * The PCC subspace describes the rate at which platform can accept commands
  * on the shared PCC channel (including READs which do not count towards freq
···
 			return 10000;
 		}
 	}
-	return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
+	return __cppc_cpufreq_get_transition_delay_us(cpu);
 }
 #else
 static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
 {
-	return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
+	return __cppc_cpufreq_get_transition_delay_us(cpu);
 }
 #endif
+1 -1
drivers/cpufreq/cpufreq-dt.c
···
 	transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
 	if (!transition_latency)
-		transition_latency = CPUFREQ_ETERNAL;
+		transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;

 	cpumask_copy(policy->cpus, priv->cpus);
 	policy->driver_data = priv;
+1 -1
drivers/cpufreq/imx6q-cpufreq.c
···
 	}

 	if (of_property_read_u32(np, "clock-latency", &transition_latency))
-		transition_latency = CPUFREQ_ETERNAL;
+		transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;

 	/*
 	 * Calculate the ramp time for max voltage change in the
+1 -1
drivers/cpufreq/mediatek-cpufreq-hw.c
···
 	latency = readl_relaxed(data->reg_bases[REG_FREQ_LATENCY]) * 1000;
 	if (!latency)
-		latency = CPUFREQ_ETERNAL;
+		latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;

 	policy->cpuinfo.transition_latency = latency;
 	policy->fast_switch_possible = true;
+11 -3
drivers/cpufreq/mediatek-cpufreq.c
···
 	}

 	info->cpu_clk = clk_get(cpu_dev, "cpu");
-	if (IS_ERR(info->cpu_clk))
-		return dev_err_probe(cpu_dev, PTR_ERR(info->cpu_clk),
-				     "cpu%d: failed to get cpu clk\n", cpu);
+	if (IS_ERR(info->cpu_clk)) {
+		ret = PTR_ERR(info->cpu_clk);
+		dev_err_probe(cpu_dev, ret, "cpu%d: failed to get cpu clk\n", cpu);
+		goto out_put_cci_dev;
+	}

 	info->inter_clk = clk_get(cpu_dev, "intermediate");
 	if (IS_ERR(info->inter_clk)) {
···
 out_free_mux_clock:
 	clk_put(info->cpu_clk);

+out_put_cci_dev:
+	if (info->soc_data->ccifreq_supported)
+		put_device(info->cci_dev);
+
 	return ret;
 }
···
 	clk_put(info->inter_clk);
 	dev_pm_opp_of_cpumask_remove_table(&info->cpus);
 	dev_pm_opp_unregister_notifier(info->cpu_dev, &info->opp_nb);
+	if (info->soc_data->ccifreq_supported)
+		put_device(info->cci_dev);
 }

 static int mtk_cpufreq_init(struct cpufreq_policy *policy)
+4 -8
drivers/cpufreq/rcpufreq_dt.rs
···
 /// Finds supply name for the CPU from DT.
 fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> {
 	// Try "cpu0" for older DTs, fallback to "cpu".
-	let name = (cpu.as_u32() == 0)
+	(cpu.as_u32() == 0)
 		.then(|| find_supply_name_exact(dev, "cpu0"))
 		.flatten()
-		.or_else(|| find_supply_name_exact(dev, "cpu"))?;
-
-	let mut list = KVec::with_capacity(1, GFP_KERNEL).ok()?;
-	list.push(name, GFP_KERNEL).ok()?;
-
-	Some(list)
+		.or_else(|| find_supply_name_exact(dev, "cpu"))
+		.and_then(|name| kernel::kvec![name].ok())
 }

 /// Represents the cpufreq dt device.
···
 	let mut transition_latency = opp_table.max_transition_latency_ns() as u32;
 	if transition_latency == 0 {
-		transition_latency = cpufreq::ETERNAL_LATENCY_NS;
+		transition_latency = cpufreq::DEFAULT_TRANSITION_LATENCY_NS;
 	}

 	policy
+1 -1
drivers/cpufreq/scmi-cpufreq.c
···
 	latency = perf_ops->transition_latency_get(ph, domain);
 	if (!latency)
-		latency = CPUFREQ_ETERNAL;
+		latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;

 	policy->cpuinfo.transition_latency = latency;
+1 -1
drivers/cpufreq/scpi-cpufreq.c
···
 	latency = scpi_ops->get_transition_latency(cpu_dev);
 	if (!latency)
-		latency = CPUFREQ_ETERNAL;
+		latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;

 	policy->cpuinfo.transition_latency = latency;
+1 -1
drivers/cpufreq/spear-cpufreq.c
···
 	if (of_property_read_u32(np, "clock-latency",
 				 &spear_cpufreq.transition_latency))
-		spear_cpufreq.transition_latency = CPUFREQ_ETERNAL;
+		spear_cpufreq.transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;

 	cnt = of_property_count_u32_elems(np, "cpufreq_tbl");
 	if (cnt <= 0) {
+27 -8
drivers/cpufreq/tegra186-cpufreq.c
···
 {
 	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
 	struct cpufreq_frequency_table *tbl = policy->freq_table + index;
-	unsigned int edvd_offset = data->cpus[policy->cpu].edvd_offset;
+	unsigned int edvd_offset;
 	u32 edvd_val = tbl->driver_data;
+	u32 cpu;

-	writel(edvd_val, data->regs + edvd_offset);
+	for_each_cpu(cpu, policy->cpus) {
+		edvd_offset = data->cpus[cpu].edvd_offset;
+		writel(edvd_val, data->regs + edvd_offset);
+	}

 	return 0;
 }
···
 static struct cpufreq_frequency_table *init_vhint_table(
 	struct platform_device *pdev, struct tegra_bpmp *bpmp,
-	struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id)
+	struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id,
+	int *num_rates)
 {
 	struct cpufreq_frequency_table *table;
 	struct mrq_cpu_vhint_request req;
 	struct tegra_bpmp_message msg;
 	struct cpu_vhint_data *data;
-	int err, i, j, num_rates = 0;
+	int err, i, j;
 	dma_addr_t phys;
 	void *virt;
···
 		goto free;
 	}

+	*num_rates = 0;
 	for (i = data->vfloor; i <= data->vceil; i++) {
 		u16 ndiv = data->ndiv[i];
···
 		if (i > 0 && ndiv == data->ndiv[i - 1])
 			continue;

-		num_rates++;
+		(*num_rates)++;
 	}

-	table = devm_kcalloc(&pdev->dev, num_rates + 1, sizeof(*table),
+	table = devm_kcalloc(&pdev->dev, *num_rates + 1, sizeof(*table),
 			     GFP_KERNEL);
 	if (!table) {
 		table = ERR_PTR(-ENOMEM);
···
 {
 	struct tegra186_cpufreq_data *data;
 	struct tegra_bpmp *bpmp;
-	unsigned int i = 0, err;
+	unsigned int i = 0, err, edvd_offset;
+	int num_rates = 0;
+	u32 edvd_val, cpu;

 	data = devm_kzalloc(&pdev->dev,
 			    struct_size(data, clusters, TEGRA186_NUM_CLUSTERS),
···
 	for (i = 0; i < TEGRA186_NUM_CLUSTERS; i++) {
 		struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];

-		cluster->table = init_vhint_table(pdev, bpmp, cluster, i);
+		cluster->table = init_vhint_table(pdev, bpmp, cluster, i, &num_rates);
 		if (IS_ERR(cluster->table)) {
 			err = PTR_ERR(cluster->table);
 			goto put_bpmp;
+		} else if (!num_rates) {
+			err = -EINVAL;
+			goto put_bpmp;
+		}
+
+		for (cpu = 0; cpu < ARRAY_SIZE(tegra186_cpus); cpu++) {
+			if (data->cpus[cpu].bpmp_cluster_id == i) {
+				edvd_val = cluster->table[num_rates - 1].driver_data;
+				edvd_offset = data->cpus[cpu].edvd_offset;
+				writel(edvd_val, data->regs + edvd_offset);
+			}
 		}
 	}
+3 -2
drivers/pci/pci-sysfs.c
···
 		return count;
 	}

-	pm_runtime_get_sync(dev);
-	struct device *pmdev __free(pm_runtime_put) = dev;
+	ACQUIRE(pm_runtime_active_try, pm)(dev);
+	if (ACQUIRE_ERR(pm_runtime_active_try, &pm))
+		return -ENXIO;

 	if (sysfs_streq(buf, "default")) {
 		pci_init_reset_methods(pdev);
+3 -3
include/acpi/cppc_acpi.h
···
 extern bool acpi_cpc_valid(void);
 extern bool cppc_allow_fast_switch(void);
 extern int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data);
-extern unsigned int cppc_get_transition_latency(int cpu);
+extern int cppc_get_transition_latency(int cpu);
 extern bool cpc_ffh_supported(void);
 extern bool cpc_supported_by_cpu(void);
 extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
···
 {
 	return false;
 }
-static inline unsigned int cppc_get_transition_latency(int cpu)
+static inline int cppc_get_transition_latency(int cpu)
 {
-	return CPUFREQ_ETERNAL;
+	return -ENODATA;
 }
 static inline bool cpc_ffh_supported(void)
 {
+2 -4
include/linux/cpufreq.h
···
  *********************************************************************/
 /*
  * Frequency values here are CPU kHz
- *
- * Maximum transition latency is in nanoseconds - if it's unknown,
- * CPUFREQ_ETERNAL shall be used.
  */

-#define CPUFREQ_ETERNAL			(-1)
+#define CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS	NSEC_PER_MSEC
+
 #define CPUFREQ_NAME_LEN		16
 /* Print length for names. Extra 1 space for accommodating '\n' in prints */
 #define CPUFREQ_NAME_PLEN		(CPUFREQ_NAME_LEN + 1)
+69 -36
include/linux/pm_runtime.h
···
 #define RPM_GET_PUT		0x04	/* Increment/decrement the usage_count */
 #define RPM_AUTO		0x08	/* Use autosuspend_delay */
+#define RPM_TRANSPARENT		0x10	/* Succeed if runtime PM is disabled */
 
 /*
  * Use this for defining a set of PM operations to be used in all situations
···
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero, Runtime PM status change ongoing
- *   or device not in %RPM_ACTIVE state.
+ * * -EAGAIN: Runtime PM usage counter non-zero, Runtime PM status change
+ *   ongoing or device not in %RPM_ACTIVE state.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  * Other values and conditions for the above values are possible as returned by
  * Runtime PM idle and suspend callbacks.
  */
···
  * @dev: Target device.
  *
  * Return:
+ * * 1: Success; device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter non-zero or Runtime PM status change
+ *   ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  * Other values and conditions for the above values are possible as returned by
  * Runtime PM suspend callbacks.
  */
···
  * engaging its "idle check" callback.
  *
  * Return:
+ * * 1: Success; device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter non-zero or Runtime PM status change
+ *   ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  * Other values and conditions for the above values are possible as returned by
  * Runtime PM suspend callbacks.
  */
···
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero, Runtime PM status change ongoing
- *   or device not in %RPM_ACTIVE state.
+ * * -EAGAIN: Runtime PM usage counter non-zero, Runtime PM status change
+ *   ongoing or device not in %RPM_ACTIVE state.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  */
 static inline int pm_request_idle(struct device *dev)
 {
···
  * equivalent pm_runtime_autosuspend() for @dev asynchronously.
  *
  * Return:
+ * * 1: Success; device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter non-zero or Runtime PM status change
+ *   ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  */
 static inline int pm_request_autosuspend(struct device *dev)
 {
···
 	return __pm_runtime_resume(dev, RPM_GET_PUT);
 }
 
+static inline int pm_runtime_get_active(struct device *dev, int rpmflags)
+{
+	int ret;
+
+	ret = __pm_runtime_resume(dev, RPM_GET_PUT | rpmflags);
+	if (ret < 0) {
+		pm_runtime_put_noidle(dev);
+		return ret;
+	}
+
+	return 0;
+}
+
 /**
  * pm_runtime_resume_and_get - Bump up usage counter of a device and resume it.
  * @dev: Target device.
···
  */
 static inline int pm_runtime_resume_and_get(struct device *dev)
 {
-	int ret;
-
-	ret = __pm_runtime_resume(dev, RPM_GET_PUT);
-	if (ret < 0) {
-		pm_runtime_put_noidle(dev);
-		return ret;
-	}
-
-	return 0;
+	return pm_runtime_get_active(dev, 0);
 }
 
 /**
···
  * equal to 0, queue up a work item for @dev like in pm_request_idle().
  *
  * Return:
+ * * 1: Success. Usage counter dropped to zero, but device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
+ *   change ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  */
 static inline int pm_runtime_put(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC);
 }
-
-DEFINE_FREE(pm_runtime_put, struct device *, if (_T) pm_runtime_put(_T))
 
 /**
  * __pm_runtime_put_autosuspend - Drop device usage counter and queue autosuspend if 0.
···
  * equal to 0, queue up a work item for @dev like in pm_request_autosuspend().
  *
  * Return:
+ * * 1: Success. Usage counter dropped to zero, but device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
+ *   change ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  */
 static inline int __pm_runtime_put_autosuspend(struct device *dev)
 {
···
  * in pm_request_autosuspend().
  *
  * Return:
+ * * 1: Success. Usage counter dropped to zero, but device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
+ *   change ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  */
 static inline int pm_runtime_put_autosuspend(struct device *dev)
 {
 	pm_runtime_mark_last_busy(dev);
 	return __pm_runtime_put_autosuspend(dev);
 }
+
+DEFINE_GUARD(pm_runtime_noresume, struct device *,
+	     pm_runtime_get_noresume(_T), pm_runtime_put_noidle(_T));
+
+DEFINE_GUARD(pm_runtime_active, struct device *,
+	     pm_runtime_get_sync(_T), pm_runtime_put(_T));
+DEFINE_GUARD(pm_runtime_active_auto, struct device *,
+	     pm_runtime_get_sync(_T), pm_runtime_put_autosuspend(_T));
+/*
+ * Use the following guards with ACQUIRE()/ACQUIRE_ERR().
+ *
+ * The difference between the "_try" and "_try_enabled" variants is that the
+ * former do not produce an error when runtime PM is disabled for the given
+ * device.
+ */
+DEFINE_GUARD_COND(pm_runtime_active, _try,
+		  pm_runtime_get_active(_T, RPM_TRANSPARENT))
+DEFINE_GUARD_COND(pm_runtime_active, _try_enabled,
+		  pm_runtime_resume_and_get(_T))
+DEFINE_GUARD_COND(pm_runtime_active_auto, _try,
+		  pm_runtime_get_active(_T, RPM_TRANSPARENT))
+DEFINE_GUARD_COND(pm_runtime_active_auto, _try_enabled,
+		  pm_runtime_resume_and_get(_T))
 
 /**
  * pm_runtime_put_sync - Drop device usage counter and run "idle check" if 0.
···
  * if it returns an error code.
  *
  * Return:
+ * * 1: Success. Usage counter dropped to zero, but device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
+ *   change ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  * Other values and conditions for the above values are possible as returned by
  * Runtime PM suspend callbacks.
  */
···
  * if it returns an error code.
  *
  * Return:
+ * * 1: Success. Usage counter dropped to zero, but device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
- * * -EAGAIN: usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
+ *   change ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  * Other values and conditions for the above values are possible as returned by
  * Runtime PM suspend callbacks.
  */
···
  * if it returns an error code.
  *
  * Return:
+ * * 1: Success. Usage counter dropped to zero, but device was already suspended.
  * * 0: Success.
  * * -EINVAL: Runtime PM error.
  * * -EACCES: Runtime PM disabled.
- * * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
+ * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
+ *   change ongoing.
  * * -EBUSY: Runtime PM child_count non-zero.
  * * -EPERM: Device PM QoS resume latency 0.
  * * -EINPROGRESS: Suspend already in progress.
  * * -ENOSYS: CONFIG_PM not enabled.
- * * 1: Device already suspended.
  * Other values and conditions for the above values are possible as returned by
  * Runtime PM suspend callbacks.
  */
rust/kernel/cpufreq.rs (+4 -3)
···
 const CPUFREQ_NAME_LEN: usize = bindings::CPUFREQ_NAME_LEN as usize;
 
 /// Default transition latency value in nanoseconds.
-pub const ETERNAL_LATENCY_NS: u32 = bindings::CPUFREQ_ETERNAL as u32;
+pub const DEFAULT_TRANSITION_LATENCY_NS: u32 =
+    bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
 
 /// CPU frequency driver flags.
 pub mod flags {
···
 /// The following example demonstrates how to create a CPU frequency table.
 ///
 /// ```
-/// use kernel::cpufreq::{ETERNAL_LATENCY_NS, Policy};
+/// use kernel::cpufreq::{DEFAULT_TRANSITION_LATENCY_NS, Policy};
 ///
 /// fn update_policy(policy: &mut Policy) {
 ///     policy
 ///         .set_dvfs_possible_from_any_cpu(true)
 ///         .set_fast_switch_possible(true)
-///         .set_transition_latency_ns(ETERNAL_LATENCY_NS);
+///         .set_transition_latency_ns(DEFAULT_TRANSITION_LATENCY_NS);
 ///
 ///     pr_info!("The policy details are: {:?}\n", (policy.cpu(), policy.cur()));
 /// }