Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"These add some new hardware support (for example, IceLake-D idle
states in intel_idle), fix some issues (for example, the handling of
negative "sleep length" values in cpuidle governors), add new
functionality to the existing drivers (for example, scale-invariance
support in the ACPI CPPC cpufreq driver) and clean up code all over.

Specifics:

- Add idle states table for IceLake-D to the intel_idle driver and
update IceLake-X C6 data in it (Artem Bityutskiy).

- Fix the C7 idle state on Tegra114 in the tegra cpuidle driver and
drop the unused do_idle() firmware call from it (Dmitry Osipenko).

- Fix cpuidle-qcom-spm Kconfig entry (He Ying).

- Fix handling of possible negative tick_nohz_get_next_hrtimer()
return values in cpuidle governors (Rafael Wysocki).
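The governor-side problem is that tick_nohz_get_next_hrtimer() may return a time stamp that is already in the past, so the computed "sleep length" can be negative; feeding that into unsigned prediction arithmetic corrupts the governors' statistics. A minimal userspace sketch of the idea behind the fix, clamping the value before use (the helper name here is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t s64;

/* If the next timer event is already in the past, the remaining sleep
 * length is negative; treat it as zero instead of letting it wrap into
 * a huge unsigned duration in the governor's prediction logic. */
static s64 sleep_length_ns(s64 next_hrtimer_ns, s64 now_ns)
{
	s64 delta = next_hrtimer_ns - now_ns;

	return delta > 0 ? delta : 0;
}
```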

- Add support for frequency-invariance to the ACPI CPPC cpufreq
driver and update the frequency-invariance engine (FIE) to use it
as needed (Viresh Kumar).
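At its core, counter-driven frequency invariance turns a pair of cycle-counter deltas into a capacity scale factor clamped at SCHED_CAPACITY_SCALE, as the arm64 topology diff further down shows. A simplified userspace model of that computation (the real kernel code also folds in a precomputed maximum-frequency ratio; the function name here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT 10
#define SCHED_CAPACITY_SCALE (1UL << SCHED_CAPACITY_SHIFT)

/* Toy model of the per-tick frequency scale: the ratio of "core
 * cycles" (counted at the current frequency) to "constant cycles"
 * (counted at a fixed reference rate), shifted into the scheduler's
 * fixed-point capacity range and clamped at SCHED_CAPACITY_SCALE. */
static unsigned long compute_freq_scale(uint64_t core_delta,
					uint64_t const_delta)
{
	uint64_t scale;

	if (!const_delta)
		return SCHED_CAPACITY_SCALE;

	scale = (core_delta << SCHED_CAPACITY_SHIFT) / const_delta;
	return scale > SCHED_CAPACITY_SCALE ? SCHED_CAPACITY_SCALE
					    : (unsigned long)scale;
}
```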

- Simplify the default delay_us setting in the ACPI CPPC cpufreq
driver (Tom Saeger).

- Clean up frequency-related computations in the intel_pstate cpufreq
driver (Rafael Wysocki).

- Fix TBG parent setting for load levels in the armada-37xx cpufreq
driver and drop the CPU PM clock .set_parent method for armada-37xx
(Marek Behún).

- Fix multiple issues in the armada-37xx cpufreq driver (Pali Rohár).

- Fix handling of dev_pm_opp_of_cpumask_add_table() return values in
cpufreq-dt to take the -EPROBE_DEFER one into account as
appropriate (Quanyang Wang).
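The point of that fix is that -EPROBE_DEFER from dev_pm_opp_of_cpumask_add_table() means "a resource the OPP table needs is not ready yet, retry probing later" and must be propagated, whereas other errors merely mean "no OPP table in DT" and can be tolerated. A hypothetical sketch of that decision (the helper is illustrative, not the driver's actual code; the error value matches the kernel's numeric EPROBE_DEFER):

```c
#include <assert.h>

#define EPROBE_DEFER 517	/* same numeric value the kernel uses */

/* Returns the value a probe routine should propagate: defer on
 * -EPROBE_DEFER so the driver core retries later; otherwise carry on
 * (0) even if no DT OPP table was found, falling back to other
 * frequency sources. */
static int handle_opp_table_result(int ret)
{
	if (ret == -EPROBE_DEFER)
		return ret;	/* dependency not ready; retry probe later */
	return 0;		/* success, or no OPPs in DT: fall back */
}
```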

- Fix format string in ia64-acpi-cpufreq (Sergei Trofimovich).

- Drop the unused for_each_policy() macro from cpufreq (Shaokun
Zhang).

- Simplify computations in the schedutil cpufreq governor to avoid
unnecessary overhead (Yue Hu).

- Fix typos in the s5pv210 cpufreq driver (Bhaskar Chowdhury).

- Fix cpufreq documentation links in Kconfig (Alexander Monakov).

- Fix PCI device power state handling in pci_enable_device_flags() to
avoid issues in some cases when the device depends on an ACPI power
resource (Rafael Wysocki).

- Add missing documentation of pm_runtime_resume_and_get() (Alan
Stern).
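As the documentation added in runtime_pm.rst below spells out, pm_runtime_resume_and_get() raises the device's usage counter only when the resume succeeds, so one call gives the caller balanced reference counting. A toy userspace model of that contract (the struct and function are stand-ins, not kernel code):

```c
#include <assert.h>

struct toy_dev {
	int usage_count;
	int resume_error;	/* 0 = resume succeeds, else -errno */
};

/* Contract of pm_runtime_resume_and_get(): on success the usage count
 * ends up raised and 0 is returned; on failure the count is left
 * balanced and the error is returned, so the caller must not issue a
 * matching pm_runtime_put() for the failed call. */
static int toy_resume_and_get(struct toy_dev *dev)
{
	dev->usage_count++;		/* like pm_runtime_get_noresume() */
	if (dev->resume_error) {
		dev->usage_count--;	/* like pm_runtime_put_noidle() */
		return dev->resume_error;
	}
	return 0;
}
```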

- Add missing static inline stub for pm_runtime_has_no_callbacks() to
pm_runtime.h and drop the unused try_to_freeze_nowarn() definition
(YueHaibing).

- Drop duplicate struct device declaration from pm.h and fix a
structure type declaration in intel_rapl.h (Wan Jiabing).

- Use dev_set_name() instead of an open-coded equivalent of it in the
wakeup sources code and drop a redundant local variable
initialization from it (Andy Shevchenko, Colin Ian King).

- Use crc32 instead of md5 for e820 memory map integrity check during
resume from hibernation on x86 (Chris von Recklinghausen).
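The replacement check (visible in the hibernate.c diff below as compute_e820_crc32()) reduces to ~crc32_le(~0, table, size) over the firmware-provided table bytes. A self-contained userspace rendering of that idiom, using a bitwise CRC-32 with the same reflected polynomial convention as the kernel's crc32_le():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (reflected, polynomial 0xEDB88320), matching the
 * seed/shift convention of the kernel's crc32_le(). */
static uint32_t crc32_le(uint32_t crc, const unsigned char *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
	}
	return crc;
}

/* The e820 integrity check is this one-liner over the raw bytes of
 * the firmware-provided memory map. */
static uint32_t e820_checksum(const void *table, size_t size)
{
	return ~crc32_le(~0u, table, size);
}
```

With the standard CRC-32 check input "123456789" this yields the well-known check value 0xCBF43926.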

- Fix typos in comments in the system-wide and hibernation support
code (Lu Jialin).

- Modify the generic power domains (genpd) code to avoid resuming
devices in the "prepare" phase of system-wide suspend and
hibernation (Ulf Hansson).

- Add Hygon Fam18h RAPL support to the intel_rapl power capping
driver (Pu Wen).

- Add MAINTAINERS entry for the dynamic thermal power management
(DTPM) code (Daniel Lezcano).

- Add devm variants of operating performance points (OPP) API
functions and switch over some users of the OPP framework to the
new resource-managed API (Yangtao Li and Dmitry Osipenko).

- Update devfreq core:

* Register devfreq devices as cooling devices on demand (Daniel
Lezcano).

* Add missing unlock operation in devfreq_add_device() (Lukasz
Luba).

* Use the next frequency as resume_freq instead of the previous
frequency when using the opp-suspend property (Dong Aisheng).

* Check get_dev_status in devfreq_update_stats() (Dong Aisheng).

* Fix set_freq path for the userspace governor in Kconfig (Dong
Aisheng).

* Remove invalid description of get_target_freq() (Dong Aisheng).

- Update devfreq drivers:

* imx8m-ddrc: Remove imx8m_ddrc_get_dev_status() and unneeded
of_match_ptr() (Dong Aisheng, Fabio Estevam).

* rk3399_dmc: dt-bindings: Add rockchip,pmu phandle and drop
references to undefined symbols (Enric Balletbo i Serra, Gaël
PORTAY).

* rk3399_dmc: Use dev_err_probe() to simplify the code (Krzysztof
Kozlowski).

* imx-bus: Remove unneeded of_match_ptr() (Fabio Estevam).

- Fix kernel-doc warnings in three places (Pierre-Louis Bossart).

- Fix typo in the pm-graph utility code (Ricardo Ribalda)"

* tag 'pm-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (74 commits)
PM: wakeup: remove redundant assignment to variable retval
PM: hibernate: x86: Use crc32 instead of md5 for hibernation e820 integrity check
cpufreq: Kconfig: fix documentation links
PM: wakeup: use dev_set_name() directly
PM: runtime: Add documentation for pm_runtime_resume_and_get()
cpufreq: intel_pstate: Simplify intel_pstate_update_perf_limits()
cpufreq: armada-37xx: Fix module unloading
cpufreq: armada-37xx: Remove cur_frequency variable
cpufreq: armada-37xx: Fix determining base CPU frequency
cpufreq: armada-37xx: Fix driver cleanup when registration failed
clk: mvebu: armada-37xx-periph: Fix workaround for switching from L1 to L0
clk: mvebu: armada-37xx-periph: Fix switching CPU freq from 250 Mhz to 1 GHz
cpufreq: armada-37xx: Fix the AVS value for load L1
clk: mvebu: armada-37xx-periph: remove .set_parent method for CPU PM clock
cpufreq: armada-37xx: Fix setting TBG parent for load levels
cpuidle: Fix ARM_QCOM_SPM_CPUIDLE configuration
cpuidle: tegra: Remove do_idle firmware call
cpuidle: tegra: Fix C7 idling state on Tegra114
PM: sleep: fix typos in comments
cpufreq: Remove unused for_each_policy macro
...

+1028 -754
+1 -4
Documentation/ABI/testing/sysfs-class-devfreq
···
 97  97    object. The values are represented in ms. If the value is
 98  98    less than 1 jiffy, it is considered to be 0, which means
 99  99    no polling. This value is meaningless if the governor is
100      - not polling; thus. If the governor is not using
101      - devfreq-provided central polling
102      - (/sys/class/devfreq/.../central_polling is 0), this value
103      - may be useless.
    100  + not polling.
104 101
105 102    A list of governors that support the node:
106 103    - simple_ondmenad
+36 -39
Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt
··· 12 12 for details. 13 13 - center-supply: DMC supply node. 14 14 - status: Marks the node enabled/disabled. 15 + - rockchip,pmu: Phandle to the syscon managing the "PMU general register 16 + files". 15 17 16 18 Optional properties: 17 19 - interrupts: The CPU interrupt number. The interrupt specifier ··· 79 77 80 78 - rockchip,ddr3_drv : When the DRAM type is DDR3, this parameter defines 81 79 the DRAM side driver strength in ohms. Default 82 - value is DDR3_DS_40ohm. 80 + value is 40. 83 81 84 82 - rockchip,ddr3_odt : When the DRAM type is DDR3, this parameter defines 85 83 the DRAM side ODT strength in ohms. Default value 86 - is DDR3_ODT_120ohm. 84 + is 120. 87 85 88 86 - rockchip,phy_ddr3_ca_drv : When the DRAM type is DDR3, this parameter defines 89 87 the phy side CA line (incluing command line, 90 88 address line and clock line) driver strength. 91 - Default value is PHY_DRV_ODT_40. 89 + Default value is 40. 92 90 93 91 - rockchip,phy_ddr3_dq_drv : When the DRAM type is DDR3, this parameter defines 94 92 the PHY side DQ line (including DQS/DQ/DM line) 95 - driver strength. Default value is PHY_DRV_ODT_40. 93 + driver strength. Default value is 40. 96 94 97 95 - rockchip,phy_ddr3_odt : When the DRAM type is DDR3, this parameter defines 98 - the PHY side ODT strength. Default value is 99 - PHY_DRV_ODT_240. 96 + the PHY side ODT strength. Default value is 240. 100 97 101 98 - rockchip,lpddr3_odt_dis_freq : When the DRAM type is LPDDR3, this parameter defines 102 99 then ODT disable frequency in MHz (Mega Hz). ··· 105 104 106 105 - rockchip,lpddr3_drv : When the DRAM type is LPDDR3, this parameter defines 107 106 the DRAM side driver strength in ohms. Default 108 - value is LP3_DS_34ohm. 107 + value is 34. 109 108 110 109 - rockchip,lpddr3_odt : When the DRAM type is LPDDR3, this parameter defines 111 110 the DRAM side ODT strength in ohms. Default value 112 - is LP3_ODT_240ohm. 111 + is 240. 
113 112 114 113 - rockchip,phy_lpddr3_ca_drv : When the DRAM type is LPDDR3, this parameter defines 115 114 the PHY side CA line (including command line, 116 115 address line and clock line) driver strength. 117 - Default value is PHY_DRV_ODT_40. 116 + Default value is 40. 118 117 119 118 - rockchip,phy_lpddr3_dq_drv : When the DRAM type is LPDDR3, this parameter defines 120 119 the PHY side DQ line (including DQS/DQ/DM line) 121 - driver strength. Default value is 122 - PHY_DRV_ODT_40. 120 + driver strength. Default value is 40. 123 121 124 122 - rockchip,phy_lpddr3_odt : When dram type is LPDDR3, this parameter define 125 - the phy side odt strength, default value is 126 - PHY_DRV_ODT_240. 123 + the phy side odt strength, default value is 240. 127 124 128 125 - rockchip,lpddr4_odt_dis_freq : When the DRAM type is LPDDR4, this parameter 129 126 defines the ODT disable frequency in ··· 131 132 132 133 - rockchip,lpddr4_drv : When the DRAM type is LPDDR4, this parameter defines 133 134 the DRAM side driver strength in ohms. Default 134 - value is LP4_PDDS_60ohm. 135 + value is 60. 135 136 136 137 - rockchip,lpddr4_dq_odt : When the DRAM type is LPDDR4, this parameter defines 137 138 the DRAM side ODT on DQS/DQ line strength in ohms. 138 - Default value is LP4_DQ_ODT_40ohm. 139 + Default value is 40. 139 140 140 141 - rockchip,lpddr4_ca_odt : When the DRAM type is LPDDR4, this parameter defines 141 142 the DRAM side ODT on CA line strength in ohms. 142 - Default value is LP4_CA_ODT_40ohm. 143 + Default value is 40. 143 144 144 145 - rockchip,phy_lpddr4_ca_drv : When the DRAM type is LPDDR4, this parameter defines 145 146 the PHY side CA line (including command address 146 - line) driver strength. Default value is 147 - PHY_DRV_ODT_40. 147 + line) driver strength. Default value is 40. 148 148 149 149 - rockchip,phy_lpddr4_ck_cs_drv : When the DRAM type is LPDDR4, this parameter defines 150 150 the PHY side clock line and CS line driver 151 - strength. 
Default value is PHY_DRV_ODT_80. 151 + strength. Default value is 80. 152 152 153 153 - rockchip,phy_lpddr4_dq_drv : When the DRAM type is LPDDR4, this parameter defines 154 154 the PHY side DQ line (including DQS/DQ/DM line) 155 - driver strength. Default value is PHY_DRV_ODT_80. 155 + driver strength. Default value is 80. 156 156 157 157 - rockchip,phy_lpddr4_odt : When the DRAM type is LPDDR4, this parameter defines 158 - the PHY side ODT strength. Default value is 159 - PHY_DRV_ODT_60. 158 + the PHY side ODT strength. Default value is 60. 160 159 161 160 Example: 162 161 dmc_opp_table: dmc_opp_table { ··· 190 193 rockchip,phy_dll_dis_freq = <125>; 191 194 rockchip,auto_pd_dis_freq = <666>; 192 195 rockchip,ddr3_odt_dis_freq = <333>; 193 - rockchip,ddr3_drv = <DDR3_DS_40ohm>; 194 - rockchip,ddr3_odt = <DDR3_ODT_120ohm>; 195 - rockchip,phy_ddr3_ca_drv = <PHY_DRV_ODT_40>; 196 - rockchip,phy_ddr3_dq_drv = <PHY_DRV_ODT_40>; 197 - rockchip,phy_ddr3_odt = <PHY_DRV_ODT_240>; 196 + rockchip,ddr3_drv = <40>; 197 + rockchip,ddr3_odt = <120>; 198 + rockchip,phy_ddr3_ca_drv = <40>; 199 + rockchip,phy_ddr3_dq_drv = <40>; 200 + rockchip,phy_ddr3_odt = <240>; 198 201 rockchip,lpddr3_odt_dis_freq = <333>; 199 - rockchip,lpddr3_drv = <LP3_DS_34ohm>; 200 - rockchip,lpddr3_odt = <LP3_ODT_240ohm>; 201 - rockchip,phy_lpddr3_ca_drv = <PHY_DRV_ODT_40>; 202 - rockchip,phy_lpddr3_dq_drv = <PHY_DRV_ODT_40>; 203 - rockchip,phy_lpddr3_odt = <PHY_DRV_ODT_240>; 202 + rockchip,lpddr3_drv = <34>; 203 + rockchip,lpddr3_odt = <240>; 204 + rockchip,phy_lpddr3_ca_drv = <40>; 205 + rockchip,phy_lpddr3_dq_drv = <40>; 206 + rockchip,phy_lpddr3_odt = <240>; 204 207 rockchip,lpddr4_odt_dis_freq = <333>; 205 - rockchip,lpddr4_drv = <LP4_PDDS_60ohm>; 206 - rockchip,lpddr4_dq_odt = <LP4_DQ_ODT_40ohm>; 207 - rockchip,lpddr4_ca_odt = <LP4_CA_ODT_40ohm>; 208 - rockchip,phy_lpddr4_ca_drv = <PHY_DRV_ODT_40>; 209 - rockchip,phy_lpddr4_ck_cs_drv = <PHY_DRV_ODT_80>; 210 - rockchip,phy_lpddr4_dq_drv = 
<PHY_DRV_ODT_80>; 211 - rockchip,phy_lpddr4_odt = <PHY_DRV_ODT_60>; 208 + rockchip,lpddr4_drv = <60>; 209 + rockchip,lpddr4_dq_odt = <40>; 210 + rockchip,lpddr4_ca_odt = <40>; 211 + rockchip,phy_lpddr4_ca_drv = <40>; 212 + rockchip,phy_lpddr4_ck_cs_drv = <80>; 213 + rockchip,phy_lpddr4_dq_drv = <80>; 214 + rockchip,phy_lpddr4_odt = <60>; 212 215 };
+4
Documentation/power/runtime_pm.rst
···
339 339    checked additionally, and -EACCES means that 'power.disable_depth' is
340 340    different from 0
341 341
    342  + `int pm_runtime_resume_and_get(struct device *dev);`
    343  +   - run pm_runtime_resume(dev) and if successful, increment the device's
    344  +     usage counter; return the result of pm_runtime_resume
    345  +
342 346  `int pm_request_idle(struct device *dev);`
343 347    - submit a request to execute the subsystem-level idle callback for the
344 348      device (the request is represented by a work item in pm_wq); returns 0 on
+9
MAINTAINERS
···
14439 14439  F: include/linux/powercap.h
14440 14440  F: kernel/configs/nopm.config
14441 14441
      14442 + DYNAMIC THERMAL POWER MANAGEMENT (DTPM)
      14443 + M: Daniel Lezcano <daniel.lezcano@kernel.org>
      14444 + L: linux-pm@vger.kernel.org
      14445 + S: Supported
      14446 + B: https://bugzilla.kernel.org
      14447 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
      14448 + F: drivers/powercap/dtpm*
      14449 + F: include/linux/dtpm.h
      14450 +
14442 14451  POWER STATE COORDINATION INTERFACE (PSCI)
14443 14452  M: Mark Rutland <mark.rutland@arm.com>
14444 14453  M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+1 -9
arch/arm64/include/asm/topology.h
···
17 17  #include <linux/arch_topology.h>
18 18
19 19  void update_freq_counters_refs(void);
20    - void topology_scale_freq_tick(void);
21    -
22    - #ifdef CONFIG_ARM64_AMU_EXTN
23    - /*
24    -  * Replace task scheduler's default counter-based
25    -  * frequency-invariance scale factor setting.
26    -  */
27    - #define arch_scale_freq_tick topology_scale_freq_tick
28    - #endif /* CONFIG_ARM64_AMU_EXTN */
29 20
30 21  /* Replace task scheduler's default frequency-invariant accounting */
   22 + #define arch_scale_freq_tick topology_scale_freq_tick
31 23  #define arch_set_freq_scale topology_set_freq_scale
32 24  #define arch_scale_freq_capacity topology_get_freq_scale
33 25  #define arch_scale_freq_invariant topology_scale_freq_invariant
+72 -99
arch/arm64/kernel/topology.c
··· 199 199 return 0; 200 200 } 201 201 202 - static DEFINE_STATIC_KEY_FALSE(amu_fie_key); 203 - #define amu_freq_invariant() static_branch_unlikely(&amu_fie_key) 204 - 205 - static void amu_fie_setup(const struct cpumask *cpus) 206 - { 207 - bool invariant; 208 - int cpu; 209 - 210 - /* We are already set since the last insmod of cpufreq driver */ 211 - if (unlikely(cpumask_subset(cpus, amu_fie_cpus))) 212 - return; 213 - 214 - for_each_cpu(cpu, cpus) { 215 - if (!freq_counters_valid(cpu) || 216 - freq_inv_set_max_ratio(cpu, 217 - cpufreq_get_hw_max_freq(cpu) * 1000, 218 - arch_timer_get_rate())) 219 - return; 220 - } 221 - 222 - cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus); 223 - 224 - invariant = topology_scale_freq_invariant(); 225 - 226 - /* We aren't fully invariant yet */ 227 - if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask)) 228 - return; 229 - 230 - static_branch_enable(&amu_fie_key); 231 - 232 - pr_debug("CPUs[%*pbl]: counters will be used for FIE.", 233 - cpumask_pr_args(cpus)); 234 - 235 - /* 236 - * Task scheduler behavior depends on frequency invariance support, 237 - * either cpufreq or counter driven. If the support status changes as 238 - * a result of counter initialisation and use, retrigger the build of 239 - * scheduling domains to ensure the information is propagated properly. 240 - */ 241 - if (!invariant) 242 - rebuild_sched_domains_energy(); 243 - } 244 - 245 - static int init_amu_fie_callback(struct notifier_block *nb, unsigned long val, 246 - void *data) 247 - { 248 - struct cpufreq_policy *policy = data; 249 - 250 - if (val == CPUFREQ_CREATE_POLICY) 251 - amu_fie_setup(policy->related_cpus); 252 - 253 - /* 254 - * We don't need to handle CPUFREQ_REMOVE_POLICY event as the AMU 255 - * counters don't have any dependency on cpufreq driver once we have 256 - * initialized AMU support and enabled invariance. 
The AMU counters will 257 - * keep on working just fine in the absence of the cpufreq driver, and 258 - * for the CPUs for which there are no counters available, the last set 259 - * value of freq_scale will remain valid as that is the frequency those 260 - * CPUs are running at. 261 - */ 262 - 263 - return 0; 264 - } 265 - 266 - static struct notifier_block init_amu_fie_notifier = { 267 - .notifier_call = init_amu_fie_callback, 268 - }; 269 - 270 - static int __init init_amu_fie(void) 271 - { 272 - int ret; 273 - 274 - if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) 275 - return -ENOMEM; 276 - 277 - ret = cpufreq_register_notifier(&init_amu_fie_notifier, 278 - CPUFREQ_POLICY_NOTIFIER); 279 - if (ret) 280 - free_cpumask_var(amu_fie_cpus); 281 - 282 - return ret; 283 - } 284 - core_initcall(init_amu_fie); 285 - 286 - bool arch_freq_counters_available(const struct cpumask *cpus) 287 - { 288 - return amu_freq_invariant() && 289 - cpumask_subset(cpus, amu_fie_cpus); 290 - } 291 - 292 - void topology_scale_freq_tick(void) 202 + static void amu_scale_freq_tick(void) 293 203 { 294 204 u64 prev_core_cnt, prev_const_cnt; 295 205 u64 core_cnt, const_cnt, scale; 296 - int cpu = smp_processor_id(); 297 - 298 - if (!amu_freq_invariant()) 299 - return; 300 - 301 - if (!cpumask_test_cpu(cpu, amu_fie_cpus)) 302 - return; 303 206 304 207 prev_const_cnt = this_cpu_read(arch_const_cycles_prev); 305 208 prev_core_cnt = this_cpu_read(arch_core_cycles_prev); ··· 230 327 const_cnt - prev_const_cnt); 231 328 232 329 scale = min_t(unsigned long, scale, SCHED_CAPACITY_SCALE); 233 - this_cpu_write(freq_scale, (unsigned long)scale); 330 + this_cpu_write(arch_freq_scale, (unsigned long)scale); 234 331 } 332 + 333 + static struct scale_freq_data amu_sfd = { 334 + .source = SCALE_FREQ_SOURCE_ARCH, 335 + .set_freq_scale = amu_scale_freq_tick, 336 + }; 337 + 338 + static void amu_fie_setup(const struct cpumask *cpus) 339 + { 340 + int cpu; 341 + 342 + /* We are already set since the last 
insmod of cpufreq driver */ 343 + if (unlikely(cpumask_subset(cpus, amu_fie_cpus))) 344 + return; 345 + 346 + for_each_cpu(cpu, cpus) { 347 + if (!freq_counters_valid(cpu) || 348 + freq_inv_set_max_ratio(cpu, 349 + cpufreq_get_hw_max_freq(cpu) * 1000, 350 + arch_timer_get_rate())) 351 + return; 352 + } 353 + 354 + cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus); 355 + 356 + topology_set_scale_freq_source(&amu_sfd, amu_fie_cpus); 357 + 358 + pr_debug("CPUs[%*pbl]: counters will be used for FIE.", 359 + cpumask_pr_args(cpus)); 360 + } 361 + 362 + static int init_amu_fie_callback(struct notifier_block *nb, unsigned long val, 363 + void *data) 364 + { 365 + struct cpufreq_policy *policy = data; 366 + 367 + if (val == CPUFREQ_CREATE_POLICY) 368 + amu_fie_setup(policy->related_cpus); 369 + 370 + /* 371 + * We don't need to handle CPUFREQ_REMOVE_POLICY event as the AMU 372 + * counters don't have any dependency on cpufreq driver once we have 373 + * initialized AMU support and enabled invariance. The AMU counters will 374 + * keep on working just fine in the absence of the cpufreq driver, and 375 + * for the CPUs for which there are no counters available, the last set 376 + * value of arch_freq_scale will remain valid as that is the frequency 377 + * those CPUs are running at. 378 + */ 379 + 380 + return 0; 381 + } 382 + 383 + static struct notifier_block init_amu_fie_notifier = { 384 + .notifier_call = init_amu_fie_callback, 385 + }; 386 + 387 + static int __init init_amu_fie(void) 388 + { 389 + int ret; 390 + 391 + if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) 392 + return -ENOMEM; 393 + 394 + ret = cpufreq_register_notifier(&init_amu_fie_notifier, 395 + CPUFREQ_POLICY_NOTIFIER); 396 + if (ret) 397 + free_cpumask_var(amu_fie_cpus); 398 + 399 + return ret; 400 + } 401 + core_initcall(init_amu_fie); 235 402 236 403 #ifdef CONFIG_ACPI_CPPC_LIB 237 404 #include <acpi/cppc_acpi.h>
+2 -2
arch/x86/kernel/e820.c
···
31 31   * - inform the user about the firmware's notion of memory layout
32 32   *   via /sys/firmware/memmap
33 33   *
34    -  * - the hibernation code uses it to generate a kernel-independent MD5
35    -  *   fingerprint of the physical memory layout of a system.
   34 +  * - the hibernation code uses it to generate a kernel-independent CRC32
   35 +  *   checksum of the physical memory layout of a system.
36 36   *
37 37   * - 'e820_table_kexec': a slightly modified (by the kernel) firmware version
38 38   *   passed to us by the bootloader - the major difference between
+14 -75
arch/x86/power/hibernate.c
··· 13 13 #include <linux/kdebug.h> 14 14 #include <linux/cpu.h> 15 15 #include <linux/pgtable.h> 16 - 17 - #include <crypto/hash.h> 16 + #include <linux/types.h> 17 + #include <linux/crc32.h> 18 18 19 19 #include <asm/e820/api.h> 20 20 #include <asm/init.h> ··· 54 54 return pfn >= nosave_begin_pfn && pfn < nosave_end_pfn; 55 55 } 56 56 57 - 58 - #define MD5_DIGEST_SIZE 16 59 - 60 57 struct restore_data_record { 61 58 unsigned long jump_address; 62 59 unsigned long jump_address_phys; 63 60 unsigned long cr3; 64 61 unsigned long magic; 65 - u8 e820_digest[MD5_DIGEST_SIZE]; 62 + unsigned long e820_checksum; 66 63 }; 67 64 68 - #if IS_BUILTIN(CONFIG_CRYPTO_MD5) 69 65 /** 70 - * get_e820_md5 - calculate md5 according to given e820 table 66 + * compute_e820_crc32 - calculate crc32 of a given e820 table 71 67 * 72 68 * @table: the e820 table to be calculated 73 - * @buf: the md5 result to be stored to 69 + * 70 + * Return: the resulting checksum 74 71 */ 75 - static int get_e820_md5(struct e820_table *table, void *buf) 72 + static inline u32 compute_e820_crc32(struct e820_table *table) 76 73 { 77 - struct crypto_shash *tfm; 78 - struct shash_desc *desc; 79 - int size; 80 - int ret = 0; 81 - 82 - tfm = crypto_alloc_shash("md5", 0, 0); 83 - if (IS_ERR(tfm)) 84 - return -ENOMEM; 85 - 86 - desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm), 87 - GFP_KERNEL); 88 - if (!desc) { 89 - ret = -ENOMEM; 90 - goto free_tfm; 91 - } 92 - 93 - desc->tfm = tfm; 94 - 95 - size = offsetof(struct e820_table, entries) + 74 + int size = offsetof(struct e820_table, entries) + 96 75 sizeof(struct e820_entry) * table->nr_entries; 97 76 98 - if (crypto_shash_digest(desc, (u8 *)table, size, buf)) 99 - ret = -EINVAL; 100 - 101 - kfree_sensitive(desc); 102 - 103 - free_tfm: 104 - crypto_free_shash(tfm); 105 - return ret; 77 + return ~crc32_le(~0, (unsigned char const *)table, size); 106 78 } 107 - 108 - static int hibernation_e820_save(void *buf) 109 - { 110 - return 
get_e820_md5(e820_table_firmware, buf); 111 - } 112 - 113 - static bool hibernation_e820_mismatch(void *buf) 114 - { 115 - int ret; 116 - u8 result[MD5_DIGEST_SIZE]; 117 - 118 - memset(result, 0, MD5_DIGEST_SIZE); 119 - /* If there is no digest in suspend kernel, let it go. */ 120 - if (!memcmp(result, buf, MD5_DIGEST_SIZE)) 121 - return false; 122 - 123 - ret = get_e820_md5(e820_table_firmware, result); 124 - if (ret) 125 - return true; 126 - 127 - return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false; 128 - } 129 - #else 130 - static int hibernation_e820_save(void *buf) 131 - { 132 - return 0; 133 - } 134 - 135 - static bool hibernation_e820_mismatch(void *buf) 136 - { 137 - /* If md5 is not builtin for restore kernel, let it go. */ 138 - return false; 139 - } 140 - #endif 141 79 142 80 #ifdef CONFIG_X86_64 143 - #define RESTORE_MAGIC 0x23456789ABCDEF01UL 81 + #define RESTORE_MAGIC 0x23456789ABCDEF02UL 144 82 #else 145 - #define RESTORE_MAGIC 0x12345678UL 83 + #define RESTORE_MAGIC 0x12345679UL 146 84 #endif 147 85 148 86 /** ··· 117 179 */ 118 180 rdr->cr3 = restore_cr3 & ~CR3_PCID_MASK; 119 181 120 - return hibernation_e820_save(rdr->e820_digest); 182 + rdr->e820_checksum = compute_e820_crc32(e820_table_firmware); 183 + return 0; 121 184 } 122 185 123 186 /** ··· 139 200 jump_address_phys = rdr->jump_address_phys; 140 201 restore_cr3 = rdr->cr3; 141 202 142 - if (hibernation_e820_mismatch(rdr->e820_digest)) { 203 + if (rdr->e820_checksum != compute_e820_crc32(e820_table_firmware)) { 143 204 pr_crit("Hibernate inconsistent memory map detected!\n"); 144 205 return -ENODEV; 145 206 }
+83 -6
drivers/base/arch_topology.c
··· 21 21 #include <linux/sched.h> 22 22 #include <linux/smp.h> 23 23 24 + static DEFINE_PER_CPU(struct scale_freq_data *, sft_data); 25 + static struct cpumask scale_freq_counters_mask; 26 + static bool scale_freq_invariant; 27 + 28 + static bool supports_scale_freq_counters(const struct cpumask *cpus) 29 + { 30 + return cpumask_subset(cpus, &scale_freq_counters_mask); 31 + } 32 + 24 33 bool topology_scale_freq_invariant(void) 25 34 { 26 35 return cpufreq_supports_freq_invariance() || 27 - arch_freq_counters_available(cpu_online_mask); 36 + supports_scale_freq_counters(cpu_online_mask); 28 37 } 29 38 30 - __weak bool arch_freq_counters_available(const struct cpumask *cpus) 39 + static void update_scale_freq_invariant(bool status) 31 40 { 32 - return false; 41 + if (scale_freq_invariant == status) 42 + return; 43 + 44 + /* 45 + * Task scheduler behavior depends on frequency invariance support, 46 + * either cpufreq or counter driven. If the support status changes as 47 + * a result of counter initialisation and use, retrigger the build of 48 + * scheduling domains to ensure the information is propagated properly. 49 + */ 50 + if (topology_scale_freq_invariant() == status) { 51 + scale_freq_invariant = status; 52 + rebuild_sched_domains_energy(); 53 + } 33 54 } 34 - DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE; 55 + 56 + void topology_set_scale_freq_source(struct scale_freq_data *data, 57 + const struct cpumask *cpus) 58 + { 59 + struct scale_freq_data *sfd; 60 + int cpu; 61 + 62 + /* 63 + * Avoid calling rebuild_sched_domains() unnecessarily if FIE is 64 + * supported by cpufreq. 
65 + */ 66 + if (cpumask_empty(&scale_freq_counters_mask)) 67 + scale_freq_invariant = topology_scale_freq_invariant(); 68 + 69 + for_each_cpu(cpu, cpus) { 70 + sfd = per_cpu(sft_data, cpu); 71 + 72 + /* Use ARCH provided counters whenever possible */ 73 + if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) { 74 + per_cpu(sft_data, cpu) = data; 75 + cpumask_set_cpu(cpu, &scale_freq_counters_mask); 76 + } 77 + } 78 + 79 + update_scale_freq_invariant(true); 80 + } 81 + EXPORT_SYMBOL_GPL(topology_set_scale_freq_source); 82 + 83 + void topology_clear_scale_freq_source(enum scale_freq_source source, 84 + const struct cpumask *cpus) 85 + { 86 + struct scale_freq_data *sfd; 87 + int cpu; 88 + 89 + for_each_cpu(cpu, cpus) { 90 + sfd = per_cpu(sft_data, cpu); 91 + 92 + if (sfd && sfd->source == source) { 93 + per_cpu(sft_data, cpu) = NULL; 94 + cpumask_clear_cpu(cpu, &scale_freq_counters_mask); 95 + } 96 + } 97 + 98 + update_scale_freq_invariant(false); 99 + } 100 + EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source); 101 + 102 + void topology_scale_freq_tick(void) 103 + { 104 + struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data); 105 + 106 + if (sfd) 107 + sfd->set_freq_scale(); 108 + } 109 + 110 + DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE; 111 + EXPORT_PER_CPU_SYMBOL_GPL(arch_freq_scale); 35 112 36 113 void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq, 37 114 unsigned long max_freq) ··· 124 47 * want to update the scale factor with information from CPUFREQ. 125 48 * Instead the scale factor will be updated from arch_scale_freq_tick. 126 49 */ 127 - if (arch_freq_counters_available(cpus)) 50 + if (supports_scale_freq_counters(cpus)) 128 51 return; 129 52 130 53 scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq; 131 54 132 55 for_each_cpu(i, cpus) 133 - per_cpu(freq_scale, i) = scale; 56 + per_cpu(arch_freq_scale, i) = scale; 134 57 } 135 58 136 59 DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+1 -1
drivers/base/power/clock_ops.c
···
140 140  }
141 141
142 142  /**
143     - * pm_clk_enable - Enable a clock, reporting any errors
    143 + * __pm_clk_enable - Enable a clock, reporting any errors
144 144   * @dev: The device for the given clock
145 145   * @ce: PM clock entry corresponding to the clock.
146 146   */
-36
drivers/base/power/domain.c
··· 1088 1088 } 1089 1089 1090 1090 /** 1091 - * resume_needed - Check whether to resume a device before system suspend. 1092 - * @dev: Device to check. 1093 - * @genpd: PM domain the device belongs to. 1094 - * 1095 - * There are two cases in which a device that can wake up the system from sleep 1096 - * states should be resumed by genpd_prepare(): (1) if the device is enabled 1097 - * to wake up the system and it has to remain active for this purpose while the 1098 - * system is in the sleep state and (2) if the device is not enabled to wake up 1099 - * the system from sleep states and it generally doesn't generate wakeup signals 1100 - * by itself (those signals are generated on its behalf by other parts of the 1101 - * system). In the latter case it may be necessary to reconfigure the device's 1102 - * wakeup settings during system suspend, because it may have been set up to 1103 - * signal remote wakeup from the system's working state as needed by runtime PM. 1104 - * Return 'true' in either of the above cases. 1105 - */ 1106 - static bool resume_needed(struct device *dev, 1107 - const struct generic_pm_domain *genpd) 1108 - { 1109 - bool active_wakeup; 1110 - 1111 - if (!device_can_wakeup(dev)) 1112 - return false; 1113 - 1114 - active_wakeup = genpd_is_active_wakeup(genpd); 1115 - return device_may_wakeup(dev) ? active_wakeup : !active_wakeup; 1116 - } 1117 - 1118 - /** 1119 1091 * genpd_prepare - Start power transition of a device in a PM domain. 1120 1092 * @dev: Device to start the transition of. 1121 1093 * ··· 1106 1134 genpd = dev_to_genpd(dev); 1107 1135 if (IS_ERR(genpd)) 1108 1136 return -EINVAL; 1109 - 1110 - /* 1111 - * If a wakeup request is pending for the device, it should be woken up 1112 - * at this point and a system wakeup event should be reported if it's 1113 - * set up to wake up the system from sleep states. 1114 - */ 1115 - if (resume_needed(dev, genpd)) 1116 - pm_runtime_resume(dev); 1117 1137 1118 1138 genpd_lock(genpd); 1119 1139
+1 -1
drivers/base/power/runtime.c
···
951 951
952 952  /**
953 953   * pm_suspend_timer_fn - Timer function for pm_schedule_suspend().
954     - * @data: Device pointer passed by pm_schedule_suspend().
    954 + * @timer: hrtimer used by pm_schedule_suspend().
955 955   *
956 956   * Check if the time is right and queue a suspend request.
957 957   */
+9 -8
drivers/base/power/wakeup.c
··· 400 400 } 401 401 402 402 /** 403 - * device_wakeup_arm_wake_irqs(void) 403 + * device_wakeup_arm_wake_irqs - 404 404 * 405 - * Itereates over the list of device wakeirqs to arm them. 405 + * Iterates over the list of device wakeirqs to arm them. 406 406 */ 407 407 void device_wakeup_arm_wake_irqs(void) 408 408 { ··· 416 416 } 417 417 418 418 /** 419 - * device_wakeup_disarm_wake_irqs(void) 419 + * device_wakeup_disarm_wake_irqs - 420 420 * 421 - * Itereates over the list of device wakeirqs to disarm them. 421 + * Iterates over the list of device wakeirqs to disarm them. 422 422 */ 423 423 void device_wakeup_disarm_wake_irqs(void) 424 424 { ··· 532 532 /** 533 533 * device_set_wakeup_enable - Enable or disable a device to wake up the system. 534 534 * @dev: Device to handle. 535 + * @enable: enable/disable flag 535 536 */ 536 537 int device_set_wakeup_enable(struct device *dev, bool enable) 537 538 { ··· 582 581 */ 583 582 584 583 /** 585 - * wakup_source_activate - Mark given wakeup source as active. 584 + * wakeup_source_activate - Mark given wakeup source as active. 586 585 * @ws: Wakeup source to handle. 587 586 * 588 587 * Update the @ws' statistics and, if @ws has just been activated, notify the PM ··· 687 686 #endif 688 687 689 688 /** 690 - * wakup_source_deactivate - Mark given wakeup source as inactive. 689 + * wakeup_source_deactivate - Mark given wakeup source as inactive. 691 690 * @ws: Wakeup source to handle. 692 691 * 693 692 * Update the @ws' statistics and notify the PM core that the wakeup source has ··· 786 785 787 786 /** 788 787 * pm_wakeup_timer_fn - Delayed finalization of a wakeup event. 789 - * @data: Address of the wakeup source object associated with the event source. 
788 + * @t: timer list 790 789 * 791 790 * Call wakeup_source_deactivate() for the wakeup source whose address is stored 792 791 * in @data if it is currently active and its timer has not been canceled and ··· 1022 1021 #ifdef CONFIG_PM_AUTOSLEEP 1023 1022 /** 1024 1023 * pm_wakep_autosleep_enabled - Modify autosleep_enabled for all wakeup sources. 1025 - * @enabled: Whether to set or to clear the autosleep_enabled flags. 1024 + * @set: Whether to set or to clear the autosleep_enabled flags. 1026 1025 */ 1027 1026 void pm_wakep_autosleep_enabled(bool set) 1028 1027 {
+1 -1
drivers/base/power/wakeup_stats.c
···
137 137 				struct wakeup_source *ws)
138 138  {
139 139  	struct device *dev = NULL;
140     - 	int retval = -ENODEV;
    140 + 	int retval;
141 141 
142 142  	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
143 143  	if (!dev) {
+45 -38
drivers/clk/mvebu/armada-37xx-periph.c
··· 84 84 void __iomem *reg_div; 85 85 u8 shift_div; 86 86 struct regmap *nb_pm_base; 87 + unsigned long l1_expiration; 87 88 }; 88 89 89 90 #define to_clk_double_div(_hw) container_of(_hw, struct clk_double_div, hw) ··· 441 440 return val; 442 441 } 443 442 444 - static int clk_pm_cpu_set_parent(struct clk_hw *hw, u8 index) 445 - { 446 - struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw); 447 - struct regmap *base = pm_cpu->nb_pm_base; 448 - int load_level; 449 - 450 - /* 451 - * We set the clock parent only if the DVFS is available but 452 - * not enabled. 453 - */ 454 - if (IS_ERR(base) || armada_3700_pm_dvfs_is_enabled(base)) 455 - return -EINVAL; 456 - 457 - /* Set the parent clock for all the load level */ 458 - for (load_level = 0; load_level < LOAD_LEVEL_NR; load_level++) { 459 - unsigned int reg, mask, val, 460 - offset = ARMADA_37XX_NB_TBG_SEL_OFF; 461 - 462 - armada_3700_pm_dvfs_update_regs(load_level, &reg, &offset); 463 - 464 - val = index << offset; 465 - mask = ARMADA_37XX_NB_TBG_SEL_MASK << offset; 466 - regmap_update_bits(base, reg, mask, val); 467 - } 468 - return 0; 469 - } 470 - 471 443 static unsigned long clk_pm_cpu_recalc_rate(struct clk_hw *hw, 472 444 unsigned long parent_rate) 473 445 { ··· 488 514 } 489 515 490 516 /* 491 - * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz 492 - * respectively) to L0 frequency (1.2 Ghz) requires a significant 517 + * Workaround when base CPU frequnecy is 1000 or 1200 MHz 518 + * 519 + * Switching the CPU from the L2 or L3 frequencies (250/300 or 200 MHz 520 + * respectively) to L0 frequency (1/1.2 GHz) requires a significant 493 521 * amount of time to let VDD stabilize to the appropriate 494 522 * voltage. This amount of time is large enough that it cannot be 495 523 * covered by the hardware countdown register. 
Due to this, the CPU ··· 501 525 * To work around this problem, we prevent switching directly from the 502 526 * L2/L3 frequencies to the L0 frequency, and instead switch to the L1 503 527 * frequency in-between. The sequence therefore becomes: 504 - * 1. First switch from L2/L3(200/300MHz) to L1(600MHZ) 528 + * 1. First switch from L2/L3 (200/250/300 MHz) to L1 (500/600 MHz) 505 529 * 2. Sleep 20ms for stabling VDD voltage 506 - * 3. Then switch from L1(600MHZ) to L0(1200Mhz). 530 + * 3. Then switch from L1 (500/600 MHz) to L0 (1000/1200 MHz). 507 531 */ 508 - static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base) 532 + static void clk_pm_cpu_set_rate_wa(struct clk_pm_cpu *pm_cpu, 533 + unsigned int new_level, unsigned long rate, 534 + struct regmap *base) 509 535 { 510 536 unsigned int cur_level; 511 537 512 - if (rate != 1200 * 1000 * 1000) 513 - return; 514 - 515 538 regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level); 516 539 cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK; 517 - if (cur_level <= ARMADA_37XX_DVFS_LOAD_1) 540 + 541 + if (cur_level == new_level) 518 542 return; 543 + 544 + /* 545 + * System wants to go to L1 on its own. If we are going from L2/L3, 546 + * remember when 20ms will expire. If from L0, set the value so that 547 + * next switch to L0 won't have to wait. 548 + */ 549 + if (new_level == ARMADA_37XX_DVFS_LOAD_1) { 550 + if (cur_level == ARMADA_37XX_DVFS_LOAD_0) 551 + pm_cpu->l1_expiration = jiffies; 552 + else 553 + pm_cpu->l1_expiration = jiffies + msecs_to_jiffies(20); 554 + return; 555 + } 556 + 557 + /* 558 + * If we are setting to L2/L3, just invalidate L1 expiration time, 559 + * sleeping is not needed. 560 + */ 561 + if (rate < 1000*1000*1000) 562 + goto invalidate_l1_exp; 563 + 564 + /* 565 + * We are going to L0 with rate >= 1GHz. Check whether we have been at 566 + * L1 for long enough time. If not, go to L1 for 20ms. 
567 + */ 568 + if (pm_cpu->l1_expiration && jiffies >= pm_cpu->l1_expiration) 569 + goto invalidate_l1_exp; 519 570 520 571 regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD, 521 572 ARMADA_37XX_NB_CPU_LOAD_MASK, 522 573 ARMADA_37XX_DVFS_LOAD_1); 523 574 msleep(20); 575 + 576 + invalidate_l1_exp: 577 + pm_cpu->l1_expiration = 0; 524 578 } 525 579 526 580 static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate, ··· 584 578 reg = ARMADA_37XX_NB_CPU_LOAD; 585 579 mask = ARMADA_37XX_NB_CPU_LOAD_MASK; 586 580 587 - clk_pm_cpu_set_rate_wa(rate, base); 581 + /* Apply workaround when base CPU frequency is 1000 or 1200 MHz */ 582 + if (parent_rate >= 1000*1000*1000) 583 + clk_pm_cpu_set_rate_wa(pm_cpu, load_level, rate, base); 588 584 589 585 regmap_update_bits(base, reg, mask, load_level); 590 586 ··· 600 592 601 593 static const struct clk_ops clk_pm_cpu_ops = { 602 594 .get_parent = clk_pm_cpu_get_parent, 603 - .set_parent = clk_pm_cpu_set_parent, 604 595 .round_rate = clk_pm_cpu_round_rate, 605 596 .set_rate = clk_pm_cpu_set_rate, 606 597 .recalc_rate = clk_pm_cpu_recalc_rate,
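The jiffies-based bookkeeping above replaces the old unconditional msleep(20) with an expiration timestamp, so the 20 ms VDD-stabilization wait is only taken when the CPU really has not sat at L1 long enough. As a sanity check, here is a small userspace model of that decision logic; `set_level()`, the mock `jiffies` counter, and the `slept` flag are inventions for illustration (the real code does a regmap write plus `msleep(20)` where the model merely sets `slept`, and distinguishes L2/L3 by target rate rather than by level number):

```c
#include <assert.h>
#include <stdbool.h>

/* Mock load levels and a mock tick counter; pretend 1 jiffy == 1 ms. */
enum { LOAD_0, LOAD_1, LOAD_2, LOAD_3 };

static unsigned long jiffies;
#define MSECS_20 20

static unsigned long l1_expiration;	/* 0 == no pending L1 window */
static bool slept;			/* did we have to park at L1? */

static void set_level(int cur, int new_level)
{
	slept = false;

	if (cur == new_level)
		return;

	/* Going to L1: remember when the 20 ms window will have elapsed. */
	if (new_level == LOAD_1) {
		l1_expiration = (cur == LOAD_0) ? jiffies
						: jiffies + MSECS_20;
		return;
	}

	/* Going to L2/L3: no stabilization needed, drop any pending window. */
	if (new_level > LOAD_1) {
		l1_expiration = 0;
		return;
	}

	/* Going to L0: only safe if we already sat at L1 long enough. */
	if (!(l1_expiration && jiffies >= l1_expiration))
		slept = true;	/* driver would switch to L1 and msleep(20) */

	l1_expiration = 0;
}
```

The payoff is the third case: if the system already spent 20 ms at L1 on its own, the switch to L0 proceeds without any extra sleep.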
+6 -17
drivers/cpufreq/Kconfig
··· 13 13 clock speed, you need to either enable a dynamic cpufreq governor 14 14 (see below) after boot, or use a userspace tool. 15 15 16 - For details, take a look at <file:Documentation/cpu-freq>. 16 + For details, take a look at 17 + <file:Documentation/admin-guide/pm/cpufreq.rst>. 17 18 18 19 If in doubt, say N. 19 20 ··· 141 140 To compile this driver as a module, choose M here: the 142 141 module will be called cpufreq_userspace. 143 142 144 - For details, take a look at <file:Documentation/cpu-freq/>. 145 - 146 143 If in doubt, say Y. 147 144 148 145 config CPU_FREQ_GOV_ONDEMAND ··· 157 158 To compile this driver as a module, choose M here: the 158 159 module will be called cpufreq_ondemand. 159 160 160 - For details, take a look at linux/Documentation/cpu-freq. 161 + For details, take a look at 162 + <file:Documentation/admin-guide/pm/cpufreq.rst>. 161 163 162 164 If in doubt, say N. 163 165 ··· 182 182 To compile this driver as a module, choose M here: the 183 183 module will be called cpufreq_conservative. 184 184 185 - For details, take a look at linux/Documentation/cpu-freq. 185 + For details, take a look at 186 + <file:Documentation/admin-guide/pm/cpufreq.rst>. 186 187 187 188 If in doubt, say N. 188 189 ··· 247 246 This driver adds a CPUFreq driver which utilizes the ACPI 248 247 Processor Performance States. 249 248 250 - For details, take a look at <file:Documentation/cpu-freq/>. 251 - 252 249 If in doubt, say N. 253 250 endif 254 251 ··· 270 271 271 272 Loongson2F and it's successors support this feature. 272 273 273 - For details, take a look at <file:Documentation/cpu-freq/>. 274 - 275 274 If in doubt, say N. 276 275 277 276 config LOONGSON1_CPUFREQ ··· 278 281 help 279 282 This option adds a CPUFreq driver for loongson1 processors which 280 283 support software configurable cpu frequency. 281 - 282 - For details, take a look at <file:Documentation/cpu-freq/>. 283 284 284 285 If in doubt, say N. 
285 286 endif ··· 288 293 help 289 294 This adds the CPUFreq driver for UltraSPARC-III processors. 290 295 291 - For details, take a look at <file:Documentation/cpu-freq>. 292 - 293 296 If in doubt, say N. 294 297 295 298 config SPARC_US2E_CPUFREQ 296 299 tristate "UltraSPARC-IIe CPU Frequency driver" 297 300 help 298 301 This adds the CPUFreq driver for UltraSPARC-IIe processors. 299 - 300 - For details, take a look at <file:Documentation/cpu-freq>. 301 302 302 303 If in doubt, say N. 303 304 endif ··· 308 317 harmless for CPUs that don't support rate rounding. The driver 309 318 will also generate a notice in the boot log before disabling 310 319 itself if the CPU in question is not capable of rate rounding. 311 - 312 - For details, take a look at <file:Documentation/cpu-freq>. 313 320 314 321 If unsure, say N. 315 322 endif
+10
drivers/cpufreq/Kconfig.arm
···
19 19 
20 20  	  If in doubt, say N.
21 21 
   22 + config ACPI_CPPC_CPUFREQ_FIE
   23 + 	bool "Frequency Invariance support for CPPC cpufreq driver"
   24 + 	depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY
   25 + 	default y
   26 + 	help
   27 + 	  This extends frequency invariance support in the CPPC cpufreq driver,
   28 + 	  by using CPPC delivered and reference performance counters.
   29 + 
   30 + 	  If in doubt, say N.
   31 + 
22 32  config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
23 33  	tristate "Allwinner nvmem based SUN50I CPUFreq driver"
24 34  	depends on ARCH_SUNXI
+88 -23
drivers/cpufreq/armada-37xx-cpufreq.c
··· 25 25 26 26 #include "cpufreq-dt.h" 27 27 28 + /* Clk register set */ 29 + #define ARMADA_37XX_CLK_TBG_SEL 0 30 + #define ARMADA_37XX_CLK_TBG_SEL_CPU_OFF 22 31 + 28 32 /* Power management in North Bridge register set */ 29 33 #define ARMADA_37XX_NB_L0L1 0x18 30 34 #define ARMADA_37XX_NB_L2L3 0x1C ··· 73 69 #define LOAD_LEVEL_NR 4 74 70 75 71 #define MIN_VOLT_MV 1000 72 + #define MIN_VOLT_MV_FOR_L1_1000MHZ 1108 73 + #define MIN_VOLT_MV_FOR_L1_1200MHZ 1155 76 74 77 75 /* AVS value for the corresponding voltage (in mV) */ 78 76 static int avs_map[] = { ··· 86 80 }; 87 81 88 82 struct armada37xx_cpufreq_state { 83 + struct platform_device *pdev; 84 + struct device *cpu_dev; 89 85 struct regmap *regmap; 90 86 u32 nb_l0l1; 91 87 u32 nb_l2l3; ··· 128 120 * will be configured then the DVFS will be enabled. 129 121 */ 130 122 static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base, 131 - struct clk *clk, u8 *divider) 123 + struct regmap *clk_base, u8 *divider) 132 124 { 125 + u32 cpu_tbg_sel; 133 126 int load_lvl; 134 - struct clk *parent; 127 + 128 + /* Determine to which TBG clock is CPU connected */ 129 + regmap_read(clk_base, ARMADA_37XX_CLK_TBG_SEL, &cpu_tbg_sel); 130 + cpu_tbg_sel >>= ARMADA_37XX_CLK_TBG_SEL_CPU_OFF; 131 + cpu_tbg_sel &= ARMADA_37XX_NB_TBG_SEL_MASK; 135 132 136 133 for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) { 137 134 unsigned int reg, mask, val, offset = 0; ··· 154 141 val = ARMADA_37XX_NB_CLK_SEL_TBG << ARMADA_37XX_NB_CLK_SEL_OFF; 155 142 mask = (ARMADA_37XX_NB_CLK_SEL_MASK 156 143 << ARMADA_37XX_NB_CLK_SEL_OFF); 144 + 145 + /* Set TBG index, for all levels we use the same TBG */ 146 + val = cpu_tbg_sel << ARMADA_37XX_NB_TBG_SEL_OFF; 147 + mask = (ARMADA_37XX_NB_TBG_SEL_MASK 148 + << ARMADA_37XX_NB_TBG_SEL_OFF); 157 149 158 150 /* 159 151 * Set cpu divider based on the pre-computed array in ··· 178 160 179 161 regmap_update_bits(base, reg, mask, val); 180 162 } 181 - 182 - /* 183 - * Set cpu clock source, for all the 
level we keep the same 184 - * clock source that the one already configured. For this one 185 - * we need to use the clock framework 186 - */ 187 - parent = clk_get_parent(clk); 188 - clk_set_parent(clk, parent); 189 163 } 190 164 191 165 /* ··· 212 202 * - L2 & L3 voltage should be about 150mv smaller than L0 voltage. 213 203 * This function calculates L1 & L2 & L3 AVS values dynamically based 214 204 * on L0 voltage and fill all AVS values to the AVS value table. 205 + * When base CPU frequency is 1000 or 1200 MHz then there is additional 206 + * minimal avs value for load L1. 215 207 */ 216 208 static void __init armada37xx_cpufreq_avs_configure(struct regmap *base, 217 209 struct armada_37xx_dvfs *dvfs) ··· 245 233 for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++) 246 234 dvfs->avs[load_level] = avs_min; 247 235 236 + /* 237 + * Set the avs values for load L0 and L1 when base CPU frequency 238 + * is 1000/1200 MHz to its typical initial values according to 239 + * the Armada 3700 Hardware Specifications. 240 + */ 241 + if (dvfs->cpu_freq_max >= 1000*1000*1000) { 242 + if (dvfs->cpu_freq_max >= 1200*1000*1000) 243 + avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ); 244 + else 245 + avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ); 246 + dvfs->avs[0] = dvfs->avs[1] = avs_min; 247 + } 248 + 248 249 return; 249 250 } 250 251 ··· 277 252 target_vm = avs_map[l0_vdd_min] - 150; 278 253 target_vm = target_vm > MIN_VOLT_MV ? target_vm : MIN_VOLT_MV; 279 254 dvfs->avs[2] = dvfs->avs[3] = armada_37xx_avs_val_match(target_vm); 255 + 256 + /* 257 + * Fix the avs value for load L1 when base CPU frequency is 1000/1200 MHz, 258 + * otherwise the CPU gets stuck when switching from load L1 to load L0. 259 + * Also ensure that avs value for load L1 is not higher than for L0. 
260 + */ 261 + if (dvfs->cpu_freq_max >= 1000*1000*1000) { 262 + u32 avs_min_l1; 263 + 264 + if (dvfs->cpu_freq_max >= 1200*1000*1000) 265 + avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ); 266 + else 267 + avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ); 268 + 269 + if (avs_min_l1 > dvfs->avs[0]) 270 + avs_min_l1 = dvfs->avs[0]; 271 + 272 + if (dvfs->avs[1] < avs_min_l1) 273 + dvfs->avs[1] = avs_min_l1; 274 + } 280 275 } 281 276 282 277 static void __init armada37xx_cpufreq_avs_setup(struct regmap *base, ··· 402 357 struct armada_37xx_dvfs *dvfs; 403 358 struct platform_device *pdev; 404 359 unsigned long freq; 405 - unsigned int cur_frequency, base_frequency; 406 - struct regmap *nb_pm_base, *avs_base; 360 + unsigned int base_frequency; 361 + struct regmap *nb_clk_base, *nb_pm_base, *avs_base; 407 362 struct device *cpu_dev; 408 363 int load_lvl, ret; 409 364 struct clk *clk, *parent; 365 + 366 + nb_clk_base = 367 + syscon_regmap_lookup_by_compatible("marvell,armada-3700-periph-clock-nb"); 368 + if (IS_ERR(nb_clk_base)) 369 + return -ENODEV; 410 370 411 371 nb_pm_base = 412 372 syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm"); ··· 463 413 return -EINVAL; 464 414 } 465 415 466 - /* Get nominal (current) CPU frequency */ 467 - cur_frequency = clk_get_rate(clk); 468 - if (!cur_frequency) { 469 - dev_err(cpu_dev, "Failed to get clock rate for CPU\n"); 470 - clk_put(clk); 471 - return -EINVAL; 472 - } 473 - 474 - dvfs = armada_37xx_cpu_freq_info_get(cur_frequency); 416 + dvfs = armada_37xx_cpu_freq_info_get(base_frequency); 475 417 if (!dvfs) { 476 418 clk_put(clk); 477 419 return -EINVAL; ··· 481 439 armada37xx_cpufreq_avs_configure(avs_base, dvfs); 482 440 armada37xx_cpufreq_avs_setup(avs_base, dvfs); 483 441 484 - armada37xx_cpufreq_dvfs_setup(nb_pm_base, clk, dvfs->divider); 442 + armada37xx_cpufreq_dvfs_setup(nb_pm_base, nb_clk_base, dvfs->divider); 485 443 clk_put(clk); 486 444 487 445 for (load_lvl = 
ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR; ··· 508 466 if (ret) 509 467 goto disable_dvfs; 510 468 469 + armada37xx_cpufreq_state->cpu_dev = cpu_dev; 470 + armada37xx_cpufreq_state->pdev = pdev; 471 + platform_set_drvdata(pdev, dvfs); 511 472 return 0; 512 473 513 474 disable_dvfs: ··· 518 473 remove_opp: 519 474 /* clean-up the already added opp before leaving */ 520 475 while (load_lvl-- > ARMADA_37XX_DVFS_LOAD_0) { 521 - freq = cur_frequency / dvfs->divider[load_lvl]; 476 + freq = base_frequency / dvfs->divider[load_lvl]; 522 477 dev_pm_opp_remove(cpu_dev, freq); 523 478 } 524 479 ··· 528 483 } 529 484 /* late_initcall, to guarantee the driver is loaded after A37xx clock driver */ 530 485 late_initcall(armada37xx_cpufreq_driver_init); 486 + 487 + static void __exit armada37xx_cpufreq_driver_exit(void) 488 + { 489 + struct platform_device *pdev = armada37xx_cpufreq_state->pdev; 490 + struct armada_37xx_dvfs *dvfs = platform_get_drvdata(pdev); 491 + unsigned long freq; 492 + int load_lvl; 493 + 494 + platform_device_unregister(pdev); 495 + 496 + armada37xx_cpufreq_disable_dvfs(armada37xx_cpufreq_state->regmap); 497 + 498 + for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR; load_lvl++) { 499 + freq = dvfs->cpu_freq_max / dvfs->divider[load_lvl]; 500 + dev_pm_opp_remove(armada37xx_cpufreq_state->cpu_dev, freq); 501 + } 502 + 503 + kfree(armada37xx_cpufreq_state); 504 + } 505 + module_exit(armada37xx_cpufreq_driver_exit); 531 506 532 507 static const struct of_device_id __maybe_unused armada37xx_cpufreq_of_match[] = { 533 508 { .compatible = "marvell,armada-3700-nb-pm" },
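The TBG-selection change above boils down to two plain bitfield operations: read which TBG feeds the CPU out of the clock-select register, then write that index into each load level's TBG_SEL field with a `regmap_update_bits()`-style read-modify-write. A minimal sketch of those two steps; the helper names (`cpu_tbg_index`, `set_level_tbg`) are not in the driver, and the raw register values in the test are made up:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout as named in the patch. */
#define TBG_SEL_CPU_OFF		22	/* ARMADA_37XX_CLK_TBG_SEL_CPU_OFF */
#define NB_TBG_SEL_MASK		0x3	/* ARMADA_37XX_NB_TBG_SEL_MASK */

/* Which of the four TBGs currently feeds the CPU clock? */
static uint32_t cpu_tbg_index(uint32_t clk_tbg_sel_reg)
{
	return (clk_tbg_sel_reg >> TBG_SEL_CPU_OFF) & NB_TBG_SEL_MASK;
}

/* regmap_update_bits()-style read-modify-write of one level's TBG field. */
static uint32_t set_level_tbg(uint32_t level_reg, unsigned int offset,
			      uint32_t tbg)
{
	uint32_t mask = NB_TBG_SEL_MASK << offset;

	return (level_reg & ~mask) | ((tbg << offset) & mask);
}
```

Programming the same index into every load level is what lets the patch drop the clk_get_parent()/clk_set_parent() round-trip through the clock framework.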
+235 -24
drivers/cpufreq/cppc_cpufreq.c
··· 10 10 11 11 #define pr_fmt(fmt) "CPPC Cpufreq:" fmt 12 12 13 + #include <linux/arch_topology.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/module.h> 15 16 #include <linux/delay.h> 16 17 #include <linux/cpu.h> 17 18 #include <linux/cpufreq.h> 18 19 #include <linux/dmi.h> 20 + #include <linux/irq_work.h> 21 + #include <linux/kthread.h> 19 22 #include <linux/time.h> 20 23 #include <linux/vmalloc.h> 24 + #include <uapi/linux/sched/types.h> 21 25 22 26 #include <asm/unaligned.h> 23 27 ··· 60 56 .oem_revision = 0, 61 57 } 62 58 }; 59 + 60 + #ifdef CONFIG_ACPI_CPPC_CPUFREQ_FIE 61 + 62 + /* Frequency invariance support */ 63 + struct cppc_freq_invariance { 64 + int cpu; 65 + struct irq_work irq_work; 66 + struct kthread_work work; 67 + struct cppc_perf_fb_ctrs prev_perf_fb_ctrs; 68 + struct cppc_cpudata *cpu_data; 69 + }; 70 + 71 + static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv); 72 + static struct kthread_worker *kworker_fie; 73 + static bool fie_disabled; 74 + 75 + static struct cpufreq_driver cppc_cpufreq_driver; 76 + static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu); 77 + static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data, 78 + struct cppc_perf_fb_ctrs fb_ctrs_t0, 79 + struct cppc_perf_fb_ctrs fb_ctrs_t1); 80 + 81 + /** 82 + * cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance 83 + * @work: The work item. 84 + * 85 + * The CPPC driver register itself with the topology core to provide its own 86 + * implementation (cppc_scale_freq_tick()) of topology_scale_freq_tick() which 87 + * gets called by the scheduler on every tick. 88 + * 89 + * Note that the arch specific counters have higher priority than CPPC counters, 90 + * if available, though the CPPC driver doesn't need to have any special 91 + * handling for that. 
92 + * 93 + * On an invocation of cppc_scale_freq_tick(), we schedule an irq work (since we 94 + * reach here from hard-irq context), which then schedules a normal work item 95 + * and cppc_scale_freq_workfn() updates the per_cpu arch_freq_scale variable 96 + * based on the counter updates since the last tick. 97 + */ 98 + static void cppc_scale_freq_workfn(struct kthread_work *work) 99 + { 100 + struct cppc_freq_invariance *cppc_fi; 101 + struct cppc_perf_fb_ctrs fb_ctrs = {0}; 102 + struct cppc_cpudata *cpu_data; 103 + unsigned long local_freq_scale; 104 + u64 perf; 105 + 106 + cppc_fi = container_of(work, struct cppc_freq_invariance, work); 107 + cpu_data = cppc_fi->cpu_data; 108 + 109 + if (cppc_get_perf_ctrs(cppc_fi->cpu, &fb_ctrs)) { 110 + pr_warn("%s: failed to read perf counters\n", __func__); 111 + return; 112 + } 113 + 114 + cppc_fi->prev_perf_fb_ctrs = fb_ctrs; 115 + perf = cppc_perf_from_fbctrs(cpu_data, cppc_fi->prev_perf_fb_ctrs, 116 + fb_ctrs); 117 + 118 + perf <<= SCHED_CAPACITY_SHIFT; 119 + local_freq_scale = div64_u64(perf, cpu_data->perf_caps.highest_perf); 120 + if (WARN_ON(local_freq_scale > 1024)) 121 + local_freq_scale = 1024; 122 + 123 + per_cpu(arch_freq_scale, cppc_fi->cpu) = local_freq_scale; 124 + } 125 + 126 + static void cppc_irq_work(struct irq_work *irq_work) 127 + { 128 + struct cppc_freq_invariance *cppc_fi; 129 + 130 + cppc_fi = container_of(irq_work, struct cppc_freq_invariance, irq_work); 131 + kthread_queue_work(kworker_fie, &cppc_fi->work); 132 + } 133 + 134 + static void cppc_scale_freq_tick(void) 135 + { 136 + struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id()); 137 + 138 + /* 139 + * cppc_get_perf_ctrs() can potentially sleep, call that from the right 140 + * context. 
141 + */ 142 + irq_work_queue(&cppc_fi->irq_work); 143 + } 144 + 145 + static struct scale_freq_data cppc_sftd = { 146 + .source = SCALE_FREQ_SOURCE_CPPC, 147 + .set_freq_scale = cppc_scale_freq_tick, 148 + }; 149 + 150 + static void cppc_freq_invariance_policy_init(struct cpufreq_policy *policy, 151 + struct cppc_cpudata *cpu_data) 152 + { 153 + struct cppc_perf_fb_ctrs fb_ctrs = {0}; 154 + struct cppc_freq_invariance *cppc_fi; 155 + int i, ret; 156 + 157 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 158 + return; 159 + 160 + if (fie_disabled) 161 + return; 162 + 163 + for_each_cpu(i, policy->cpus) { 164 + cppc_fi = &per_cpu(cppc_freq_inv, i); 165 + cppc_fi->cpu = i; 166 + cppc_fi->cpu_data = cpu_data; 167 + kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn); 168 + init_irq_work(&cppc_fi->irq_work, cppc_irq_work); 169 + 170 + ret = cppc_get_perf_ctrs(i, &fb_ctrs); 171 + if (ret) { 172 + pr_warn("%s: failed to read perf counters: %d\n", 173 + __func__, ret); 174 + fie_disabled = true; 175 + } else { 176 + cppc_fi->prev_perf_fb_ctrs = fb_ctrs; 177 + } 178 + } 179 + } 180 + 181 + static void __init cppc_freq_invariance_init(void) 182 + { 183 + struct sched_attr attr = { 184 + .size = sizeof(struct sched_attr), 185 + .sched_policy = SCHED_DEADLINE, 186 + .sched_nice = 0, 187 + .sched_priority = 0, 188 + /* 189 + * Fake (unused) bandwidth; workaround to "fix" 190 + * priority inheritance. 
191 + */ 192 + .sched_runtime = 1000000, 193 + .sched_deadline = 10000000, 194 + .sched_period = 10000000, 195 + }; 196 + int ret; 197 + 198 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 199 + return; 200 + 201 + if (fie_disabled) 202 + return; 203 + 204 + kworker_fie = kthread_create_worker(0, "cppc_fie"); 205 + if (IS_ERR(kworker_fie)) 206 + return; 207 + 208 + ret = sched_setattr_nocheck(kworker_fie->task, &attr); 209 + if (ret) { 210 + pr_warn("%s: failed to set SCHED_DEADLINE: %d\n", __func__, 211 + ret); 212 + kthread_destroy_worker(kworker_fie); 213 + return; 214 + } 215 + 216 + /* Register for freq-invariance */ 217 + topology_set_scale_freq_source(&cppc_sftd, cpu_present_mask); 218 + } 219 + 220 + static void cppc_freq_invariance_exit(void) 221 + { 222 + struct cppc_freq_invariance *cppc_fi; 223 + int i; 224 + 225 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 226 + return; 227 + 228 + if (fie_disabled) 229 + return; 230 + 231 + topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, cpu_present_mask); 232 + 233 + for_each_possible_cpu(i) { 234 + cppc_fi = &per_cpu(cppc_freq_inv, i); 235 + irq_work_sync(&cppc_fi->irq_work); 236 + } 237 + 238 + kthread_destroy_worker(kworker_fie); 239 + kworker_fie = NULL; 240 + } 241 + 242 + #else 243 + static inline void 244 + cppc_freq_invariance_policy_init(struct cpufreq_policy *policy, 245 + struct cppc_cpudata *cpu_data) 246 + { 247 + } 248 + 249 + static inline void cppc_freq_invariance_init(void) 250 + { 251 + } 252 + 253 + static inline void cppc_freq_invariance_exit(void) 254 + { 255 + } 256 + #endif /* CONFIG_ACPI_CPPC_CPUFREQ_FIE */ 63 257 64 258 /* Callback function used to retrieve the max frequency from DMI */ 65 259 static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private) ··· 418 216 { 419 217 unsigned long implementor = read_cpuid_implementor(); 420 218 unsigned long part_num = read_cpuid_part_number(); 421 - unsigned int delay_us = 0; 422 219 423 220 switch 
(implementor) { 424 221 case ARM_CPU_IMP_QCOM: 425 222 switch (part_num) { 426 223 case QCOM_CPU_PART_FALKOR_V1: 427 224 case QCOM_CPU_PART_FALKOR: 428 - delay_us = 10000; 429 - break; 430 - default: 431 - delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 432 - break; 225 + return 10000; 433 226 } 434 - break; 435 - default: 436 - delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 437 - break; 438 227 } 439 - 440 - return delay_us; 228 + return cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 441 229 } 442 230 443 231 #else ··· 547 355 cpu_data->perf_ctrls.desired_perf = caps->highest_perf; 548 356 549 357 ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls); 550 - if (ret) 358 + if (ret) { 551 359 pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n", 552 360 caps->highest_perf, cpu, ret); 361 + } else { 362 + cppc_freq_invariance_policy_init(policy, cpu_data); 363 + } 553 364 554 365 return ret; 555 366 } ··· 565 370 return (u32)t1 - (u32)t0; 566 371 } 567 372 568 - static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data, 569 - struct cppc_perf_fb_ctrs fb_ctrs_t0, 570 - struct cppc_perf_fb_ctrs fb_ctrs_t1) 373 + static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data, 374 + struct cppc_perf_fb_ctrs fb_ctrs_t0, 375 + struct cppc_perf_fb_ctrs fb_ctrs_t1) 571 376 { 572 377 u64 delta_reference, delta_delivered; 573 - u64 reference_perf, delivered_perf; 378 + u64 reference_perf; 574 379 575 380 reference_perf = fb_ctrs_t0.reference_perf; 576 381 ··· 579 384 delta_delivered = get_delta(fb_ctrs_t1.delivered, 580 385 fb_ctrs_t0.delivered); 581 386 582 - /* Check to avoid divide-by zero */ 583 - if (delta_reference || delta_delivered) 584 - delivered_perf = (reference_perf * delta_delivered) / 585 - delta_reference; 586 - else 587 - delivered_perf = cpu_data->perf_ctrls.desired_perf; 387 + /* Check to avoid divide-by zero and invalid delivered_perf */ 388 + if (!delta_reference || !delta_delivered) 389 + return 
cpu_data->perf_ctrls.desired_perf; 390 + 391 + return (reference_perf * delta_delivered) / delta_reference; 392 + } 393 + 394 + static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data, 395 + struct cppc_perf_fb_ctrs fb_ctrs_t0, 396 + struct cppc_perf_fb_ctrs fb_ctrs_t1) 397 + { 398 + u64 delivered_perf; 399 + 400 + delivered_perf = cppc_perf_from_fbctrs(cpu_data, fb_ctrs_t0, 401 + fb_ctrs_t1); 588 402 589 403 return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf); 590 404 } ··· 718 514 719 515 static int __init cppc_cpufreq_init(void) 720 516 { 517 + int ret; 518 + 721 519 if ((acpi_disabled) || !acpi_cpc_valid()) 722 520 return -ENODEV; 723 521 ··· 727 521 728 522 cppc_check_hisi_workaround(); 729 523 730 - return cpufreq_register_driver(&cppc_cpufreq_driver); 524 + ret = cpufreq_register_driver(&cppc_cpufreq_driver); 525 + if (!ret) 526 + cppc_freq_invariance_init(); 527 + 528 + return ret; 731 529 } 732 530 733 531 static inline void free_cpu_data(void) ··· 748 538 749 539 static void __exit cppc_cpufreq_exit(void) 750 540 { 541 + cppc_freq_invariance_exit(); 751 542 cpufreq_unregister_driver(&cppc_cpufreq_driver); 752 543 753 544 free_cpu_data();
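The refactoring above splits the counter math out of cppc_get_rate_from_fbctrs() so the new frequency-invariance code can share it: deltas of the delivered and reference feedback counters give the average delivered performance since the last read. A standalone model of that math, including the 32-bit wraparound handling of get_delta() and the fallback to the desired performance when a counter has not moved; the struct and function names here are simplified stand-ins for the driver's:

```c
#include <assert.h>
#include <stdint.h>

struct fb_ctrs {
	uint64_t reference;
	uint64_t delivered;
};

static uint64_t get_delta(uint64_t t1, uint64_t t0)
{
	if (t1 > t0 || t0 > ~(uint32_t)0)
		return t1 - t0;

	/* 32-bit counter wrapped: unsigned subtraction does the right thing */
	return (uint32_t)t1 - (uint32_t)t0;
}

static uint64_t perf_from_fbctrs(uint64_t reference_perf,
				 uint64_t desired_perf,
				 struct fb_ctrs t0, struct fb_ctrs t1)
{
	uint64_t delta_ref = get_delta(t1.reference, t0.reference);
	uint64_t delta_del = get_delta(t1.delivered, t0.delivered);

	/* Avoid divide-by-zero and a bogus result on stalled counters */
	if (!delta_ref || !delta_del)
		return desired_perf;

	return (reference_perf * delta_del) / delta_ref;
}
```

The FIE path then scales this result by SCHED_CAPACITY_SHIFT and divides by highest_perf to get the per-CPU arch_freq_scale value, clamped to 1024.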
+7 -2
drivers/cpufreq/cpufreq-dt.c
···
255 255  	 * before updating priv->cpus. Otherwise, we will end up creating
256 256  	 * duplicate OPPs for the CPUs.
257 257  	 *
258     - 	 * OPPs might be populated at runtime, don't check for error here.
    258 + 	 * OPPs might be populated at runtime, don't fail for error here unless
    259 + 	 * it is -EPROBE_DEFER.
259 260  	 */
260     - 	if (!dev_pm_opp_of_cpumask_add_table(priv->cpus))
    261 + 	ret = dev_pm_opp_of_cpumask_add_table(priv->cpus);
    262 + 	if (!ret) {
261 263  		priv->have_static_opps = true;
    264 + 	} else if (ret == -EPROBE_DEFER) {
    265 + 		goto out;
    266 + 	}
262 267 
263 268  	/*
264 269  	 * The OPP table must be initialized, statically or dynamically, by this
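The cpufreq-dt change amounts to a three-way split on the return value: success marks static OPPs as present, -EPROBE_DEFER aborts the probe so it can be retried once the OPP provider appears, and any other error still falls through because OPPs may be populated at runtime. A tiny model of just that control flow, where `handle_opp_table()` is a made-up stand-in taking the value dev_pm_opp_of_cpumask_add_table() would return:

```c
#include <assert.h>
#include <stdbool.h>

/* Mocked error codes; the numeric values match the kernel's. */
#define EPROBE_DEFER	517
#define EINVAL		22

static int handle_opp_table(int opp_add_table_ret, bool *have_static_opps)
{
	*have_static_opps = false;

	if (!opp_add_table_ret)
		*have_static_opps = true;		/* static OPPs found */
	else if (opp_add_table_ret == -EPROBE_DEFER)
		return -EPROBE_DEFER;			/* "goto out" in the driver */

	return 0;					/* keep probing */
}
```

Before this fix, -EPROBE_DEFER was swallowed like any other error, so the driver could probe against a not-yet-ready OPP provider instead of deferring.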
-3
drivers/cpufreq/cpufreq.c
···
42 42  #define for_each_inactive_policy(__policy)		\
43 43  	for_each_suitable_policy(__policy, false)
44 44 
45     - #define for_each_policy(__policy)			\
46     - 	list_for_each_entry(__policy, &cpufreq_policy_list, policy_list)
47     - 
48 45  /* Iterate over governors */
49 46  static LIST_HEAD(cpufreq_governor_list);
50 47  #define for_each_governor(__governor)				\
+2 -2
drivers/cpufreq/ia64-acpi-cpufreq.c
···
54 54  	retval = ia64_pal_set_pstate((u64)value);
55 55 
56 56  	if (retval) {
57     - 		pr_debug("Failed to set freq to 0x%x, with error 0x%lx\n",
    57 + 		pr_debug("Failed to set freq to 0x%x, with error 0x%llx\n",
58 58  			value, retval);
59 59  		return -ENODEV;
60 60  	}
···
77 77 
78 78  	if (retval)
79 79  		pr_debug("Failed to get current freq with "
80     - 			"error 0x%lx, idx 0x%x\n", retval, *value);
    80 + 			"error 0x%llx, idx 0x%x\n", retval, *value);
81 81 
82 82  	return (int)retval;
83 83  }
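The %lx to %llx change is a classic -Wformat fix: pr_debug() forwards its arguments printf-style, and once the 64-bit argument's underlying type is `long long` (as with asm-generic's int-ll64.h typedefs, which is presumably what changed for ia64 here), `%lx` no longer matches the promoted type. A userspace illustration of the same rule, using snprintf() and the <inttypes.h> macros rather than the kernel's printk:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* %llx matches unsigned long long on every ABI. */
static int format_err(char *buf, size_t len, unsigned long long retval)
{
	return snprintf(buf, len, "error 0x%llx", retval);
}

/* PRIx64 expands to the right conversion for uint64_t, whatever it is. */
static int format_err64(char *buf, size_t len, uint64_t retval)
{
	return snprintf(buf, len, "error 0x%" PRIx64, retval);
}
```

On an ILP32 target, printing a 64-bit value with `%lx` is undefined behavior (the varargs slot sizes disagree); on LP64 it may happen to work, which is why such bugs surface only on some architectures.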
+43 -64
drivers/cpufreq/intel_pstate.c
··· 819 819 NULL, 820 820 }; 821 821 822 - static void intel_pstate_get_hwp_max(struct cpudata *cpu, int *phy_max, 823 - int *current_max) 822 + static void __intel_pstate_get_hwp_cap(struct cpudata *cpu) 824 823 { 825 824 u64 cap; 826 825 827 826 rdmsrl_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap); 828 827 WRITE_ONCE(cpu->hwp_cap_cached, cap); 829 - if (global.no_turbo || global.turbo_disabled) 830 - *current_max = HWP_GUARANTEED_PERF(cap); 831 - else 832 - *current_max = HWP_HIGHEST_PERF(cap); 828 + cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(cap); 829 + cpu->pstate.turbo_pstate = HWP_HIGHEST_PERF(cap); 830 + } 833 831 834 - *phy_max = HWP_HIGHEST_PERF(cap); 832 + static void intel_pstate_get_hwp_cap(struct cpudata *cpu) 833 + { 834 + __intel_pstate_get_hwp_cap(cpu); 835 + cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling; 836 + cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; 835 837 } 836 838 837 839 static void intel_pstate_hwp_set(unsigned int cpu) ··· 1197 1195 1198 1196 static void update_qos_request(enum freq_qos_req_type type) 1199 1197 { 1200 - int max_state, turbo_max, freq, i, perf_pct; 1201 1198 struct freq_qos_request *req; 1202 1199 struct cpufreq_policy *policy; 1200 + int i; 1203 1201 1204 1202 for_each_possible_cpu(i) { 1205 1203 struct cpudata *cpu = all_cpu_data[i]; 1204 + unsigned int freq, perf_pct; 1206 1205 1207 1206 policy = cpufreq_cpu_get(i); 1208 1207 if (!policy) ··· 1216 1213 continue; 1217 1214 1218 1215 if (hwp_active) 1219 - intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); 1220 - else 1221 - turbo_max = cpu->pstate.turbo_pstate; 1216 + intel_pstate_get_hwp_cap(cpu); 1222 1217 1223 1218 if (type == FREQ_QOS_MIN) { 1224 1219 perf_pct = global.min_perf_pct; ··· 1225 1224 perf_pct = global.max_perf_pct; 1226 1225 } 1227 1226 1228 - freq = DIV_ROUND_UP(turbo_max * perf_pct, 100); 1229 - freq *= cpu->pstate.scaling; 1227 + freq = DIV_ROUND_UP(cpu->pstate.turbo_freq * perf_pct, 100); 1230 
1228 1231 1229 if (freq_qos_update_request(req, freq) < 0) 1232 1230 pr_warn("Failed to update freq constraint: CPU%d\n", i); ··· 1715 1715 { 1716 1716 cpu->pstate.min_pstate = pstate_funcs.get_min(); 1717 1717 cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical(); 1718 - cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); 1719 1718 cpu->pstate.scaling = pstate_funcs.get_scaling(); 1720 1719 1721 1720 if (hwp_active && !hwp_mode_bdw) { 1722 - unsigned int phy_max, current_max; 1723 - 1724 - intel_pstate_get_hwp_max(cpu, &phy_max, &current_max); 1725 - cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling; 1726 - cpu->pstate.turbo_pstate = phy_max; 1727 - cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(READ_ONCE(cpu->hwp_cap_cached)); 1721 + __intel_pstate_get_hwp_cap(cpu); 1728 1722 } else { 1729 - cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; 1730 1723 cpu->pstate.max_pstate = pstate_funcs.get_max(); 1724 + cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); 1731 1725 } 1726 + 1732 1727 cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling; 1728 + cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; 1733 1729 1734 1730 if (pstate_funcs.get_aperf_mperf_shift) 1735 1731 cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift(); ··· 2195 2199 unsigned int policy_min, 2196 2200 unsigned int policy_max) 2197 2201 { 2202 + int scaling = cpu->pstate.scaling; 2198 2203 int32_t max_policy_perf, min_policy_perf; 2199 - int max_state, turbo_max; 2200 - int max_freq; 2201 2204 2202 2205 /* 2203 - * HWP needs some special consideration, because on BDX the 2204 - * HWP_REQUEST uses abstract value to represent performance 2205 - * rather than pure ratios. 2206 + * HWP needs some special consideration, because HWP_REQUEST uses 2207 + * abstract values to represent performance rather than pure ratios. 
2206 2208 */ 2207 - if (hwp_active) { 2208 - intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); 2209 - } else { 2210 - max_state = global.no_turbo || global.turbo_disabled ? 2211 - cpu->pstate.max_pstate : cpu->pstate.turbo_pstate; 2212 - turbo_max = cpu->pstate.turbo_pstate; 2213 - } 2214 - max_freq = max_state * cpu->pstate.scaling; 2209 + if (hwp_active) 2210 + intel_pstate_get_hwp_cap(cpu); 2215 2211 2216 - max_policy_perf = max_state * policy_max / max_freq; 2212 + max_policy_perf = policy_max / scaling; 2217 2213 if (policy_max == policy_min) { 2218 2214 min_policy_perf = max_policy_perf; 2219 2215 } else { 2220 - min_policy_perf = max_state * policy_min / max_freq; 2216 + min_policy_perf = policy_min / scaling; 2221 2217 min_policy_perf = clamp_t(int32_t, min_policy_perf, 2222 2218 0, max_policy_perf); 2223 2219 } 2224 2220 2225 - pr_debug("cpu:%d max_state %d min_policy_perf:%d max_policy_perf:%d\n", 2226 - cpu->cpu, max_state, min_policy_perf, max_policy_perf); 2221 + pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n", 2222 + cpu->cpu, min_policy_perf, max_policy_perf); 2227 2223 2228 2224 /* Normalize user input to [min_perf, max_perf] */ 2229 2225 if (per_cpu_limits) { 2230 2226 cpu->min_perf_ratio = min_policy_perf; 2231 2227 cpu->max_perf_ratio = max_policy_perf; 2232 2228 } else { 2229 + int turbo_max = cpu->pstate.turbo_pstate; 2233 2230 int32_t global_min, global_max; 2234 2231 2235 2232 /* Global limits are in percent of the maximum turbo P-state. */ ··· 2311 2322 2312 2323 update_turbo_state(); 2313 2324 if (hwp_active) { 2314 - int max_state, turbo_max; 2315 - 2316 - intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); 2317 - max_freq = max_state * cpu->pstate.scaling; 2325 + intel_pstate_get_hwp_cap(cpu); 2326 + max_freq = global.no_turbo || global.turbo_disabled ? 
2327 + cpu->pstate.max_freq : cpu->pstate.turbo_freq; 2318 2328 } else { 2319 2329 max_freq = intel_pstate_get_max_freq(cpu); 2320 2330 } ··· 2404 2416 cpu->max_perf_ratio = 0xFF; 2405 2417 cpu->min_perf_ratio = 0; 2406 2418 2407 - policy->min = cpu->pstate.min_pstate * cpu->pstate.scaling; 2408 - policy->max = cpu->pstate.turbo_pstate * cpu->pstate.scaling; 2409 - 2410 2419 /* cpuinfo and default policy values */ 2411 2420 policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling; 2412 2421 update_turbo_state(); 2413 2422 global.turbo_disabled_mf = global.turbo_disabled; 2414 2423 policy->cpuinfo.max_freq = global.turbo_disabled ? 2415 - cpu->pstate.max_pstate : cpu->pstate.turbo_pstate; 2416 - policy->cpuinfo.max_freq *= cpu->pstate.scaling; 2417 - 2418 - if (hwp_active) { 2419 - unsigned int max_freq; 2420 - 2421 - max_freq = global.turbo_disabled ? 2422 2424 cpu->pstate.max_freq : cpu->pstate.turbo_freq; 2423 - if (max_freq < policy->cpuinfo.max_freq) 2424 - policy->cpuinfo.max_freq = max_freq; 2425 - } 2425 + 2426 + policy->min = policy->cpuinfo.min_freq; 2427 + policy->max = policy->cpuinfo.max_freq; 2426 2428 2427 2429 intel_pstate_init_acpi_perf_limits(policy); 2428 2430 ··· 2661 2683 2662 2684 static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy) 2663 2685 { 2664 - int max_state, turbo_max, min_freq, max_freq, ret; 2665 2686 struct freq_qos_request *req; 2666 2687 struct cpudata *cpu; 2667 2688 struct device *dev; 2689 + int ret, freq; 2668 2690 2669 2691 dev = get_cpu_device(policy->cpu); 2670 2692 if (!dev) ··· 2689 2711 if (hwp_active) { 2690 2712 u64 value; 2691 2713 2692 - intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); 2693 2714 policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY_HWP; 2715 + 2716 + intel_pstate_get_hwp_cap(cpu); 2717 + 2694 2718 rdmsrl_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value); 2695 2719 WRITE_ONCE(cpu->hwp_req_cached, value); 2720 + 2696 2721 cpu->epp_cached = intel_pstate_get_epp(cpu, 
value); 2697 2722 } else { 2698 - turbo_max = cpu->pstate.turbo_pstate; 2699 2723 policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY; 2700 2724 } 2701 2725 2702 - min_freq = DIV_ROUND_UP(turbo_max * global.min_perf_pct, 100); 2703 - min_freq *= cpu->pstate.scaling; 2704 - max_freq = DIV_ROUND_UP(turbo_max * global.max_perf_pct, 100); 2705 - max_freq *= cpu->pstate.scaling; 2726 + freq = DIV_ROUND_UP(cpu->pstate.turbo_freq * global.min_perf_pct, 100); 2706 2727 2707 2728 ret = freq_qos_add_request(&policy->constraints, req, FREQ_QOS_MIN, 2708 - min_freq); 2729 + freq); 2709 2730 if (ret < 0) { 2710 2731 dev_err(dev, "Failed to add min-freq constraint (%d)\n", ret); 2711 2732 goto free_req; 2712 2733 } 2713 2734 2735 + freq = DIV_ROUND_UP(cpu->pstate.turbo_freq * global.max_perf_pct, 100); 2736 + 2714 2737 ret = freq_qos_add_request(&policy->constraints, req + 1, FREQ_QOS_MAX, 2715 - max_freq); 2738 + freq); 2716 2739 if (ret < 0) { 2717 2740 dev_err(dev, "Failed to add max-freq constraint (%d)\n", ret); 2718 2741 goto remove_min_req;
+7 -7
drivers/cpufreq/s5pv210-cpufreq.c
··· 91 91 /* Use 800MHz when entering sleep mode */ 92 92 #define SLEEP_FREQ (800 * 1000) 93 93 94 - /* Tracks if cpu freqency can be updated anymore */ 94 + /* Tracks if CPU frequency can be updated anymore */ 95 95 static bool no_cpufreq_access; 96 96 97 97 /* ··· 190 190 191 191 /* 192 192 * This function set DRAM refresh counter 193 - * accoriding to operating frequency of DRAM 193 + * according to operating frequency of DRAM 194 194 * ch: DMC port number 0 or 1 195 195 * freq: Operating frequency of DRAM(KHz) 196 196 */ ··· 320 320 321 321 /* 322 322 * 3. DMC1 refresh count for 133Mhz if (index == L4) is 323 - * true refresh counter is already programed in upper 323 + * true refresh counter is already programmed in upper 324 324 * code. 0x287@83Mhz 325 325 */ 326 326 if (!bus_speed_changing) ··· 378 378 /* 379 379 * 6. Turn on APLL 380 380 * 6-1. Set PMS values 381 - * 6-2. Wait untile the PLL is locked 381 + * 6-2. Wait until the PLL is locked 382 382 */ 383 383 if (index == L0) 384 384 writel_relaxed(APLL_VAL_1000, S5P_APLL_CON); ··· 390 390 } while (!(reg & (0x1 << 29))); 391 391 392 392 /* 393 - * 7. Change souce clock from SCLKMPLL(667Mhz) 393 + * 7. Change source clock from SCLKMPLL(667Mhz) 394 394 * to SCLKA2M(200Mhz) in MFC_MUX and G3D MUX 395 395 * (667/4=166)->(200/4=50)Mhz 396 396 */ ··· 439 439 } 440 440 441 441 /* 442 - * L4 level need to change memory bus speed, hence onedram clock divier 443 - * and memory refresh parameter should be changed 442 + * L4 level needs to change memory bus speed, hence ONEDRAM clock 443 + * divider and memory refresh parameter should be changed 444 444 */ 445 445 if (bus_speed_changing) { 446 446 reg = readl_relaxed(S5P_CLK_DIV6);
+1 -1
drivers/cpuidle/Kconfig.arm
··· 107 107 108 108 config ARM_QCOM_SPM_CPUIDLE 109 109 bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)" 110 - depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 110 + depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU 111 111 select ARM_CPU_SUSPEND 112 112 select CPU_IDLE_MULTIPLE_DRIVERS 113 113 select DT_IDLE_STATES
+4 -15
drivers/cpuidle/cpuidle-tegra.c
··· 48 48 static atomic_t tegra_idle_barrier; 49 49 static atomic_t tegra_abort_flag; 50 50 51 - static inline bool tegra_cpuidle_using_firmware(void) 52 - { 53 - return firmware_ops->prepare_idle && firmware_ops->do_idle; 54 - } 55 - 56 51 static void tegra_cpuidle_report_cpus_state(void) 57 52 { 58 53 unsigned long cpu, lcpu, csr; ··· 130 135 { 131 136 int err; 132 137 133 - if (tegra_cpuidle_using_firmware()) { 134 - err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2); 135 - if (err) 136 - return err; 137 - 138 - return call_firmware_op(do_idle, 0); 139 - } 138 + err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2); 139 + if (err && err != -ENOSYS) 140 + return err; 140 141 141 142 return cpu_suspend(0, tegra30_pm_secondary_cpu_suspend); 142 143 } ··· 347 356 * is disabled. 348 357 */ 349 358 if (!IS_ENABLED(CONFIG_PM_SLEEP)) { 350 - if (!tegra_cpuidle_using_firmware()) 351 - tegra_cpuidle_disable_state(TEGRA_C7); 352 - 359 + tegra_cpuidle_disable_state(TEGRA_C7); 353 360 tegra_cpuidle_disable_state(TEGRA_CC6); 354 361 } 355 362
+4
drivers/cpuidle/driver.c
··· 181 181 */ 182 182 if (s->target_residency > 0) 183 183 s->target_residency_ns = s->target_residency * NSEC_PER_USEC; 184 + else if (s->target_residency_ns < 0) 185 + s->target_residency_ns = 0; 184 186 185 187 if (s->exit_latency > 0) 186 188 s->exit_latency_ns = s->exit_latency * NSEC_PER_USEC; 189 + else if (s->exit_latency_ns < 0) 190 + s->exit_latency_ns = 0; 187 191 } 188 192 } 189 193
+11 -6
drivers/cpuidle/governors/menu.c
··· 271 271 u64 predicted_ns; 272 272 u64 interactivity_req; 273 273 unsigned long nr_iowaiters; 274 - ktime_t delta_next; 274 + ktime_t delta, delta_tick; 275 275 int i, idx; 276 276 277 277 if (data->needs_update) { ··· 280 280 } 281 281 282 282 /* determine the expected residency time, round up */ 283 - data->next_timer_ns = tick_nohz_get_sleep_length(&delta_next); 283 + delta = tick_nohz_get_sleep_length(&delta_tick); 284 + if (unlikely(delta < 0)) { 285 + delta = 0; 286 + delta_tick = 0; 287 + } 288 + data->next_timer_ns = delta; 284 289 285 290 nr_iowaiters = nr_iowait_cpu(dev->cpu); 286 291 data->bucket = which_bucket(data->next_timer_ns, nr_iowaiters); ··· 323 318 * state selection. 324 319 */ 325 320 if (predicted_ns < TICK_NSEC) 326 - predicted_ns = delta_next; 321 + predicted_ns = data->next_timer_ns; 327 322 } else { 328 323 /* 329 324 * Use the performance multiplier and the user-configurable ··· 382 377 * stuck in the shallow one for too long. 383 378 */ 384 379 if (drv->states[idx].target_residency_ns < TICK_NSEC && 385 - s->target_residency_ns <= delta_next) 380 + s->target_residency_ns <= delta_tick) 386 381 idx = i; 387 382 388 383 return idx; ··· 404 399 predicted_ns < TICK_NSEC) && !tick_nohz_tick_stopped()) { 405 400 *stop_tick = false; 406 401 407 - if (idx > 0 && drv->states[idx].target_residency_ns > delta_next) { 402 + if (idx > 0 && drv->states[idx].target_residency_ns > delta_tick) { 408 403 /* 409 404 * The tick is not going to be stopped and the target 410 405 * residency of the state to be returned is not within ··· 416 411 continue; 417 412 418 413 idx = i; 419 - if (drv->states[i].target_residency_ns <= delta_next) 414 + if (drv->states[i].target_residency_ns <= delta_tick) 420 415 break; 421 416 } 422 417 }
+29 -23
drivers/cpuidle/governors/teo.c
··· 100 100 * @intervals: Saved idle duration values. 101 101 */ 102 102 struct teo_cpu { 103 - u64 time_span_ns; 104 - u64 sleep_length_ns; 103 + s64 time_span_ns; 104 + s64 sleep_length_ns; 105 105 struct teo_idle_state states[CPUIDLE_STATE_MAX]; 106 106 int interval_idx; 107 107 u64 intervals[INTERVALS]; ··· 117 117 static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) 118 118 { 119 119 struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); 120 - int i, idx_hit = -1, idx_timer = -1; 120 + int i, idx_hit = 0, idx_timer = 0; 121 + unsigned int hits, misses; 121 122 u64 measured_ns; 122 123 123 124 if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) { ··· 175 174 * also increase the "early hits" metric for the state that actually 176 175 * matches the measured idle duration. 177 176 */ 178 - if (idx_timer >= 0) { 179 - unsigned int hits = cpu_data->states[idx_timer].hits; 180 - unsigned int misses = cpu_data->states[idx_timer].misses; 177 + hits = cpu_data->states[idx_timer].hits; 178 + hits -= hits >> DECAY_SHIFT; 181 179 182 - hits -= hits >> DECAY_SHIFT; 183 - misses -= misses >> DECAY_SHIFT; 180 + misses = cpu_data->states[idx_timer].misses; 181 + misses -= misses >> DECAY_SHIFT; 184 182 185 - if (idx_timer > idx_hit) { 186 - misses += PULSE; 187 - if (idx_hit >= 0) 188 - cpu_data->states[idx_hit].early_hits += PULSE; 189 - } else { 190 - hits += PULSE; 191 - } 192 - 193 - cpu_data->states[idx_timer].misses = misses; 194 - cpu_data->states[idx_timer].hits = hits; 183 + if (idx_timer == idx_hit) { 184 + hits += PULSE; 185 + } else { 186 + misses += PULSE; 187 + cpu_data->states[idx_hit].early_hits += PULSE; 195 188 } 189 + 190 + cpu_data->states[idx_timer].misses = misses; 191 + cpu_data->states[idx_timer].hits = hits; 196 192 197 193 /* 198 194 * Save idle duration values corresponding to non-timer wakeups for ··· 214 216 */ 215 217 static int teo_find_shallower_state(struct cpuidle_driver *drv, 216 218 struct cpuidle_device 
*dev, int state_idx, 217 - u64 duration_ns) 219 + s64 duration_ns) 218 220 { 219 221 int i; 220 222 ··· 240 242 { 241 243 struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); 242 244 s64 latency_req = cpuidle_governor_latency_req(dev->cpu); 243 - u64 duration_ns; 245 + int max_early_idx, prev_max_early_idx, constraint_idx, idx0, idx, i; 244 246 unsigned int hits, misses, early_hits; 245 - int max_early_idx, prev_max_early_idx, constraint_idx, idx, i; 246 247 ktime_t delta_tick; 248 + s64 duration_ns; 247 249 248 250 if (dev->last_state_idx >= 0) { 249 251 teo_update(drv, dev); ··· 262 264 prev_max_early_idx = -1; 263 265 constraint_idx = drv->state_count; 264 266 idx = -1; 267 + idx0 = idx; 265 268 266 269 for (i = 0; i < drv->state_count; i++) { 267 270 struct cpuidle_state *s = &drv->states[i]; ··· 323 324 idx = i; /* first enabled state */ 324 325 hits = cpu_data->states[i].hits; 325 326 misses = cpu_data->states[i].misses; 327 + idx0 = i; 326 328 } 327 329 328 330 if (s->target_residency_ns > duration_ns) ··· 376 376 377 377 if (idx < 0) { 378 378 idx = 0; /* No states enabled. Must use 0. */ 379 - } else if (idx > 0) { 379 + } else if (idx > idx0) { 380 380 unsigned int count = 0; 381 381 u64 sum = 0; 382 382 383 383 /* 384 + * The target residencies of at least two different enabled idle 385 + * states are less than or equal to the current expected idle 386 + * duration. Try to refine the selection using the most recent 387 + * measured idle duration values. 388 + * 384 389 * Count and sum the most recent idle duration values less than 385 390 * the current expected idle duration value. 386 391 */ ··· 433 428 * till the closest timer including the tick, try to correct 434 429 * that. 435 430 */ 436 - if (idx > 0 && drv->states[idx].target_residency_ns > delta_tick) 431 + if (idx > idx0 && 432 + drv->states[idx].target_residency_ns > delta_tick) 437 433 idx = teo_find_shallower_state(drv, dev, idx, delta_tick); 438 434 } 439 435
+1 -1
drivers/devfreq/Kconfig
··· 62 62 help 63 63 Sets the frequency at the user specified one. 64 64 This governor returns the user configured frequency if there 65 - has been an input to /sys/devices/.../power/devfreq_set_freq. 65 + has been an input to /sys/devices/.../userspace/set_freq. 66 66 Otherwise, the governor does not change the frequency 67 67 given at the initialization. 68 68
+12 -2
drivers/devfreq/devfreq.c
··· 11 11 #include <linux/kmod.h> 12 12 #include <linux/sched.h> 13 13 #include <linux/debugfs.h> 14 + #include <linux/devfreq_cooling.h> 14 15 #include <linux/errno.h> 15 16 #include <linux/err.h> 16 17 #include <linux/init.h> ··· 388 387 devfreq->previous_freq = new_freq; 389 388 390 389 if (devfreq->suspend_freq) 391 - devfreq->resume_freq = cur_freq; 390 + devfreq->resume_freq = new_freq; 392 391 393 392 return err; 394 393 } ··· 822 821 823 822 if (devfreq->profile->timer < 0 824 823 || devfreq->profile->timer >= DEVFREQ_TIMER_NUM) { 825 - goto err_out; 824 + mutex_unlock(&devfreq->lock); 825 + goto err_dev; 826 826 } 827 827 828 828 if (!devfreq->profile->max_state && !devfreq->profile->freq_table) { ··· 937 935 938 936 mutex_unlock(&devfreq_list_lock); 939 937 938 + if (devfreq->profile->is_cooling_device) { 939 + devfreq->cdev = devfreq_cooling_em_register(devfreq, NULL); 940 + if (IS_ERR(devfreq->cdev)) 941 + devfreq->cdev = NULL; 942 + } 943 + 940 944 return devfreq; 941 945 942 946 err_init: ··· 967 959 { 968 960 if (!devfreq) 969 961 return -EINVAL; 962 + 963 + devfreq_cooling_unregister(devfreq->cdev); 970 964 971 965 if (devfreq->governor) { 972 966 devfreq->governor->event_handler(devfreq,
+3 -2
drivers/devfreq/governor.h
··· 57 57 * Basically, get_target_freq will run 58 58 * devfreq_dev_profile.get_dev_status() to get the 59 59 * status of the device (load = busy_time / total_time). 60 - * If no_central_polling is set, this callback is called 61 - * only with update_devfreq() notified by OPP. 62 60 * @event_handler: Callback for devfreq core framework to notify events 63 61 * to governors. Events include per device governor 64 62 * init and exit, opp changes out of devfreq, suspend ··· 89 91 90 92 static inline int devfreq_update_stats(struct devfreq *df) 91 93 { 94 + if (!df->profile->get_dev_status) 95 + return -EINVAL; 96 + 92 97 return df->profile->get_dev_status(df->dev.parent, &df->last_status); 93 98 } 94 99 #endif /* _GOVERNOR_H */
+1 -1
drivers/devfreq/imx-bus.c
··· 169 169 .probe = imx_bus_probe, 170 170 .driver = { 171 171 .name = "imx-bus-devfreq", 172 - .of_match_table = of_match_ptr(imx_bus_of_match), 172 + .of_match_table = imx_bus_of_match, 173 173 }, 174 174 }; 175 175 module_platform_driver(imx_bus_platdrv);
+1 -15
drivers/devfreq/imx8m-ddrc.c
··· 280 280 return 0; 281 281 } 282 282 283 - static int imx8m_ddrc_get_dev_status(struct device *dev, 284 - struct devfreq_dev_status *stat) 285 - { 286 - struct imx8m_ddrc *priv = dev_get_drvdata(dev); 287 - 288 - stat->busy_time = 0; 289 - stat->total_time = 0; 290 - stat->current_frequency = clk_get_rate(priv->dram_core); 291 - 292 - return 0; 293 - } 294 - 295 283 static int imx8m_ddrc_init_freq_info(struct device *dev) 296 284 { 297 285 struct imx8m_ddrc *priv = dev_get_drvdata(dev); ··· 417 429 if (ret < 0) 418 430 goto err; 419 431 420 - priv->profile.polling_ms = 1000; 421 432 priv->profile.target = imx8m_ddrc_target; 422 - priv->profile.get_dev_status = imx8m_ddrc_get_dev_status; 423 433 priv->profile.exit = imx8m_ddrc_exit; 424 434 priv->profile.get_cur_freq = imx8m_ddrc_get_cur_freq; 425 435 priv->profile.initial_freq = clk_get_rate(priv->dram_core); ··· 447 461 .probe = imx8m_ddrc_probe, 448 462 .driver = { 449 463 .name = "imx8m-ddrc-devfreq", 450 - .of_match_table = of_match_ptr(imx8m_ddrc_of_match), 464 + .of_match_table = imx8m_ddrc_of_match, 451 465 }, 452 466 }; 453 467 module_platform_driver(imx8m_ddrc_platdrv);
+6 -14
drivers/devfreq/rk3399_dmc.c
··· 324 324 mutex_init(&data->lock); 325 325 326 326 data->vdd_center = devm_regulator_get(dev, "center"); 327 - if (IS_ERR(data->vdd_center)) { 328 - if (PTR_ERR(data->vdd_center) == -EPROBE_DEFER) 329 - return -EPROBE_DEFER; 330 - 331 - dev_err(dev, "Cannot get the regulator \"center\"\n"); 332 - return PTR_ERR(data->vdd_center); 333 - } 327 + if (IS_ERR(data->vdd_center)) 328 + return dev_err_probe(dev, PTR_ERR(data->vdd_center), 329 + "Cannot get the regulator \"center\"\n"); 334 330 335 331 data->dmc_clk = devm_clk_get(dev, "dmc_clk"); 336 - if (IS_ERR(data->dmc_clk)) { 337 - if (PTR_ERR(data->dmc_clk) == -EPROBE_DEFER) 338 - return -EPROBE_DEFER; 339 - 340 - dev_err(dev, "Cannot get the clk dmc_clk\n"); 341 - return PTR_ERR(data->dmc_clk); 342 - } 332 + if (IS_ERR(data->dmc_clk)) 333 + return dev_err_probe(dev, PTR_ERR(data->dmc_clk), 334 + "Cannot get the clk dmc_clk\n"); 343 335 344 336 data->edev = devfreq_event_get_edev_by_phandle(dev, "devfreq-events", 0); 345 337 if (IS_ERR(data->edev))
+11 -36
drivers/gpu/drm/lima/lima_devfreq.c
··· 99 99 devm_devfreq_remove_device(ldev->dev, devfreq->devfreq); 100 100 devfreq->devfreq = NULL; 101 101 } 102 - 103 - dev_pm_opp_of_remove_table(ldev->dev); 104 - 105 - dev_pm_opp_put_regulators(devfreq->regulators_opp_table); 106 - dev_pm_opp_put_clkname(devfreq->clkname_opp_table); 107 - devfreq->regulators_opp_table = NULL; 108 - devfreq->clkname_opp_table = NULL; 109 102 } 110 103 111 104 int lima_devfreq_init(struct lima_device *ldev) 112 105 { 113 106 struct thermal_cooling_device *cooling; 114 107 struct device *dev = ldev->dev; 115 - struct opp_table *opp_table; 116 108 struct devfreq *devfreq; 117 109 struct lima_devfreq *ldevfreq = &ldev->devfreq; 118 110 struct dev_pm_opp *opp; ··· 117 125 118 126 spin_lock_init(&ldevfreq->lock); 119 127 120 - opp_table = dev_pm_opp_set_clkname(dev, "core"); 121 - if (IS_ERR(opp_table)) { 122 - ret = PTR_ERR(opp_table); 123 - goto err_fini; 124 - } 128 + ret = devm_pm_opp_set_clkname(dev, "core"); 129 + if (ret) 130 + return ret; 125 131 126 - ldevfreq->clkname_opp_table = opp_table; 127 - 128 - opp_table = dev_pm_opp_set_regulators(dev, 129 - (const char *[]){ "mali" }, 130 - 1); 131 - if (IS_ERR(opp_table)) { 132 - ret = PTR_ERR(opp_table); 133 - 132 + ret = devm_pm_opp_set_regulators(dev, (const char *[]){ "mali" }, 1); 133 + if (ret) { 134 134 /* Continue if the optional regulator is missing */ 135 135 if (ret != -ENODEV) 136 - goto err_fini; 137 - } else { 138 - ldevfreq->regulators_opp_table = opp_table; 136 + return ret; 139 137 } 140 138 141 - ret = dev_pm_opp_of_add_table(dev); 139 + ret = devm_pm_opp_of_add_table(dev); 142 140 if (ret) 143 - goto err_fini; 141 + return ret; 144 142 145 143 lima_devfreq_reset(ldevfreq); 146 144 147 145 cur_freq = clk_get_rate(ldev->clk_gpu); 148 146 149 147 opp = devfreq_recommended_opp(dev, &cur_freq, 0); 150 - if (IS_ERR(opp)) { 151 - ret = PTR_ERR(opp); 152 - goto err_fini; 153 - } 148 + if (IS_ERR(opp)) 149 + return PTR_ERR(opp); 154 150 155 151 
lima_devfreq_profile.initial_freq = cur_freq; 156 152 dev_pm_opp_put(opp); ··· 147 167 DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL); 148 168 if (IS_ERR(devfreq)) { 149 169 dev_err(dev, "Couldn't initialize GPU devfreq\n"); 150 - ret = PTR_ERR(devfreq); 151 - goto err_fini; 170 + return PTR_ERR(devfreq); 152 171 } 153 172 154 173 ldevfreq->devfreq = devfreq; ··· 159 180 ldevfreq->cooling = cooling; 160 181 161 182 return 0; 162 - 163 - err_fini: 164 - lima_devfreq_fini(ldev); 165 - return ret; 166 183 } 167 184 168 185 void lima_devfreq_record_busy(struct lima_devfreq *devfreq)
-3
drivers/gpu/drm/lima/lima_devfreq.h
··· 8 8 #include <linux/ktime.h> 9 9 10 10 struct devfreq; 11 - struct opp_table; 12 11 struct thermal_cooling_device; 13 12 14 13 struct lima_device; 15 14 16 15 struct lima_devfreq { 17 16 struct devfreq *devfreq; 18 - struct opp_table *clkname_opp_table; 19 - struct opp_table *regulators_opp_table; 20 17 struct thermal_cooling_device *cooling; 21 18 22 19 ktime_t busy_time;
+9 -28
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 89 89 unsigned long cur_freq; 90 90 struct device *dev = &pfdev->pdev->dev; 91 91 struct devfreq *devfreq; 92 - struct opp_table *opp_table; 93 92 struct thermal_cooling_device *cooling; 94 93 struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq; 95 94 96 - opp_table = dev_pm_opp_set_regulators(dev, pfdev->comp->supply_names, 97 - pfdev->comp->num_supplies); 98 - if (IS_ERR(opp_table)) { 99 - ret = PTR_ERR(opp_table); 95 + ret = devm_pm_opp_set_regulators(dev, pfdev->comp->supply_names, 96 + pfdev->comp->num_supplies); 97 + if (ret) { 100 98 /* Continue if the optional regulator is missing */ 101 99 if (ret != -ENODEV) { 102 100 DRM_DEV_ERROR(dev, "Couldn't set OPP regulators\n"); 103 - goto err_fini; 101 + return ret; 104 102 } 105 - } else { 106 - pfdevfreq->regulators_opp_table = opp_table; 107 103 } 108 104 109 - ret = dev_pm_opp_of_add_table(dev); 105 + ret = devm_pm_opp_of_add_table(dev); 110 106 if (ret) { 111 107 /* Optional, continue without devfreq */ 112 108 if (ret == -ENODEV) 113 109 ret = 0; 114 - goto err_fini; 110 + return ret; 115 111 } 116 112 pfdevfreq->opp_of_table_added = true; 117 113 ··· 118 122 cur_freq = clk_get_rate(pfdev->clock); 119 123 120 124 opp = devfreq_recommended_opp(dev, &cur_freq, 0); 121 - if (IS_ERR(opp)) { 122 - ret = PTR_ERR(opp); 123 - goto err_fini; 124 - } 125 + if (IS_ERR(opp)) 126 + return PTR_ERR(opp); 125 127 126 128 panfrost_devfreq_profile.initial_freq = cur_freq; 127 129 dev_pm_opp_put(opp); ··· 128 134 DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL); 129 135 if (IS_ERR(devfreq)) { 130 136 DRM_DEV_ERROR(dev, "Couldn't initialize GPU devfreq\n"); 131 - ret = PTR_ERR(devfreq); 132 - goto err_fini; 137 + return PTR_ERR(devfreq); 133 138 } 134 139 pfdevfreq->devfreq = devfreq; 135 140 ··· 139 146 pfdevfreq->cooling = cooling; 140 147 141 148 return 0; 142 - 143 - err_fini: 144 - panfrost_devfreq_fini(pfdev); 145 - return ret; 146 149 } 147 150 148 151 void panfrost_devfreq_fini(struct panfrost_device *pfdev) ··· 149 160 
devfreq_cooling_unregister(pfdevfreq->cooling); 150 161 pfdevfreq->cooling = NULL; 151 162 } 152 - 153 - if (pfdevfreq->opp_of_table_added) { 154 - dev_pm_opp_of_remove_table(&pfdev->pdev->dev); 155 - pfdevfreq->opp_of_table_added = false; 156 - } 157 - 158 - dev_pm_opp_put_regulators(pfdevfreq->regulators_opp_table); 159 - pfdevfreq->regulators_opp_table = NULL; 160 163 } 161 164 162 165 void panfrost_devfreq_resume(struct panfrost_device *pfdev)
-2
drivers/gpu/drm/panfrost/panfrost_devfreq.h
··· 8 8 #include <linux/ktime.h> 9 9 10 10 struct devfreq; 11 - struct opp_table; 12 11 struct thermal_cooling_device; 13 12 14 13 struct panfrost_device; 15 14 16 15 struct panfrost_devfreq { 17 16 struct devfreq *devfreq; 18 - struct opp_table *regulators_opp_table; 19 17 struct thermal_cooling_device *cooling; 20 18 bool opp_of_table_added; 21 19
+3 -2
drivers/idle/intel_idle.c
··· 744 744 .name = "C6", 745 745 .desc = "MWAIT 0x20", 746 746 .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, 747 - .exit_latency = 128, 748 - .target_residency = 384, 747 + .exit_latency = 170, 748 + .target_residency = 600, 749 749 .enter = &intel_idle, 750 750 .enter_s2idle = intel_idle_s2idle, }, 751 751 { ··· 1156 1156 X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE, &idle_cpu_skl), 1157 1157 X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, &idle_cpu_skx), 1158 1158 X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, &idle_cpu_icx), 1159 + X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, &idle_cpu_icx), 1159 1160 X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL, &idle_cpu_knl), 1160 1161 X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM, &idle_cpu_knl), 1161 1162 X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, &idle_cpu_bxt),
+3 -10
drivers/memory/samsung/exynos5422-dmc.c
··· 343 343 int idx; 344 344 unsigned long freq; 345 345 346 - ret = dev_pm_opp_of_add_table(dmc->dev); 346 + ret = devm_pm_opp_of_add_table(dmc->dev); 347 347 if (ret < 0) { 348 348 dev_err(dmc->dev, "Failed to get OPP table\n"); 349 349 return ret; ··· 354 354 dmc->opp = devm_kmalloc_array(dmc->dev, dmc->opp_count, 355 355 sizeof(struct dmc_opp_table), GFP_KERNEL); 356 356 if (!dmc->opp) 357 - goto err_opp; 357 + return -ENOMEM; 358 358 359 359 idx = dmc->opp_count - 1; 360 360 for (i = 0, freq = ULONG_MAX; i < dmc->opp_count; i++, freq--) { ··· 362 362 363 363 opp = dev_pm_opp_find_freq_floor(dmc->dev, &freq); 364 364 if (IS_ERR(opp)) 365 - goto err_opp; 365 + return PTR_ERR(opp); 366 366 367 367 dmc->opp[idx - i].freq_hz = freq; 368 368 dmc->opp[idx - i].volt_uv = dev_pm_opp_get_voltage(opp); ··· 371 371 } 372 372 373 373 return 0; 374 - 375 - err_opp: 376 - dev_pm_opp_of_remove_table(dmc->dev); 377 - 378 - return -EINVAL; 379 374 } 380 375 381 376 /** ··· 1563 1568 1564 1569 clk_disable_unprepare(dmc->mout_bpll); 1565 1570 clk_disable_unprepare(dmc->fout_bpll); 1566 - 1567 - dev_pm_opp_remove_table(dmc->dev); 1568 1571 1569 1572 return 0; 1570 1573 }
+5 -14
drivers/mmc/host/sdhci-msm.c
··· 264 264 struct clk_bulk_data bulk_clks[5]; 265 265 unsigned long clk_rate; 266 266 struct mmc_host *mmc; 267 - struct opp_table *opp_table; 268 267 bool use_14lpp_dll_reset; 269 268 bool tuning_done; 270 269 bool calibration_done; ··· 2550 2551 if (ret) 2551 2552 goto bus_clk_disable; 2552 2553 2553 - msm_host->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core"); 2554 - if (IS_ERR(msm_host->opp_table)) { 2555 - ret = PTR_ERR(msm_host->opp_table); 2554 + ret = devm_pm_opp_set_clkname(&pdev->dev, "core"); 2555 + if (ret) 2556 2556 goto bus_clk_disable; 2557 - } 2558 2557 2559 2558 /* OPP table is optional */ 2560 - ret = dev_pm_opp_of_add_table(&pdev->dev); 2559 + ret = devm_pm_opp_of_add_table(&pdev->dev); 2561 2560 if (ret && ret != -ENODEV) { 2562 2561 dev_err(&pdev->dev, "Invalid OPP table in Device tree\n"); 2563 - goto opp_put_clkname; 2562 + goto bus_clk_disable; 2564 2563 } 2565 2564 2566 2565 /* Vote for maximum clock rate for maximum performance */ ··· 2584 2587 ret = clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks), 2585 2588 msm_host->bulk_clks); 2586 2589 if (ret) 2587 - goto opp_cleanup; 2590 + goto bus_clk_disable; 2588 2591 2589 2592 /* 2590 2593 * xo clock is needed for FLL feature of cm_dll. ··· 2729 2732 clk_disable: 2730 2733 clk_bulk_disable_unprepare(ARRAY_SIZE(msm_host->bulk_clks), 2731 2734 msm_host->bulk_clks); 2732 - opp_cleanup: 2733 - dev_pm_opp_of_remove_table(&pdev->dev); 2734 - opp_put_clkname: 2735 - dev_pm_opp_put_clkname(msm_host->opp_table); 2736 2735 bus_clk_disable: 2737 2736 if (!IS_ERR(msm_host->bus_clk)) 2738 2737 clk_disable_unprepare(msm_host->bus_clk); ··· 2747 2754 2748 2755 sdhci_remove_host(host, dead); 2749 2756 2750 - dev_pm_opp_of_remove_table(&pdev->dev); 2751 - dev_pm_opp_put_clkname(msm_host->opp_table); 2752 2757 pm_runtime_get_sync(&pdev->dev); 2753 2758 pm_runtime_disable(&pdev->dev); 2754 2759 pm_runtime_put_noidle(&pdev->dev);
+98 -24
drivers/opp/core.c
···
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw);

+static void devm_pm_opp_supported_hw_release(void *data)
+{
+	dev_pm_opp_put_supported_hw(data);
+}
+
+/**
+ * devm_pm_opp_set_supported_hw() - Set supported platforms
+ * @dev: Device for which supported-hw has to be set.
+ * @versions: Array of hierarchy of versions to match.
+ * @count: Number of elements in the array.
+ *
+ * This is a resource-managed variant of dev_pm_opp_set_supported_hw().
+ *
+ * Return: 0 on success and errorno otherwise.
+ */
+int devm_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
+				 unsigned int count)
+{
+	struct opp_table *opp_table;
+
+	opp_table = dev_pm_opp_set_supported_hw(dev, versions, count);
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
+
+	return devm_add_action_or_reset(dev, devm_pm_opp_supported_hw_release,
+					opp_table);
+}
+EXPORT_SYMBOL_GPL(devm_pm_opp_set_supported_hw);
+
 /**
  * dev_pm_opp_set_prop_name() - Set prop-extn name
  * @dev: Device for which the prop-name has to be set.
···
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators);

+static void devm_pm_opp_regulators_release(void *data)
+{
+	dev_pm_opp_put_regulators(data);
+}
+
+/**
+ * devm_pm_opp_set_regulators() - Set regulator names for the device
+ * @dev: Device for which regulator name is being set.
+ * @names: Array of pointers to the names of the regulator.
+ * @count: Number of regulators.
+ *
+ * This is a resource-managed variant of dev_pm_opp_set_regulators().
+ *
+ * Return: 0 on success and errorno otherwise.
+ */
+int devm_pm_opp_set_regulators(struct device *dev,
+			       const char * const names[],
+			       unsigned int count)
+{
+	struct opp_table *opp_table;
+
+	opp_table = dev_pm_opp_set_regulators(dev, names, count);
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
+
+	return devm_add_action_or_reset(dev, devm_pm_opp_regulators_release,
+					opp_table);
+}
+EXPORT_SYMBOL_GPL(devm_pm_opp_set_regulators);
+
 /**
  * dev_pm_opp_set_clkname() - Set clk name for the device
  * @dev: Device for which clk name is being set.
···
 	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_clkname);
+
+static void devm_pm_opp_clkname_release(void *data)
+{
+	dev_pm_opp_put_clkname(data);
+}
+
+/**
+ * devm_pm_opp_set_clkname() - Set clk name for the device
+ * @dev: Device for which clk name is being set.
+ * @name: Clk name.
+ *
+ * This is a resource-managed variant of dev_pm_opp_set_clkname().
+ *
+ * Return: 0 on success and errorno otherwise.
+ */
+int devm_pm_opp_set_clkname(struct device *dev, const char *name)
+{
+	struct opp_table *opp_table;
+
+	opp_table = dev_pm_opp_set_clkname(dev, name);
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
+
+	return devm_add_action_or_reset(dev, devm_pm_opp_clkname_release,
+					opp_table);
+}
+EXPORT_SYMBOL_GPL(devm_pm_opp_set_clkname);

 /**
  * dev_pm_opp_register_set_opp_helper() - Register custom set OPP helper
···
  *
  * This is a resource-managed version of dev_pm_opp_register_set_opp_helper().
  *
- * Return: pointer to 'struct opp_table' on success and errorno otherwise.
+ * Return: 0 on success and errorno otherwise.
  */
-struct opp_table *
-devm_pm_opp_register_set_opp_helper(struct device *dev,
-				    int (*set_opp)(struct dev_pm_set_opp_data *data))
+int devm_pm_opp_register_set_opp_helper(struct device *dev,
+					int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
 	struct opp_table *opp_table;
-	int err;

 	opp_table = dev_pm_opp_register_set_opp_helper(dev, set_opp);
 	if (IS_ERR(opp_table))
-		return opp_table;
+		return PTR_ERR(opp_table);

-	err = devm_add_action_or_reset(dev, devm_pm_opp_unregister_set_opp_helper,
-				       opp_table);
-	if (err)
-		return ERR_PTR(err);
-
-	return opp_table;
+	return devm_add_action_or_reset(dev, devm_pm_opp_unregister_set_opp_helper,
+					opp_table);
 }
 EXPORT_SYMBOL_GPL(devm_pm_opp_register_set_opp_helper);
···
  *
  * This is a resource-managed version of dev_pm_opp_attach_genpd().
  *
- * Return: pointer to 'struct opp_table' on success and errorno otherwise.
+ * Return: 0 on success and errorno otherwise.
  */
-struct opp_table *
-devm_pm_opp_attach_genpd(struct device *dev, const char **names,
-			 struct device ***virt_devs)
+int devm_pm_opp_attach_genpd(struct device *dev, const char **names,
+			     struct device ***virt_devs)
 {
 	struct opp_table *opp_table;
-	int err;

 	opp_table = dev_pm_opp_attach_genpd(dev, names, virt_devs);
 	if (IS_ERR(opp_table))
-		return opp_table;
+		return PTR_ERR(opp_table);

-	err = devm_add_action_or_reset(dev, devm_pm_opp_detach_genpd,
-				       opp_table);
-	if (err)
-		return ERR_PTR(err);
-
-	return opp_table;
+	return devm_add_action_or_reset(dev, devm_pm_opp_detach_genpd,
+					opp_table);
 }
 EXPORT_SYMBOL_GPL(devm_pm_opp_attach_genpd);
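Every devm_* wrapper in this file follows the same shape: call the non-managed setter, then hand the returned opp_table to devm_add_action_or_reset() so the matching put/unregister runs automatically at driver detach. The following is a minimal userspace C sketch of that action-or-reset pattern; struct mock_device and all mock_* names are hypothetical stand-ins, not kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the devm action mechanism: the device remembers one
 * (action, data) pair and runs it at detach. In the failure case the
 * "or_reset" part runs the action immediately, so a caller that just
 * acquired a resource never leaks it. */
struct mock_device {
	void (*action)(void *);
	void *data;
	int fail_registration;	/* simulate a registration failure (-ENOMEM) */
};

static int mock_devm_add_action_or_reset(struct mock_device *dev,
					 void (*action)(void *), void *data)
{
	if (dev->fail_registration) {
		action(data);	/* release right away on failure */
		return -12;	/* -ENOMEM */
	}
	dev->action = action;
	dev->data = data;
	return 0;
}

static void mock_device_detach(struct mock_device *dev)
{
	if (dev->action)
		dev->action(dev->data);	/* managed release at teardown */
}

/* Stand-in for the opp_table release callbacks in the diff above. */
static int releases;

static void release_resource(void *data)
{
	(void)data;
	releases++;
}
```

This is why the wrappers can return a plain int: once the release action is registered, the caller no longer needs the opp_table pointer at all, which is exactly the simplification the driver hunks below exploit.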
drivers/opp/of.c | +36
···
 	return ret;
 }

+static void devm_pm_opp_of_table_release(void *data)
+{
+	dev_pm_opp_of_remove_table(data);
+}
+
+/**
+ * devm_pm_opp_of_add_table() - Initialize opp table from device tree
+ * @dev: device pointer used to lookup OPP table.
+ *
+ * Register the initial OPP table with the OPP library for given device.
+ *
+ * The opp_table structure will be freed after the device is destroyed.
+ *
+ * Return:
+ * 0		On success OR
+ *		Duplicate OPPs (both freq and volt are same) and opp->available
+ * -EEXIST	Freq are same and volt are different OR
+ *		Duplicate OPPs (both freq and volt are same) and !opp->available
+ * -ENOMEM	Memory allocation failure
+ * -ENODEV	when 'operating-points' property is not found or is invalid data
+ *		in device node.
+ * -ENODATA	when empty 'operating-points' property is found
+ * -EINVAL	when invalid entries are found in opp-v2 table
+ */
+int devm_pm_opp_of_add_table(struct device *dev)
+{
+	int ret;
+
+	ret = dev_pm_opp_of_add_table(dev);
+	if (ret)
+		return ret;
+
+	return devm_add_action_or_reset(dev, devm_pm_opp_of_table_release, dev);
+}
+EXPORT_SYMBOL_GPL(devm_pm_opp_of_add_table);
+
 /**
  * dev_pm_opp_of_add_table() - Initialize opp table from device tree
  * @dev: device pointer used to lookup OPP table.
drivers/pci/pci.c | +3 -13
···
 	int err;
 	int i, bars = 0;

-	/*
-	 * Power state could be unknown at this point, either due to a fresh
-	 * boot or a device removal call. So get the current power state
-	 * so that things like MSI message writing will behave as expected
-	 * (e.g. if the device really is in D0 at enable time).
-	 */
-	if (dev->pm_cap) {
-		u16 pmcsr;
-		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
-		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
-	}
-
-	if (atomic_inc_return(&dev->enable_cnt) > 1)
+	if (atomic_inc_return(&dev->enable_cnt) > 1) {
+		pci_update_current_state(dev, dev->current_state);
 		return 0;		/* already enabled */
+	}

 	bridge = pci_upstream_bridge(dev);
 	if (bridge)
drivers/powercap/intel_rapl_common.c | +1
···
 	X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
 	X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
+	X86_MATCH_VENDOR_FAM(HYGON, 0x18, &rapl_defaults_amd),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, rapl_ids);
drivers/powercap/intel_rapl_msr.c | +1
···
 	case X86_VENDOR_INTEL:
 		rapl_msr_priv = &rapl_msr_priv_intel;
 		break;
+	case X86_VENDOR_HYGON:
 	case X86_VENDOR_AMD:
 		rapl_msr_priv = &rapl_msr_priv_amd;
 		break;
drivers/spi/spi-geni-qcom.c | +6 -10
···
 	mas->se.wrapper = dev_get_drvdata(dev->parent);
 	mas->se.base = base;
 	mas->se.clk = clk;
-	mas->se.opp_table = dev_pm_opp_set_clkname(&pdev->dev, "se");
-	if (IS_ERR(mas->se.opp_table))
-		return PTR_ERR(mas->se.opp_table);
+
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "se");
+	if (ret)
+		return ret;
 	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
 	if (ret && ret != -ENODEV) {
 		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
-		goto put_clkname;
+		return ret;
 	}

 	spi->bus_num = -1;
···
 	free_irq(mas->irq, spi);
 spi_geni_probe_runtime_disable:
 	pm_runtime_disable(dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-put_clkname:
-	dev_pm_opp_put_clkname(mas->se.opp_table);
 	return ret;
 }
···
 	free_irq(mas->irq, spi);
 	pm_runtime_disable(&pdev->dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(mas->se.opp_table);
 	return 0;
 }
drivers/spi/spi-qcom-qspi.c | +5 -13
···
 	struct clk_bulk_data *clks;
 	struct qspi_xfer xfer;
 	struct icc_path *icc_path_cpu_to_qspi;
-	struct opp_table *opp_table;
 	unsigned long last_speed;
 	/* Lock to protect data accessed by IRQs */
 	spinlock_t lock;
···
 	master->handle_err = qcom_qspi_handle_err;
 	master->auto_runtime_pm = true;

-	ctrl->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core");
-	if (IS_ERR(ctrl->opp_table))
-		return PTR_ERR(ctrl->opp_table);
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "core");
+	if (ret)
+		return ret;
 	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
 	if (ret && ret != -ENODEV) {
 		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
-		goto exit_probe_put_clkname;
+		return ret;
 	}

 	pm_runtime_use_autosuspend(dev);
···
 		return 0;

 	pm_runtime_disable(dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-
-exit_probe_put_clkname:
-	dev_pm_opp_put_clkname(ctrl->opp_table);

 	return ret;
 }
···
 static int qcom_qspi_remove(struct platform_device *pdev)
 {
 	struct spi_master *master = platform_get_drvdata(pdev);
-	struct qcom_qspi *ctrl = spi_master_get_devdata(master);

 	/* Unregister _before_ disabling pm_runtime() so we stop transfers */
 	spi_unregister_master(master);

 	pm_runtime_disable(&pdev->dev);
-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(ctrl->opp_table);

 	return 0;
 }
drivers/tty/serial/qcom_geni_serial.c | +8 -15
···
 	if (of_property_read_bool(pdev->dev.of_node, "cts-rts-swap"))
 		port->cts_rts_swap = true;

-	port->se.opp_table = dev_pm_opp_set_clkname(&pdev->dev, "se");
-	if (IS_ERR(port->se.opp_table))
-		return PTR_ERR(port->se.opp_table);
+	ret = devm_pm_opp_set_clkname(&pdev->dev, "se");
+	if (ret)
+		return ret;
 	/* OPP table is optional */
-	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	ret = devm_pm_opp_of_add_table(&pdev->dev);
 	if (ret && ret != -ENODEV) {
 		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
-		goto put_clkname;
+		return ret;
 	}

 	port->private_data.drv = drv;
···
 	ret = uart_add_one_port(drv, uport);
 	if (ret)
-		goto err;
+		return ret;

 	irq_set_status_flags(uport->irq, IRQ_NOAUTOEN);
 	ret = devm_request_irq(uport->dev, uport->irq, qcom_geni_serial_isr,
···
 	if (ret) {
 		dev_err(uport->dev, "Failed to get IRQ ret %d\n", ret);
 		uart_remove_one_port(drv, uport);
-		goto err;
+		return ret;
 	}

 	/*
···
 	if (ret) {
 		device_init_wakeup(&pdev->dev, false);
 		uart_remove_one_port(drv, uport);
-		goto err;
+		return ret;
 	}
 	}

 	return 0;
-err:
-	dev_pm_opp_of_remove_table(&pdev->dev);
-put_clkname:
-	dev_pm_opp_put_clkname(port->se.opp_table);
-	return ret;
 }

 static int qcom_geni_serial_remove(struct platform_device *pdev)
···
 	struct qcom_geni_serial_port *port = platform_get_drvdata(pdev);
 	struct uart_driver *drv = port->private_data.drv;

-	dev_pm_opp_of_remove_table(&pdev->dev);
-	dev_pm_opp_put_clkname(port->se.opp_table);
 	dev_pm_clear_wake_irq(&pdev->dev);
 	device_init_wakeup(&pdev->dev, false);
 	uart_remove_one_port(drv, &port->uport);
include/linux/arch_topology.h | +16 -3
···
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity);

-DECLARE_PER_CPU(unsigned long, freq_scale);
+DECLARE_PER_CPU(unsigned long, arch_freq_scale);

 static inline unsigned long topology_get_freq_scale(int cpu)
 {
-	return per_cpu(freq_scale, cpu);
+	return per_cpu(arch_freq_scale, cpu);
 }

 void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
 			     unsigned long max_freq);
 bool topology_scale_freq_invariant(void);

-bool arch_freq_counters_available(const struct cpumask *cpus);
+enum scale_freq_source {
+	SCALE_FREQ_SOURCE_CPUFREQ = 0,
+	SCALE_FREQ_SOURCE_ARCH,
+	SCALE_FREQ_SOURCE_CPPC,
+};
+
+struct scale_freq_data {
+	enum scale_freq_source source;
+	void (*set_freq_scale)(void);
+};
+
+void topology_scale_freq_tick(void);
+void topology_set_scale_freq_source(struct scale_freq_data *data, const struct cpumask *cpus);
+void topology_clear_scale_freq_source(enum scale_freq_source source, const struct cpumask *cpus);

 DECLARE_PER_CPU(unsigned long, thermal_pressure);
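The new arch_topology.h interface lets several providers (cpufreq, arch counters, CPPC) compete to drive the per-CPU frequency-scale factor, with scale_freq_source identifying each one. Below is a userspace C sketch of per-CPU source selection; the precedence rule used here (a numerically higher enum value wins and a lower one cannot demote it) is an illustrative assumption, not a statement of the kernel's exact policy, and all mock_* names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Mirrors the enum/struct added to arch_topology.h in the diff above. */
enum scale_freq_source {
	SCALE_FREQ_SOURCE_CPUFREQ = 0,
	SCALE_FREQ_SOURCE_ARCH,
	SCALE_FREQ_SOURCE_CPPC,
};

struct scale_freq_data {
	enum scale_freq_source source;
	void (*set_freq_scale)(void);
};

#define MOCK_NR_CPUS 4
static struct scale_freq_data *per_cpu_sfd[MOCK_NR_CPUS];

static void mock_set_scale_freq_source(struct scale_freq_data *data,
				       const int *cpus, int ncpus)
{
	for (int i = 0; i < ncpus; i++) {
		struct scale_freq_data *cur = per_cpu_sfd[cpus[i]];

		/* assumed rule: keep whichever source has the higher value */
		if (!cur || cur->source <= data->source)
			per_cpu_sfd[cpus[i]] = data;
	}
}

static void mock_clear_scale_freq_source(enum scale_freq_source source,
					 const int *cpus, int ncpus)
{
	for (int i = 0; i < ncpus; i++) {
		struct scale_freq_data *cur = per_cpu_sfd[cpus[i]];

		if (cur && cur->source == source)
			per_cpu_sfd[cpus[i]] = NULL;	/* fall back to none */
	}
}

static void noop_freq_scale(void) {}
```

The per-CPU pointer is the key design point: on a big.LITTLE system one cluster can use arch counters while another falls back to cpufreq- or CPPC-based scaling.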
include/linux/cpuidle.h | +2 -2
···
 	char		name[CPUIDLE_NAME_LEN];
 	char		desc[CPUIDLE_DESC_LEN];

-	u64		exit_latency_ns;
-	u64		target_residency_ns;
+	s64		exit_latency_ns;
+	s64		target_residency_ns;
 	unsigned int	flags;
 	unsigned int	exit_latency;	/* in US */
 	int		power_usage;	/* in mW */
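The u64 to s64 switch above ties into the pull request's fix for negative tick_nohz_get_next_hrtimer() return values: if a negative predicted sleep length is compared against an unsigned residency, C's usual arithmetic conversions turn the negative value into a huge unsigned one and every idle state appears to fit. A plain C demonstration of the pitfall (the state_fits_* helpers are illustrative, not kernel functions):

```c
#include <assert.h>
#include <stdint.h>

/* With an unsigned residency field, the signed duration is converted to
 * uint64_t before the comparison: (uint64_t)-1 == 2^64 - 1, so a negative
 * sleep length satisfies any residency requirement. */
static int state_fits_unsigned(uint64_t target_residency_ns, int64_t duration_ns)
{
	return (uint64_t)duration_ns >= target_residency_ns;
}

/* With both sides signed, negative durations are rejected as expected. */
static int state_fits_signed(int64_t target_residency_ns, int64_t duration_ns)
{
	return duration_ns >= target_residency_ns;
}
```

Keeping the fields signed means governors can compare predicted sleep lengths against residencies directly, without clamping at every call site.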
include/linux/devfreq.h | +9
···
 struct devfreq;
 struct devfreq_governor;
+struct thermal_cooling_device;

 /**
  * struct devfreq_dev_status - Data given from devfreq user device to
···
  * @freq_table:		Optional list of frequencies to support statistics
  *			and freq_table must be generated in ascending order.
  * @max_state:		The size of freq_table.
+ *
+ * @is_cooling_device:	A self-explanatory boolean giving the device a
+ *			cooling effect property.
  */
 struct devfreq_dev_profile {
 	unsigned long initial_freq;
 	unsigned int polling_ms;
 	enum devfreq_timer timer;
+	bool is_cooling_device;

 	int (*target)(struct device *dev, unsigned long *freq, u32 flags);
 	int (*get_dev_status)(struct device *dev,
···
  * @suspend_count:	 suspend requests counter for a device.
  * @stats:	Statistics of devfreq device behavior
  * @transition_notifier_list: list head of DEVFREQ_TRANSITION_NOTIFIER notifier
+ * @cdev:	Cooling device pointer if the devfreq has cooling property
  * @nb_min:	Notifier block for DEV_PM_QOS_MIN_FREQUENCY
  * @nb_max:	Notifier block for DEV_PM_QOS_MAX_FREQUENCY
  *
···
 	struct devfreq_stats stats;

 	struct srcu_notifier_head transition_notifier_list;
+
+	/* Pointer to the cooling device if used for thermal mitigation */
+	struct thermal_cooling_device *cdev;

 	struct notifier_block nb_min;
 	struct notifier_block nb_max;
include/linux/freezer.h | -1
···
 static inline void thaw_processes(void) {}
 static inline void thaw_kernel_threads(void) {}

-static inline bool try_to_freeze_nowarn(void) { return false; }
 static inline bool try_to_freeze(void) { return false; }

 static inline void freezer_do_not_count(void) {}
include/linux/intel_rapl.h | +1 -1
···
 	RAPL_DOMAIN_REG_MAX,
 };

-struct rapl_package;
+struct rapl_domain;

 enum rapl_primitives {
 	ENERGY_COUNTER,
include/linux/pm.h | -1
···
  * Device power management
  */

-struct device;

 #ifdef CONFIG_PM
 extern const char power_group_name[];	/* = "power" */
include/linux/pm_opp.h | +36 -8
···
 struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
 void dev_pm_opp_put_supported_hw(struct opp_table *opp_table);
+int devm_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
 struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name);
 void dev_pm_opp_put_prop_name(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
 void dev_pm_opp_put_regulators(struct opp_table *opp_table);
+int devm_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
 struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name);
 void dev_pm_opp_put_clkname(struct opp_table *opp_table);
+int devm_pm_opp_set_clkname(struct device *dev, const char *name);
 struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
 void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
-struct opp_table *devm_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
+int devm_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
 struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
 void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
-struct opp_table *devm_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
+int devm_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
 struct dev_pm_opp *dev_pm_opp_xlate_required_opp(struct opp_table *src_table, struct opp_table *dst_table, struct dev_pm_opp *src_opp);
 int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
···
 static inline void dev_pm_opp_put_supported_hw(struct opp_table *opp_table) {}

+static inline int devm_pm_opp_set_supported_hw(struct device *dev,
+					       const u32 *versions,
+					       unsigned int count)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
 			int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
···
 static inline void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table) {}

-static inline struct opp_table *
-devm_pm_opp_register_set_opp_helper(struct device *dev,
+static inline int devm_pm_opp_register_set_opp_helper(struct device *dev,
 				    int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
-	return ERR_PTR(-EOPNOTSUPP);
+	return -EOPNOTSUPP;
 }

 static inline struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
···
 static inline void dev_pm_opp_put_regulators(struct opp_table *opp_table) {}

+static inline int devm_pm_opp_set_regulators(struct device *dev,
+					     const char * const names[],
+					     unsigned int count)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name)
 {
 	return ERR_PTR(-EOPNOTSUPP);
 }

 static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {}
+
+static inline int devm_pm_opp_set_clkname(struct device *dev, const char *name)
+{
+	return -EOPNOTSUPP;
+}

 static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs)
 {
···
 static inline void dev_pm_opp_detach_genpd(struct opp_table *opp_table) {}

-static inline struct opp_table *devm_pm_opp_attach_genpd(struct device *dev,
-			const char **names, struct device ***virt_devs)
+static inline int devm_pm_opp_attach_genpd(struct device *dev,
+					   const char **names,
+					   struct device ***virt_devs)
 {
-	return ERR_PTR(-EOPNOTSUPP);
+	return -EOPNOTSUPP;
 }

 static inline struct dev_pm_opp *dev_pm_opp_xlate_required_opp(struct opp_table *src_table,
···
 int dev_pm_opp_of_add_table_indexed(struct device *dev, int index);
 int dev_pm_opp_of_add_table_noclk(struct device *dev, int index);
 void dev_pm_opp_of_remove_table(struct device *dev);
+int devm_pm_opp_of_add_table(struct device *dev);
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask);
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask);
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
···
 static inline void dev_pm_opp_of_remove_table(struct device *dev)
 {
 }
+
+static inline int devm_pm_opp_of_add_table(struct device *dev)
+{
+	return -EOPNOTSUPP;
+}

 static inline int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
include/linux/pm_runtime.h | +1 -1
···
 static inline void pm_runtime_irq_safe(struct device *dev) {}
 static inline bool pm_runtime_is_irq_safe(struct device *dev) { return false; }

-static inline bool pm_runtime_callbacks_present(struct device *dev) { return false; }
+static inline bool pm_runtime_has_no_callbacks(struct device *dev) { return false; }
 static inline void pm_runtime_mark_last_busy(struct device *dev) {}
 static inline void __pm_runtime_use_autosuspend(struct device *dev,
 						bool use) {}
include/linux/qcom-geni-se.h | -2
···
  * @num_clk_levels:	Number of valid clock levels in clk_perf_tbl
  * @clk_perf_tbl:	Table of clock frequency input to serial engine clock
  * @icc_paths:		Array of ICC paths for SE
- * @opp_table:		Pointer to the OPP table
  */
 struct geni_se {
 	void __iomem *base;
···
 	unsigned int num_clk_levels;
 	unsigned long *clk_perf_tbl;
 	struct geni_icc_path icc_paths[3];
-	struct opp_table *opp_table;
 };

 /* Common SE registers */
kernel/power/autosleep.c | +1 -1
···
 		goto out;

 	/*
-	 * If the wakeup occured for an unknown reason, wait to prevent the
+	 * If the wakeup occurred for an unknown reason, wait to prevent the
 	 * system from trying to suspend and waking up in a tight loop.
 	 */
 	if (final_count == initial_count)
kernel/power/snapshot.c | +1 -1
···
 /**
  * Data types related to memory bitmaps.
  *
- * Memory bitmap is a structure consiting of many linked lists of
+ * Memory bitmap is a structure consisting of many linked lists of
  * objects. The main list's elements are of type struct zone_bitmap
  * and each of them corresonds to one zone. For each zone bitmap
  * object there is a list of objects of type struct bm_block that
kernel/power/swap.c | +1 -1
···
  *	enough_swap - Make sure we have enough swap to save the image.
  *
  *	Returns TRUE or FALSE after checking the total amount of swap
- *	space avaiable from the resume partition.
+ *	space available from the resume partition.
  */

 static int enough_swap(unsigned int nr_pages)
kernel/sched/core.c | +1
···
 {
 	return __sched_setscheduler(p, attr, false, true);
 }
+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);

 /**
  * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
kernel/sched/cpufreq_schedutil.c | +14 -19
···
 	return true;
 }

-static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
-			      unsigned int next_freq)
+static void sugov_deferred_update(struct sugov_policy *sg_policy)
 {
-	if (sugov_update_next_freq(sg_policy, time, next_freq))
-		cpufreq_driver_fast_switch(sg_policy->policy, next_freq);
-}
-
-static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
-				  unsigned int next_freq)
-{
-	if (!sugov_update_next_freq(sg_policy, time, next_freq))
-		return;
-
 	if (!sg_policy->work_in_progress) {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
···
 		sg_policy->cached_raw_freq = cached_freq;
 	}

+	if (!sugov_update_next_freq(sg_policy, time, next_f))
+		return;
+
 	/*
 	 * This code runs under rq->lock for the target CPU, so it won't run
 	 * concurrently on two different CPUs for the same target and it is not
 	 * necessary to acquire the lock in the fast switch case.
 	 */
 	if (sg_policy->policy->fast_switch_enabled) {
-		sugov_fast_switch(sg_policy, time, next_f);
+		cpufreq_driver_fast_switch(sg_policy->policy, next_f);
 	} else {
 		raw_spin_lock(&sg_policy->update_lock);
-		sugov_deferred_update(sg_policy, time, next_f);
+		sugov_deferred_update(sg_policy);
 		raw_spin_unlock(&sg_policy->update_lock);
 	}
 }
···
 	if (sugov_should_update_freq(sg_policy, time)) {
 		next_f = sugov_next_freq_shared(sg_cpu, time);

-		if (sg_policy->policy->fast_switch_enabled)
-			sugov_fast_switch(sg_policy, time, next_f);
-		else
-			sugov_deferred_update(sg_policy, time, next_f);
-	}
+		if (!sugov_update_next_freq(sg_policy, time, next_f))
+			goto unlock;

+		if (sg_policy->policy->fast_switch_enabled)
+			cpufreq_driver_fast_switch(sg_policy->policy, next_f);
+		else
+			sugov_deferred_update(sg_policy);
+	}
+unlock:
 	raw_spin_unlock(&sg_policy->update_lock);
 }
kernel/time/tick-sched.c | +5 -1
···
  * tick_nohz_get_sleep_length - return the expected length of the current sleep
  * @delta_next: duration until the next event if the tick cannot be stopped
  *
- * Called from power state control code with interrupts disabled
+ * Called from power state control code with interrupts disabled.
+ *
+ * The return value of this function and/or the value returned by it through the
+ * @delta_next pointer can be negative which must be taken into account by its
+ * callers.
  */
 ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next)
 {
tools/power/pm-graph/sleepgraph.py | +1 -1
···
 			sysvals.outdir = val
 			sysvals.notestrun = True
 			if(os.path.isdir(val) == False):
-				doError('%s is not accesible' % val)
+				doError('%s is not accessible' % val)
 		elif(arg == '-filter'):
 			try:
 				val = next(args)