Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-5.14-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
"These include cpufreq core simplifications and fixes, cpufreq driver
updates, cpuidle driver update, a generic power domains (genpd)
locking fix and a debug-related simplification of the PM core.

Specifics:

- Drop the ->stop_cpu() (not really useful) and ->resolve_freq()
(unused) cpufreq driver callbacks and modify the users of the
former accordingly (Viresh Kumar, Rafael Wysocki).

- Add frequency invariance support to the ACPI CPPC cpufreq driver
again along with the related fixes and cleanups (Viresh Kumar).

- Update the MediaTek, qcom and SCMI ARM cpufreq drivers (Fabien
Parent, Seiya Wang, Sibi Sankar, Christophe JAILLET).

- Rename black/white-lists in the DT cpufreq driver (Viresh Kumar).

- Add generic performance domains support to the dvfs DT bindings
(Sudeep Holla).

- Refine locking in the generic power domains (genpd) support code to
avoid lock dependency issues (Stephen Boyd).

- Update the MSM and qcom ARM cpuidle drivers (Bartosz Dudziak).

- Simplify the PM core debug code by using ktime_us_delta() to
compute time interval lengths (Mark-PK Tsai)"

* tag 'pm-5.14-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (21 commits)
PM: domains: Shrink locking area of the gpd_list_lock
PM: sleep: Use ktime_us_delta() in initcall_debug_report()
cpufreq: CPPC: Add support for frequency invariance
arch_topology: Avoid use-after-free for scale_freq_data
cpufreq: CPPC: Pass structure instance by reference
cpufreq: CPPC: Fix potential memleak in cppc_cpufreq_cpu_init
cpufreq: Remove ->resolve_freq()
cpufreq: Reuse cpufreq_driver_resolve_freq() in __cpufreq_driver_target()
cpufreq: Remove the ->stop_cpu() driver callback
cpufreq: powernv: Migrate to ->exit() callback instead of ->stop_cpu()
cpufreq: CPPC: Migrate to ->exit() callback instead of ->stop_cpu()
cpufreq: intel_pstate: Combine ->stop_cpu() and ->offline()
cpuidle: qcom: Add SPM register data for MSM8226
dt-bindings: arm: msm: Add SAW2 for MSM8226
dt-bindings: cpufreq: update cpu type and clock name for MT8173 SoC
clk: mediatek: remove deprecated CLK_INFRA_CA57SEL for MT8173 SoC
cpufreq: dt: Rename black/white-lists
cpufreq: scmi: Fix an error message
cpufreq: mediatek: add support for mt8365
dt-bindings: dvfs: Add support for generic performance domains
...

+474 -149
-6
Documentation/cpu-freq/cpu-drivers.rst
··· 58 58 59 59 .driver_data - cpufreq driver specific data. 60 60 61 - .resolve_freq - Returns the most appropriate frequency for a target 62 - frequency. Doesn't change the frequency though. 63 - 64 61 .get_intermediate and target_intermediate - Used to switch to stable 65 62 frequency while changing CPU frequency. 66 63 ··· 67 70 68 71 .exit - A pointer to a per-policy cleanup function called during 69 72 CPU_POST_DEAD phase of cpu hotplug process. 70 - 71 - .stop_cpu - A pointer to a per-policy stop function called during 72 - CPU_DOWN_PREPARE phase of cpu hotplug process. 73 73 74 74 .suspend - A pointer to a per-policy suspend function which is called 75 75 with interrupts disabled and _after_ the governor is stopped for the
+7
Documentation/devicetree/bindings/arm/cpus.yaml
··· 257 257 258 258 where voltage is in V, frequency is in MHz. 259 259 260 + performance-domains: 261 + maxItems: 1 262 + description: 263 + List of phandles and performance domain specifiers, as defined by 264 + bindings of the performance domain provider. See also 265 + dvfs/performance-domain.yaml. 266 + 260 267 power-domains: 261 268 description: 262 269 List of phandles and PM domain specifiers, as defined by bindings of the
+1
Documentation/devicetree/bindings/arm/msm/qcom,saw2.txt
··· 25 25 "qcom,saw2" 26 26 A more specific value could be one of: 27 27 "qcom,apq8064-saw2-v1.1-cpu" 28 + "qcom,msm8226-saw2-v2.1-cpu" 28 29 "qcom,msm8974-saw2-v2.1-cpu" 29 30 "qcom,apq8084-saw2-v2.1-cpu" 30 31
+4 -4
Documentation/devicetree/bindings/cpufreq/cpufreq-mediatek.txt
··· 202 202 203 203 cpu2: cpu@100 { 204 204 device_type = "cpu"; 205 - compatible = "arm,cortex-a57"; 205 + compatible = "arm,cortex-a72"; 206 206 reg = <0x100>; 207 207 enable-method = "psci"; 208 208 cpu-idle-states = <&CPU_SLEEP_0>; 209 - clocks = <&infracfg CLK_INFRA_CA57SEL>, 209 + clocks = <&infracfg CLK_INFRA_CA72SEL>, 210 210 <&apmixedsys CLK_APMIXED_MAINPLL>; 211 211 clock-names = "cpu", "intermediate"; 212 212 operating-points-v2 = <&cpu_opp_table_b>; ··· 214 214 215 215 cpu3: cpu@101 { 216 216 device_type = "cpu"; 217 - compatible = "arm,cortex-a57"; 217 + compatible = "arm,cortex-a72"; 218 218 reg = <0x101>; 219 219 enable-method = "psci"; 220 220 cpu-idle-states = <&CPU_SLEEP_0>; 221 - clocks = <&infracfg CLK_INFRA_CA57SEL>, 221 + clocks = <&infracfg CLK_INFRA_CA72SEL>, 222 222 <&apmixedsys CLK_APMIXED_MAINPLL>; 223 223 clock-names = "cpu", "intermediate"; 224 224 operating-points-v2 = <&cpu_opp_table_b>;
+74
Documentation/devicetree/bindings/dvfs/performance-domain.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/dvfs/performance-domain.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Generic performance domains 8 + 9 + maintainers: 10 + - Sudeep Holla <sudeep.holla@arm.com> 11 + 12 + description: |+ 13 + This binding is intended for performance management of groups of devices or 14 + CPUs that run in the same performance domain. Performance domains must not 15 + be confused with power domains. A performance domain is defined by a set 16 + of devices that always have to run at the same performance level. For a given 17 + performance domain, there is a single point of control that affects all the 18 + devices in the domain, making it impossible to set the performance level of 19 + an individual device in the domain independently from other devices in 20 + that domain. For example, a set of CPUs that share a voltage domain, and 21 + have a common frequency control, is said to be in the same performance 22 + domain. 23 + 24 + This device tree binding can be used to bind performance domain consumer 25 + devices with their performance domains provided by performance domain 26 + providers. A performance domain provider can be represented by any node in 27 + the device tree and can provide one or more performance domains. A consumer 28 + node can refer to the provider by a phandle and a set of phandle arguments 29 + (so called performance domain specifiers) of length specified by the 30 + \#performance-domain-cells property in the performance domain provider node. 31 + 32 + select: true 33 + 34 + properties: 35 + "#performance-domain-cells": 36 + description: 37 + Number of cells in a performance domain specifier. Typically 0 for nodes 38 + representing a single performance domain and 1 for nodes providing 39 + multiple performance domains (e.g. 
performance controllers), but can be 40 + any value as specified by device tree binding documentation of particular 41 + provider. 42 + enum: [ 0, 1 ] 43 + 44 + performance-domains: 45 + $ref: '/schemas/types.yaml#/definitions/phandle-array' 46 + maxItems: 1 47 + description: 48 + A phandle and performance domain specifier as defined by bindings of the 49 + performance controller/provider specified by phandle. 50 + 51 + additionalProperties: true 52 + 53 + examples: 54 + - | 55 + performance: performance-controller@12340000 { 56 + compatible = "qcom,cpufreq-hw"; 57 + reg = <0x12340000 0x1000>; 58 + #performance-domain-cells = <1>; 59 + }; 60 + 61 + // The node above defines a performance controller that is a performance 62 + // domain provider and expects one cell as its phandle argument. 63 + 64 + cpus { 65 + #address-cells = <2>; 66 + #size-cells = <0>; 67 + 68 + cpu@0 { 69 + device_type = "cpu"; 70 + compatible = "arm,cortex-a57"; 71 + reg = <0x0 0x0>; 72 + performance-domains = <&performance 1>; 73 + }; 74 + };
-5
Documentation/translations/zh_CN/cpu-freq/cpu-drivers.rst
··· 64 64 65 65 .driver_data - cpufreq驱动程序的特定数据。 66 66 67 - .resolve_freq - 返回最适合目标频率的频率。不过并不能改变频率。 68 - 69 67 .get_intermediate 和 target_intermediate - 用于在改变CPU频率时切换到稳定 70 68 的频率。 71 69 ··· 72 74 .bios_limit - 返回HW/BIOS对CPU的最大频率限制值。 73 75 74 76 .exit - 一个指向per-policy清理函数的指针,该函数在cpu热插拔过程的CPU_POST_DEAD 75 - 阶段被调用。 76 - 77 - .stop_cpu - 一个指向per-policy停止函数的指针,该函数在cpu热插拔过程的CPU_DOWN_PREPARE 78 77 阶段被调用。 79 78 80 79 .suspend - 一个指向per-policy暂停函数的指针,该函数在关中断且在该策略的调节器停止
+21 -6
drivers/base/arch_topology.c
··· 18 18 #include <linux/cpumask.h> 19 19 #include <linux/init.h> 20 20 #include <linux/percpu.h> 21 + #include <linux/rcupdate.h> 21 22 #include <linux/sched.h> 22 23 #include <linux/smp.h> 23 24 24 - static DEFINE_PER_CPU(struct scale_freq_data *, sft_data); 25 + static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data); 25 26 static struct cpumask scale_freq_counters_mask; 26 27 static bool scale_freq_invariant; 27 28 ··· 67 66 if (cpumask_empty(&scale_freq_counters_mask)) 68 67 scale_freq_invariant = topology_scale_freq_invariant(); 69 68 69 + rcu_read_lock(); 70 + 70 71 for_each_cpu(cpu, cpus) { 71 - sfd = per_cpu(sft_data, cpu); 72 + sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu)); 72 73 73 74 /* Use ARCH provided counters whenever possible */ 74 75 if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) { 75 - per_cpu(sft_data, cpu) = data; 76 + rcu_assign_pointer(per_cpu(sft_data, cpu), data); 76 77 cpumask_set_cpu(cpu, &scale_freq_counters_mask); 77 78 } 78 79 } 80 + 81 + rcu_read_unlock(); 79 82 80 83 update_scale_freq_invariant(true); 81 84 } ··· 91 86 struct scale_freq_data *sfd; 92 87 int cpu; 93 88 89 + rcu_read_lock(); 90 + 94 91 for_each_cpu(cpu, cpus) { 95 - sfd = per_cpu(sft_data, cpu); 92 + sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu)); 96 93 97 94 if (sfd && sfd->source == source) { 98 - per_cpu(sft_data, cpu) = NULL; 95 + rcu_assign_pointer(per_cpu(sft_data, cpu), NULL); 99 96 cpumask_clear_cpu(cpu, &scale_freq_counters_mask); 100 97 } 101 98 } 99 + 100 + rcu_read_unlock(); 101 + 102 + /* 103 + * Make sure all references to previous sft_data are dropped to avoid 104 + * use-after-free races. 
105 + */ 106 + synchronize_rcu(); 102 107 103 108 update_scale_freq_invariant(false); 104 109 } ··· 116 101 117 102 void topology_scale_freq_tick(void) 118 103 { 119 - struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data); 104 + struct scale_freq_data *sfd = rcu_dereference_sched(*this_cpu_ptr(&sft_data)); 120 105 121 106 if (sfd) 122 107 sfd->set_freq_scale();
+17 -21
drivers/base/power/domain.c
··· 2018 2018 2019 2019 mutex_lock(&gpd_list_lock); 2020 2020 list_add(&genpd->gpd_list_node, &gpd_list); 2021 - genpd_debug_add(genpd); 2022 2021 mutex_unlock(&gpd_list_lock); 2022 + genpd_debug_add(genpd); 2023 2023 2024 2024 return 0; 2025 2025 } ··· 2206 2206 2207 2207 static bool genpd_present(const struct generic_pm_domain *genpd) 2208 2208 { 2209 + bool ret = false; 2209 2210 const struct generic_pm_domain *gpd; 2210 2211 2211 - list_for_each_entry(gpd, &gpd_list, gpd_list_node) 2212 - if (gpd == genpd) 2213 - return true; 2214 - return false; 2212 + mutex_lock(&gpd_list_lock); 2213 + list_for_each_entry(gpd, &gpd_list, gpd_list_node) { 2214 + if (gpd == genpd) { 2215 + ret = true; 2216 + break; 2217 + } 2218 + } 2219 + mutex_unlock(&gpd_list_lock); 2220 + 2221 + return ret; 2215 2222 } 2216 2223 2217 2224 /** ··· 2229 2222 int of_genpd_add_provider_simple(struct device_node *np, 2230 2223 struct generic_pm_domain *genpd) 2231 2224 { 2232 - int ret = -EINVAL; 2225 + int ret; 2233 2226 2234 2227 if (!np || !genpd) 2235 2228 return -EINVAL; 2236 2229 2237 - mutex_lock(&gpd_list_lock); 2238 - 2239 2230 if (!genpd_present(genpd)) 2240 - goto unlock; 2231 + return -EINVAL; 2241 2232 2242 2233 genpd->dev.of_node = np; 2243 2234 ··· 2246 2241 if (ret != -EPROBE_DEFER) 2247 2242 dev_err(&genpd->dev, "Failed to add OPP table: %d\n", 2248 2243 ret); 2249 - goto unlock; 2244 + return ret; 2250 2245 } 2251 2246 2252 2247 /* ··· 2264 2259 dev_pm_opp_of_remove_table(&genpd->dev); 2265 2260 } 2266 2261 2267 - goto unlock; 2262 + return ret; 2268 2263 } 2269 2264 2270 2265 genpd->provider = &np->fwnode; 2271 2266 genpd->has_provider = true; 2272 2267 2273 - unlock: 2274 - mutex_unlock(&gpd_list_lock); 2275 - 2276 - return ret; 2268 + return 0; 2277 2269 } 2278 2270 EXPORT_SYMBOL_GPL(of_genpd_add_provider_simple); 2279 2271 ··· 2288 2286 2289 2287 if (!np || !data) 2290 2288 return -EINVAL; 2291 - 2292 - mutex_lock(&gpd_list_lock); 2293 2289 2294 2290 if (!data->xlate) 2295 
2291 data->xlate = genpd_xlate_onecell; ··· 2328 2328 if (ret < 0) 2329 2329 goto error; 2330 2330 2331 - mutex_unlock(&gpd_list_lock); 2332 - 2333 2331 return 0; 2334 2332 2335 2333 error: ··· 2345 2347 dev_pm_opp_of_remove_table(&genpd->dev); 2346 2348 } 2347 2349 } 2348 - 2349 - mutex_unlock(&gpd_list_lock); 2350 2350 2351 2351 return ret; 2352 2352 }
+1 -4
drivers/base/power/main.c
··· 220 220 void *cb, int error) 221 221 { 222 222 ktime_t rettime; 223 - s64 nsecs; 224 223 225 224 if (!pm_print_times_enabled) 226 225 return; 227 226 228 227 rettime = ktime_get(); 229 - nsecs = (s64) ktime_to_ns(ktime_sub(rettime, calltime)); 230 - 231 228 dev_info(dev, "%pS returned %d after %Ld usecs\n", cb, error, 232 - (unsigned long long)nsecs >> 10); 229 + (unsigned long long)ktime_us_delta(rettime, calltime)); 233 230 } 234 231 235 232 /**
+10
drivers/cpufreq/Kconfig.arm
··· 19 19 20 20 If in doubt, say N. 21 21 22 + config ACPI_CPPC_CPUFREQ_FIE 23 + bool "Frequency Invariance support for CPPC cpufreq driver" 24 + depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY 25 + default y 26 + help 27 + This extends frequency invariance support in the CPPC cpufreq driver, 28 + by using CPPC delivered and reference performance counters. 29 + 30 + If in doubt, say N. 31 + 22 32 config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM 23 33 tristate "Allwinner nvmem based SUN50I CPUFreq driver" 24 34 depends on ARCH_SUNXI
+282 -42
drivers/cpufreq/cppc_cpufreq.c
··· 10 10 11 11 #define pr_fmt(fmt) "CPPC Cpufreq:" fmt 12 12 13 + #include <linux/arch_topology.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/module.h> 15 16 #include <linux/delay.h> 16 17 #include <linux/cpu.h> 17 18 #include <linux/cpufreq.h> 18 19 #include <linux/dmi.h> 20 + #include <linux/irq_work.h> 21 + #include <linux/kthread.h> 19 22 #include <linux/time.h> 20 23 #include <linux/vmalloc.h> 24 + #include <uapi/linux/sched/types.h> 21 25 22 26 #include <asm/unaligned.h> 23 27 ··· 60 56 .oem_revision = 0, 61 57 } 62 58 }; 59 + 60 + #ifdef CONFIG_ACPI_CPPC_CPUFREQ_FIE 61 + 62 + /* Frequency invariance support */ 63 + struct cppc_freq_invariance { 64 + int cpu; 65 + struct irq_work irq_work; 66 + struct kthread_work work; 67 + struct cppc_perf_fb_ctrs prev_perf_fb_ctrs; 68 + struct cppc_cpudata *cpu_data; 69 + }; 70 + 71 + static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv); 72 + static struct kthread_worker *kworker_fie; 73 + 74 + static struct cpufreq_driver cppc_cpufreq_driver; 75 + static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu); 76 + static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data, 77 + struct cppc_perf_fb_ctrs *fb_ctrs_t0, 78 + struct cppc_perf_fb_ctrs *fb_ctrs_t1); 79 + 80 + /** 81 + * cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance 82 + * @work: The work item. 83 + * 84 + * The CPPC driver register itself with the topology core to provide its own 85 + * implementation (cppc_scale_freq_tick()) of topology_scale_freq_tick() which 86 + * gets called by the scheduler on every tick. 87 + * 88 + * Note that the arch specific counters have higher priority than CPPC counters, 89 + * if available, though the CPPC driver doesn't need to have any special 90 + * handling for that. 
91 + * 92 + * On an invocation of cppc_scale_freq_tick(), we schedule an irq work (since we 93 + * reach here from hard-irq context), which then schedules a normal work item 94 + * and cppc_scale_freq_workfn() updates the per_cpu arch_freq_scale variable 95 + * based on the counter updates since the last tick. 96 + */ 97 + static void cppc_scale_freq_workfn(struct kthread_work *work) 98 + { 99 + struct cppc_freq_invariance *cppc_fi; 100 + struct cppc_perf_fb_ctrs fb_ctrs = {0}; 101 + struct cppc_cpudata *cpu_data; 102 + unsigned long local_freq_scale; 103 + u64 perf; 104 + 105 + cppc_fi = container_of(work, struct cppc_freq_invariance, work); 106 + cpu_data = cppc_fi->cpu_data; 107 + 108 + if (cppc_get_perf_ctrs(cppc_fi->cpu, &fb_ctrs)) { 109 + pr_warn("%s: failed to read perf counters\n", __func__); 110 + return; 111 + } 112 + 113 + perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs, 114 + &fb_ctrs); 115 + cppc_fi->prev_perf_fb_ctrs = fb_ctrs; 116 + 117 + perf <<= SCHED_CAPACITY_SHIFT; 118 + local_freq_scale = div64_u64(perf, cpu_data->perf_caps.highest_perf); 119 + 120 + /* This can happen due to counter's overflow */ 121 + if (unlikely(local_freq_scale > 1024)) 122 + local_freq_scale = 1024; 123 + 124 + per_cpu(arch_freq_scale, cppc_fi->cpu) = local_freq_scale; 125 + } 126 + 127 + static void cppc_irq_work(struct irq_work *irq_work) 128 + { 129 + struct cppc_freq_invariance *cppc_fi; 130 + 131 + cppc_fi = container_of(irq_work, struct cppc_freq_invariance, irq_work); 132 + kthread_queue_work(kworker_fie, &cppc_fi->work); 133 + } 134 + 135 + static void cppc_scale_freq_tick(void) 136 + { 137 + struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id()); 138 + 139 + /* 140 + * cppc_get_perf_ctrs() can potentially sleep, call that from the right 141 + * context. 
142 + */ 143 + irq_work_queue(&cppc_fi->irq_work); 144 + } 145 + 146 + static struct scale_freq_data cppc_sftd = { 147 + .source = SCALE_FREQ_SOURCE_CPPC, 148 + .set_freq_scale = cppc_scale_freq_tick, 149 + }; 150 + 151 + static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy) 152 + { 153 + struct cppc_freq_invariance *cppc_fi; 154 + int cpu, ret; 155 + 156 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 157 + return; 158 + 159 + for_each_cpu(cpu, policy->cpus) { 160 + cppc_fi = &per_cpu(cppc_freq_inv, cpu); 161 + cppc_fi->cpu = cpu; 162 + cppc_fi->cpu_data = policy->driver_data; 163 + kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn); 164 + init_irq_work(&cppc_fi->irq_work, cppc_irq_work); 165 + 166 + ret = cppc_get_perf_ctrs(cpu, &cppc_fi->prev_perf_fb_ctrs); 167 + if (ret) { 168 + pr_warn("%s: failed to read perf counters for cpu:%d: %d\n", 169 + __func__, cpu, ret); 170 + 171 + /* 172 + * Don't abort if the CPU was offline while the driver 173 + * was getting registered. 174 + */ 175 + if (cpu_online(cpu)) 176 + return; 177 + } 178 + } 179 + 180 + /* Register for freq-invariance */ 181 + topology_set_scale_freq_source(&cppc_sftd, policy->cpus); 182 + } 183 + 184 + /* 185 + * We free all the resources on policy's removal and not on CPU removal as the 186 + * irq-work are per-cpu and the hotplug core takes care of flushing the pending 187 + * irq-works (hint: smpcfd_dying_cpu()) on CPU hotplug. Even if the kthread-work 188 + * fires on another CPU after the concerned CPU is removed, it won't harm. 189 + * 190 + * We just need to make sure to remove them all on policy->exit(). 
191 + */ 192 + static void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy) 193 + { 194 + struct cppc_freq_invariance *cppc_fi; 195 + int cpu; 196 + 197 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 198 + return; 199 + 200 + /* policy->cpus will be empty here, use related_cpus instead */ 201 + topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, policy->related_cpus); 202 + 203 + for_each_cpu(cpu, policy->related_cpus) { 204 + cppc_fi = &per_cpu(cppc_freq_inv, cpu); 205 + irq_work_sync(&cppc_fi->irq_work); 206 + kthread_cancel_work_sync(&cppc_fi->work); 207 + } 208 + } 209 + 210 + static void __init cppc_freq_invariance_init(void) 211 + { 212 + struct sched_attr attr = { 213 + .size = sizeof(struct sched_attr), 214 + .sched_policy = SCHED_DEADLINE, 215 + .sched_nice = 0, 216 + .sched_priority = 0, 217 + /* 218 + * Fake (unused) bandwidth; workaround to "fix" 219 + * priority inheritance. 220 + */ 221 + .sched_runtime = 1000000, 222 + .sched_deadline = 10000000, 223 + .sched_period = 10000000, 224 + }; 225 + int ret; 226 + 227 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 228 + return; 229 + 230 + kworker_fie = kthread_create_worker(0, "cppc_fie"); 231 + if (IS_ERR(kworker_fie)) 232 + return; 233 + 234 + ret = sched_setattr_nocheck(kworker_fie->task, &attr); 235 + if (ret) { 236 + pr_warn("%s: failed to set SCHED_DEADLINE: %d\n", __func__, 237 + ret); 238 + kthread_destroy_worker(kworker_fie); 239 + return; 240 + } 241 + } 242 + 243 + static void cppc_freq_invariance_exit(void) 244 + { 245 + if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate) 246 + return; 247 + 248 + kthread_destroy_worker(kworker_fie); 249 + kworker_fie = NULL; 250 + } 251 + 252 + #else 253 + static inline void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy) 254 + { 255 + } 256 + 257 + static inline void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy) 258 + { 259 + } 260 + 261 + static inline void cppc_freq_invariance_init(void) 
262 + { 263 + } 264 + 265 + static inline void cppc_freq_invariance_exit(void) 266 + { 267 + } 268 + #endif /* CONFIG_ACPI_CPPC_CPUFREQ_FIE */ 63 269 64 270 /* Callback function used to retrieve the max frequency from DMI */ 65 271 static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private) ··· 396 182 return 0; 397 183 } 398 184 399 - static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy) 400 - { 401 - struct cppc_cpudata *cpu_data = policy->driver_data; 402 - struct cppc_perf_caps *caps = &cpu_data->perf_caps; 403 - unsigned int cpu = policy->cpu; 404 - int ret; 405 - 406 - cpu_data->perf_ctrls.desired_perf = caps->lowest_perf; 407 - 408 - ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls); 409 - if (ret) 410 - pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n", 411 - caps->lowest_perf, cpu, ret); 412 - 413 - /* Remove CPU node from list and free driver data for policy */ 414 - free_cpumask_var(cpu_data->shared_cpu_map); 415 - list_del(&cpu_data->node); 416 - kfree(policy->driver_data); 417 - policy->driver_data = NULL; 418 - } 419 - 420 185 /* 421 186 * The PCC subspace describes the rate at which platform can accept commands 422 187 * on the shared PCC channel (including READs which do not count towards freq ··· 470 277 return NULL; 471 278 } 472 279 280 + static void cppc_cpufreq_put_cpu_data(struct cpufreq_policy *policy) 281 + { 282 + struct cppc_cpudata *cpu_data = policy->driver_data; 283 + 284 + list_del(&cpu_data->node); 285 + free_cpumask_var(cpu_data->shared_cpu_map); 286 + kfree(cpu_data); 287 + policy->driver_data = NULL; 288 + } 289 + 473 290 static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) 474 291 { 475 292 unsigned int cpu = policy->cpu; ··· 533 330 default: 534 331 pr_debug("Unsupported CPU co-ord type: %d\n", 535 332 policy->shared_type); 536 - return -EFAULT; 333 + ret = -EFAULT; 334 + goto out; 537 335 } 538 336 539 337 /* ··· 549 345 cpu_data->perf_ctrls.desired_perf = caps->highest_perf; 550 346 551 
347 ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls); 552 - if (ret) 348 + if (ret) { 553 349 pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n", 554 350 caps->highest_perf, cpu, ret); 351 + goto out; 352 + } 555 353 354 + cppc_cpufreq_cpu_fie_init(policy); 355 + return 0; 356 + 357 + out: 358 + cppc_cpufreq_put_cpu_data(policy); 556 359 return ret; 360 + } 361 + 362 + static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy) 363 + { 364 + struct cppc_cpudata *cpu_data = policy->driver_data; 365 + struct cppc_perf_caps *caps = &cpu_data->perf_caps; 366 + unsigned int cpu = policy->cpu; 367 + int ret; 368 + 369 + cppc_cpufreq_cpu_fie_exit(policy); 370 + 371 + cpu_data->perf_ctrls.desired_perf = caps->lowest_perf; 372 + 373 + ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls); 374 + if (ret) 375 + pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n", 376 + caps->lowest_perf, cpu, ret); 377 + 378 + cppc_cpufreq_put_cpu_data(policy); 379 + return 0; 557 380 } 558 381 559 382 static inline u64 get_delta(u64 t1, u64 t0) ··· 591 360 return (u32)t1 - (u32)t0; 592 361 } 593 362 594 - static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data, 595 - struct cppc_perf_fb_ctrs fb_ctrs_t0, 596 - struct cppc_perf_fb_ctrs fb_ctrs_t1) 363 + static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data, 364 + struct cppc_perf_fb_ctrs *fb_ctrs_t0, 365 + struct cppc_perf_fb_ctrs *fb_ctrs_t1) 597 366 { 598 367 u64 delta_reference, delta_delivered; 599 - u64 reference_perf, delivered_perf; 368 + u64 reference_perf; 600 369 601 - reference_perf = fb_ctrs_t0.reference_perf; 370 + reference_perf = fb_ctrs_t0->reference_perf; 602 371 603 - delta_reference = get_delta(fb_ctrs_t1.reference, 604 - fb_ctrs_t0.reference); 605 - delta_delivered = get_delta(fb_ctrs_t1.delivered, 606 - fb_ctrs_t0.delivered); 372 + delta_reference = get_delta(fb_ctrs_t1->reference, 373 + fb_ctrs_t0->reference); 374 + delta_delivered = get_delta(fb_ctrs_t1->delivered, 375 + 
fb_ctrs_t0->delivered); 607 376 608 - /* Check to avoid divide-by zero */ 609 - if (delta_reference || delta_delivered) 610 - delivered_perf = (reference_perf * delta_delivered) / 611 - delta_reference; 612 - else 613 - delivered_perf = cpu_data->perf_ctrls.desired_perf; 377 + /* Check to avoid divide-by zero and invalid delivered_perf */ 378 + if (!delta_reference || !delta_delivered) 379 + return cpu_data->perf_ctrls.desired_perf; 614 380 615 - return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf); 381 + return (reference_perf * delta_delivered) / delta_reference; 616 382 } 617 383 618 384 static unsigned int cppc_cpufreq_get_rate(unsigned int cpu) ··· 617 389 struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0}; 618 390 struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 619 391 struct cppc_cpudata *cpu_data = policy->driver_data; 392 + u64 delivered_perf; 620 393 int ret; 621 394 622 395 cpufreq_cpu_put(policy); ··· 632 403 if (ret) 633 404 return ret; 634 405 635 - return cppc_get_rate_from_fbctrs(cpu_data, fb_ctrs_t0, fb_ctrs_t1); 406 + delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0, 407 + &fb_ctrs_t1); 408 + 409 + return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf); 636 410 } 637 411 638 412 static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state) ··· 683 451 .target = cppc_cpufreq_set_target, 684 452 .get = cppc_cpufreq_get_rate, 685 453 .init = cppc_cpufreq_cpu_init, 686 - .stop_cpu = cppc_cpufreq_stop_cpu, 454 + .exit = cppc_cpufreq_cpu_exit, 687 455 .set_boost = cppc_cpufreq_set_boost, 688 456 .attr = cppc_cpufreq_attr, 689 457 .name = "cppc_cpufreq", ··· 736 504 737 505 static int __init cppc_cpufreq_init(void) 738 506 { 507 + int ret; 508 + 739 509 if ((acpi_disabled) || !acpi_cpc_valid()) 740 510 return -ENODEV; 741 511 742 512 INIT_LIST_HEAD(&cpu_data_list); 743 513 744 514 cppc_check_hisi_workaround(); 515 + cppc_freq_invariance_init(); 745 516 746 - return 
cpufreq_register_driver(&cppc_cpufreq_driver); 517 + ret = cpufreq_register_driver(&cppc_cpufreq_driver); 518 + if (ret) 519 + cppc_freq_invariance_exit(); 520 + 521 + return ret; 747 522 } 748 523 749 524 static inline void free_cpu_data(void) ··· 768 529 static void __exit cppc_cpufreq_exit(void) 769 530 { 770 531 cpufreq_unregister_driver(&cppc_cpufreq_driver); 532 + cppc_freq_invariance_exit(); 771 533 772 534 free_cpu_data(); 773 535 }
+6 -4
drivers/cpufreq/cpufreq-dt-platdev.c
··· 15 15 * Machines for which the cpufreq device is *always* created, mostly used for 16 16 * platforms using "operating-points" (V1) property. 17 17 */ 18 - static const struct of_device_id whitelist[] __initconst = { 18 + static const struct of_device_id allowlist[] __initconst = { 19 19 { .compatible = "allwinner,sun4i-a10", }, 20 20 { .compatible = "allwinner,sun5i-a10s", }, 21 21 { .compatible = "allwinner,sun5i-a13", }, ··· 100 100 * Machines for which the cpufreq device is *not* created, mostly used for 101 101 * platforms using "operating-points-v2" property. 102 102 */ 103 - static const struct of_device_id blacklist[] __initconst = { 103 + static const struct of_device_id blocklist[] __initconst = { 104 104 { .compatible = "allwinner,sun50i-h6", }, 105 105 106 106 { .compatible = "arm,vexpress", }, ··· 126 126 { .compatible = "mediatek,mt8173", }, 127 127 { .compatible = "mediatek,mt8176", }, 128 128 { .compatible = "mediatek,mt8183", }, 129 + { .compatible = "mediatek,mt8365", }, 129 130 { .compatible = "mediatek,mt8516", }, 130 131 131 132 { .compatible = "nvidia,tegra20", }, ··· 138 137 { .compatible = "qcom,msm8996", }, 139 138 { .compatible = "qcom,qcs404", }, 140 139 { .compatible = "qcom,sc7180", }, 140 + { .compatible = "qcom,sc7280", }, 141 141 { .compatible = "qcom,sdm845", }, 142 142 143 143 { .compatible = "st,stih407", }, ··· 179 177 if (!np) 180 178 return -ENODEV; 181 179 182 - match = of_match_node(whitelist, np); 180 + match = of_match_node(allowlist, np); 183 181 if (match) { 184 182 data = match->data; 185 183 goto create_pdev; 186 184 } 187 185 188 - if (cpu0_node_has_opp_v2_prop() && !of_match_node(blacklist, np)) 186 + if (cpu0_node_has_opp_v2_prop() && !of_match_node(blocklist, np)) 189 187 goto create_pdev; 190 188 191 189 of_node_put(np);
+19 -25
drivers/cpufreq/cpufreq.c
··· 524 524 } 525 525 EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch); 526 526 527 + static unsigned int __resolve_freq(struct cpufreq_policy *policy, 528 + unsigned int target_freq, unsigned int relation) 529 + { 530 + unsigned int idx; 531 + 532 + target_freq = clamp_val(target_freq, policy->min, policy->max); 533 + 534 + if (!cpufreq_driver->target_index) 535 + return target_freq; 536 + 537 + idx = cpufreq_frequency_table_target(policy, target_freq, relation); 538 + policy->cached_resolved_idx = idx; 539 + policy->cached_target_freq = target_freq; 540 + return policy->freq_table[idx].frequency; 541 + } 542 + 527 543 /** 528 544 * cpufreq_driver_resolve_freq - Map a target frequency to a driver-supported 529 545 * one. ··· 554 538 unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy, 555 539 unsigned int target_freq) 556 540 { 557 - target_freq = clamp_val(target_freq, policy->min, policy->max); 558 - policy->cached_target_freq = target_freq; 559 - 560 - if (cpufreq_driver->target_index) { 561 - unsigned int idx; 562 - 563 - idx = cpufreq_frequency_table_target(policy, target_freq, 564 - CPUFREQ_RELATION_L); 565 - policy->cached_resolved_idx = idx; 566 - return policy->freq_table[idx].frequency; 567 - } 568 - 569 - if (cpufreq_driver->resolve_freq) 570 - return cpufreq_driver->resolve_freq(policy, target_freq); 571 - 572 - return target_freq; 541 + return __resolve_freq(policy, target_freq, CPUFREQ_RELATION_L); 573 542 } 574 543 EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq); 575 544 ··· 1607 1606 policy->cdev = NULL; 1608 1607 } 1609 1608 1610 - if (cpufreq_driver->stop_cpu) 1611 - cpufreq_driver->stop_cpu(policy); 1612 - 1613 1609 if (has_target()) 1614 1610 cpufreq_exit_governor(policy); 1615 1611 ··· 2232 2234 unsigned int relation) 2233 2235 { 2234 2236 unsigned int old_target_freq = target_freq; 2235 - int index; 2236 2237 2237 2238 if (cpufreq_disabled()) 2238 2239 return -ENODEV; 2239 2240 2240 - /* Make sure that target_freq is within 
supported range */ 2241 - target_freq = clamp_val(target_freq, policy->min, policy->max); 2241 + target_freq = __resolve_freq(policy, target_freq, relation); 2242 2242 2243 2243 pr_debug("target for CPU %u: %u kHz, relation %u, requested %u kHz\n", 2244 2244 policy->cpu, target_freq, relation, old_target_freq); ··· 2257 2261 if (!cpufreq_driver->target_index) 2258 2262 return -EINVAL; 2259 2263 2260 - index = cpufreq_frequency_table_target(policy, target_freq, relation); 2261 - 2262 - return __target_index(policy, index); 2264 + return __target_index(policy, policy->cached_resolved_idx); 2263 2265 } 2264 2266 EXPORT_SYMBOL_GPL(__cpufreq_driver_target); 2265 2267
+5 -6
drivers/cpufreq/intel_pstate.c
···
 	return 0;
 }
 
-static int intel_pstate_cpu_offline(struct cpufreq_policy *policy)
+static int intel_cpufreq_cpu_offline(struct cpufreq_policy *policy)
 {
 	struct cpudata *cpu = all_cpu_data[policy->cpu];
 
···
 	return 0;
 }
 
-static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
+static int intel_pstate_cpu_offline(struct cpufreq_policy *policy)
 {
-	pr_debug("CPU %d stopping\n", policy->cpu);
-
 	intel_pstate_clear_update_util_hook(policy->cpu);
+
+	return intel_cpufreq_cpu_offline(policy);
 }
 
 static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
···
 	.resume = intel_pstate_resume,
 	.init = intel_pstate_cpu_init,
 	.exit = intel_pstate_cpu_exit,
-	.stop_cpu = intel_pstate_stop_cpu,
 	.offline = intel_pstate_cpu_offline,
 	.online = intel_pstate_cpu_online,
 	.update_limits = intel_pstate_update_limits,
···
 	.fast_switch = intel_cpufreq_fast_switch,
 	.init = intel_cpufreq_cpu_init,
 	.exit = intel_cpufreq_cpu_exit,
-	.offline = intel_pstate_cpu_offline,
+	.offline = intel_cpufreq_cpu_offline,
 	.online = intel_pstate_cpu_online,
 	.suspend = intel_pstate_suspend,
 	.resume = intel_pstate_resume,
+1
drivers/cpufreq/mediatek-cpufreq.c
···
 	{ .compatible = "mediatek,mt8173", },
 	{ .compatible = "mediatek,mt8176", },
 	{ .compatible = "mediatek,mt8183", },
+	{ .compatible = "mediatek,mt8365", },
 	{ .compatible = "mediatek,mt8516", },
 
 	{ }
+9 -14
drivers/cpufreq/powernv-cpufreq.c
···
 
 static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
-	/* timer is deleted in cpufreq_cpu_stop() */
+	struct powernv_smp_call_data freq_data;
+	struct global_pstate_info *gpstates = policy->driver_data;
+
+	freq_data.pstate_id = idx_to_pstate(powernv_pstate_info.min);
+	freq_data.gpstate_id = idx_to_pstate(powernv_pstate_info.min);
+	smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
+	if (gpstates)
+		del_timer_sync(&gpstates->timer);
+
 	kfree(policy->driver_data);
 
 	return 0;
···
 	.priority = 0,
 };
 
-static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
-{
-	struct powernv_smp_call_data freq_data;
-	struct global_pstate_info *gpstates = policy->driver_data;
-
-	freq_data.pstate_id = idx_to_pstate(powernv_pstate_info.min);
-	freq_data.gpstate_id = idx_to_pstate(powernv_pstate_info.min);
-	smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
-	if (gpstates)
-		del_timer_sync(&gpstates->timer);
-}
-
 static unsigned int powernv_fast_switch(struct cpufreq_policy *policy,
 					unsigned int target_freq)
 {
···
 	.target_index = powernv_cpufreq_target_index,
 	.fast_switch = powernv_fast_switch,
 	.get = powernv_cpufreq_get,
-	.stop_cpu = powernv_cpufreq_stop_cpu,
 	.attr = powernv_cpu_freq_attr,
 };
+1 -1
drivers/cpufreq/scmi-cpufreq.c
···
 	nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
 	if (nr_opp <= 0) {
 		dev_err(cpu_dev, "%s: No OPPs for this device: %d\n",
-			__func__, ret);
+			__func__, nr_opp);
 
 		ret = -ENODEV;
 		goto out_free_opp;
+14
drivers/cpuidle/cpuidle-qcom-spm.c
···
 	.start_index[PM_SLEEP_MODE_SPC] = 3,
 };
 
+/* SPM register data for 8226 */
+static const struct spm_reg_data spm_reg_8226_cpu = {
+	.reg_offset = spm_reg_offset_v2_1,
+	.spm_cfg = 0x0,
+	.spm_dly = 0x3C102800,
+	.seq = { 0x60, 0x03, 0x60, 0x0B, 0x0F, 0x20, 0x10, 0x80, 0x30, 0x90,
+		0x5B, 0x60, 0x03, 0x60, 0x3B, 0x76, 0x76, 0x0B, 0x94, 0x5B,
+		0x80, 0x10, 0x26, 0x30, 0x0F },
+	.start_index[PM_SLEEP_MODE_STBY] = 0,
+	.start_index[PM_SLEEP_MODE_SPC] = 5,
+};
+
 static const u8 spm_reg_offset_v1_1[SPM_REG_NR] = {
 	[SPM_REG_CFG] = 0x08,
 	[SPM_REG_SPM_CTL] = 0x20,
···
 }
 
 static const struct of_device_id spm_match_table[] = {
+	{ .compatible = "qcom,msm8226-saw2-v2.1-cpu",
+	  .data = &spm_reg_8226_cpu },
 	{ .compatible = "qcom,msm8974-saw2-v2.1-cpu",
 	  .data = &spm_reg_8974_8084_cpu },
 	{ .compatible = "qcom,apq8084-saw2-v2.1-cpu",
-1
include/dt-bindings/clock/mt8173-clk.h
···
 #define CLK_INFRA_PMICWRAP	11
 #define CLK_INFRA_CLK_13M	12
 #define CLK_INFRA_CA53SEL	13
-#define CLK_INFRA_CA57SEL	14 /* Deprecated. Don't use it. */
 #define CLK_INFRA_CA72SEL	14
 #define CLK_INFRA_NR_CLK	15
+1
include/linux/arch_topology.h
···
 enum scale_freq_source {
 	SCALE_FREQ_SOURCE_CPUFREQ = 0,
 	SCALE_FREQ_SOURCE_ARCH,
+	SCALE_FREQ_SOURCE_CPPC,
 };
 
 struct scale_freq_data {
-10
include/linux/cpufreq.h
···
 			      unsigned long capacity);
 
 	/*
-	 * Caches and returns the lowest driver-supported frequency greater than
-	 * or equal to the target frequency, subject to any driver limitations.
-	 * Does not set the frequency. Only to be implemented for drivers with
-	 * target().
-	 */
-	unsigned int (*resolve_freq)(struct cpufreq_policy *policy,
-				     unsigned int target_freq);
-
-	/*
 	 * Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION
 	 * unset.
 	 *
···
 	int (*online)(struct cpufreq_policy *policy);
 	int (*offline)(struct cpufreq_policy *policy);
 	int (*exit)(struct cpufreq_policy *policy);
-	void (*stop_cpu)(struct cpufreq_policy *policy);
 	int (*suspend)(struct cpufreq_policy *policy);
 	int (*resume)(struct cpufreq_policy *policy);
+1
kernel/sched/core.c
···
 {
 	return __sched_setscheduler(p, attr, false, true);
 }
+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
 
 /**
  * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.