Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pm-cpufreq'

Merge cpufreq updates for 6.14:

- Use str_enable_disable()-like helpers in cpufreq (Krzysztof
Kozlowski).

- Extend the Apple cpufreq driver to support more SoCs (Hector Martin,
Nick Chan).

- Add new cpufreq driver for Airoha SoCs (Christian Marangi).

- Fix using cpufreq-dt as module (Andreas Kemnade).

- Minor fixes for Sparc, SCMI, and Qcom cpufreq drivers (Ethan Carter
Edwards, Sibi Sankar, Manivannan Sadhasivam).

- Fix the maximum supported frequency computation in the ACPI cpufreq
driver to avoid relying on unfounded assumptions (Gautham Shenoy).

- Fix an amd-pstate driver regression with preferred core rankings not
being used (Mario Limonciello).

- Fix a precision issue with frequency calculation in the amd-pstate
driver (Naresh Solanki).

- Add ftrace event to the amd-pstate driver for active mode (Mario
Limonciello).

- Set default EPP policy on Ryzen processors in amd-pstate (Mario
Limonciello).

- Clean up the amd-pstate cpufreq driver and optimize it to increase
code reuse (Mario Limonciello, Dhananjay Ugwekar).

- Use CPPC to get scaling factors between HWP performance levels and
frequency in the intel_pstate driver and make it stop using a built-in
scaling factor for the Arrow Lake processor (Rafael Wysocki).

- Make intel_pstate initialize epp_policy to CPUFREQ_POLICY_UNKNOWN for
consistency with CPU offline (Christian Loehle).

- Fix superfluous updates caused by need_freq_update in the schedutil
cpufreq governor (Sultan Alsawaf).

* pm-cpufreq: (40 commits)
cpufreq: Use str_enable_disable()-like helpers
cpufreq: airoha: Add EN7581 CPUFreq SMCCC driver
cpufreq: ACPI: Fix max-frequency computation
cpufreq/amd-pstate: Refactor max frequency calculation
cpufreq/amd-pstate: Fix prefcore rankings
cpufreq: sparc: change kzalloc to kcalloc
cpufreq: qcom: Implement clk_ops::determine_rate() for qcom_cpufreq* clocks
cpufreq: qcom: Fix qcom_cpufreq_hw_recalc_rate() to query LUT if LMh IRQ is not available
cpufreq: apple-soc: Add Apple A7-A8X SoC cpufreq support
cpufreq: apple-soc: Set fallback transition latency to APPLE_DVFS_TRANSITION_TIMEOUT
cpufreq: apple-soc: Increase cluster switch timeout to 400us
cpufreq: apple-soc: Use 32-bit read for status register
cpufreq: apple-soc: Allow per-SoC configuration of APPLE_DVFS_CMD_PS1
cpufreq: apple-soc: Drop setting the PS2 field on M2+
dt-bindings: cpufreq: apple,cluster-cpufreq: Add A7-A11, T2 compatibles
dt-bindings: cpufreq: Document support for Airoha EN7581 CPUFreq
cpufreq: fix using cpufreq-dt as module
cpufreq: scmi: Register for limit change notifications
cpufreq: schedutil: Fix superfluous updates caused by need_freq_update
cpufreq: intel_pstate: Use CPUFREQ_POLICY_UNKNOWN
...

+692 -341
+55
Documentation/devicetree/bindings/cpufreq/airoha,en7581-cpufreq.yaml
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/cpufreq/airoha,en7581-cpufreq.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Airoha EN7581 CPUFreq
+
+ maintainers:
+   - Christian Marangi <ansuelsmth@gmail.com>
+
+ description: |
+   On newer Airoha SoC, CPU Frequency is scaled indirectly with SMC commands
+   to ATF.
+
+   A virtual clock is exposed. This virtual clock is a get-only clock and
+   is used to expose the current global CPU clock. The frequency info comes
+   by the output of the SMC command that reports the clock in MHz.
+
+   The SMC sets the CPU clock by providing an index, this is modelled as
+   performance states in a power domain.
+
+   CPUs can't be individually scaled as the CPU frequency is shared across
+   all CPUs and is global.
+
+ properties:
+   compatible:
+     const: airoha,en7581-cpufreq
+
+   '#clock-cells':
+     const: 0
+
+   '#power-domain-cells':
+     const: 0
+
+   operating-points-v2: true
+
+ required:
+   - compatible
+   - '#clock-cells'
+   - '#power-domain-cells'
+   - operating-points-v2
+
+ additionalProperties: false
+
+ examples:
+   - |
+     performance-domain {
+         compatible = "airoha,en7581-cpufreq";
+
+         operating-points-v2 = <&cpu_smcc_opp_table>;
+
+         #power-domain-cells = <0>;
+         #clock-cells = <0>;
+     };
+9 -1
Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
···
               - apple,t8112-cluster-cpufreq
           - const: apple,cluster-cpufreq
       - items:
-          - const: apple,t6000-cluster-cpufreq
+          - enum:
+              - apple,s8000-cluster-cpufreq
+              - apple,t8010-cluster-cpufreq
+              - apple,t8015-cluster-cpufreq
+              - apple,t6000-cluster-cpufreq
           - const: apple,t8103-cluster-cpufreq
           - const: apple,cluster-cpufreq
+      - items:
+          - const: apple,t7000-cluster-cpufreq
+          - const: apple,s5l8960x-cluster-cpufreq
+      - const: apple,s5l8960x-cluster-cpufreq

   reg:
     maxItems: 1
+1 -1
drivers/cpufreq/Kconfig
···
 	  If in doubt, say N.

 config CPUFREQ_DT_PLATDEV
-	tristate "Generic DT based cpufreq platdev driver"
+	bool "Generic DT based cpufreq platdev driver"
 	depends on OF
 	help
 	  This adds a generic DT based cpufreq platdev driver for frequency
+8
drivers/cpufreq/Kconfig.arm
···
 	  To compile this driver as a module, choose M here: the
 	  module will be called sun50i-cpufreq-nvmem.

+config ARM_AIROHA_SOC_CPUFREQ
+	tristate "Airoha EN7581 SoC CPUFreq support"
+	depends on ARCH_AIROHA || COMPILE_TEST
+	select PM_OPP
+	default ARCH_AIROHA
+	help
+	  This adds the CPUFreq driver for Airoha EN7581 SoCs.
+
 config ARM_APPLE_SOC_CPUFREQ
 	tristate "Apple Silicon SoC CPUFreq support"
 	depends on ARCH_APPLE || (COMPILE_TEST && 64BIT)
+1
drivers/cpufreq/Makefile
···

 ##################################################################################
 # ARM SoC drivers
+obj-$(CONFIG_ARM_AIROHA_SOC_CPUFREQ)	+= airoha-cpufreq.o
 obj-$(CONFIG_ARM_APPLE_SOC_CPUFREQ)	+= apple-soc-cpufreq.o
 obj-$(CONFIG_ARM_ARMADA_37XX_CPUFREQ)	+= armada-37xx-cpufreq.o
 obj-$(CONFIG_ARM_ARMADA_8K_CPUFREQ)	+= armada-8k-cpufreq.o
+27 -9
drivers/cpufreq/acpi-cpufreq.c
···
 #endif

 #ifdef CONFIG_ACPI_CPPC_LIB
-static u64 get_max_boost_ratio(unsigned int cpu)
+/*
+ * get_max_boost_ratio: Computes the max_boost_ratio as the ratio
+ * between the highest_perf and the nominal_perf.
+ *
+ * Returns the max_boost_ratio for @cpu. Returns the CPPC nominal
+ * frequency via @nominal_freq if it is non-NULL pointer.
+ */
+static u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
 {
 	struct cppc_perf_caps perf_caps;
 	u64 highest_perf, nominal_perf;
···
 	nominal_perf = perf_caps.nominal_perf;

+	if (nominal_freq)
+		*nominal_freq = perf_caps.nominal_freq;
+
 	if (!highest_perf || !nominal_perf) {
 		pr_debug("CPU%d: highest or nominal performance missing\n", cpu);
 		return 0;
···
 	return div_u64(highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
 }
+
 #else
-static inline u64 get_max_boost_ratio(unsigned int cpu) { return 0; }
+static inline u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
+{
+	return 0;
+}
 #endif

 static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
···
 	struct acpi_cpufreq_data *data;
 	unsigned int cpu = policy->cpu;
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+	u64 max_boost_ratio, nominal_freq = 0;
 	unsigned int valid_states = 0;
 	unsigned int result = 0;
-	u64 max_boost_ratio;
 	unsigned int i;
 #ifdef CONFIG_SMP
 	static int blacklisted;
···
 	}
 	freq_table[valid_states].frequency = CPUFREQ_TABLE_END;

-	max_boost_ratio = get_max_boost_ratio(cpu);
+	max_boost_ratio = get_max_boost_ratio(cpu, &nominal_freq);
 	if (max_boost_ratio) {
-		unsigned int freq = freq_table[0].frequency;
+		unsigned int freq = nominal_freq;

 		/*
-		 * Because the loop above sorts the freq_table entries in the
-		 * descending order, freq is the maximum frequency in the table.
-		 * Assume that it corresponds to the CPPC nominal frequency and
-		 * use it to set cpuinfo.max_freq.
+		 * The loop above sorts the freq_table entries in the
+		 * descending order. If ACPI CPPC has not advertised
+		 * the nominal frequency (this is possible in CPPC
+		 * revisions prior to 3), then use the first entry in
+		 * the pstate table as a proxy for nominal frequency.
 		 */
+		if (!freq)
+			freq = freq_table[0].frequency;
+
 		policy->cpuinfo.max_freq = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT;
 	} else {
 		/*
+152
drivers/cpufreq/airoha-cpufreq.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/bitfield.h>
+ #include <linux/cpufreq.h>
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/slab.h>
+
+ #include "cpufreq-dt.h"
+
+ struct airoha_cpufreq_priv {
+ 	int opp_token;
+ 	struct dev_pm_domain_list *pd_list;
+ 	struct platform_device *cpufreq_dt;
+ };
+
+ static struct platform_device *cpufreq_pdev;
+
+ /* NOP function to disable OPP from setting clock */
+ static int airoha_cpufreq_config_clks_nop(struct device *dev,
+ 					  struct opp_table *opp_table,
+ 					  struct dev_pm_opp *opp,
+ 					  void *data, bool scaling_down)
+ {
+ 	return 0;
+ }
+
+ static const char * const airoha_cpufreq_clk_names[] = { "cpu", NULL };
+ static const char * const airoha_cpufreq_pd_names[] = { "perf" };
+
+ static int airoha_cpufreq_probe(struct platform_device *pdev)
+ {
+ 	const struct dev_pm_domain_attach_data attach_data = {
+ 		.pd_names = airoha_cpufreq_pd_names,
+ 		.num_pd_names = ARRAY_SIZE(airoha_cpufreq_pd_names),
+ 		.pd_flags = PD_FLAG_DEV_LINK_ON | PD_FLAG_REQUIRED_OPP,
+ 	};
+ 	struct dev_pm_opp_config config = {
+ 		.clk_names = airoha_cpufreq_clk_names,
+ 		.config_clks = airoha_cpufreq_config_clks_nop,
+ 	};
+ 	struct platform_device *cpufreq_dt;
+ 	struct airoha_cpufreq_priv *priv;
+ 	struct device *dev = &pdev->dev;
+ 	struct device *cpu_dev;
+ 	int ret;
+
+ 	/* CPUs refer to the same OPP table */
+ 	cpu_dev = get_cpu_device(0);
+ 	if (!cpu_dev)
+ 		return -ENODEV;
+
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+
+ 	/* Set OPP table conf with NOP config_clks */
+ 	priv->opp_token = dev_pm_opp_set_config(cpu_dev, &config);
+ 	if (priv->opp_token < 0)
+ 		return dev_err_probe(dev, priv->opp_token, "Failed to set OPP config\n");
+
+ 	/* Attach PM for OPP */
+ 	ret = dev_pm_domain_attach_list(cpu_dev, &attach_data,
+ 					&priv->pd_list);
+ 	if (ret)
+ 		goto clear_opp_config;
+
+ 	cpufreq_dt = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
+ 	ret = PTR_ERR_OR_ZERO(cpufreq_dt);
+ 	if (ret) {
+ 		dev_err(dev, "failed to create cpufreq-dt device: %d\n", ret);
+ 		goto detach_pm;
+ 	}
+
+ 	priv->cpufreq_dt = cpufreq_dt;
+ 	platform_set_drvdata(pdev, priv);
+
+ 	return 0;
+
+ detach_pm:
+ 	dev_pm_domain_detach_list(priv->pd_list);
+ clear_opp_config:
+ 	dev_pm_opp_clear_config(priv->opp_token);
+
+ 	return ret;
+ }
+
+ static void airoha_cpufreq_remove(struct platform_device *pdev)
+ {
+ 	struct airoha_cpufreq_priv *priv = platform_get_drvdata(pdev);
+
+ 	platform_device_unregister(priv->cpufreq_dt);
+
+ 	dev_pm_domain_detach_list(priv->pd_list);
+
+ 	dev_pm_opp_clear_config(priv->opp_token);
+ }
+
+ static struct platform_driver airoha_cpufreq_driver = {
+ 	.probe = airoha_cpufreq_probe,
+ 	.remove = airoha_cpufreq_remove,
+ 	.driver = {
+ 		.name = "airoha-cpufreq",
+ 	},
+ };
+
+ static const struct of_device_id airoha_cpufreq_match_list[] __initconst = {
+ 	{ .compatible = "airoha,en7581" },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, airoha_cpufreq_match_list);
+
+ static int __init airoha_cpufreq_init(void)
+ {
+ 	struct device_node *np = of_find_node_by_path("/");
+ 	const struct of_device_id *match;
+ 	int ret;
+
+ 	if (!np)
+ 		return -ENODEV;
+
+ 	match = of_match_node(airoha_cpufreq_match_list, np);
+ 	of_node_put(np);
+ 	if (!match)
+ 		return -ENODEV;
+
+ 	ret = platform_driver_register(&airoha_cpufreq_driver);
+ 	if (unlikely(ret < 0))
+ 		return ret;
+
+ 	cpufreq_pdev = platform_device_register_data(NULL, "airoha-cpufreq",
+ 						     -1, match, sizeof(*match));
+ 	ret = PTR_ERR_OR_ZERO(cpufreq_pdev);
+ 	if (ret)
+ 		platform_driver_unregister(&airoha_cpufreq_driver);
+
+ 	return ret;
+ }
+ module_init(airoha_cpufreq_init);
+
+ static void __exit airoha_cpufreq_exit(void)
+ {
+ 	platform_device_unregister(cpufreq_pdev);
+ 	platform_driver_unregister(&airoha_cpufreq_driver);
+ }
+ module_exit(airoha_cpufreq_exit);
+
+ MODULE_AUTHOR("Christian Marangi <ansuelsmth@gmail.com>");
+ MODULE_DESCRIPTION("CPUfreq driver for Airoha SoCs");
+ MODULE_LICENSE("GPL");
+46 -6
drivers/cpufreq/amd-pstate-trace.h
···
 		 u64 aperf,
 		 u64 tsc,
 		 unsigned int cpu_id,
-		 bool changed,
 		 bool fast_switch
 		 ),
···
 		aperf,
 		tsc,
 		cpu_id,
-		changed,
 		fast_switch
 		),
···
 		__field(unsigned long long, aperf)
 		__field(unsigned long long, tsc)
 		__field(unsigned int, cpu_id)
-		__field(bool, changed)
 		__field(bool, fast_switch)
 		),
···
 		__entry->aperf = aperf;
 		__entry->tsc = tsc;
 		__entry->cpu_id = cpu_id;
-		__entry->changed = changed;
 		__entry->fast_switch = fast_switch;
 		),

-	TP_printk("amd_min_perf=%lu amd_des_perf=%lu amd_max_perf=%lu freq=%llu mperf=%llu aperf=%llu tsc=%llu cpu_id=%u changed=%s fast_switch=%s",
+	TP_printk("amd_min_perf=%lu amd_des_perf=%lu amd_max_perf=%lu freq=%llu mperf=%llu aperf=%llu tsc=%llu cpu_id=%u fast_switch=%s",
 		  (unsigned long)__entry->min_perf,
 		  (unsigned long)__entry->target_perf,
 		  (unsigned long)__entry->capacity,
···
 		  (unsigned long long)__entry->aperf,
 		  (unsigned long long)__entry->tsc,
 		  (unsigned int)__entry->cpu_id,
-		  (__entry->changed) ? "true" : "false",
 		  (__entry->fast_switch) ? "true" : "false"
 		  )
 );
+
+TRACE_EVENT(amd_pstate_epp_perf,
+
+	TP_PROTO(unsigned int cpu_id,
+		 unsigned int highest_perf,
+		 unsigned int epp,
+		 unsigned int min_perf,
+		 unsigned int max_perf,
+		 bool boost
+		 ),
+
+	TP_ARGS(cpu_id,
+		highest_perf,
+		epp,
+		min_perf,
+		max_perf,
+		boost),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu_id)
+		__field(unsigned int, highest_perf)
+		__field(unsigned int, epp)
+		__field(unsigned int, min_perf)
+		__field(unsigned int, max_perf)
+		__field(bool, boost)
+		),
+
+	TP_fast_assign(
+		__entry->cpu_id = cpu_id;
+		__entry->highest_perf = highest_perf;
+		__entry->epp = epp;
+		__entry->min_perf = min_perf;
+		__entry->max_perf = max_perf;
+		__entry->boost = boost;
+		),
+
+	TP_printk("cpu%u: [%u<->%u]/%u, epp=%u, boost=%u",
+		  (unsigned int)__entry->cpu_id,
+		  (unsigned int)__entry->min_perf,
+		  (unsigned int)__entry->max_perf,
+		  (unsigned int)__entry->highest_perf,
+		  (unsigned int)__entry->epp,
+		  (bool)__entry->boost
+		  )
+);
+5 -7
drivers/cpufreq/amd-pstate-ut.c
···
 	int cpu = 0;
 	struct cpufreq_policy *policy = NULL;
 	struct amd_cpudata *cpudata = NULL;
-	u32 nominal_freq_khz;

 	for_each_possible_cpu(cpu) {
 		policy = cpufreq_cpu_get(cpu);
···
 			break;
 		cpudata = policy->driver_data;

-		nominal_freq_khz = cpudata->nominal_freq*1000;
-		if (!((cpudata->max_freq >= nominal_freq_khz) &&
-			(nominal_freq_khz > cpudata->lowest_nonlinear_freq) &&
+		if (!((cpudata->max_freq >= cpudata->nominal_freq) &&
+			(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
 			(cpudata->lowest_nonlinear_freq > cpudata->min_freq) &&
 			(cpudata->min_freq > 0))) {
 			amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
 			pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
-				__func__, cpu, cpudata->max_freq, nominal_freq_khz,
+				__func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
 				cpudata->lowest_nonlinear_freq, cpudata->min_freq);
 			goto skip_test;
 		}
···
 		if (cpudata->boost_supported) {
 			if ((policy->max == cpudata->max_freq) ||
-					(policy->max == nominal_freq_khz))
+					(policy->max == cpudata->nominal_freq))
 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
 			else {
 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
 				pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
 					__func__, cpu, policy->max, cpudata->max_freq,
-					nominal_freq_khz);
+					cpudata->nominal_freq);
 				goto skip_test;
 			}
 		} else {
+228 -257
drivers/cpufreq/amd-pstate.c
···

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/init.h>
···
 static bool cppc_enabled;
 static bool amd_pstate_prefcore = true;
 static struct quirk_entry *quirks;

 /*
  * AMD Energy Preference Performance (EPP)
···
 static DEFINE_MUTEX(amd_pstate_limits_lock);
 static DEFINE_MUTEX(amd_pstate_driver_lock);

-static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
 {
 	u64 epp;
 	int ret;

-	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
-		if (!cppc_req_cached) {
-			epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
-					    &cppc_req_cached);
-			if (epp)
-				return epp;
-		}
-		epp = (cppc_req_cached >> 24) & 0xFF;
-	} else {
-		ret = cppc_get_epp_perf(cpudata->cpu, &epp);
-		if (ret < 0) {
-			pr_debug("Could not retrieve energy perf value (%d)\n", ret);
-			return -EIO;
-		}
 	}

 	return (s16)(epp & 0xff);
 }

-static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
 {
-	s16 epp;
-	int index = -EINVAL;

-	epp = amd_pstate_get_epp(cpudata, 0);
-	if (epp < 0)
-		return epp;

-	switch (epp) {
-	case AMD_CPPC_EPP_PERFORMANCE:
-		index = EPP_INDEX_PERFORMANCE;
-		break;
-	case AMD_CPPC_EPP_BALANCE_PERFORMANCE:
-		index = EPP_INDEX_BALANCE_PERFORMANCE;
-		break;
-	case AMD_CPPC_EPP_BALANCE_POWERSAVE:
-		index = EPP_INDEX_BALANCE_POWERSAVE;
-		break;
-	case AMD_CPPC_EPP_POWERSAVE:
-		index = EPP_INDEX_POWERSAVE;
-		break;
-	default:
-		break;
 	}

-	return index;
-}

-static void msr_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
-			    u32 des_perf, u32 max_perf, bool fast_switch)
-{
-	if (fast_switch)
-		wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
-	else
-		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
-			      READ_ONCE(cpudata->cppc_req_cached));
 }

 DEFINE_STATIC_CALL(amd_pstate_update_perf, msr_update_perf);

-static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
 					  u32 min_perf, u32 des_perf,
-					  u32 max_perf, bool fast_switch)
 {
-	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
-					    max_perf, fast_switch);
 }

-static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
 {
 	int ret;
-	struct cppc_perf_ctrls perf_ctrls;

-	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
-		u64 value = READ_ONCE(cpudata->cppc_req_cached);

-		value &= ~GENMASK_ULL(31, 24);
-		value |= (u64)epp << 24;
-		WRITE_ONCE(cpudata->cppc_req_cached, value);

-		ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
-		if (!ret)
-			cpudata->epp_cached = epp;
-	} else {
-		amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U,
-				       cpudata->max_limit_perf, false);
-
-		perf_ctrls.energy_perf = epp;
-		ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
-		if (ret) {
-			pr_debug("failed to set energy perf value (%d)\n", ret);
-			return ret;
-		}
-		cpudata->epp_cached = epp;
 	}

 	return ret;
 }

-static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
-					    int pref_index)
 {
-	int epp = -EINVAL;
 	int ret;

 	if (!pref_index)
 		epp = cpudata->epp_default;
-
-	if (epp == -EINVAL)
 		epp = epp_values[pref_index];

 	if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
···
 		return -EBUSY;
 	}

-	ret = amd_pstate_set_epp(cpudata, epp);

-	return ret;
 }

 static inline int msr_cppc_enable(bool enable)
···
 	return static_call(amd_pstate_init_perf)(cpudata);
 }

-static void shmem_update_perf(struct amd_cpudata *cpudata,
-			      u32 min_perf, u32 des_perf,
-			      u32 max_perf, bool fast_switch)
 {
 	struct cppc_perf_ctrls perf_ctrls;

 	perf_ctrls.max_perf = max_perf;
 	perf_ctrls.min_perf = min_perf;
 	perf_ctrls.desired_perf = des_perf;

-	cppc_set_perf(cpudata->cpu, &perf_ctrls);
 }

 static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
···
 {
 	unsigned long max_freq;
 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpudata->cpu);
-	u64 prev = READ_ONCE(cpudata->cppc_req_cached);
 	u32 nominal_perf = READ_ONCE(cpudata->nominal_perf);
-	u64 value = prev;

-	min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf,
-			   cpudata->max_limit_perf);
-	max_perf = clamp_t(unsigned long, max_perf, cpudata->min_limit_perf,
-			   cpudata->max_limit_perf);
 	des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);

 	max_freq = READ_ONCE(cpudata->max_limit_freq);
···
 		des_perf = 0;
 	}

-	value &= ~AMD_CPPC_MIN_PERF(~0L);
-	value |= AMD_CPPC_MIN_PERF(min_perf);
-
-	value &= ~AMD_CPPC_DES_PERF(~0L);
-	value |= AMD_CPPC_DES_PERF(des_perf);
-
 	/* limit the max perf when core performance boost feature is disabled */
 	if (!cpudata->boost_supported)
 		max_perf = min_t(unsigned long, nominal_perf, max_perf);

-	value &= ~AMD_CPPC_MAX_PERF(~0L);
-	value |= AMD_CPPC_MAX_PERF(max_perf);
-
 	if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
 		trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
 			cpudata->cur.mperf, cpudata->cur.aperf, cpudata->cur.tsc,
-			cpudata->cpu, (value != prev), fast_switch);
 	}

-	if (value == prev)
-		goto cpufreq_policy_put;

-	WRITE_ONCE(cpudata->cppc_req_cached, value);
-
-	amd_pstate_update_perf(cpudata, min_perf, des_perf,
-			       max_perf, fast_switch);
-
-cpufreq_policy_put:
 	cpufreq_cpu_put(policy);
 }
···

 static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
 {
-	u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf, max_freq;
 	struct amd_cpudata *cpudata = policy->driver_data;

 	max_perf = READ_ONCE(cpudata->highest_perf);
···
 	max_limit_perf = div_u64(policy->max * max_perf, max_freq);
 	min_limit_perf = div_u64(policy->min * max_perf, max_freq);

-	lowest_perf = READ_ONCE(cpudata->lowest_perf);
-	if (min_limit_perf < lowest_perf)
-		min_limit_perf = lowest_perf;
-
-	if (max_limit_perf < min_limit_perf)
-		max_limit_perf = min_limit_perf;

 	WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
 	WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
···

 	if (on)
 		policy->cpuinfo.max_freq = max_freq;
-	else if (policy->cpuinfo.max_freq > nominal_freq * 1000)
-		policy->cpuinfo.max_freq = nominal_freq * 1000;

 	policy->max = policy->cpuinfo.max_freq;

···
 		pr_err("Boost mode is not supported by this processor or SBIOS\n");
 		return -EOPNOTSUPP;
 	}
-	mutex_lock(&amd_pstate_driver_lock);
 	ret = amd_pstate_cpu_boost_update(policy, state);
-	WRITE_ONCE(cpudata->boost_state, !ret ? state : false);
 	policy->boost_enabled = !ret ? state : false;
 	refresh_frequency_limits(policy);
-	mutex_unlock(&amd_pstate_driver_lock);

 	return ret;
 }
···
 		ret = 0;
 		goto exit_err;
 	}
-
-	/* at least one CPU supports CPB, even if others fail later on to set up */
-	current_pstate_driver->boost_enabled = true;

 	ret = rdmsrl_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
 	if (ret) {
···
 	 * sched_set_itmt_support(true) has been called and it is valid to
 	 * update them at any time after it has been called.
 	 */
-	sched_set_itmt_core_prio((int)READ_ONCE(cpudata->highest_perf), cpudata->cpu);

 	schedule_work(&sched_prefcore_work);
 }
···
 	if (!amd_pstate_prefcore)
 		return;

-	mutex_lock(&amd_pstate_driver_lock);
 	ret = amd_get_highest_perf(cpu, &cur_high);
 	if (ret)
 		goto free_cpufreq_put;
···
 	if (!highest_perf_changed)
 		cpufreq_update_policy(cpu);

-	mutex_unlock(&amd_pstate_driver_lock);
 }

 /*
···
 {
 	int ret;
 	u32 min_freq, max_freq;
-	u32 nominal_perf, nominal_freq;
 	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
-	u32 boost_ratio, lowest_nonlinear_ratio;
 	struct cppc_perf_caps cppc_perf;

 	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
···
 		return ret;

 	if (quirks && quirks->lowest_freq)
-		min_freq = quirks->lowest_freq * 1000;
 	else
-		min_freq = cppc_perf.lowest_freq * 1000;

 	if (quirks && quirks->nominal_freq)
-		nominal_freq = quirks->nominal_freq ;
 	else
 		nominal_freq = cppc_perf.nominal_freq;

 	nominal_perf = READ_ONCE(cpudata->nominal_perf);
-
-	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
-	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT) * 1000;

 	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
-	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
-					 nominal_perf);
-	lowest_nonlinear_freq = (nominal_freq * lowest_nonlinear_ratio >> SCHED_CAPACITY_SHIFT) * 1000;
-
-	WRITE_ONCE(cpudata->min_freq, min_freq);
-	WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
-	WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
-	WRITE_ONCE(cpudata->max_freq, max_freq);

 	/**
 	 * Below values need to be initialized correctly, otherwise driver will fail to load
···
 	 */
 	if (min_freq <= 0 || max_freq <= 0 || nominal_freq <= 0 || min_freq > max_freq) {
 		pr_err("min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect\n",
-			min_freq, max_freq, nominal_freq * 1000);
 		return -EINVAL;
 	}

-	if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq * 1000) {
 		pr_err("lowest_nonlinear_freq(%d) value is out of range [min_freq(%d), nominal_freq(%d)]\n",
-			lowest_nonlinear_freq, min_freq, nominal_freq * 1000);
 		return -EINVAL;
 	}

···
 static ssize_t store_energy_performance_preference(
 		struct cpufreq_policy *policy, const char *buf, size_t count)
 {
-	struct amd_cpudata *cpudata = policy->driver_data;
 	char str_preference[21];
 	ssize_t ret;

···
 	if (ret < 0)
 		return -EINVAL;

-	mutex_lock(&amd_pstate_limits_lock);
-	ret = amd_pstate_set_energy_pref_index(cpudata, ret);
-	mutex_unlock(&amd_pstate_limits_lock);

-	return ret ?: count;
 }

 static ssize_t show_energy_performance_preference(
···
 	struct amd_cpudata *cpudata = policy->driver_data;
 	int preference;

-	preference = amd_pstate_get_energy_pref_index(cpudata);
-	if (preference < 0)
-		return preference;

 	return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
 }
···
 		amd_pstate_driver_cleanup();
 		return ret;
 	}

 	ret = cpufreq_register_driver(current_pstate_driver);
 	if (ret) {
···
 static ssize_t status_show(struct device *dev,
 			   struct device_attribute *attr, char *buf)
 {
-	ssize_t ret;

-	mutex_lock(&amd_pstate_driver_lock);
-	ret = amd_pstate_show_status(buf);
-	mutex_unlock(&amd_pstate_driver_lock);

-	return ret;
 }

 static ssize_t status_store(struct device *a, struct device_attribute *b,
···
 	char *p = memchr(buf, '\n', count);
 	int ret;

-	mutex_lock(&amd_pstate_driver_lock);
 	ret = amd_pstate_update_status(buf, p ? p - buf : count);
-	mutex_unlock(&amd_pstate_driver_lock);

 	return ret < 0 ? ret : count;
 }
···
 		return -ENOMEM;

 	cpudata->cpu = policy->cpu;
-	cpudata->epp_policy = 0;

 	ret = amd_pstate_init_perf(cpudata);
 	if (ret)
···

 	policy->driver_data = cpudata;

-	cpudata->epp_cached = cpudata->epp_default = amd_pstate_get_epp(cpudata, 0);
-
 	policy->min = policy->cpuinfo.min_freq;
 	policy->max = policy->cpuinfo.max_freq;

···
 	 * the default cpufreq governor is neither powersave nor performance.
 	 */
 	if (amd_pstate_acpi_pm_profile_server() ||
-	    amd_pstate_acpi_pm_profile_undefined())
 		policy->policy = CPUFREQ_POLICY_PERFORMANCE;
-	else
 		policy->policy = CPUFREQ_POLICY_POWERSAVE;

 	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
···
 			return ret;
 		WRITE_ONCE(cpudata->cppc_cap1_cached, value);
 	}

 	current_pstate_driver->adjust_perf = NULL;

···
 static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
 {
 	struct amd_cpudata *cpudata = policy->driver_data;
-	u32 max_perf, min_perf;
-	u64 value;
-	s16 epp;

-	max_perf = READ_ONCE(cpudata->highest_perf);
-	min_perf = READ_ONCE(cpudata->lowest_perf);
 	amd_pstate_update_min_max_limit(policy);
-
-	max_perf = clamp_t(unsigned long, max_perf, cpudata->min_limit_perf,
-			   cpudata->max_limit_perf);
-	min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf,
-			   cpudata->max_limit_perf);
-	value = READ_ONCE(cpudata->cppc_req_cached);
-
-	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
-		min_perf = min(cpudata->nominal_perf, max_perf);
-
-	/* Initial min/max values for CPPC Performance Controls Register */
-	value &= ~AMD_CPPC_MIN_PERF(~0L);
-	value |= AMD_CPPC_MIN_PERF(min_perf);
-
-	value
&= ~AMD_CPPC_MAX_PERF(~0L); 1577 - value |= AMD_CPPC_MAX_PERF(max_perf); 1578 - 1579 - /* CPPC EPP feature require to set zero to the desire perf bit */ 1580 - value &= ~AMD_CPPC_DES_PERF(~0L); 1581 - value |= AMD_CPPC_DES_PERF(0); 1582 - 1583 - cpudata->epp_policy = cpudata->policy; 1584 - 1585 - /* Get BIOS pre-defined epp value */ 1586 - epp = amd_pstate_get_epp(cpudata, value); 1587 - if (epp < 0) { 1588 - /** 1589 - * This return value can only be negative for shared_memory 1590 - * systems where EPP register read/write not supported. 1591 - */ 1592 - return epp; 1593 - } 1594 1595 if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) 1596 epp = 0; 1597 1598 - WRITE_ONCE(cpudata->cppc_req_cached, value); 1599 - return amd_pstate_set_epp(cpudata, epp); 1600 } 1601 1602 static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) ··· 1598 return 0; 1599 } 1600 1601 - static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata) 1602 { 1603 - struct cppc_perf_ctrls perf_ctrls; 1604 - u64 value, max_perf; 1605 int ret; 1606 1607 ret = amd_pstate_cppc_enable(true); 1608 if (ret) 1609 pr_err("failed to enable amd pstate during resume, return %d\n", ret); 1610 1611 - value = READ_ONCE(cpudata->cppc_req_cached); 1612 max_perf = READ_ONCE(cpudata->highest_perf); 1613 1614 - if (cpu_feature_enabled(X86_FEATURE_CPPC)) { 1615 - wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value); 1616 - } else { 1617 - perf_ctrls.max_perf = max_perf; 1618 - cppc_set_perf(cpudata->cpu, &perf_ctrls); 1619 - perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(cpudata->epp_cached); 1620 - cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1); 1621 } 1622 } 1623 1624 static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) 1625 { 1626 struct amd_cpudata *cpudata = policy->driver_data; 1627 1628 pr_debug("AMD CPU Core %d going online\n", cpudata->cpu); 1629 1630 - if (cppc_state == AMD_PSTATE_ACTIVE) { 1631 - amd_pstate_epp_reenable(cpudata); 1632 - cpudata->suspended = false; 1633 
- } 1634 1635 return 0; 1636 - } 1637 - 1638 - static void amd_pstate_epp_offline(struct cpufreq_policy *policy) 1639 - { 1640 - struct amd_cpudata *cpudata = policy->driver_data; 1641 - struct cppc_perf_ctrls perf_ctrls; 1642 - int min_perf; 1643 - u64 value; 1644 - 1645 - min_perf = READ_ONCE(cpudata->lowest_perf); 1646 - value = READ_ONCE(cpudata->cppc_req_cached); 1647 - 1648 - mutex_lock(&amd_pstate_limits_lock); 1649 - if (cpu_feature_enabled(X86_FEATURE_CPPC)) { 1650 - cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN; 1651 - 1652 - /* Set max perf same as min perf */ 1653 - value &= ~AMD_CPPC_MAX_PERF(~0L); 1654 - value |= AMD_CPPC_MAX_PERF(min_perf); 1655 - value &= ~AMD_CPPC_MIN_PERF(~0L); 1656 - value |= AMD_CPPC_MIN_PERF(min_perf); 1657 - wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value); 1658 - } else { 1659 - perf_ctrls.desired_perf = 0; 1660 - perf_ctrls.min_perf = min_perf; 1661 - perf_ctrls.max_perf = min_perf; 1662 - cppc_set_perf(cpudata->cpu, &perf_ctrls); 1663 - perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(HWP_EPP_BALANCE_POWERSAVE); 1664 - cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1); 1665 - } 1666 - mutex_unlock(&amd_pstate_limits_lock); 1667 } 1668 1669 static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) 1670 { 1671 struct amd_cpudata *cpudata = policy->driver_data; 1672 - 1673 - pr_debug("AMD CPU Core %d going offline\n", cpudata->cpu); 1674 1675 if (cpudata->suspended) 1676 return 0; 1677 1678 - if (cppc_state == AMD_PSTATE_ACTIVE) 1679 - amd_pstate_epp_offline(policy); 1680 1681 - return 0; 1682 } 1683 1684 static int amd_pstate_epp_suspend(struct cpufreq_policy *policy) ··· 1682 struct amd_cpudata *cpudata = policy->driver_data; 1683 1684 if (cpudata->suspended) { 1685 - mutex_lock(&amd_pstate_limits_lock); 1686 1687 /* enable amd pstate from suspend state*/ 1688 - amd_pstate_epp_reenable(cpudata); 1689 - 1690 - mutex_unlock(&amd_pstate_limits_lock); 1691 1692 cpudata->suspended = false; 1693 } ··· 1838 
static_call_update(amd_pstate_cppc_enable, shmem_cppc_enable); 1839 static_call_update(amd_pstate_init_perf, shmem_init_perf); 1840 static_call_update(amd_pstate_update_perf, shmem_update_perf); 1841 } 1842 1843 if (amd_pstate_prefcore) {
··· 22 23 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 24 25 + #include <linux/bitfield.h> 26 #include <linux/kernel.h> 27 #include <linux/module.h> 28 #include <linux/init.h> ··· 87 static bool cppc_enabled; 88 static bool amd_pstate_prefcore = true; 89 static struct quirk_entry *quirks; 90 + 91 + #define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0) 92 + #define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8) 93 + #define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16) 94 + #define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24) 95 96 /* 97 * AMD Energy Preference Performance (EPP) ··· 180 static DEFINE_MUTEX(amd_pstate_limits_lock); 181 static DEFINE_MUTEX(amd_pstate_driver_lock); 182 183 + static s16 msr_get_epp(struct amd_cpudata *cpudata) 184 + { 185 + u64 value; 186 + int ret; 187 + 188 + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value); 189 + if (ret < 0) { 190 + pr_debug("Could not retrieve energy perf value (%d)\n", ret); 191 + return ret; 192 + } 193 + 194 + return FIELD_GET(AMD_CPPC_EPP_PERF_MASK, value); 195 + } 196 + 197 + DEFINE_STATIC_CALL(amd_pstate_get_epp, msr_get_epp); 198 + 199 + static inline s16 amd_pstate_get_epp(struct amd_cpudata *cpudata) 200 + { 201 + return static_call(amd_pstate_get_epp)(cpudata); 202 + } 203 + 204 + static s16 shmem_get_epp(struct amd_cpudata *cpudata) 205 { 206 u64 epp; 207 int ret; 208 209 + ret = cppc_get_epp_perf(cpudata->cpu, &epp); 210 + if (ret < 0) { 211 + pr_debug("Could not retrieve energy perf value (%d)\n", ret); 212 + return ret; 213 } 214 215 return (s16)(epp & 0xff); 216 } 217 218 + static int msr_update_perf(struct amd_cpudata *cpudata, u32 min_perf, 219 + u32 des_perf, u32 max_perf, u32 epp, bool fast_switch) 220 { 221 + u64 value, prev; 222 223 + value = prev = READ_ONCE(cpudata->cppc_req_cached); 224 225 + value &= ~(AMD_CPPC_MAX_PERF_MASK | AMD_CPPC_MIN_PERF_MASK | 226 + AMD_CPPC_DES_PERF_MASK | AMD_CPPC_EPP_PERF_MASK); 227 + value |= FIELD_PREP(AMD_CPPC_MAX_PERF_MASK, max_perf); 228 + value |= 
FIELD_PREP(AMD_CPPC_DES_PERF_MASK, des_perf); 229 + value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf); 230 + value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); 231 + 232 + if (value == prev) 233 + return 0; 234 + 235 + if (fast_switch) { 236 + wrmsrl(MSR_AMD_CPPC_REQ, value); 237 + return 0; 238 + } else { 239 + int ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value); 240 + 241 + if (ret) 242 + return ret; 243 } 244 245 + WRITE_ONCE(cpudata->cppc_req_cached, value); 246 + WRITE_ONCE(cpudata->epp_cached, epp); 247 248 + return 0; 249 } 250 251 DEFINE_STATIC_CALL(amd_pstate_update_perf, msr_update_perf); 252 253 + static inline int amd_pstate_update_perf(struct amd_cpudata *cpudata, 254 u32 min_perf, u32 des_perf, 255 + u32 max_perf, u32 epp, 256 + bool fast_switch) 257 { 258 + return static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf, 259 + max_perf, epp, fast_switch); 260 } 261 262 + static int msr_set_epp(struct amd_cpudata *cpudata, u32 epp) 263 { 264 + u64 value, prev; 265 int ret; 266 267 + value = prev = READ_ONCE(cpudata->cppc_req_cached); 268 + value &= ~AMD_CPPC_EPP_PERF_MASK; 269 + value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); 270 271 + if (value == prev) 272 + return 0; 273 274 + ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value); 275 + if (ret) { 276 + pr_err("failed to set energy perf value (%d)\n", ret); 277 + return ret; 278 } 279 + 280 + /* update both so that msr_update_perf() can effectively check */ 281 + WRITE_ONCE(cpudata->epp_cached, epp); 282 + WRITE_ONCE(cpudata->cppc_req_cached, value); 283 284 return ret; 285 } 286 287 + DEFINE_STATIC_CALL(amd_pstate_set_epp, msr_set_epp); 288 + 289 + static inline int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp) 290 { 291 + return static_call(amd_pstate_set_epp)(cpudata, epp); 292 + } 293 + 294 + static int shmem_set_epp(struct amd_cpudata *cpudata, u32 epp) 295 + { 296 int ret; 297 + struct cppc_perf_ctrls perf_ctrls; 298 + 299 + if (epp == 
cpudata->epp_cached) 300 + return 0; 301 + 302 + perf_ctrls.energy_perf = epp; 303 + ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1); 304 + if (ret) { 305 + pr_debug("failed to set energy perf value (%d)\n", ret); 306 + return ret; 307 + } 308 + WRITE_ONCE(cpudata->epp_cached, epp); 309 + 310 + return ret; 311 + } 312 + 313 + static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy, 314 + int pref_index) 315 + { 316 + struct amd_cpudata *cpudata = policy->driver_data; 317 + int epp; 318 319 if (!pref_index) 320 epp = cpudata->epp_default; 321 + else 322 epp = epp_values[pref_index]; 323 324 if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) { ··· 301 return -EBUSY; 302 } 303 304 + if (trace_amd_pstate_epp_perf_enabled()) { 305 + trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, 306 + epp, 307 + FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), 308 + FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached), 309 + policy->boost_enabled); 310 + } 311 312 + return amd_pstate_set_epp(cpudata, epp); 313 } 314 315 static inline int msr_cppc_enable(bool enable) ··· 442 return static_call(amd_pstate_init_perf)(cpudata); 443 } 444 445 + static int shmem_update_perf(struct amd_cpudata *cpudata, u32 min_perf, 446 + u32 des_perf, u32 max_perf, u32 epp, bool fast_switch) 447 { 448 struct cppc_perf_ctrls perf_ctrls; 449 + 450 + if (cppc_state == AMD_PSTATE_ACTIVE) { 451 + int ret = shmem_set_epp(cpudata, epp); 452 + 453 + if (ret) 454 + return ret; 455 + } 456 457 perf_ctrls.max_perf = max_perf; 458 perf_ctrls.min_perf = min_perf; 459 perf_ctrls.desired_perf = des_perf; 460 461 + return cppc_set_perf(cpudata->cpu, &perf_ctrls); 462 } 463 464 static inline bool amd_pstate_sample(struct amd_cpudata *cpudata) ··· 493 { 494 unsigned long max_freq; 495 struct cpufreq_policy *policy = cpufreq_cpu_get(cpudata->cpu); 496 u32 nominal_perf = READ_ONCE(cpudata->nominal_perf); 497 498 des_perf = clamp_t(unsigned long, des_perf, 
min_perf, max_perf); 499 500 max_freq = READ_ONCE(cpudata->max_limit_freq); ··· 511 des_perf = 0; 512 } 513 514 /* limit the max perf when core performance boost feature is disabled */ 515 if (!cpudata->boost_supported) 516 max_perf = min_t(unsigned long, nominal_perf, max_perf); 517 518 if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) { 519 trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq, 520 cpudata->cur.mperf, cpudata->cur.aperf, cpudata->cur.tsc, 521 + cpudata->cpu, fast_switch); 522 } 523 524 + amd_pstate_update_perf(cpudata, min_perf, des_perf, max_perf, 0, fast_switch); 525 526 cpufreq_cpu_put(policy); 527 } 528 ··· 570 571 static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) 572 { 573 + u32 max_limit_perf, min_limit_perf, max_perf, max_freq; 574 struct amd_cpudata *cpudata = policy->driver_data; 575 576 max_perf = READ_ONCE(cpudata->highest_perf); ··· 578 max_limit_perf = div_u64(policy->max * max_perf, max_freq); 579 min_limit_perf = div_u64(policy->min * max_perf, max_freq); 580 581 + if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) 582 + min_limit_perf = min(cpudata->nominal_perf, max_limit_perf); 583 584 WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf); 585 WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf); ··· 704 705 if (on) 706 policy->cpuinfo.max_freq = max_freq; 707 + else if (policy->cpuinfo.max_freq > nominal_freq) 708 + policy->cpuinfo.max_freq = nominal_freq; 709 710 policy->max = policy->cpuinfo.max_freq; 711 ··· 727 pr_err("Boost mode is not supported by this processor or SBIOS\n"); 728 return -EOPNOTSUPP; 729 } 730 + guard(mutex)(&amd_pstate_driver_lock); 731 + 732 ret = amd_pstate_cpu_boost_update(policy, state); 733 policy->boost_enabled = !ret ? 
state : false; 734 refresh_frequency_limits(policy); 735 736 return ret; 737 } ··· 751 ret = 0; 752 goto exit_err; 753 } 754 755 ret = rdmsrl_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val); 756 if (ret) { ··· 802 * sched_set_itmt_support(true) has been called and it is valid to 803 * update them at any time after it has been called. 804 */ 805 + sched_set_itmt_core_prio((int)READ_ONCE(cpudata->prefcore_ranking), cpudata->cpu); 806 807 schedule_work(&sched_prefcore_work); 808 } ··· 823 if (!amd_pstate_prefcore) 824 return; 825 826 + guard(mutex)(&amd_pstate_driver_lock); 827 + 828 ret = amd_get_highest_perf(cpu, &cur_high); 829 if (ret) 830 goto free_cpufreq_put; ··· 843 if (!highest_perf_changed) 844 cpufreq_update_policy(cpu); 845 846 } 847 848 /* ··· 895 { 896 int ret; 897 u32 min_freq, max_freq; 898 + u32 highest_perf, nominal_perf, nominal_freq; 899 u32 lowest_nonlinear_perf, lowest_nonlinear_freq; 900 struct cppc_perf_caps cppc_perf; 901 902 ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); ··· 905 return ret; 906 907 if (quirks && quirks->lowest_freq) 908 + min_freq = quirks->lowest_freq; 909 else 910 + min_freq = cppc_perf.lowest_freq; 911 912 if (quirks && quirks->nominal_freq) 913 + nominal_freq = quirks->nominal_freq; 914 else 915 nominal_freq = cppc_perf.nominal_freq; 916 917 + highest_perf = READ_ONCE(cpudata->highest_perf); 918 nominal_perf = READ_ONCE(cpudata->nominal_perf); 919 + max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf); 920 921 lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf); 922 + lowest_nonlinear_freq = div_u64((u64)nominal_freq * lowest_nonlinear_perf, nominal_perf); 923 + WRITE_ONCE(cpudata->min_freq, min_freq * 1000); 924 + WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq * 1000); 925 + WRITE_ONCE(cpudata->nominal_freq, nominal_freq * 1000); 926 + WRITE_ONCE(cpudata->max_freq, max_freq * 1000); 927 928 /** 929 * Below values need to be initialized correctly, otherwise driver will fail 
to load ··· 937 */ 938 if (min_freq <= 0 || max_freq <= 0 || nominal_freq <= 0 || min_freq > max_freq) { 939 pr_err("min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect\n", 940 + min_freq, max_freq, nominal_freq); 941 return -EINVAL; 942 } 943 944 + if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq) { 945 pr_err("lowest_nonlinear_freq(%d) value is out of range [min_freq(%d), nominal_freq(%d)]\n", 946 + lowest_nonlinear_freq, min_freq, nominal_freq); 947 return -EINVAL; 948 } 949 ··· 1160 static ssize_t store_energy_performance_preference( 1161 struct cpufreq_policy *policy, const char *buf, size_t count) 1162 { 1163 char str_preference[21]; 1164 ssize_t ret; 1165 ··· 1172 if (ret < 0) 1173 return -EINVAL; 1174 1175 + guard(mutex)(&amd_pstate_limits_lock); 1176 1177 + ret = amd_pstate_set_energy_pref_index(policy, ret); 1178 + 1179 + return ret ? ret : count; 1180 } 1181 1182 static ssize_t show_energy_performance_preference( ··· 1185 struct amd_cpudata *cpudata = policy->driver_data; 1186 int preference; 1187 1188 + switch (cpudata->epp_cached) { 1189 + case AMD_CPPC_EPP_PERFORMANCE: 1190 + preference = EPP_INDEX_PERFORMANCE; 1191 + break; 1192 + case AMD_CPPC_EPP_BALANCE_PERFORMANCE: 1193 + preference = EPP_INDEX_BALANCE_PERFORMANCE; 1194 + break; 1195 + case AMD_CPPC_EPP_BALANCE_POWERSAVE: 1196 + preference = EPP_INDEX_BALANCE_POWERSAVE; 1197 + break; 1198 + case AMD_CPPC_EPP_POWERSAVE: 1199 + preference = EPP_INDEX_POWERSAVE; 1200 + break; 1201 + default: 1202 + return -EINVAL; 1203 + } 1204 1205 return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]); 1206 } ··· 1235 amd_pstate_driver_cleanup(); 1236 return ret; 1237 } 1238 + 1239 + /* at least one CPU supports CPB */ 1240 + current_pstate_driver->boost_enabled = cpu_feature_enabled(X86_FEATURE_CPB); 1241 1242 ret = cpufreq_register_driver(current_pstate_driver); 1243 if (ret) { ··· 1340 static ssize_t status_show(struct device *dev, 1341 struct 
device_attribute *attr, char *buf) 1342 { 1343 1344 + guard(mutex)(&amd_pstate_driver_lock); 1345 1346 + return amd_pstate_show_status(buf); 1347 } 1348 1349 static ssize_t status_store(struct device *a, struct device_attribute *b, ··· 1355 char *p = memchr(buf, '\n', count); 1356 int ret; 1357 1358 + guard(mutex)(&amd_pstate_driver_lock); 1359 ret = amd_pstate_update_status(buf, p ? p - buf : count); 1360 1361 return ret < 0 ? ret : count; 1362 } ··· 1451 return -ENOMEM; 1452 1453 cpudata->cpu = policy->cpu; 1454 1455 ret = amd_pstate_init_perf(cpudata); 1456 if (ret) ··· 1477 1478 policy->driver_data = cpudata; 1479 1480 policy->min = policy->cpuinfo.min_freq; 1481 policy->max = policy->cpuinfo.max_freq; 1482 ··· 1489 * the default cpufreq governor is neither powersave nor performance. 1490 */ 1491 if (amd_pstate_acpi_pm_profile_server() || 1492 + amd_pstate_acpi_pm_profile_undefined()) { 1493 policy->policy = CPUFREQ_POLICY_PERFORMANCE; 1494 + cpudata->epp_default = amd_pstate_get_epp(cpudata); 1495 + } else { 1496 policy->policy = CPUFREQ_POLICY_POWERSAVE; 1497 + cpudata->epp_default = AMD_CPPC_EPP_BALANCE_PERFORMANCE; 1498 + } 1499 1500 if (cpu_feature_enabled(X86_FEATURE_CPPC)) { 1501 ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value); ··· 1505 return ret; 1506 WRITE_ONCE(cpudata->cppc_cap1_cached, value); 1507 } 1508 + ret = amd_pstate_set_epp(cpudata, cpudata->epp_default); 1509 + if (ret) 1510 + return ret; 1511 1512 current_pstate_driver->adjust_perf = NULL; 1513 ··· 1530 static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) 1531 { 1532 struct amd_cpudata *cpudata = policy->driver_data; 1533 + u32 epp; 1534 1535 amd_pstate_update_min_max_limit(policy); 1536 1537 if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) 1538 epp = 0; 1539 + else 1540 + epp = READ_ONCE(cpudata->epp_cached); 1541 1542 + if (trace_amd_pstate_epp_perf_enabled()) { 1543 + trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, epp, 1544 + 
cpudata->min_limit_perf, 1545 + cpudata->max_limit_perf, 1546 + policy->boost_enabled); 1547 + } 1548 + 1549 + return amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U, 1550 + cpudata->max_limit_perf, epp, false); 1551 } 1552 1553 static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) ··· 1603 return 0; 1604 } 1605 1606 + static int amd_pstate_epp_reenable(struct cpufreq_policy *policy) 1607 { 1608 + struct amd_cpudata *cpudata = policy->driver_data; 1609 + u64 max_perf; 1610 int ret; 1611 1612 ret = amd_pstate_cppc_enable(true); 1613 if (ret) 1614 pr_err("failed to enable amd pstate during resume, return %d\n", ret); 1615 1616 max_perf = READ_ONCE(cpudata->highest_perf); 1617 1618 + if (trace_amd_pstate_epp_perf_enabled()) { 1619 + trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, 1620 + cpudata->epp_cached, 1621 + FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), 1622 + max_perf, policy->boost_enabled); 1623 } 1624 + 1625 + return amd_pstate_update_perf(cpudata, 0, 0, max_perf, cpudata->epp_cached, false); 1626 } 1627 1628 static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) 1629 { 1630 struct amd_cpudata *cpudata = policy->driver_data; 1631 + int ret; 1632 1633 pr_debug("AMD CPU Core %d going online\n", cpudata->cpu); 1634 1635 + ret = amd_pstate_epp_reenable(policy); 1636 + if (ret) 1637 + return ret; 1638 + cpudata->suspended = false; 1639 1640 return 0; 1641 } 1642 1643 static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) 1644 { 1645 struct amd_cpudata *cpudata = policy->driver_data; 1646 + int min_perf; 1647 1648 if (cpudata->suspended) 1649 return 0; 1650 1651 + min_perf = READ_ONCE(cpudata->lowest_perf); 1652 1653 + guard(mutex)(&amd_pstate_limits_lock); 1654 + 1655 + if (trace_amd_pstate_epp_perf_enabled()) { 1656 + trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, 1657 + AMD_CPPC_EPP_BALANCE_POWERSAVE, 1658 + min_perf, min_perf, policy->boost_enabled); 1659 + } 
1660 + 1661 + return amd_pstate_update_perf(cpudata, min_perf, 0, min_perf, 1662 + AMD_CPPC_EPP_BALANCE_POWERSAVE, false); 1663 } 1664 1665 static int amd_pstate_epp_suspend(struct cpufreq_policy *policy) ··· 1711 struct amd_cpudata *cpudata = policy->driver_data; 1712 1713 if (cpudata->suspended) { 1714 + guard(mutex)(&amd_pstate_limits_lock); 1715 1716 /* enable amd pstate from suspend state*/ 1717 + amd_pstate_epp_reenable(policy); 1718 1719 cpudata->suspended = false; 1720 } ··· 1869 static_call_update(amd_pstate_cppc_enable, shmem_cppc_enable); 1870 static_call_update(amd_pstate_init_perf, shmem_init_perf); 1871 static_call_update(amd_pstate_update_perf, shmem_update_perf); 1872 + static_call_update(amd_pstate_get_epp, shmem_get_epp); 1873 + static_call_update(amd_pstate_set_epp, shmem_set_epp); 1874 } 1875 1876 if (amd_pstate_prefcore) {
-3
drivers/cpufreq/amd-pstate.h
··· 57 * @hw_prefcore: check whether HW supports preferred core feature. 58 * Only when hw_prefcore and early prefcore param are true, 59 * AMD P-State driver supports preferred core feature. 60 - * @epp_policy: Last saved policy used to set energy-performance preference 61 * @epp_cached: Cached CPPC energy-performance preference value 62 * @policy: Cpufreq policy value 63 * @cppc_cap1_cached Cached MSR_AMD_CPPC_CAP1 register value ··· 93 bool hw_prefcore; 94 95 /* EPP feature related attributes*/ 96 - s16 epp_policy; 97 s16 epp_cached; 98 u32 policy; 99 u64 cppc_cap1_cached; 100 bool suspended; 101 s16 epp_default; 102 - bool boost_state; 103 }; 104 105 /*
··· 57 * @hw_prefcore: check whether HW supports preferred core feature. 58 * Only when hw_prefcore and early prefcore param are true, 59 * AMD P-State driver supports preferred core feature. 60 * @epp_cached: Cached CPPC energy-performance preference value 61 * @policy: Cpufreq policy value 62 * @cppc_cap1_cached Cached MSR_AMD_CPPC_CAP1 register value ··· 94 bool hw_prefcore; 95 96 /* EPP feature related attributes*/ 97 s16 epp_cached; 98 u32 policy; 99 u64 cppc_cap1_cached; 100 bool suspended; 101 s16 epp_default; 102 }; 103 104 /*
+45 -11
drivers/cpufreq/apple-soc-cpufreq.c
··· 22 #include <linux/pm_opp.h> 23 #include <linux/slab.h> 24 25 - #define APPLE_DVFS_CMD 0x20 26 - #define APPLE_DVFS_CMD_BUSY BIT(31) 27 - #define APPLE_DVFS_CMD_SET BIT(25) 28 - #define APPLE_DVFS_CMD_PS2 GENMASK(16, 12) 29 - #define APPLE_DVFS_CMD_PS1 GENMASK(4, 0) 30 31 /* Same timebase as CPU counter (24MHz) */ 32 #define APPLE_DVFS_LAST_CHG_TIME 0x38 ··· 38 * Apple ran out of bits and had to shift this in T8112... 39 */ 40 #define APPLE_DVFS_STATUS 0x50 41 #define APPLE_DVFS_STATUS_CUR_PS_T8103 GENMASK(7, 4) 42 #define APPLE_DVFS_STATUS_CUR_PS_SHIFT_T8103 4 43 #define APPLE_DVFS_STATUS_TGT_PS_T8103 GENMASK(3, 0) ··· 58 #define APPLE_DVFS_PLL_FACTOR_MULT GENMASK(31, 16) 59 #define APPLE_DVFS_PLL_FACTOR_DIV GENMASK(15, 0) 60 61 - #define APPLE_DVFS_TRANSITION_TIMEOUT 100 62 63 struct apple_soc_cpufreq_info { 64 u64 max_pstate; 65 u64 cur_pstate_mask; 66 u64 cur_pstate_shift; 67 }; 68 69 struct apple_cpu_priv { ··· 77 78 static struct cpufreq_driver apple_soc_cpufreq_driver; 79 80 static const struct apple_soc_cpufreq_info soc_t8103_info = { 81 .max_pstate = 15, 82 .cur_pstate_mask = APPLE_DVFS_STATUS_CUR_PS_T8103, 83 .cur_pstate_shift = APPLE_DVFS_STATUS_CUR_PS_SHIFT_T8103, 84 }; 85 86 static const struct apple_soc_cpufreq_info soc_t8112_info = { 87 .max_pstate = 31, 88 .cur_pstate_mask = APPLE_DVFS_STATUS_CUR_PS_T8112, 89 .cur_pstate_shift = APPLE_DVFS_STATUS_CUR_PS_SHIFT_T8112, 90 }; 91 92 static const struct apple_soc_cpufreq_info soc_default_info = { 93 .max_pstate = 15, 94 .cur_pstate_mask = 0, /* fallback */ 95 }; 96 97 static const struct of_device_id apple_soc_cpufreq_of_match[] __maybe_unused = { 98 { 99 .compatible = "apple,t8103-cluster-cpufreq", 100 .data = &soc_t8103_info, ··· 140 unsigned int pstate; 141 142 if (priv->info->cur_pstate_mask) { 143 - u64 reg = readq_relaxed(priv->reg_base + APPLE_DVFS_STATUS); 144 145 pstate = (reg & priv->info->cur_pstate_mask) >> priv->info->cur_pstate_shift; 146 } else { ··· 179 return -EIO; 180 } 181 182 - reg 
&= ~(APPLE_DVFS_CMD_PS1 | APPLE_DVFS_CMD_PS2); 183 - reg |= FIELD_PREP(APPLE_DVFS_CMD_PS1, pstate); 184 - reg |= FIELD_PREP(APPLE_DVFS_CMD_PS2, pstate); 185 reg |= APPLE_DVFS_CMD_SET; 186 187 writeq_relaxed(reg, priv->reg_base + APPLE_DVFS_CMD); ··· 309 310 transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev); 311 if (!transition_latency) 312 - transition_latency = CPUFREQ_ETERNAL; 313 314 policy->cpuinfo.transition_latency = transition_latency; 315 policy->dvfs_possible_from_any_cpu = true;
··· 22 #include <linux/pm_opp.h> 23 #include <linux/slab.h> 24 25 + #define APPLE_DVFS_CMD 0x20 26 + #define APPLE_DVFS_CMD_BUSY BIT(31) 27 + #define APPLE_DVFS_CMD_SET BIT(25) 28 + #define APPLE_DVFS_CMD_PS1_S5L8960X GENMASK(24, 22) 29 + #define APPLE_DVFS_CMD_PS1_S5L8960X_SHIFT 22 30 + #define APPLE_DVFS_CMD_PS2 GENMASK(15, 12) 31 + #define APPLE_DVFS_CMD_PS1 GENMASK(4, 0) 32 + #define APPLE_DVFS_CMD_PS1_SHIFT 0 33 34 /* Same timebase as CPU counter (24MHz) */ 35 #define APPLE_DVFS_LAST_CHG_TIME 0x38 ··· 35 * Apple ran out of bits and had to shift this in T8112... 36 */ 37 #define APPLE_DVFS_STATUS 0x50 38 + #define APPLE_DVFS_STATUS_CUR_PS_S5L8960X GENMASK(5, 3) 39 + #define APPLE_DVFS_STATUS_CUR_PS_SHIFT_S5L8960X 3 40 + #define APPLE_DVFS_STATUS_TGT_PS_S5L8960X GENMASK(2, 0) 41 #define APPLE_DVFS_STATUS_CUR_PS_T8103 GENMASK(7, 4) 42 #define APPLE_DVFS_STATUS_CUR_PS_SHIFT_T8103 4 43 #define APPLE_DVFS_STATUS_TGT_PS_T8103 GENMASK(3, 0) ··· 52 #define APPLE_DVFS_PLL_FACTOR_MULT GENMASK(31, 16) 53 #define APPLE_DVFS_PLL_FACTOR_DIV GENMASK(15, 0) 54 55 + #define APPLE_DVFS_TRANSITION_TIMEOUT 400 56 57 struct apple_soc_cpufreq_info { 58 + bool has_ps2; 59 u64 max_pstate; 60 u64 cur_pstate_mask; 61 u64 cur_pstate_shift; 62 + u64 ps1_mask; 63 + u64 ps1_shift; 64 }; 65 66 struct apple_cpu_priv { ··· 68 69 static struct cpufreq_driver apple_soc_cpufreq_driver; 70 71 + static const struct apple_soc_cpufreq_info soc_s5l8960x_info = { 72 + .has_ps2 = false, 73 + .max_pstate = 7, 74 + .cur_pstate_mask = APPLE_DVFS_STATUS_CUR_PS_S5L8960X, 75 + .cur_pstate_shift = APPLE_DVFS_STATUS_CUR_PS_SHIFT_S5L8960X, 76 + .ps1_mask = APPLE_DVFS_CMD_PS1_S5L8960X, 77 + .ps1_shift = APPLE_DVFS_CMD_PS1_S5L8960X_SHIFT, 78 + }; 79 + 80 static const struct apple_soc_cpufreq_info soc_t8103_info = { 81 + .has_ps2 = true, 82 .max_pstate = 15, 83 .cur_pstate_mask = APPLE_DVFS_STATUS_CUR_PS_T8103, 84 .cur_pstate_shift = APPLE_DVFS_STATUS_CUR_PS_SHIFT_T8103, 85 + .ps1_mask = APPLE_DVFS_CMD_PS1, 86 + 
.ps1_shift = APPLE_DVFS_CMD_PS1_SHIFT, 87 }; 88 89 static const struct apple_soc_cpufreq_info soc_t8112_info = { 90 + .has_ps2 = false, 91 .max_pstate = 31, 92 .cur_pstate_mask = APPLE_DVFS_STATUS_CUR_PS_T8112, 93 .cur_pstate_shift = APPLE_DVFS_STATUS_CUR_PS_SHIFT_T8112, 94 + .ps1_mask = APPLE_DVFS_CMD_PS1, 95 + .ps1_shift = APPLE_DVFS_CMD_PS1_SHIFT, 96 }; 97 98 static const struct apple_soc_cpufreq_info soc_default_info = { 99 + .has_ps2 = false, 100 .max_pstate = 15, 101 .cur_pstate_mask = 0, /* fallback */ 102 + .ps1_mask = APPLE_DVFS_CMD_PS1, 103 + .ps1_shift = APPLE_DVFS_CMD_PS1_SHIFT, 104 }; 105 106 static const struct of_device_id apple_soc_cpufreq_of_match[] __maybe_unused = { 107 + { 108 + .compatible = "apple,s5l8960x-cluster-cpufreq", 109 + .data = &soc_s5l8960x_info, 110 + }, 111 { 112 .compatible = "apple,t8103-cluster-cpufreq", 113 .data = &soc_t8103_info, ··· 109 unsigned int pstate; 110 111 if (priv->info->cur_pstate_mask) { 112 + u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS); 113 114 pstate = (reg & priv->info->cur_pstate_mask) >> priv->info->cur_pstate_shift; 115 } else { ··· 148 return -EIO; 149 } 150 151 + reg &= ~priv->info->ps1_mask; 152 + reg |= pstate << priv->info->ps1_shift; 153 + if (priv->info->has_ps2) { 154 + reg &= ~APPLE_DVFS_CMD_PS2; 155 + reg |= FIELD_PREP(APPLE_DVFS_CMD_PS2, pstate); 156 + } 157 reg |= APPLE_DVFS_CMD_SET; 158 159 writeq_relaxed(reg, priv->reg_base + APPLE_DVFS_CMD); ··· 275 276 transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev); 277 if (!transition_latency) 278 + transition_latency = APPLE_DVFS_TRANSITION_TIMEOUT * NSEC_PER_USEC; 279 280 policy->cpuinfo.transition_latency = transition_latency; 281 policy->dvfs_possible_from_any_cpu = true;
+2 -2
drivers/cpufreq/cpufreq-dt-platdev.c
··· 103 * platforms using "operating-points-v2" property. 104 */ 105 static const struct of_device_id blocklist[] __initconst = { 106 { .compatible = "allwinner,sun50i-a100" }, 107 { .compatible = "allwinner,sun50i-h6", }, 108 { .compatible = "allwinner,sun50i-h616", }, ··· 237 sizeof(struct cpufreq_dt_platform_data))); 238 } 239 core_initcall(cpufreq_dt_platdev_init); 240 - MODULE_DESCRIPTION("Generic DT based cpufreq platdev driver"); 241 - MODULE_LICENSE("GPL");
··· 103 * platforms using "operating-points-v2" property. 104 */ 105 static const struct of_device_id blocklist[] __initconst = { 106 + { .compatible = "airoha,en7581", }, 107 + 108 { .compatible = "allwinner,sun50i-a100" }, 109 { .compatible = "allwinner,sun50i-h6", }, 110 { .compatible = "allwinner,sun50i-h616", }, ··· 235 sizeof(struct cpufreq_dt_platform_data))); 236 } 237 core_initcall(cpufreq_dt_platdev_init);
+4 -3
drivers/cpufreq/cpufreq.c
··· 25 #include <linux/mutex.h> 26 #include <linux/pm_qos.h> 27 #include <linux/slab.h> 28 #include <linux/suspend.h> 29 #include <linux/syscore_ops.h> 30 #include <linux/tick.h> ··· 603 604 if (cpufreq_boost_trigger_state(enable)) { 605 pr_err("%s: Cannot %s BOOST!\n", 606 - __func__, enable ? "enable" : "disable"); 607 return -EINVAL; 608 } 609 610 pr_debug("%s: cpufreq BOOST %s\n", 611 - __func__, enable ? "enabled" : "disabled"); 612 613 return count; 614 } ··· 2813 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 2814 2815 pr_err("%s: Cannot %s BOOST\n", 2816 - __func__, state ? "enable" : "disable"); 2817 2818 return ret; 2819 }
··· 25 #include <linux/mutex.h> 26 #include <linux/pm_qos.h> 27 #include <linux/slab.h> 28 + #include <linux/string_choices.h> 29 #include <linux/suspend.h> 30 #include <linux/syscore_ops.h> 31 #include <linux/tick.h> ··· 602 603 if (cpufreq_boost_trigger_state(enable)) { 604 pr_err("%s: Cannot %s BOOST!\n", 605 + __func__, str_enable_disable(enable)); 606 return -EINVAL; 607 } 608 609 pr_debug("%s: cpufreq BOOST %s\n", 610 + __func__, str_enabled_disabled(enable)); 611 612 return count; 613 } ··· 2812 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 2813 2814 pr_err("%s: Cannot %s BOOST\n", 2815 + __func__, str_enable_disable(state)); 2816 2817 return ret; 2818 }
+34 -26
drivers/cpufreq/intel_pstate.c
··· 28 #include <linux/pm_qos.h> 29 #include <linux/bitfield.h> 30 #include <trace/events/power.h> 31 32 #include <asm/cpu.h> 33 #include <asm/div64.h> ··· 303 304 static struct cpufreq_driver *intel_pstate_driver __read_mostly; 305 306 - #define HYBRID_SCALING_FACTOR 78741 307 #define HYBRID_SCALING_FACTOR_MTL 80000 308 #define HYBRID_SCALING_FACTOR_LNL 86957 309 310 - static int hybrid_scaling_factor = HYBRID_SCALING_FACTOR; 311 312 static inline int core_get_scaling(void) 313 { ··· 415 static int intel_pstate_cppc_get_scaling(int cpu) 416 { 417 struct cppc_perf_caps cppc_perf; 418 - int ret; 419 - 420 - ret = cppc_get_perf_caps(cpu, &cppc_perf); 421 422 /* 423 - * If the nominal frequency and the nominal performance are not 424 - * zero and the ratio between them is not 100, return the hybrid 425 - * scaling factor. 426 */ 427 - if (!ret && cppc_perf.nominal_perf && cppc_perf.nominal_freq && 428 - cppc_perf.nominal_perf * 100 != cppc_perf.nominal_freq) 429 - return hybrid_scaling_factor; 430 431 return core_get_scaling(); 432 } ··· 2209 2210 static int hwp_get_cpu_scaling(int cpu) 2211 { 2212 - u8 cpu_type = 0; 2213 2214 - smp_call_function_single(cpu, hybrid_get_type, &cpu_type, 1); 2215 - /* P-cores have a smaller perf level-to-freqency scaling factor. */ 2216 - if (cpu_type == 0x40) 2217 - return hybrid_scaling_factor; 2218 2219 - /* Use default core scaling for E-cores */ 2220 - if (cpu_type == 0x20) 2221 return core_get_scaling(); 2222 2223 /* 2224 - * If reached here, this system is either non-hybrid (like Tiger 2225 - * Lake) or hybrid-capable (like Alder Lake or Raptor Lake) with 2226 - * no E cores (in which case CPUID for hybrid support is 0). 2227 - * 2228 - * The CPPC nominal_frequency field is 0 for non-hybrid systems, 2229 - * so the default core scaling will be used for them. 
2230 */ 2231 return intel_pstate_cppc_get_scaling(cpu); 2232 } ··· 2713 } 2714 2715 cpu->epp_powersave = -EINVAL; 2716 - cpu->epp_policy = 0; 2717 2718 intel_pstate_get_cpu_pstates(cpu); 2719 ··· 3669 }; 3670 3671 static const struct x86_cpu_id intel_hybrid_scaling_factor[] = { 3672 X86_MATCH_VFM(INTEL_METEORLAKE_L, HYBRID_SCALING_FACTOR_MTL), 3673 - X86_MATCH_VFM(INTEL_ARROWLAKE, HYBRID_SCALING_FACTOR_MTL), 3674 X86_MATCH_VFM(INTEL_LUNARLAKE_M, HYBRID_SCALING_FACTOR_LNL), 3675 {} 3676 };
··· 28 #include <linux/pm_qos.h> 29 #include <linux/bitfield.h> 30 #include <trace/events/power.h> 31 + #include <linux/units.h> 32 33 #include <asm/cpu.h> 34 #include <asm/div64.h> ··· 302 303 static struct cpufreq_driver *intel_pstate_driver __read_mostly; 304 305 + #define HYBRID_SCALING_FACTOR_ADL 78741 306 #define HYBRID_SCALING_FACTOR_MTL 80000 307 #define HYBRID_SCALING_FACTOR_LNL 86957 308 309 + static int hybrid_scaling_factor; 310 311 static inline int core_get_scaling(void) 312 { ··· 414 static int intel_pstate_cppc_get_scaling(int cpu) 415 { 416 struct cppc_perf_caps cppc_perf; 417 418 /* 419 + * Compute the perf-to-frequency scaling factor for the given CPU if 420 + * possible, unless it would be 0. 421 */ 422 + if (!cppc_get_perf_caps(cpu, &cppc_perf) && 423 + cppc_perf.nominal_perf && cppc_perf.nominal_freq) 424 + return div_u64(cppc_perf.nominal_freq * KHZ_PER_MHZ, 425 + cppc_perf.nominal_perf); 426 427 return core_get_scaling(); 428 } ··· 2211 2212 static int hwp_get_cpu_scaling(int cpu) 2213 { 2214 + if (hybrid_scaling_factor) { 2215 + u8 cpu_type = 0; 2216 2217 + smp_call_function_single(cpu, hybrid_get_type, &cpu_type, 1); 2218 2219 + /* 2220 + * Return the hybrid scaling factor for P-cores and use the 2221 + * default core scaling for E-cores. 2222 + */ 2223 + if (cpu_type == 0x40) 2224 + return hybrid_scaling_factor; 2225 + 2226 + if (cpu_type == 0x20) 2227 + return core_get_scaling(); 2228 + } 2229 + 2230 + /* Use core scaling on non-hybrid systems. */ 2231 + if (!cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) 2232 return core_get_scaling(); 2233 2234 /* 2235 + * The system is hybrid, but the hybrid scaling factor is not known or 2236 + * the CPU type is not one of the above, so use CPPC to compute the 2237 + * scaling factor for this CPU. 
2238 */ 2239 return intel_pstate_cppc_get_scaling(cpu); 2240 } ··· 2709 } 2710 2711 cpu->epp_powersave = -EINVAL; 2712 + cpu->epp_policy = CPUFREQ_POLICY_UNKNOWN; 2713 2714 intel_pstate_get_cpu_pstates(cpu); 2715 ··· 3665 }; 3666 3667 static const struct x86_cpu_id intel_hybrid_scaling_factor[] = { 3668 + X86_MATCH_VFM(INTEL_ALDERLAKE, HYBRID_SCALING_FACTOR_ADL), 3669 + X86_MATCH_VFM(INTEL_ALDERLAKE_L, HYBRID_SCALING_FACTOR_ADL), 3670 + X86_MATCH_VFM(INTEL_RAPTORLAKE, HYBRID_SCALING_FACTOR_ADL), 3671 + X86_MATCH_VFM(INTEL_RAPTORLAKE_P, HYBRID_SCALING_FACTOR_ADL), 3672 + X86_MATCH_VFM(INTEL_RAPTORLAKE_S, HYBRID_SCALING_FACTOR_ADL), 3673 X86_MATCH_VFM(INTEL_METEORLAKE_L, HYBRID_SCALING_FACTOR_MTL), 3674 X86_MATCH_VFM(INTEL_LUNARLAKE_M, HYBRID_SCALING_FACTOR_LNL), 3675 {} 3676 };
+2 -1
drivers/cpufreq/powernv-cpufreq.c
··· 18 #include <linux/of.h> 19 #include <linux/reboot.h> 20 #include <linux/slab.h> 21 #include <linux/cpu.h> 22 #include <linux/hashtable.h> 23 #include <trace/events/power.h> ··· 282 pr_info("cpufreq pstate min 0x%x nominal 0x%x max 0x%x\n", pstate_min, 283 pstate_nominal, pstate_max); 284 pr_info("Workload Optimized Frequency is %s in the platform\n", 285 - (powernv_pstate_info.wof_enabled) ? "enabled" : "disabled"); 286 287 pstate_ids = of_get_property(power_mgt, "ibm,pstate-ids", &len_ids); 288 if (!pstate_ids) {
··· 18 #include <linux/of.h> 19 #include <linux/reboot.h> 20 #include <linux/slab.h> 21 + #include <linux/string_choices.h> 22 #include <linux/cpu.h> 23 #include <linux/hashtable.h> 24 #include <trace/events/power.h> ··· 281 pr_info("cpufreq pstate min 0x%x nominal 0x%x max 0x%x\n", pstate_min, 282 pstate_nominal, pstate_max); 283 pr_info("Workload Optimized Frequency is %s in the platform\n", 284 + str_enabled_disabled(powernv_pstate_info.wof_enabled)); 285 286 pstate_ids = of_get_property(power_mgt, "ibm,pstate-ids", &len_ids); 287 if (!pstate_ids) {
+24 -10
drivers/cpufreq/qcom-cpufreq-hw.c
··· 143 } 144 145 /* Get the frequency requested by the cpufreq core for the CPU */ 146 - static unsigned int qcom_cpufreq_get_freq(unsigned int cpu) 147 { 148 struct qcom_cpufreq_data *data; 149 const struct qcom_cpufreq_soc_data *soc_data; 150 - struct cpufreq_policy *policy; 151 unsigned int index; 152 153 - policy = cpufreq_cpu_get_raw(cpu); 154 if (!policy) 155 return 0; 156 ··· 161 return policy->freq_table[index].frequency; 162 } 163 164 - static unsigned int qcom_cpufreq_hw_get(unsigned int cpu) 165 { 166 struct qcom_cpufreq_data *data; 167 - struct cpufreq_policy *policy; 168 169 - policy = cpufreq_cpu_get_raw(cpu); 170 if (!policy) 171 return 0; 172 ··· 173 if (data->throttle_irq >= 0) 174 return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ; 175 176 - return qcom_cpufreq_get_freq(cpu); 177 } 178 179 static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy, ··· 364 * If h/w throttled frequency is higher than what cpufreq has requested 365 * for, then stop polling and switch back to interrupt mechanism. 366 */ 367 - if (throttled_freq >= qcom_cpufreq_get_freq(cpu)) 368 enable_irq(data->throttle_irq); 369 else 370 mod_delayed_work(system_highpri_wq, &data->throttle_work, ··· 442 return data->throttle_irq; 443 444 data->cancel_throttle = false; 445 - data->policy = policy; 446 447 mutex_init(&data->throttle_lock); 448 INIT_DEFERRABLE_WORK(&data->throttle_work, qcom_lmh_dcvs_poll); ··· 552 553 policy->driver_data = data; 554 policy->dvfs_possible_from_any_cpu = true; 555 556 ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy); 557 if (ret) { ··· 623 { 624 struct qcom_cpufreq_data *data = container_of(hw, struct qcom_cpufreq_data, cpu_clk); 625 626 - return qcom_lmh_get_throttle_freq(data); 627 } 628 629 static const struct clk_ops qcom_cpufreq_hw_clk_ops = { 630 .recalc_rate = qcom_cpufreq_hw_recalc_rate, 631 }; 632 633 static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
··· 143 } 144 145 /* Get the frequency requested by the cpufreq core for the CPU */ 146 + static unsigned int qcom_cpufreq_get_freq(struct cpufreq_policy *policy) 147 { 148 struct qcom_cpufreq_data *data; 149 const struct qcom_cpufreq_soc_data *soc_data; 150 unsigned int index; 151 152 if (!policy) 153 return 0; 154 ··· 163 return policy->freq_table[index].frequency; 164 } 165 166 + static unsigned int __qcom_cpufreq_hw_get(struct cpufreq_policy *policy) 167 { 168 struct qcom_cpufreq_data *data; 169 170 if (!policy) 171 return 0; 172 ··· 177 if (data->throttle_irq >= 0) 178 return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ; 179 180 + return qcom_cpufreq_get_freq(policy); 181 + } 182 + 183 + static unsigned int qcom_cpufreq_hw_get(unsigned int cpu) 184 + { 185 + return __qcom_cpufreq_hw_get(cpufreq_cpu_get_raw(cpu)); 186 } 187 188 static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy, ··· 363 * If h/w throttled frequency is higher than what cpufreq has requested 364 * for, then stop polling and switch back to interrupt mechanism. 365 */ 366 + if (throttled_freq >= qcom_cpufreq_get_freq(cpufreq_cpu_get_raw(cpu))) 367 enable_irq(data->throttle_irq); 368 else 369 mod_delayed_work(system_highpri_wq, &data->throttle_work, ··· 441 return data->throttle_irq; 442 443 data->cancel_throttle = false; 444 445 mutex_init(&data->throttle_lock); 446 INIT_DEFERRABLE_WORK(&data->throttle_work, qcom_lmh_dcvs_poll); ··· 552 553 policy->driver_data = data; 554 policy->dvfs_possible_from_any_cpu = true; 555 + data->policy = policy; 556 557 ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy); 558 if (ret) { ··· 622 { 623 struct qcom_cpufreq_data *data = container_of(hw, struct qcom_cpufreq_data, cpu_clk); 624 625 + return __qcom_cpufreq_hw_get(data->policy) * HZ_PER_KHZ; 626 + } 627 + 628 + /* 629 + * Since we cannot determine the closest rate of the target rate, let's just 630 + * return the actual rate at which the clock is running at. 
This is needed to 631 + * make clk_set_rate() API work properly. 632 + */ 633 + static int qcom_cpufreq_hw_determine_rate(struct clk_hw *hw, struct clk_rate_request *req) 634 + { 635 + req->rate = qcom_cpufreq_hw_recalc_rate(hw, 0); 636 + 637 + return 0; 638 } 639 640 static const struct clk_ops qcom_cpufreq_hw_clk_ops = { 641 .recalc_rate = qcom_cpufreq_hw_recalc_rate, 642 + .determine_rate = qcom_cpufreq_hw_determine_rate, 643 }; 644 645 static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
+45
drivers/cpufreq/scmi-cpufreq.c
··· 16 #include <linux/export.h> 17 #include <linux/module.h> 18 #include <linux/pm_opp.h> 19 #include <linux/slab.h> 20 #include <linux/scmi_protocol.h> 21 #include <linux/types.h> ··· 27 int nr_opp; 28 struct device *cpu_dev; 29 cpumask_var_t opp_shared_cpus; 30 }; 31 32 static struct scmi_protocol_handle *ph; ··· 177 NULL, 178 }; 179 180 static int scmi_cpufreq_init(struct cpufreq_policy *policy) 181 { 182 int ret, nr_opp, domain; ··· 200 struct device *cpu_dev; 201 struct scmi_data *priv; 202 struct cpufreq_frequency_table *freq_table; 203 204 cpu_dev = get_cpu_device(policy->cpu); 205 if (!cpu_dev) { ··· 314 } 315 } 316 317 return 0; 318 319 out_free_table: ··· 350 static void scmi_cpufreq_exit(struct cpufreq_policy *policy) 351 { 352 struct scmi_data *priv = policy->driver_data; 353 354 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 355 dev_pm_opp_remove_all_dynamic(priv->cpu_dev); 356 free_cpumask_var(priv->opp_shared_cpus); ··· 414 415 if (!handle) 416 return -ENODEV; 417 418 perf_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_PERF, &ph); 419 if (IS_ERR(perf_ops))
··· 16 #include <linux/export.h> 17 #include <linux/module.h> 18 #include <linux/pm_opp.h> 19 + #include <linux/pm_qos.h> 20 #include <linux/slab.h> 21 #include <linux/scmi_protocol.h> 22 #include <linux/types.h> ··· 26 int nr_opp; 27 struct device *cpu_dev; 28 cpumask_var_t opp_shared_cpus; 29 + struct notifier_block limit_notify_nb; 30 + struct freq_qos_request limits_freq_req; 31 }; 32 33 static struct scmi_protocol_handle *ph; ··· 174 NULL, 175 }; 176 177 + static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned long event, void *data) 178 + { 179 + struct scmi_data *priv = container_of(nb, struct scmi_data, limit_notify_nb); 180 + struct scmi_perf_limits_report *limit_notify = data; 181 + unsigned int limit_freq_khz; 182 + int ret; 183 + 184 + limit_freq_khz = limit_notify->range_max_freq / HZ_PER_KHZ; 185 + 186 + ret = freq_qos_update_request(&priv->limits_freq_req, limit_freq_khz); 187 + if (ret < 0) 188 + pr_warn("failed to update freq constraint: %d\n", ret); 189 + 190 + return NOTIFY_OK; 191 + } 192 + 193 static int scmi_cpufreq_init(struct cpufreq_policy *policy) 194 { 195 int ret, nr_opp, domain; ··· 181 struct device *cpu_dev; 182 struct scmi_data *priv; 183 struct cpufreq_frequency_table *freq_table; 184 + struct scmi_device *sdev = cpufreq_get_driver_data(); 185 186 cpu_dev = get_cpu_device(policy->cpu); 187 if (!cpu_dev) { ··· 294 } 295 } 296 297 + ret = freq_qos_add_request(&policy->constraints, &priv->limits_freq_req, FREQ_QOS_MAX, 298 + FREQ_QOS_MAX_DEFAULT_VALUE); 299 + if (ret < 0) { 300 + dev_err(cpu_dev, "failed to add qos limits request: %d\n", ret); 301 + goto out_free_table; 302 + } 303 + 304 + priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb; 305 + ret = sdev->handle->notify_ops->event_notifier_register(sdev->handle, SCMI_PROTOCOL_PERF, 306 + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED, 307 + &priv->domain_id, 308 + &priv->limit_notify_nb); 309 + if (ret) 310 + dev_warn(&sdev->dev, 311 + "failed to register for limits 
change notifier for domain %d\n", 312 + priv->domain_id); 313 + 314 return 0; 315 316 out_free_table: ··· 313 static void scmi_cpufreq_exit(struct cpufreq_policy *policy) 314 { 315 struct scmi_data *priv = policy->driver_data; 316 + struct scmi_device *sdev = cpufreq_get_driver_data(); 317 318 + sdev->handle->notify_ops->event_notifier_unregister(sdev->handle, SCMI_PROTOCOL_PERF, 319 + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED, 320 + &priv->domain_id, 321 + &priv->limit_notify_nb); 322 + freq_qos_remove_request(&priv->limits_freq_req); 323 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 324 dev_pm_opp_remove_all_dynamic(priv->cpu_dev); 325 free_cpumask_var(priv->opp_shared_cpus); ··· 371 372 if (!handle) 373 return -ENODEV; 374 + 375 + scmi_cpufreq_driver.driver_data = sdev; 376 377 perf_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_PERF, &ph); 378 if (IS_ERR(perf_ops))
+1 -1
drivers/cpufreq/sparc-us2e-cpufreq.c
··· 323 impl = ((ver >> 32) & 0xffff); 324 325 if (manuf == 0x17 && impl == 0x13) { 326 - us2e_freq_table = kzalloc(NR_CPUS * sizeof(*us2e_freq_table), 327 GFP_KERNEL); 328 if (!us2e_freq_table) 329 return -ENOMEM;
··· 323 impl = ((ver >> 32) & 0xffff); 324 325 if (manuf == 0x17 && impl == 0x13) { 326 + us2e_freq_table = kcalloc(NR_CPUS, sizeof(*us2e_freq_table), 327 GFP_KERNEL); 328 if (!us2e_freq_table) 329 return -ENOMEM;
+1 -1
drivers/cpufreq/sparc-us3-cpufreq.c
··· 171 impl == CHEETAH_PLUS_IMPL || 172 impl == JAGUAR_IMPL || 173 impl == PANTHER_IMPL)) { 174 - us3_freq_table = kzalloc(NR_CPUS * sizeof(*us3_freq_table), 175 GFP_KERNEL); 176 if (!us3_freq_table) 177 return -ENOMEM;
··· 171 impl == CHEETAH_PLUS_IMPL || 172 impl == JAGUAR_IMPL || 173 impl == PANTHER_IMPL)) { 174 + us3_freq_table = kcalloc(NR_CPUS, sizeof(*us3_freq_table), 175 GFP_KERNEL); 176 if (!us3_freq_table) 177 return -ENOMEM;
+2 -2
kernel/sched/cpufreq_schedutil.c
··· 83 84 if (unlikely(sg_policy->limits_changed)) { 85 sg_policy->limits_changed = false; 86 - sg_policy->need_freq_update = true; 87 return true; 88 } 89 ··· 96 unsigned int next_freq) 97 { 98 if (sg_policy->need_freq_update) 99 - sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS); 100 else if (sg_policy->next_freq == next_freq) 101 return false; 102
··· 83 84 if (unlikely(sg_policy->limits_changed)) { 85 sg_policy->limits_changed = false; 86 + sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS); 87 return true; 88 } 89 ··· 96 unsigned int next_freq) 97 { 98 if (sg_policy->need_freq_update) 99 + sg_policy->need_freq_update = false; 100 else if (sg_policy->next_freq == next_freq) 101 return false; 102