Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge back cpufreq material for 6.3-rc1.

+893 -305
+7
Documentation/admin-guide/kernel-parameters.txt
··· 7020 7020 management firmware translates the requests into actual 7021 7021 hardware states (core frequency, data fabric and memory 7022 7022 clocks etc.) 7023 + active 7024 + Use the amd_pstate_epp driver instance as the scaling driver; 7025 + the driver provides a hint to the CPPC firmware indicating 7026 + whether software wants to bias toward performance (0x0) or 7027 + energy efficiency (0xff). The CPPC power algorithm then 7028 + evaluates the runtime workload and adjusts the real-time core 7029 + frequency accordingly.
+72 -2
Documentation/admin-guide/pm/amd-pstate.rst
··· 262 262 <perf_cap_>`_.) 263 263 This attribute is read-only. 264 264 265 + ``energy_performance_available_preferences`` 266 + 267 + A list of all the supported EPP preferences that can be used with 268 + ``energy_performance_preference`` on this system. 269 + These profiles represent different hints that are provided 270 + to the low-level firmware about the user's desired energy vs efficiency 271 + tradeoff. ``default`` means that the EPP value is set by the platform 272 + firmware. This attribute is read-only. 273 + 274 + ``energy_performance_preference`` 275 + 276 + The current energy performance preference can be read from this attribute, 277 + and the user can change it according to energy or performance needs. 278 + The list of all supported profiles is available from the 279 + ``energy_performance_available_preferences`` attribute. All the profiles are 280 + integer values defined between 0 and 255 when the EPP feature is enabled by the 281 + platform firmware; if the EPP feature is disabled, the driver ignores the written value. 282 + This attribute is read-write. 283 + 265 284 Other performance and frequency values can be read back from 266 285 ``/sys/devices/system/cpu/cpuX/acpi_cppc/``, see :ref:`cppc_sysfs`. 267 286 ··· 299 280 platforms. The AMD P-States mechanism is the more performance and energy 300 281 efficiency frequency management method on AMD processors. 301 282 302 - Kernel Module Options for ``amd-pstate`` 303 - ========================================= 283 + 284 + AMD Pstate Driver Operation Modes 285 + ================================= 286 + 287 + ``amd_pstate`` CPPC has two operation modes: CPPC autonomous (active) mode and 288 + CPPC non-autonomous (passive) mode. 289 + The active and passive modes can be chosen by passing different kernel parameters. 
290 + When in autonomous mode, CPPC ignores requests written to the Desired Performance 291 + Target register and takes into account only the values set in the Minimum Requested 292 + Performance, Maximum Requested Performance, and Energy Performance Preference 293 + registers. When autonomous mode is disabled, it only considers the Desired Performance Target. 294 + 295 + Active Mode 296 + ------------ 297 + 298 + ``amd_pstate=active`` 299 + 300 + This is the low-level firmware control mode, implemented by the ``amd_pstate_epp`` 301 + driver when ``amd_pstate=active`` is passed to the kernel on the command line. 302 + In this mode, the ``amd_pstate_epp`` driver provides a hint to the CPPC firmware 303 + indicating whether software wants to bias toward performance (0x0) or energy efficiency (0xff). 304 + The CPPC power algorithm then evaluates the runtime workload and adjusts the real-time 305 + core frequency according to the power supply, thermal, core voltage and other 306 + hardware conditions. 304 307 305 308 Passive Mode 306 309 ------------ ··· 338 297 processor must provide at least nominal performance requested and go higher if current 339 298 operating conditions allow. 340 299 300 + 301 + User Space Interface in ``sysfs`` 302 + ================================= 303 + 304 + Global Attributes 305 + ----------------- 306 + 307 + ``amd-pstate`` exposes several global attributes (files) in ``sysfs`` to 308 + control its functionality at the system level. They are located in the 309 + ``/sys/devices/system/cpu/amd_pstate/`` directory and affect all CPUs. 310 + 311 + ``status`` 312 + Operation mode of the driver: "active", "passive" or "disable". 313 + 314 + "active" 315 + The driver is functional and in the ``active mode`` 316 + 317 + "passive" 318 + The driver is functional and in the ``passive mode`` 319 + 320 + "disable" 321 + The driver is unregistered and currently not functional. 
322 + 323 + This attribute can be written to in order to change the driver's 324 + operation mode or to unregister it. The string written to it must be 325 + one of the possible values listed above and, if the write is 326 + successful, the driver will switch over 327 + to the operation mode represented by that string - or be 328 + unregistered in the "disable" case. 341 329 342 330 ``cpupower`` tool support for ``amd-pstate`` 343 331 ===============================================
-18
arch/mips/include/asm/mach-loongson32/cpufreq.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * Copyright (c) 2014 Zhang, Keguang <keguang.zhang@gmail.com> 4 - * 5 - * Loongson 1 CPUFreq platform support. 6 - */ 7 - 8 - #ifndef __ASM_MACH_LOONGSON32_CPUFREQ_H 9 - #define __ASM_MACH_LOONGSON32_CPUFREQ_H 10 - 11 - struct plat_ls1x_cpufreq { 12 - const char *clk_name; /* CPU clk */ 13 - const char *osc_clk_name; /* OSC clk */ 14 - unsigned int max_freq; /* in kHz */ 15 - unsigned int min_freq; /* in kHz */ 16 - }; 17 - 18 - #endif /* __ASM_MACH_LOONGSON32_CPUFREQ_H */
-1
arch/mips/include/asm/mach-loongson32/platform.h
··· 12 12 #include <nand.h> 13 13 14 14 extern struct platform_device ls1x_uart_pdev; 15 - extern struct platform_device ls1x_cpufreq_pdev; 16 15 extern struct platform_device ls1x_eth0_pdev; 17 16 extern struct platform_device ls1x_eth1_pdev; 18 17 extern struct platform_device ls1x_ehci_pdev;
-16
arch/mips/loongson32/common/platform.c
··· 15 15 16 16 #include <platform.h> 17 17 #include <loongson1.h> 18 - #include <cpufreq.h> 19 18 #include <dma.h> 20 19 #include <nand.h> 21 20 ··· 60 61 for (p = pdev->dev.platform_data; p->flags != 0; ++p) 61 62 p->uartclk = clk_get_rate(clk); 62 63 } 63 - 64 - /* CPUFreq */ 65 - static struct plat_ls1x_cpufreq ls1x_cpufreq_pdata = { 66 - .clk_name = "cpu_clk", 67 - .osc_clk_name = "osc_clk", 68 - .max_freq = 266 * 1000, 69 - .min_freq = 33 * 1000, 70 - }; 71 - 72 - struct platform_device ls1x_cpufreq_pdev = { 73 - .name = "ls1x-cpufreq", 74 - .dev = { 75 - .platform_data = &ls1x_cpufreq_pdata, 76 - }, 77 - }; 78 64 79 65 /* Synopsys Ethernet GMAC */ 80 66 static struct stmmac_mdio_bus_data ls1x_mdio_bus_data = {
-1
arch/mips/loongson32/ls1b/board.c
··· 35 35 36 36 static struct platform_device *ls1b_platform_devices[] __initdata = { 37 37 &ls1x_uart_pdev, 38 - &ls1x_cpufreq_pdev, 39 38 &ls1x_eth0_pdev, 40 39 &ls1x_eth1_pdev, 41 40 &ls1x_ehci_pdev,
+67
drivers/acpi/cppc_acpi.c
··· 1154 1154 } 1155 1155 1156 1156 /** 1157 + * cppc_get_epp_perf - Get the epp register value. 1158 + * @cpunum: CPU from which to get epp preference value. 1159 + * @epp_perf: Return address. 1160 + * 1161 + * Return: 0 for success, -EIO otherwise. 1162 + */ 1163 + int cppc_get_epp_perf(int cpunum, u64 *epp_perf) 1164 + { 1165 + return cppc_get_perf(cpunum, ENERGY_PERF, epp_perf); 1166 + } 1167 + EXPORT_SYMBOL_GPL(cppc_get_epp_perf); 1168 + 1169 + /** 1157 1170 * cppc_get_perf_caps - Get a CPU's performance capabilities. 1158 1171 * @cpunum: CPU from which to get capabilities info. 1159 1172 * @perf_caps: ptr to cppc_perf_caps. See cppc_acpi.h ··· 1377 1364 return ret; 1378 1365 } 1379 1366 EXPORT_SYMBOL_GPL(cppc_get_perf_ctrs); 1367 + 1368 + /* 1369 + * Set Energy Performance Preference Register value through 1370 + * Performance Controls Interface 1371 + */ 1372 + int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable) 1373 + { 1374 + int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu); 1375 + struct cpc_register_resource *epp_set_reg; 1376 + struct cpc_register_resource *auto_sel_reg; 1377 + struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu); 1378 + struct cppc_pcc_data *pcc_ss_data = NULL; 1379 + int ret; 1380 + 1381 + if (!cpc_desc) { 1382 + pr_debug("No CPC descriptor for CPU:%d\n", cpu); 1383 + return -ENODEV; 1384 + } 1385 + 1386 + auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE]; 1387 + epp_set_reg = &cpc_desc->cpc_regs[ENERGY_PERF]; 1388 + 1389 + if (CPC_IN_PCC(epp_set_reg) || CPC_IN_PCC(auto_sel_reg)) { 1390 + if (pcc_ss_id < 0) { 1391 + pr_debug("Invalid pcc_ss_id for CPU:%d\n", cpu); 1392 + return -ENODEV; 1393 + } 1394 + 1395 + if (CPC_SUPPORTED(auto_sel_reg)) { 1396 + ret = cpc_write(cpu, auto_sel_reg, enable); 1397 + if (ret) 1398 + return ret; 1399 + } 1400 + 1401 + if (CPC_SUPPORTED(epp_set_reg)) { 1402 + ret = cpc_write(cpu, epp_set_reg, perf_ctrls->energy_perf); 1403 + if (ret) 1404 + return ret; 1405 + } 1406 + 
1407 + pcc_ss_data = pcc_data[pcc_ss_id]; 1408 + 1409 + down_write(&pcc_ss_data->pcc_lock); 1410 + /* after writing CPC, transfer the ownership of PCC to platform */ 1411 + ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE); 1412 + up_write(&pcc_ss_data->pcc_lock); 1413 + } else { 1414 + ret = -ENOTSUPP; 1415 + pr_debug("_CPC in PCC is not supported\n"); 1416 + } 1417 + 1418 + return ret; 1419 + } 1420 + EXPORT_SYMBOL_GPL(cppc_set_epp_perf); 1380 1421 1381 1422 /** 1382 1423 * cppc_set_enable - Set to enable CPPC on the processor by writing the
-10
drivers/cpufreq/Kconfig
··· 3 3 4 4 config CPU_FREQ 5 5 bool "CPU Frequency scaling" 6 - select SRCU 7 6 help 8 7 CPU Frequency scaling allows you to change the clock speed of 9 8 CPUs on the fly. This is a nice method to save power, because ··· 268 269 support software configurable cpu frequency. 269 270 270 271 Loongson2F and its successors support this feature. 271 - 272 - If in doubt, say N. 273 - 274 - config LOONGSON1_CPUFREQ 275 - tristate "Loongson1 CPUFreq Driver" 276 - depends on LOONGSON1_LS1B 277 - help 278 - This option adds a CPUFreq driver for loongson1 processors which 279 - support software configurable cpu frequency. 280 272 281 273 If in doubt, say N. 282 274 endif
-1
drivers/cpufreq/Makefile
··· 111 111 obj-$(CONFIG_BMIPS_CPUFREQ) += bmips-cpufreq.o 112 112 obj-$(CONFIG_IA64_ACPI_CPUFREQ) += ia64-acpi-cpufreq.o 113 113 obj-$(CONFIG_LOONGSON2_CPUFREQ) += loongson2_cpufreq.o 114 - obj-$(CONFIG_LOONGSON1_CPUFREQ) += loongson1-cpufreq.o 115 114 obj-$(CONFIG_SH_CPU_FREQ) += sh-cpufreq.o 116 115 obj-$(CONFIG_SPARC_US2E_CPUFREQ) += sparc-us2e-cpufreq.o 117 116 obj-$(CONFIG_SPARC_US3_CPUFREQ) += sparc-us3-cpufreq.o
+685 -19
drivers/cpufreq/amd-pstate.c
··· 59 59 * we disable it by default to go acpi-cpufreq on these processors and add a 60 60 * module parameter to be able to enable it manually for debugging. 61 61 */ 62 + static struct cpufreq_driver *current_pstate_driver; 62 63 static struct cpufreq_driver amd_pstate_driver; 63 - static int cppc_load __initdata; 64 + static struct cpufreq_driver amd_pstate_epp_driver; 65 + static int cppc_state = AMD_PSTATE_DISABLE; 66 + struct kobject *amd_pstate_kobj; 67 + 68 + /* 69 + * AMD Energy Preference Performance (EPP) 70 + * The EPP is used in the CCLK DPM controller to drive 71 + * the frequency that a core is going to operate during 72 + * short periods of activity. EPP values will be utilized for 73 + * different OS profiles (balanced, performance, power savings) 74 + * display strings corresponding to EPP index in the 75 + * energy_perf_strings[] 76 + * index String 77 + *------------------------------------- 78 + * 0 default 79 + * 1 performance 80 + * 2 balance_performance 81 + * 3 balance_power 82 + * 4 power 83 + */ 84 + enum energy_perf_value_index { 85 + EPP_INDEX_DEFAULT = 0, 86 + EPP_INDEX_PERFORMANCE, 87 + EPP_INDEX_BALANCE_PERFORMANCE, 88 + EPP_INDEX_BALANCE_POWERSAVE, 89 + EPP_INDEX_POWERSAVE, 90 + }; 91 + 92 + static const char * const energy_perf_strings[] = { 93 + [EPP_INDEX_DEFAULT] = "default", 94 + [EPP_INDEX_PERFORMANCE] = "performance", 95 + [EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance", 96 + [EPP_INDEX_BALANCE_POWERSAVE] = "balance_power", 97 + [EPP_INDEX_POWERSAVE] = "power", 98 + NULL 99 + }; 100 + 101 + static unsigned int epp_values[] = { 102 + [EPP_INDEX_DEFAULT] = 0, 103 + [EPP_INDEX_PERFORMANCE] = AMD_CPPC_EPP_PERFORMANCE, 104 + [EPP_INDEX_BALANCE_PERFORMANCE] = AMD_CPPC_EPP_BALANCE_PERFORMANCE, 105 + [EPP_INDEX_BALANCE_POWERSAVE] = AMD_CPPC_EPP_BALANCE_POWERSAVE, 106 + [EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE, 107 + }; 108 + 109 + static inline int get_mode_idx_from_str(const char *str, size_t size) 110 + { 111 + int i; 
112 + 113 + for (i=0; i < AMD_PSTATE_MAX; i++) { 114 + if (!strncmp(str, amd_pstate_mode_string[i], size)) 115 + return i; 116 + } 117 + return -EINVAL; 118 + } 119 + 120 + static DEFINE_MUTEX(amd_pstate_limits_lock); 121 + static DEFINE_MUTEX(amd_pstate_driver_lock); 122 + 123 + static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached) 124 + { 125 + u64 epp; 126 + int ret; 127 + 128 + if (boot_cpu_has(X86_FEATURE_CPPC)) { 129 + if (!cppc_req_cached) { 130 + epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, 131 + &cppc_req_cached); 132 + if (epp) 133 + return epp; 134 + } 135 + epp = (cppc_req_cached >> 24) & 0xFF; 136 + } else { 137 + ret = cppc_get_epp_perf(cpudata->cpu, &epp); 138 + if (ret < 0) { 139 + pr_debug("Could not retrieve energy perf value (%d)\n", ret); 140 + return -EIO; 141 + } 142 + } 143 + 144 + return (s16)(epp & 0xff); 145 + } 146 + 147 + static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata) 148 + { 149 + s16 epp; 150 + int index = -EINVAL; 151 + 152 + epp = amd_pstate_get_epp(cpudata, 0); 153 + if (epp < 0) 154 + return epp; 155 + 156 + switch (epp) { 157 + case AMD_CPPC_EPP_PERFORMANCE: 158 + index = EPP_INDEX_PERFORMANCE; 159 + break; 160 + case AMD_CPPC_EPP_BALANCE_PERFORMANCE: 161 + index = EPP_INDEX_BALANCE_PERFORMANCE; 162 + break; 163 + case AMD_CPPC_EPP_BALANCE_POWERSAVE: 164 + index = EPP_INDEX_BALANCE_POWERSAVE; 165 + break; 166 + case AMD_CPPC_EPP_POWERSAVE: 167 + index = EPP_INDEX_POWERSAVE; 168 + break; 169 + default: 170 + break; 171 + } 172 + 173 + return index; 174 + } 175 + 176 + static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp) 177 + { 178 + int ret; 179 + struct cppc_perf_ctrls perf_ctrls; 180 + 181 + if (boot_cpu_has(X86_FEATURE_CPPC)) { 182 + u64 value = READ_ONCE(cpudata->cppc_req_cached); 183 + 184 + value &= ~GENMASK_ULL(31, 24); 185 + value |= (u64)epp << 24; 186 + WRITE_ONCE(cpudata->cppc_req_cached, value); 187 + 188 + ret = wrmsrl_on_cpu(cpudata->cpu, 
MSR_AMD_CPPC_REQ, value); 189 + if (!ret) 190 + cpudata->epp_cached = epp; 191 + } else { 192 + perf_ctrls.energy_perf = epp; 193 + ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1); 194 + if (ret) { 195 + pr_debug("failed to set energy perf value (%d)\n", ret); 196 + return ret; 197 + } 198 + cpudata->epp_cached = epp; 199 + } 200 + 201 + return ret; 202 + } 203 + 204 + static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata, 205 + int pref_index) 206 + { 207 + int epp = -EINVAL; 208 + int ret; 209 + 210 + if (!pref_index) { 211 + pr_debug("EPP pref_index is invalid\n"); 212 + return -EINVAL; 213 + } 214 + 215 + if (epp == -EINVAL) 216 + epp = epp_values[pref_index]; 217 + 218 + if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) { 219 + pr_debug("EPP cannot be set under performance policy\n"); 220 + return -EBUSY; 221 + } 222 + 223 + ret = amd_pstate_set_epp(cpudata, epp); 224 + 225 + return ret; 226 + } 64 227 65 228 static inline int pstate_enable(bool enable) 66 229 { ··· 233 70 static int cppc_enable(bool enable) 234 71 { 235 72 int cpu, ret = 0; 73 + struct cppc_perf_ctrls perf_ctrls; 236 74 237 75 for_each_present_cpu(cpu) { 238 76 ret = cppc_set_enable(cpu, enable); 239 77 if (ret) 240 78 return ret; 79 + 80 + /* Enable autonomous mode for EPP */ 81 + if (cppc_state == AMD_PSTATE_ACTIVE) { 82 + /* Set desired perf as zero to allow EPP firmware control */ 83 + perf_ctrls.desired_perf = 0; 84 + ret = cppc_set_perf(cpu, &perf_ctrls); 85 + if (ret) 86 + return ret; 87 + } 241 88 } 242 89 243 90 return ret; ··· 591 418 return; 592 419 593 420 cpudata->boost_supported = true; 594 - amd_pstate_driver.boost_enabled = true; 421 + current_pstate_driver->boost_enabled = true; 595 422 } 596 423 597 424 static void amd_perf_ctl_reset(unsigned int cpu) ··· 674 501 policy->driver_data = cpudata; 675 502 676 503 amd_pstate_boost_init(cpudata); 504 + if (!current_pstate_driver->adjust_perf) 505 + current_pstate_driver->adjust_perf = 
amd_pstate_adjust_perf; 677 506 678 507 return 0; 679 508 ··· 736 561 if (max_freq < 0) 737 562 return max_freq; 738 563 739 - return sprintf(&buf[0], "%u\n", max_freq); 564 + return sysfs_emit(buf, "%u\n", max_freq); 740 565 } 741 566 742 567 static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy, ··· 749 574 if (freq < 0) 750 575 return freq; 751 576 752 - return sprintf(&buf[0], "%u\n", freq); 577 + return sysfs_emit(buf, "%u\n", freq); 753 578 } 754 579 755 580 /* ··· 764 589 765 590 perf = READ_ONCE(cpudata->highest_perf); 766 591 767 - return sprintf(&buf[0], "%u\n", perf); 592 + return sysfs_emit(buf, "%u\n", perf); 593 + } 594 + 595 + static ssize_t show_energy_performance_available_preferences( 596 + struct cpufreq_policy *policy, char *buf) 597 + { 598 + int i = 0; 599 + int offset = 0; 600 + 601 + while (energy_perf_strings[i] != NULL) 602 + offset += sysfs_emit_at(buf, offset, "%s ", energy_perf_strings[i++]); 603 + 604 + sysfs_emit_at(buf, offset, "\n"); 605 + 606 + return offset; 607 + } 608 + 609 + static ssize_t store_energy_performance_preference( 610 + struct cpufreq_policy *policy, const char *buf, size_t count) 611 + { 612 + struct amd_cpudata *cpudata = policy->driver_data; 613 + char str_preference[21]; 614 + ssize_t ret; 615 + 616 + ret = sscanf(buf, "%20s", str_preference); 617 + if (ret != 1) 618 + return -EINVAL; 619 + 620 + ret = match_string(energy_perf_strings, -1, str_preference); 621 + if (ret < 0) 622 + return -EINVAL; 623 + 624 + mutex_lock(&amd_pstate_limits_lock); 625 + ret = amd_pstate_set_energy_pref_index(cpudata, ret); 626 + mutex_unlock(&amd_pstate_limits_lock); 627 + 628 + return ret ?: count; 629 + } 630 + 631 + static ssize_t show_energy_performance_preference( 632 + struct cpufreq_policy *policy, char *buf) 633 + { 634 + struct amd_cpudata *cpudata = policy->driver_data; 635 + int preference; 636 + 637 + preference = amd_pstate_get_energy_pref_index(cpudata); 638 + if (preference < 0) 639 + 
return preference; 640 + 641 + return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]); 642 + } 643 + 644 + static ssize_t amd_pstate_show_status(char *buf) 645 + { 646 + if (!current_pstate_driver) 647 + return sysfs_emit(buf, "disable\n"); 648 + 649 + return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]); 650 + } 651 + 652 + static void amd_pstate_driver_cleanup(void) 653 + { 654 + current_pstate_driver = NULL; 655 + } 656 + 657 + static int amd_pstate_update_status(const char *buf, size_t size) 658 + { 659 + int ret = 0; 660 + int mode_idx; 661 + 662 + if (size > 7 || size < 6) 663 + return -EINVAL; 664 + mode_idx = get_mode_idx_from_str(buf, size); 665 + 666 + switch(mode_idx) { 667 + case AMD_PSTATE_DISABLE: 668 + if (!current_pstate_driver) 669 + return -EINVAL; 670 + if (cppc_state == AMD_PSTATE_ACTIVE) 671 + return -EBUSY; 672 + cpufreq_unregister_driver(current_pstate_driver); 673 + amd_pstate_driver_cleanup(); 674 + break; 675 + case AMD_PSTATE_PASSIVE: 676 + if (current_pstate_driver) { 677 + if (current_pstate_driver == &amd_pstate_driver) 678 + return 0; 679 + cpufreq_unregister_driver(current_pstate_driver); 680 + cppc_state = AMD_PSTATE_PASSIVE; 681 + current_pstate_driver = &amd_pstate_driver; 682 + } 683 + 684 + ret = cpufreq_register_driver(current_pstate_driver); 685 + break; 686 + case AMD_PSTATE_ACTIVE: 687 + if (current_pstate_driver) { 688 + if (current_pstate_driver == &amd_pstate_epp_driver) 689 + return 0; 690 + cpufreq_unregister_driver(current_pstate_driver); 691 + current_pstate_driver = &amd_pstate_epp_driver; 692 + cppc_state = AMD_PSTATE_ACTIVE; 693 + } 694 + 695 + ret = cpufreq_register_driver(current_pstate_driver); 696 + break; 697 + default: 698 + ret = -EINVAL; 699 + break; 700 + } 701 + 702 + return ret; 703 + } 704 + 705 + static ssize_t show_status(struct kobject *kobj, 706 + struct kobj_attribute *attr, char *buf) 707 + { 708 + ssize_t ret; 709 + 710 + mutex_lock(&amd_pstate_driver_lock); 711 + ret = 
amd_pstate_show_status(buf); 712 + mutex_unlock(&amd_pstate_driver_lock); 713 + 714 + return ret; 715 + } 716 + 717 + static ssize_t store_status(struct kobject *a, struct kobj_attribute *b, 718 + const char *buf, size_t count) 719 + { 720 + char *p = memchr(buf, '\n', count); 721 + int ret; 722 + 723 + mutex_lock(&amd_pstate_driver_lock); 724 + ret = amd_pstate_update_status(buf, p ? p - buf : count); 725 + mutex_unlock(&amd_pstate_driver_lock); 726 + 727 + return ret < 0 ? ret : count; 768 728 } 769 729 770 730 cpufreq_freq_attr_ro(amd_pstate_max_freq); 771 731 cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq); 772 732 773 733 cpufreq_freq_attr_ro(amd_pstate_highest_perf); 734 + cpufreq_freq_attr_rw(energy_performance_preference); 735 + cpufreq_freq_attr_ro(energy_performance_available_preferences); 736 + define_one_global_rw(status); 774 737 775 738 static struct freq_attr *amd_pstate_attr[] = { 776 739 &amd_pstate_max_freq, ··· 916 603 &amd_pstate_highest_perf, 917 604 NULL, 918 605 }; 606 + 607 + static struct freq_attr *amd_pstate_epp_attr[] = { 608 + &amd_pstate_max_freq, 609 + &amd_pstate_lowest_nonlinear_freq, 610 + &amd_pstate_highest_perf, 611 + &energy_performance_preference, 612 + &energy_performance_available_preferences, 613 + NULL, 614 + }; 615 + 616 + static struct attribute *pstate_global_attributes[] = { 617 + &status.attr, 618 + NULL 619 + }; 620 + 621 + static const struct attribute_group amd_pstate_global_attr_group = { 622 + .attrs = pstate_global_attributes, 623 + }; 624 + 625 + static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) 626 + { 627 + int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret; 628 + struct amd_cpudata *cpudata; 629 + struct device *dev; 630 + u64 value; 631 + 632 + /* 633 + * Resetting PERF_CTL_MSR will put the CPU in P0 frequency, 634 + * which is ideal for initialization process. 
635 + */ 636 + amd_perf_ctl_reset(policy->cpu); 637 + dev = get_cpu_device(policy->cpu); 638 + if (!dev) 639 + return -ENODEV; 640 + 641 + cpudata = kzalloc(sizeof(*cpudata), GFP_KERNEL); 642 + if (!cpudata) 643 + return -ENOMEM; 644 + 645 + cpudata->cpu = policy->cpu; 646 + cpudata->epp_policy = 0; 647 + 648 + ret = amd_pstate_init_perf(cpudata); 649 + if (ret) 650 + goto free_cpudata1; 651 + 652 + min_freq = amd_get_min_freq(cpudata); 653 + max_freq = amd_get_max_freq(cpudata); 654 + nominal_freq = amd_get_nominal_freq(cpudata); 655 + lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata); 656 + if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) { 657 + dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n", 658 + min_freq, max_freq); 659 + ret = -EINVAL; 660 + goto free_cpudata1; 661 + } 662 + 663 + policy->cpuinfo.min_freq = min_freq; 664 + policy->cpuinfo.max_freq = max_freq; 665 + /* It will be updated by governor */ 666 + policy->cur = policy->cpuinfo.min_freq; 667 + 668 + /* Initial processor data capability frequencies */ 669 + cpudata->max_freq = max_freq; 670 + cpudata->min_freq = min_freq; 671 + cpudata->nominal_freq = nominal_freq; 672 + cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq; 673 + 674 + policy->driver_data = cpudata; 675 + 676 + cpudata->epp_cached = amd_pstate_get_epp(cpudata, 0); 677 + 678 + policy->min = policy->cpuinfo.min_freq; 679 + policy->max = policy->cpuinfo.max_freq; 680 + 681 + /* 682 + * Set the policy to powersave to provide a valid fallback value in case 683 + * the default cpufreq governor is neither powersave nor performance. 
684 + */ 685 + policy->policy = CPUFREQ_POLICY_POWERSAVE; 686 + 687 + if (boot_cpu_has(X86_FEATURE_CPPC)) { 688 + policy->fast_switch_possible = true; 689 + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value); 690 + if (ret) 691 + return ret; 692 + WRITE_ONCE(cpudata->cppc_req_cached, value); 693 + 694 + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value); 695 + if (ret) 696 + return ret; 697 + WRITE_ONCE(cpudata->cppc_cap1_cached, value); 698 + } 699 + amd_pstate_boost_init(cpudata); 700 + 701 + return 0; 702 + 703 + free_cpudata1: 704 + kfree(cpudata); 705 + return ret; 706 + } 707 + 708 + static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy) 709 + { 710 + pr_debug("CPU %d exiting\n", policy->cpu); 711 + policy->fast_switch_possible = false; 712 + return 0; 713 + } 714 + 715 + static void amd_pstate_epp_init(unsigned int cpu) 716 + { 717 + struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 718 + struct amd_cpudata *cpudata = policy->driver_data; 719 + u32 max_perf, min_perf; 720 + u64 value; 721 + s16 epp; 722 + 723 + max_perf = READ_ONCE(cpudata->highest_perf); 724 + min_perf = READ_ONCE(cpudata->lowest_perf); 725 + 726 + value = READ_ONCE(cpudata->cppc_req_cached); 727 + 728 + if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) 729 + min_perf = max_perf; 730 + 731 + /* Initial min/max values for CPPC Performance Controls Register */ 732 + value &= ~AMD_CPPC_MIN_PERF(~0L); 733 + value |= AMD_CPPC_MIN_PERF(min_perf); 734 + 735 + value &= ~AMD_CPPC_MAX_PERF(~0L); 736 + value |= AMD_CPPC_MAX_PERF(max_perf); 737 + 738 + /* CPPC EPP feature require to set zero to the desire perf bit */ 739 + value &= ~AMD_CPPC_DES_PERF(~0L); 740 + value |= AMD_CPPC_DES_PERF(0); 741 + 742 + if (cpudata->epp_policy == cpudata->policy) 743 + goto skip_epp; 744 + 745 + cpudata->epp_policy = cpudata->policy; 746 + 747 + if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) { 748 + epp = amd_pstate_get_epp(cpudata, value); 749 + if (epp < 0) 750 + goto skip_epp; 
751 + /* force the epp value to be zero for performance policy */ 752 + epp = 0; 753 + } else { 754 + /* Get BIOS pre-defined epp value */ 755 + epp = amd_pstate_get_epp(cpudata, value); 756 + if (epp) 757 + goto skip_epp; 758 + } 759 + /* Set initial EPP value */ 760 + if (boot_cpu_has(X86_FEATURE_CPPC)) { 761 + value &= ~GENMASK_ULL(31, 24); 762 + value |= (u64)epp << 24; 763 + } 764 + 765 + amd_pstate_set_epp(cpudata, epp); 766 + skip_epp: 767 + WRITE_ONCE(cpudata->cppc_req_cached, value); 768 + cpufreq_cpu_put(policy); 769 + } 770 + 771 + static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) 772 + { 773 + struct amd_cpudata *cpudata = policy->driver_data; 774 + 775 + if (!policy->cpuinfo.max_freq) 776 + return -ENODEV; 777 + 778 + pr_debug("set_policy: cpuinfo.max %u policy->max %u\n", 779 + policy->cpuinfo.max_freq, policy->max); 780 + 781 + cpudata->policy = policy->policy; 782 + 783 + amd_pstate_epp_init(policy->cpu); 784 + 785 + return 0; 786 + } 787 + 788 + static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata) 789 + { 790 + struct cppc_perf_ctrls perf_ctrls; 791 + u64 value, max_perf; 792 + int ret; 793 + 794 + ret = amd_pstate_enable(true); 795 + if (ret) 796 + pr_err("failed to enable amd pstate during resume, return %d\n", ret); 797 + 798 + value = READ_ONCE(cpudata->cppc_req_cached); 799 + max_perf = READ_ONCE(cpudata->highest_perf); 800 + 801 + if (boot_cpu_has(X86_FEATURE_CPPC)) { 802 + wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value); 803 + } else { 804 + perf_ctrls.max_perf = max_perf; 805 + perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(cpudata->epp_cached); 806 + cppc_set_perf(cpudata->cpu, &perf_ctrls); 807 + } 808 + } 809 + 810 + static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) 811 + { 812 + struct amd_cpudata *cpudata = policy->driver_data; 813 + 814 + pr_debug("AMD CPU Core %d going online\n", cpudata->cpu); 815 + 816 + if (cppc_state == AMD_PSTATE_ACTIVE) { 817 + 
amd_pstate_epp_reenable(cpudata);
818 + cpudata->suspended = false;
819 + }
820 +
821 + return 0;
822 + }
823 +
824 + static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
825 + {
826 + struct amd_cpudata *cpudata = policy->driver_data;
827 + struct cppc_perf_ctrls perf_ctrls;
828 + int min_perf;
829 + u64 value;
830 +
831 + min_perf = READ_ONCE(cpudata->lowest_perf);
832 + value = READ_ONCE(cpudata->cppc_req_cached);
833 +
834 + mutex_lock(&amd_pstate_limits_lock);
835 + if (boot_cpu_has(X86_FEATURE_CPPC)) {
836 + cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
837 +
838 + /* Set max perf same as min perf */
839 + value &= ~AMD_CPPC_MAX_PERF(~0L);
840 + value |= AMD_CPPC_MAX_PERF(min_perf);
841 + value &= ~AMD_CPPC_MIN_PERF(~0L);
842 + value |= AMD_CPPC_MIN_PERF(min_perf);
843 + wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
844 + } else {
845 + perf_ctrls.desired_perf = 0;
846 + perf_ctrls.max_perf = min_perf;
847 + perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(HWP_EPP_BALANCE_POWERSAVE);
848 + cppc_set_perf(cpudata->cpu, &perf_ctrls);
849 + }
850 + mutex_unlock(&amd_pstate_limits_lock);
851 + }
852 +
853 + static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
854 + {
855 + struct amd_cpudata *cpudata = policy->driver_data;
856 +
857 + pr_debug("AMD CPU Core %d going offline\n", cpudata->cpu);
858 +
859 + if (cpudata->suspended)
860 + return 0;
861 +
862 + if (cppc_state == AMD_PSTATE_ACTIVE)
863 + amd_pstate_epp_offline(policy);
864 +
865 + return 0;
866 + }
867 +
868 + static int amd_pstate_epp_verify_policy(struct cpufreq_policy_data *policy)
869 + {
870 + cpufreq_verify_within_cpu_limits(policy);
871 + pr_debug("policy_max =%d, policy_min=%d\n", policy->max, policy->min);
872 + return 0;
873 + }
874 +
875 + static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
876 + {
877 + struct amd_cpudata *cpudata = policy->driver_data;
878 + int ret;
879 +
880 + /* avoid suspending when EPP is not enabled */
881 + if (cppc_state != AMD_PSTATE_ACTIVE)
882 + return 0;
883 +
884 + /* set this flag to avoid setting core offline*/
885 + cpudata->suspended = true;
886 +
887 + /* disable CPPC in lowlevel firmware */
888 + ret = amd_pstate_enable(false);
889 + if (ret)
890 + pr_err("failed to suspend, return %d\n", ret);
891 +
892 + return 0;
893 + }
894 +
895 + static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
896 + {
897 + struct amd_cpudata *cpudata = policy->driver_data;
898 +
899 + if (cpudata->suspended) {
900 + mutex_lock(&amd_pstate_limits_lock);
901 +
902 + /* enable amd pstate from suspend state*/
903 + amd_pstate_epp_reenable(cpudata);
904 +
905 + mutex_unlock(&amd_pstate_limits_lock);
906 +
907 + cpudata->suspended = false;
908 + }
909 +
910 + return 0;
911 + }
919 912
920 913 static struct cpufreq_driver amd_pstate_driver = {
921 914 .flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
···
1236 617 .attr = amd_pstate_attr,
1237 618 };
1238 619
620 + static struct cpufreq_driver amd_pstate_epp_driver = {
621 + .flags = CPUFREQ_CONST_LOOPS,
622 + .verify = amd_pstate_epp_verify_policy,
623 + .setpolicy = amd_pstate_epp_set_policy,
624 + .init = amd_pstate_epp_cpu_init,
625 + .exit = amd_pstate_epp_cpu_exit,
626 + .offline = amd_pstate_epp_cpu_offline,
627 + .online = amd_pstate_epp_cpu_online,
628 + .suspend = amd_pstate_epp_suspend,
629 + .resume = amd_pstate_epp_resume,
630 + .name = "amd_pstate_epp",
631 + .attr = amd_pstate_epp_attr,
632 + };
633 +
1239 634 static int __init amd_pstate_init(void)
1240 635 {
1241 636 int ret;
···
1259 626 /*
1260 627 * by default the pstate driver is disabled to load
1261 628 * enable the amd_pstate passive mode driver explicitly
1262 - * with amd_pstate=passive in kernel command line
629 + * with amd_pstate=passive or other modes in kernel command line
1263 630 */
1264 - if (!cppc_load) {
1265 - pr_debug("driver load is disabled, boot with amd_pstate=passive to enable this\n");
631 + if (cppc_state == AMD_PSTATE_DISABLE) {
632 + pr_debug("driver load is disabled, boot with specific mode to enable this\n");
1266 633 return -ENODEV;
1267 634 }
···
1278 645 /* capability check */
1279 646 if (boot_cpu_has(X86_FEATURE_CPPC)) {
1280 647 pr_debug("AMD CPPC MSR based functionality is supported\n");
1281 - amd_pstate_driver.adjust_perf = amd_pstate_adjust_perf;
648 + if (cppc_state == AMD_PSTATE_PASSIVE)
649 + current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
1282 650 } else {
1283 651 pr_debug("AMD CPPC shared memory based functionality is supported\n");
1284 652 static_call_update(amd_pstate_enable, cppc_enable);
···
1290 656 /* enable amd pstate feature */
1291 657 ret = amd_pstate_enable(true);
1292 658 if (ret) {
1293 - pr_err("failed to enable amd-pstate with return %d\n", ret);
659 + pr_err("failed to enable with return %d\n", ret);
1294 660 return ret;
1295 661 }
1296 662
1297 - ret = cpufreq_register_driver(&amd_pstate_driver);
663 + ret = cpufreq_register_driver(current_pstate_driver);
1298 664 if (ret)
1299 - pr_err("failed to register amd_pstate_driver with return %d\n",
1300 - ret);
665 + pr_err("failed to register with return %d\n", ret);
1301 666
667 + amd_pstate_kobj = kobject_create_and_add("amd_pstate", &cpu_subsys.dev_root->kobj);
668 + if (!amd_pstate_kobj) {
669 + ret = -EINVAL;
670 + pr_err("global sysfs registration failed.\n");
671 + goto kobject_free;
672 + }
673 +
674 + ret = sysfs_create_group(amd_pstate_kobj, &amd_pstate_global_attr_group);
675 + if (ret) {
676 + pr_err("sysfs attribute export failed with error %d.\n", ret);
677 + goto global_attr_free;
678 + }
679 +
680 + return ret;
681 +
682 + global_attr_free:
683 + kobject_put(amd_pstate_kobj);
684 + kobject_free:
685 + cpufreq_unregister_driver(current_pstate_driver);
1302 686 return ret;
1303 687 }
1304 688 device_initcall(amd_pstate_init);
1305 689
1306 690 static int __init amd_pstate_param(char *str)
1307 691 {
692 + size_t size;
693 + int mode_idx;
694 +
1308 695 if (!str)
1309 696 return -EINVAL;
1310 697
1311 - if (!strcmp(str, "disable")) {
1312 - cppc_load = 0;
1313 - pr_info("driver is explicitly disabled\n");
1314 - } else if (!strcmp(str, "passive"))
1315 - cppc_load = 1;
698 + size = strlen(str);
699 + mode_idx = get_mode_idx_from_str(str, size);
1316 700
1317 - return 0;
701 + if (mode_idx >= AMD_PSTATE_DISABLE && mode_idx < AMD_PSTATE_MAX) {
702 + cppc_state = mode_idx;
703 + if (cppc_state == AMD_PSTATE_DISABLE)
704 + pr_info("driver is explicitly disabled\n");
705 +
706 + if (cppc_state == AMD_PSTATE_ACTIVE)
707 + current_pstate_driver = &amd_pstate_epp_driver;
708 +
709 + if (cppc_state == AMD_PSTATE_PASSIVE)
710 + current_pstate_driver = &amd_pstate_driver;
711 +
712 + return 0;
713 + }
714 +
715 + return -EINVAL;
1318 716 }
1319 717 early_param("amd_pstate", amd_pstate_param);
1320 718
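The `amd_pstate=` early parameter above selects the driver mode by matching the string against `amd_pstate_mode_string[]` (declared in `include/linux/amd-pstate.h` further down this commit). A minimal user-space sketch of that lookup — `get_mode_idx_from_str()` here is a re-implementation for illustration, not the kernel symbol:

```c
#include <assert.h>
#include <string.h>

/* mirrors enum amd_pstate_mode from include/linux/amd-pstate.h */
enum amd_pstate_mode {
	AMD_PSTATE_DISABLE = 0,
	AMD_PSTATE_PASSIVE,
	AMD_PSTATE_ACTIVE,
	AMD_PSTATE_MAX,
};

static const char * const amd_pstate_mode_string[] = {
	[AMD_PSTATE_DISABLE] = "disable",
	[AMD_PSTATE_PASSIVE] = "passive",
	[AMD_PSTATE_ACTIVE]  = "active",
	NULL,
};

/* return the matching mode index, or -1 when the string is unknown */
static int get_mode_idx_from_str(const char *str, size_t size)
{
	int i;

	for (i = 0; i < AMD_PSTATE_MAX; i++) {
		if (!strncmp(str, amd_pstate_mode_string[i], size))
			return i;
	}
	return -1;
}
```

With this shape, `amd_pstate_param()` can validate the index range (`>= AMD_PSTATE_DISABLE`, `< AMD_PSTATE_MAX`) and reject anything unmatched with `-EINVAL`, exactly as the rewritten parameter handler does.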
+1 -4
drivers/cpufreq/brcmstb-avs-cpufreq.c
···
751 751
752 752 static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
753 753 {
754 - int ret;
755 -
756 - ret = cpufreq_unregister_driver(&brcm_avs_driver);
757 - WARN_ON(ret);
754 + cpufreq_unregister_driver(&brcm_avs_driver);
758 755
759 756 brcm_avs_prepare_uninit(pdev);
760 757
+4 -6
drivers/cpufreq/cpufreq.c
···
993 993 .store = store,
994 994 };
995 995
996 - static struct kobj_type ktype_cpufreq = {
996 + static const struct kobj_type ktype_cpufreq = {
997 997 .sysfs_ops = &sysfs_ops,
998 998 .default_groups = cpufreq_groups,
999 999 .release = cpufreq_sysfs_release,
···
2904 2904 * Returns zero if successful, and -EINVAL if the cpufreq_driver is
2905 2905 * currently not initialised.
2906 2906 */
2907 - int cpufreq_unregister_driver(struct cpufreq_driver *driver)
2907 + void cpufreq_unregister_driver(struct cpufreq_driver *driver)
2908 2908 {
2909 2909 unsigned long flags;
2910 2910
2911 - if (!cpufreq_driver || (driver != cpufreq_driver))
2912 - return -EINVAL;
2911 + if (WARN_ON(!cpufreq_driver || (driver != cpufreq_driver)))
2912 + return;
2913 2913
2914 2914 pr_debug("unregistering driver %s\n", driver->name);
2915 2915
···
2926 2926
2927 2927 write_unlock_irqrestore(&cpufreq_driver_lock, flags);
2928 2928 cpus_read_unlock();
2929 -
2930 - return 0;
2931 2929 }
2932 2930 EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
2933 2931
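The `cpufreq_unregister_driver()` change above moves the sanity check inside the core: instead of handing every caller an `-EINVAL` they invariably ignored (or `WARN_ON`'d themselves, as brcmstb-avs did), the function now warns centrally and returns `void`. The shape of that conversion, sketched in plain user-space C — `warn_count` and `registered_driver` are stand-ins for `WARN_ON()` and the global `cpufreq_driver`, not kernel symbols:

```c
#include <assert.h>
#include <stddef.h>

static int warn_count;                /* stand-in for WARN_ON() firing */
static const char *registered_driver; /* stand-in for the global cpufreq_driver */

/* old style: every caller had to check (and usually ignored) the result */
static int unregister_old(const char *driver)
{
	if (!registered_driver || driver != registered_driver)
		return -22; /* -EINVAL */
	registered_driver = NULL;
	return 0;
}

/* new style: misuse is flagged once, centrally, and the call cannot fail */
static void unregister_new(const char *driver)
{
	if (!registered_driver || driver != registered_driver) {
		warn_count++; /* WARN_ON() in the kernel version */
		return;
	}
	registered_driver = NULL;
}
```

This is why the davinci, mediatek, omap, and qcom `remove()` callbacks in the hunks below stop forwarding a return value and simply `return 0` after the call.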
+3 -1
drivers/cpufreq/davinci-cpufreq.c
···
133 133
134 134 static int __exit davinci_cpufreq_remove(struct platform_device *pdev)
135 135 {
136 + cpufreq_unregister_driver(&davinci_driver);
137 +
136 138 clk_put(cpufreq.armclk);
137 139
138 140 if (cpufreq.asyncclk)
139 141 clk_put(cpufreq.asyncclk);
140 142
141 - return cpufreq_unregister_driver(&davinci_driver);
143 + return 0;
142 144 }
143 145
144 146 static struct platform_driver davinci_cpufreq_driver = {
-222
drivers/cpufreq/loongson1-cpufreq.c
···
1 - /*
2 - * CPU Frequency Scaling for Loongson 1 SoC
3 - *
4 - * Copyright (C) 2014-2016 Zhang, Keguang <keguang.zhang@gmail.com>
5 - *
6 - * This file is licensed under the terms of the GNU General Public
7 - * License version 2. This program is licensed "as is" without any
8 - * warranty of any kind, whether express or implied.
9 - */
10 -
11 - #include <linux/clk.h>
12 - #include <linux/clk-provider.h>
13 - #include <linux/cpu.h>
14 - #include <linux/cpufreq.h>
15 - #include <linux/delay.h>
16 - #include <linux/io.h>
17 - #include <linux/module.h>
18 - #include <linux/platform_device.h>
19 - #include <linux/slab.h>
20 -
21 - #include <cpufreq.h>
22 - #include <loongson1.h>
23 -
24 - struct ls1x_cpufreq {
25 - struct device *dev;
26 - struct clk *clk; /* CPU clk */
27 - struct clk *mux_clk; /* MUX of CPU clk */
28 - struct clk *pll_clk; /* PLL clk */
29 - struct clk *osc_clk; /* OSC clk */
30 - unsigned int max_freq;
31 - unsigned int min_freq;
32 - };
33 -
34 - static struct ls1x_cpufreq *cpufreq;
35 -
36 - static int ls1x_cpufreq_notifier(struct notifier_block *nb,
37 - unsigned long val, void *data)
38 - {
39 - if (val == CPUFREQ_POSTCHANGE)
40 - current_cpu_data.udelay_val = loops_per_jiffy;
41 -
42 - return NOTIFY_OK;
43 - }
44 -
45 - static struct notifier_block ls1x_cpufreq_notifier_block = {
46 - .notifier_call = ls1x_cpufreq_notifier
47 - };
48 -
49 - static int ls1x_cpufreq_target(struct cpufreq_policy *policy,
50 - unsigned int index)
51 - {
52 - struct device *cpu_dev = get_cpu_device(policy->cpu);
53 - unsigned int old_freq, new_freq;
54 -
55 - old_freq = policy->cur;
56 - new_freq = policy->freq_table[index].frequency;
57 -
58 - /*
59 - * The procedure of reconfiguring CPU clk is as below.
60 - *
61 - * - Reparent CPU clk to OSC clk
62 - * - Reset CPU clock (very important)
63 - * - Reconfigure CPU DIV
64 - * - Reparent CPU clk back to CPU DIV clk
65 - */
66 -
67 - clk_set_parent(policy->clk, cpufreq->osc_clk);
68 - __raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) | RST_CPU_EN | RST_CPU,
69 - LS1X_CLK_PLL_DIV);
70 - __raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) & ~(RST_CPU_EN | RST_CPU),
71 - LS1X_CLK_PLL_DIV);
72 - clk_set_rate(cpufreq->mux_clk, new_freq * 1000);
73 - clk_set_parent(policy->clk, cpufreq->mux_clk);
74 - dev_dbg(cpu_dev, "%u KHz --> %u KHz\n", old_freq, new_freq);
75 -
76 - return 0;
77 - }
78 -
79 - static int ls1x_cpufreq_init(struct cpufreq_policy *policy)
80 - {
81 - struct device *cpu_dev = get_cpu_device(policy->cpu);
82 - struct cpufreq_frequency_table *freq_tbl;
83 - unsigned int pll_freq, freq;
84 - int steps, i;
85 -
86 - pll_freq = clk_get_rate(cpufreq->pll_clk) / 1000;
87 -
88 - steps = 1 << DIV_CPU_WIDTH;
89 - freq_tbl = kcalloc(steps, sizeof(*freq_tbl), GFP_KERNEL);
90 - if (!freq_tbl)
91 - return -ENOMEM;
92 -
93 - for (i = 0; i < (steps - 1); i++) {
94 - freq = pll_freq / (i + 1);
95 - if ((freq < cpufreq->min_freq) || (freq > cpufreq->max_freq))
96 - freq_tbl[i].frequency = CPUFREQ_ENTRY_INVALID;
97 - else
98 - freq_tbl[i].frequency = freq;
99 - dev_dbg(cpu_dev,
100 - "cpufreq table: index %d: frequency %d\n", i,
101 - freq_tbl[i].frequency);
102 - }
103 - freq_tbl[i].frequency = CPUFREQ_TABLE_END;
104 -
105 - policy->clk = cpufreq->clk;
106 - cpufreq_generic_init(policy, freq_tbl, 0);
107 -
108 - return 0;
109 - }
110 -
111 - static int ls1x_cpufreq_exit(struct cpufreq_policy *policy)
112 - {
113 - kfree(policy->freq_table);
114 - return 0;
115 - }
116 -
117 - static struct cpufreq_driver ls1x_cpufreq_driver = {
118 - .name = "cpufreq-ls1x",
119 - .flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
120 - .verify = cpufreq_generic_frequency_table_verify,
121 - .target_index = ls1x_cpufreq_target,
122 - .get = cpufreq_generic_get,
123 - .init = ls1x_cpufreq_init,
124 - .exit = ls1x_cpufreq_exit,
125 - .attr = cpufreq_generic_attr,
126 - };
127 -
128 - static int ls1x_cpufreq_remove(struct platform_device *pdev)
129 - {
130 - cpufreq_unregister_notifier(&ls1x_cpufreq_notifier_block,
131 - CPUFREQ_TRANSITION_NOTIFIER);
132 - cpufreq_unregister_driver(&ls1x_cpufreq_driver);
133 -
134 - return 0;
135 - }
136 -
137 - static int ls1x_cpufreq_probe(struct platform_device *pdev)
138 - {
139 - struct plat_ls1x_cpufreq *pdata = dev_get_platdata(&pdev->dev);
140 - struct clk *clk;
141 - int ret;
142 -
143 - if (!pdata || !pdata->clk_name || !pdata->osc_clk_name) {
144 - dev_err(&pdev->dev, "platform data missing\n");
145 - return -EINVAL;
146 - }
147 -
148 - cpufreq =
149 - devm_kzalloc(&pdev->dev, sizeof(struct ls1x_cpufreq), GFP_KERNEL);
150 - if (!cpufreq)
151 - return -ENOMEM;
152 -
153 - cpufreq->dev = &pdev->dev;
154 -
155 - clk = devm_clk_get(&pdev->dev, pdata->clk_name);
156 - if (IS_ERR(clk)) {
157 - dev_err(&pdev->dev, "unable to get %s clock\n",
158 - pdata->clk_name);
159 - return PTR_ERR(clk);
160 - }
161 - cpufreq->clk = clk;
162 -
163 - clk = clk_get_parent(clk);
164 - if (IS_ERR(clk)) {
165 - dev_err(&pdev->dev, "unable to get parent of %s clock\n",
166 - __clk_get_name(cpufreq->clk));
167 - return PTR_ERR(clk);
168 - }
169 - cpufreq->mux_clk = clk;
170 -
171 - clk = clk_get_parent(clk);
172 - if (IS_ERR(clk)) {
173 - dev_err(&pdev->dev, "unable to get parent of %s clock\n",
174 - __clk_get_name(cpufreq->mux_clk));
175 - return PTR_ERR(clk);
176 - }
177 - cpufreq->pll_clk = clk;
178 -
179 - clk = devm_clk_get(&pdev->dev, pdata->osc_clk_name);
180 - if (IS_ERR(clk)) {
181 - dev_err(&pdev->dev, "unable to get %s clock\n",
182 - pdata->osc_clk_name);
183 - return PTR_ERR(clk);
184 - }
185 - cpufreq->osc_clk = clk;
186 -
187 - cpufreq->max_freq = pdata->max_freq;
188 - cpufreq->min_freq = pdata->min_freq;
189 -
190 - ret = cpufreq_register_driver(&ls1x_cpufreq_driver);
191 - if (ret) {
192 - dev_err(&pdev->dev,
193 - "failed to register CPUFreq driver: %d\n", ret);
194 - return ret;
195 - }
196 -
197 - ret = cpufreq_register_notifier(&ls1x_cpufreq_notifier_block,
198 - CPUFREQ_TRANSITION_NOTIFIER);
199 -
200 - if (ret) {
201 - dev_err(&pdev->dev,
202 - "failed to register CPUFreq notifier: %d\n",ret);
203 - cpufreq_unregister_driver(&ls1x_cpufreq_driver);
204 - }
205 -
206 - return ret;
207 - }
208 -
209 - static struct platform_driver ls1x_cpufreq_platdrv = {
210 - .probe = ls1x_cpufreq_probe,
211 - .remove = ls1x_cpufreq_remove,
212 - .driver = {
213 - .name = "ls1x-cpufreq",
214 - },
215 - };
216 -
217 - module_platform_driver(ls1x_cpufreq_platdrv);
218 -
219 - MODULE_ALIAS("platform:ls1x-cpufreq");
220 - MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
221 - MODULE_DESCRIPTION("Loongson1 CPUFreq driver");
222 - MODULE_LICENSE("GPL");
+3 -1
drivers/cpufreq/mediatek-cpufreq-hw.c
···
317 317
318 318 static int mtk_cpufreq_hw_driver_remove(struct platform_device *pdev)
319 319 {
320 - return cpufreq_unregister_driver(&cpufreq_mtk_hw_driver);
320 + cpufreq_unregister_driver(&cpufreq_mtk_hw_driver);
321 +
322 + return 0;
321 323 }
322 324
323 325 static const struct of_device_id mtk_cpufreq_hw_match[] = {
+3 -1
drivers/cpufreq/omap-cpufreq.c
···
184 184
185 185 static int omap_cpufreq_remove(struct platform_device *pdev)
186 186 {
187 - return cpufreq_unregister_driver(&omap_driver);
187 + cpufreq_unregister_driver(&omap_driver);
188 +
189 + return 0;
188 190 }
189 191
190 192 static struct platform_driver omap_cpufreq_platdrv = {
+3 -1
drivers/cpufreq/qcom-cpufreq-hw.c
···
770 770
771 771 static int qcom_cpufreq_hw_driver_remove(struct platform_device *pdev)
772 772 {
773 - return cpufreq_unregister_driver(&cpufreq_qcom_hw_driver);
773 + cpufreq_unregister_driver(&cpufreq_qcom_hw_driver);
774 +
775 + return 0;
774 776 }
775 777
776 778 static struct platform_driver qcom_cpufreq_hw_driver = {
+12
include/acpi/cppc_acpi.h
···
108 108 u32 lowest_nonlinear_perf;
109 109 u32 lowest_freq;
110 110 u32 nominal_freq;
111 + u32 energy_perf;
111 112 };
112 113
113 114 struct cppc_perf_ctrls {
114 115 u32 max_perf;
115 116 u32 min_perf;
116 117 u32 desired_perf;
118 + u32 energy_perf;
117 119 };
118 120
119 121 struct cppc_perf_fb_ctrs {
···
151 149 extern bool cpc_supported_by_cpu(void);
152 150 extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
153 151 extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val);
152 + extern int cppc_get_epp_perf(int cpunum, u64 *epp_perf);
153 + extern int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable);
154 154 #else /* !CONFIG_ACPI_CPPC_LIB */
155 155 static inline int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
156 156 {
···
203 199 return -ENOTSUPP;
204 200 }
205 201 static inline int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
202 + {
203 + return -ENOTSUPP;
204 + }
205 + static inline int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
206 + {
207 + return -ENOTSUPP;
208 + }
209 + static inline int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
206 210 {
207 211 return -ENOTSUPP;
208 212 }
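The new `cppc_get_epp_perf()`/`cppc_set_epp_perf()` declarations follow this header's existing pattern: a real prototype under `CONFIG_ACPI_CPPC_LIB`, and a `static inline` stub returning `-ENOTSUPP` otherwise, so callers compile unconditionally. A user-space sketch of the same pattern — `HAVE_CPPC` is a stand-in for the Kconfig symbol, not a real macro:

```c
#include <assert.h>

#define ENOTSUPP 524 /* kernel-internal "operation is not supported" */

/* #define HAVE_CPPC 1 -- stand-in for CONFIG_ACPI_CPPC_LIB */

#ifdef HAVE_CPPC
int cppc_get_epp_perf(int cpunum, unsigned long long *epp_perf);
#else
/* stub: feature compiled out, every call reports "not supported" */
static inline int cppc_get_epp_perf(int cpunum, unsigned long long *epp_perf)
{
	return -ENOTSUPP;
}
#endif

/* a caller probes the feature with no conditional compilation of its own */
static int epp_supported(void)
{
	unsigned long long epp;

	return cppc_get_epp_perf(0, &epp) != -ENOTSUPP;
}
```

The design choice is that the `#ifdef` lives once, in the header, instead of being repeated at every call site in the driver.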
+32
include/linux/amd-pstate.h
···
12 12
13 13 #include <linux/pm_qos.h>
14 14
15 + #define AMD_CPPC_EPP_PERFORMANCE 0x00
16 + #define AMD_CPPC_EPP_BALANCE_PERFORMANCE 0x80
17 + #define AMD_CPPC_EPP_BALANCE_POWERSAVE 0xBF
18 + #define AMD_CPPC_EPP_POWERSAVE 0xFF
19 +
15 20 /*********************************************************************
16 21 * AMD P-state INTERFACE *
17 22 *********************************************************************/
···
52 47 * @prev: Last Aperf/Mperf/tsc count value read from register
53 48 * @freq: current cpu frequency value
54 49 * @boost_supported: check whether the Processor or SBIOS supports boost mode
50 + * @epp_policy: Last saved policy used to set energy-performance preference
51 + * @epp_cached: Cached CPPC energy-performance preference value
52 + * @policy: Cpufreq policy value
53 + * @cppc_cap1_cached Cached MSR_AMD_CPPC_CAP1 register value
55 54 *
56 55 * The amd_cpudata is key private data for each CPU thread in AMD P-State, and
57 56 * represents all the attributes and goals that AMD P-State requests at runtime.
···
81 72
82 73 u64 freq;
83 74 bool boost_supported;
75 +
76 + /* EPP feature related attributes*/
77 + s16 epp_policy;
78 + s16 epp_cached;
79 + u32 policy;
80 + u64 cppc_cap1_cached;
81 + bool suspended;
84 82 };
83 +
84 + /*
85 +  * enum amd_pstate_mode - driver working mode of amd pstate
86 +  */
87 + enum amd_pstate_mode {
88 + AMD_PSTATE_DISABLE = 0,
89 + AMD_PSTATE_PASSIVE,
90 + AMD_PSTATE_ACTIVE,
91 + AMD_PSTATE_MAX,
92 + };
93 +
94 + static const char * const amd_pstate_mode_string[] = {
95 + [AMD_PSTATE_DISABLE] = "disable",
96 + [AMD_PSTATE_PASSIVE] = "passive",
97 + [AMD_PSTATE_ACTIVE] = "active",
98 + NULL,
99 + };
86 100 #endif /* _LINUX_AMD_PSTATE_H */
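The four `AMD_CPPC_EPP_*` constants above span the CPPC energy-performance hint range described in the kernel-parameters text: 0x00 biases the firmware toward performance, 0xFF toward energy efficiency, with the two balance points in between. A small sketch of mapping preference strings onto those raw hint values — the string names and table shape here are illustrative, not the driver's actual sysfs table:

```c
#include <assert.h>
#include <string.h>

#define AMD_CPPC_EPP_PERFORMANCE		0x00
#define AMD_CPPC_EPP_BALANCE_PERFORMANCE	0x80
#define AMD_CPPC_EPP_BALANCE_POWERSAVE		0xBF
#define AMD_CPPC_EPP_POWERSAVE			0xFF

struct epp_pref {
	const char *name;
	unsigned int hint; /* raw value written to the CPPC EPP field */
};

static const struct epp_pref epp_prefs[] = {
	{ "performance",         AMD_CPPC_EPP_PERFORMANCE },
	{ "balance_performance", AMD_CPPC_EPP_BALANCE_PERFORMANCE },
	{ "balance_power",       AMD_CPPC_EPP_BALANCE_POWERSAVE },
	{ "power",               AMD_CPPC_EPP_POWERSAVE },
};

/* look up the raw hint for a preference string; -1 when unknown */
static int epp_hint_from_pref(const char *pref)
{
	size_t i;

	for (i = 0; i < sizeof(epp_prefs) / sizeof(epp_prefs[0]); i++) {
		if (!strcmp(pref, epp_prefs[i].name))
			return (int)epp_prefs[i].hint;
	}
	return -1;
}
```

A table like this is what lets `energy_performance_available_preferences` enumerate the names while `energy_performance_preference` writes translate back to an integer hint for the firmware.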
+1 -1
include/linux/cpufreq.h
···
448 448 #define CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING BIT(6)
449 449
450 450 int cpufreq_register_driver(struct cpufreq_driver *driver_data);
451 - int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
451 + void cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
452 452
453 453 bool cpufreq_driver_test_flags(u16 flags);
454 454 const char *cpufreq_get_current_driver(void);