···
	management firmware translates the requests into actual
	hardware states (core frequency, data fabric and memory
	clocks etc.)
+	active
+	  Use the amd_pstate_epp driver instance as the scaling driver.
+	  The driver provides a hint to the CPPC firmware if software
+	  wants to bias toward performance (0x0) or energy efficiency
+	  (0xff). The CPPC power algorithm then calculates the runtime
+	  workload and adjusts the cores' frequency in real time.
+72 -2
Documentation/admin-guide/pm/amd-pstate.rst
···
 <perf_cap_>`_.)
 This attribute is read-only.

+``energy_performance_available_preferences``
+
+A list of all the supported EPP preferences that can be used with
+``energy_performance_preference`` on this system.
+These profiles represent different hints that are provided
+to the low-level firmware about the user's desired energy vs. performance
+trade-off. ``default`` means that the EPP value is set by the platform
+firmware. This attribute is read-only.
+
+``energy_performance_preference``
+
+The current energy performance preference can be read from this attribute,
+and the user can change it according to energy or performance needs.
+The list of supported profiles can be read from the
+``energy_performance_available_preferences`` attribute. All the profiles are
+integer values between 0 and 255 when the EPP feature is enabled by the
+platform firmware; if the EPP feature is disabled, the driver ignores the
+written value.
+This attribute is read-write.
+
 Other performance and frequency values can be read back from
 ``/sys/devices/system/cpu/cpuX/acpi_cppc/``, see :ref:`cppc_sysfs`.
···
 platforms.
 The AMD P-States mechanism is the more performance and energy
 efficiency frequency management method on AMD processors.

-Kernel Module Options for ``amd-pstate``
-=========================================
+
+AMD Pstate Driver Operation Modes
+=================================
+
+``amd_pstate`` CPPC has two operation modes: CPPC Autonomous (active) mode and
+CPPC non-autonomous (passive) mode.
+The active and passive modes can be chosen with different kernel parameters.
+In Autonomous mode, CPPC ignores requests made via the Desired Performance
+Target register and takes into account only the values set in the Minimum
+Requested Performance, Maximum Requested Performance, and Energy Performance
+Preference registers. When Autonomous mode is disabled, it considers only the
+Desired Performance Target.
+
+Active Mode
+------------
+
+``amd_pstate=active``
+
+This is the low-level firmware control mode, implemented by the ``amd_pstate_epp``
+driver with ``amd_pstate=active`` passed on the kernel command line.
+In this mode, the ``amd_pstate_epp`` driver provides a hint to the CPPC firmware
+if software wants to bias toward performance (0x0) or energy efficiency (0xff).
+The CPPC power algorithm then calculates the runtime workload and adjusts the
+core frequency in real time according to the power supply, thermal, core
+voltage and some other hardware conditions.

 Passive Mode
 ------------
···
 processor must provide at least nominal performance requested and go higher if current
 operating conditions allow.

+
+User Space Interface in ``sysfs``
+=================================
+
+Global Attributes
+-----------------
+
+``amd-pstate`` exposes several global attributes (files) in ``sysfs`` to
+control its functionality at the system level.
+They are located in the
+``/sys/devices/system/cpu/amd-pstate/`` directory and affect all CPUs.
+
+``status``
+	Operation mode of the driver: "active", "passive" or "disable".
+
+	"active"
+		The driver is functional and in the ``active mode``.
+
+	"passive"
+		The driver is functional and in the ``passive mode``.
+
+	"disable"
+		The driver is unregistered and not functional.
+
+	This attribute can be written to in order to change the driver's
+	operation mode or to unregister it. The string written to it must be
+	one of the possible values of this attribute and, if successful,
+	writing one of these values to the sysfs file will cause the driver
+	to switch over to the operation mode represented by that string - or
+	to be unregistered in the "disable" case.

 ``cpupower`` tool support for ``amd-pstate``
 ===============================================
···
 }

 /**
+ * cppc_get_epp_perf - Get the epp register value.
+ * @cpunum: CPU from which to get epp preference value.
+ * @epp_perf: Return address.
+ *
+ * Return: 0 for success, -EIO otherwise.
+ */
+int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
+{
+	return cppc_get_perf(cpunum, ENERGY_PERF, epp_perf);
+}
+EXPORT_SYMBOL_GPL(cppc_get_epp_perf);
+
+/**
  * cppc_get_perf_caps - Get a CPU's performance capabilities.
  * @cpunum: CPU from which to get capabilities info.
  * @perf_caps: ptr to cppc_perf_caps. See cppc_acpi.h
···
 	return ret;
 }
 EXPORT_SYMBOL_GPL(cppc_get_perf_ctrs);
+
+/*
+ * Set Energy Performance Preference Register value through
+ * Performance Controls Interface
+ */
+int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
+{
+	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
+	struct cpc_register_resource *epp_set_reg;
+	struct cpc_register_resource *auto_sel_reg;
+	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
+	struct cppc_pcc_data *pcc_ss_data = NULL;
+	int ret;
+
+	if (!cpc_desc) {
+		pr_debug("No CPC descriptor for CPU:%d\n", cpu);
+		return -ENODEV;
+	}
+
+	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
+	epp_set_reg = &cpc_desc->cpc_regs[ENERGY_PERF];
+
+	if (CPC_IN_PCC(epp_set_reg) || CPC_IN_PCC(auto_sel_reg)) {
+		if (pcc_ss_id < 0) {
+			pr_debug("Invalid pcc_ss_id for CPU:%d\n", cpu);
+			return -ENODEV;
+		}
+
+		if (CPC_SUPPORTED(auto_sel_reg)) {
+			ret = cpc_write(cpu, auto_sel_reg, enable);
+			if (ret)
+				return ret;
+		}
+
+		if (CPC_SUPPORTED(epp_set_reg)) {
+			ret = cpc_write(cpu, epp_set_reg, perf_ctrls->energy_perf);
+			if (ret)
+				return ret;
+		}
+
+		pcc_ss_data = pcc_data[pcc_ss_id];
+
+		down_write(&pcc_ss_data->pcc_lock);
+		/* after writing CPC, transfer the ownership of PCC to platform */
+		ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
+		up_write(&pcc_ss_data->pcc_lock);
+	} else {
+		ret = -ENOTSUPP;
+		pr_debug("_CPC in PCC is not supported\n");
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(cppc_set_epp_perf);

 /**
  * cppc_set_enable - Set to enable CPPC on the processor by writing the
-10
drivers/cpufreq/Kconfig
···
 config CPU_FREQ
 	bool "CPU Frequency scaling"
-	select SRCU
 	help
 	  CPU Frequency scaling allows you to change the clock speed of
 	  CPUs on the fly. This is a nice method to save power, because
···
 	  support software configurable cpu frequency.

 	  Loongson2F and its successors support this feature.
-
-	  If in doubt, say N.
-
-config LOONGSON1_CPUFREQ
-	tristate "Loongson1 CPUFreq Driver"
-	depends on LOONGSON1_LS1B
-	help
-	  This option adds a CPUFreq driver for loongson1 processors which
-	  support software configurable cpu frequency.

 	  If in doubt, say N.
 endif
···
  * we disable it by default to go acpi-cpufreq on these processors and add a
  * module parameter to be able to enable it manually for debugging.
  */
+static struct cpufreq_driver *current_pstate_driver;
 static struct cpufreq_driver amd_pstate_driver;
-static int cppc_load __initdata;
+static struct cpufreq_driver amd_pstate_epp_driver;
+static int cppc_state = AMD_PSTATE_DISABLE;
+struct kobject *amd_pstate_kobj;
+
+/*
+ * AMD Energy Preference Performance (EPP)
+ * The EPP is used in the CCLK DPM controller to drive
+ * the frequency that a core is going to operate during
+ * short periods of activity. EPP values will be utilized for
+ * different OS profiles (balanced, performance, power savings).
+ * Display strings corresponding to EPP index in the
+ * energy_perf_strings[]
+ *	index		String
+ *-------------------------------------
+ *	0		default
+ *	1		performance
+ *	2		balance_performance
+ *	3		balance_power
+ *	4		power
+ */
+enum energy_perf_value_index {
+	EPP_INDEX_DEFAULT = 0,
+	EPP_INDEX_PERFORMANCE,
+	EPP_INDEX_BALANCE_PERFORMANCE,
+	EPP_INDEX_BALANCE_POWERSAVE,
+	EPP_INDEX_POWERSAVE,
+};
+
+static const char * const energy_perf_strings[] = {
+	[EPP_INDEX_DEFAULT] = "default",
+	[EPP_INDEX_PERFORMANCE] = "performance",
+	[EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance",
+	[EPP_INDEX_BALANCE_POWERSAVE] = "balance_power",
+	[EPP_INDEX_POWERSAVE] = "power",
+	NULL
+};
+
+static unsigned int epp_values[] = {
+	[EPP_INDEX_DEFAULT] = 0,
+	[EPP_INDEX_PERFORMANCE] = AMD_CPPC_EPP_PERFORMANCE,
+	[EPP_INDEX_BALANCE_PERFORMANCE] = AMD_CPPC_EPP_BALANCE_PERFORMANCE,
+	[EPP_INDEX_BALANCE_POWERSAVE] = AMD_CPPC_EPP_BALANCE_POWERSAVE,
+	[EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
+};
+
+static inline int get_mode_idx_from_str(const char *str, size_t size)
+{
+	int i;
+
+	for (i = 0; i < AMD_PSTATE_MAX; i++) {
+		if (!strncmp(str, amd_pstate_mode_string[i], size))
+			return i;
+	}
+	return -EINVAL;
+}
+
+static DEFINE_MUTEX(amd_pstate_limits_lock);
+static DEFINE_MUTEX(amd_pstate_driver_lock);
+
+static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
+{
+	u64 epp;
+	int ret;
+
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		if (!cppc_req_cached) {
+			epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
+					    &cppc_req_cached);
+			if (epp)
+				return epp;
+		}
+		epp = (cppc_req_cached >> 24) & 0xFF;
+	} else {
+		ret = cppc_get_epp_perf(cpudata->cpu, &epp);
+		if (ret < 0) {
+			pr_debug("Could not retrieve energy perf value (%d)\n", ret);
+			return -EIO;
+		}
+	}
+
+	return (s16)(epp & 0xff);
+}
+
+static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
+{
+	s16 epp;
+	int index = -EINVAL;
+
+	epp = amd_pstate_get_epp(cpudata, 0);
+	if (epp < 0)
+		return epp;
+
+	switch (epp) {
+	case AMD_CPPC_EPP_PERFORMANCE:
+		index = EPP_INDEX_PERFORMANCE;
+		break;
+	case AMD_CPPC_EPP_BALANCE_PERFORMANCE:
+		index = EPP_INDEX_BALANCE_PERFORMANCE;
+		break;
+	case AMD_CPPC_EPP_BALANCE_POWERSAVE:
+		index = EPP_INDEX_BALANCE_POWERSAVE;
+		break;
+	case AMD_CPPC_EPP_POWERSAVE:
+		index = EPP_INDEX_POWERSAVE;
+		break;
+	default:
+		break;
+	}
+
+	return index;
+}
+
+static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
+{
+	int ret;
+	struct cppc_perf_ctrls perf_ctrls;
+
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		u64 value = READ_ONCE(cpudata->cppc_req_cached);
+
+		value &= ~GENMASK_ULL(31, 24);
+		value |= (u64)epp << 24;
+		WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+		ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+		if (!ret)
+			cpudata->epp_cached = epp;
+	} else {
+		perf_ctrls.energy_perf = epp;
+		ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
+		if (ret) {
+			pr_debug("failed to set energy perf value (%d)\n", ret);
+			return ret;
+		}
+		cpudata->epp_cached = epp;
+	}
+
+	return ret;
+}
+
+static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
+					    int pref_index)
+{
+	int epp = -EINVAL;
+	int ret;
+
+	if (!pref_index) {
+		pr_debug("EPP pref_index is invalid\n");
+		return -EINVAL;
+	}
+
+	if (epp == -EINVAL)
+		epp = epp_values[pref_index];
+
+	if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
+		pr_debug("EPP cannot be set under performance policy\n");
+		return -EBUSY;
+	}
+
+	ret = amd_pstate_set_epp(cpudata, epp);
+
+	return ret;
+}

 static inline int pstate_enable(bool enable)
 {
···
 static int cppc_enable(bool enable)
 {
 	int cpu, ret = 0;
+	struct cppc_perf_ctrls perf_ctrls;

 	for_each_present_cpu(cpu) {
 		ret = cppc_set_enable(cpu, enable);
 		if (ret)
 			return ret;
+
+		/* Enable autonomous mode for EPP */
+		if (cppc_state == AMD_PSTATE_ACTIVE) {
+			/* Set desired perf as zero to allow EPP firmware control */
+			perf_ctrls.desired_perf = 0;
+			ret = cppc_set_perf(cpu, &perf_ctrls);
+			if (ret)
+				return ret;
+		}
 	}

 	return ret;
···
 		return;

 	cpudata->boost_supported = true;
-	amd_pstate_driver.boost_enabled = true;
+	current_pstate_driver->boost_enabled = true;
 }

 static void amd_perf_ctl_reset(unsigned int cpu)
···
 	policy->driver_data = cpudata;

 	amd_pstate_boost_init(cpudata);
+	if (!current_pstate_driver->adjust_perf)
+		current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;

 	return 0;
···
 	if (max_freq < 0)
 		return max_freq;

-	return sprintf(&buf[0], "%u\n", max_freq);
+	return sysfs_emit(buf, "%u\n", max_freq);
 }

 static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy,
···
 	if (freq < 0)
 		return freq;

-	return sprintf(&buf[0], "%u\n", freq);
+	return sysfs_emit(buf, "%u\n", freq);
 }

 /*
···

 	perf = READ_ONCE(cpudata->highest_perf);

-	return sprintf(&buf[0], "%u\n", perf);
+	return sysfs_emit(buf, "%u\n", perf);
+}
+
+static ssize_t show_energy_performance_available_preferences(
+				struct cpufreq_policy *policy, char *buf)
+{
+	int i = 0;
+	int offset = 0;
+
+	while (energy_perf_strings[i] != NULL)
+		offset += sysfs_emit_at(buf, offset, "%s ", energy_perf_strings[i++]);
+
+	sysfs_emit_at(buf, offset, "\n");
+
+	return offset;
+}
+
+static ssize_t store_energy_performance_preference(
+		struct cpufreq_policy *policy, const char *buf, size_t count)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+	char str_preference[21];
+	ssize_t ret;
+
+	ret = sscanf(buf, "%20s", str_preference);
+	if (ret != 1)
+		return -EINVAL;
+
+	ret = match_string(energy_perf_strings, -1, str_preference);
+	if (ret < 0)
+		return -EINVAL;
+
+	mutex_lock(&amd_pstate_limits_lock);
+	ret = amd_pstate_set_energy_pref_index(cpudata, ret);
+	mutex_unlock(&amd_pstate_limits_lock);
+
+	return ret ?: count;
+}
+
+static ssize_t show_energy_performance_preference(
+				struct cpufreq_policy *policy, char *buf)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+	int preference;
+
+	preference = amd_pstate_get_energy_pref_index(cpudata);
+	if (preference < 0)
+		return preference;
+
+	return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
+}
+
+static ssize_t amd_pstate_show_status(char *buf)
+{
+	if (!current_pstate_driver)
+		return sysfs_emit(buf, "disable\n");
+
+	return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]);
+}
+
+static void amd_pstate_driver_cleanup(void)
+{
+	current_pstate_driver = NULL;
+}
+
+static int amd_pstate_update_status(const char *buf, size_t size)
+{
+	int ret = 0;
+	int mode_idx;
+
+	if (size > 7 || size < 6)
+		return -EINVAL;
+	mode_idx = get_mode_idx_from_str(buf, size);
+
+	switch (mode_idx) {
+	case AMD_PSTATE_DISABLE:
+		if (!current_pstate_driver)
+			return -EINVAL;
+		if (cppc_state == AMD_PSTATE_ACTIVE)
+			return -EBUSY;
+		cpufreq_unregister_driver(current_pstate_driver);
+		amd_pstate_driver_cleanup();
+		break;
+	case AMD_PSTATE_PASSIVE:
+		if (current_pstate_driver) {
+			if (current_pstate_driver == &amd_pstate_driver)
+				return 0;
+			cpufreq_unregister_driver(current_pstate_driver);
+			cppc_state = AMD_PSTATE_PASSIVE;
+			current_pstate_driver = &amd_pstate_driver;
+		}
+
+		ret = cpufreq_register_driver(current_pstate_driver);
+		break;
+	case AMD_PSTATE_ACTIVE:
+		if (current_pstate_driver) {
+			if (current_pstate_driver == &amd_pstate_epp_driver)
+				return 0;
+			cpufreq_unregister_driver(current_pstate_driver);
+			current_pstate_driver = &amd_pstate_epp_driver;
+			cppc_state = AMD_PSTATE_ACTIVE;
+		}
+
+		ret = cpufreq_register_driver(current_pstate_driver);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static ssize_t show_status(struct kobject *kobj,
+			   struct kobj_attribute *attr, char *buf)
+{
+	ssize_t ret;
+
+	mutex_lock(&amd_pstate_driver_lock);
+	ret = amd_pstate_show_status(buf);
+	mutex_unlock(&amd_pstate_driver_lock);
+
+	return ret;
+}
+
+static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
+			    const char *buf, size_t count)
+{
+	char *p = memchr(buf, '\n', count);
+	int ret;
+
+	mutex_lock(&amd_pstate_driver_lock);
+	ret = amd_pstate_update_status(buf, p ? p - buf : count);
+	mutex_unlock(&amd_pstate_driver_lock);
+
+	return ret < 0 ? ret : count;
 }

 cpufreq_freq_attr_ro(amd_pstate_max_freq);
 cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);

 cpufreq_freq_attr_ro(amd_pstate_highest_perf);
+cpufreq_freq_attr_rw(energy_performance_preference);
+cpufreq_freq_attr_ro(energy_performance_available_preferences);
+define_one_global_rw(status);

 static struct freq_attr *amd_pstate_attr[] = {
 	&amd_pstate_max_freq,
···
 	&amd_pstate_highest_perf,
 	NULL,
 };
+
+static struct freq_attr *amd_pstate_epp_attr[] = {
+	&amd_pstate_max_freq,
+	&amd_pstate_lowest_nonlinear_freq,
+	&amd_pstate_highest_perf,
+	&energy_performance_preference,
+	&energy_performance_available_preferences,
+	NULL,
+};
+
+static struct attribute *pstate_global_attributes[] = {
+	&status.attr,
+	NULL
+};
+
+static const struct attribute_group amd_pstate_global_attr_group = {
+	.attrs = pstate_global_attributes,
+};
+
+static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
+{
+	int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
+	struct amd_cpudata *cpudata;
+	struct device *dev;
+	u64 value;
+
+	/*
+	 * Resetting PERF_CTL_MSR will put the CPU in P0 frequency,
+	 * which is ideal for the initialization process.
+	 */
+	amd_perf_ctl_reset(policy->cpu);
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;
+
+	cpudata = kzalloc(sizeof(*cpudata), GFP_KERNEL);
+	if (!cpudata)
+		return -ENOMEM;
+
+	cpudata->cpu = policy->cpu;
+	cpudata->epp_policy = 0;
+
+	ret = amd_pstate_init_perf(cpudata);
+	if (ret)
+		goto free_cpudata1;
+
+	min_freq = amd_get_min_freq(cpudata);
+	max_freq = amd_get_max_freq(cpudata);
+	nominal_freq = amd_get_nominal_freq(cpudata);
+	lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata);
+	if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) {
+		dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n",
+			min_freq, max_freq);
+		ret = -EINVAL;
+		goto free_cpudata1;
+	}
+
+	policy->cpuinfo.min_freq = min_freq;
+	policy->cpuinfo.max_freq = max_freq;
+	/* It will be updated by governor */
+	policy->cur = policy->cpuinfo.min_freq;
+
+	/* Initial processor data capability frequencies */
+	cpudata->max_freq = max_freq;
+	cpudata->min_freq = min_freq;
+	cpudata->nominal_freq = nominal_freq;
+	cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq;
+
+	policy->driver_data = cpudata;
+
+	cpudata->epp_cached = amd_pstate_get_epp(cpudata, 0);
+
+	policy->min = policy->cpuinfo.min_freq;
+	policy->max = policy->cpuinfo.max_freq;
+
+	/*
+	 * Set the policy to powersave to provide a valid fallback value in case
+	 * the default cpufreq governor is neither powersave nor performance.
+	 */
+	policy->policy = CPUFREQ_POLICY_POWERSAVE;
+
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		policy->fast_switch_possible = true;
+		ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
+		if (ret)
+			return ret;
+		WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+		ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value);
+		if (ret)
+			return ret;
+		WRITE_ONCE(cpudata->cppc_cap1_cached, value);
+	}
+	amd_pstate_boost_init(cpudata);
+
+	return 0;
+
+free_cpudata1:
+	kfree(cpudata);
+	return ret;
+}
+
+static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+{
+	pr_debug("CPU %d exiting\n", policy->cpu);
+	policy->fast_switch_possible = false;
+	return 0;
+}
+
+static void amd_pstate_epp_init(unsigned int cpu)
+{
+	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	struct amd_cpudata *cpudata = policy->driver_data;
+	u32 max_perf, min_perf;
+	u64 value;
+	s16 epp;
+
+	max_perf = READ_ONCE(cpudata->highest_perf);
+	min_perf = READ_ONCE(cpudata->lowest_perf);
+
+	value = READ_ONCE(cpudata->cppc_req_cached);
+
+	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+		min_perf = max_perf;
+
+	/* Initial min/max values for CPPC Performance Controls Register */
+	value &= ~AMD_CPPC_MIN_PERF(~0L);
+	value |= AMD_CPPC_MIN_PERF(min_perf);
+
+	value &= ~AMD_CPPC_MAX_PERF(~0L);
+	value |= AMD_CPPC_MAX_PERF(max_perf);
+
+	/* CPPC EPP feature requires the desired perf field to be set to zero */
+	value &= ~AMD_CPPC_DES_PERF(~0L);
+	value |= AMD_CPPC_DES_PERF(0);
+
+	if (cpudata->epp_policy == cpudata->policy)
+		goto skip_epp;
+
+	cpudata->epp_policy = cpudata->policy;
+
+	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
+		epp = amd_pstate_get_epp(cpudata, value);
+		if (epp < 0)
+			goto skip_epp;
+		/* force the epp value to be zero for performance policy */
+		epp = 0;
+	} else {
+		/* Get BIOS pre-defined epp value */
+		epp = amd_pstate_get_epp(cpudata, value);
+		if (epp)
+			goto skip_epp;
+	}
+	/* Set initial EPP value */
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		value &= ~GENMASK_ULL(31, 24);
+		value |= (u64)epp << 24;
+	}
+
+	amd_pstate_set_epp(cpudata, epp);
+skip_epp:
+	WRITE_ONCE(cpudata->cppc_req_cached, value);
+	cpufreq_cpu_put(policy);
+}
+
+static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+
+	if (!policy->cpuinfo.max_freq)
+		return -ENODEV;
+
+	pr_debug("set_policy: cpuinfo.max %u policy->max %u\n",
+		 policy->cpuinfo.max_freq, policy->max);
+
+	cpudata->policy = policy->policy;
+
+	amd_pstate_epp_init(policy->cpu);
+
+	return 0;
+}
+
+static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata)
+{
+	struct cppc_perf_ctrls perf_ctrls;
+	u64 value, max_perf;
+	int ret;
+
+	ret = amd_pstate_enable(true);
+	if (ret)
+		pr_err("failed to enable amd pstate during resume, return %d\n", ret);
+
+	value = READ_ONCE(cpudata->cppc_req_cached);
+	max_perf = READ_ONCE(cpudata->highest_perf);
+
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+	} else {
+		perf_ctrls.max_perf = max_perf;
+		perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(cpudata->epp_cached);
+		cppc_set_perf(cpudata->cpu, &perf_ctrls);
+	}
+}
+
+static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+
+	pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
+
+	if (cppc_state == AMD_PSTATE_ACTIVE) {
+		amd_pstate_epp_reenable(cpudata);
+		cpudata->suspended = false;
+	}
+
+	return 0;
+}
+
+static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+	struct cppc_perf_ctrls perf_ctrls;
+	int min_perf;
+	u64 value;
+
+	min_perf = READ_ONCE(cpudata->lowest_perf);
+	value = READ_ONCE(cpudata->cppc_req_cached);
+
+	mutex_lock(&amd_pstate_limits_lock);
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
+
+		/* Set max perf same as min perf */
+		value &= ~AMD_CPPC_MAX_PERF(~0L);
+		value |= AMD_CPPC_MAX_PERF(min_perf);
+		value &= ~AMD_CPPC_MIN_PERF(~0L);
+		value |= AMD_CPPC_MIN_PERF(min_perf);
+		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+	} else {
+		perf_ctrls.desired_perf = 0;
+		perf_ctrls.max_perf = min_perf;
+		perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(HWP_EPP_BALANCE_POWERSAVE);
+		cppc_set_perf(cpudata->cpu, &perf_ctrls);
+	}
+	mutex_unlock(&amd_pstate_limits_lock);
+}
+
+static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+
+	pr_debug("AMD CPU Core %d going offline\n", cpudata->cpu);
+
+	if (cpudata->suspended)
+		return 0;
+
+	if (cppc_state == AMD_PSTATE_ACTIVE)
+		amd_pstate_epp_offline(policy);
+
+	return 0;
+}
+
+static int amd_pstate_epp_verify_policy(struct cpufreq_policy_data *policy)
+{
+	cpufreq_verify_within_cpu_limits(policy);
+	pr_debug("policy_max =%d, policy_min=%d\n", policy->max, policy->min);
+	return 0;
+}
+
+static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+	int ret;
+
+	/* avoid suspending when EPP is not enabled */
+	if (cppc_state != AMD_PSTATE_ACTIVE)
+		return 0;
+
+	/* set this flag to avoid setting core offline */
+	cpudata->suspended = true;
+
+	/* disable CPPC in low-level firmware */
+	ret = amd_pstate_enable(false);
+	if (ret)
+		pr_err("failed to suspend, return %d\n", ret);
+
+	return 0;
+}
+
+static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+
+	if (cpudata->suspended) {
+		mutex_lock(&amd_pstate_limits_lock);
+
+		/* enable amd pstate from suspend state */
+		amd_pstate_epp_reenable(cpudata);
+
+		mutex_unlock(&amd_pstate_limits_lock);
+
+		cpudata->suspended = false;
+	}
+
+	return 0;
+}

 static struct cpufreq_driver amd_pstate_driver = {
 	.flags		= CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
···
 	.attr		= amd_pstate_attr,
 };

+static struct cpufreq_driver amd_pstate_epp_driver = {
+	.flags		= CPUFREQ_CONST_LOOPS,
+	.verify		= amd_pstate_epp_verify_policy,
+	.setpolicy	= amd_pstate_epp_set_policy,
+	.init		= amd_pstate_epp_cpu_init,
+	.exit		= amd_pstate_epp_cpu_exit,
+	.offline	= amd_pstate_epp_cpu_offline,
+	.online		= amd_pstate_epp_cpu_online,
+	.suspend	= amd_pstate_epp_suspend,
+	.resume		= amd_pstate_epp_resume,
+	.name		= "amd_pstate_epp",
+	.attr		= amd_pstate_epp_attr,
+};
+
 static int __init amd_pstate_init(void)
 {
 	int ret;
···
 	/*
 	 * by default the pstate driver is disabled to load
 	 * enable the amd_pstate passive mode driver explicitly
-	 * with amd_pstate=passive in kernel command line
+	 * with amd_pstate=passive or other modes in kernel command line
 	 */
-	if (!cppc_load) {
-		pr_debug("driver load is disabled, boot with amd_pstate=passive to enable this\n");
+	if (cppc_state == AMD_PSTATE_DISABLE) {
+		pr_debug("driver load is disabled, boot with specific mode to enable this\n");
 		return -ENODEV;
 	}
···
 	/* capability check */
 	if (boot_cpu_has(X86_FEATURE_CPPC)) {
 		pr_debug("AMD CPPC MSR based functionality is supported\n");
-		amd_pstate_driver.adjust_perf = amd_pstate_adjust_perf;
+		if (cppc_state == AMD_PSTATE_PASSIVE)
+			current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
 	} else {
 		pr_debug("AMD CPPC shared memory based functionality is supported\n");
 		static_call_update(amd_pstate_enable, cppc_enable);
···
 	/* enable amd pstate feature */
 	ret = amd_pstate_enable(true);
 	if (ret) {
-		pr_err("failed to enable amd-pstate with return %d\n", ret);
+		pr_err("failed to enable with return %d\n", ret);
 		return ret;
 	}

-	ret = cpufreq_register_driver(&amd_pstate_driver);
+	ret = cpufreq_register_driver(current_pstate_driver);
 	if (ret)
-		pr_err("failed to register amd_pstate_driver with return %d\n",
-		       ret);
+		pr_err("failed to register with return %d\n", ret);

+	amd_pstate_kobj = kobject_create_and_add("amd_pstate", &cpu_subsys.dev_root->kobj);
+	if (!amd_pstate_kobj) {
+		ret = -EINVAL;
+		pr_err("global sysfs registration failed.\n");
+		goto kobject_free;
+	}
+
+	ret = sysfs_create_group(amd_pstate_kobj, &amd_pstate_global_attr_group);
+	if (ret) {
+		pr_err("sysfs attribute export failed with error %d.\n", ret);
+		goto global_attr_free;
+	}
+
+	return ret;
+
+global_attr_free:
+	kobject_put(amd_pstate_kobj);
+kobject_free:
+	cpufreq_unregister_driver(current_pstate_driver);
 	return ret;
 }
 device_initcall(amd_pstate_init);

 static int __init amd_pstate_param(char *str)
 {
+	size_t size;
+	int mode_idx;
+
 	if (!str)
 		return -EINVAL;

-	if (!strcmp(str, "disable")) {
-		cppc_load = 0;
-		pr_info("driver is explicitly disabled\n");
-	} else if (!strcmp(str, "passive"))
-		cppc_load = 1;
+	size = strlen(str);
+	mode_idx = get_mode_idx_from_str(str, size);

-	return 0;
+	if (mode_idx >= AMD_PSTATE_DISABLE && mode_idx < AMD_PSTATE_MAX) {
+		cppc_state = mode_idx;
+		if (cppc_state == AMD_PSTATE_DISABLE)
+			pr_info("driver is explicitly disabled\n");
+
+		if (cppc_state == AMD_PSTATE_ACTIVE)
+			current_pstate_driver = &amd_pstate_epp_driver;
+
+		if (cppc_state == AMD_PSTATE_PASSIVE)
+			current_pstate_driver = &amd_pstate_driver;
+
+		return 0;
+	}
+
+	return -EINVAL;
 }
 early_param("amd_pstate", amd_pstate_param);
+1-4
drivers/cpufreq/brcmstb-avs-cpufreq.c
···
 
 static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
 {
-	int ret;
-
-	ret = cpufreq_unregister_driver(&brcm_avs_driver);
-	WARN_ON(ret);
+	cpufreq_unregister_driver(&brcm_avs_driver);
 
 	brcm_avs_prepare_uninit(pdev);
+4-6
drivers/cpufreq/cpufreq.c
···
 	.store	= store,
 };
 
-static struct kobj_type ktype_cpufreq = {
+static const struct kobj_type ktype_cpufreq = {
 	.sysfs_ops	= &sysfs_ops,
 	.default_groups	= cpufreq_groups,
 	.release	= cpufreq_sysfs_release,
···
  * Returns zero if successful, and -EINVAL if the cpufreq_driver is
  * currently not initialised.
  */
-int cpufreq_unregister_driver(struct cpufreq_driver *driver)
+void cpufreq_unregister_driver(struct cpufreq_driver *driver)
 {
 	unsigned long flags;
 
-	if (!cpufreq_driver || (driver != cpufreq_driver))
-		return -EINVAL;
+	if (WARN_ON(!cpufreq_driver || (driver != cpufreq_driver)))
+		return;
 
 	pr_debug("unregistering driver %s\n", driver->name);
···
 
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 	cpus_read_unlock();
-
-	return 0;
 }
 EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
+3-1
drivers/cpufreq/davinci-cpufreq.c
···
 
 static int __exit davinci_cpufreq_remove(struct platform_device *pdev)
 {
+	cpufreq_unregister_driver(&davinci_driver);
+
 	clk_put(cpufreq.armclk);
 
 	if (cpufreq.asyncclk)
 		clk_put(cpufreq.asyncclk);
 
-	return cpufreq_unregister_driver(&davinci_driver);
+	return 0;
 }
 
 static struct platform_driver davinci_cpufreq_driver = {
-222
drivers/cpufreq/loongson1-cpufreq.c
···
-/*
- * CPU Frequency Scaling for Loongson 1 SoC
- *
- * Copyright (C) 2014-2016 Zhang, Keguang <keguang.zhang@gmail.com>
- *
- * This file is licensed under the terms of the GNU General Public
- * License version 2. This program is licensed "as is" without any
- * warranty of any kind, whether express or implied.
- */
-
-#include <linux/clk.h>
-#include <linux/clk-provider.h>
-#include <linux/cpu.h>
-#include <linux/cpufreq.h>
-#include <linux/delay.h>
-#include <linux/io.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-#include <linux/slab.h>
-
-#include <cpufreq.h>
-#include <loongson1.h>
-
-struct ls1x_cpufreq {
-	struct device *dev;
-	struct clk *clk;	/* CPU clk */
-	struct clk *mux_clk;	/* MUX of CPU clk */
-	struct clk *pll_clk;	/* PLL clk */
-	struct clk *osc_clk;	/* OSC clk */
-	unsigned int max_freq;
-	unsigned int min_freq;
-};
-
-static struct ls1x_cpufreq *cpufreq;
-
-static int ls1x_cpufreq_notifier(struct notifier_block *nb,
-				 unsigned long val, void *data)
-{
-	if (val == CPUFREQ_POSTCHANGE)
-		current_cpu_data.udelay_val = loops_per_jiffy;
-
-	return NOTIFY_OK;
-}
-
-static struct notifier_block ls1x_cpufreq_notifier_block = {
-	.notifier_call = ls1x_cpufreq_notifier
-};
-
-static int ls1x_cpufreq_target(struct cpufreq_policy *policy,
-			       unsigned int index)
-{
-	struct device *cpu_dev = get_cpu_device(policy->cpu);
-	unsigned int old_freq, new_freq;
-
-	old_freq = policy->cur;
-	new_freq = policy->freq_table[index].frequency;
-
-	/*
-	 * The procedure of reconfiguring CPU clk is as below.
-	 *
-	 * - Reparent CPU clk to OSC clk
-	 * - Reset CPU clock (very important)
-	 * - Reconfigure CPU DIV
-	 * - Reparent CPU clk back to CPU DIV clk
-	 */
-
-	clk_set_parent(policy->clk, cpufreq->osc_clk);
-	__raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) | RST_CPU_EN | RST_CPU,
-		     LS1X_CLK_PLL_DIV);
-	__raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) & ~(RST_CPU_EN | RST_CPU),
-		     LS1X_CLK_PLL_DIV);
-	clk_set_rate(cpufreq->mux_clk, new_freq * 1000);
-	clk_set_parent(policy->clk, cpufreq->mux_clk);
-	dev_dbg(cpu_dev, "%u KHz --> %u KHz\n", old_freq, new_freq);
-
-	return 0;
-}
-
-static int ls1x_cpufreq_init(struct cpufreq_policy *policy)
-{
-	struct device *cpu_dev = get_cpu_device(policy->cpu);
-	struct cpufreq_frequency_table *freq_tbl;
-	unsigned int pll_freq, freq;
-	int steps, i;
-
-	pll_freq = clk_get_rate(cpufreq->pll_clk) / 1000;
-
-	steps = 1 << DIV_CPU_WIDTH;
-	freq_tbl = kcalloc(steps, sizeof(*freq_tbl), GFP_KERNEL);
-	if (!freq_tbl)
-		return -ENOMEM;
-
-	for (i = 0; i < (steps - 1); i++) {
-		freq = pll_freq / (i + 1);
-		if ((freq < cpufreq->min_freq) || (freq > cpufreq->max_freq))
-			freq_tbl[i].frequency = CPUFREQ_ENTRY_INVALID;
-		else
-			freq_tbl[i].frequency = freq;
-		dev_dbg(cpu_dev,
-			"cpufreq table: index %d: frequency %d\n", i,
-			freq_tbl[i].frequency);
-	}
-	freq_tbl[i].frequency = CPUFREQ_TABLE_END;
-
-	policy->clk = cpufreq->clk;
-	cpufreq_generic_init(policy, freq_tbl, 0);
-
-	return 0;
-}
-
-static int ls1x_cpufreq_exit(struct cpufreq_policy *policy)
-{
-	kfree(policy->freq_table);
-	return 0;
-}
-
-static struct cpufreq_driver ls1x_cpufreq_driver = {
-	.name		= "cpufreq-ls1x",
-	.flags		= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
-	.verify		= cpufreq_generic_frequency_table_verify,
-	.target_index	= ls1x_cpufreq_target,
-	.get		= cpufreq_generic_get,
-	.init		= ls1x_cpufreq_init,
-	.exit		= ls1x_cpufreq_exit,
-	.attr		= cpufreq_generic_attr,
-};
-
-static int ls1x_cpufreq_remove(struct platform_device *pdev)
-{
-	cpufreq_unregister_notifier(&ls1x_cpufreq_notifier_block,
-				    CPUFREQ_TRANSITION_NOTIFIER);
-	cpufreq_unregister_driver(&ls1x_cpufreq_driver);
-
-	return 0;
-}
-
-static int ls1x_cpufreq_probe(struct platform_device *pdev)
-{
-	struct plat_ls1x_cpufreq *pdata = dev_get_platdata(&pdev->dev);
-	struct clk *clk;
-	int ret;
-
-	if (!pdata || !pdata->clk_name || !pdata->osc_clk_name) {
-		dev_err(&pdev->dev, "platform data missing\n");
-		return -EINVAL;
-	}
-
-	cpufreq =
-	    devm_kzalloc(&pdev->dev, sizeof(struct ls1x_cpufreq), GFP_KERNEL);
-	if (!cpufreq)
-		return -ENOMEM;
-
-	cpufreq->dev = &pdev->dev;
-
-	clk = devm_clk_get(&pdev->dev, pdata->clk_name);
-	if (IS_ERR(clk)) {
-		dev_err(&pdev->dev, "unable to get %s clock\n",
-			pdata->clk_name);
-		return PTR_ERR(clk);
-	}
-	cpufreq->clk = clk;
-
-	clk = clk_get_parent(clk);
-	if (IS_ERR(clk)) {
-		dev_err(&pdev->dev, "unable to get parent of %s clock\n",
-			__clk_get_name(cpufreq->clk));
-		return PTR_ERR(clk);
-	}
-	cpufreq->mux_clk = clk;
-
-	clk = clk_get_parent(clk);
-	if (IS_ERR(clk)) {
-		dev_err(&pdev->dev, "unable to get parent of %s clock\n",
-			__clk_get_name(cpufreq->mux_clk));
-		return PTR_ERR(clk);
-	}
-	cpufreq->pll_clk = clk;
-
-	clk = devm_clk_get(&pdev->dev, pdata->osc_clk_name);
-	if (IS_ERR(clk)) {
-		dev_err(&pdev->dev, "unable to get %s clock\n",
-			pdata->osc_clk_name);
-		return PTR_ERR(clk);
-	}
-	cpufreq->osc_clk = clk;
-
-	cpufreq->max_freq = pdata->max_freq;
-	cpufreq->min_freq = pdata->min_freq;
-
-	ret = cpufreq_register_driver(&ls1x_cpufreq_driver);
-	if (ret) {
-		dev_err(&pdev->dev,
-			"failed to register CPUFreq driver: %d\n", ret);
-		return ret;
-	}
-
-	ret = cpufreq_register_notifier(&ls1x_cpufreq_notifier_block,
-					CPUFREQ_TRANSITION_NOTIFIER);
-
-	if (ret) {
-		dev_err(&pdev->dev,
-			"failed to register CPUFreq notifier: %d\n", ret);
-		cpufreq_unregister_driver(&ls1x_cpufreq_driver);
-	}
-
-	return ret;
-}
-
-static struct platform_driver ls1x_cpufreq_platdrv = {
-	.probe	= ls1x_cpufreq_probe,
-	.remove	= ls1x_cpufreq_remove,
-	.driver	= {
-		.name	= "ls1x-cpufreq",
-	},
-};
-
-module_platform_driver(ls1x_cpufreq_platdrv);
-
-MODULE_ALIAS("platform:ls1x-cpufreq");
-MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
-MODULE_DESCRIPTION("Loongson1 CPUFreq driver");
-MODULE_LICENSE("GPL");