Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-5.20-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
"These are ARM cpufreq updates and operating performance points (OPP)
updates plus one cpuidle update adding a new trace point.

Specifics:

- Fix return error code in mtk_cpu_dvfs_info_init (Yang Yingliang).

- Minor cleanups and support for new boards for Qcom cpufreq drivers
(Bryan O'Donoghue, Konrad Dybcio, Pierre Gondois, and Yicong Yang).

- Fix sparse warnings for Tegra cpufreq driver (Viresh Kumar).

- Make dev_pm_opp_set_regulators() accept NULL terminated list
(Viresh Kumar).

- Add dev_pm_opp_set_config() and friends and migrate other users and
helpers to using them (Viresh Kumar).

- Add support for multiple clocks for a device (Viresh Kumar and
Krzysztof Kozlowski).

- Configure resources before adding OPP table for Venus (Stanimir
Varbanov).

- Keep reference count up for opp->np and opp_table->np while they
are still in use (Liang He).

- Minor OPP cleanups (Viresh Kumar and Yang Li).

- Add a trace event for cpuidle to track missed (too deep or too
shallow) wakeups (Kajetan Puchalski)"

* tag 'pm-5.20-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (55 commits)
cpuidle: Add cpu_idle_miss trace event
venus: pm_helpers: Fix warning in OPP during probe
OPP: Don't drop opp->np reference while it is still in use
OPP: Don't drop opp_table->np reference while it is still in use
cpufreq: tegra194: Staticize struct tegra_cpufreq_soc instances
dt-bindings: cpufreq: cpufreq-qcom-hw: Add SM6375 compatible
dt-bindings: opp: Add msm8939 to the compatible list
dt-bindings: opp: Add missing compat devices
dt-bindings: opp: opp-v2-kryo-cpu: Fix example binding checks
cpufreq: Change order of online() CB and policy->cpus modification
cpufreq: qcom-hw: Remove deprecated irq_set_affinity_hint() call
cpufreq: qcom-hw: Disable LMH irq when disabling policy
cpufreq: qcom-hw: Reset cancel_throttle when policy is re-enabled
cpufreq: qcom-cpufreq-hw: use HZ_PER_KHZ macro in units.h
cpufreq: mediatek: fix error return code in mtk_cpu_dvfs_info_init()
OPP: Remove dev{m}_pm_opp_of_add_table_noclk()
PM / devfreq: tegra30: Register config_clks helper
OPP: Allow config_clks helper for single clk case
OPP: Provide a simple implementation to configure multiple clocks
OPP: Assert clk_count == 1 for single clk helpers
...

+1423 -1287
+1
Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
···
       - description: v2 of CPUFREQ HW (EPSS)
         items:
           - enum:
+              - qcom,sm6375-cpufreq-epss
               - qcom,sm8250-cpufreq-epss
           - const: qcom,cpufreq-epss
+7
Documentation/devicetree/bindings/cpufreq/qcom-cpufreq-nvmem.yaml
···
   compatible:
     contains:
       enum:
+        - qcom,apq8064
+        - qcom,apq8096
+        - qcom,ipq8064
+        - qcom,msm8939
+        - qcom,msm8960
+        - qcom,msm8974
+        - qcom,msm8996
         - qcom,qcs404
 required:
   - compatible
+10
Documentation/devicetree/bindings/opp/opp-v2-base.yaml
··· 50 50 property to uniquely identify the OPP nodes exists. Devices like power 51 51 domains must have another (implementation dependent) property. 52 52 53 + Entries for multiple clocks shall be provided in the same field, as 54 + array of frequencies. The OPP binding doesn't provide any provisions 55 + to relate the values to their clocks or the order in which the clocks 56 + need to be configured and that is left for the implementation 57 + specific binding. 58 + minItems: 1 59 + maxItems: 16 60 + items: 61 + maxItems: 1 62 + 53 63 opp-microvolt: 54 64 description: | 55 65 Voltage for the OPP
+15
Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
··· 98 98 capacity-dmips-mhz = <1024>; 99 99 clocks = <&kryocc 0>; 100 100 operating-points-v2 = <&cluster0_opp>; 101 + power-domains = <&cpr>; 102 + power-domain-names = "cpr"; 101 103 #cooling-cells = <2>; 102 104 next-level-cache = <&L2_0>; 103 105 L2_0: l2-cache { ··· 117 115 capacity-dmips-mhz = <1024>; 118 116 clocks = <&kryocc 0>; 119 117 operating-points-v2 = <&cluster0_opp>; 118 + power-domains = <&cpr>; 119 + power-domain-names = "cpr"; 120 120 #cooling-cells = <2>; 121 121 next-level-cache = <&L2_0>; 122 122 }; ··· 132 128 capacity-dmips-mhz = <1024>; 133 129 clocks = <&kryocc 1>; 134 130 operating-points-v2 = <&cluster1_opp>; 131 + power-domains = <&cpr>; 132 + power-domain-names = "cpr"; 135 133 #cooling-cells = <2>; 136 134 next-level-cache = <&L2_1>; 137 135 L2_1: l2-cache { ··· 151 145 capacity-dmips-mhz = <1024>; 152 146 clocks = <&kryocc 1>; 153 147 operating-points-v2 = <&cluster1_opp>; 148 + power-domains = <&cpr>; 149 + power-domain-names = "cpr"; 154 150 #cooling-cells = <2>; 155 151 next-level-cache = <&L2_1>; 156 152 }; ··· 190 182 opp-microvolt = <905000 905000 1140000>; 191 183 opp-supported-hw = <0x7>; 192 184 clock-latency-ns = <200000>; 185 + required-opps = <&cpr_opp1>; 193 186 }; 194 187 opp-1401600000 { 195 188 opp-hz = /bits/ 64 <1401600000>; 196 189 opp-microvolt = <1140000 905000 1140000>; 197 190 opp-supported-hw = <0x5>; 198 191 clock-latency-ns = <200000>; 192 + required-opps = <&cpr_opp2>; 199 193 }; 200 194 opp-1593600000 { 201 195 opp-hz = /bits/ 64 <1593600000>; 202 196 opp-microvolt = <1140000 905000 1140000>; 203 197 opp-supported-hw = <0x1>; 204 198 clock-latency-ns = <200000>; 199 + required-opps = <&cpr_opp3>; 205 200 }; 206 201 }; 207 202 ··· 218 207 opp-microvolt = <905000 905000 1140000>; 219 208 opp-supported-hw = <0x7>; 220 209 clock-latency-ns = <200000>; 210 + required-opps = <&cpr_opp1>; 221 211 }; 222 212 opp-1804800000 { 223 213 opp-hz = /bits/ 64 <1804800000>; 224 214 opp-microvolt = <1140000 905000 1140000>; 225 215 opp-supported-hw = <0x6>; 226 216 clock-latency-ns = <200000>; 217 + required-opps = <&cpr_opp4>; 227 218 }; 228 219 opp-1900800000 { 229 220 opp-hz = /bits/ 64 <1900800000>; 230 221 opp-microvolt = <1140000 905000 1140000>; 231 222 opp-supported-hw = <0x4>; 232 223 clock-latency-ns = <200000>; 224 + required-opps = <&cpr_opp5>; 233 225 }; 234 226 opp-2150400000 { 235 227 opp-hz = /bits/ 64 <2150400000>; 236 228 opp-microvolt = <1140000 905000 1140000>; 237 229 opp-supported-hw = <0x1>; 238 230 clock-latency-ns = <200000>; 231 + required-opps = <&cpr_opp6>; 239 232 }; 240 233 }; 241 234
+9 -10
drivers/cpufreq/cpufreq-dt.c
··· 29 29 30 30 cpumask_var_t cpus; 31 31 struct device *cpu_dev; 32 - struct opp_table *opp_table; 33 32 struct cpufreq_frequency_table *freq_table; 34 33 bool have_static_opps; 34 + int opp_token; 35 35 }; 36 36 37 37 static LIST_HEAD(priv_list); ··· 193 193 struct private_data *priv; 194 194 struct device *cpu_dev; 195 195 bool fallback = false; 196 - const char *reg_name; 196 + const char *reg_name[] = { NULL, NULL }; 197 197 int ret; 198 198 199 199 /* Check if this CPU is already covered by some other policy */ ··· 218 218 * OPP layer will be taking care of regulators now, but it needs to know 219 219 * the name of the regulator first. 220 220 */ 221 - reg_name = find_supply_name(cpu_dev); 222 - if (reg_name) { 223 - priv->opp_table = dev_pm_opp_set_regulators(cpu_dev, &reg_name, 224 - 1); 225 - if (IS_ERR(priv->opp_table)) { 226 - ret = PTR_ERR(priv->opp_table); 221 + reg_name[0] = find_supply_name(cpu_dev); 222 + if (reg_name[0]) { 223 + priv->opp_token = dev_pm_opp_set_regulators(cpu_dev, reg_name); 224 + if (priv->opp_token < 0) { 225 + ret = priv->opp_token; 227 226 if (ret != -EPROBE_DEFER) 228 227 dev_err(cpu_dev, "failed to set regulators: %d\n", 229 228 ret); ··· 294 295 out: 295 296 if (priv->have_static_opps) 296 297 dev_pm_opp_of_cpumask_remove_table(priv->cpus); 297 - dev_pm_opp_put_regulators(priv->opp_table); 298 + dev_pm_opp_put_regulators(priv->opp_token); 298 299 free_cpumask: 299 300 free_cpumask_var(priv->cpus); 300 301 return ret; ··· 308 309 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &priv->freq_table); 309 310 if (priv->have_static_opps) 310 311 dev_pm_opp_of_cpumask_remove_table(priv->cpus); 311 - dev_pm_opp_put_regulators(priv->opp_table); 312 + dev_pm_opp_put_regulators(priv->opp_token); 312 313 free_cpumask_var(priv->cpus); 313 314 list_del(&priv->node); 314 315 }
+3 -3
drivers/cpufreq/cpufreq.c
···
 	}
 
 	if (!new_policy && cpufreq_driver->online) {
+		/* Recover policy->cpus using related_cpus */
+		cpumask_copy(policy->cpus, policy->related_cpus);
+
 		ret = cpufreq_driver->online(policy);
 		if (ret) {
 			pr_debug("%s: %d: initialization failed\n", __func__,
 				 __LINE__);
 			goto out_exit_policy;
 		}
-
-		/* Recover policy->cpus using related_cpus */
-		cpumask_copy(policy->cpus, policy->related_cpus);
 	} else {
 		cpumask_copy(policy->cpus, cpumask_of(cpu));
+6 -6
drivers/cpufreq/imx-cpufreq-dt.c
··· 31 31 32 32 /* cpufreq-dt device registered by imx-cpufreq-dt */ 33 33 static struct platform_device *cpufreq_dt_pdev; 34 - static struct opp_table *cpufreq_opp_table; 35 34 static struct device *cpu_dev; 35 + static int cpufreq_opp_token; 36 36 37 37 enum IMX7ULP_CPUFREQ_CLKS { 38 38 ARM, ··· 153 153 dev_info(&pdev->dev, "cpu speed grade %d mkt segment %d supported-hw %#x %#x\n", 154 154 speed_grade, mkt_segment, supported_hw[0], supported_hw[1]); 155 155 156 - cpufreq_opp_table = dev_pm_opp_set_supported_hw(cpu_dev, supported_hw, 2); 157 - if (IS_ERR(cpufreq_opp_table)) { 158 - ret = PTR_ERR(cpufreq_opp_table); 156 + cpufreq_opp_token = dev_pm_opp_set_supported_hw(cpu_dev, supported_hw, 2); 157 + if (cpufreq_opp_token < 0) { 158 + ret = cpufreq_opp_token; 159 159 dev_err(&pdev->dev, "Failed to set supported opp: %d\n", ret); 160 160 return ret; 161 161 } ··· 163 163 cpufreq_dt_pdev = platform_device_register_data( 164 164 &pdev->dev, "cpufreq-dt", -1, NULL, 0); 165 165 if (IS_ERR(cpufreq_dt_pdev)) { 166 - dev_pm_opp_put_supported_hw(cpufreq_opp_table); 166 + dev_pm_opp_put_supported_hw(cpufreq_opp_token); 167 167 ret = PTR_ERR(cpufreq_dt_pdev); 168 168 dev_err(&pdev->dev, "Failed to register cpufreq-dt: %d\n", ret); 169 169 return ret; ··· 176 176 { 177 177 platform_device_unregister(cpufreq_dt_pdev); 178 178 if (!of_machine_is_compatible("fsl,imx7ulp")) 179 - dev_pm_opp_put_supported_hw(cpufreq_opp_table); 179 + dev_pm_opp_put_supported_hw(cpufreq_opp_token); 180 180 else 181 181 clk_bulk_put(ARRAY_SIZE(imx7ulp_clks), imx7ulp_clks); 182 182
+1
drivers/cpufreq/mediatek-cpufreq.c
···
 	if (info->soc_data->ccifreq_supported) {
 		info->vproc_on_boot = regulator_get_voltage(info->proc_reg);
 		if (info->vproc_on_boot < 0) {
+			ret = info->vproc_on_boot;
 			dev_err(info->cpu_dev,
 				"invalid Vproc value: %d\n", info->vproc_on_boot);
 			goto out_disable_inter_clock;
+9 -5
drivers/cpufreq/qcom-cpufreq-hw.c
··· 15 15 #include <linux/pm_opp.h> 16 16 #include <linux/slab.h> 17 17 #include <linux/spinlock.h> 18 + #include <linux/units.h> 18 19 19 20 #define LUT_MAX_ENTRIES 40U 20 21 #define LUT_SRC GENMASK(31, 30) ··· 26 25 #define LUT_TURBO_IND 1 27 26 28 27 #define GT_IRQ_STATUS BIT(2) 29 - 30 - #define HZ_PER_KHZ 1000 31 28 32 29 struct qcom_cpufreq_soc_data { 33 30 u32 reg_enable; ··· 427 428 return 0; 428 429 } 429 430 430 - ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus); 431 + ret = irq_set_affinity_and_hint(data->throttle_irq, policy->cpus); 431 432 if (ret) 432 433 dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n", 433 434 data->irq_name, data->throttle_irq); ··· 444 445 if (data->throttle_irq <= 0) 445 446 return 0; 446 447 447 - ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus); 448 + mutex_lock(&data->throttle_lock); 449 + data->cancel_throttle = false; 450 + mutex_unlock(&data->throttle_lock); 451 + 452 + ret = irq_set_affinity_and_hint(data->throttle_irq, policy->cpus); 448 453 if (ret) 449 454 dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n", 450 455 data->irq_name, data->throttle_irq); ··· 468 465 mutex_unlock(&data->throttle_lock); 469 466 470 467 cancel_delayed_work_sync(&data->throttle_work); 471 - irq_set_affinity_hint(data->throttle_irq, NULL); 468 + irq_set_affinity_and_hint(data->throttle_irq, NULL); 469 + disable_irq_nosync(data->throttle_irq); 472 470 473 471 return 0; 474 472 }
+28 -81
drivers/cpufreq/qcom-cpufreq-nvmem.c
··· 55 55 }; 56 56 57 57 struct qcom_cpufreq_drv { 58 - struct opp_table **names_opp_tables; 59 - struct opp_table **hw_opp_tables; 60 - struct opp_table **genpd_opp_tables; 58 + int *opp_tokens; 61 59 u32 versions; 62 60 const struct qcom_cpufreq_match_data *data; 63 61 }; ··· 313 315 } 314 316 of_node_put(np); 315 317 316 - drv->names_opp_tables = kcalloc(num_possible_cpus(), 317 - sizeof(*drv->names_opp_tables), 318 + drv->opp_tokens = kcalloc(num_possible_cpus(), sizeof(*drv->opp_tokens), 318 319 GFP_KERNEL); 319 - if (!drv->names_opp_tables) { 320 + if (!drv->opp_tokens) { 320 321 ret = -ENOMEM; 321 322 goto free_drv; 322 323 } 323 - drv->hw_opp_tables = kcalloc(num_possible_cpus(), 324 - sizeof(*drv->hw_opp_tables), 325 - GFP_KERNEL); 326 - if (!drv->hw_opp_tables) { 327 - ret = -ENOMEM; 328 - goto free_opp_names; 329 - } 330 - 331 - drv->genpd_opp_tables = kcalloc(num_possible_cpus(), 332 - sizeof(*drv->genpd_opp_tables), 333 - GFP_KERNEL); 334 - if (!drv->genpd_opp_tables) { 335 - ret = -ENOMEM; 336 - goto free_opp; 337 - } 338 324 339 325 for_each_possible_cpu(cpu) { 326 + struct dev_pm_opp_config config = { 327 + .supported_hw = NULL, 328 + }; 329 + 340 330 cpu_dev = get_cpu_device(cpu); 341 331 if (NULL == cpu_dev) { 342 332 ret = -ENODEV; 343 - goto free_genpd_opp; 333 + goto free_opp; 344 334 } 345 335 346 336 if (drv->data->get_version) { 337 + config.supported_hw = &drv->versions; 338 + config.supported_hw_count = 1; 347 339 348 - if (pvs_name) { 349 - drv->names_opp_tables[cpu] = dev_pm_opp_set_prop_name( 350 - cpu_dev, 351 - pvs_name); 352 - if (IS_ERR(drv->names_opp_tables[cpu])) { 353 - ret = PTR_ERR(drv->names_opp_tables[cpu]); 354 - dev_err(cpu_dev, "Failed to add OPP name %s\n", 355 - pvs_name); 356 - goto free_opp; 357 - } 358 - } 359 - 360 - drv->hw_opp_tables[cpu] = dev_pm_opp_set_supported_hw( 361 - cpu_dev, &drv->versions, 1); 362 - if (IS_ERR(drv->hw_opp_tables[cpu])) { 363 - ret = PTR_ERR(drv->hw_opp_tables[cpu]); 364 - dev_err(cpu_dev, 365 - "Failed to set supported hardware\n"); 366 - goto free_genpd_opp; 367 - } 340 + if (pvs_name) 341 + config.prop_name = pvs_name; 368 342 } 369 343 370 344 if (drv->data->genpd_names) { 371 - drv->genpd_opp_tables[cpu] = 372 - dev_pm_opp_attach_genpd(cpu_dev, 373 - drv->data->genpd_names, 374 - NULL); 375 - if (IS_ERR(drv->genpd_opp_tables[cpu])) { 376 - ret = PTR_ERR(drv->genpd_opp_tables[cpu]); 377 - if (ret != -EPROBE_DEFER) 378 - dev_err(cpu_dev, 379 - "Could not attach to pm_domain: %d\n", 380 - ret); 381 - goto free_genpd_opp; 345 + config.genpd_names = drv->data->genpd_names; 346 + config.virt_devs = NULL; 347 + } 348 + 349 + if (config.supported_hw || config.genpd_names) { 350 + drv->opp_tokens[cpu] = dev_pm_opp_set_config(cpu_dev, &config); 351 + if (drv->opp_tokens[cpu] < 0) { 352 + ret = drv->opp_tokens[cpu]; 353 + dev_err(cpu_dev, "Failed to set OPP config\n"); 354 + goto free_opp; 382 355 } 383 356 } 384 357 } ··· 364 395 ret = PTR_ERR(cpufreq_dt_pdev); 365 396 dev_err(cpu_dev, "Failed to register platform device\n"); 366 397 367 - free_genpd_opp: 368 - for_each_possible_cpu(cpu) { 369 - if (IS_ERR(drv->genpd_opp_tables[cpu])) 370 - break; 371 - dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]); 372 - } 373 - kfree(drv->genpd_opp_tables); 374 398 free_opp: 375 - for_each_possible_cpu(cpu) { 376 - if (IS_ERR(drv->names_opp_tables[cpu])) 377 - break; 378 - dev_pm_opp_put_prop_name(drv->names_opp_tables[cpu]); 379 - } 380 - for_each_possible_cpu(cpu) { 381 - if (IS_ERR(drv->hw_opp_tables[cpu])) 382 - break; 383 - dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]); 384 - } 385 - kfree(drv->hw_opp_tables); 386 - free_opp_names: 387 - kfree(drv->names_opp_tables); 399 + for_each_possible_cpu(cpu) 400 + dev_pm_opp_clear_config(drv->opp_tokens[cpu]); 401 + kfree(drv->opp_tokens); 388 402 free_drv: 389 403 kfree(drv); 390 404 ··· 381 429 382 430 platform_device_unregister(cpufreq_dt_pdev); 383 431 384 - for_each_possible_cpu(cpu) { 385 - dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]); 386 - dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]); 387 - dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]); 388 - } 432 + for_each_possible_cpu(cpu) 433 + dev_pm_opp_clear_config(drv->opp_tokens[cpu]); 389 434 390 - kfree(drv->names_opp_tables); 391 - kfree(drv->hw_opp_tables); 392 - kfree(drv->genpd_opp_tables); 435 + kfree(drv->opp_tokens); 393 436 kfree(drv); 394 437 395 438 return 0;
+10 -17
drivers/cpufreq/sti-cpufreq.c
··· 156 156 unsigned int hw_info_offset; 157 157 unsigned int version[VERSION_ELEMENTS]; 158 158 int pcode, substrate, major, minor; 159 - int ret; 159 + int opp_token, ret; 160 160 char name[MAX_PCODE_NAME_LEN]; 161 - struct opp_table *opp_table; 161 + struct dev_pm_opp_config config = { 162 + .supported_hw = version, 163 + .supported_hw_count = ARRAY_SIZE(version), 164 + .prop_name = name, 165 + }; 162 166 163 167 reg_fields = sti_cpufreq_match(); 164 168 if (!reg_fields) { ··· 214 210 215 211 snprintf(name, MAX_PCODE_NAME_LEN, "pcode%d", pcode); 216 212 217 - opp_table = dev_pm_opp_set_prop_name(dev, name); 218 - if (IS_ERR(opp_table)) { 219 - dev_err(dev, "Failed to set prop name\n"); 220 - return PTR_ERR(opp_table); 221 - } 222 - 223 213 version[0] = BIT(major); 224 214 version[1] = BIT(minor); 225 215 version[2] = BIT(substrate); 226 216 227 - opp_table = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS); 228 - if (IS_ERR(opp_table)) { 229 - dev_err(dev, "Failed to set supported hardware\n"); 230 - ret = PTR_ERR(opp_table); 231 - goto err_put_prop_name; 217 + opp_token = dev_pm_opp_set_config(dev, &config); 218 + if (opp_token < 0) { 219 + dev_err(dev, "Failed to set OPP config\n"); 220 + return opp_token; 232 221 } 233 222 234 223 dev_dbg(dev, "pcode: %d major: %d minor: %d substrate: %d\n", ··· 230 233 version[0], version[1], version[2]); 231 234 232 235 return 0; 233 - 234 - err_put_prop_name: 235 - dev_pm_opp_put_prop_name(opp_table); 236 - return ret; 237 236 } 238 237 239 238 static int sti_cpufreq_fetch_syscon_registers(void)
+14 -17
drivers/cpufreq/sun50i-cpufreq-nvmem.c
··· 86 86 87 87 static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev) 88 88 { 89 - struct opp_table **opp_tables; 89 + int *opp_tokens; 90 90 char name[MAX_NAME_LEN]; 91 91 unsigned int cpu; 92 92 u32 speed = 0; 93 93 int ret; 94 94 95 - opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), 95 + opp_tokens = kcalloc(num_possible_cpus(), sizeof(*opp_tokens), 96 96 GFP_KERNEL); 97 - if (!opp_tables) 97 + if (!opp_tokens) 98 98 return -ENOMEM; 99 99 100 100 ret = sun50i_cpufreq_get_efuse(&speed); 101 101 if (ret) { 102 - kfree(opp_tables); 102 + kfree(opp_tokens); 103 103 return ret; 104 104 } 105 105 ··· 113 113 goto free_opp; 114 114 } 115 115 116 - opp_tables[cpu] = dev_pm_opp_set_prop_name(cpu_dev, name); 117 - if (IS_ERR(opp_tables[cpu])) { 118 - ret = PTR_ERR(opp_tables[cpu]); 116 + opp_tokens[cpu] = dev_pm_opp_set_prop_name(cpu_dev, name); 117 + if (opp_tokens[cpu] < 0) { 118 + ret = opp_tokens[cpu]; 119 119 pr_err("Failed to set prop name\n"); 120 120 goto free_opp; 121 121 } ··· 124 124 cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1, 125 125 NULL, 0); 126 126 if (!IS_ERR(cpufreq_dt_pdev)) { 127 - platform_set_drvdata(pdev, opp_tables); 127 + platform_set_drvdata(pdev, opp_tokens); 128 128 return 0; 129 129 } 130 130 ··· 132 132 pr_err("Failed to register platform device\n"); 133 133 134 134 free_opp: 135 - for_each_possible_cpu(cpu) { 136 - if (IS_ERR_OR_NULL(opp_tables[cpu])) 137 - break; 138 - dev_pm_opp_put_prop_name(opp_tables[cpu]); 139 - } 140 - kfree(opp_tables); 135 + for_each_possible_cpu(cpu) 136 + dev_pm_opp_put_prop_name(opp_tokens[cpu]); 137 + kfree(opp_tokens); 141 138 142 139 return ret; 143 140 } 144 141 145 142 static int sun50i_cpufreq_nvmem_remove(struct platform_device *pdev) 146 143 { 147 - struct opp_table **opp_tables = platform_get_drvdata(pdev); 144 + int *opp_tokens = platform_get_drvdata(pdev); 148 145 unsigned int cpu; 149 146 150 147 platform_device_unregister(cpufreq_dt_pdev); 151 148 152 149 for_each_possible_cpu(cpu) 153 - dev_pm_opp_put_prop_name(opp_tables[cpu]); 150 + dev_pm_opp_put_prop_name(opp_tokens[cpu]); 154 151 155 - kfree(opp_tables); 152 + kfree(opp_tokens); 156 153 157 154 return 0; 158 155 }
+2 -2
drivers/cpufreq/tegra194-cpufreq.c
···
 	.set_cpu_ndiv = tegra234_set_cpu_ndiv,
 };
 
-const struct tegra_cpufreq_soc tegra234_cpufreq_soc = {
+static const struct tegra_cpufreq_soc tegra234_cpufreq_soc = {
 	.ops = &tegra234_cpufreq_ops,
 	.actmon_cntr_base = 0x9000,
 	.maxcpus_per_cluster = 4,
···
 	.set_cpu_ndiv = tegra194_set_cpu_ndiv,
 };
 
-const struct tegra_cpufreq_soc tegra194_cpufreq_soc = {
+static const struct tegra_cpufreq_soc tegra194_cpufreq_soc = {
 	.ops = &tegra194_cpufreq_ops,
 	.maxcpus_per_cluster = 2,
 };
+5 -7
drivers/cpufreq/tegra20-cpufreq.c
··· 32 32 return ret; 33 33 } 34 34 35 - static void tegra20_cpufreq_put_supported_hw(void *opp_table) 35 + static void tegra20_cpufreq_put_supported_hw(void *opp_token) 36 36 { 37 - dev_pm_opp_put_supported_hw(opp_table); 37 + dev_pm_opp_put_supported_hw((unsigned long) opp_token); 38 38 } 39 39 40 40 static void tegra20_cpufreq_dt_unregister(void *cpufreq_dt) ··· 45 45 static int tegra20_cpufreq_probe(struct platform_device *pdev) 46 46 { 47 47 struct platform_device *cpufreq_dt; 48 - struct opp_table *opp_table; 49 48 struct device *cpu_dev; 50 49 u32 versions[2]; 51 50 int err; ··· 70 71 if (WARN_ON(!cpu_dev)) 71 72 return -ENODEV; 72 73 73 - opp_table = dev_pm_opp_set_supported_hw(cpu_dev, versions, 2); 74 - err = PTR_ERR_OR_ZERO(opp_table); 75 - if (err) { 74 + err = dev_pm_opp_set_supported_hw(cpu_dev, versions, 2); 75 + if (err < 0) { 76 76 dev_err(&pdev->dev, "failed to set supported hw: %d\n", err); 77 77 return err; 78 78 } 79 79 80 80 err = devm_add_action_or_reset(&pdev->dev, 81 81 tegra20_cpufreq_put_supported_hw, 82 - opp_table); 82 + (void *)((unsigned long) err)); 83 83 if (err) 84 84 return err; 85 85
+17 -27
drivers/cpufreq/ti-cpufreq.c
··· 60 60 struct device_node *opp_node; 61 61 struct regmap *syscon; 62 62 const struct ti_cpufreq_soc_data *soc_data; 63 - struct opp_table *opp_table; 64 63 }; 65 64 66 65 static unsigned long amx3_efuse_xlate(struct ti_cpufreq_data *opp_data, ··· 172 173 * seems to always read as 0). 173 174 */ 174 175 175 - static const char * const omap3_reg_names[] = {"cpu0", "vbb"}; 176 + static const char * const omap3_reg_names[] = {"cpu0", "vbb", NULL}; 176 177 177 178 static struct ti_cpufreq_soc_data omap36xx_soc_data = { 178 179 .reg_names = omap3_reg_names, ··· 323 324 { 324 325 u32 version[VERSION_COUNT]; 325 326 const struct of_device_id *match; 326 - struct opp_table *ti_opp_table; 327 327 struct ti_cpufreq_data *opp_data; 328 - const char * const default_reg_names[] = {"vdd", "vbb"}; 328 + const char * const default_reg_names[] = {"vdd", "vbb", NULL}; 329 329 int ret; 330 + struct dev_pm_opp_config config = { 331 + .supported_hw = version, 332 + .supported_hw_count = ARRAY_SIZE(version), 333 + }; 330 334 331 335 match = dev_get_platdata(&pdev->dev); 332 336 if (!match) ··· 372 370 if (ret) 373 371 goto fail_put_node; 374 372 375 - ti_opp_table = dev_pm_opp_set_supported_hw(opp_data->cpu_dev, 376 - version, VERSION_COUNT); 377 - if (IS_ERR(ti_opp_table)) { 378 - dev_err(opp_data->cpu_dev, 379 - "Failed to set supported hardware\n"); 380 - ret = PTR_ERR(ti_opp_table); 373 + if (opp_data->soc_data->multi_regulator) { 374 + if (opp_data->soc_data->reg_names) 375 + config.regulator_names = opp_data->soc_data->reg_names; 376 + else 377 + config.regulator_names = default_reg_names; 378 + } 379 + 380 + ret = dev_pm_opp_set_config(opp_data->cpu_dev, &config); 381 + if (ret < 0) { 382 + dev_err(opp_data->cpu_dev, "Failed to set OPP config\n"); 381 383 goto fail_put_node; 382 384 } 383 385 384 - opp_data->opp_table = ti_opp_table; 385 - 386 - if (opp_data->soc_data->multi_regulator) { 387 - const char * const *reg_names = default_reg_names; 388 - 389 - if (opp_data->soc_data->reg_names) 390 - reg_names = opp_data->soc_data->reg_names; 391 - ti_opp_table = dev_pm_opp_set_regulators(opp_data->cpu_dev, 392 - reg_names, 393 - ARRAY_SIZE(default_reg_names)); 394 - if (IS_ERR(ti_opp_table)) { 395 - dev_pm_opp_put_supported_hw(opp_data->opp_table); 396 - ret = PTR_ERR(ti_opp_table); 397 - goto fail_put_node; 398 - } 399 - } 400 - 401 386 of_node_put(opp_data->opp_node); 387 + 402 388 register_cpufreq_dt: 403 389 platform_device_register_simple("cpufreq-dt", -1, NULL, 0); 404 390
+5 -1
drivers/cpuidle/cpuidle.c
··· 8 8 * This code is licenced under the GPL. 9 9 */ 10 10 11 + #include "linux/percpu-defs.h" 11 12 #include <linux/clockchips.h> 12 13 #include <linux/kernel.h> 13 14 #include <linux/mutex.h> ··· 280 279 281 280 /* Shallower states are enabled, so update. */ 282 281 dev->states_usage[entered_state].above++; 282 + trace_cpu_idle_miss(dev->cpu, entered_state, false); 283 283 break; 284 284 } 285 285 } else if (diff > delay) { ··· 292 290 * Update if a deeper state would have been a 293 291 * better match for the observed idle duration. 294 292 */ 295 - if (diff - delay >= drv->states[i].target_residency_ns) 293 + if (diff - delay >= drv->states[i].target_residency_ns) { 296 294 dev->states_usage[entered_state].below++; 295 + trace_cpu_idle_miss(dev->cpu, entered_state, true); 296 + } 297 297 298 298 break; 299 299 }
+8 -13
drivers/devfreq/exynos-bus.c
··· 33 33 34 34 unsigned long curr_freq; 35 35 36 - struct opp_table *opp_table; 36 + int opp_token; 37 37 struct clk *clk; 38 38 unsigned int ratio; 39 39 }; ··· 161 161 162 162 dev_pm_opp_of_remove_table(dev); 163 163 clk_disable_unprepare(bus->clk); 164 - dev_pm_opp_put_regulators(bus->opp_table); 165 - bus->opp_table = NULL; 164 + dev_pm_opp_put_regulators(bus->opp_token); 166 165 } 167 166 168 167 static void exynos_bus_passive_exit(struct device *dev) ··· 178 179 struct exynos_bus *bus) 179 180 { 180 181 struct device *dev = bus->dev; 181 - struct opp_table *opp_table; 182 - const char *vdd = "vdd"; 182 + const char *supplies[] = { "vdd", NULL }; 183 183 int i, ret, count, size; 184 184 185 - opp_table = dev_pm_opp_set_regulators(dev, &vdd, 1); 186 - if (IS_ERR(opp_table)) { 187 - ret = PTR_ERR(opp_table); 185 + ret = dev_pm_opp_set_regulators(dev, supplies); 186 + if (ret < 0) { 188 187 dev_err(dev, "failed to set regulators %d\n", ret); 189 188 return ret; 190 189 } 191 190 192 - bus->opp_table = opp_table; 191 + bus->opp_token = ret; 193 192 194 193 /* 195 194 * Get the devfreq-event devices to get the current utilization of ··· 233 236 return 0; 234 237 235 238 err_regulator: 236 - dev_pm_opp_put_regulators(bus->opp_table); 237 - bus->opp_table = NULL; 239 + dev_pm_opp_put_regulators(bus->opp_token); 238 240 239 241 return ret; 240 242 } ··· 455 459 dev_pm_opp_of_remove_table(dev); 456 460 clk_disable_unprepare(bus->clk); 457 461 err_reg: 458 - dev_pm_opp_put_regulators(bus->opp_table); 459 - bus->opp_table = NULL; 462 + dev_pm_opp_put_regulators(bus->opp_token); 460 463 461 464 return ret; 462 465 }
+19 -3
drivers/devfreq/tegra30-devfreq.c
··· 821 821 return err; 822 822 } 823 823 824 + static int tegra_devfreq_config_clks_nop(struct device *dev, 825 + struct opp_table *opp_table, 826 + struct dev_pm_opp *opp, void *data, 827 + bool scaling_down) 828 + { 829 + /* We want to skip clk configuration via dev_pm_opp_set_opp() */ 830 + return 0; 831 + } 832 + 824 833 static int tegra_devfreq_probe(struct platform_device *pdev) 825 834 { 826 835 u32 hw_version = BIT(tegra_sku_info.soc_speedo_id); ··· 839 830 unsigned int i; 840 831 long rate; 841 832 int err; 833 + const char *clk_names[] = { "actmon", NULL }; 834 + struct dev_pm_opp_config config = { 835 + .supported_hw = &hw_version, 836 + .supported_hw_count = 1, 837 + .clk_names = clk_names, 838 + .config_clks = tegra_devfreq_config_clks_nop, 839 + }; 842 840 843 841 tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL); 844 842 if (!tegra) ··· 890 874 return err; 891 875 } 892 876 893 - err = devm_pm_opp_set_supported_hw(&pdev->dev, &hw_version, 1); 877 + err = devm_pm_opp_set_config(&pdev->dev, &config); 894 878 if (err) { 895 - dev_err(&pdev->dev, "Failed to set supported HW: %d\n", err); 879 + dev_err(&pdev->dev, "Failed to set OPP config: %d\n", err); 896 880 return err; 897 881 } 898 882 899 - err = devm_pm_opp_of_add_table_noclk(&pdev->dev, 0); 883 + err = devm_pm_opp_of_add_table_indexed(&pdev->dev, 0); 900 884 if (err) { 901 885 dev_err(&pdev->dev, "Failed to add OPP table: %d\n", err); 902 886 return err;
+7 -5
drivers/gpu/drm/lima/lima_devfreq.c
··· 111 111 struct dev_pm_opp *opp; 112 112 unsigned long cur_freq; 113 113 int ret; 114 + const char *regulator_names[] = { "mali", NULL }; 115 + const char *clk_names[] = { "core", NULL }; 116 + struct dev_pm_opp_config config = { 117 + .regulator_names = regulator_names, 118 + .clk_names = clk_names, 119 + }; 114 120 115 121 if (!device_property_present(dev, "operating-points-v2")) 116 122 /* Optional, continue without devfreq */ ··· 124 118 125 119 spin_lock_init(&ldevfreq->lock); 126 120 127 - ret = devm_pm_opp_set_clkname(dev, "core"); 128 - if (ret) 129 - return ret; 130 - 131 - ret = devm_pm_opp_set_regulators(dev, (const char *[]){ "mali" }, 1); 121 + ret = devm_pm_opp_set_config(dev, &config); 132 122 if (ret) { 133 123 /* Continue if the optional regulator is missing */ 134 124 if (ret != -ENODEV)
+1 -2
drivers/gpu/drm/panfrost/panfrost_devfreq.c
···
 		return 0;
 	}
 
-	ret = devm_pm_opp_set_regulators(dev, pfdev->comp->supply_names,
-					 pfdev->comp->num_supplies);
+	ret = devm_pm_opp_set_regulators(dev, pfdev->comp->supply_names);
 	if (ret) {
 		/* Continue if the optional regulator is missing */
 		if (ret != -ENODEV) {
+10 -5
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 626 626 return 0; 627 627 } 628 628 629 - static const char * const default_supplies[] = { "mali" }; 629 + /* 630 + * The OPP core wants the supply names to be NULL terminated, but we need the 631 + * correct num_supplies value for regulator core. Hence, we NULL terminate here 632 + * and then initialize num_supplies with ARRAY_SIZE - 1. 633 + */ 634 + static const char * const default_supplies[] = { "mali", NULL }; 630 635 static const struct panfrost_compatible default_data = { 631 - .num_supplies = ARRAY_SIZE(default_supplies), 636 + .num_supplies = ARRAY_SIZE(default_supplies) - 1, 632 637 .supply_names = default_supplies, 633 638 .num_pm_domains = 1, /* optional */ 634 639 .pm_domain_names = NULL, 635 640 }; 636 641 637 642 static const struct panfrost_compatible amlogic_data = { 638 - .num_supplies = ARRAY_SIZE(default_supplies), 643 + .num_supplies = ARRAY_SIZE(default_supplies) - 1, 639 644 .supply_names = default_supplies, 640 645 .vendor_quirk = panfrost_gpu_amlogic_quirk, 641 646 }; 642 647 643 - static const char * const mediatek_mt8183_supplies[] = { "mali", "sram" }; 648 + static const char * const mediatek_mt8183_supplies[] = { "mali", "sram", NULL }; 644 649 static const char * const mediatek_mt8183_pm_domains[] = { "core0", "core1", "core2" }; 645 650 static const struct panfrost_compatible mediatek_mt8183_data = { 646 - .num_supplies = ARRAY_SIZE(mediatek_mt8183_supplies), 651 + .num_supplies = ARRAY_SIZE(mediatek_mt8183_supplies) - 1, 647 652 .supply_names = mediatek_mt8183_supplies, 648 653 .num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains), 649 654 .pm_domain_names = mediatek_mt8183_pm_domains,
+5 -5
drivers/media/platform/qcom/venus/pm_helpers.c
··· 875 875 } 876 876 877 877 skip_pmdomains: 878 - if (!core->has_opp_table) 878 + if (!core->res->opp_pmdomain) 879 879 return 0; 880 880 881 881 /* Attach the power domain for setting performance state */ ··· 1007 1007 if (ret) 1008 1008 return ret; 1009 1009 1010 + ret = vcodec_domains_get(core); 1011 + if (ret) 1012 + return ret; 1013 + 1010 1014 if (core->res->opp_pmdomain) { 1011 1015 ret = devm_pm_opp_of_add_table(dev); 1012 1016 if (!ret) { ··· 1020 1016 return ret; 1021 1017 } 1022 1018 } 1023 - 1024 - ret = vcodec_domains_get(core); 1025 - if (ret) 1026 - return ret; 1027 1019 1028 1020 return 0; 1029 1021 }
+5 -6
drivers/memory/tegra/tegra124-emc.c
··· 1395 1395 static int tegra_emc_opp_table_init(struct tegra_emc *emc) 1396 1396 { 1397 1397 u32 hw_version = BIT(tegra_sku_info.soc_speedo_id); 1398 - struct opp_table *hw_opp_table; 1399 - int err; 1398 + int opp_token, err; 1400 1399 1401 - hw_opp_table = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1); 1402 - err = PTR_ERR_OR_ZERO(hw_opp_table); 1403 - if (err) { 1400 + err = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1); 1401 + if (err < 0) { 1404 1402 dev_err(emc->dev, "failed to set OPP supported HW: %d\n", err); 1405 1403 return err; 1406 1404 } 1405 + opp_token = err; 1407 1406 1408 1407 err = dev_pm_opp_of_add_table(emc->dev); 1409 1408 if (err) { ··· 1429 1430 remove_table: 1430 1431 dev_pm_opp_of_remove_table(emc->dev); 1431 1432 put_hw_table: 1432 - dev_pm_opp_put_supported_hw(hw_opp_table); 1433 + dev_pm_opp_put_supported_hw(opp_token); 1433 1434 1434 1435 return err; 1435 1436 }
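The tegra124-emc change tracks the reworked OPP interface: `dev_pm_opp_set_supported_hw()` now returns a small integer token (backed by the xarray ID allocator added in core.c) instead of an `opp_table` pointer, so a non-negative return doubles as the token and a negative one is the errno. A hedged userspace sketch of that call convention, with a hypothetical fixed-size allocator standing in for the xarray:

```c
#include <errno.h>
#include <stdbool.h>

#define MAX_TOKENS 8

/* Hypothetical token table standing in for the kernel's xarray allocator. */
static bool token_used[MAX_TOKENS];

/* Returns a non-negative token on success, -ENOSPC when exhausted. */
static int config_set(void)
{
	for (int i = 0; i < MAX_TOKENS; i++) {
		if (!token_used[i]) {
			token_used[i] = true;
			return i;
		}
	}
	return -ENOSPC;
}

/* Releases a token previously handed out by config_set(). */
static void config_put(int token)
{
	if (token >= 0 && token < MAX_TOKENS)
		token_used[token] = false;
}
```

Callers follow the same shape as the hunk: check `err < 0` first, then stash the return value as the token and pass it to the matching put() on teardown.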
+794 -787
drivers/opp/core.c
··· 13 13 #include <linux/clk.h> 14 14 #include <linux/errno.h> 15 15 #include <linux/err.h> 16 - #include <linux/slab.h> 17 16 #include <linux/device.h> 18 17 #include <linux/export.h> 19 18 #include <linux/pm_domain.h> 20 19 #include <linux/regulator/consumer.h> 20 + #include <linux/slab.h> 21 + #include <linux/xarray.h> 21 22 22 23 #include "opp.h" 23 24 ··· 36 35 DEFINE_MUTEX(opp_table_lock); 37 36 /* Flag indicating that opp_tables list is being updated at the moment */ 38 37 static bool opp_tables_busy; 38 + 39 + /* OPP ID allocator */ 40 + static DEFINE_XARRAY_ALLOC1(opp_configs); 39 41 40 42 static bool _find_opp_dev(const struct device *dev, struct opp_table *opp_table) 41 43 { ··· 97 93 return opp_table; 98 94 } 99 95 96 + /* 97 + * Returns true if multiple clocks aren't there, else returns false with WARN. 98 + * 99 + * We don't force clk_count == 1 here as there are users who don't have a clock 100 + * representation in the OPP table and manage the clock configuration themselves 101 + * in a platform-specific way. 102 + */ 103 + static bool assert_single_clk(struct opp_table *opp_table) 104 + { 105 + return !WARN_ON(opp_table->clk_count > 1); 106 + } 107 + 100 108 /** 101 109 * dev_pm_opp_get_voltage() - Gets the voltage corresponding to an opp 102 110 * @opp: opp for which voltage has to be returned for ··· 128 112 return opp->supplies[0].u_volt; 129 113 } 130 114 EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage); 115 + 116 + /** 117 + * dev_pm_opp_get_supplies() - Gets the supply information corresponding to an opp 118 + * @opp: opp for which the supply information has to be returned 119 + * @supplies: Placeholder for copying the supply information. 120 + * 121 + * Return: negative error number on failure, 0 otherwise on success after 122 + * setting @supplies. 123 + * 124 + * This can be used for devices with any number of power supplies. The caller 125 + * must ensure that the @supplies array contains space for each regulator. 
126 + */ 127 + int dev_pm_opp_get_supplies(struct dev_pm_opp *opp, 128 + struct dev_pm_opp_supply *supplies) 129 + { 130 + if (IS_ERR_OR_NULL(opp) || !supplies) { 131 + pr_err("%s: Invalid parameters\n", __func__); 132 + return -EINVAL; 133 + } 134 + 135 + memcpy(supplies, opp->supplies, 136 + sizeof(*supplies) * opp->opp_table->regulator_count); 137 + return 0; 138 + } 139 + EXPORT_SYMBOL_GPL(dev_pm_opp_get_supplies); 131 140 132 141 /** 133 142 * dev_pm_opp_get_power() - Gets the power corresponding to an opp ··· 193 152 return 0; 194 153 } 195 154 196 - return opp->rate; 155 + if (!assert_single_clk(opp->opp_table)) 156 + return 0; 157 + 158 + return opp->rates[0]; 197 159 } 198 160 EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq); 199 161 ··· 442 398 } 443 399 EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count); 444 400 401 + /* Helpers to read keys */ 402 + static unsigned long _read_freq(struct dev_pm_opp *opp, int index) 403 + { 404 + return opp->rates[0]; 405 + } 406 + 407 + static unsigned long _read_level(struct dev_pm_opp *opp, int index) 408 + { 409 + return opp->level; 410 + } 411 + 412 + static unsigned long _read_bw(struct dev_pm_opp *opp, int index) 413 + { 414 + return opp->bandwidth[index].peak; 415 + } 416 + 417 + /* Generic comparison helpers */ 418 + static bool _compare_exact(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp, 419 + unsigned long opp_key, unsigned long key) 420 + { 421 + if (opp_key == key) { 422 + *opp = temp_opp; 423 + return true; 424 + } 425 + 426 + return false; 427 + } 428 + 429 + static bool _compare_ceil(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp, 430 + unsigned long opp_key, unsigned long key) 431 + { 432 + if (opp_key >= key) { 433 + *opp = temp_opp; 434 + return true; 435 + } 436 + 437 + return false; 438 + } 439 + 440 + static bool _compare_floor(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp, 441 + unsigned long opp_key, unsigned long key) 442 + { 443 + if (opp_key > key) 444 + return true; 445 + 446 + *opp = 
temp_opp; 447 + return false; 448 + } 449 + 450 + /* Generic key finding helpers */ 451 + static struct dev_pm_opp *_opp_table_find_key(struct opp_table *opp_table, 452 + unsigned long *key, int index, bool available, 453 + unsigned long (*read)(struct dev_pm_opp *opp, int index), 454 + bool (*compare)(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp, 455 + unsigned long opp_key, unsigned long key), 456 + bool (*assert)(struct opp_table *opp_table)) 457 + { 458 + struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 459 + 460 + /* Assert that the requirement is met */ 461 + if (assert && !assert(opp_table)) 462 + return ERR_PTR(-EINVAL); 463 + 464 + mutex_lock(&opp_table->lock); 465 + 466 + list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 467 + if (temp_opp->available == available) { 468 + if (compare(&opp, temp_opp, read(temp_opp, index), *key)) 469 + break; 470 + } 471 + } 472 + 473 + /* Increment the reference count of OPP */ 474 + if (!IS_ERR(opp)) { 475 + *key = read(opp, index); 476 + dev_pm_opp_get(opp); 477 + } 478 + 479 + mutex_unlock(&opp_table->lock); 480 + 481 + return opp; 482 + } 483 + 484 + static struct dev_pm_opp * 485 + _find_key(struct device *dev, unsigned long *key, int index, bool available, 486 + unsigned long (*read)(struct dev_pm_opp *opp, int index), 487 + bool (*compare)(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp, 488 + unsigned long opp_key, unsigned long key), 489 + bool (*assert)(struct opp_table *opp_table)) 490 + { 491 + struct opp_table *opp_table; 492 + struct dev_pm_opp *opp; 493 + 494 + opp_table = _find_opp_table(dev); 495 + if (IS_ERR(opp_table)) { 496 + dev_err(dev, "%s: OPP table not found (%ld)\n", __func__, 497 + PTR_ERR(opp_table)); 498 + return ERR_CAST(opp_table); 499 + } 500 + 501 + opp = _opp_table_find_key(opp_table, key, index, available, read, 502 + compare, assert); 503 + 504 + dev_pm_opp_put_opp_table(opp_table); 505 + 506 + return opp; 507 + } 508 + 509 + static struct dev_pm_opp 
*_find_key_exact(struct device *dev, 510 + unsigned long key, int index, bool available, 511 + unsigned long (*read)(struct dev_pm_opp *opp, int index), 512 + bool (*assert)(struct opp_table *opp_table)) 513 + { 514 + /* 515 + * The value of key will be updated here, but will be ignored as the 516 + * caller doesn't need it. 517 + */ 518 + return _find_key(dev, &key, index, available, read, _compare_exact, 519 + assert); 520 + } 521 + 522 + static struct dev_pm_opp *_opp_table_find_key_ceil(struct opp_table *opp_table, 523 + unsigned long *key, int index, bool available, 524 + unsigned long (*read)(struct dev_pm_opp *opp, int index), 525 + bool (*assert)(struct opp_table *opp_table)) 526 + { 527 + return _opp_table_find_key(opp_table, key, index, available, read, 528 + _compare_ceil, assert); 529 + } 530 + 531 + static struct dev_pm_opp *_find_key_ceil(struct device *dev, unsigned long *key, 532 + int index, bool available, 533 + unsigned long (*read)(struct dev_pm_opp *opp, int index), 534 + bool (*assert)(struct opp_table *opp_table)) 535 + { 536 + return _find_key(dev, key, index, available, read, _compare_ceil, 537 + assert); 538 + } 539 + 540 + static struct dev_pm_opp *_find_key_floor(struct device *dev, 541 + unsigned long *key, int index, bool available, 542 + unsigned long (*read)(struct dev_pm_opp *opp, int index), 543 + bool (*assert)(struct opp_table *opp_table)) 544 + { 545 + return _find_key(dev, key, index, available, read, _compare_floor, 546 + assert); 547 + } 548 + 445 549 /** 446 550 * dev_pm_opp_find_freq_exact() - search for an exact frequency 447 551 * @dev: device for which we do this operation ··· 614 422 * use. 
615 423 */ 616 424 struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev, 617 - unsigned long freq, 618 - bool available) 425 + unsigned long freq, bool available) 619 426 { 620 - struct opp_table *opp_table; 621 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 622 - 623 - opp_table = _find_opp_table(dev); 624 - if (IS_ERR(opp_table)) { 625 - int r = PTR_ERR(opp_table); 626 - 627 - dev_err(dev, "%s: OPP table not found (%d)\n", __func__, r); 628 - return ERR_PTR(r); 629 - } 630 - 631 - mutex_lock(&opp_table->lock); 632 - 633 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 634 - if (temp_opp->available == available && 635 - temp_opp->rate == freq) { 636 - opp = temp_opp; 637 - 638 - /* Increment the reference count of OPP */ 639 - dev_pm_opp_get(opp); 640 - break; 641 - } 642 - } 643 - 644 - mutex_unlock(&opp_table->lock); 645 - dev_pm_opp_put_opp_table(opp_table); 646 - 647 - return opp; 427 + return _find_key_exact(dev, freq, 0, available, _read_freq, 428 + assert_single_clk); 648 429 } 649 430 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact); 650 431 651 432 static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table, 652 433 unsigned long *freq) 653 434 { 654 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 655 - 656 - mutex_lock(&opp_table->lock); 657 - 658 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 659 - if (temp_opp->available && temp_opp->rate >= *freq) { 660 - opp = temp_opp; 661 - *freq = opp->rate; 662 - 663 - /* Increment the reference count of OPP */ 664 - dev_pm_opp_get(opp); 665 - break; 666 - } 667 - } 668 - 669 - mutex_unlock(&opp_table->lock); 670 - 671 - return opp; 435 + return _opp_table_find_key_ceil(opp_table, freq, 0, true, _read_freq, 436 + assert_single_clk); 672 437 } 673 438 674 439 /** ··· 649 500 struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev, 650 501 unsigned long *freq) 651 502 { 652 - struct opp_table *opp_table; 653 - struct dev_pm_opp *opp; 
654 - 655 - if (!dev || !freq) { 656 - dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq); 657 - return ERR_PTR(-EINVAL); 658 - } 659 - 660 - opp_table = _find_opp_table(dev); 661 - if (IS_ERR(opp_table)) 662 - return ERR_CAST(opp_table); 663 - 664 - opp = _find_freq_ceil(opp_table, freq); 665 - 666 - dev_pm_opp_put_opp_table(opp_table); 667 - 668 - return opp; 503 + return _find_key_ceil(dev, freq, 0, true, _read_freq, assert_single_clk); 669 504 } 670 505 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil); 671 506 ··· 674 541 struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev, 675 542 unsigned long *freq) 676 543 { 677 - struct opp_table *opp_table; 678 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 679 - 680 - if (!dev || !freq) { 681 - dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq); 682 - return ERR_PTR(-EINVAL); 683 - } 684 - 685 - opp_table = _find_opp_table(dev); 686 - if (IS_ERR(opp_table)) 687 - return ERR_CAST(opp_table); 688 - 689 - mutex_lock(&opp_table->lock); 690 - 691 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 692 - if (temp_opp->available) { 693 - /* go to the next node, before choosing prev */ 694 - if (temp_opp->rate > *freq) 695 - break; 696 - else 697 - opp = temp_opp; 698 - } 699 - } 700 - 701 - /* Increment the reference count of OPP */ 702 - if (!IS_ERR(opp)) 703 - dev_pm_opp_get(opp); 704 - mutex_unlock(&opp_table->lock); 705 - dev_pm_opp_put_opp_table(opp_table); 706 - 707 - if (!IS_ERR(opp)) 708 - *freq = opp->rate; 709 - 710 - return opp; 544 + return _find_key_floor(dev, freq, 0, true, _read_freq, assert_single_clk); 711 545 } 712 546 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor); 713 - 714 - /** 715 - * dev_pm_opp_find_freq_ceil_by_volt() - Find OPP with highest frequency for 716 - * target voltage. 717 - * @dev: Device for which we do this operation. 718 - * @u_volt: Target voltage. 719 - * 720 - * Search for OPP with highest (ceil) frequency and has voltage <= u_volt. 
721 - * 722 - * Return: matching *opp, else returns ERR_PTR in case of error which should be 723 - * handled using IS_ERR. 724 - * 725 - * Error return values can be: 726 - * EINVAL: bad parameters 727 - * 728 - * The callers are required to call dev_pm_opp_put() for the returned OPP after 729 - * use. 730 - */ 731 - struct dev_pm_opp *dev_pm_opp_find_freq_ceil_by_volt(struct device *dev, 732 - unsigned long u_volt) 733 - { 734 - struct opp_table *opp_table; 735 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 736 - 737 - if (!dev || !u_volt) { 738 - dev_err(dev, "%s: Invalid argument volt=%lu\n", __func__, 739 - u_volt); 740 - return ERR_PTR(-EINVAL); 741 - } 742 - 743 - opp_table = _find_opp_table(dev); 744 - if (IS_ERR(opp_table)) 745 - return ERR_CAST(opp_table); 746 - 747 - mutex_lock(&opp_table->lock); 748 - 749 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 750 - if (temp_opp->available) { 751 - if (temp_opp->supplies[0].u_volt > u_volt) 752 - break; 753 - opp = temp_opp; 754 - } 755 - } 756 - 757 - /* Increment the reference count of OPP */ 758 - if (!IS_ERR(opp)) 759 - dev_pm_opp_get(opp); 760 - 761 - mutex_unlock(&opp_table->lock); 762 - dev_pm_opp_put_opp_table(opp_table); 763 - 764 - return opp; 765 - } 766 - EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil_by_volt); 767 547 768 548 /** 769 549 * dev_pm_opp_find_level_exact() - search for an exact level ··· 696 650 struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev, 697 651 unsigned int level) 698 652 { 699 - struct opp_table *opp_table; 700 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 701 - 702 - opp_table = _find_opp_table(dev); 703 - if (IS_ERR(opp_table)) { 704 - int r = PTR_ERR(opp_table); 705 - 706 - dev_err(dev, "%s: OPP table not found (%d)\n", __func__, r); 707 - return ERR_PTR(r); 708 - } 709 - 710 - mutex_lock(&opp_table->lock); 711 - 712 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 713 - if (temp_opp->level == level) { 714 - opp = 
temp_opp; 715 - 716 - /* Increment the reference count of OPP */ 717 - dev_pm_opp_get(opp); 718 - break; 719 - } 720 - } 721 - 722 - mutex_unlock(&opp_table->lock); 723 - dev_pm_opp_put_opp_table(opp_table); 724 - 725 - return opp; 653 + return _find_key_exact(dev, level, 0, true, _read_level, NULL); 726 654 } 727 655 EXPORT_SYMBOL_GPL(dev_pm_opp_find_level_exact); 728 656 ··· 718 698 struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev, 719 699 unsigned int *level) 720 700 { 721 - struct opp_table *opp_table; 722 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 701 + unsigned long temp = *level; 702 + struct dev_pm_opp *opp; 723 703 724 - opp_table = _find_opp_table(dev); 725 - if (IS_ERR(opp_table)) { 726 - int r = PTR_ERR(opp_table); 727 - 728 - dev_err(dev, "%s: OPP table not found (%d)\n", __func__, r); 729 - return ERR_PTR(r); 730 - } 731 - 732 - mutex_lock(&opp_table->lock); 733 - 734 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 735 - if (temp_opp->available && temp_opp->level >= *level) { 736 - opp = temp_opp; 737 - *level = opp->level; 738 - 739 - /* Increment the reference count of OPP */ 740 - dev_pm_opp_get(opp); 741 - break; 742 - } 743 - } 744 - 745 - mutex_unlock(&opp_table->lock); 746 - dev_pm_opp_put_opp_table(opp_table); 747 - 704 + opp = _find_key_ceil(dev, &temp, 0, true, _read_level, NULL); 705 + *level = temp; 748 706 return opp; 749 707 } 750 708 EXPORT_SYMBOL_GPL(dev_pm_opp_find_level_ceil); ··· 730 732 /** 731 733 * dev_pm_opp_find_bw_ceil() - Search for a rounded ceil bandwidth 732 734 * @dev: device for which we do this operation 733 - * @freq: start bandwidth 735 + * @bw: start bandwidth 734 736 * @index: which bandwidth to compare, in case of OPPs with several values 735 737 * 736 738 * Search for the matching floor *available* OPP from a starting bandwidth ··· 746 748 * The callers are required to call dev_pm_opp_put() for the returned OPP after 747 749 * use. 
748 750 */ 749 - struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev, 750 - unsigned int *bw, int index) 751 + struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev, unsigned int *bw, 752 + int index) 751 753 { 752 - struct opp_table *opp_table; 753 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 754 + unsigned long temp = *bw; 755 + struct dev_pm_opp *opp; 754 756 755 - if (!dev || !bw) { 756 - dev_err(dev, "%s: Invalid argument bw=%p\n", __func__, bw); 757 - return ERR_PTR(-EINVAL); 758 - } 759 - 760 - opp_table = _find_opp_table(dev); 761 - if (IS_ERR(opp_table)) 762 - return ERR_CAST(opp_table); 763 - 764 - if (index >= opp_table->path_count) 765 - return ERR_PTR(-EINVAL); 766 - 767 - mutex_lock(&opp_table->lock); 768 - 769 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 770 - if (temp_opp->available && temp_opp->bandwidth) { 771 - if (temp_opp->bandwidth[index].peak >= *bw) { 772 - opp = temp_opp; 773 - *bw = opp->bandwidth[index].peak; 774 - 775 - /* Increment the reference count of OPP */ 776 - dev_pm_opp_get(opp); 777 - break; 778 - } 779 - } 780 - } 781 - 782 - mutex_unlock(&opp_table->lock); 783 - dev_pm_opp_put_opp_table(opp_table); 784 - 757 + opp = _find_key_ceil(dev, &temp, index, true, _read_bw, NULL); 758 + *bw = temp; 785 759 return opp; 786 760 } 787 761 EXPORT_SYMBOL_GPL(dev_pm_opp_find_bw_ceil); ··· 761 791 /** 762 792 * dev_pm_opp_find_bw_floor() - Search for a rounded floor bandwidth 763 793 * @dev: device for which we do this operation 764 - * @freq: start bandwidth 794 + * @bw: start bandwidth 765 795 * @index: which bandwidth to compare, in case of OPPs with several values 766 796 * 767 797 * Search for the matching floor *available* OPP from a starting bandwidth ··· 780 810 struct dev_pm_opp *dev_pm_opp_find_bw_floor(struct device *dev, 781 811 unsigned int *bw, int index) 782 812 { 783 - struct opp_table *opp_table; 784 - struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE); 813 + unsigned long temp 
= *bw; 814 + struct dev_pm_opp *opp; 785 815 786 - if (!dev || !bw) { 787 - dev_err(dev, "%s: Invalid argument bw=%p\n", __func__, bw); 788 - return ERR_PTR(-EINVAL); 789 - } 790 - 791 - opp_table = _find_opp_table(dev); 792 - if (IS_ERR(opp_table)) 793 - return ERR_CAST(opp_table); 794 - 795 - if (index >= opp_table->path_count) 796 - return ERR_PTR(-EINVAL); 797 - 798 - mutex_lock(&opp_table->lock); 799 - 800 - list_for_each_entry(temp_opp, &opp_table->opp_list, node) { 801 - if (temp_opp->available && temp_opp->bandwidth) { 802 - /* go to the next node, before choosing prev */ 803 - if (temp_opp->bandwidth[index].peak > *bw) 804 - break; 805 - opp = temp_opp; 806 - } 807 - } 808 - 809 - /* Increment the reference count of OPP */ 810 - if (!IS_ERR(opp)) 811 - dev_pm_opp_get(opp); 812 - mutex_unlock(&opp_table->lock); 813 - dev_pm_opp_put_opp_table(opp_table); 814 - 815 - if (!IS_ERR(opp)) 816 - *bw = opp->bandwidth[index].peak; 817 - 816 + opp = _find_key_floor(dev, &temp, index, true, _read_bw, NULL); 817 + *bw = temp; 818 818 return opp; 819 819 } 820 820 EXPORT_SYMBOL_GPL(dev_pm_opp_find_bw_floor); ··· 814 874 return ret; 815 875 } 816 876 817 - static inline int _generic_set_opp_clk_only(struct device *dev, struct clk *clk, 818 - unsigned long freq) 877 + static int 878 + _opp_config_clk_single(struct device *dev, struct opp_table *opp_table, 879 + struct dev_pm_opp *opp, void *data, bool scaling_down) 819 880 { 881 + unsigned long *target = data; 882 + unsigned long freq; 820 883 int ret; 821 884 822 - /* We may reach here for devices which don't change frequency */ 823 - if (IS_ERR(clk)) 824 - return 0; 885 + /* One of target and opp must be available */ 886 + if (target) { 887 + freq = *target; 888 + } else if (opp) { 889 + freq = opp->rates[0]; 890 + } else { 891 + WARN_ON(1); 892 + return -EINVAL; 893 + } 825 894 826 - ret = clk_set_rate(clk, freq); 895 + ret = clk_set_rate(opp_table->clk, freq); 827 896 if (ret) { 828 897 dev_err(dev, "%s: failed to set 
clock rate: %d\n", __func__, 829 898 ret); 899 + } else { 900 + opp_table->rate_clk_single = freq; 830 901 } 831 902 832 903 return ret; 833 904 } 834 905 835 - static int _generic_set_opp_regulator(struct opp_table *opp_table, 836 - struct device *dev, 837 - struct dev_pm_opp *opp, 838 - unsigned long freq, 839 - int scaling_down) 906 + /* 907 + * Simple implementation for configuring multiple clocks. Configure clocks in 908 + * the order in which they are present in the array while scaling up. 909 + */ 910 + int dev_pm_opp_config_clks_simple(struct device *dev, 911 + struct opp_table *opp_table, struct dev_pm_opp *opp, void *data, 912 + bool scaling_down) 840 913 { 841 - struct regulator *reg = opp_table->regulators[0]; 842 - struct dev_pm_opp *old_opp = opp_table->current_opp; 914 + int ret, i; 915 + 916 + if (scaling_down) { 917 + for (i = opp_table->clk_count - 1; i >= 0; i--) { 918 + ret = clk_set_rate(opp_table->clks[i], opp->rates[i]); 919 + if (ret) { 920 + dev_err(dev, "%s: failed to set clock rate: %d\n", __func__, 921 + ret); 922 + return ret; 923 + } 924 + } 925 + } else { 926 + for (i = 0; i < opp_table->clk_count; i++) { 927 + ret = clk_set_rate(opp_table->clks[i], opp->rates[i]); 928 + if (ret) { 929 + dev_err(dev, "%s: failed to set clock rate: %d\n", __func__, 930 + ret); 931 + return ret; 932 + } 933 + } 934 + } 935 + 936 + return 0; 937 + } 938 + EXPORT_SYMBOL_GPL(dev_pm_opp_config_clks_simple); 939 + 940 + static int _opp_config_regulator_single(struct device *dev, 941 + struct dev_pm_opp *old_opp, struct dev_pm_opp *new_opp, 942 + struct regulator **regulators, unsigned int count) 943 + { 944 + struct regulator *reg = regulators[0]; 843 945 int ret; 844 946 845 947 /* This function only supports single regulator per device */ 846 - if (WARN_ON(opp_table->regulator_count > 1)) { 948 + if (WARN_ON(count > 1)) { 847 949 dev_err(dev, "multiple regulators are not supported\n"); 848 950 return -EINVAL; 849 951 } 850 952 851 - /* Scaling up? 
Scale voltage before frequency */ 852 - if (!scaling_down) { 853 - ret = _set_opp_voltage(dev, reg, opp->supplies); 854 - if (ret) 855 - goto restore_voltage; 856 - } 857 - 858 - /* Change frequency */ 859 - ret = _generic_set_opp_clk_only(dev, opp_table->clk, freq); 953 + ret = _set_opp_voltage(dev, reg, new_opp->supplies); 860 954 if (ret) 861 - goto restore_voltage; 862 - 863 - /* Scaling down? Scale voltage after frequency */ 864 - if (scaling_down) { 865 - ret = _set_opp_voltage(dev, reg, opp->supplies); 866 - if (ret) 867 - goto restore_freq; 868 - } 955 + return ret; 869 956 870 957 /* 871 958 * Enable the regulator after setting its voltages, otherwise it breaks 872 959 * some boot-enabled regulators. 873 960 */ 874 - if (unlikely(!opp_table->enabled)) { 961 + if (unlikely(!new_opp->opp_table->enabled)) { 875 962 ret = regulator_enable(reg); 876 963 if (ret < 0) 877 964 dev_warn(dev, "Failed to enable regulator: %d", ret); 878 965 } 879 966 880 967 return 0; 881 - 882 - restore_freq: 883 - if (_generic_set_opp_clk_only(dev, opp_table->clk, old_opp->rate)) 884 - dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n", 885 - __func__, old_opp->rate); 886 - restore_voltage: 887 - /* This shouldn't harm even if the voltages weren't updated earlier */ 888 - _set_opp_voltage(dev, reg, old_opp->supplies); 889 - 890 - return ret; 891 968 } 892 969 893 970 static int _set_opp_bw(const struct opp_table *opp_table, ··· 935 978 return 0; 936 979 } 937 980 938 - static int _set_opp_custom(const struct opp_table *opp_table, 939 - struct device *dev, struct dev_pm_opp *opp, 940 - unsigned long freq) 941 - { 942 - struct dev_pm_set_opp_data *data = opp_table->set_opp_data; 943 - struct dev_pm_opp *old_opp = opp_table->current_opp; 944 - int size; 945 - 946 - /* 947 - * We support this only if dev_pm_opp_set_regulators() was called 948 - * earlier. 
949 - */ 950 - if (opp_table->sod_supplies) { 951 - size = sizeof(*old_opp->supplies) * opp_table->regulator_count; 952 - memcpy(data->old_opp.supplies, old_opp->supplies, size); 953 - memcpy(data->new_opp.supplies, opp->supplies, size); 954 - data->regulator_count = opp_table->regulator_count; 955 - } else { 956 - data->regulator_count = 0; 957 - } 958 - 959 - data->regulators = opp_table->regulators; 960 - data->clk = opp_table->clk; 961 - data->dev = dev; 962 - data->old_opp.rate = old_opp->rate; 963 - data->new_opp.rate = freq; 964 - 965 - return opp_table->set_opp(data); 966 - } 967 - 968 981 static int _set_required_opp(struct device *dev, struct device *pd_dev, 969 982 struct dev_pm_opp *opp, int i) 970 983 { ··· 946 1019 947 1020 ret = dev_pm_genpd_set_performance_state(pd_dev, pstate); 948 1021 if (ret) { 949 - dev_err(dev, "Failed to set performance rate of %s: %d (%d)\n", 1022 + dev_err(dev, "Failed to set performance state of %s: %d (%d)\n", 950 1023 dev_name(pd_dev), pstate, ret); 951 1024 } 952 1025 ··· 1065 1138 } 1066 1139 1067 1140 static int _set_opp(struct device *dev, struct opp_table *opp_table, 1068 - struct dev_pm_opp *opp, unsigned long freq) 1141 + struct dev_pm_opp *opp, void *clk_data, bool forced) 1069 1142 { 1070 1143 struct dev_pm_opp *old_opp; 1071 1144 int scaling_down, ret; ··· 1080 1153 old_opp = opp_table->current_opp; 1081 1154 1082 1155 /* Return early if nothing to do */ 1083 - if (old_opp == opp && opp_table->current_rate == freq && 1084 - opp_table->enabled) { 1156 + if (!forced && old_opp == opp && opp_table->enabled) { 1085 1157 dev_dbg(dev, "%s: OPPs are same, nothing to do\n", __func__); 1086 1158 return 0; 1087 1159 } 1088 1160 1089 1161 dev_dbg(dev, "%s: switching OPP: Freq %lu -> %lu Hz, Level %u -> %u, Bw %u -> %u\n", 1090 - __func__, opp_table->current_rate, freq, old_opp->level, 1162 + __func__, old_opp->rates[0], opp->rates[0], old_opp->level, 1091 1163 opp->level, old_opp->bandwidth ? 
old_opp->bandwidth[0].peak : 0, 1092 1164 opp->bandwidth ? opp->bandwidth[0].peak : 0); 1093 1165 1094 - scaling_down = _opp_compare_key(old_opp, opp); 1166 + scaling_down = _opp_compare_key(opp_table, old_opp, opp); 1095 1167 if (scaling_down == -1) 1096 1168 scaling_down = 0; 1097 1169 ··· 1107 1181 dev_err(dev, "Failed to set bw: %d\n", ret); 1108 1182 return ret; 1109 1183 } 1184 + 1185 + if (opp_table->config_regulators) { 1186 + ret = opp_table->config_regulators(dev, old_opp, opp, 1187 + opp_table->regulators, 1188 + opp_table->regulator_count); 1189 + if (ret) { 1190 + dev_err(dev, "Failed to set regulator voltages: %d\n", 1191 + ret); 1192 + return ret; 1193 + } 1194 + } 1110 1195 } 1111 1196 1112 - if (opp_table->set_opp) { 1113 - ret = _set_opp_custom(opp_table, dev, opp, freq); 1114 - } else if (opp_table->regulators) { 1115 - ret = _generic_set_opp_regulator(opp_table, dev, opp, freq, 1116 - scaling_down); 1117 - } else { 1118 - /* Only frequency scaling */ 1119 - ret = _generic_set_opp_clk_only(dev, opp_table->clk, freq); 1197 + if (opp_table->config_clks) { 1198 + ret = opp_table->config_clks(dev, opp_table, opp, clk_data, scaling_down); 1199 + if (ret) 1200 + return ret; 1120 1201 } 1121 - 1122 - if (ret) 1123 - return ret; 1124 1202 1125 1203 /* Scaling down? 
Configure required OPPs after frequency */ 1126 1204 if (scaling_down) { 1205 + if (opp_table->config_regulators) { 1206 + ret = opp_table->config_regulators(dev, old_opp, opp, 1207 + opp_table->regulators, 1208 + opp_table->regulator_count); 1209 + if (ret) { 1210 + dev_err(dev, "Failed to set regulator voltages: %d\n", 1211 + ret); 1212 + return ret; 1213 + } 1214 + } 1215 + 1127 1216 ret = _set_opp_bw(opp_table, opp, dev); 1128 1217 if (ret) { 1129 1218 dev_err(dev, "Failed to set bw: %d\n", ret); ··· 1158 1217 /* Make sure current_opp doesn't get freed */ 1159 1218 dev_pm_opp_get(opp); 1160 1219 opp_table->current_opp = opp; 1161 - opp_table->current_rate = freq; 1162 1220 1163 1221 return ret; 1164 1222 } ··· 1178 1238 struct opp_table *opp_table; 1179 1239 unsigned long freq = 0, temp_freq; 1180 1240 struct dev_pm_opp *opp = NULL; 1241 + bool forced = false; 1181 1242 int ret; 1182 1243 1183 1244 opp_table = _find_opp_table(dev); ··· 1196 1255 * equivalent to a clk_set_rate() 1197 1256 */ 1198 1257 if (!_get_opp_count(opp_table)) { 1199 - ret = _generic_set_opp_clk_only(dev, opp_table->clk, target_freq); 1258 + ret = opp_table->config_clks(dev, opp_table, NULL, 1259 + &target_freq, false); 1200 1260 goto put_opp_table; 1201 1261 } 1202 1262 ··· 1218 1276 __func__, freq, ret); 1219 1277 goto put_opp_table; 1220 1278 } 1279 + 1280 + /* 1281 + * An OPP entry specifies the highest frequency at which other 1282 + * properties of the OPP entry apply. Even if the new OPP is 1283 + * same as the old one, we may still reach here for a different 1284 + * value of the frequency. In such a case, do not abort but 1285 + * configure the hardware to the desired frequency forcefully. 
1286 + */ 1287 + forced = opp_table->rate_clk_single != target_freq; 1221 1288 } 1222 1289 1223 - ret = _set_opp(dev, opp_table, opp, freq); 1290 + ret = _set_opp(dev, opp_table, opp, &target_freq, forced); 1224 1291 1225 1292 if (target_freq) 1226 1293 dev_pm_opp_put(opp); 1294 + 1227 1295 put_opp_table: 1228 1296 dev_pm_opp_put_opp_table(opp_table); 1229 1297 return ret; ··· 1261 1309 return PTR_ERR(opp_table); 1262 1310 } 1263 1311 1264 - ret = _set_opp(dev, opp_table, opp, opp ? opp->rate : 0); 1312 + ret = _set_opp(dev, opp_table, opp, NULL, false); 1265 1313 dev_pm_opp_put_opp_table(opp_table); 1266 1314 1267 1315 return ret; ··· 1318 1366 INIT_LIST_HEAD(&opp_table->dev_list); 1319 1367 INIT_LIST_HEAD(&opp_table->lazy); 1320 1368 1369 + opp_table->clk = ERR_PTR(-ENODEV); 1370 + 1321 1371 /* Mark regulator count uninitialized */ 1322 1372 opp_table->regulator_count = -1; 1323 1373 ··· 1366 1412 int ret; 1367 1413 1368 1414 /* 1369 - * Return early if we don't need to get clk or we have already tried it 1415 + * Return early if we don't need to get clk or we have already done it 1370 1416 * earlier. 1371 1417 */ 1372 - if (!getclk || IS_ERR(opp_table) || opp_table->clk) 1418 + if (!getclk || IS_ERR(opp_table) || !IS_ERR(opp_table->clk) || 1419 + opp_table->clks) 1373 1420 return opp_table; 1374 1421 1375 1422 /* Find clk for the device */ 1376 1423 opp_table->clk = clk_get(dev, NULL); 1377 1424 1378 1425 ret = PTR_ERR_OR_ZERO(opp_table->clk); 1379 - if (!ret) 1426 + if (!ret) { 1427 + opp_table->config_clks = _opp_config_clk_single; 1428 + opp_table->clk_count = 1; 1380 1429 return opp_table; 1430 + } 1381 1431 1382 1432 if (ret == -ENOENT) { 1433 + /* 1434 + * There are a few platforms which don't want the OPP core to 1435 + * manage the device's clock settings. In such cases neither the 1436 + * platform provides the clks explicitly to us, nor the DT 1437 + * contains a valid clk entry. 
The OPP nodes in DT may still 1438 + * contain "opp-hz" property though, which we need to parse and 1439 + * allow the platform to find an OPP based on freq later on. 1440 + * 1441 + * This is a simple solution to take care of such corner cases, 1442 + * i.e. make the clk_count 1, which lets us allocate space for 1443 + * frequency in opp->rates and also parse the entries in DT. 1444 + */ 1445 + opp_table->clk_count = 1; 1446 + 1383 1447 dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret); 1384 1448 return opp_table; 1385 1449 } ··· 1500 1528 1501 1529 _of_clear_opp_table(opp_table); 1502 1530 1503 - /* Release clk */ 1531 + /* Release automatically acquired single clk */ 1504 1532 if (!IS_ERR(opp_table->clk)) 1505 1533 clk_put(opp_table->clk); 1506 1534 ··· 1553 1581 * frequency/voltage list. 1554 1582 */ 1555 1583 blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_REMOVE, opp); 1556 - _of_opp_free_required_opps(opp_table, opp); 1584 + _of_clear_opp(opp_table, opp); 1557 1585 opp_debug_remove_one(opp); 1558 1586 kfree(opp); 1559 1587 } ··· 1585 1613 if (IS_ERR(opp_table)) 1586 1614 return; 1587 1615 1616 + if (!assert_single_clk(opp_table)) 1617 + goto put_table; 1618 + 1588 1619 mutex_lock(&opp_table->lock); 1589 1620 1590 1621 list_for_each_entry(iter, &opp_table->opp_list, node) { 1591 - if (iter->rate == freq) { 1622 + if (iter->rates[0] == freq) { 1592 1623 opp = iter; 1593 1624 break; 1594 1625 } ··· 1609 1634 __func__, freq); 1610 1635 } 1611 1636 1637 + put_table: 1612 1638 /* Drop the reference taken by _find_opp_table() */ 1613 1639 dev_pm_opp_put_opp_table(opp_table); 1614 1640 } ··· 1696 1720 } 1697 1721 EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic); 1698 1722 1699 - struct dev_pm_opp *_opp_allocate(struct opp_table *table) 1723 + struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table) 1700 1724 { 1701 1725 struct dev_pm_opp *opp; 1702 - int supply_count, supply_size, icc_size; 1726 + int supply_count, supply_size, icc_size, 
clk_size; 1703 1727 1704 1728 /* Allocate space for at least one supply */ 1705 - supply_count = table->regulator_count > 0 ? table->regulator_count : 1; 1729 + supply_count = opp_table->regulator_count > 0 ? 1730 + opp_table->regulator_count : 1; 1706 1731 supply_size = sizeof(*opp->supplies) * supply_count; 1707 - icc_size = sizeof(*opp->bandwidth) * table->path_count; 1732 + clk_size = sizeof(*opp->rates) * opp_table->clk_count; 1733 + icc_size = sizeof(*opp->bandwidth) * opp_table->path_count; 1708 1734 1709 1735 /* allocate new OPP node and supplies structures */ 1710 - opp = kzalloc(sizeof(*opp) + supply_size + icc_size, GFP_KERNEL); 1711 - 1736 + opp = kzalloc(sizeof(*opp) + supply_size + clk_size + icc_size, GFP_KERNEL); 1712 1737 if (!opp) 1713 1738 return NULL; 1714 1739 1715 - /* Put the supplies at the end of the OPP structure as an empty array */ 1740 + /* Put the supplies, bw and clock at the end of the OPP structure */ 1716 1741 opp->supplies = (struct dev_pm_opp_supply *)(opp + 1); 1742 + 1743 + opp->rates = (unsigned long *)(opp->supplies + supply_count); 1744 + 1717 1745 if (icc_size) 1718 - opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp->supplies + supply_count); 1746 + opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp->rates + opp_table->clk_count); 1747 + 1719 1748 INIT_LIST_HEAD(&opp->node); 1720 1749 1721 1750 return opp; ··· 1751 1770 return true; 1752 1771 } 1753 1772 1754 - int _opp_compare_key(struct dev_pm_opp *opp1, struct dev_pm_opp *opp2) 1773 + static int _opp_compare_rate(struct opp_table *opp_table, 1774 + struct dev_pm_opp *opp1, struct dev_pm_opp *opp2) 1755 1775 { 1756 - if (opp1->rate != opp2->rate) 1757 - return opp1->rate < opp2->rate ? -1 : 1; 1758 - if (opp1->bandwidth && opp2->bandwidth && 1759 - opp1->bandwidth[0].peak != opp2->bandwidth[0].peak) 1760 - return opp1->bandwidth[0].peak < opp2->bandwidth[0].peak ? 
-1 : 1; 1776 + int i; 1777 + 1778 + for (i = 0; i < opp_table->clk_count; i++) { 1779 + if (opp1->rates[i] != opp2->rates[i]) 1780 + return opp1->rates[i] < opp2->rates[i] ? -1 : 1; 1781 + } 1782 + 1783 + /* Same rates for both OPPs */ 1784 + return 0; 1785 + } 1786 + 1787 + static int _opp_compare_bw(struct opp_table *opp_table, struct dev_pm_opp *opp1, 1788 + struct dev_pm_opp *opp2) 1789 + { 1790 + int i; 1791 + 1792 + for (i = 0; i < opp_table->path_count; i++) { 1793 + if (opp1->bandwidth[i].peak != opp2->bandwidth[i].peak) 1794 + return opp1->bandwidth[i].peak < opp2->bandwidth[i].peak ? -1 : 1; 1795 + } 1796 + 1797 + /* Same bw for both OPPs */ 1798 + return 0; 1799 + } 1800 + 1801 + /* 1802 + * Returns 1803 + * 0: opp1 == opp2 1804 + * 1: opp1 > opp2 1805 + * -1: opp1 < opp2 1806 + */ 1807 + int _opp_compare_key(struct opp_table *opp_table, struct dev_pm_opp *opp1, 1808 + struct dev_pm_opp *opp2) 1809 + { 1810 + int ret; 1811 + 1812 + ret = _opp_compare_rate(opp_table, opp1, opp2); 1813 + if (ret) 1814 + return ret; 1815 + 1816 + ret = _opp_compare_bw(opp_table, opp1, opp2); 1817 + if (ret) 1818 + return ret; 1819 + 1761 1820 if (opp1->level != opp2->level) 1762 1821 return opp1->level < opp2->level ? -1 : 1; 1822 + 1823 + /* Duplicate OPPs */ 1763 1824 return 0; 1764 1825 } 1765 1826 ··· 1821 1798 * loop. 1822 1799 */ 1823 1800 list_for_each_entry(opp, &opp_table->opp_list, node) { 1824 - opp_cmp = _opp_compare_key(new_opp, opp); 1801 + opp_cmp = _opp_compare_key(opp_table, new_opp, opp); 1825 1802 if (opp_cmp > 0) { 1826 1803 *head = &opp->node; 1827 1804 continue; ··· 1832 1809 1833 1810 /* Duplicate OPPs */ 1834 1811 dev_warn(dev, "%s: duplicate OPPs detected. Existing: freq: %lu, volt: %lu, enabled: %d. 
New: freq: %lu, volt: %lu, enabled: %d\n", 1835 - __func__, opp->rate, opp->supplies[0].u_volt, 1836 - opp->available, new_opp->rate, 1812 + __func__, opp->rates[0], opp->supplies[0].u_volt, 1813 + opp->available, new_opp->rates[0], 1837 1814 new_opp->supplies[0].u_volt, new_opp->available); 1838 1815 1839 1816 /* Should we compare voltages for all regulators here ? */ ··· 1854 1831 1855 1832 opp->available = false; 1856 1833 pr_warn("%s: OPP not supported by required OPP %pOF (%lu)\n", 1857 - __func__, opp->required_opps[i]->np, opp->rate); 1834 + __func__, opp->required_opps[i]->np, opp->rates[0]); 1858 1835 return; 1859 1836 } 1860 1837 } ··· 1870 1847 * should be considered an error by the callers of _opp_add(). 1871 1848 */ 1872 1849 int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, 1873 - struct opp_table *opp_table, bool rate_not_available) 1850 + struct opp_table *opp_table) 1874 1851 { 1875 1852 struct list_head *head; 1876 1853 int ret; ··· 1895 1872 if (!_opp_supported_by_regulators(new_opp, opp_table)) { 1896 1873 new_opp->available = false; 1897 1874 dev_warn(dev, "%s: OPP not supported by regulators (%lu)\n", 1898 - __func__, new_opp->rate); 1875 + __func__, new_opp->rates[0]); 1899 1876 } 1900 1877 1901 1878 /* required-opps not fully initialized yet */ ··· 1936 1913 unsigned long tol; 1937 1914 int ret; 1938 1915 1916 + if (!assert_single_clk(opp_table)) 1917 + return -EINVAL; 1918 + 1939 1919 new_opp = _opp_allocate(opp_table); 1940 1920 if (!new_opp) 1941 1921 return -ENOMEM; 1942 1922 1943 1923 /* populate the opp table */ 1944 - new_opp->rate = freq; 1924 + new_opp->rates[0] = freq; 1945 1925 tol = u_volt * opp_table->voltage_tolerance_v1 / 100; 1946 1926 new_opp->supplies[0].u_volt = u_volt; 1947 1927 new_opp->supplies[0].u_volt_min = u_volt - tol; ··· 1952 1926 new_opp->available = true; 1953 1927 new_opp->dynamic = dynamic; 1954 1928 1955 - ret = _opp_add(dev, new_opp, opp_table, false); 1929 + ret = _opp_add(dev, new_opp, 
opp_table); 1956 1930 if (ret) { 1957 1931 /* Don't return error for duplicate OPPs */ 1958 1932 if (ret == -EBUSY) ··· 1974 1948 } 1975 1949 1976 1950 /** 1977 - * dev_pm_opp_set_supported_hw() - Set supported platforms 1951 + * _opp_set_supported_hw() - Set supported platforms 1978 1952 * @dev: Device for which supported-hw has to be set. 1979 1953 * @versions: Array of hierarchy of versions to match. 1980 1954 * @count: Number of elements in the array. ··· 1984 1958 * OPPs, which are available for those versions, based on its 'opp-supported-hw' 1985 1959 * property. 1986 1960 */ 1987 - struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev, 1988 - const u32 *versions, unsigned int count) 1961 + static int _opp_set_supported_hw(struct opp_table *opp_table, 1962 + const u32 *versions, unsigned int count) 1989 1963 { 1990 - struct opp_table *opp_table; 1991 - 1992 - opp_table = _add_opp_table(dev, false); 1993 - if (IS_ERR(opp_table)) 1994 - return opp_table; 1995 - 1996 - /* Make sure there are no concurrent readers while updating opp_table */ 1997 - WARN_ON(!list_empty(&opp_table->opp_list)); 1998 - 1999 1964 /* Another CPU that shares the OPP table has set the property ? */ 2000 1965 if (opp_table->supported_hw) 2001 - return opp_table; 1966 + return 0; 2002 1967 2003 1968 opp_table->supported_hw = kmemdup(versions, count * sizeof(*versions), 2004 1969 GFP_KERNEL); 2005 - if (!opp_table->supported_hw) { 2006 - dev_pm_opp_put_opp_table(opp_table); 2007 - return ERR_PTR(-ENOMEM); 2008 - } 1970 + if (!opp_table->supported_hw) 1971 + return -ENOMEM; 2009 1972 2010 1973 opp_table->supported_hw_count = count; 2011 1974 2012 - return opp_table; 1975 + return 0; 2013 1976 } 2014 - EXPORT_SYMBOL_GPL(dev_pm_opp_set_supported_hw); 2015 1977 2016 1978 /** 2017 - * dev_pm_opp_put_supported_hw() - Releases resources blocked for supported hw 2018 - * @opp_table: OPP table returned by dev_pm_opp_set_supported_hw(). 
1979 + * _opp_put_supported_hw() - Releases resources blocked for supported hw 1980 + * @opp_table: OPP table returned by _opp_set_supported_hw(). 2019 1981 * 2020 1982 * This is required only for the V2 bindings, and is called for a matching 2021 - * dev_pm_opp_set_supported_hw(). Until this is called, the opp_table structure 1983 + * _opp_set_supported_hw(). Until this is called, the opp_table structure 2022 1984 * will not be freed. 2023 1985 */ 2024 - void dev_pm_opp_put_supported_hw(struct opp_table *opp_table) 1986 + static void _opp_put_supported_hw(struct opp_table *opp_table) 2025 1987 { 2026 - if (unlikely(!opp_table)) 2027 - return; 2028 - 2029 - kfree(opp_table->supported_hw); 2030 - opp_table->supported_hw = NULL; 2031 - opp_table->supported_hw_count = 0; 2032 - 2033 - dev_pm_opp_put_opp_table(opp_table); 2034 - } 2035 - EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw); 2036 - 2037 - static void devm_pm_opp_supported_hw_release(void *data) 2038 - { 2039 - dev_pm_opp_put_supported_hw(data); 1988 + if (opp_table->supported_hw) { 1989 + kfree(opp_table->supported_hw); 1990 + opp_table->supported_hw = NULL; 1991 + opp_table->supported_hw_count = 0; 1992 + } 2040 1993 } 2041 1994 2042 1995 /** 2043 - * devm_pm_opp_set_supported_hw() - Set supported platforms 2044 - * @dev: Device for which supported-hw has to be set. 2045 - * @versions: Array of hierarchy of versions to match. 2046 - * @count: Number of elements in the array. 2047 - * 2048 - * This is a resource-managed variant of dev_pm_opp_set_supported_hw(). 2049 - * 2050 - * Return: 0 on success and errorno otherwise. 
2051 - */ 2052 - int devm_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, 2053 - unsigned int count) 2054 - { 2055 - struct opp_table *opp_table; 2056 - 2057 - opp_table = dev_pm_opp_set_supported_hw(dev, versions, count); 2058 - if (IS_ERR(opp_table)) 2059 - return PTR_ERR(opp_table); 2060 - 2061 - return devm_add_action_or_reset(dev, devm_pm_opp_supported_hw_release, 2062 - opp_table); 2063 - } 2064 - EXPORT_SYMBOL_GPL(devm_pm_opp_set_supported_hw); 2065 - 2066 - /** 2067 - * dev_pm_opp_set_prop_name() - Set prop-extn name 1996 + * _opp_set_prop_name() - Set prop-extn name 2068 1997 * @dev: Device for which the prop-name has to be set. 2069 1998 * @name: name to postfix to properties. 2070 1999 * ··· 2028 2047 * which the extension will apply are opp-microvolt and opp-microamp. OPP core 2029 2048 * should postfix the property name with -<name> while looking for them. 2030 2049 */ 2031 - struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name) 2050 + static int _opp_set_prop_name(struct opp_table *opp_table, const char *name) 2032 2051 { 2033 - struct opp_table *opp_table; 2034 - 2035 - opp_table = _add_opp_table(dev, false); 2036 - if (IS_ERR(opp_table)) 2037 - return opp_table; 2038 - 2039 - /* Make sure there are no concurrent readers while updating opp_table */ 2040 - WARN_ON(!list_empty(&opp_table->opp_list)); 2041 - 2042 2052 /* Another CPU that shares the OPP table has set the property ? 
*/ 2043 - if (opp_table->prop_name) 2044 - return opp_table; 2045 - 2046 - opp_table->prop_name = kstrdup(name, GFP_KERNEL); 2047 2053 if (!opp_table->prop_name) { 2048 - dev_pm_opp_put_opp_table(opp_table); 2049 - return ERR_PTR(-ENOMEM); 2054 + opp_table->prop_name = kstrdup(name, GFP_KERNEL); 2055 + if (!opp_table->prop_name) 2056 + return -ENOMEM; 2050 2057 } 2051 2058 2052 - return opp_table; 2059 + return 0; 2053 2060 } 2054 - EXPORT_SYMBOL_GPL(dev_pm_opp_set_prop_name); 2055 2061 2056 2062 /** 2057 - * dev_pm_opp_put_prop_name() - Releases resources blocked for prop-name 2058 - * @opp_table: OPP table returned by dev_pm_opp_set_prop_name(). 2063 + * _opp_put_prop_name() - Releases resources blocked for prop-name 2064 + * @opp_table: OPP table returned by _opp_set_prop_name(). 2059 2065 * 2060 2066 * This is required only for the V2 bindings, and is called for a matching 2061 - * dev_pm_opp_set_prop_name(). Until this is called, the opp_table structure 2067 + * _opp_set_prop_name(). Until this is called, the opp_table structure 2062 2068 * will not be freed. 2063 2069 */ 2064 - void dev_pm_opp_put_prop_name(struct opp_table *opp_table) 2070 + static void _opp_put_prop_name(struct opp_table *opp_table) 2065 2071 { 2066 - if (unlikely(!opp_table)) 2067 - return; 2068 - 2069 - kfree(opp_table->prop_name); 2070 - opp_table->prop_name = NULL; 2071 - 2072 - dev_pm_opp_put_opp_table(opp_table); 2072 + if (opp_table->prop_name) { 2073 + kfree(opp_table->prop_name); 2074 + opp_table->prop_name = NULL; 2075 + } 2073 2076 } 2074 - EXPORT_SYMBOL_GPL(dev_pm_opp_put_prop_name); 2075 2077 2076 2078 /** 2077 - * dev_pm_opp_set_regulators() - Set regulator names for the device 2079 + * _opp_set_regulators() - Set regulator names for the device 2078 2080 * @dev: Device for which regulator name is being set. 2079 2081 * @names: Array of pointers to the names of the regulator. 2080 2082 * @count: Number of regulators. 
··· 2068 2104 * 2069 2105 * This must be called before any OPPs are initialized for the device. 2070 2106 */ 2071 - struct opp_table *dev_pm_opp_set_regulators(struct device *dev, 2072 - const char * const names[], 2073 - unsigned int count) 2107 + static int _opp_set_regulators(struct opp_table *opp_table, struct device *dev, 2108 + const char * const names[]) 2074 2109 { 2075 - struct dev_pm_opp_supply *supplies; 2076 - struct opp_table *opp_table; 2110 + const char * const *temp = names; 2077 2111 struct regulator *reg; 2078 - int ret, i; 2112 + int count = 0, ret, i; 2079 2113 2080 - opp_table = _add_opp_table(dev, false); 2081 - if (IS_ERR(opp_table)) 2082 - return opp_table; 2114 + /* Count number of regulators */ 2115 + while (*temp++) 2116 + count++; 2083 2117 2084 - /* This should be called before OPPs are initialized */ 2085 - if (WARN_ON(!list_empty(&opp_table->opp_list))) { 2086 - ret = -EBUSY; 2087 - goto err; 2088 - } 2118 + if (!count) 2119 + return -EINVAL; 2089 2120 2090 2121 /* Another CPU that shares the OPP table has set the regulators ? 
*/ 2091 2122 if (opp_table->regulators) 2092 - return opp_table; 2123 + return 0; 2093 2124 2094 2125 opp_table->regulators = kmalloc_array(count, 2095 2126 sizeof(*opp_table->regulators), 2096 2127 GFP_KERNEL); 2097 - if (!opp_table->regulators) { 2098 - ret = -ENOMEM; 2099 - goto err; 2100 - } 2128 + if (!opp_table->regulators) 2129 + return -ENOMEM; 2101 2130 2102 2131 for (i = 0; i < count; i++) { 2103 2132 reg = regulator_get_optional(dev, names[i]); ··· 2106 2149 2107 2150 opp_table->regulator_count = count; 2108 2151 2109 - supplies = kmalloc_array(count * 2, sizeof(*supplies), GFP_KERNEL); 2110 - if (!supplies) { 2111 - ret = -ENOMEM; 2112 - goto free_regulators; 2113 - } 2152 + /* Set generic config_regulators() for single regulators here */ 2153 + if (count == 1) 2154 + opp_table->config_regulators = _opp_config_regulator_single; 2114 2155 2115 - mutex_lock(&opp_table->lock); 2116 - opp_table->sod_supplies = supplies; 2117 - if (opp_table->set_opp_data) { 2118 - opp_table->set_opp_data->old_opp.supplies = supplies; 2119 - opp_table->set_opp_data->new_opp.supplies = supplies + count; 2120 - } 2121 - mutex_unlock(&opp_table->lock); 2122 - 2123 - return opp_table; 2156 + return 0; 2124 2157 2125 2158 free_regulators: 2126 2159 while (i != 0) ··· 2119 2172 kfree(opp_table->regulators); 2120 2173 opp_table->regulators = NULL; 2121 2174 opp_table->regulator_count = -1; 2122 - err: 2123 - dev_pm_opp_put_opp_table(opp_table); 2124 2175 2125 - return ERR_PTR(ret); 2176 + return ret; 2126 2177 } 2127 - EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulators); 2128 2178 2129 2179 /** 2130 - * dev_pm_opp_put_regulators() - Releases resources blocked for regulator 2131 - * @opp_table: OPP table returned from dev_pm_opp_set_regulators(). 2180 + * _opp_put_regulators() - Releases resources blocked for regulator 2181 + * @opp_table: OPP table returned from _opp_set_regulators(). 
2132 2182 */ 2133 - void dev_pm_opp_put_regulators(struct opp_table *opp_table) 2183 + static void _opp_put_regulators(struct opp_table *opp_table) 2134 2184 { 2135 2185 int i; 2136 2186 2137 - if (unlikely(!opp_table)) 2138 - return; 2139 - 2140 2187 if (!opp_table->regulators) 2141 - goto put_opp_table; 2188 + return; 2142 2189 2143 2190 if (opp_table->enabled) { 2144 2191 for (i = opp_table->regulator_count - 1; i >= 0; i--) ··· 2142 2201 for (i = opp_table->regulator_count - 1; i >= 0; i--) 2143 2202 regulator_put(opp_table->regulators[i]); 2144 2203 2145 - mutex_lock(&opp_table->lock); 2146 - if (opp_table->set_opp_data) { 2147 - opp_table->set_opp_data->old_opp.supplies = NULL; 2148 - opp_table->set_opp_data->new_opp.supplies = NULL; 2149 - } 2150 - 2151 - kfree(opp_table->sod_supplies); 2152 - opp_table->sod_supplies = NULL; 2153 - mutex_unlock(&opp_table->lock); 2154 - 2155 2204 kfree(opp_table->regulators); 2156 2205 opp_table->regulators = NULL; 2157 2206 opp_table->regulator_count = -1; 2158 - 2159 - put_opp_table: 2160 - dev_pm_opp_put_opp_table(opp_table); 2161 2207 } 2162 - EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators); 2163 2208 2164 - static void devm_pm_opp_regulators_release(void *data) 2209 + static void _put_clks(struct opp_table *opp_table, int count) 2165 2210 { 2166 - dev_pm_opp_put_regulators(data); 2211 + int i; 2212 + 2213 + for (i = count - 1; i >= 0; i--) 2214 + clk_put(opp_table->clks[i]); 2215 + 2216 + kfree(opp_table->clks); 2217 + opp_table->clks = NULL; 2167 2218 } 2168 2219 2169 2220 /** 2170 - * devm_pm_opp_set_regulators() - Set regulator names for the device 2171 - * @dev: Device for which regulator name is being set. 2172 - * @names: Array of pointers to the names of the regulator. 2173 - * @count: Number of regulators. 2221 + * _opp_set_clknames() - Set clk names for the device 2222 + * @dev: Device for which clk names is being set. 2223 + * @names: Clk names. 
2174 2224 * 2175 - * This is a resource-managed variant of dev_pm_opp_set_regulators(). 2176 - * 2177 - * Return: 0 on success and errorno otherwise. 2178 - */ 2179 - int devm_pm_opp_set_regulators(struct device *dev, 2180 - const char * const names[], 2181 - unsigned int count) 2182 - { 2183 - struct opp_table *opp_table; 2184 - 2185 - opp_table = dev_pm_opp_set_regulators(dev, names, count); 2186 - if (IS_ERR(opp_table)) 2187 - return PTR_ERR(opp_table); 2188 - 2189 - return devm_add_action_or_reset(dev, devm_pm_opp_regulators_release, 2190 - opp_table); 2191 - } 2192 - EXPORT_SYMBOL_GPL(devm_pm_opp_set_regulators); 2193 - 2194 - /** 2195 - * dev_pm_opp_set_clkname() - Set clk name for the device 2196 - * @dev: Device for which clk name is being set. 2197 - * @name: Clk name. 2198 - * 2199 - * In order to support OPP switching, OPP layer needs to get pointer to the 2200 - * clock for the device. Simple cases work fine without using this routine (i.e. 2201 - * by passing connection-id as NULL), but for a device with multiple clocks 2202 - * available, the OPP core needs to know the exact name of the clk to use. 2225 + * In order to support OPP switching, OPP layer needs to get pointers to the 2226 + * clocks for the device. Simple cases work fine without using this routine 2227 + * (i.e. by passing connection-id as NULL), but for a device with multiple 2228 + * clocks available, the OPP core needs to know the exact names of the clks to 2229 + * use. 2203 2230 * 2204 2231 * This must be called before any OPPs are initialized for the device. 
2205 2232 */ 2206 - struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name) 2233 + static int _opp_set_clknames(struct opp_table *opp_table, struct device *dev, 2234 + const char * const names[], 2235 + config_clks_t config_clks) 2207 2236 { 2208 - struct opp_table *opp_table; 2209 - int ret; 2237 + const char * const *temp = names; 2238 + int count = 0, ret, i; 2239 + struct clk *clk; 2210 2240 2211 - opp_table = _add_opp_table(dev, false); 2212 - if (IS_ERR(opp_table)) 2213 - return opp_table; 2241 + /* Count number of clks */ 2242 + while (*temp++) 2243 + count++; 2214 2244 2215 - /* This should be called before OPPs are initialized */ 2216 - if (WARN_ON(!list_empty(&opp_table->opp_list))) { 2217 - ret = -EBUSY; 2218 - goto err; 2245 + /* 2246 + * This is a special case where we have a single clock, whose connection 2247 + * id name is NULL, i.e. first two entries are NULL in the array. 2248 + */ 2249 + if (!count && !names[1]) 2250 + count = 1; 2251 + 2252 + /* Fail early for invalid configurations */ 2253 + if (!count || (!config_clks && count > 1)) 2254 + return -EINVAL; 2255 + 2256 + /* Another CPU that shares the OPP table has set the clkname ? 
*/ 2257 + if (opp_table->clks) 2258 + return 0; 2259 + 2260 + opp_table->clks = kmalloc_array(count, sizeof(*opp_table->clks), 2261 + GFP_KERNEL); 2262 + if (!opp_table->clks) 2263 + return -ENOMEM; 2264 + 2265 + /* Find clks for the device */ 2266 + for (i = 0; i < count; i++) { 2267 + clk = clk_get(dev, names[i]); 2268 + if (IS_ERR(clk)) { 2269 + ret = dev_err_probe(dev, PTR_ERR(clk), 2270 + "%s: Couldn't find clock with name: %s\n", 2271 + __func__, names[i]); 2272 + goto free_clks; 2273 + } 2274 + 2275 + opp_table->clks[i] = clk; 2219 2276 } 2220 2277 2221 - /* clk shouldn't be initialized at this point */ 2222 - if (WARN_ON(opp_table->clk)) { 2223 - ret = -EBUSY; 2224 - goto err; 2278 + opp_table->clk_count = count; 2279 + opp_table->config_clks = config_clks; 2280 + 2281 + /* Set generic single clk set here */ 2282 + if (count == 1) { 2283 + if (!opp_table->config_clks) 2284 + opp_table->config_clks = _opp_config_clk_single; 2285 + 2286 + /* 2287 + * We could have just dropped the "clk" field and used "clks" 2288 + * everywhere. Instead we kept the "clk" field around for 2289 + * following reasons: 2290 + * 2291 + * - avoiding clks[0] everywhere else. 2292 + * - not running single clk helpers for multiple clk usecase by 2293 + * mistake. 2294 + * 2295 + * Since this is single-clk case, just update the clk pointer 2296 + * too. 
2297 + */ 2298 + opp_table->clk = opp_table->clks[0]; 2225 2299 } 2226 2300 2227 - /* Find clk for the device */ 2228 - opp_table->clk = clk_get(dev, name); 2229 - if (IS_ERR(opp_table->clk)) { 2230 - ret = dev_err_probe(dev, PTR_ERR(opp_table->clk), 2231 - "%s: Couldn't find clock\n", __func__); 2232 - goto err; 2233 - } 2301 + return 0; 2234 2302 2235 - return opp_table; 2236 - 2237 - err: 2238 - dev_pm_opp_put_opp_table(opp_table); 2239 - 2240 - return ERR_PTR(ret); 2303 + free_clks: 2304 + _put_clks(opp_table, i); 2305 + return ret; 2241 2306 } 2242 - EXPORT_SYMBOL_GPL(dev_pm_opp_set_clkname); 2243 2307 2244 2308 /** 2245 - * dev_pm_opp_put_clkname() - Releases resources blocked for clk. 2246 - * @opp_table: OPP table returned from dev_pm_opp_set_clkname(). 2309 + * _opp_put_clknames() - Releases resources blocked for clks. 2310 + * @opp_table: OPP table returned from _opp_set_clknames(). 2247 2311 */ 2248 - void dev_pm_opp_put_clkname(struct opp_table *opp_table) 2312 + static void _opp_put_clknames(struct opp_table *opp_table) 2249 2313 { 2250 - if (unlikely(!opp_table)) 2314 + if (!opp_table->clks) 2251 2315 return; 2252 2316 2253 - clk_put(opp_table->clk); 2254 - opp_table->clk = ERR_PTR(-EINVAL); 2317 + opp_table->config_clks = NULL; 2318 + opp_table->clk = ERR_PTR(-ENODEV); 2255 2319 2256 - dev_pm_opp_put_opp_table(opp_table); 2257 - } 2258 - EXPORT_SYMBOL_GPL(dev_pm_opp_put_clkname); 2259 - 2260 - static void devm_pm_opp_clkname_release(void *data) 2261 - { 2262 - dev_pm_opp_put_clkname(data); 2320 + _put_clks(opp_table, opp_table->clk_count); 2263 2321 } 2264 2322 2265 2323 /** 2266 - * devm_pm_opp_set_clkname() - Set clk name for the device 2267 - * @dev: Device for which clk name is being set. 2268 - * @name: Clk name. 2269 - * 2270 - * This is a resource-managed variant of dev_pm_opp_set_clkname(). 2271 - * 2272 - * Return: 0 on success and errorno otherwise. 
2273 - */ 2274 - int devm_pm_opp_set_clkname(struct device *dev, const char *name) 2275 - { 2276 - struct opp_table *opp_table; 2277 - 2278 - opp_table = dev_pm_opp_set_clkname(dev, name); 2279 - if (IS_ERR(opp_table)) 2280 - return PTR_ERR(opp_table); 2281 - 2282 - return devm_add_action_or_reset(dev, devm_pm_opp_clkname_release, 2283 - opp_table); 2284 - } 2285 - EXPORT_SYMBOL_GPL(devm_pm_opp_set_clkname); 2286 - 2287 - /** 2288 - * dev_pm_opp_register_set_opp_helper() - Register custom set OPP helper 2324 + * _opp_set_config_regulators_helper() - Register custom set regulator helper. 2289 2325 * @dev: Device for which the helper is getting registered. 2290 - * @set_opp: Custom set OPP helper. 2326 + * @config_regulators: Custom set regulator helper. 2291 2327 * 2292 - * This is useful to support complex platforms (like platforms with multiple 2293 - * regulators per device), instead of the generic OPP set rate helper. 2328 + * This is useful to support platforms with multiple regulators per device. 2294 2329 * 2295 2330 * This must be called before any OPPs are initialized for the device. 2296 2331 */ 2297 - struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, 2298 - int (*set_opp)(struct dev_pm_set_opp_data *data)) 2332 + static int _opp_set_config_regulators_helper(struct opp_table *opp_table, 2333 + struct device *dev, config_regulators_t config_regulators) 2299 2334 { 2300 - struct dev_pm_set_opp_data *data; 2301 - struct opp_table *opp_table; 2302 - 2303 - if (!set_opp) 2304 - return ERR_PTR(-EINVAL); 2305 - 2306 - opp_table = _add_opp_table(dev, false); 2307 - if (IS_ERR(opp_table)) 2308 - return opp_table; 2309 - 2310 - /* This should be called before OPPs are initialized */ 2311 - if (WARN_ON(!list_empty(&opp_table->opp_list))) { 2312 - dev_pm_opp_put_opp_table(opp_table); 2313 - return ERR_PTR(-EBUSY); 2314 - } 2315 - 2316 2335 /* Another CPU that shares the OPP table has set the helper ? 
*/ 2317 - if (opp_table->set_opp) 2318 - return opp_table; 2336 + if (!opp_table->config_regulators) 2337 + opp_table->config_regulators = config_regulators; 2319 2338 2320 - data = kzalloc(sizeof(*data), GFP_KERNEL); 2321 - if (!data) 2322 - return ERR_PTR(-ENOMEM); 2323 - 2324 - mutex_lock(&opp_table->lock); 2325 - opp_table->set_opp_data = data; 2326 - if (opp_table->sod_supplies) { 2327 - data->old_opp.supplies = opp_table->sod_supplies; 2328 - data->new_opp.supplies = opp_table->sod_supplies + 2329 - opp_table->regulator_count; 2330 - } 2331 - mutex_unlock(&opp_table->lock); 2332 - 2333 - opp_table->set_opp = set_opp; 2334 - 2335 - return opp_table; 2336 - } 2337 - EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper); 2338 - 2339 - /** 2340 - * dev_pm_opp_unregister_set_opp_helper() - Releases resources blocked for 2341 - * set_opp helper 2342 - * @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper(). 2343 - * 2344 - * Release resources blocked for platform specific set_opp helper. 2345 - */ 2346 - void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table) 2347 - { 2348 - if (unlikely(!opp_table)) 2349 - return; 2350 - 2351 - opp_table->set_opp = NULL; 2352 - 2353 - mutex_lock(&opp_table->lock); 2354 - kfree(opp_table->set_opp_data); 2355 - opp_table->set_opp_data = NULL; 2356 - mutex_unlock(&opp_table->lock); 2357 - 2358 - dev_pm_opp_put_opp_table(opp_table); 2359 - } 2360 - EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_set_opp_helper); 2361 - 2362 - static void devm_pm_opp_unregister_set_opp_helper(void *data) 2363 - { 2364 - dev_pm_opp_unregister_set_opp_helper(data); 2339 + return 0; 2365 2340 } 2366 2341 2367 2342 /** 2368 - * devm_pm_opp_register_set_opp_helper() - Register custom set OPP helper 2369 - * @dev: Device for which the helper is getting registered. 2370 - * @set_opp: Custom set OPP helper. 2343 + * _opp_put_config_regulators_helper() - Releases resources blocked for 2344 + * config_regulators helper. 
+ * @opp_table: OPP table returned from _opp_set_config_regulators_helper().
  *
- * This is a resource-managed version of dev_pm_opp_register_set_opp_helper().
- *
- * Return: 0 on success and errorno otherwise.
+ * Release resources blocked for platform specific config_regulators helper.
  */
- int devm_pm_opp_register_set_opp_helper(struct device *dev,
-				int (*set_opp)(struct dev_pm_set_opp_data *data))
+ static void _opp_put_config_regulators_helper(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table;
-
-	opp_table = dev_pm_opp_register_set_opp_helper(dev, set_opp);
-	if (IS_ERR(opp_table))
-		return PTR_ERR(opp_table);
-
-	return devm_add_action_or_reset(dev, devm_pm_opp_unregister_set_opp_helper,
-					opp_table);
+	if (opp_table->config_regulators)
+		opp_table->config_regulators = NULL;
 }
- EXPORT_SYMBOL_GPL(devm_pm_opp_register_set_opp_helper);

- static void _opp_detach_genpd(struct opp_table *opp_table)
+ static void _detach_genpd(struct opp_table *opp_table)
 {
	int index;
···
 }

 /**
- * dev_pm_opp_attach_genpd - Attach genpd(s) for the device and save virtual device pointer
+ * _opp_attach_genpd - Attach genpd(s) for the device and save virtual device pointer
  * @dev: Consumer device for which the genpd is getting attached.
  * @names: Null terminated array of pointers containing names of genpd to attach.
  * @virt_devs: Pointer to return the array of virtual devices.
···
  * The order of entries in the names array must match the order in which
  * "required-opps" are added in DT.
  */
- struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
-			const char * const *names, struct device ***virt_devs)
+ static int _opp_attach_genpd(struct opp_table *opp_table, struct device *dev,
+			const char * const *names, struct device ***virt_devs)
 {
-	struct opp_table *opp_table;
	struct device *virt_dev;
	int index = 0, ret = -EINVAL;
	const char * const *name = names;

-	opp_table = _add_opp_table(dev, false);
-	if (IS_ERR(opp_table))
-		return opp_table;
-
	if (opp_table->genpd_virt_devs)
-		return opp_table;
+		return 0;

	/*
	 * If the genpd's OPP table isn't already initialized, parsing of the
	 * required-opps fail for dev. We should retry this after genpd's OPP
	 * table is added.
	 */
-	if (!opp_table->required_opp_count) {
-		ret = -EPROBE_DEFER;
-		goto put_table;
-	}
+	if (!opp_table->required_opp_count)
+		return -EPROBE_DEFER;

	mutex_lock(&opp_table->genpd_virt_dev_lock);
···
	}

	virt_dev = dev_pm_domain_attach_by_name(dev, *name);
-	if (IS_ERR(virt_dev)) {
-		ret = PTR_ERR(virt_dev);
+	if (IS_ERR_OR_NULL(virt_dev)) {
+		ret = PTR_ERR(virt_dev) ? : -ENODEV;
		dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
		goto err;
	}
···
	*virt_devs = opp_table->genpd_virt_devs;
	mutex_unlock(&opp_table->genpd_virt_dev_lock);

-	return opp_table;
+	return 0;

 err:
-	_opp_detach_genpd(opp_table);
+	_detach_genpd(opp_table);
 unlock:
	mutex_unlock(&opp_table->genpd_virt_dev_lock);
+	return ret;

- put_table:
-	dev_pm_opp_put_opp_table(opp_table);
-
-	return ERR_PTR(ret);
 }
- EXPORT_SYMBOL_GPL(dev_pm_opp_attach_genpd);

 /**
- * dev_pm_opp_detach_genpd() - Detach genpd(s) from the device.
- * @opp_table: OPP table returned by dev_pm_opp_attach_genpd().
+ * _opp_detach_genpd() - Detach genpd(s) from the device.
+ * @opp_table: OPP table returned by _opp_attach_genpd().
  *
  * This detaches the genpd(s), resets the virtual device pointers, and puts the
  * OPP table.
  */
- void dev_pm_opp_detach_genpd(struct opp_table *opp_table)
+ static void _opp_detach_genpd(struct opp_table *opp_table)
 {
-	if (unlikely(!opp_table))
-		return;
-
	/*
	 * Acquire genpd_virt_dev_lock to make sure virt_dev isn't getting
	 * used in parallel.
	 */
	mutex_lock(&opp_table->genpd_virt_dev_lock);
-	_opp_detach_genpd(opp_table);
+	_detach_genpd(opp_table);
	mutex_unlock(&opp_table->genpd_virt_dev_lock);
-
-	dev_pm_opp_put_opp_table(opp_table);
 }
- EXPORT_SYMBOL_GPL(dev_pm_opp_detach_genpd);

- static void devm_pm_opp_detach_genpd(void *data)
+ static void _opp_clear_config(struct opp_config_data *data)
 {
-	dev_pm_opp_detach_genpd(data);
+	if (data->flags & OPP_CONFIG_GENPD)
+		_opp_detach_genpd(data->opp_table);
+	if (data->flags & OPP_CONFIG_REGULATOR)
+		_opp_put_regulators(data->opp_table);
+	if (data->flags & OPP_CONFIG_SUPPORTED_HW)
+		_opp_put_supported_hw(data->opp_table);
+	if (data->flags & OPP_CONFIG_REGULATOR_HELPER)
+		_opp_put_config_regulators_helper(data->opp_table);
+	if (data->flags & OPP_CONFIG_PROP_NAME)
+		_opp_put_prop_name(data->opp_table);
+	if (data->flags & OPP_CONFIG_CLK)
+		_opp_put_clknames(data->opp_table);
+
+	dev_pm_opp_put_opp_table(data->opp_table);
+	kfree(data);
 }

 /**
- * devm_pm_opp_attach_genpd - Attach genpd(s) for the device and save virtual
- *			      device pointer
- * @dev: Consumer device for which the genpd is getting attached.
- * @names: Null terminated array of pointers containing names of genpd to attach.
- * @virt_devs: Pointer to return the array of virtual devices.
+ * dev_pm_opp_set_config() - Set OPP configuration for the device.
+ * @dev: Device for which configuration is being set.
+ * @config: OPP configuration.
  *
- * This is a resource-managed version of dev_pm_opp_attach_genpd().
+ * This allows all device OPP configurations to be performed at once.
+ *
+ * This must be called before any OPPs are initialized for the device. This may
+ * be called multiple times for the same OPP table, for example once for each
+ * CPU that shares the same table. This must be balanced by the same number of
+ * calls to dev_pm_opp_clear_config() in order to free the OPP table properly.
+ *
+ * This returns a token to the caller, which must be passed to
+ * dev_pm_opp_clear_config() to free the resources later. The value of the
+ * returned token will be >= 1 for success and negative for errors. The minimum
+ * value of 1 is chosen here to make it easy for callers to manage the resource.
+ */
+ int dev_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config)
+ {
+	struct opp_table *opp_table;
+	struct opp_config_data *data;
+	unsigned int id;
+	int ret;
+
+	data = kmalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	opp_table = _add_opp_table(dev, false);
+	if (IS_ERR(opp_table)) {
+		kfree(data);
+		return PTR_ERR(opp_table);
+	}
+
+	data->opp_table = opp_table;
+	data->flags = 0;
+
+	/* This should be called before OPPs are initialized */
+	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
+		ret = -EBUSY;
+		goto err;
+	}
+
+	/* Configure clocks */
+	if (config->clk_names) {
+		ret = _opp_set_clknames(opp_table, dev, config->clk_names,
+					config->config_clks);
+		if (ret)
+			goto err;
+
+		data->flags |= OPP_CONFIG_CLK;
+	} else if (config->config_clks) {
+		/* Don't allow config callback without clocks */
+		ret = -EINVAL;
+		goto err;
+	}
+
+	/* Configure property names */
+	if (config->prop_name) {
+		ret = _opp_set_prop_name(opp_table, config->prop_name);
+		if (ret)
+			goto err;
+
+		data->flags |= OPP_CONFIG_PROP_NAME;
+	}
+
+	/* Configure config_regulators helper */
+	if (config->config_regulators) {
+		ret = _opp_set_config_regulators_helper(opp_table, dev,
+						config->config_regulators);
+		if (ret)
+			goto err;
+
+		data->flags |= OPP_CONFIG_REGULATOR_HELPER;
+	}
+
+	/* Configure supported hardware */
+	if (config->supported_hw) {
+		ret = _opp_set_supported_hw(opp_table, config->supported_hw,
+					    config->supported_hw_count);
+		if (ret)
+			goto err;
+
+		data->flags |= OPP_CONFIG_SUPPORTED_HW;
+	}
+
+	/* Configure supplies */
+	if (config->regulator_names) {
+		ret = _opp_set_regulators(opp_table, dev,
+					  config->regulator_names);
+		if (ret)
+			goto err;
+
+		data->flags |= OPP_CONFIG_REGULATOR;
+	}
+
+	/* Attach genpds */
+	if (config->genpd_names) {
+		ret = _opp_attach_genpd(opp_table, dev, config->genpd_names,
+					config->virt_devs);
+		if (ret)
+			goto err;
+
+		data->flags |= OPP_CONFIG_GENPD;
+	}
+
+	ret = xa_alloc(&opp_configs, &id, data, XA_LIMIT(1, INT_MAX),
+		       GFP_KERNEL);
+	if (ret)
+		goto err;
+
+	return id;
+
+ err:
+	_opp_clear_config(data);
+	return ret;
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_set_config);
+
+ /**
+ * dev_pm_opp_clear_config() - Releases resources blocked for OPP configuration.
+ * @token: The token returned by dev_pm_opp_set_config().
+ *
+ * This allows all device OPP configurations to be cleared at once. This must be
+ * called once for each call made to dev_pm_opp_set_config(), in order to free
+ * the OPPs properly.
+ *
+ * Currently the first call itself ends up freeing all the OPP configurations,
+ * while the later ones only drop the OPP table reference. This works well for
+ * now as we would never want to use a half-initialized OPP table and want to
+ * remove the configurations together.
+ */
+ void dev_pm_opp_clear_config(int token)
+ {
+	struct opp_config_data *data;
+
+	/*
+	 * This lets the callers call this unconditionally and keep their code
+	 * simple.
+	 */
+	if (unlikely(token <= 0))
+		return;
+
+	data = xa_erase(&opp_configs, token);
+	if (WARN_ON(!data))
+		return;
+
+	_opp_clear_config(data);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_clear_config);
+
+ static void devm_pm_opp_config_release(void *token)
+ {
+	dev_pm_opp_clear_config((unsigned long)token);
+ }
+
+ /**
+ * devm_pm_opp_set_config() - Set OPP configuration for the device.
+ * @dev: Device for which configuration is being set.
+ * @config: OPP configuration.
+ *
+ * This allows all device OPP configurations to be performed at once.
+ * This is a resource-managed variant of dev_pm_opp_set_config().
  *
  * Return: 0 on success and errno otherwise.
  */
- int devm_pm_opp_attach_genpd(struct device *dev, const char * const *names,
-			     struct device ***virt_devs)
+ int devm_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config)
 {
-	struct opp_table *opp_table;
+	int token = dev_pm_opp_set_config(dev, config);

-	opp_table = dev_pm_opp_attach_genpd(dev, names, virt_devs);
-	if (IS_ERR(opp_table))
-		return PTR_ERR(opp_table);
+	if (token < 0)
+		return token;

-	return devm_add_action_or_reset(dev, devm_pm_opp_detach_genpd,
-					opp_table);
+	return devm_add_action_or_reset(dev, devm_pm_opp_config_release,
+					(void *) ((unsigned long) token));
 }
- EXPORT_SYMBOL_GPL(devm_pm_opp_attach_genpd);
+ EXPORT_SYMBOL_GPL(devm_pm_opp_set_config);

 /**
  * dev_pm_opp_xlate_required_opp() - Find required OPP for @src_table OPP.
···
		return r;
	}

+	if (!assert_single_clk(opp_table)) {
+		r = -EINVAL;
+		goto put_table;
+	}
+
	mutex_lock(&opp_table->lock);

	/* Do we have the frequency? */
	list_for_each_entry(tmp_opp, &opp_table->opp_list, node) {
-		if (tmp_opp->rate == freq) {
+		if (tmp_opp->rates[0] == freq) {
			opp = tmp_opp;
			break;
		}
···
		return r;
	}

+	if (!assert_single_clk(opp_table)) {
+		r = -EINVAL;
+		goto put_table;
+	}
+
	mutex_lock(&opp_table->lock);

	/* Do we have the frequency? */
	list_for_each_entry(tmp_opp, &opp_table->opp_list, node) {
-		if (tmp_opp->rate == freq) {
+		if (tmp_opp->rates[0] == freq) {
			opp = tmp_opp;
			break;
		}
···
		opp);

	dev_pm_opp_put(opp);
-	goto adjust_put_table;
+	goto put_table;

 adjust_unlock:
	mutex_unlock(&opp_table->lock);
- adjust_put_table:
+ put_table:
	dev_pm_opp_put_opp_table(opp_table);
	return r;
 }
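The kernel-doc above fixes the token contract of the new API: dev_pm_opp_set_config() returns a token >= 1 on success or a negative errno, and dev_pm_opp_clear_config() silently ignores non-positive tokens so callers can clean up unconditionally. As a rough userspace sketch of that contract (not kernel code — `config_store`/`config_erase` are hypothetical stand-ins for the xarray-backed `opp_configs` store):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for the kernel's xa_alloc(&opp_configs, ..., XA_LIMIT(1, INT_MAX), ...). */
#define MAX_CONFIGS 16
static void *configs[MAX_CONFIGS];	/* slot 0 unused: tokens start at 1 */

static int config_store(void *data)
{
	for (int id = 1; id < MAX_CONFIGS; id++) {
		if (!configs[id]) {
			configs[id] = data;
			return id;	/* success: token >= 1, like the real API */
		}
	}
	return -1;			/* failure: negative, like an errno */
}

/* Mimics dev_pm_opp_clear_config(): token <= 0 is silently ignored. */
static void *config_erase(int token)
{
	void *data;

	if (token <= 0)
		return NULL;		/* failed tokens may be passed back safely */
	data = configs[token];
	configs[token] = NULL;
	return data;
}

/* Runs the whole scenario; returns 0 when every property holds. */
static int demo(void)
{
	static int cfg;
	int token = config_store(&cfg);

	if (token < 1)
		return 1;		/* success must yield a positive token */
	if (config_erase(token) != &cfg)
		return 2;		/* first clear releases the config */
	if (config_erase(token) != NULL)
		return 3;		/* second clear finds nothing */
	config_erase(-22);		/* error token: ignored, no crash */
	return 0;
}
```

This mirrors why the minimum token value is 1: callers can store the token in a plain `int`, treat any value below 1 as "nothing to clear", and call the cleanup path unconditionally.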
+6 -6
drivers/opp/cpu.c
···
  * the table if any of the mentioned functions have been invoked in the interim.
  */
 int dev_pm_opp_init_cpufreq_table(struct device *dev,
-				  struct cpufreq_frequency_table **table)
+				  struct cpufreq_frequency_table **opp_table)
 {
	struct dev_pm_opp *opp;
	struct cpufreq_frequency_table *freq_table = NULL;
···
	freq_table[i].driver_data = i;
	freq_table[i].frequency = CPUFREQ_TABLE_END;

-	*table = &freq_table[0];
+	*opp_table = &freq_table[0];

 out:
	if (ret)
···
  * Free up the table allocated by dev_pm_opp_init_cpufreq_table
  */
 void dev_pm_opp_free_cpufreq_table(struct device *dev,
-				   struct cpufreq_frequency_table **table)
+				   struct cpufreq_frequency_table **opp_table)
 {
-	if (!table)
+	if (!opp_table)
		return;

-	kfree(*table);
-	*table = NULL;
+	kfree(*opp_table);
+	*opp_table = NULL;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
 #endif /* CONFIG_CPU_FREQ */
+23 -4
drivers/opp/debugfs.c
···
	}
 }

+ static void opp_debug_create_clks(struct dev_pm_opp *opp,
+				   struct opp_table *opp_table,
+				   struct dentry *pdentry)
+ {
+	char name[12];
+	int i;
+
+	if (opp_table->clk_count == 1) {
+		debugfs_create_ulong("rate_hz", S_IRUGO, pdentry, &opp->rates[0]);
+		return;
+	}
+
+	for (i = 0; i < opp_table->clk_count; i++) {
+		snprintf(name, sizeof(name), "rate_hz_%d", i);
+		debugfs_create_ulong(name, S_IRUGO, pdentry, &opp->rates[i]);
+	}
+ }
+
 static void opp_debug_create_supplies(struct dev_pm_opp *opp,
				      struct opp_table *opp_table,
				      struct dentry *pdentry)
···
	/*
	 * Get directory name for OPP.
	 *
	 * - Normally rate is unique to each OPP, use it to get unique opp-name.
-	 * - For some devices rate isn't available, use index instead.
+	 * - For some devices rate isn't available or there are multiple, use
+	 *   index instead for them.
	 */
-	if (likely(opp->rate))
-		id = opp->rate;
+	if (likely(opp_table->clk_count == 1 && opp->rates[0]))
+		id = opp->rates[0];
	else
		id = _get_opp_count(opp_table);
···
	debugfs_create_bool("turbo", S_IRUGO, d, &opp->turbo);
	debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend);
	debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate);
-	debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate);
	debugfs_create_u32("level", S_IRUGO, d, &opp->level);
	debugfs_create_ulong("clock_latency_ns", S_IRUGO, d,
			     &opp->clock_latency_ns);
···
	opp->of_name = of_node_full_name(opp->np);
	debugfs_create_str("of_name", S_IRUGO, d, (char **)&opp->of_name);

+	opp_debug_create_clks(opp, opp_table, d);
	opp_debug_create_supplies(opp, opp_table, d);
	opp_debug_create_bw(opp, opp_table, d);
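The debugfs hunk above keeps the legacy `rate_hz` node for single-clock tables and switches to indexed `rate_hz_0`, `rate_hz_1`, ... nodes for multi-clock ones. A minimal userspace model of just that naming rule (the `rate_node_name` helper is illustrative, not a kernel function):

```c
#include <stdio.h>
#include <string.h>

/* Mirrors the naming scheme in opp_debug_create_clks(): one clock keeps the
 * legacy "rate_hz" debugfs node, multiple clocks get "rate_hz_<index>". */
static void rate_node_name(char *buf, size_t len, int clk_count, int index)
{
	if (clk_count == 1)
		snprintf(buf, len, "rate_hz");
	else
		snprintf(buf, len, "rate_hz_%d", index);
}

/* Returns 0 when both the single- and multi-clock names come out right. */
static int demo(void)
{
	char name[12];	/* same buffer size the kernel code uses */

	rate_node_name(name, sizeof(name), 1, 0);
	if (strcmp(name, "rate_hz"))
		return 1;

	rate_node_name(name, sizeof(name), 3, 2);
	if (strcmp(name, "rate_hz_2"))
		return 2;

	return 0;
}
```

Keeping the unsuffixed name for the single-clock case preserves the existing debugfs layout for every driver that is unaffected by the multi-clock work.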
+80 -70
drivers/opp/of.c
···
	opp_table->np = opp_np;

	_opp_table_alloc_required_tables(opp_table, dev, opp_np);
-	of_node_put(opp_np);
 }

 void _of_clear_opp_table(struct opp_table *opp_table)
 {
	_opp_table_free_required_tables(opp_table);
+	of_node_put(opp_table->np);
 }

 /*
  * Release all resources previously acquired with a call to
  * _of_opp_alloc_required_opps().
  */
- void _of_opp_free_required_opps(struct opp_table *opp_table,
-				struct dev_pm_opp *opp)
+ static void _of_opp_free_required_opps(struct opp_table *opp_table,
+				       struct dev_pm_opp *opp)
 {
	struct dev_pm_opp **required_opps = opp->required_opps;
	int i;
···
	opp->required_opps = NULL;
	kfree(required_opps);
+ }
+
+ void _of_clear_opp(struct opp_table *opp_table, struct dev_pm_opp *opp)
+ {
+	_of_opp_free_required_opps(opp_table, opp);
+	of_node_put(opp->np);
 }

 /* Populate all required OPPs which are part of "required-opps" list */
···
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);

- static int _read_bw(struct dev_pm_opp *new_opp, struct opp_table *table,
+ static int _read_rate(struct dev_pm_opp *new_opp, struct opp_table *opp_table,
+		      struct device_node *np)
+ {
+	struct property *prop;
+	int i, count, ret;
+	u64 *rates;
+
+	prop = of_find_property(np, "opp-hz", NULL);
+	if (!prop)
+		return -ENODEV;
+
+	count = prop->length / sizeof(u64);
+	if (opp_table->clk_count != count) {
+		pr_err("%s: Count mismatch between opp-hz and clk_count (%d %d)\n",
+		       __func__, count, opp_table->clk_count);
+		return -EINVAL;
+	}
+
+	rates = kmalloc_array(count, sizeof(*rates), GFP_KERNEL);
+	if (!rates)
+		return -ENOMEM;
+
+	ret = of_property_read_u64_array(np, "opp-hz", rates, count);
+	if (ret) {
+		pr_err("%s: Error parsing opp-hz: %d\n", __func__, ret);
+	} else {
+		/*
+		 * Rate is defined as an unsigned long in clk API, and so
+		 * casting explicitly to its type. Must be fixed once rate is 64
+		 * bit guaranteed in clk API.
+		 */
+		for (i = 0; i < count; i++) {
+			new_opp->rates[i] = (unsigned long)rates[i];
+
+			/* This will happen for frequencies > 4.29 GHz */
+			WARN_ON(new_opp->rates[i] != rates[i]);
+		}
+	}
+
+	kfree(rates);
+
+	return ret;
+ }
+
+ static int _read_bw(struct dev_pm_opp *new_opp, struct opp_table *opp_table,
		    struct device_node *np, bool peak)
 {
	const char *name = peak ? "opp-peak-kBps" : "opp-avg-kBps";
···
		return -ENODEV;

	count = prop->length / sizeof(u32);
-	if (table->path_count != count) {
+	if (opp_table->path_count != count) {
		pr_err("%s: Mismatch between %s and paths (%d %d)\n",
-		       __func__, name, count, table->path_count);
+		       __func__, name, count, opp_table->path_count);
		return -EINVAL;
	}
···
	return ret;
 }

- static int _read_opp_key(struct dev_pm_opp *new_opp, struct opp_table *table,
-			 struct device_node *np, bool *rate_not_available)
+ static int _read_opp_key(struct dev_pm_opp *new_opp,
+			 struct opp_table *opp_table, struct device_node *np)
 {
	bool found = false;
-	u64 rate;
	int ret;

-	ret = of_property_read_u64(np, "opp-hz", &rate);
-	if (!ret) {
-		/*
-		 * Rate is defined as an unsigned long in clk API, and so
-		 * casting explicitly to its type. Must be fixed once rate is 64
-		 * bit guaranteed in clk API.
-		 */
-		new_opp->rate = (unsigned long)rate;
+	ret = _read_rate(new_opp, opp_table, np);
+	if (!ret)
		found = true;
-	}
-	*rate_not_available = !!ret;
+	else if (ret != -ENODEV)
+		return ret;

	/*
	 * Bandwidth consists of peak and average (optional) values:
	 *     opp-peak-kBps = <path1_value path2_value>;
	 *     opp-avg-kBps = <path1_value path2_value>;
	 */
-	ret = _read_bw(new_opp, table, np, true);
+	ret = _read_bw(new_opp, opp_table, np, true);
	if (!ret) {
		found = true;
-		ret = _read_bw(new_opp, table, np, false);
+		ret = _read_bw(new_opp, opp_table, np, false);
	}

	/* The properties were found but we failed to parse them */
···
	struct dev_pm_opp *new_opp;
	u32 val;
	int ret;
-	bool rate_not_available = false;

	new_opp = _opp_allocate(opp_table);
	if (!new_opp)
		return ERR_PTR(-ENOMEM);

-	ret = _read_opp_key(new_opp, opp_table, np, &rate_not_available);
+	ret = _read_opp_key(new_opp, opp_table, np);
	if (ret < 0) {
		dev_err(dev, "%s: opp key field not found\n", __func__);
		goto free_opp;
···
	/* Check if the OPP supports hardware's hierarchy of versions or not */
	if (!_opp_is_supported(dev, opp_table, np)) {
-		dev_dbg(dev, "OPP not supported by hardware: %lu\n",
-			new_opp->rate);
+		dev_dbg(dev, "OPP not supported by hardware: %s\n",
+			of_node_full_name(np));
		goto free_opp;
	}

	new_opp->turbo = of_property_read_bool(np, "turbo-mode");

-	new_opp->np = np;
+	new_opp->np = of_node_get(np);
	new_opp->dynamic = false;
	new_opp->available = true;
···
	if (opp_table->is_genpd)
		new_opp->pstate = pm_genpd_opp_to_performance_state(dev, new_opp);

-	ret = _opp_add(dev, new_opp, opp_table, rate_not_available);
+	ret = _opp_add(dev, new_opp, opp_table);
	if (ret) {
		/* Don't return error for duplicate OPPs */
		if (ret == -EBUSY)
···
	/* OPP to select on device suspend */
	if (of_property_read_bool(np, "opp-suspend")) {
		if (opp_table->suspend_opp) {
-			/* Pick the OPP with higher rate as suspend OPP */
-			if (new_opp->rate > opp_table->suspend_opp->rate) {
+			/* Pick the OPP with higher rate/bw/level as suspend OPP */
+			if (_opp_compare_key(opp_table, new_opp, opp_table->suspend_opp) == 1) {
				opp_table->suspend_opp->suspend = false;
				new_opp->suspend = true;
				opp_table->suspend_opp = new_opp;
···
		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;

	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu level:%u\n",
		 __func__, new_opp->turbo, new_opp->rates[0],
		 new_opp->supplies[0].u_volt, new_opp->supplies[0].u_volt_min,
		 new_opp->supplies[0].u_volt_max, new_opp->clock_latency_ns,
		 new_opp->level);
···
	return ret;
 }

- static int _of_add_table_indexed(struct device *dev, int index, bool getclk)
+ static int _of_add_table_indexed(struct device *dev, int index)
 {
	struct opp_table *opp_table;
	int ret, count;
···
		index = 0;
	}

-	opp_table = _add_opp_table_indexed(dev, index, getclk);
+	opp_table = _add_opp_table_indexed(dev, index, true);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);
···
	dev_pm_opp_of_remove_table(data);
 }

- static int _devm_of_add_table_indexed(struct device *dev, int index, bool getclk)
+ static int _devm_of_add_table_indexed(struct device *dev, int index)
 {
	int ret;

-	ret = _of_add_table_indexed(dev, index, getclk);
+	ret = _of_add_table_indexed(dev, index);
	if (ret)
		return ret;
···
 */
 int devm_pm_opp_of_add_table(struct device *dev)
 {
-	return _devm_of_add_table_indexed(dev, 0, true);
+	return _devm_of_add_table_indexed(dev, 0);
 }
 EXPORT_SYMBOL_GPL(devm_pm_opp_of_add_table);
···
 */
 int dev_pm_opp_of_add_table(struct device *dev)
 {
-	return _of_add_table_indexed(dev, 0, true);
+	return _of_add_table_indexed(dev, 0);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
···
 */
 int dev_pm_opp_of_add_table_indexed(struct device *dev, int index)
 {
-	return _of_add_table_indexed(dev, index, true);
+	return _of_add_table_indexed(dev, index);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table_indexed);
···
 */
 int devm_pm_opp_of_add_table_indexed(struct device *dev, int index)
 {
-	return _devm_of_add_table_indexed(dev, index, true);
+	return _devm_of_add_table_indexed(dev, index);
 }
 EXPORT_SYMBOL_GPL(devm_pm_opp_of_add_table_indexed);
-
- /**
-  * dev_pm_opp_of_add_table_noclk() - Initialize indexed opp table from device
-  *		tree without getting clk for device.
-  * @dev: device pointer used to lookup OPP table.
-  * @index: Index number.
-  *
-  * Register the initial OPP table with the OPP library for given device only
-  * using the "operating-points-v2" property. Do not try to get the clk for the
-  * device.
-  *
-  * Return: Refer to dev_pm_opp_of_add_table() for return values.
-  */
- int dev_pm_opp_of_add_table_noclk(struct device *dev, int index)
- {
-	return _of_add_table_indexed(dev, index, false);
- }
- EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table_noclk);
-
- /**
-  * devm_pm_opp_of_add_table_noclk() - Initialize indexed opp table from device
-  *		tree without getting clk for device.
-  * @dev: device pointer used to lookup OPP table.
-  * @index: Index number.
-  *
-  * This is a resource-managed variant of dev_pm_opp_of_add_table_noclk().
-  */
- int devm_pm_opp_of_add_table_noclk(struct device *dev, int index)
- {
-	return _devm_of_add_table_indexed(dev, index, false);
- }
- EXPORT_SYMBOL_GPL(devm_pm_opp_of_add_table_noclk);

 /* CPU device specific helpers */
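The new _read_rate() above casts each 64-bit `opp-hz` value from DT to the clk API's `unsigned long` and WARNs when the cast is lossy, which the comment notes happens for frequencies above ~4.29 GHz on 32-bit kernels. A small userspace sketch of that check, emulating a 32-bit `unsigned long` with `uint32_t` (the helper names are illustrative only):

```c
#include <stdint.h>

/* Emulates the WARN_ON() condition in _read_rate(): DT stores opp-hz as u64,
 * the clk API takes unsigned long, so a 32-bit kernel truncates large rates. */
static int rate_truncates_on_32bit(uint64_t rate_hz)
{
	uint32_t converted = (uint32_t)rate_hz;	/* the lossy cast */

	return (uint64_t)converted != rate_hz;	/* 1 when WARN_ON would fire */
}

/* Returns 0 when the boundary behaves as the kernel comment describes. */
static int demo(void)
{
	if (rate_truncates_on_32bit(1800000000ULL))	/* 1.8 GHz: fits */
		return 1;
	if (!rate_truncates_on_32bit(5000000000ULL))	/* 5 GHz: > 2^32 - 1 */
		return 2;
	return 0;
}
```

On 64-bit kernels `unsigned long` is 64 bits wide and the warning can never trigger, which is why the code only WARNs instead of failing the parse.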
+38 -18
drivers/opp/opp.h
···

 extern struct list_head opp_tables, lazy_opp_tables;

+ /* OPP Config flags */
+ #define OPP_CONFIG_CLK			BIT(0)
+ #define OPP_CONFIG_REGULATOR		BIT(1)
+ #define OPP_CONFIG_REGULATOR_HELPER	BIT(2)
+ #define OPP_CONFIG_PROP_NAME		BIT(3)
+ #define OPP_CONFIG_SUPPORTED_HW		BIT(4)
+ #define OPP_CONFIG_GENPD		BIT(5)
+
+ /**
+ * struct opp_config_data - data for set config operations
+ * @opp_table: OPP table
+ * @flags: OPP config flags
+ *
+ * This structure stores the OPP config information for each OPP table
+ * configuration by the callers.
+ */
+ struct opp_config_data {
+	struct opp_table *opp_table;
+	unsigned int flags;
+ };
+
 /*
  * Internal data structure organization with the OPP layer library is as
  * follows:
···
  * @suspend: true if suspend OPP
  * @removed: flag indicating that OPP's reference is dropped by OPP core.
  * @pstate: Device's power domain's performance state.
- * @rate: Frequency in hertz
+ * @rates: Frequencies in hertz
  * @level: Performance level
  * @supplies: Power supplies voltage/current values
  * @bandwidth: Interconnect bandwidth values
···
	bool suspend;
	bool removed;
	unsigned int pstate;
-	unsigned long rate;
+	unsigned long *rates;
	unsigned int level;

	struct dev_pm_opp_supply *supplies;
···
  * @clock_latency_ns_max: Max clock latency in nanoseconds.
  * @parsed_static_opps: Count of devices for which OPPs are initialized from DT.
  * @shared_opp: OPP is shared between multiple devices.
- * @current_rate: Currently configured frequency.
+ * @rate_clk_single: Currently configured frequency for single clk.
  * @current_opp: Currently configured OPP for the table.
  * @suspend_opp: Pointer to OPP to be used during device suspend.
  * @genpd_virt_dev_lock: Mutex protecting the genpd virtual device pointers.
···
  * @supported_hw: Array of version number to support.
  * @supported_hw_count: Number of elements in supported_hw array.
  * @prop_name: A name to postfix to many DT properties, while parsing them.
- * @clk: Device's clock handle
+ * @config_clks: Platform specific config_clks() callback.
+ * @clks: Device's clock handles, for multiple clocks.
+ * @clk: Device's clock handle, for single clock.
+ * @clk_count: Number of clocks.
+ * @config_regulators: Platform specific config_regulators() callback.
  * @regulators: Supply regulators
  * @regulator_count: Number of power supply regulators. Its value can be -1
  *	(uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
···
  * @enabled: Set to true if the device's resources are enabled/configured.
  * @genpd_performance_state: Device's power domain support performance state.
  * @is_genpd: Marks if the OPP table belongs to a genpd.
- * @set_opp: Platform specific set_opp callback
- * @sod_supplies: Set opp data supplies
- * @set_opp_data: Data to be passed to set_opp callback
  * @dentry: debugfs dentry pointer of the real device directory (not links).
  * @dentry_name: Name of the real dentry.
  *
···
	unsigned int parsed_static_opps;
	enum opp_table_access shared_opp;
-	unsigned long current_rate;
+	unsigned long rate_clk_single;
	struct dev_pm_opp *current_opp;
	struct dev_pm_opp *suspend_opp;
···
	unsigned int *supported_hw;
	unsigned int supported_hw_count;
	const char *prop_name;
+	config_clks_t config_clks;
+	struct clk **clks;
	struct clk *clk;
+	int clk_count;
+	config_regulators_t config_regulators;
	struct regulator **regulators;
	int regulator_count;
	struct icc_path **paths;
···
	bool enabled;
	bool genpd_performance_state;
	bool is_genpd;
-
-	int (*set_opp)(struct dev_pm_set_opp_data *data);
-	struct dev_pm_opp_supply *sod_supplies;
-	struct dev_pm_set_opp_data *set_opp_data;

 #ifdef CONFIG_DEBUG_FS
	struct dentry *dentry;
···
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
 struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table);
 void _opp_free(struct dev_pm_opp *opp);
- int _opp_compare_key(struct dev_pm_opp *opp1, struct dev_pm_opp *opp2);
- int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table, bool rate_not_available);
+ int _opp_compare_key(struct opp_table *opp_table, struct dev_pm_opp *opp1, struct dev_pm_opp *opp2);
+ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
 int _opp_add_v1(struct opp_table *opp_table, struct device *dev, unsigned long freq, long u_volt, bool dynamic);
 void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, int last_cpu);
 struct opp_table *_add_opp_table_indexed(struct device *dev, int index, bool getclk);
···
 void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index);
 void _of_clear_opp_table(struct opp_table *opp_table);
 struct opp_table *_managed_opp(struct device *dev, int index);
- void _of_opp_free_required_opps(struct opp_table *opp_table,
-				struct dev_pm_opp *opp);
+ void _of_clear_opp(struct opp_table *opp_table, struct dev_pm_opp *opp);
 #else
 static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index) {}
 static inline void _of_clear_opp_table(struct opp_table *opp_table) {}
 static inline struct opp_table *_managed_opp(struct device *dev, int index) { return NULL; }
- static inline void _of_opp_free_required_opps(struct opp_table *opp_table,
-					      struct dev_pm_opp *opp) {}
+ static inline void _of_clear_opp(struct opp_table *opp_table, struct dev_pm_opp *opp) {}
 #endif

 #ifdef CONFIG_DEBUG_FS
+35 -42
drivers/opp/ti-opp-supply.c
···
  * @vdd_table: Optimized voltage mapping table
  * @num_vdd_table: number of entries in vdd_table
  * @vdd_absolute_max_voltage_uv: absolute maximum voltage in UV for the supply
+ * @old_supplies: Placeholder for supplies information for old OPP.
+ * @new_supplies: Placeholder for supplies information for new OPP.
  */
 struct ti_opp_supply_data {
	struct ti_opp_supply_optimum_voltage_table *vdd_table;
	u32 num_vdd_table;
	u32 vdd_absolute_max_voltage_uv;
+	struct dev_pm_opp_supply old_supplies[2];
+	struct dev_pm_opp_supply new_supplies[2];
 };

 static struct ti_opp_supply_data opp_data;
···
	return 0;
 }

- /**
-  * ti_opp_supply_set_opp() - do the opp supply transition
-  * @data: information on regulators and new and old opps provided by
-  *	  opp core to use in transition
-  *
-  * Return: If successful, 0, else appropriate error value.
-  */
- static int ti_opp_supply_set_opp(struct dev_pm_set_opp_data *data)
+ /* Do the opp supply transition */
+ static int ti_opp_config_regulators(struct device *dev,
+		struct dev_pm_opp *old_opp, struct dev_pm_opp *new_opp,
+		struct regulator **regulators, unsigned int count)
 {
-	struct dev_pm_opp_supply *old_supply_vdd = &data->old_opp.supplies[0];
-	struct dev_pm_opp_supply *old_supply_vbb = &data->old_opp.supplies[1];
-	struct dev_pm_opp_supply *new_supply_vdd = &data->new_opp.supplies[0];
-	struct dev_pm_opp_supply *new_supply_vbb = &data->new_opp.supplies[1];
-	struct device *dev = data->dev;
-	unsigned long old_freq = data->old_opp.rate, freq = data->new_opp.rate;
-	struct clk *clk = data->clk;
-	struct regulator *vdd_reg = data->regulators[0];
-	struct regulator *vbb_reg = data->regulators[1];
+	struct dev_pm_opp_supply *old_supply_vdd = &opp_data.old_supplies[0];
+	struct dev_pm_opp_supply *old_supply_vbb = &opp_data.old_supplies[1];
+	struct dev_pm_opp_supply *new_supply_vdd = &opp_data.new_supplies[0];
+	struct dev_pm_opp_supply *new_supply_vbb = &opp_data.new_supplies[1];
+	struct regulator *vdd_reg = regulators[0];
+	struct regulator *vbb_reg = regulators[1];
+	unsigned long old_freq, freq;
	int vdd_uv;
	int ret;
+
+	/* We must have two regulators here */
+	WARN_ON(count != 2);
+
+	/* Fetch supplies and freq information from OPP core */
+	ret = dev_pm_opp_get_supplies(new_opp, opp_data.new_supplies);
+	WARN_ON(ret);
+
+	old_freq = dev_pm_opp_get_freq(old_opp);
+	freq = dev_pm_opp_get_freq(new_opp);
+	WARN_ON(!old_freq || !freq);

	vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data,
					  new_supply_vdd->u_volt);
···
		ret = _opp_set_voltage(dev, new_supply_vbb, 0, vbb_reg, "vbb");
		if (ret)
			goto restore_voltage;
-	}
-
-	/* Change frequency */
-	dev_dbg(dev, "%s: switching OPP: %lu Hz --> %lu Hz\n",
-		__func__, old_freq, freq);
-
-	ret = clk_set_rate(clk, freq);
-	if (ret) {
-		dev_err(dev, "%s: failed to set clock rate: %d\n", __func__,
-			ret);
-		goto restore_voltage;
-	}
-
-	/* Scaling down? Scale voltage after frequency */
-	if (freq < old_freq) {
+	} else {
		ret = _opp_set_voltage(dev, new_supply_vbb, 0, vbb_reg, "vbb");
		if (ret)
-			goto restore_freq;
+			goto restore_voltage;

		ret = _opp_set_voltage(dev, new_supply_vdd, vdd_uv, vdd_reg,
				       "vdd");
		if (ret)
-			goto restore_freq;
+			goto restore_voltage;
	}

	return 0;

- restore_freq:
-	ret = clk_set_rate(clk, old_freq);
-	if (ret)
-		dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n",
-			__func__, old_freq);
 restore_voltage:
+	/* Fetch old supplies information only if required */
+	ret = dev_pm_opp_get_supplies(old_opp, opp_data.old_supplies);
+	WARN_ON(ret);
+
	/* This shouldn't harm even if the voltages weren't updated earlier */
	if (old_supply_vdd->u_volt) {
		ret = _opp_set_voltage(dev, old_supply_vbb, 0, vbb_reg, "vbb");
···
		return ret;
	}

-	ret = PTR_ERR_OR_ZERO(dev_pm_opp_register_set_opp_helper(cpu_dev,
-								 ti_opp_supply_set_opp));
-	if (ret)
+	ret = dev_pm_opp_set_config_regulators(cpu_dev, ti_opp_config_regulators);
+	if (ret < 0)
		_free_optimized_voltages(dev, &opp_data);

	return ret;
+32 -15
drivers/soc/tegra/common.c
···
 {
 	u32 hw_version;
 	int err;
+	/*
+	 * The clk's connection id to set is NULL and this is a NULL terminated
+	 * array, hence two NULL entries.
+	 */
+	const char *clk_names[] = { NULL, NULL };
+	struct dev_pm_opp_config config = {
+		/*
+		 * For some devices we don't have any OPP table in the DT, and
+		 * in order to use the same code path for all the devices, we
+		 * create a dummy OPP table for them via this. The dummy OPP
+		 * table is only capable of doing clk_set_rate() on invocation
+		 * of dev_pm_opp_set_rate() and doesn't provide any other
+		 * functionality.
+		 */
+		.clk_names = clk_names,
+	};
 
-	err = devm_pm_opp_set_clkname(dev, NULL);
-	if (err) {
-		dev_err(dev, "failed to set OPP clk: %d\n", err);
-		return err;
-	}
-
-	/* Tegra114+ doesn't support OPP yet */
-	if (!of_machine_is_compatible("nvidia,tegra20") &&
-	    !of_machine_is_compatible("nvidia,tegra30"))
-		return -ENODEV;
-
-	if (of_machine_is_compatible("nvidia,tegra20"))
+	if (of_machine_is_compatible("nvidia,tegra20")) {
 		hw_version = BIT(tegra_sku_info.soc_process_id);
-	else
+		config.supported_hw = &hw_version;
+		config.supported_hw_count = 1;
+	} else if (of_machine_is_compatible("nvidia,tegra30")) {
 		hw_version = BIT(tegra_sku_info.soc_speedo_id);
+		config.supported_hw = &hw_version;
+		config.supported_hw_count = 1;
+	}
 
-	err = devm_pm_opp_set_supported_hw(dev, &hw_version, 1);
+	err = devm_pm_opp_set_config(dev, &config);
 	if (err) {
-		dev_err(dev, "failed to set OPP supported HW: %d\n", err);
+		dev_err(dev, "failed to set OPP config: %d\n", err);
 		return err;
 	}
+
+	/*
+	 * Tegra114+ doesn't support OPP yet, return early for non tegra20/30
+	 * case.
+	 */
+	if (!config.supported_hw)
+		return -ENODEV;
 
 	/*
 	 * Older device-trees have an empty OPP table, we will get
+2 -2
drivers/soc/tegra/pmc.c
···
 static int tegra_pmc_core_pd_add(struct tegra_pmc *pmc, struct device_node *np)
 {
 	struct generic_pm_domain *genpd;
-	const char *rname = "core";
+	const char *rname[] = { "core", NULL};
 	int err;
 
 	genpd = devm_kzalloc(pmc->dev, sizeof(*genpd), GFP_KERNEL);
···
 	genpd->set_performance_state = tegra_pmc_core_pd_set_performance_state;
 	genpd->opp_to_performance_state = tegra_pmc_core_pd_opp_to_performance_state;
 
-	err = devm_pm_opp_set_regulators(pmc->dev, &rname, 1);
+	err = devm_pm_opp_set_regulators(pmc->dev, rname);
 	if (err)
 		return dev_err_probe(pmc->dev, err,
 				     "failed to set core OPP regulator\n");
+194 -128
include/linux/pm_opp.h
···
 	u32 peak;
 };
 
-/**
- * struct dev_pm_opp_info - OPP freq/voltage/current values
- * @rate:	Target clk rate in hz
- * @supplies:	Array of voltage/current values for all power supplies
- *
- * This structure stores the freq/voltage/current values for a single OPP.
- */
-struct dev_pm_opp_info {
-	unsigned long rate;
-	struct dev_pm_opp_supply *supplies;
-};
+typedef int (*config_regulators_t)(struct device *dev,
+			struct dev_pm_opp *old_opp, struct dev_pm_opp *new_opp,
+			struct regulator **regulators, unsigned int count);
+
+typedef int (*config_clks_t)(struct device *dev, struct opp_table *opp_table,
+			struct dev_pm_opp *opp, void *data, bool scaling_down);
 
 /**
- * struct dev_pm_set_opp_data - Set OPP data
- * @old_opp:	 Old OPP info
- * @new_opp:	 New OPP info
- * @regulators:	 Array of regulator pointers
- * @regulator_count: Number of regulators
- * @clk:	 Pointer to clk
- * @dev:	 Pointer to the struct device
+ * struct dev_pm_opp_config - Device OPP configuration values
+ * @clk_names: Clk names, NULL terminated array.
+ * @config_clks: Custom set clk helper.
+ * @prop_name: Name to postfix to properties.
+ * @config_regulators: Custom set regulator helper.
+ * @supported_hw: Array of hierarchy of versions to match.
+ * @supported_hw_count: Number of elements in the array.
+ * @regulator_names: Array of pointers to the names of the regulator, NULL terminated.
+ * @genpd_names: Null terminated array of pointers containing names of genpd to
+ *		 attach.
+ * @virt_devs: Pointer to return the array of virtual devices.
  *
- * This structure contains all information required for setting an OPP.
+ * This structure contains platform specific OPP configurations for the device.
  */
-struct dev_pm_set_opp_data {
-	struct dev_pm_opp_info old_opp;
-	struct dev_pm_opp_info new_opp;
-
-	struct regulator **regulators;
-	unsigned int regulator_count;
-	struct clk *clk;
-	struct device *dev;
+struct dev_pm_opp_config {
+	/* NULL terminated */
+	const char * const *clk_names;
+	config_clks_t config_clks;
+	const char *prop_name;
+	config_regulators_t config_regulators;
+	const unsigned int *supported_hw;
+	unsigned int supported_hw_count;
+	const char * const *regulator_names;
+	const char * const *genpd_names;
+	struct device ***virt_devs;
 };
 
 #if defined(CONFIG_PM_OPP)
···
 void dev_pm_opp_put_opp_table(struct opp_table *opp_table);
 
 unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp);
+
+int dev_pm_opp_get_supplies(struct dev_pm_opp *opp, struct dev_pm_opp_supply *supplies);
 
 unsigned long dev_pm_opp_get_power(struct dev_pm_opp *opp);
···
 				bool available);
 struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 					      unsigned long *freq);
-struct dev_pm_opp *dev_pm_opp_find_freq_ceil_by_volt(struct device *dev,
-						     unsigned long u_volt);
 
 struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
 					       unsigned int level);
···
 int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb);
 int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb);
 
-struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
-void dev_pm_opp_put_supported_hw(struct opp_table *opp_table);
-int devm_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
-struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name);
-void dev_pm_opp_put_prop_name(struct opp_table *opp_table);
-struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
-void dev_pm_opp_put_regulators(struct opp_table *opp_table);
-int devm_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
-struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name);
-void dev_pm_opp_put_clkname(struct opp_table *opp_table);
-int devm_pm_opp_set_clkname(struct device *dev, const char *name);
-struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
-void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
-int devm_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
-struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char * const *names, struct device ***virt_devs);
-void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
-int devm_pm_opp_attach_genpd(struct device *dev, const char * const *names, struct device ***virt_devs);
+int dev_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config);
+int devm_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config);
+void dev_pm_opp_clear_config(int token);
+int dev_pm_opp_config_clks_simple(struct device *dev,
+		struct opp_table *opp_table, struct dev_pm_opp *opp, void *data,
+		bool scaling_down);
+
 struct dev_pm_opp *dev_pm_opp_xlate_required_opp(struct opp_table *src_table, struct opp_table *dst_table, struct dev_pm_opp *src_opp);
 int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
···
 static inline unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
 {
 	return 0;
+}
+
+static inline int dev_pm_opp_get_supplies(struct dev_pm_opp *opp, struct dev_pm_opp_supply *supplies)
+{
+	return -EOPNOTSUPP;
 }
 
 static inline unsigned long dev_pm_opp_get_power(struct dev_pm_opp *opp)
···
 	return ERR_PTR(-EOPNOTSUPP);
 }
 
-static inline struct dev_pm_opp *dev_pm_opp_find_freq_ceil_by_volt(struct device *dev,
-					unsigned long u_volt)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
 static inline struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
 					unsigned long *freq)
 {
···
 	return -EOPNOTSUPP;
 }
 
-static inline struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
-							    const u32 *versions,
-							    unsigned int count)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void dev_pm_opp_put_supported_hw(struct opp_table *opp_table) {}
-
-static inline int devm_pm_opp_set_supported_hw(struct device *dev,
-					       const u32 *versions,
-					       unsigned int count)
+static inline int dev_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config)
 {
 	return -EOPNOTSUPP;
 }
 
-static inline struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
-			int (*set_opp)(struct dev_pm_set_opp_data *data))
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table) {}
-
-static inline int devm_pm_opp_register_set_opp_helper(struct device *dev,
-				    int (*set_opp)(struct dev_pm_set_opp_data *data))
+static inline int devm_pm_opp_set_config(struct device *dev, struct dev_pm_opp_config *config)
 {
 	return -EOPNOTSUPP;
 }
 
-static inline struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
+static inline void dev_pm_opp_clear_config(int token) {}
 
-static inline void dev_pm_opp_put_prop_name(struct opp_table *opp_table) {}
-
-static inline struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void dev_pm_opp_put_regulators(struct opp_table *opp_table) {}
-
-static inline int devm_pm_opp_set_regulators(struct device *dev,
-					     const char * const names[],
-					     unsigned int count)
-{
-	return -EOPNOTSUPP;
-}
-
-static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {}
-
-static inline int devm_pm_opp_set_clkname(struct device *dev, const char *name)
-{
-	return -EOPNOTSUPP;
-}
-
-static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char * const *names, struct device ***virt_devs)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void dev_pm_opp_detach_genpd(struct opp_table *opp_table) {}
-
-static inline int devm_pm_opp_attach_genpd(struct device *dev,
-					   const char * const *names,
-					   struct device ***virt_devs)
+static inline int dev_pm_opp_config_clks_simple(struct device *dev,
+		struct opp_table *opp_table, struct dev_pm_opp *opp, void *data,
+		bool scaling_down)
 {
 	return -EOPNOTSUPP;
 }
···
 int dev_pm_opp_of_add_table(struct device *dev);
 int dev_pm_opp_of_add_table_indexed(struct device *dev, int index);
 int devm_pm_opp_of_add_table_indexed(struct device *dev, int index);
-int dev_pm_opp_of_add_table_noclk(struct device *dev, int index);
-int devm_pm_opp_of_add_table_noclk(struct device *dev, int index);
 void dev_pm_opp_of_remove_table(struct device *dev);
 int devm_pm_opp_of_add_table(struct device *dev);
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask);
···
 }
 
 static inline int devm_pm_opp_of_add_table_indexed(struct device *dev, int index)
-{
-	return -EOPNOTSUPP;
-}
-
-static inline int dev_pm_opp_of_add_table_noclk(struct device *dev, int index)
-{
-	return -EOPNOTSUPP;
-}
-
-static inline int devm_pm_opp_of_add_table_noclk(struct device *dev, int index)
 {
 	return -EOPNOTSUPP;
 }
···
 	return -EOPNOTSUPP;
 }
 #endif
+
+/* OPP Configuration helpers */
+
+/* Regulators helpers */
+static inline int dev_pm_opp_set_regulators(struct device *dev,
+					    const char * const names[])
+{
+	struct dev_pm_opp_config config = {
+		.regulator_names = names,
+	};
+
+	return dev_pm_opp_set_config(dev, &config);
+}
+
+static inline void dev_pm_opp_put_regulators(int token)
+{
+	dev_pm_opp_clear_config(token);
+}
+
+static inline int devm_pm_opp_set_regulators(struct device *dev,
+					     const char * const names[])
+{
+	struct dev_pm_opp_config config = {
+		.regulator_names = names,
+	};
+
+	return devm_pm_opp_set_config(dev, &config);
+}
+
+/* Supported-hw helpers */
+static inline int dev_pm_opp_set_supported_hw(struct device *dev,
+					      const u32 *versions,
+					      unsigned int count)
+{
+	struct dev_pm_opp_config config = {
+		.supported_hw = versions,
+		.supported_hw_count = count,
+	};
+
+	return dev_pm_opp_set_config(dev, &config);
+}
+
+static inline void dev_pm_opp_put_supported_hw(int token)
+{
+	dev_pm_opp_clear_config(token);
+}
+
+static inline int devm_pm_opp_set_supported_hw(struct device *dev,
+					       const u32 *versions,
+					       unsigned int count)
+{
+	struct dev_pm_opp_config config = {
+		.supported_hw = versions,
+		.supported_hw_count = count,
+	};
+
+	return devm_pm_opp_set_config(dev, &config);
+}
+
+/* clkname helpers */
+static inline int dev_pm_opp_set_clkname(struct device *dev, const char *name)
+{
+	const char *names[] = { name, NULL };
+	struct dev_pm_opp_config config = {
+		.clk_names = names,
+	};
+
+	return dev_pm_opp_set_config(dev, &config);
+}
+
+static inline void dev_pm_opp_put_clkname(int token)
+{
+	dev_pm_opp_clear_config(token);
+}
+
+static inline int devm_pm_opp_set_clkname(struct device *dev, const char *name)
+{
+	const char *names[] = { name, NULL };
+	struct dev_pm_opp_config config = {
+		.clk_names = names,
+	};
+
+	return devm_pm_opp_set_config(dev, &config);
+}
+
+/* config-regulators helpers */
+static inline int dev_pm_opp_set_config_regulators(struct device *dev,
+						   config_regulators_t helper)
+{
+	struct dev_pm_opp_config config = {
+		.config_regulators = helper,
+	};
+
+	return dev_pm_opp_set_config(dev, &config);
+}
+
+static inline void dev_pm_opp_put_config_regulators(int token)
+{
+	dev_pm_opp_clear_config(token);
+}
+
+/* genpd helpers */
+static inline int dev_pm_opp_attach_genpd(struct device *dev,
+					  const char * const *names,
+					  struct device ***virt_devs)
+{
+	struct dev_pm_opp_config config = {
+		.genpd_names = names,
+		.virt_devs = virt_devs,
+	};
+
+	return dev_pm_opp_set_config(dev, &config);
+}
+
+static inline void dev_pm_opp_detach_genpd(int token)
+{
+	dev_pm_opp_clear_config(token);
+}
+
+static inline int devm_pm_opp_attach_genpd(struct device *dev,
+					   const char * const *names,
+					   struct device ***virt_devs)
+{
+	struct dev_pm_opp_config config = {
+		.genpd_names = names,
+		.virt_devs = virt_devs,
+	};
+
+	return devm_pm_opp_set_config(dev, &config);
+}
+
+/* prop-name helpers */
+static inline int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+{
+	struct dev_pm_opp_config config = {
+		.prop_name = name,
+	};
+
+	return dev_pm_opp_set_config(dev, &config);
+}
+
+static inline void dev_pm_opp_put_prop_name(int token)
+{
+	dev_pm_opp_clear_config(token);
+}
 
 #endif /* __LINUX_OPP_H__ */
+22
include/trace/events/power.h
···
 	TP_ARGS(state, cpu_id)
 );
 
+TRACE_EVENT(cpu_idle_miss,
+
+	TP_PROTO(unsigned int cpu_id, unsigned int state, bool below),
+
+	TP_ARGS(cpu_id, state, below),
+
+	TP_STRUCT__entry(
+		__field(u32, cpu_id)
+		__field(u32, state)
+		__field(bool, below)
+	),
+
+	TP_fast_assign(
+		__entry->cpu_id = cpu_id;
+		__entry->state = state;
+		__entry->below = below;
+	),
+
+	TP_printk("cpu_id=%lu state=%lu type=%s", (unsigned long)__entry->cpu_id,
+		(unsigned long)__entry->state, (__entry->below)?"below":"above")
+);
+
 TRACE_EVENT(powernv_throttle,
 
 	TP_PROTO(int chip_id, const char *reason, int pmax),