Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"These update several cpufreq drivers and the cpufreq core, add a sysfs
interface for exposing the time really spent in the platform low-power
state during suspend-to-idle, update devfreq (core and drivers) and
the pm-graph suite of tools and clean up code.

Specifics:

- Fix the frequency unit used in cpufreq_verify_current_freq() checks
(Sanjay Chandrashekara)

- Make mode_state_machine in amd-pstate static (Tom Rix)

- Make the cpufreq core require drivers with target_index() to set
freq_table (Viresh Kumar)

- Fix typo in the ARM_BRCMSTB_AVS_CPUFREQ Kconfig entry (Jingyu Wang)

- Use of_property_read_bool() for boolean properties in the pmac32
cpufreq driver (Rob Herring)

- Make the cpufreq sysfs interface return proper error codes on
obviously invalid input (qinyu)

- Add guided autonomous mode support to the AMD P-state driver (Wyes
Karny)

- Make the Intel P-state driver enable HWP IO boost on all server
platforms (Srinivas Pandruvada)

- Add opp and bandwidth support to tegra194 cpufreq driver (Sumit
Gupta)

- Use of_property_present() for testing DT property presence (Rob
Herring)

- Remove MODULE_LICENSE in non-modules (Nick Alcock)

- Add SM7225 to cpufreq-dt-platdev blocklist (Luca Weiss)

- Optimizations and fixes for qcom-cpufreq-hw driver (Krzysztof
Kozlowski, Konrad Dybcio, and Bjorn Andersson)

- DT binding updates for qcom-cpufreq-hw driver (Konrad Dybcio and
Bartosz Golaszewski)

- Updates and fixes for mediatek driver (Jia-Wei Chang and
AngeloGioacchino Del Regno)

- Use of_property_present() for testing DT property presence in the
cpuidle code (Rob Herring)

- Drop unnecessary (void *) conversions from the PM core (Li zeming)

- Add sysfs files to represent time spent in a platform sleep state
during suspend-to-idle and make AMD and Intel PMC drivers use them
(Mario Limonciello)

- Use of_property_present() for testing DT property presence (Rob
Herring)

- Add set_required_opps() callback to the 'struct opp_table', to make
the code paths cleaner (Viresh Kumar)

- Update the pm-graph suite of utilities to v5.11 with the following
changes:
* New script which allows users to install the latest pm-graph
from the upstream github repo.
* Update all the dmesg suspend/resume PM print formats to be able
to process recent timelines using dmesg only.
* Add ethtool output to the log for the system's ethernet device
if ethtool exists.
* Make the tool more robustly handle events where mangled dmesg
or ftrace outputs do not include all the requisite data.

- Make the sleepgraph utility recognize "CPU killed" messages (Xueqin
Luo)

- Remove unneeded SRCU selection in Kconfig because it's always set
from devfreq core (Paul E. McKenney)

- Drop of_match_ptr() macro from exynos-bus.c because this driver is
always using the DT table for driver probe (Krzysztof Kozlowski)

- Use the preferred of_property_present() instead of the low-level
of_get_property() in exynos-bus.c (Rob Herring)

- Use devm_platform_get_and_ioremap_resource() in exynos-ppmu.c (Yang
Li)"

* tag 'pm-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (44 commits)
platform/x86/intel/pmc: core: Report duration of time in HW sleep state
platform/x86/intel/pmc: core: Always capture counters on suspend
platform/x86/amd: pmc: Report duration of time in hw sleep state
PM: Add sysfs files to represent time spent in hardware sleep state
cpufreq: use correct unit when verify cur freq
cpufreq: tegra194: add OPP support and set bandwidth
cpufreq: amd-pstate: Make varaiable mode_state_machine static
PM: core: Remove unnecessary (void *) conversions
cpufreq: drivers with target_index() must set freq_table
PM / devfreq: exynos-ppmu: Use devm_platform_get_and_ioremap_resource()
OPP: Move required opps configuration to specialized callback
OPP: Handle all genpd cases together in _set_required_opps()
cpufreq: qcom-cpufreq-hw: Revert adding cpufreq qos
dt-bindings: cpufreq: cpufreq-qcom-hw: Add QCM2290
dt-bindings: cpufreq: cpufreq-qcom-hw: Sanitize data per compatible
dt-bindings: cpufreq: cpufreq-qcom-hw: Allow just 1 frequency domain
cpufreq: Add SM7225 to cpufreq-dt-platdev blocklist
cpufreq: qcom-cpufreq-hw: fix double IO unmap and resource release on exit
cpufreq: mediatek: Raise proc and sram max voltage for MT7622/7623
cpufreq: mediatek: raise proc/sram max voltage for MT8516
...

+896 -303
+29
Documentation/ABI/testing/sysfs-power
··· 413 413 The /sys/power/suspend_stats/last_failed_step file contains 414 414 the last failed step in the suspend/resume path. 415 415 416 + What: /sys/power/suspend_stats/last_hw_sleep 417 + Date: June 2023 418 + Contact: Mario Limonciello <mario.limonciello@amd.com> 419 + Description: 420 + The /sys/power/suspend_stats/last_hw_sleep file 421 + contains the duration of time spent in a hardware sleep 422 + state in the most recent system suspend-resume cycle. 423 + This number is measured in microseconds. 424 + 425 + What: /sys/power/suspend_stats/total_hw_sleep 426 + Date: June 2023 427 + Contact: Mario Limonciello <mario.limonciello@amd.com> 428 + Description: 429 + The /sys/power/suspend_stats/total_hw_sleep file 430 + contains the aggregate of time spent in a hardware sleep 431 + state since the kernel was booted. This number 432 + is measured in microseconds. 433 + 434 + What: /sys/power/suspend_stats/max_hw_sleep 435 + Date: June 2023 436 + Contact: Mario Limonciello <mario.limonciello@amd.com> 437 + Description: 438 + The /sys/power/suspend_stats/max_hw_sleep file 439 + contains the maximum amount of time that the hardware can 440 + report for time spent in a hardware sleep state. When sleep 441 + cycles are longer than this time, the values for 442 + 'total_hw_sleep' and 'last_hw_sleep' may not be accurate. 443 + This number is measured in microseconds. 444 + 416 445 What: /sys/power/sync_on_suspend 417 446 Date: October 2019 418 447 Contact: Jonas Meurer <jonas@freesources.org>
+23 -17
Documentation/admin-guide/kernel-parameters.txt
··· 339 339 This mode requires kvm-amd.avic=1. 340 340 (Default when IOMMU HW support is present.) 341 341 342 + amd_pstate= [X86] 343 + disable 344 + Do not enable amd_pstate as the default 345 + scaling driver for the supported processors 346 + passive 347 + Use amd_pstate with passive mode as a scaling driver. 348 + In this mode autonomous selection is disabled. 349 + Driver requests a desired performance level and platform 350 + tries to match the same performance level if it is 351 + satisfied by guaranteed performance level. 352 + active 353 + Use amd_pstate_epp driver instance as the scaling driver, 354 + driver provides a hint to the hardware if software wants 355 + to bias toward performance (0x0) or energy efficiency (0xff) 356 + to the CPPC firmware. then CPPC power algorithm will 357 + calculate the runtime workload and adjust the realtime cores 358 + frequency. 359 + guided 360 + Activate guided autonomous mode. Driver requests minimum and 361 + maximum performance level and the platform autonomously 362 + selects a performance level in this range and appropriate 363 + to the current workload. 364 + 342 365 amijoy.map= [HW,JOY] Amiga joystick support 343 366 Map of devices attached to JOY0DAT and JOY1DAT 344 367 Format: <a>,<b> ··· 7085 7062 xmon commands. 7086 7063 off xmon is disabled. 7087 7064 7088 - amd_pstate= [X86] 7089 - disable 7090 - Do not enable amd_pstate as the default 7091 - scaling driver for the supported processors 7092 - passive 7093 - Use amd_pstate as a scaling driver, driver requests a 7094 - desired performance on this abstract scale and the power 7095 - management firmware translates the requests into actual 7096 - hardware states (core frequency, data fabric and memory 7097 - clocks etc.) 7098 - active 7099 - Use amd_pstate_epp driver instance as the scaling driver, 7100 - driver provides a hint to the hardware if software wants 7101 - to bias toward performance (0x0) or energy efficiency (0xff) 7102 - to the CPPC firmware. 
then CPPC power algorithm will 7103 - calculate the runtime workload and adjust the realtime cores 7104 - frequency.
+24 -7
Documentation/admin-guide/pm/amd-pstate.rst
··· 303 303 AMD Pstate Driver Operation Modes 304 304 ================================= 305 305 306 - ``amd_pstate`` CPPC has two operation modes: CPPC Autonomous(active) mode and 307 - CPPC non-autonomous(passive) mode. 308 - active mode and passive mode can be chosen by different kernel parameters. 309 - When in Autonomous mode, CPPC ignores requests done in the Desired Performance 310 - Target register and takes into account only the values set to the Minimum requested 311 - performance, Maximum requested performance, and Energy Performance Preference 312 - registers. When Autonomous is disabled, it only considers the Desired Performance Target. 306 + ``amd_pstate`` CPPC has 3 operation modes: autonomous (active) mode, 307 + non-autonomous (passive) mode and guided autonomous (guided) mode. 308 + Active/passive/guided mode can be chosen by different kernel parameters. 309 + 310 + - In autonomous mode, platform ignores the desired performance level request 311 + and takes into account only the values set to the minimum, maximum and energy 312 + performance preference registers. 313 + - In non-autonomous mode, platform gets desired performance level 314 + from OS directly through Desired Performance Register. 315 + - In guided-autonomous mode, platform sets operating performance level 316 + autonomously according to the current workload and within the limits set by 317 + OS through min and max performance registers. 313 318 314 319 Active Mode 315 320 ------------ ··· 343 338 processor must provide at least nominal performance requested and go higher if current 344 339 operating conditions allow. 345 340 341 + Guided Mode 342 + ----------- 343 + 344 + ``amd_pstate=guided`` 345 + 346 + If ``amd_pstate=guided`` is passed to kernel command line option then this mode 347 + is activated. 
In this mode, driver requests minimum and maximum performance 348 + level and the platform autonomously selects a performance level in this range 349 + and appropriate to the current workload. 346 350 347 351 User Space Interface in ``sysfs`` - General 348 352 =========================================== ··· 371 357 372 358 "passive" 373 359 The driver is functional and in the ``passive mode`` 360 + 361 + "guided" 362 + The driver is functional and in the ``guided mode`` 374 363 375 364 "disable" 376 365 The driver is unregistered and not functional now.
+116 -3
Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
··· 20 20 oneOf: 21 21 - description: v1 of CPUFREQ HW 22 22 items: 23 + - enum: 24 + - qcom,qcm2290-cpufreq-hw 25 + - qcom,sc7180-cpufreq-hw 26 + - qcom,sdm845-cpufreq-hw 27 + - qcom,sm6115-cpufreq-hw 28 + - qcom,sm6350-cpufreq-hw 29 + - qcom,sm8150-cpufreq-hw 23 30 - const: qcom,cpufreq-hw 24 31 25 32 - description: v2 of CPUFREQ HW (EPSS) 26 33 items: 27 34 - enum: 28 35 - qcom,qdu1000-cpufreq-epss 36 + - qcom,sa8775p-cpufreq-epss 29 37 - qcom,sc7280-cpufreq-epss 30 38 - qcom,sc8280xp-cpufreq-epss 31 39 - qcom,sm6375-cpufreq-epss ··· 44 36 - const: qcom,cpufreq-epss 45 37 46 38 reg: 47 - minItems: 2 39 + minItems: 1 48 40 items: 49 41 - description: Frequency domain 0 register region 50 42 - description: Frequency domain 1 register region 51 43 - description: Frequency domain 2 register region 52 44 53 45 reg-names: 54 - minItems: 2 46 + minItems: 1 55 47 items: 56 48 - const: freq-domain0 57 49 - const: freq-domain1 ··· 92 84 - '#freq-domain-cells' 93 85 94 86 additionalProperties: false 87 + 88 + allOf: 89 + - if: 90 + properties: 91 + compatible: 92 + contains: 93 + enum: 94 + - qcom,qcm2290-cpufreq-hw 95 + then: 96 + properties: 97 + reg: 98 + minItems: 1 99 + maxItems: 1 100 + 101 + reg-names: 102 + minItems: 1 103 + maxItems: 1 104 + 105 + interrupts: 106 + minItems: 1 107 + maxItems: 1 108 + 109 + interrupt-names: 110 + minItems: 1 111 + 112 + - if: 113 + properties: 114 + compatible: 115 + contains: 116 + enum: 117 + - qcom,qdu1000-cpufreq-epss 118 + - qcom,sc7180-cpufreq-hw 119 + - qcom,sc8280xp-cpufreq-epss 120 + - qcom,sdm845-cpufreq-hw 121 + - qcom,sm6115-cpufreq-hw 122 + - qcom,sm6350-cpufreq-hw 123 + - qcom,sm6375-cpufreq-epss 124 + then: 125 + properties: 126 + reg: 127 + minItems: 2 128 + maxItems: 2 129 + 130 + reg-names: 131 + minItems: 2 132 + maxItems: 2 133 + 134 + interrupts: 135 + minItems: 2 136 + maxItems: 2 137 + 138 + interrupt-names: 139 + minItems: 2 140 + 141 + - if: 142 + properties: 143 + compatible: 144 + contains: 145 + enum: 
146 + - qcom,sc7280-cpufreq-epss 147 + - qcom,sm8250-cpufreq-epss 148 + - qcom,sm8350-cpufreq-epss 149 + - qcom,sm8450-cpufreq-epss 150 + - qcom,sm8550-cpufreq-epss 151 + then: 152 + properties: 153 + reg: 154 + minItems: 3 155 + maxItems: 3 156 + 157 + reg-names: 158 + minItems: 3 159 + maxItems: 3 160 + 161 + interrupts: 162 + minItems: 3 163 + maxItems: 3 164 + 165 + interrupt-names: 166 + minItems: 3 167 + 168 + - if: 169 + properties: 170 + compatible: 171 + contains: 172 + enum: 173 + - qcom,sm8150-cpufreq-hw 174 + then: 175 + properties: 176 + reg: 177 + minItems: 3 178 + maxItems: 3 179 + 180 + reg-names: 181 + minItems: 3 182 + maxItems: 3 183 + 184 + # On some SoCs the Prime core shares the LMH irq with Big cores 185 + interrupts: 186 + minItems: 2 187 + maxItems: 2 188 + 189 + interrupt-names: 190 + minItems: 2 191 + 95 192 96 193 examples: 97 194 - | ··· 348 235 #size-cells = <1>; 349 236 350 237 cpufreq@17d43000 { 351 - compatible = "qcom,cpufreq-hw"; 238 + compatible = "qcom,sdm845-cpufreq-hw", "qcom,cpufreq-hw"; 352 239 reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>; 353 240 reg-names = "freq-domain0", "freq-domain1"; 354 241
+112 -8
drivers/acpi/cppc_acpi.c
··· 1434 1434 EXPORT_SYMBOL_GPL(cppc_set_epp_perf); 1435 1435 1436 1436 /** 1437 + * cppc_get_auto_sel_caps - Read autonomous selection register. 1438 + * @cpunum : CPU from which to read register. 1439 + * @perf_caps : struct where autonomous selection register value is updated. 1440 + */ 1441 + int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps) 1442 + { 1443 + struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum); 1444 + struct cpc_register_resource *auto_sel_reg; 1445 + u64 auto_sel; 1446 + 1447 + if (!cpc_desc) { 1448 + pr_debug("No CPC descriptor for CPU:%d\n", cpunum); 1449 + return -ENODEV; 1450 + } 1451 + 1452 + auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE]; 1453 + 1454 + if (!CPC_SUPPORTED(auto_sel_reg)) 1455 + pr_warn_once("Autonomous mode is not unsupported!\n"); 1456 + 1457 + if (CPC_IN_PCC(auto_sel_reg)) { 1458 + int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum); 1459 + struct cppc_pcc_data *pcc_ss_data = NULL; 1460 + int ret = 0; 1461 + 1462 + if (pcc_ss_id < 0) 1463 + return -ENODEV; 1464 + 1465 + pcc_ss_data = pcc_data[pcc_ss_id]; 1466 + 1467 + down_write(&pcc_ss_data->pcc_lock); 1468 + 1469 + if (send_pcc_cmd(pcc_ss_id, CMD_READ) >= 0) { 1470 + cpc_read(cpunum, auto_sel_reg, &auto_sel); 1471 + perf_caps->auto_sel = (bool)auto_sel; 1472 + } else { 1473 + ret = -EIO; 1474 + } 1475 + 1476 + up_write(&pcc_ss_data->pcc_lock); 1477 + 1478 + return ret; 1479 + } 1480 + 1481 + return 0; 1482 + } 1483 + EXPORT_SYMBOL_GPL(cppc_get_auto_sel_caps); 1484 + 1485 + /** 1486 + * cppc_set_auto_sel - Write autonomous selection register. 1487 + * @cpu : CPU to which to write register. 1488 + * @enable : the desired value of autonomous selection resiter to be updated. 
1489 + */ 1490 + int cppc_set_auto_sel(int cpu, bool enable) 1491 + { 1492 + int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu); 1493 + struct cpc_register_resource *auto_sel_reg; 1494 + struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu); 1495 + struct cppc_pcc_data *pcc_ss_data = NULL; 1496 + int ret = -EINVAL; 1497 + 1498 + if (!cpc_desc) { 1499 + pr_debug("No CPC descriptor for CPU:%d\n", cpu); 1500 + return -ENODEV; 1501 + } 1502 + 1503 + auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE]; 1504 + 1505 + if (CPC_IN_PCC(auto_sel_reg)) { 1506 + if (pcc_ss_id < 0) { 1507 + pr_debug("Invalid pcc_ss_id\n"); 1508 + return -ENODEV; 1509 + } 1510 + 1511 + if (CPC_SUPPORTED(auto_sel_reg)) { 1512 + ret = cpc_write(cpu, auto_sel_reg, enable); 1513 + if (ret) 1514 + return ret; 1515 + } 1516 + 1517 + pcc_ss_data = pcc_data[pcc_ss_id]; 1518 + 1519 + down_write(&pcc_ss_data->pcc_lock); 1520 + /* after writing CPC, transfer the ownership of PCC to platform */ 1521 + ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE); 1522 + up_write(&pcc_ss_data->pcc_lock); 1523 + } else { 1524 + ret = -ENOTSUPP; 1525 + pr_debug("_CPC in PCC is not supported\n"); 1526 + } 1527 + 1528 + return ret; 1529 + } 1530 + EXPORT_SYMBOL_GPL(cppc_set_auto_sel); 1531 + 1532 + /** 1437 1533 * cppc_set_enable - Set to enable CPPC on the processor by writing the 1438 1534 * Continuous Performance Control package EnableRegister field. 1439 1535 * @cpu: CPU for which to enable CPPC register. 
··· 1584 1488 int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls) 1585 1489 { 1586 1490 struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu); 1587 - struct cpc_register_resource *desired_reg; 1491 + struct cpc_register_resource *desired_reg, *min_perf_reg, *max_perf_reg; 1588 1492 int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu); 1589 1493 struct cppc_pcc_data *pcc_ss_data = NULL; 1590 1494 int ret = 0; ··· 1595 1499 } 1596 1500 1597 1501 desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF]; 1502 + min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF]; 1503 + max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF]; 1598 1504 1599 1505 /* 1600 1506 * This is Phase-I where we want to write to CPC registers ··· 1605 1507 * Since read_lock can be acquired by multiple CPUs simultaneously we 1606 1508 * achieve that goal here 1607 1509 */ 1608 - if (CPC_IN_PCC(desired_reg)) { 1510 + if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) { 1609 1511 if (pcc_ss_id < 0) { 1610 1512 pr_debug("Invalid pcc_ss_id\n"); 1611 1513 return -ENODEV; ··· 1628 1530 cpc_desc->write_cmd_status = 0; 1629 1531 } 1630 1532 1631 - /* 1632 - * Skip writing MIN/MAX until Linux knows how to come up with 1633 - * useful values. 1634 - */ 1635 1533 cpc_write(cpu, desired_reg, perf_ctrls->desired_perf); 1636 1534 1637 - if (CPC_IN_PCC(desired_reg)) 1535 + /* 1536 + * Only write if min_perf and max_perf not zero. Some drivers pass zero 1537 + * value to min and max perf, but they don't mean to set the zero value, 1538 + * they just don't want to write to those registers. 
1539 + */ 1540 + if (perf_ctrls->min_perf) 1541 + cpc_write(cpu, min_perf_reg, perf_ctrls->min_perf); 1542 + if (perf_ctrls->max_perf) 1543 + cpc_write(cpu, max_perf_reg, perf_ctrls->max_perf); 1544 + 1545 + if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) 1638 1546 up_read(&pcc_ss_data->pcc_lock); /* END Phase-I */ 1639 1547 /* 1640 1548 * This is Phase-II where we transfer the ownership of PCC to Platform ··· 1688 1584 * case during a CMD_READ and if there are pending writes it delivers 1689 1585 * the write command before servicing the read command 1690 1586 */ 1691 - if (CPC_IN_PCC(desired_reg)) { 1587 + if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) { 1692 1588 if (down_write_trylock(&pcc_ss_data->pcc_lock)) {/* BEGIN Phase-II */ 1693 1589 /* Update only if there are pending write commands */ 1694 1590 if (pcc_ss_data->pending_pcc_write_cmd)
+6 -6
drivers/base/power/main.c
··· 679 679 680 680 static void async_resume_noirq(void *data, async_cookie_t cookie) 681 681 { 682 - struct device *dev = (struct device *)data; 682 + struct device *dev = data; 683 683 int error; 684 684 685 685 error = device_resume_noirq(dev, pm_transition, true); ··· 816 816 817 817 static void async_resume_early(void *data, async_cookie_t cookie) 818 818 { 819 - struct device *dev = (struct device *)data; 819 + struct device *dev = data; 820 820 int error; 821 821 822 822 error = device_resume_early(dev, pm_transition, true); ··· 980 980 981 981 static void async_resume(void *data, async_cookie_t cookie) 982 982 { 983 - struct device *dev = (struct device *)data; 983 + struct device *dev = data; 984 984 int error; 985 985 986 986 error = device_resume(dev, pm_transition, true); ··· 1269 1269 1270 1270 static void async_suspend_noirq(void *data, async_cookie_t cookie) 1271 1271 { 1272 - struct device *dev = (struct device *)data; 1272 + struct device *dev = data; 1273 1273 int error; 1274 1274 1275 1275 error = __device_suspend_noirq(dev, pm_transition, true); ··· 1450 1450 1451 1451 static void async_suspend_late(void *data, async_cookie_t cookie) 1452 1452 { 1453 - struct device *dev = (struct device *)data; 1453 + struct device *dev = data; 1454 1454 int error; 1455 1455 1456 1456 error = __device_suspend_late(dev, pm_transition, true); ··· 1727 1727 1728 1728 static void async_suspend(void *data, async_cookie_t cookie) 1729 1729 { 1730 - struct device *dev = (struct device *)data; 1730 + struct device *dev = data; 1731 1731 int error; 1732 1732 1733 1733 error = __device_suspend(dev, pm_transition, true);
+1 -1
drivers/cpufreq/Kconfig.arm
··· 95 95 help 96 96 Some Broadcom STB SoCs use a co-processor running proprietary firmware 97 97 ("AVS") to handle voltage and frequency scaling. This driver provides 98 - a standard CPUfreq interface to to the firmware. 98 + a standard CPUfreq interface to the firmware. 99 99 100 100 Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS. 101 101
+129 -46
drivers/cpufreq/amd-pstate.c
··· 106 106 [EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE, 107 107 }; 108 108 109 + typedef int (*cppc_mode_transition_fn)(int); 110 + 109 111 static inline int get_mode_idx_from_str(const char *str, size_t size) 110 112 { 111 113 int i; ··· 310 308 cppc_perf.lowest_nonlinear_perf); 311 309 WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf); 312 310 313 - return 0; 311 + if (cppc_state == AMD_PSTATE_ACTIVE) 312 + return 0; 313 + 314 + ret = cppc_get_auto_sel_caps(cpudata->cpu, &cppc_perf); 315 + if (ret) { 316 + pr_warn("failed to get auto_sel, ret: %d\n", ret); 317 + return 0; 318 + } 319 + 320 + ret = cppc_set_auto_sel(cpudata->cpu, 321 + (cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1); 322 + 323 + if (ret) 324 + pr_warn("failed to set auto_sel, ret: %d\n", ret); 325 + 326 + return ret; 314 327 } 315 328 316 329 DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf); ··· 402 385 } 403 386 404 387 static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf, 405 - u32 des_perf, u32 max_perf, bool fast_switch) 388 + u32 des_perf, u32 max_perf, bool fast_switch, int gov_flags) 406 389 { 407 390 u64 prev = READ_ONCE(cpudata->cppc_req_cached); 408 391 u64 value = prev; 409 392 410 393 des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf); 394 + 395 + if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) { 396 + min_perf = des_perf; 397 + des_perf = 0; 398 + } 399 + 411 400 value &= ~AMD_CPPC_MIN_PERF(~0L); 412 401 value |= AMD_CPPC_MIN_PERF(min_perf); 413 402 ··· 468 445 469 446 cpufreq_freq_transition_begin(policy, &freqs); 470 447 amd_pstate_update(cpudata, min_perf, des_perf, 471 - max_perf, false); 448 + max_perf, false, policy->governor->flags); 472 449 cpufreq_freq_transition_end(policy, &freqs, false); 473 450 474 451 return 0; ··· 502 479 if (max_perf < min_perf) 503 480 max_perf = min_perf; 504 481 505 - amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true); 482 + amd_pstate_update(cpudata, 
min_perf, des_perf, max_perf, true, 483 + policy->governor->flags); 506 484 cpufreq_cpu_put(policy); 507 485 } 508 486 ··· 840 816 return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]); 841 817 } 842 818 819 + static void amd_pstate_driver_cleanup(void) 820 + { 821 + amd_pstate_enable(false); 822 + cppc_state = AMD_PSTATE_DISABLE; 823 + current_pstate_driver = NULL; 824 + } 825 + 826 + static int amd_pstate_register_driver(int mode) 827 + { 828 + int ret; 829 + 830 + if (mode == AMD_PSTATE_PASSIVE || mode == AMD_PSTATE_GUIDED) 831 + current_pstate_driver = &amd_pstate_driver; 832 + else if (mode == AMD_PSTATE_ACTIVE) 833 + current_pstate_driver = &amd_pstate_epp_driver; 834 + else 835 + return -EINVAL; 836 + 837 + cppc_state = mode; 838 + ret = cpufreq_register_driver(current_pstate_driver); 839 + if (ret) { 840 + amd_pstate_driver_cleanup(); 841 + return ret; 842 + } 843 + return 0; 844 + } 845 + 846 + static int amd_pstate_unregister_driver(int dummy) 847 + { 848 + cpufreq_unregister_driver(current_pstate_driver); 849 + amd_pstate_driver_cleanup(); 850 + return 0; 851 + } 852 + 853 + static int amd_pstate_change_mode_without_dvr_change(int mode) 854 + { 855 + int cpu = 0; 856 + 857 + cppc_state = mode; 858 + 859 + if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE) 860 + return 0; 861 + 862 + for_each_present_cpu(cpu) { 863 + cppc_set_auto_sel(cpu, (cppc_state == AMD_PSTATE_PASSIVE) ? 
0 : 1); 864 + } 865 + 866 + return 0; 867 + } 868 + 869 + static int amd_pstate_change_driver_mode(int mode) 870 + { 871 + int ret; 872 + 873 + ret = amd_pstate_unregister_driver(0); 874 + if (ret) 875 + return ret; 876 + 877 + ret = amd_pstate_register_driver(mode); 878 + if (ret) 879 + return ret; 880 + 881 + return 0; 882 + } 883 + 884 + static cppc_mode_transition_fn mode_state_machine[AMD_PSTATE_MAX][AMD_PSTATE_MAX] = { 885 + [AMD_PSTATE_DISABLE] = { 886 + [AMD_PSTATE_DISABLE] = NULL, 887 + [AMD_PSTATE_PASSIVE] = amd_pstate_register_driver, 888 + [AMD_PSTATE_ACTIVE] = amd_pstate_register_driver, 889 + [AMD_PSTATE_GUIDED] = amd_pstate_register_driver, 890 + }, 891 + [AMD_PSTATE_PASSIVE] = { 892 + [AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver, 893 + [AMD_PSTATE_PASSIVE] = NULL, 894 + [AMD_PSTATE_ACTIVE] = amd_pstate_change_driver_mode, 895 + [AMD_PSTATE_GUIDED] = amd_pstate_change_mode_without_dvr_change, 896 + }, 897 + [AMD_PSTATE_ACTIVE] = { 898 + [AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver, 899 + [AMD_PSTATE_PASSIVE] = amd_pstate_change_driver_mode, 900 + [AMD_PSTATE_ACTIVE] = NULL, 901 + [AMD_PSTATE_GUIDED] = amd_pstate_change_driver_mode, 902 + }, 903 + [AMD_PSTATE_GUIDED] = { 904 + [AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver, 905 + [AMD_PSTATE_PASSIVE] = amd_pstate_change_mode_without_dvr_change, 906 + [AMD_PSTATE_ACTIVE] = amd_pstate_change_driver_mode, 907 + [AMD_PSTATE_GUIDED] = NULL, 908 + }, 909 + }; 910 + 843 911 static ssize_t amd_pstate_show_status(char *buf) 844 912 { 845 913 if (!current_pstate_driver) ··· 940 824 return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]); 941 825 } 942 826 943 - static void amd_pstate_driver_cleanup(void) 944 - { 945 - current_pstate_driver = NULL; 946 - } 947 - 948 827 static int amd_pstate_update_status(const char *buf, size_t size) 949 828 { 950 - int ret = 0; 951 829 int mode_idx; 952 830 953 - if (size > 7 || size < 6) 831 + if (size > strlen("passive") || size < 
strlen("active")) 954 832 return -EINVAL; 833 + 955 834 mode_idx = get_mode_idx_from_str(buf, size); 956 835 957 - switch(mode_idx) { 958 - case AMD_PSTATE_DISABLE: 959 - if (current_pstate_driver) { 960 - cpufreq_unregister_driver(current_pstate_driver); 961 - amd_pstate_driver_cleanup(); 962 - } 963 - break; 964 - case AMD_PSTATE_PASSIVE: 965 - if (current_pstate_driver) { 966 - if (current_pstate_driver == &amd_pstate_driver) 967 - return 0; 968 - cpufreq_unregister_driver(current_pstate_driver); 969 - } 836 + if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX) 837 + return -EINVAL; 970 838 971 - current_pstate_driver = &amd_pstate_driver; 972 - cppc_state = AMD_PSTATE_PASSIVE; 973 - ret = cpufreq_register_driver(current_pstate_driver); 974 - break; 975 - case AMD_PSTATE_ACTIVE: 976 - if (current_pstate_driver) { 977 - if (current_pstate_driver == &amd_pstate_epp_driver) 978 - return 0; 979 - cpufreq_unregister_driver(current_pstate_driver); 980 - } 839 + if (mode_state_machine[cppc_state][mode_idx]) 840 + return mode_state_machine[cppc_state][mode_idx](mode_idx); 981 841 982 - current_pstate_driver = &amd_pstate_epp_driver; 983 - cppc_state = AMD_PSTATE_ACTIVE; 984 - ret = cpufreq_register_driver(current_pstate_driver); 985 - break; 986 - default: 987 - ret = -EINVAL; 988 - break; 989 - } 990 - 991 - return ret; 842 + return 0; 992 843 } 993 844 994 845 static ssize_t show_status(struct kobject *kobj, ··· 1360 1277 /* capability check */ 1361 1278 if (boot_cpu_has(X86_FEATURE_CPPC)) { 1362 1279 pr_debug("AMD CPPC MSR based functionality is supported\n"); 1363 - if (cppc_state == AMD_PSTATE_PASSIVE) 1280 + if (cppc_state != AMD_PSTATE_ACTIVE) 1364 1281 current_pstate_driver->adjust_perf = amd_pstate_adjust_perf; 1365 1282 } else { 1366 1283 pr_debug("AMD CPPC shared memory based functionality is supported\n"); ··· 1422 1339 if (cppc_state == AMD_PSTATE_ACTIVE) 1423 1340 current_pstate_driver = &amd_pstate_epp_driver; 1424 1341 1425 - if (cppc_state == 
AMD_PSTATE_PASSIVE) 1342 + if (cppc_state == AMD_PSTATE_PASSIVE || cppc_state == AMD_PSTATE_GUIDED) 1426 1343 current_pstate_driver = &amd_pstate_driver; 1427 1344 1428 1345 return 0;
+2 -1
drivers/cpufreq/cpufreq-dt-platdev.c
··· 152 152 { .compatible = "qcom,sm6115", }, 153 153 { .compatible = "qcom,sm6350", }, 154 154 { .compatible = "qcom,sm6375", }, 155 + { .compatible = "qcom,sm7225", }, 155 156 { .compatible = "qcom,sm8150", }, 156 157 { .compatible = "qcom,sm8250", }, 157 158 { .compatible = "qcom,sm8350", }, ··· 180 179 struct device_node *np = of_cpu_device_node_get(0); 181 180 bool ret = false; 182 181 183 - if (of_get_property(np, "operating-points-v2", NULL)) 182 + if (of_property_present(np, "operating-points-v2")) 184 183 ret = true; 185 184 186 185 of_node_put(np);
+9 -4
drivers/cpufreq/cpufreq.c
··· 73 73 return cpufreq_driver->target_index || cpufreq_driver->target; 74 74 } 75 75 76 + bool has_target_index(void) 77 + { 78 + return !!cpufreq_driver->target_index; 79 + } 80 + 76 81 /* internal prototypes */ 77 82 static unsigned int __cpufreq_get(struct cpufreq_policy *policy); 78 83 static int cpufreq_init_governor(struct cpufreq_policy *policy); ··· 730 725 unsigned long val; \ 731 726 int ret; \ 732 727 \ 733 - ret = sscanf(buf, "%lu", &val); \ 734 - if (ret != 1) \ 735 - return -EINVAL; \ 728 + ret = kstrtoul(buf, 0, &val); \ 729 + if (ret) \ 730 + return ret; \ 736 731 \ 737 732 ret = freq_qos_update_request(policy->object##_freq_req, val);\ 738 733 return ret >= 0 ? count : ret; \ ··· 1732 1727 * MHz. In such cases it is better to avoid getting into 1733 1728 * unnecessary frequency updates. 1734 1729 */ 1735 - if (abs(policy->cur - new_freq) < HZ_PER_MHZ) 1730 + if (abs(policy->cur - new_freq) < KHZ_PER_MHZ) 1736 1731 return policy->cur; 1737 1732 1738 1733 cpufreq_out_of_sync(policy, new_freq);
+6 -2
drivers/cpufreq/freq_table.c
··· 355 355 { 356 356 int ret; 357 357 358 - if (!policy->freq_table) 358 + if (!policy->freq_table) { 359 + /* Freq table must be passed by drivers with target_index() */ 360 + if (has_target_index()) 361 + return -EINVAL; 362 + 359 363 return 0; 364 + } 360 365 361 366 ret = cpufreq_frequency_table_cpuinfo(policy, policy->freq_table); 362 367 if (ret) ··· 372 367 373 368 MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>"); 374 369 MODULE_DESCRIPTION("CPUfreq frequency table helpers"); 375 - MODULE_LICENSE("GPL");
+1 -1
drivers/cpufreq/imx-cpufreq-dt.c
··· 89 89 90 90 cpu_dev = get_cpu_device(0); 91 91 92 - if (!of_find_property(cpu_dev->of_node, "cpu-supply", NULL)) 92 + if (!of_property_present(cpu_dev->of_node, "cpu-supply")) 93 93 return -ENODEV; 94 94 95 95 if (of_machine_is_compatible("fsl,imx7ulp")) {
+2 -2
drivers/cpufreq/imx6q-cpufreq.c
··· 222 222 u32 val; 223 223 int ret; 224 224 225 - if (of_find_property(dev->of_node, "nvmem-cells", NULL)) { 225 + if (of_property_present(dev->of_node, "nvmem-cells")) { 226 226 ret = nvmem_cell_read_u32(dev, "speed_grade", &val); 227 227 if (ret) 228 228 return ret; ··· 279 279 u32 val; 280 280 int ret = 0; 281 281 282 - if (of_find_property(dev->of_node, "nvmem-cells", NULL)) { 282 + if (of_property_present(dev->of_node, "nvmem-cells")) { 283 283 ret = nvmem_cell_read_u32(dev, "speed_grade", &val); 284 284 if (ret) 285 285 return ret;
+1 -10
drivers/cpufreq/intel_pstate.c
··· 2384 2384 {} 2385 2385 }; 2386 2386 2387 - static const struct x86_cpu_id intel_pstate_hwp_boost_ids[] = { 2388 - X86_MATCH(SKYLAKE_X, core_funcs), 2389 - X86_MATCH(SKYLAKE, core_funcs), 2390 - {} 2391 - }; 2392 - 2393 2387 static int intel_pstate_init_cpu(unsigned int cpunum) 2394 2388 { 2395 2389 struct cpudata *cpu; ··· 2402 2408 cpu->epp_default = -EINVAL; 2403 2409 2404 2410 if (hwp_active) { 2405 - const struct x86_cpu_id *id; 2406 - 2407 2411 intel_pstate_hwp_enable(cpu); 2408 2412 2409 - id = x86_match_cpu(intel_pstate_hwp_boost_ids); 2410 - if (id && intel_pstate_acpi_pm_profile_server()) 2413 + if (intel_pstate_acpi_pm_profile_server()) 2411 2414 hwp_boost = true; 2412 2415 } 2413 2416 } else if (hwp_active) {
+57 -41
drivers/cpufreq/mediatek-cpufreq.c
··· 373 373 struct platform_device *pdev; 374 374 375 375 np = of_parse_phandle(cpu_dev->of_node, "mediatek,cci", 0); 376 - if (IS_ERR_OR_NULL(np)) 377 - return NULL; 376 + if (!np) 377 + return ERR_PTR(-ENODEV); 378 378 379 379 pdev = of_find_device_by_node(np); 380 380 of_node_put(np); 381 - if (IS_ERR_OR_NULL(pdev)) 382 - return NULL; 381 + if (!pdev) 382 + return ERR_PTR(-ENODEV); 383 383 384 384 return &pdev->dev; 385 385 } ··· 401 401 info->ccifreq_bound = false; 402 402 if (info->soc_data->ccifreq_supported) { 403 403 info->cci_dev = of_get_cci(info->cpu_dev); 404 - if (IS_ERR_OR_NULL(info->cci_dev)) { 404 + if (IS_ERR(info->cci_dev)) { 405 405 ret = PTR_ERR(info->cci_dev); 406 406 dev_err(cpu_dev, "cpu%d: failed to get cci device\n", cpu); 407 407 return -ENODEV; ··· 420 420 ret = PTR_ERR(info->inter_clk); 421 421 dev_err_probe(cpu_dev, ret, 422 422 "cpu%d: failed to get intermediate clk\n", cpu); 423 - goto out_free_resources; 423 + goto out_free_mux_clock; 424 424 } 425 425 426 426 info->proc_reg = regulator_get_optional(cpu_dev, "proc"); ··· 428 428 ret = PTR_ERR(info->proc_reg); 429 429 dev_err_probe(cpu_dev, ret, 430 430 "cpu%d: failed to get proc regulator\n", cpu); 431 - goto out_free_resources; 431 + goto out_free_inter_clock; 432 432 } 433 433 434 434 ret = regulator_enable(info->proc_reg); 435 435 if (ret) { 436 436 dev_warn(cpu_dev, "cpu%d: failed to enable vproc\n", cpu); 437 - goto out_free_resources; 437 + goto out_free_proc_reg; 438 438 } 439 439 440 440 /* Both presence and absence of sram regulator are valid cases. 
*/ ··· 442 442 if (IS_ERR(info->sram_reg)) { 443 443 ret = PTR_ERR(info->sram_reg); 444 444 if (ret == -EPROBE_DEFER) 445 - goto out_free_resources; 445 + goto out_disable_proc_reg; 446 446 447 447 info->sram_reg = NULL; 448 448 } else { 449 449 ret = regulator_enable(info->sram_reg); 450 450 if (ret) { 451 451 dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu); 452 - goto out_free_resources; 452 + goto out_free_sram_reg; 453 453 } 454 454 } 455 455 ··· 458 458 if (ret) { 459 459 dev_err(cpu_dev, 460 460 "cpu%d: failed to get OPP-sharing information\n", cpu); 461 - goto out_free_resources; 461 + goto out_disable_sram_reg; 462 462 } 463 463 464 464 ret = dev_pm_opp_of_cpumask_add_table(&info->cpus); 465 465 if (ret) { 466 466 dev_warn(cpu_dev, "cpu%d: no OPP table\n", cpu); 467 - goto out_free_resources; 467 + goto out_disable_sram_reg; 468 468 } 469 469 470 470 ret = clk_prepare_enable(info->cpu_clk); ··· 533 533 out_free_opp_table: 534 534 dev_pm_opp_of_cpumask_remove_table(&info->cpus); 535 535 536 - out_free_resources: 537 - if (regulator_is_enabled(info->proc_reg)) 538 - regulator_disable(info->proc_reg); 539 - if (info->sram_reg && regulator_is_enabled(info->sram_reg)) 536 + out_disable_sram_reg: 537 + if (info->sram_reg) 540 538 regulator_disable(info->sram_reg); 541 539 542 - if (!IS_ERR(info->proc_reg)) 543 - regulator_put(info->proc_reg); 544 - if (!IS_ERR(info->sram_reg)) 540 + out_free_sram_reg: 541 + if (info->sram_reg) 545 542 regulator_put(info->sram_reg); 546 - if (!IS_ERR(info->cpu_clk)) 547 - clk_put(info->cpu_clk); 548 - if (!IS_ERR(info->inter_clk)) 549 - clk_put(info->inter_clk); 543 + 544 + out_disable_proc_reg: 545 + regulator_disable(info->proc_reg); 546 + 547 + out_free_proc_reg: 548 + regulator_put(info->proc_reg); 549 + 550 + out_free_inter_clock: 551 + clk_put(info->inter_clk); 552 + 553 + out_free_mux_clock: 554 + clk_put(info->cpu_clk); 550 555 551 556 return ret; 552 557 } 553 558 554 559 static void 
mtk_cpu_dvfs_info_release(struct mtk_cpu_dvfs_info *info) 555 560 { 556 - if (!IS_ERR(info->proc_reg)) { 557 - regulator_disable(info->proc_reg); 558 - regulator_put(info->proc_reg); 559 - } 560 - if (!IS_ERR(info->sram_reg)) { 561 + regulator_disable(info->proc_reg); 562 + regulator_put(info->proc_reg); 563 + if (info->sram_reg) { 561 564 regulator_disable(info->sram_reg); 562 565 regulator_put(info->sram_reg); 563 566 } 564 - if (!IS_ERR(info->cpu_clk)) { 565 - clk_disable_unprepare(info->cpu_clk); 566 - clk_put(info->cpu_clk); 567 - } 568 - if (!IS_ERR(info->inter_clk)) { 569 - clk_disable_unprepare(info->inter_clk); 570 - clk_put(info->inter_clk); 571 - } 572 - 567 + clk_disable_unprepare(info->cpu_clk); 568 + clk_put(info->cpu_clk); 569 + clk_disable_unprepare(info->inter_clk); 570 + clk_put(info->inter_clk); 573 571 dev_pm_opp_of_cpumask_remove_table(&info->cpus); 574 572 dev_pm_opp_unregister_notifier(info->cpu_dev, &info->opp_nb); 575 573 } ··· 693 695 .ccifreq_supported = false, 694 696 }; 695 697 698 + static const struct mtk_cpufreq_platform_data mt7622_platform_data = { 699 + .min_volt_shift = 100000, 700 + .max_volt_shift = 200000, 701 + .proc_max_volt = 1360000, 702 + .sram_min_volt = 0, 703 + .sram_max_volt = 1360000, 704 + .ccifreq_supported = false, 705 + }; 706 + 696 707 static const struct mtk_cpufreq_platform_data mt8183_platform_data = { 697 708 .min_volt_shift = 100000, 698 709 .max_volt_shift = 200000, ··· 720 713 .ccifreq_supported = true, 721 714 }; 722 715 716 + static const struct mtk_cpufreq_platform_data mt8516_platform_data = { 717 + .min_volt_shift = 100000, 718 + .max_volt_shift = 200000, 719 + .proc_max_volt = 1310000, 720 + .sram_min_volt = 0, 721 + .sram_max_volt = 1310000, 722 + .ccifreq_supported = false, 723 + }; 724 + 723 725 /* List of machines supported by this driver */ 724 726 static const struct of_device_id mtk_cpufreq_machines[] __initconst = { 725 727 { .compatible = "mediatek,mt2701", .data = &mt2701_platform_data }, 
726 728 { .compatible = "mediatek,mt2712", .data = &mt2701_platform_data }, 727 - { .compatible = "mediatek,mt7622", .data = &mt2701_platform_data }, 728 - { .compatible = "mediatek,mt7623", .data = &mt2701_platform_data }, 729 - { .compatible = "mediatek,mt8167", .data = &mt2701_platform_data }, 729 + { .compatible = "mediatek,mt7622", .data = &mt7622_platform_data }, 730 + { .compatible = "mediatek,mt7623", .data = &mt7622_platform_data }, 731 + { .compatible = "mediatek,mt8167", .data = &mt8516_platform_data }, 730 732 { .compatible = "mediatek,mt817x", .data = &mt2701_platform_data }, 731 733 { .compatible = "mediatek,mt8173", .data = &mt2701_platform_data }, 732 734 { .compatible = "mediatek,mt8176", .data = &mt2701_platform_data }, 733 735 { .compatible = "mediatek,mt8183", .data = &mt8183_platform_data }, 734 736 { .compatible = "mediatek,mt8186", .data = &mt8186_platform_data }, 735 737 { .compatible = "mediatek,mt8365", .data = &mt2701_platform_data }, 736 - { .compatible = "mediatek,mt8516", .data = &mt2701_platform_data }, 738 + { .compatible = "mediatek,mt8516", .data = &mt8516_platform_data }, 737 739 { } 738 740 }; 739 741 MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);
+3 -3
drivers/cpufreq/pmac32-cpufreq.c
··· 546 546 { 547 547 struct device_node *volt_gpio_np; 548 548 549 - if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL) 549 + if (!of_property_read_bool(cpunode, "dynamic-power-step")) 550 550 return 1; 551 551 552 552 volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select"); ··· 576 576 u32 pvr; 577 577 const u32 *value; 578 578 579 - if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL) 579 + if (!of_property_read_bool(cpunode, "dynamic-power-step")) 580 580 return 1; 581 581 582 582 hi_freq = cur_freq; ··· 632 632 633 633 /* Check for 7447A based MacRISC3 */ 634 634 if (of_machine_is_compatible("MacRISC3") && 635 - of_get_property(cpunode, "dynamic-power-step", NULL) && 635 + of_property_read_bool(cpunode, "dynamic-power-step") && 636 636 PVR_VER(mfspr(SPRN_PVR)) == 0x8003) { 637 637 pmac_cpufreq_init_7447A(cpunode); 638 638
+8 -46
drivers/cpufreq/qcom-cpufreq-hw.c
··· 14 14 #include <linux/of_address.h> 15 15 #include <linux/of_platform.h> 16 16 #include <linux/pm_opp.h> 17 - #include <linux/pm_qos.h> 18 17 #include <linux/slab.h> 19 18 #include <linux/spinlock.h> 20 19 #include <linux/units.h> ··· 27 28 #define LUT_TURBO_IND 1 28 29 29 30 #define GT_IRQ_STATUS BIT(2) 31 + 32 + #define MAX_FREQ_DOMAINS 3 30 33 31 34 struct qcom_cpufreq_soc_data { 32 35 u32 reg_enable; ··· 44 43 45 44 struct qcom_cpufreq_data { 46 45 void __iomem *base; 47 - struct resource *res; 48 46 49 47 /* 50 48 * Mutex to synchronize between de-init sequence and re-starting LMh ··· 58 58 struct clk_hw cpu_clk; 59 59 60 60 bool per_core_dcvs; 61 - 62 - struct freq_qos_request throttle_freq_req; 63 61 }; 64 62 65 63 static struct { ··· 347 349 348 350 throttled_freq = freq_hz / HZ_PER_KHZ; 349 351 350 - freq_qos_update_request(&data->throttle_freq_req, throttled_freq); 351 - 352 352 /* Update thermal pressure (the boost frequencies are accepted) */ 353 353 arch_update_thermal_pressure(policy->related_cpus, throttled_freq); 354 354 ··· 439 443 if (data->throttle_irq < 0) 440 444 return data->throttle_irq; 441 445 442 - ret = freq_qos_add_request(&policy->constraints, 443 - &data->throttle_freq_req, FREQ_QOS_MAX, 444 - FREQ_QOS_MAX_DEFAULT_VALUE); 445 - if (ret < 0) { 446 - dev_err(&pdev->dev, "Failed to add freq constraint (%d)\n", ret); 447 - return ret; 448 - } 449 - 450 446 data->cancel_throttle = false; 451 447 data->policy = policy; 452 448 ··· 505 517 if (data->throttle_irq <= 0) 506 518 return; 507 519 508 - freq_qos_remove_request(&data->throttle_freq_req); 509 520 free_irq(data->throttle_irq, data); 510 521 } 511 522 ··· 577 590 { 578 591 struct device *cpu_dev = get_cpu_device(policy->cpu); 579 592 struct qcom_cpufreq_data *data = policy->driver_data; 580 - struct resource *res = data->res; 581 - void __iomem *base = data->base; 582 593 583 594 dev_pm_opp_remove_all_dynamic(cpu_dev); 584 595 
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus); 585 596 qcom_cpufreq_hw_lmh_exit(data); 586 597 kfree(policy->freq_table); 587 598 kfree(data); 588 - iounmap(base); 589 - release_mem_region(res->start, resource_size(res)); 590 599 591 600 return 0; 592 601 } ··· 634 651 { 635 652 struct clk_hw_onecell_data *clk_data; 636 653 struct device *dev = &pdev->dev; 637 - struct device_node *soc_node; 638 654 struct device *cpu_dev; 639 655 struct clk *clk; 640 - int ret, i, num_domains, reg_sz; 656 + int ret, i, num_domains; 641 657 642 658 clk = clk_get(dev, "xo"); 643 659 if (IS_ERR(clk)) ··· 663 681 if (ret) 664 682 return ret; 665 683 666 - /* Allocate qcom_cpufreq_data based on the available frequency domains in DT */ 667 - soc_node = of_get_parent(dev->of_node); 668 - if (!soc_node) 669 - return -EINVAL; 670 - 671 - ret = of_property_read_u32(soc_node, "#address-cells", &reg_sz); 672 - if (ret) 673 - goto of_exit; 674 - 675 - ret = of_property_read_u32(soc_node, "#size-cells", &i); 676 - if (ret) 677 - goto of_exit; 678 - 679 - reg_sz += i; 680 - 681 - num_domains = of_property_count_elems_of_size(dev->of_node, "reg", sizeof(u32) * reg_sz); 682 - if (num_domains <= 0) 683 - return num_domains; 684 + for (num_domains = 0; num_domains < MAX_FREQ_DOMAINS; num_domains++) 685 + if (!platform_get_resource(pdev, IORESOURCE_MEM, num_domains)) 686 + break; 684 687 685 688 qcom_cpufreq.data = devm_kzalloc(dev, sizeof(struct qcom_cpufreq_data) * num_domains, 686 689 GFP_KERNEL); ··· 685 718 for (i = 0; i < num_domains; i++) { 686 719 struct qcom_cpufreq_data *data = &qcom_cpufreq.data[i]; 687 720 struct clk_init_data clk_init = {}; 688 - struct resource *res; 689 721 void __iomem *base; 690 722 691 - base = devm_platform_get_and_ioremap_resource(pdev, i, &res); 723 + base = devm_platform_ioremap_resource(pdev, i); 692 724 if (IS_ERR(base)) { 693 - dev_err(dev, "Failed to map resource %pR\n", res); 725 + dev_err(dev, "Failed to map resource index %d\n", i); 694 726 
return PTR_ERR(base); 695 727 } 696 728 697 729 data->base = base; 698 - data->res = res; 699 730 700 731 /* Register CPU clock for each frequency domain */ 701 732 clk_init.name = kasprintf(GFP_KERNEL, "qcom_cpufreq%d", i); ··· 726 761 dev_err(dev, "CPUFreq HW driver failed to register\n"); 727 762 else 728 763 dev_dbg(dev, "QCOM CPUFreq HW driver initialized\n"); 729 - 730 - of_exit: 731 - of_node_put(soc_node); 732 764 733 765 return ret; 734 766 }
+1 -1
drivers/cpufreq/scmi-cpufreq.c
··· 310 310 311 311 #ifdef CONFIG_COMMON_CLK 312 312 /* dummy clock provider as needed by OPP if clocks property is used */ 313 - if (of_find_property(dev->of_node, "#clock-cells", NULL)) 313 + if (of_property_present(dev->of_node, "#clock-cells")) 314 314 devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL); 315 315 #endif 316 316
-1
drivers/cpufreq/tegra124-cpufreq.c
··· 221 221 222 222 MODULE_AUTHOR("Tuomas Tynkkynen <ttynkkynen@nvidia.com>"); 223 223 MODULE_DESCRIPTION("cpufreq driver for NVIDIA Tegra124"); 224 - MODULE_LICENSE("GPL v2");
+143 -13
drivers/cpufreq/tegra194-cpufreq.c
··· 12 12 #include <linux/of_platform.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/slab.h> 15 + #include <linux/units.h> 15 16 16 17 #include <asm/smp_plat.h> 17 18 ··· 66 65 67 66 struct tegra194_cpufreq_data { 68 67 void __iomem *regs; 69 - struct cpufreq_frequency_table **tables; 68 + struct cpufreq_frequency_table **bpmp_luts; 70 69 const struct tegra_cpufreq_soc *soc; 70 + bool icc_dram_bw_scaling; 71 71 }; 72 72 73 73 static struct workqueue_struct *read_counters_wq; 74 + 75 + static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz) 76 + { 77 + struct tegra194_cpufreq_data *data = cpufreq_get_driver_data(); 78 + struct dev_pm_opp *opp; 79 + struct device *dev; 80 + int ret; 81 + 82 + dev = get_cpu_device(policy->cpu); 83 + if (!dev) 84 + return -ENODEV; 85 + 86 + opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true); 87 + if (IS_ERR(opp)) 88 + return PTR_ERR(opp); 89 + 90 + ret = dev_pm_opp_set_opp(dev, opp); 91 + if (ret) 92 + data->icc_dram_bw_scaling = false; 93 + 94 + dev_pm_opp_put(opp); 95 + return ret; 96 + } 74 97 75 98 static void tegra_get_cpu_mpidr(void *mpidr) 76 99 { ··· 379 354 * to the last written ndiv value from freq_table. This is 380 355 * done to return consistent value. 
381 356 */ 382 - cpufreq_for_each_valid_entry(pos, data->tables[clusterid]) { 357 + cpufreq_for_each_valid_entry(pos, data->bpmp_luts[clusterid]) { 383 358 if (pos->driver_data != ndiv) 384 359 continue; 385 360 ··· 394 369 return rate; 395 370 } 396 371 372 + static int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy, 373 + struct cpufreq_frequency_table *bpmp_lut, 374 + struct cpufreq_frequency_table **opp_table) 375 + { 376 + struct tegra194_cpufreq_data *data = cpufreq_get_driver_data(); 377 + struct cpufreq_frequency_table *freq_table = NULL; 378 + struct cpufreq_frequency_table *pos; 379 + struct device *cpu_dev; 380 + struct dev_pm_opp *opp; 381 + unsigned long rate; 382 + int ret, max_opps; 383 + int j = 0; 384 + 385 + cpu_dev = get_cpu_device(policy->cpu); 386 + if (!cpu_dev) { 387 + pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu); 388 + return -ENODEV; 389 + } 390 + 391 + /* Initialize OPP table mentioned in operating-points-v2 property in DT */ 392 + ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0); 393 + if (!ret) { 394 + max_opps = dev_pm_opp_get_opp_count(cpu_dev); 395 + if (max_opps <= 0) { 396 + dev_err(cpu_dev, "Failed to add OPPs\n"); 397 + return max_opps; 398 + } 399 + 400 + /* Disable all opps and cross-validate against LUT later */ 401 + for (rate = 0; ; rate++) { 402 + opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate); 403 + if (IS_ERR(opp)) 404 + break; 405 + 406 + dev_pm_opp_put(opp); 407 + dev_pm_opp_disable(cpu_dev, rate); 408 + } 409 + } else { 410 + dev_err(cpu_dev, "Invalid or empty opp table in device tree\n"); 411 + data->icc_dram_bw_scaling = false; 412 + return ret; 413 + } 414 + 415 + freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL); 416 + if (!freq_table) 417 + return -ENOMEM; 418 + 419 + /* 420 + * Cross check the frequencies from BPMP-FW LUT against the OPP's present in DT. 421 + * Enable only those DT OPP's which are present in LUT also. 
422 + */ 423 + cpufreq_for_each_valid_entry(pos, bpmp_lut) { 424 + opp = dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * KHZ, false); 425 + if (IS_ERR(opp)) 426 + continue; 427 + 428 + ret = dev_pm_opp_enable(cpu_dev, pos->frequency * KHZ); 429 + if (ret < 0) 430 + return ret; 431 + 432 + freq_table[j].driver_data = pos->driver_data; 433 + freq_table[j].frequency = pos->frequency; 434 + j++; 435 + } 436 + 437 + freq_table[j].driver_data = pos->driver_data; 438 + freq_table[j].frequency = CPUFREQ_TABLE_END; 439 + 440 + *opp_table = &freq_table[0]; 441 + 442 + dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus); 443 + 444 + return ret; 445 + } 446 + 397 447 static int tegra194_cpufreq_init(struct cpufreq_policy *policy) 398 448 { 399 449 struct tegra194_cpufreq_data *data = cpufreq_get_driver_data(); 400 450 int maxcpus_per_cluster = data->soc->maxcpus_per_cluster; 451 + struct cpufreq_frequency_table *freq_table; 452 + struct cpufreq_frequency_table *bpmp_lut; 401 453 u32 start_cpu, cpu; 402 454 u32 clusterid; 455 + int ret; 403 456 404 457 data->soc->ops->get_cpu_cluster_id(policy->cpu, NULL, &clusterid); 405 - 406 - if (clusterid >= data->soc->num_clusters || !data->tables[clusterid]) 458 + if (clusterid >= data->soc->num_clusters || !data->bpmp_luts[clusterid]) 407 459 return -EINVAL; 408 460 409 461 start_cpu = rounddown(policy->cpu, maxcpus_per_cluster); ··· 489 387 if (cpu_possible(cpu)) 490 388 cpumask_set_cpu(cpu, policy->cpus); 491 389 } 492 - policy->freq_table = data->tables[clusterid]; 493 390 policy->cpuinfo.transition_latency = TEGRA_CPUFREQ_TRANSITION_LATENCY; 391 + 392 + bpmp_lut = data->bpmp_luts[clusterid]; 393 + 394 + if (data->icc_dram_bw_scaling) { 395 + ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table); 396 + if (!ret) { 397 + policy->freq_table = freq_table; 398 + return 0; 399 + } 400 + } 401 + 402 + data->icc_dram_bw_scaling = false; 403 + policy->freq_table = bpmp_lut; 404 + pr_info("OPP tables missing from DT, EMC 
frequency scaling disabled\n"); 494 405 495 406 return 0; 496 407 } ··· 520 405 * request out of the values requested by both cores in that cluster. 521 406 */ 522 407 data->soc->ops->set_cpu_ndiv(policy, (u64)tbl->driver_data); 408 + 409 + if (data->icc_dram_bw_scaling) 410 + tegra_cpufreq_set_bw(policy, tbl->frequency); 523 411 524 412 return 0; 525 413 } ··· 557 439 } 558 440 559 441 static struct cpufreq_frequency_table * 560 - init_freq_table(struct platform_device *pdev, struct tegra_bpmp *bpmp, 561 - unsigned int cluster_id) 442 + tegra_cpufreq_bpmp_read_lut(struct platform_device *pdev, struct tegra_bpmp *bpmp, 443 + unsigned int cluster_id) 562 444 { 563 445 struct cpufreq_frequency_table *freq_table; 564 446 struct mrq_cpu_ndiv_limits_response resp; ··· 633 515 const struct tegra_cpufreq_soc *soc; 634 516 struct tegra194_cpufreq_data *data; 635 517 struct tegra_bpmp *bpmp; 518 + struct device *cpu_dev; 636 519 int err, i; 637 520 638 521 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); ··· 649 530 return -EINVAL; 650 531 } 651 532 652 - data->tables = devm_kcalloc(&pdev->dev, data->soc->num_clusters, 653 - sizeof(*data->tables), GFP_KERNEL); 654 - if (!data->tables) 533 + data->bpmp_luts = devm_kcalloc(&pdev->dev, data->soc->num_clusters, 534 + sizeof(*data->bpmp_luts), GFP_KERNEL); 535 + if (!data->bpmp_luts) 655 536 return -ENOMEM; 656 537 657 538 if (soc->actmon_cntr_base) { ··· 675 556 } 676 557 677 558 for (i = 0; i < data->soc->num_clusters; i++) { 678 - data->tables[i] = init_freq_table(pdev, bpmp, i); 679 - if (IS_ERR(data->tables[i])) { 680 - err = PTR_ERR(data->tables[i]); 559 + data->bpmp_luts[i] = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, i); 560 + if (IS_ERR(data->bpmp_luts[i])) { 561 + err = PTR_ERR(data->bpmp_luts[i]); 681 562 goto err_free_res; 682 563 } 683 564 } 684 565 685 566 tegra194_cpufreq_driver.driver_data = data; 567 + 568 + /* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */ 569 + cpu_dev 
= get_cpu_device(0); 570 + if (!cpu_dev) 571 + return -EPROBE_DEFER; 572 + 573 + if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) { 574 + err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL); 575 + if (!err) 576 + data->icc_dram_bw_scaling = true; 577 + } 686 578 687 579 err = cpufreq_register_driver(&tegra194_cpufreq_driver); 688 580 if (!err)
+1 -1
drivers/cpufreq/tegra20-cpufreq.c
··· 25 25 struct device_node *np = of_cpu_device_node_get(0); 26 26 bool ret = false; 27 27 28 - if (of_get_property(np, "operating-points-v2", NULL)) 28 + if (of_property_present(np, "operating-points-v2")) 29 29 ret = true; 30 30 31 31 of_node_put(np);
+1 -1
drivers/cpuidle/cpuidle-psci-domain.c
··· 166 166 * initialize a genpd/genpd-of-provider pair when it's found. 167 167 */ 168 168 for_each_child_of_node(np, node) { 169 - if (!of_find_property(node, "#power-domain-cells", NULL)) 169 + if (!of_property_present(node, "#power-domain-cells")) 170 170 continue; 171 171 172 172 ret = psci_pd_init(node, use_osi);
+3 -3
drivers/cpuidle/cpuidle-riscv-sbi.c
··· 497 497 * initialize a genpd/genpd-of-provider pair when it's found. 498 498 */ 499 499 for_each_child_of_node(np, node) { 500 - if (!of_find_property(node, "#power-domain-cells", NULL)) 500 + if (!of_property_present(node, "#power-domain-cells")) 501 501 continue; 502 502 503 503 ret = sbi_pd_init(node); ··· 548 548 for_each_possible_cpu(cpu) { 549 549 np = of_cpu_device_node_get(cpu); 550 550 if (np && 551 - of_find_property(np, "power-domains", NULL) && 552 - of_find_property(np, "power-domain-names", NULL)) { 551 + of_property_present(np, "power-domains") && 552 + of_property_present(np, "power-domain-names")) { 553 553 continue; 554 554 } else { 555 555 sbi_cpuidle_use_osi = false;
-1
drivers/devfreq/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 menuconfig PM_DEVFREQ 3 3 bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support" 4 - select SRCU 5 4 select PM_OPP 6 5 help 7 6 A device may have a list of frequencies and voltages available.
+1 -2
drivers/devfreq/event/exynos-ppmu.c
··· 621 621 } 622 622 623 623 /* Maps the memory mapped IO to control PPMU register */ 624 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 625 - base = devm_ioremap_resource(dev, res); 624 + base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 626 625 if (IS_ERR(base)) 627 626 return PTR_ERR(base); 628 627
+2 -2
drivers/devfreq/exynos-bus.c
··· 432 432 goto err; 433 433 434 434 /* Create child platform device for the interconnect provider */ 435 - if (of_get_property(dev->of_node, "#interconnect-cells", NULL)) { 435 + if (of_property_present(dev->of_node, "#interconnect-cells")) { 436 436 bus->icc_pdev = platform_device_register_data( 437 437 dev, "exynos-generic-icc", 438 438 PLATFORM_DEVID_AUTO, NULL, 0); ··· 513 513 .driver = { 514 514 .name = "exynos-bus", 515 515 .pm = &exynos_bus_pm, 516 - .of_match_table = of_match_ptr(exynos_bus_of_match), 516 + .of_match_table = exynos_bus_of_match, 517 517 }, 518 518 }; 519 519 module_platform_driver(exynos_bus_platdrv);
+44 -34
drivers/opp/core.c
··· 935 935 return 0; 936 936 } 937 937 938 - static int _set_required_opp(struct device *dev, struct device *pd_dev, 939 - struct dev_pm_opp *opp, int i) 938 + static int _set_performance_state(struct device *dev, struct device *pd_dev, 939 + struct dev_pm_opp *opp, int i) 940 940 { 941 941 unsigned int pstate = likely(opp) ? opp->required_opps[i]->pstate : 0; 942 942 int ret; ··· 953 953 return ret; 954 954 } 955 955 956 - /* This is only called for PM domain for now */ 957 - static int _set_required_opps(struct device *dev, 958 - struct opp_table *opp_table, 959 - struct dev_pm_opp *opp, bool up) 956 + static int _opp_set_required_opps_generic(struct device *dev, 957 + struct opp_table *opp_table, struct dev_pm_opp *opp, bool scaling_down) 960 958 { 961 - struct opp_table **required_opp_tables = opp_table->required_opp_tables; 962 - struct device **genpd_virt_devs = opp_table->genpd_virt_devs; 959 + dev_err(dev, "setting required-opps isn't supported for non-genpd devices\n"); 960 + return -ENOENT; 961 + } 962 + 963 + static int _opp_set_required_opps_genpd(struct device *dev, 964 + struct opp_table *opp_table, struct dev_pm_opp *opp, bool scaling_down) 965 + { 966 + struct device **genpd_virt_devs = 967 + opp_table->genpd_virt_devs ? opp_table->genpd_virt_devs : &dev; 963 968 int i, ret = 0; 964 - 965 - if (!required_opp_tables) 966 - return 0; 967 - 968 - /* required-opps not fully initialized yet */ 969 - if (lazy_linking_pending(opp_table)) 970 - return -EBUSY; 971 - 972 - /* 973 - * We only support genpd's OPPs in the "required-opps" for now, as we 974 - * don't know much about other use cases. Error out if the required OPP 975 - * doesn't belong to a genpd. 
976 - */ 977 - if (unlikely(!required_opp_tables[0]->is_genpd)) { 978 - dev_err(dev, "required-opps don't belong to a genpd\n"); 979 - return -ENOENT; 980 - } 981 - 982 - /* Single genpd case */ 983 - if (!genpd_virt_devs) 984 - return _set_required_opp(dev, dev, opp, 0); 985 - 986 - /* Multiple genpd case */ 987 969 988 970 /* 989 971 * Acquire genpd_virt_dev_lock to make sure we don't use a genpd_dev ··· 974 992 mutex_lock(&opp_table->genpd_virt_dev_lock); 975 993 976 994 /* Scaling up? Set required OPPs in normal order, else reverse */ 977 - if (up) { 995 + if (!scaling_down) { 978 996 for (i = 0; i < opp_table->required_opp_count; i++) { 979 - ret = _set_required_opp(dev, genpd_virt_devs[i], opp, i); 997 + ret = _set_performance_state(dev, genpd_virt_devs[i], opp, i); 980 998 if (ret) 981 999 break; 982 1000 } 983 1001 } else { 984 1002 for (i = opp_table->required_opp_count - 1; i >= 0; i--) { 985 - ret = _set_required_opp(dev, genpd_virt_devs[i], opp, i); 1003 + ret = _set_performance_state(dev, genpd_virt_devs[i], opp, i); 986 1004 if (ret) 987 1005 break; 988 1006 } ··· 991 1009 mutex_unlock(&opp_table->genpd_virt_dev_lock); 992 1010 993 1011 return ret; 1012 + } 1013 + 1014 + /* This is only called for PM domain for now */ 1015 + static int _set_required_opps(struct device *dev, struct opp_table *opp_table, 1016 + struct dev_pm_opp *opp, bool up) 1017 + { 1018 + /* required-opps not fully initialized yet */ 1019 + if (lazy_linking_pending(opp_table)) 1020 + return -EBUSY; 1021 + 1022 + if (opp_table->set_required_opps) 1023 + return opp_table->set_required_opps(dev, opp_table, opp, up); 1024 + 1025 + return 0; 1026 + } 1027 + 1028 + /* Update set_required_opps handler */ 1029 + void _update_set_required_opps(struct opp_table *opp_table) 1030 + { 1031 + /* Already set */ 1032 + if (opp_table->set_required_opps) 1033 + return; 1034 + 1035 + /* All required OPPs will belong to genpd or none */ 1036 + if (opp_table->required_opp_tables[0]->is_genpd) 1037 + 
opp_table->set_required_opps = _opp_set_required_opps_genpd; 1038 + else 1039 + opp_table->set_required_opps = _opp_set_required_opps_generic; 994 1040 } 995 1041 996 1042 static void _find_current_opp(struct device *dev, struct opp_table *opp_table)
+5 -2
drivers/opp/of.c
··· 196 196 /* Let's do the linking later on */ 197 197 if (lazy) 198 198 list_add(&opp_table->lazy, &lazy_opp_tables); 199 + else 200 + _update_set_required_opps(opp_table); 199 201 200 202 goto put_np; 201 203 ··· 226 224 of_property_read_u32(np, "voltage-tolerance", 227 225 &opp_table->voltage_tolerance_v1); 228 226 229 - if (of_find_property(np, "#power-domain-cells", NULL)) 227 + if (of_property_present(np, "#power-domain-cells")) 230 228 opp_table->is_genpd = true; 231 229 232 230 /* Get OPP table node */ ··· 413 411 414 412 /* All required opp-tables found, remove from lazy list */ 415 413 if (!lazy) { 414 + _update_set_required_opps(opp_table); 416 415 list_del_init(&opp_table->lazy); 417 416 418 417 list_for_each_entry(opp, &opp_table->opp_list, node) ··· 539 536 * an OPP then the OPP should not be enabled as there is 540 537 * no way to see if the hardware supports it. 541 538 */ 542 - if (of_find_property(np, "opp-supported-hw", NULL)) 539 + if (of_property_present(np, "opp-supported-hw")) 543 540 return false; 544 541 else 545 542 return true;
+4
drivers/opp/opp.h
··· 184 184 * @enabled: Set to true if the device's resources are enabled/configured. 185 185 * @genpd_performance_state: Device's power domain support performance state. 186 186 * @is_genpd: Marks if the OPP table belongs to a genpd. 187 + * @set_required_opps: Helper responsible to set required OPPs. 187 188 * @dentry: debugfs dentry pointer of the real device directory (not links). 188 189 * @dentry_name: Name of the real dentry. 189 190 * ··· 235 234 bool enabled; 236 235 bool genpd_performance_state; 237 236 bool is_genpd; 237 + int (*set_required_opps)(struct device *dev, 238 + struct opp_table *opp_table, struct dev_pm_opp *opp, bool scaling_down); 238 239 239 240 #ifdef CONFIG_DEBUG_FS 240 241 struct dentry *dentry; ··· 260 257 struct opp_table *_add_opp_table_indexed(struct device *dev, int index, bool getclk); 261 258 void _put_opp_list_kref(struct opp_table *opp_table); 262 259 void _required_opps_available(struct dev_pm_opp *opp, int count); 260 + void _update_set_required_opps(struct opp_table *opp_table); 263 261 264 262 static inline bool lazy_linking_pending(struct opp_table *opp_table) 265 263 {
+3 -3
drivers/platform/x86/amd/pmc.c
··· 365 365 366 366 if (!table.s0i3_last_entry_status) 367 367 dev_warn(pdev->dev, "Last suspend didn't reach deepest state\n"); 368 - else 369 - dev_dbg(pdev->dev, "Last suspend in deepest state for %lluus\n", 370 - table.timein_s0i3_lastcapture); 368 + pm_report_hw_sleep_time(table.s0i3_last_entry_status ? 369 + table.timein_s0i3_lastcapture : 0); 371 370 } 372 371 373 372 static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev) ··· 1015 1016 } 1016 1017 1017 1018 amd_pmc_dbgfs_register(dev); 1019 + pm_report_max_hw_sleep(U64_MAX); 1018 1020 return 0; 1019 1021 1020 1022 err_pci_dev_put:
+9 -8
drivers/platform/x86/intel/pmc/core.c
··· 1153 1153 pmc_core_do_dmi_quirks(pmcdev); 1154 1154 1155 1155 pmc_core_dbgfs_register(pmcdev); 1156 + pm_report_max_hw_sleep(FIELD_MAX(SLP_S0_RES_COUNTER_MASK) * 1157 + pmc_core_adjust_slp_s0_step(pmcdev, 1)); 1156 1158 1157 1159 device_initialized = true; 1158 1160 dev_info(&pdev->dev, " initialized\n"); ··· 1180 1178 { 1181 1179 struct pmc_dev *pmcdev = dev_get_drvdata(dev); 1182 1180 1183 - pmcdev->check_counters = false; 1184 - 1185 - /* No warnings on S0ix failures */ 1186 - if (!warn_on_s0ix_failures) 1187 - return 0; 1188 - 1189 1181 /* Check if the syspend will actually use S0ix */ 1190 1182 if (pm_suspend_via_firmware()) 1191 1183 return 0; ··· 1192 1196 if (pmc_core_dev_state_get(pmcdev, &pmcdev->s0ix_counter)) 1193 1197 return -EIO; 1194 1198 1195 - pmcdev->check_counters = true; 1196 1199 return 0; 1197 1200 } 1198 1201 ··· 1215 1220 if (pmc_core_dev_state_get(pmcdev, &s0ix_counter)) 1216 1221 return false; 1217 1222 1223 + pm_report_hw_sleep_time((u32)(s0ix_counter - pmcdev->s0ix_counter)); 1224 + 1218 1225 if (s0ix_counter == pmcdev->s0ix_counter) 1219 1226 return true; 1220 1227 ··· 1229 1232 const struct pmc_bit_map **maps = pmcdev->map->lpm_sts; 1230 1233 int offset = pmcdev->map->lpm_status_offset; 1231 1234 1232 - if (!pmcdev->check_counters) 1235 + /* Check if the syspend used S0ix */ 1236 + if (pm_suspend_via_firmware()) 1233 1237 return 0; 1234 1238 1235 1239 if (!pmc_core_is_s0ix_failed(pmcdev)) 1240 + return 0; 1241 + 1242 + if (!warn_on_s0ix_failures) 1236 1243 return 0; 1237 1244 1238 1245 if (pmc_core_is_pc10_failed(pmcdev)) {
+2 -2
drivers/platform/x86/intel/pmc/core.h
···
 #include <linux/bits.h>
 #include <linux/platform_device.h>

+#define SLP_S0_RES_COUNTER_MASK			GENMASK(31, 0)
+
 #define PMC_BASE_ADDR_DEFAULT			0xFE000000

 /* Sunrise Point Power Management Controller PCI Device ID */
···
  * @pmc_xram_read_bit:	flag to indicate whether PMC XRAM shadow registers
  *			used to read MPHY PG and PLL status are available
  * @mutex_lock:		mutex to complete one transcation
- * @check_counters:	On resume, check if counters are getting incremented
  * @pc10_counter:	PC10 residency counter
  * @s0ix_counter:	S0ix residency (step adjusted)
  * @num_lpm_modes:	Count of enabled modes
···
 	int pmc_xram_read_bit;
 	struct mutex lock; /* generic mutex lock for PMC Core */

-	bool check_counters; /* Check for counter increments on resume */
 	u64 pc10_counter;
 	u64 s0ix_counter;
 	int num_lpm_modes;
+11
include/acpi/cppc_acpi.h
···
 	u32 lowest_freq;
 	u32 nominal_freq;
 	u32 energy_perf;
+	bool auto_sel;
 };

 struct cppc_perf_ctrls {
···
 extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val);
 extern int cppc_get_epp_perf(int cpunum, u64 *epp_perf);
 extern int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable);
+extern int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps);
+extern int cppc_set_auto_sel(int cpu, bool enable);
 #else /* !CONFIG_ACPI_CPPC_LIB */
 static inline int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
 {
···
 	return -ENOTSUPP;
 }
 static inline int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
+{
+	return -ENOTSUPP;
+}
+static inline int cppc_set_auto_sel(int cpu, bool enable)
+{
+	return -ENOTSUPP;
+}
+static inline int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
 {
 	return -ENOTSUPP;
 }
+2
include/linux/amd-pstate.h
···
 	AMD_PSTATE_DISABLE = 0,
 	AMD_PSTATE_PASSIVE,
 	AMD_PSTATE_ACTIVE,
+	AMD_PSTATE_GUIDED,
 	AMD_PSTATE_MAX,
 };
···
 	[AMD_PSTATE_DISABLE] = "disable",
 	[AMD_PSTATE_PASSIVE] = "passive",
 	[AMD_PSTATE_ACTIVE] = "active",
+	[AMD_PSTATE_GUIDED] = "guided",
 	NULL,
 };
 #endif /* _LINUX_AMD_PSTATE_H */
+1
include/linux/cpufreq.h
···
 struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
 void cpufreq_enable_fast_switch(struct cpufreq_policy *policy);
 void cpufreq_disable_fast_switch(struct cpufreq_policy *policy);
+bool has_target_index(void);
 #else
 static inline unsigned int cpufreq_get(unsigned int cpu)
 {
+8
include/linux/suspend.h
···
 	int last_failed_errno;
 	int errno[REC_FAILED_NUM];
 	int last_failed_step;
+	u64 last_hw_sleep;
+	u64 total_hw_sleep;
+	u64 max_hw_sleep;
 	enum suspend_stat_step failed_steps[REC_FAILED_NUM];
 };
···
 extern int register_pm_notifier(struct notifier_block *nb);
 extern int unregister_pm_notifier(struct notifier_block *nb);
 extern void ksys_sync_helper(void);
+extern void pm_report_hw_sleep_time(u64 t);
+extern void pm_report_max_hw_sleep(u64 t);

 #define pm_notifier(fn, pri) {				\
 	static struct notifier_block fn##_nb =			\
···
 {
 	return 0;
 }
+
+static inline void pm_report_hw_sleep_time(u64 t) {};
+static inline void pm_report_max_hw_sleep(u64 t) {};

 static inline void ksys_sync_helper(void) {}
+47 -12
kernel/power/main.c
···
  * Copyright (c) 2003 Open Source Development Lab
  */

+#include <linux/acpi.h>
 #include <linux/export.h>
 #include <linux/kobject.h>
 #include <linux/string.h>
···
 	return blocking_notifier_chain_unregister(&pm_chain_head, nb);
 }
 EXPORT_SYMBOL_GPL(unregister_pm_notifier);
+
+void pm_report_hw_sleep_time(u64 t)
+{
+	suspend_stats.last_hw_sleep = t;
+	suspend_stats.total_hw_sleep += t;
+}
+EXPORT_SYMBOL_GPL(pm_report_hw_sleep_time);
+
+void pm_report_max_hw_sleep(u64 t)
+{
+	suspend_stats.max_hw_sleep = t;
+}
+EXPORT_SYMBOL_GPL(pm_report_max_hw_sleep);

 int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down)
 {
···
 	}
 }

-#define suspend_attr(_name)					\
+#define suspend_attr(_name, format_str)				\
 static ssize_t _name##_show(struct kobject *kobj,		\
 		struct kobj_attribute *attr, char *buf)		\
 {								\
-	return sprintf(buf, "%d\n", suspend_stats._name);	\
+	return sprintf(buf, format_str, suspend_stats._name);	\
 }								\
 static struct kobj_attribute _name = __ATTR_RO(_name)

-suspend_attr(success);
-suspend_attr(fail);
-suspend_attr(failed_freeze);
-suspend_attr(failed_prepare);
-suspend_attr(failed_suspend);
-suspend_attr(failed_suspend_late);
-suspend_attr(failed_suspend_noirq);
-suspend_attr(failed_resume);
-suspend_attr(failed_resume_early);
-suspend_attr(failed_resume_noirq);
+suspend_attr(success, "%d\n");
+suspend_attr(fail, "%d\n");
+suspend_attr(failed_freeze, "%d\n");
+suspend_attr(failed_prepare, "%d\n");
+suspend_attr(failed_suspend, "%d\n");
+suspend_attr(failed_suspend_late, "%d\n");
+suspend_attr(failed_suspend_noirq, "%d\n");
+suspend_attr(failed_resume, "%d\n");
+suspend_attr(failed_resume_early, "%d\n");
+suspend_attr(failed_resume_noirq, "%d\n");
+suspend_attr(last_hw_sleep, "%llu\n");
+suspend_attr(total_hw_sleep, "%llu\n");
+suspend_attr(max_hw_sleep, "%llu\n");

 static ssize_t last_failed_dev_show(struct kobject *kobj,
 		struct kobj_attribute *attr, char *buf)
···
 	&last_failed_dev.attr,
 	&last_failed_errno.attr,
 	&last_failed_step.attr,
+	&last_hw_sleep.attr,
+	&total_hw_sleep.attr,
+	&max_hw_sleep.attr,
 	NULL,
 };
+
+static umode_t suspend_attr_is_visible(struct kobject *kobj, struct attribute *attr, int idx)
+{
+	if (attr != &last_hw_sleep.attr &&
+	    attr != &total_hw_sleep.attr &&
+	    attr != &max_hw_sleep.attr)
+		return 0444;
+
+#ifdef CONFIG_ACPI
+	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)
+		return 0444;
+#endif
+	return 0;
+}

 static const struct attribute_group suspend_attr_group = {
 	.name = "suspend_stats",
 	.attrs = suspend_attrs,
+	.is_visible = suspend_attr_is_visible,
 };

 #ifdef CONFIG_DEBUG_FS
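The `kernel/power/main.c` hunk above wires the new statistics together: `pm_report_hw_sleep_time()` records the residency of the most recent cycle in `last_hw_sleep` and accumulates it into `total_hw_sleep`, while `pm_report_max_hw_sleep()` publishes the counter's ceiling, and the three values surface as `/sys/power/suspend_stats/{last,total,max}_hw_sleep`. A minimal Python model of that bookkeeping (class and method names are illustrative, not kernel API):

```python
# Sketch of the suspend_stats hardware-sleep bookkeeping added in this
# merge: "last" is overwritten per cycle, "total" accumulates, and "max"
# is a one-time ceiling set by the platform driver at probe time.

class SuspendStats:
    def __init__(self):
        self.last_hw_sleep = 0
        self.total_hw_sleep = 0
        self.max_hw_sleep = 0

    def report_hw_sleep_time(self, t):
        """Called by the PMC driver on resume with the measured residency."""
        self.last_hw_sleep = t
        self.total_hw_sleep += t

    def report_max_hw_sleep(self, t):
        """Called once at probe with the largest reportable residency."""
        self.max_hw_sleep = t

stats = SuspendStats()
stats.report_max_hw_sleep(2**32 - 1)  # e.g. a 32-bit residency counter
stats.report_hw_sleep_time(1200)      # first suspend-to-idle cycle
stats.report_hw_sleep_time(800)       # second cycle
print(stats.last_hw_sleep)            # 800
print(stats.total_hw_sleep)           # 2000
```

Note the `is_visible` callback in the diff: the three files are only exposed when the firmware advertises low-power S0 idle (`ACPI_FADT_LOW_POWER_S0`), since the numbers are meaningless on platforms that suspend through firmware.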
+1 -1
tools/power/pm-graph/README
···
  |_|    |___/       |_|

 pm-graph: suspend/resume/boot timing analysis tools
-	Version: 5.10
+	Version: 5.11
 	Author: Todd Brandt <todd.e.brandt@intel.com>
 	Home Page: https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/overview.html
+38
tools/power/pm-graph/install_latest_from_github.sh
···
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Script which clones and installs the latest pm-graph
+# from http://github.com/intel/pm-graph.git
+
+OUT=`mktemp -d 2>/dev/null`
+if [ -z "$OUT" -o ! -e $OUT ]; then
+	echo "ERROR: mktemp failed to create folder"
+	exit
+fi
+
+cleanup() {
+	if [ -e "$OUT" ]; then
+		cd $OUT
+		rm -rf pm-graph
+		cd /tmp
+		rmdir $OUT
+	fi
+}
+
+git clone http://github.com/intel/pm-graph.git $OUT/pm-graph
+if [ ! -e "$OUT/pm-graph/sleepgraph.py" ]; then
+	echo "ERROR: pm-graph github repo failed to clone"
+	cleanup
+	exit
+fi
+
+cd $OUT/pm-graph
+echo "INSTALLING PM-GRAPH"
+sudo make install
+if [ $? -eq 0 ]; then
+	echo "INSTALL SUCCESS"
+	sleepgraph -v
+else
+	echo "INSTALL FAILED"
+fi
+cleanup
+40 -18
tools/power/pm-graph/sleepgraph.py
···
 # store system values and test parameters
 class SystemValues:
 	title = 'SleepGraph'
-	version = '5.10'
+	version = '5.11'
 	ansi = False
 	rs = 0
 	display = ''
···
 		[0, 'acpidevices', 'sh', '-c', 'ls -l /sys/bus/acpi/devices/*/physical_node'],
 		[0, 's0ix_require', 'cat', '/sys/kernel/debug/pmc_core/substate_requirements'],
 		[0, 's0ix_debug', 'cat', '/sys/kernel/debug/pmc_core/slp_s0_debug_status'],
+		[0, 'ethtool', 'ethtool', '{ethdev}'],
 		[1, 's0ix_residency', 'cat', '/sys/kernel/debug/pmc_core/slp_s0_residency_usec'],
 		[1, 'interrupts', 'cat', '/proc/interrupts'],
 		[1, 'wakeups', 'cat', '/sys/kernel/debug/wakeup_sources'],
···
 		else:
 			out[data[0].strip()] = data[1]
 		return out
+	def cmdinfovar(self, arg):
+		if arg == 'ethdev':
+			try:
+				cmd = [self.getExec('ip'), '-4', '-o', '-br', 'addr']
+				fp = Popen(cmd, stdout=PIPE, stderr=PIPE).stdout
+				info = ascii(fp.read()).strip()
+				fp.close()
+			except:
+				return 'iptoolcrash'
+			for line in info.split('\n'):
+				if line[0] == 'e' and 'UP' in line:
+					return line.split()[0]
+			return 'nodevicefound'
+		return 'unknown'
 	def cmdinfo(self, begin, debug=False):
 		out = []
 		if begin:
 			self.cmd1 = dict()
 		for cargs in self.infocmds:
-			delta, name = cargs[0], cargs[1]
-			cmdline, cmdpath = ' '.join(cargs[2:]), self.getExec(cargs[2])
+			delta, name, args = cargs[0], cargs[1], cargs[2:]
+			for i in range(len(args)):
+				if args[i][0] == '{' and args[i][-1] == '}':
+					args[i] = self.cmdinfovar(args[i][1:-1])
+			cmdline, cmdpath = ' '.join(args[0:]), self.getExec(args[0])
 			if not cmdpath or (begin and not delta):
 				continue
 			self.dlog('[%s]' % cmdline)
 			try:
-				fp = Popen([cmdpath]+cargs[3:], stdout=PIPE, stderr=PIPE).stdout
+				fp = Popen([cmdpath]+args[1:], stdout=PIPE, stderr=PIPE).stdout
 				info = ascii(fp.read()).strip()
 				fp.close()
 			except:
···
 	errlist = {
 		'HWERROR' : r'.*\[ *Hardware Error *\].*',
 		'FWBUG'   : r'.*\[ *Firmware Bug *\].*',
+		'TASKFAIL': r'.*Freezing .*after *.*',
 		'BUG'     : r'(?i).*\bBUG\b.*',
 		'ERROR'   : r'(?i).*\bERROR\b.*',
 		'WARNING' : r'(?i).*\bWARNING\b.*',
···
 		'TIMEOUT' : r'(?i).*\bTIMEOUT\b.*',
 		'ABORT'   : r'(?i).*\bABORT\b.*',
 		'IRQ'     : r'.*\bgenirq: .*',
-		'TASKFAIL': r'.*Freezing .*after *.*',
 		'ACPI'    : r'.*\bACPI *(?P<b>[A-Za-z]*) *Error[: ].*',
 		'DISKFULL': r'.*\bNo space left on device.*',
 		'USBERR'  : r'.*usb .*device .*, error [0-9-]*',
···
 			pend = self.dmesg[phase]['end']
 			if start <= pend:
 				return phase
-		return 'resume_complete'
+		return 'resume_complete' if 'resume_complete' in self.dmesg else ''
 	def sourceDevice(self, phaselist, start, end, pid, type):
 		tgtdev = ''
 		for phase in phaselist:
···
 			else:
 				threadname = '%s-%d' % (proc, pid)
 			tgtphase = self.sourcePhase(start)
+			if not tgtphase:
+				return False
 			self.newAction(tgtphase, threadname, pid, '', start, end, '', ' kth', '')
 			return self.addDeviceFunctionCall(displayname, kprobename, proc, pid, start, end, cdata, rdata)
 		# this should not happen
···
 			hwr = self.hwend - timedelta(microseconds=rtime)
 			self.tLow.append('%.0f'%((hwr - hws).total_seconds() * 1000))
 	def getTimeValues(self):
-		sktime = (self.tSuspended - self.tKernSus) * 1000
-		rktime = (self.tKernRes - self.tResumed) * 1000
-		return (sktime, rktime)
+		s = (self.tSuspended - self.tKernSus) * 1000
+		r = (self.tKernRes - self.tResumed) * 1000
+		return (max(s, 0), max(r, 0))
 	def setPhase(self, phase, ktime, isbegin, order=-1):
 		if(isbegin):
 			# phase start over current phase
···
 		'suspend_machine': ['PM: suspend-to-idle',
 				    'PM: noirq suspend of devices complete after.*',
 				    'PM: noirq freeze of devices complete after.*'],
-		'resume_machine': ['PM: Timekeeping suspended for.*',
+		'resume_machine': ['[PM: ]*Timekeeping suspended for.*',
 				    'ACPI: Low-level resume complete.*',
 				    'ACPI: resume from mwait',
 				    'Suspended for [0-9\.]* seconds'],
···
 	# action table (expected events that occur and show up in dmesg)
 	at = {
 		'sync_filesystems': {
-			'smsg': 'PM: Syncing filesystems.*',
-			'emsg': 'PM: Preparing system for mem sleep.*' },
+			'smsg': '.*[Ff]+ilesystems.*',
+			'emsg': 'PM: Preparing system for[a-z]* sleep.*' },
 		'freeze_user_processes': {
-			'smsg': 'Freezing user space processes .*',
+			'smsg': 'Freezing user space processes.*',
 			'emsg': 'Freezing remaining freezable tasks.*' },
 		'freeze_tasks': {
 			'smsg': 'Freezing remaining freezable tasks.*',
-			'emsg': 'PM: Entering (?P<mode>[a-z,A-Z]*) sleep.*' },
+			'emsg': 'PM: Suspending system.*' },
 		'ACPI prepare': {
 			'smsg': 'ACPI: Preparing to enter system sleep state.*',
 			'emsg': 'PM: Saving platform NVS memory.*' },
···
 	for a in sorted(at):
 		if(re.match(at[a]['smsg'], msg)):
 			if(a not in actions):
-				actions[a] = []
-			actions[a].append({'begin': ktime, 'end': ktime})
+				actions[a] = [{'begin': ktime, 'end': ktime}]
 		if(re.match(at[a]['emsg'], msg)):
-			if(a in actions):
+			if(a in actions and actions[a][-1]['begin'] == actions[a][-1]['end']):
 				actions[a][-1]['end'] = ktime
 	# now look for CPU on/off events
 	if(re.match('Disabling non-boot CPUs .*', msg)):
···
 	elif(re.match('Enabling non-boot CPUs .*', msg)):
 		# start of first cpu resume
 		cpu_start = ktime
-	elif(re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg)):
+	elif(re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg) \
+		or re.match('psci: CPU(?P<cpu>[0-9]*) killed.*', msg)):
 		# end of a cpu suspend, start of the next
 		m = re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg)
+		if(not m):
+			m = re.match('psci: CPU(?P<cpu>[0-9]*) killed.*', msg)
 		cpu = 'CPU'+m.group('cpu')
 		if(cpu not in actions):
 			actions[cpu] = []
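The new `cmdinfovar()` in sleepgraph 5.11 resolves the `{ethdev}` placeholder in the `infocmds` table by scanning `ip -4 -o -br addr` output for the first interface whose name starts with `e` and whose state is `UP`. A standalone sketch of just that parsing step, fed a made-up sample instead of running `ip` (the helper name is illustrative):

```python
# Sketch of sleepgraph's ethernet-device lookup for the '{ethdev}'
# substitution: first up interface whose name begins with 'e' wins.

def find_ethdev(ip_brief_output):
    """Parse `ip -4 -o -br addr`-style output; mirrors cmdinfovar('ethdev')."""
    for line in ip_brief_output.split('\n'):
        if line and line[0] == 'e' and 'UP' in line:
            return line.split()[0]
    return 'nodevicefound'

# Fabricated sample output for illustration only.
sample = '''lo               UNKNOWN        127.0.0.1/8
enp0s31f6        UP             192.168.1.10/24
wlan0            UP             192.168.1.11/24'''

print(find_ethdev(sample))                       # enp0s31f6
print(find_ethdev('lo  UNKNOWN  127.0.0.1/8'))   # nodevicefound
```

The resolved name is then spliced back into the command table, so the capture runs `ethtool enp0s31f6` (or records `nodevicefound` if no wired interface is up).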