Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
"These rework the collection of cpufreq statistics to allow it to take
place if fast frequency switching is enabled in the governor, rework
the frequency invariance handling in the cpufreq core and drivers, add
new hardware support to a couple of cpufreq drivers, fix a number of
assorted issues and clean up the code all over.

Specifics:

- Rework cpufreq statistics collection to allow it to take place when
fast frequency switching is enabled in the governor (Viresh Kumar).

- Make the cpufreq core set the frequency scale on behalf of the
driver and update several cpufreq drivers accordingly (Ionela
Voinescu, Valentin Schneider).

- Add new hardware support to the STI and qcom cpufreq drivers and
improve them (Alain Volmat, Manivannan Sadhasivam).

- Fix multiple assorted issues in cpufreq drivers (Jon Hunter,
Krzysztof Kozlowski, Matthias Kaehlcke, Pali Rohár, Stephan
Gerhold, Viresh Kumar).

- Fix several assorted issues in the operating performance points
(OPP) framework (Stephan Gerhold, Viresh Kumar).

- Allow devfreq drivers to fetch devfreq instances by DT enumeration
instead of using explicit phandles and modify the devfreq core code
to support driver-specific devfreq DT bindings (Leonard Crestez,
Chanwoo Choi).

- Improve initial hardware resetting in the tegra30 devfreq driver
and clean up the tegra cpuidle driver (Dmitry Osipenko).

- Update the cpuidle core to collect state entry rejection statistics
and expose them via sysfs (Lina Iyer).

- Improve the ACPI _CST code handling diagnostics (Chen Yu).

- Update the PSCI cpuidle driver to allow the PM domain
initialization to occur in the OSI mode as well as in the PC mode
(Ulf Hansson).

- Rework the generic power domains (genpd) core code to allow domain
power off transition to be aborted in the absence of the "power
off" domain callback (Ulf Hansson).

- Fix two suspend-to-idle issues in the ACPI EC driver (Rafael
Wysocki).

- Fix the handling of timer_expires in the PM-runtime framework on
32-bit systems and the handling of device links in it (Grygorii
Strashko, Xiang Chen).

- Add IO requests batching support to the hibernate image saving and
reading code and drop a bogus get_gendisk() from there (Xiaoyi
Chen, Christoph Hellwig).

- Allow PCIe ports to be put into the D3cold power state if they are
power-manageable via ACPI (Lukas Wunner).

- Add missing header file include to a power capping driver (Pujin
Shi).

- Clean up the qcom-cpr AVS driver a bit (Liu Shixin).

- Kevin Hilman steps down as designated reviewer of adaptive voltage
scaling (AVS) drivers (Kevin Hilman)"

* tag 'pm-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (65 commits)
cpufreq: stats: Fix string format specifier mismatch
arm: disable frequency invariance for CONFIG_BL_SWITCHER
cpufreq,arm,arm64: restructure definitions of arch_set_freq_scale()
cpufreq: stats: Add memory barrier to store_reset()
cpufreq: schedutil: Simplify sugov_fast_switch()
ACPI: EC: PM: Drop ec_no_wakeup check from acpi_ec_dispatch_gpe()
ACPI: EC: PM: Flush EC work unconditionally after wakeup
PCI/ACPI: Whitelist hotplug ports for D3 if power managed by ACPI
PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
cpufreq: Move traces and update to policy->cur to cpufreq core
cpufreq: stats: Enable stats for fast-switch as well
cpufreq: stats: Mark few conditionals with unlikely()
cpufreq: stats: Remove locking
cpufreq: stats: Defer stats update to cpufreq_stats_record_transition()
PM: domains: Allow to abort power off when no ->power_off() callback
PM: domains: Rename power state enums for genpd
PM / devfreq: tegra30: Improve initial hardware resetting
PM / devfreq: event: Change prototype of devfreq_event_get_edev_by_phandle function
PM / devfreq: Change prototype of devfreq_get_devfreq_by_phandle function
PM / devfreq: Add devfreq_get_devfreq_by_node function
...

+1018 -2062
+9
Documentation/admin-guide/pm/cpuidle.rst
···
 	Total number of times the hardware has been asked by the given CPU to
 	enter this idle state.
 
+``rejected``
+	Total number of times a request to enter this idle state on the given
+	CPU was rejected.
+
 The :file:`desc` and :file:`name` files both contain strings. The difference
 between them is that the name is expected to be more concise, while the
 description may be longer and it may contain white space or special characters.
···
 much time has been spent by the hardware in different idle states supported by
 it is to use idle state residency counters in the hardware, if available.
 
+Generally, an interrupt received when trying to enter an idle state causes the
+idle state entry request to be rejected, in which case the ``CPUIdle`` driver
+may return an error code to indicate that this was the case. The :file:`usage`
+and :file:`rejected` files report the number of times the given idle state
+was entered successfully or rejected, respectively.
 
 .. _cpu-pm-qos:
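The new counters can be read straight out of sysfs. Below is a minimal sketch in Python, assuming the per-state layout described above; the default base path, the helper's name, and the fallback to zero on kernels that predate the ``rejected`` file are illustrative assumptions, not part of the kernel ABI:

```python
from pathlib import Path

def idle_state_counters(base="/sys/devices/system/cpu/cpu0/cpuidle"):
    """Collect (name, usage, rejected) for each idle state directory.

    `usage` counts successful entries into the state; `rejected` counts
    entry requests that were turned down. On kernels without the new
    `rejected` attribute, the counter is reported as 0 (an assumption
    made here for convenience).
    """
    results = []
    for state in sorted(Path(base).glob("state[0-9]*")):
        name = (state / "name").read_text().strip()
        usage = int((state / "usage").read_text())
        rejected_file = state / "rejected"
        rejected = int(rejected_file.read_text()) if rejected_file.exists() else 0
        results.append((name, usage, rejected))
    return results
```

Passing a different ``base`` makes the helper easy to exercise against a fake sysfs tree, which is also how one would unit-test tooling built on these counters.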
+1 -1
Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt
···
 - compatible
 	Usage:		required
 	Value type:	<string>
-	Definition:	must be "qcom,cpufreq-hw".
+	Definition:	must be "qcom,cpufreq-hw" or "qcom,cpufreq-epss".
 
 - clocks
 	Usage:		required
+29 -18
Documentation/devicetree/bindings/opp/opp.txt
···
 - opp-suspend: Marks the OPP to be used during device suspend. If multiple OPPs
   in the table have this, the OPP with highest opp-hz will be used.
 
-- opp-supported-hw: This enables us to select only a subset of OPPs from the
-  larger OPP table, based on what version of the hardware we are running on. We
-  still can't have multiple nodes with the same opp-hz value in OPP table.
+- opp-supported-hw: This property allows a platform to enable only a subset of
+  the OPPs from the larger set present in the OPP table, based on the current
+  version of the hardware (already known to the operating system).
 
-  It's a user defined array containing a hierarchy of hardware version numbers,
-  supported by the OPP. For example: a platform with hierarchy of three levels
-  of versions (A, B and C), this field should be like <X Y Z>, where X
-  corresponds to Version hierarchy A, Y corresponds to version hierarchy B and Z
-  corresponds to version hierarchy C.
+  Each block present in the array of blocks in this property, represents a
+  sub-group of hardware versions supported by the OPP. i.e. <sub-group A>,
+  <sub-group B>, etc. The OPP will be enabled if _any_ of these sub-groups match
+  the hardware's version.
 
-  Each level of hierarchy is represented by a 32 bit value, and so there can be
-  only 32 different supported version per hierarchy. i.e. 1 bit per version. A
-  value of 0xFFFFFFFF will enable the OPP for all versions for that hierarchy
-  level. And a value of 0x00000000 will disable the OPP completely, and so we
-  never want that to happen.
+  Each sub-group is a platform defined array representing the hierarchy of
+  hardware versions supported by the platform. For a platform with three
+  hierarchical levels of version (X.Y.Z), this field shall look like
 
-  If 32 values aren't sufficient for a version hierarchy, than that version
-  hierarchy can be contained in multiple 32 bit values. i.e. <X Y Z1 Z2> in the
-  above example, Z1 & Z2 refer to the version hierarchy Z.
+  opp-supported-hw = <X1 Y1 Z1>, <X2 Y2 Z2>, <X3 Y3 Z3>.
+
+  Each level (eg. X1) in version hierarchy is represented by a 32 bit value, one
+  bit per version and so there can be maximum 32 versions per level. Logical AND
+  (&) operation is performed for each level with the hardware's level version
+  and a non-zero output for _all_ the levels in a sub-group means the OPP is
+  supported by hardware. A value of 0xFFFFFFFF for each level in the sub-group
+  will enable the OPP for all versions for the hardware.
 
 - status: Marks the node enabled/disabled.
···
 	 */
 	opp-supported-hw = <0xF 0xFFFFFFFF 0xFFFFFFFF>
 	opp-hz = /bits/ 64 <600000000>;
-	opp-microvolt = <915000 900000 925000>;
 	...
 };
···
 	 */
 	opp-supported-hw = <0x20 0xff0000ff 0x0000f4f0>
 	opp-hz = /bits/ 64 <800000000>;
-	opp-microvolt = <915000 900000 925000>;
+	...
+};
+
+opp-900000000 {
+	/*
+	 * Supports:
+	 * - All cuts and substrate where process version is 0x2.
+	 * - All cuts and process where substrate version is 0x2.
+	 */
+	opp-supported-hw = <0xFFFFFFFF 0xFFFFFFFF 0x02>, <0xFFFFFFFF 0x01 0xFFFFFFFF>
+	opp-hz = /bits/ 64 <900000000>;
 	...
 };
 };
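The matching rule the reworked binding text describes (bitwise AND per level, an OPP is enabled if any one sub-group matches on all levels) can be sketched in a few lines of Python. The function name and the tuple encoding are illustrative only, not part of the binding:

```python
def opp_supported(hw_version, subgroups):
    """Evaluate the opp-supported-hw matching rule.

    hw_version: per-level version bitmasks of the running hardware,
                e.g. (X, Y, Z) with one bit set per version.
    subgroups:  list of per-level bitmask tuples, one tuple per <...>
                block in the opp-supported-hw property.

    The OPP is supported if, for ANY sub-group, the bitwise AND of
    EVERY level with the hardware's corresponding level is non-zero.
    """
    return any(
        all(level & hw for level, hw in zip(group, hw_version))
        for group in subgroups
    )

# Sub-groups mirroring the binding's opp-900000000 example:
groups = [(0xFFFFFFFF, 0xFFFFFFFF, 0x02), (0xFFFFFFFF, 0x01, 0xFFFFFFFF)]
print(opp_supported((0x01, 0x02, 0x02), groups))  # True: first sub-group matches
print(opp_supported((0x01, 0x02, 0x01), groups))  # False: each sub-group fails one level
```

Note that a level value of 0x00000000 can never produce a non-zero AND, which is why the binding warns against disabling an OPP that way.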
-1
MAINTAINERS
···
 F:	lib/kobj*
 
 DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS)
-M:	Kevin Hilman <khilman@kernel.org>
 M:	Nishanth Menon <nm@ti.com>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
-36
arch/arm/boot/dts/tegra20-cpu-opp-microvolt.dtsi
···
 	opp-microvolt = <800000 800000 1125000>;
 };
 
-opp@456000000,800,2,2 {
-	opp-microvolt = <800000 800000 1125000>;
-};
-
-opp@456000000,800,3,2 {
-	opp-microvolt = <800000 800000 1125000>;
-};
-
 opp@456000000,825 {
 	opp-microvolt = <825000 825000 1125000>;
 };
···
 };
 
 opp@608000000,800 {
-	opp-microvolt = <800000 800000 1125000>;
-};
-
-opp@608000000,800,3,2 {
 	opp-microvolt = <800000 800000 1125000>;
 };
···
 };
 
 opp@760000000,875 {
-	opp-microvolt = <875000 875000 1125000>;
-};
-
-opp@760000000,875,1,1 {
-	opp-microvolt = <875000 875000 1125000>;
-};
-
-opp@760000000,875,0,2 {
-	opp-microvolt = <875000 875000 1125000>;
-};
-
-opp@760000000,875,1,2 {
 	opp-microvolt = <875000 875000 1125000>;
 };
···
 	opp-microvolt = <950000 950000 1125000>;
 };
 
-opp@912000000,950,0,2 {
-	opp-microvolt = <950000 950000 1125000>;
-};
-
-opp@912000000,950,2,2 {
-	opp-microvolt = <950000 950000 1125000>;
-};
-
 opp@912000000,1000 {
 	opp-microvolt = <1000000 1000000 1125000>;
 };
···
 };
 
 opp@1000000000,1000 {
-	opp-microvolt = <1000000 1000000 1125000>;
-};
-
-opp@1000000000,1000,0,2 {
 	opp-microvolt = <1000000 1000000 1125000>;
 };
 
+8 -59
arch/arm/boot/dts/tegra20-cpu-opp.dtsi
···
 
 opp@456000000,800 {
 	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x03 0x0006>;
-	opp-hz = /bits/ 64 <456000000>;
-};
-
-opp@456000000,800,2,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x04 0x0004>;
-	opp-hz = /bits/ 64 <456000000>;
-};
-
-opp@456000000,800,3,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x08 0x0004>;
+	opp-supported-hw = <0x03 0x0006>, <0x04 0x0004>,
+			   <0x08 0x0004>;
 	opp-hz = /bits/ 64 <456000000>;
 };
···
 
 opp@608000000,800 {
 	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x04 0x0006>;
-	opp-hz = /bits/ 64 <608000000>;
-};
-
-opp@608000000,800,3,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x08 0x0004>;
+	opp-supported-hw = <0x04 0x0006>, <0x08 0x0004>;
 	opp-hz = /bits/ 64 <608000000>;
 };
···
 
 opp@760000000,875 {
 	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x04 0x0001>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,875,1,1 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x02 0x0002>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,875,0,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x01 0x0004>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,875,1,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x02 0x0004>;
+	opp-supported-hw = <0x04 0x0001>, <0x02 0x0002>,
+			   <0x01 0x0004>, <0x02 0x0004>;
 	opp-hz = /bits/ 64 <760000000>;
 };
···
 
 opp@912000000,950 {
 	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x02 0x0006>;
-	opp-hz = /bits/ 64 <912000000>;
-};
-
-opp@912000000,950,0,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x01 0x0004>;
-	opp-hz = /bits/ 64 <912000000>;
-};
-
-opp@912000000,950,2,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x04 0x0004>;
+	opp-supported-hw = <0x02 0x0006>, <0x01 0x0004>,
+			   <0x04 0x0004>;
 	opp-hz = /bits/ 64 <912000000>;
 };
···
 
 opp@1000000000,1000 {
 	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x02 0x0006>;
-	opp-hz = /bits/ 64 <1000000000>;
-};
-
-opp@1000000000,1000,0,2 {
-	clock-latency-ns = <400000>;
-	opp-supported-hw = <0x01 0x0004>;
+	opp-supported-hw = <0x02 0x0006>, <0x01 0x0004>;
 	opp-hz = /bits/ 64 <1000000000>;
 };
 
-512
arch/arm/boot/dts/tegra30-cpu-opp-microvolt.dtsi
···
 	opp-microvolt = <850000 850000 1250000>;
 };
 
-opp@475000000,850,0,1 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@475000000,850,0,4 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@475000000,850,0,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@475000000,850,0,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
 opp@608000000,850 {
 	opp-microvolt = <850000 850000 1250000>;
 };
···
 	opp-microvolt = <850000 850000 1250000>;
 };
 
-opp@640000000,850,1,1 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,2,1 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,3,1 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,1,4 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,2,4 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,3,4 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,1,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,2,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,3,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,4,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,1,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,2,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,3,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@640000000,850,4,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
 opp@640000000,900 {
 	opp-microvolt = <900000 900000 1250000>;
 };
···
 	opp-microvolt = <850000 850000 1250000>;
 };
 
-opp@760000000,850,3,1 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,3,2 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,3,3 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,3,4 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,3,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,4,7 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,3,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,4,8 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
-opp@760000000,850,0,10 {
-	opp-microvolt = <850000 850000 1250000>;
-};
-
 opp@760000000,900 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,1,1 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,2,1 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,1,2 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,2,2 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,1,3 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,2,3 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,1,4 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,2,4 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,1,7 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,2,7 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,1,8 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@760000000,900,2,8 {
 	opp-microvolt = <900000 900000 1250000>;
 };
···
 	opp-microvolt = <900000 900000 1250000>;
 };
 
-opp@860000000,900,2,1 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,3,1 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,2,2 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,3,2 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,2,3 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,3,3 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,2,4 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,3,4 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,2,7 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,3,7 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,4,7 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,2,8 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,3,8 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
-opp@860000000,900,4,8 {
-	opp-microvolt = <900000 900000 1250000>;
-};
-
 opp@860000000,975 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@860000000,975,1,1 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@860000000,975,1,2 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@860000000,975,1,3 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@860000000,975,1,4 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@860000000,975,1,7 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@860000000,975,1,8 {
 	opp-microvolt = <975000 975000 1250000>;
 };
···
 	opp-microvolt = <975000 975000 1250000>;
 };
 
-opp@1000000000,975,2,1 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,3,1 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,2,2 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,3,2 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,2,3 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,3,3 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,2,4 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,3,4 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,2,7 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,3,7 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,4,7 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,2,8 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,3,8 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1000000000,975,4,8 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
 opp@1000000000,1000 {
 	opp-microvolt = <1000000 1000000 1250000>;
 };
···
 	opp-microvolt = <975000 975000 1250000>;
 };
 
-opp@1100000000,975,3,1 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,3,2 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,3,3 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,3,4 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,3,7 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,4,7 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,3,8 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
-opp@1100000000,975,4,8 {
-	opp-microvolt = <975000 975000 1250000>;
-};
-
 opp@1100000000,1000 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1100000000,1000,2,1 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1100000000,1000,2,2 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1100000000,1000,2,3 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1100000000,1000,2,4 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1100000000,1000,2,7 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1100000000,1000,2,8 {
 	opp-microvolt = <1000000 1000000 1250000>;
 };
···
 	opp-microvolt = <1000000 1000000 1250000>;
 };
 
-opp@1200000000,1000,3,1 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,3,2 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,3,3 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,3,4 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,3,7 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,4,7 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,3,8 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1200000000,1000,4,8 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
 opp@1200000000,1025 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1200000000,1025,2,1 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1200000000,1025,2,2 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1200000000,1025,2,3 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1200000000,1025,2,4 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1200000000,1025,2,7 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1200000000,1025,2,8 {
 	opp-microvolt = <1025000 1025000 1250000>;
 };
···
 	opp-microvolt = <1000000 1000000 1250000>;
 };
 
-opp@1300000000,1000,4,7 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
-opp@1300000000,1000,4,8 {
-	opp-microvolt = <1000000 1000000 1250000>;
-};
-
 opp@1300000000,1025 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1300000000,1025,3,1 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1300000000,1025,3,7 {
-	opp-microvolt = <1025000 1025000 1250000>;
-};
-
-opp@1300000000,1025,3,8 {
 	opp-microvolt = <1025000 1025000 1250000>;
 };
···
 	opp-microvolt = <1050000 1050000 1250000>;
 };
 
-opp@1300000000,1050,2,1 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,2 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,3 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,4 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,5 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,6 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,2,7 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,2,8 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,12 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
-opp@1300000000,1050,3,13 {
-	opp-microvolt = <1050000 1050000 1250000>;
-};
-
 opp@1300000000,1075 {
-	opp-microvolt = <1075000 1075000 1250000>;
-};
-
-opp@1300000000,1075,2,2 {
-	opp-microvolt = <1075000 1075000 1250000>;
-};
-
-opp@1300000000,1075,2,3 {
-	opp-microvolt = <1075000 1075000 1250000>;
-};
-
-opp@1300000000,1075,2,4 {
 	opp-microvolt = <1075000 1075000 1250000>;
 };
···
 	opp-microvolt = <1150000 1150000 1250000>;
 };
 
-opp@1400000000,1150,2,4 {
-	opp-microvolt = <1150000 1150000 1250000>;
-};
-
 opp@1400000000,1175 {
 	opp-microvolt = <1175000 1175000 1250000>;
 };
···
 	opp-microvolt = <1125000 1125000 1250000>;
 };
 
-opp@1500000000,1125,4,5 {
-	opp-microvolt = <1125000 1125000 1250000>;
-};
-
-opp@1500000000,1125,4,6 {
-	opp-microvolt = <1125000 1125000 1250000>;
-};
-
-opp@1500000000,1125,4,12 {
-	opp-microvolt = <1125000 1125000 1250000>;
-};
-
-opp@1500000000,1125,4,13 {
-	opp-microvolt = <1125000 1125000 1250000>;
-};
-
 opp@1500000000,1150 {
-	opp-microvolt = <1150000 1150000 1250000>;
-};
-
-opp@1500000000,1150,3,5 {
-	opp-microvolt = <1150000 1150000 1250000>;
-};
-
-opp@1500000000,1150,3,6 {
-	opp-microvolt = <1150000 1150000 1250000>;
-};
-
-opp@1500000000,1150,3,12 {
-	opp-microvolt = <1150000 1150000 1250000>;
-};
-
-opp@1500000000,1150,3,13 {
 	opp-microvolt = <1150000 1150000 1250000>;
 };
+80 -786
arch/arm/boot/dts/tegra30-cpu-opp.dtsi
···
 
 opp@475000000,850 {
 	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x0F 0x0001>;
-	opp-hz = /bits/ 64 <475000000>;
-};
-
-opp@475000000,850,0,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x01 0x0002>;
-	opp-hz = /bits/ 64 <475000000>;
-};
-
-opp@475000000,850,0,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x01 0x0010>;
-	opp-hz = /bits/ 64 <475000000>;
-};
-
-opp@475000000,850,0,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x01 0x0080>;
-	opp-hz = /bits/ 64 <475000000>;
-};
-
-opp@475000000,850,0,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x01 0x0100>;
+	opp-supported-hw = <0x0F 0x0001>, <0x01 0x0002>,
+			   <0x01 0x0010>, <0x01 0x0080>,
+			   <0x01 0x0100>;
 	opp-hz = /bits/ 64 <475000000>;
 };
···
 
 opp@640000000,850 {
 	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x0F 0x0001>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,1,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0002>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,2,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0002>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,3,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0002>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,1,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0010>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,2,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0010>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,3,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0010>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,1,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0080>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,2,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0080>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,3,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0080>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,4,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x10 0x0080>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,1,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0100>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,2,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0100>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,3,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0100>;
-	opp-hz = /bits/ 64 <640000000>;
-};
-
-opp@640000000,850,4,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x10 0x0100>;
+	opp-supported-hw = <0x0F 0x0001>, <0x02 0x0002>,
+			   <0x04 0x0002>, <0x08 0x0002>,
+			   <0x02 0x0010>, <0x04 0x0010>,
+			   <0x08 0x0010>, <0x02 0x0080>,
+			   <0x04 0x0080>, <0x08 0x0080>,
+			   <0x10 0x0080>, <0x02 0x0100>,
+			   <0x04 0x0100>, <0x08 0x0100>,
+			   <0x10 0x0100>;
 	opp-hz = /bits/ 64 <640000000>;
 };
···
 
 opp@760000000,850 {
 	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x1E 0x3461>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,3,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0002>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,3,2 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0004>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,3,3 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0008>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,3,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0010>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,3,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0080>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,4,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x10 0x0080>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,3,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0100>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,4,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x10 0x0100>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,850,0,10 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x01 0x0400>;
+	opp-supported-hw = <0x1E 0x3461>, <0x08 0x0002>,
+			   <0x08 0x0004>, <0x08 0x0008>,
+			   <0x08 0x0010>, <0x08 0x0080>,
+			   <0x10 0x0080>, <0x08 0x0100>,
+			   <0x10 0x0100>, <0x01 0x0400>;
 	opp-hz = /bits/ 64 <760000000>;
 };
 
 opp@760000000,900 {
 	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x01 0x0001>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,1,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0002>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,2,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0002>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,1,2 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0004>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,2,2 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0004>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,1,3 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0008>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,2,3 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0008>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,1,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0010>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,2,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0010>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,1,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0080>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,2,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0080>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,1,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0100>;
-	opp-hz = /bits/ 64 <760000000>;
-};
-
-opp@760000000,900,2,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0100>;
+	opp-supported-hw = <0x01 0x0001>, <0x02 0x0002>,
+			   <0x04 0x0002>, <0x02 0x0004>,
+			   <0x04 0x0004>, <0x02 0x0008>,
+			   <0x04 0x0008>, <0x02 0x0010>,
+			   <0x04 0x0010>, <0x02 0x0080>,
+			   <0x04 0x0080>, <0x02 0x0100>,
+			   <0x04 0x0100>;
 	opp-hz = /bits/ 64 <760000000>;
 };
···
 
 opp@860000000,900 {
 	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x02 0x0001>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,2,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0002>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,3,1 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0002>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,2,2 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0004>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,3,2 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0004>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,2,3 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0008>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,3,3 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0008>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,2,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0010>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,3,4 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0010>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,2,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0080>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,3,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0080>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,4,7 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x10 0x0080>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,2,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x04 0x0100>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,3,8 {
-	clock-latency-ns = <100000>;
-	opp-supported-hw = <0x08 0x0100>;
-	opp-hz = /bits/ 64 <860000000>;
-};
-
-opp@860000000,900,4,8 {
-	clock-latency-ns =
<100000>; 293 - opp-supported-hw = <0x10 0x0100>; 424 + opp-supported-hw = <0x02 0x0001>, <0x04 0x0002>, 425 + <0x08 0x0002>, <0x04 0x0004>, 426 + <0x08 0x0004>, <0x04 0x0008>, 427 + <0x08 0x0008>, <0x04 0x0010>, 428 + <0x08 0x0010>, <0x04 0x0080>, 429 + <0x08 0x0080>, <0x10 0x0080>, 430 + <0x04 0x0100>, <0x08 0x0100>, 431 + <0x10 0x0100>; 294 432 opp-hz = /bits/ 64 <860000000>; 295 433 }; 296 434 297 435 opp@860000000,975 { 298 436 clock-latency-ns = <100000>; 299 - opp-supported-hw = <0x01 0x0001>; 300 - opp-hz = /bits/ 64 <860000000>; 301 - }; 302 - 303 - opp@860000000,975,1,1 { 304 - clock-latency-ns = <100000>; 305 - opp-supported-hw = <0x02 0x0002>; 306 - opp-hz = /bits/ 64 <860000000>; 307 - }; 308 - 309 - opp@860000000,975,1,2 { 310 - clock-latency-ns = <100000>; 311 - opp-supported-hw = <0x02 0x0004>; 312 - opp-hz = /bits/ 64 <860000000>; 313 - }; 314 - 315 - opp@860000000,975,1,3 { 316 - clock-latency-ns = <100000>; 317 - opp-supported-hw = <0x02 0x0008>; 318 - opp-hz = /bits/ 64 <860000000>; 319 - }; 320 - 321 - opp@860000000,975,1,4 { 322 - clock-latency-ns = <100000>; 323 - opp-supported-hw = <0x02 0x0010>; 324 - opp-hz = /bits/ 64 <860000000>; 325 - }; 326 - 327 - opp@860000000,975,1,7 { 328 - clock-latency-ns = <100000>; 329 - opp-supported-hw = <0x02 0x0080>; 330 - opp-hz = /bits/ 64 <860000000>; 331 - }; 332 - 333 - opp@860000000,975,1,8 { 334 - clock-latency-ns = <100000>; 335 - opp-supported-hw = <0x02 0x0100>; 437 + opp-supported-hw = <0x01 0x0001>, <0x02 0x0002>, 438 + <0x02 0x0004>, <0x02 0x0008>, 439 + <0x02 0x0010>, <0x02 0x0080>, 440 + <0x02 0x0100>; 336 441 opp-hz = /bits/ 64 <860000000>; 337 442 }; 338 443 ··· 246 571 247 572 opp@1000000000,975 { 248 573 clock-latency-ns = <100000>; 249 - opp-supported-hw = <0x03 0x0001>; 250 - opp-hz = /bits/ 64 <1000000000>; 251 - }; 252 - 253 - opp@1000000000,975,2,1 { 254 - clock-latency-ns = <100000>; 255 - opp-supported-hw = <0x04 0x0002>; 256 - opp-hz = /bits/ 64 <1000000000>; 257 - }; 258 - 259 - 
opp@1000000000,975,3,1 { 260 - clock-latency-ns = <100000>; 261 - opp-supported-hw = <0x08 0x0002>; 262 - opp-hz = /bits/ 64 <1000000000>; 263 - }; 264 - 265 - opp@1000000000,975,2,2 { 266 - clock-latency-ns = <100000>; 267 - opp-supported-hw = <0x04 0x0004>; 268 - opp-hz = /bits/ 64 <1000000000>; 269 - }; 270 - 271 - opp@1000000000,975,3,2 { 272 - clock-latency-ns = <100000>; 273 - opp-supported-hw = <0x08 0x0004>; 274 - opp-hz = /bits/ 64 <1000000000>; 275 - }; 276 - 277 - opp@1000000000,975,2,3 { 278 - clock-latency-ns = <100000>; 279 - opp-supported-hw = <0x04 0x0008>; 280 - opp-hz = /bits/ 64 <1000000000>; 281 - }; 282 - 283 - opp@1000000000,975,3,3 { 284 - clock-latency-ns = <100000>; 285 - opp-supported-hw = <0x08 0x0008>; 286 - opp-hz = /bits/ 64 <1000000000>; 287 - }; 288 - 289 - opp@1000000000,975,2,4 { 290 - clock-latency-ns = <100000>; 291 - opp-supported-hw = <0x04 0x0010>; 292 - opp-hz = /bits/ 64 <1000000000>; 293 - }; 294 - 295 - opp@1000000000,975,3,4 { 296 - clock-latency-ns = <100000>; 297 - opp-supported-hw = <0x08 0x0010>; 298 - opp-hz = /bits/ 64 <1000000000>; 299 - }; 300 - 301 - opp@1000000000,975,2,7 { 302 - clock-latency-ns = <100000>; 303 - opp-supported-hw = <0x04 0x0080>; 304 - opp-hz = /bits/ 64 <1000000000>; 305 - }; 306 - 307 - opp@1000000000,975,3,7 { 308 - clock-latency-ns = <100000>; 309 - opp-supported-hw = <0x08 0x0080>; 310 - opp-hz = /bits/ 64 <1000000000>; 311 - }; 312 - 313 - opp@1000000000,975,4,7 { 314 - clock-latency-ns = <100000>; 315 - opp-supported-hw = <0x10 0x0080>; 316 - opp-hz = /bits/ 64 <1000000000>; 317 - }; 318 - 319 - opp@1000000000,975,2,8 { 320 - clock-latency-ns = <100000>; 321 - opp-supported-hw = <0x04 0x0100>; 322 - opp-hz = /bits/ 64 <1000000000>; 323 - }; 324 - 325 - opp@1000000000,975,3,8 { 326 - clock-latency-ns = <100000>; 327 - opp-supported-hw = <0x08 0x0100>; 328 - opp-hz = /bits/ 64 <1000000000>; 329 - }; 330 - 331 - opp@1000000000,975,4,8 { 332 - clock-latency-ns = <100000>; 333 - 
opp-supported-hw = <0x10 0x0100>; 574 + opp-supported-hw = <0x03 0x0001>, <0x04 0x0002>, 575 + <0x08 0x0002>, <0x04 0x0004>, 576 + <0x08 0x0004>, <0x04 0x0008>, 577 + <0x08 0x0008>, <0x04 0x0010>, 578 + <0x08 0x0010>, <0x04 0x0080>, 579 + <0x08 0x0080>, <0x10 0x0080>, 580 + <0x04 0x0100>, <0x08 0x0100>, 581 + <0x10 0x0100>; 334 582 opp-hz = /bits/ 64 <1000000000>; 335 583 }; 336 584 ··· 277 679 278 680 opp@1100000000,975 { 279 681 clock-latency-ns = <100000>; 280 - opp-supported-hw = <0x06 0x0001>; 281 - opp-hz = /bits/ 64 <1100000000>; 282 - }; 283 - 284 - opp@1100000000,975,3,1 { 285 - clock-latency-ns = <100000>; 286 - opp-supported-hw = <0x08 0x0002>; 287 - opp-hz = /bits/ 64 <1100000000>; 288 - }; 289 - 290 - opp@1100000000,975,3,2 { 291 - clock-latency-ns = <100000>; 292 - opp-supported-hw = <0x08 0x0004>; 293 - opp-hz = /bits/ 64 <1100000000>; 294 - }; 295 - 296 - opp@1100000000,975,3,3 { 297 - clock-latency-ns = <100000>; 298 - opp-supported-hw = <0x08 0x0008>; 299 - opp-hz = /bits/ 64 <1100000000>; 300 - }; 301 - 302 - opp@1100000000,975,3,4 { 303 - clock-latency-ns = <100000>; 304 - opp-supported-hw = <0x08 0x0010>; 305 - opp-hz = /bits/ 64 <1100000000>; 306 - }; 307 - 308 - opp@1100000000,975,3,7 { 309 - clock-latency-ns = <100000>; 310 - opp-supported-hw = <0x08 0x0080>; 311 - opp-hz = /bits/ 64 <1100000000>; 312 - }; 313 - 314 - opp@1100000000,975,4,7 { 315 - clock-latency-ns = <100000>; 316 - opp-supported-hw = <0x10 0x0080>; 317 - opp-hz = /bits/ 64 <1100000000>; 318 - }; 319 - 320 - opp@1100000000,975,3,8 { 321 - clock-latency-ns = <100000>; 322 - opp-supported-hw = <0x08 0x0100>; 323 - opp-hz = /bits/ 64 <1100000000>; 324 - }; 325 - 326 - opp@1100000000,975,4,8 { 327 - clock-latency-ns = <100000>; 328 - opp-supported-hw = <0x10 0x0100>; 682 + opp-supported-hw = <0x06 0x0001>, <0x08 0x0002>, 683 + <0x08 0x0004>, <0x08 0x0008>, 684 + <0x08 0x0010>, <0x08 0x0080>, 685 + <0x10 0x0080>, <0x08 0x0100>, 686 + <0x10 0x0100>; 329 687 opp-hz = /bits/ 64 
<1100000000>; 330 688 }; 331 689 332 690 opp@1100000000,1000 { 333 691 clock-latency-ns = <100000>; 334 - opp-supported-hw = <0x01 0x0001>; 335 - opp-hz = /bits/ 64 <1100000000>; 336 - }; 337 - 338 - opp@1100000000,1000,2,1 { 339 - clock-latency-ns = <100000>; 340 - opp-supported-hw = <0x04 0x0002>; 341 - opp-hz = /bits/ 64 <1100000000>; 342 - }; 343 - 344 - opp@1100000000,1000,2,2 { 345 - clock-latency-ns = <100000>; 346 - opp-supported-hw = <0x04 0x0004>; 347 - opp-hz = /bits/ 64 <1100000000>; 348 - }; 349 - 350 - opp@1100000000,1000,2,3 { 351 - clock-latency-ns = <100000>; 352 - opp-supported-hw = <0x04 0x0008>; 353 - opp-hz = /bits/ 64 <1100000000>; 354 - }; 355 - 356 - opp@1100000000,1000,2,4 { 357 - clock-latency-ns = <100000>; 358 - opp-supported-hw = <0x04 0x0010>; 359 - opp-hz = /bits/ 64 <1100000000>; 360 - }; 361 - 362 - opp@1100000000,1000,2,7 { 363 - clock-latency-ns = <100000>; 364 - opp-supported-hw = <0x04 0x0080>; 365 - opp-hz = /bits/ 64 <1100000000>; 366 - }; 367 - 368 - opp@1100000000,1000,2,8 { 369 - clock-latency-ns = <100000>; 370 - opp-supported-hw = <0x04 0x0100>; 692 + opp-supported-hw = <0x01 0x0001>, <0x04 0x0002>, 693 + <0x04 0x0004>, <0x04 0x0008>, 694 + <0x04 0x0010>, <0x04 0x0080>, 695 + <0x04 0x0100>; 371 696 opp-hz = /bits/ 64 <1100000000>; 372 697 }; 373 698 ··· 320 799 321 800 opp@1200000000,1000 { 322 801 clock-latency-ns = <100000>; 323 - opp-supported-hw = <0x04 0x0001>; 324 - opp-hz = /bits/ 64 <1200000000>; 325 - }; 326 - 327 - opp@1200000000,1000,3,1 { 328 - clock-latency-ns = <100000>; 329 - opp-supported-hw = <0x08 0x0002>; 330 - opp-hz = /bits/ 64 <1200000000>; 331 - }; 332 - 333 - opp@1200000000,1000,3,2 { 334 - clock-latency-ns = <100000>; 335 - opp-supported-hw = <0x08 0x0004>; 336 - opp-hz = /bits/ 64 <1200000000>; 337 - }; 338 - 339 - opp@1200000000,1000,3,3 { 340 - clock-latency-ns = <100000>; 341 - opp-supported-hw = <0x08 0x0008>; 342 - opp-hz = /bits/ 64 <1200000000>; 343 - }; 344 - 345 - opp@1200000000,1000,3,4 
{ 346 - clock-latency-ns = <100000>; 347 - opp-supported-hw = <0x08 0x0010>; 348 - opp-hz = /bits/ 64 <1200000000>; 349 - }; 350 - 351 - opp@1200000000,1000,3,7 { 352 - clock-latency-ns = <100000>; 353 - opp-supported-hw = <0x08 0x0080>; 354 - opp-hz = /bits/ 64 <1200000000>; 355 - }; 356 - 357 - opp@1200000000,1000,4,7 { 358 - clock-latency-ns = <100000>; 359 - opp-supported-hw = <0x10 0x0080>; 360 - opp-hz = /bits/ 64 <1200000000>; 361 - }; 362 - 363 - opp@1200000000,1000,3,8 { 364 - clock-latency-ns = <100000>; 365 - opp-supported-hw = <0x08 0x0100>; 366 - opp-hz = /bits/ 64 <1200000000>; 367 - }; 368 - 369 - opp@1200000000,1000,4,8 { 370 - clock-latency-ns = <100000>; 371 - opp-supported-hw = <0x10 0x0100>; 802 + opp-supported-hw = <0x04 0x0001>, <0x08 0x0002>, 803 + <0x08 0x0004>, <0x08 0x0008>, 804 + <0x08 0x0010>, <0x08 0x0080>, 805 + <0x10 0x0080>, <0x08 0x0100>, 806 + <0x10 0x0100>; 372 807 opp-hz = /bits/ 64 <1200000000>; 373 808 }; 374 809 375 810 opp@1200000000,1025 { 376 811 clock-latency-ns = <100000>; 377 - opp-supported-hw = <0x02 0x0001>; 378 - opp-hz = /bits/ 64 <1200000000>; 379 - }; 380 - 381 - opp@1200000000,1025,2,1 { 382 - clock-latency-ns = <100000>; 383 - opp-supported-hw = <0x04 0x0002>; 384 - opp-hz = /bits/ 64 <1200000000>; 385 - }; 386 - 387 - opp@1200000000,1025,2,2 { 388 - clock-latency-ns = <100000>; 389 - opp-supported-hw = <0x04 0x0004>; 390 - opp-hz = /bits/ 64 <1200000000>; 391 - }; 392 - 393 - opp@1200000000,1025,2,3 { 394 - clock-latency-ns = <100000>; 395 - opp-supported-hw = <0x04 0x0008>; 396 - opp-hz = /bits/ 64 <1200000000>; 397 - }; 398 - 399 - opp@1200000000,1025,2,4 { 400 - clock-latency-ns = <100000>; 401 - opp-supported-hw = <0x04 0x0010>; 402 - opp-hz = /bits/ 64 <1200000000>; 403 - }; 404 - 405 - opp@1200000000,1025,2,7 { 406 - clock-latency-ns = <100000>; 407 - opp-supported-hw = <0x04 0x0080>; 408 - opp-hz = /bits/ 64 <1200000000>; 409 - }; 410 - 411 - opp@1200000000,1025,2,8 { 412 - clock-latency-ns = <100000>; 
413 - opp-supported-hw = <0x04 0x0100>; 812 + opp-supported-hw = <0x02 0x0001>, <0x04 0x0002>, 813 + <0x04 0x0004>, <0x04 0x0008>, 814 + <0x04 0x0010>, <0x04 0x0080>, 815 + <0x04 0x0100>; 414 816 opp-hz = /bits/ 64 <1200000000>; 415 817 }; 416 818 ··· 357 913 358 914 opp@1300000000,1000 { 359 915 clock-latency-ns = <100000>; 360 - opp-supported-hw = <0x08 0x0001>; 361 - opp-hz = /bits/ 64 <1300000000>; 362 - }; 363 - 364 - opp@1300000000,1000,4,7 { 365 - clock-latency-ns = <100000>; 366 - opp-supported-hw = <0x10 0x0080>; 367 - opp-hz = /bits/ 64 <1300000000>; 368 - }; 369 - 370 - opp@1300000000,1000,4,8 { 371 - clock-latency-ns = <100000>; 372 - opp-supported-hw = <0x10 0x0100>; 916 + opp-supported-hw = <0x08 0x0001>, <0x10 0x0080>, 917 + <0x10 0x0100>; 373 918 opp-hz = /bits/ 64 <1300000000>; 374 919 }; 375 920 376 921 opp@1300000000,1025 { 377 922 clock-latency-ns = <100000>; 378 - opp-supported-hw = <0x04 0x0001>; 379 - opp-hz = /bits/ 64 <1300000000>; 380 - }; 381 - 382 - opp@1300000000,1025,3,1 { 383 - clock-latency-ns = <100000>; 384 - opp-supported-hw = <0x08 0x0002>; 385 - opp-hz = /bits/ 64 <1300000000>; 386 - }; 387 - 388 - opp@1300000000,1025,3,7 { 389 - clock-latency-ns = <100000>; 390 - opp-supported-hw = <0x08 0x0080>; 391 - opp-hz = /bits/ 64 <1300000000>; 392 - }; 393 - 394 - opp@1300000000,1025,3,8 { 395 - clock-latency-ns = <100000>; 396 - opp-supported-hw = <0x08 0x0100>; 923 + opp-supported-hw = <0x04 0x0001>, <0x08 0x0002>, 924 + <0x08 0x0080>, <0x08 0x0100>; 397 925 opp-hz = /bits/ 64 <1300000000>; 398 926 }; 399 927 400 928 opp@1300000000,1050 { 401 929 clock-latency-ns = <100000>; 402 - opp-supported-hw = <0x12 0x3061>; 403 - opp-hz = /bits/ 64 <1300000000>; 404 - }; 405 - 406 - opp@1300000000,1050,2,1 { 407 - clock-latency-ns = <100000>; 408 - opp-supported-hw = <0x04 0x0002>; 409 - opp-hz = /bits/ 64 <1300000000>; 410 - }; 411 - 412 - opp@1300000000,1050,3,2 { 413 - clock-latency-ns = <100000>; 414 - opp-supported-hw = <0x08 0x0004>; 415 
- opp-hz = /bits/ 64 <1300000000>; 416 - }; 417 - 418 - opp@1300000000,1050,3,3 { 419 - clock-latency-ns = <100000>; 420 - opp-supported-hw = <0x08 0x0008>; 421 - opp-hz = /bits/ 64 <1300000000>; 422 - }; 423 - 424 - opp@1300000000,1050,3,4 { 425 - clock-latency-ns = <100000>; 426 - opp-supported-hw = <0x08 0x0010>; 427 - opp-hz = /bits/ 64 <1300000000>; 428 - }; 429 - 430 - opp@1300000000,1050,3,5 { 431 - clock-latency-ns = <100000>; 432 - opp-supported-hw = <0x08 0x0020>; 433 - opp-hz = /bits/ 64 <1300000000>; 434 - }; 435 - 436 - opp@1300000000,1050,3,6 { 437 - clock-latency-ns = <100000>; 438 - opp-supported-hw = <0x08 0x0040>; 439 - opp-hz = /bits/ 64 <1300000000>; 440 - }; 441 - 442 - opp@1300000000,1050,2,7 { 443 - clock-latency-ns = <100000>; 444 - opp-supported-hw = <0x04 0x0080>; 445 - opp-hz = /bits/ 64 <1300000000>; 446 - }; 447 - 448 - opp@1300000000,1050,2,8 { 449 - clock-latency-ns = <100000>; 450 - opp-supported-hw = <0x04 0x0100>; 451 - opp-hz = /bits/ 64 <1300000000>; 452 - }; 453 - 454 - opp@1300000000,1050,3,12 { 455 - clock-latency-ns = <100000>; 456 - opp-supported-hw = <0x08 0x1000>; 457 - opp-hz = /bits/ 64 <1300000000>; 458 - }; 459 - 460 - opp@1300000000,1050,3,13 { 461 - clock-latency-ns = <100000>; 462 - opp-supported-hw = <0x08 0x2000>; 930 + opp-supported-hw = <0x12 0x3061>, <0x04 0x0002>, 931 + <0x08 0x0004>, <0x08 0x0008>, 932 + <0x08 0x0010>, <0x08 0x0020>, 933 + <0x08 0x0040>, <0x04 0x0080>, 934 + <0x04 0x0100>, <0x08 0x1000>, 935 + <0x08 0x2000>; 463 936 opp-hz = /bits/ 64 <1300000000>; 464 937 }; 465 938 466 939 opp@1300000000,1075 { 467 940 clock-latency-ns = <100000>; 468 - opp-supported-hw = <0x02 0x0182>; 469 - opp-hz = /bits/ 64 <1300000000>; 470 - }; 471 - 472 - opp@1300000000,1075,2,2 { 473 - clock-latency-ns = <100000>; 474 - opp-supported-hw = <0x04 0x0004>; 475 - opp-hz = /bits/ 64 <1300000000>; 476 - }; 477 - 478 - opp@1300000000,1075,2,3 { 479 - clock-latency-ns = <100000>; 480 - opp-supported-hw = <0x04 0x0008>; 481 
- opp-hz = /bits/ 64 <1300000000>; 482 - }; 483 - 484 - opp@1300000000,1075,2,4 { 485 - clock-latency-ns = <100000>; 486 - opp-supported-hw = <0x04 0x0010>; 941 + opp-supported-hw = <0x02 0x0182>, <0x04 0x0004>, 942 + <0x04 0x0008>, <0x04 0x0010>; 487 943 opp-hz = /bits/ 64 <1300000000>; 488 944 }; 489 945 ··· 425 1081 426 1082 opp@1400000000,1150 { 427 1083 clock-latency-ns = <100000>; 428 - opp-supported-hw = <0x02 0x000C>; 429 - opp-hz = /bits/ 64 <1400000000>; 430 - }; 431 - 432 - opp@1400000000,1150,2,4 { 433 - clock-latency-ns = <100000>; 434 - opp-supported-hw = <0x04 0x0010>; 1084 + opp-supported-hw = <0x02 0x000C>, <0x04 0x0010>; 435 1085 opp-hz = /bits/ 64 <1400000000>; 436 1086 }; 437 1087 ··· 443 1105 444 1106 opp@1500000000,1125 { 445 1107 clock-latency-ns = <100000>; 446 - opp-supported-hw = <0x08 0x0010>; 447 - opp-hz = /bits/ 64 <1500000000>; 448 - }; 449 - 450 - opp@1500000000,1125,4,5 { 451 - clock-latency-ns = <100000>; 452 - opp-supported-hw = <0x10 0x0020>; 453 - opp-hz = /bits/ 64 <1500000000>; 454 - }; 455 - 456 - opp@1500000000,1125,4,6 { 457 - clock-latency-ns = <100000>; 458 - opp-supported-hw = <0x10 0x0040>; 459 - opp-hz = /bits/ 64 <1500000000>; 460 - }; 461 - 462 - opp@1500000000,1125,4,12 { 463 - clock-latency-ns = <100000>; 464 - opp-supported-hw = <0x10 0x1000>; 465 - opp-hz = /bits/ 64 <1500000000>; 466 - }; 467 - 468 - opp@1500000000,1125,4,13 { 469 - clock-latency-ns = <100000>; 470 - opp-supported-hw = <0x10 0x2000>; 1108 + opp-supported-hw = <0x08 0x0010>, <0x10 0x0020>, 1109 + <0x10 0x0040>, <0x10 0x1000>, 1110 + <0x10 0x2000>; 471 1111 opp-hz = /bits/ 64 <1500000000>; 472 1112 }; 473 1113 474 1114 opp@1500000000,1150 { 475 1115 clock-latency-ns = <100000>; 476 - opp-supported-hw = <0x04 0x0010>; 477 - opp-hz = /bits/ 64 <1500000000>; 478 - }; 479 - 480 - opp@1500000000,1150,3,5 { 481 - clock-latency-ns = <100000>; 482 - opp-supported-hw = <0x08 0x0020>; 483 - opp-hz = /bits/ 64 <1500000000>; 484 - }; 485 - 486 - 
opp@1500000000,1150,3,6 { 487 - clock-latency-ns = <100000>; 488 - opp-supported-hw = <0x08 0x0040>; 489 - opp-hz = /bits/ 64 <1500000000>; 490 - }; 491 - 492 - opp@1500000000,1150,3,12 { 493 - clock-latency-ns = <100000>; 494 - opp-supported-hw = <0x08 0x1000>; 495 - opp-hz = /bits/ 64 <1500000000>; 496 - }; 497 - 498 - opp@1500000000,1150,3,13 { 499 - clock-latency-ns = <100000>; 500 - opp-supported-hw = <0x08 0x2000>; 1116 + opp-supported-hw = <0x04 0x0010>, <0x08 0x0020>, 1117 + <0x08 0x0040>, <0x08 0x1000>, 1118 + <0x08 0x2000>; 501 1119 opp-hz = /bits/ 64 <1500000000>; 502 1120 }; 503 1121
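The consolidated `opp-supported-hw` lists above work because the OPP core treats the property as a set of alternative mask tuples: an OPP is enabled when some tuple has, in every cell, at least one bit in common with the version values the platform registers via `dev_pm_opp_set_supported_hw()`. A minimal standalone sketch of that matching rule (struct and function names here are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One <speedo-mask process-mask> tuple from an opp-supported-hw list. */
struct opp_hw_tuple {
	uint32_t cells[2];
};

/*
 * A tuple matches when every cell shares at least one set bit with the
 * corresponding platform version value.
 */
static bool tuple_matches(const struct opp_hw_tuple *t,
			  const uint32_t versions[2])
{
	for (size_t i = 0; i < 2; i++)
		if (!(t->cells[i] & versions[i]))
			return false;
	return true;
}

/* An OPP is usable if any tuple in its opp-supported-hw list matches. */
bool opp_supported(const struct opp_hw_tuple *list, size_t n,
		   const uint32_t versions[2])
{
	for (size_t i = 0; i < n; i++)
		if (tuple_matches(&list[i], versions))
			return true;
	return false;
}
```

For example, a chip advertising masks `0x08` and `0x0010` selects exactly the list entries above containing `<0x08 0x0010>`, which is why merging the per-variant nodes into one list preserves the selection behaviour.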
+5
arch/arm/include/asm/topology.h
···
 #include <linux/cpumask.h>
 #include <linux/arch_topology.h>
 
+/* big.LITTLE switcher is incompatible with frequency invariance */
+#ifndef CONFIG_BL_SWITCHER
 /* Replace task scheduler's default frequency-invariant accounting */
+#define arch_set_freq_scale topology_set_freq_scale
 #define arch_scale_freq_capacity topology_get_freq_scale
+#define arch_scale_freq_invariant topology_scale_freq_invariant
+#endif
 
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale
+2
arch/arm64/include/asm/topology.h
···
 #endif /* CONFIG_ARM64_AMU_EXTN */
 
 /* Replace task scheduler's default frequency-invariant accounting */
+#define arch_set_freq_scale topology_set_freq_scale
 #define arch_scale_freq_capacity topology_get_freq_scale
+#define arch_scale_freq_invariant topology_scale_freq_invariant
 
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale
+8 -1
arch/arm64/kernel/topology.c
···
 		static_branch_enable(&amu_fie_key);
 	}
 
+	/*
+	 * If the system is not fully invariant after AMU init, disable
+	 * partial use of counters for frequency invariance.
+	 */
+	if (!topology_scale_freq_invariant())
+		static_branch_disable(&amu_fie_key);
+
 free_valid_mask:
 	free_cpumask_var(valid_cpus);
 
···
 }
 late_initcall_sync(init_amu_fie);
 
-bool arch_freq_counters_available(struct cpumask *cpus)
+bool arch_freq_counters_available(const struct cpumask *cpus)
 {
 	return amu_freq_invariant() &&
 	       cpumask_subset(cpus, amu_fie_cpus);
+28 -6
drivers/acpi/acpi_processor.c
···
 		memset(&cx, 0, sizeof(cx));
 
 		element = &cst->package.elements[i];
-		if (element->type != ACPI_TYPE_PACKAGE)
+		if (element->type != ACPI_TYPE_PACKAGE) {
+			acpi_handle_info(handle, "_CST C%d type(%x) is not package, skip...\n",
+					 i, element->type);
 			continue;
+		}
 
-		if (element->package.count != 4)
+		if (element->package.count != 4) {
+			acpi_handle_info(handle, "_CST C%d package count(%d) is not 4, skip...\n",
+					 i, element->package.count);
 			continue;
+		}
 
 		obj = &element->package.elements[0];
 
-		if (obj->type != ACPI_TYPE_BUFFER)
+		if (obj->type != ACPI_TYPE_BUFFER) {
+			acpi_handle_info(handle, "_CST C%d package element[0] type(%x) is not buffer, skip...\n",
+					 i, obj->type);
 			continue;
+		}
 
 		reg = (struct acpi_power_register *)obj->buffer.pointer;
 
 		obj = &element->package.elements[1];
-		if (obj->type != ACPI_TYPE_INTEGER)
+		if (obj->type != ACPI_TYPE_INTEGER) {
+			acpi_handle_info(handle, "_CST C[%d] package element[1] type(%x) is not integer, skip...\n",
+					 i, obj->type);
 			continue;
+		}
 
 		cx.type = obj->integer.value;
 		/*
···
 				cx.entry_method = ACPI_CSTATE_HALT;
 				snprintf(cx.desc, ACPI_CX_DESC_LEN, "ACPI HLT");
 			} else {
+				acpi_handle_info(handle, "_CST C%d declares FIXED_HARDWARE C-state but not supported in hardware, skip...\n",
+						 i);
 				continue;
 			}
 		} else if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
···
 			snprintf(cx.desc, ACPI_CX_DESC_LEN, "ACPI IOPORT 0x%x",
 				 cx.address);
 		} else {
+			acpi_handle_info(handle, "_CST C%d space_id(%x) neither FIXED_HARDWARE nor SYSTEM_IO, skip...\n",
+					 i, reg->space_id);
 			continue;
 		}
 
···
 		cx.valid = 1;
 
 		obj = &element->package.elements[2];
-		if (obj->type != ACPI_TYPE_INTEGER)
+		if (obj->type != ACPI_TYPE_INTEGER) {
+			acpi_handle_info(handle, "_CST C%d package element[2] type(%x) not integer, skip...\n",
+					 i, obj->type);
 			continue;
+		}
 
 		cx.latency = obj->integer.value;
 
 		obj = &element->package.elements[3];
-		if (obj->type != ACPI_TYPE_INTEGER)
+		if (obj->type != ACPI_TYPE_INTEGER) {
+			acpi_handle_info(handle, "_CST C%d package element[3] type(%x) not integer, skip...\n",
+					 i, obj->type);
 			continue;
+		}
 
 		memcpy(&info->states[++last_index], &cx, sizeof(cx));
 	}
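The new `acpi_handle_info()` messages make the previously silent sanity checks visible: a `_CST` C-state entry must be a 4-element package whose first element is a buffer (the register) and whose remaining three are integers (type, latency, power). A standalone sketch of just that shape check, using simplified stand-in types rather than ACPICA's:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for ACPI object types. */
enum obj_type { OBJ_BUFFER, OBJ_INTEGER, OBJ_PACKAGE, OBJ_OTHER };

/*
 * Mirror of the checks the diagnostics above report on: a C-state
 * entry is a 4-element package laid out as (buffer, integer,
 * integer, integer).
 */
bool cst_entry_shape_ok(enum obj_type entry, size_t count,
			const enum obj_type elems[4])
{
	if (entry != OBJ_PACKAGE || count != 4)
		return false;
	if (elems[0] != OBJ_BUFFER)
		return false;
	for (size_t i = 1; i < 4; i++)
		if (elems[i] != OBJ_INTEGER)
			return false;
	return true;
}
```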
+3 -7
drivers/acpi/ec.c
···
 	if (acpi_any_gpe_status_set(first_ec->gpe))
 		return true;
 
-	if (ec_no_wakeup)
-		return false;
-
 	/*
 	 * Dispatch the EC GPE in-band, but do not report wakeup in any case
 	 * to allow the caller to process events properly after that.
 	 */
 	ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
-	if (ret == ACPI_INTERRUPT_HANDLED) {
+	if (ret == ACPI_INTERRUPT_HANDLED)
 		pm_pr_dbg("ACPI EC GPE dispatched\n");
 
-		/* Flush the event and query workqueues. */
-		acpi_ec_flush_work();
-	}
+	/* Flush the event and query workqueues. */
+	acpi_ec_flush_work();
 
 	return false;
 }
+12 -3
drivers/base/arch_topology.c
···
 #include <linux/sched.h>
 #include <linux/smp.h>
 
-__weak bool arch_freq_counters_available(struct cpumask *cpus)
+bool topology_scale_freq_invariant(void)
+{
+	return cpufreq_supports_freq_invariance() ||
+	       arch_freq_counters_available(cpu_online_mask);
+}
+
+__weak bool arch_freq_counters_available(const struct cpumask *cpus)
 {
 	return false;
 }
 DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
 
-void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
-			 unsigned long max_freq)
+void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
+			     unsigned long max_freq)
 {
 	unsigned long scale;
 	int i;
+
+	if (WARN_ON_ONCE(!cur_freq || !max_freq))
+		return;
 
 	/*
 	 * If the use of counters for FIE is enabled, just return as we don't
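`topology_set_freq_scale()` above gains a guard against zero frequencies before computing the per-CPU `freq_scale`, i.e. the current frequency expressed on the scheduler's 1024-point capacity scale. The arithmetic reduces to the following standalone sketch (the kernel stores the result in a per-CPU variable instead of returning it; the 0 return here merely flags the invalid case the new `WARN_ON_ONCE()` rejects):

```c
#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

/*
 * Current frequency as a fraction of the maximum, scaled onto
 * [0, SCHED_CAPACITY_SCALE]. Returns 0 for the invalid inputs
 * that the kernel's WARN_ON_ONCE() guard bails out on.
 */
unsigned long freq_scale_of(unsigned long cur_freq, unsigned long max_freq)
{
	if (!cur_freq || !max_freq)
		return 0;
	return (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;
}
```

At `cur_freq == max_freq` this yields `SCHED_CAPACITY_SCALE` (1024), i.e. full capacity, which is why the per-CPU variable is initialised to that value.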
+31 -40
drivers/base/power/domain.c
···
 #define genpd_lock_interruptible(p)	p->lock_ops->lock_interruptible(p)
 #define genpd_unlock(p)			p->lock_ops->unlock(p)
 
-#define genpd_status_on(genpd)		(genpd->status == GPD_STATE_ACTIVE)
+#define genpd_status_on(genpd)		(genpd->status == GENPD_STATE_ON)
 #define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd)	(genpd->flags & GENPD_FLAG_ALWAYS_ON)
 #define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
···
 	 * out of off and so update the idle time and vice
 	 * versa.
 	 */
-	if (genpd->status == GPD_STATE_ACTIVE) {
+	if (genpd->status == GENPD_STATE_ON) {
 		int state_idx = genpd->state_idx;
 
 		genpd->states[state_idx].idle_time =
···
 	struct pm_domain_data *pdd;
 	struct gpd_link *link;
 	unsigned int not_suspended = 0;
+	int ret;
 
 	/*
 	 * Do not try to power off the domain in the following situations:
···
 	if (!genpd->gov)
 		genpd->state_idx = 0;
 
-	if (genpd->power_off) {
-		int ret;
+	/* Don't power off, if a child domain is waiting to power on. */
+	if (atomic_read(&genpd->sd_count) > 0)
+		return -EBUSY;
 
-		if (atomic_read(&genpd->sd_count) > 0)
-			return -EBUSY;
+	ret = _genpd_power_off(genpd, true);
+	if (ret)
+		return ret;
 
-		/*
-		 * If sd_count > 0 at this point, one of the subdomains hasn't
-		 * managed to call genpd_power_on() for the parent yet after
-		 * incrementing it. In that case genpd_power_on() will wait
-		 * for us to drop the lock, so we can call .power_off() and let
-		 * the genpd_power_on() restore power for us (this shouldn't
-		 * happen very often).
-		 */
-		ret = _genpd_power_off(genpd, true);
-		if (ret)
-			return ret;
-	}
-
-	genpd->status = GPD_STATE_POWER_OFF;
+	genpd->status = GENPD_STATE_OFF;
 	genpd_update_accounting(genpd);
 
 	list_for_each_entry(link, &genpd->child_links, child_node) {
···
 	if (ret)
 		goto err;
 
-	genpd->status = GPD_STATE_ACTIVE;
+	genpd->status = GENPD_STATE_ON;
 	genpd_update_accounting(genpd);
 
 	return 0;
···
 	if (_genpd_power_off(genpd, false))
 		return;
 
-	genpd->status = GPD_STATE_POWER_OFF;
+	genpd->status = GENPD_STATE_OFF;
 
 	list_for_each_entry(link, &genpd->child_links, child_node) {
 		genpd_sd_counter_dec(link->parent);
···
 	}
 
 	_genpd_power_on(genpd, false);
-
-	genpd->status = GPD_STATE_ACTIVE;
+	genpd->status = GENPD_STATE_ON;
 }
 
 /**
···
 	 * so make it appear as powered off to genpd_sync_power_on(),
 	 * so that it tries to power it on in case it was really off.
 	 */
-	genpd->status = GPD_STATE_POWER_OFF;
+	genpd->status = GENPD_STATE_OFF;
 
 	genpd_sync_power_on(genpd, true, 0);
 	genpd_unlock(genpd);
···
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
-	genpd->status = is_off ? GPD_STATE_POWER_OFF : GPD_STATE_ACTIVE;
+	genpd->status = is_off ? GENPD_STATE_OFF : GENPD_STATE_ON;
 	genpd->device_count = 0;
 	genpd->max_off_time_ns = -1;
 	genpd->max_off_time_changed = true;
···
 	if (genpd->set_performance_state) {
 		ret = dev_pm_opp_of_add_table(&genpd->dev);
 		if (ret) {
-			dev_err(&genpd->dev, "Failed to add OPP table: %d\n",
-				ret);
+			if (ret != -EPROBE_DEFER)
+				dev_err(&genpd->dev, "Failed to add OPP table: %d\n",
+					ret);
 			goto unlock;
 		}
 
···
 		 * state.
 		 */
 		genpd->opp_table = dev_pm_opp_get_opp_table(&genpd->dev);
-		WARN_ON(!genpd->opp_table);
+		WARN_ON(IS_ERR(genpd->opp_table));
 	}
 
 	ret = genpd_add_provider(np, genpd_xlate_simple, genpd);
···
 		if (genpd->set_performance_state) {
 			ret = dev_pm_opp_of_add_table_indexed(&genpd->dev, i);
 			if (ret) {
-				dev_err(&genpd->dev, "Failed to add OPP table for index %d: %d\n",
-					i, ret);
+				if (ret != -EPROBE_DEFER)
+					dev_err(&genpd->dev, "Failed to add OPP table for index %d: %d\n",
+						i, ret);
 				goto error;
 			}
 
···
 			 * performance state.
 			 */
 			genpd->opp_table = dev_pm_opp_get_opp_table_indexed(&genpd->dev, i);
-			WARN_ON(!genpd->opp_table);
+			WARN_ON(IS_ERR(genpd->opp_table));
 		}
 
 		genpd->provider = &np->fwnode;
···
 			   struct generic_pm_domain *genpd)
 {
 	static const char * const status_lookup[] = {
-		[GPD_STATE_ACTIVE] = "on",
-		[GPD_STATE_POWER_OFF] = "off"
+		[GENPD_STATE_ON] = "on",
+		[GENPD_STATE_OFF] = "off"
 	};
 	struct pm_domain_data *pm_data;
 	const char *kobj_path;
···
 static int status_show(struct seq_file *s, void *data)
 {
 	static const char * const status_lookup[] = {
-		[GPD_STATE_ACTIVE] = "on",
-		[GPD_STATE_POWER_OFF] = "off"
+		[GENPD_STATE_ON] = "on",
+		[GENPD_STATE_OFF] = "off"
 	};
 
 	struct generic_pm_domain *genpd = s->private;
···
 	if (WARN_ON_ONCE(genpd->status >= ARRAY_SIZE(status_lookup)))
 		goto exit;
 
-	if (genpd->status == GPD_STATE_POWER_OFF)
+	if (genpd->status == GENPD_STATE_OFF)
 		seq_printf(s, "%s-%u\n", status_lookup[genpd->status],
 			   genpd->state_idx);
 	else
···
 		ktime_t delta = 0;
 		s64 msecs;
 
-		if ((genpd->status == GPD_STATE_POWER_OFF) &&
+		if ((genpd->status == GENPD_STATE_OFF) &&
 		    (genpd->state_idx == i))
 			delta = ktime_sub(ktime_get(), genpd->accounting_time);
 
···
 	if (ret)
 		return -ERESTARTSYS;
 
-	if (genpd->status == GPD_STATE_ACTIVE)
+	if (genpd->status == GENPD_STATE_ON)
 		delta = ktime_sub(ktime_get(), genpd->accounting_time);
 
 	seq_printf(s, "%lld ms\n", ktime_to_ms(
···
 
 	for (i = 0; i < genpd->state_count; i++) {
 
-		if ((genpd->status == GPD_STATE_POWER_OFF) &&
+		if ((genpd->status == GENPD_STATE_OFF) &&
 		    (genpd->state_idx == i))
 			delta = ktime_sub(ktime_get(), genpd->accounting_time);
 
+1 -4
drivers/base/power/runtime.c
··· 291 291 device_links_read_lock_held()) { 292 292 int retval; 293 293 294 - if (!(link->flags & DL_FLAG_PM_RUNTIME) || 295 - READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND) 294 + if (!(link->flags & DL_FLAG_PM_RUNTIME)) 296 295 continue; 297 296 298 297 retval = pm_runtime_get_sync(link->supplier); ··· 311 312 312 313 list_for_each_entry_rcu(link, &dev->links.suppliers, c_node, 313 314 device_links_read_lock_held()) { 314 - if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND) 315 - continue; 316 315 317 316 while (refcount_dec_not_one(&link->rpm_active)) 318 317 pm_runtime_put(link->supplier);
+1 -1
drivers/cpufreq/Kconfig.arm
··· 283 283 284 284 config ARM_STI_CPUFREQ 285 285 tristate "STi CPUFreq support" 286 - depends on SOC_STIH407 286 + depends on CPUFREQ_DT && SOC_STIH407 287 287 help 288 288 This driver uses the generic OPP framework to match the running 289 289 platform with a predefined set of suitable values. If not provided
+6
drivers/cpufreq/armada-37xx-cpufreq.c
··· 484 484 /* late_initcall, to guarantee the driver is loaded after A37xx clock driver */ 485 485 late_initcall(armada37xx_cpufreq_driver_init); 486 486 487 + static const struct of_device_id __maybe_unused armada37xx_cpufreq_of_match[] = { 488 + { .compatible = "marvell,armada-3700-nb-pm" }, 489 + { }, 490 + }; 491 + MODULE_DEVICE_TABLE(of, armada37xx_cpufreq_of_match); 492 + 487 493 MODULE_AUTHOR("Gregory CLEMENT <gregory.clement@free-electrons.com>"); 488 494 MODULE_DESCRIPTION("Armada 37xx cpufreq driver"); 489 495 MODULE_LICENSE("GPL");
+1
drivers/cpufreq/cpufreq-dt-platdev.c
··· 137 137 138 138 { .compatible = "st,stih407", }, 139 139 { .compatible = "st,stih410", }, 140 + { .compatible = "st,stih418", }, 140 141 141 142 { .compatible = "sigma,tango4", }, 142 143
+145 -153
drivers/cpufreq/cpufreq-dt.c
··· 13 13 #include <linux/cpufreq.h> 14 14 #include <linux/cpumask.h> 15 15 #include <linux/err.h> 16 + #include <linux/list.h> 16 17 #include <linux/module.h> 17 18 #include <linux/of.h> 18 19 #include <linux/pm_opp.h> ··· 25 24 #include "cpufreq-dt.h" 26 25 27 26 struct private_data { 28 - struct opp_table *opp_table; 27 + struct list_head node; 28 + 29 + cpumask_var_t cpus; 29 30 struct device *cpu_dev; 30 - const char *reg_name; 31 + struct opp_table *opp_table; 32 + struct opp_table *reg_opp_table; 31 33 bool have_static_opps; 32 34 }; 35 + 36 + static LIST_HEAD(priv_list); 33 37 34 38 static struct freq_attr *cpufreq_dt_attr[] = { 35 39 &cpufreq_freq_attr_scaling_available_freqs, ··· 42 36 NULL, 43 37 }; 44 38 39 + static struct private_data *cpufreq_dt_find_data(int cpu) 40 + { 41 + struct private_data *priv; 42 + 43 + list_for_each_entry(priv, &priv_list, node) { 44 + if (cpumask_test_cpu(cpu, priv->cpus)) 45 + return priv; 46 + } 47 + 48 + return NULL; 49 + } 50 + 45 51 static int set_target(struct cpufreq_policy *policy, unsigned int index) 46 52 { 47 53 struct private_data *priv = policy->driver_data; 48 54 unsigned long freq = policy->freq_table[index].frequency; 49 - int ret; 50 55 51 - ret = dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000); 52 - 53 - if (!ret) { 54 - arch_set_freq_scale(policy->related_cpus, freq, 55 - policy->cpuinfo.max_freq); 56 - } 57 - 58 - return ret; 56 + return dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000); 59 57 } 60 58 61 59 /* ··· 100 90 return name; 101 91 } 102 92 103 - static int resources_available(void) 104 - { 105 - struct device *cpu_dev; 106 - struct regulator *cpu_reg; 107 - struct clk *cpu_clk; 108 - int ret = 0; 109 - const char *name; 110 - 111 - cpu_dev = get_cpu_device(0); 112 - if (!cpu_dev) { 113 - pr_err("failed to get cpu0 device\n"); 114 - return -ENODEV; 115 - } 116 - 117 - cpu_clk = clk_get(cpu_dev, NULL); 118 - ret = PTR_ERR_OR_ZERO(cpu_clk); 119 - if (ret) { 120 - /* 121 - * If cpu's clk node is 
present, but clock is not yet 122 - * registered, we should try defering probe. 123 - */ 124 - if (ret == -EPROBE_DEFER) 125 - dev_dbg(cpu_dev, "clock not ready, retry\n"); 126 - else 127 - dev_err(cpu_dev, "failed to get clock: %d\n", ret); 128 - 129 - return ret; 130 - } 131 - 132 - clk_put(cpu_clk); 133 - 134 - ret = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL); 135 - if (ret) 136 - return ret; 137 - 138 - name = find_supply_name(cpu_dev); 139 - /* Platform doesn't require regulator */ 140 - if (!name) 141 - return 0; 142 - 143 - cpu_reg = regulator_get_optional(cpu_dev, name); 144 - ret = PTR_ERR_OR_ZERO(cpu_reg); 145 - if (ret) { 146 - /* 147 - * If cpu's regulator supply node is present, but regulator is 148 - * not yet registered, we should try defering probe. 149 - */ 150 - if (ret == -EPROBE_DEFER) 151 - dev_dbg(cpu_dev, "cpu0 regulator not ready, retry\n"); 152 - else 153 - dev_dbg(cpu_dev, "no regulator for cpu0: %d\n", ret); 154 - 155 - return ret; 156 - } 157 - 158 - regulator_put(cpu_reg); 159 - return 0; 160 - } 161 - 162 93 static int cpufreq_init(struct cpufreq_policy *policy) 163 94 { 164 95 struct cpufreq_frequency_table *freq_table; 165 - struct opp_table *opp_table = NULL; 166 96 struct private_data *priv; 167 97 struct device *cpu_dev; 168 98 struct clk *cpu_clk; 169 99 unsigned int transition_latency; 170 - bool fallback = false; 171 - const char *name; 172 100 int ret; 173 101 174 - cpu_dev = get_cpu_device(policy->cpu); 175 - if (!cpu_dev) { 176 - pr_err("failed to get cpu%d device\n", policy->cpu); 102 + priv = cpufreq_dt_find_data(policy->cpu); 103 + if (!priv) { 104 + pr_err("failed to find data for cpu%d\n", policy->cpu); 177 105 return -ENODEV; 178 106 } 107 + 108 + cpu_dev = priv->cpu_dev; 109 + cpumask_copy(policy->cpus, priv->cpus); 179 110 180 111 cpu_clk = clk_get(cpu_dev, NULL); 181 112 if (IS_ERR(cpu_clk)) { ··· 124 173 dev_err(cpu_dev, "%s: failed to get clk: %d\n", __func__, ret); 125 174 return ret; 126 175 } 127 - 128 - /* 
Get OPP-sharing information from "operating-points-v2" bindings */ 129 - ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, policy->cpus); 130 - if (ret) { 131 - if (ret != -ENOENT) 132 - goto out_put_clk; 133 - 134 - /* 135 - * operating-points-v2 not supported, fallback to old method of 136 - * finding shared-OPPs for backward compatibility if the 137 - * platform hasn't set sharing CPUs. 138 - */ 139 - if (dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus)) 140 - fallback = true; 141 - } 142 - 143 - /* 144 - * OPP layer will be taking care of regulators now, but it needs to know 145 - * the name of the regulator first. 146 - */ 147 - name = find_supply_name(cpu_dev); 148 - if (name) { 149 - opp_table = dev_pm_opp_set_regulators(cpu_dev, &name, 1); 150 - if (IS_ERR(opp_table)) { 151 - ret = PTR_ERR(opp_table); 152 - dev_err(cpu_dev, "Failed to set regulator for cpu%d: %d\n", 153 - policy->cpu, ret); 154 - goto out_put_clk; 155 - } 156 - } 157 - 158 - priv = kzalloc(sizeof(*priv), GFP_KERNEL); 159 - if (!priv) { 160 - ret = -ENOMEM; 161 - goto out_put_regulator; 162 - } 163 - 164 - priv->reg_name = name; 165 - priv->opp_table = opp_table; 166 176 167 177 /* 168 178 * Initialize OPP tables for all policy->cpus. They will be shared by ··· 144 232 */ 145 233 ret = dev_pm_opp_get_opp_count(cpu_dev); 146 234 if (ret <= 0) { 147 - dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n"); 148 - ret = -EPROBE_DEFER; 235 + dev_err(cpu_dev, "OPP table can't be empty\n"); 236 + ret = -ENODEV; 149 237 goto out_free_opp; 150 - } 151 - 152 - if (fallback) { 153 - cpumask_setall(policy->cpus); 154 - 155 - /* 156 - * OPP tables are initialized only for policy->cpu, do it for 157 - * others as well. 
158 - */ 159 - ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus); 160 - if (ret) 161 - dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n", 162 - __func__, ret); 163 238 } 164 239 165 240 ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); ··· 155 256 goto out_free_opp; 156 257 } 157 258 158 - priv->cpu_dev = cpu_dev; 159 259 policy->driver_data = priv; 160 260 policy->clk = cpu_clk; 161 261 policy->freq_table = freq_table; ··· 186 288 out_free_opp: 187 289 if (priv->have_static_opps) 188 290 dev_pm_opp_of_cpumask_remove_table(policy->cpus); 189 - kfree(priv); 190 - out_put_regulator: 191 - if (name) 192 - dev_pm_opp_put_regulators(opp_table); 193 - out_put_clk: 194 291 clk_put(cpu_clk); 195 292 196 293 return ret; ··· 213 320 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 214 321 if (priv->have_static_opps) 215 322 dev_pm_opp_of_cpumask_remove_table(policy->related_cpus); 216 - if (priv->reg_name) 217 - dev_pm_opp_put_regulators(priv->opp_table); 218 - 219 323 clk_put(policy->clk); 220 - kfree(priv); 221 - 222 324 return 0; 223 325 } 224 326 ··· 232 344 .suspend = cpufreq_generic_suspend, 233 345 }; 234 346 347 + static int dt_cpufreq_early_init(struct device *dev, int cpu) 348 + { 349 + struct private_data *priv; 350 + struct device *cpu_dev; 351 + const char *reg_name; 352 + int ret; 353 + 354 + /* Check if this CPU is already covered by some other policy */ 355 + if (cpufreq_dt_find_data(cpu)) 356 + return 0; 357 + 358 + cpu_dev = get_cpu_device(cpu); 359 + if (!cpu_dev) 360 + return -EPROBE_DEFER; 361 + 362 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 363 + if (!priv) 364 + return -ENOMEM; 365 + 366 + if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL)) 367 + return -ENOMEM; 368 + 369 + priv->cpu_dev = cpu_dev; 370 + 371 + /* Try to get OPP table early to ensure resources are available */ 372 + priv->opp_table = dev_pm_opp_get_opp_table(cpu_dev); 373 + if (IS_ERR(priv->opp_table)) { 374 + ret = 
PTR_ERR(priv->opp_table); 375 + if (ret != -EPROBE_DEFER) 376 + dev_err(cpu_dev, "failed to get OPP table: %d\n", ret); 377 + goto free_cpumask; 378 + } 379 + 380 + /* 381 + * OPP layer will be taking care of regulators now, but it needs to know 382 + * the name of the regulator first. 383 + */ 384 + reg_name = find_supply_name(cpu_dev); 385 + if (reg_name) { 386 + priv->reg_opp_table = dev_pm_opp_set_regulators(cpu_dev, 387 + &reg_name, 1); 388 + if (IS_ERR(priv->reg_opp_table)) { 389 + ret = PTR_ERR(priv->reg_opp_table); 390 + if (ret != -EPROBE_DEFER) 391 + dev_err(cpu_dev, "failed to set regulators: %d\n", 392 + ret); 393 + goto put_table; 394 + } 395 + } 396 + 397 + /* Find OPP sharing information so we can fill pri->cpus here */ 398 + /* Get OPP-sharing information from "operating-points-v2" bindings */ 399 + ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, priv->cpus); 400 + if (ret) { 401 + if (ret != -ENOENT) 402 + goto put_reg; 403 + 404 + /* 405 + * operating-points-v2 not supported, fallback to all CPUs share 406 + * OPP for backward compatibility if the platform hasn't set 407 + * sharing CPUs. 408 + */ 409 + if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus)) { 410 + cpumask_setall(priv->cpus); 411 + 412 + /* 413 + * OPP tables are initialized only for cpu, do it for 414 + * others as well. 
415 + */ 416 + ret = dev_pm_opp_set_sharing_cpus(cpu_dev, priv->cpus); 417 + if (ret) 418 + dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n", 419 + __func__, ret); 420 + } 421 + } 422 + 423 + list_add(&priv->node, &priv_list); 424 + return 0; 425 + 426 + put_reg: 427 + if (priv->reg_opp_table) 428 + dev_pm_opp_put_regulators(priv->reg_opp_table); 429 + put_table: 430 + dev_pm_opp_put_opp_table(priv->opp_table); 431 + free_cpumask: 432 + free_cpumask_var(priv->cpus); 433 + return ret; 434 + } 435 + 436 + static void dt_cpufreq_release(void) 437 + { 438 + struct private_data *priv, *tmp; 439 + 440 + list_for_each_entry_safe(priv, tmp, &priv_list, node) { 441 + if (priv->reg_opp_table) 442 + dev_pm_opp_put_regulators(priv->reg_opp_table); 443 + dev_pm_opp_put_opp_table(priv->opp_table); 444 + free_cpumask_var(priv->cpus); 445 + list_del(&priv->node); 446 + } 447 + } 448 + 235 449 static int dt_cpufreq_probe(struct platform_device *pdev) 236 450 { 237 451 struct cpufreq_dt_platform_data *data = dev_get_platdata(&pdev->dev); 238 - int ret; 452 + int ret, cpu; 239 453 240 - /* 241 - * All per-cluster (CPUs sharing clock/voltages) initialization is done 242 - * from ->init(). In probe(), we just need to make sure that clk and 243 - * regulators are available. Else defer probe and retry. 244 - * 245 - * FIXME: Is checking this only for CPU0 sufficient ? 
246 - */ 247 - ret = resources_available(); 248 - if (ret) 249 - return ret; 454 + /* Request resources early so we can return in case of -EPROBE_DEFER */ 455 + for_each_possible_cpu(cpu) { 456 + ret = dt_cpufreq_early_init(&pdev->dev, cpu); 457 + if (ret) 458 + goto err; 459 + } 250 460 251 461 if (data) { 252 462 if (data->have_governor_per_policy) ··· 360 374 } 361 375 362 376 ret = cpufreq_register_driver(&dt_cpufreq_driver); 363 - if (ret) 377 + if (ret) { 364 378 dev_err(&pdev->dev, "failed register driver: %d\n", ret); 379 + goto err; 380 + } 365 381 382 + return 0; 383 + err: 384 + dt_cpufreq_release(); 366 385 return ret; 367 386 } 368 387 369 388 static int dt_cpufreq_remove(struct platform_device *pdev) 370 389 { 371 390 cpufreq_unregister_driver(&dt_cpufreq_driver); 391 + dt_cpufreq_release(); 372 392 return 0; 373 393 } 374 394
+39 -8
drivers/cpufreq/cpufreq.c
··· 61 61 static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); 62 62 static DEFINE_RWLOCK(cpufreq_driver_lock); 63 63 64 + static DEFINE_STATIC_KEY_FALSE(cpufreq_freq_invariance); 65 + bool cpufreq_supports_freq_invariance(void) 66 + { 67 + return static_branch_likely(&cpufreq_freq_invariance); 68 + } 69 + 64 70 /* Flag to suspend/resume CPUFreq governors */ 65 71 static bool cpufreq_suspended; 66 72 ··· 159 153 return idle_time; 160 154 } 161 155 EXPORT_SYMBOL_GPL(get_cpu_idle_time); 162 - 163 - __weak void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq, 164 - unsigned long max_freq) 165 - { 166 - } 167 - EXPORT_SYMBOL_GPL(arch_set_freq_scale); 168 156 169 157 /* 170 158 * This is a generic cpufreq init() routine which can be used by cpufreq ··· 445 445 return; 446 446 447 447 cpufreq_notify_post_transition(policy, freqs, transition_failed); 448 + 449 + arch_set_freq_scale(policy->related_cpus, 450 + policy->cur, 451 + policy->cpuinfo.max_freq); 448 452 449 453 policy->transition_ongoing = false; 450 454 policy->transition_task = NULL; ··· 2060 2056 unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy, 2061 2057 unsigned int target_freq) 2062 2058 { 2063 - target_freq = clamp_val(target_freq, policy->min, policy->max); 2059 + unsigned int freq; 2060 + int cpu; 2064 2061 2065 - return cpufreq_driver->fast_switch(policy, target_freq); 2062 + target_freq = clamp_val(target_freq, policy->min, policy->max); 2063 + freq = cpufreq_driver->fast_switch(policy, target_freq); 2064 + 2065 + if (!freq) 2066 + return 0; 2067 + 2068 + policy->cur = freq; 2069 + arch_set_freq_scale(policy->related_cpus, freq, 2070 + policy->cpuinfo.max_freq); 2071 + cpufreq_stats_record_transition(policy, freq); 2072 + 2073 + if (trace_cpu_frequency_enabled()) { 2074 + for_each_cpu(cpu, policy->cpus) 2075 + trace_cpu_frequency(freq, cpu); 2076 + } 2077 + 2078 + return freq; 2066 2079 } 2067 2080 EXPORT_SYMBOL_GPL(cpufreq_driver_fast_switch); 2068 
2081 ··· 2731 2710 cpufreq_driver = driver_data; 2732 2711 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 2733 2712 2713 + /* 2714 + * Mark support for the scheduler's frequency invariance engine for 2715 + * drivers that implement target(), target_index() or fast_switch(). 2716 + */ 2717 + if (!cpufreq_driver->setpolicy) { 2718 + static_branch_enable_cpuslocked(&cpufreq_freq_invariance); 2719 + pr_debug("supports frequency invariance"); 2720 + } 2721 + 2734 2722 if (driver_data->setpolicy) 2735 2723 driver_data->flags |= CPUFREQ_CONST_LOOPS; 2736 2724 ··· 2809 2779 cpus_read_lock(); 2810 2780 subsys_interface_unregister(&cpufreq_interface); 2811 2781 remove_boost_sysfs_file(); 2782 + static_branch_disable_cpuslocked(&cpufreq_freq_invariance); 2812 2783 cpuhp_remove_state_nocalls_cpuslocked(hp_online); 2813 2784 2814 2785 write_lock_irqsave(&cpufreq_driver_lock, flags);
+73 -32
drivers/cpufreq/cpufreq_stats.c
··· 19 19 unsigned int state_num; 20 20 unsigned int last_index; 21 21 u64 *time_in_state; 22 - spinlock_t lock; 23 22 unsigned int *freq_table; 24 23 unsigned int *trans_table; 24 + 25 + /* Deferred reset */ 26 + unsigned int reset_pending; 27 + unsigned long long reset_time; 25 28 }; 26 29 27 - static void cpufreq_stats_update(struct cpufreq_stats *stats) 30 + static void cpufreq_stats_update(struct cpufreq_stats *stats, 31 + unsigned long long time) 28 32 { 29 33 unsigned long long cur_time = get_jiffies_64(); 30 34 31 - stats->time_in_state[stats->last_index] += cur_time - stats->last_time; 35 + stats->time_in_state[stats->last_index] += cur_time - time; 32 36 stats->last_time = cur_time; 33 37 } 34 38 35 - static void cpufreq_stats_clear_table(struct cpufreq_stats *stats) 39 + static void cpufreq_stats_reset_table(struct cpufreq_stats *stats) 36 40 { 37 41 unsigned int count = stats->max_state; 38 42 39 - spin_lock(&stats->lock); 40 43 memset(stats->time_in_state, 0, count * sizeof(u64)); 41 44 memset(stats->trans_table, 0, count * count * sizeof(int)); 42 45 stats->last_time = get_jiffies_64(); 43 46 stats->total_trans = 0; 44 - spin_unlock(&stats->lock); 47 + 48 + /* Adjust for the time elapsed since reset was requested */ 49 + WRITE_ONCE(stats->reset_pending, 0); 50 + /* 51 + * Prevent the reset_time read from being reordered before the 52 + * reset_pending accesses in cpufreq_stats_record_transition(). 
53 + */ 54 + smp_rmb(); 55 + cpufreq_stats_update(stats, READ_ONCE(stats->reset_time)); 45 56 } 46 57 47 58 static ssize_t show_total_trans(struct cpufreq_policy *policy, char *buf) 48 59 { 49 - return sprintf(buf, "%d\n", policy->stats->total_trans); 60 + struct cpufreq_stats *stats = policy->stats; 61 + 62 + if (READ_ONCE(stats->reset_pending)) 63 + return sprintf(buf, "%d\n", 0); 64 + else 65 + return sprintf(buf, "%u\n", stats->total_trans); 50 66 } 51 67 cpufreq_freq_attr_ro(total_trans); 52 68 53 69 static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf) 54 70 { 55 71 struct cpufreq_stats *stats = policy->stats; 72 + bool pending = READ_ONCE(stats->reset_pending); 73 + unsigned long long time; 56 74 ssize_t len = 0; 57 75 int i; 58 76 59 - if (policy->fast_switch_enabled) 60 - return 0; 61 - 62 - spin_lock(&stats->lock); 63 - cpufreq_stats_update(stats); 64 - spin_unlock(&stats->lock); 65 - 66 77 for (i = 0; i < stats->state_num; i++) { 78 + if (pending) { 79 + if (i == stats->last_index) { 80 + /* 81 + * Prevent the reset_time read from occurring 82 + * before the reset_pending read above. 83 + */ 84 + smp_rmb(); 85 + time = get_jiffies_64() - READ_ONCE(stats->reset_time); 86 + } else { 87 + time = 0; 88 + } 89 + } else { 90 + time = stats->time_in_state[i]; 91 + if (i == stats->last_index) 92 + time += get_jiffies_64() - stats->last_time; 93 + } 94 + 67 95 len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i], 68 - (unsigned long long) 69 - jiffies_64_to_clock_t(stats->time_in_state[i])); 96 + jiffies_64_to_clock_t(time)); 70 97 } 71 98 return len; 72 99 } 73 100 cpufreq_freq_attr_ro(time_in_state); 74 101 102 + /* We don't care what is written to the attribute */ 75 103 static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf, 76 104 size_t count) 77 105 { 78 - /* We don't care what is written to the attribute. 
*/ 79 - cpufreq_stats_clear_table(policy->stats); 106 + struct cpufreq_stats *stats = policy->stats; 107 + 108 + /* 109 + * Defer resetting of stats to cpufreq_stats_record_transition() to 110 + * avoid races. 111 + */ 112 + WRITE_ONCE(stats->reset_time, get_jiffies_64()); 113 + /* 114 + * The memory barrier below is to prevent the readers of reset_time from 115 + * seeing a stale or partially updated value. 116 + */ 117 + smp_wmb(); 118 + WRITE_ONCE(stats->reset_pending, 1); 119 + 80 120 return count; 81 121 } 82 122 cpufreq_freq_attr_wo(reset); ··· 124 84 static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf) 125 85 { 126 86 struct cpufreq_stats *stats = policy->stats; 87 + bool pending = READ_ONCE(stats->reset_pending); 127 88 ssize_t len = 0; 128 - int i, j; 129 - 130 - if (policy->fast_switch_enabled) 131 - return 0; 89 + int i, j, count; 132 90 133 91 len += scnprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 134 92 len += scnprintf(buf + len, PAGE_SIZE - len, " : "); ··· 151 113 for (j = 0; j < stats->state_num; j++) { 152 114 if (len >= PAGE_SIZE) 153 115 break; 154 - len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ", 155 - stats->trans_table[i*stats->max_state+j]); 116 + 117 + if (pending) 118 + count = 0; 119 + else 120 + count = stats->trans_table[i * stats->max_state + j]; 121 + 122 + len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ", count); 156 123 } 157 124 if (len >= PAGE_SIZE) 158 125 break; ··· 251 208 stats->state_num = i; 252 209 stats->last_time = get_jiffies_64(); 253 210 stats->last_index = freq_table_get_index(stats, policy->cur); 254 - spin_lock_init(&stats->lock); 255 211 256 212 policy->stats = stats; 257 213 ret = sysfs_create_group(&policy->kobj, &stats_attr_group); ··· 270 228 struct cpufreq_stats *stats = policy->stats; 271 229 int old_index, new_index; 272 230 273 - if (!stats) { 274 - pr_debug("%s: No stats found\n", __func__); 231 + if (unlikely(!stats)) 275 232 return; 276 - } 233 + 234 + if 
(unlikely(READ_ONCE(stats->reset_pending))) 235 + cpufreq_stats_reset_table(stats); 277 236 278 237 old_index = stats->last_index; 279 238 new_index = freq_table_get_index(stats, new_freq); 280 239 281 240 /* We can't do stats->time_in_state[-1]= .. */ 282 - if (old_index == -1 || new_index == -1 || old_index == new_index) 241 + if (unlikely(old_index == -1 || new_index == -1 || old_index == new_index)) 283 242 return; 284 243 285 - spin_lock(&stats->lock); 286 - cpufreq_stats_update(stats); 244 + cpufreq_stats_update(stats, stats->last_time); 287 245 288 246 stats->last_index = new_index; 289 247 stats->trans_table[old_index * stats->max_state + new_index]++; 290 248 stats->total_trans++; 291 - spin_unlock(&stats->lock); 292 249 }
+2 -8
drivers/cpufreq/imx6q-cpufreq.c
··· 48 48 }; 49 49 50 50 static struct device *cpu_dev; 51 - static bool free_opp; 52 51 static struct cpufreq_frequency_table *freq_table; 53 52 static unsigned int max_freq; 54 53 static unsigned int transition_latency; ··· 389 390 goto put_reg; 390 391 } 391 392 392 - /* Because we have added the OPPs here, we must free them */ 393 - free_opp = true; 394 - 395 393 if (of_machine_is_compatible("fsl,imx6ul") || 396 394 of_machine_is_compatible("fsl,imx6ull")) { 397 395 ret = imx6ul_opp_check_speed_grading(cpu_dev); ··· 503 507 free_freq_table: 504 508 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 505 509 out_free_opp: 506 - if (free_opp) 507 - dev_pm_opp_of_remove_table(cpu_dev); 510 + dev_pm_opp_of_remove_table(cpu_dev); 508 511 put_reg: 509 512 if (!IS_ERR(arm_reg)) 510 513 regulator_put(arm_reg); ··· 523 528 { 524 529 cpufreq_unregister_driver(&imx6q_cpufreq_driver); 525 530 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 526 - if (free_opp) 527 - dev_pm_opp_of_remove_table(cpu_dev); 531 + dev_pm_opp_of_remove_table(cpu_dev); 528 532 regulator_put(arm_reg); 529 533 if (!IS_ERR(pu_reg)) 530 534 regulator_put(pu_reg);
+90 -54
drivers/cpufreq/qcom-cpufreq-hw.c
··· 19 19 #define LUT_L_VAL GENMASK(7, 0) 20 20 #define LUT_CORE_COUNT GENMASK(18, 16) 21 21 #define LUT_VOLT GENMASK(11, 0) 22 - #define LUT_ROW_SIZE 32 23 22 #define CLK_HW_DIV 2 24 23 #define LUT_TURBO_IND 1 25 24 26 - /* Register offsets */ 27 - #define REG_ENABLE 0x0 28 - #define REG_FREQ_LUT 0x110 29 - #define REG_VOLT_LUT 0x114 30 - #define REG_PERF_STATE 0x920 25 + struct qcom_cpufreq_soc_data { 26 + u32 reg_enable; 27 + u32 reg_freq_lut; 28 + u32 reg_volt_lut; 29 + u32 reg_perf_state; 30 + u8 lut_row_size; 31 + }; 32 + 33 + struct qcom_cpufreq_data { 34 + void __iomem *base; 35 + const struct qcom_cpufreq_soc_data *soc_data; 36 + }; 31 37 32 38 static unsigned long cpu_hw_rate, xo_rate; 33 - static struct platform_device *global_pdev; 34 39 static bool icc_scaling_enabled; 35 40 36 41 static int qcom_cpufreq_set_bw(struct cpufreq_policy *policy, ··· 82 77 static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy, 83 78 unsigned int index) 84 79 { 85 - void __iomem *perf_state_reg = policy->driver_data; 80 + struct qcom_cpufreq_data *data = policy->driver_data; 81 + const struct qcom_cpufreq_soc_data *soc_data = data->soc_data; 86 82 unsigned long freq = policy->freq_table[index].frequency; 87 83 88 - writel_relaxed(index, perf_state_reg); 84 + writel_relaxed(index, data->base + soc_data->reg_perf_state); 89 85 90 86 if (icc_scaling_enabled) 91 87 qcom_cpufreq_set_bw(policy, freq); 92 88 93 - arch_set_freq_scale(policy->related_cpus, freq, 94 - policy->cpuinfo.max_freq); 95 89 return 0; 96 90 } 97 91 98 92 static unsigned int qcom_cpufreq_hw_get(unsigned int cpu) 99 93 { 100 - void __iomem *perf_state_reg; 94 + struct qcom_cpufreq_data *data; 95 + const struct qcom_cpufreq_soc_data *soc_data; 101 96 struct cpufreq_policy *policy; 102 97 unsigned int index; 103 98 ··· 105 100 if (!policy) 106 101 return 0; 107 102 108 - perf_state_reg = policy->driver_data; 103 + data = policy->driver_data; 104 + soc_data = data->soc_data; 109 105 110 - index = 
readl_relaxed(perf_state_reg); 106 + index = readl_relaxed(data->base + soc_data->reg_perf_state); 111 107 index = min(index, LUT_MAX_ENTRIES - 1); 112 108 113 109 return policy->freq_table[index].frequency; ··· 117 111 static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy, 118 112 unsigned int target_freq) 119 113 { 120 - void __iomem *perf_state_reg = policy->driver_data; 114 + struct qcom_cpufreq_data *data = policy->driver_data; 115 + const struct qcom_cpufreq_soc_data *soc_data = data->soc_data; 121 116 unsigned int index; 122 - unsigned long freq; 123 117 124 118 index = policy->cached_resolved_idx; 125 - writel_relaxed(index, perf_state_reg); 119 + writel_relaxed(index, data->base + soc_data->reg_perf_state); 126 120 127 - freq = policy->freq_table[index].frequency; 128 - arch_set_freq_scale(policy->related_cpus, freq, 129 - policy->cpuinfo.max_freq); 130 - 131 - return freq; 121 + return policy->freq_table[index].frequency; 132 122 } 133 123 134 124 static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev, 135 - struct cpufreq_policy *policy, 136 - void __iomem *base) 125 + struct cpufreq_policy *policy) 137 126 { 138 127 u32 data, src, lval, i, core_count, prev_freq = 0, freq; 139 128 u32 volt; ··· 136 135 struct dev_pm_opp *opp; 137 136 unsigned long rate; 138 137 int ret; 138 + struct qcom_cpufreq_data *drv_data = policy->driver_data; 139 + const struct qcom_cpufreq_soc_data *soc_data = drv_data->soc_data; 139 140 140 141 table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL); 141 142 if (!table) ··· 164 161 } 165 162 166 163 for (i = 0; i < LUT_MAX_ENTRIES; i++) { 167 - data = readl_relaxed(base + REG_FREQ_LUT + 168 - i * LUT_ROW_SIZE); 164 + data = readl_relaxed(drv_data->base + soc_data->reg_freq_lut + 165 + i * soc_data->lut_row_size); 169 166 src = FIELD_GET(LUT_SRC, data); 170 167 lval = FIELD_GET(LUT_L_VAL, data); 171 168 core_count = FIELD_GET(LUT_CORE_COUNT, data); 172 169 173 - data = readl_relaxed(base + 
REG_VOLT_LUT + 174 - i * LUT_ROW_SIZE); 170 + data = readl_relaxed(drv_data->base + soc_data->reg_volt_lut + 171 + i * soc_data->lut_row_size); 175 172 volt = FIELD_GET(LUT_VOLT, data) * 1000; 176 173 177 174 if (src) ··· 180 177 freq = cpu_hw_rate / 1000; 181 178 182 179 if (freq != prev_freq && core_count != LUT_TURBO_IND) { 183 - table[i].frequency = freq; 184 - qcom_cpufreq_update_opp(cpu_dev, freq, volt); 185 - dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i, 180 + if (!qcom_cpufreq_update_opp(cpu_dev, freq, volt)) { 181 + table[i].frequency = freq; 182 + dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i, 186 183 freq, core_count); 184 + } else { 185 + dev_warn(cpu_dev, "failed to update OPP for freq=%d\n", freq); 186 + table[i].frequency = CPUFREQ_ENTRY_INVALID; 187 + } 188 + 187 189 } else if (core_count == LUT_TURBO_IND) { 188 190 table[i].frequency = CPUFREQ_ENTRY_INVALID; 189 191 } ··· 205 197 * as the boost frequency 206 198 */ 207 199 if (prev->frequency == CPUFREQ_ENTRY_INVALID) { 208 - prev->frequency = prev_freq; 209 - prev->flags = CPUFREQ_BOOST_FREQ; 210 - qcom_cpufreq_update_opp(cpu_dev, prev_freq, volt); 200 + if (!qcom_cpufreq_update_opp(cpu_dev, prev_freq, volt)) { 201 + prev->frequency = prev_freq; 202 + prev->flags = CPUFREQ_BOOST_FREQ; 203 + } else { 204 + dev_warn(cpu_dev, "failed to update OPP for freq=%d\n", 205 + freq); 206 + } 211 207 } 212 208 213 209 break; ··· 250 238 } 251 239 } 252 240 241 + static const struct qcom_cpufreq_soc_data qcom_soc_data = { 242 + .reg_enable = 0x0, 243 + .reg_freq_lut = 0x110, 244 + .reg_volt_lut = 0x114, 245 + .reg_perf_state = 0x920, 246 + .lut_row_size = 32, 247 + }; 248 + 249 + static const struct qcom_cpufreq_soc_data epss_soc_data = { 250 + .reg_enable = 0x0, 251 + .reg_freq_lut = 0x100, 252 + .reg_volt_lut = 0x200, 253 + .reg_perf_state = 0x320, 254 + .lut_row_size = 4, 255 + }; 256 + 257 + static const struct of_device_id qcom_cpufreq_hw_match[] = { 258 + { .compatible = 
"qcom,cpufreq-hw", .data = &qcom_soc_data }, 259 + { .compatible = "qcom,cpufreq-epss", .data = &epss_soc_data }, 260 + {} 261 + }; 262 + MODULE_DEVICE_TABLE(of, qcom_cpufreq_hw_match); 263 + 253 264 static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy) 254 265 { 255 - struct device *dev = &global_pdev->dev; 266 + struct platform_device *pdev = cpufreq_get_driver_data(); 267 + struct device *dev = &pdev->dev; 256 268 struct of_phandle_args args; 257 269 struct device_node *cpu_np; 258 270 struct device *cpu_dev; 259 - struct resource *res; 260 271 void __iomem *base; 272 + struct qcom_cpufreq_data *data; 261 273 int ret, index; 262 274 263 275 cpu_dev = get_cpu_device(policy->cpu); ··· 303 267 304 268 index = args.args[0]; 305 269 306 - res = platform_get_resource(global_pdev, IORESOURCE_MEM, index); 307 - if (!res) 308 - return -ENODEV; 270 + base = devm_platform_ioremap_resource(pdev, index); 271 + if (IS_ERR(base)) 272 + return PTR_ERR(base); 309 273 310 - base = devm_ioremap(dev, res->start, resource_size(res)); 311 - if (!base) 312 - return -ENOMEM; 274 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 275 + if (!data) { 276 + ret = -ENOMEM; 277 + goto error; 278 + } 279 + 280 + data->soc_data = of_device_get_match_data(&pdev->dev); 281 + data->base = base; 313 282 314 283 /* HW should be in enabled state to proceed */ 315 - if (!(readl_relaxed(base + REG_ENABLE) & 0x1)) { 284 + if (!(readl_relaxed(base + data->soc_data->reg_enable) & 0x1)) { 316 285 dev_err(dev, "Domain-%d cpufreq hardware not enabled\n", index); 317 286 ret = -ENODEV; 318 287 goto error; ··· 330 289 goto error; 331 290 } 332 291 333 - policy->driver_data = base + REG_PERF_STATE; 292 + policy->driver_data = data; 334 293 335 - ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy, base); 294 + ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy); 336 295 if (ret) { 337 296 dev_err(dev, "Domain-%d failed to read LUT\n", index); 338 297 goto error; ··· 356 315 static int 
qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy) 357 316 { 358 317 struct device *cpu_dev = get_cpu_device(policy->cpu); 359 - void __iomem *base = policy->driver_data - REG_PERF_STATE; 318 + struct qcom_cpufreq_data *data = policy->driver_data; 319 + struct platform_device *pdev = cpufreq_get_driver_data(); 360 320 361 321 dev_pm_opp_remove_all_dynamic(cpu_dev); 362 322 dev_pm_opp_of_cpumask_remove_table(policy->related_cpus); 363 323 kfree(policy->freq_table); 364 - devm_iounmap(&global_pdev->dev, base); 324 + devm_iounmap(&pdev->dev, data->base); 365 325 366 326 return 0; 367 327 } ··· 407 365 cpu_hw_rate = clk_get_rate(clk) / CLK_HW_DIV; 408 366 clk_put(clk); 409 367 410 - global_pdev = pdev; 368 + cpufreq_qcom_hw_driver.driver_data = pdev; 411 369 412 370 /* Check for optional interconnect paths on CPU0 */ 413 371 cpu_dev = get_cpu_device(0); ··· 431 389 { 432 390 return cpufreq_unregister_driver(&cpufreq_qcom_hw_driver); 433 391 } 434 - 435 - static const struct of_device_id qcom_cpufreq_hw_match[] = { 436 - { .compatible = "qcom,cpufreq-hw" }, 437 - {} 438 - }; 439 - MODULE_DEVICE_TABLE(of, qcom_cpufreq_hw_match); 440 392 441 393 static struct platform_driver qcom_cpufreq_hw_driver = { 442 394 .probe = qcom_cpufreq_hw_driver_probe,
+11 -20
drivers/cpufreq/s5pv210-cpufreq.c
···
static int s5pv210_cpufreq_probe(struct platform_device *pdev)
{
+ 	struct device *dev = &pdev->dev;
	struct device_node *np;
	int id, result = 0;
···
	 * cpufreq-dt driver.
	 */
	arm_regulator = regulator_get(NULL, "vddarm");
- 	if (IS_ERR(arm_regulator)) {
- 		if (PTR_ERR(arm_regulator) == -EPROBE_DEFER)
- 			pr_debug("vddarm regulator not ready, defer\n");
- 		else
- 			pr_err("failed to get regulator vddarm\n");
- 		return PTR_ERR(arm_regulator);
- 	}
+ 	if (IS_ERR(arm_regulator))
+ 		return dev_err_probe(dev, PTR_ERR(arm_regulator),
+ 				     "failed to get regulator vddarm\n");

	int_regulator = regulator_get(NULL, "vddint");
	if (IS_ERR(int_regulator)) {
- 		if (PTR_ERR(int_regulator) == -EPROBE_DEFER)
- 			pr_debug("vddint regulator not ready, defer\n");
- 		else
- 			pr_err("failed to get regulator vddint\n");
- 		result = PTR_ERR(int_regulator);
+ 		result = dev_err_probe(dev, PTR_ERR(int_regulator),
+ 				       "failed to get regulator vddint\n");
		goto err_int_regulator;
	}

	np = of_find_compatible_node(NULL, NULL, "samsung,s5pv210-clock");
	if (!np) {
- 		pr_err("%s: failed to find clock controller DT node\n",
- 		       __func__);
+ 		dev_err(dev, "failed to find clock controller DT node\n");
		result = -ENODEV;
		goto err_clock;
	}
···
	clk_base = of_iomap(np, 0);
	of_node_put(np);
	if (!clk_base) {
- 		pr_err("%s: failed to map clock registers\n", __func__);
+ 		dev_err(dev, "failed to map clock registers\n");
		result = -EFAULT;
		goto err_clock;
	}
···
	for_each_compatible_node(np, NULL, "samsung,s5pv210-dmc") {
		id = of_alias_get_id(np, "dmc");
		if (id < 0 || id >= ARRAY_SIZE(dmc_base)) {
- 			pr_err("%s: failed to get alias of dmc node '%pOFn'\n",
- 			       __func__, np);
+ 			dev_err(dev, "failed to get alias of dmc node '%pOFn'\n", np);
			of_node_put(np);
			result = id;
			goto err_clk_base;
···
		dmc_base[id] = of_iomap(np, 0);
		if (!dmc_base[id]) {
- 			pr_err("%s: failed to map dmc%d registers\n",
- 			       __func__, id);
+ 			dev_err(dev, "failed to map dmc%d registers\n", id);
			of_node_put(np);
			result = -EFAULT;
			goto err_dmc;
···
	for (id = 0; id < ARRAY_SIZE(dmc_base); ++id) {
		if (!dmc_base[id]) {
- 			pr_err("%s: failed to find dmc%d node\n", __func__, id);
+ 			dev_err(dev, "failed to find dmc%d node\n", id);
			result = -ENODEV;
			goto err_dmc;
		}
+2 -10
drivers/cpufreq/scmi-cpufreq.c
···
static int
scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
{
- 	int ret;
	struct scmi_data *priv = policy->driver_data;
	struct scmi_perf_ops *perf_ops = handle->perf_ops;
	u64 freq = policy->freq_table[index].frequency;

- 	ret = perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
- 	if (!ret)
- 		arch_set_freq_scale(policy->related_cpus, freq,
- 				    policy->cpuinfo.max_freq);
- 	return ret;
+ 	return perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
}

static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
···
	struct scmi_perf_ops *perf_ops = handle->perf_ops;

	if (!perf_ops->freq_set(handle, priv->domain_id,
- 				target_freq * 1000, true)) {
- 		arch_set_freq_scale(policy->related_cpus, target_freq,
- 				    policy->cpuinfo.max_freq);
+ 				target_freq * 1000, true))
		return target_freq;
- 	}

	return 0;
}
+1 -5
drivers/cpufreq/scpi-cpufreq.c
···
static int
scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
{
- 	unsigned long freq = policy->freq_table[index].frequency;
+ 	u64 rate = policy->freq_table[index].frequency * 1000;
	struct scpi_data *priv = policy->driver_data;
- 	u64 rate = freq * 1000;
	int ret;

	ret = clk_set_rate(priv->clk, rate);
···
	if (clk_get_rate(priv->clk) != rate)
		return -EIO;
-
- 	arch_set_freq_scale(policy->related_cpus, freq,
- 			    policy->cpuinfo.max_freq);

	return 0;
}
+4 -2
drivers/cpufreq/sti-cpufreq.c
···
static const struct reg_field *sti_cpufreq_match(void)
{
	if (of_machine_is_compatible("st,stih407") ||
- 	    of_machine_is_compatible("st,stih410"))
+ 	    of_machine_is_compatible("st,stih410") ||
+ 	    of_machine_is_compatible("st,stih418"))
		return sti_stih407_dvfs_regfields;

	return NULL;
···
	int ret;

	if ((!of_machine_is_compatible("st,stih407")) &&
- 	    (!of_machine_is_compatible("st,stih410")))
+ 	    (!of_machine_is_compatible("st,stih410")) &&
+ 	    (!of_machine_is_compatible("st,stih418")))
		return -ENODEV;

	ddata.cpu = get_cpu_device(0);
+30
drivers/cpufreq/tegra186-cpufreq.c
···
#define EDVD_CORE_VOLT_FREQ(core) (0x20 + (core) * 0x4)
#define EDVD_CORE_VOLT_FREQ_F_SHIFT 0
+ #define EDVD_CORE_VOLT_FREQ_F_MASK 0xffff
#define EDVD_CORE_VOLT_FREQ_V_SHIFT 16

struct tegra186_cpufreq_cluster_info {
···
	return 0;
}

+ static unsigned int tegra186_cpufreq_get(unsigned int cpu)
+ {
+ 	struct cpufreq_frequency_table *tbl;
+ 	struct cpufreq_policy *policy;
+ 	void __iomem *edvd_reg;
+ 	unsigned int i, freq = 0;
+ 	u32 ndiv;
+
+ 	policy = cpufreq_cpu_get(cpu);
+ 	if (!policy)
+ 		return 0;
+
+ 	tbl = policy->freq_table;
+ 	edvd_reg = policy->driver_data;
+ 	ndiv = readl(edvd_reg) & EDVD_CORE_VOLT_FREQ_F_MASK;
+
+ 	for (i = 0; tbl[i].frequency != CPUFREQ_TABLE_END; i++) {
+ 		if ((tbl[i].driver_data & EDVD_CORE_VOLT_FREQ_F_MASK) == ndiv) {
+ 			freq = tbl[i].frequency;
+ 			break;
+ 		}
+ 	}
+
+ 	cpufreq_cpu_put(policy);
+
+ 	return freq;
+ }
+
static struct cpufreq_driver tegra186_cpufreq_driver = {
	.name = "tegra186",
	.flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
		 CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+ 	.get = tegra186_cpufreq_get,
	.verify = cpufreq_generic_frequency_table_verify,
	.target_index = tegra186_cpufreq_set_target,
	.init = tegra186_cpufreq_init,
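The new `.get` callback above recovers the current frequency by masking the NDIV field out of the EDVD register value and scanning the frequency table for an entry whose `driver_data` carries the same NDIV. A minimal userspace sketch of that lookup, assuming an illustrative `freq_entry` type and `lookup_freq()` helper (neither is the kernel API):

```c
#include <stdint.h>
#include <stddef.h>

#define F_MASK		0xffffu		/* low 16 bits hold the NDIV value */
#define TABLE_END	(~0u)		/* table terminator, like CPUFREQ_TABLE_END */

struct freq_entry {
	uint32_t driver_data;		/* NDIV programmed into the register */
	unsigned int frequency;		/* kHz */
};

/* Demo table: two hypothetical operating points. */
static const struct freq_entry demo_table[] = {
	{ 0x28, 1000000 },		/* NDIV 0x28 -> 1.0 GHz */
	{ 0x50, 2000000 },		/* NDIV 0x50 -> 2.0 GHz */
	{ 0, TABLE_END },
};

/* Scan the table for the entry matching the NDIV field of the register. */
static unsigned int lookup_freq(const struct freq_entry *tbl, uint32_t reg_val)
{
	uint32_t ndiv = reg_val & F_MASK;
	size_t i;

	for (i = 0; tbl[i].frequency != TABLE_END; i++)
		if ((tbl[i].driver_data & F_MASK) == ndiv)
			return tbl[i].frequency;

	return 0;	/* unknown frequency */
}
```

Note that the voltage bits above bit 16 are masked off before the comparison, so the same table entry matches regardless of the programmed voltage.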
+2 -10
drivers/cpufreq/vexpress-spc-cpufreq.c
···
{
	u32 cpu = policy->cpu, cur_cluster, new_cluster, actual_cluster;
	unsigned int freqs_new;
- 	int ret;

	cur_cluster = cpu_to_cluster(cpu);
	new_cluster = actual_cluster = per_cpu(physical_cluster, cpu);
···
		new_cluster = A15_CLUSTER;
	}

- 	ret = ve_spc_cpufreq_set_rate(cpu, actual_cluster, new_cluster,
- 				      freqs_new);
-
- 	if (!ret) {
- 		arch_set_freq_scale(policy->related_cpus, freqs_new,
- 				    policy->cpuinfo.max_freq);
- 	}
-
- 	return ret;
+ 	return ve_spc_cpufreq_set_rate(cpu, actual_cluster, new_cluster,
+ 				       freqs_new);
}

static inline u32 get_table_count(struct cpufreq_frequency_table *table)
+31 -28
drivers/cpuidle/cpuidle-psci-domain.c
···
	kfree(states);
}

- static int psci_pd_init(struct device_node *np)
+ static int psci_pd_init(struct device_node *np, bool use_osi)
{
	struct generic_pm_domain *pd;
	struct psci_pd_provider *pd_provider;
···
	pd->free_states = psci_pd_free_states;
	pd->name = kbasename(pd->name);
- 	pd->power_off = psci_pd_power_off;
	pd->states = states;
	pd->state_count = state_count;
	pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
+
+ 	/* Allow power off when OSI has been successfully enabled. */
+ 	if (use_osi)
+ 		pd->power_off = psci_pd_power_off;
+ 	else
+ 		pd->flags |= GENPD_FLAG_ALWAYS_ON;

	/* Use governor for CPU PM domains if it has some states to manage. */
	pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
···
	}
}

- static int psci_pd_init_topology(struct device_node *np, bool add)
+ static int psci_pd_init_topology(struct device_node *np)
{
	struct device_node *node;
	struct of_phandle_args child, parent;
···
		child.np = node;
		child.args_count = 0;
-
- 		ret = add ? of_genpd_add_subdomain(&parent, &child) :
- 			    of_genpd_remove_subdomain(&parent, &child);
+ 		ret = of_genpd_add_subdomain(&parent, &child);
		of_node_put(parent.np);
		if (ret) {
			of_node_put(node);
···
	return 0;
}

- static int psci_pd_add_topology(struct device_node *np)
+ static bool psci_pd_try_set_osi_mode(void)
{
- 	return psci_pd_init_topology(np, true);
- }
+ 	int ret;

- static void psci_pd_remove_topology(struct device_node *np)
- {
- 	psci_pd_init_topology(np, false);
+ 	if (!psci_has_osi_support())
+ 		return false;
+
+ 	ret = psci_set_osi_mode(true);
+ 	if (ret) {
+ 		pr_warn("failed to enable OSI mode: %d\n", ret);
+ 		return false;
+ 	}
+
+ 	return true;
}

static void psci_cpuidle_domain_sync_state(struct device *dev)
···
{
	struct device_node *np = pdev->dev.of_node;
	struct device_node *node;
+ 	bool use_osi;
	int ret = 0, pd_count = 0;

	if (!np)
		return -ENODEV;

- 	/* Currently limit the hierarchical topology to be used in OSI mode. */
- 	if (!psci_has_osi_support())
- 		return 0;
+ 	/* If OSI mode is supported, let's try to enable it. */
+ 	use_osi = psci_pd_try_set_osi_mode();

	/*
	 * Parse child nodes for the "#power-domain-cells" property and
···
		if (!of_find_property(node, "#power-domain-cells", NULL))
			continue;

- 		ret = psci_pd_init(node);
+ 		ret = psci_pd_init(node, use_osi);
		if (ret)
			goto put_node;
···
	/* Bail out if not using the hierarchical CPU topology. */
	if (!pd_count)
- 		return 0;
+ 		goto no_pd;

	/* Link genpd masters/subdomains to model the CPU topology. */
- 	ret = psci_pd_add_topology(np);
+ 	ret = psci_pd_init_topology(np);
	if (ret)
		goto remove_pd;
-
- 	/* Try to enable OSI mode. */
- 	ret = psci_set_osi_mode();
- 	if (ret) {
- 		pr_warn("failed to enable OSI mode: %d\n", ret);
- 		psci_pd_remove_topology(np);
- 		goto remove_pd;
- 	}

	pr_info("Initialized CPU PM domain topology\n");
	return 0;
···
put_node:
	of_node_put(node);
remove_pd:
- 	if (pd_count)
- 		psci_pd_remove();
+ 	psci_pd_remove();
	pr_err("failed to create CPU PM domains ret=%d\n", ret);
+ no_pd:
+ 	if (use_osi)
+ 		psci_set_osi_mode(false);
	return ret;
}
+20 -14
drivers/cpuidle/cpuidle-tegra.c
···
static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
				     int index, unsigned int cpu)
{
- 	int ret;
+ 	int err;

	/*
	 * CC6 state is the "CPU cluster power-off" state. In order to
···
	 * CPU cores, GIC and L2 cache).
	 */
	if (index == TEGRA_CC6) {
- 		ret = tegra_cpuidle_coupled_barrier(dev);
- 		if (ret)
- 			return ret;
+ 		err = tegra_cpuidle_coupled_barrier(dev);
+ 		if (err)
+ 			return err;
	}

	local_fiq_disable();
···
	switch (index) {
	case TEGRA_C7:
- 		ret = tegra_cpuidle_c7_enter();
+ 		err = tegra_cpuidle_c7_enter();
		break;

	case TEGRA_CC6:
- 		ret = tegra_cpuidle_cc6_enter(cpu);
+ 		err = tegra_cpuidle_cc6_enter(cpu);
		break;

	default:
- 		ret = -EINVAL;
+ 		err = -EINVAL;
		break;
	}
···
	tegra_pm_clear_cpu_in_lp2();
	local_fiq_enable();

- 	return ret;
+ 	return err ?: index;
}

static int tegra_cpuidle_adjust_state_index(int index, unsigned int cpu)
···
				int index)
{
	unsigned int cpu = cpu_logical_map(dev->cpu);
- 	int err;
+ 	int ret;

	index = tegra_cpuidle_adjust_state_index(index, cpu);
	if (dev->states_usage[index].disable)
		return -1;

	if (index == TEGRA_C1)
- 		err = arm_cpuidle_simple_enter(dev, drv, index);
+ 		ret = arm_cpuidle_simple_enter(dev, drv, index);
	else
- 		err = tegra_cpuidle_state_enter(dev, index, cpu);
+ 		ret = tegra_cpuidle_state_enter(dev, index, cpu);

- 	if (err && (err != -EINTR || index != TEGRA_CC6))
- 		pr_err_once("failed to enter state %d err: %d\n", index, err);
+ 	if (ret < 0) {
+ 		if (ret != -EINTR || index != TEGRA_CC6)
+ 			pr_err_once("failed to enter state %d err: %d\n",
+ 				    index, ret);
+ 		index = -1;
+ 	} else {
+ 		index = ret;
+ 	}

- 	return err ? -1 : index;
+ 	return index;
}

static int tegra114_enter_s2idle(struct cpuidle_device *dev,
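The `return err ?: index;` change above relies on the GNU C conditional operator with an omitted middle operand: a negative error code propagates unchanged, while success (0) falls through to the entered state index. A small standalone sketch of the idiom (the function name is illustrative, not the kernel's):

```c
/* Collapse an (error, state-index) pair into one return value:
 * nonzero err is returned as-is, otherwise the index is returned.
 * "a ?: b" is a GCC/Clang extension equivalent to "a ? a : b"
 * that evaluates "a" only once. */
static int enter_state_result(int err, int index)
{
	return err ?: index;
}
```

This keeps the success path returning the state index the cpuidle core expects, without a temporary variable or a second evaluation of the error expression.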
+1
drivers/cpuidle/cpuidle.c
···
		}
	} else {
		dev->last_residency_ns = 0;
+ 		dev->states_usage[index].rejected++;
	}

	return entered_state;
+3
drivers/cpuidle/sysfs.c
···
define_show_state_time_function(target_residency)
define_show_state_function(power_usage)
define_show_state_ull_function(usage)
+ define_show_state_ull_function(rejected)
define_show_state_str_function(name)
define_show_state_str_function(desc)
define_show_state_ull_function(above)
···
define_one_state_ro(residency, show_state_target_residency);
define_one_state_ro(power, show_state_power_usage);
define_one_state_ro(usage, show_state_usage);
+ define_one_state_ro(rejected, show_state_rejected);
define_one_state_ro(time, show_state_time);
define_one_state_rw(disable, show_state_disable, store_state_disable);
define_one_state_ro(above, show_state_above);
···
	&attr_residency.attr,
	&attr_power.attr,
	&attr_usage.attr,
+ 	&attr_rejected.attr,
	&attr_time.attr,
	&attr_disable.attr,
	&attr_above.attr,
+8 -6
drivers/devfreq/devfreq-event.c
···
 * devfreq_event_get_edev_by_phandle() - Get the devfreq-event dev from
 *					 devicetree.
 * @dev : the pointer to the given device
+ * @phandle_name: name of property holding a phandle value
 * @index : the index into list of devfreq-event device
 *
 * Note that this function return the pointer of devfreq-event device.
 */
struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(struct device *dev,
- 							    int index)
+ 							    const char *phandle_name,
+ 							    int index)
{
	struct device_node *node;
	struct devfreq_event_dev *edev;

- 	if (!dev->of_node)
+ 	if (!dev->of_node || !phandle_name)
		return ERR_PTR(-EINVAL);

- 	node = of_parse_phandle(dev->of_node, "devfreq-events", index);
+ 	node = of_parse_phandle(dev->of_node, phandle_name, index);
	if (!node)
		return ERR_PTR(-ENODEV);
···
/**
 * devfreq_event_get_edev_count() - Get the count of devfreq-event dev
 * @dev : the pointer to the given device
+ * @phandle_name: name of property holding a phandle value
 *
 * Note that this function return the count of devfreq-event devices.
 */
- int devfreq_event_get_edev_count(struct device *dev)
+ int devfreq_event_get_edev_count(struct device *dev, const char *phandle_name)
{
	int count;

- 	if (!dev->of_node) {
+ 	if (!dev->of_node || !phandle_name) {
		dev_err(dev, "device does not have a device node entry\n");
		return -EINVAL;
	}

- 	count = of_property_count_elems_of_size(dev->of_node, "devfreq-events",
+ 	count = of_property_count_elems_of_size(dev->of_node, phandle_name,
						sizeof(u32));
	if (count < 0) {
		dev_err(dev,
+43 -16
drivers/devfreq/devfreq.c
···
#ifdef CONFIG_OF
/*
- * devfreq_get_devfreq_by_phandle - Get the devfreq device from devicetree
- * @dev - instance to the given device
- * @index - index into list of devfreq
+ * devfreq_get_devfreq_by_node - Get the devfreq device from devicetree
+ * @node - pointer to device_node
 *
 * return the instance of devfreq device
 */
- struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
+ struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node)
{
- 	struct device_node *node;
	struct devfreq *devfreq;

- 	if (!dev)
- 		return ERR_PTR(-EINVAL);
-
- 	if (!dev->of_node)
- 		return ERR_PTR(-EINVAL);
-
- 	node = of_parse_phandle(dev->of_node, "devfreq", index);
	if (!node)
- 		return ERR_PTR(-ENODEV);
+ 		return ERR_PTR(-EINVAL);

	mutex_lock(&devfreq_list_lock);
	list_for_each_entry(devfreq, &devfreq_list, node) {
		if (devfreq->dev.parent
		    && devfreq->dev.parent->of_node == node) {
			mutex_unlock(&devfreq_list_lock);
- 			of_node_put(node);
			return devfreq;
		}
	}
	mutex_unlock(&devfreq_list_lock);
+
+ 	return ERR_PTR(-ENODEV);
+ }
+
+ /*
+ * devfreq_get_devfreq_by_phandle - Get the devfreq device from devicetree
+ * @dev - instance to the given device
+ * @phandle_name - name of property holding a phandle value
+ * @index - index into list of devfreq
+ *
+ * return the instance of devfreq device
+ */
+ struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
+ 					const char *phandle_name, int index)
+ {
+ 	struct device_node *node;
+ 	struct devfreq *devfreq;
+
+ 	if (!dev || !phandle_name)
+ 		return ERR_PTR(-EINVAL);
+
+ 	if (!dev->of_node)
+ 		return ERR_PTR(-EINVAL);
+
+ 	node = of_parse_phandle(dev->of_node, phandle_name, index);
+ 	if (!node)
+ 		return ERR_PTR(-ENODEV);
+
+ 	devfreq = devfreq_get_devfreq_by_node(node);
	of_node_put(node);

- 	return ERR_PTR(-EPROBE_DEFER);
+ 	return devfreq;
}
+
#else
- struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
+ struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node)
+ {
+ 	return ERR_PTR(-ENODEV);
+ }
+
+ struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
+ 					const char *phandle_name, int index)
{
	return ERR_PTR(-ENODEV);
}
#endif /* CONFIG_OF */
+ EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_node);
EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_phandle);

/**
+4 -3
drivers/devfreq/exynos-bus.c
···
	 * Get the devfreq-event devices to get the current utilization of
	 * buses. This raw data will be used in devfreq ondemand governor.
	 */
- 	count = devfreq_event_get_edev_count(dev);
+ 	count = devfreq_event_get_edev_count(dev, "devfreq-events");
	if (count < 0) {
		dev_err(dev, "failed to get the count of devfreq-event dev\n");
		ret = count;
···
	}

	for (i = 0; i < count; i++) {
- 		bus->edev[i] = devfreq_event_get_edev_by_phandle(dev, i);
+ 		bus->edev[i] = devfreq_event_get_edev_by_phandle(dev,
+ 						"devfreq-events", i);
		if (IS_ERR(bus->edev[i])) {
			ret = -EPROBE_DEFER;
			goto err_regulator;
···
	profile->exit = exynos_bus_passive_exit;

	/* Get the instance of parent devfreq device */
- 	parent_devfreq = devfreq_get_devfreq_by_phandle(dev, 0);
+ 	parent_devfreq = devfreq_get_devfreq_by_phandle(dev, "devfreq", 0);
	if (IS_ERR(parent_devfreq))
		return -EPROBE_DEFER;
+1 -1
drivers/devfreq/rk3399_dmc.c
···
		return PTR_ERR(data->dmc_clk);
	}

- 	data->edev = devfreq_event_get_edev_by_phandle(dev, 0);
+ 	data->edev = devfreq_event_get_edev_by_phandle(dev, "devfreq-events", 0);
	if (IS_ERR(data->edev))
		return -EPROBE_DEFER;
+5 -3
drivers/devfreq/tegra30-devfreq.c
···
		return err;
	}

- 	reset_control_assert(tegra->reset);
-
	err = clk_prepare_enable(tegra->clock);
	if (err) {
		dev_err(&pdev->dev,
···
		return err;
	}

- 	reset_control_deassert(tegra->reset);
+ 	err = reset_control_reset(tegra->reset);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to reset hardware: %d\n", err);
+ 		goto disable_clk;
+ 	}

	rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
	if (rate < 0) {
+7 -5
drivers/firmware/psci/psci.c
···
	return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0);
}

- int psci_set_osi_mode(void)
+ int psci_set_osi_mode(bool enable)
{
+ 	unsigned long suspend_mode;
	int err;

- 	err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
- 			     PSCI_1_0_SUSPEND_MODE_OSI, 0, 0);
+ 	suspend_mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI :
+ 				PSCI_1_0_SUSPEND_MODE_PC;
+
+ 	err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE, suspend_mode, 0, 0);
	return psci_to_linux_errno(err);
}
···
		pr_info("OSI mode supported.\n");

		/* Default to PC mode. */
- 		invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
- 			       PSCI_1_0_SUSPEND_MODE_PC, 0, 0);
+ 		psci_set_osi_mode(false);
	}

	return 0;
+4 -2
drivers/memory/samsung/exynos5422-dmc.c
···
	int counters_size;
	int ret, i;

- 	dmc->num_counters = devfreq_event_get_edev_count(dmc->dev);
+ 	dmc->num_counters = devfreq_event_get_edev_count(dmc->dev,
+ 						"devfreq-events");
	if (dmc->num_counters < 0) {
		dev_err(dmc->dev, "could not get devfreq-event counters\n");
		return dmc->num_counters;
···
	for (i = 0; i < dmc->num_counters; i++) {
		dmc->counter[i] =
- 			devfreq_event_get_edev_by_phandle(dmc->dev, i);
+ 			devfreq_event_get_edev_by_phandle(dmc->dev,
+ 						"devfreq-events", i);
		if (IS_ERR_OR_NULL(dmc->counter[i]))
			return -EPROBE_DEFER;
	}
+121 -110
drivers/opp/core.c
···
	/*
	 * Enable the regulator after setting its voltages, otherwise it breaks
	 * some boot-enabled regulators.
	 */
- 	if (unlikely(!opp_table->regulator_enabled)) {
+ 	if (unlikely(!opp_table->enabled)) {
		ret = regulator_enable(reg);
		if (ret < 0)
			dev_warn(dev, "Failed to enable regulator: %d", ret);
- 		else
- 			opp_table->regulator_enabled = true;
	}

	return 0;
···
	return opp_table->set_opp(data);
}

+ static int _set_required_opp(struct device *dev, struct device *pd_dev,
+ 			     struct dev_pm_opp *opp, int i)
+ {
+ 	unsigned int pstate = likely(opp) ? opp->required_opps[i]->pstate : 0;
+ 	int ret;
+
+ 	if (!pd_dev)
+ 		return 0;
+
+ 	ret = dev_pm_genpd_set_performance_state(pd_dev, pstate);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to set performance rate of %s: %d (%d)\n",
+ 			dev_name(pd_dev), pstate, ret);
+ 	}
+
+ 	return ret;
+ }
+
/* This is only called for PM domain for now */
static int _set_required_opps(struct device *dev,
			      struct opp_table *opp_table,
- 			      struct dev_pm_opp *opp)
+ 			      struct dev_pm_opp *opp, bool up)
{
	struct opp_table **required_opp_tables = opp_table->required_opp_tables;
	struct device **genpd_virt_devs = opp_table->genpd_virt_devs;
- 	unsigned int pstate;
	int i, ret = 0;

	if (!required_opp_tables)
		return 0;

	/* Single genpd case */
- 	if (!genpd_virt_devs) {
- 		pstate = likely(opp) ? opp->required_opps[0]->pstate : 0;
- 		ret = dev_pm_genpd_set_performance_state(dev, pstate);
- 		if (ret) {
- 			dev_err(dev, "Failed to set performance state of %s: %d (%d)\n",
- 				dev_name(dev), pstate, ret);
- 		}
- 		return ret;
- 	}
+ 	if (!genpd_virt_devs)
+ 		return _set_required_opp(dev, dev, opp, 0);

	/* Multiple genpd case */
···
	 */
	mutex_lock(&opp_table->genpd_virt_dev_lock);

- 	for (i = 0; i < opp_table->required_opp_count; i++) {
- 		pstate = likely(opp) ? opp->required_opps[i]->pstate : 0;
-
- 		if (!genpd_virt_devs[i])
- 			continue;
-
- 		ret = dev_pm_genpd_set_performance_state(genpd_virt_devs[i], pstate);
- 		if (ret) {
- 			dev_err(dev, "Failed to set performance rate of %s: %d (%d)\n",
- 				dev_name(genpd_virt_devs[i]), pstate, ret);
- 			break;
+ 	/* Scaling up? Set required OPPs in normal order, else reverse */
+ 	if (up) {
+ 		for (i = 0; i < opp_table->required_opp_count; i++) {
+ 			ret = _set_required_opp(dev, genpd_virt_devs[i], opp, i);
+ 			if (ret)
+ 				break;
+ 		}
+ 	} else {
+ 		for (i = opp_table->required_opp_count - 1; i >= 0; i--) {
+ 			ret = _set_required_opp(dev, genpd_virt_devs[i], opp, i);
+ 			if (ret)
+ 				break;
		}
	}
+
	mutex_unlock(&opp_table->genpd_virt_dev_lock);

	return ret;
···
}
EXPORT_SYMBOL_GPL(dev_pm_opp_set_bw);

+ static int _opp_set_rate_zero(struct device *dev, struct opp_table *opp_table)
+ {
+ 	int ret;
+
+ 	if (!opp_table->enabled)
+ 		return 0;
+
+ 	/*
+ 	 * Some drivers need to support cases where some platforms may
+ 	 * have OPP table for the device, while others don't and
+ 	 * opp_set_rate() just needs to behave like clk_set_rate().
+ 	 */
+ 	if (!_get_opp_count(opp_table))
+ 		return 0;
+
+ 	ret = _set_opp_bw(opp_table, NULL, dev, true);
+ 	if (ret)
+ 		return ret;
+
+ 	if (opp_table->regulators)
+ 		regulator_disable(opp_table->regulators[0]);
+
+ 	ret = _set_required_opps(dev, opp_table, NULL, false);
+
+ 	opp_table->enabled = false;
+ 	return ret;
+ }
+
/**
 * dev_pm_opp_set_rate() - Configure new OPP based on frequency
 * @dev: device for which we do this operation
···
	}

	if (unlikely(!target_freq)) {
- 		/*
- 		 * Some drivers need to support cases where some platforms may
- 		 * have OPP table for the device, while others don't and
- 		 * opp_set_rate() just needs to behave like clk_set_rate().
- 		 */
- 		if (!_get_opp_count(opp_table)) {
- 			ret = 0;
- 			goto put_opp_table;
- 		}
-
- 		if (!opp_table->required_opp_tables && !opp_table->regulators &&
- 		    !opp_table->paths) {
- 			dev_err(dev, "target frequency can't be 0\n");
- 			ret = -EINVAL;
- 			goto put_opp_table;
- 		}
-
- 		ret = _set_opp_bw(opp_table, NULL, dev, true);
- 		if (ret)
- 			goto put_opp_table;
-
- 		if (opp_table->regulator_enabled) {
- 			regulator_disable(opp_table->regulators[0]);
- 			opp_table->regulator_enabled = false;
- 		}
-
- 		ret = _set_required_opps(dev, opp_table, NULL);
+ 		ret = _opp_set_rate_zero(dev, opp_table);
		goto put_opp_table;
	}
···
	old_freq = clk_get_rate(clk);

	/* Return early if nothing to do */
- 	if (old_freq == freq) {
- 		if (!opp_table->required_opp_tables && !opp_table->regulators &&
- 		    !opp_table->paths) {
- 			dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
- 				__func__, freq);
- 			ret = 0;
- 			goto put_opp_table;
- 		}
+ 	if (opp_table->enabled && old_freq == freq) {
+ 		dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
+ 			__func__, freq);
+ 		ret = 0;
+ 		goto put_opp_table;
	}

	/*
···
	/* Scaling up? Configure required OPPs before frequency */
	if (freq >= old_freq) {
- 		ret = _set_required_opps(dev, opp_table, opp);
+ 		ret = _set_required_opps(dev, opp_table, opp, true);
		if (ret)
			goto put_opp;
	}
···
	/* Scaling down? Configure required OPPs after frequency */
	if (!ret && freq < old_freq) {
- 		ret = _set_required_opps(dev, opp_table, opp);
+ 		ret = _set_required_opps(dev, opp_table, opp, false);
		if (ret)
			dev_err(dev, "Failed to set required opps: %d\n", ret);
	}

- 	if (!ret)
+ 	if (!ret) {
		ret = _set_opp_bw(opp_table, opp, dev, false);
+ 		if (!ret)
+ 			opp_table->enabled = true;
+ 	}

put_opp:
	dev_pm_opp_put(opp);
···
	 */
	opp_table = kzalloc(sizeof(*opp_table), GFP_KERNEL);
	if (!opp_table)
- 		return NULL;
+ 		return ERR_PTR(-ENOMEM);

	mutex_init(&opp_table->lock);
	mutex_init(&opp_table->genpd_virt_dev_lock);
···
	opp_dev = _add_opp_dev(dev, opp_table);
	if (!opp_dev) {
- 		kfree(opp_table);
- 		return NULL;
+ 		ret = -ENOMEM;
+ 		goto err;
	}

	_of_init_opp_table(opp_table, dev, index);
···
	opp_table->clk = clk_get(dev, NULL);
	if (IS_ERR(opp_table->clk)) {
		ret = PTR_ERR(opp_table->clk);
- 		if (ret != -EPROBE_DEFER)
- 			dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__,
- 				ret);
+ 		if (ret == -EPROBE_DEFER)
+ 			goto err;
+
+ 		dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret);
	}

	/* Find interconnect path(s) for the device */
	ret = dev_pm_opp_of_find_icc_paths(dev, opp_table);
- 	if (ret)
+ 	if (ret) {
+ 		if (ret == -EPROBE_DEFER)
+ 			goto err;
+
		dev_warn(dev, "%s: Error finding interconnect paths: %d\n",
			 __func__, ret);
+ 	}

	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
	INIT_LIST_HEAD(&opp_table->opp_list);
···
	/* Secure the device table modification */
	list_add(&opp_table->node, &opp_tables);
	return opp_table;
+
+ err:
+ 	kfree(opp_table);
+ 	return ERR_PTR(ret);
}

void _get_opp_table_kref(struct opp_table *opp_table)
···
	if (opp_table) {
		if (!_add_opp_dev_unlocked(dev, opp_table)) {
			dev_pm_opp_put_opp_table(opp_table);
- 			opp_table = NULL;
+ 			opp_table = ERR_PTR(-ENOMEM);
		}
		goto unlock;
	}
···
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_get_opp_table(dev);
- 	if (!opp_table)
- 		return ERR_PTR(-ENOMEM);
+ 	if (IS_ERR(opp_table))
+ 		return opp_table;

	/* Make sure there are no concurrent readers while updating opp_table */
	WARN_ON(!list_empty(&opp_table->opp_list));
···
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_get_opp_table(dev);
- 	if (!opp_table)
- 		return ERR_PTR(-ENOMEM);
+ 	if (IS_ERR(opp_table))
+ 		return opp_table;

	/* Make sure there are no concurrent readers while updating opp_table */
	WARN_ON(!list_empty(&opp_table->opp_list));
···
	int ret, i;

	opp_table = dev_pm_opp_get_opp_table(dev);
- 	if (!opp_table)
- 		return ERR_PTR(-ENOMEM);
+ 	if (IS_ERR(opp_table))
+ 		return opp_table;

	/* This should be called before OPPs are initialized */
	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
···
	/* Make sure there are no concurrent readers while updating opp_table */
	WARN_ON(!list_empty(&opp_table->opp_list));

- 	if (opp_table->regulator_enabled) {
+ 	if (opp_table->enabled) {
		for (i = opp_table->regulator_count - 1; i >= 0; i--)
			regulator_disable(opp_table->regulators[i]);
-
- 		opp_table->regulator_enabled = false;
	}

	for (i = opp_table->regulator_count - 1; i >= 0; i--)
···
	int ret;

	opp_table = dev_pm_opp_get_opp_table(dev);
- 	if (!opp_table)
- 		return ERR_PTR(-ENOMEM);
+ 	if (IS_ERR(opp_table))
+ 		return opp_table;

	/* This should be called before OPPs are initialized */
	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
···
		return ERR_PTR(-EINVAL);

	opp_table = dev_pm_opp_get_opp_table(dev);
- 	if (!opp_table)
- 		return ERR_PTR(-ENOMEM);
+ 	if (IS_ERR(opp_table))
+ 		return opp_table;

	/* This should be called before OPPs are initialized */
	if (WARN_ON(!list_empty(&opp_table->opp_list))) {
···
static void _opp_detach_genpd(struct opp_table *opp_table)
{
	int index;
+
+ 	if (!opp_table->genpd_virt_devs)
+ 		return;

	for (index = 0; index < opp_table->required_opp_count; index++) {
		if (!opp_table->genpd_virt_devs[index])
···
	const char **name = names;

	opp_table = dev_pm_opp_get_opp_table(dev);
- 	if (!opp_table)
- 		return ERR_PTR(-ENOMEM);
+ 	if (IS_ERR(opp_table))
+ 		return opp_table;
+
+ 	if (opp_table->genpd_virt_devs)
+ 		return opp_table;

	/*
	 * If the genpd's OPP table isn't already initialized, parsing of the
···
		if (index >= opp_table->required_opp_count) {
			dev_err(dev, "Index can't be greater than required-opp-count - 1, %s (%d : %d)\n",
				*name, opp_table->required_opp_count, index);
			goto err;
		}
- 2048 - if (opp_table->genpd_virt_devs[index]) { 2049 - dev_err(dev, "Genpd virtual device already set %s\n", 2050 - *name); 2051 2020 goto err; 2052 2021 } 2053 2022 ··· 2117 2098 int dest_pstate = -EINVAL; 2118 2099 int i; 2119 2100 2120 - if (!pstate) 2121 - return 0; 2122 - 2123 2101 /* 2124 2102 * Normally the src_table will have the "required_opps" property set to 2125 2103 * point to one of the OPPs in the dst_table, but in some cases the ··· 2179 2163 int ret; 2180 2164 2181 2165 opp_table = dev_pm_opp_get_opp_table(dev); 2182 - if (!opp_table) 2183 - return -ENOMEM; 2166 + if (IS_ERR(opp_table)) 2167 + return PTR_ERR(opp_table); 2184 2168 2185 2169 /* Fix regulator count for dynamic OPPs */ 2186 2170 opp_table->regulator_count = 1; ··· 2421 2405 } 2422 2406 EXPORT_SYMBOL(dev_pm_opp_unregister_notifier); 2423 2407 2424 - void _dev_pm_opp_find_and_remove_table(struct device *dev) 2408 + /** 2409 + * dev_pm_opp_remove_table() - Free all OPPs associated with the device 2410 + * @dev: device pointer used to lookup OPP table. 2411 + * 2412 + * Free both OPPs created using static entries present in DT and the 2413 + * dynamically added entries. 2414 + */ 2415 + void dev_pm_opp_remove_table(struct device *dev) 2425 2416 { 2426 2417 struct opp_table *opp_table; 2427 2418 ··· 2454 2431 2455 2432 /* Drop reference taken by _find_opp_table() */ 2456 2433 dev_pm_opp_put_opp_table(opp_table); 2457 - } 2458 - 2459 - /** 2460 - * dev_pm_opp_remove_table() - Free all OPPs associated with the device 2461 - * @dev: device pointer used to lookup OPP table. 2462 - * 2463 - * Free both OPPs created using static entries present in DT and the 2464 - * dynamically added entries. 2465 - */ 2466 - void dev_pm_opp_remove_table(struct device *dev) 2467 - { 2468 - _dev_pm_opp_find_and_remove_table(dev); 2469 2434 } 2470 2435 EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table);
+1 -1
drivers/opp/cpu.c
···
 			continue;
 		}
 
-		_dev_pm_opp_find_and_remove_table(cpu_dev);
+		dev_pm_opp_remove_table(cpu_dev);
 	}
 }
+71 -45
drivers/opp/of.c
···
 static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
 			      struct device_node *np)
 {
-	unsigned int count = opp_table->supported_hw_count;
-	u32 version;
-	int ret;
+	unsigned int levels = opp_table->supported_hw_count;
+	int count, versions, ret, i, j;
+	u32 val;
 
 	if (!opp_table->supported_hw) {
 		/*
···
 		return true;
 	}
 
-	while (count--) {
-		ret = of_property_read_u32_index(np, "opp-supported-hw", count,
-						 &version);
-		if (ret) {
-			dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
-				 __func__, count, ret);
-			return false;
-		}
-
-		/* Both of these are bitwise masks of the versions */
-		if (!(version & opp_table->supported_hw[count]))
-			return false;
+	count = of_property_count_u32_elems(np, "opp-supported-hw");
+	if (count <= 0 || count % levels) {
+		dev_err(dev, "%s: Invalid opp-supported-hw property (%d)\n",
+			__func__, count);
+		return false;
 	}
 
-	return true;
+	versions = count / levels;
+
+	/* All levels in at least one of the versions should match */
+	for (i = 0; i < versions; i++) {
+		bool supported = true;
+
+		for (j = 0; j < levels; j++) {
+			ret = of_property_read_u32_index(np, "opp-supported-hw",
+							 i * levels + j, &val);
+			if (ret) {
+				dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
+					 __func__, i * levels + j, ret);
+				return false;
+			}
+
+			/* Check if the level is supported */
+			if (!(val & opp_table->supported_hw[j])) {
+				supported = false;
+				break;
+			}
+		}
+
+		if (supported)
+			return true;
+	}
+
+	return false;
 }
 
 static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
···
  */
 void dev_pm_opp_of_remove_table(struct device *dev)
 {
-	_dev_pm_opp_find_and_remove_table(dev);
+	dev_pm_opp_remove_table(dev);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
···
 static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
 {
 	struct device_node *np;
-	int ret, count = 0, pstate_count = 0;
+	int ret, count = 0;
 	struct dev_pm_opp *opp;
 
 	/* OPP table is already initialized for the device */
···
 		goto remove_static_opp;
 	}
 
-	list_for_each_entry(opp, &opp_table->opp_list, node)
-		pstate_count += !!opp->pstate;
-
-	/* Either all or none of the nodes shall have performance state set */
-	if (pstate_count && pstate_count != count) {
-		dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
-			count, pstate_count);
-		ret = -ENOENT;
-		goto remove_static_opp;
+	list_for_each_entry(opp, &opp_table->opp_list, node) {
+		/* Any non-zero performance state would enable the feature */
+		if (opp->pstate) {
+			opp_table->genpd_performance_state = true;
+			break;
+		}
 	}
-
-	if (pstate_count)
-		opp_table->genpd_performance_state = true;
 
 	return 0;
···
 	const __be32 *val;
 	int nr, ret = 0;
 
+	mutex_lock(&opp_table->lock);
+	if (opp_table->parsed_static_opps) {
+		opp_table->parsed_static_opps++;
+		mutex_unlock(&opp_table->lock);
+		return 0;
+	}
+
+	opp_table->parsed_static_opps = 1;
+	mutex_unlock(&opp_table->lock);
+
 	prop = of_find_property(dev->of_node, "operating-points", NULL);
-	if (!prop)
-		return -ENODEV;
-	if (!prop->value)
-		return -ENODATA;
+	if (!prop) {
+		ret = -ENODEV;
+		goto remove_static_opp;
+	}
+	if (!prop->value) {
+		ret = -ENODATA;
+		goto remove_static_opp;
+	}
 
 	/*
 	 * Each OPP is a set of tuples consisting of frequency and
···
 	nr = prop->length / sizeof(u32);
 	if (nr % 2) {
 		dev_err(dev, "%s: Invalid OPP table\n", __func__);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto remove_static_opp;
 	}
-
-	mutex_lock(&opp_table->lock);
-	opp_table->parsed_static_opps = 1;
-	mutex_unlock(&opp_table->lock);
 
 	val = prop->value;
 	while (nr) {
···
 		if (ret) {
 			dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
 				__func__, freq, ret);
-			_opp_remove_all_static(opp_table);
-			return ret;
+			goto remove_static_opp;
 		}
 		nr -= 2;
 	}
+
+remove_static_opp:
+	_opp_remove_all_static(opp_table);
 
 	return ret;
 }
···
 	int ret;
 
 	opp_table = dev_pm_opp_get_opp_table_indexed(dev, 0);
-	if (!opp_table)
-		return -ENOMEM;
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
 
 	/*
 	 * OPPs have two version of bindings now. Also try the old (v1)
···
 	}
 
 	opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
-	if (!opp_table)
-		return -ENOMEM;
+	if (IS_ERR(opp_table))
+		return PTR_ERR(opp_table);
 
 	ret = _of_add_opp_table_v2(dev, opp_table);
 	if (ret)
+2 -3
drivers/opp/opp.h
···
  * @clk:	Device's clock handle
  * @regulators:	Supply regulators
  * @regulator_count: Number of power supply regulators. Its value can be -1
- * @regulator_enabled: Set to true if regulators were previously enabled.
  * (uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
  * property).
  * @paths:	Interconnect path handles
  * @path_count: Number of interconnect paths
+ * @enabled: Set to true if the device's resources are enabled/configured.
  * @genpd_performance_state: Device's power domain support performance state.
  * @is_genpd: Marks if the OPP table belongs to a genpd.
  * @set_opp: Platform specific set_opp callback
···
 	struct clk *clk;
 	struct regulator **regulators;
 	int regulator_count;
-	bool regulator_enabled;
 	struct icc_path **paths;
 	unsigned int path_count;
+	bool enabled;
 	bool genpd_performance_state;
 	bool is_genpd;
···
 int _get_opp_count(struct opp_table *opp_table);
 struct opp_table *_find_opp_table(struct device *dev);
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
-void _dev_pm_opp_find_and_remove_table(struct device *dev);
 struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table);
 void _opp_free(struct dev_pm_opp *opp);
 int _opp_compare_key(struct dev_pm_opp *opp1, struct dev_pm_opp *opp2);
+10
drivers/pci/pci-acpi.c
···
 	if (!dev->is_hotplug_bridge)
 		return false;
 
+	/* Assume D3 support if the bridge is power-manageable by ACPI. */
+	adev = ACPI_COMPANION(&dev->dev);
+	if (!adev && !pci_dev_is_added(dev)) {
+		adev = acpi_pci_find_companion(&dev->dev);
+		ACPI_COMPANION_SET(&dev->dev, adev);
+	}
+
+	if (adev && acpi_device_power_manageable(adev))
+		return true;
+
 	/*
 	 * Look for a special _DSD property for the root port and if it
 	 * is set we know the hierarchy behind it supports D3 just fine.
+1 -7
drivers/power/avs/qcom-cpr.c
···
 static int cpr_disable(struct cpr_drv *drv)
 {
-	int ret;
-
 	mutex_lock(&drv->lock);
 
 	if (cpr_is_allowed(drv)) {
···
 	mutex_unlock(&drv->lock);
 
-	ret = regulator_disable(drv->vdd_apc);
-	if (ret)
-		return ret;
-
-	return 0;
+	return regulator_disable(drv->vdd_apc);
 }
 
 static int cpr_config(struct cpr_drv *drv)
+1
drivers/powercap/idle_inject.c
···
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/smpboot.h>
+#include <linux/idle_inject.h>
 
 #include <uapi/linux/sched/types.h>
+1 -1
drivers/soc/samsung/exynos-asv.c
···
 			continue;
 
 		opp_table = dev_pm_opp_get_opp_table(cpu);
-		if (IS_ERR_OR_NULL(opp_table))
+		if (IS_ERR(opp_table))
 			continue;
 
 		if (!last_opp_table || opp_table != last_opp_table) {
+5 -1
include/linux/arch_topology.h
···
 	return per_cpu(freq_scale, cpu);
 }
 
-bool arch_freq_counters_available(struct cpumask *cpus);
+void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
+			     unsigned long max_freq);
+bool topology_scale_freq_invariant(void);
+
+bool arch_freq_counters_available(const struct cpumask *cpus);
 
 DECLARE_PER_CPU(unsigned long, thermal_pressure);
+13 -2
include/linux/cpufreq.h
···
 void cpufreq_update_policy(unsigned int cpu);
 void cpufreq_update_limits(unsigned int cpu);
 bool have_governor_per_policy(void);
+bool cpufreq_supports_freq_invariance(void);
 struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
 void cpufreq_enable_fast_switch(struct cpufreq_policy *policy);
 void cpufreq_disable_fast_switch(struct cpufreq_policy *policy);
···
 static inline unsigned int cpufreq_get_hw_max_freq(unsigned int cpu)
 {
 	return 0;
 }
+static inline bool cpufreq_supports_freq_invariance(void)
+{
+	return false;
+}
 static inline void disable_cpufreq(void) { }
 #endif
···
 extern void arch_freq_prepare_all(void);
 extern unsigned int arch_freq_get_on_cpu(int cpu);
 
-extern void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
-				unsigned long max_freq);
+#ifndef arch_set_freq_scale
+static __always_inline
+void arch_set_freq_scale(const struct cpumask *cpus,
+			 unsigned long cur_freq,
+			 unsigned long max_freq)
+{
+}
+#endif
 
 /* the following are really really optional */
 extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs;
+1
include/linux/cpuidle.h
···
 	u64 time_ns;
 	unsigned long long above; /* Number of times it's been too deep */
 	unsigned long long below; /* Number of times it's been too shallow */
+	unsigned long long rejected; /* Number of times idle entry was rejected */
 #ifdef CONFIG_SUSPEND
 	unsigned long long s2idle_usage;
 	unsigned long long s2idle_time; /* in US */
+10 -4
include/linux/devfreq-event.h
···
 					struct devfreq_event_data *edata);
 extern int devfreq_event_reset_event(struct devfreq_event_dev *edev);
 extern struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
-				struct device *dev, int index);
-extern int devfreq_event_get_edev_count(struct device *dev);
+				struct device *dev,
+				const char *phandle_name,
+				int index);
+extern int devfreq_event_get_edev_count(struct device *dev,
+				const char *phandle_name);
 extern struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
 				struct devfreq_event_desc *desc);
 extern int devfreq_event_remove_edev(struct devfreq_event_dev *edev);
···
 }
 
 static inline struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
-				struct device *dev, int index)
+				struct device *dev,
+				const char *phandle_name,
+				int index)
 {
 	return ERR_PTR(-EINVAL);
 }
 
-static inline int devfreq_event_get_edev_count(struct device *dev)
+static inline int devfreq_event_get_edev_count(struct device *dev,
+				const char *phandle_name)
 {
 	return -EINVAL;
 }
+9 -2
include/linux/devfreq.h
···
 				struct devfreq *devfreq,
 				struct notifier_block *nb,
 				unsigned int list);
-struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index);
+struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node);
+struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
+				const char *phandle_name, int index);
 
 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
 /**
···
 {
 }
 
+static inline struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node)
+{
+	return ERR_PTR(-ENODEV);
+}
+
 static inline struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
-				int index)
+				const char *phandle_name, int index)
 {
 	return ERR_PTR(-ENODEV);
 }
+1 -1
include/linux/pm.h
···
 #endif
 #ifdef CONFIG_PM
 	struct hrtimer		suspend_timer;
-	unsigned long		timer_expires;
+	u64			timer_expires;
 	struct work_struct	work;
 	wait_queue_head_t	wait_queue;
 	struct wake_irq		*wakeirq;
+2 -2
include/linux/pm_domain.h
···
 #define GENPD_FLAG_RPM_ALWAYS_ON	(1U << 5)
 
 enum gpd_status {
-	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
-	GPD_STATE_POWER_OFF,	/* PM domain is off */
+	GENPD_STATE_ON = 0,	/* PM domain is on */
+	GENPD_STATE_OFF,	/* PM domain is off */
 };
 
 struct dev_power_governor {
+1 -1
include/linux/psci.h
···
 int psci_cpu_suspend_enter(u32 state);
 bool psci_power_state_is_valid(u32 state);
-int psci_set_osi_mode(void);
+int psci_set_osi_mode(bool enable);
 bool psci_has_osi_support(void);
 
 struct psci_operations {
-11
kernel/power/hibernate.c
···
 	/* Check if the device is there */
 	swsusp_resume_device = name_to_dev_t(resume_file);
-
-	/*
-	 * name_to_dev_t is ineffective to verify parition if resume_file is in
-	 * integer format. (e.g. major:minor)
-	 */
-	if (isdigit(resume_file[0]) && resume_wait) {
-		int partno;
-		while (!get_gendisk(swsusp_resume_device, &partno))
-			msleep(10);
-	}
-
 	if (!swsusp_resume_device) {
 		/*
 		 * Some device discovery might still be in progress; we need
+15
kernel/power/swap.c
···
 	atomic_t count;
 	wait_queue_head_t wait;
 	blk_status_t error;
+	struct blk_plug plug;
 };
 
 static void hib_init_batch(struct hib_bio_batch *hb)
···
 	atomic_set(&hb->count, 0);
 	init_waitqueue_head(&hb->wait);
 	hb->error = BLK_STS_OK;
+	blk_start_plug(&hb->plug);
+}
+
+static void hib_finish_batch(struct hib_bio_batch *hb)
+{
+	blk_finish_plug(&hb->plug);
 }
 
 static void hib_end_io(struct bio *bio)
···
 static blk_status_t hib_wait_io(struct hib_bio_batch *hb)
 {
+	/*
+	 * We are relying on the behavior of blk_plug that a thread with
+	 * a plug will flush the plug list before sleeping.
+	 */
 	wait_event(hb->wait, atomic_read(&hb->count) == 0);
 	return blk_status_to_errno(hb->error);
 }
···
 		nr_pages++;
 	}
 	err2 = hib_wait_io(&hb);
+	hib_finish_batch(&hb);
 	stop = ktime_get();
 	if (!ret)
 		ret = err2;
···
 	pr_info("Image saving done\n");
 	swsusp_show_speed(start, stop, nr_to_write, "Wrote");
 out_clean:
+	hib_finish_batch(&hb);
 	if (crc) {
 		if (crc->thr)
 			kthread_stop(crc->thr);
···
 		nr_pages++;
 	}
 	err2 = hib_wait_io(&hb);
+	hib_finish_batch(&hb);
 	stop = ktime_get();
 	if (!ret)
 		ret = err2;
···
 	}
 	swsusp_show_speed(start, stop, nr_to_read, "Read");
 out_clean:
+	hib_finish_batch(&hb);
 	for (i = 0; i < ring_size; i++)
 		free_page((unsigned long)page[i]);
 	if (crc) {
+2 -16
kernel/sched/cpufreq_schedutil.c
···
 static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
 			      unsigned int next_freq)
 {
-	struct cpufreq_policy *policy = sg_policy->policy;
-	int cpu;
-
-	if (!sugov_update_next_freq(sg_policy, time, next_freq))
-		return;
-
-	next_freq = cpufreq_driver_fast_switch(policy, next_freq);
-	if (!next_freq)
-		return;
-
-	policy->cur = next_freq;
-
-	if (trace_cpu_frequency_enabled()) {
-		for_each_cpu(cpu, policy->cpus)
-			trace_cpu_frequency(next_freq, cpu);
-	}
+	if (sugov_update_next_freq(sg_policy, time, next_freq))
+		cpufreq_driver_fast_switch(sg_policy->policy, next_freq);
 }
 
 static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,